Title: Equivariant flow matching
Paper Decision: Accept (poster)
Review 1:

Summary: The manuscript builds on recent progress in simulation-free loss functions for continuous normalizing flows, which allow CNFs to be scaled to significantly higher dimensions and can be thought of as a generalization of continuous-time diffusion models. Specifically, the manuscript extends the recently proposed conditional flow matching approach, which uses results from optimal transport to construct particularly efficient probability paths. A notable novelty of the paper is to consider the question of how to incorporate equivariance in the conditional flow matching loss. This is achieved by modifying the cost matrix c(x, x') of the Wasserstein distance to be the minimal cost *over the entire orbit* of x' (or equivalently x). The proposal is studied experimentally in the context of Boltzmann generators, which are normalizing flows trained to act as sampling densities for importance weighting (or, alternatively, Markov Chain Monte Carlo) of unnormalized physical target densities from quantum chemistry. Normalizing flows are particularly suited for this task as they provide a tractable likelihood as well as fast sampling. Equivariance is of pivotal importance for Boltzmann generators since the studied physical systems often have a high degree of symmetry, e.g. SE(3) and permutation symmetry. The paper establishes in detailed numerical experiments that the flow matching approach is beneficial in the context of Boltzmann generators and compares favorably to likelihood-based training. This is shown for standard benchmarks such as Lennard-Jones particles and alanine dipeptide. Overall, however, I think this is a valuable contribution in a very exciting and rapidly evolving field of research. I therefore tend to recommend acceptance. I would however encourage the authors to add an appendix summarising the conditional OT flow matching procedure of 2302.00482 on which their method builds. Unfortunately, the conditional flow matching paper is not the most readable and it would make the manuscript more self-contained and accessible to add a brief summary.

Strengths:
- The paper establishes, to the best of my knowledge, for the first time that flow matching is beneficial in the context of Boltzmann generators. It also presents a detailed comparison to both likelihood-based as well as OT conditional flow matching training objectives on standard benchmark sets.
- The question of how to incorporate equivariance is of high relevance, as CNFs are widely deployed in physics applications for which symmetry is a fundamental ingredient for successful learning.
- The presentation is well structured and clear.

Weaknesses:
- The proposed method is rather specific to the case of SE(3) and permutation invariance. It seems non-trivial to me how one would extend the treatment to larger symmetry groups, such as in applications to Lattice Field Theories.
- A crucial element of Boltzmann generators is that they are often trained using self-sampled energy training (Variational Inference). The flow matching objective does not allow for this type of training. In the original conditional flow matching paper, a reweighting procedure was proposed. I think the authors made a good choice in not discussing this in this context, as it is to be expected that reweighting will fail for reasonably sized systems such as alanine dipeptide. Nevertheless, the fact that (an efficient version of) energy-based training is not available for flow matching is a major downside of flow matching.
- I am a bit unclear on how much the proposed equivariant method actually helps.
There is a substantial gain in the LJ55 setup, while on alanine dipeptide the previously proposed OT flow matching leads to higher ESS (at the cost of slightly longer trajectory length).

Minor comments:
- P.3 L.80-81: Continuous-time diffusion models provide a tractable likelihood. In fact, they are continuous normalizing flows, albeit trained with a different objective, as can be seen by noticing that the reverse process is equivalent to a deterministic ODE (known as the probability flow ODE).
- P.3 L.100: distribution -> density
- P.3 Eq 3: I find the notation a bit contrived. Why not simply use $f_\theta(t, x)$?
- P.4 L.111: Jacobian trace -> trace of the Jacobian, or divergence
- P.4 Eq 6: $x_1 \sim \mu(x_1)$ -> $x_1 \sim \mu$
- P.4 L.137: $\sim q(x_0)$ -> $\sim q$ and $\sim \mu(x_1)$ -> $\sim \mu$
- P.6 Table 1: Please mention in the caption how the errors are determined. Most readers will wonder about this while looking at the table, so it would be good to provide this information already here.
- P.6 Eq 15-18: Various commas are missing after the equations.
- P.6 L.208: a mean-free Gaussian is not equal to $\mathcal{N}(x_0 | 0, 1)$, as it has a different normalizer.

Technical Quality: 4 excellent
Clarity: 3 good

Questions for Authors:
- How is the minimum over the SO(D) group orbit in Eq (14) actually performed (the group is continuous)?
- What is the advantage of Cartesian coordinates for alanine dipeptide as opposed to the standard parameterization in terms of dihedrals, bond angles, ...?
- What is the precise definition of path length in Table 1?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are properly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
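For readers who want the brief summary of conditional OT flow matching that this review requests, here is a minimal sketch of one training step. It follows the standard construction (straight-line conditional path $x_t = (1-t)x_0 + t x_1$ with constant target velocity $x_1 - x_0$); `model`, the optimizer handling, and the flattened $(B, N \cdot D)$ batch shapes are illustrative assumptions, not the authors' code.

```python
import torch

def ot_cfm_step(model, x0, x1, optimizer):
    """One conditional OT flow matching step on pre-paired samples.

    x0 ~ prior, x1 ~ data, already coupled (e.g. by a minibatch OT
    solver); both of shape (B, N*D). model(t, x) predicts a velocity.
    """
    t = torch.rand(x0.shape[0], 1)       # one time per batch element
    x_t = (1 - t) * x0 + t * x1          # straight-line conditional path
    u_t = x1 - x0                        # its constant target velocity
    loss = ((model(t, x_t) - u_t) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The training is simulation-free: no ODE is solved and no solver is backpropagated through, which is the property the reviews below repeatedly contrast with likelihood-based CNF training.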
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful review, and appreciate their assessment that our paper is a valuable contribution in a very exciting and rapidly evolving field. We now address their comments individually.

> Unfortunately, the conditional flow matching paper is not the most readable and it would make the manuscript more self-contained and accessible to add a brief summary.

We agree on the readability of the original flow matching paper. We will include a more detailed discussion both in Sec. 3.4 and in the appendix. We agree that this will make our work more coherent.

> The proposed method is rather specific to the case of SE(3) and permutation invariance. It seems non-trivial to me how one would extend the treatment to larger symmetry groups, such as in applications to Lattice Field Theories.

We discuss equivariant OT flow matching for more general symmetry groups in appendix B.1 in great detail. We will include the main findings in Sec. 4 of the main part of the manuscript. However, how Eq. 14 can be efficiently approximated for other symmetry groups remains for future research. The $SO(3)$ and $S(N)$ symmetry groups (as well as translation) are the most important ones for molecular systems, which our paper aims to tackle.

> A crucial element of Boltzmann generators is that they are often trained using self-sampled energy training (Variational Inference). The flow matching objective does not allow for this type of training. In the original conditional flow matching paper, a reweighting procedure was proposed. I think the authors made a good choice in not discussing this in this context, as it is to be expected that reweighting will fail for reasonably sized systems such as alanine dipeptide. Nevertheless, the fact that (an efficient version of) energy-based training is not available for flow matching is a major downside of flow matching.

We agree that this is a limitation of flow matching. However, energy-based training of CNFs is not feasible for the larger systems we investigated, as shown in appendix A.2. As the reviewer mentions, the reweighting procedure proposed in the flow matching paper is not feasible for larger systems, as the importance weights will be zero when reweighting from a Gaussian to the target distribution. An alternative approach is to first train a CNF with equivariant OT flow matching, with only a few (potentially biased) samples, and then generate new samples with the flow. As these can be reweighted with non-zero weights, they can then be added to the training set. This could be done iteratively, and the final training set would resemble samples from the target equilibrium distribution. We will include this idea in the future work section, as it is beyond the scope of this paper.

> I am a bit unclear on how much the proposed equivariant method actually helps. There is a substantial gain in the LJ55 setup while on alanine dipeptide the previously proposed OT flow matching leads to higher ESS (at the cost of slightly longer trajectory length).

The difference for equivariant OT compared to normal OT is only marginal for systems with few symmetries, i.e. fewer than 15 interchangeable particles, which is the case for DW4, LJ13 and ALA2. However, for the much larger LJ55 system (at least 3 times as many identical particles) the difference is significant. Crucially, the straight OT sampling paths allow using a Runge-Kutta integrator instead of the adaptive dopri5, which is significantly faster.
With the “normal” OT, the particle integration paths change directions (see Figure 2b,c,e), which requires small step sizes where the particles turn. Hence, this shows that it is important to use equivariant OT flow matching when scaling to larger systems to obtain optimal paths. This effect is not that prominent for the smaller systems, where normal OT flow matching already works quite well. We included further experiments in the global rebuttal that highlight the benefits of equivariant OT flow matching.

> How is the minimum over the SO(D) group orbit in Eq (14) actually performed (the group is continuous)?

It is performed with the Kabsch algorithm; we will include this in the final version of the manuscript. For more details see also the global rebuttal.

> What is the advantage of Cartesian coordinates for alanine dipeptide as opposed to the standard parameterization in terms of dihedrals, bond angles, ...?

There are multiple advantages of using Cartesian coordinates over internal coordinates. The main two are transferability and the long *robot arm problem*. (i) Transferability: Training a transferable Boltzmann generator will be easier in Cartesian coordinates, as these are not specific to each system. (ii) The long robot arm problem: Peptides and proteins often have different metastable states, which can for example be folded or unfolded. In these folded states some non-bonded parts will be close, but the information that they are close needs to be propagated through the whole backbone structure in the form of torsion angles, angles and bond lengths (a long robot arm). Hence, a model in internal coordinates, which carries this distance information only implicitly, will fail to learn the distribution accurately. Moreover, going to explicit solvent systems, i.e. with water molecules, internal coordinates will not be possible, and accounting for the large number of possible permutations of interchangeable water molecules will be crucial, highlighting again the importance of using our equivariant OT flow matching when scaling to larger systems, as shown in the LJ55 results.

> What is the precise definition of path length in Table 1?

The path length as reported in the table and shown in the figures is the usual arc length between two points in $N \times D$ dimensions. We will include this in the final version of the manuscript.

We thank the reviewer for their helpful minor comments and agree with all of them. We will change the final version accordingly.

---

Rebuttal Comment 1.1: Title: Response to Rebuttal
Comment: Thank you for the detailed rebuttal. Your reply clarified my questions. I only have a minor follow-up question: I am confused about the 'transferability' aspect in your last reply. Why is the standard parameterization specific to the individual peptide? As far as I can see, any peptide can be expressed using such a representation?

---

Reply to Comment 1.1.1:
Comment: We appreciate that the reviewer is satisfied with our rebuttal and that we could answer their questions.

> I am confused about the 'transferability' aspect in your last reply. Why is the standard parameterization specific to the individual peptide? As far as I can see, any peptide can be expressed using such a representation?

While each peptide can indeed be represented in internal coordinates, their descriptions differ significantly due to varying atom counts, leading to distinct numbers and types of torsion angles, angles, and bonds.
Transferring from one learned system to another isn't straightforward, especially considering the challenge of constructing a model capable of handling varying internal coordinate counts. In contrast, Cartesian coordinates easily allow varying input sizes. For instance, our approach could be directly applied to different molecules or particle counts.
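For concreteness, a minimal sketch of the Kabsch step the rebuttal above refers to (the SVD-based closed form for the rotation minimizing the squared distance over the $SO(3)$ orbit). The function name and the assumption of mean-centered inputs are illustrative, not taken from the authors' code.

```python
import numpy as np

def kabsch_rotation(x0, x1):
    """Rotation R in SO(3) minimizing ||x0 - x1 @ R.T||^2.

    x0, x1: mean-centered configurations of shape (N, 3).
    """
    H = x1.T @ x0                        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct the sign so that det(R) = +1, i.e. a proper rotation
    # in SO(3) rather than a reflection in O(3).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R                             # x1 @ R.T is x1 aligned onto x0
```

The sign correction is what keeps the minimization inside $SO(3)$ rather than $O(3)$: reflections are excluded, since they are not physical symmetries of the systems considered here.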
Review 2:

Summary: This paper studies equivariant flow matching, an extension to the flow matching paradigm for training continuous normalizing flows by regressing parametric vector fields to conditional vector fields. Building upon prior work, conditional vector fields can be derived from the distribution of the 2-Wasserstein optimal transport map $\pi(x_0, x_1)$ between the prior $q(x_0)$ and the target $\mu(x_1)$. The central contribution of this paper is to bake symmetries into the optimal transport cost $c(x_0, x_1)$ through a sequential search procedure for the symmetry groups $S(N)$ and $SO(D)$. This is done by first applying the Hungarian algorithm, followed by finding a rotation matrix that minimizes the cost, for each element in the cost matrix. Finally, the authors parameterize an equivariant vector field using the EGNN architecture of Satorras et al. 2021. Experiments are done on $n$-body particle dynamics and training a Boltzmann Generator for a small peptide, alanine dipeptide.

Strengths:

**Originality** The main strength of this paper is that all presented material follows naturally from the desire to bake equivariance into continuous normalizing flows. As equivariance has already been studied extensively in the generative modeling literature, this application to flow matching is a reasonable extension. The originality of this work is limited to solely baking symmetries into the cost matrix, as equivariant vector fields using EGNN have already been employed in the literature, e.g. $E(N)$-normalizing flows (Satorras et al. 2022).

**Quality** The presented work is a good first attempt at tackling the problem of equivariant flow matching, but unfortunately it is below the standard that is expected on several fronts, which are outlined in the weaknesses section.

**Clarity** In general, the work is fairly clear. The presented ideas are straightforward to grasp, but a few technical details are omitted which could improve understanding and readability. For example, how do you align the rotations after performing the Hungarian algorithm? We are minimizing over $SO(D)$, which is a non-trivial manifold. Of course, the appendix and code have this information, but given that this is a crucial part of the contribution, more detail would improve readability.

**Significance** As equivariance in generative modeling has largely been studied in the literature, the contribution of this work is limited. While equivariance has not been exclusively studied in flow matching, this extension is an early step and has limited novelty. This would be fine if there were large benefits empirically, theoretically, or computationally, but this is not the case as far as I can tell, and as a result this paper has limited significance at present.

Weaknesses: This paper has many potential weaknesses, some of which are already alluded to in the previous section. Firstly, it has very little novelty, as many of the concepts regarding equivariance and flows are known in the literature. In fact, using EGNN in normalizing flows for this very symmetry group has already been studied by Satorras et al. 2022. Moreover, one of the main benefits of flow matching is that one sidesteps the training complexity of regular CNFs, as we do not need to backprop through an ODE solver. This brings huge computational benefits during training, albeit inference is still costly.
In this paper, training is also expensive, as solving for the optimal coupling in mini-batch OT is already expensive, but even more so because the Hungarian algorithm is employed, which scales as $O(N^3)$. This is prohibitively expensive for any large-scale machine learning system. I encourage the authors to investigate other means of approximately solving for permutations. Some examples and directions for investigation include learning the permutations using the straight-through estimator (see Git Re-Basin, Ainsworth et al. 2022), or using Gumbel-Sinkhorn (Mena et al. 2018). With regard to the proposed approach, the authors break down the problem by doing a sequential minimization: first finding the best permutation and then finding the best rotation. But this is not the original problem, which may indeed be intractable. This is a ripe avenue to do a bit of theory to justify the proposed approach. For example, why is this sensible, and not just the first thing one could try? Can we do a little bit of error analysis to bound the error of the optimal cost matrix found in the sequential search relative to the actual one? The experiments section is also rudimentary. While I appreciated the visualization of the shorter and straighter OT paths, it is difficult to rationalize the gains here knowing that the overall algorithm is likely more expensive. A detailed time-complexity analysis of the overall method is needed here, and a proper comparison to regular flow matching and $E(N)$-NF is also a good idea. Moreover, the results seem a bit mixed, as equivariant models do worse on alanine dipeptide compared to non-equivariant models. The current explanation in the main text for this result is unsatisfactory. We expect equivariance to help here; why is it not?

Technical Quality: 2 fair
Clarity: 2 fair

Questions for Authors:
1. One key benefit of equivariant models is that they are more data efficient. This avenue seems to be underexplored. A table similar to Table 1 in $E(N)$-NF would be nice to see.
2. There are many equivariant models for molecular data and simulation, but only a small dataset in alanine dipeptide was used. Can the authors justify not including a large body of equivariant generative models for molecular simulation here?
3. While this paper claims equivariant flow matching as a general procedure, the main method is limited to the groups $S(N)$ and $SO(D)$. This is a bit misleading, as the current approach for finding the optimal cost matrix is not applicable to other groups. The authors may consider a different title for this work as well as a fairer presentation in the abstract and introduction of what is actually done.

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes
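To make the sequential search this review describes concrete, a minimal sketch, assuming identical particles, mean-centered $(N, 3)$ inputs, and a `kabsch_rotation` helper like the one sketched earlier; in the paper, a cost of this kind would be evaluated for every entry of the minibatch cost matrix, which is where the $O(N^3)$ Hungarian step becomes expensive.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def equivariant_cost(x0, x1):
    """Sequential approximation of min over S(N) x SO(3) of ||x0 - g.x1||^2.

    First the permutation (Hungarian algorithm, O(N^3)), then the
    rotation (Kabsch) -- not the joint minimum, which is intractable.
    """
    # Step 1: optimal matching of x1's particles against x0's.
    pairwise = ((x0[:, None, :] - x1[None, :, :]) ** 2).sum(-1)
    _, cols = linear_sum_assignment(pairwise)
    x1_perm = x1[cols]
    # Step 2: optimal rotation aligning the permuted x1 onto x0.
    R = kabsch_rotation(x0, x1_perm)     # as sketched above
    return float(((x0 - x1_perm @ R.T) ** 2).sum())
```

The alternating refinement the reviewer suggests would simply repeat steps 1 and 2 until the assignment stops changing; the sketch above corresponds to a single pass.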
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review and questions. To keep the rebuttal within the character limit, we often only cite parts of each question, but always answer the whole one.

> How do you align the rotations after performing the Hungarian algorithm?

With the Kabsch algorithm; see also the global rebuttal.

> While equivariance has not been exclusively studied in flow matching, this extension is an early step and has limited novelty. ... this paper has limited significance at present ...

We respectfully disagree regarding the limited novelty and significance. Enhancing MD simulations using ML models is a rapidly growing field, as evidenced by the increasing number of AI4Science groups focused on this topic. Prior to our work, it was uncertain whether training a Boltzmann Generator in Cartesian coordinates to generate samples from the equilibrium distribution of real molecules was possible. We are the first to present a model and training algorithm that successfully achieve this. We believe our results are of great interest to the community and will stimulate further research in this area, potentially leading to transferable Boltzmann Generators. Moreover, we consider an even more difficult problem, as we train the model with biased samples instead of samples from the equilibrium distribution. We present two training algorithms that achieve this. Both build on the known concepts of flow matching and EGNNs; however, we are the first to combine these, resulting in this novel finding. Additionally, we show that when scaling to larger systems with more symmetries, it becomes critical to do equivariant OT flow matching to preserve the optimal transport sampling paths. Interestingly, our work shows that for systems with fewer symmetries, e.g. fewer identical particles as in DW4, LJ13, and alanine dipeptide, the benefits of including equivariant OT flow matching are marginal (see also appendix A.1). We will highlight these findings more in the final version of the manuscript.

> ... training is also expensive as solving for the optimal coupling in mini-batch OT is already expensive ...

We agree that especially the reordering introduces additional computational cost, as shown in appendix C.3 and discussed in Sec. 8. We share the reviewer's interest in the suggested approximations of the Hungarian algorithm, as detailed in Sec. 8. However, the training data preparation can be performed prior to or during the training and is trivially parallelizable on CPUs, allowing the equivariant flow matching models to be trained essentially as fast as the OT flow matching models. See the global rebuttal for more details.

> the authors break down the problem by doing a sequential minimization by first finding the best permutation and then finding the best rotation...

We discuss equivariant OT flow matching for more general symmetry groups in appendix B.1 in great detail. We will include the main findings in Sec. 4 in the final version. We agree that this minimization problem is intractable, but we show that our approach is close to the correct solution. See the global rebuttal for a detailed analysis.

> The experiments section is also rudimentary...

We respectfully disagree that the experiments section is rudimentary. As mentioned, we are the first to train a Boltzmann generator for real molecules in Cartesian coordinates. Nevertheless, we added more experiments and additional evaluations, as shown in the global rebuttal. We report runtimes of the training in appendix C.3.
However, note that in practice we can do the data preparation in parallel before or during the training. We already compare to $E(N)$-NFs, also on their biased test set in appendix A.3. Note that we do not compare for LJ55 and alanine dipeptide, as likelihood training requires too much memory, as shown in appendix A.2. We added a comparison to normal flow matching as suggested.

> the results seem a bit mixed as equivariant models do worse on alanine dipeptide compared to non-equivariant models...

The difference for equivariant OT compared to normal OT is marginal for systems with few symmetries, i.e. fewer than 15 interchangeable particles, which is the case for DW4, LJ13 and alanine dipeptide. For alanine dipeptide, there are moreover different particle types. However, for the much larger LJ55 system (at least 3 times as many identical particles) the difference is significant. Crucially, the straight OT sampling paths allow using a Runge-Kutta integrator instead of the adaptive dopri5, which is significantly faster. With the “normal” OT, the particle integration paths change directions (see Figure 2b,c,e), which requires small step sizes where the particles turn. Hence, this shows that it is important to use equivariant OT flow matching when scaling to larger systems to obtain optimal paths.

> One key benefit of equivariant models is that they are more data efficient. This avenue seems to be underexplored...

We thank the reviewer for this suggestion; we included this experiment in the global rebuttal.

> There are many equivariant models for molecular data and simulation...

We investigate standard benchmarking systems from the literature and even propose new ones, which were previously out of reach for Boltzmann generators in Cartesian coordinates. We leave it for future work to scale to even larger systems. However, we showed that in order to get OT integration paths, equivariant OT flow matching is required when scaling to larger systems.

> While this paper claims equivariant flow matching as a general procedure the main method is limited to the groups $S(N)$ and $SO(D)$...

As mentioned, the theory for more general symmetry groups is somewhat hidden in appendix B.1. However, the challenge remains of finding a good approximation to Eq. 14 for other symmetry groups. Nevertheless, the symmetry groups investigated in our paper are the most important for molecular data. We will change the abstract as suggested.

---

Rebuttal Comment 1.1: Title: Re: Rebuttal
Comment: I thank the authors for their detailed rebuttal. The additional experiments in the global response are appreciated. However, after another careful reading of this work, I will have to, unfortunately, maintain my current position on this paper. The authors claim that training Boltzmann generators in Cartesian coordinates using flow matching is novel and important for AI4Science applications. I disagree on the novelty, but I do agree on the potential impact for AI4Science. First, looking at the code provided, you require an MCMC step to generate a dataset for NLL-based training. The training part can really be done with any generative model because we have the dataset. The choice of using flow matching here is, respectfully, not that crazy. Indeed, if you really had to do training and generate samples using ONLY the energy function and no MCMC, using something like the reverse KL, that would be very intriguing. Unfortunately, this is already done in [1], who also do flow matching.
Regarding the experiments in the global response, I seem to have missed where the experiments in the low-data regime are done, as asked in my original review. I do see Table 4, but this seems to be varying batch size and not dataset size. Regarding the difficulty of the OT problem: I believe the authors will find more interesting ways forward if they modify the ground cost function $c$ by looking into manifold OT. For Lie groups, one idea is to use the geodesic cost instead, and that way you do not need to do this approximation. For $S_n$, I do not have a better answer yet.

[1] Máté, Bálint, and François Fleuret. "Learning Interpolations between Boltzmann Densities." Transactions on Machine Learning Research (2023).

---

Reply to Comment 1.1.1: Title: Answer to response - Part 1
Comment: We appreciate the reviewer's extra time spent reviewing our paper. We now answer their additional responses individually.

> The authors claim that training Boltzmann generators in Cartesian coordinates using flow matching is novel and important for AI4Science applications. I disagree on the novelty, but I do agree on the potential impact for AI4Science.

We are pleased that the reviewer now acknowledges the potential impact of our work in the AI4Science community. On their assessment about novelty, we respectfully disagree and have elaborated on our stance earlier and below.

> First, looking at the code provided, you require an MCMC step to generate a dataset for NLL-based training.

We agree that we require data to train with flow matching, which is generally true for flow matching. We mention this in Sec. 3, 6, and in appendix C.2, C.4. However, we do not require training samples from the target distribution, as discussed below.

> The training part can really be done with any generative model because we have the dataset.

We respectfully disagree, as we require an exact-likelihood model for reweighting to the equilibrium distribution, which differs from the reviewer's assertion that any generative model can be used. Previous research has shown that flows are the most promising approach for addressing this challenge (see Sec. 1 + 2). For the more complex Cartesian coordinate representations, they relied on CNFs due to their enhanced expressiveness. However, applying CNFs to larger systems like alanine dipeptide or LJ55 proved prohibitively expensive, as outlined in appendix A.2. This motivated our solution.

> The choice of using flow matching here is, respectfully, not that crazy. Indeed, if you really had to do training and generate samples using ONLY the energy function and no MCMC, using something like the reverse KL, that would be very intriguing. Unfortunately, this is already done in [1], who also do flow matching.

We agree that training equivariant flows with flow matching was within reach, but it remained unexplored until our work. However, as mentioned in the rebuttal, we show that when scaling to larger systems with more symmetries, it becomes critical to do equivariant OT flow matching to preserve the optimal transport sampling paths. Our algorithm for this was not as straightforwardly apparent. Respectfully, we differ on the evaluation of [1]. Their approach, evident from Eq 16, does not involve flow matching or simulation-free training. Instead, they introduce a better way of doing energy-based training. Consequently, it is impractical for the larger systems we explored (see appendix A.2).
[1] solely assesses their method on small toy systems, raising doubts about its suitability for our larger, more rigid systems, even if one could somehow scale their version of energy-based training. Lastly, interpolating the potential as in [1] will generally not lead to OT integration paths. We agree with the reviewer's point that aiming for simulation-free, energy-based training similar to flow matching holds promise. However, its viability is currently unclear. A potential alternative approach is to initially train a CNF using flow matching with a small set of samples. Subsequent sample generation through the CNF, followed by reweighting to the target distribution, allows these samples to be added iteratively to the training set. Importantly, this process does not rely on backpropagation, enabling scalability for larger systems. While beyond this paper's scope, we will mention this surrogate concept for energy-based training in our future work section. Moreover, our approach does not require the training set to originate from the target distribution. This is exemplified in our alanine dipeptide experiments. There, we initially generate samples employing a classical force field and subsequently relax them with respect to a semi-empirical force field that is three orders of magnitude more expensive. As a result, the training samples are biased but are generated significantly faster than by solely simulating with the semi-empirical force field. Despite the flow learning from these biased samples, we are able to reweight to the unbiased distribution (see Sec. 6.3). Therefore, this aligns with the reviewer's request, as our method entails notably fewer energy evaluations compared to MD simulations of the target potential, akin to energy-based training. We will clarify this in the final version.
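The reweighting step this thread leans on is standard self-normalized importance sampling, which is why the exact flow likelihood matters; a minimal sketch, where the log-space computation is an assumption for numerical stability rather than a detail from the paper:

```python
import numpy as np

def reweight(observable, log_mu, log_q):
    """Self-normalized importance-sampling estimate of <O> under mu.

    observable: O(x) evaluated at flow samples x, shape (M,).
    log_mu: unnormalized target log-density at x (minus the energy).
    log_q: exact flow log-likelihood at x (why an exact-likelihood
           model such as a CNF is required).
    """
    log_w = log_mu - log_q
    w = np.exp(log_w - log_w.max())      # stabilized unnormalized weights
    return float((w * observable).sum() / w.sum())
```

If the flow is trained on biased samples, the same weights absorb the bias, which is the mechanism behind the alanine dipeptide experiments described above.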
Review 3:

Summary: The authors propose equivariant flow matching, which provides a way of incorporating symmetries into flow matching objectives. Specifically, they propose to replace the squared Euclidean distance cost used in the flow matching objective with (approximately) the minimum squared Euclidean distance over all possible group actions. They demonstrate in their experiments that this improves training of equivariant flows, obtaining similar or improved performance compared to vanilla OT flow matching, with shorter paths.

Strengths:
- They identify a clear problem of how symmetries (especially permutation symmetries) cause issues with OT flow matching, resulting in a (possibly prohibitively) large required batch size.
- Their solution is simple and fits the problem well.
- Their experimental results are consistent with the theory (their CNF has shorter paths).

Weaknesses:
- The performance results are mixed: the flow trained with vanilla OT is sometimes better. Specifically, for alanine dipeptide the flow trained with vanilla OT performs better; thus the equivariant flow matching method does not seem relevant to one of the headline results, "for the first time we obtain a Boltzmann generator with significant sampling efficiency without relying on tailored internal coordinate featurization". On first read of the abstract, the wording makes it seem like this result was obtained due to the authors' proposed method rather than by applying the existing flow matching technique to the problem.
- A CNF trained with a classic score matching loss instead of OT flow matching would be a good, obvious baseline, but is not included.

Technical Quality: 3 good
Clarity: 3 good

Questions for Authors:
- How was the effective sample size calculated? (equation, number of samples, etc.)
- The training time for the equivariant OT flow matching is significantly longer than for the vanilla OT flow matching on LJ55 and alanine dipeptide, even though it is trained for fewer epochs with a smaller batch size. Why is this the case? (e.g. is it due to the extra compute required for the search over rotations/permutations?)
- Could you please provide further description of the rationale behind the hyper-parameters used for each experiment in the appendix?
- Could you please specify the compute used for the MD simulation (e.g. number of target energy evaluations, runtime) in the appendix?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The equivariant flow matching method results in longer training times for LJ55 and alanine dipeptide, but no discussion of why / analysis of this is provided (see Questions). One of the headline results from the abstract (re alanine dipeptide) is obtained using an existing method (without the new proposed method).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
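Since this review highlights the shorter paths, here is the path-length definition given in the first rebuttal (the usual arc length in $N \times D$ dimensions), as a minimal sketch over a discretized ODE trajectory; the array layout is an assumption:

```python
import numpy as np

def path_length(trajectory):
    """Arc length of an integration path.

    trajectory: array (T, N*D) of states x(t_k) along the sampling ODE.
    Sums Euclidean segment lengths in the flattened N*D space.
    """
    segments = np.diff(trajectory, axis=0)
    return float(np.linalg.norm(segments, axis=1).sum())
```

For perfectly straight OT paths, this reduces to the distance between endpoints, which is why shorter reported lengths indicate paths a fixed-step Runge-Kutta integrator can follow cheaply.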
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review and questions. We now address their comments individually.

> The performance results are mixed - the flow trained with vanilla OT is sometimes better.

The difference for equivariant OT compared to normal OT is only marginal for systems with few symmetries, i.e. fewer than 15 interchangeable particles, which is the case for DW4, LJ13 and ALA2. However, for the much larger LJ55 system (at least 3 times as many identical particles) the difference is significant. Crucially, the straight OT sampling paths allow using a Runge-Kutta integrator instead of the adaptive dopri5, which is significantly faster. With the “normal” OT, the particle integration paths change directions (see Figure 2b,c,e), which requires small step sizes where the particles turn. Hence, this shows that it is important to use equivariant OT flow matching when scaling to larger systems to obtain optimal paths. This effect is not that prominent for the smaller systems, where normal OT flow matching already works quite well.

> Specifically for alanine dipeptide the flow trained with vanilla OT performs better - thus the equivariant flow matching method does not seem relevant to one of the headline results "for the first time we obtain a Boltzmann generator with significant sampling efficiency without relying on tailored internal coordinate featurization". On first read of the abstract the wording makes it seem like this result was obtained due to the authors' proposed method rather than applying the existing flow matching technique to the problem.

We obtain the described result with both the OT flow matching and the equivariant OT flow matching. We extend Table 2 with results obtained from equivariant OT flow matching, see the global rebuttal. Enhancing MD simulations using ML models is a rapidly growing field, as evidenced by the increasing number of AI4Science groups focused on this topic. Prior to our work, it was uncertain whether training a Boltzmann Generator in Cartesian coordinates to generate samples from the equilibrium distribution of real molecules was possible. We are the first to present a model and training algorithm that successfully achieve this. We believe our results are of great interest to the community and will stimulate further research in this area, potentially leading to transferable Boltzmann Generators. Moreover, we consider an even more difficult problem, as we train the model with biased samples instead of samples from the equilibrium distribution. We present two training algorithms that achieve this. Both build on the known concepts of flow matching and EGNNs; however, we are the first to combine these concepts, resulting in this novel finding. Moreover, we show that when scaling to larger systems with more symmetries, e.g. as shown for LJ55 with 55 identical Lennard-Jones particles, it becomes critical to do equivariant OT flow matching to preserve the optimal transport sampling paths. Interestingly, our work shows that for systems with fewer symmetries, e.g. fewer identical particles as in DW4, LJ13, and alanine dipeptide, the benefits of including equivariant OT flow matching are marginal and sometimes even show worse performance, as in the case of alanine dipeptide. We will highlight these findings more in the final version of the manuscript and change the abstract accordingly.

> A CNF trained with a classic score matching loss instead of OT flow matching would be a good, obvious baseline, but is not included.
We included normal flow matching as a baseline, see the global rebuttal.

> How was the effective sample size calculated?

Kish's equation. We used 10000 samples for the DW4 and LJ13 systems and 100000 for LJ55 and alanine dipeptide. We will add the equation, reference, and values to the appendix.

> The training time for the equivariant OT flow matching is significantly longer than the vanilla OT flow matching for LJ55 and alanine dipeptide, even though it is trained for fewer epochs with a smaller batch size - why is this the case? (e.g. is it due to the extra compute required for the search over rotations/permutations?)

Yes, the reviewer's intuition is correct. The training takes longer for the equivariant flow matching because of the search over rotations and especially permutations. However, the training data preparation can be performed prior to or during the training and is trivially parallelizable on CPUs, allowing the equivariant flow matching models to be trained essentially as fast as the OT flow matching models. See the global rebuttal for more details.

> Could you please provide further description of the rationale behind the hyper-parameters used for each experiment in the appendix?

The used hyperparameters are discussed in appendix C.3. We performed mostly manual hyperparameter searches and used larger models for the more challenging systems.

> Could you please specify the compute used for the MD simulation (e.g. number of target energy evaluations, runtime) in the appendix?

We will include more details in the appendix. Many simulation details are already included in appendix C.2.

> The equivariant flow matching method results in longer training times for LJ55 and alanine dipeptide, but no discussion of why / analysis of this is provided (see Questions).

We add a more detailed discussion in the final version, also exploring alternatives and the parallel way to prepare the training data, as discussed above and in the global rebuttal.

> One of the headline results from the abstract (re alanine dipeptide) is obtained using an existing method (without the new proposed method).

The novel result of training a Boltzmann generator for molecules in Cartesian coordinates is obtained with both the equivariant OT flow matching and OT flow matching. See the discussion above as well as the global rebuttal.

---

Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. I have the following remaining concern:

> The novel result of training a Boltzmann generator for molecules in Cartesian coordinates is obtained with both the equivariant OT flow matching and OT flow matching.

It seems that the paper could be split into two key contributions: (1) applying flow matching to the Boltzmann distribution of molecules, and (2) improving flow matching with equivariant flow matching. Currently I think the abstract strongly emphasizes contribution (2), and I think the typical reader would conclude from the following claim in the abstract

> where for the first time we obtain a Boltzmann generator with significant sampling efficiency without relying on tailored internal coordinate featurization

that this is only possible with the equivariant flow matching technique. However, in their rebuttal the authors are strongly emphasizing contribution (1), which upon my first reads I took as more of a "baseline" than a contribution. I agree that both of these contributions are valuable.
However, I agree with Reviewer coc5 that (1) is not very novel, as there is already work that uses score matching for Cartesian-coordinate molecular data (https://arxiv.org/abs/2203.17003) with equivariant networks, which is very similar in problem/solution structure (although of course there are some differences). I think differentiating contributions (1) and (2) in the abstract would clarify the paper, to make sure the presented claims are accurately interpreted. Lastly, for contribution (1) to be a significant contribution, I think the paper would need to provide more insight into some specifics of training CNFs as Boltzmann generators (such as training by energy) and detailed comparisons to existing approaches (such as comparing to discrete flows on internal coordinates).

---

Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's extra time spent reviewing our paper. We now answer their additional responses individually, again only citing parts of each question, but answering the whole one.

> ... I agree that both of these contributions are valuable. However, I agree with Reviewer coc5 that (1) is not very novel, as there is already work that uses score matching for Cartesian-coordinate molecular data (...) with equivariant networks, which is very similar in problem/solution structure (although of course there are some differences). I think differentiating contributions (1) and (2) in the abstract would clarify the paper, to make sure the presented claims are accurately interpreted.

We acknowledge that the initial abstract formulation might have caused confusion. As recommended by the reviewer, we will emphasize the two primary contributions throughout the abstract, introduction, and conclusion. We appreciate the reviewer's recognition of the value of both these contributions. We concur that the work by Hoogeboom et al. addresses a related issue using a similar method, as we acknowledge in Section 2. However, their focus lies in generating molecular conformers rather than sampling from the Boltzmann distribution, employing a diffusion model instead of a flow. Notably, their conformer generation process does not necessitate reweighting to the target distribution, thus avoiding the utilization of the corresponding probability flow ODE for sampling, which would have been closer to our approach. Moreover, previous studies on flow matching have demonstrated that score-based models yield significantly longer integration paths compared to even naïve flow matching. Consequently, generating samples with the probability flow ODE becomes more expensive. We agree that training equivariant flows with flow matching was within reach, but it nevertheless remained unexplored until our work.

> Lastly, for contribution (1) to be a significant contribution I think the paper would need to provide more insight into some specifics of training CNFs as Boltzmann generators (such as training by energy) and detailed comparisons to existing approaches (such as comparing to discrete flows on internal coordinates).

In our revised version, we will provide a more detailed explanation than in the initial paper of why CNFs combined with flow matching offer the most promising results and future potential for generating Boltzmann distribution samples. Energy-based training is currently not possible in a simulation-free manner. Hence, it is infeasible for CNFs on larger systems, as shown in appendix A.2.
A potential alternative approach is to initially train a CNF using flow matching with a small set of samples. Subsequent sample generation through the CNF, followed by reweighting to the target distribution, allows these samples to be added iteratively to the training set. Importantly, this process does not rely on backpropagation, enabling scalability for larger systems. Moreover, our approach does not require the training set to originate from the target distribution. This is exemplified in our alanine dipeptide experiments. There, we initially generate samples employing a classical force field and subsequently relax them with respect to a semi-empirical force field that is three orders of magnitude more expensive. As a result, the training samples are biased but are generated significantly faster than by solely simulating with the semi-empirical force field. Despite the flow learning from these biased samples, we are able to reweight to the unbiased distribution (Sec. 6.3). This training requires a similar number of energy evaluations as traditional energy-based training. There are multiple advantages of using Cartesian coordinates over internal coordinates for molecular systems. The main two are (i) transferability and (ii) the long *robot arm problem*: (i) Training a transferable Boltzmann generator will be easier in Cartesian coordinates, as these are not specific to each system; the internal coordinate descriptions differ significantly for varying atom counts. (ii) For example, in folded states some non-bonded parts will be close, but the information that they are close needs to be propagated through the whole backbone structure in the form of torsion angles, angles and bond lengths (a long robot arm). Hence, a model in internal coordinates, which carries this distance information only implicitly, will fail to learn the distribution accurately. Moreover, going to explicit solvent systems, i.e. with water molecules, internal coordinates will not be possible, and accounting for the large number of possible permutations of interchangeable water molecules will be crucial, highlighting again the importance of using our equivariant OT flow matching when scaling to larger systems, as shown in the LJ55 results. Note that it is difficult to represent the Lennard-Jones clusters (LJ13, LJ55) in internal coordinates.
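The Kish effective sample size named in this thread is a one-liner on the importance weights; a minimal sketch, assuming unnormalized log-weights as input:

```python
import numpy as np

def kish_ess(log_w):
    """Kish ESS = (sum w)^2 / sum w^2 from unnormalized log-weights."""
    w = np.exp(log_w - log_w.max())      # shift for numerical stability
    return float(w.sum() ** 2 / (w ** 2).sum())

# Relative ESS, as typically reported: kish_ess(log_w) / len(log_w)
```

The ESS collapses toward 1 when a few weights dominate, which is why the rebuttals argue that naive reweighting from a Gaussian prior fails for larger systems.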
Review 4:

Summary: This paper extends existing works on flow matching with minibatch OT solutions to the case of invariant cost functions, in particular where the invariance is given by a permutation + rotation group. This is mainly a method of correcting the minibatch bias, since (non-equivariant) minibatch OT will still converge to the correct mapping when the minibatch size goes to infinity. Empirically, it is shown that equivariant OT matching results in lower transport costs, and hence shorter path lengths, which may indicate that it is computationally faster to simulate after training than non-equivariant OT (though not shown).

Strengths:
- Well-written and easy to understand the high-level idea.
- Straightforward extension of existing works to training equivariant flows.

Weaknesses:
- Eq 13 and 14 are not easy to solve, and it is unclear how much compute (or wall-clock time) these subproblems require.
- Lack of comparison to other baselines (equivariant diffusion models, standard flow matching).
- The empirical differences between equivariant OT and OT seem marginal.
- While the paper discusses transport cost, it is not shown that this translates to faster sampling algorithms.

Technical Quality: 3 good
Clarity: 3 good

Questions for Authors:
- Eq 14 gives a suboptimal solution to Eq 13. Have the authors looked into how this affects either the transport cost or the final trained model? For instance, one could perhaps solve for the optimal permutation and rotation in an alternating fashion. The current method can be seen as a single-step approximation of this procedure.
- It seems that all of the experiments are on fitting to potential functions. However, the main algorithm assumes data is sampled IID. This mismatch between the problem statement and benchmark problems seems to not be discussed much at all. Is my understanding correct that the training datasets are all simulated from some MCMC procedure?
- Related to the above, why not experiment on data sets such as QM9, which seems to be used by prior works such as equivariant diffusion models [1]?
- Table 1 doesn't seem to show a large improvement for equivariant OT. Why is that? From what I understand, at small batch sizes, equivariant OT should perform better. But is it perhaps that at regular batch sizes, the difference between OT and equivariant OT becomes much smaller?
- Also, it would be interesting to see how regular flow matching performs here, since flow matching with conditional OT paths can approximate OT solutions quite well already [2], and there have been a few papers discussing the relation between standard diffusion models and optimal transport [3].
- As part of the future work section, the authors discuss approximations to solving OT problems. It might be worth testing out the approximate algorithms proposed in the published work on minibatch OT + flow matching [2], which the authors should cite. There they showed heuristic algorithms that have similar performance to minibatch OT but with faster compute cost.
- Regarding the transport cost, one reason for wanting a low transport cost that this reviewer is familiar with is sampling efficiency. However, this aspect isn't shown quantitatively in the experiments. For instance, a plot of ESS (or any performance metric) vs NFE, comparing OT and equivariant OT, would be very useful. From the figures, it is not strongly convincing that equivariant OT is actually performing significantly better than OT yet.
- Another avenue perhaps is a plot of ESS vs batch size.
It would be good to understand at what batch sizes equivariant OT exhibits better performance than OT.
- Showing wall-clock time of equivariant OT, OT and regular flow matching would be ideal. For instance, Table 9 of [2] shows that using minibatch OT solutions barely increases compute (by 4%), but I think having to solve for the optimal permutation (especially when the number of particles is high) is a much harder task. It isn't clear whether the increase in compute cost is a good tradeoff for the better transport cost (I can believe it is; just that it isn't shown in the paper). For instance, a convergence plot vs wall-clock time would be very convincing.
- As a reviewer who is not familiar with the experimental setups and cannot recognize these systems by name, it would be useful to have a table summarizing each experiment. For instance, list the potential function and how many particles are in each system. This can help provide a better sense of how challenging each task is.
- Finally, I am just somewhat perplexed by the motivation of training an equivariant generative model from potential functions, when the training aspect requires MCMC sampling from the desired potential function as a first step. This seems to nullify the simulation-free aspect of flow matching, and it seems that an approach for posterior inference would be much better suited. At the same time, I'm sure the proposed algorithm could work well in settings where a dataset is provided (e.g. QM9 seems to be standard), but these experiments do not appear in this paper.

[1] "Equivariant Diffusion for Molecule Generation in 3D." ICML 2022. https://arxiv.org/abs/2203.17003
[2] "Multisample Flow Matching: Straightening Flows with Minibatch Couplings." ICML 2023. https://arxiv.org/abs/2304.14772
[3] "Understanding DDPM Latent Codes Through Optimal Transport." ICLR 2023. https://openreview.net/forum?id=6PIrhAx1j4i

----

I am satisfied with the authors' answers and have increased my score in light of the new experiments and clarifications. See my reply for some comments regarding the rebuttal.

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Overall, I think the benefits of equivariant OT compared to OT (and regular flow matching) could be showcased more. Theoretically, I can believe what the authors are claiming, but empirically I don't quite see these emphasized as part of the empirical results yet. I suggested some ideas for additional plots in the Questions section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
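For readers following this review's discussion of minibatch OT couplings: with uniform weights and equal batch sizes, the minibatch 2-Wasserstein plan is a permutation, so the Hungarian algorithm solves it exactly. A minimal non-equivariant sketch follows; the equivariant variant discussed in the paper would swap in the orbit-minimized cost from the earlier sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def minibatch_ot_pairs(x0_batch, x1_batch):
    """Pair prior and data samples via exact minibatch OT.

    x0_batch, x1_batch: arrays of shape (B, d). For uniform empirical
    measures of equal size, the 2-Wasserstein plan is a permutation.
    """
    cost = ((x0_batch[:, None, :] - x1_batch[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return x0_batch[rows], x1_batch[cols]
```

The returned pairs are what a conditional flow matching step consumes; the cubic cost in the batch size B is one reason the approximations in [2] are attractive.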
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review and questions. We now address their comments individually. To keep the rebuttal within the character limit, we often only cite parts of each question.

> Eq 13 and 14 are not easy to solve ...

Eq 13 is indeed intractable, but Eq 14 gives a good tractable approximation; see the global rebuttal.

> Lack of comparison to other baselines ...

We added the suggested standard flow matching baseline in the global rebuttal.

> The empirical differences between equivariant OT and OT seem marginal.

The difference for equivariant OT compared to normal OT is only marginal for systems with few symmetries, i.e. fewer than 15 interchangeable particles, which is the case for DW4, LJ13 and ALA2. However, for the much larger LJ55 system (at least 3 times as many identical particles) the difference is significant. Crucially, the straight OT sampling paths allow using a Runge-Kutta integrator instead of the adaptive dopri5, which is significantly faster. With the “normal” OT, the particle integration paths change directions (see Figure 2b,c,e), which requires small step sizes where the particles turn. Hence, this shows that it is important to use equivariant OT flow matching when scaling to larger systems to obtain optimal paths. This effect is not that prominent for the smaller systems, where normal OT flow matching already works quite well. See also the global rebuttal.

> While the paper discusses transport cost, it is not shown that this translates to faster sampling algorithms.

We report in Section 6.2 a speed-up of about a factor of 10 in inference for the LJ55 system. Moreover, we performed additional experiments to highlight this further, see the global rebuttal.

> Eq 14 gives a suboptimal solution to Eq 13...

For the discussed systems, our approximation is quite close. We also tested the suggested approximation algorithm, see the global rebuttal.

> It seems that all of the experiments are on fitting to potential functions...

The algorithm does not require IID samples; the samples can be biased. In fact, we show this for the molecule experiments, where the training samples do not stem from the target potential, which is a semi-empirical potential. Instead, we generate the samples with a classical force field, which is about three orders of magnitude cheaper, and then relax these samples with respect to the semi-empirical force field. Hence, the training samples are biased and not IID. Although the flow then learns a biased potential function, we can reweight the flow samples to the unbiased distribution (see appendix B.3). This is potentially much faster than doing the simulation with respect to the semi-empirical force field. We further validate that we indeed generate samples from the unbiased distribution by comparing to an umbrella simulation (Sec 6.3).

> why not experiment on data sets such as QM9, ...

QM9 is a different task, i.e. conformer generation (single conformations instead of the Boltzmann distribution). Moreover, the QM9 dataset consists of only small molecules, and the aim of this paper is to scale Boltzmann Generators for the first time to significantly larger systems.

> Table 1 doesn't seem to show large improvement for equivariant OT ...

See above. We investigated different batch sizes in the global rebuttal.

> ... how the regular flow matching performs here,...

We added this baseline, see the global rebuttal.

> ... approximations to solving OT problems ...

We will cite that in the related work section. See the global response.

> ...
plot of ESS vs NFE... and a plot of ESS vs batch size ... These are excellent suggestions. We compare as suggested, highlighting the importance of equivariant flow matching when scaling to larger systems. See general rebuttal. > ... wallclock time ... We compare the wall-clock training time in appendix C.3 Table 5. The training does take longer for the equivariant flow matching. However, a simple way to speed up the equivariant OT training is to generate the batch pairs outside of training. This process is highly parallelizable and can be performed on CPUs, which are in practice usually more readily available than GPUs. See global rebuttal for the suggested plot in Figure 1h. > ... table summarizing each experiment We agree that the dataset information should be more visible. We included most of the information only in appendix C.2 and will move parts to the main part in the final version of the manuscript. > ... motivation of training an equivariant generative model from potential functions, when the training aspect requires MCMC sampling from the desired potential function as a first step... We show for alanine dipeptide that we do not require samples from the equilibrium target Boltzmann distribution, as discussed above. This is a common setting, as the target potential function is often known up to a constant, but sampling from the equilibrium distribution is difficult. However, generating biased samples, e.g. in different meta-stable states, is often feasible, and these can then be used to train a Boltzmann Generator and produce unbiased samples. Unfortunately, posterior inference requires backpropagating through the whole integration path, which is infeasible for larger systems, as discussed in appendix A.2. Note that also for posterior inference, we usually require some data from the target distribution for initial training of the model. However, our work paves the way for transferable Boltzmann generators, as we operate in Cartesian coordinates. These will allow training on (biased) trajectories of a few molecules and be applicable to unseen ones. Hence, simulations are required only for the training molecules. We leave this exciting avenue for future research. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response and updated evaluations, especially regarding ESS vs NFE and the naive flow matching (but with equivariant architecture) approach. To me, I feel that the technical contribution of the equivariant OT extension is quite interesting and sufficiently novel. However, the application of learning equivariant generative models for stationary distributions seems like it is a mere subset of the potential applications an equivariant flow matching model can be used for. I agree this is a difficult problem, I just don't understand the motivation for focusing on this and not other equivariant data distributions. While I am not familiar with the literature around Boltzmann Generators, I do agree with the authors that the work (Máté, Bálint, and François Fleuret. "Learning Interpolations between Boltzmann Densities.") brought up by another reviewer is closer to a physics-informed neural net approach of fitting PDEs, which wouldn't scale. On the other hand, one could also say that this work requires sampling (either exactly or approximately) from the stationary distribution beforehand, which adds additional complexity and reliance on the sampling algorithm.
Overall though, I appreciate the new plots in the rebuttal (namely, figures b and d) and I am still in favor of accepting this work. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their additional time spent reviewing our paper. We will now address their additional comments below. > I thank the authors for their response and updated evaluations, especially regarding ESS vs NFE and the naive flow matching (but with equivariant architecture) approach. To me, I feel that the technical contribution of the equivariant OT extension is quite interesting and sufficiently novel. However, the application of learning equivariant generative models for stationary distributions seems like it is a mere subset of the potential applications an equivariant flow matching model can be used for. I agree this is a difficult problem, I just don't understand the motivation for focusing on this and not other equivariant data distributions. We are thankful for the reviewer's acknowledgment of our rebuttal's effectiveness and their positive assessment of the novelty and significance of our equivariant OT flow matching algorithm. We agree that applying flow matching to equivariant flow models could theoretically extend to molecular conformer generation. However, it is crucial to recognize that this represents a fundamentally distinct problem, potentially not best suited for the flow matching approach. Unlike the sampling task from the Boltzmann distribution that we address, determining the Boltzmann distribution for individual molecules in datasets like QM9 or GEOM is neither feasible nor the learning objective. This eliminates the need for reweighting generated samples to match a target Boltzmann distribution, rendering an exact likelihood model unnecessary. Furthermore, these datasets often contain only a few samples per molecule. This sparse representation might pose challenges for OT flow matching, as the limited sample count prevents the reordering of batches to generate optimal transport paths. For these reasons, we have chosen to focus on sample generation from Boltzmann distributions rather than pursuing the conformer generation problem. > While I am not familiar with the literature around Boltzmann Generators, I do agree with the authors that the work (Máté, Bálint, and François Fleuret. "Learning Interpolations between Boltzmann Densities.") brought up by another reviewer is closer to a physics-informed neural net approach of fitting PDEs, which wouldn't scale. On the other hand, one could also say that this work requires sampling (either exactly or approximately) from the stationary distribution beforehand, which adds additional complexity and reliance on the sampling algorithm. We agree with the reviewer that the work of Máté et al. does not scale to the larger systems we investigated, as they require integration for their energy-based loss function and, hence, cannot perform simulation-free training. However, any energy-based training is currently impossible to perform simulation-free. Hence, it is infeasible for CNFs for larger systems, as shown in appendix A.2. A potential alternative approach is to initially train a CNF using flow matching with a small set of samples. Subsequent sample generation through the CNF, followed by reweighting to the target distribution, allows these samples to be added iteratively to the training set. Importantly, this process does not rely on backpropagation, enabling scalability for larger systems, contrary to energy-based training.
Moreover, as detailed in our initial rebuttal, our approach does not require the training set to originate from the target distribution. This training method then requires a similar number of energy evaluations as traditional energy-based training (or the advanced energy-based training proposed by Máté et al.).
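The iterative scheme sketched in the reply above (train on a small sample set, generate, reweight, grow the training set) could look roughly as follows. This is a hypothetical sketch: `fit_flow`, `sample_flow`, and `target_log_density` are placeholder names, not functions from the paper.

```python
import numpy as np

def iterative_reweighted_training(train_x, n_rounds, n_new,
                                  fit_flow, sample_flow, target_log_density):
    """Hypothetical sketch of the iterative scheme described above.

    fit_flow: trains a CNF by flow matching on the current training set
    sample_flow: draws samples and returns (samples, model log-density)
    target_log_density: unnormalized log-density of the target distribution
    """
    rng = np.random.default_rng(0)
    flow = None
    for _ in range(n_rounds):
        flow = fit_flow(train_x)                 # simulation-free training
        x, log_q = sample_flow(flow, n_new)      # cheap forward sampling
        log_w = target_log_density(x) - log_q    # importance weights
        w = np.exp(log_w - log_w.max())          # rescale weights to (0, 1]
        keep = rng.random(len(x)) < w            # rejection-style reweighting
        train_x = np.concatenate([train_x, x[keep]])  # grow the training set
    return flow
```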
Rebuttal 1: Rebuttal: We thank the reviewers for their time reviewing our paper and for their insightful questions and suggestions. We present results for most of the suggested additional experiments and evaluations below and in the attached pdf. Moreover, we address some common questions and clarifications as well. **The theory for equivariant flow matching** for more general symmetry groups was somewhat hidden in appendix B.1. We will move the main findings to Sec. 4 and highlight this more in the abstract as suggested. **Approximation of Eq 13** The approximation given in Eq. 14 is in practice performed using the Hungarian algorithm for permutations and the Kabsch algorithm for rotations. We compare several other suggested approximations of Eq. 13 in Fig. 1c,d in the pdf. The baseline reference is computed with an expensive search over the approximation given in Eq. 14. Namely, we evaluate Eq. 14 for 100 random rotations combined with the global reflection, denoted as $O_{200}$, for each sample, i.e. $\hat{c}(x_0, x_1)=\min_{o\in O_{200}(D)}\tilde{c}(x_0, \rho(o) x_1),$ where $\tilde{c}(x_0, x_1)$ is given by Eq 14. Hence, this is 200 times more expensive than our approach. This baseline should be much closer to the true batch OT solution. The presented results for alanine dipeptide show that our approximation is simple while also being very accurate, which we also show in our experiments in the pdf and the main paper. Applying our approximation multiple times reduced the transportation cost slightly. Performing the rotations first led to inferior results. We observe the same behavior for the LJ55 system. **Parallel batch generation** In the current version of the paper, we perform the batch preparation during training. However, a simple way to speed up the equivariant OT training is to generate the batches beforehand or in parallel to the training process. This process is highly parallelizable and can be performed on CPUs, which are in practice usually more readily available than GPUs, and in larger numbers. This also allows for larger batch sizes for the equivariant OT model and comes at little additional cost. Hence, scaling equivariant flow matching to even larger systems should not be an issue. We use this procedure for the new experiments and will mention this in the final version. **Naïve flow matching baseline** We include naïve flow matching with the same equivariant architecture as an additional baseline, as requested by multiple reviewers. Naïve flow matching results in even longer integration paths, as shown in Figure 1 and Table 1. The other results are close to the results of OT flow matching. Flow matching with a non-equivariant architecture, i.e. a dense neural network, failed for all systems but DW4 and is hence not reported. **Benefits of equivariant OT flow matching** We show that when scaling to larger system sizes, e.g. the LJ55 system, equivariant OT flow matching is crucial to maintain optimal integration paths, which result in significantly faster sampling (Figure 1b) and allow the use of fixed-step integrators. Moreover, training is also faster for the equivariant OT models, as they converge faster (Figure 1h). **Training set sizes** We compare different training set sizes for alanine dipeptide and LJ55 in Table 2 in the pdf. Notably, the integration path lengths are the same across the different training set sizes. Note that we used $10^6$ training samples for all the results in the main paper. All other hyperparameters are the same.
As observed in prior work, equivariant models are quite data efficient, which is also reproduced by our findings. **Batch sizes** We compare different batch sizes for the LJ55 system in Table 3 in the pdf. The integration path lengths are again similar across the different batch sizes. However, smaller batch sizes resulted in better likelihoods for both flow matching models. **Simulation details** We prepare the training data before training as described above. We will use more runs for the error calculation and more samples to estimate ESS in the final version. The reported ESS might be misleading, as we do not generate many samples and the negative log-likelihoods are higher than observed for the larger training set sizes. Hence, some states might be missed by the model for the LJ55 system. We will merge Table 1 in the pdf with Table 1 in the paper. Tables 2 and 3 in the pdf will be added to the appendix, while the results will be discussed in the main part. Table 4 in the pdf will replace Table 2 in the paper. We thank the reviewers for their valuable suggestions and hope for an engaging discussion period. Pdf: /pdf/6ac233ce4e1ecda5545fe6edf08669ffa45b1259.pdf
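For concreteness, the permutation/rotation alignment described above (Hungarian algorithm for the permutation, then the Kabsch algorithm for the rotation) could be sketched as follows. This is a minimal sketch under the assumption of squared Euclidean cost on mean-centered point clouds; the exact composition in Eq. 14 may differ.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def equivariant_align_cost(x0, x1):
    """Approximate orbit-minimized transport cost between two point clouds.

    x0, x1: (n, 3) arrays, assumed mean-centered. The permutation is solved
    with the Hungarian algorithm, then the rotation with the Kabsch algorithm.
    """
    # Hungarian: optimal particle matching under squared Euclidean cost.
    cost = ((x0[:, None, :] - x1[None, :, :]) ** 2).sum(-1)
    _, cols = linear_sum_assignment(cost)
    x1p = x1[cols]

    # Kabsch: optimal rotation aligning the matched x1p onto x0 via SVD.
    u, _, vt = np.linalg.svd(x0.T @ x1p)
    d = np.sign(np.linalg.det(u @ vt))          # enforce a proper rotation
    rot = u @ np.diag([1.0, 1.0, d]) @ vt
    return ((x0 - x1p @ rot.T) ** 2).sum()

# Sanity check: a permuted copy lies on the same orbit, so the cost is ~0.
rng = np.random.default_rng(0)
a = rng.normal(size=(13, 3)); a -= a.mean(0)
print(np.isclose(equivariant_align_cost(a, a[rng.permutation(13)]), 0.0))
```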
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Frequency-Enhanced Data Augmentation for Vision-and-Language Navigation
Accept (poster)
Summary: Authors propose a Frequency-enhanced Data Augmentation (FDA) technique to improve generalization/robustness to unseen environments in the vision-language navigation task. This work provides an initial analysis showing that high-frequency information impacts performance. Results are provided on R2R, RxR, and CVDN datasets and are better than prior work when using the same models. Strengths: - The paper is clearly written and easy to follow. - Experimentally the authors demonstrate that the proposed FDA data augmentation technique improves performance on most of the models and validation splits. Weaknesses: - The premise of the paper is slightly confusing. The high- and low-frequency perturbed images are essentially different domains, so of course performance drops. Additionally, counter to what the writing of the paper details, low-frequency information also helps on SR for the val seen split in Table 1, and the unseen results vary by < 1 point on SR. - FDA can hurt performance or doesn’t improve as much on seen compared to unseen results (e.g., R2R DUET, val seen). - Table 4. It seems like an obvious result that a method trained with FDA is going to do better on high-frequency perturbed images; this table may not be necessary or at least does not need to be included in the main text. Are there downstream scenarios where this kind of perturbation would even realistically happen? - While many types of data augmentation are referenced in the related work section, the authors only experimentally compare to one (ENVEDIT) data augmentation work, and do not try baselines with other data augmentations that may affect similar information to the high-frequency perturbation. Because of this, the experiments feel incomplete, and generally underwhelming due to the small SR performance changes. Technical Quality: 3 good Clarity: 3 good Questions for Authors: **Questions:** - Figure 1 and the original analysis that motivates this paper beg the question: how much of the performance effect shown by high-frequency perturbation is caused by loss of color in the resulting image? What if you made the image black and white, or the low-frequency image black and white? The analysis feels lacking to truly demonstrate that it is frequency information that matters. - Did you try other methods of fusing the high-frequency information into the training images? Or other training regimes instead of alternating the images per even and odd training steps per Eq. (6)? - Are all of the results for a single experiment run? **Comments and Suggestions:** - L38-L40, I suggest notating where in the Figure to look. I.e. "Figure 1 (middle)" or "Figure 1 (right)" instead of repeating just "Figure 1". - L59-64 is a very long single sentence; it should be broken up into several shorter sentences to make it easier to follow. - Figure 4: there is room within the figure to have an abbreviation legend, since there are a ton of acronyms in the figure that are not defined in the figure caption - i.e., what is GHPF? FFT? etc. They are defined in earlier figures but it would be better to have them defined within the scope of this figure, too. - It would be better if figures appeared on the pages where they are referenced. - L111-118, I am not really sure what the difference is between the points being made in 2) and 3). - The notation in Eq. (2): It switches between using the $F$ notation for frequency and the $\mathcal{F}$ notation for FFT.
Isn’t the high and low frequency $H$ on the original image $I$, while $\hat{H}$ should be on the interference image $\hat{I}$? Right now $H$ and $\hat{H}$ have the same definition in the equation. - L156 "Lengt" --> "length", L153 "7k" --> "7,000", no comma in "2050" --> "2,050" - L158 "Process" --> "Progress"? - L178 says GP performance increases by 3.6, but the table shows a **0.36** improvement. - L230 add a space after "et al."; generally throughout paper this is an issue. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The initial analysis prompting this work is not very convincing of the importance of high frequency information in images, and the paper does not compare to multiple data augmentation techniques. It could have additional baselines for simple image augmentations or other augmentation methods from prior work. This is particularly important because the performance improvement is not very large across the different models and datasets. Given the small performance differences, if the reported results are from a single experiment it is not convincing that the FDA approach is consistently effective. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Answer to Weakness-1:** Thanks for your comments. Figure 2 reveals important insights into the behavior of baseline models: HAMT, DUET, and TD-STP. These models show relatively high SR under low-frequency perturbations but experience significant SR drops under high-frequency perturbations. This indicates VLN models’ deficiency in effectively capturing the required high-frequency information. In Table 1, the small 0.19 SR improvement on val seen might stem from overfitting, given the significant 0.9 SR drop on val unseen. In VLN, navigation ability is primarily evaluated using unseen environments for better generalization. On val unseen, TD-STP+high frequency shows a 0.72 SR increase over the baseline and a 1.62 improvement compared to TD-STP+low frequency. This indicates high-frequency information plays a more crucial role than low-frequency information in cross-modal navigation. Considering the above-mentioned reasons, we design our approach to enhance the agent’s ability to capture crucial high-frequency information necessary for cross-modal navigation. Moreover, an improvement in SR and SPL on a strong (SoTA) baseline (SR 69.65, SPL 62.97), with gains of 0.72 and 0.65, reaching 70.37 and 63.62, constitutes a noteworthy advancement. Furthermore, TD-STP+HF outperforms TD-STP+LF by 1.62/1.66 in SR/SPL on val unseen, showcasing substantial enhancement. **Answer to Weakness-2:** Thanks for your concern. The mentioned phenomenon results from severe overfitting of DUET to val seen. This overfitting issue in seen environments distorts the assessment of the model’s true navigational capability. Thus, seen environments have less significance as a reference in VLN. Instead, unseen environments are the main way to evaluate navigation ability, as they best reflect generalization capacity. On the VLN official competition leaderboards, agents are only evaluated and ranked in unseen environments. Hence, our optimal model choice is solely based on val unseen performance. In practice, during training, DUET+FDA exhibited 78.55/71.48 SR on val seen/unseen. Due to its underperformance on val unseen compared to the paper’s reported model (76.00/71.86 on validation seen/unseen), its navigational ability is deemed inferior. For val unseen, our method, applied to HAMT, DUET, and TD-STP, notably enhances SR by 1.88/1.66/2.13, respectively. **Answer to Weakness-3:** Table 4 illustrates that FDA-enhanced models excel on both unseen perturbed images (val unseen) and unseen perturbation images (ImageNet). While such scenarios may not exist in reality, they serve as compelling evidence of our method’s remarkable capability to capture essential high-frequency information for cross-modal navigation. This consideration led us to include it in the main manuscript. **Answer to Weakness-4-Part1:** Since no existing VLN data augmentation method aligns with our approach, which uniquely requires no additional parameters, data, or external models, comparisons with such methods are not available. Instead, we conduct a comparison with ENVEDIT, the only method sharing a similar image-level data augmentation strategy with ours. For reference, we’ve added comparisons of the FDA method with two other methods: Speaker [11*] and PREVALENT [13*], as shown in Table 1 of the attached PDF, using Baseline-1 [11*] and Baseline-2 [53*] models (* indicates the reference number in the main paper). The result shows that our method achieves an additional gain of 5.2 compared to Speaker, which relies on an extra model.
Furthermore, our approach can effectively complement the two data augmentation methods. **Answer to Weakness-4-Part2-[small SR performance changes]:** Our method applied to HAMT, DUET, and TDSTP significantly boosts val unseen performance of these SoTA methods, with SR increases of 1.88/1.66/2.13 respectively. Our improvements compared to the previous SoTA methods (DUET, TD-STP) rival or even surpass those of previous studies [29*, 37*, 50*, 1, 2, 3] (mainly below 2 points). Additionally, we experiment on two LSTM-based models: Baseline-1 [11*] and Baseline-3 [39*], as depicted in Table 2 of the attached PDF. These two models show impressive improvements of 7.3/3.4. [1] KERM: ... CVPR. 2023. [2] GeoVLN: ... CVPR. 2023. [3] Local Slot Attention for VLN. CVPR. 2023. **Answer to Question-1:** We use the method in Figure 4 to generate high-frequency interference (HFI) and low-frequency interference (LFI) images for Figure 2. See the “Augmented Image" in Figure 4 for the HFI image. Since the image with HFI retains low-frequency components, where the color information mainly resides (as illustrated in Figure 1), the color remains largely intact in the HFI image. However, in Figure 2, the model performs notably worse in HFI scenarios than in LFI scenarios. This underscores that high-frequency information, not color, is a key factor influencing cross-modal navigation. **Answer to Question-2:** We attempted to introduce high-frequency interference using weighted fusion on original images. The SR on val unseen was 70.54, notably below our method’s 71.78. Thus, we chose the current approach. Additionally, we experimented with random training on original and augmented images during navigation. The result is slightly inferior to the current method, with a 71.65 SR. **Answer to Question-3:** In Appendix Section B, we experiment with four seeds. Results consistently show notable navigation performance enhancement; SR improvements range from 1.74 to 2.89, with a mean increase of 2.13. **Answer to Comments and Suggestions:** Regarding 2) and 3) (L111-118): 2) highlights high-frequency information’s role in enhancing robustness, i.e., performance under environmental variations. 3) shows high-frequency information’s role in reducing overfitting, i.e., performance in new environments. We sincerely appreciate your thoughtful feedback regarding the issues with formulas, spelling, formatting, and expression. We will earnestly make the necessary revisions as suggested. --- Rebuttal Comment 1.1: Title: Rebuttal Comment: I'd like to thank the authors for answering my questions and for the additional experiments they have included that make the paper results more compelling. After reading the rebuttal and answers to other reviewers' questions, I will be raising my score to weak accept. --- Reply to Comment 1.1.1: Comment: Dear Reviewer bEiK, We greatly appreciate your diligent review of this paper. We are delighted to learn that our efforts could well address your concerns. Thanks so much for your recognition! Best wishes, Paper12863 Authors
Summary: In this paper, the authors propose a Frequency-enhanced Data Augmentation (FDA) for VLN models to enhance their ability to capture high-frequency information, thus improving their performance. First, at each step, they take the view as Reference Image and randomly sample an image as Interference Image. Then they transform these two images into the frequency domain via FFT and mix the high-frequency components of the two images. Finally, they obtain the augmented image via iFFT and alternately use original views’ images and augmented images to train the agent. The experiments on different benchmarks reveal that the proposed augmentation method is helpful to VLN models. However, I have some concerns about this paper. My detailed comments are as follows. Strengths: 1. The authors analyze the frequency domain by fusing images’ high-frequency or low-frequency components into original images, and find that high-frequency information is important for the agent’s performance in VLN tasks. 2. The authors propose a simple data augmentation method, Frequency-enhanced Data Augmentation (FDA), which needs no external models and does not add complexity. The proposed method enhances the models’ ability to capture necessary high-frequency information and has been demonstrated to improve the models’ performance on multiple datasets. Weaknesses: 1. I wonder why the authors choose to incorporate the high-frequency components from randomly sampled interference images into the original image, rather than using a fixed interference image, or utilizing the high-frequency components of the image itself, or randomizing certain high-frequency components of the original image. It would be better if the author could discuss these approaches or include them in the ablation experiments. 2. While the importance of high frequency compared to low frequency has been demonstrated by other experiments, it would be beneficial to conduct an ablation experiment utilizing low frequency for data augmentation under the same settings. 3. Some novel SoTA models are not shown in the tables in this paper, such as Airbert [1] on the R2R benchmark. It would be better for the authors to add these state-of-the-art models into the comparative experiments, which would demonstrate the strengths and limitations of the proposed method more comprehensively. Also, it would be better to add a brief introduction to the VLN works on continuous environments, such as [2,3], for the sake of completeness. 4. There are several low-level calculation/statistical errors that should have been avoided in the paper. For instance, on line 216, it should be "4 and 3 points," and on line 222, it should be "3.3 and 3.3 (2 and 1.7)." Additionally, on line 178, it should be "boost the GP performance by 0.36." The author should carefully review the data in the tables and the calculations presented throughout the text. **Minor issues 1. In Equation 2, since Equation 1 already uses $F_{\hat{I}}^{\{rgb\}}$ to represent the operation result of $\mathcal{F}^{\{rgb\}}(\hat{I})$, it would be better to use $F_{\hat{I}}^{\{rgb\}}$ instead of $\mathcal{F}^{\{rgb\}}(\hat{I})$ in Equation 2. 2. On page 5, line 156, “Path Lengt” should be “Path Length”. 3. On page 6, line 178, “boost the GP performance by 3.6” should be “boost the GP performance by 0.36”. 4. On page 8, line 216, “by 4 and 2 points” should be “by 4 and 3 points”. 5. On page 8, line 222, “by 3.4 and 3.3” should be “by 3.3 and 3.3”.
[1] Airbert: In-domain Pretraining for Vision-and-Language Navigation, ICCV 2021 [2] Weakly-Supervised Multi-Granularity Map Learning for Vision-and-Language Navigation, NeurIPS 2022. [3] Cross-modal Map Learning for Vision and Language Navigation, CVPR 2022. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The manuscript is well organized, and the proposed method shows consistent improvement upon several strong baselines on 3 datasets. It would be better for the authors to evaluate the proposed FDA technique on other multi-modal tasks, not limited to the VLN task. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: The authors have adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: I wonder why the authors choose to incorporate the high-frequency components from randomly sampled interference images into the original image, rather than using a fixed interference image, or utilizing the high-frequency components of the image itself, or randomizing certain high-frequency components of the original image. It would be better if the author could discuss these approaches or include them in the ablation experiments.** A1: Thank you for your intriguing suggestion. The reason we randomly incorporate the high-frequency components of the interference image into the original image is to provide a more diverse set of high-frequency negative samples. In response to your suggestion, we conduct experiments under three settings: (1) Setting 1 - only using a fixed interference image’s high-frequency components as interference; (2) Setting 2 - using the high-frequency components of one channel in the image as interference for other channels; (3) Setting 3 - randomizing certain high-frequency components of the original image. In Setting 1, the model achieves an SR of 70.92. In Setting 2, the score is 69.26, and in Setting 3, the score is 69.73. Compared to the baseline model, whose SR is 69.65, we observe a slight decline in performance in Setting 2, a slight improvement in Setting 3, and the most significant improvement in Setting 1. This suggests that incorporating high-frequency negative samples from other images effectively enhances the model’s ability to capture the required high-frequency information and improve cross-modal alignment. Comparing our method with Setting 1, we achieve superior performance with an SR of 71.78. The diverse sampling of high-frequency negative samples further improves the model’s ability to capture the necessary high-frequency information and enhance cross-modal alignment. We will add this discussion to the revision as suggested. **Q2: While the importance of high frequency compared to low frequency has been demonstrated by other experiments, it would be beneficial to conduct an ablation experiment utilizing low frequency for data augmentation under the same settings.** A2: Thanks for your valuable suggestion. In this experiment, we achieve an SR of 69.52. The result is slightly below the baseline’s SR of 69.65, and notably trails the SR of 71.78 attained through high-frequency-based data augmentation. These findings will be incorporated into the revised version as recommended. **Q3: Some novel SoTA models are not shown in the tables in this paper, such as Airbert [1] on the R2R benchmark. It would be better for the authors to add these state-of-the-art models into the comparative experiments, which would demonstrate the strengths and limitations of the proposed method more comprehensively. Also, it would be better to add a brief introduction to the VLN works on continuous environments, such as [2,3], for the sake of completeness.** A3: Thanks for your valuable suggestion. We apologize for the oversight. Airbert [1] is an outstanding work, and we will include a comparison with it in the revised version. Additionally, as per your suggestion, we will introduce an overview of research on VLN-CE, such as [2,3]. [1] Guhur, Pierre-Louis, et al. Airbert: In-domain pretraining for vision-and-language navigation. ICCV. 2021. [2] Chen, Peihao, et al. Weakly-supervised multi-granularity map learning for vision-and-language navigation. NeurIPS. 2022. [3] Georgakis, Georgios, et al.
Cross-modal map learning for vision and language navigation. CVPR. 2022. **Q4: There are several low-level calculation/statistical errors that should have been avoided in the paper. For instance, on line 216, it should be "4 and 3 points," and on line 222, it should be "3.3 and 3.3 (2 and 1.7)." Additionally, on line 178, it should be "boost the GP performance by 0.36." The author should carefully review the data in the tables and the calculations presented throughout the text.** A4: Thank you very much for your thorough review. We will definitely correct these errors in the revised version. **Q5: Minor issues.** A5: Thank you very much for your attention to detail. We will correct these errors in the revision. **Q6: The manuscript is well organized, and the proposed method shows consistent improvement upon several strong baselines on 3 datasets. It would be better for the authors to evaluate the proposed FDA technique on other multi-modal tasks, not limited to the VLN task.** A6: Thank you for your sincere and enlightening feedback. In response to your suggestions, we conduct an additional experiment on the REVERIE task using the SoTA model DUET. Based on our method, the model’s performance on SR, SPL, RGS, and RGSPL metrics shows significant improvement compared to the baseline model, with gains of 2.69/3.54/3.18/3.45, from 44.65/32.92/28.57/21.01 to 47.34/36.46/31.75/24.46. The comprehensive improvement of the agent’s navigation capabilities in the R2R, RxR, CVDN, and REVERIE tasks thoroughly validates the effectiveness of our method. In the next phase of our research, we will further extend the findings of this study to a broader range of cross-modal tasks. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: We thank the authors for their response. The response resolves all my concerns, and thus I will raise my rating from 5 to 6.
Summary: In this paper, the authors investigate the VLN task from a Fourier domain perspective, which aims to enhance the vision-language matching ability of agents. A Frequency-enhanced Data Augmentation (FDA) technique is proposed to improve the capability of capturing high-frequency information, which is seen as an important factor for training VLN agents. The experimental results show that baseline models can be further improved by adding the FDA technique. Strengths: - This work investigates the VLN task from an alternative perspective, i.e., the Fourier domain. - A reasonable FDA method is proposed to enhance the baseline model to capture high-frequency information without the need for additional parameters. - With the FDA technique, the performances of three transformer-based methods are further improved, confirming its effectiveness. Weaknesses: - The adopted HAMT, DUET, and TD-STP are all transformer-based agents. However, the LSTM-based agents with FDA also need to be verified. - In Table 2, DUET+FDA exhibits a significant performance drop on the val-seen set compared to the original DUET, e.g., SPL from 73.71\% to 69.66\%. - Lacking experimental comparison on the high-level instruction REVERIE dataset. - As shown in Table 2 of the supplementary material, different random seeds have a significant impact on the model performance. For example, seed-3 and seed-4 differ by 1\% in SPL. - The writing needs to be further polished, e.g., the highest accuracy numbers in all tables should be bolded, and the title of section 3.4 should be 'Comparison with the State-of-the-art Methods'. - Missing numerous references such as [1-4]. [1] Adaptive Zone-Aware Hierarchical Planner for Vision-Language Navigation. In CVPR 2023. [2] HOP+: History-enhanced and Order-aware Pre-training for Vision-and-Language Navigation. In TPAMI. [3] Reinforced Structured State-Evolution for Vision-Language Navigation. In CVPR 2022. [4] LANA: A Language-Capable Navigator for Instruction Following and Generation. In CVPR 2023. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See Weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The adopted HAMT, DUET, and TD-STP are all transformer-based agents. However, the LSTM-based agents with FDA also need to be verified.** A1: Thanks for your valuable suggestion. We conduct experiments on two LSTM-based models: Agent [1] and Follower [2]. The first baseline model, Agent, achieves an SR of 51.1 on R2R val unseen. After applying our FDA enhancement, the SR of the model increases to 54.5, a notable 3.4 improvement. The second baseline model, Follower, achieves an SR of 38.3 on R2R val unseen. After FDA enhancement, the SR of the model increases to 45.6, representing an impressive 7.3 improvement. These results demonstrate the effectiveness of our method not only for Transformer-based models but also for LSTM-based models. We will include these suggested experiments in the revised version. [1] S. Shen, et al. How much can CLIP benefit vision-and-language tasks? ICLR. 2022. [2] D. Fried, et al. Speaker-follower models for vision-and-language navigation. NeurIPS. 2018. **Q2: In Table 2, DUET+FDA exhibits a significant performance drop on the val-seen set compared to the original DUET, e.g., SPL from 73.71 to 69.66.** A2: Thank you for your comment. The phenomenon mentioned is a result of severe overfitting of DUET to the val seen split. The issue of overfitting in seen environments can impact the assessment of the model’s true navigational capability. Thus, seen environments have less significance as a reference in the VLN task. Instead, VLN unseen environments are the main way to evaluate navigation ability. This is because navigating unseen environments best reflects the model’s generalization ability. On the VLN competition leaderboards (both R2R and RxR), agents’ navigation capabilities are only evaluated and ranked in unseen environments. Hence, we base our best model selection solely on performance in the val unseen split. In practice, during the training process, DUET+FDA exhibited an SR of 78.55 on val seen and 71.48 on val unseen. Nevertheless, due to its underperformance in the unseen environments compared to the model reported in our paper (val seen SR 76.00, val unseen SR 71.86), its navigational ability is considered inferior. On the validation unseen split, our method, applied to HAMT, DUET, and TD-STP, substantially improves the SR by 1.88, 1.66, and 2.13, respectively. Notably, our method significantly improves the performance of TD-STP to achieve a new SoTA performance. Moreover, our method can effectively mitigate overfitting in models: it narrows the gap in SR between the seen and unseen splits compared to the baseline models, from 9.38, 8.35, and 6.65 to 8.29, 4.14, and 5.89, respectively. **Q3: Lacking experimental comparison on the high-level instruction REVERIE dataset.** A3: Thanks for your valuable suggestion, and we will add it to our revision. For your reference, we have conducted experiments on the REVERIE dataset using the SoTA model DUET. Based on our method, the model’s performance on SR, SPL, RGS, and RGSPL metrics shows significant improvement compared to the baseline model, with an increase of 2.69/3.54/3.18/3.45, from 44.65/32.92/28.57/21.01 to 47.34/36.46/31.75/24.46. The comprehensive improvement of the agent’s navigation capabilities in the R2R, RxR, CVDN, and REVERIE tasks thoroughly validates the effectiveness of our method. **Q4: As shown in Table 2 of the supplementary material, different random seeds have a significant impact on the model performance.
For example, seed-3 and seed-4 differ by 1 in SPL.** A4: Thank you for your comment. The fluctuations observed in the navigation metrics within Table 2 of the Appendix are considered normal. The average values for SR and SPL are 71.78 and 63.64, respectively. The maximum deviations of SR and SPL from their respective means, across the four experimental groups, are only 0.76, which is a reasonable value. We note that HAMT [1] and EnvAg [2] also reported multiple experimental results, and our deviation values are quite similar to theirs: SR deviation reaches 1.4 and 2.14 for HAMT and EnvAg, while SPL deviation reaches 1.1 and 2.12 for the same models. [1] S. Chen, et al. History aware multimodal transformer for vision-and-language navigation. NeurIPS. 2021. [2] X. E. Wang, et al. Environment-agnostic multitask learning for natural language grounded navigation. ECCV. 2020. **Q5: The writing needs to be further polished, e.g., the highest accuracy numbers in all tables should be bolded, and the title of section 3.4 should be ’Comparison with the State-of-the-art Methods’.** A5: Thank you for your valuable suggestions. We are committed to earnestly incorporating your suggestions to enhance the clarity and precision of our expression. **Q6: Missing numerous references such as [1-4].** A6: Thank you for the reminder. We sincerely apologize for overlooking these four outstanding and interesting works. Among them, two CVPR 2023 papers were not published at the time of our submission. We will include proper citations for all of them in the revised version. [1] Gao, Chen, et al. Adaptive Zone-Aware Hierarchical Planner for Vision-Language Navigation. CVPR. 2023. [2] Qiao, Yanyuan, et al. HOP+: History-enhanced and Order-aware Pre-training for Vision-and-Language Navigation. TPAMI. 2023. [3] Chen, Jinyu, et al. Reinforced structured state-evolution for vision-language navigation. CVPR. 2022. [4] Wang, Xiaohan, et al. LANA: A Language-Capable Navigator for Instruction Following and Generation. CVPR. 2023. --- Rebuttal Comment 1.1: Title: Post rebuttal Comment: The authors solve all my concerns in the response, and I believe this work has good insights for the community. Thus I decide to raise my score to 6.
Summary: This paper aims at investigating the Fourier domain of vision-and-language navigation (VLN). They propose Frequency-enhanced Data Augmentation (FDA) to improve the visual-text matching process based on high-frequency visual information. Their method can achieve promising results efficiently on various VLN datasets, including R2R, RxR, and CVDN, with the same parameter size. Strengths: + This paper is well-written and easy to follow. + The aspect of the Fourier domain and lower-/higher-frequency visual representation is new in VLN. + Though data augmentation is not novel in VLN, the frequency-based sampling seems effective, which helps achieve new/competitive SOTA on various datasets. + The sensitivity analysis in Figure 2 and Table 1 is a good, convincing introduction. Weaknesses: + Though the performance looks effective, what is the motivation to rely on the Fourier representation instead of the widely-used learned visual features (maybe I missed it, but I cannot find a clear answer in the draft)? + Does visual information get lost during the Fourier transform? How to overcome this issue during the navigation process? + A detailed analysis of data augmentation should be considered. For example, how much data can lead to how much improvement? If we keep training with more data, does the navigator work better and better? + The paper should include some qualitative examples to demonstrate how Fourier features can actually lead to the final goal. The comparison with baselines is required, and both successful/failed cases should be involved to guide future research. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: Please see Weaknesses Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
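On the information-loss question above: the discrete Fourier transform is invertible, so a forward/inverse round trip loses nothing beyond floating-point error (as the authors also note in their rebuttal below). A quick self-contained check:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))                 # stand-in for a view image
freq = np.fft.fftn(img, axes=(0, 1))          # per-channel 2D FFT
recon = np.fft.ifftn(freq, axes=(0, 1)).real  # inverse transform
print(np.allclose(img, recon))                # True: lossless round trip
```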
Rebuttal 1: Rebuttal: **Q1: Though the performance looks effective, what is the motivation to rely on the Fourier representation instead of the widely-used learned visual features (maybe I missed it, but I cannot find a clear answer in the draft)?** A1: Thank you for your question. Our method is essentially a combination of the Fourier transform and learned visual features. In the data augmentation phase, we rely on the Fourier transform to obtain the high- and low-frequency information of the reference image and the interference image. By using the Mix operation (Formula 3) and the inverse Fourier transform, we generate the augmented image. However, during the training and testing phases, we do not use Fourier transforms. Instead, we utilize learned visual features. In the training phase, we feed the reference image and augmented image into the image encoder to extract learned visual features. Subsequently, we input these features into the baseline model to learn downstream visual features and perform navigation reasoning based on them. During the test phase, we directly extract visual features from the reference image using the image encoder and feed them into the FDA-enhanced model to accomplish navigation reasoning. Because of this characteristic of our method, our lightweight approach can easily enhance existing navigation models based on learned visual features, such as TD-STP, DUET, and HAMT. We apologize for any unclear expressions and will ensure clarity in the revised version. **Q2: Does visual information get lost during the Fourier transform? How to overcome this issue during the navigation process?** A2: Thanks for pointing out your concern. We implement the forward and inverse Fourier transforms using numpy.fft.fftn and numpy.fft.ifftn, which incur almost no information loss. Meanwhile, our method enhances the ability of the model itself to capture high-frequency information. As a result, during the test phase, the model directly processes RGB images, the same as its baseline model, without relying on the Fourier transform. **Q3: A detailed analysis of data augmentation should be considered. For example, how much data can lead to how much improvement? If we keep training with more data, does the navigator work better and better?** A3: Thanks for your constructive suggestion. In order to identify some underlying patterns, we conduct a preliminary exploratory experiment using 10%, 100%, and 200% of the current augmented data volume. Compared to the baseline model, which achieves an SR of 69.65, the model trained with 10% augmented data exhibits an SR of 69.77. Similarly, the model trained with 100% augmented data achieves an SR of 71.78, and the model trained with 200% augmented data achieves an SR of 71.86. As the volume of augmented data increases, the model’s performance gradually improves; however, this improvement trend begins to plateau. We will add this discussion to the revision. **Q4: The paper should include some qualitative examples to demonstrate how Fourier features can actually lead to the final goal. The comparison with baselines is required, and both successful/failed cases should be involved to guide future research.** A4: Thanks for your valuable suggestions. Because our method does not require Fourier features during navigation and relies only on learned visual features, a direct qualitative analysis of Fourier features is not possible. However, we have still attempted to analyze the results from other perspectives.
Firstly, we conduct navigation experiments under high-frequency interference settings (Section 3.3) and provide corresponding visualizations (Appendix - Figure 1, Figure 2). The results confirm that our model can effectively capture the necessary high-frequency information from images. Secondly, we provide visualizations of visual and textual matching (Appendix - Figure 3, Figure 4), which demonstrates that models with stronger high-frequency capturing abilities can utilize the captured high-frequency information to improve cross-modal attention weight assignment and cross-modal matching. Following your suggestions, we will add failed cases in the revised version and make every effort to provide in-depth analysis to assist other researchers in the community with their future research endeavors.
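To make the augmentation pipeline from A1/A2 concrete, here is a minimal NumPy sketch: FFT both images, keep the reference image's low frequencies, blend in the interference image's high frequencies, and invert. The circular mask radius and the blend ratio `alpha` are illustrative assumptions; the paper's Mix operation (Formula 3) may differ in these details.

```python
import numpy as np

def fda_augment(ref, intf, radius=0.1, alpha=1.0):
    """Frequency-mixing sketch: ref, intf are (H, W, 3) floats in [0, 1]."""
    F_ref = np.fft.fftshift(np.fft.fftn(ref, axes=(0, 1)), axes=(0, 1))
    F_int = np.fft.fftshift(np.fft.fftn(intf, axes=(0, 1)), axes=(0, 1))

    h, w = ref.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    low = (dist <= radius * min(h, w))[..., None]   # circular low-pass mask

    # Keep low frequencies of the reference; mix in interference high frequencies.
    mixed = np.where(low, F_ref, (1 - alpha) * F_ref + alpha * F_int)
    mixed = np.fft.ifftshift(mixed, axes=(0, 1))
    return np.fft.ifftn(mixed, axes=(0, 1)).real.clip(0.0, 1.0)

# Usage with random stand-ins for the reference and interference views:
rng = np.random.default_rng(0)
aug = fda_augment(rng.random((64, 64, 3)), rng.random((64, 64, 3)))
```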
Rebuttal 1: Rebuttal: Dear Chairs and Reviewers, We would like to express our heartfelt gratitude to you for managing this paper and dedicating your valuable time to providing insightful comments. We also extend our sincere appreciation to the reviewers for recognizing the significance of our work. We have taken meticulous care in addressing all the concerns raised by the reviewers. For detailed information regarding the specific concerns, we kindly direct you to the Rebuttal Section. The attached PDF file includes Tables 1-3 showing our experimental results: 1. Table 1 shows the comparison of our FDA method with two classic data augmentation methods: Speaker and PREVALENT. The result shows that our method, **relying on no additional parameters, data, or external models**, achieves an **additional gain of 5.2 on SR** compared to Speaker, which relies on an extra model. Furthermore, our approach can effectively **complement the two data augmentation methods**. 2. Table 2 presents the results of different LSTM-based models enhanced by the FDA method on R2R. The two LSTM-based models show **impressive improvements of 7.3/3.4 on SR**, demonstrating **the effectiveness of our method not only for Transformer-based models but also for LSTM-based models**. 3. Table 3 shows the results of our FDA method on the REVERIE task with the SoTA baseline DUET. Based on our method, the model’s performance on SR, SPL, RGS, and RGSPL metrics shows significant improvement compared to the baseline model, with **significant gains of 2.69/3.54/3.18/3.45**, from 44.65/32.92/28.57/21.01 to 47.34/36.46/31.75/24.46. The comprehensive improvement of the agent’s navigation capabilities in **the R2R, RxR, CVDN, and REVERIE tasks thoroughly validates the effectiveness of our method**. If you have any further questions, we are more than willing to engage in further discussions. Best wishes, Paper12863 Authors Pdf: /pdf/b6b126e83361a26a5354f846bfcda2b9a32399c5.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper investigates the sensitivity of existing state-of-the-art VLN models to high-frequency perturbations in the image features, and identifies that this could be a way to improve the models' performance by making them more robust. It also provides evidence to show that adding high-frequency information explicitly to the models may improve their performance. Taken together, this implies that if the model can be made more sensitive to changes in high-frequency components while being able to discriminate which ones are relevant to the instruction, it could improve the generalization power of the models. The paper then proposes a data augmentation approach which involves replacing high-frequency components of images with those of randomly selected other images, and training the model on this data. The resulting models degrade less in performance when evaluated on perturbed images and improve performance on standard benchmarks compared to the baseline models. ------- Edit: Updated score after clarifications from the rebuttal! Strengths: ## Originality As far as I know, this is a new exploration in terms of data augmentation by perturbing visual features in the frequency domain ## Quality, Clarity, Significance The paper provides an interesting exploration and provides evidence that supports their hypothesis. It is well written and easy to follow. The paper provides experiments based on three state-of-the-art models, and provides a lightweight method to improve models' performance. Weaknesses: ## Clarity * Lacking in implementation details regarding choice of model components such as image encoder, text encoder, etc. ## Discrepancies in numbers reported * The numbers reported in the table for RxR seem to not be consistent with those in the literature (for example, in terms of NDTW, HAMT on RxR val seen achieves 65.3 and 63.1 on val unseen according to their paper [1], as opposed to the 63.1 and 59.94 reported here). * In Table 5 (Comparison to EnvEdit), the numbers reported for HAMT + EnvEdit for SR and SPL are 67.1 and 62.84, as opposed to 68.9 and 64.4 in the EnvEdit paper [2]. * Line 177: Text states that the improvement in GP is by 3.6 points whereas the table mentions 0.36 points. Could the authors please explain these discrepancies? ## Minor issues / typos * Line 156: Path Lengt -> Path Length * Line 158: Goal Process -> Goal Progress * Table 4 caption. Please state more clearly that this evaluation set is non-standard R2R, and is created as part of this paper [1] History Aware Multimodal Transformer for Vision-and-Language Navigation. Shizhe Chen et al. [2] ENVEDIT: Environment Editing for Vision-and-Language Navigation. Jialu Li et al. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * What happens if you just do an interpolation between one of the channels from a random image, instead of doing it in the Fourier domain? This could be additional proof that augmentation by manipulating in the Fourier domain is most effective. * According to Eq 3, the resulting image is likely to have perturbed features for most of the channels - how did you decide the ratio for keeping original vs replacing with the distractor image features? Is this applied for all the views and all the images in the trajectory? * What happens if you train only with the augmented data?
What was the reason you chose the setup in (6)? * Since the edge and corner features are what this paper finds to be specifically useful, and given that the image feature extractors based on CNNs also extract such features in early layers, have the authors considered or tried reusing intermediate features from the image extractor to emphasize these features? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Lacking in implementation details regarding choice of model components such as image encoder, text encoder, etc.** A1: Thank you for your concern. Our model components closely match the baseline models (TD-STP, DUET, HAMT). For R2R, TD-STP, DUET, and HAMT use ViT-B/16 as the image encoder and BERT as the text encoder. For CVDN, HAMT uses ViT-B/16 as the image encoder and BERT as the text encoder. For RxR, HAMT uses CLIP as the image encoder and XLM-R as the text encoder. As our focus is image-level data augmentation, we’ve detailed the image encoders in Supplementary Section D. We’ll add the text encoder information for better reader understanding. **Q2: The numbers reported in the table for RxR seem to not be consistent with those in the literature (for example, in terms of NDTW, HAMT on RxR val seen achieves 65.3 and 63.1 on val unseen according to their paper [1], as opposed to the 63.1 and 59.94 reported here).** A2: Thanks for pointing this out. We believe the reviewer is comparing against the wrong numbers; in essence, our results align with those in paper [1]. The numbers 65.3 and 63.1 in Table 13 of [1] are for val seen and val unseen, respectively. In our paper, the values 63.1 and 59.94 in Table 7 correspond to val unseen and test unseen, respectively. For [1]’s test unseen (nDTW=59.94), we suggest looking at its Table 12. [1] S. Chen, et al. History aware multimodal transformer for vision-and-language navigation. NeurIPS. 2021. **Q3: In Table 5 (Comparison to EnvEdit), the numbers reported for HAMT + EnvEdit for SR and SPL are 67.1 and 62.84, as opposed to 68.9 and 64.4 in the EnvEdit paper [2].** A3: Thanks for your question. We apologize for the unclear expression. ENVEDIT [2] implements a total of three types of data augmentation: Editing Style, Editing Appearance, and Editing Objects. The latter two require a semantic segmentation model to provide object and semantic information in addition to the generative model. The performance of 68.9 and 64.4 reported in [2] actually results from the ensemble of the three models trained on the three types of augmented ENVEDIT data, respectively. Unlike ENVEDIT, our method operates without external models. To ensure fairness, we compare our approach with Editing Style, which uses the fewest external models. [2] reports SR and SPL as 67.3 and 62.6, respectively. Our reimplementation confirms these as 67.31 and 62.84, as seen in our Table 5. Our current statement (L205: “a strong data augmentation method ENVEDIT in VLN, which aims to increase the diversity of the environment using GANs to alter the environment style.") is not entirely clear, and we will enhance its clarity in our revisions. [2] J. Li, et al. Envedit: Environment editing for vision-and-language navigation. CVPR. 2022. **Q4: Line 177: Text states that the improvement in GP is by 3.6 points whereas the table mentions 0.36 points.** A4: We sincerely apologize for the oversight and greatly appreciate your attention to detail. The accurate value is indeed 0.36. We will rectify this error in the revised version. **Q5: Minor issues / typos.** A5: Thank you for your correction. We will further correct typos and provide clearer clarifications in the revised version as suggested. **Q6: What happens if you just do an interpolation between one of the channels from a random image, instead of doing it in the Fourier domain? This could be additional proof that augmentation by manipulating in the Fourier domain is most effective.** A6: Thanks for your suggestion.
The interpolation method achieves SR/SPL of 70.16/62.79 on R2R val unseen, notably lower than FDA's 71.78/64.00. This indicates that Fourier-domain manipulation is superior, since it can fully leverage high-frequency information for augmentation, which interpolation in the spatial domain cannot achieve. **Q7: According to Eq 3, the resulting image is likely to have perturbed features for most of the channels - how did you decide the ratio for keeping original vs replacing with the distractor image features. Is this applied for all the views and all the images in the trajectory?** A7: Thanks for your question. We empirically selected the current ratio, which has shown quite remarkable results. This ratio is used consistently for all views and images in the trajectory. **Q8: What happens if you train only with the augmented data?** A8: Thank you for bringing up this concern. The model trained only on augmented data achieves SR 67.01 and SPL 57.86 on R2R val unseen, performing worse than the model trained with both original and augmented data (SR 71.78, SPL 64.00). Relying exclusively on augmented data may deprive the model of the proper visual cues needed to emphasize crucial high-frequency components, thus affecting cross-modal alignment. **Q9: What was the reason you chose the setup in (6)?** A9: Empirically, the approach in setting (6) yields better performance (SR 71.78) than random selection of original and augmented data (SR 71.65). Thus, we opted for setting (6). **Q10: Since the edge and corner features are what this paper finds to be specifically useful, and given that the image feature extractors based on CNNs also extract such features in early layers, have the authors considered or tried reusing intermediate features from the image extractor to emphasize these features?** A10: Thanks for your interesting question. We conducted a preliminary experiment following a structure similar to Figure 3, using the layer1 features of CLIP-ResNet50. It showed an improvement over the baseline (SR 69.65), reaching SR 70.11. This underscores the significance of high-frequency information in cross-modal navigation. In contrast, our method achieves far better performance (SR 71.78) without additional parameters or model reliance.
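To make the Fourier-domain manipulation discussed in this rebuttal concrete, here is a minimal sketch of this style of augmentation: interpolate the high-frequency amplitude spectrum of an observation toward that of a distractor image while keeping the original phase. The function name, the `ratio` and `radius` parameters, and the box-shaped low-frequency mask are illustrative assumptions, not the paper's exact Eq. 3.

```python
import numpy as np

def fourier_high_freq_mix(img, distractor, ratio=0.5, radius=0.1):
    """Mix the high-frequency *amplitude* of `img` toward `distractor`,
    keeping `img`'s phase (hyperparameters here are illustrative)."""
    f_img = np.fft.fftshift(np.fft.fft2(img, axes=(0, 1)), axes=(0, 1))
    f_dis = np.fft.fftshift(np.fft.fft2(distractor, axes=(0, 1)), axes=(0, 1))
    amp, phase = np.abs(f_img), np.angle(f_img)
    amp_dis = np.abs(f_dis)

    h, w = img.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    # Box-shaped low-frequency region around the (shifted) spectrum center;
    # everything outside it counts as "high frequency" and gets mixed.
    low = (np.abs(yy - h // 2) < radius * h) & (np.abs(xx - w // 2) < radius * w)
    mix = ~low
    if img.ndim == 3:
        mix = mix[..., None]

    amp_new = np.where(mix, (1 - ratio) * amp + ratio * amp_dis, amp)
    spec = np.fft.ifftshift(amp_new * np.exp(1j * phase), axes=(0, 1))
    return np.real(np.fft.ifft2(spec, axes=(0, 1)))
```

Keeping the phase while mixing only high-frequency amplitudes perturbs texture-like detail while largely preserving scene layout, which matches the rebuttal's point that the high-frequency components are what the augmentation targets.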
What Planning Problems Can A Relational Neural Network Solve?
Accept (spotlight)
Summary: This paper focuses on formalizing what class of planning problems can be solved by relational neural networks. It aims to bridge the gap between the expressivity of relational neural networks and the complexity of planning problems. To do this, a serialized goal regression rule and search algorithm are defined. Three classes of planning problems are defined based on serialized goal regression search, and it is shown that these classes can provide network parameters for a relational neural network. Experiments with two planning domains show the empirical implications of the results. Strengths: * The paper provides a novel approach to quantifying the complexity of planning problems * As far as I am aware, this is the first paper to analyze what kind of planning problems can be solved by neural networks. * The formal analysis can aid in better evaluations of the neural approaches proposed for planning problems. Weaknesses: * Some of the notations are not clear (see questions for details). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: **Major** 1. Algorithm 1 1. Plain backward search should satisfy two conditions for the selection of an action: $g \cap eff_{+}(a) \neq \emptyset$ and $g \cap eff_{-}(a) = \emptyset$ (see the sketch after this review). The second condition is captured in the Algorithm, but the first is not. 1. Actions should be mentioned as input in Algorithm 1. 1. `goal stack` is not defined 1. Algorithm 2 1. $R_o$ should be part of the input 1. `goal stack` is not defined 1. `goal_set` is not defined 1. Is $\pi$ a plan? If so, $\pi[-1]$ is the last action. So is $s_i$ an intermediate state or an action? 1. If any of the $p_i$ is available in the goal stack, then that branch is not explored in the backward search? What is the benefit of doing that? 1. Line 107 mentions that there will be a goal regression rule for any possible permutation of pre(a), but as explained in Line 100, not all of these are feasible. Will the regression set contain infeasible rules as well? 1. How are the generalized goal regression rules identified? For example, consider the rule in Line 139. Can such a rule be identified automatically? 1. Were the serialized goal regression rules used in defining the relational neural network? Can explicit knowledge of such rules enable efficient learning? 1. Which relational network was used in the experiments? NLM, ReNN, or NLRL? 1. This is not a question, just a discussion point. Traditional relational learning approaches learn rules one predicate at a time. The serialized goal regression defines a class of planning problems that can be solved by satisfying one predicate at a time. So can this aid in understanding what class of planning problems can be solved by traditional relational learning approaches? 1. Can the authors provide the classes for common IPC domains like Logistics, Gripper, Rover, Elevator, etc.? **Minor** 9. The regression rule set notation has discrepancies. In Algorithm 2, $R_0$ is used, whereas in Line 109 $R^0$ is defined. 9. There is a discrepancy in the function name. Line 111 uses S-GRS but Algorithm 2 uses GRS. 9. Is $P_0$ defined in Line 61 the same as $P_{s_0}$? The set of predicates in the initial state does not define the set of all atoms. 9. In Definition 3.1, is $\bar{a}$ a trajectory or an action? For consistency, I recommend using the symbol $a$ for actions alone. 9. The term 'generalized goal regression rule' is not formally introduced. In Line 141 such a rule is referred to as the reduced goal regression rule. For consistency, I recommend sticking to a single term.
9. I know the space is limited, but I would still recommend moving the connection to plan width from the appendix to the main paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
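As a concrete reading of the reviewer's first point, the standard backward-search relevance test selects an action only if it adds at least one goal atom and deletes none. A minimal sketch, with an assumed `Action` record rather than the paper's actual data structures:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    add_effects: frozenset
    del_effects: frozenset

def relevant(action: Action, goal: frozenset) -> bool:
    # Condition 1 (the one the reviewer says is missing): the action
    # achieves part of the goal, i.e., g intersect eff+(a) is non-empty.
    # Condition 2 (captured in the algorithm): it threatens no goal atom,
    # i.e., g intersect eff-(a) is empty.
    return bool(goal & action.add_effects) and not (goal & action.del_effects)
```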
Rebuttal 1: Rebuttal: Thank you for your helpful comments! **Q1:** Notations in Algorithms 1 and 2. **A1:** Thank you for all the suggestions. We will update our notations. For Algorithm 1, we will include $\mathcal{A}$ as input, define `goal_set` as the set of goals to achieve during the current backward search process (initialized as the goal of the planning problem), and define the goal stack as the set of `goal_set` arguments currently in the depth-first search stack. We intentionally omitted the condition on $\textit{eff}_+$ in the algorithm to save space, because this does not affect the correctness of the algorithm (only its efficiency). We will similarly update Algorithm 2 to define the goal stack and include $R$ as the input. **Q2:** Definition of $\pi$. **A2:** Thanks for catching that! $\pi$ was intended to include only a sequence of actions. Therefore, we will rewrite the line $s_i = \pi_i[-1]$ as $s_i = \mathcal{T}(s_{i-1}, \pi_i)$ (i.e., simulate all actions in $\pi_i$ based on $s_{i-1}$). **Q3:** Not considering $p_i$ if $p_i$ is already in the goal stack. **A3:** In short, this design is to avoid a potential infinite loop. First, adding this line will not hurt the accuracy or optimality of the search algorithm, because the search algorithm seeks optimal trajectories to achieve a certain goal. If the subgoal in the current regression rule is a subgoal that we wish to achieve at an earlier search depth, then using this regression rule will not yield an optimal trajectory for the earlier search depth's subgoal. If we do not add this condition, then there will be infinite loops: consider two regression rules for two predicates $p$ and $q$: `p <- q || a_1`, `q <- p || a_2` (see the sketch after this rebuttal). **Q4:** Infeasible rules in the regression set. **A4:** Yes, the search algorithm does allow infeasible regression rules in $R$. This is because if a goal regression rule is infeasible (e.g., after achieving one of the preconditions, we cannot extend the current trajectory to achieve another precondition), the set `possible_t` will just be empty. Since the search algorithm enumerates all candidate rules to use, the search algorithm will return an answer as long as there exists a subset of feasible rules in the given rule set that can yield a successful plan. **Q5:** Identifying generalized goal regression rules. **A5:** In this paper, we did not discuss any particular algorithms to identify generalized goal regression rules. In general, in order to prove the correctness of a generalized goal regression rule, we need to verify that Definition 3.2 holds for the generalized rule. This is generally hard, and we have done this in a case-by-case manner (e.g., proving that a particular generalized rule is applicable in BlocksWorld). **Q6:** Were the serialized goal regression rules used in defining the relational neural network? Can explicit knowledge of such rules enable efficient learning? **A6:** Thank you for this suggestion! Currently, the goal regression rules are not used in defining the network (we use regression rules to reason about the network size only). We think there are at least two interesting directions given your suggested idea. One would be to explicitly compile some set of given serialized goal regression rules into a network, either by manipulating network weights, or by introducing additional supervision.
Another possible direction might be one where we know the basic form of the serialized regression rules, but don't know the value of some of the conditions: in such a case, we could encode those rules in a network leaving the unknown conditions "free", in the sense of representing them as an MLP, and then backprop to train just the weights in those MLPs, which we would expect to be substantially more sample-efficient than learning the whole policy. **Q7:** Which relational network was used in the experiments? NLM, ReNN, or NLRL? **A7:** We were using NLMs for all the experiments. **Q8:** Connection to relational learning approaches. **A8:** Thanks for the suggestion! We think this is a very interesting idea. We think there are two possible ideas in leveraging traditional relational learning in learning and planning. First, we can learn a PDDL-like domain definition using traditional relational learning approaches; then, we can find ways to compile the learned definitions into a RelNN-like policy. Alternatively, one can consider learning a policy (such as all the regression rules in a domain). The difficulty with this approach is that in most domains, learning a single rule usually does not allow you to immediately solve novel problems. By contrast, multiple rules must be learned and used together in order to compute a plan. In this case, even though each regression rule achieves only a single atom, we do not have direct supervision for them (but only learning signals once a set of rules has been learned). **Q9:** Classes for IPC planning problems. **A9:** Thanks for the great suggestion! We have included more analysis of the classic problems you suggested. Please refer to our general response for a detailed discussion. **Q10:** Notation discrepancies. **A10:** Thank you for catching these! We will fix the notation discrepancies for $R_0$, function names, and the definition for trajectories. **Q11:** Other clarifications and structure. **A11:** We appreciate all the suggestions. We will clarify that the set of predicates in the initial state does not define the set of all atoms, define the generalized goal regression rule before Definition 3.2, and move the connections to plan width from the appendix back to the main paper. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I have read the rebuttal and would like to keep my score the same. Good job.
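A minimal sketch of the loop guard described in A3: a depth-first serialized goal regression search that refuses to regress into a goal already on the goal stack. The rule format (`rules[goal]` mapping to `(subgoals, action)` pairs) and the simplification that a sub-plan exactly achieves its subgoal and deletes nothing are illustrative assumptions, not the paper's Algorithm 2.

```python
def grs(state, goal, rules, goal_stack=frozenset()):
    """Depth-first serialized goal regression search (illustrative)."""
    if goal in state:
        return []
    if goal in goal_stack:
        return None  # loop guard from A3: we are already pursuing this goal
    for subgoals, action in rules.get(goal, []):
        plan, s, ok = [], set(state), True
        for g in subgoals:  # serialized: achieve one precondition at a time
            sub = grs(frozenset(s), g, rules, goal_stack | {goal})
            if sub is None:
                ok = False
                break
            plan += sub
            s.add(g)  # simplification: the sub-plan achieves g, deletes nothing
        if ok:
            return plan + [action]
    return None

# Mutually recursive rules `p <- q || a_1` and `q <- p || a_2` terminate
# (returning None) instead of looping forever, thanks to the goal stack.
rules = {"p": [(["q"], "a_1")], "q": [(["p"], "a_2")]}
print(grs(frozenset(), "p", rules))  # -> None
```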
Summary: The paper explores the application of relational neural networks, seen as circuits, in representing policies for discrete planning problems. Specifically, it introduces a circuit complexity analysis that categorizes planning problems into three classes based on how circuit width and depth grow, by establishing connections with serialized goal regression search (S-GRS), which is a method for generating plans by working backwards from goals. In essence, the main contribution of the paper is that it combines previously introduced planning-problem complexity measures (width, depth) with circuit/policy complexity analysis to analyze the resource/representation requirements of policies. This analysis helps to better understand the capabilities of relational neural networks in solving planning problems, which in turn helps in designing them for policy learning. By analyzing goal-conditioned policies for such planning problems, upper bounds on the circuit complexity of such policies are provided as a function of the problem's regression width. The experimental results support the theoretical formulations and demonstrate the practical applicability of the proposed approach for relatively small/simple discrete planning problems. Strengths: - Originality: The paper proposes a new perspective for circuit complexity analysis of relational neural networks (GNNs, transformers), and thus of what they can compute (i.e., their limits). - Quality: The proposed method is supported by theoretical analysis and proofs. A small set of experiments was conducted to validate the approach. - Clarity: The order of presentation is mostly nice, even though the terminology used throughout the paper sometimes makes it hard to follow. - Significance: Helpful for constructing relational neural networks for planning problems, and the proposed formulation can also help investigate solutions to planning problems in future work in this area. If the problems analyzed had a broader scope, it would have increased the overall impact of the paper. Weaknesses: - Lack of a convincing set of quantitative results: The proposed methods might be tested on different networks and problems (only two different problems and networks are presented). - A broader discussion of the applicability of the proposed analysis and formulation to different planning problems, especially to real-world (e.g., robotics) scenarios, would help convince the reader of the impact of the paper. - The related work section might be placed earlier (e.g., after the introduction) to better place this work w.r.t. the literature. - The supplementary material helps clarify some points; due to the page limit it is hard to integrate this information, but for clarity there might be more references to it within the main text. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What are the main bottlenecks/issues in applying this formulation to continuous planning problems, e.g., such as the ones in robotics? - Would this analysis and construction still be useful/practical for larger/more complex problems, which require larger networks? If yes, how? - The problems investigated are solvable by classical search/planning methods, so what's the advantage of using a RelNN that was constructed based on the provided analysis? - Serialization of goal regression rules seems to be highly related to mathematical optimization / dynamic programming. Can you elaborate on this connection? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: - Limitations of the work have been barely touched upon in the paper. - The discussion on applying this formulation to continuous planning problems, e.g., in robotics, can be extended. - In general, an evaluation and discussion of the types of problems that the proposed formulation is not applicable to would be nice. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful comments! **Q1:** Quantitative results. **A1:** Our main objective in this paper is to formally clarify the nuances of when relational neural network circuits can realize a policy for specific planning problems, and to understand the size of the circuits necessary to achieve this. We begin by exploring discrete planning problems. We hope our preliminary quantitative findings do provide some initial insights into practical algorithms. In our answer to your next question, we briefly discuss additional challenges that we might encounter when we shift towards constructing policy architectures and learning within more realistic frameworks, such as robotics. **Q2:** Analysis and construction for larger/more complex problems. **A2:** The theorems and constructions are general, so they apply to problems of any size. However, to build practical systems that plan in very large domains, additional methods for abstracting and factoring the planning problem, e.g., via hierarchical decomposition, will be crucial for efficient solutions. Neural network architecture: prior work [1] has demonstrated how different aggregation functions in graph neural networks carry different inductive biases. An intuitive example is that max-pooling over node features can act as "existential" quantification, whereas mean-pooling can act as majority voting (see the sketch after this rebuttal). These considerations become crucial when tailoring solutions to specific problems. Of course, as the necessary circuits get more complex, the NN training may become more difficult. Also, more training data might be needed to provide something close to a "complete presentation" of the problem so that the RelNN can learn all regression rules in the domain. We'd like to emphasize that our work is an attempt to bring us a step closer to a comprehensive understanding of learning and planning with object-centric representations. We hope it will pave the way for future explorations. [1] Keyulu Xu et al. How Powerful are Graph Neural Networks? In ICLR, 2019. **Q3:** Applicability in real-world scenarios. **A3:** Thanks for bringing up this question. We will extend the discussion in future revisions. The proposed analysis and formulation do generalize directly to "real-world" settings (e.g., object features should be extracted from visual perception inputs) if we assume the input state can be represented in an object-centric and discrete manner (a.k.a. as logic "predicates"). For example, [2] has implemented relational neural network architectures for BlocksWorld from visual inputs. It is challenging to generalize directly to scenarios where state variables and action parameters are truly continuous (e.g., in robotic manipulation tasks). The intuition is that, if actions have continuous parameters, there will be an infinite number of possible actions available at each state (depending on the choice of grasping poses, object placement poses, etc.). Therefore, the depth-first-search-based algorithm used in this paper will not work. Consequently, our policy compilation strategy will not work. Robotic manipulation problems can generally be more computationally complex than the discrete problems we studied because they can certainly contain those discrete problems as a sub-problem (e.g., a "robotic" blocks world). Meanwhile, any robotic planning problem that involves moving an articulated object with many links among many polygonal obstacles is PSPACE-complete [3].
Therefore, there are no efficient exact algorithms for these problems; many motion planning algorithms, for example, rely on sampling and discretization. However, how to better produce samples/discretize the space is generally hard. We believe that it is an interesting question to investigate similar "width"-like definitions in sampling- and discretization-based continuous planning algorithms. [2] Richard Li, et al. Towards Practical Multi-Object Manipulation Using Relational Reinforcement Learning. In ICRA, 2020. [3] William Vega-Brown and Nicholas Roy. Task and Motion Planning is PSPACE-Complete. In AAAI, 2020. **Q4:** Paper organization, supplementary materials, and links. **A4:** Thank you for the suggestions. If the paper gets accepted, we will use the additional page in the NeurIPS camera-ready version to include more intuitions, and we will definitely add links. **Q5:** The advantage of using a RelNN. **A5:** Thanks for raising this point. There are two major advantages of learning a RelNN. First, when the domain is known (in STRIPS), the RelNN will have computation time O(depth of the circuit), due to its parallel execution nature. This is generally more efficient than running a classical planning algorithm. Second, when the domain is unknown to the agent, a RelNN supports learning policies from interactions or demonstrations. For example, consider an agent that is learning to play Minecraft: we know that some crafting rules (e.g., in order to build an axe, we need wood planks and iron ore) have certain forms and thus can be serialized and have low width, but we still need to perform exploration in the environment to learn those rules. **Q6:** Serialization in mathematical optimization and dynamic programming. **A6:** Serialization is strongly related to hierarchical decomposition, in which we assume we can solve some subpart of a problem without worrying about the rest of the problem. The connection to dynamic programming (in which we reuse solutions to subproblems in order to solve a bigger problem) is less clear to us, but it definitely bears additional thought! **Q7:** Applicability of the proposed formulation. **A7:** In this paper, we address the class of planning problems that can be formulated in atomic STRIPS. That is really only a small set of all interesting planning problems, and it does not address continuous state/action spaces, stochasticity, partial observability, executing actions in parallel, etc. --- Rebuttal Comment 1.1: Title: Thanks Comment: I appreciate the sincere and detailed explanations by the authors. Even though practicality concerns are addressed to some degree, it is not solid enough for me to raise my score at the moment.
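A tiny sketch of the inductive-bias point made in A2 above, assuming Boolean predicate values stored in a NumPy array: max-pooling over one argument of a relation realizes existential quantification, min-pooling its universal dual, and mean-pooling followed by a threshold a majority vote.

```python
import numpy as np

# p[i, j] = truth value of a binary predicate p(x_i, x_j).
p = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [1., 0., 0.]])

exists_y = p.max(axis=1)                         # "exists y. p(x, y)" -> [1., 0., 1.]
forall_y = p.min(axis=1)                         # "forall y. p(x, y)" -> [0., 0., 0.]
majority = (p.mean(axis=1) > 0.5).astype(float)  # majority vote over y -> [0., 0., 0.]
```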
Summary: This paper discusses the expressivity of classical relational architectures and its implications for solving planning problems. In particular, the authors identify characteristics of the environments that make planning hard. They prove a theorem that bounds the size of a network sufficient to solve the planning problem perfectly, under some assumptions on the underlying domain. Finally, they show experimental results in two simple environments. Strengths: The paper is very clear and easy to follow. It's very convenient that you provide intuitive explanations of the derived theorems and explain them in an example environment. The motivation is clearly stated in the introduction. The problem is interesting and deserves a study. The presented results are novel and provide a deeper understanding of the capabilities of widely used architectures. Weaknesses: In my opinion, the greatest weakness of this paper is an insufficient discussion of the connection between the formulated theory and practical applications. I understand that the contribution of the work is mostly theoretical. Anyway, it would be nice to show how the presented theory should influence my design choices. For instance, I suggest providing a list of environments with a short explanation of their properties (finite/unbounded depth, serializability, width, theoretical bound values, implications, etc.), even without experiments, even in the appendix. Technical Quality: 3 good Clarity: 3 good Questions for Authors: l.99 It's very convenient that you explain the discussed properties with an example. However, it would be even better if you briefly explained the meaning of actions. Although after reading I know what _clear_(A) means, explaining it at the beginning would be helpful. Also, generally, there is no precise description (even a brief one) of the BlocksWorld environment in the paper. l.114 Please refer to a specific appendix. l.124 I'm not sure if the provided intuition is correct. As far as I understand, it should be "A rule is optimally serializable if any optimal trajectory extended with an optimal continuation forms an optimal plan." That is, the intuition described in the paper suggests just the existence of $a_2$. Is that correct? l.130 Does Theorem 3.1 have a proof? Please provide a reference. l.317 It would be nice to show how the presented theory should influence design choices when using relational networks. For instance, I suggest providing a list of environments with a short explanation of their properties (finite/unbounded depth, serializability, width, theoretical bound values, implications, etc.), even without experiments, even in the appendix. The discussion provided in l.305-317 is a good starting point. The discussion in the appendix (again, no reference) is also nice, but I believe it can be extended, notably with more examples. This way the theory you provide could be much more impactful. l.324 I like that you provide experiments both for theoretically bounded and unbounded depth. However, it's a shame that unbounded depth turns out to be no real bottleneck. In general, it's good to show such an environment, but here I'd like to see an experiment in an environment with truly unbounded depth and witness the success rate growing up to a practical bound. Because BlocksWorld actually requires little complexity, there is no experiment that links your theory with practice. How about BlocksWorld with a limited number of stacks?
But it may fall into the category of "resource constraints"; is that a problem? Actually, what I would like here is an environment with a _small_ but not _extremely small_ circuit. l.334 What does it mean that a model has breadth 2? Does it mean that a layer has 2 neurons? I suppose not; please clarify. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations are briefly discussed in the conclusion. Negative societal impact is not an issue with this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful comments! **Q1:** Connections to practical problems. **A1:** Thank you for the suggestion. We have included a new discussion of 6 popular problems that have been widely used in the AI planning community and discuss their circuit depth and breadth. We summarize their regression width, circuit depth, and circuit breadth in the following table. Please refer to our general response for more discussion (e.g., finding optimal plans vs. finding satisficing plans). These results can indeed be used in choosing neural network hyperparameters (e.g., the depth). We hope these discretized/simplified versions of practical problems can give us insight into their relative difficulty. | Problem | Breadth | Regression Width | Depth | | -------- | -------- | ---------------- | -------- | | BlocksWorld | Constant | 1 | Unbounded | | Logistics | Constant | 0 | Unbounded | | Gripper | Constant | 0 | Constant | | Rover | Constant | 0 | Unbounded | | Elevator | Constant | 0 | Unbounded | | Sokoban | Unbounded | N/A (not serializable) | Unbounded | **Q2:** Clarifications on the BlocksWorld environment and appendix references. **A2:** Thanks for the suggestion; we will include descriptions of the state representations in the BlocksWorld example, and put exact section numbers when we refer to appendices. **Q3:** L124, intuition about the goal regression rule. **A3:** Thanks for catching that! Your intuition is correct and we will update the manuscript. **Q4:** Proof for Theorem 3.1. **A4:** Yes, the proof is in Appendix A.1. We will add a reference. **Q5:** Experiments showing the importance of truly unbounded depth. **A5:** Thanks for the suggestion! We totally agree that there should be an additional environment better showing the importance of unbounded depth. In addition to the "no resource constraint" feature you mentioned, there is another aspect of BlocksWorld that makes finding a satisficing plan easy: the non-existence of deadends (as long as you keep moving objects down to the table, you will eventually make the target object clear). An environment that does not have this structure is Logistics (or any graph path problem in a non-strongly-connected graph). In particular, we constructed a simple Logistics domain with only cities and trucks (no airplanes), and we construct the graph so that it is not strongly connected (by first sampling a directed tree and then adding forward-connecting edges). Similar to other models trained in the environment, we train the policy network on problems with fewer than n=10 cities and test it on n=30 and n=50. The results are summarized below (with f(n) = n/5 + 1). | Model | n = 10 | n = 30 | n = 50 | | -------- | -------- | -------- | -------- | | RelNN(3, 2) | 1.0 | 0.30 | 0.23 | | RelNN(f(n), 2) | 1.0 | 1.0 | 1.0 | The results show that having an unbounded-depth circuit is important for learning a generalizable policy in this domain. We will add code for reproducing this experiment to the released code. **Q6:** L334 Breadth. **A6:** Breadth means the maximum arity in the relational representation. Breadth 2 means that in the RelNN, there is a vector representation for each pair of objects. For the "number of neurons", we use 64 for all experiments, which should be sufficiently large for the particular domains we are considering. --- Rebuttal Comment 1.1: Comment: Thank you for the response. Including the clarifications and the presented discussion of additional environments makes the paper stronger.
I acknowledge the proposed changes by increasing my rating.
Summary: The paper presents a mapping from a tractable but incomplete algorithm for classical planning into circuits that can be represented by a class of neural networks called relational neural networks (RelNNs), including transformer architectures and graph neural networks. The paper is divided into two parts. The first part presents an extension of Chen and Gomez [CG2007], which introduced a notion of width, in turn related to the more practical notion of width proposed by Lipovetzky and Geffner [LG2012]. The change is about using regression search (not a big change, given that it is a theoretical algorithm) while keeping the same idea of tracking a reduced context. The second part considers how to map policies based on those algorithms (given a set of regression rules) into a RelNN. The paper discusses the size of the required network in different cases. The best scenario is when the size of the NN doesn't grow linearly with the size of the problems. Using the ideas of the first part, the paper shows how that can be expressed as a RelNN. Preliminary empirical results over a set of problems suggest an MLP is learning a policy that matches the ideas of the paper. Strengths: - Innovative combination of tractable planning with the expressivity of some NNs. Serialization is a suggestive idea for a tractable fragment of planning. - The idea of a 'regression rule selector' might be hard to use in an effective symbolic algorithm. Framing it as learning is an innovative idea. Weaknesses: - The paper offers little insight into how depth and breadth behave in different problems. In particular, it's not clear how the complexity of the rules is related to that. Perhaps a better title for the current manuscript would be "Relational Neural Networks Can Solve Some Planning Problems", as the paper does not help to identify which problems can be solved except by trial and error. That might be a good contribution, but then the tone of the paper might be different. - The paper doesn't clarify whether the goal of the approach is to solve a single problem or problems from a domain. - For instance, the width of [LG2012] refers to problems, but then they discuss width across instances of a problem. - This might be easily fixed in the next discussion. - Experiments cannot be replicated as we lack details. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: - What's the goal stack in the algorithms? - How are the objects encoded in the RelNN? - What's the difference/similarity in the encoding of problems where the policies have bounded vs. unbounded depth? - Is the encoding the same for two problems where we interchange the names of two objects? For instance, rename A as B and B as A in a blocks world problem. - Might such renaming have any effect on the experiments? - Are MLPs per se part of the class of relational NNs? - How are the problems and outputs encoded in the MLP? - The proof of 4.3 says that rules include $\rho$, any kind of FOL. Lemma 4.1 refers to FOL but only restricts the number of variables. - How does D change with the complexity of the formulas? - In Thm 4.3, why does the breadth only grow with max(B_r, max arity of predicates)? - Perhaps it's because the preconditions are evaluated one at a time, but the end of the proof says "\beta is required to store the input state". A state includes more than one predicate. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 2 fair Limitations: This is a very interesting line of work combining results from different areas. The theoretical arguments focus on the existence of a bound; there is little formal study or intuition of how the complexity grows across different problems. For instance, if conditions are free FOL, for a fixed number of variables and objects, then the resulting circuits can have many different sizes. In general, many intuitions are hidden in the proof ideas and not in the main text. That's a possibly reasonable decision, but sometimes the key intuitions are hidden there and not connected with the overall picture. Lacking intuitions, the empirical study offers some light in this direction, but a reader at NeurIPS might need further details on how this was done. Finally, I think this would be a better paper with a clearer distinction between problems and domains. Previous work using the notion of width tends to discuss it across a class of problems. In that setting, providing or obtaining a set of rules is clearer than for solving a single problem. The set of rules might be very different for subsets of problems (for instance, blocks world instances where all the blocks start on the table). That framing would allow establishing connections with work on learning for generalized planning. For that reason and others, this is a very relevant reference: Learning Sketches for Decomposing Planning Problems into Subproblems of Bounded Width. Dominik Drexler, Jendrik Seipp, Hector Geffner. ICAPS 2022. https://arxiv.org/abs/2203.14852 All in all, I learned a lot from the paper. That's a positive sign. Minor point(s): - Why can serialization be incomplete? Perhaps explaining this might be useful: https://en.wikipedia.org/wiki/Sussman_anomaly Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful comments! **Q1:** Replication of experiments. **A1:** Thanks for the suggestion. We will extend Appendix C to include more details. Also, we have released an anonymized version of the code with the supplement (since the NeurIPS rebuttal does not allow URLs, please check our Appendix D for the link). **Q2:** What's the goal stack in the algorithms? **A2:** Recall that both BWD and GRS are depth-first search algorithms. Here, "goal stack" means all the `goal` (or `goal_set`) arguments in the function calling stack. We will clarify that in the revision. **Q3:** How are the objects encoded in the RelNN? **A3:** In RelNNs (e.g., graph neural networks), the state is represented as a graph (objects are nodes and relations are edges; possibly there will be hyperedges for multi-ary relations). Therefore, the objects are encoded as node indices. For example, the input contains nullary features (a single vector for the entire environment), unary features (a vector for each object), binary features (a vector for each pair of objects), etc. They are usually represented as tensors (see the sketch after this rebuttal). **Q4:** What's the difference/similarity in the encoding of problems where the policy has bounded vs. unbounded depth? **A4:** We use the same encoding style for all problems (the graph-like relationship). Therefore, whether different problems have bounded or unbounded depth primarily depends on the available goal regression rules, how these rules can be serialized, and the width of the rules. **Q5:** Will changing object names affect results? **A5:** No. In all our experiments, objects do not have names / identity information exposed to the neural network. This is related to the "permutation invariance" of RelNNs, which means that any permutation of the input object set will not change the model performance. **Q6:** Is an MLP a RelNN? **A6:** Yes, an MLP is a "degenerate" RelNN with breadth 0. That is, the entire state is represented as a single vector. **Q7:** How are the problems and outputs encoded in the MLP? **A7:** We assume you meant to ask how problems and outputs are encoded in the relational neural networks. Here, the inputs to the neural network are tensor representations of the current state (e.g., $N \times N$ matrices for a predicate of arity 2, where $N$ is the number of objects). Since we assume that the goal contains only one single atom, goals can be encoded as tensors too. For example, suppose that our goal is on(obj1, obj2); then it can be encoded as a matrix where only the entry corresponding to the pair (obj1, obj2) is 1; all other entries are 0. **Q8:** Depth bounds in Thm 4.3. **A8:** Thank you for pointing this out! We plan to add more intuition for both Lemma 4.1 and the later theorems. In short, there is a constructive proof for using RelNNs to realize FOL formulas. The breadth B is the number of variables in the FOL formula; the depth D is the number of "nested quantifiers". For example, the formula `exists x. forall y. p(x, y)` needs two layers while `exists x. forall y. forall z. p(x, y, z)` needs 3. We will also clarify that the FOL formulas in $\rho$ should be realizable by a RelNN with constant depth and breadth. **Q9:** Breadth bounds in Thm 4.3. **A9:** The intuition is that, if there are multiple preconditions for different rules, they can be evaluated "in parallel" and do not lead to a larger breadth than individual rules: recall that breadth is actually the "arity" of edges in the graph.
If we have many rules, we can increase the "hidden dimension" of each edge representation. Similarly, for state encoding, if we have multiple (say, 8) predicates (say, of arity 3), we can represent them by a tensor of size $N \times N \times N \times 8$, where $N$ is the number of objects in the state. Here, we say the breadth of this representation is 3, because it needs $O(N^3)$ memory to encode. **Q10:** Limitations on missing intuitions behind proofs and how complexity grows across different environments. **A10:** Thank you for suggesting these. We will try our best to use the additional page in the NeurIPS camera-ready version, if the paper gets accepted, to include more intuitions behind the theorems and definitions. Based on your suggestion, we also added ... **Q11:** Distinction between problems and domains. **A11:** Thank you for the great suggestion! We will definitely incorporate this suggestion in our future revision. Concretely, we plan to add a definition for "problem classes" composed of problems that share the same domain (e.g., transition model) but vary in $s_0$ and goals. Then, we will discuss the width of problem classes and policy complexities for problem classes. **Q12:** References. **A12:** Thank you for the references! We will cite the Drexler et al. paper on learning sketches: their idea of a sketch is definitely related to our goal regression rules. The key differences are: their sketches can be viewed as high-level, feed-forward policies; by contrast, with our conditional rules, there might be multiple rules applicable at the current state, so search is still required. We will also use the Sussman anomaly example to illustrate the idea of incompleteness. --- Rebuttal Comment 1.1: Title: thank you Comment: Thank you for the thoughtful response. I'm satisfied. I think the analysis per domain can be quite insightful. In your response, I see the mention of parameters of the structure of the formulas. In my opinion, this suggests that the encoding, decoding, and parameters should be made clear as the paper progresses. "Encoding graph" is sometimes abused in the literature, leaving the reader with no idea about what affects the size of the network and the expressivity. --- Reply to Comment 1.1.1: Title: Thank you for your feedback Comment: Thank you for your response and constructive suggestions! We will incorporate a clearer definition of the representation of the input and the output, especially for multi-ary relations. We will also add details for the computation in RelNNs. Thank you again for your valuable feedback.
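A minimal sketch of the tensor encoding described in A3, A7, and A9 above, with made-up object names: an arity-2 predicate becomes an N x N matrix, a single-atom goal becomes a one-hot matrix over object pairs, and several arity-3 predicates share one N x N x N x C tensor (breadth 3, i.e., O(N^3) memory).

```python
import numpy as np

objs = ["obj1", "obj2", "obj3", "obj4"]  # hypothetical object names
N = len(objs)
idx = {o: i for i, o in enumerate(objs)}

# An arity-2 predicate such as on(x, y) is an N x N matrix.
on = np.zeros((N, N))
on[idx["obj1"], idx["obj2"]] = 1.0  # on(obj1, obj2) holds in the state

# A single-atom goal on(obj1, obj2) is a one-hot N x N matrix (cf. A7).
goal = np.zeros((N, N))
goal[idx["obj1"], idx["obj2"]] = 1.0

# Eight arity-3 predicates share one N x N x N x 8 tensor (cf. A9);
# the representation has breadth 3 because it needs O(N^3) memory.
ternary = np.zeros((N, N, N, 8))
```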
Rebuttal 1: Rebuttal: We thank all reviewers for their thoughtful and constructive comments. Some of the reviewers have suggested applying the theorems and analyses developed in this paper to a broader set of problems. Here, we list 6 popular problems that have been widely used in the AI planning community and discuss their circuit depth and breadth. We summarize our results in the following table. | Problem | Breadth | Regression Width | Depth | | -------- | -------- | ---------------- | -------- | | BlocksWorld | Constant | 1 | Unbounded | | Logistics | Constant | 0 | Unbounded | | Gripper | Constant | 0 | Constant | | Rover | Constant | 0 | Unbounded | | Elevator | Constant | 0 | Unbounded | | Sokoban | Unbounded | N/A (not serializable) | Unbounded | Note that here we are limiting our discussion to the case where the goal of the planning problem is a single atom. In summary, most of these problems have a constant breadth (i.e., the regression width of the corresponding problem is constant) except for Sokoban. Most problems have an unbounded depth: that is, the depth of the circuit will depend on the number of objects (e.g., the size of the graph or the number of blocks). The constant-breadth results can be proved by construction: list all regression rules in the domain. We have analyzed the complexity of BlocksWorld and Logistics in Appendix A.6; all other problems can be proved in a similar way. The Gripper problem has a constant depth under single-atom goals because it assumes the agent can move directly between any two rooms; therefore, no pathfinding is required. Sokoban has an unbounded breadth even for single-atom goals because, if there are multiple boxes blocking the way to a designated position, the order in which to move the boxes cannot be determined by simple rules. For the problems in this list, when there are multiple goals, the goals are usually not serializable (in the optimal planning case). This can possibly be addressed by introducing "super predicates" that combine two literals. For example, if two goal atoms p(x) and q(x) are not serializable, we can introduce a new predicate p_and_q(x) and rewrite all operators with this new super predicate. Appendix A.5 provides more discussion and examples. This will make the problem serializable but at the cost of significantly enlarging the set of predicates (exponentially with respect to the number of goal atoms). If we only care about satisficing plans, then for Logistics, Gripper, Rover, and Elevator, there exists a simple serialization for any conjunction of goal atoms (basically achieving one goal at a time). These theoretical analyses can be used to determine the size of the policy circuit needed for each problem. For example, one should use a RelNN of breadth $(k + 1) \cdot \beta$, where $k$ is the regression width and $\beta$ is the max arity of predicates (see the sketch below). For problems that have unbounded depth, the depth of the circuit usually grows as $O(N)$, where $N$ is the number of objects in the environment (e.g., in Elevator, the number of floors).
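As a worked instance of the breadth rule stated above, here is a small helper computing $(k + 1) \cdot \beta$; treating the predicates of these domains as binary ($\beta = 2$) is an illustrative assumption.

```python
def relnn_breadth(k: int, beta: int) -> int:
    """Breadth bound from the general response: (k + 1) * beta, where k is
    the regression width and beta the max arity of predicates."""
    return (k + 1) * beta

print(relnn_breadth(1, 2))  # BlocksWorld: width 1, binary predicates -> 4
print(relnn_breadth(0, 2))  # Logistics/Gripper/Rover/Elevator: width 0 -> 2
```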
NeurIPS_2023_submissions_huggingface
2023
AD-PT: Autonomous Driving Pre-Training with Large-scale Point Cloud Dataset
Accept (poster)
Summary: This paper studies model pre-training for autonomous driving by utilizing a large-scale point cloud dataset, which largely facilitates the generalization performance of pre-trained models. To construct such a dataset, scene- and instance-level distribution diversity are carefully enhanced. The overall pre-training is conducted in a semi-supervised manner, and its effectiveness is validated on several benchmark datasets with different baseline models. Strengths: 1. Compared to the traditional pre-training setting for point clouds, the AD-PT pre-training setting is more practical, since it is expected to learn a generalized pre-trained model for all downstream tasks. 2. Compared to existing pre-training methods trained in a fully unsupervised manner, this paper adopts a semi-supervised strategy, which is somewhat novel. 3. The data diversity enhancement with re-sampling strategies is reasonable. So is the unknown-aware instance learning head. Weaknesses: 1. Although AD-PT presented improved performance on downstream tasks with limited training data (e.g., 20% data amount on Waymo), it would be interesting to illustrate the results with more training samples in downstream tasks (e.g., 100% data amount on Waymo). Technical Quality: 3 good Clarity: 3 good Questions for Authors: NA Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Response to Reviewer 5kGB Thanks for your review and suggestions. **Q1: Although the AD-PT presented an improved performance on downstream tasks with limited training data (e.g., 20% data amount on Waymo), it is interesting to illustrate the results with more training samples in downstream tasks (e.g., 100% data amount on Waymo).** **A1**: For a fair comparison with the previous works like ProposalContrast and BEV-MAE, we conduct experiments on fine-tuning using 20% Waymo data in the main text. Here, we show the results fine-tuned on 100% Waymo data. |Method|Data amount | Overall L2 AP / APH | Vehicle| Pedestrian | Cyclist| | --- | --- | --- | --- | --- | --- | |From scratch (PV-RCNN++) | 100%| 71.66 / 69.45 | 70.61 / 70.18 | 73.17 / 68.00 | 71.21 / 70.19 | |AD-PT (PV-RCNN++) | 100% | 72.39 / 70.10 | 71.01 / 70.60 | 74.84 / 69.46 | 71.32 / 70.25 | It can be seen that the performance can be consistently improved using more fine-tuning data.
Summary: This paper focuses on the point-cloud pre-training problem in autonomous driving scenarios. In particular, the authors regard the pre-training task as a semi-supervised learning problem and generate pseudo-labels for massive unlabeled data with a few labeled frames. In the pseudo-label generation phase, they propose two specific designs for this task: diversity-based pre-training data preparation and unknown-aware instance learning. The experiments conducted on the KITTI, nuScenes, and Waymo datasets show the effectiveness of the proposed method. Strengths: - The topic is meaningful. Different from 2D vision tasks, the 3D detection community lacks effective pre-training methods and pre-trained backbones, and strong pre-trained backbones would be helpful to the whole community. - The authors provide a pre-training paradigm, and the experiments show the effectiveness of the pre-trained model, especially when training samples are limited. - Code will be publicly available. The authors provide source code for reproducibility. Weaknesses: - The work claims that it provides a general pre-trained backbone for the LiDAR-based 3D object detection task. However, different from 2D perception tasks, which are dominated by a few kinds of backbones, there are several popular detection pipelines with different backbone nets in this field. Some 3D detectors even use different input data representations, e.g., points, pillars, voxels, range images, etc. It is hard to build unified 3D pre-trained models for these works, and what this work provides is a voxel-based pre-trained backbone. Based on this, I think this work is over-claimed. - The 3D backbone is highly customized. Take the voxel-based backbone as an example: we need to pre-define the voxel size, and I believe the voxel size used in pre-training is the same as that used in the downstream task. I would like to know what would happen if different voxel sizes were used for pre-training and downstream tasks. If we must re-pre-train each backbone with different resolutions, then the significance of this work is greatly reduced. - The improvement in performance is limited. I found that the pre-trained model has a relatively small impact on the final performance, especially in the full-dataset setting. Considering the extra pre-training cost, the performance improvement is limited, and I want to know whether simply extending the training schedule can achieve comparable performance or not. - Novelty is limited. This work regards the pre-training task as a semi-supervised learning problem with a pseudo-labeling scheme. However, using a small amount of labeled data to generate pseudo-labels for LiDAR-based 3D detection has been investigated in [1], which achieves better performance. [1] Pseudo-labeling for Scalable 3D Object Detection, https://arxiv.org/abs/2103.02093 Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: More results of using the point-, pillar-, and voxel-based methods to demonstrate the generalization ability of AD-PT.** **A1**: - As mentioned by the reviewer, there are several different types of 3D object detection backbones, *e.g.* pillar-based, point-based, and voxel-based. We verify the generalization ability of AD-PT based on the voxel-based method, as it is currently the most commonly used backbone. - Furthermore, we conduct experiments on different types of backbones, *e.g.* pillar- and point-based. The results are shown in the following tables. |Methods (PointPillar)|Data amount (Waymo)|Overall L2 AP/APH|Veh.|Ped.|Cyc.| |-|-|-|-|-|-| |Scratch|3%|48.56/39.30|54.28/53.51|47.11/25.50|44.29/38.89| |AD-PT|3%|52.01/43.99|58.51/57.85|50.22/32.52|47.31/41.59| |Scratch|20%|57.85/50.69|62.18/61.64|58.18/40.64|53.18/49.80| |AD-PT|20%|59.71/53.49|64.10/63.54|59.00/43.13|56.04/53.80| |Methods (PointRCNN)|Data amount (KITTI)|Overall|Car|Ped.|Cyc.| |-|-|-|-|-|-| |Scratch|20%|64.12|75.30|52.52|69.55| |AD-PT|20%|67.67|77.20|54.16|71.86| |Scratch|100%|68.40|78.70|54.41|72.11| |AD-PT|100%|70.47|80.25|57.13|74.04| We conduct point-based experiments on the KITTI dataset for a fair comparison, due to the lack of a Waymo config in OpenPCDet. The fine-tuning performance of point- and pillar-based methods can be further improved when the models are initialized by our pre-trained checkpoint, showing that our method can be combined with multiple types of 3D detectors. - We agree with the reviewer's suggestion that we do not provide a general backbone but rather a general pre-training pipeline that can be used on various types of 3D detectors. **We will revise it in the next version**. **Q2: About the voxel size used in pre-training.** **A2**: - The voxel sizes used in the experiments are listed below. |Dataset (setting)|Voxel size| |-|-| |ONCE (pre-training)|[0.1,0.1,0.2]| |Waymo (Fine-tuning)|[0.1,0.1,0.15]| |nuScenes (Fine-tuning)|[0.1,0.1,0.2]| |KITTI (Fine-tuning)|[0.05,0.05,0.1]| As shown in the table, the voxel size used in pre-training is different from that used in the downstream task (fine-tuning stage). Especially for the KITTI dataset, the table shows the flexibility of loading the backbone with different voxel sizes. - For a fair comparison with previous methods like BEV-MAE, we use [0.1,0.1,0.2] on nuScenes in the main text. Further, we also supplement the results by conducting the experiment on nuScenes using [0.075,0.075,0.2] in the following table. |Methods (CenterPoint)|mAP|NDS| |-|-|-| |Scratch|58.07|66.00| |AD-PT|59.30|66.72| **Q3: I want to know whether extending the training schedule can achieve comparable performance or not.** **A3**: - First, we conduct experiments on extending the training schedule, and the results are shown as follows. |Methods (PV-RCNN++)|Data amount (Waymo)|Epochs|Overall L2 AP/APH|Veh.|Ped.|Cyc.|time cost| |-|-|-|-|-|-|-|-| |Scratch|20%|150|70.95/68.51|70.25/69.81|72.18/66.35|70.43/69.38|5x time| |AD-PT|20%|30|71.55/69.2|70.62/70.19|72.36/66.82|71.69/70.70|1x time| |Scratch|3%|150|68.76/66.18|68.10/67.62|70.08/63.88|68.11/67.03|5x time| |AD-PT|3%|150|69.72/67.14|69.00/68.52|71.11/64.93|69.04/67.96|5x time| Initialized by our pre-trained model, training for only 30 epochs can exceed the results of 150 epochs of training from scratch. Besides, our pre-trained model can consistently outperform training from scratch when extending the training schedule (e.g., 3% fine-tuning data for 150 epochs).
- Further, our method can easily scale up the pre-training data compared with previous methods, and the performance on the downstream dataset can be further improved, as shown in Table 8 and the following table. |Methods (SECOND)|Overall L2 AP/APH|Veh.|Ped.|Cyc.| |-|-|-|-|-| |Scratch|60.62/56.86|64.26/63.73|59.72/50.38|57.87/56.48| |AD-PT (100K pre-training data)|61.26/57.69|64.54/64.00|60.25/51.21|59.00/57.86| |AD-PT (500K pre-training data)|62.34/58.74|65.20/64.66|61.26/52.20|60.56/59.36| |improvement|+1.72/+1.88|+0.94/+0.93|+1.54/+1.82|+2.79/+2.88| **Q4: Difference between our proposed AD-PT and pseudo-label-based 3D semi-supervised learning.** **A4**: - *Task-level differences*: Different from pseudo-labeling and semi-supervised learning mentioned by the reviewer, which aim to find a better pseudo-labeling method, our work aims to propose a *pre-training* method that can learn unified 3D representations that improve performance on multiple downstream datasets. Meanwhile, previous 3D pre-training methods mainly focus on pre-training and fine-tuning on the same dataset. AD-PT is the first work in this community to focus on pre-training and evaluating on different downstream datasets (Waymo, KITTI, nuScenes). - *Algorithm-level differences*: - Pseudo-labeling mentioned by the reviewer tries to generate more useful pseudo labels using semi-supervised learning methods. As mentioned in our Method section, we use a semi-supervised learning method to improve the quality of pseudo labels, which is **only a part** of our pre-training pipeline. - Our proposed method mainly considers the generalization of the model while improving the performance on downstream tasks, which is totally different from previous pre-training methods and semi-supervised learning methods. - The re-scaling and re-sampling methods aim to increase the diversity of the pre-training data, which improves the backbone's generalization ability at the data level. - The unknown-aware instance learning head considers the difference between the pre-training and fine-tuning datasets, which improves the backbone's generalization ability at the algorithm level. - *Application-level differences*: As the amount of pre-training data increases, the performance on downstream tasks is continuously improved, as shown in Table 8 and the table in **A3**. Such a phenomenon **cannot be observed** in previous 3D AD pre-training methods. --- Rebuttal Comment 1.1: Title: Final Rating Comment: After reading the comments from other reviewers and the rebuttal from the authors, I still think the novelty of this work is limited. However, I appreciate the additional experiments, which show the generalization ability of this work. So I'd like to slightly improve my score to 'borderline accept'. --- Reply to Comment 1.1.1: Title: Thanks for your approval, and clarifying our novelty Comment: Thank you very much for your approval of AD-PT's generalization ability. We appreciate the reviewer's comments on our work. Due to the limited number of characters in the rebuttal phase, we would like to clarify the novelty of AD-PT from the following three aspects. **Motivation-level**: AD-PT is the first work aiming at using totally different pre-training and fine-tuning datasets and the first work that can provide backbones with strong generalization ability. Such a new pre-training paradigm brings the following three advantages. - Data-efficient. Previous works mainly follow paradigms that use **the same dataset** for pre-training and fine-tuning.
However, when the sensor is updated, the previous pre-trained model cannot improve performance on the new data. In contrast, AD-PT provides backbones with strong generalization ability, which means that the previously pre-trained backbone can improve performance on new data. Meanwhile, in the rebuttal phase, we show that our method can easily be combined with multi-dataset pre-training, which means we can reuse data from old sensors. **In conclusion, AD-PT can reuse previous data to provide a good pre-training model without re-collecting a large amount of data from the new sensor**.
- Training-efficient. Previous methods need to pre-train new backbone parameters using new data whenever we want to improve performance on newly collected data, which is time-consuming. **In contrast, AD-PT achieves a one-shot pre-training that improves performance across multiple domains, greatly reducing pre-training time and computation power consumption**.
- Scaling-up performance. Limited by the amount of data, previous works can hardly observe performance gains when increasing the pre-training data. However, **AD-PT verifies that the downstream performance is positively correlated with the amount of upstream pre-training data**.

**Framework-level**: AD-PT is the first work to use semi-supervised pre-training and provides a complete pre-training process, including data labeling and the training algorithm (we will release the large-scale pseudo-labeled point clouds, about 1M samples, which is the largest-scale labeled outdoor point-cloud dataset). Meanwhile, the starting point of AD-PT is totally different from previous semi-supervised learning and data augmentation works, as follows.
- Previous pre-training methods mainly use MAE-based or contrastive-learning-based methods to obtain backbone parameters that perform best in a single domain, while AD-PT explores a new pseudo-label-based method and fully considers the differences between upstream and downstream datasets, such as the differences in semantic classes. As a result, we propose the Unknown-aware Instance Learning Head to perform open-set detection on the downstream dataset.
- Furthermore, we provide a complete pre-training process including data preparation and a pre-training algorithm that is totally different from previous pre-training methods, as shown in Fig. 2.

**Component-level**: Finally, each proposed module is also different from existing ones.
- Data preparation. Based on the observation that the quality of pseudo labels affects the performance of the pre-training process, we propose to use the class-aware pseudo-label generator and semi-supervised learning to obtain pseudo labels that are as accurate as possible. The pseudo-labeling mentioned by the reviewer improves semi-supervised performance via threshold-based pseudo labels, which is only a part of our method. Besides, our re-scaling and re-sampling methods mainly aim to improve the diversity of the pre-training data, which differs from most previous data augmentation methods aiming at improving a single-domain model.
- Training algorithm. AD-PT is the first work that introduces the idea of open-set learning to the pre-training task, which fully considers the taxonomy difference between upstream and downstream datasets.
We found that some negative samples, which could be ignored during the upstream pre-training period, may be useful for different downstream fine-tuning tasks (due to the taxonomy differences between datasets). As a result, we treat such samples as undetected instances. This idea differs from previous MAE-based and contrastive-learning-based methods.
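To make the data-level diversity operations mentioned above concrete, below is a minimal numpy sketch of point-cloud re-scaling and LiDAR-beam re-sampling. This is our illustration only: the scale range, the beam-id convention, and all function names are assumptions, not the released AD-PT code.

```python
import numpy as np

def rescale_points(points, scale_range=(0.9, 1.1), rng=np.random):
    """Randomly re-scale a point cloud to mimic scene/object size diversity.

    points: (N, 4) array of [x, y, z, intensity].
    The scale range is an assumed example, not the paper's exact setting.
    """
    s = rng.uniform(*scale_range)
    out = points.copy()
    out[:, :3] *= s
    return out

def resample_beams(points, beam_ids, keep_every=2):
    """Drop LiDAR beams to mimic a lower-beam sensor (e.g., 64 -> 32 beams).

    beam_ids: (N,) integer ring index per point; assumed to be available
    from the sensor driver or estimated from the elevation angle.
    """
    mask = (beam_ids % keep_every) == 0
    return points[mask]

# Usage on random stand-in data:
pts = np.random.randn(1000, 4).astype(np.float32)
beams = np.random.randint(0, 64, size=1000)
aug = resample_beams(rescale_points(pts), beams)
```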
Summary: The paper proposes an autonomous driving pre-training method, where a large-scale pre-training dataset is built and a generalizable representation is created. To further improve performance on downstream tasks, a semi-supervised pre-training technique is proposed. Moreover, an unknown-aware instance learning head is proposed to learn open-set detection. Various experiments are conducted to show the effectiveness of the method. Strengths: 1. The paper is well-organized and clearly written. 2. Thorough experiments are conducted on various benchmarks. Weaknesses: 1. The novelty is a bit limited, since the proposed techniques look like a combination of existing approaches, such as data augmentation and semi-supervised learning. 2. Although the improvements on various benchmarks are promising, the results are inferior to SOTA LiDAR-based object detection approaches such as VISTA [A] and MGTANet [B] on the nuScenes dataset. Hence, it would be great to also test on current SOTA approaches, which could further verify the generalization ability and robustness of the method. 3. Eq. 2 seems to be incorrect since x and y share the same calculation. It would be clearer if a figure were drawn to show the calculation. 4. Line 293-294: Should the improvements for Waymo and nuScenes be 0.65%/0.54% and 7.37%, respectively? [A]: Deng, Shengheng, et al. "Vista: Boosting 3d object detection via dual cross-view spatial attention." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [B]: Koh, Junho, et al. "MGTANet: Encoding Sequential LiDAR Points Using Long Short-Term Motion-Guided Temporal Attention for 3D Object Detection." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 1. 2023. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: None Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The limitation discussion seems to be missing. Possible memory consumption or time efficiency could be discussed to show a more comprehensive comparison with other SOTA works. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer hHpC

We sincerely appreciate the Reviewer's efforts and comments. We have tried our best to clarify the novelty of the proposed method and to supplement more experimental results and discussions here.

**Q1: Novelty is a bit limited, since the proposed techniques look like a combination of existing approaches, such as data augmentation and semi-supervised learning.**

**A1**:
- Our study provides the autonomous driving community with the **knowledge** that scaling up the amount of pre-training samples can boost the performance on **multiple downstream datasets** simultaneously. Such an insight can inspire pre-training studies in the autonomous driving community, with the purpose of fully leveraging the constantly increasing autonomous driving data.
- Our proposed method mainly considers the generalization ability of the model while improving the performance of downstream tasks.
  - The re-scaling and re-sampling methods aim to increase the diversity of the pre-training data, which improves the backbone's generalization ability at the **data level**.
  - The unknown-aware instance learning head fully considers the difference between the pre-training and fine-tuning datasets, which improves the backbone's generalization ability at the **algorithm level**.

As shown in Tables 8, 9, and 10 of the main text, initialized with our pre-trained backbone (**a single checkpoint**), the performance on multiple downstream datasets is improved and even surpasses the baselines that pre-train and fine-tune on the same dataset.
- Further, *as the amount of pre-training data increases, the performance of downstream tasks is continuously improved, as shown in Table 8* of the main text. Such a phenomenon **cannot be observed** with previous pre-training methods (*e.g.* ProposalContrast, BEV-MAE), because the scale of the pre-training dataset is difficult to expand continuously when pre-training and fine-tuning are performed on the same benchmark. Besides, we conduct experiments with more fine-tuning data (*i.e.* Waymo 20%), as shown in the following table. Overall, different from previous semi-supervised learning and 3D pre-training methods (*e.g.* ProposalContrast, BEV-MAE), which aim to improve the performance **on the same dataset**, our work performs the pre-training and downstream fine-tuning **across different datasets**, reducing the model retraining cost and achieving fine-tuning performance scalability. This is the first work focusing on pre-training a 3D backbone that is verified to be effective on many autonomous driving datasets, such as Waymo, nuScenes, and KITTI.

|Method|Overall L2 AP / APH|Vehicle|Pedestrian|Cyclist|
|---|---|---|---|---|
|AD-PT (SECOND, 100K pre-training data)|61.26 / 57.69|64.54 / 64.00|60.25 / 51.21|59.00 / 57.86|
|AD-PT (SECOND, **500K** pre-training data)|62.34 / 58.74|65.20 / 64.66|61.26 / 52.20|60.56 / 59.36|

- Current semi-supervised learning methods such as MeanTeacher cannot achieve a significant performance gain on the downstream dataset, as shown in Table 1 of the main text and the following table.

|Method|mAP|NDS|
|---|---|---|
|MeanTeacher|55.46|63.93|
|AD-PT|57.17|65.48|

**Q2: Although the improvements on various benchmarks are promising, the results are inferior compared to SOTA LiDAR-based object detection approaches such as VISTA.**

**A2**: Thanks for your valuable comment.
According to this Reviewer's suggestion, we supplement the experiments by initializing VISTA [Ref-A] with our AD-PT pre-trained model. The result of training VISTA from scratch is 60.8 / 68.1 (as reported in the original paper [Ref-A]), and the result of training VISTA from the AD-PT checkpoint is 61.12 / 68.38. We report the results on the nuScenes validation set, and the experimental results demonstrate that VISTA can be further improved by pre-training with the AD-PT method, which further verifies the generalization ability of our method. We will add the comparison with VISTA [Ref-A] **in the next version**.

[Ref-A] Deng, Shengheng, et al. "Vista: Boosting 3d object detection via dual cross-view spatial attention." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.

**Q3: Eq. 2 seems to be incorrect since x and y share the same calculation. It would be clearer if a figure were drawn to show the calculation.**

**A3**: We thank the Reviewer very much for pointing it out. The correct formula should be:

$$x = r\cos(\phi)\cos(\theta), \quad y = r\cos(\phi)\sin(\theta), \quad z = r\sin(\phi)$$

We will correct the formula in the next version and provide a figure to clearly show the calculation in the **one-page PDF**.

**Q4: Line 293-294: Should the improvements for Waymo and nuScenes be 0.65%/0.54% and 7.37%, respectively?**

**A4**: We thank the Reviewer very much for this comment, and we will correct it in the next version.

Limitation: Although the AD-PT pre-trained backbone can improve the performance on multiple downstream datasets, it needs to be verified in more real-world road scenarios. Meanwhile, training a backbone with stronger generalization capability using data from different sensors is also a future direction. We will add a limitation section in the next version.

---

Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. To make the comparison to existing works fair enough, it is highly encouraged to directly compare to the widely used nuScenes testing data shown in Tab. 1 of VISTA [A] and MGTANet [B] to show the effectiveness of the proposed method.

---

Reply to Comment 1.1.1: Title: Thanks for your suggestion, and the nuScenes test results Comment: We would like to thank the reviewer for this valuable suggestion. According to the reviewer's comment, we further submit the nuScenes test results of VISTA initialized with our proposed AD-PT to the **nuScenes server** to make a fair comparison. We follow VISTA in using double flip as test-time augmentation. The test performance is shown below, where we use the performance of VISTA-OHS reported in the original paper for a fair comparison.

|Methods|NDS|mAP|car|truck|cons.|bus|trailer|barrier|motorcycle|bicycle|pedestrian|traffic cone|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
|VISTA-OHS (from scratch)|69.8|63.0|84.4|55.1|25.1|63.7|54.2|71.4|70.0|45.4|82.8|78.5|
|VISTA-OHS (AD-PT)|70.47|63.84|84.6|54.1|29.0|64.3|55.3|71.3|71.2|45.4|83.7|78.9|

We are very grateful for this insightful comment and will add the comparison with VISTA-OHS mentioned by the reviewer in the next version.
Furthermore, we would like to emphasize that the checkpoint obtained by our proposed AD-PT boosts the model performance not only on the nuScenes dataset (including both the validation and test sets) but also on other public datasets, such as Waymo (a 1.65% gain on PV-RCNN++, as shown in Table 3 of the main text) and KITTI (a 2.44% gain on PV-RCNN, as shown in Table 5 of the main text).
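As a side note on A3 of the response above, the corrected spherical-to-Cartesian conversion can be sanity-checked in a few lines of Python (a minimal sketch with our own variable names, treating $\theta$ as azimuth and $\phi$ as elevation):

```python
import numpy as np

def spherical_to_cartesian(r, theta, phi):
    """Corrected conversion from A3: theta = azimuth, phi = elevation."""
    x = r * np.cos(phi) * np.cos(theta)
    y = r * np.cos(phi) * np.sin(theta)
    z = r * np.sin(phi)
    return x, y, z

# Sanity check: the radius is recovered exactly, since
# cos^2(phi)(cos^2(theta) + sin^2(theta)) + sin^2(phi) = 1.
x, y, z = spherical_to_cartesian(10.0, np.deg2rad(30), np.deg2rad(10))
assert np.isclose(np.sqrt(x**2 + y**2 + z**2), 10.0)
```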
Summary: This paper aims at large-scale point cloud dataset pre-training. The authors propose the AD-PT method to build a pre-training dataset with diverse data distribution and learn generalizable representations. In detail, they design a diversity-based pre-training data preparation procedure and unknown-aware instance learning. The extensive experimental results on the Waymo, nuScenes and KITTI datasets verify the effectiveness of the proposed method. Strengths: 1. Pre-training on a large-scale point cloud dataset is meaningful for autonomous driving. On the one hand, human annotation is expensive and how to use unlabeled data is an important direction. On the other hand, the point clouds of different LiDAR sensors have different patterns and we need to consider these domain gaps. 2. The proposed method is reasonable. Although some techniques are not new, such as contrastive learning, I think they are reasonable for large-scale point cloud pre-training. For example, the LiDAR beam re-sampling can increase the robustness to the beam domain gap. 3. The paper is well-written and easy to read. Weaknesses: 1. The results in Table 3 cannot show the superiority of the proposed method. The performance gain over SS-PT methods such as BEV-MAE is marginal. 2. I think a more useful setting is that we have some datasets A, B and C collected from different LiDAR sensors. We can use the labels in all three datasets and we want to improve the detector's performance on the C dataset. Can the proposed method be applied to this setting? I hope the authors can give some insights. 3. The authors should give an analysis between the detector's performance on the fine-tuning dataset and the amount of labels in the pre-training dataset. As an extreme case, if we do not have labels in the pre-training dataset, will the performance drop largely? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please reply to the weakness part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper does not have a limitation section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer 8so5

Dear Reviewer XpxU, thanks for your review; we provide more experimental results and explanations for your questions.

**Q1: Table 3 cannot show the superiority of the proposed method. The performance gain over SS-PT methods such as BEV-MAE is marginal.**

**A1**:
- Different from previous pre-training methods like BEV-MAE, which use **the same** dataset for pre-training and fine-tuning, AD-PT focuses on designing a general pre-training pipeline that yields backbones with strong generalization ability, improving the performance on **multiple downstream datasets** (e.g., nuScenes, Waymo).
- Here we list the percentage of our performance improvement over BEV-MAE relative to training from scratch, i.e., $(\text{AD-PT} - \text{BEV-MAE}) / (\text{BEV-MAE} - \text{Scratch})$. As shown in the following table, the relative improvement on Waymo exceeds 45% compared with BEV-MAE under multiple detectors.

|Detector|Percentage (AP / APH)|
|---|---|
|SECOND|56.1% / 88.6%|
|CenterPoint|55.6% / 45.4%|
|PV-RCNN++|198.1% / 211.3%|

- Meanwhile, using the same AD-PT pre-trained checkpoint, the performance on multiple datasets can be improved (e.g., nuScenes). However, under the cross-dataset setting (i.e., Waymo pre-training, nuScenes fine-tuning), the performance cannot be improved with BEV-MAE initialization, as shown in the following table.

|Method|mAP|NDS|
|---|---|---|
|From scratch (CenterPoint)|56.2|64.5|
|BEV-MAE (CenterPoint)|56.30|64.62|
|AD-PT (CenterPoint)|57.17|65.48|

- Further, AD-PT achieves continuous performance gains by expanding the scale of the pre-training dataset. When pre-training on a larger-scale dataset (i.e., 500K ONCE samples), the performance can be further improved, as shown in the following table.

|Method|Overall L2 AP / APH|Vehicle|Pedestrian|Cyclist|
|---|---|---|---|---|
|BEV-MAE (SECOND)|61.03 / 57.30|64.42 / 63.87|59.97 / 50.65|58.69 / 57.39|
|AD-PT (SECOND, 100K pre-training data)|61.26 / 57.69|64.54 / 64.00|60.25 / 51.21|59.00 / 57.86|
|AD-PT (SECOND, 500K pre-training data)|62.34 / 58.74|65.20 / 64.66|61.26 / 52.20|60.56 / 59.36|

- When fine-tuning on a small amount of data, AD-PT shows better performance compared to previous methods.

|Method|Data amount|Overall L2 AP / APH|Vehicle|Pedestrian|Cyclist|
|---|---|---|---|---|---|
|From scratch (PV-RCNN++)|3%|63.81 / 61.10|64.42 / 63.93|64.33 / 57.79|62.69 / 61.59|
|BEV-MAE (PV-RCNN++)|3%|64.87 / 62.05|65.54 / 65.04|65.46 / 58.98|63.62 / 62.15|
|AD-PT (PV-RCNN++)|3%|68.33 / 65.69|68.17 / 67.70|68.82 / 62.39|68.00 / 67.00|

**Q2: ... We have some datasets A, B and C collected from different LiDAR sensors. ... Can the proposed method be applied to this setting?**

**A2**: Thanks for your constructive suggestion. Data from different sensors can greatly improve the diversity of the pre-training dataset. AD-PT offers a near-zero-cost way to increase data diversity together with an effective pre-training method. When more datasets from different sensors are available, AD-PT can still effectively increase the diversity, and our pre-training method can easily be combined with different datasets. We conduct simple experiments to show that our method can be used in **multi-dataset pre-training scenarios**. We use the KITTI dataset, the ONCE labeled dataset, and our pseudo-labeled dataset. The KITTI dataset is collected with a 64-beam LiDAR and the ONCE dataset with a 40-beam LiDAR.
Inspired by Uni3D [1], we align the point cloud range. The following table shows the fine-tuning results on the 3% Waymo train set.

|Method|Fine-tuning data amount|Pre-training data|Overall L2 AP / APH|Vehicle|Pedestrian|Cyclist|
|---|---|---|---|---|---|---|
|AD-PT (PV-RCNN++)|3%|ONCE labeled dataset, pseudo-labeled dataset|68.33/65.69|68.17/67.70|68.82/62.39|68.00/67.00|
|AD-PT (PV-RCNN++)|3%|KITTI dataset, ONCE labeled dataset, pseudo-labeled dataset|68.67/66.00|68.47/67.98|69.12/62.63|68.41/67.39|

[1] Zhang B, Yuan J, Shi B, et al. Uni3D: A unified baseline for multi-dataset 3D object detection. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 9253-9262.

**Q3: Give an analysis between the detector's performance on the fine-tuning dataset and the amount of labels in the pre-training dataset.**

**A3**:
- AD-PT uses a semi-supervised manner to pseudo-label a large amount of unlabeled data. Here, we conduct experiments on reducing the labeled data, and the results are shown in the following table. The ONCE labeled set consists of a total of 6 sequences, and we reduce the amount of labeled data by extracting random sequences.

|Method|Labeled data|Overall L2 AP / APH|Vehicle|Pedestrian|Cyclist|
|---|---|---|---|---|---|
|AD-PT|1 sequence from labeled set|67.11/64.72|66.98/66.48|67.93/61.29|66.44/65.41|
|AD-PT|3 sequences from labeled set|67.47/64.78|67.52/67.03|68.35/61.80|66.54/65.52|
|AD-PT|6 sequences from labeled set|68.33/65.69|68.17/67.70|68.82/62.39|68.00/67.00|

It can be seen that performance degrades as the amount of annotated data decreases. Such degradation is mainly because the accuracy of the pseudo labels drops, and this phenomenon is consistent with Figure 6 of the main text. Besides, Table 8 in the main text also shows the effect of using different amounts of unlabeled data.

**Limitation**: Although the AD-PT pre-trained backbone can improve the performance on multiple downstream datasets, it needs to be verified in more real-world road scenarios. Meanwhile, training a backbone with stronger generalization capability using data from different sensors is also a future direction. We will add a limitation section in the next version.

---

Rebuttal Comment 1.1: Title: Keep my positive rating Comment: Thanks to the authors' thoughtful response; I feel that my questions have been mostly resolved, and I will maintain my rather positive rating.
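For clarity, the relative-improvement ratio used in A1 of this response, $(\text{AD-PT} - \text{BEV-MAE}) / (\text{BEV-MAE} - \text{Scratch})$, can be computed with a tiny helper; the sample numbers below are illustrative only, not taken from a specific table:

```python
def relative_gain(ours, baseline, scratch):
    """Our improvement over the baseline, expressed relative to the
    baseline's own improvement over training from scratch."""
    return (ours - baseline) / (baseline - scratch)

# Illustrative numbers only: a 1-point gain over a baseline that itself
# gained 1 point over scratch corresponds to a 100% relative improvement.
print(f"{relative_gain(62.0, 61.0, 60.0):.1%}")  # -> 100.0%
```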
Rebuttal 1: Rebuttal: ## General Response:

Dear AC and reviewers,

Many thanks for your valuable comments and constructive suggestions to improve the quality of our work. Here is a summary of what we have done in the rebuttal phase.
- We conduct several new experiments to address the reviewers' concerns:
  - verifying the effectiveness of our pre-training method on different types of 3D backbones (e.g., pillar-based and point-based backbones);
  - comparing our results with the result of simply extending the training schedule;
  - applying our pre-trained backbone to more SOTA works (i.e., VISTA);
  - more experiments to show the sensitivity to voxel size;
  - a simple attempt at fusing multiple datasets;
  - reducing the labeled data;
  - fine-tuning on 100% Waymo data;
  - more experiments to show the generalization ability of the pre-trained backbone obtained by AD-PT;
  - more experiments to show the fine-tuning performance when scaling the pre-training data.
- We further provide more discussion of our insight and novelty: we provide the autonomous driving community with the knowledge that scaling up the amount of pre-training samples can boost the performance on multiple downstream datasets simultaneously. Such an insight can inspire pre-training studies in the autonomous driving community, with the purpose of fully leveraging the constantly increasing autonomous driving data.
- We provide a discussion of the performance of our method and previous methods.
- We provide a discussion on how to train on datasets with multiple sensor types.
- We provide the limitations of our method.

Thank you again for your precious time on the review. We hope that our response has addressed your concerns. We are happy to have further discussion on anything unclear about our paper.

Best regards,
Authors of paper 4632

Pdf: /pdf/38e2dc042557eb919cb903d930ef01025a56a4ba.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
RGMIL: Guide Your Multiple-Instance Learning Model with Regressor
Accept (poster)
Summary: The paper presents a new approach for Multiple Instance Learning (MIL) called Regressor-Guided MIL (RGMIL). The contribution of RGMIL is a new aggregation approach, Regressor-Guided Pooling (RGP). In this approach, the aggregation part of the MIL model is split into N branches (for N + 1 classes, assuming class 0 is a negative class), where each branch is responsible for classifying a single class. The model is applied to a range of binary MIL problems (MUSK1, MUSK2, FOX, TIGER, ELEPHANT, Webpages, 20NewsGroups, Messidor, and UCSB Breast), and two max-based multi-class problems: MMNIST (proposed in this work) and UNBC shoulder pain estimation. The model mostly outperforms existing methods, and analysis is conducted as to why RGP is an effective pooling strategy. Strengths: **Originality** 1. The RGP pooling is an original approach, particularly with the double pass through the regressor. 2. Section 4.3 is also a novel investigation into the bottlenecks of MIL model training and can help understand why certain models succeed/fail. **Quality** 1. The evaluation covers a range of datasets and shows strong results, outperforming most other methods in most cases. 2. An ablation study is used to show how performance changes when training on different-sized bags but only classifying single-instance bags. **Significance** 1. Given the strong performance of RGMIL, it would appear useful in MIL settings that use max-based problems, i.e., traditional binary MIL or problems like pain estimation. Weaknesses: **Originality** 1. I believe some elements of the proposed method are already well-known concepts, e.g., equation 4 appears to be one-hot encoding, and equation 8 is a softmax. Simplifying these in the methodology and focusing on the novel parts of the method will help understanding. **Quality** 1. Some elements of the approach are lacking justification (see questions below). 2. The MMNIST dataset is proposed but then only evaluated in an instance-level setting. A stronger evaluation could involve looking at the performance when the training and test bag sizes are the same (i.e., 10/10, 16/16, etc.) as well as the instance-level setting. **Clarity** 1. I found the paper very hard to follow and it took a considerable effort to unpick what the proposed method was actually doing. 2. It is not clear quite how the method works, particularly with regard to training. In Section 3.2, I am puzzled by the statement *"Parameters in both the RGP and regressor will NOT be involved in gradient descent"*. Please see questions below. 3. The work mentions how CNNs work and how humans would approach a MIL problem as justification for their approach. This makes the narrative harder to follow and detracts from the design of the novel pooling mechanism. I think this comparison would be best placed in the appendix, with more space allocated to discussing the model architecture and explaining how it works. 4. Figure and table captions often contain little information, e.g., Table 2 does not state which dataset the model is trained on nor what the different columns mean, Figure 2 should have y-axis labels, etc. 5. Some of the mathematical notation should be revisited to aid understanding. For example, is it necessary to have $M$ as the total number of instances and then $t$ as the number of instances per bag? Equations 5 through 9 jump through different notations, e.g., $fs$ for representations, then $H$, then $F_i$. 6.
I would suggest another thorough proof-read to remove grammatical errors that make the work hard to follow. **Significance** 1. My understanding is that this approach is only applicable to multi-class problems that have some max-based ordering of classes, e.g., in pain estimation, if branch 1 and 2 are both positive, then the prediction is class 2. I don't know how this would work for general multi-class problems (see questions below), which limits its significance. 2. I disagree with the final paragraph of Section 2.2 (comparing t < c and t > c). Instance-level performance is important for interpretability in both cases - just because there are a greater number of instances per bag does not mean instance-level performance becomes less important. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: 1. Why does each branch of the aggregator produce two outputs? I can see the two outputs are used to calculate a difference in Equation 6, but don't understand why a single output with a sigmoid activation and a threshold of 0.5 isn't used. This needs to be explained in more detail. 2. If my understanding is correct, the regressor is used to calculate some form of attention weights (output of equation 8) and also classify the re-weighted instance representations. Is that correct? If so, what is the relationship to other MIL attention methods? It seems somewhat similar to the Additive MIL approach [1], which is not cited. 3. Is each branch regressor assigned random weights that are then fixed during training of the feature extractor? If so, what is the motivation behind this? 4. How would this approach work on multi-class problems that don't have some form of max-based ranking, e.g., SIVAL [2] or 4-MNIST-Bags [3]? My confusion lies in what happens when two or more branches all have positive indicator vectors $\hat{Y}$. In the max-based datasets used in this work, the maximum branch is taken as the overall prediction (equation 13). However, I'm not sure how this would work for other types of multi-class MIL datasets that don't use max-based classes. This limits the significance and scalability of the work. [1] Javed, Syed Ashar, et al. "Additive MIL: intrinsically interpretable multiple instance learning for pathology." Advances in Neural Information Processing Systems 35 (2022): 20689-20702. [2] Rahmani, Rouhollah, et al. "Localized content based image retrieval." Proceedings of the 7th ACM SIGMM international workshop on Multimedia information retrieval. 2005. [3] Early, Joseph, Christine Evers, and Sarvapali Ramchurn. "Model Agnostic Interpretability for Multiple Instance Learning." In International Conference on Learning Representations. 2022. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: 1. The work does not address the limitations of the novel method or suggest any areas for future work. 2. The main limitation of the work is that it is very hard to follow and difficult to understand how the novel method works. This is frustrating as the results appear promising. 3. One potential limitation, discussed in the questions above, is how this could be applied to multi-class datasets that do not follow some form of max-based assumption. 
Due to the difficulty in understanding the approach and the lack of clarity in the method, I am leaning towards rejection, as I don't think the method is reproducible from what is presented. However, there is certainly promise in the approach and the results appear strong. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your many valuable comments. It's great that you appreciate our originality and our investigation of the MIL model.

Comment 1: Weaknesses - Originality [R] We will revise the writing thoroughly according to all comments.

Comment 2: Weaknesses - Quality [R] We agree with the importance of bag-level performance. We have given additional general multi-classification experiments at both bag and instance level in the global rebuttal (see PDF).

Comment 3: Weaknesses - Clarity [R]: 1. We will revise our descriptions to make the paper more readable. 2. More responses can be found in the question parts. 3. As a starting point and an important inspiration for designing the pooling method, we mentioned CNNs here. Details have been put into the supplementary. 4. We have modified the relevant tables and figures. 5. We have fixed inconsistencies in the mathematical notation. 6. We will correct grammatical errors as suggested.

Comment 4: Weaknesses - Significance [R]: 1. We have added experiments on general multi-class datasets (SIVAL and the new MMNIST series), and the results are shown in the global rebuttal. 2. We understand and agree with your thoughts about instance-level importance. As the results on MMNIST show, the performance decays as the training bag length increases. In the supplementary (Section A.1), we argue that there is only one additional deterministic inference process when a human is faced with the MIL scenario, and any inference result that complies with the inference rule is legal. We assume that, given a fixed total number of data records, if the bag size is too large, there may be too many correct possibilities in a bag, which could cause instance-level performance degradation. This is an unsubstantiated idea and will be removed from the article.

Comment 5: Question 1 [R] The detailed responses are given in the global rebuttal.

Comment 6: Question 2 [R] Yes. Between ABP and RGP, there are some important similarities and differences. They share the same view regarding the reliance on feature weights when obtaining bag representations; this view has been validated in many MIL models and is a significant similarity. The main differences lie in how the weights are obtained and in the perspective on the MIL problem itself. The ABP series often trains matrices (related to $Q$, $K$, $V$) to obtain weights, which is not very direct. In our perspective, instance-level performance is crucial, and we essentially adjust the bag-level judgment based on the current judgments of individual instances, as in human reasoning for the MIL problem. RGMIL holds that the solution to MIL problems should regress back to the instance level and that the criteria for judgment are the same at the instance and bag levels, which is why we use the double-regressor scheme to pass on this consistency. We believe that there is no need for a parameterized black-box model to simulate the reasoning process of a MIL problem; simply asking for the current judgment from the instance-level regressor is a more natural and intuitive way of aggregating features. RGP is non-parametric, and the learning process can occur entirely in instance feature extraction. The theoretical analysis in Section 4.3 and Section A.1 of the supplementary confirms the advantage at the instance level. In contrast, ABP does not give extra consideration to instance-level performance or the actual human reasoning process. This is the key difference between RGP and ABP. We have added the citation of Additive ABP [1].
Additive ABP only changes the order of summation and the output logits to improve interpretability; essentially, it is a member of the ABP series. Its motivation and research focus are quite different from our method.

Comment 7: Question 3 [R] Yes. In a classification model, the learned features are expected to be linearly separable. For example, in the architecture of a supervised model (feature extractor + linear regressor), when the parameters of the linear regressor are fixed and only the feature extractor is trained, a discriminative extractor can still be obtained. In MIL models, the requirement on the features is still linear separability. In the case of multiple branches, training becomes a joint optimization problem, and fixing the parameters of the regressor makes training easier without causing performance degradation. Fixing the parameters of the linear regressor does not affect overall performance, but different random initialization strategies have a slight impact on performance. In RGMIL, we adopt the kaiming_normal_() random initialization for the parameters of the linear regressor (with a mean of 0).

Comment 8: Question 4 [R] We added experiments on general multi-class problems (SIVAL and the new MMNIST series) that do not have a max-based ranking; details are given in the global rebuttal. In the general multi-class scenario, during training, all bits in the indicator vector are considered reliable, and all corresponding branches participate in the calculation of the loss function. If multiple branches output 1 during test, the branches can be sorted by querying the softmax high-bit value of each branch as a confidence value, and the branch with the highest value can be selected as the final output. In the new experiments, however, we did not perform extra processing on the outputs; we only modified the loss function during the training phase. Theoretically, RGMIL is flexible and can be fine-tuned based on specific circumstances.

Comment 9: Limitation [R] Thanks, we will revise our description and add an analysis of the limitations. The contributions and additional experiments are shown in the global rebuttal. All presented results are guaranteed to be reproducible from our code on GitHub.

[1] R. Rahmani, et al., Localized content based image retrieval, in ACM SIGMM workshop, 2005. [2] J. Early, et al., Model Agnostic Interpretability for Multiple Instance Learning, in ICLR, 2022.

---

Rebuttal Comment 1.1: Comment: Thank you for your rebuttal and extended results.
* For SIVAL, why were 2, 4, and 10 positive class configurations considered, but not the full 25-class problem? This makes it difficult to compare to existing works that have used the complete configuration of this data.
* In general, I am unsure why only instance-level results are given for some datasets (e.g., MMNIST) and only bag-level results for others (e.g., SIVAL). While I understand the method better due to the rebuttal, I am still slightly unconvinced by the results (due to the problems outlined above).

---

Reply to Comment 1.1.1: Title: Comments on new experiments Comment: Thanks for your comment again.

Comment 1: SIVAL results [R] 1. Revised result table: Our intention was to compare different pooling methods from binary classification to 11-class classification, with varying numbers of positive classes (1, 3, and 10). However, in Table 2 of the PDF in the global rebuttal, we mistakenly filled in 2, 4, and 10. 2. Why not instance level?
As an image retrieval problem, SIVAL consists of 25 classes of complex objects (including apple, book, gold medal, etc.). We argue that the SIVAL dataset is not well suited to single-instance evaluation for the following reasons:
- "Fake" critical instances: In SIVAL, every bag is a complete image. The instances (image segments) in a bag are labeled positive if they contain the target object, otherwise negative. But it is not quite reasonable to expect correct predictions for individual instances (image segments). For example, consider two bags (images), where one is a red apple image and the other is a red book image. The two images share some similar instances (image segments), such as their red center regions. Under instance-level evaluation, although the center red region of the bag (the apple image) is labeled positive during training, that region alone is not sufficient to determine whether it belongs to a red apple or a red book at test time.
- Instance dependency or ordering: Since every bag is a complete image, there are order or dependency relationships between image segments. A set of several instances (image segments) jointly represents one object, and not a single one can be omitted; as noted above, one image segment alone does not represent an apple. This means that the order or dependency is crucial and useful for the final target prediction. If instance-level evaluation is applied, that dependency is missed. Thus it is not really reasonable to predict an individual image segment.

3. Why not 25-class classification?
- In the citation you mentioned above, J. Early, et al. [1] used 12 of the 25 classes as positive classes. In our experiments, performance on an 11-class task is similar to a 13-class task, so we considered a similar series of experiments and presented the performance of different pooling methods. We apologize for the misunderstanding: our aim was to evaluate different pooling methods in different scenarios, not to provide a direct benchmark comparison. Multiple MIL methods were evaluated on a 13-class classification problem in J. Early et al.'s work:

|Methods|MI-Net|mi-Net|MI-Attn|MI-GNN|
|---|---|---|---|---|
|Model Acc|0.819|0.808|0.813|0.781|

We can also provide a performance evaluation for RGMIL in the 13-class scenario:

|Methods|MXP|ABP|G-ABP|DSP|RGP|
|---|---|---|---|---|---|
|Model Acc|0.604|0.787|0.806|0.767|0.793|

The performance in the 25-class scenario is as follows:

|Methods|MXP|ABP|G-ABP|DSP|RGP|
|---|---|---|---|---|---|
|Model Acc|0.356|0.738|0.745|0.711|0.727|

Since our aim is to compare with other methods, in the 13- and 25-class experiments we pick the branch with the highest confidence value as the final output, as mentioned in Comment 8 above, and we report the average test accuracy over 10 runs for each model after training convergence. Also note that RGP and ABP assume no dependencies between instances, so any individual instance can be used to make a clear prediction. This differs from the actual data of SIVAL, where the RGP and ABP series encounter similar situations and do not outperform many existing methods.

Comment 2: [R] Compared with other pooling methods, RGMIL explicitly enhances instance-level performance by transferring the learning task completely to the instance feature extraction stage.
Since the primary focus of our MMNIST study was instance-level performance analysis, we did not provide a bag-level analysis and comparison in the paper. However, as you suggested, we have added the following comparison of bag-level performance for the MMNIST general 4-class problems:

|Aggregators|10 / 10|16 / 16|32 / 32|64 / 64|512 / 512|
|---|---|---|---|---|---|
|MXP|43.46|40.70|60.56|74.64|88.16|
|ABP|84.66|**87.41**|**92.35**|97.06|99.96|
|G-ABP|**85.20**|87.22|92.18|**97.46**|**100.0**|
|DSP|84.07|83.62|90.17|95.89|99.98|
|RGP|82.70|87.26|89.80|97.22|99.96|

The average test accuracy over 5 runs is reported for each model after convergence or 50 epochs. We reiterate that every result in the paper can be reproduced with our code. [1] J. Early, et al., Model Agnostic Interpretability for Multiple Instance Learning, in ICLR, 2022.
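To illustrate the test-time branch selection described in Comment 8 of the rebuttal above (picking the most confident positive branch when several branches fire), here is a minimal numpy sketch; the array shapes, the 0.5 threshold, and the function name are our assumptions rather than the released RGMIL code:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def predict_class(branch_logits):
    """branch_logits: (N, 2) array, one [negative, positive] logit pair
    per class branch (class 0 is the implicit negative class)."""
    probs = softmax(branch_logits, axis=-1)
    positive = probs[:, 1] > 0.5           # indicator bit per branch
    if not positive.any():
        return 0                           # no branch fires -> negative class
    # Several branches may fire; use the positive softmax value ("high bit")
    # as the confidence and pick the most confident branch.
    confidence = np.where(positive, probs[:, 1], -np.inf)
    return int(np.argmax(confidence)) + 1  # branch i -> class i + 1

# Branches 2 and 3 both fire; branch 3 is the most confident -> class 3.
print(predict_class(np.array([[2.0, -1.0], [0.1, 0.9], [-0.5, 1.5]])))
```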
Summary: This paper tries to overcome the drawback of low instance-level prediction performance in existing Multiple Instance Learning (MIL) models. According to the authors, the low instance-level performance is because the existing techniques focus mostly on analyzing the relationship between instances and aggregating them while ignoring the learning of effective instance-level representations. To this end, this paper proposes a novel aggregator function called Regressor-Guided Pooling (RGP) that directly learns discriminative instance-level representations by mimicking the human inference process. An extensive evaluation is performed using multiple MIL datasets along with pain estimation datasets to validate the effectiveness of the proposed aggregator function. Strengths: 1. This paper has made a novel contribution to the MIL field by identifying and addressing key limitations of existing MIL techniques. The significance of the novel contribution is demonstrated through a non-trivial performance gain over other competitive baselines. 2. The motivation for introducing the Regressor-Guided Multiple Instance Learning (RGMIL) framework is very novel, natural, and intuitive. 3. This paper deals with the multi-classification scenario under the MIL setting, which is unique and different from most MIL works that focus mostly on the binary-classification scenario. This unique scenario also enhances the novelty of this paper. 4. An extensive evaluation is conducted considering multiple MIL datasets. In addition to the quantitative results, the authors have done a great job explaining why the proposed technique works and how it avoids the limitations of existing MIL techniques. Weaknesses: 1. Most of the real-world video anomaly datasets are also used as a crucial testbed for the evaluation of MIL models [1, 2, 3]. Specifically, under the video anomaly detection task, the MIL model is trained with video-level annotations (without explicit access to frame-level annotations). During the prediction process, the trained MIL model is used to perform frame-level prediction. The evaluation of the proposed MIL model on the video anomaly detection task is missing. I wonder how effective the proposed technique is in terms of making frame-level predictions. Also, in the related work section, the authors may need to explain how MIL techniques aimed at solving the video anomaly detection task differ from their work. 2. Some of the figures deserve better treatment. For example, the caption of Figure 1 is not very descriptive. Further, modules in the figures (with texts) are difficult to read and understand. 3. The intuition behind Equation 6 is difficult to understand and is not very clear. References: [1]. Sultani et al. "Real-world Anomaly Detection in Surveillance Videos". CVPR2018. [2]. Sapkota et al. "Distributionally Robust Optimization for Deep Kernel Multiple Instance Learning". AISTATS2021. [3]. Tian et al. "Weakly-Supervised Video Anomaly Detection With Robust Temporal Feature Magnitude Learning". ICCV2021. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How effective is the proposed technique on video anomaly detection tasks compared to competitive baselines? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for appreciating our efforts in addressing the use of multiple-instance learning (MIL) for instance-level predictions.

Comment 1: Weaknesses (about applications) [R] In our method, RGP enhances instance-level performance explicitly by transferring the learning task to the instance feature extraction stage through a special and simple aggregation method. This is theoretically shown in Section 4.3 of the main paper and Section A.1 of the supplementary material. RGP does not conflict with current MIL-based video anomaly detection models. With the instance-level feature extractor involved in training, RGP can theoretically achieve better instance-level performance than the existing ABP-series aggregators. This work emphasizes generality and is applicable not only to pain estimation but also to other multi-classification problems. In the field of video anomaly detection, we can directly apply RGMIL for anomaly detection or simply replace the existing aggregators in other models with RGP to ensure instance-level performance. Besides, there are some similarities and differences worth noting between our experiments and video anomaly detection problems.

Main similarities:
- Both are image classification problems, possibly involving multiple classes;
- Both emphasize instance-level representation.

Main differences:
- Our experimental scenario is related to the maximum label, but video anomaly detection typically does not involve the concept of a maximum label. Even in the multi-class case, there is no inherent ordering between class labels.
- Some video anomaly detection datasets may provide videos that contain a large number of frames, potentially reaching thousands. Training single-frame features on such large videos may lead to insufficient GPU memory.

Comment 2: Weaknesses (about figures) [R] Thanks a lot for your suggestion. We will revise these figures and descriptions so that they are more readable and easily understood.

Comment 3: Weaknesses (about Equation 6) [R] As a non-parametric dynamic aggregation component, RGP shares the view of attention mechanisms that the weights should depend on the features themselves. It is also reasonable to order the instances' importance by their classification scores, which indicate the likelihood of being positive (critical) instances, and to assign weights accordingly. The current instance classification score provided by the regressor can be processed in different ways. We tested different combinations of regression methods and weight acquisition methods:
- the output score is a 1-dimensional logit with a sigmoid function, and the weights are obtained from the logit output;
- the output score is a 2-dimensional logit vector with a softmax function, and the weights are obtained from the high bit of the output;
- the output score is a 2-dimensional logit vector with a softmax function, and the weights are obtained by dividing the high bit of the output by the low bit;
- the output score is a 2-dimensional logit vector with a softmax function, and the weights are obtained by subtracting the low bit of the output from the high bit.

These designs follow the same underlying principle but differ in the actual numerical computation. We found that the last variant performed significantly better, so we adopted it and formulated it as Equation 6 in this paper. In practice, the sigmoid variant barely works.
However, as shown in Section A.2 of the supplementary material, there are other variants that also work without Equation 6. The normalization method also affects performance: we tested parameterized normalization and no normalization, neither of which was as good as simple non-parameterized normalization. The two equations are only training tricks that make RGP more practical.

Comment 4: Questions [R] Thank you very much for your suggestion. Due to time limitations, we did not add experiments on video anomaly detection datasets in this version, but we added experiments on two general multi-instance multi-class image datasets: SIVAL [1] and the new MMNIST series, as shown in the PDF in the global rebuttal. Anomaly detection is not our major research interest, but we are also interested in the results for video anomaly detection problems. Experiments on anomaly detection will be very meaningful, and we will try them later.

[1] Rouhollah Rahmani, et al. "Localized content based image retrieval." Proceedings of the 7th ACM SIGMM international workshop on Multimedia information retrieval. 2005.

---

Rebuttal Comment 1.1: Title: Comments on rebuttal Comment: Thank you for clarifying Equation 6. Currently, I do not have additional questions. I would love to see the incorporation of our discussion into the revised paper.
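As a reading of the adopted Equation 6 variant described in Comment 3 above (2-dimensional logits, softmax, weights from the high bit minus the low bit, followed by simple non-parameterized normalization and a second pass through the same frozen regressor), here is a minimal numpy sketch of one RGP branch. We assume the subtraction acts on the softmax outputs, and the shift-to-non-negative and sum-to-one steps are our guesses at the normalization, so treat this as an illustration rather than the exact released implementation:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def rgp_branch(instance_feats, W, b):
    """One RGP branch: instance_feats (t, d); frozen regressor W (d, 2), b (2,)."""
    # First pass: score every instance with the frozen linear regressor.
    probs = softmax(instance_feats @ W + b)      # (t, 2)
    scores = probs[:, 1] - probs[:, 0]           # high bit minus low bit (assumed)
    weights = scores - scores.min()              # shift to non-negative (assumed)
    weights = weights / (weights.sum() + 1e-8)   # plain sum-to-one normalization (assumed)
    bag_feat = weights @ instance_feats          # (d,) aggregated bag representation
    # Second pass: the same frozen regressor classifies the bag representation,
    # so instance- and bag-level judgments share one criterion.
    return softmax(bag_feat @ W + b)

rng = np.random.default_rng(0)
print(rgp_branch(rng.normal(size=(8, 16)), rng.normal(size=(16, 2)), np.zeros(2)))
```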
Summary: This paper presents a new model, the Regressor-Guided Multiple Instance Learning network (RGMIL), which addresses the challenge of providing discriminative instance-level representation in multi-classification MIL scenarios. It introduces an aggregator, the Regressor-Guided Pooling (RGP), that enhances the instance-level performance of Multiple Instance Learning (MIL) tasks. RGMIL outperforms existing methods in various datasets and demonstrates the potential for instance-level predictions. Strengths: 1. The paper presents a novel model, the Regressor-Guided Multiple Instance Learning network (RGMIL), which brings a fresh perspective to the field of multi-classification MIL scenarios. 2. The paper is well-structured and the problem and proposed solution are explained clearly. 3. The experiments are well-designed and the promising results effectively support the authors' claims. 4. The paper is inspiring for future research, including potential applications for multi-classification tasks such as the UCF-Crime dataset. Weaknesses: 1. The paper lacks a clear definition and purpose for the metrics used in Table 4. Additionally, a thorough analysis of the results presented in the table is missing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for appreciating our efforts in addressing the use of multiple-instance learning (MIL) for instance-level predictions.

Comment 1: Weaknesses [R] Detailed information on the metrics can be found in Section C of the supplementary material. We will give more details on the UNBC experiments and additional information for Table 4. All the results regarding MMNIST and the benchmark tests in the paper can be reproduced within 3 hours; the result on UNBC can be reproduced in days. Additionally, we provide supplementary results on two general multi-instance multi-class image datasets: SIVAL [1] and the new MMNIST series, as shown in the PDF in the global rebuttal. We will next test on more image classification datasets, such as video anomaly datasets and more general multi-class scenarios.

[1] Rahmani, Rouhollah, et al. "Localized content based image retrieval." Proceedings of the 7th ACM SIGMM international workshop on Multimedia information retrieval. 2005.

---

Rebuttal Comment 1.1: Comment: Thank you for clarifying the availability of metric definitions in Supplementary Section C. However, I'm interested in understanding how each metric evaluates specific aspects of the model and the corresponding detailed analysis of experimental results under each metric.

---

Reply to Comment 1.1.1: Title: Comments on evaluation metrics Comment: Thank you for your comment. We provide an explanation of the metrics as follows.

MAE (Mean Absolute Error), which can be seen as an L1 loss, is a commonly used loss function for regression models. It is the mean of the absolute differences between the target values and the predicted values.

MSE (Mean Squared Error), which can be seen as an L2 loss, is also a widely used regression loss function. It is the mean of the squared differences between the predicted values and the true values. By squaring the errors (letting $\varepsilon$ = true value − predicted value), MSE amplifies the errors whenever $|\varepsilon| > 1$. If there are outliers in the data, $|\varepsilon|$ can be large, and thus $\varepsilon^2$ will be much larger than $|\varepsilon|$. Therefore, compared to using MAE to compute the loss, using MSE gives more weight to the outliers. We use these two metrics to measure the overall error between the predicted values and the ground truth at the frame level during test. Lower values of MAE and MSE indicate more accurate predictions by the model. In the UNBC experiments, due to the imbalanced distribution of pain levels, a large number of frames have a pain level of 0. Therefore, if the accuracy metric is used for model evaluation, it can easily overestimate the model performance; hence, accuracy is generally not used for evaluation here.

PCC (Pearson Correlation Coefficient) measures the linear correlation between two variables, typically the predicted values and the ground truth. It ranges from −1 to 1, where a value close to 1 indicates a strong positive linear correlation, a value close to −1 indicates a strong negative linear correlation, and a value close to 0 indicates no linear correlation. PCC is widely used to assess the overall relationship or agreement between two continuous variables.

ICC (Intraclass Correlation Coefficient) is a statistical measure that quantifies the consistency or reliability of measurements made by different observers or raters. It is commonly used when there are multiple raters or multiple measurements taken on the same subjects.
ICC ranges from 0 to 1, where a value close to 1 indicates high agreement or consistency among the raters or measurements, and a value close to 0 indicates low agreement. In summary, MAE and MSE quantify the error or discrepancy between predicted and true values and are used during model training and optimization; lower values indicate better model performance. PCC and ICC, on the other hand, are evaluation metrics that assess the agreement, correlation, or consistency between predicted and true values; they provide insight into the quality of the predictions and the reliability of the measurements, and higher values indicate better agreement or consistency. Most supervised models use some of the above metrics to evaluate performance on pain estimation problems. With the same experimental settings, our model demonstrates performance similar to that of fully supervised models on these metrics, indicating comparable reliability in pain estimation tasks.
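For concreteness, a minimal sketch of how these four metrics could be computed from frame-level predictions (assuming NumPy arrays; the `icc` shown is the two-way mixed, single-measure ICC(3,1) variant, which is our assumption and may differ from the exact variant used in the paper):

```python
import numpy as np

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def pcc(y_true, y_pred):
    return np.corrcoef(y_true, y_pred)[0, 1]

def icc(y_true, y_pred):
    # Treat the ground truth and the model as two "raters" of the same frames.
    ratings = np.stack([y_true, y_pred], axis=1)   # shape (n_frames, 2)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand_mean) ** 2).sum()  # between frames
    ss_cols = n * ((ratings.mean(axis=0) - grand_mean) ** 2).sum()  # between raters
    ss_err = ((ratings - grand_mean) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```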
Summary: This paper introduces the _Regressor-guided MIL network (RGMIL)_ as a solution to the challenges in multiple instance learning, particularly in the multi-class classification scenario for pain estimation. RGMIL introduces a novel aggregator, Regressor-Guided Pooling (RGP), in place of the currently widely used attention-based aggregators. The design of RGP is based on simulating the inference process humans follow when facing similar problems. The experiments show RGP outperforming evaluation benchmarks in MIL and performing on par with supervised models for pain estimation. Strengths: - The aggregator introduced in this work is grounded in prior knowledge of the Multiple Instance Learning task construction. As a result, rather than opting for a complex black-box approach, the design remains simple. This work aligns with recent research trends that emphasize leveraging simple yet valuable prior knowledge and inductive biases to design more effective solutions. - Regressor-Guided Pooling (RGP) outperforms all MIL evaluation benchmarks, including the SOTA, by a large margin (Table 1) and also works on par with supervised models for pain estimation (Table 4). This compelling result clearly highlights the potential of the introduced method. - The available code has a well-written README and looks easy to reproduce. Weaknesses: - **Missing dataset details**: The details of the dataset construction for MIL in the pain-estimation setting are missing. One of the contributions claimed in this work is that RGMIL is the first weakly supervised deep model for pain estimation, so I feel it is important for this work to also focus on pain estimation datasets and benchmarks. - Firstly, it is not clearly stated whether the results presented in Section 4.5 are for the UNBC pain dataset. The way the dataset is constructed for the MIL setting is also not presented, which makes it difficult to understand whether the dataset is different from or the same as in the supervised setting. - Secondly, the result is presented on only a single dataset. Given that the contributions of the work include being the first weakly supervised model for pain estimation, it is important to benchmark on more datasets (for example, the BioVid dataset presented in [25]). - **Issues in paper structure and writing:** The paper lacks a cohesive flow, making it difficult to follow. The writing style is inconsistent both within sections and throughout the document. The paper needs major restructuring and rewriting to be easy to follow and understand. - The main paper contains many non-essential details and lacks conciseness. A few suggestions to improve it: the assumptions section (3.1) should be condensed into smaller paragraphs, with more extensive details provided in the supplementary material. Similarly, Sections 4.2, 4.3, and 4.4 are verbose and would benefit from retaining only crucial details in the main paper and moving the remaining content to the supplementary. - In its current state, some important contributions that support the method's effectiveness are left behind in the supplementary (A1, A2). - Figure and table captions (except Table 1) are not self-sufficient and require referencing the text for understanding. The authors should revise the captions for better stand-alone clarity. - Figure 1(b) is not clearly visible or understandable.
- **Missing important citations and explanations:** There are important statements without citations or explanations, on which the paper bases assumptions for model design (lines 143-144) and presents results on a different dataset (lines 230-231). - Lines 143-144: 'Experience has shown that when we add the aggregator ρ to the backbone + regressor model for MIL problems, the instance-level performance often decreases significantly.' I would suggest the authors cite works showing that adding an aggregator decreases instance-level performance. If it is an observation across papers, the authors should present this in a table or a figure to give the assumption a strong basis. - Lines 230-231: 'Considering multiple factors, the UNBC pain dataset isn't really a clear and flexible enough choice to present a demonstration.' What the multiple factors are is not clear, and I would suggest the authors explain them explicitly. - **Little to no focus on the second contribution:** The paper's focus on the first contribution is appreciated, but it leaves little room to highlight the second contribution, leading to confusion about the relevance of pain estimation to this work. It is referenced multiple times but given only limited space (10 lines: 326-336) in the overall paper. To improve clarity, the authors can implement either of these two suggestions: - Prioritize RGP as the main contribution and dedicate the paper to thoroughly highlighting its significance, while only briefly mentioning pain estimation as a valuable benefit of the method. This also relieves the authors of having to benchmark other datasets while fully highlighting the method and its effectiveness. - Condense some verbose sections to be more concise and move extra details to the supplementary. This ensures that the authors have more space to present benchmark results on pain estimation. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - My main suggestion for the authors is to address the writing of the paper and to highlight their contributions accordingly. The results look very compelling, but the paper is quite difficult to follow, which makes it hard to appreciate the design or importance of the method. - Other minor changes include: captions with stand-alone clarity, and dataset details for reproducibility and clear understanding. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have not mentioned any possible limitations or potential negative societal impact of this work. I would suggest the authors include some details on settings where this method may not be useful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for appreciating the effectiveness and novelty of our approach. Comment-1: About writing and descriptions. [R] Thanks for your suggestions; we will revise our writing so that the proposed method can be easily understood. Comment-2: Missing dataset details. [R] We construct the UNBC dataset for MIL as follows: we select 64 consecutive frames of a video as a training bag. To increase the amount of data, we use a sliding window with a step size of 8 to produce training bags; overall, each frame is sampled as an instance in 8 different bags (a minimal construction sketch is given at the end of this thread). In each fold of the experiment, the total number of training bags in the constructed MIL dataset averaged around 6000. These details will be added to the article. In the pain-estimation experiments, only the UNBC dataset is used, because it is the only public dataset we could obtain. For BioVid, we previously submitted an application to download the data but have not received any reply. At present we have another pain dataset collected by Xi Jing Hospital, but it cannot be made public yet. Additionally, we supplemented experiments on SIVAL [1] and the new MMNIST for general multi-class classification, shown in the PDF of the global rebuttal. Comment-3: About lines 143-144. [R] Compared with the fully supervised mode, the information available to MIL is limited. Therefore, the upper limit of MIL's instance-level performance should be that of the fully supervised mode. The two main architectures of MIL, introduced in Section 2.1, differ from the fully supervised mode only in the presence of the aggregator. We think that the inability of MIL to reach the performance of the fully supervised mode is due to the limitation of the learning stage of the new component (the aggregator). This can be verified by the MMNIST results presented in Table 2. As seen in the first column of Table 2, when we run experiments with the current mainstream models under the 10/1 (train/test bag size) mode, the performance is clearly worse than under the 1/1 (fully supervised) mode. This illustrates that current aggregators are unable to extract sufficiently accurate information for aggregation from the 10 instances, falling short of the information obtained from a single instance in the fully supervised mode, which leads to inadequate performance. Comment-4: About lines 230-231. [R] - The UNBC dataset is extremely class-imbalanced, which is atypical for MIL problems. For UNBC, the four metrics presented in Table 4 are usually used to validate performance, since accuracy on the UNBC dataset is easily inflated and unreliable due to the imbalanced data distribution, even though accuracy is the most intuitive metric and the one we would prefer to use. - In UNBC experiments, 25-fold cross-validation is widely used to test performance. It takes over a week to obtain one overall evaluation result, which reduces flexibility during the demonstration process and prevents comparative experiments with multiple aggregators under different modes. - Both the single-frame images and the feature extractor itself are large, and during training, increasing the bag length easily leads to insufficient GPU memory, limiting the experimental scenarios. Comment-5: Little to no focus on the second contribution. [R] Thanks for your suggestion. We agree with your concern that the contributions we presented are indeed not clear enough.
Different from mainstream MIL methods, RGMIL addresses the general multi-class classification problem in MIL scenarios by directly learning discriminative instance-level features. Initially, we set out to design a pain estimation method from the MIL perspective. During this study, we found that pain estimation places strong emphasis on single-frame predictions at test time. However, most existing MIL methods focus more on learning the aggregator black box itself rather than learning single-frame features, thereby ignoring instance-level performance. We also found that most MIL methods mainly address the binary-classification problem, with little consideration of the multi-classification problem. Motivated by these limitations of existing MIL methods, we designed RGMIL, which explicitly enhances instance-level performance by transferring the learning task completely to the instance feature extraction stage through a special aggregation method. It is theoretically proved (in Section 4.3 of the paper and Section A.1 of the supplementary) that this is beneficial for instance-level performance. RGMIL emphasizes generality; it applies not only to pain estimation but also to other multi-classification problems. In our work, we validate the performance on benchmark datasets, the MMNIST series, and a real application (the pain dataset, UNBC). Comment-6: Limitations. [R] Feasibility of training at instance level with excessively large bag lengths: RGMIL significantly improves instance-level performance over current methods. However, as described in Section A.1 of the supplementary, we think MIL problems may be inherently ambiguous in extreme scenarios. It is not theoretically ensured that training instance feature extraction directly is feasible in extreme cases. As shown in Table 1, with a maximum bag length of 512, instance performance decays with increasing bag length. Due to memory limitations, we did not run experiments with longer bag lengths (e.g., several thousand) to confirm feasibility; we will do our best to investigate this. Memory limitation: it is challenging to handle excessively large bags when the feature extractor is involved in training. For example, in the MMNIST experiment, a single bag of length 1024 exceeds the memory of an NVIDIA 4090 graphics card. [1] Rahmani, Rouhollah, et al. "Localized content based image retrieval." Proceedings of the 7th ACM SIGMM International Workshop on Multimedia Information Retrieval. 2005. --- Rebuttal Comment 1.1: Title: Follow-up comments Comment: I appreciate the authors' response clarifying all my questions and comments. I hope that they will do their due diligence when submitting the final version. I have additional replies to the authors' responses listed below: (1) Given that the paper was difficult to follow due to verbosity and a lack of self-explanatory figure captions, I strongly suggest paying attention to rewriting this work for the final version. Especially since the results are promising, good writing will help readers understand the importance of the proposed method. (5) Thanks for explaining your process for designing the RGMIL pipeline. I agree with the statement of your contributions: "In our work, we validate the performance on benchmark datasets, MMNIST series, and a real application (the pain dataset, UNBC)." Please clearly rewrite the contributions in the paper to reflect this.
(2), (3), (4), (6): please include the following points in the revised final version: - Additional dataset details in the paper for completeness. - For lines 143-144 and 230-231, add a short 1-2 sentence explanation to the paper covering what was explained in your reply to my questions. - Also add the limitations mentioned here in the final version. --- Reply to Comment 1.1.1: Title: Comments on current revisions Comment: Thanks again for your suggestions. Following them, we have finished part of the revisions; some details are shown below, and we will continue to revise our writing. Clearer contribution statement: 1. In the Introduction, we modified the first sentence of the second contribution to: "The effectiveness of RGMIL is validated in pain estimation." 2. In the Conclusion, after prioritizing the idea of RGP as the main contribution, we added: "We also validated RGMIL in a real application on the UNBC dataset." Clarifying ambiguous figures and tables: 1. We have redrawn the model diagram in Fig. 1 to ensure clear visibility. Fig. 2 has also been redrawn, with axis labels added to each subfigure. 2. The captions of all figures and tables have been revised for clarity. For example, the new caption of Table 3 is "RGP Information Table (64/10) on MMNIST. The first row contains the 10 images of the test bag. The indicator vector $\mathbf{\hat{Y}}$ is the model output. The remaining contents are the weights of the corresponding instances from all branches." Important explanations: 1. About lines 143-144, the new description reads: "Compared with supervised learning, the information available to MIL is limited. Therefore, the upper limit of MIL's instance-level performance should be that of the fully supervised mode. The two main architectures introduced above differ from the fully supervised ones only in the presence of the aggregator. We argue that the inability of MIL to reach the performance of the fully supervised mode is due to the limitation of the learning stage of the new component (the aggregator)." 2. About lines 230-231, the new description reads: "Due to the extreme imbalance of the pain-intensity distribution, the pain dataset is not a clear and flexible enough choice for a demonstration." 3. How the UNBC MIL dataset is constructed: we now describe this in the caption of the table presenting the UNBC results. The new caption is "Instance-level Performance Comparison with Supervised Models (*: S-O-T-A model). In the videos of the UNBC dataset, 64 consecutive frames are regarded as a training bag. To increase the amount of data, we use a sliding window with a step size of 8 to produce the training bags. The first row contains the four evaluation metrics. Following the same experimental setting as MSRAN, this experiment uses 25-fold cross-validation, and the result is the mean over the 25 folds. Statistics are collected from Deep Pain, DSHF, DBR, Multistream CNN, MSRAN, and LIAN," for stand-alone clarity. More clarity in the theoretical analysis: 1. Section 3.1 has been condensed into briefer paragraphs. 2. We reduced the verbosity of Sections 4.2, 4.3, and 4.4; for example, lines 290-295 were deleted. 3. Additionally, important analysis previously left in the supplementary (A1, A2), including the ablation study of RGP, has been moved to Section 4.4 and a newly added Section 4.5.
More experiments and analysis: 1. The additional general multi-classification experiments (SIVAL and the new MMNIST series) provided in the rebuttal phase have been added. 2. The two major limitations mentioned above have been added to the newly added Section 4.7 of the main paper.
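As referenced in Comment-2 above, a minimal sketch of the sliding-window bag construction (assuming frame features in a NumPy array; taking the bag label as the maximum frame-level pain level is our illustrative assumption, not a detail confirmed in the thread):

```python
import numpy as np

def build_bags(frames, frame_labels, bag_len=64, step=8):
    """Slice a video into overlapping MIL bags.

    frames: array of shape (n_frames, ...) with per-frame features/images.
    frame_labels: per-frame pain levels. With bag_len=64 and step=8, each
    frame ends up in 8 different bags, matching the description above.
    """
    bags, bag_labels = [], []
    for start in range(0, len(frames) - bag_len + 1, step):
        window = slice(start, start + bag_len)
        bags.append(frames[window])
        bag_labels.append(frame_labels[window].max())  # assumed labeling rule
    return np.stack(bags), np.array(bag_labels)
```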
Rebuttal 1: Rebuttal: About RGMIL: RGMIL addresses the general multi-class classification problem in MIL scenarios by learning discriminative instance-level features. It takes the different view that MIL problems can be solved by learning at the instance level, as humans do. Inherently interpretable, RGP significantly enhances instance-level performance by transferring the learning task to the instance feature extraction stage through a special and simple aggregation method. This advantage is theoretically proven in Section 4.3 of the main paper and Section A.1 of the supplementary material. When the emphasis is on single-instance predictions, RGP can theoretically achieve better instance-level performance than existing aggregators. This work emphasizes generality and is applicable not only to pain estimation but also to other general multi-classification problems. About Equation 6: As a non-parametric dynamic aggregation component, RGP shares with the attention mechanism the view that weights should depend on the features themselves. It is also reasonable to order the instances' importance by their classification scores, which indicate the likelihood of being positive (critical) instances, and to assign weights accordingly. The instance classification score provided by the regressor can be processed in different ways; we tested the following combinations of regression outputs and weight-acquisition methods (a small sketch of option 4 is given after this rebuttal): 1. the output score is a 1-dimensional logit passed through a sigmoid, and the weights are obtained from that output; 2. the output score is a 2-dimensional logit vector passed through a softmax, and the weights are obtained from the high-bit output; 3. the output score is a 2-dimensional logit vector passed through a softmax, and the weights are obtained by dividing the high-bit output by the low-bit output; 4. the output score is a 2-dimensional logit vector passed through a softmax, and the weights are obtained by subtracting the low-bit output from the high-bit output. These designs share the same underlying principle but differ in their actual numerical computation. We found that the last method performed significantly better, so we used it and formulated it as Eq. 6 in this paper. In practice, the sigmoid output barely works, but as shown in Section A.2 of the supplementary material, other methods also work without Eq. 6. The normalization method also affects performance: we tested parameterized normalization and unnormalized variants, neither of which was as good as simple unparameterized normalization. The two equations are essentially training tricks that make RGP more practical. About supplementary experiments: As requested, we additionally conducted experiments on two general image multi-classification datasets, included in the PDF: - Table 1 and Figure 1 in the PDF are the MMNIST-series instance-level results from the main paper. Following the suggestions, we improved the presentation of the charts and captions. In this series of experiments on the MMNIST dataset, the bag label is determined by the maximum value in the input bag, and the model's output at test time is obtained by selecting the highest-positioned branch among all branches with an output of 1. - Table 2 reports bag-level performance on the SIVAL dataset. SIVAL consists of 25 classes of complex objects photographed in different environments, where each class contains 60 images.
Each image is segmented into approximately 30 segments, and each segment is represented by a 30-dimensional feature vector encoding information such as the segment's colour and texture. The segments are labelled as containing the object or containing background. We selected between 1 and 10 classes as positive classes and randomly sampled from the other classes as the negative class. We set up three Linear+ReLU blocks as the feature extractor. The test scenarios involve image classification problems ranging from binary to 11-class classification. Each image is treated as a bag, resulting in a bag-level performance evaluation. Considering that the classes in the SIVAL dataset have no ranking relationship, all bits of the output indicator vector are reliable, so all branches are involved in the computation of the loss function. Continuing with the current version's approach, the model's output at test time is still obtained by selecting the highest-positioned branch among all branches with an output of 1. Despite learning instance-level features only, RGP still exhibits impressive performance at the bag level. - Table 3 reports instance-level performance on MMNIST. We adopted a different approach in constructing the dataset this time: while still using the 4-branch RGMIL, we selected images with labels 0 to 6 as the negative (background) class and images with labels 7, 8, and 9 as three different positive classes. Similar to SIVAL, there is no ordering among the different positive class labels. Each positive class and the negative class account for approximately 25% of the dataset. Within a positive bag, images of the positive class make up approximately 10%. As in the SIVAL tests, all bits of the output indicator vector are reliable during training, so all branches are involved in the computation of the loss function. Continuing with the current version's approach, the model's output at test time is still obtained by selecting the highest-positioned branch among all branches with an output of 1. Unlike in the original MMNIST series, the convergence speed of the different methods varies considerably; for example, it is very difficult for MAX and DSP to converge even after hundreds of epochs. We therefore report the average of 10 test-accuracy measurements for each model after convergence, or after 100 training epochs. RGP still dominates in this scenario at the instance level. Pdf: /pdf/a4de635987fb0fd54f427137ebfcda46e7b02f9e.pdf
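As referenced in the Equation 6 discussion above, a minimal sketch of weighting option 4 (assuming PyTorch and a per-instance 2-dimensional logit output; the variable names and the final normalization step are illustrative assumptions, not the paper's exact Eq. 6):

```python
import torch
import torch.nn.functional as F

def rgp_weights(logits: torch.Tensor) -> torch.Tensor:
    """Instance weights from 2-D regressor logits, following option 4 above:
    softmax the two logits per instance, then subtract the low-bit (negative)
    output from the high-bit (positive) output.

    logits: shape (bag_size, 2); returns weights of shape (bag_size,).
    """
    probs = F.softmax(logits, dim=-1)       # (bag_size, 2)
    scores = probs[:, 1] - probs[:, 0]      # high-bit minus low-bit, in [-1, 1]
    # Simple unparameterized normalization (an assumed choice for this sketch).
    return scores / scores.abs().sum().clamp_min(1e-8)
```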
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Revisiting Visual Model Robustness: A Frequency Long-Tailed Distribution View
Accept (poster)
Summary: In this paper, the authors investigate the high-frequency components (HFC) of images from a long-tailed perspective. They revisit the relationship between HFC and model robustness and demonstrate that the model's under-fitting behavior is attributable to the limited information content of HFC. They propose a Balance Spectrum Sampling (BaSS) strategy, which effectively counteracts the long-tailed effect and enhances the model's learning of HFC. Extensive experimental results demonstrate its effectiveness. Strengths: 1. Investigating the high-frequency components (HFC) of images from a long-tailed perspective for the model-robustness task is sound and very interesting. 2. The authors give detailed definitions and analysis of the frequency long-tailed scenario. 3. The proposed solution, BaSS, is simple and efficient. 4. Experiments are conducted on various benchmarks. Weaknesses: 1. To my understanding, re-sampling/re-weighting methods in long-tailed settings generally improve recognition of the tails at the expense of a portion of the heads; will this phenomenon weaken the recognition ability on LFC? 2. Some typos, e.g.: line 16, counteract -> counteracts; line 17, enhance -> enhances. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is this paper the first to analyze the problem of model robustness from the perspective of frequency long tails? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors do not address the limitations in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your positive comments on our work. We set out below our responses to each of the questions. - **Q1: To my understanding, re-sampling/re-weighting methods in long-tailed settings generally improve recognition of the tails at the expense of a portion of the heads; will this phenomenon weaken the recognition ability on LFC?** - **R1**: Thank you for your question! Firstly, we would like to report a phenomenon we have observed: BaSS does indeed result in a certain decline in low-frequency spectral density, as demonstrated in Figure 7 in the Appendix. This observation aligns with the trend of reducing the sampling rate or weight of the head classes. Consequently, it is valuable and intriguing to discuss whether the model's performance on LFC is weakened or not. We present the following experimental results and would appreciate discussing them with the reviewers. We selected models based on standard training, AugMix, and adversarial training, each with and without BaSS, and report their recognition performance on both LFC and HFC.

| Method | Clean acc on LFC | Clean acc on HFC |
| :---: | :---: | :---: |
| Standard training | 81.59% | 18.19% |
| **Standard training + BaSS** | 76.77% | 28.84% |
| AugMix | 85.40% | 23.64% |
| **AugMix + BaSS** | 82.77% | 34.10% |
| Adversarial training | 85.57% | 9.88% |
| **Adversarial training + BaSS** | 84.23% | 17.90% |

It can be observed that the performance on LFC shows a decreasing trend. This decline can be attributed to BaSS's focus on instance-wise balance, which in turn reduces the amount of information available in the low-frequency range; this observation aligns with the analysis presented in Section 4. Conversely, the model's recognition ability on high frequencies improves. Combining the results from Tables 1 to 4, we can conclude that BaSS benefits the model's recognition of high frequencies and enhances robustness and generalization for full-image recognition. Nevertheless, it is worth noting that this approach also has some negative impact on low frequencies; this observation inspires us to explore more effective data-sampling strategies or model-side optimization algorithms in future work. We have included a detailed discussion of these findings in Appendix F.2. - **Q2: Is this paper the first to analyze the problem of model robustness from the perspective of frequency long tails?** - **R2**: Thank you for your question. We would like to confirm that our work represents a pioneering effort in analyzing the robustness and generalizability of vision models from the perspective of frequency long tails. While some related works have focused on the class- or attribute-wise long-tail problem in image recognition tasks (see Lines 29-31 in the Appendix), the core contribution of our work lies in introducing a novel instance-wise (frequency) long-tailed problem definition, which also enriches the landscape of visual long-tailed tasks. We thoroughly investigate and analyze the impact of the long-tailed spectrum on vision models and propose an effective solution, based on a balanced spectrum sampling strategy, to address the challenges posed by the frequency long-tail distribution.
- **Q3: Some typos, e.g.: line 16, counteract -> counteracts; line 17, enhance -> enhances.** - **R3**: Thank you for reading our article carefully and pointing out these issues; we will fix them during the revision phase. We hope that the above answers resolve your doubts, and we will continue to follow your subsequent questions. --- Rebuttal 2: Comment: Thanks for the authors' detailed explanation; my concerns are well addressed. Overall, I think the paper is promising work, so I have slightly increased my score to accept. --- Rebuttal Comment 2.1: Title: Response to zwAk Comment: Dear Reviewer zwAk: We would like to extend our sincere gratitude for the time and effort you have dedicated to the review and discussion stages! We greatly appreciate your constructive comments evaluating our method along various dimensions, which we believe have significantly contributed to enhancing the quality of our work. Best regards, Paper 12846 authors
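As an aside to R1 above, a minimal sketch of splitting an image into low- and high-frequency components with a radial Fourier mask, the kind of decomposition needed to measure clean accuracy on LFC and HFC separately (assuming NumPy and grayscale input; the 0.2 radius follows the 80/20 boundary the authors describe elsewhere in the rebuttals, while the exact masking details are our assumptions):

```python
import numpy as np

def split_lfc_hfc(image, radius_frac=0.2):
    """Split a grayscale image into LFC and HFC with a radial mask applied
    to the centered Fourier spectrum; lfc + hfc reconstructs the image."""
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    max_radius = np.sqrt((h / 2) ** 2 + (w / 2) ** 2)
    low_mask = dist <= radius_frac * max_radius
    lfc = np.fft.ifft2(np.fft.ifftshift(spectrum * low_mask)).real
    hfc = np.fft.ifft2(np.fft.ifftshift(spectrum * ~low_mask)).real
    return lfc, hfc
```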
Summary: This paper tackles the problem of visual models' robustness from the viewpoint of frequency components. Inspired by the long-tailed characteristic observed in the frequency spectrum, the authors empirically show that standardly trained models are sensitive to HFC. After analyzing the reasons, the authors propose a sampling method and verify its effectiveness. Strengths: 1. The authors focus on a novel perspective, the long-tailed distribution in the frequency domain, to analyze and improve model performance. 2. The paper provides interesting insights into this problem, revealing underlying properties of the studied setting. 3. Abundant experiments and analyses are provided. 4. The paper is well written. Weaknesses: I am not an expert in this area, thus my comments should be treated with caution. 1. Baseline comparisons: BaSS is a new sampling strategy; do any other sampling methods exist for comparison? PGD-AT is from 2017 and is a rather old method. 2. It would be better to include the difference in frequency distribution before and after BaSS is adopted. 3. Flaws in writing, e.g., a typo in the Table 2 caption. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In the second row of Figure 2, why do the tail ranking indices (e.g., "80%" and "Last one") have high density in the high-frequency area on Tiny-ImageNet and ImageNet (as opposed to CIFAR10)? This inconsistency may cast doubt on the generalizability of the paper's conclusions. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Not addressed; the authors should add them. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your recognition of the novelty of our work! We set out our answers to each of your questions. - **Q1: Baseline comparisons: BaSS is a new sampling strategy; do any other sampling methods exist for comparison? PGD-AT is from 2017 and is a rather old method.** - **Other methods for comparison?** BaSS introduces an innovative sampling strategy for instance-level long-tail challenges, i.e., where the long-tail distribution is present within the image frequency domain. We have not encountered any sampling techniques designed for image features: prevailing efforts have predominantly concentrated on long-tail scenarios delineated by class or attributes, diverging substantially from our framework centered on frequency long-tail issues. As a result, a comparative analysis with these methodologies was deemed inappropriate. - **Concerns about PGD-AT being an old method.** We would like to reaffirm our positioning of BaSS, highlighting its broad applicability to a wide range of adversarial training frameworks. We selected two of the most basic adversarial training frameworks, PGD-AT and TRADES. To address your concerns, we expanded our experimental scope to include the combination of BaSS with MART, a more recent adversarial training framework that is widely used alongside PGD-AT and TRADES. The results are presented in the publicly visible PDF material and provide compelling evidence that BaSS harmonizes effectively with diverse adversarial training frameworks, yielding substantial improvements in model robustness and generalization. We did not combine BaSS with the current state-of-the-art (SOTA) algorithms, because those algorithms usually incur huge time and space overheads, which is itself a problem to be solved when improving adversarial robustness. The experimental results in Tables 1 and 2 show that our sampling strategy significantly improves the performance of adversarially trained models with almost no increase in time or space complexity (details in Appendix Table 3), and even surpasses the current SOTA algorithms in some scenarios (as shown in Table 3). - **Q2: In the second row of Figure 2, why do the tail ranking indices (e.g., "80%" and "Last one") have high density in the high-frequency area on Tiny-ImageNet and ImageNet (as opposed to CIFAR10)? This inconsistency may cast doubt on the generalizability of the paper's conclusions.** - **R2**: We appreciate your careful observation of our experimental results! We should also point out that on both Tiny-ImageNet and ImageNet, the density peak of the tail ranking index (e.g., the last one) is located at the very tail of the spectrum. We wish to emphasize that this phenomenon does not affect the consistency of the conclusions drawn from datasets of varying resolutions. Considering the substantial difference in magnitude between the lowest and highest ranks (illustrated in the first row of Figure 2), we maintain that the model's sensitivity to rankings within this range can be disregarded. Consequently, perturbations along these frequency directions have minimal influence on the model's robustness. Furthermore, it is acknowledged that the ultra-high-frequency components of an image often correspond to noise, which raises the possibility of the model responding to this particular aspect. Comparable findings can be verified in [1,2].
- **Other issues:** - **It would be better to include the difference in frequency distribution before and after BaSS is adopted.** We visualize a BaSS-sampled image and the corresponding spectral density distribution curve in Figure 7 of the Appendix. The high-frequency information content of the sampled image is enhanced, and the corresponding spectral energy density is more balanced. - **Flaws in writing, e.g., a typo in the Table 2 caption.** We will carefully review the full article and fix these issues during the revision phase. We will continue to monitor the reviewer's follow-up questions and discussions, and we note again that the results of the Q1 experiment are presented in the PDF material shared with all reviewers. **References** [1] High-frequency component helps explain the generalization of convolutional neural networks, Wang et al. [2] Investigating and explaining the frequency bias in image classification, Lin et al. --- Rebuttal Comment 1.1: Title: Update Comment: Thank you for your detailed response. The authors have addressed my concerns and I will raise my score to accept. --- Reply to Comment 1.1.1: Title: Response to oe9n Comment: Dear Reviewer oe9n: We once again thank you for your constructive comments and the valuable time spent in the review and discussion stages! We are grateful for your meaningful discussion topics and formatting suggestions. They have significantly contributed to enhancing the overall quality of our manuscript. Best regards, Paper 12846 authors
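To make the "spectral density distribution curve" comparison above concrete, a minimal sketch of computing the radially averaged power spectrum of an image, the kind of curve that could be plotted before and after BaSS sampling (assuming NumPy and grayscale input; the binning and averaging choices are our assumptions):

```python
import numpy as np

def radial_spectral_density(image):
    """Radially averaged power spectrum of the centered 2-D Fourier transform.

    Returns one mean power value per integer radius from the spectrum center.
    """
    h, w = image.shape
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2).astype(int)
    sums = np.bincount(radius.ravel(), weights=power.ravel())
    counts = np.bincount(radius.ravel())
    return sums / np.maximum(counts, 1)

# Usage: compare curves, e.g. radial_spectral_density(img) vs.
# radial_spectral_density(bass_img), to check how balanced the spectrum is.
```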
Summary: Using the long-tailed features of the spectrum, this paper investigates the model's sensitivity to HFC and attributes the observed underfitting behavior to limited information content. The authors propose the Balance Spectrum Sampling (BaSS) strategy, which enhances model learning on HFC. Experimental results show that combining BaSS with existing defense methods improves model performance. Strengths: 1. The paper introduces an interesting perspective by focusing on the long-tailed distribution in the frequency domain and revisits the relationship between high-frequency components (HFC) and model robustness. 2. The under-fitting behavior is explained through the lens of spectral entropy, offering a clear understanding of the low information entropy of HFC. 3. The proposed method achieves improved performance. Weaknesses: 1. The paper lacks a detailed discussion of the existing literature and its relevance to the topic, which could provide a stronger foundation for the research. The content in the Introduction section may not be sufficient. 2. More experiments are needed to evaluate the proposed method: 1) The proposed BaSS should be evaluated in combination with more baseline methods. For instance, MART [1] outperforms TRADES. It would be insightful to see how BaSS performs when incorporated with such stronger baselines. 2) The paper lacks a comparison with other methods, such as AWP [2]. This would provide a more comprehensive evaluation and analysis of the proposed approach. 3. There are some small grammar mistakes in the paper. For instance, in the Abstract, it should be "counteracts and enhances" instead of "counteract and enhance" in lines 16-17. More attention should be given to checking grammar details throughout the paper. [1] Improving Adversarial Robustness Requires Revisiting Misclassified Examples. ICLR 2020 [2] Adversarial Weight Perturbation Helps Robust Generalization. NeurIPS 2020 Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is the proposed method also applicable to other visual models, such as the Vision Transformer (ViT), in addition to CNNs? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No potential negative societal impact Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for affirming the innovativeness of our article and for the positive comments about the writing and experiments; we respond to your questions and suggestions below: - **Q1: The paper lacks a detailed discussion of the existing literature and its relevance to the topic, which could provide a stronger foundation for the research. The content in the Introduction section may not be sufficient.** - **R1:** Due to space constraints, the section on related work is presented in Appendix A. To ensure the integrity of the article's content, we introduced the core of the relevant work and the topic in the Introduction. We divided the literature related to the topic and the corresponding discussions into two parts: - The first part of the related work focuses on the keywords **frequency domain** and **robustness**. In lines 31-40 of the Introduction, we discussed analyses of model robustness in the frequency domain and provided an overview of relevant studies. Additionally, we highlighted two limitations in existing research, which motivated the problem formulation addressed in our work. A comprehensive description of each individual work is provided in Appendix A. - The second part of the related work revolves around the concept of the **long tail**. In lines 48-51 of the Introduction, we expounded on the distinct characteristics of the long-tail problem in visual tasks. Building upon this, we drew inspiration from the long-tail distribution observed in the image frequency domain, prompting the formulation of a novel long-tail problem in visual tasks, namely, the instance-wise long-tail problem. Correspondingly, a comprehensive account of the long-tail problem in visual tasks and classical solutions can be found in Appendix A. Thank you for your advice; we will carefully reconsider the placement of the references during the revision phase. - **Q2: Is the proposed method also applicable to other visual models, such as the Vision Transformer (ViT), in addition to CNNs?** - **R2:** This is a significant and practical question. We endorse the importance of considering transformer-based architectures (e.g., ViT) and large-scale datasets (e.g., ImageNet) in data analysis, and we obtained results consistent with the main conclusion of this paper, namely that the robustness issue of vision models mainly stems from high-frequency components (details are shown in Appendix D). This indicates that our proposed long-tail scenarios extend well to transformer-based vision models. Since the ViT model accepts image patches as inputs, and each patch itself exhibits long-tail characteristics in the frequency domain, we believe the idea of balancing image spectral energy is also applicable to improving the performance of ViT models. In particular, several studies have suggested improving model performance by better exploiting the high-frequency spectrum, which further supports our hypothesis. However, adversarial training on ImageNet with ViT models requires substantial computational cost, so we did not perform the corresponding experiments in this paper. Nevertheless, we remain committed to exploring and verifying these aspects in our future work.
- **Q3: More experiments are needed to evaluate the proposed method:** - **Combination with more baseline methods.** We supplemented the experiments with MART, as well as the combined application of BaSS and MART. The experiments used the ResNet18 and WRN34 models on the CIFAR-10 and CIFAR-100 datasets. We show some of the experimental results below (WRN34, CIFAR-10, over 5 random runs); detailed results are available in the PDF file shared with all reviewers.

| Method | Clean acc | Attacked by PGD-20 | Attacked by CW | Attacked by AA |
| --------- | ---------- | ------------------ | -------------- | -------------- |
| MART | 84.10% | 57.95% | 55.53% | 53.83% |
| MART+BaSS | **89.75%** | **64.43%** | **63.21%** | **61.93%** |

Our method demonstrates stable performance improvements on top of strong benchmark methods. We have added MART's results to Table 1 in the main text and included the corresponding references. - **Comparison with other methods.** In the paper we compare against two types of methods for enhancing robustness: classical baseline methods (corresponding to a previous question from the reviewers) and current state-of-the-art (SOTA) methods, where SOTA refers to the rankings published by RobustBench. We believe the reviewer's question concerns the second type; following the reviewer's suggestion, we have added Table 3 to the main text, including AWP and other strong adversarial defense methods, along with a comparative analysis and references. - **Q4: There are some small grammar mistakes in the paper. For instance, in the Abstract, it should be "counteracts and enhances" instead of "counteract and enhance" in lines 16-17. More attention should be given to checking grammar details throughout the paper.** - **R4:** We sincerely thank you for carefully reviewing our article and identifying these issues; we will fix them and try to eliminate similar grammar problems in the revision. We look forward to follow-up discussions with you and to responding to your further questions; as a reminder, the detailed results for Q3 are presented in the PDF material shared with all reviewers. --- Rebuttal Comment 1.1: Title: I have read the response from the authors and appreciate their work. Comment: Dear Area Chair and other reviewers, I have read the response from the authors and the comments from the other reviewers. I appreciate the authors' response and think my concerns have been well addressed. Thus, I would like to raise my rating to "weak accept". --- Reply to Comment 1.1.1: Title: Response to Reviewer P5Z5 Comment: Dear Reviewer P5Z5: We sincerely thank you for the valuable time and effort you have dedicated throughout the review and discussion process! We believe your suggestions regarding experiments and formatting will strengthen our work, and the discussion topics you raised are also immensely valuable for our future exploration. Once again, thank you for acknowledging and appreciating our work. Best regards, Paper 12846 authors
Summary: This paper studies the effect of power imbalance across frequencies in natural image datasets on the accuracy and robustness of neural-net classifiers, and empirically argues that the model is underfitted on frequencies with lower power concentration. The paper then proposes an augmentation strategy to improve the balance of the power distribution per image, and evaluates the proposed method on several datasets and tasks to illustrate its effectiveness in boosting accuracy and robustness to data corruption and adversarial attacks. Strengths: Studying the effect of spectral power imbalance on the accuracy and robustness of neural networks is a novel and significant effort, and the paper provides several interesting experiments offering insights into this effect. The proposed augmentation method seems to be effective and works well in combination with other existing augmentations. Weaknesses: I think this is a valuable paper overall; however, my main concern is a lack of clarity in its explanations and arguments, with several claims that seem incorrect and unjustified to me. Also, the experiments need better organization and more clear and nuanced explanations. I will elaborate on my specific concerns and questions below. 1. L115 claims "Inspired by the Pareto principle [34] in long-tailed theory, we propose a stable and quantitative approach"; the paper must be more specific on how the Pareto principle inspires the proposed approach and in what sense it is more stable, otherwise such broad statements hinder clarity and readability. 2. The set definitions in Eq 2 are unclear because r_n is undefined. The explanation given in L117-120 is also unclear since "region" is undefined. Please be more careful and specific when writing definitions that form the foundation of a paper. 3. L133 claims "On one hand, the tail part is often overlooked by the system due to its numerical disadvantage"; what does numerical disadvantage mean? Please be more specific in your statements. 4. In Fig 1 and Section 3.1, the details of how the loss is computed separately for HFC and LFC are missing. The colorbar is also missing. 5. In Fig 2, top ranks having distributions centered towards the higher frequencies does not mean that the loss is most sensitive to higher frequencies. Please plot gradient magnitude versus frequency radius to directly observe which frequencies have the largest effect on the loss (largest average magnitude). If lower frequencies receive larger average magnitudes than higher ones, then it cannot be claimed that "perturbations in HFC are more easier to cause errors in the model" as is done in L198. 6. Theorem 1 is just applying the formula p·log(p) to the assumed density p for natural images; this simple application does not need to be a theorem. 7. The paper does not explain or cite why p·log(p) is a sensible measure of information on the normalized spectrum (the density p). Note that the normalized spectrum does not have any immediate meaning as a probability distribution; it is a power density function. In particular, the claim in L223 that "This indicates that high-frequency bands in the tail contain substantially less information than low-frequency bands in the head" is misleading, since all that Theorem 1 indicates is that power is concentrated at lower frequencies, which is immediate from the assumption on the density p. 8. Theorem 2 seems immediate from information theory.
Could you explain why not simply state that the uniform distribution has maximum Shannon entropy instead of stating Theorem 2? 9. The explanation of BaSS in Section 4.1 is not the same as what is shown in Algorithm 1 in the Appendix: there is no actual sampling happening in Algorithm 1; it is a reweighting of the spectrum based on the ratio of log power density over power density. 10. In the experiments in Tables 1-4, the value of gamma is not specified. It is important to show an ablation study on gamma in all these settings (Fig 3 seems to be doing this, but it is unclear which plot corresponds to which specific experiment setup from Tables 1-4). Also, the results lack error bars, so the significance of the improvements is unclear. 11. Please include ImageNette in the Fig 3 results to observe the effect of varying the dataset. 12. In all of Tables 1-4, reporting the standalone performance of BaSS is also important. Also, in the situations in Table 4 where BaSS does not help, it is important to comment on why that might be happening. 13. The discussion of related work should be part of the main paper and not the Appendix. ***Some typos*** L1 claims "A widely discussed hypothesis regarding the cause of visual models' robustness is that they can exploit human-imperceptible high-frequency components"; this seems to be a typo, "models' lack of robustness" perhaps? L125: "has not yet to be fully explored" should be "has not been fully explored". L136: "regards" should be "regarding". L321: "worth noted" should be "worth noting". Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See the weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: See the weaknesses section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the suggestions that help improve the quality of our work; we make the following responses (e.g., R1 is the response to Q1): - **R1: In Lines 113-117, we have revised the text as:** "Previous analysis indicates that a few low-frequency components form the image's main content, while the many high-frequency components comprise only a minor part. The rule that a few factors determine the main outcomes is encapsulated by the Pareto Principle, the quantitative '80/20' rule found across various domains, where 80% of effects are caused by 20% of the causes. Guided by this rule, we define the boundary between frequencies using an 80/20 spectral-energy ratio." **In Line 120, we added:** "We experimented with various datasets and found that, for images of different resolutions, the high-low frequency boundary lies at a 20% radius from the spectrum's center (see Appendix B.2). This confirms the stability and reasonableness of a classification scheme based on spectral energy and the Pareto Principle." - **R2: We added the following to Eq. 2:** "$r_n$ is the Euclidean distance from the $n$-th frequency band to the center of the spectrum." We give the definition of $r_k$ in L94; $r_n$ and $r_k$ differ only in their subscripts. **We replaced "the region" with** "the set composed of $\tilde{X}[u,v]$". - **R3:** Considering that the numerical definition differs across long-tail settings, **we added the following in L133:** "(e.g., the sample size of a tail class is far smaller than that of a head class in the context of the class-wise long tail)". - **R4:** We describe the details of calculating the losses on high and low frequencies in Appendix C. We will add the colorbar, as a complement to the contour values, in the revision stage. - **R5:** We have implemented the reviewer's recommended visualization method in **Fig. 1 of the public PDF**, and the results show higher gradient-magnitude distributions in the HFC across three datasets, supporting our paper's conclusions. To further address the reviewer's concerns, we consider two points: - The evolutionary trends of the five density curves within the top 80% show specific patterns (e.g., the peak of the density curve on the Tiny-ImageNet dataset exhibits minimal movement), and data points with density values greater than 0 are predominantly situated in the tail regions, indicating that the peak is unlikely to lie in the head region. - Instead of solely observing the mean value, which might overlook the diversity of the distribution and its abnormalities, we argue that kernel density estimation offers a more accurate representation of the final results. - **Response to concerns about the theorems in Q6-Q8:** The spectrum of a single image is a statistical observation drawn from a probability distribution, reflecting the probabilities of different frequency modes (e.g., high-frequency components are only found at specific locations, such as contours). Therefore, we define the probability distribution by treating the different frequency-domain patterns as random variables. - Both theorems share the background of representing image distributions in the frequency domain and address the long-tail problem in image features; they are not mere calculation formulas and are crucial for analyzing the influence of image data on vision-model performance. - Theorem 1 innovates in frequency-entropy analysis and is superior to conventional grayscale-based image entropy.
Its benefits include: (i) reduced sensitivity to noise; (ii) compensation for spatial-relationship limitations; (iii) closer alignment with human image perception. - Theorem 2 combines maximum entropy with Theorem 1 for practical value, inspiring the method in Section 4. The results in Section 5 demonstrate its effective application to enhancing model robustness and accuracy. - **R9:** Image features such as spatial pixels and spectral density are not explicitly selectable by the sampling theorem. Thus, drawing inspiration from image sampling in the pixel domain, we use the following method: (1) discretize the features with the Fourier transform; (2) select and merge values from various regions, re-weight the different frequency-band positions as per Eq. 4, and create a new image through the inverse Fourier transform. - **R10:** In Section 4, we outline the purposes and significance of the experiments: (1) Section 4.2 details heuristic experiments exploring the BaSS strategy, using the gamma parameter to compare strategies and results; (2) Sections 4.3 and 4.4 present solutions to two robustness problems (adversarial and corruption) using the BaSS strategy, with algorithms designed for the specific training setting and without gamma-related hyper-parameters. Regarding error bars, we show results based on 5 random runs in **Figs. 1-3 of the public PDF**. - **R12**: The standalone BaSS results are shown in Appendix Tables 6-9; we will reorganize the contents of Table 4 in the appendix. For the model's decreased robustness in the "digital" column of Table 4, our analysis is as follows: [1] shows that the digital subcategories share a common feature, namely irregular corruptions ranging from low to high frequencies. AugMix resists them by adopting random data augmentation, enhancing the diversity of frequency information. Our BaSS method, which replaces one AugMix branch, may reduce the diversity of frequency-domain information in the augmented data. - **Other issues**: - (Q11) We have added ImageNette to Fig. 3 in Appendix F.2. - (Q13) Due to space limitations, please refer to the Q1 response to Reviewer P5Z5. - Typos: we fixed these issues and reviewed the full article. We have followed your valuable advice, clarified the statements, and added more experimental results. We sincerely hope that you will reconsider the overall performance and contribution of our article. We will continue to pay attention to any follow-up questions and actively discuss them. **Reference** [1] A Fourier perspective on model robustness in computer vision, Yin et al. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: The typo and notation fixes make the paper clearer. Most of my concerns are addressed, though I am still not convinced about the contribution of the theorems; I would appreciate a more direct answer to my questions 6, 7, and 8. Additionally, please make the difference mentioned in my concern 9 very clear in the paper's main body. I am updating my review accordingly. --- Reply to Comment 1.1.1: Title: Response to Reviewer 48f5 Comment: Dear Reviewer 48f5: We would like to express our gratitude to the reviewer for the thorough review and active feedback. Regarding the issues raised in your reply, we engage in further discussion with you below.
Our responses are as follows: - **Concerns about the contribution of the theorems and direct answers to Q6, Q7, Q8.** - **Response to Q6:** - Firstly, Theorem 1 provides a formalized definition of information entropy in the frequency domain, proving that the information content of the head low-frequency components is greater than that of the tail high-frequency components. Previous works, which compared the information content of different frequencies starting from image spectral density, may align with intuitive understanding; however, they do not conform to the mathematical definition of information content (as the reviewer also notes in Q7). Therefore, the hypothesis that low frequencies provide more information for model training and inference was unproven. To the best of our knowledge, no previous work has modeled a probability distribution in the frequency domain and defined the information content of frequency components. **Hence, our proposed Theorem 1 makes a significant contribution in confirming the proposition about information content in the frequency domain.** - Secondly, the image information entropy that we define from a frequency-domain perspective has **clear advantages over the definition based on spatial grayscale values.** These include: (1) it addresses the sensitivity to local noise disturbances; (2) as the frequency-domain information comes from global image-feature statistics, it can describe higher-level image features like textures, whereas grayscale values can only depict brightness information; (3) the results of frequency-domain information entropy align more closely with human perception of different image features. - Lastly, the power-law distribution in the image frequency domain is objectively present and widespread. Modeling the data distribution in the frequency domain and calculating image information entropy **will be practical for future research that discusses model performance from perspectives such as information theory.** In summary, we consider Theorem 1 to be essential and valuable, and sufficiently well-founded to be recognized as a theorem. - **Response to Q7:** We agree with the idea that each image spectrum can be considered an observation from a probability distribution. More precisely, we regard spectral patterns as a type of random variable, with images acting as events composed of various random variables. Statistical analyses of image frequency information can be seen as results of random events, reflecting the probability of different frequency patterns occurring. Intuitively, the LFCs correspond to the most common patterns in natural images, typically representing color blocks, whereas the HFCs appear only at specific locations, such as outlines. Furthermore, the spectral density distribution derived from image datasets reveals that the random variables in the frequency domain consistently follow a power-law distribution. This serves as the foundation for our calculation of information entropy across different frequency bands in the dataset. - **Response to Q8:** We provide the rationale for introducing Theorem 2 below: - Firstly, Theorem 2 is proposed against the backdrop of the long tail in the frequency domain. Furthermore, the proof of Theorem 2 is based on our definition of the probability distribution in the frequency domain and on what is proposed in Theorem 1. Therefore, it is not merely a restatement of the maximum-entropy principle.
Through Theorem 2, we present the ideal spectral distribution which, while founded on the maximum-entropy principle, also integrates the random variable $\xi$ in the frequency domain. The ideal distribution is constrained by both $\xi_{l}$ and $\xi_{h}$ in the spectrum. - Secondly, Theorem 2 serves as the theoretical foundation for the method we propose. For heuristic methods designed to operate on actual image data, merely using the original expression of the maximum-entropy principle as guidance would leave a gap in the paper's logic and neglect the actual variables in the image frequency domain. **Experimental results show that Theorem 2 exhibits universal and effective application benefits in real-world problems.** - **Regarding Q9, we have added a detailed explanation comparing BaSS with other sampling methods in Section 4.1 of the paper, based on our previous discussion. We will update this for all reviewers to see in due course.** We are grateful for the time and effort the reviewer has invested in reviewing our work, and believe that the suggested changes have enhanced the quality of our research. Best regards, Paper12846 Authors
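To make the 80/20 spectral-energy boundary from R1 above concrete, here is a minimal sketch, assuming a grayscale image stored as a NumPy array; the function name and the toy image are illustrative, not from the paper.

```python
# Minimal sketch (not the authors' code): find the radius from the spectrum
# center that encloses a given fraction (here 80%) of the spectral energy.
import numpy as np

def energy_boundary_radius(image: np.ndarray, ratio: float = 0.8) -> float:
    """Return the 80%-energy boundary as a fraction of the maximum radius."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    energy = np.abs(spectrum) ** 2
    h, w = energy.shape
    yy, xx = np.indices((h, w))
    r = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2)  # distance to center
    order = np.argsort(r.ravel())              # sort positions by radius
    cum = np.cumsum(energy.ravel()[order])     # cumulative energy, moving outward
    k = np.searchsorted(cum, ratio * cum[-1])  # first index reaching 80% of energy
    return r.ravel()[order][k] / r.max()

# Toy smooth image; on natural images the rebuttal reports a boundary
# near 20% of the maximum radius.
img = np.random.default_rng(0).standard_normal((64, 64)).cumsum(0).cumsum(1)
print(energy_boundary_radius(img))
```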
Rebuttal 1: Rebuttal: Dear Area Chair and Reviewers, We would like to thank all reviewers for their thoughtful comments and insights, which greatly helped improve our work. We have carefully reviewed and responded to every question from all reviewers, providing a clearer explanation of the content in question and adding more experimental results to validate our conclusions and discussions. We summarize the following modifications and additions: - Added more experiments: - We supplemented the experiments with MART and MART+BaSS on the CIFAR10 and CIFAR100 datasets; the results are shown in Tabs. 1-3 of the PDF file. - We added experiments with TRADES and TRADES+BaSS on the large-scale dataset ImageNette; the results are shown in the response to Reviewer fSNP. - We supplemented the visualization results of Fig. 1 in the paper to further validate our conclusions; the results are shown in Fig. 1 of the PDF. Due to computational-resource constraints, only partial experimental results are shown; we will supplement the complete experimental tables during the discussion and revision stage. - Supplements to the main text and appendix: - We improved some statements in Sections 2-3 of the article, as detailed in the response to Reviewer 48f5. - We added the results of Fig. 2 on ImageNette in Appendix F.2. - We added the content and corresponding experimental results from the discussion with Reviewer zwAk in Appendix F.2. - We added a table contrasting the individual results of BaSS, per Reviewer 48f5's Q12. - We corrected all the typo issues raised by the reviewers. During the discussion stage, we will continue to monitor the reviewers' follow-up questions and suggestions, and supplement the above content. Pdf: /pdf/a71ecb5b9c847bc634b7000edb55672359d647a3.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper addresses the problem of model robustness, which is of great importance in the machine learning field. The authors revisit this problem and discuss the relationship between high-frequency components and model robustness. They verify the effectiveness of the proposed method on several datasets compared to other baselines. Strengths: This paper addresses the model robustness problem, which is of great importance to the machine learning field. The authors verify the effectiveness of the proposed method on several datasets compared to other baselines. The proposed method is theoretically sound with supporting proofs. Weaknesses: How many times were these experiments conducted, and is it necessary to report the mean and standard deviation of the results? I noticed that the current experiments were mainly conducted on toy datasets, such as CIFAR or ImageNet subsets. What about scaling up the experiments to large-scale datasets in the real world? The current experiments mainly focus on the PGD attack. Have you considered experimenting with other types of attacks to evaluate the performance of the proposed method in different scenarios? Additionally, apart from the single baseline PGD-AT, have you tested other baselines? Is the improvement consistent for all variants? Although the proposed method is supported by theoretical proofs, it seems relatively naive. Further robustness analysis is needed for some hyperparameters in Eq. 4. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: (Just a remark) The figures in the supplementary materials are intuitive, and reorganizing the paper could improve its readability. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewer for the time and effort invested in evaluating our manuscript. We respond to each question in detail below: - **Q1: Have you considered experimenting with other types of attacks to evaluate the performance of the proposed method in different scenarios? Additionally, apart from the single baseline PGD-AT, have you tested other baselines? Is the improvement consistent for all variants?** - **Rich attack scenarios:** We evaluated the two types of model robustness that are of primary concern in the community: adversarial robustness and corruption robustness. For evaluations of adversarial robustness, we used the effective attack algorithms PGD, CW, and AutoAttack (AA). The results are presented in Tabs. 1-3 of the paper and Tab. 2 of the appendix. For evaluations of corruption robustness, we selected 15 pixel-level image-corruption algorithms and set five intensity levels for each, summarizing all the algorithms into four categories as shown in Tab. 4. The results in Tabs. 1-4 show that our proposed method can effectively defend against all the above attack scenarios. - **Adding more baselines:** In the paper, we tested two adversarial-training baselines: PGD-AT and TRADES. We report results for **TRADES on ImageNette below**. In addition, we report results for the latest adversarial-training baseline, MART, in **Fig. 3 of the public PDF**.

| Method | Clean acc | Attacked by PGD-10 | Attacked by AA |
| ----------- | --------- | ------------------ | -------------- |
| TRADES | 64.79% | 51.71% | 35.50% |
| TRADES+BaSS | 66.90% | 59.96% | 42.80% |

The above results demonstrate that the BaSS method can be applied to enhance the robustness of various baseline algorithms and remains stable under various attack evaluations. - **Q2: What about scaling up the experiments to large-scale datasets in the real world?** - **R2:** Conducting experiments on adversarial defenses on large-scale datasets is not an easy task. Due to limitations in computational resources and time, we were unable to provide the corresponding experimental results during the rebuttal phase. Judging by related work in the same field [1-2], our experimental results on the ImageNet subset already provide convincing evidence for evaluation on large datasets. Additionally, to address your concern, we summarize the advantages of BaSS that would support training on even larger-scale datasets. - **Low time-space complexity.** BaSS can be regarded as a pre-processing operation on images. Therefore, it incurs no additional memory overhead; moreover, the results in Appendix Tab. 3 indicate that BaSS barely increases the time overhead of training. - **Hyperparameter-free.** For the hyperparameters of the training process, the algorithm with BaSS adopts the same settings as the baseline. Therefore, it does not face issues such as hyperparameter optimization. As a result, even training on larger datasets will not increase the difficulty of optimization. - **Furthermore**, model robustness is widely considered to be sensitive to the resolution of the training images and the number of categories in the dataset. In our experiments, we set up comparative experiments for both factors. The experimental results show that BaSS can effectively enhance model robustness across datasets with multiple resolutions and numbers of categories.
- **Q3: How many times were these experiments conducted, and is it necessary to report the mean and standard deviation of the results?** - **R3:** Thank you for the question and suggestion. Our experimental results are based on 3 random runs; the standard deviations across the 3 runs are very small and hardly affect the results. In **Figs. 1 and 2 of the public PDF**, we report the results and standard deviations over 5 runs, which remain very stable. - **Q4: Although the proposed method is supported by theoretical proofs, it seems relatively naive. Further robustness analysis is needed for some hyperparameters in Eq. 4.** - **R4:** To address the reviewer's concern, we would like to clarify the motivation behind the theorems and summarize their importance. - Both theorems share the setting of representing image distributions in the frequency domain and address the long-tail problem in image features rather than offering a mere calculation formula; this is crucial for analyzing how image data influences vision-model performance. - Theorem 1 innovates in frequency-entropy analysis and is superior to conventional image entropy based on grayscale values. Its benefits include: (i) reduced sensitivity to noise; (ii) compensation for the limitations of spatial relationships; (iii) closer alignment with human image perception. - Theorem 2 combines the maximum-entropy principle with Theorem 1 for practical value, inspiring the method in Section 4. The results in Section 5 demonstrate its effective application to enhancing model robustness and accuracy. In Eq. 4, the parameter *B* is determined by the image size, and the parameter *τ* is the constant *e*; we will provide a clearer explanation during the revision stage (a sketch of the band re-weighting pipeline that Eq. 4 drives appears after this rebuttal thread). We do not wish to introduce hyperparameter settings into the method and thereby increase its training difficulty across different datasets. From another perspective, tuning these parameters might lead to better results, which could become a direction for optimizing BaSS in our future work. We will continue to follow and respond to any questions regarding the content of the article after the review. Meanwhile, we sincerely hope that the reviewer will reassess the contributions of our work. **References** [1] Efficient and effective augmentation strategy for adversarial training. NeurIPS 2022 [2] Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? ICLR 2022 --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for the detailed response, which addresses most of my concerns. I am updating my rating to borderline accept. --- Reply to Comment 1.1.1: Title: Response to Reviewer fSNP Comment: Dear Reviewer fSNP: Thank you for your suggestion to expand the evaluation of our method. This has indeed strengthened our paper. We sincerely appreciate the effort you put into the entire review process! Best regards, Paper12846 authors
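To illustrate the FFT, band re-weighting, and inverse-FFT pipeline described in R9 of the response to Reviewer 48f5 (the pipeline that Eq. 4 parameterizes), here is a minimal sketch; the radial band split and the example weights are illustrative assumptions, not the paper's Eq. 4.

```python
# Minimal sketch (not the paper's Eq. 4; the radial band split and the
# example weights below are illustrative assumptions): FFT the image,
# re-weight each radial frequency band, and transform back.
import numpy as np

def reweight_bands(image: np.ndarray, band_weights: np.ndarray) -> np.ndarray:
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = spectrum.shape
    yy, xx = np.indices((h, w))
    r = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2)  # distance to center
    n_bands = len(band_weights)
    # Map each frequency position to one of n_bands radial bands.
    band_idx = np.minimum((r / r.max() * n_bands).astype(int), n_bands - 1)
    spectrum *= band_weights[band_idx]                    # per-band re-weighting
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))

# Illustrative usage: damp the lowest band, keep the higher bands.
img = np.random.default_rng(0).random((32, 32))
out = reweight_bands(img, band_weights=np.array([0.5, 1.0, 1.0, 1.0]))
```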
null
null
null
null
null
null
How to Select Which Active Learning Strategy is Best Suited for Your Specific Problem and Budget
Accept (poster)
Summary: Some recent works have shown that typical examples are more beneficial than more uncertain ones under low-budget settings in active learning. However, there has been no prior work that addresses how to distinguish whether a budget is in the low- or high-budget regime, or how to determine which approach should be taken. The authors propose SelectAL, an algorithm that handles these tasks using a derivative-based test, for the first time. The algorithm is theoretically and empirically supported in Sections 2 and 4, respectively. Strengths: 1. This is the first paper that addresses how to automatically determine selection strategies based on the budget in active learning. 2. The authors provide some theoretical justification for the derivative-based test method. Weaknesses: 1. Although Strength 1 is a great contribution of this paper, there are several assumptions, some of which may not be realistic. Intuitively, efficiency and realizability are reasonable assumptions, but it would be hard to check whether universality is generally true or not. Furthermore, the uniqueness of $B_{eq}$ (lines 120-121) seems like a strong assumption. 2. It is somewhat hard to follow the flow of this paper. This is partially because of the complexity of the theoretical work, but also because there are many assumptions needed to justify the theory. Upon clarification of the questions below, I am willing to increase the score. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. I am very confused by the definition of the generalization errors. If my understanding is correct, $E_{low}$ and $E_{high}$ are the generalization errors on the same unseen test set using $\mathcal{L}_{low}$ and $\mathcal{L}_{high}$, which are trained respectively on $\mathcal{D}_{low}$ and $\mathcal{D}_{high}$. How is this related to the equation in line 111? 2. What the equation in line 111 is saying is that the generalization error of $\mathcal{L}$ trained on $B$ examples can be expressed as the convex combination of $E(qB)$ and $E(\alpha(1-q)B)$. But $E(qB)$ is from a learner trained only on $qB$ examples and $E(\alpha(1-q)B)$ is from the other learner trained only on $(1-q)B$ examples. Is it generally true or even close to the reality? If the generalization error were a function of the test examples given the same learner, I think the convex combination would make sense, but since it is a function of the training examples, I am not sure it is reasonably true. 3. Where do the equalities for $q$ come from for $B_{low}$ and $B_{high}$ in Appendix A? 4. If the uniqueness of $B_{eq}$ does not hold, what happens to Eq. (3), particularly for the case $|A| \rightarrow 0$? 5. Could the authors provide more details about the experimental setting used to produce Figure 2, such as a linear $\mathcal{L}$, the dataset, and the reasoning behind choosing an exponential error function? 6. For SelectAL, did the authors observe some cases where {low, high, rand} are selected alternately, unlike the expected monotonic behavior? Can the authors provide the sequence of selections on CIFAR-10 and ImageNet-50? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Please refer to the weakness and questions above.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for the detailed and precise list of questions. We wish to emphasize that some brevity in the manuscript was called for because the general framework is not original - it is adopted, with all its assumptions, from the work of [1]. That work demonstrated that the model, despite its simplified assumptions, allows for a clear differentiation between low and high budgets, and can further be subjected to rigorous analysis. We note that due to the inherent simplifications in the model, some assumptions are hard to justify in realistic settings, and therefore any conclusion derived from the model requires exhaustive empirical testing to validate its relevance in real settings. Answers to the questions: --- 1+2) The model under analysis consists of two learners, $L_{low}$ and $L_{high}$, trained on separate datasets $D_{low}$ and $D_{high}$, respectively. $L_{low}$ is trained and evaluated solely on $D_{low}$, while $L_{high}$ on $D_{high}$. We then define a mixture model, combining $L_{low}$ and $L_{high}$. The model is trained and tested on a mixture distribution, where each example is drawn from $D_{low}$ with probability $p$ and from $D_{high}$ with probability $(1-p)$. The main question such a framework can address is whether changing the training distribution of the combined model, while keeping the test set unchanged, is beneficial. Under certain assumptions, the results suggest that in low-budget scenarios, more examples should come from $D_{low}$, and in high-budget scenarios, more examples should come from $D_{high}$. Since the distribution of the data is a convex combination of 2 distributions, so is the error of the combined model, which is depicted in line 111. A combined model that gets $qB$ examples from $D_{low}$ and $(1-q)B$ examples from $D_{high}$ is expected to have a generalization error of $pE(qB) + (1-p)E(\alpha (1-q)B)$ on the mixture distribution. --- 3) In Eq. 4, with specific values for $p$ and $q$, solving for $B$ determines the budget at which selecting $qB$ examples from $D_{low}$ yields the best generalization error. By setting $p=q$, we find a budget ($B_{eq}$) at which the optimal strategy is to keep the distribution unchanged, namely, active learning is not needed. Any smaller budget would necessitate a low-budget strategy, while a higher budget would require a high-budget strategy. In practice, we only control the source from which the active set of examples $\mathbb{A}$ is sampled, but not the source of the remaining $B-|\mathbb{A}|$ examples. Sampling all the examples in $\mathbb{A}$ from $D_{low}$ results in a total of $\left(p+\frac{|\mathbb{A}|(1-p)}{B}\right)B$ examples being sampled from $D_{low}$ overall. Plugging this $q$ into Eq. 4, we obtain the maximal budget $B$ for which this strategy is favorable. We refer to this budget as $B_{low}$. Similarly, the concept applies to $B_{high}$, which defines the smallest budget for which sampling $\mathbb{A}$ from $D_{high}$ is optimal. --- 4) If $B_{eq}$ is not unique, it implies that Eq. 4 has multiple solutions, suggesting the presence of a multitude of low- and high-budget regimes. However, in our empirical investigations, we have not observed this phenomenon. When multiple $B_{low}$ and $B_{high}$ values exist, they delineate distinct budget thresholds. For budgets below the smallest $B_{low}$ value, the best strategy involves sampling the entire active set from $D_{low}$.
Conversely, for budgets surpassing the largest $B_{high}$ value, the optimal strategy is to sample the entire active set from $D_{high}$. In the intermediate range between these two thresholds, determining the precise optimal strategy becomes more challenging, and random sampling may be preferred. As $|\mathbb{A}| \rightarrow 0$, the minimum $B_{low}$ approaches the maximum $B_{high}$, resulting in the same optimal strategy in the limit as described in Eq. 3. --- 5) There is no specific dataset or learner in the experiment in Fig. 2. Once the error function is defined, datasets and learners are not explicitly required -- the analysis is done based on the error function alone. The reason we chose the exponential form for the error function is twofold. Firstly, when examining the error functions of neural networks on real datasets, they often exhibit behavior that resembles an exponential pattern. Thus, using an exponential form in this example allows us to analyze a scenario that aligns with the behavior of empirical error functions in realistic settings (a worked toy example of this setup appears after this rebuttal). Secondly, we opted to utilize the same example as [1], a recognized and relevant illustration, ensuring continuity with prior art. --- 6) In all our experiments and across various settings, we consistently observed a monotonic pattern in SelectAL's behavior. Specifically, SelectAL followed a sequence of the low-budget strategy for several rounds, then the random strategy for another set of rounds, and then the high-budget strategy for the remaining rounds. For both the CIFAR-10 and CIFAR-100 datasets, as depicted in Fig. 5, the selection sequence of SelectAL was as follows: it initially employed the low-budget strategy for 3 iterations, then the random strategy for 1 iteration, and from that point onwards it favored the high-budget strategy. The specific values for these iterations are highlighted in Fig. 5, using orange and green dashed lines. --- In closing, we sincerely wish to thank you for your valuable feedback. This thorough review of our work and the insightful questions you raised help us recognize the areas that need further clarification, strengthening our work. In the camera-ready revision, we will carefully address each of the concerns you have raised, aiming to enhance the overall coherence and comprehensibility of the manuscript. We are committed to incorporating your feedback to deliver a more polished and reader-friendly paper. --- [1] Hacohen, Guy, et al. "Active Learning on a Budget: Opposite Strategies Suit High and Low Budgets." ICML 2022.
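A worked toy example of this exponential setup follows; it is a sketch under assumptions, not the paper's experiment. The error form $E(x) = e^{-cx}$ and the constants $p$, $\alpha$, $c$ are illustrative (here $\alpha < 1$ encodes that examples from the harder distribution are less sample-efficient). For each budget $B$ it numerically minimizes the mixture error from answer (1+2), $pE(qB) + (1-p)E(\alpha(1-q)B)$, over $q$, compares the optimizer $q^*$ with $p$, and evaluates the crossover budget obtained by setting $q^* = p$ in the first-order condition.

```python
# Toy illustration (constants are illustrative, not from the paper).
import numpy as np

p, alpha, c = 0.5, 0.5, 0.05   # mixture weight, data-efficiency factor, decay rate

def mixture_error(q: np.ndarray, B: float) -> np.ndarray:
    # p*E(qB) + (1-p)*E(alpha*(1-q)*B), with E(x) = exp(-c*x).
    return p * np.exp(-c * q * B) + (1 - p) * np.exp(-c * alpha * (1 - q) * B)

qs = np.linspace(0.0, 1.0, 10001)
for B in (20, 50, 100, 200):
    q_star = qs[np.argmin(mixture_error(qs, B))]
    regime = "low-budget" if q_star > p else "high-budget"
    print(f"B={B:3d}: q*={q_star:.3f} -> {regime} strategy favored")

# Crossover budget B_eq, from setting q* = p in the first-order condition:
B_eq = np.log(alpha * (1 - p) / p) / (c * (alpha * (1 - p) - p))
print(f"B_eq ~= {B_eq:.1f}")   # budgets below favor D_low, above favor D_high
```

With these illustrative constants, the optimal fraction drawn from $D_{low}$ shrinks as the budget grows (roughly 0.80 at $B=20$ down to 0.38 at $B=200$), with the crossover near $B \approx 55$, matching the qualitative claim above.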
Summary: This paper considers the selection of active learning methods under different budgets. Previous studies have shown that different active learning methods perform differently under different labeling budgets, and incorrect selection can lead to model performance inferior to that of random sampling baselines. To address this issue, the authors conducted a theoretical analysis and proposed the SelectAL method, and related experiments showed that this method can achieve consistent performance improvements under different budgets. Strengths: 1. The idea presented is interesting, and it tackles an important issue in active learning. 2. The experimental results indicate that the proposal consistently improved performance, supporting the claims made in the paper. Weaknesses: 1. There is room for improvement in the technical presentation. 2. The connection between the theoretical results and the practical methods seems to be weak. Miscellaneous: The text font in the figures should be kept the same as in the main body. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. What is the physical meaning of $E_{low}(x)=E_{high}(\alpha x)$ in the assumptions (Section 2.1)? Can you provide a more intuitive explanation? 2. The theoretical results (Section 2) assume that there is a unique and minimal solution for Eq. (1). What kind of $E$ satisfies this assumption, and is this condition general enough to support practical applications? 3. The proposed method (Section 3) seems weakly connected to the theoretical analysis. The theoretical results show that under specific assumptions, there exist sampling boundaries $B_{low}$ and $B_{high}$ for the low-budget method and the high-budget method, which can be calculated through closed-form solutions. However, the proposed SelectAL method relies on a train-and-evaluate procedure to identify the type of samples that are currently lacking. Can you provide more connections between the theoretical results and the practical methods? 4. In the experiments (Figure 6), there are indeed performance differences between different AL methods, but the differences are small (≤2%). Have you observed more severe phenomena? In Tables 1 and 2, there are similar phenomena, and the performance differences between all methods are not significant. Specifically, in the low-budget scenario, the classification performance is so poor that the model cannot be deployed. Is it necessary to introduce complex methods to deal with this? Perhaps random sampling followed by the consideration of high-budget methods is already sufficient. --- Thanks for the detailed clarifications, which have addressed my concerns. I would like to keep my score. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors do not provide a discussion about the limitations and potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **$B_{low}$ and $B_{high}$, and the connection between the theoretical analysis and the practical methods** Indeed, the theoretical framework presented in this study relies on a simplified model. This makes possible a rigorous analysis and the derivation of closed-form solutions, unlike the involved analysis of real-world deep network training scenarios. While these closed-form expressions may be irrelevant in real deep learning scenarios, approximating $B_{low}$ and $B_{high}$ plays a vital role in SelectAL. The removal of examples in Algorithm 1 serves the purpose of calculating whether the current budget falls above or below these thresholds, as explained in detail in Figure 6. Overall, our whole approach is greatly motivated by the theoretical results; in effect, the main algorithm puts the theoretical results into actual use by empirically evaluating the thresholds and boundaries identified in the analysis. The following discussion may clarify further connections between the theoretical results and the actual method. We note that the proposed theory conveys two main such insights, which are implemented and tested in our suggested method: 1. The effectiveness of a derivative-like test, highlighted in Section 2.2 and emphasized in Equation 1, proves to be valuable in determining the budget that best suits the problem at hand. This theoretical result is put into action in the suggested algorithm, which perturbs the given labeled set to discern the appropriate budget regime. 2. The theoretical analysis in Section 2.3 demonstrates that choosing a mixture of active learning (AL) strategies in each AL round can be well approximated by selecting a single AL strategy in each round and allowing for the flexibility to switch strategies between rounds. This insight significantly simplifies the algorithm, as it shifts the focus from selecting the best mixture of several strategies to making a binary choice between two distinct strategies in a pure manner. Additionally, as observed in Fig. 5, allowing for a different pure AL strategy in each round outperforms the use of a single pure strategy throughout, validating the predictions made in our analysis. For the camera-ready version, we will add a comprehensive discussion that explicitly emphasizes these points, thereby reinforcing the cohesion between the theoretical foundations and the practical ingredients of the proposed approach. --- **$E_{low}(x)=E_{high}(\alpha x)$** The error functions $E_{low}$ and $E_{high}$ measure the generalization error of a classifier as a function of the size of its training data. For example, it could be some exponentially decreasing function: the error drops fast as the size of the training data first increases, but as more data is added there is a diminishing-returns effect. $E_{low}$ and $E_{high}$ are two such functions, for two similar classifiers, trained on either an easy dataset, corresponding to the low budget, or a hard dataset, corresponding to the high budget. The assumption $E_{low}(x)=E_{high}(\alpha x)$ states that, in such a case, the error functions come from a similar family of functions, i.e., if $E_{low}$ decreases exponentially, then $E_{high}$ also does, but with a different rate of decay (a concrete instance is spelled out after this rebuttal). This assumption is rather realistic -- measuring the error functions of neural networks on different datasets indeed shows that they decay in an exponential-like manner but with different decay rates. --- **Uniqueness of Eq. 1** In the case where there are multiple solutions for Eq.
1, the core analysis and its qualitative implications remain unchanged. In such a case, we observe the existence of several thresholds $B_{eq}$, each having its distinct $B_{low}$ and $B_{high}$. To maintain consistency in our analysis, we can employ the minimal threshold as $B_{low}$ and the maximal threshold as $B_{high}$, while adopting a random AL strategy within this range. It is important to highlight that, in practical scenarios of training deep networks on real datasets, the observed solutions have consistently been unique (see Fig. 6 for a concrete example). --- **Size of the differences observed** In Tables 1 and 2, we acknowledge that the differences between models trained using active learning (AL) strategies and those trained on random examples might appear relatively small. This is due to the fact that we consider a single round of AL, where only a limited number of examples are chosen by the AL strategy. Consequently, the impact of these few added examples on the overall training process may not be substantial. However, it is important to note that the primary objective in this context is to assess whether there is an improvement or deterioration resulting from the AL approach. In contrast, in Fig. 5, we demonstrate the results of a multi-round AL experiment, where a larger fraction of examples is selected by AL strategies. In this multi-round setting, the improvements achieved through active learning are significant and noticeable. Regarding the low-budget results, it is essential to consider that we are dealing with a dataset containing only 200 examples. In such limited-data scenarios, achieving substantial improvements beyond what is reported in the tables can be challenging. This closely mirrors real-life situations where data can be sparse, making active learning techniques all the more valuable. However, we also recognize that there are potential ways to enhance results in low-budget scenarios, especially when unlabeled examples are available and a semi-supervised approach is employed. In such cases, the separation between low- and high-budget regimes has been demonstrated, and the use of SelectAL may be advantageous. While this is a promising avenue, we want to emphasize that our manuscript's main focus is not on semi-supervised scenarios but on showcasing the effectiveness of SelectAL in identifying the budget regime of the problem in advance. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed clarifications, which have addressed my concerns. I would like to keep my score and recommend acceptance of this paper.
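As a concrete instance of the assumption discussed above (with illustrative constants $a$ and $b$, not from the paper): if the hard-dataset error decays exponentially, the assumption pins down the easy-dataset error as the same exponential with a rescaled rate,

$$E_{high}(x) = a\,e^{-bx} \quad\Longrightarrow\quad E_{low}(x) = E_{high}(\alpha x) = a\,e^{-b\alpha x},$$

so the two curves belong to one family and differ only in their decay rates, $b$ versus $\alpha b$.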
Summary: The paper studies an algorithm-selection problem for active learning under different labeling budgets. A theoretical analysis is conducted. The proposed algorithm uses a derivative-based approach, removing small sets of labeled examples to approximate the effectiveness of different active learning algorithms. Empirical results across different labeling budgets show that SelectAL chooses the best active learning algorithm. Experiments are conducted on CIFAR-10, CIFAR-100 and ImageNet-50. Strengths: 1. The paper studies an important problem of how to choose the best active learning algorithm for a labeling budget. 2. Experiments demonstrate the effectiveness of the method in specific settings. 3. It is interesting to empirically see that random sampling performs as well as other active learning algorithms when given a moderate labeling budget. Weaknesses: I have major concerns with the writing and structure of the paper. 1. The theoretical analysis and the actual algorithm seem to be very loosely connected. Currently, Section 3 contains very little motivation from Section 2. Also, the analysis framework makes an assumption on universality, making it unlikely to reflect the performance in practice. 2. The data-removal process from a labeled set is not immediate to the reader but is the key underlying tool for the derivative approximation. How does one remove labeled examples from a labeled set using an active learning algorithm? 3. Throughout the paper, the data-removal algorithms used are TypiClust and inverse-TypiClust. Instead of formulating the algorithm as using general active learning algorithms for removal, the authors could discuss in more depth why these two algorithms are actually effective in predicting low- and high-budget scenarios. Furthermore, the experiments seem to be inconsistent. Specifically, even though CIFAR-10 and CIFAR-100 have the same pool size, the experiments are conducted with inconsistent budgets. For example, 7k+1k and 25k+5k are used for CIFAR-10 while 9k+1k and 30k+7k are used for CIFAR-100. Since the paper's focus is on different labeling budgets, I believe more budget settings are also needed beyond just three. Lastly, algorithm-selection methods have been studied before, but not for different budgets. The paper could benefit from the following related work: 1. Baram, Y., Yaniv, R. E., & Luz, K. (2004). Online choice of active learning algorithms. Journal of Machine Learning Research, 5(Mar), 255-291. 2. Hsu, W. N., & Lin, H. T. (2015, February). Active learning by learning. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 29, No. 1). 3. Pang, K., Dong, M., Wu, Y., & Hospedales, T. M. (2018, August). Dynamic ensemble active learning: A non-stationary bandit with expert advice. In 2018 24th International Conference on Pattern Recognition (ICPR) (pp. 2269-2276). IEEE. 4. (Note the first version came out Feb 2023.) Zhang, J., Shao, S., Verma, S., & Nowak, R. (2023). Algorithm Selection for Deep Active Learning with Imbalanced Datasets. arXiv preprint arXiv:2302.07317. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. What is the underlying intuition and reasoning for why TypiClust and inverse-TypiClust can correctly predict the budget scenarios? 2. Why were the particular initial seed-set size and batch size selected for CIFAR-10 and CIFAR-100? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Sufficient. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Connections between the theoretical analysis and the actual algorithm presented:** The theoretical analysis presented in our work conveys two main take-home messages: 1. The effectiveness of a derivative-like test, highlighted in Section 2.2 and emphasized in Equation 1, proves to be valuable in determining the budget that best suits the problem at hand. This theoretical result is put into action in the suggested algorithm, which perturbs the given labeled set to discern the appropriate budget regime. 2. The theoretical analysis in Section 2.3 demonstrates that choosing a mixture of active learning (AL) strategies in each AL round can be well approximated by selecting a single AL strategy in each round and allowing for the flexibility to switch strategies between rounds. This insight significantly simplifies the algorithm, as it shifts the focus from selecting the best mixture of several strategies to making a binary choice between two distinct strategies in a pure manner. Additionally, as shown in Fig. 5, allowing for a different pure AL strategy in each round outperforms the use of a single pure strategy throughout, validating the predictions made in our analysis. We understand from the reviews that these connections, between the analysis and the empirical section, could be better elucidated in the current manuscript. Accordingly, for the camera-ready version, we will add a comprehensive discussion that explicitly emphasizes these points, thereby reinforcing the cohesion between the theoretical foundations and the practical ingredients of the proposed approach. --- **Removing data using an AL algorithm, and the choice of TypiClust and inverse-TypiClust:** The rationale for choosing the specific methods of TypiClust and inverse-TypiClust is the following: To remove examples from a labeled set using an active learning algorithm, we must be able to consider the AL algorithm as if it operates with an empty labeled set, denoted by $\mathbb{L}$, while its unlabeled pool $\mathbb{U}$ is composed of the original labeled dataset. This restricts our choice of AL algorithms quite drastically. This is because most (if not all) uncertainty-based algorithms are not well-defined in such cases, since they typically make use of a classifier trained on $\mathbb{L}$, which in our scenario is empty. In contrast, TypiClust [1] and inverse-TypiClust present suitable solutions for these settings, as they tackle a covering problem directly on the data itself, independent of any specific classifier trained on $\mathbb{L}$. Additionally, in the original paper it was shown that TypiClust is a very good performer when given an empty $\mathbb{L}$. Based on the above, and the lack of competitive alternative methods that can be effective when $\mathbb{L}$ is empty, we chose TypiClust and inverse-TypiClust (a toy sketch of such typicality-based removal appears after this rebuttal thread). This choice was empirically tested (see Fig. 6), where it was shown to be effective. While this concept is presently explained in the manuscript in the paragraph starting at line 218, we will emphasize and clarify this point in future revisions, as it is an important part of our suggested algorithm. Nevertheless, it is essential to highlight that the proposed algorithm is not exclusive to this choice of TypiClust and inverse-TypiClust. Alternative AL strategies, such as ProbCover [2] as a low-budget option or CoreSet [3] as a high-budget option, can also be effective choices in place of TypiClust and inverse-TypiClust, respectively. In Fig.
1 in the attached PDF, we report the decisions made by SelectAL when using ProbCover and CoreSet instead of TypiClust and inverse-TypiClust, demonstrating this point. We will add this figure to the appendix and discuss it in the main paper. --- **Different budget settings for CIFAR-10 and CIFAR-100** The size of the low-budget regime varies across different tasks. Intuitively, in the low-budget regime, you only have enough examples to provide a coarse description of the underlying problem. As the difficulty of the task increases, the low-budget regime becomes larger. Since CIFAR-100 is a more challenging problem compared to CIFAR-10, due to its tenfold increase in the number of classes, it has a larger low-budget regime despite both datasets sharing the same pool size. Table 1 in the manuscript showcases specific settings for a single-round active learning scenario, selected to be representative of different budget levels one might encounter. Considering that the low-budget regime in CIFAR-100 is larger, we appropriately adjusted the settings for the mid-budget and high-budget scenarios to reflect this difference. Having said all that, it is important to note that Table 1 is a **showcase** and by no means shows all the conditions tested. For example, Fig. 5 shows results with many more cases. It summarizes the results of experiments with SelectAL in multi-round AL settings, effectively testing it across a large array of possible budgets, including low, mid, and high budgets, for both CIFAR-10 and CIFAR-100. This figure clearly shows that SelectAL works well in many settings, where it outperforms other AL strategies by a significant margin. --- **Related work** Thank you for bringing this set of interesting papers to our attention. Indeed, they are not directly related to our work, as they focus on selecting AL algorithms under different settings than ours. Nevertheless, it will strengthen the paper to discuss our method in a broader context. This will be done for the camera-ready revision. --- [1] Hacohen, Guy, et al. "Active Learning on a Budget: Opposite Strategies Suit High and Low Budgets." International Conference on Machine Learning. PMLR, 2022. [2] Yehuda, Ofer, et al. "Active learning through a covering lens." Advances in Neural Information Processing Systems 35 (2022): 22354-22367. [3] Sener, Ozan, and Silvio Savarese. "Active Learning for Convolutional Neural Networks: A Core-Set Approach." International Conference on Learning Representations. 2018. --- Rebuttal Comment 1.1: Comment: Thank you for your clarifications. I believe most of my concerns are addressed. The only remaining problem is that, in the PDF, the figure does not seem to contain the performance of SelectAL? --- Reply to Comment 1.1.1: Comment: We deeply appreciate your prompt feedback and the opportunity to further clarify our work. The figure in the PDF indeed does not show the performance of SelectAL. Instead, the figure is focused solely on illustrating the decision-making process within the SelectAL algorithm, rather than explicitly showcasing its performance enhancement. In a manner analogous to Figure 6 in the original manuscript, this figure demonstrates, for each budget, the values that SelectAL will calculate when perturbing the data. As this figure was calculated using ProbCover and CoreSet instead of TypiClust and inverse-TypiClust, it shows the decisions that SelectAL will make using these AL strategies instead. Since the qualitative behavior depicted in this figure is the same as the behavior in Fig.
6a in the original manuscript, the resulting SelectAL will perform identically to the SelectAL based on TypiClust and inverse-TypiClust. Therefore, the performance of SelectAL using these underlying strategies will be identical to the performance depicted in Table 1 and Fig. 5a, which is already shown to be good. Thank you once again for your valuable engagement with our work. We would be more than happy to address any other concerns you may have.
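To make the removal idea concrete, here is a toy sketch of typicality-based removal, assuming precomputed feature embeddings of the labeled set; it is not the TypiClust implementation (which additionally clusters the pool and picks typical points per cluster), and all names are illustrative.

```python
# Toy sketch (not the TypiClust implementation). Scores each labeled point
# by typicality = 1 / (mean distance to its k nearest neighbours), then
# removes either the most or the least typical points.
import numpy as np

def typicality(feats: np.ndarray, k: int = 10) -> np.ndarray:
    # Dense pairwise distances; for large pools use a KNN index instead.
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # ignore self-distance
    knn = np.sort(d, axis=1)[:, :k]           # k nearest-neighbour distances
    return 1.0 / knn.mean(axis=1)

def remove_examples(feats: np.ndarray, n_remove: int, most_typical: bool) -> np.ndarray:
    """Return indices of the points kept after removal."""
    order = np.argsort(typicality(feats))     # ascending typicality
    drop = order[-n_remove:] if most_typical else order[:n_remove]
    return np.setdiff1d(np.arange(len(feats)), drop)

feats = np.random.default_rng(0).standard_normal((200, 16))
keep_a = remove_examples(feats, 20, most_typical=True)   # "TypiClust-like" removal
keep_b = remove_examples(feats, 20, most_typical=False)  # "inverse-TypiClust-like" removal
```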
Summary: The literature has shown that the optimal type of active learning (AL) strategy depends on the type of budget. Methods based on uncertainty sampling are most effective when the budget is large, and methods based on typicality work better in the low-budget setting. However, what qualifies as "large" or "low" depends on different criteria, such as the data set and its size, the type of neural network architecture that is trained, etc. The paper proposes a general approach that automatically determines whether a given setting is low or high budget by decoupling the data set into different parts. The method is divided into two parts. In the first part, a different active learning strategy is used depending on whether a labeled image belongs to the low- or high-budget setting. In the second part, unlabeled images are added to the labeled set. Strengths: Quality and clarity: The general motivation of the approach (i.e. combining low- and high-budget settings and using different strategies for each) is clear and makes sense given the recent AL literature. The first part of the algorithm is clear and more or less described in Algorithm 1. The clarity of the second part could be improved by writing its pseudo-code (in the main paper or appendix). Originality: Similar ideas that consider different strategies for different parts of the data set were recently published in the literature (e.g. [A,B]). Jain et al. [A] show that low-budget settings follow a linear law whereas high-budget settings follow a power law in the standard classification setting. They also try to estimate whether a setting is low or high budget. Mahmood et al. [B] also consider learning the optimal budget over multiple datasets (in the multi-variate case) to improve model performance while minimizing labeling cost. This is related to the case where each dataset corresponds to a type of budget. However, [A,B] do not consider active learning strategies. [A] Jain et al., A Meta-Learning Approach to Predicting Performance and Data Requirements, CVPR 2023 [B] Mahmood et al., Optimizing data collection for machine learning, NeurIPS 2022 Significance: The idea of the paper is interesting, as it allows one to automatically find the optimal AL strategy and type of budget. However, I wonder how useful it would be in real-world applications, since it seems much more expensive to run than standard AL approaches (it constantly checks how much samples improve performance and then determines whether they should be handled under the low or high budget). Weaknesses: - As mentioned above, the whole method should be clearly written in the main paper or appendix to improve clarity. - Is the method scalable to real-world applications? If so, isn't it cheaper to select a few uncertainty strategies (in large-scale applications) to optimize the overall labeling cost? - Due to its nature, the proposed method SelectAL returns the same scores as the best baselines (TypiClust and ProbCover, according to Tables 1 and 2). Would there be a way to combine low- and high-budget strategies to obtain significantly better performance than a single baseline? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the references; indeed, a discussion of low and high budgets, independent of active learning, will put the present work in greater context. We will modify our related-work section accordingly and cite the appropriate references. Specific comments raised by the reviewer: **Clarity**: To improve the clarity of our paper, we will add pseudo-code for the second part of the algorithm to the appendix and reference it in the main text. **Scalability**: Empirically, our proposed method -- SelectAL -- demonstrates robustness and scalability to real-world applications. The total cost of SelectAL involves training a neural network once on each of two variants of the labeled set, denoted in the text by $\mathbb{L}$. While training two networks incurs this overhead, it is comparable to the training cost of the model that would use the labeled data afterward. In light of the reviewer's concern, and in order to further reduce the method's cost, in future work we will examine the option of performing the test in Alg. 1 using a significantly smaller model. This discussion will be added to the summary and discussion section. It is worth clarifying that, as currently presented, SelectAL already chooses a single low-budget and a single high-budget strategy to represent a larger family of active learning algorithms. In this way, we reduce the complexity of the method, as users are not required to separately examine each AL strategy they wish to consider. This already contributes significantly to reducing the method's computational cost. This heuristic was evaluated empirically, and the results reported in the paper show its effectiveness. After SelectAL returns its trinary decision, a user may decide to select a few uncertainty strategies, or a mixture of such strategies, if the high-budget regime is identified. **Combining low- and high-budget strategies**: In a single round within the active learning framework, our theoretical analysis in Section 2.3 indicates that when the number of added examples is relatively small compared to the total number of examples in the dataset, there is no clear advantage to mixing strategies, and specifically to combining low- and high-budget strategies. Empirically, we validated this finding through a grid search, which showed that while combining several strategies does offer a minor improvement, it does not yield a significant boost in learning beyond any single active learning strategy. As a result, and since the use of multiple strategies adds complexity to the suggested approach, we propose the use of a single strategy. We note that the scenario changes in the context of multi-round active learning. With SelectAL's capability to switch strategies in each round, we can continually build the final labeled set, incorporating examples from different strategies along the way. As shown in Fig. 5, this dynamic approach leads to more substantial performance improvements compared to any single uncertainty-based strategy. Thus there is a trade-off between the number of rounds and performance, where improved performance can be obtained with increased complexity by way of additional rounds, and when fewer examples are chosen in each round. We will add a discussion to this effect in the camera-ready revision. --- Rebuttal Comment 1.1: Title: Thank you for the clarifications. Comment: Thank you for the clarifications.
Rebuttal 1: Rebuttal: We thank all the reviewers for the significant time and effort they made in reviewing our work. In the attached PDF, we added an experiment suggested by Reviewer GcEN, in which we use other active learning strategies, namely ProbCover [1] and CoreSet [2], as the decision rules for SelectAL. Specifically, this experiment can be viewed as an extension of Figure 6 in the original manuscript, showing that SelectAL will make similar choices for a variety of active learning strategies. We believe that this addition enhances the comprehensiveness of our findings and strengthens the overall contribution of our work. --- [1] Yehuda, Ofer, et al. "Active learning through a covering lens." Advances in Neural Information Processing Systems 35 (2022): 22354-22367. [2] Sener, Ozan, and Silvio Savarese. "Active Learning for Convolutional Neural Networks: A Core-Set Approach." International Conference on Learning Representations. 2018. Pdf: /pdf/82f5847ffa2ac86bdbc1c426fa6c9f69f8a78113.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Evaluating Post-hoc Explanations for Graph Neural Networks via Robustness Analysis
Accept (oral)
Summary: This paper presents a new metric, OOD-resistant Adversarial Robustness (OAR), for evaluating post-hoc explanation methods for Graph Neural Networks (GNNs). Inspired by adversarial robustness, OAR performs random perturbations on the complementary part of the explanation result, reducing the impact of the Out-Of-Distribution (OOD) issue common in previous removal-based metrics and ensuring consistency with GNNs' behavior compared to generation-based methods. The authors further introduce an OOD reweighting block, which measures the OOD score of each perturbed sample, allowing for the marginalization of OOD instances; the OOD score is assigned via the reconstruction loss of a Variational Graph Auto-Encoder (VGAE) pre-trained on each dataset, inspired by graph anomaly detection methods. Extensive evaluations and ablation studies demonstrate that OAR aligns more closely with the ground truth. Strengths: - This work summarizes and addresses a critical and emerging issue in post-hoc explanation methods for GNNs, and the proposed metric is intuitive and can mitigate issues that may arise in previous removal-based and generation-based methods, offering significant benefits to the research community. - The authors conduct extensive experiments, demonstrating the potential of the proposed metric as a superior evaluation tool for post-hoc explanation methods. - It is interesting to see that the OOD score can be measured using graph anomaly detection methods (and that it works). - This paper is well-written and well-motivated. Weaknesses: - As an evaluation metric, OAR introduces an additional component that requires training on the original dataset used for the GNN. This process makes the evaluation complex and time-consuming, may accumulate errors, and makes it tricky for different research groups to compare methods across various datasets. - SimOAR somewhat addresses the above concern, but it does so at the expense of performance. From the current experiments (w/ six backbone methods), the performance degradation seems to be significant. - Though I am generally satisfied with evaluating six backbone explanation methods, the inclusion of more baselines would provide a more comprehensive statistical overview, given that this paper aims to introduce a metric for future community use. It would also be beneficial to include different types of post-hoc methods, such as decomposition, surrogate, and generation-based methods, among others. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - I am not quite sure whether the usage of "adversarial" and "attack" in OAR is clear. I thought there would be some adversarial training in OAR, but in fact the authors only drew inspiration from these concepts, and OAR does not actually perform adversarial operations. - How is the metric RM calculated mathematically? Is it fidelity? Can the authors provide its formulation? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: - This paper focuses on the explanation of graph-level tasks. How would this generalize to other types of tasks as a metric? - There are works showing that post-hoc explanation methods are always suboptimal in terms of finding label-relevant patterns and that therefore propose self-explainable models and pretrain-finetune frameworks, e.g., GSAT.
This may limit the future impact of OAR if it applies only to post-hoc methods. Can the idea behind OAR help self-explainable methods? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *Dear Reviewer vGHJ:*

*Thank you for the thoughtful feedback! Your constructive criticism has been invaluable in refining our work. Below, we give point-by-point responses. We hope our responses address your concerns!*

>*W1 & W2: Concern about the training process in OAR; SimOAR addresses the above concern but degrades performance.*

Thanks for your concern. We agree that the training process of OAR is complex and time-consuming. In fact, we have delved into this issue in **Appendix F** (Lines 175-185) of the main paper. Specifically,

1. We believe that learning distribution information through this training process is unavoidable when addressing the OOD problem. Here is the reason:
   - Explanation (which is in fact the process of sampling subgraphs from inputs) inevitably introduces the risk of OOD, and measuring this OOD requires access to the distribution information.
2. Although SimOAR is a compromise that degrades performance, we believe that the experimental results of OAR and SimOAR precisely demonstrate the remaining room for optimization. Concretely, **OAR and SimOAR represent two extremes:** OAR accurately acquires the distribution to combat OOD, while SimOAR completely abandons this acquisition to improve efficiency. This naturally suggests future directions for optimization:
   - On the one hand, we can find a balance between accuracy (OAR) and speed (SimOAR) to adapt to different tasks.
   - On the other hand, we can explore more efficient generative algorithms that require less distribution information, such as diffusion models, to optimize both accuracy and speed simultaneously.

>*W2: It would also be beneficial to include different types of post-hoc methods.*

Thanks for your suggestion. Following your guidance, we have conducted additional experiments across three types of post-hoc methods to validate the effectiveness of our OAR. Limited by the short rebuttal period, we selected one method for each type (*i.e.*, decomposition: **GNN-LRP**, surrogate: **PGMExplainer**, generation: **XGNN**) on BA3 and MNIST-sp. Moreover, we noticed that you are interested in **GSAT** in the following comments; hence, we also evaluated **GSAT** (in its post-hoc mode). Here are the results (corresponding to Figure 3 in the main paper):

**BA3**

| | PGMExplainer | GNN-LRP | XGNN | GSAT |
|:-:|:-:|:-:|:-:|:-:|
| RM | 0.341 | 0.355 | 0.307 | 0.386 |
| DSE | 0.409 | 0.412 | 0.367 | 0.370 |
| OAR | **0.488** | **0.525** | **0.505** | **0.463** |
| SimOAR | 0.463 | 0.500 | 0.482 | 0.432 |

**MNIST-sp**

| | PGMExplainer | GNN-LRP | XGNN | GSAT |
|:-:|:-:|:-:|:-:|:-:|
| RM | 0.302 | 0.323 | 0.287 | 0.342 |
| DSE | 0.314 | 0.340 | 0.341 | 0.317 |
| OAR | **0.546** | **0.583** | **0.520** | **0.554** |
| SimOAR | 0.542 | 0.548 | 0.499 | 0.530 |

We will continue to conduct more complete experiments across other post-hoc methods and add them to the revision.

>*Q1: Concern about the usage of "adversarial" and "attack".*

We apologize for the confusion. Specifically, our notion of adversarial robustness asks: "How adversarial are random perturbations against the original data distribution, as measured by the OOD score?" Here are two further reasons why we call our attack an **adversarial attack**:

- First, the starting point of our method is finding the minimum perturbation leading to a wrong prediction (Lines 128-132 in the main paper). However, this objective proves hard to reach and is sometimes even intractable in the scenario considered in the paper (Lines 144-151). Thus, we instead approximate it from its dual perspective, i.e., by calculating the largest change in outputs under the attacks. Hence, **our approach is essentially derived from the adversarial attack.**
- Moreover, the objective and target of the *adversarial attack* and *our attack in OAR* are similar to each other:

| | Objective | Target |
|:-:|:-:|:-:|
| Adversarial Attack | **selected part of input** (*which is selected by the attack algorithm*) | **perturb output as much as possible** (*until the outputs flip*) |
| Attack in Our OAR | **selected part of input** (*which is selected by the explanations*) | **perturb output as much as possible** (*to assess the robustness*) |

>*Q2: How is the metric RM calculated mathematically?*

In our paper, the removal-based metric (RM) is employed to quantify the fidelity of the explanations. More formally, for an input graph $G$ and an explanation (subgraph) $G_s$, RM holds that a good explanation should have a large $f(G)-f(G_s)$, where $f$ is the GNN.

>*L1: How would OAR generalize to other tasks?*

Thanks for your concern. Following your comments, we have conducted experiments on several prevalent **node classification** datasets following [1]. Here are the results:

| | BA-Shapes | BA-Community | Tree-Cycles | Tree-Grid |
|:-:|:-:|:-:|:-:|:-:|
| RM | 0.312 | 0.321 | 0.295 | 0.411 |
| DSE | 0.406 | 0.377 | 0.343 | 0.384 |
| OAR | **0.542** | **0.560** | **0.489** | **0.440** |
| SimOAR | 0.527 | 0.535 | 0.471 | 0.431 |

According to these results, we find that our OAR and SimOAR work well for the task of node classification.

*[1] GNNExplainer: Generating Explanations for Graph Neural Networks. NIPS 2019*

>*L2: Can the idea in OAR help self-explainable methods?*

Thanks for your concern. Yes, it can. The evaluation target of OAR is the explanation (subgraph). Hence, any algorithm that generates explanations can be evaluated by OAR. Following your concern, we have conducted experiments on GSAT:

**GSAT**

| | BA3 | MUTAG | TR3 | MNIST |
|:-:|:-:|:-:|:-:|:-:|
| RM | 0.376 | 0.381 | 0.402 | 0.337 |
| DSE | 0.411 | 0.425 | 0.417 | 0.319 |
| OAR | **0.613** | **0.590** | **0.585** | **0.606** |
| SimOAR | 0.598 | 0.572 | 0.552 | 0.581 |

These results validate that our OAR can help self-explainable methods.

*Once again, we sincerely appreciate your time and effort in reviewing our paper. Your criticism has been invaluable in refining our work, and we are more than happy to add clarifications to address your concerns!*

*Best,*

*Authors*

---

Rebuttal Comment 1.1: Comment: Thank you for your detailed response and the additional experiments. These have addressed the majority of my questions and concerns. After reading the other reviews, I am pleased to maintain my original score. I appreciate the effort and insights the authors put into the paper.
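For concreteness, here is a minimal Python sketch of the RM and OAR computations described in this exchange. All interfaces (`gnn`, `ood_score`, `perturb_complement`) are hypothetical placeholders, and the weighted aggregation is one plausible reading of the method rather than the authors' exact implementation; `n_perturb=20` follows the value reported for BA3, TR3, and MUTAG.

```python
import numpy as np

def rm_fidelity(gnn, G, G_s):
    """Removal-based metric (RM) as formalized above: a good explanation
    G_s should yield a large drop f(G) - f(G_s), where f is the GNN's
    predicted probability for the target class."""
    return gnn(G) - gnn(G_s)

def oar_score(gnn, ood_score, perturb_complement, G, G_s, n_perturb=20):
    """OOD-resistant adversarial robustness, sketched.

    Randomly perturb only the complement of the explanation G_s, measure
    how stable the GNN's prediction stays, and down-weight perturbed
    samples that the OOD scorer flags as off-distribution. `gnn(G)` is
    assumed to return the predicted probability of the original class;
    `ood_score` is assumed to return a larger weight for in-distribution
    graphs.
    """
    base = gnn(G)
    stability, weights = [], []
    for _ in range(n_perturb):
        G_pert = perturb_complement(G, G_s)   # G_s itself stays untouched
        stability.append(1.0 - abs(base - gnn(G_pert)))
        weights.append(ood_score(G_pert))
    return float(np.average(stability, weights=weights))
```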
Summary: The paper proposes a novel evaluation metric called OOD-resistant Adversarial Robustness (OAR) to address the issue of assessing the credibility of explanations provided by graph neural networks (GNNs). The paper criticizes existing evaluation methods that fail to consider out-of-distribution (OOD) data and can produce inconsistent results. OAR overcomes these limitations by calculating the robustness of explanatory subgraphs under attack and incorporating an OOD reweighting block. The paper also introduces a simplified version of OAR called SimOAR for more efficient evaluation on large datasets. Experimental results show that OAR outperforms current evaluation metrics and demonstrates consistency with metrics like Precision, Recall, and Human Supervision.

Strengths:
1. This work tackles an important research gap in evaluating explanations for GNNs.
2. The paper is well-written and easy to follow.
3. The authors conduct experiments, and the results demonstrate the advantage of their proposed method.

Weaknesses:
1. Figure 1 is somewhat hard to follow. It would be better if the authors could refine it.
2. Some experimental settings are missing. For example, what is the configuration of the GAE? What is the number of generated graphs?
3. The related work is lacking, and some important references are missing. For example, the authors could add a section introducing explanation methods for GNNs following [1].

[1] Zhang, He, et al. "Trustworthy graph neural networks: Aspects, methods and trends." arXiv preprint arXiv:2205.07424 (2022).

Technical Quality: 4 excellent Clarity: 3 good

Questions for Authors: Listed in Weaknesses.

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Yes.

Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *Dear Reviewer uvv7:*

*We gratefully thank you for your valuable comments! Here we meticulously give point-by-point responses to your comments. We hope that our responses address your concerns!*

>**W1: Figure 1 is somewhat hard to follow. It would be better if the authors could refine it.**

Thanks for your concern. Following your suggestion, we have made significant refinements to improve its clarity. The refined version provides a more coherent representation of the information it conveys, as shown in the newly added *Supplementary PDF*. The following are some specific modifications:

- We have extracted the common parts of Figures 1(a), 1(b), and 1(c) and placed them together to avoid repetition. We have also added appropriate textual descriptions to make them easier to understand.
- We have increased the font size to match the text in the main body.
- We have provided detailed illustrations for Figure 1(c) (our algorithm OAR) to demonstrate the process of our algorithm more intuitively and smoothly.

We hope that the revised Figure 1 now effectively supports the content and contributes to the comprehension of our research. Thanks again for your valuable suggestion!

>**W2: Some experimental settings are missing. For example, what is the configuration of the GAE? What is the number of generated graphs?**

Thanks for your concern. For the setting of the GAE, please refer to **Lines 138-140** in Appendix C; for the number of generated graphs, please refer to **Lines 160-161** in Appendix C, where we point out that for BA3, TR3, and MUTAG, the number of perturbed subgraphs $N_{perturb}$ is 20, while for MNIST-sp, the number of perturbed subgraphs $N_{perturb}$ is 50 owing to its large size. We agree with your concern about the absence of these important parameters from the main paper. Hence, following your suggestion, **we have moved them from Appendix C to the Experimental Setup** section in the main paper of the revised version. We hope that this modification enhances the readability and reproducibility of our paper.

>**W3: The related work is lacking, and some important references are missing. For example, the authors could add a section introducing explanation methods for GNNs following [1].**

Thanks for your concern. Based on your recommendation, we have included the missing reference [1] in the Related Work section of the main paper, ensuring proper attribution to the original work. Furthermore, following your suggestion, we have introduced current trustworthy GNNs from six aspects (robustness, explainability, privacy, fairness, accountability, and environmental well-being) in the Related Work, following [1]. By incorporating these suggested changes, we believe that we have enriched the manuscript and provided readers with a more comprehensive understanding of explanation methods for GNNs.

[1] Zhang, He, et al. "Trustworthy graph neural networks: Aspects, methods and trends." arXiv preprint arXiv:2205.07424 (2022).

*Thank you for drawing our attention to these important aspects. Your valuable feedback has greatly enhanced the quality of our research. If you have any further **recommendations** or require additional **clarification**, please do not hesitate to let us know.*

*Best,*

*Authors*

---

Rebuttal Comment 1.1: Comment: I appreciate the comprehensive reply and the extra experiments you conducted. They have resolved most of my inquiries and worries. Having gone through the other reviews, I'm content to keep my initial rating.
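Since this exchange pins down the VGAE's role and the perturbation counts, a minimal PyTorch Geometric sketch of a reconstruction-loss OOD scorer may help readers. The encoder architecture, the dimensions, and the use of the raw reconstruction loss as the OOD degree are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from torch_geometric.nn import GCNConv, VGAE

class Encoder(torch.nn.Module):
    """Two-layer GCN encoder producing the mean and log-std of the
    latent node embeddings for the VGAE."""
    def __init__(self, in_dim: int, hid_dim: int, lat_dim: int):
        super().__init__()
        self.conv = GCNConv(in_dim, hid_dim)
        self.conv_mu = GCNConv(hid_dim, lat_dim)
        self.conv_logstd = GCNConv(hid_dim, lat_dim)

    def forward(self, x, edge_index):
        h = self.conv(x, edge_index).relu()
        return self.conv_mu(h, edge_index), self.conv_logstd(h, edge_index)

@torch.no_grad()
def ood_degree(model: VGAE, x, edge_index):
    """Edge-reconstruction loss of a pre-trained VGAE on a perturbed graph:
    the worse the reconstruction, the more out-of-distribution the graph.
    How this loss is mapped to a reweighting factor (e.g., some decreasing
    function of it) is a modelling choice, not something fixed here."""
    model.eval()
    z = model.encode(x, edge_index)
    return model.recon_loss(z, edge_index).item()

# Dimensions are placeholders; in practice the VGAE would be pre-trained
# on the same dataset as the GNN under evaluation.
model = VGAE(Encoder(in_dim=7, hid_dim=32, lat_dim=16))
```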
Summary: This paper presents a new evaluation metric, OAR, for evaluating post-hoc explanation methods for GNNs. OAR has two main advantages: it does not need ground-truth labels, and it prevents finding spurious explanations based on out-of-data-distribution phenomena. The paper focuses on graph classification: the input for this explanation is a subgraph of the graph for which we want to explain a prediction. The proposed method, OAR, creates several perturbed versions of the original graph, where it is allowed to perturb only edges **not** in the explanation. A generative model forces those perturbations not to leave the training data distribution. The less the perturbations can change the model prediction, the better an explanation we consider the subgraph to be. Experiments validate the effectiveness of OAR.

Strengths: The paper tackles an important topic, GNN explanation, in an ambitious way: to evaluate different explanation methods unsupervised, without access to ground-truth explanations. The paper is in good shape (apart from the introduction+related work): the writing is clear and allows for understanding most of the paper on the first or second read. The figures are helpful for understanding and well connected to the text. The experiments chosen by the authors make sense, and I appreciate the authors also running a user study.

Weaknesses: I do not like the approach the authors have taken for the related work in this paper. The first section is a hybrid of related work and introduction, and I feel it serves neither well. I am missing a concise motivation and a broader context of the paper from this section. The actual related work is in Appendix A, which I would prefer to be in the main paper. I also think that there is an important branch of work missing [1-5]. These works discuss general properties that we may want an explanation method and an evaluation setup to adhere to. I think it would benefit the paper to discuss how OAR is compatible/incompatible with these proposals. These works also offer some alternative benchmarks for evaluation instead of the flawed BA and tree-based (TR) datasets.

[1] Himmelhuber et al.: Demystifying Graph Neural Network Explanations
[2] Sanchez-Lengeling et al.: Evaluating attribution for graph neural networks
[3] Agarwal et al.: Evaluating explainability for graph neural networks
[4] Agarwal et al.: Probing GNN explainers: A rigorous theoretical and empirical analysis of GNN explanation methods
[5] Faber et al.: When comparing to ground truth is wrong: On evaluating GNN explanation methods

There is also one paper discussing out-of-distribution versus explanation for GNNs [6] that might be worth discussing/comparing to.

[6] Faber et al.: Contrastive graph neural network explanation

I am not convinced about the explanations that the method produces. Generally, we want to create explanations to make humans understand what is happening. For this, every step used to create the explanation should ideally be human-inspectable. Here, it seems like we are transferring the trust problem from the GNN to the VGAE. The VGAE becomes the black box that somehow determines whether the results are good or bad, and we cannot inspect this black box. For example, using recall on ground-truth data makes for a good explanation metric because we humans can see *why* the explanation is supposedly good.

Technical Quality: 3 good Clarity: 3 good

Questions for Authors: Could you expand on the setup used for the user study?
The previous years (e.g., https://neurips.cc/Conferences/2021/PaperInformation/PaperChecklist) included a paper checklist of what to consider, and the details in Appendix D are quite short.

The study on MNIST explanations seems to be very dependent on the layout of the explanation subgraph (maybe even more so than the choice of what nodes to put there). Can you expand on how you created the graph layout? Are superpixel nodes positioned where they are placed in the image?

Does the perturbation space for the targeted adversarial attacks allow adding and deleting nodes? Or are the attacks constrained to happen within the given adjacency matrix?

OAR itself makes no assumptions about the size of the explanation, but it seems easier to get good explanations the larger the explanation graph is: there is less attack space for adversarial perturbation, and a larger fixed graph likely helps toward low OOD scores. I wonder if OAR would benefit from a component that penalizes surplus nodes in the explanation?

Could the method be expanded to also support node classification? One naive angle might be to extract the receptive field of each node and use OAR on those graphs?

From what I understand, an important motivation is the fight against out-of-distribution data. This motivates the generative graph model to find and remove outliers. On the other hand, it seems that SimOAR performs similarly without using such a model, relying instead on a budget(?) of perturbations. Could we maybe do something simpler than the generative model, for example, looking at spectra?

Some of the works linked above showed that the BA and tree-based datasets have flaws and that GNNs do not always use all the data present in the ground truth. Could these also influence your experiments?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Addressed by the authors

Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your valuable comments. We hope that the following reply addresses your concerns!

>**W1: Concerns about the Introduction and the missing related work [1-5].**

Thanks for bringing this point to us. Following your suggestions, we have implemented the following changes in this response and in our revision:

1. **Introduction & Related Work organization.** The Introduction section has been revamped to streamline the discussion of previous studies. The Related Work section has been added to the main paper.
2. **Literature view.** We appreciate your introduction to references [1-5] and have incorporated them into the Related Work section. These papers primarily classify current metrics into four categories: accuracy, faithfulness, stability, and fairness. Notably, OAR aligns with the 'faithfulness' metrics.
3. **Evaluation metric & dataset.** Drawing from references [1-5], we have specifically adopted the contemporary faithfulness metric GEF [3] and the recent synthetic dataset SHAPEGGEN [3] to strengthen our paper. We utilized these resources to perform additional experiments, with the outcomes presented in the table below and in Table 1 of the main paper.

| | SHAPEGGEN | TR3 | MUTAG | MNIST | BA3 |
|:-:|:-:|:-:|:-:|:-:|:-:|
| GEF | 0.800 | 0.734 | 0.800 | 0.934 | 0.734 |
| SimOAR | 0.800 | 0.867 | 0.800 | 0.934 | 0.934 |
| OAR | 0.867 | 0.934 | 1.000 | 0.934 | 1.000 |

>**W2: [6] might be worth discussing/comparing to.**

Thanks for your suggestion. We have incorporated a discussion of CoGE [6] in our revision. In essence, although CoGE can determine whether a subgraph is associated with the OOD class, it fails to provide a precise score delineating the degree of OOD, making it coarser than OAR. Moreover, CoGE is more aligned with explanation methodologies than with evaluation frameworks, which is why we did not select it as a baseline. We hope this clarification addresses your concerns!

>**W3: Concerns about transferring the trust problem from the GNN to the VGAE.**

Thank you for highlighting this important aspect. While we recognize the concern about transferring trust to the VGAE in OAR, we assert that its impact is relatively minimal, for several reasons:

1. **VGAE-guided OAR.** Indeed, the trustworthiness and accuracy of the VGAE are important for evaluating explanations faithfully. However, we are optimistic that as generative research (e.g., diffusion models) advances, improved generative capability will mitigate this limitation.
2. **VGAE-free SimOAR.** Our streamlined variant, SimOAR, sidesteps the VGAE altogether. Instead, it utilizes transparent heuristics for perturbation generation. This strategic choice inherently counteracts the transfer of trust issues from the GNN to the VGAE.
3. **Performance insights.** As Table 1 shows, SimOAR is slightly worse than or on par with OAR, but outperforms all baselines. This underscores the idea that the VGAE's influence is minimal.

We are grateful for your insightful suggestions, and we will incorporate them into the revision.

>**Q1: Expand the setting of the user study following the checklist.**

Thanks for your suggestions. We have detailed the setting following the checklist and incorporated it into our revision:

1. **Instructions.** Each participant was asked to examine 5 groups of graphs, as shown in Appendix D, and answer which subgraph best preserves the digit information.
2. **Risks.** Our user study does not involve any risks.
3. **Wage.** The participants in our user study were volunteers.

>**Q2: How did you create MNIST-sp?**

We created MNIST-sp according to [1].
Specifically, the node features are the superpixel intensities and the superpixel centers, and the edge features encode the spatial distance between the superpixel centers.

*[1] Geometric deep learning on graphs and manifolds using mixture model cnns. CVPR. 2017*

>**Q3: Do the attacks allow adding and deleting nodes?**

No, our attacks are only allowed to happen within the given adjacency matrix. We will explore the addition/deletion of nodes in future work.

>**Q4: Would OAR benefit from penalizing surplus nodes?**

Following your recommendation, we integrated the L1-norm to control the explanation size. The experiments shown below and in Table 1 of the main paper verify that the component you suggested indeed enhances our methods.

| | TR3 | MUTAG | MNIST | BA3 |
|:-:|:-:|:-:|:-:|:-:|
| SimOAR | 0.867 | 0.800 | 0.934 | 0.934 |
| SimOAR+L1 | **0.934**$\uparrow$ | 0.800 | 0.934 | **1.000**$\uparrow$ |
| OAR | 0.934 | 1.000 | 0.934 | 1.000 |
| OAR+L1 | 0.934 | 1.000 | **1.000**$\uparrow$ | 1.000 |

>**Q5: Could OAR work for node classification?**

Yes, it can. Here are the results on four node classification datasets following [1]:

| | BA-Shapes | BA-Community | Tree-Cycles | Tree-Grid |
|:-:|:-:|:-:|:-:|:-:|
| RM | 0.312 | 0.321 | 0.295 | 0.411 |
| DSE | 0.406 | 0.377 | 0.343 | 0.384 |
| OAR | **0.542** | **0.560** | **0.489** | **0.440** |
| SimOAR | 0.527 | 0.535 | 0.471 | 0.431 |

*[1] GNNExplainer: Generating Explanations for Graph Neural Networks. NIPS 2019*

>**Q6: Can we do something simpler than the VGAE, like spectra?**

Yes, we can. Spectral methods are commonly used techniques in anomaly detection. Considering that they typically work for node classification, we use the Manhattan distance between graph spectra as the OOD degree and report the results:

| | MUTAG | BA3 | TR3 | MNIST |
|:-:|:-:|:-:|:-:|:-:|
| OAR | 0.567 | 0.503 | 0.483 | 0.455 |
| SimOAR+Spectra | 0.511 | 0.459 | 0.449 | 0.433 |

Based on these results, we find that the performance of the spectral approach is inferior to that of the VGAE.

>**Q7: Could BA3 and TR3 influence the experiments?**

Thanks for your concern. We believe that they will not, for two reasons:

- First, although the ground truth in BA3 and TR3 might not conform exactly to the model's decision-making process, it contains sufficient discriminative information to help judge the quality of explanations.
- Second, even after excluding the potentially problematic datasets BA3 and TR3, we still have other datasets such as MUTAG and MNIST-sp, and our methods also achieved the best performance on these datasets across various settings.

---

Rebuttal Comment 1.1: Comment: Thank you for your very detailed review and in particular for the additional experiments.

#### W1+W2

Thank you for rearranging the paper structure and expanding the related work section. In particular, I appreciate that the authors extend this discussion into new experimental results. The results provide further support for their proposed method. I agree that [6] can just be discussed and does not warrant experiments as a baseline.

#### Q1, Q2, Q3, Q5, Q7

Thank you for providing additional details for these questions. I also appreciate the extra effort here to create the additional node classification experiment. Can you describe a bit how you adapted OAR to the node setting?

#### Q4

Thank you for taking the time to also try this experiment.

#### W3, Q6

While I am not fully convinced by the argument, I accept SimOAR as an inspectable alternative to the full VGAE-OAR version. The reason I say I am not fully convinced is that there are differences in performance, even when we help SimOAR with spectral information (from the experiment in Q6).
Overall, I really like the new information from this rebuttal phase. The authors provided four new sets of experiments. They embedded current GNN explanation work in their framework, extended to node classification, found improvements with L1 regularization, and further compared to a spectral baseline. My assessment of losing inspectability was too pessimistic because of SimOAR, although it lags slightly behind in performance. Therefore, I increase my score.

---

Reply to Comment 1.1.1: Title: Response to Reviewer xH8P Comment: Dear Reviewer xH8P,

I would like to express my sincere gratitude for recognizing our work and providing constructive feedback, which has been invaluable in refining it. Following your comments regarding node classification, we have incorporated the experimental details in the revision. Specifically, for each node in the input graph, **we construct an ego graph for it based on the number of layers in the baseline GNN.** The explanation task for node classification can then be transferred to the explanation task for graph classification. Furthermore, the remaining hyperparameters and methods in our OAR/SimOAR remain unchanged.

Once again, I would like to extend my sincere appreciation for the time and effort you have dedicated to the review process! We truly value the importance of your input in our professional journey!

Best regards,

Authors
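The ego-graph reduction described in this reply is simple to illustrate. Below is a minimal networkx sketch under stated assumptions: the random graph and the layer count are placeholders for a real dataset and a real GNN.

```python
import networkx as nx

def node_to_graph_task(G, node, num_gnn_layers):
    """A k-layer message-passing GNN's prediction at `node` depends only on
    the node's k-hop neighborhood, so explaining the node prediction
    reduces to explaining a graph-level prediction on this ego graph."""
    return nx.ego_graph(G, node, radius=num_gnn_layers)

# Toy usage: a random graph standing in for, e.g., BA-Shapes.
G = nx.barabasi_albert_graph(100, 2, seed=0)
ego = node_to_graph_task(G, node=5, num_gnn_layers=3)
print(ego.number_of_nodes(), ego.number_of_edges())
```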
Summary: This paper studies the explainability evaluation of GNNs and proposes a new evaluation paradigm. Inspired by adversarial robustness, it uses a generative model (VGAE) to complete the explanation subgraph, randomly perturbs the completed parts, and uses the OOD scores of the perturbed graphs to measure the importance of the explanation subgraph. Experiments are done on several datasets and show improved evaluation quality w.r.t. diverse criteria.

Strengths:
1. This paper summarizes the removal- and generation-based evaluation protocols well and points out their drawbacks w.r.t. post-hoc explainability. It resolves these drawbacks by calculating the robustness of explanations under attack together with OOD reweighting. This motivation is clear and reasonable.
2. In terms of technical contributions, the evaluation of explanation robustness is well designed. It first gives an adversarial-robustness-related definition, then transfers it into a tractable objective and designs an OOD reweighting block (i.e., an external VGAE) to solve the objective. Moreover, a computationally efficient variant is also proposed.
3. The experiments are sufficient to demonstrate the effectiveness of the proposed method w.r.t. explanation evaluation, generalization, model design, and the user study.
4. I appreciate the diverse criteria used in the paper, especially the "consistency of ground-truth explanations and human intuition".
5. The presentation of the proposed method is clear.

Weaknesses:
1. Regarding adversarial robustness, the proposed measurement is based on perturbing the subgraphs. I have two questions: (1) Why name the random perturbation "adversarial robustness", which usually involves adversarial perturbation? Does it mean "How adversarial are random perturbations against the original data distribution, as measured by the OOD score"? (2) How many perturbed subgraphs are needed? I think these concepts are essential to understanding the proposed evaluation method. Hence, more clarification is needed.
2. The OOD reweighting block is implemented with a VGAE; hence, the proposed method seems heavily dependent on the VGAE's quality. However, many studies show that the generative ability of VGAEs is suboptimal and degenerate. It would be better to analyze the influence of VGAE quality on OAR's performance.

Technical Quality: 3 good Clarity: 3 good

Questions for Authors:
1. Why name the random perturbation "adversarial robustness", which usually involves adversarial perturbation? Does it mean "How adversarial are random perturbations against the original data distribution, as measured by the OOD score"?
2. How many perturbed subgraphs are needed?
3. What is the influence of VGAE quality on OAR's performance?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: N/A

Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Dear Reviewer QMrv:**

**Thank you for the thoughtful feedback! Your constructive criticism has been invaluable in refining our work. Below, we give point-by-point responses to your comments. We hope that our responses address all your concerns!**

>*W1 & Q1: Does the adversarial robustness mean "How adversarial are random perturbations against the original data distribution, as measured by the OOD score"?*

Thanks for your concern. Yes, it does. Here are two further reasons why we named our method OAR (OOD-resistant **adversarial** robustness):

- First, the starting point of our method is finding the minimum perturbation leading to a wrong prediction (Lines 128-132 in the main paper). However, this objective proves hard to reach and is sometimes even intractable in the scenario considered in the paper (Lines 144-151). Thus, we instead approximate it from its dual perspective, i.e., by calculating the largest change in outputs under the attacks. Hence, **our approach is essentially derived from the adversarial attack.**
- Moreover, the objective and target of the *adversarial attack* and *our attack in OAR* are similar to each other:

| | Objective | Target |
|:-:|:-:|:-:|
| Adversarial Attack | **selected part of input** (*which is selected by the attack algorithm*) | **perturb output as much as possible** (*until the outputs flip*) |
| Attack in Our OAR | **selected part of input** (*which is selected by the explanations*) | **perturb output as much as possible** (*to assess the robustness*) |

Moreover, following your suggestion, we have added these clarifications to the Introduction section in the revision. We believe that this will greatly increase the readability, coherence, and comprehensibility of the article. Thanks again for your valuable suggestion!

>*W2 & Q2: How many perturbed subgraphs are needed?*

Please refer to Lines 160-161 in the Appendix, where we point out that for BA3, TR3, and MUTAG, the number of perturbed subgraphs $N_{perturb}$ is 20, while for MNIST-sp, the number of perturbed subgraphs $N_{perturb}$ is 50 owing to its large size.

>*W3 & Q3: What is the influence of VGAE quality on the OAR performance?*

Thanks for your concern. The quality of the VGAE has a certain impact on the performance of OAR, but it is not significant. Here is the reason:

- The simplified version of OAR -- SimOAR (which can be viewed as the case where the VGAE is poorly trained and produces random scores) -- performs close to OAR. This indicates that the performance of the VGAE does not significantly impact OAR's performance, and it further demonstrates that the effectiveness of our OAR and SimOAR is mainly attributable to the evaluation paradigm itself rather than to auxiliary modules (e.g., the VGAE).

**Once again, we sincerely appreciate your time and effort in reviewing our paper. Your constructive criticism has been invaluable in refining our work, and we are more than happy to add clarifications to address any additional recommendations and reviews from you!**

**Best,**

**Authors**

---

Rebuttal Comment 1.1: Title: Response to Authors and raise score from 7 to 8 Comment: Thank you for answering my comments. Your response addresses the concerns I had, and I will raise my score.

---

Reply to Comment 1.1.1: Title: Response to Reviewer QMrv Comment: Dear Reviewer QMrv,

I would like to express my heartfelt gratitude to you for recognizing our work.
Your insightful guidance and constructive suggestions have undoubtedly played a vital role in improving the quality of our work. I would also like to extend my sincere appreciation for the time and effort you have dedicated to the review process. We truly value the importance of your input in our professional journey! With warm regards, Authors
Rebuttal 1: Rebuttal: Dear Reviewers:

We gratefully thank you for your valuable comments! We were encouraged to hear that our work offers a **clear and well-written presentation** (all reviewers) and **well-designed and interesting technical contributions** (Reviewers QMrv and vGHJ), is supported by **extensive and sufficient experiments** (all reviewers), and **addresses a critical and emerging issue** for the research community (Reviewers uvv7 and vGHJ).

Here we meticulously give point-by-point responses to your comments, and we have added the additional experiments and figures to the one-page supplementary PDF. In particular, we have taken measures to improve the structure of the Introduction and Related Work sections and provided a more rigorous definition of our methods. Furthermore, we have given a more detailed description of our experimental settings and included a wider range of representative baselines and datasets in additional experiments.

We hope that our responses adequately address all your concerns and meet the expectations of the conference committee. Once again, we sincerely appreciate your time and effort in reviewing our paper. Your constructive criticism has been invaluable in refining our work, and we are more than happy to add clarifications to address any additional recommendations and reviews from you!

Best,

Authors

Pdf: /pdf/b8616b2a912d6b864810dad4a6edce18a6480865.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
On Sparse Modern Hopfield Model
Accept (poster)
Summary: The authors introduce sparse Hopfield networks that feature sparse retrieval dynamics corresponding to sparsemax attention mechanisms, resulting in sparse patterns that are more robust to noise. They prove fast convergence analogous to modern Hopfield networks and show that the sparse Hopfield model has a tighter lower bound on memory capacity compared to the dense version. Sparse variants of Hopfield layers for use in deep learning models are introduced, and their viability is shown on established image classification tasks and on one synthetic and four real-world Multiple Instance Learning tasks. The novel approach shows increased memory robustness under Gaussian noise applied to the input images.

Strengths:

# Significance
The paper's main strength lies in the theoretical results and the theorems and lemmas whose proofs can be found in the vast appendix. It is not unlikely that these results will be utilized in future work on associative and biologically plausible deep learning. The authors shared their source code, facilitating reproducibility.

# Clarity and Quality
The goals of this line of research are presented clearly and summarized into one concise research question. The formatting of math such as definitions and theorems is clear and pleasant to read. Proof sketches are more or less easy to follow. The use of language is of high quality, apart from a number of typos and grammatically incorrect sentences.

Weaknesses:

# Originality
The paper's originality is fair, since sparse computation is a well-known topic in deep learning and the presented work is merely an incremental improvement on modern Hopfield networks. The connection between Hopfield networks and attention was made in previous work, making the correspondence to a form of sparse attention mechanism an obvious avenue of investigation.

# Clarity
The MIL tasks are not introduced very well: the explanation of the bit-pattern experiment is insufficient for readers unfamiliar with the task, and the real-world tasks are not explained at all.

# Significance
Potential computational advantages gained from sparse patterns are briefly mentioned in the introduction but not touched upon in the remainder of the manuscript. Additionally, a footnote weakens the claim and puts it into perspective.

Technical Quality: 3 good Clarity: 3 good

Questions for Authors:
- What is the goal of the MIL tasks?
- Line 192: two-layer fully connected networks -> MLPs?

# Minor mistakes
- Figure and Table 3 are mentioned in the text but are found neither in the paper nor in the appendix.
- Line 14: "exploit datasets" is worded strangely
- Same goes for Line 87 "provably blessings"
- Line 112: "memoery"
- Theorem 3.1: Grammatically incorrect
- Line 229: "to utilizes"
- Corollary 3.1.2: "retrieves a memory patterns"
- Line 320: "Boarder impact"

Should the authors decide to fix the given mistakes, I am willing to increase my overall rating!

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair

Limitations: The authors do not mention any limitations of their work. However, given the theoretical nature of the work, I do not deem it necessary to discuss them at length. Societal impact of this paper does not surpass the already significant implications of research in the field as a whole.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### The revised draft is in the same Dropbox folder (in Supplementary Material PDF).

---

>**Reviewer's comment:** The paper's originality is fair since sparse computation is a well-known topic in deep learning and the presented work is merely an incremental improvement on modern Hopfield networks. The connection between Hopfield networks and attention was made in previous work, making the correspondence to a form of sparse attention mechanism an obvious avenue of investigation.

**Response:** We acknowledge that the link between Hopfield networks and attention mechanisms has been well established in prior work. However, in this work we introduce a principled construction of the energy function through the convex conjugation of an entropic regularizer, which sets our work apart from the Modern Hopfield Network (MHN). We consider this our core construction. Unlike the heuristic approach to the learning rule found in the MHN, which is motivated by previous work from 2017, our method provides a more rigorous construction. By using the Gini entropic regularizer, we can analytically derive both the energy function and the retrieval dynamics. The connection to attention is indeed a part of our work, but it is not the main focus. Therefore, in the submitted version, we treated this connection as a remark and included detailed information in the appendix. Our primary emphasis is on the theoretical advancements in the understanding and construction of Hopfield models, which we believe represent a significant step beyond a mere incremental improvement.

>**Reviewer's comment:** The MIL tasks are not introduced very well: the explanation of the bit-pattern experiment is insufficient for readers unfamiliar with the task, and the real-world tasks are not explained at all.

**Response:** Multiple Instance Learning (MIL) is a variation of supervised learning where the training set consists of labeled bags, each containing multiple instances. The goal of MIL is to predict the bag labels based on the instances they contain, which makes it particularly useful in scenarios where labeling individual instances is difficult or impractical but bag-level labels are available. Examples of such scenarios include medical imaging (where a bag could be an image, instances could be patches of the image, and the label could indicate the presence or absence of disease) and document classification (where a bag could be a document, instances could be the words or sentences in the document, and the label could indicate the topic or sentiment of the document). Intuitively, attention-based models are therefore suitable for this task. In practice, MIL is widely used in anomaly detection and other tasks where fine-grained labeling is costly.

>**Reviewer's comment:** Line 192: two-layer fully connected networks -> MLPs?

**Response:** We acknowledge that the original wording was unclear. We have modified it in the latest revision as follows:

>>SparseHopfieldLayer, by contrast, has learnable memory patterns that map query patterns to hidden states with sparsemax activation. Thus it can substitute for a fully connected layer within deep learning architectures.

Thank you for bringing this to our attention.

>**Reviewer's comment:** Figure and Table 3 are mentioned in the text but are found neither in the paper nor in the appendix.

**Response:** We appreciate your attention to detail. However, it appears there may have been a slight oversight.
Both Figure 3 and Table 3 can be found on page 27 of the appendix in the version of the paper that was submitted.

>**Reviewer's comment:** Minor mistakes: Line 14: "exploit datasets" is worded strangely. Same goes for Line 87 "provably blessings". Line 112: "memoery". Theorem 3.1: Grammatically incorrect. Line 229: "to utilizes". Corollary 3.1.2: "retrieves a memory patterns". Line 320: "Boarder impact".

**Response:** Thanks for such careful proofreading! We have fixed all the typos and grammatical errors pointed out. We hope that our responses adequately address the reviewer's concerns, and we look forward to any further feedback. Thank you for your time and consideration.

---

Rebuttal Comment 1.1: Title: Updated overall rating Comment: Thank you for your thorough response to my review! You were able to convince me of the originality of your work. Using the Gini entropic regularizer to derive the energy function and retrieval dynamics is undoubtedly a valuable contribution towards understanding the nature of MHNs. Since the authors addressed most of my concerns in the revised version, I adjusted my overall rating, suggesting acceptance of the manuscript. Best of luck, Reviewer usKi

---

Reply to Comment 1.1.1: Comment: Dear Reviewer usKi, We are pleased to hear that our revisions have addressed your concerns, and we are grateful for the time and effort you invested in reviewing our work. Your insightful comments, especially the careful proofreading, have been instrumental in enhancing the quality of the draft. Thank you! Warm regards, Authors
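For readers who want to see the construction discussed here concretely: the convex conjugate of the Gini entropic regularizer yields the sparsemax mapping of Martins and Astudillo (2016), which, unlike softmax, can return exact zeros. A standalone NumPy sketch, written from the published algorithm rather than the authors' code:

```python
import numpy as np

def sparsemax(z):
    """Sparsemax (Martins & Astudillo, 2016): the Euclidean projection of
    z onto the probability simplex. Unlike softmax, it can return exact
    zeros, which is what makes the induced retrieval dynamics sparse."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]                     # descending order
    k = np.arange(1, z.size + 1)
    cssv = np.cumsum(z_sorted)
    # support size: largest k with 1 + k * z_(k) > sum of the top-k entries
    k_star = k[1 + k * z_sorted > cssv][-1]
    tau = (cssv[k_star - 1] - 1.0) / k_star
    return np.maximum(z - tau, 0.0)

print(sparsemax([2.0, 1.0, 0.1]))   # [1. 0. 0.] -- exact zeros
print(sparsemax([0.6, 0.5, 0.4]))   # all entries stay nonzero here
```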
Summary: The authors introduce the sparse Hopfield model, a memory-associative model used to store and retrieve patterns. Theoretically, the authors connect the sparse Hopfield model with sparse attention mechanisms, and empirically they show how their method can outperform the state of the art.

Strengths: The authors present the key research questions, the limitations of the current state of the art, and their contribution in well-organized sections, and a detailed theoretical analysis is provided. The figure captions are detailed, and the contributions are clearly stated.

Weaknesses: In terms of language, the paper is very hard to follow for lay readers. As Hopfield networks may not be familiar to all interested readers, perhaps the authors could give a very basic explanation of what a Hopfield network is and why it is interesting and important. The authors mention computational efficiency and noise-robustness in the introduction; however, experiments comparing the efficiency of their method with baselines are missing from the current manuscript. How and why sparsity increases noise-robustness is not clearly explained.

Technical Quality: 3 good Clarity: 2 fair

Questions for Authors: Please see the Weaknesses section.

Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes, limitations are discussed.

Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### The revised draft is in the same Dropbox folder (in Supplementary Material PDF).

---

>**Reviewer's Comment:** In terms of language, the paper is very hard to follow for lay readers.

**Response:** Thank you for your feedback. We understand the need to make the paper more accessible to readers who may not be familiar with Hopfield networks. To address this, we have included a brief introduction to Hopfield networks in the introductory section of the paper:

>>"Hopfield models are classical associative memory models based on the Ising model, used in both biological and artificial neural networks (Hopfield, 1982; Hopfield, 1984). These models utilize statistical-mechanical retrieval dynamics to store a collection of memory patterns and retrieve the one that is most similar to a given query. For example, if the stored memories are images of all the dogs you've seen in the past, and the query is the image of a dog you see today, the Hopfield model retrieves the memory of the dog that most closely resembles the one you saw today. This is achieved by embedding the memories in the energy landscape of a physical system, where each memory corresponds to a local minimum. When a query is presented, the model initiates energy-minimizing retrieval dynamics at the query, which then navigate the energy landscape to find the nearest local minimum, effectively retrieving the memory most similar to the query."

Moreover, we have made several modifications:

1. Section 2 now begins with an opening paragraph that simplifies the construction of a well-defined Hopfield model, making it more accessible to non-specialists.
2. We have provided an overview of the theoretical results in Section 2.1 to aid readers in developing a deeper understanding of the concepts.
3. The discussions of computational benefits have been reformatted into several remarks (Remarks 2.3 and 2.4) to improve readability.
4. We have added Remark 2.1, which succinctly summarizes the relationship between entropic regularizers and sparse probability mappings, supplemented with relevant references and examples.
5. Additional explanations have been included for Definition 2.2 and Lemma 2.2 to enhance clarity.
6. Section 4.2 now includes an introductory paragraph about Multiple Instance Learning (MIL).
7. For the convenience of the readers, a nomenclature table of notations has been added to the appendix.

We believe these changes will significantly improve the general readability of the draft.

>**Reviewer's Comment:** The authors mention computational efficiency and noise-robustness in the introduction; however, experiments comparing the efficiency of their method with baselines are missing from the current manuscript.

**Response:** Thank you for your comments. We believe there might have been a slight oversight. The efficiency comparison between our sparse model and the dense baseline can be found in the "Convergence Analysis" of Sec. 4.2.2 (summarized as Figure 2). The results indicate that the sparse Hopfield model converges faster than the dense model and also yields superior accuracy. We acknowledge that our initial submission may not have adequately emphasized the efficiency and noise-robustness. In our revised version, we have made a concerted effort to enhance clarity by breaking the information down into more digestible remarks (Remarks 2.3 and 2.4).
In the original submission, we mentioned "efficiency" in line 83 of the introduction, referring to the "potential computational efficiency" that our model might inherit from [Martins and Astudillo, 2016]. We also acknowledged in footnote 2 (and in the Limitations section) that our proposed sparse Hopfield model still faces the issue of quadratic complexity, and explained that the term "potential computational efficiency" was meant to highlight the possible efficient implementations of sparsemax that take advantage of sparsity, as mentioned in Sec. 2 of [Martins and Astudillo, 2016]. In the revised version, we have clarified our meaning and provided additional context to avoid confusion. Thank you for bringing this to our attention.

On the other hand, there is indeed another kind of "efficiency" advantage provided by our model. To clarify, by this "efficiency" we are referring to the efficiency of memory retrieval compared with the dense model. Essentially, retrieval dynamics with a smaller error converge faster to the fixed points (or stored memories), thereby enhancing efficiency. This concept is elaborated upon in Remark 2.3 of our latest revision.

>>**Remark 2.3 (Faster Convergence)**: Computationally, Theorem 2.2 implies that $\mathcal{T}$ requires fewer updates to reach fixed points with the same error tolerance compared to $\mathcal{T}_{\text{dense}}$. In other words, $\mathcal{T}$ retrieves stored memory patterns faster and therefore more efficiently, as evidenced in Figure 2.

We have conducted an empirical comparison between the sparse Hopfield model and its dense counterpart in Section 4.1, which is summarized in Figure 2.

>**Reviewer's Comment:** How and why sparsity increases noise-robustness is not clearly explained.

In response to your comment on noise-robustness, please see lines 176-179 of the originally submitted version or the newly added Remark 2.4.

>>**Remark 2.4 (Noise-Robustness)**: Moreover, in cases of patterns contaminated with noise $\boldsymbol{\eta}$, i.e., $\tilde{\mathbf{x}}=\mathbf{x}+\boldsymbol{\eta}$ (noise in the query) or $\tilde{\boldsymbol{\xi}}_\mu=\boldsymbol{\xi}_\mu+\boldsymbol{\eta}$ (noise in the memory), the impact of the noise $\boldsymbol{\eta}$ on the sparse retrieval error (equation 2.6) is linear, while its effect on the dense retrieval error (equation 2.6) is exponential. This suggests the robustness advantage of the sparse Hopfield model, as evidenced in Figure 1.

Thank you for your time and valuable feedback. Please do not hesitate to let us know if there are any other aspects of our work that you would like us to clarify.
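To see the retrieval dynamics of these remarks in action, here is a minimal NumPy sketch of the sparse update $\mathcal{T}(\mathbf{x}) = \boldsymbol{\Xi}\,\mathrm{sparsemax}(\beta\,\boldsymbol{\Xi}^{\top}\mathbf{x})$. The memory matrix, $\beta$, and noise level are illustrative choices, and sparsemax is repeated compactly so the snippet is self-contained:

```python
import numpy as np

def sparsemax(z):
    """Projection of z onto the probability simplex (allows exact zeros)."""
    z = np.asarray(z, dtype=float)
    zs = np.sort(z)[::-1]
    k = np.arange(1, z.size + 1)
    cssv = np.cumsum(zs)
    k_star = k[1 + k * zs > cssv][-1]
    return np.maximum(z - (cssv[k_star - 1] - 1.0) / k_star, 0.0)

def retrieve(Xi, x, beta=1.0, steps=5):
    """Sparse retrieval dynamics: x <- Xi @ sparsemax(beta * Xi.T @ x).
    Columns of Xi are the stored memories; per Remark 2.3 the iteration
    typically reaches a fixed point in very few updates."""
    for _ in range(steps):
        x = Xi @ sparsemax(beta * Xi.T @ x)
    return x

rng = np.random.default_rng(0)
Xi = rng.standard_normal((16, 5))                   # 5 memories in R^16
query = Xi[:, 2] + 0.1 * rng.standard_normal(16)    # noisy copy of memory 2
print(np.allclose(retrieve(Xi, query, beta=4.0), Xi[:, 2], atol=1e-2))
```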
Summary: This paper proposes the sparse modern Hopfield network. It studies the newly proposed model from both theoretical and empirical perspectives, validating the approach.

Strengths:
* The proposed model and the introduction of sparsity into the modern Hopfield network seem novel
* The theoretical analysis is well received
* The empirical validation shows that the sparse model is competitive with (if not better than) its dense equivalent
* The code is provided for easy reproducibility

Weaknesses:
* The paper is very hard to read. It seems to me that its mathematical exposition is much more complex than necessary, while it fails to develop basic intuitions about the proposed approach for the reader
* The paper structure could have been improved by reserving a fair amount of space to give more details (besides the math) about the proposed model and by using any other available tools (algorithms, illustrations, etc.) to actually show/present how it works. Some of the current paper material could be moved into the appendix. This would really ease the reader's job.
* I admit that I didn't follow the math completely, but even so it can be observed that some mathematical notations are not defined, e.g., what is n on line 111?
* The literature survey of related work on sparsity in deep learning is quite weak. I believe this is relatively important, given that the proposed model aims to be used in deep learning models. Also, the related work discussion should be in the main paper and not in the appendix.
* The paper needs a careful proofread, as it contains typos (for instance, "memoery" on line 112), and the English usage could be improved for clarity.
* The paper may be potentially impactful in the research community, but given its current state it is arguable (in my opinion) whether it can actually become influential

Technical Quality: 2 fair Clarity: 1 poor

Questions for Authors:
Q1) Can you please add a nomenclature in the Appendix with all the mathematical notations and symbols? The paper is not an easy read, and a nomenclature would seriously help the reader.
Q2) Can you please discuss and quantify (using various metrics and techniques) the sparsity patterns obtained, beyond the relation to sparse attention? What do the various sparsity patterns look like before, during, and after training? Some (numerical or visual) examples would help.

Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: Fairly well discussed.

Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### The revised draft is in the same Dropbox folder (in Supplementary Material PDF).

---

>**Reviewer's Comment:** The paper is very hard to read, with a mathematical exposition that seems more complex than necessary. It lacks basic intuitions about the proposed approach.

**Response:** We acknowledge the reviewer's concern and have made revisions to enhance readability. These include:

- Updating sections, tables, and figures for clarity.
- Adding a high-level overview of the Hopfield model in the introduction to help readers build intuition.
- Adding an opening paragraph in Section 2 that explains the construction of a well-defined Hopfield model in layman's terms.
- Adding an overview of the theoretical results in Section 2.1 to help readers build intuition.
- Adding explanations and visual demonstrations to help readers understand the sparse Hopfield model.
- Including a nomenclature table in the appendix.
- Rephrasing the computational benefits into multiple remarks for readability.
- Moving the broader impacts and future directions into the appendix to save space.

We hope that these modifications have enhanced the overall comprehensibility of the draft.

>**Reviewer's Comment:** The paper structure could be improved by giving more details about the proposed model and using tools like algorithms and illustrations. Some material can be moved to the appendix.

**Response:** We appreciate this suggestion and have made structural changes to include more details and tools to present the model. We have summarized the implementation of the sparse Hopfield layer in Algorithm 1 (Sec. D.3) of the latest draft. We have also moved some material to the appendix to ease the reader's job.

>**Reviewer's Comment:** Some mathematical notations are not defined, e.g., what is n on line 111?

**Response:** Thank you for pointing this out. We have included a table of notation in the appendix and clarified the definition of $n$ in the latest revision: $n:=||\mathbf{x}||$ is the norm of the query $\mathbf{x}$.

>**Reviewer's Comment:** The literature survey on sparsity in deep learning is weak, and the related work discussion should be in the main paper.

**Response:** We appreciate the reviewer's concern. However, we believe a broader survey of "sparsity in DL" is beyond the scope of our work. This work is primarily related to attention and transformers. The current survey, based on the efficient transformer review [Tay 2022] and many other efficient transformer papers published in the past three years, should suffice. Nonetheless, we are open to specific suggestions on what additional literature should be incorporated.

[Tay 2022] Tay, Yi et al. "Efficient Transformers: A Survey." ACM Computing Surveys 55 (2020): 1 - 28. (arXiv:2009.06732)

>**Reviewer's Comment:** The paper contains typos and needs proofreading.

**Response:** Thank you for your careful proofreading. We have fixed all identified typos and grammatical errors.

>**Reviewer's Comment:** The paper's potential impact is arguable in its current state.

**Response:** We appreciate the reviewer's concern about the potential impact of our paper. In response, we have taken steps to demonstrate the **practical implications** of our work by applying the proposed model in **two additional real-world experiments** in the latest revision (Sec. G.5). These experiments use the sparse Hopfield model within transformer-based models for distinct tasks, including multivariate time series prediction and neural machine translation.
The results from these experiments show that our proposed Sparse Hopfield model not only enhances the performance of transformer-based deep learning models consistently, but also achieves state-of-the-art results. On the **theoretical front**, our work introduces a principled construction of the energy function, achieved through the convex conjugation of the entropy regularizer. This unique approach sets our work apart from the Modern Hopfield Model, marking it as a core contribution of our research. In contrast to the heuristic approach to the learning rule found in the modern Hopfield network (MHN), our method provides a more rigorous construction. By utilizing the Gini entropic regularizer, we have been able to analytically derive both the energy function and the retrieval dynamics. We believe these advancements represent a significant step beyond mere incremental improvement, thereby making a fair contribution to the field. >**Reviewer's Comment:** Can you please add a nomenclature in the Appendix with all the mathematical notations and symbols? **Response:** Yes, we have added a nomenclature table in the appendix (Table 3 in the latest revision) to help readers. Thank you for this suggestion! >**Reviewer's Question:** Q2) Can you please discuss and quantify (using various metrics and techniques) the sparsity patterns obtained, besides the relation with sparse attention? What do the various sparsity patterns look like before, during, and after training? Some (numerical or visual) examples would help. **Response:** Please see **Figure 2: Quantifying Sparsity in Bit Experiments** in the **attachment of the global rebuttal response** (or the last page of the latest revision). Here, we quantify and visualize the patterns' sparsity over 5 runs with sparsity metrics (Sparsity Ratio & Hoyer's Sparsity Measure). We plot the real (target) pattern, the initial patterns of the dense and sparse $\mathtt{HopfieldPooling}$ layers, and their learned patterns. From the above plots, it is easy to see that the sparse model obtains sparser patterns than the dense one. --- We sincerely appreciate the time and effort that Reviewer XuPM has invested in reviewing our paper. We have taken all comments into careful consideration and have made corresponding revisions to address the concerns raised. We hope that the revisions and clarifications provided in this response address the reviewer's concerns and make the value of our work clear. We look forward to further feedback and discussion. --- Rebuttal Comment 1.1: Title: Rating updated Comment: I thank the authors for carefully considering all my comments and for the extensive rebuttal in general. Really appreciating it. After reading the other reviews and the authors' answers, I have increased my rating to weak accept. --- Reply to Comment 1.1.1: Title: Thank You for Constructive Comments Comment: Dear Reviewer XuPM, We're happy that our revisions have met your expectations. We truly appreciate your thoughtful feedback throughout the review process! Your insights have been invaluable in refining our paper. Thank you! Best regards, Authors
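As a concrete reference for the sparsity metrics cited in the rebuttal above, here is a minimal NumPy sketch of the two measures (Sparsity Ratio and Hoyer's measure, Hoyer 2004). The function names and the near-zero tolerance are illustrative choices, not the authors' implementation:

```python
import numpy as np

def sparsity_ratio(p, tol=1e-12):
    """Fraction of (near-)zero entries in a pattern vector p."""
    p = np.asarray(p, dtype=float)
    return float(np.mean(np.abs(p) <= tol))

def hoyer_sparsity(p):
    """Hoyer's measure: (sqrt(n) - ||p||_1 / ||p||_2) / (sqrt(n) - 1);
    equals 1 for a one-hot vector and 0 for a uniform one."""
    p = np.asarray(p, dtype=float)
    n = p.size
    l2 = np.linalg.norm(p)
    if n <= 1 or l2 == 0:
        return 0.0
    return float((np.sqrt(n) - np.abs(p).sum() / l2) / (np.sqrt(n) - 1))
```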
Summary: The paper proposes a new model from the Dense Associative Memory family that uses the Sparsemax function for its energy. This model is studied and compared to the model with the softmax-based energy from both theoretical and empirical perspectives. The work argues that the sparse model outperforms the model with the softmax (dense model) in terms of the memory retrieval bound. Strengths: The work derives a novel model from the modern HN family and analyses its energy and dynamics using the convex conjugate of the sparse entropic regularizer. As far as I can tell this is a sophisticated result, which meaningfully extends the family of models that has been previously studied. The work studies theoretically several properties of the convergence dynamics in this new model. In general, results pertaining to continuous models from this family are scarce, which makes this submission even more valuable. The authors also discuss possible ways of integrating their model into existing Hopfield-like frameworks, and propose the SparseHopfield, SparseHopfieldPooling, and SparseHopfieldLayer layers based on their energy function. Empirical results look promising. The proofs in the appendices look convincing, although I have not checked them carefully. Weaknesses: It would be nice if the authors could summarize in a concise and crisp way the theoretical advantage of their model with the Sparsemax compared to previously studied modern HN models. I can see that the new model uses a very different language for its formulation (which is great), but I am struggling to understand its computational benefits compared to previously studied models. The improvements in empirical performance, which the authors present, are great, but they are not significant enough to claim superiority just based on them. I would appreciate a clear theoretical proposition here. The statement in lines 578-579 is somewhat confusing. The model studied in Ramsauer 2020 uses the softmax activation function, the model studied in Demircigil uses the exponential activation function. Several papers (incorrectly) state that these two models are identical. This is wrong, since they have mathematically distinct energy functions and update equations. Some wording in Appendix C might be somewhat confusing. The appendix makes it sound that everyone before Ramsauer 2020 studied binary Hopfield networks, but Ramsauer 2020 introduced the continuous networks. This is not quite correct. For instance https://www.pnas.org/doi/10.1073/pnas.81.10.3088 introduced continuous sparse (as opposed to dense) Hopfield networks in 1984. Krotov & Hopfield 2016 introduced continuous dense Hopfield networks (see equation 10 in their paper and most of the empirical results on MNIST), etc. Ramsauer 2020 focused on studying a specific model from that family (with softmax activation) and calculated the capacity of that specific model. But the continuous networks (both sparse and dense) were introduced in prior work. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could the authors please explain the computational benefits of their model compared to previously studied dense associative memories? 2. I am not too familiar with Gini entropy and its conjugate, but the formulas for the relationship between $F(\mathbf{p})$ and $\Psi(\mathbf{p})$ (e.g. above equation 2.4) look very similar to a Legendre transformation to me. Krotov & Hopfield 2020 also use the Legendre transformation to compute the energy function using Lagrangians.
Is there a precise relationship between the Lagrangian language of Krotov and Hopfield and the formalism developed in this submission? 3. The authors present quantitative metrics to empirically evaluate the performance of their model. This is great, but in order to get more intuition about the new model it would be helpful to show a few of the images from the experiments presented in Section 4.1. For example, pairs of initial states and final retrieved images. It would be interesting to see visually what kinds of mistakes the new network makes compared to previously studied models. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This is a theoretical work. The authors address societal impacts in the last section “Broader Impact” and Appendix A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### The revised draft is in the same Dropbox folder (in Supplementary Material PDF). --- > Q1 & It would be nice if the authors could summarize in a concise and crisp way... **Response**: Thank you for your comment. Please see below a summary of the advantages of our sparse model over its dense counterpart. 1. **Principled Derivation of Energy Function and Retrieval Dynamics (Sec. 2.1 & 2.2)**: Unlike the heuristic design of the energy function in the dense modern Hopfield model, we provide a principled derivation for the energy function and the corresponding retrieval dynamics. This is based on the insight that the convex conjugates of various entropic regularizers can result in distributions exhibiting different levels of sparsity. 2. **Faster Convergence of Memory Retrieval (Remark 2.3 in the latest version)**: Our model demonstrates faster convergence of memory retrieval, as implied by the tighter retrieval error bound. The sparser the pattern is, the faster the convergence. >**Remark 2.3 (Faster Convergence)**: Computationally, Theorem 2.2 implies that $\mathcal{T}$ requires fewer updates to reach fixed points with the same amount of error tolerance compared to $\mathcal{T}_{\text{dense}}$. In other words, $\mathcal{T}$ retrieves stored memory patterns faster and therefore more efficiently, as evidenced in Figure 2. 3. **Noise-Robustness (Remark 2.4 in the latest version)**: Our model exhibits significant advantages in scenarios with higher noise levels, as shown in Fig. 1 of the experimental section. >**Remark 2.4 (Noise-Robustness)**: Moreover, in cases of patterns contaminated with noise $\boldsymbol{\eta}$, i.e., $\tilde{\mathbf{x}}=\mathbf{x}+\boldsymbol{\eta}$ (noise in query) or $\tilde{\boldsymbol{\xi}}_\mu=\boldsymbol{\xi}_\mu+\boldsymbol{\eta}$ (noise in memory), the impact of noise $\boldsymbol{\eta}$ on the sparse retrieval error (equation 2.6) is linear, while its effect on the dense retrieval error (equation 2.6) is exponential. This suggests the robustness advantage of the sparse Hopfield model, as evidenced in Figure 1. 4. **Tighter, Sparsity-Dependent Well-Separation Condition (Thm. 3.1)**: Intuitively, the sparser the pattern is, the easier it is for the well-separation condition to be fulfilled. This means it is easier to separate and retrieve stored memories. The hardness of distinguishing patterns can be tamed by the sparsity, preventing an increase of the separation $\Delta_\mu$ with $M$ as observed in the dense Hopfield model. The benefits of sparsity arise from the increased energy landscape separation provided by the sparse Hopfield energy function. This enables the separation of closely correlated patterns, resulting in a tighter well-separation condition for distinguishing such patterns. We hope these points clarify the theoretical advantages of our model. >**Reviewer's Comment:** The improvements in empirical performance, which the authors present, are great, but they are not significant enough to claim superiority just based on them **Response:** In response to this, we have added two additional real-world experiments showing that our proposed Sparse Hopfield model not only enhances the performance of transformer-based deep learning models consistently, but also achieves state-of-the-art (SOTA) results. >**Reviewer's Comment:** The statement in lines 578-579 is somewhat confusing... **Response:** We appreciate your attention to this detail, and we fully agree with your observation.
The models referenced are indeed distinct, and many existing works state their resemblance incorrectly. Yet, our statement that “[Ramsauer 2020] generalizes the energy function of [Demircigil 2017]” was intended to highlight the relationship between the **energy functions**, not the activation functions. We did recognize that [Demircigil 2017] does not explicitly define activation functions, and its retrieval dynamics are not related to attention at all. We acknowledge that the wording in lines 578-579 may have led to confusion. To ensure clarity and accuracy, we have revised the paragraph in question. Thank you for bringing this to our attention. >**Reviewer's Comment:** Some wording in Appendix C might be somewhat confusing... **Response:** We appreciate your clarification. Our intention was not to portray the Modern Hopfield model as the first continuous Hopfield model. We understand how our wording in Appendix C could have led to confusion, and we have revised it accordingly. We have also expanded our citations to include more of the relevant prior work. Please refer to footnote 9 in the updated version of our paper for these changes. Thank you! >Q2 & I am not too familiar with Gini entropy and its conjugate... **Response:** Your observation is astute. Indeed, the relationship between Gini entropy and its conjugate, as well as the formulas we've presented (such as equation 2.4), do bear a resemblance to the Legendre transformation. Specifically, if we let `h_μ = ⟨x, ξ_μ⟩` (the hidden currents of eqn 5 of [Krotov & Hopfield 2020]), and set `L_h = Ψ*` such that `∇Ψ* = sparsemax = f_μ` and `L_v = ‖v‖²/2` such that `∇L_v = v`, we can rewrite our proposed energy function in their formulation, up to an additive constant. Furthermore, the finite difference version of their update rule aligns with our retrieval dynamics. It's clear that `L_h` corresponds to the convex conjugate of entropic regularizers. While the formulation in Krotov & Hopfield 2020 necessitates a degree of hindsight to derive different Hopfield variants, our methodology provides a systematic and principled approach for deriving `L_h` for models exhibiting varying degrees of sparsity. >Q3: The authors present quantitative metrics to empirically evaluate the performance of their model... Please see Sec. H.1 (Visualization of Experimental Validation of Theoretical Results) of the revised draft. We're open to any further questions or clarifications you might have about our work. --- Rebuttal Comment 1.1: Title: Thank you for answering my questions and clarifications. Comment: Dear Authors, thanks for answering my questions and all the clarifications. I think I understand the paper better now. --- Reply to Comment 1.1.1: Title: Thank You for Insightful Review Comment: Dear Reviewer haSG, We are happy to hear that our revisions have addressed your concerns. Thank you again for your constructive comments, which are pivotal in improving our draft and presenting a clearer view of the modern Hopfield model family. We truly appreciate your thorough review. Best, Authors
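To make the retrieval dynamics discussed in this exchange concrete, here is a minimal NumPy sketch of sparsemax (following Martins & Astudillo, 2016) and of a retrieval update of the form x ← Ξ sparsemax(β Ξᵀx), using the d×M convention for Ξ from the nomenclature table below. The function names and the number of iteration steps are illustrative assumptions, not the authors' code:

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of z onto the probability simplex
    (Martins & Astudillo, 2016); the output is sparse, unlike softmax."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, z.size + 1)
    cssv = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cssv        # coordinates kept in the support
    k_max = k[support][-1]
    tau = (cssv[k_max - 1] - 1.0) / k_max    # threshold subtracted from z
    return np.maximum(z - tau, 0.0)

def sparse_hopfield_retrieve(x, Xi, beta=1.0, steps=3):
    """Iterate the sparse retrieval dynamics x <- Xi @ sparsemax(beta * Xi^T x),
    with Xi of shape (d, M) holding the M stored patterns as columns."""
    for _ in range(steps):
        x = Xi @ sparsemax(beta * (Xi.T @ x))
    return x
```

As a quick sanity check, `sparsemax([2.0, 0.0])` returns `[1.0, 0.0]` (fully sparse), whereas `softmax` would assign nonzero mass to both coordinates.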
Rebuttal 1: Rebuttal: Dear Reviewers, We thank the reviewers for the insightful questions and reviews. We have answered all the questions and addressed all the problems in detail in the rebuttal and revision. The latest revision is readily available in the reproducibility code folder. Any changes or modifications made from the submitted version are highlighted in blue in this updated draft. In response to the reviewers' suggestions, we have made revisions to improve the overall readability of the paper. We have conducted another round of proofreading and have fixed all typos. Several sections, tables, and figures have been updated for clarity and completeness. These revisions include additional explanations, paragraphs and sections to help readers build intuition, and visual demonstrations of the retrieval task to highlight the advantages of the sparse model over its dense counterpart. Most importantly, a new section, Section G.2 in the appendix, has been added to present **two practical scenarios of integrating the Sparse Hopfield Model into transformer-based deep learning models**. --- ## Revision Details (The revised draft is in the same Dropbox folder in Supplementary Material PDF) **Major revisions include:** - **Two Additional Real-World Experiments**: - We have implemented the proposed model in two additional experiments that utilize transformer-based models for distinct tasks. - These tasks include multivariate time series prediction and neural machine translation. - Our results demonstrate that our proposed Sparse Hopfield model consistently enhances transformer-based deep learning models and achieves state-of-the-art performance. - In **over 60% of the 58 settings, the Sparse Hopfield model ranks first or second**, with 28 top and 7 runner-up performances in the time series prediction task. - [Table 1 and Table 2 of the **attachment** or Sec. G.2 of the revised draft] - **Visual demonstration of retrieval:** We provide a visual demonstration of the advantages of the sparse Hopfield model over its dense counterpart. - [Figure 1 (**Visualizing Noise-Robustness of Sparsemax and Softmax Hopfield Models**) of the **attachment** or Figure 4 in Sec. G.1 of the revised draft] - Additional evaluation and visualization of sparse patterns. - [Figure 2 (**Quantifying Sparsity in Bit Experiments**) of the **attachment**] - An opening paragraph in Section 2 that explains the construction of a well-defined Hopfield model in a manner that is easily understandable to non-specialists. - An overview of theoretical results in Sec. 2.1 to help readers build intuition. - A **nomenclature table** of notations in the appendix. - [Table 3 in Sec. A of the revised draft] - An algorithm summarizing the implementation of the sparse Hopfield layer with multiple updates. - [Algorithm 1 in Sec. D.3 of the revised draft] **Minor revisions include:** - Proofreading the manuscript and fixing all typos and grammatical errors identified by reviewers and authors. - Moving the broader impacts and future direction paragraph into the appendix to save space. - Rephrasing paragraphs regarding computational benefits into multiple remarks for enhanced readability. [Remarks 2.3 and 2.4 (lines 224-231) of the revised draft] - Adding a remark summarizing the relationship between entropic regularizers and sparse probability mappings, with citations and examples. [Remark 2.1 (lines 154-157) of the revised draft] - Rephrasing the remark below Lemma 2.1 to include intuition about $\beta$.
[Remark 2.2 (lines 188-193) of the revised draft] We hope these revisions address the reviewers' concerns and improve the overall quality of our paper. --- **What is included in the attachment:** 1. Figure 1: **Visualizing Noise-Robustness of Sparsemax and Softmax Hopfield Models** (Section 4.1 & Figure 4). 2. Table 1: **Additional Real-World Experiment: Multivariate Time Series Prediction** (Appendix G.5.1). 3. Figure 2: **Quantifying Sparsity in Bit Experiments** (Section 4.2.1). 4. Table 2: **Additional Real-World Experiment: Machine Translation on the WMT17 Dataset with Language Pairs of DE-EN, EN-DE, RU-EN, EN-RU** (Appendix G.5.2) --- **The Nomenclature Table:** [Sec. A of the revised draft] | Symbol | Description | |---|---| | $\langle\mathbf{a},\mathbf{b}\rangle$ | Inner product for vectors $\mathbf{a},\mathbf{b}\in \mathbb{R}^d$ | | $[I]$ | Index set $\{1,\cdots,I\}$, where $I\in\mathbb{N}^+$ | | $\|\cdot\|_2$ | Spectral norm, equivalent to the $l_2$-norm when applied to a vector | | $d$ | Dimension of patterns | | $M$ | Number of stored memory patterns | | $\beta$ | A scaling factor of the energy function that controls the learning dynamics | | $\mathbf{x}$ | State/configuration/query pattern in $\mathbb{R}^d$ | | $\mathbf{\xi}$ | Memory patterns (keys) in $\mathbb{R}^d$ | | $\mathbf{\Xi}$ | Shorthand for $M$ stored memory (key) patterns in $\mathbb{R}^{d\times M}$ | | $\mathbf{\Xi}^T \mathbf{x}$ | $M$-dimensional overlap vector | | $[\mathbf{\Xi}^T \mathbf{x}]_\kappa$ | The $\kappa$-th element of $\mathbf{\Xi}^T \mathbf{x}$ | | $n$ | Norm of $\mathbf{x}$, denoted as $n\coloneqq\|\mathbf{x}\|$ | | $m$ | Largest norm of memory patterns, denoted as $m\coloneqq \max_{\mu\in[M]}\|\mathbf{\xi}_\mu\|$ | | $\kappa$ | The number of non-zero elements of Sparsemax | | $R$ | The minimal Euclidean distance across all possible pairs of memory patterns | | $S_\mu$ | The sphere centered at the memory pattern $\mathbf{\xi}_\mu$ with finite radius $R$ | | $\mathbf{x}^\star_\mu$ | The fixed point of $\mathcal{T}$ covered by $S_\mu$ | | $\Delta_\mu$ | The separation of a memory pattern $\mathbf{\xi}_\mu$ from all other memory patterns $\mathbf{\Xi}$ | | $\tilde{\Delta}_\mu$| The separation of $\mathbf{\xi}_\mu$ at a given $\mathbf{x}$ from all memory patterns $\mathbf{\Xi}$ | Pdf: /pdf/45e3dbbfb95880407018ad928f0664f19ae91c67.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
POP-3D: Open-Vocabulary 3D Occupancy Prediction from Images
Accept (poster)
Summary: This paper proposes an approach, POP-3D, to achieving open-vocabulary 3D occupancy prediction from images by aligning 3D features with the 2D features from the pre-trained MaskCLIP+. Specifically, in contrast to previous occupancy prediction methods, such as TPVFormer, this framework uses two heads to predict class-agnostic occupancy and semantic-related features, respectively, and then uses the features to perform language-grounded open-vocabulary 3D perception. To evaluate the effectiveness of the method, the paper also introduces a new evaluation method tailored to the 3D occupancy prediction task. Promising experimental results show the efficacy of the proposed method. Strengths: - The basic idea is easy to follow and the illustration figures are clear. - The motivation to address the problems of open-vocabulary 3D occupancy prediction is good and the proposed method is effective. - The design of decomposed heads for geometry and semantic prediction is reasonable. - The proposed self-supervised learning method does not need 3D manual annotations while achieving comparable performance with supervised methods. - The experiments compared with the MaskCLIP+ baseline are interesting, giving a more comprehensive position of the proposed method among different solutions for the 3D occupancy prediction task. Weaknesses: - (Method) Although the proposed method can achieve open-vocabulary 3D perception, the basic capability is mainly borrowed from 2D features and the pre-trained MaskCLIP+. It is one of the reasonable solutions for open-vocabulary 3D perception, but I have to say there is nothing new in exploring how to achieve that from 3D data. I can understand that this is also related to the fact that the task setting in this paper is 3D occupancy prediction from "images", but it makes the studied problem more like a "pseudo-3D open-vocabulary" one. - (Problem Setting) While this paper shows the open-vocabulary capability of the proposed method with the zero-shot occupancy prediction results and the visualization of the case study, it still does not provide a clearly useful background for performing 3D open-vocabulary occupancy prediction in "driving scenarios" (there are typically very few categories of interest, which are almost enough for safe planning and control), or at least a good playground/annotations to study this problem. This significantly weakens the value of this paper, because the foundation of studying this problem is not very clear, especially when no official or popular benchmark exists that would make such justification unnecessary. - (Evaluation) The introduced evaluation protocol mainly addresses the weaknesses of "sparse" semantic occupancy prediction in TPVFormer and adds empty labels along a LiDAR ray. However, such problems do not exist in many recent occupancy benchmarks [1][2][3], where dense occupancy labels are available. Therefore, I think the contribution of the new evaluation protocol is a little incremental and does not solve the mentioned problem fundamentally compared to these new occupancy benchmarks. [1] Tian et al., Occ3D: A Large-Scale 3D Occupancy Prediction Benchmark for Autonomous Driving [2] Tong et al., Scene as Occupancy [3] Wei et al., SurroundOcc: Multi-Camera 3D Occupancy Prediction for Autonomous Driving Technical Quality: 2 fair Clarity: 3 good Questions for Authors: None Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors have mentioned the potential limitations and provided simple clarifications and possible solutions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. **Method novelty** We would like to thank the reviewer for the comment. Although it is true that POP-3D uses image-text aligned features provided by MaskCLIP+, we argue that exploiting such features is an interesting and non-trivial starting point for exploring the hard task of open-vocabulary 3D semantic occupancy prediction. There are indeed several recent works that distill CLIP features into another modality, e.g., audio, point clouds, etc., to inherit CLIP’s open-vocabulary capabilities. While POP-3D exploits information from LiDAR to understand the 3D geometry of a scene, the supervisory signal comes from the image domain, where large-scale language-image pre-training datasets (such as those used for training CLIP) are available; it remains an image-based method. We argue that the novelty and difficulty of our endeavor lie in reaching an effective interplay between the complementary information coming from multiple camera images, LiDAR geometry, and language-image features in the complex outdoor 3D space surrounding the vehicle, without using any manual labels. The quantitative results showing performance boosts in occupancy estimation (IoU - Figure 4b) and close performance on semantic predictions (mIoU - Figure 4b), while outperforming the original MaskCLIP+ features (Figure 4a), confirm that POP-3D does not just imitate image-language features, but reaches non-trivial 3D perception capabilities. **Motivation of open vocabulary in driving scenarios** The need for open vocabulary in driving scenarios is essential for statistical system validation. In contrast to the standard research dataset setup (majority of data for training, minority for validation and testing), testing of Advanced Driving Assistance Systems (ADAS), called system validation, requires vastly more data captures than training. As an example, a level 3 system validation requires tens of thousands of hours of real-world driving in order to satisfy requirements based on functional safety standards defined in ISO 26262. Open-vocabulary functionality is essential to understand the contents of the data for: (a) finding out whether specific long-tail object classes are present in the data, based on system requirements, and (b) recognizing the root causes of failures (e.g., specific object types on which false positives appear), in order to have feedback for model training and adaptation to improve performance and increase safety. In short, the behavior of ADAS on objects from classes unseen at training time needs to be thoroughly tested for safety reasons, and such object classes are unknown in advance. We argue that there are many non-frequent types of objects in driving scenarios that are essential for the autonomous driving perception system to work under all conditions. To support this, we follow the suggestion of the reviewer and prepare a small benchmark to study this problem. We refer the reviewer to the section “Language-driven 3D grounding & retrieval evaluation” in the General Response. So far, we have gathered seven scenes with non-frequent objects, such as excavators, a large trash bin, or a police car. We present the results in Table A of the General Response. We aim to enlarge this benchmark and publish it alongside the final paper. We hope that this can serve as a playground to study this problem. 
**Limitation of evaluation protocol and new occupancy benchmarks** We agree with the comments of the reviewer regarding the limitations of our evaluation protocol. We thank the reviewer for the pointers to the recent and elaborate benchmarks with dense labels that are relevant to our work. However, we would like to highlight that the suggested benchmarks were published on arXiv in March 2023 (SurroundOcc [3]), April 2023 (Occ3D [1]), and June 2023 (OccNet [2]), i.e., less than the two months before the submission deadline suggested by the NeurIPS guidelines. Due to differences between the voxel granularity produced by POP-3D and the various granularities used by these benchmarks, we were unable to adapt and evaluate our method for such an experiment during the short rebuttal time. However, we plan to improve our evaluation protocol by leveraging these new findings and to conduct evaluations on them for the updated version of the paper. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: I acknowledge that I have read the authors' rebuttal and the other reviews. Thank you for addressing my concerns. I will keep my score and give borderline acceptance in the final decision. As the other reviewers may have important concerns not fully addressed, I would not argue my case if other reviewers reach a different recommendation.
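The tri-modal distillation that this rebuttal describes (LiDAR points supplying the 2D-3D correspondence, 3D features pulled toward MaskCLIP+ features at the pixels those points project to) can be sketched as follows. The cosine form of the loss, the tensor shapes, and the function name are assumptions made for illustration; the paper's exact objective may differ:

```python
import torch
import torch.nn.functional as F

def language_feature_distillation_loss(voxel_feats, point_voxel_idx, clip_feats):
    """voxel_feats:     (V, C) outputs of the 3D-language head, one per voxel.
       point_voxel_idx: (N,)   index of the voxel containing each LiDAR point.
       clip_feats:      (N, C) MaskCLIP+ features at the pixels the points hit."""
    pred = voxel_feats[point_voxel_idx]          # gather per-point 3D features
    return (1.0 - F.cosine_similarity(pred, clip_feats, dim=-1)).mean()
```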
Summary: This work presents a system that is able to achieve open-vocabulary 3D semantic volumetric predictions with multi-view image inputs. Given a set of images, the system learns a 3D voxel grid which decodes into binary occupancy and visual-linguistic embeddings. The embeddings are used for retrieval and semantic segmentation given language inputs. Comparisons are presented against the fully-supervised TPVFormer and MaskCLIP+ on nuScenes. Strengths: - The paper is well-written, easy to read, and materials are clearly presented. - The presented system should be quite useful in downstream applications, especially for outdoor driving scenarios. - The presented method is reasonable, well designed, and is new. Weaknesses: - The paper only compares to MaskCLIP+ as the zero-shot baseline (TPVFormer is fully supervised), which is insufficient given the abundant related works like LeRF, Semantic-abs, OpenScene ("3D Scene Understanding with Open Vocabularies"), and CLIP-FO3D ("Learning Free Open-world 3D Scene Representations from 2D Dense CLIP"). Although these works may use NeRF/point clouds as the 3D representation, the reviewer finds the core technical challenges are shared. - Given these previous works, which are not even cited or discussed, I consider the technical novelty of this work to be low. - No qualitative comparison to MaskCLIP+/TPVFormer. Please show figures. Some minor questions - What is the design of f_3d? No explanation is given for this image->3D lifting backbone. - In line 162, what about points that are occupied but not observed in any view? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: see weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: no issue found Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. **Comparison to other open-vocabulary methods: LeRF, Semantic-Abs, OpenScene, CLIP-FO3D** Thank you for suggesting relevant works which tackle a similar problem to ours. We will integrate a discussion about those works in our related work section. For now, we would like to point out important differences between each one of them and our POP-3D method. The first, LeRF, is a NeRF-based method, which needs to be trained on each scene independently and does not generalize to novel scenes. CLIP-FO3D is a 3D network that processes point clouds and RGB images and learns to imitate MaskCLIP features; it requires pairs of RGB images aligned with dense point clouds as input during inference, whereas we produce 3D open-vocabulary semantic occupancy maps only from images. OpenScene learns a representation of 3D point clouds and is therefore limited by the availability of 3D scanners at inference time, which are too costly to be mounted on every car. Moreover, the CLIP-like features in OpenScene are obtained using either the LSeg [A] method (which requires ground-truth semantic image annotations) or the OpenSeg [B] method (which requires class-agnostic segmentation masks and image captions), which both need much more annotation than MaskCLIP+ (which requires no additional annotation) exploited in our POP-3D. Therefore, it is expected that LSeg and OpenSeg features provide better pixel-text alignment. In Table B of the General Response, we provide results using the OpenScene method (adapted to our evaluation framework and without test-time augmentations). OpenScene achieves better results than our method. Again, we would like to stress that OpenScene uses image-language encoders that were trained with much more supervision than MaskCLIP+, which we use in our experiments. At the same time, our method is agnostic to the used image-language encoder and could hence exploit encoders from LSeg and OpenSeg (as OpenScene does) to improve its performance. We would be interested to have the full reference to the "Semantic-abs" work mentioned by the reviewer, as we were not able to find it. **Technical novelty** We would like to point out the tight/impossible timeline here, which is the reason why we were not aware of, and could not compare to, the works suggested by the reviewer. Although we agree those works are relevant, two were published on arXiv in March 2023 (CLIP-FO3D, LeRF), which is less than the two months before the deadline suggested by the NeurIPS guidelines, and the code of OpenScene (presented at CVPR 2023, held in June 2023) was provided only in mid-March 2023. Nonetheless, as mentioned above, we thank the reviewer for pointing us to those works and will discuss them in the related work section of the paper. **Qualitative results for MaskCLIP+ and TPVFormer** We provide an additional qualitative comparison of our POP-3D framework to the fully-supervised TPVFormer and to MaskCLIP+ results projected from 2D onto the 3D ground-truth point cloud. These qualitative results can be found in the PDF within the General Response. **Design of f_3d image-to-3D lifting** We use the TPVFormer [26] model, which builds upon the popular BEVFormer bird's-eye-view (BEV) lifting method [37]. BEVFormer is an attention-based lifting method using points from the BEV grid as queries for fetching visual information from the image encoder features of the different cameras. TPVFormer generalizes BEVFormer to the 3D space in a computationally efficient manner via a tri-perspective view (TPV) representation. 
Three axis-aligned orthogonal TPV planes (HW, DH, WD, with H, W, D denoting the resolution of the three planes across the height, width, and depth dimensions) are learned with BEVFormer lifting. Voxels are modeled in the 3D space by summing their projected features on the three planes (see the illustrative sketch below). We give information about the setup of the 3D backbone in the "Implementation details" subsection and will expand the provided description. Additional information can be found in the original TPVFormer paper. We would like to highlight that POP-3D is not specifically designed for use with the TPVFormer backbone and could be used with other image-to-3D lifting strategies as long as they produce voxel-level representations, which is the case for all recent methods. **Occupied invisible points** Following the original implementation of TPVFormer, we set such voxels as empty. We are aware of potentially different solutions for setting such voxels as ignored, but we have not yet tested this training setup. --- Rebuttal Comment 1.1: Comment: Thanks for your reply. I've read the rebuttal and the other reviews. I'd like to raise my score to Weak Accept.
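As a rough illustration of the TPV composition step described in the response above (a voxel's feature is the sum of its projections onto the three planes), the sketch below composes a dense voxel grid from three axis-aligned plane feature maps by broadcasting and summing. Note that the real TPVFormer samples the planes with bilinear interpolation at projected coordinates; the grid-aligned sum here is a simplifying assumption:

```python
import torch

def tpv_voxel_features(plane_hw, plane_dh, plane_wd):
    """plane_hw: (H, W, C), plane_dh: (D, H, C), plane_wd: (W, D, C).
    Returns a (D, H, W, C) voxel grid where each voxel's feature is the sum
    of the features at its three axis-aligned projections."""
    hw = plane_hw.unsqueeze(0)                    # (1, H, W, C)
    dh = plane_dh.unsqueeze(2)                    # (D, H, 1, C)
    wd = plane_wd.permute(1, 0, 2).unsqueeze(1)   # (D, 1, W, C)
    return hw + dh + wd                           # broadcast sum -> (D, H, W, C)
```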
Summary: The paper presents a novel approach for predicting open-vocabulary 3D semantic voxel occupancy maps from 2D images. The objective is to enable 3D grounding, segmentation, and retrieval of free-form language queries, which is challenging due to the ambiguity between 2D and 3D representations and the difficulty of obtaining annotated 3D training data. The proposed model architecture consists of a 2D-3D encoder, an occupancy-prediction head, and a 3D-language head. It generates a dense voxel map of 3D grounded language embeddings, facilitating various open-vocabulary tasks. The authors also introduce a tri-modal self-supervised learning algorithm that leverages images, language, and LiDAR point clouds to train the model without the need for manual 3D language annotations. Strengths: ## Strengths - Tri-Modal Self-Supervised Learning: This approach enables training the model using a pre-trained vision-language model without the need for any 3D manual language annotations. - Experiment results: The authors quantitatively evaluated the model's performance in zero-shot 3D semantic segmentation using existing datasets. Additionally, the model's effectiveness in tasks such as 3D grounding and retrieval of free-form language queries is demonstrated. Weaknesses: ## Weakness: - The experiments are unable to demonstrate the advantage of the so-called “Open” vocabulary. Only a few categories are demonstrated in the experiments. If we only need to understand a few categories, it would be better to just use standard object detection methods that have tens of common categories. At least hundreds of object categories need to be demonstrated to justify the “Open”, which is a key advantage that the authors claimed. - The method needs paired LiDAR-image data. This makes it less generalizable than MaskCLIP. - Although the proposed method is better than MaskCLIP on some known categories, this might not be true for a wider range of categories unseen in training. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Could the authors justify the paper's contribution given the weaknesses? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Please see weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and suggestions. We address here each comment/question. **Evaluating open-vocabulary capabilities on hundreds of object categories** We would like to highlight that open-vocabulary approaches for image-based 2D tasks (semantic segmentation, object detection) can show results on a high number of classes also due to the existence of several richly annotated datasets with numerous classes (COCO, LVIS, Objects365), whereas most autonomous driving datasets come with a small number of annotated classes. We emphasize that for the 16 classes we consider in our evaluation, no class information was used during the self-supervised training, making this evaluation open-vocabulary. In contrast, similar approaches in the image domain achieve wider open-vocabulary detection skills by leveraging different forms of annotations, typically for a set of base classes (LSeg [A], Detic [E], OWL-ViT [F], etc.), and are then evaluated on a set of novel classes. POP-3D does not use any human annotation and learns to predict 3D semantic voxel occupancy maps only from LiDAR information and distillation from image-language features. We understand the reviewer is asking for the “open” aspect of the method to be more consistently evaluated. We were not able to gather a benchmark with hundreds of object categories in the one week of the rebuttal (no such dataset exists among automotive driving datasets with paired images and LiDAR scans). However, we produce a first small benchmark which we plan to expand. We refer the reviewer to the section "Language-driven 3D grounding & retrieval evaluation" in the General Response. We propose a new benchmark that aims to evaluate the "open" quality of methods. Although limited in terms of the number of queries, we observe that this preliminary benchmark further demonstrates the good performance of our method, which is better than that of MaskCLIP+. **Necessity of paired LiDAR-image data and generalization compared to MaskCLIP** We would like to clarify this point. Indeed, the reviewer is correct in saying that our method POP-3D is trained using LiDAR-image pairs. However, during inference, our method takes **only 2D images as input** (no LiDAR data is used as input) and produces a 3D semantic open-vocabulary feature map as output. In contrast, MaskCLIP does not need any paired LiDAR-image data for training but needs such pairs at inference time for the re-projection of features from 2D to 3D space. The MaskCLIP+ baseline is rather an artificial baseline that we proposed to emphasize that POP-3D successfully learns not only to lift and predict image-language features in the 3D space but also to leverage its 3D perception skills in order to produce better predictions. We observe that POP-3D outperforms MaskCLIP+, which needs LiDAR at inference time. We would like to stress that the setup of POP-3D reflects current ways of capturing datasets for autonomous driving [G,H,I,J] and also industry standards, where it is common to have paired image-LiDAR data during the initial data capture in order to validate the perception algorithms (such as depth prediction from a camera validated by ground-truth measurements from a LiDAR scanner). Such data can be captured using only a small fleet of cars. On the other hand, it is expensive to have LiDAR scanners mounted on every deployed car due to the high price of the sensor. 
**Performance on categories unseen in training** We would like to clarify that during training, POP-3D does not receive any explicit information about the “known” categories; the model “sees” the pixels of the objects in the images, their corresponding LiDAR points, and MaskCLIP+ features, but no labels. These categories are not known by the model and are given to it only at inference time in the form of text embeddings. We address the lack of comparison on long-tail classes by collecting a new benchmark for open-vocabulary language-driven 3D grounding and retrieval. We refer the reviewer to the corresponding section in the General Response. Table A there provides results with our method and MaskCLIP+. Our method achieves better retrieval performance. Regarding object categories that are not seen at all during training, i.e., not present in the images of the nuScenes dataset, we hypothesize that given the performance boosts over the MaskCLIP+ baseline, POP-3D learns to exploit both the structure of the CLIP feature space and the 3D world layout. As a result, it can deal to some extent with such categories depending on how far they are from the nuScenes data distribution. [E] Zhou et al., Detecting Twenty-thousand Classes using Image-level Supervision, ECCV 2022 [F] Minderer et al., Simple Open-Vocabulary Object Detection with Vision Transformers, ECCV 2022 [G] Sun et al., Scalability in perception for autonomous driving: Waymo open dataset, CVPR 2020 [H] Geiger et al., Are we ready for autonomous driving? The KITTI vision benchmark suite, CVPR 2012 [I] Behley et al., SemanticKITTI: A dataset for semantic scene understanding of lidar sequences, ICCV 2019 [J] Mao et al., One million scenes for autonomous driving: ONCE dataset, NeurIPS Datasets and Benchmarks 2021 --- Rebuttal Comment 1.1: Title: Response to the Authors Comment: I appreciate the authors' response and clarification. This paper has explored a new task where the spatial information from LiDAR and the semantic information from CLIP are distilled into a purely image-based network. Although there are some problems regarding the usefulness and practicality of the new task, I would still encourage this kind of work and its new exploration. I have raised my score.
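The inference-time mechanism described above (categories supplied only as text embeddings) reduces to a cosine-similarity lookup between voxel features and CLIP text embeddings. A minimal sketch, assuming L2-normalized CLIP-style features; the function name and shapes are illustrative:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def zero_shot_voxel_labels(voxel_feats, text_feats):
    """voxel_feats: (V, C) language-aligned features from the 3D-language head.
       text_feats:  (K, C) CLIP text embeddings of K category descriptions.
       Returns the index of the most similar text query for each voxel."""
    v = F.normalize(voxel_feats, dim=-1)
    t = F.normalize(text_feats, dim=-1)
    return (v @ t.T).argmax(dim=-1)    # (V,) per-voxel class indices
```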
Summary: This paper proposes a camera-only open-vocabulary 3D occupancy prediction method, named POP-3D. POP-3D consists of a 2D-3D encoder and 3D language heads, which combine open-vocabulary segmentation with 3D occupancy prediction. The proposed method can be trained in a self-supervised manner by leveraging the image, language, and LiDAR modalities. Experiments on the widely used nuScenes benchmark demonstrate the effectiveness of the proposed method. Strengths: [1] The presentation of this paper is clear. [2] The motivation of this paper makes sense. [3] The proposed method has practical value in auto-labeling systems. Weaknesses: The paper is well-written overall. One of my concerns is that the evaluation protocol proposed by the authors may not accurately reflect the quality of occupancy prediction. Would you please provide more details about the ground-truth generation? (e.g., how many LiDAR sweeps have you used? The LiDAR used in the nuScenes dataset is sparser than others.) I am concerned that inaccurate evaluations may lead to incorrect conclusions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please see weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. We address here the concerns individually. **Details about ground-truth generation** We happily provide more details about the ground-truth generation. We use a single LiDAR sweep (i.e., only a single LiDAR point cloud from a single time step without point cloud aggregation). Then we consider the volume $[-51.2\,m,+51.2\,m]\times[-51.2\,m,+51.2\,m]\times[-5\,m,+3\,m]$ around the car and voxelize the space with cubic voxels of 1m size (see Sec. 4.1/Implementation details). Following [7], we cast rays from the LiDAR sensor mounted on the car to the points in the LiDAR point cloud (see Figure 3 in the paper for visualization). We use the same assumption as [7], i.e., we consider the space along the ray, before the measured point, as empty. We proceed with setting all the voxels that contain at least one point from the LiDAR point cloud as “occupied”. The remaining voxels are ignored, as we cannot be sure whether they are empty or occupied. We understand the concern of the reviewer about the sparsity of the nuScenes LiDAR, which has only 32 beams, hence a low vertical density. However, we argue that this does not lead to inaccurate occupancy predictions because the resolution in the horizontal direction is sufficient and allows the sensor to capture all objects except extremely thin ones.
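A hedged sketch of the ground-truth generation procedure described in this rebuttal: voxels containing a LiDAR return are labeled occupied, voxels crossed by a sensor-to-point ray before the return are labeled empty, and everything else is ignored. The grid shape, the point-sampling along rays (a coarse stand-in for exact voxel traversal), and the function names are illustrative assumptions:

```python
import numpy as np

def occupancy_ground_truth(points, origin, voxel_size=1.0,
                           pmin=(-51.2, -51.2, -5.0), shape=(103, 103, 8),
                           samples_per_ray=128):
    """points: (N, 3) LiDAR returns; origin: (3,) sensor position (ego frame).
    Returns a voxel grid labeled 1 (occupied), 0 (empty), -1 (ignored)."""
    points = np.asarray(points, dtype=float)
    origin = np.asarray(origin, dtype=float)
    pmin = np.asarray(pmin, dtype=float)
    dims = np.asarray(shape)
    labels = -np.ones(shape, dtype=np.int8)

    def to_index(xyz):
        idx = np.floor((xyz - pmin) / voxel_size).astype(int)
        keep = np.all((idx >= 0) & (idx < dims), axis=-1)
        return idx[keep]

    # Free space: sample along each sensor->point ray, excluding the endpoint.
    t = np.linspace(0.0, 1.0, samples_per_ray, endpoint=False)[:, None, None]
    ray_pts = origin + t * (points - origin)        # (S, N, 3)
    idx = to_index(ray_pts.reshape(-1, 3))
    labels[idx[:, 0], idx[:, 1], idx[:, 2]] = 0

    # Occupied: voxels containing at least one return override "empty".
    idx = to_index(points)
    labels[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return labels
```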
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable comments, which will allow us to improve the quality of the paper. We are glad to note that they appreciated the clarity [*nph2, yGBj, Vrgc, q7fa*] of the manuscript, the well-defined motivation of the tackled problem [*yGBj, q7fa*], the originality [*nph2, Vrgc*] and soundness [*9pAV, Vrgc, q7fa*] of the method, its relevance [*9pAV, nph2, yGBj, Vrgc, q7fa*], and the quality of the results [*nph2, 9pAV, q7fa*]. In order to address several comments on the experimental results, we report here additional quantitative and qualitative comparisons. First, we propose a new *open-vocabulary language-driven 3D grounding and retrieval* benchmark, which we plan to further extend and present in the revised paper *[nph2-BA, 9pAV-BR, q7fa-BA]*. Second, we compare to more recent related works, as required by reviewers *[nph2, Vrgc]*, even if they require much more manual annotation than our method. ## Language-driven 3D grounding & retrieval evaluation As suggested by the reviewers, we have built a small (due to limited time) benchmark to evaluate the "open" capability of our method quantitatively. ### The benchmark We have collected an initial version of our *Language-driven 3D grounding & retrieval* benchmark with natural language queries. To build this benchmark, we have manually annotated 3D scenes from the validation split of the nuScenes dataset for a set of natural language open-vocabulary queries. This initial set contains 7 queries, due to the time limitation of the rebuttal, but we continue to extend it. The spatial grounding for each query was obtained by manually annotating the relevant set of voxels in the scene. The objective is, given the query, to retrieve all relevant voxels in the scene. Results are evaluated using the precision-recall curve; negative data are all the non-relevant voxels/points in the given scene. In the table below, we report the average precision (AP) for each query, further aggregated into a mean average precision (mAP) score corresponding to the mean performance across all queries. We believe this dataset, when further extended, would be an important step toward open-vocabulary analysis of driving scenes. ### How we form the text queries The text category descriptions used as queries in every experiment are shown in the table below. To get the feature corresponding to each query text description, we follow the same approach as presented in the main paper. | experiment name | category text description | MaskCLIP+ | POP-3D (ours) | | ---------------:| ------------------------: | ---------:| -------------:| | excavator-0 | excavator | 14.0 | 12.8 | | delivery-0 | delivery vehicle | 6.9 | 9.9 | | delivery-1 | | 15.2 | 15.5 | | delivery-2 | | 63.9 | 65.5 | | police-0 | police car | 54.1 | 55.6 | | trash-bin-0 | trash bin | 9.7 | 12.6 | | backhoe-0 | backhoe | 3.3 | 4.1 | | **mean** | | **23.9** | **25.1** | **Table A: AP results on our Language-driven 3D grounding & retrieval benchmark** ### Discussion about the results We compare the performance of our method with MaskCLIP+ and report results in the table above. Our approach exhibits superior AP outcomes for 6 out of the 7 natural language queries. Overall, our POP-3D method attains a mAP of 25.1, surpassing MaskCLIP+'s mAP of 23.9. ## Comparison to more related works We extend Fig. 4 of the main paper with more related works, which we present below in Table B. We additionally evaluate the ODISE (CVPR'23) and OpenScene (CVPR'23) methods on our task. 
Both methods can only be evaluated with the LiDAR-based evaluation because they do not produce 3D occupancy. We denote by "2D->3D" in the table the setup where we assign 2D features to the 3D points projected into the images. We see that both ODISE and OpenScene outperform previously reported results. However, they both use manual annotation while we don't. For instance, ODISE requires panoptic segmentation annotations for training, while OpenScene uses features from either LSeg [A] or OpenSeg [B], which are two image-language encoders that are trained with supervision. We compare here to OpenScene with OpenSeg [B] (denoted "OpenScene-OS"), which requires both class-agnostic segmentation masks and image captions. Note that we cannot evaluate MaskCLIP+, ODISE, and OpenScene on the task of image-based 3D occupancy prediction since they work either only in the 2D space (MaskCLIP+ and ODISE) or directly in the space of 3D point clouds (OpenScene), and therefore do not predict occupancy from images. | Method | Type of supervision | LiDAR-based evaluation | 3D occupancy prediction from images (mIoU/IoU) | | -------------------------------- | --------| ---------------------- | ----------------------- | | MaskCLIP+ (2D->3D) | none | 23.0 | not adapted | | POP-3D (ours) | none| 26.4 | 16.7 / 37.7 | | TPVFormer (supervised) | full LiDAR segmentation | 31.3 | 21.3 / 26.2 | | ODISE (2D->3D) | panoptic masks annotations | 34.7 | not adapted | | OpenScene-OS (ensemble of 3D and 2D->3D features) | w. OpenSeg encoder trained w. class-agnostic seg. masks & image captions | 38.8 | not adapted | **Table B: Extension of Fig. 4 of the main paper with more related works** [A] Li et al., Language-driven Semantic Segmentation, ICLR 2022 [B] Ghiasi et al., Scaling Open-Vocabulary Image Segmentation with Image-Level Labels, ECCV 2022 Pdf: /pdf/21d4769962699db85ecfe87409277960f1b50116.pdf
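The per-query average precision reported in Table A can be computed over voxels as standard ranked-retrieval AP, with voxel-to-query similarities as scores. The following sketch reflects our reading of the protocol described above, not the authors' exact evaluation code:

```python
import numpy as np

def average_precision(scores, relevant):
    """scores:   (V,) similarity of each voxel to the text query.
       relevant: (V,) binary mask of the manually annotated relevant voxels."""
    order = np.argsort(-scores)                      # rank voxels by score
    rel = np.asarray(relevant, dtype=float)[order]
    hits = np.cumsum(rel)
    precision_at_k = hits / np.arange(1, rel.size + 1)
    return float((precision_at_k * rel).sum() / max(rel.sum(), 1.0))
```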
NeurIPS_2023_submissions_huggingface
2023
Summary: This work presents a novel approach for zero-shot 3D occupancy prediction for autonomous driving applications. The key idea consists of three parts: - a 2D-3D encoder to create a 3D voxel feature grid, based on TPVFormer - An MLP-based voxel occupancy predictor that is class agnostic - an MLP-based feature encoder, called a "3D language head", that takes the TPVFormer features that have positive occupancy and learns to predict the corresponding features of an off-the-shelf MaskCLIP+ model. In effect this is distilling the 2D MaskCLIP+ features into 3D. At inference time LiDAR information is not necessary because occupancy is predicted directly from RGB images. Further, the model can be used zero-shot at inference time by computing the similarity of the output of the 3D language head features with language features from a CLIP text encoder. During training, the only direct supervision needed is class-agnostic occupancy derived from the LiDAR data; the distillation loss only needs an off-the-shelf vision-language model. The proposed approach obtains ~78% of the performance of the fully supervised TPVFormer, while needing no class label supervision. Strengths: Originality - While there are a variety of works trying to leverage vision-language models, this is the first work to show promising results on open-vocabulary 3D occupancy prediction. The proposed model is simple but highly effective, taking advantage of existing LiDAR datasets, the TPVFormer architecture, and off-the-shelf vision-language models. The simplicity of this technique opens up the potential for removing the dependence on expensive supervision in this problem domain and instead focusing on extracting information from existing vision-language models. Quality - The main claim of the paper is that the proposed system can perform zero-shot semantic occupancy prediction without using any 3D semantic labels at training time. This paper provides sufficient evidence for this claim by evaluating on nuScenes and obtaining ~78% of the performance of the fully supervised model. This highlights the power of vision-language models. - The related work is detailed and positions the proposed work well relative to previous techniques in semantic 3D occupancy prediction, multi-modal learning, and open-vocabulary segmentation. - Section 4.3 presents empirical evidence for the choice of hyperparameters, and the effect of image resolutions. Clarity - The paper writing is excellent; it is easy to follow and understand. Figure 2 is particularly helpful for following the technical description of the model in Section 3. Weaknesses: Quality - small scale evaluation - The evaluation is done only on one dataset. This is understandable given the computation costs for training and evaluating models, but having evaluation on more than one dataset would significantly improve the paper. A potential candidate dataset is https://pandaset.org/ which contains LiDAR and 5 wide-angle cameras. - Evaluation is done only against one open-vocabulary baseline and one supervised baseline. While TPVFormer appears to be the only reasonable fully supervised choice, there are baselines like ODISE (https://github.com/NVlabs/ODISE) or OVSeg (https://jeff-liangf.github.io/projects/ovseg/) for open-vocabulary segmentation, whose predictions could get back-projected into the class-agnostic occupancy predictions to obtain 3D semantic labels. These are relevant baselines in light of the comparison with MaskCLIP. 
- Open-vocabulary recognition is powerful because it allows tackling difficult long-tailed scenarios. Currently, such a benchmark doesn't exist for occupancy prediction, limiting the evidence to qualitative examples if we want to evaluate performance beyond the classes defined in nuScenes. It would be a great improvement to the paper if there were some annotated samples to set a quantitative benchmark. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Why is evaluation limited to only one dataset when there are other datasets that can be used for evaluation? - Why aren't existing open-vocabulary segmentation models, back-projected into 3D, used as baselines in addition to MaskCLIP? This is particularly important given the small-scale experiments. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations and the potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
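The 2D->3D baseline suggested in this review (back-projecting a 2D open-vocabulary model's predictions into class-agnostic occupancy) could look roughly as follows. The camera conventions (intrinsics K, a 4x4 world-to-camera extrinsic matrix) and all names are assumptions for illustration, not a reference implementation:

```python
import numpy as np

def backproject_2d_labels(voxel_centers, labels_2d, K, T_cam_world, ignore=-1):
    """voxel_centers: (V, 3) centers of occupied voxels in the world frame.
       labels_2d:     (H, W) per-pixel class map from a 2D open-vocab model.
       Returns a (V,) label per voxel; `ignore` where the voxel is not visible."""
    V = voxel_centers.shape[0]
    homo = np.concatenate([voxel_centers, np.ones((V, 1))], axis=1)  # (V, 4)
    cam = (T_cam_world @ homo.T).T[:, :3]                            # camera frame
    out = np.full(V, ignore, dtype=int)
    front = cam[:, 2] > 1e-3                       # keep points in front of camera
    uvw = (K @ cam[front].T).T                     # project with intrinsics
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = labels_2d.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(front)[ok]
    out[idx] = labels_2d[uv[ok, 1], uv[ok, 0]]     # note (row, col) = (v, u)
    return out
```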
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and suggestions. We address each comment/question here. **Datasets for evaluation** In this work, we leverage TPVFormer for the multi-camera 2D-to-3D projection. To show the effect of our contribution compared to a fully-supervised setting in a fair way, we follow the TPVFormer setup (training and evaluating on the nuScenes train and val sets, respectively) and train with our self-supervised objective. The design and tuning of neural BEV projection methods (used in TPVFormer) is strongly dependent on the camera rig setup of the ego-vehicle (the cameras' intrinsics, extrinsics, type, and number, and the LiDAR density). Switching to another dataset with different sensor configurations would require specific tuning and training time. The rebuttal period was too short for such an experiment, but we would gladly consider the suggestion for extensions of this work. **Additional baselines** We thank the reviewer for the suggestion. We note that ODISE [C] (which was published, with code made available, two months prior to the NeurIPS submission deadline) requires panoptic segmentation mask annotations for its training, and OVSeg [D] requires a segmentation model already pre-trained with segmentation mask annotations. So, both of them use a labor-intensive form of supervision that renders a direct comparison with our unsupervised POP-3D method unfair towards our work. Yet, in Tab. B of the General Response, we provide new results with ODISE. ODISE obtains 34.7 mIoU, whereas we get 26.4 mIoU without human-label supervision. We will include this comparison in the final version of our paper. Regarding OVSeg, we did not have enough time to run it in our evaluation framework, but we would be happy to include it too in the final version of our paper. **Quantitative benchmark** We thank the reviewer for the comment. As suggested, we provide in Table A of the General Response an open-vocabulary natural-language 3D grounding and retrieval benchmark, which is limited for now but which we will continue to extend. We can see that our method achieves better results than MaskCLIP+. [C] Xu et al., Open-vocabulary panoptic segmentation with text-to-image diffusion models, CVPR 2023 [D] Liang et al., Open-vocabulary semantic segmentation with mask-adapted CLIP, CVPR 2023 --- Rebuttal Comment 1.1: Title: Thank you for your answers Comment: Thank you for the response. I appreciate the additional baseline results for ODISE and the commitment to add OVSeg later. The addition of the quantitative benchmark is a very important improvement to the paper, and it's great to see that the proposed method outperforms MaskCLIP+. I'm looking forward to seeing the completed version of the benchmark. If possible, it would be great to see the number of scenes/labeled examples behind the numbers in the general response. Due to these added results, I will increase my rating to a weak accept.
Automatic Clipping: Differentially Private Deep Learning Made Easier and Stronger
Accept (poster)
Summary: The paper proposes a new alternative version of the DP-SGD algorithm in which the gradients are normalized. This new method allows for a proof of convergence of the gradient when a small stability constant is added to the rescaling factor. This work eliminates the need for a costly 2D grid search over the learning rate and clipping constant when training DP models. The authors then proceed to run experiments showing that their method equals or outperforms Abadi's DP-SGD algorithm. Strengths: The article eliminates a costly operation in DP training and provides strong evidence regarding the convergence of the method. The exhaustive list of datasets on which the method was tested testifies to how only minimal changes to existing implementations are needed to apply this method. ### Originality The method is original. ### Quality The experiments presented are very complete and yield high-quality results. Also, various supporting GitHub repositories and resources are mentioned, making these results reproducible. ### Clarity The article is clear and well written. ### Significance The original motivation of the article is clear. Weaknesses: ## Sensitivity of the hyperparameter optimization The results are not clear enough to indicate whether the proposed DP training method is superior to the rescaled DP-SGD method, which impedes the significance of the article. If the authors manage to prove that their method is less expensive than the rescaled DP-SGD trick, then the contribution is significant. The article lacks a conclusive experiment or statement proving that hyperparameter optimization is easier with this method than with the rescaled version of Abadi's DP-SGD. The paper shows in its appendix H that the training process of the AUTO-S method is relatively robust to the choice of $\gamma$ on NLP tasks. However, the paper doesn't compare the sensitivity of the **AUTO-S method** to the $\gamma$ factor with the sensitivity of the **rescaled Abadi clipping** to the $R$ factor. In my opinion this is an important experiment to run, since it would justify using this method instead of the widely known rescaled DP-SGD version introduced in De et al. [15]. This could improve the impact and the significance of the contribution. ### Actionable feedback > However, R remains and is expensive to tune (l223) This sentence in particular is a bit of an overclaim in my opinion; see my comments below. **In your experiments you specify regarding the range of $\gamma$** > "under which a wide range of $0.0001 < \gamma < 1$ gives similar accuracy" and > "Note that the largest good $\gamma$ is 1000 times bigger than the smallest good $\gamma$" **In the comparison to related works on clipping** You state "Similarly (De et al.) re-parametrizes the per-sample gradient clipping to mimic normalization. However R remains and is expensive to tune (see figure 8 of De et al.)". However, this Fig. 8 shows that the re-parametrized per-sample gradient clipping seems to stay stable over a similar factor of approximately 1024 on the clipping norm, on an even harder dataset. Even though their experiments are not extensive enough to prove a superior efficiency of the rescaled DP-SGD method over yours, they are not conclusive enough for you to assume that your method is easier to tune. If you add this missing experiment, or if you modify this claim, I would consider improving my rating.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: ### Can you provide tangible proof that your method is significantly less expensive to tune than the rescaled DP-SGD algorithm? The figure you cite from the concurrent work of De et al. in section 4.4 does not seem complete enough and gives no indication of the strain on resources the rescaled DP-SGD might still induce. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The method is original. However, some similarities can be drawn with existing concurrent approaches like Yang et al. 2022 [2206.13033], which to the best of my knowledge did not go through peer review; this work might nonetheless be worth citing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
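For reference, here is a minimal sketch of the two clipping rules under discussion, based on the formulas quoted throughout this thread: Abadi-style clipping $\min(1, R/\|g_i\|)$ and the AUTO-S normalization $1/(\|g_i\| + \gamma)$. The function names and the toy privatized-mean step are illustrative, not the authors' implementation.

```python
import torch

def abadi_clip(grads: torch.Tensor, R: float) -> torch.Tensor:
    # g_i <- g_i * min(1, R / ||g_i||): per-sample sensitivity R, and R must be tuned.
    norms = grads.norm(dim=1, keepdim=True)
    return grads * torch.clamp(R / norms, max=1.0)

def auto_s_clip(grads: torch.Tensor, gamma: float = 0.01) -> torch.Tensor:
    # g_i <- g_i / (||g_i|| + gamma): per-sample sensitivity 1, no R to tune;
    # gamma is the small stability constant that enables the convergence proof.
    norms = grads.norm(dim=1, keepdim=True)
    return grads / (norms + gamma)

def noisy_mean(clipped: torch.Tensor, sigma: float, sensitivity: float) -> torch.Tensor:
    # Privatized batch gradient: average the per-sample clipped gradients and
    # add Gaussian noise calibrated to the per-sample sensitivity.
    B = clipped.shape[0]
    return clipped.mean(dim=0) + sigma * sensitivity / B * torch.randn(clipped.shape[1])

g = torch.randn(32, 10)  # toy batch of 32 per-sample gradients in 10 dimensions
update_abadi = noisy_mean(abadi_clip(g, R=1.0), sigma=1.0, sensitivity=1.0)
update_auto = noisy_mean(auto_s_clip(g), sigma=1.0, sensitivity=1.0)
```

The hyperparameter question raised in this review is visible in the signatures: `abadi_clip` exposes `R`, while `auto_s_clip` only exposes the stability constant `gamma`, which the paper reports to be far less sensitive.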
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments! We will address the comparison to re-scaled DP-SGD in (De et al.) from the aspects of (1) convergence and hence performance, (2) better understanding of general DP optimizers, (3) saving hyperparameter tuning effort, and (4) extensive experiments. 1. Convergence Because re-scaled DP-SGD simply absorbs the clipping threshold into the learning rate, there is a one-to-one mapping between original DP-SGD and re-scaled DP-SGD. Hence the convergence is exactly the same if both DP-SGD variants are tuned optimally. In contrast, our AUTO clipping has better convergence, as illustrated in Sections 3.2 and 3.3 with clear motivation, Section 5 with theoretical proof (which is absent in De et al.), and Section 6 with empirical evidence. Our improvement is particularly attributed to the stability constant $\gamma$, which is not found in re-scaled DP-SGD. 2. Understanding of general DP optimizers De et al.'s re-scaling does not consider weight decay, which would lead to $$\text{Before re-scale: }w_{t+1}=w_t-\eta\left(\frac{1}{B}\sum_i\min(1,\frac{R}{||g_i||})g_i+\lambda w_t+\frac{\sigma R}{B}N(0,I)\right)$$ $$\text{After re-scale: }w_{t+1}=w_t-\eta\left(\frac{1}{B}\sum_i\min(\frac{1}{R},\frac{1}{||g_i||})g_i+\frac{\lambda}{R} w_t+\frac{\sigma}{B}N(0,I)\right)$$ This is proved in Theorem 1. In words, although re-scaled DP-SGD seems robust to $R$ by a factor of approximately 1024, it also secretly weakens the weight decay when using smaller $R$, as shown in their Fig. 8. Also, it is unclear how re-scaling would work with adaptive optimizers like Adam, because Adam does not absorb the clipping factor but rather cancels it with the adaptive denominator. This difference is visualized in our Figure 1 and proved in Theorem 2. In words, the understanding from re-scaling omits some important aspects when advanced techniques, like adaptivity and weight decay, are introduced to DP-SGD, which is formalized by our work. 3. Saving the hyperparameter tuning We argue that the sensitivity experiments are effectively already done, by noticing that re-scaled DP-SGD only modifies the learning rate (by absorbing the clipping norm; in De et al.'s own words, "This is a re-parameterization of Equation (2) in which the learning rate absorbs a factor of C"). Therefore, the optimal choice for re-scaled DP-SGD is the same as the optimal choice for original DP-SGD, just with a re-scaled learning rate. Hence our Figures 14 & 15 (Appendix H4) demonstrate that (re-scaled) DP-SGD can be sensitive to $R$ but AUTO DP-SGD is not sensitive to $\gamma$. We would also like to point out that De et al. explicitly claimed that they use the optimal $R=1$ for their vision models. However, we found that DP-Adam requires an optimal clipping norm $R=0.1$ on language models, i.e., RoBERTa and GPT2. This is also confirmed in "LARGE LANGUAGE MODELS CAN BE STRONG DIFFERENTIALLY PRIVATE LEARNERS". Hence, the optimal choice of $R$ does not seem robust across tasks. 4. Extensive experiments We experimented on large models and language tasks, while De et al. only experimented with computer vision datasets. Note that many techniques, like linear probing, only work for vision but not language, or vice versa. We show this is not the case for AUTO-S. Specifically, we extend to large models with over 700 million parameters (GPT2-large), for which the normalization is much more favorable because the tuning effort is huge. We will add an appendix to include this discussion.
We sincerely hope the reviewer will appreciate our contributions beyond the hyperparameter tuning and consider raising the score if satisfied. --- Rebuttal Comment 1.1: Comment: Thank you for this detailed response. I hope to see this discussion of De et al. in the camera-ready version. I have a better understanding of your contribution. I have increased my score to 7 as a consequence.
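A tiny scalar sanity check of point 2 above, with made-up numbers (our own illustration, not from the paper): if $\lambda$ is left untouched, re-parameterizing the clipping so that the learning rate absorbs a factor of $R$ reproduces the gradient term exactly but shrinks the weight-decay contribution by a factor of $R$.

```python
# One scalar DP-SGD step (noise omitted), comparing Abadi clipping with
# De et al.'s re-parameterized clipping. All values are made up.
eta, R, lam, w, g = 0.1, 0.01, 1e-4, 1.0, 5.0

clip = min(1.0, R / abs(g))                  # Abadi: min(1, R/||g||)
grad_part = eta * clip * g                   # gradient contribution
decay_part = eta * lam * w                   # weight-decay contribution

clip_r = min(1.0 / R, 1.0 / abs(g))          # re-parameterized clipping
grad_part_r = (eta * R) * clip_r * g         # learning rate absorbs a factor of R
decay_part_r = (eta * R) * lam * w           # ... but so does the decay term

assert abs(grad_part - grad_part_r) < 1e-12  # gradient terms match exactly
print(decay_part_r / decay_part)             # = R: decay weakened 100x here
```

This matches the rebuttal's claim that sweeping $R$ in the re-scaled parameterization silently changes the effective weight decay, so the apparent robustness to $R$ conflates two hyperparameters.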
Summary: The author(s) proposed a new gradient clipping technique for differentially private training algorithms. It was shown that the new clipping technique is more robust to hyper-parameters and can save time on hyper-parameter tuning. Convergence of the proposed method under a gradient symmetry assumption is developed, and experiments on both vision and language tasks are conducted to evaluate the proposed method. Strengths: - The paper is well-written and easy to follow in general. - The experiments are thorough and the algorithm seems to be useful on the practical side. Weaknesses: - The convergence analysis is based on a gradient symmetry assumption, therefore the utility bound in Theorem 4 is not really comparable to prior works. Please also see my questions below. Overall, the theoretical contribution seems limited. However, given the strong empirical performance of the proposed method, this might be acceptable. - Some related work should be added [1,2,3]. [1] Clip21: Error Feedback for Gradient Clipping, https://arxiv.org/abs/2305.18929 [2] Improved Convergence of Differential Private SGD with Gradient Clipping, https://openreview.net/forum?id=FRLswckPXQ5 [3] Normalized/Clipped SGD with Perturbation for Differentially Private Non-Convex Optimization, https://arxiv.org/pdf/2206.13033.pdf Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Consider the example with 3 data points, $f(x) = f_1(x) + f_2(x) + f_3(x)$, where $f_1(x) = 2x^2$ and $f_2(x) = f_3(x) = (x-1)^2$. Then the solution is at $x^* = 0.5$, but this point is not stationary for Algorithm 1, because $g_1(x^*) = 2, g_2(x^*)=g_3(x^*)=-1$, which implies $g_1(x^*)/\|g_1(x^*)\| = 1$ and $g_2(x^*)/\|g_2(x^*)\|=g_3(x^*)/\|g_3(x^*)\|=-1$, so the normalized gradients sum to $-1 \neq 0$. Algorithm 1 will fail on this example; does the author(s) agree? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I do not find any negative societal impact of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
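A quick numeric check of the reviewer's example, evaluating the three per-sample gradients at $x^* = 0.5$:

```python
x = 0.5
g = [4 * x, 2 * (x - 1), 2 * (x - 1)]    # grads of 2x^2, (x-1)^2, (x-1)^2
print(sum(g))                             # 0.0  -> x* is stationary for f
print(sum(gi / abs(gi) for gi in g))      # -1.0 -> nonzero after per-sample
                                          #         normalization (AUTO-V)
```

The raw gradients cancel at $x^*$, but the sign-normalized gradients do not, which is exactly the bias the reviewer is pointing at.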
Rebuttal 1: Rebuttal: Thank you for your enlightening review. We agree that Theorem 4 is not directly comparable to prior DP works, because our proof techniques are novel and the settings are different. We hope the reviewer agrees that our setting (which assumes symmetric gradient noise, not a symmetric gradient) is more realistic, as noted in Footnote 6, and consistent with most non-DP deep learning literature. We would also like to point the reviewer to the "Noise structures" section in [1*] for a literature survey on the symmetry of gradient noise, where the gradient noise is proportional to the oracle Hessian matrix, which by definition is symmetric. [1*] Wu, Lei, Mingze Wang, and Weijie Su. "The alignment property of SGD noise and how it helps select flat minima: A stability analysis." Advances in Neural Information Processing Systems 35 (2022): 4680-4693. To provide a comparison to standard non-DP SGD, we devote Appendix D to studying the convergence under the same setting as our DP-SGD analysis. We will add the suggested references in the camera-ready version; in particular, we will add a short paragraph comparing your 3rd reference to ours: (1) The analyses are based on different assumptions. Yang et al. use a relaxed Lipschitz smoothness; instead, we use the symmetric gradient noise assumption. (2) Our experiments are more comprehensive, covering over 10 tasks including the scalability of DP GPT, while they only cover 2 smaller models. Lastly, to address your example question, we agree that AUTO clipping does not find this minimum. However, this example should not count against AUTO clipping, for several reasons. (1) Per-sample clipping is known to introduce some bias; e.g., Abadi's clipping will also not find this minimum for R=1, because $C_1g_1(x^*)=1, C_2g_2(x^*)=C_3g_3(x^*)=-1$. It will find the minimum when R is large and no clipping occurs, but then the noise $\sigma R\mathcal{N}(0,I)$ is also large, rendering much worse performance. Hence the issue is not unique to AUTO clipping. (2) This example seems not to follow the i.i.d. data assumption, a common one in the deep learning literature, in that $f_1,f_2,f_3$ should take the same form, like $f_i=L(x_i;w)$. In our Section 3.3, we also provide an example by specifying the data distribution and drawing a large number of datapoints. We would love to extend the discussion if the reviewer is interested in such examples. We hope the reviewer can raise the score if satisfied! --- Rebuttal Comment 1.1: Title: Thanks for your rebuttal Comment: I would like to thank the author(s) for the detailed rebuttal. I think the example raised in my question is still a valid toy example of the empirical-risk minimization problem (finite-sum); it can be sampled from a mixture of Gaussian distributions. Anyway, I still appreciate the empirical performance of the proposed method and keep my score unchanged.
Summary: The paper discusses a way to control the sensitivity of DP-SGD with a method that does not require clipping. Strengths: - The motivation for the paper is clear - The authors make a compelling argument suggesting why their method should work - They have sound theoretical guarantees and experiments - The idea is simple yet fully developed - The impact this could have on private learning is huge - Experimental results are compelling Weaknesses: N/A Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: N/A Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you! Happy to extend the discussion any time.
Summary: This paper introduces 'automatic clipping' to replace the usual clipping operation in DP-SGD. The challenge of usual clipping is choosing a good threshold 'R', especially for deep learning models. The authors claim that automatic clipping maintains the same level of privacy and computational efficiency as existing DP optimizers such as DP-SGD, DP-Adam, and DP-LAMB, but without the need for any DP-specific hyperparameters. In fact, automatic clipping uses a kind of normalization to bound the sensitivity of individual gradients. Additionally, they provide a thorough convergence analysis for DP-SGD with automatic clipping in the non-convex setting, demonstrating that it can match the convergence rate of standard SGD under certain conditions. The authors validate their proposal by showing that automatic clipping either matches or surpasses the state-of-the-art performance on a variety of language and vision tasks. Strengths: 1. **Simplicity**: The proposed method of automatic clipping significantly simplifies the process of DP training by eliminating the need for tuning DP-specific hyperparameters, thereby making DP as user-friendly as standard non-private training. 2. **Performance**: The paper demonstrates that the automatic clipping method is either on par with, or better than, existing state-of-the-art techniques in a variety of tasks. This suggests that the method does not compromise performance for ease of use. 3. **Rigorous Analysis**: The authors provide a detailed convergence analysis of automatic DP-SGD in a non-convex setting, showing that it can match the convergence rate of standard SGD under the symmetric gradient noise assumption on per-sample gradients. Weaknesses: **Misleading justification**: In Figure 2, the paper presents the dot products $\langle g_i, \sum_i g_i \rangle$. First, it is not clear that this makes sense unless the dot product is properly normalized. Moreover, the distribution of $\langle g_i, \sum_i g_i \rangle$ does not necessarily align with good performance. The illustration in Figure 2 is rather misleading. **Too strong assumptions**: The proposed automatic clipping method's performance is based on a symmetric gradient noise assumption on the per-sample gradients. This assumption is too constrained and does not hold in practice. **Overclaim**: The title of Section 4.3, "Automatic clipping is equally private and maximizes utility", is somewhat overclaiming. It is not carefully argued why automatic clipping can "maximize utility" given a privacy budget. **Missing reference**: The paper missed a very related reference, Yang et al., "Normalized/clipped SGD with perturbation for differentially private non-convex optimization", which also studied DP-SGD with normalization. The paper should include a careful comparison with it. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: See weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We will address your comments point by point. Misleading justification & Overclaim: the dot product is not normalized, and it is indeed aligned with good convergence in the following sense. Under our non-convex but Lipschitz-smooth setting (see Appendix C.2), $L_{t+1} - L_t \leq g_t^\top(w_{t+1} - w_t) + \frac{L}{2}||w_{t+1} - w_t||^2$, where $L_t=\sum_i L_i(w_t), g_t=\sum_j g_j(w_t)$. According to DP-SGD, we have $w_{t+1} - w_t=-\eta\left(\sum_i C_i g_i+\sigma N(0,I)\right)$, and the loss decrease (as stated above Equation (C.2)) is upper bounded by $-\eta \langle\sum_j g_j, \sum_i C_i g_i+\sigma N(0,I)\rangle+L\eta^2(B^2+\sigma^2 d)$. Notice that the second term is independent of clipping, and the first term (ignoring $\langle\sum_j g_j, \sigma N(0,I)\rangle$ for its zero expectation) can be written as $-\eta \langle \sum_i C_i g_i, \sum_j g_j\rangle$, which is exactly the dot product in Section 3.2 that we want to maximize. We thank the reviewer for bringing this up and will surely add this explanation in the camera-ready version. Too strong assumptions: We emphasize that the symmetric noise assumption is actually relaxed from the common non-DP deep learning literature (see the references in Footnote 6, and also the "Noise structures" section in [1*] for a literature survey on the symmetry of gradient noise), and it has been empirically verified in DP deep learning by [14]. We agree that the assumption can be strong and not satisfied in practice, but it is necessary and insightful, shedding light on the training behaviors below Theorem 5 for practitioners. [1*] Wu, Lei, Mingze Wang, and Weijie Su. "The alignment property of SGD noise and how it helps select flat minima: A stability analysis." Advances in Neural Information Processing Systems 35 (2022): 4680-4693. Missing reference: We thank the reviewer for mentioning this paper. It is a concurrent work that went public (on arXiv) after ours. To comply with the NeurIPS anonymous submission policy, we cannot provide the timestamp. We will add the comparison in the camera-ready version, stating that (1) our algorithms are the same; (2) the analyses are based on different assumptions: Yang et al. use a relaxed Lipschitz smoothness, whereas we use the symmetric gradient noise assumption; (3) our experiments are more comprehensive, covering over 10 tasks including the scalability of DP GPT, while they only cover 2 smaller models. We are happy to extend the discussion and hope the reviewer can raise the score if satisfied! --- Rebuttal Comment 1.1: Title: Thanks for the response. The explanation is still misleading. Comment: Dear authors, Thanks for the response. I find the explanation still misleading because the upper bound $-\eta \langle \sum_i g_i, \sum_i C_i g_i + \sigma N(0,I)\rangle + \frac{L}{2}||\sum_i C_i g_i + \sigma N(0,I)||^2$ is not carefully analyzed. Your claim that "the first term gives the term you want to maximize by ignoring the noise-gradient inner product, and the second term is independent of clipping" is not correct. It is obvious that the second term depends on whether you are using clipping or normalization. One can consider a simple set-up: if $g_i$ is extremely small, $C_i$ under normalization will be much larger than 1 (the clipping threshold). That means that when you do normalization, although the first term is maximized, the second term may also be (unwantedly) magnified significantly. Overall, it is not clear whether the upper bound is increased or decreased.
--- Reply to Comment 1.1.1: Title: Clarification on the second term Comment: Thank you for the discussion! We would like to clarify why we may treat the second term as independent of whether we use clipping or normalization. Let's consider the re-parameterized clipping as in [16] De et al.: $C_i=\min(1/||g_i||,1/R)$, and AUTO-V: $C_i=1/||g_i||$ (the analysis is the same if we consider Abadi's clipping $C_i=\min(R/||g_i||,1)$ and the $R$-dependent AUTO-V $C_i=R/||g_i||$). In our proof above Equation (C.2), with $Z=N(0,I)$, we used $$\frac{1}{2}||\sum_i C_i g_i+\sigma Z||^2\leq ||\sum_i C_i g_i||^2+\sigma^2||Z||^2\leq B\sum_i ||C_i g_i||^2+\sigma^2||Z||^2$$ given by the Cauchy-Schwarz inequality. Then, in line 608, it is clear that $||C_i g_i||\leq 1$ for both clipping and normalization, and everything related to the second term follows as in the argument after Equation (C.2). Therefore, **the proof as an upper-bound guarantee is valid (though maybe not tight)**. To justify the tightness, we only need to justify the use of $||C_i g_i||\leq 1$ for clipping, which can be quite loose if most $g_i$'s are extremely small and not clipped, as you correctly pointed out. Hence the question is what proportion of $g_i$'s is clipped. Theoretically, if $R$ is small, then all $g_i$'s are clipped and $C_i=\min(1/||g_i||,1/R)=1/||g_i||$ reduces to AUTO-V. In fact, this scenario has been observed to achieve strong performance by recent works (see [16] Figure 8 and [40] Figure 8(b)). Empirically, from **Figure 12 (Appendix H)** and line 799, we observe that $30\sim100$% of per-sample gradients are clipped across different datasets. Specifically, for GPT2 training, *100% of the $g_i$ are clipped in all iterations*. We agree this proportion is certainly task-dependent, but it should provide helpful insight for viewing the second term as independent of clipping or normalization. --- Rebuttal 2: Title: Follow-up on the symmetric gradient noise assumption Comment: We would like to provide more references to justify that our assumption follows from the non-DP deep learning literature (hence it is actually less constrained). Most theoretical papers that analyze standard SGD assume that the mini-batch gradient follows $$\frac{1}{B}\sum_i g_i\sim \frac{\partial \hat L}{\partial w}+\xi(w)$$ where $\frac{\partial \hat L}{\partial w}$ is the oracle gradient with respect to the generalization loss, and $\xi$ is the random gradient noise with $\mathbb{E}[\xi]=0, \mathbb{E}[\xi\xi^\top]=\Sigma(w)/B$. That is, the per-sample gradient (by setting $B=1$) follows $$g_i\sim \frac{\partial \hat L}{\partial w}+\xi(w)$$ Next, the noise structure $\Sigma$ is proportional to the oracle Hessian matrix $H$, which is symmetric: - [2*,3*,4*] below assume $\Sigma(w)=\sigma^2 H(w)$ for some constant $\sigma$ - [5*,6*] below assume $\Sigma(w)=2L(w) H(w)$ for loss $L$ - [46, 63, 12, 70] in our submission assume $\Sigma$ is the covariance matrix of some Gaussian. **In all these cases, the gradient noise is symmetric**, which is verified in practice as well (we are happy to add references for this too). We hope this is helpful and would appreciate it if the reviewer could increase the score. [2*] Zhanxing Zhu, Jingfeng Wu, Bing Yu, Lei Wu, and Jinwen Ma. The anisotropic noise in stochastic gradient descent: Its behavior of escaping from sharp minima and regularization effects. In International Conference on Machine Learning. [3*] Stanisław Jastrzębski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, and Amos Storkey.
Three factors influencing minima in SGD. arXiv preprint arXiv:1711.04623, 2017. [4*] Zhiyuan Li, Tianhao Wang, and Sanjeev Arora. What happens after SGD reaches zero loss? A mathematical framework. In International Conference on Learning Representations, 2022. [5*] Liu Ziyin, Kangqiao Liu, Takashi Mori, and Masahito Ueda. Strength of minibatch noise in SGD. In International Conference on Learning Representations, 2022. [6*] Takashi Mori, Liu Ziyin, Kangqiao Liu, and Masahito Ueda. Logarithmic landscape and power-law escape rate of SGD. arXiv preprint arXiv:2105.09557, 2021. --- Rebuttal 3: Title: Further discussion Comment: We would like to thank the reviewer again for the comments, and we hope our response clears up your concerns. Given that the discussion period ends in 3 days, we are happy to address further questions, and if there are no more concerns, we would appreciate it if the reviewer could consider raising the score. --- Rebuttal 4: Comment: We hope we have addressed your comments, and if not, we are happy to discuss them further. Given that the discussion period ends in 1 day, we would appreciate it if the reviewer could consider raising the score. --- Rebuttal 5: Comment: Dear reviewer, Thank you again for the discussion. We hope we have addressed your concerns about the symmetric gradient noise and the tightness of the upper bound. Given that the discussion will end in an hour, please let us know of any questions you may have. It would be greatly appreciated if our response were reflected in your score.
NeurIPS_2023_submissions_huggingface
2023
Summary: The manuscript proposes an automatic clipping method for various DP algorithms. The problem is important because the performance of DP models is sensitive to the choice of clipping threshold, yet there is no theoretical guidance for tuning it. Strengths: The writing is clear and the paper is easy to follow. The idea is simple and effective. Theory and experiments are provided to show the efficacy of the proposed automatic clipping method. I think this result is worth sharing with the DP community. Weaknesses: See below Technical Quality: 3 good Clarity: 3 good Questions for Authors: **Only one question:** According to line 140, Auto-V maximizes the dot-product similarity **only when** $\langle g_i, \sum_j g_j\rangle > 0$ for any $i$. However, in practice, $\langle g_i, \sum_j g_j\rangle$ could be negative, but Auto-V still performs gradient normalization on these samples. I wonder: what if we only normalize the $g_i$ with $\langle g_i, \sum_j g_j\rangle > 0$ and ignore the rest (i.e., do exactly the optimal normalization in line 140)? Will it bring better performance? A similar question also applies to Auto-S. **Suggestions:** I suggest adding indices to the equations in lines 138 and 140. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See above Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for appreciating our work! We will add indices to the equations as suggested. We expect the optimal clipping, as we already stated in line 140, to have better performance, BUT it will not lead to a DP guarantee. The reason is the term $\sum_j g_j$, which changes when adding or removing one sample from the batch. To give more details, the bounded sensitivity in DP requires that the sum of clipped gradients change by at most $R$ when adding or removing one sample. Suppose in one dataset we have two datapoints ($j=1,2$) and in its neighboring dataset we have three datapoints ($j=1,2,3$). We use AUTO-V clipping as an example; Abadi's clipping can be analyzed similarly. For AUTO-V clipping, we have $|| C_1g_1+C_2g_2+C_3g_3||-||C_1g_1+C_2g_2||\leq ||C_3g_3||\leq 1$ by the triangle inequality (where $C_i=1/||g_i||$). But the clipping in line 140 gives $||C_1'g_1+C_2'g_2+C_3'g_3||-||C_1g_1+C_2g_2||$ (where $C_i=\mathbb{I}(\langle g_i, g_1+g_2\rangle>0)$ and $C_i'=\mathbb{I}(\langle g_i, g_1+g_2+g_3\rangle>0)$), and hence the triangle inequality does not apply, because $C_1'\neq C_1$. This is the reason that per-sample clipping has to look at each individual gradient separately in most DP literature (unless $\sum_j g_j$ is privatized at additional privacy cost), i.e., $C_i$ only depends on $g_i$ and nothing else. We will add this explanation in the camera-ready version. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for the response. I will keep my score
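A small illustration of this sensitivity argument (our own sketch, with randomly drawn gradients): the AUTO-V factor depends only on each $g_i$ itself, so the triangle-inequality bound applies, whereas the indicator rule from line 140 depends on the whole batch and does not admit the same argument.

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=(3, 5))                    # three per-sample gradients

def auto_v_sum(gs):
    # C_i = 1/||g_i|| depends only on g_i itself.
    return sum(gi / np.linalg.norm(gi) for gi in gs)

# Adding the third sample changes the sum by exactly the norm of its
# normalized gradient, so the triangle inequality bounds the change by 1.
print(np.linalg.norm(auto_v_sum(g) - auto_v_sum(g[:2])))   # <= 1

def batch_dependent_sum(gs):
    # C_i = 1{<g_i, sum_j g_j> > 0} depends on the whole batch: adding or
    # removing one sample can flip which OTHER gradients are kept, so the
    # change in the sum is not bounded by the new sample's contribution alone.
    total = gs.sum(axis=0)
    return sum(gi for gi in gs if gi @ total > 0)

print(np.linalg.norm(batch_dependent_sum(g) - batch_dependent_sum(g[:2])))
```

The first print is always at most 1; the second can be arbitrarily large, which is why the batch-dependent rule has no finite sensitivity without privatizing $\sum_j g_j$.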
Using Imperfect Surrogates for Downstream Inference: Design-based Supervised Learning for Social Science Applications of Large Language Models
Accept (poster)
Summary: The authors present a complete and thorough study of semi-supervised learning for social science applications, targeting bias and coverage control. The work includes a clear mathematical proof of the proposed approach and experimental results demonstrating good alignment between theory and application. The assumptions are clearly stated and limitations are considered. As a non-expert in computational social science, I found the paper relatively easy to read and understand. Overall I think this is a solid work in the field. I have a few minor questions and comments below. Strengths: 1. the mathematical proof of the bias correction step, equation 4, and the conditionally unbiased estimation of Yi is simple and clear 2. the fact that the surrogate labels can be arbitrarily biased while the theoretical guarantee still holds is the most fascinating and important contribution of the work. I personally find this really interesting 3. experimental results show great alignment with the theory, and comparable RMSE to SSL even though the method does not specifically target it Weaknesses: 1. the biggest miss/weakness after reading, from my perspective, is the title: using LLM annotations for valid statistical inference. In general, the research is more like a mathematical proof of bounds for semi-supervised learning and a demonstration of usefulness with experiments. There is little regarding using LLM annotations. With such a title I would expect a substantial part of the work to focus on LLM-generated annotations, e.g., how they are used and how they can be improved from the LLM perspective, etc. 2. the assumption of the proposed approach, though clearly stated, greatly limits the contribution of the approach. It may be a common setting in social science that the test set is known, but this is not the case for the majority of LLM annotation use cases. 3. the asymptotic behavior given the gold-standard annotation is not well discussed in the main paper. Aspects like the expectation and variance of the downstream coefficients given the size of the gold labels compared with the sample space size are not well covered. Some empirical numbers could be extremely useful, beyond the settings in the experiments part 4. some minor comments: page 8, Figure 2 annotations: as a non-expert in social science, it would be good to explain in more detail what coverage is and why it is important to the study. In the second experiment, class prevalence estimation, it would be better to include more details like each dataset's distribution, the total size of each dataset, etc. (this is in the supplement; however, I think it is helpful to include some numbers in the main paper, as this gives the important size ratio between the gold set and the total set) Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see comments in weakness part Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: the limitations of the work is well discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
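To illustrate the bias-correction idea highlighted in strength 1, here is a stylized sketch of a design-based corrected pseudo-outcome with a known hand-coding probability. This is our reading of the general design-based correction, not necessarily the paper's exact Equation 4, and all names are illustrative.

```python
import numpy as np

def dsl_pseudo_outcome(surrogate, gold, labeled, pi):
    """Design-based corrected pseudo-outcome (stylized sketch).

    surrogate: (n,) surrogate labels for all documents (e.g., LLM annotations)
    gold:      (n,) gold labels, only meaningful where labeled is True
    labeled:   (n,) bool mask of the randomly hand-coded subset
    pi:        known probability that a document is hand-coded
    """
    y = surrogate.astype(float)
    y[labeled] += (gold[labeled] - surrogate[labeled]) / pi
    # E[y_i] = surrogate_i + pi * (gold_i - surrogate_i) / pi = gold_i,
    # so y is conditionally unbiased no matter how biased the surrogate is.
    return y

rng = np.random.default_rng(0)
n, pi = 10_000, 0.1
gold = rng.binomial(1, 0.3, size=n)
surrogate = np.where(rng.random(n) < 0.8, gold, 1 - gold)  # 80%-accurate, biased
labeled = rng.random(n) < pi
y = dsl_pseudo_outcome(surrogate, gold, labeled, pi)
print(surrogate.mean(), y.mean(), gold.mean())  # y.mean() tracks gold.mean()
```

Regressing the corrected pseudo-outcome (rather than the raw surrogate) on the covariates is what yields asymptotically unbiased downstream coefficients.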
Rebuttal 1: Rebuttal: Thank you for the review! We are glad you found the theoretical and empirical work in the paper enjoyable! **Clarify the RMSE gain of DSL over GSO:** First, based on suggestions from reviewers, we further clarified the RMSE gains of our estimator compared to existing alternatives. In particular, we show that the RMSE gains of DSL compared to GSO (the only other method that is unbiased and has valid confidence intervals) are substantial: it is equivalent to having 50% more hand-coded documents (when using zero-shot LLM learning) and to having 100% more hand-coded documents (when using five-shot LLM learning). Please see the new Figure R-1 in the attached PDF. Given the high cost of hand-coding, this practically means that computational social scientists can obtain unbiased estimates and valid confidence intervals with far fewer hand-coded documents when they use DSL instead of existing alternatives. We wanted to emphasize the substantial efficiency gains from our method, which our previous Figure 2 did not convey well. **We quote the weaknesses below and respond.** **(1)** *the biggest miss/weakness after reading, from my perspective, is the title: using LLM annotations for valid statistical inference. In general, the research is more like a mathematical proof of bounds for semi-supervised learning and a demonstration of usefulness with experiments. There is little regarding using LLM annotations. With such a title I would expect a substantial part of the work to focus on LLM-generated annotations, e.g., how they are used and how they can be improved from the LLM perspective, etc.* Following your and other reviewers' advice, we are changing the title to "**Using Imperfect Surrogates for Downstream Inference: Design-based Semi-supervised Learning for Social Science Applications of Large Language Models.**" We will also emphasize the generality of DSL and downplay the specific LLM angle in the paper. Please see more detailed responses in our Global Response (at the top of this page). **(2)** *the assumption of the proposed approach, though clearly stated, greatly limits the contribution of the approach. It may be a common setting in social science that the test set is known, but this is not the case for the majority of LLM annotation use cases.* We completely agree, and this is why we have tried to be so clear both about the social science connection and the assumption. We are adding more citations of applications to the paper to make clear that this covers a sizable portion of social science use cases. We also want to further clarify that our method only requires a random subset of the test data (i.e., we do not need the full test data when we hand-code documents). An interesting future direction is to incorporate domain shift (i.e., when researchers do not have access to a random subset of the test data). We hope the reviewer agrees that the use case is broad enough to support the work, but we agree on the limitation. **(3)** *the asymptotic behavior given the gold-standard annotation is not well discussed in the main paper. Aspects like the expectation and variance of the downstream coefficients given the size of the gold labels compared with the sample space size are not well covered. Some empirical numbers could be extremely useful, beyond the settings in the experiments part* Thank you for raising this point! We want to clarify that this was indeed our goal in Figure 2 (which shows asymptotics for the CBP application).
Bias is the difference between the expected values of the estimated coefficients and the true coefficients, while RMSE is a summary measure of bias and variance. To make these points clearer, we created a new figure (Figure R-4 in the attached PDF) that directly visualizes the expected values of the coefficients and their variance. Please let us know whether the newly added figure is what you had in mind. We are also happy to provide any additional detail you would like to see. In our new simulation results, we also consider a wide range of sample sizes (see Figure R-3 in the attached PDF). **(4)** *some minor comments: page 8, Figure 2 annotations: as a non-expert in social science, it would be good to explain in more detail what coverage is and why it is important to the study.* Thank you for your question. Coverage is the probability that the 95% confidence interval covers the true coefficients over the sampling distribution. If an estimator has valid uncertainty quantification and valid confidence intervals, this coverage should be (asymptotically) at least 95%. Valid uncertainty quantification is essential because social scientists often conduct statistical analyses to *test social science hypotheses* rather than to make predictions. For example, when studying online hate speech, social scientists would be interested in testing whether highly educated people are less likely to post hate speech. To conduct such hypothesis testing, valid uncertainty quantification (with confidence intervals and corresponding p-values) is fundamental. We will provide this discussion in the paper and extend the caption of Figure 2. **(5)** *In the second experiment, class prevalence estimation, it would be better to include more details like each dataset's distribution, the total size of each dataset, etc. (this is in the supplement; however, I think it is helpful to include some numbers in the main paper, as this gives the important size ratio between the gold set and the total set)* Thank you for the suggestion! We amended Table 2 to include this information. Please see the updated Table 2 in the attached PDF. Please let us know whether the newly added table is what you had in mind. --- Rebuttal Comment 1.1: Comment: Please kindly let us know whether our responses have addressed your questions and concerns. We are more than happy to add any additional analyses you think are necessary for the paper. We ask because we would like to be able to respond to you again within the discussion period.
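For readers unfamiliar with the term, coverage can be checked by simulation. Below is a minimal sketch with made-up Gaussian data (not the paper's setup), estimating how often a 95% confidence interval for an OLS slope contains the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, n, reps, hits = 2.0, 200, 2000, 0

for _ in range(reps):
    x = rng.normal(size=n)
    y = beta * x + rng.normal(size=n)
    bhat = (x @ y) / (x @ x)                    # OLS slope (no intercept)
    resid = y - bhat * x
    se = np.sqrt(resid @ resid / (n - 1) / (x @ x))
    hits += (bhat - 1.96 * se <= beta <= bhat + 1.96 * se)

print(hits / reps)  # empirical coverage of the 95% CI; should be near 0.95
```

An estimator with invalid confidence intervals would show empirical coverage well below 0.95 in such a simulation, which is the failure mode the rebuttal attributes to biased surrogate-only and SSL estimators.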
Summary: In order to exploit the advantages of Large Language Models (LLMs) in the computational social science (CSS) area, this paper proposes a novel DSL estimator, which employs a doubly-robust procedure to combine surrogate labels with a few gold-standard labels. By using the proposed method, not only can LLMs be used as surrogate label generators, but the asymptotic unbiasedness and proper uncertainty quantification in CSS can also be maintained. Finally, this paper conducts plenty of experiments to demonstrate the effectiveness of the proposed method. Strengths: 1. This paper proposes a unified framework for using imperfect surrogate labels in downstream statistical analyses which maintains the CSS priority for unbiasedness and proper coverage. 2. This paper provides a strong theoretical guarantee for the effectiveness of the proposed method. 3. This paper conducts extensive experiments across 18 datasets to compare the performance of the proposed method. Weaknesses: 1. This paper focuses on the semi-supervised setting in some CSS scenarios. The differences between the proposed method and previous semi-supervised methods should be compared in detail. 2. This paper uses a lot of notation; a table explaining the notation should be provided. 3. In the experiments part, this paper should provide a brief description of the evaluation tasks and the corresponding datasets, so that the results can be more convincing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please see the weaknesses for more details. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review! We are glad you enjoyed the theory and the extensive experiments. **Clarify the RMSE gain of DSL over GSO:** First, based on suggestions from reviewers, we further clarified the RMSE gains of our estimator compared to existing alternatives. In particular, we show that the RMSE gains of DSL compared to GSO (the only other method that is unbiased and has valid confidence intervals) are substantial: it is equivalent to having 50% more hand-coded documents (when using zero-shot LLM learning) and to having 100% more hand-coded documents (when using five-shot LLM learning). Please see the new Figure R-1 in the attached PDF. Given the high cost of hand-coding, this practically means that computational social scientists can obtain unbiased estimates and valid confidence intervals with far fewer hand-coded documents when they use DSL instead of existing alternatives. We wanted to emphasize the substantial efficiency gains from our method, which our previous Figure 2 did not convey well. **Below, we quote the weaknesses you identified and respond.** **(1)** *This paper focuses on the semi-supervised setting in some CSS scenarios. The differences between the proposed method and previous semi-supervised methods should be compared in detail.* Thank you for this point. We have added an additional comparison with the best state of the art we are aware of (Wang et al. 2020 in *PNAS*), which, as far as we know, is the only published article that proposes an estimator covering exactly the same settings as ours (downstream regression analyses). As this existing estimator requires stronger parametric assumptions, it is, in general, biased and has invalid confidence intervals in our experiments, in contrast to our proposed DSL. Please see our new results in Figure R-3 in the attached PDF. Importantly, we additionally show that, as the sample size increases, DSL provably dominates SSL in terms of RMSE, because the bias of SSL does not vanish with sample size, while the variance of DSL does. See the same Figure R-3. We will also use the additional page to describe the results of our experiments more clearly and what they tell us about how DSL compares to prior SSL methods: essentially, that those methods have slightly better RMSE, while all of them are biased and have invalid confidence intervals. We are also open to any other details you would like to see. Thank you for your suggestions. **(2)** *This paper uses a lot of notation; a table explaining the notation should be provided.* Thank you for the suggestion. We will take the notation summary in Appendix 2.2 and append a condensed version to Figure 1, so readers will have an easily accessible notation guide. **(3)** *In the experiments part, this paper should provide a brief description of the evaluation tasks and the corresponding datasets, so that the results can be more convincing.* Thank you! Per the suggestion from you and one other reviewer, we will substantially extend our discussion of the details of the evaluations in the main paper by pulling in material from the appendix. In particular, we will be more clear about the specification of the downstream analyses, the details of the LLM annotations, and the training procedures. We have also extended Table 2 to include more information about each dataset (please see a snippet of Table 2 in the attached PDF; we show two rows as an example due to the space constraint).
If there are any other details you would find particularly essential to include, please let us know, and we are happy to accommodate. We think of the theoretical results as the most convincing piece of evidence and the experiments as demonstrating that the results hold in practice. The review has been helpful in clarifying that we need to bring in more details. --- Rebuttal Comment 1.1: Comment: Please kindly let us know whether our responses addressed your questions and concerns. We are more than happy to add any additional analyses you think are necessary for the paper. We just wanted to ask this such that we can respond again to you within the discussion periods.
Summary: This paper studies the use of surrogate labels in model training and its bias, coverage, and performance properties. Practically, the premise is to use a small number of gold data annotations for a classification task, then construct surrogate labels that have desirable bias, coverage, and performance properties. The application of this would be doing additional analyses using covariates that would benefit from the additional support in data size. The proposed method uses a small amount of gold labels to calibrate the model prediction and is agnostic to the choice of underlying model. The method is compared to using gold labels alone, surrogate labels alone from an external model not fitted to the data, and a simple form of semi-supervised learning where a model is fitted to the gold labels. Experiments on bias, coverage, and RMSE are conducted in depth on one dataset of congressional bills. Further experiments are conducted over a battery of datasets related to computational social science applications on bias, coverage, and RMSE with 100 gold labels. The results show that bias and coverage for the proposed method are closer to using gold-standard data alone, while showing small improvements on some datasets in RMSE. Strengths: The premise and application are interesting and have not been well studied to the best of my knowledge. A large number of datasets were used in the bias/coverage/RMSE experiments, which provides a good starting point for such analyses. The proofs and experiments support the bias and coverage properties of the proposed method. Weaknesses: The key weakness is that the application presented as a motivation for this paper is not tested in any of the experiments. The covariates are not used in any of the experiments to enhance data analysis, which could have been done with several datasets in the collection, including the congressional bills. The experiments show minor improvements in general on the RMSE, especially when compared to the vanilla self-supervised methods. The self-supervised methods represented here are very weak for this category of methods. For example, the gold labels are overwritten by the self-supervision, and the self-supervision is done in a single round over the entire dataset, without any filtering by confidence as usually done, which makes this an artificially weak baseline, especially on bias and coverage. Yet, the SS methods achieve consistently higher RMSE, especially when considering the difference between gold labels and the proposed method (DSL), which shows that the DSL method does not capture much of the performance benefits that self-supervision alone could achieve. This leaves as the main application of the approach the ability to use more data for covariate analysis, but this application is not shown through experimentation in the paper. The paper should also present results in the F1 metric (both for the surrogate-label model and the method comparison results), because RMSE is not natural for multi-class classification, and having the surrogate-label model and the methods trained on top of its predictions reported on the same metric is useful for comparisons.
Hence, while LLMs can be a source for labels, the paper should be rewritten to remove the many specific mentions of and discussions about LLMs, which serve no purpose for the paper (perhaps they would help with publicity) but actually artificially narrow the generality of the method. The paper also cites many papers that make unfounded claims or estimates about the accuracy needed for 'annotations' (surrogate labels) for applications or model training, while none of the cited papers (e.g. Gilardi 2023) actually trained models to test the improvements those could bring (or not). Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See weaknesses. An interesting analysis would be to use either synthetic data or to intentionally vary the accuracy of the underlying model on the same dataset (rather than the existing accuracy, which differs across datasets). This would provide a more controlled analysis of these factors. What is the K (defined in line 192) in the experiments? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review! We are glad that you enjoyed the proofs, experiments, and premise. Below, we quote the weaknesses you identified and respond. **(1)** *The key weakness is that the application presented as a motivation for this paper is not tested in any of the experiments. The covariates are not used in any of the experiments to enhance data analysis. [...]* We believe this might be a miscommunication due to our use of semi-supervised learning (an overloaded term!). We first clarify our goal and then propose ways to incorporate your suggestions. Our goal is to *explain* a document-label outcome using covariates with regression models (e.g., logistic regression). For example, political scientists are often interested in *explaining* what types of people are more likely to post hate speech. Here, the outcome **Y** is whether a post contains hate speech, and the covariates **X** could include posters' characteristics (e.g., education and partisanship). This regression estimates the share of hate speech within strata of **X**. Social scientists run such statistical analyses to *test* social science hypotheses (e.g., whether highly educated people are less likely to post hate speech). These regressions are run explicitly, or implicitly through the calculation of subgroup averages. This task of *explanation* and *hypothesis testing* in the social sciences is distinct from *unit-level prediction*: classifying whether each post contains hate speech. Document-level prediction is no less important; our goal is simply different. We developed DSL to perform downstream regression analyses that are asymptotically unbiased and have valid confidence intervals. Importantly, the role of the covariates is to serve as explanatory variables in the second-step downstream regression, not to improve model training for document-level predictions. We think your understanding was: use an LLM to annotate a set of documents, use those as the labels to train a supervised classifier, and then return this fine-tuned classifier as the output. This is similar to the common computer science goal of training a better model to predict labels at the document level. This goal is different from ours. Your concern, in this case, appears to be that we could have a stronger baseline by only training the classifier using labels about which the LLM is confident and potentially repeating over multiple rounds, folding in new examples each time. Please correct us in the discussion if we have misunderstood your point. These procedures are related, but they tackle different goals. If the classifier trained with your procedure were used as the predicted labels in our model, it might yield a higher-accuracy surrogate, but our main point is how to use such surrogates in downstream analyses. If we directly use such surrogates in the downstream regression, they will lead to substantial bias and invalid confidence intervals, even when the accuracy is extremely high. You were concerned that we didn't use the covariates "to enhance data analysis", but the CBP example is specifically about the downstream regression of the Macroeconomy category on three covariates in our downstream logistic regression. Thus, we believe that we have demonstrated the core use case in our experiments, and it is consistent with the vast majority of social science use cases. We will make this point clear in our revision and include additional citations.
**(2)** *The experiments show minor improvements in general on the RMSE, especially when compared to the vanilla self-supervised methods. The self-supervised methods represented here are very weak for this category of methods. [...] Yet, the SS methods achieve constantly higher RMSE.* We want to clarify two points. First, we follow the social science priority on bias and coverage. SSL is biased and has invalid confidence intervals, in contrast to DSL. Second, focusing on RMSE, we expect that SSL, which is optimized for RMSE, can outperform DSL. However, in our experiment, we showed that DSL, while maintaining unbiasedness and valid confidence intervals, can achieve RMSE comparable to SSL (even though it is often higher than SSL in a finite sample). Importantly, we additionally show that, as sample size increases, DSL provably dominates SSL in terms of RMSE because the bias of SSL does not vanish with sample size, while the variance of DSL does. See our new simulation evidence (Figure R-3 in the attached PDF). We have also added a state-of-the-art SSL baseline (Wang et al., *PNAS*) and found the same pattern (see the same Figure R-3). **(3)** *The paper should also present results in F1 metric [...].* We added the F1 metric for the surrogate in Table 2 (see Table 2 in the PDF). For our main results, the target is the coefficients of the downstream regression, so the F1 metric does not apply. **(4)** *As a general comment: the method is agnostic to the source of surrogate labels [...].* Following your and other reviewers' advice, we are changing the title to reflect this point. Please see the global response at the top of this page. **(5)** *The paper also cites many papers [...], while none of the cited papers [...] actually trained models to test the improvements those could bring (or not).* We think this is related to the miscommunication described above. Gilardi and others want to annotate documents so they can use LLM annotations as outcomes in downstream analyses without doing hand coding. We show that this surrogate-only estimator is biased. **(6)** *An interesting analysis would be to use either synthetic data or to intentionally vary the accuracy of the underlying model on the same data set[...].* This was the goal of Figure 1a, and we have now extended it by including additional simulations based on synthetic data (see Figure R-2 in the PDF). **(7)** *What is the K in the experiments?* The number of cross-fitting splits, which we set to five in both experiments (the default in Chernozhukov et al. 2018). --- Rebuttal Comment 1.1: Comment: Please kindly let us know whether our responses addressed your questions and concerns. We are more than happy to add any additional analyses you think are necessary for the paper. We ask so that we can respond to you again within the discussion period. --- Rebuttal Comment 1.2: Comment: Hello, (1) I believe I understand and am in agreement with the statement in the response. The additional experiments presented in the response and not in the original paper, including the variation in performance of surrogate labels, do show evidence of this for this setup (although unclear if balanced or imbalanced). However, I believe a single test case (data set, data combination, selection of topics, data size, covariates, inclusion of the MPnet similarity not in the original paper) is not enough to prove general applicability, especially as the experimental setup is not standard for this data set.
Separately, it seems there is a strange effect in the new charts that is not commented on, which could warrant some investigation: the RMSE is similar when varying the sample size from 50-1000, and the percent improvement is actually decreasing with larger sample size for 5-shot. Simply having more experiments across multiple data sets would lead to more robust insights. (2) The point is that the SS method is a weak baseline and it would add more weight to the paper to prove that bias and coverage are an issue for better SS methods. (3) The F1 should also be presented in the method comparison results, given many of the data sets are multi-class classification tasks and/or use imbalanced data. (4) I believe the change in wording is unsatisfactory and would like to continue arguing that the insistence on the importance of LLMs is not representative and restricts the applicability of the method. I find this comment a bit misplaced: "you already build up a gold standard dataset with which you can use DSL to improve performance"; the gold standard is needed anyway to start from, and the other parts of the response mention that DSL's main goal was actually not to improve performance. (5) The reference is primarily about the 70% accuracy threshold copied from Ziems et al. (2023). (6) Thank you for adding the analysis in Figure R-2. --- Reply to Comment 1.2.1: Title: Response to Point 1 Comment: Thank you for your comments. Given the time constraint (we only had 12 hours between your response and the end of the discussion period), we were not able to produce additional experimental results. We address your numbered list below (thank you for continuing the numbering to keep our complex discussion well-organized!) with a short summary of the main points for each. Overall, we are still concerned that there is a misunderstanding about the goal of our study. **Response to (1)**: To summarize the discussion thus far: Your original point (1) was to argue that the motivation for our paper was not tested in the experiments. In our reply, we argued that this was based on a miscommunication due to the variety of meanings given across fields for 'semisupervised' learning and restated our paper's goals, clarifying how the experiments target our use case. In the response to our reply, you began with *I believe I understand and am in agreement with the statement in the response* and then proceeded to raise a few additional questions, which we address below. Now we quote the rest of the reply in full and respond. *I believe I understand and am in agreement with the statement in the response. The additional experiments presented in the response and not in the original paper, including the variation in performance of surrogate labels, do show evidence of this for this setup (although unclear if balanced or imbalanced).* (1.1) In Figure 1 and Figure 2, we are showing the figures based on the balanced class, but we have already done the same analyses for the imbalanced class, and they show similar patterns. *However, I believe a single test case (data set, data combination, selection of topics, data size, covariates, inclusion of the MPnet similarity not in the original paper) is not enough to prove general applicability, especially as the experimental setup is not standard for this data set.* (1.2): Here we would respectfully point out that our original submission uses 18 different datasets.
For the rebuttal, we had a space constraint of one page of figures, so we focused on showing the results for the CBP application (Figure R-1) and for a new simulation specifically designed to address your concerns (Figure R-2). We are happy to commit to including similar figures for all 18 datasets in the appendix of the final version. That said, we emphasize again that our primary evidence is the proof of the properties of DSL, while we see the datasets as simply demonstrating that the proofs hold in real settings. Finally, there is a passing reference here to MPnet similarity. We think this is a misreading of the Wang et al. citation. The full citation for our reference is: Wang, Siruo, Tyler H. McCormick, and Jeffrey T. Leek. "Methods for correcting inference based on outcomes predicted by machine learning." *Proceedings of the National Academy of Sciences* 117.48 (2020): 30266-30275. This was cited in our paper, and we included it as a baseline to address the concern you raised about the absence of a strong baseline. It is the strongest competing method targeting our task of which we are aware. *Separately, it seems there is a strange effect in the new charts that is not commented on, which could warrant some investigation: the RMSE is similar when varying the sample size from 50-1000, and the percent improvement is actually decreasing with larger sample size for 5-shot.* (1.3) Please note that the Y-axis on Figure R-1 is the percent improvement, so the absolute value of RMSE is in fact decreasing as the sample size increases. We can emphasize this point clearly in a caption when we include it in the appendix. Theoretically, the percent improvement converges to a certain number, and for the zero-shot case, we already see that the percent improvement converges to certain values (about 15% on the left panel and about 40-50% on the right panel). For the 5-shot case as well, we theoretically expect that this improvement will converge to a particular value. We can further increase the sample size and verify this in the final revised version. *Simply having more experiments across multiple data sets would lead to more robust insights* (1.4) As noted above, we are happy to provide similar analyses for all 18 different data sets in the final version. That said, for a paper with a proof of the relevant properties, we are comfortable with 18 datasets as sufficiently demonstrating that the properties of DSL are robust to a variety of real-world circumstances (including the range of tasks that Ziems et al. use to represent computational social science). --- Reply to Comment 1.2.2: Title: Response to Points (2-3) Comment: **Response to Point (2)** To summarize the discussion thus far: the original objection here was around the lack of RMSE gains over the semi-supervised methods (written as 'self-supervised' in the original review and part of our sense that there was a misunderstanding). We replied by describing the importance of bias and coverage for social science and emphasizing that semi-supervised methods directly minimize RMSE. We showed that we are able to match the RMSE of the Semi-Supervised Learning setup and, further, that we outperform the state-of-the-art baseline in the Wang et al. paper cited above. Now we quote the rest of the reply in full and respond.
*The point is that the SS method is a weak baseline and it would add more weight to the paper to prove that bias and coverage are an issue for better SS methods.* We are concerned that you might still be thinking about "**Self-Supervised** Learning," but we want to emphasize that neither our paper nor our response ever discusses "**Self-Supervised** Learning". As in the title and throughout the paper, we only talk about "**Semi-Supervised** Learning". While both are often abbreviated as SSL, they are different classes of methods, and our relevant alternative is not "Self-Supervised Learning". Our SSL method in the paper is consistently about "**Semi-Supervised** Learning," which is a class of methods that use the gold-standard labels to train the model and predict labels in the unlabeled data set. The central question is how you use predicted labels from any prediction method in downstream analyses. If they are used directly in the downstream analyses, the estimator is "Surrogate-Only"; if they are used after additional fine-tuning with the gold-standard labels, the estimator is "Semi-Supervised Learning". Your original review did not offer any citations, so we struggled to figure out what stronger baseline you had in mind. We incorporated the state-of-the-art "Semi-Supervised Learning" approach by Wang et al. (2020) (full citation above) in the initial rebuttal because it is the best baseline we could find. This paper is closest to our setting, and yet it does not have theoretical guarantees of asymptotic unbiasedness or valid confidence intervals; we showed that it is biased and has invalid confidence intervals in our experiments. In general, unbiasedness and valid confidence intervals are theoretical properties that must be explicitly proven; no method is asymptotically unbiased with valid confidence intervals by chance. We extensively reviewed the literature and know of no other method that has such guarantees. We are now out of time to provide additional experiments in the rebuttal, but whether this is published at NeurIPS or elsewhere, we are committed to getting this right, so if you have a citation or a method name in mind, please let us know, and we are happy to incorporate an evaluation against it in the final version. **Response to Point (3)** To summarize the discussion thus far: the original review asked that we add F1 to the main results Table 2. We promised to do so and, in the rebuttal figure, showed a snippet of Table 2 demonstrating that point. We also emphasized that this is the only place F1 applies because the quantity of interest is again not the individual predictions but the coefficients. The reply was: *The F1 should also be presented in the method comparison results, given many of the data sets are multi-class classification tasks and/or use imbalanced data.* We want to re-clarify that, for our main experiments, our target quantity is a coefficient (a vector of continuous variables), and thus the final output is not a multi-class classification. Regardless of whether the problem is imbalanced or multi-class, the final output is always a vector of continuous variables, so unfortunately there is no F1 score to report for the main results. But, as you suggested, we have already incorporated the F1 score for the underlying LLM annotations (please see Table 2 in the PDF attached to the first rebuttal).
--- Reply to Comment 1.2.3: Title: Response to Points (4-5) Comment: **Response to Point (4)** To summarize the discussion thus far: The reviewer pointed out that the method is agnostic to the source of the surrogate labels and suggested that the paper be rewritten to remove "so many specific mentions of LLMs and discussions about LLMs." We offered to change the title and emphasize clearly in the paper the lack of dependence on LLMs. *I believe the change in wording is unsatisfactory and would like to continue arguing that the insistence on the importance of LLMs is not representative and restricts the applicability of the method. I find this comment a bit misplaced: "you already build up a gold standard dataset with which you can use DSL to improve performance"; the gold standard is needed anyway to start from, and the other parts of the response mention that DSL's main goal was actually not to improve performance.* As we clarified above, we will change the title to "Using Imperfect Surrogates for Downstream Inference: Design-based Semi-supervised Learning for Social Science Applications of Large Language Models." This makes clear that LLMs are simply one application of the method, and we do not restrict our paper to LLMs. Given that all of our experiments are about LLMs, and in light of our point made in the paper and the rebuttal that many social scientists are moving towards LLMs, we think removing more references to them could harm the ability of readers to find our paper. In response to the second point, given the extreme space constraint, we were brief here, but we mean (as argued repeatedly in the paper) that DSL improves over GSO (in terms of RMSE) and improves over SO and SSL (in terms of bias and coverage). When you say "the other parts of the response mention that DSL's main goal was actually not to improve performance," we think this is again a misunderstanding. We were saying in the response that the goal is not to improve *performance of individual document classification* but instead to improve inference in the downstream regression. In the comment you are referencing, we are talking about improving the performance of the inference in the downstream regression. We apologize for any confusion. **Response to Point (5)** To summarize the discussion thus far: The reviewer was concerned that we cited papers on accuracy claims that themselves don't do enough to evaluate possible improvements in accuracy that could be achieved. We thought this was part of the misunderstanding and clarified the specific goal we were after through the lens of one of the papers we cited. *The reference is primarily about the 70% accuracy threshold copied from Ziems et al. (2023).* We agree that the 70% accuracy claim in Ziems et al. (2023) is not based on any particular evidence. But we show in the paper that, even when the accuracy of LLM classification (in the balanced case) is as high as 95%, the coverage is less than 80% (Figure 1-(a); a short simulation illustrating this follows this response). Therefore, our claims about surrogate-only estimation do not depend on Ziems et al. (2023). As you suggested above, in the final version of the paper, we are happy to incorporate more experimental results where we vary the accuracy of LLM annotations. **Summary** We are concerned there may still be some confusion about the goal of our study and what semi-supervised learning entails in our setting. We are more than happy to run benchmarks before publication (whether at NeurIPS or elsewhere) comparing to any baseline you like.
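As an illustration of the coverage point in Response to Point (5), here is a minimal simulation sketch. It is our own construction rather than the paper's experiment: even 95%-accurate surrogate labels badly break confidence-interval coverage for a simple prevalence estimate, because the bias does not shrink with sample size.

```python
# Surrogate-only estimation with 95%-accurate labels: the bias does not
# vanish with n, so nominal 95% confidence intervals undercover badly.
import numpy as np

rng = np.random.default_rng(0)
n, reps, true_p = 1000, 2000, 0.2
hits = 0
for _ in range(reps):
    y = rng.binomial(1, true_p, n)        # true (unobserved) labels
    flip = rng.random(n) < 0.05           # 5% labeling error
    surrogate = np.where(flip, 1 - y, y)  # 95%-accurate surrogate
    p_hat = surrogate.mean()
    se = np.sqrt(p_hat * (1 - p_hat) / n)
    hits += (p_hat - 1.96 * se <= true_p <= p_hat + 1.96 * se)
print("95% CI coverage:", hits / reps)    # far below 0.95
```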
Summary: The authors consider using outputs of LLMs for labeling in downstream analysis in computational social science while guaranteeing statistical properties (asymptotic unbiasedness and uncertainty quantification). They show that directly using LLM-produced labels yields bias and invalid confidence intervals, then propose the design-based semi-supervised learning (DSL) estimator as an alternative. DSL combines surrogate labels from LLMs and gold-standard labels within a doubly robust procedure that guarantees valid inference even when the surrogates are arbitrarily biased and without requiring stringent assumptions. Experiments on 18 datasets show that DSL yields valid statistical inference while achieving error rates comparable to existing alternatives that focus on prediction with statistical guarantees. Strengths: The paper is very well written, and the setting and methodology are well described and motivated. The assumptions and potential use cases are clearly stated. Further, though the proposed approach is conceptually simple, it blends sound ideas from semi-supervised learning (not necessarily in the transductive, machine-learning sense), doubly robust estimation, K-fold cross-fitting, and automated labeling with LLMs. Weaknesses: Perhaps the biggest weakness of the paper is that, by the authors' own admission, it prioritizes unbiasedness and coverage; however, DSL does not outperform GSO, which is a very simple approach that does not require anything that is not already available (a method for surrogate labels and the estimation of g). As a result, it is difficult to imagine a situation, especially in social science, where someone would prefer DSL over the far simpler GSO, unless the reviewer is missing something. As minor points: i) the title of the paper is somewhat misleading because, as the authors point out, the proposed approach is agnostic to the methodology used to obtain the surrogate labels, so the fact that they use LLMs is more of a detail of the experiments than a characteristic or contribution of the proposed approach; and ii) the paper (in its current form) does not seem a good fit for a conference proceeding considering the amount of important experimental detail that needs to be relegated to the supplementary material, without which it is extremely difficult to properly understand the results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In several places, the authors emphasize the need for knowing \pi (the probability of obtaining a gold-standard label) and \pi > 0 for all samples. However, this is not further discussed or described in the experiments. For instance, in Section 2.2 in the Appendix, the authors claim that \pi is known because they can choose which samples to label; however, how does that make \pi for all samples that are not going to be labeled (with probability 1) larger than 0? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors discuss some of the limitations of the proposed approach. However, the limitations are quite generally about the addressed setting and less so about the methodology. Also, they point to four limitations; however, only three are described (the particular setting, access to gold-standard labels, and the focus on bias and coverage rather than MSE).
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the kind words about the paper! We are glad that you found the setting and methodology well-described and motivated. **(1)** *Perhaps the biggest weakness of the paper is that [...] it prioritizes unbiasedness and coverage; however, DSL does not outperform GSO, which is a very simple approach that does not require anything that is not already available [...]. As a result, it is difficult to imagine a situation [...] where someone would prefer DSL over the far simpler GSO, unless the reviewer is missing something.* As the reviewer notes, gold standard labels only (GSO) and DSL are both asymptotically unbiased and have valid confidence intervals. In terms of the RMSE, we now see that Figure 2 did not convey precise information about how much gain DSL provides over GSO. We clarify here that DSL indeed provides substantial RMSE gains over GSO. As written in the original draft, in our Congressional Bills Data experiment, we find that the RMSE gain by DSL over GSO is about 10% with zero-shot learning LLMs and about 30% with five-shot learning. To make these numbers more intuitive, we translated the RMSE improvement into the number of hand-coded documents, i.e., how many more hand-coded documents does GSO need to achieve the same RMSE as DSL? This translation is possible because both estimators are asymptotically unbiased, and all the gains come from a smaller variance. We find that, for the zero-shot case, the RMSE gain by DSL is equivalent to having 50% more hand-coded documents, and for the five-shot case, the RMSE gain by DSL is equivalent to having 100% more hand-coded documents. Given the high cost of obtaining hand-coded documents, this shows that computational social scientists can obtain unbiased estimates and valid confidence intervals with far fewer hand-coded documents when they use DSL instead of GSO. We provide a new Figure R-1 (see PDF) based on our experiment. We also emphasize that as the quality of LLMs improves and their deployment gets easier, the cost-benefit proposition for DSL over GSO will be even greater. In Figure R-2 of the attached PDF, we show the gains of DSL at higher accuracies on simulated data. **(2)** *As minor points: i) the title of the paper is somewhat misleading because, as the authors point out, the proposed approach is agnostic to the methodology used to obtain the surrogate labels, so the fact that they use LLMs is more of a detail of the experiments than a characteristic or contribution of the proposed approach;* Following your and other reviewers' advice, we are changing the title to **"Using Imperfect Surrogates for Downstream Inference: Design-based Semi-supervised Learning for Social Science Applications of Large Language Models."** We will also emphasize the generality of DSL and clarify that LLMs are one application in the revision. Please see more detailed responses in our Global Response (at the top of this page). **(3)** *[...] the paper (in its current form) does not seem a good fit for a conference proceeding considering the amount of important experimental detail that needs to be relegated to the supplementary material, without which it is extremely difficult to properly understand the results.* We will move more experimental details out of the supplement (in particular, we will be clearer about the specification of downstream analyses, the details of LLM annotations, and training procedures). We are open to suggestions about which would be most useful to move.
While we know that many experimental details are in the appendix, we see the experiments as primarily validating that the theory holds in practice, and hence not the main focus. We think NeurIPS would be an excellent venue because this paper tries to address the intersection and frontier of the computational social sciences: as some reviewers note, the common social science task of downstream statistical analysis (with its focus on bias and coverage) is relatively new to computer science communities, while the valid use of LLMs is new to social scientists. Given this novelty and the interdisciplinary nature of our work, we believe NeurIPS is the best venue. **(4)** *[...] the authors emphasize the need for knowing \pi (the probability of obtaining a gold-standard label) and \pi > 0 for all samples. However, this is not further discussed or described in the experiments. [...] [T]he authors claim that \pi is known because they can choose which samples to label; however, how does that make \pi for all samples that are not going to be labeled (with probability 1) larger than 0?* Thank you for your question. $\pi$ is the probability of being sampled for hand-coding. As an example, consider random sampling. If we have 10,000 documents and sample 100 of them completely at random for hand annotation, $\pi = 1/100$ for every document, regardless of whether any individual document is chosen or not. Therefore, in the social science applications where researchers can sample documents for hand-coding, this $\pi$ is known, and it is greater than 0 (a short sketch of the resulting design-based correction follows this thread). In our experiment, to show the generality of our approach, we use a stratified sampling procedure where we stratify on LLM labels so that we can oversample rare cases when we conduct hand-coding. We thank the reviewer for this clarification question, and we will add a simple example in the paper itself to drive this point home and will reinforce it when we describe applications. **(5)** *The authors discuss some of the limitations of the proposed approach. However, the limitations are quite generally about the addressed setting and less so about the methodology. Also, they point to four limitations, however, only three are described [...].* Thank you also for flagging the concerns about the limitations; we will expand this (and get the count correct!) to focus more on limitations specific to the method, with an emphasis on points raised by the reviewers. --- Rebuttal Comment 1.1: Comment: Thanks for the very detailed response (including the figures) to the weaknesses and questions raised in the original review. The answers to points (1) and (4) will be especially important to address in the revision to make it easier for the reader. I have updated my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you so much for your kind words. In the revised manuscript, we will make sure to incorporate these changes, especially (1) and (4), as you suggested. Thank you again for your careful engagement with our work!
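A minimal sketch of the design-based idea behind a known $\pi$. The pseudo-outcome below is the standard inverse-probability bias correction used in this literature; the code is our own illustration under simplified assumptions (simple random sampling, a single binary label), not the paper's implementation:

```python
# Design-based correction with known sampling probability pi.
# R indicates whether a document was sampled for hand-coding; q_hat is the
# surrogate (e.g., LLM) label. The pseudo-outcome corrects q_hat with the
# gold-standard label wherever it exists, weighted by 1/pi, so its mean is
# unbiased for the true prevalence however biased q_hat is.
import numpy as np

rng = np.random.default_rng(0)
N, n_labeled = 10_000, 100
pi = n_labeled / N                               # known by design: 1/100
y = rng.binomial(1, 0.2, N)                      # true labels (mostly unobserved)
q_hat = np.where(rng.random(N) < 0.9, y, 1 - y)  # biased surrogate

R = np.zeros(N)
R[rng.choice(N, n_labeled, replace=False)] = 1   # simple random sample

y_tilde = q_hat + (R / pi) * (y - q_hat)         # design-based pseudo-outcome
print(y_tilde.mean(), y.mean())                  # unbiased despite surrogate bias
```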
Rebuttal 1: Rebuttal: Thank you to all the reviewers for the careful engagement with our work. In this paper, we proposed how to use imperfect LLM annotations in downstream regression analyses while guaranteeing asymptotic unbiasedness and proper uncertainty quantification, which are fundamental to social science research. All four reviewers agreed that the problem and setting of our paper are interesting and that our proposed methodology has strong theoretical guarantees. Most felt that our experiments using 18 different computational social science data sets were strong as well. In this reply, we address some of the common critiques and provide a guide to the new figures in the attached PDF. **Why Bias and Coverage are Important in the Social Sciences** We want to further clarify why we focused on bias and valid uncertainty quantification and why they are often prioritized over RMSE in the social sciences. This is primarily because social scientists often conduct statistical analyses to *test social science hypotheses* rather than to make predictions. For example, when studying online hate speech, social scientists would be interested in testing whether highly educated people are less likely to post hate speech. To conduct such hypothesis testing, valid uncertainty quantification (with confidence intervals and corresponding p-values) is essential. If there exist several estimators that are asymptotically unbiased and provide valid confidence intervals, it is best to choose the one with the lowest RMSE. In this paper, we showed that the gold-standard-only (GSO) estimator and the DSL estimator are the only two methods that are unbiased and have valid confidence intervals, and that DSL outperforms GSO in terms of RMSE, as we elaborate below. **Concerns about RMSE Improvements over Existing Alternatives** There was concern about the RMSE improvements of DSL over the gold standard only (GSO) and semi-supervised learning (SSL) benchmarks. In terms of comparisons against GSO, we admit that Figure 2 did not convey clear information about how much DSL can improve RMSE (this was masked by the log scale). Please let us re-clarify this point here. As written in the original draft, in our Congressional Bills Data experiment, we find that the RMSE gain is about 10% with zero-shot learning LLMs and about 30% with five-shot learning. To make these numbers more intuitive, we translated the RMSE gains into gains in the number of hand-coded documents, i.e., how many more hand-coded documents does GSO need to have the same RMSE as DSL? This translation is possible because both estimators are asymptotically unbiased, and all the gains come from a smaller variance (a back-of-the-envelope version of this translation is sketched below). We find that, for the zero-shot case, the RMSE gain by DSL is equivalent to having 50% more hand-coded documents, and for the five-shot case, the RMSE gain by DSL is equivalent to having 100% more hand-coded documents. Given the significant cost of obtaining hand-coded documents, this shows that computational social scientists can obtain unbiased estimates and valid confidence intervals with far fewer hand-coded documents when they use DSL instead of GSO. We provide a new Figure R-1 (see PDF) based on our experiment. We also provide a new simulation result in Figure R-2 to show that this sample-size gain increases as the accuracy of LLMs increases. We will incorporate these new results and clarification in the next revised version.
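The sample-size translation can be sketched with a simple variance heuristic. This is our own back-of-the-envelope arithmetic, assuming RMSE for an unbiased estimator scales as $1/\sqrt{n}$; the figures reported above come from the experiments themselves:

```python
# Heuristic sample-size translation: for unbiased estimators, RMSE scales
# roughly as 1/sqrt(n), so an RMSE ratio maps to a sample-size ratio of
# (RMSE_GSO / RMSE_DSL) ** 2.
def extra_documents_needed(rmse_gain: float) -> float:
    """rmse_gain: fractional RMSE reduction of DSL relative to GSO."""
    return (1.0 / (1.0 - rmse_gain)) ** 2 - 1.0

# A 30% RMSE gain (the five-shot case) corresponds to roughly 100% more
# hand-coded documents under this heuristic.
print(extra_documents_needed(0.30))  # ~1.04
```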
In terms of comparison against SSL, we want to first re-emphasize that SSL does not meet the social science priority of bias and coverage. Following the reviewers' comments, we added several additional variants of SSL methods, and all of them are still biased and have invalid confidence intervals, as our theory suggested. Focusing only on RMSE, we expect that SSL, which is optimized for RMSE, can outperform DSL. However, in our experiment, we showed that DSL, while maintaining unbiasedness and valid confidence intervals, can achieve RMSE comparable to SSL (even though it is often higher than SSL in a finite sample). Importantly, we also show that, as sample size increases, DSL provably dominates SSL in terms of RMSE because the bias of SSL does not vanish with sample size, while the variance of DSL vanishes at rate $O(1/n)$. One reviewer raised concerns over the strength of the SSL baseline. We have added stronger SSL baselines, including the estimator of Wang et al. (2020), which, as far as we know, is the only published estimator that covers exactly the same setting as ours (downstream regression analyses). As it requires stronger parametric assumptions, it is, in general, biased and has invalid confidence intervals in our experiments (see Figure R-3 in the PDF). **Concerns about the Title** Multiple reviewers raised concerns about the title because DSL can be applied to any surrogates and is not limited to LLMs. To address this, we are going to change the title to: "**Using Imperfect Surrogates for Downstream Inference: Design-based Semi-supervised Learning for Social Science Applications of Large Language Models.**" We will also emphasize the generality of DSL and downplay the specific LLM angle in the paper. But we hope to keep some of the framing for three reasons. First, LLMs are becoming a huge part of the computational social sciences (e.g., Ziems et al., 2023), and the application to LLMs will be more than 90% of the use cases of our approach. Second, we think the approach works very well with the applied zero/few-shot LLM workflow. You have some LLM annotations, but you have to sample some observations to check that it is working; in doing so, you already build up a gold standard dataset with which you can use DSL to improve performance. Third, we think LLMs will continue to get better, and that will just make the DSL estimator improve even more. **Additional Changes** Additional changes are specific to individual reviewers and discussed in those rebuttals. Pdf: /pdf/1c62268f3d252eb9fad5bad945da9e41c6182d86.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
DynGFN: Towards Bayesian Inference of Gene Regulatory Networks with GFlowNets
Accept (poster)
Summary: In this work, the authors propose an extension of GFlowNets to enable posterior sampling of cyclic graph structures and apply their approach to infer gene regulatory networks (GRNs). While existing methods are able to infer cyclic graph structures or sample from Bayesian posteriors over DAGs, they cannot do both simultaneously. The authors formulate the problem as a dynamical systems identification problem, which can be factored into the priors and the model likelihood. The priors are learned using either linear or non-linear HyperNetworks, while the model likelihood is estimated using the GFlowNet framework with a detailed balance loss. The authors show their model is robust to several parameters, including edge sparsity and the time interval between data points. Furthermore, they introduce several optimization ideas enabling a tractable search over admissible graph structures. Finally, they benchmark their approach against several baseline methods and apply their approach to an RNA velocity dataset to show their method is able to recover known and putative gene-gene interactions. Strengths: The paper addresses an interesting/important question which has been the center of much research for decades: posterior sampling over directed graphs, this time with cycles and not DAGs. The authors appear knowledgeable in this field, and they take advantage of GFlowNets, building a method to assess posterior structures with cycles as they unfold them across time. Weaknesses: Our biggest grievance is that the paper is just not reader-friendly for anyone who is not an expert in that specific line of work. As noted, the work seems technically very solid, offering a specific new solution for the stated problem. Still, as researchers not in this specific field, we remain unsure after reading it of the overall innovation/significance of posterior sampling of structures with GFlowNets. We add more details below. (1) The termination criterion of Algorithm 1 is not clear. Under what conditions does the transition probability become null? Although one of the main innovations of the paper is sampling from the posterior of GFlowNet structures, the authors don't reference any standard Bayesian analysis. How are posterior samples summarized (for example, how is a point estimate of p obtained from posterior samples in Figure 3)? How and when does Algorithm 1 converge to the posterior? Is the approach sensitive to variable initializations? (2) Although the authors compare their approach to sensible baselines using sensible metrics, it is not clear the approach has a clear advantage over the original GFlowNet in this applied setting. For example, how does the MAP parameter estimate (or equivalent) of this method compare to the original GFlowNet under the AUC metric (these are presumably comparable). Does GFlowNet produce different insight on gene-gene interactions in the RNA velocity dataset? (3) The authors assume causal sufficiency as a criterion for model identifiability. Is this a reasonable assumption given the sparsity of single-cell data and limitations in the scalability of the model? In other words, how can we be reasonably certain that all causal gene measurements are observed, given that the model effectively only functions on a gene space of size 20 or lower? (4) Section 5 on graph augmentation is very abstruse. Efforts could be made to improve readability/accessibility for an audience that doesn't work directly in this field. The related Figure 2 is also very cryptic.
(5) It is not clear or obvious how the factorization into the "per-node" posterior in section 4.1 leads to the reduction in search space from $2^{d^2}$ to $d \cdot 2^d$. A brief explanation would be useful. (6) Line 154: "Previous work has shown GFlowNets are useful in settings with multi-modal posteriors." This needs a citation. (7) There is inconsistency in the notation in Figure 1, which leads to some confusion. Specifically, it appears the Q variable is overloaded. How is the graph prior (represented as an adjacency matrix) sampled? Is there a hyperprior, or are graph structures uniform? (8) The patterns that the authors suggest are evident in the Figure 3 heatmaps are not convincing. Perhaps it would be better to sort both heatmaps the same way instead of independent clustering. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: see above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: see above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments, insightful feedback, and helpful suggestions to improve the manuscript. Please refer to our feedback summary and the attached document regarding questions/clarifications on Fig. 1, Section 5 and Fig. 2, and Fig. 3. In the following, we address individual comments/questions brought up by the reviewer to improve the overall exposition quality of the manuscript. We are happy to incorporate any additional suggestions and feedback. >The termination criterion of Algorithm 1 is not clear. How and when does Algorithm 1 converge to the posterior? While the stopping criterion is not explicit, it is part of the GFlowNet policy $P_F(s_i | s_{i-1}; \psi)$. See [1] for further details and for sufficient conditions under which the optimal GFlowNet policy (which includes the stopping criterion) correctly converges to the posterior. >How are posterior samples summarized? How is a point estimate of p obtained from posterior samples in Fig. 3? We are not certain which $p$ is being referred to, but any statistics of the posterior are computed either by Monte Carlo (sampling from the GFlowNet policy) or using log-likelihood evaluation of the model. In Fig. 3 there is a $\rho$ that quantifies the correlation between genes used to determine the ground truth graph. Is this what is being referred to? >Is the approach sensitive to variable initialization? We do not think DynGFN is very sensitive to variable initialization. Examining Tables 1-4 in Section 6, we see that the standard deviation over 5 seeds is much smaller than the differences between methods. This suggests that DynGFN is relatively stable in that it outputs similarly performing posteriors across different initializations. >Section 5 on graph augmentation is very abstruse. To clarify this section, we have amended Fig. 2 (see attached document) and included more informative labels and captions to correspond to the language used in Section 5. >It is not clear that the approach has a clear advantage over the original GFlowNet in the applied setting? A key contribution of this paper is how to leverage time and velocity information to address Bayesian structure learning over **cyclic** graphs representing dynamics, while DAG-GFN [2] (we assume this is what is meant by original GFlowNet) only works in the acyclic case. Therefore, it is not directly applicable to graphs with cycles, which are known to occur frequently in gene regulatory networks. We refer the reviewer to Fig. 4 in the attached PDF, where we show an explicit example in which using the DAG assumption for cyclic structure learning fails. >Is the causal sufficiency assumption a reasonable assumption..? The reviewer identifies a great point. We agree that the assumption of causal sufficiency likely fails for single-cell data with only 20 genes. However, as we grow the number of genes that we are able to infer over, this assumption becomes more plausible. Ideally, we want an approach that yields (1) good posteriors, (2) good identifiability, and (3) good scalability, but due to the significant challenges associated with each of them, we have devoted our effort to addressing the first two in this work. If GFlowNets (or some other inference procedure) improved scalability, DynGFN could immediately leverage this advancement to do inference over a larger number of genes, which would make the assumption more plausible. >It is not clear or obvious how the factorization into the "per-node" posterior in section 4.1 leads to the reduction in search space.
The per-node factorization in effect uses $d$ GFlowNets to learn $d$ graphs, each of size $(d \times 1)$, for $dx_i$ and its parents $x$ (see Equation 4). This is possible because the posterior is factorizable into independent components. This is an advantage over DAG-GFN [2], as such a factorization is not possible in the DAG setting. The $d$ graphs are then aggregated to form the full $(d \times d)$ graph $G$. In this case, each of the $d$ GFlowNets trains over a search space of $2^d$ combinations. Therefore, the state space of the per-node factorized DynGFN is $d \cdot 2^d$. In contrast, if a single GFlowNet is used, then there are $2^{d^2}$ possible states. We will include this explanation in section 4.1. Additionally, we thank the reviewer for noting this and bringing it to our attention. We correct line 191: the statement $d \cdot 2^d = 20 \cdot 2^{20} \approx 2^{104}$ for $d=20$ should instead read $d \cdot 2^d \approx 2^{4.3 + 20}$ for $d=20$ (a short computation illustrating this follows this thread). >Previous work has shown GFlowNets are useful in settings with multi-modal posteriors. This needs a citation. We thank the reviewer for noting this. We agree and will add citations [1-4] to this statement. >Figure 3 See the attached PDF for an updated Figure 3 with the same ordering. To make our point more obvious, we additionally include a histogram of correlation values for the full correlation and the correlation over cell cycle time. We can see that the absolute value of the correlation over cell cycle time is substantially higher on average in distribution. We thank the reviewer for their insightful and helpful feedback on our paper. We hope that our rebuttal fully addresses all the important points raised by the reviewer. If our responses, alongside the accompanying additions, improve the overall quality of our work, we hope the reviewer would kindly raise their score. We are more than happy to answer any further questions. [1] Bengio, Yoshua, et al. "GFlowNet foundations." arXiv preprint arXiv:2111.09266 (2021). [2] Deleu, Tristan, et al. "Bayesian structure learning with generative flow networks." Uncertainty in Artificial Intelligence. PMLR, 2022. [3] Malkin, Nikolay, et al. "Trajectory balance: Improved credit assignment in GFlowNets." Advances in Neural Information Processing Systems 35 (2022). [4] Madan, Kanika, et al. "Learning GFlowNets from partial episodes for improved convergence and stability." International Conference on Machine Learning. PMLR (2023). --- Rebuttal Comment 1.1: Title: Feedback for the rebuttal Comment: We have read through the authors' response and appreciate the effort made to fix/clarify points we raised. As noted, the "causal sufficiency assumption" is a very strong assumption and, as the authors admit, at the current scale (~20 genes) far from realistic. This limits the actual applicability of the work and makes it more of a computational/methodological contribution. Nonetheless, we appreciate the work/contribution and retain our (positive) score. --- Rebuttal 2: Title: what did you think of the authors' response? Comment: The authors have provided detailed responses to your questions and comments. Please revise the text and score of your review to reflect how their responses have changed your perspective on their submission, and please acknowledge that you have read the authors' carefully written response.
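To make the search-space comparison above concrete, here is a quick computation of both state-space sizes in log scale (our own illustration; it matches the corrected numbers in the response):

```python
# State-space sizes for structure search with d = 20 genes, in log2:
# a single GFlowNet over the full adjacency matrix vs. the per-node
# factorization into d GFlowNets with 2**d states each.
import math

d = 20
log2_joint = d ** 2               # log2 of 2**(d**2) states
log2_per_node = math.log2(d) + d  # log2 of d * 2**d states
print(log2_joint)                 # 400
print(round(log2_per_node, 1))    # 24.3, i.e. 2**(4.3 + 20)
```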
Summary: The authors tackle the well-known problem of Gene Regulatory Network (GRN) inference from time series data. Starting from an estimate of RNA velocity, GRN inference is framed as a causal discovery task, with two specificities: the inferred graphs may be cyclic, and a probability distribution over candidate graphs is sought rather than a point estimate, so as to account for the high amounts of noise inherently present in biological data. The method is articulated around two assumptions: - a dynamic structural model $\frac{dx_i(t)}{dt} = f_i(\text{Pa}(x_i), \epsilon_i)$ for each gene. $\text{Pa}(x_i)$ is directly obtained from the regulatory graph $G$, and the structural causal model (SCM) $f_i$ is parametrized by $\theta$ - a factorisation of the joint distribution of the graph $G$, the structural causal model parameters $\theta$, and the observed data $\mathcal{D}$ as $p(G,\theta,\mathcal{D}) = p(\mathcal{D}|G,\theta)p(\theta|G)p(G)$. The graph sampler $p(G)$ is modeled as a GFlowNet. Next, since the parameters of the SCM depend on the graph structure, they are computed by a HyperNetwork, i.e., $p(\theta|G)$ is replaced by a neural network that takes a graph $G$ as input and outputs the structural equation model parameters $\theta$: $p(\theta|G) = \delta(\theta|G)$. Next, the authors evaluate the proposed method, DynGFN, on several synthetic examples as well as a real-world one involving single-cell RNA velocity data. The authors introduce other baselines by using different approaches than a GFlowNet for the graph sampler $p(G)$. This being said, DynGFN offers substantial improvements when it comes to jointly recovering the ground truth structure while characterizing the uncertainty around it. Strengths: The big picture of the method is well explained. The work described here is likely to have a significant impact on the GRN inference community, as I believe that not many probabilistic methods are available in this field. I also believe that DynGFN could be used as a starting point for the more general problem of physical/biological dynamical system discovery, where one is not only concerned with the graph adjacency matrix but with providing a mechanistic description of the structural causal model $f_i$ [1,2] [1] Discovering governing equations from data by sparse identification of nonlinear dynamical systems, PNAS, 2016 [2] Identification of dynamic mass-action biochemical reaction networks using sparse Bayesian methods, PLoS Comp. Biol., 2022 Weaknesses: - The paper builds on GFlowNets for the graph sampler $P(G)$ and HyperNetworks to generate the structural causal model parameters $\theta$ given a sampled graph $G$. I found it slightly difficult to understand everything *in detail* given that these tools are each introduced only in a small paragraph. - Even though the method focuses on Bayesian Inference, it would have been interesting to assess DynGFN's performance against SOTA methods for GRN inference from time-series data, such as DynGENIE3 [1], BINGO [2], or others. Lastly, these methods use time-resolved gene expression data, from which one can get an estimate of the RNA velocity using, for instance, finite differences, which may be quite a rough estimate for sparsely sampled measurements. Do I understand right that the RNA velocity data employed in this submission somehow constitutes a "more principled" way to estimate the RNA velocity?
[1] dynGENIE3: dynamical GENIE3 for the inference of gene networks from time series expression data, Scientific Reports, 2018 [2] Gene regulatory network inference from sparsely sampled noisy data, Nature Communications, 2020 Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Any insights on why $h$-DynGFN yields significantly better results than $\ell$-DynGFN on the linear system presented in Table 1? - In the single-cell experiment, where does the ground truth network (used to compute AUC / Bayes-SHD) come from? Could you explain a little more what "correlation over cell cycle time" means in Section 6.3? I have read the authors' rebuttal. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The authors mention as the main limitation the difficulty of scaling to large gene regulatory networks due to the combinatorial explosion. Causal sufficiency is also assumed here, which means that all relevant variables are observed. This assumption hardly ever holds in real-world biological problems. This issue is also mentioned by the authors in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and insightful feedback. We address your comments in what follows. >The paper builds on GFlowNets for the graph sampler and HyperNetworks to generate the structural causal model parameters. I found it slightly difficult to understand everything given that these tools are each introduced only in a small paragraph. Thank you for bringing up this issue; we realize our work is quite dense, and the page limit was an issue. We will of course expand the exposition in the main text with the addition of an extra page. We are happy to incorporate any specific suggestions you have on how to improve the exposition. > Even though the method focuses on Bayesian Inference, it would have been interesting to assess DynGFN's performance against SOTA methods from time-series data. We are also very interested in how DynGFN compares to and can be combined with SOTA methods for inferring GRNs from time-series data! As noted by the reviewer, methods such as DynGENIE3 and BINGO consider a different setting where we have access to multiple samples from the time series. They are generally not well suited to velocity measurements, which effectively amount to having two extremely close time points, as these methods often use changes over time to infer gene regulatory network dynamics. We believe adapting these state-of-the-art methods to infer regulation from velocity data would be an interesting direction given the growing quantity of RNA-velocity analyses, but we believe this is out of scope for this project. Future work on how to adapt DynGFN to the time-series setting, potentially using ideas developed in [3], is also extremely promising. >Does the RNA velocity used in this submission constitute a more 'principled' way of estimating RNA velocity? We do not explicitly estimate RNA velocity in this work. Rather, we use estimated RNA velocity (using scVelo [1]) to help formulate the problem of Bayesian structure learning over cyclic graphs using observational data consisting of dynamic tuple pairs $(x, dx)$. We leave inference of RNA velocity using DynGFN for future work. See response to Reviewer [b9zX] for further discussion. >Insights on why h-DynGFN yields better results compared to l-DynGFN on the linear system? The reviewer brings up an insightful question. We suspect that l-DynGFN may be more sensitive to the stochasticity between batches of data compared to h-DynGFN, since the analytic linear solver used in l-DynGFN directly uses the minibatch to solve for the parameters (the factorization $P(\theta | G)$ is poorly enforced). This means that there is greater stochasticity in the reward depending on the batch selected. We believe this is why l-DynGFN yields worse results on the linear system compared to h-DynGFN. >Where does the ground truth network in the single-cell experiment come from? This ground truth network is constructed using external prior biological knowledge. Specifically, we extracted a subset of the gene network from the KEGG cell cycle pathway entry hsa04110. >What is meant by 'correlation over cell cycle time'? These data are from Riba et al. 2022 [2], who supply public data with a cell-cycle pseudo-time label called `cell_cycle_theta`. We bin this label into 10 bins and approximate the correlation of gene expression over cell cycle time (a short sketch of this computation follows this thread). We would like to thank the reviewer for their review of our paper.
We believe we have answered all the great points raised by the reviewer in our author response, and we kindly ask the reviewer to consider upgrading their score if the reviewer is satisfied with our responses. Please let us know if you have any additional feedback or comments. We would be happy to discuss. [1] Bergen, Volker, et al. "Generalizing RNA velocity to transient cell states through dynamical modeling." Nature biotechnology 38.12 (2020) [2] Riba, Andrea, et al. "Cell cycle gene regulation dynamics revealed by RNA velocity and deep-learning" Nature Communications (2022) [3] Tong et al. "Simulation-Free Schrödinger Bridges via Score and Flow Matching" ArXiv (2023) --- Rebuttal 2: Title: what did you think of the authors' response? Comment: The authors have provided detailed responses to your questions and comments. Please revise the text and score of your review to reflect how their responses have changed your perspective on their submission, and please acknowledge that you have read the authors' carefully written response.
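To illustrate the 'correlation over cell cycle time' computation described in the rebuttal above, here is a minimal sketch of our reading of the procedure. The expression matrix and shapes are hypothetical placeholders; only the `cell_cycle_theta` label name comes from the cited data:

```python
# Sketch: bin a cell-cycle pseudo-time label into 10 bins, average gene
# expression within each bin, and correlate genes across the bin means.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_cells, n_genes = 500, 5
expr = pd.DataFrame(rng.normal(size=(n_cells, n_genes)),
                    columns=[f"gene_{i}" for i in range(n_genes)])
theta = rng.random(n_cells)                  # cell_cycle_theta in [0, 1)

bins = pd.cut(theta, bins=10, labels=False)  # 10 pseudo-time bins
binned_means = expr.groupby(bins).mean()     # shape (10, n_genes)
corr_over_time = binned_means.corr()         # gene-gene correlation over time
print(corr_over_time.round(2))
```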
Summary: The authors introduced a principled methodology called DynGFN, which effectively identifies cyclic structures while concurrently modeling Bayesian posteriors over graph structures. Leveraging RNA velocity, the authors formulated a dynamical system that reveals the underlying gene regulatory networks. DynGFN was meticulously designed with three modules, and its superior performance was demonstrated through synthetic experiments and real-world analysis. Strengths: Real biological systems can rarely be formulated as DAGs; there are always feedback loops that make the system work. DynGFN is appropriately motivated by real biological thinking. Weaknesses: The overall manuscript was hard to follow, as too much content was included, and many technical details still need to be clarified. It would improve substantially if the manuscript were carefully restructured. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: Major concerns 1. The synthetic experiments and real analysis did not explicitly present any results related to feedback loop identification, which serves as one of the motivations behind DynGFN. Could you elaborate on this point? 2. How many epochs did it take to stabilize the training of l-DynGFN? 3. Given that the inference of RNA velocity involves multiple free parameters, how do these parameters impact the inference results obtained from DynGFN? 4. In the synthetic experiments simulating dynamic systems, how were the ground truth regulatory networks constructed based on the simulated dynamic systems? 5. Appendix B.3 mentions the construction of the validation and test sets to fine-tune the hyperparameters. Could you explain how these sets were constructed for the experiment? 6. Regarding Table 4, could you clarify what ground truth graph was used for calculating Bayes-SHD and AUC? Was it formulated based on external knowledge? 7. In Figure 3, it is important to note that gene regulation is not solely determined by the direct interaction between corresponding proteins. In fact, it has been reported that "CDK1 targets MCM2-7 complex..." [1]. [1] Enserink, Jorrit M., and Richard D. Kolodner. "An overview of Cdk1-controlled targets and processes." Cell division 5.1 (2010): 1-41. Minor concerns 1. Figure 1 lacks definitions for numerous parameters, which may cause confusion in understanding the illustration. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: The authors mentioned two limitations: 1) scalability issues concerning larger systems and 2) hyperparameter tuning. To address these challenges, the authors suggested employing more informative priors or grouping genes together as nodes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and insightful feedback. Please refer to our summary for all reviewers and the attached document regarding questions about cyclic dependencies in our synthetic and real experiments, as well as for clarifications regarding Fig. 1. We also introduced a new toy example to exemplify our model’s capability for learning cyclic dependencies (See attached Fig. 4 and response to Reviewer [PVQy]). We now address each salient point individually. >How many epochs for l-DynGFN to stabilize? To show when l-DynGFN stabilizes during training, we have introduced validation curves for mean squared error and Bayes-SHD over the course of training on the linear system with $d=20$ (See Fig. 5 in the attached). In this experiment, l-DynGFN stabilizes in around 500 epochs. >How do RNA velocity parameters impact the inference results obtained by DynGFN? The reviewer raises a great question. We agree that these free parameters may impact the results of DynGFN. DynGFN currently takes RNA velocity as input, but an ideal solution would incorporate this source of uncertainty (uncertainty due to RNA velocity inference) into the posterior over the graph and the parameters. Due to the difficulty of the cyclic structure learning problem and given that DynGFN already requires multiple posteriors, we leave investigation into a fully Bayesian RNA velocity and gene network recovery method to future work. We will add discussion of this to the limitations section. >How were the ground truth regulatory networks constructed for the synthetic dynamical systems? We first construct the ground truth regulatory network (GRN) and then use this ground truth GRN to simulate the system dynamics. Consider the linear system $dx = Ax$. To construct the system, we randomly sample $A$ such that $A \in \mathbb{R}^{d \times d}$ with a specified sparsity. We constrain $A$ to satisfy some properties that ensure the system is stable. Specifically, we subtract the maximum eigenvalue from the diagonal of $A$. This $A$ is then used to simulate $dx = Ax$. The procedure follows for the non-linear system with the addition of the non-linearity. We will include this description in the Appendix. For more specific implementation details see line 58 of `src/datamodules/simulated_datamodule.py` in the attached code; a minimal illustrative sketch also follows this response. >How were the validation and test sets constructed for the experiments? How was the ground truth graph in the real data setting formulated? In the synthetic experiments, we simulate a system and generate train, validation, and test observations $(x, dx)$ for a given system with true connections defined by $A$. In the real data setting, we split the observed data $(x, dx)$ into train, validation, and test datasets. $A$ in the real data setting is constructed using prior biological knowledge. Specifically, we extracted a subset of the gene network from the KEGG cell cycle pathway entry hsa04110. Fig. 3 (see attached document for updated version) shows the constructed ground truth graph. >It is important to note that gene regulation is not solely determined by the direct interaction between corresponding proteins. We agree; we will add a mention of this in Section 6.3 and to the future work section. We find this quite interesting, as it gets back to the question of which gene regulatory network we are trying to discover. Our model is searching for a posterior over sparse gene regulatory networks which explain the data dynamics.
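To make the construction above concrete, here is a minimal illustrative sketch (the sampling details, integrator, and variable names are simplified assumptions on our part; the exact implementation is in the attached code):

```python
import numpy as np

def make_stable_sparse_system(d=20, sparsity=0.8, seed=0):
    """Sample a sparse A for the linear system dx = Ax, then shift the
    diagonal so the largest real eigenvalue is zero (stable system)."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(d, d))
    A[rng.random((d, d)) < sparsity] = 0.0  # enforce the specified sparsity
    A -= np.max(np.real(np.linalg.eigvals(A))) * np.eye(d)
    return A

def simulate(A, x0, dt=1e-2, steps=1000):
    """Forward-Euler rollout returning states x and velocities dx."""
    xs, dxs = [], []
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        dx = A @ x
        xs.append(x.copy())
        dxs.append(dx)
        x = x + dt * dx
    return np.array(xs), np.array(dxs)
```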
It could be argued that as the time lag goes to zero, we will find only direct interactions; however, we leave further investigation of this to future work. We thank the reviewer for their time and effort in reviewing our work, and we hope the reviewer would kindly consider a fresh evaluation of our work given the main clarifying points outlined above. If you have any additional comments, we would be happy to discuss further. --- Rebuttal Comment 1.1: Comment: I have read through the responses from the authors and I appreciate the efforts to further improve the manuscript. NOTEARS is inherently not a suitable method to identify cyclic relations in the data. An alternative approach would be preferred. Therefore, I keep my original score unchanged. --- Rebuttal 2: Title: what did you think of the authors' response? Comment: The authors have provided detailed responses to your questions and comments. Please revise the text and score of your review to reflect how their responses have changed your perspective on their submission, and please acknowledge that you have read the authors' carefully written response.
Summary: The authors propose to learn dynamical systems with cyclic dependencies. They factorise the generative model using a variant of the GFlowNet model, a HyperNetwork and MLPs. Strengths: Positives include: 1. extending the formulation from DAGs to cyclic graphs 2. introducing a per-node posterior formulation to improve the computational complexity 3. using GFlowNets to address the multimodality issues Weaknesses: Negatives include: 1. due to its high density, the paper is hard to understand: a lot is explained and detailed in the Appendix 2. it is hard to gauge the quality of the results: even for the artificial case it is hard to assess intuitively if the approach solves the problem in a satisfactory way. Perhaps the authors should use simple systems with few nodes, with and without cycles, and show how well the true dependencies are recovered. 3. it is unclear how important being able to model cyclic structures is: where are the cyclic dependencies in the examples shown in Fig. 2 and Fig. 3? The authors should show a clear example where failing to model the cyclic dependencies results in a poor fit. 4. it is unclear how to translate the declared advantage of being able to model multi-modal posteriors in practical terms: how would we make use of this representation capacity when explaining the causal dependencies in a gene regulatory network? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: To illustrate the advantages of the proposed approach: 1. use simple systems with few nodes, with and without cycles, and show how well the true dependencies are recovered. 2. show a clear example where failing to model the cyclic dependencies results in a poor fit. 3. show how to use the multimodal representation capacity when explaining the causal dependencies in a gene regulatory network. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and insightful feedback. Please refer to our feedback summary and the attached document regarding questions about cyclic dependencies in Fig. 2 and Fig. 3. We also include further discussion regarding the toy example in the feedback summary. We now address each comment individually. >Perhaps the authors should use simple systems with few nodes, with and without cycles, and show how well the true dependencies are recovered. The reviewer suggests a valuable addition. To convey this, we introduced a toy example with a simple 3 variable system (with and without cycles), see Fig. 4 in the attached document. We consider a method for learning acyclic graphs (NOTEARS [1]) which does not model cyclic dependencies, unlike DynGFN. It is clear from this example that NOTEARS struggles to recover cyclic dependencies. We will add Fig. 4 to the manuscript. We can also verify this result by considering a conditional independence test over cyclic dependencies. With reference to the attached Fig. 4, it is easy to see that the conditional independence test fails in the cyclic setting: In the acyclic case, we can identify the v-structure by observing that $x_1 \perp x_3$ and $x_1 \not\perp x_3 | x_2$, which implies that $x_2$ is a collider (i.e. $x_1$ and $x_3$ are marginally independent and conditionally dependent); in the cyclic example, we introduce time dependencies such that there are cycles in the summary graph that render these variables marginally dependent (a small numerical sketch of the collider effect follows this response). >it is unclear how to translate the declared advantage of being able to model multi-modal posteriors in practical terms Learning multi-modal posteriors over structure allows us to quantify uncertainty over a particular causal dependency. In turn, it makes it easier to answer the active learning question, “how should we select interventions such that we minimize uncertainty?” [2]. We use this as motivation for our work and cite relevant papers (see lines 54-55). We will add the aforementioned citation to this list. As a first step, we need to effectively learn the multi-modal posterior over structure, hence the motivation and focus of this work. We hope our response fully addresses all the important and salient points raised by the reviewer. We believe the new example improves the clarity of our work, and we thank the reviewer for their excellent suggestion. Due to the increased clarity of these new examples, we ask that the reviewer consider raising their score. If you have any additional comments and suggestions we would be happy to discuss further. [1] Zheng, Xun, et al. "Dags with no tears: Continuous optimization for structure learning." Advances in neural information processing systems 31 (2018) [2] Toth, Christian, et al. "Active bayesian causal inference." Advances in Neural Information Processing Systems 35 (2022) --- Rebuttal 2: Title: what did you think of the authors' response? Comment: The authors have provided some responses to your questions and comments. Please revise the text and score of your review to reflect how their responses have changed your perspective on their submission, and please acknowledge that you have read the authors' carefully written response. --- Rebuttal Comment 2.1: Comment: The authors have provided a satisfactory answer to the main concerns we raised. The high density and the paper not being self-contained are still an issue. I will therefore correspondingly raise the score.
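As a small numerical illustration of the collider effect discussed in the response above (a linear-Gaussian toy of our own, not the system from Fig. 4): $x_1$ and $x_3$ are marginally uncorrelated, but conditioning on the collider $x_2$ induces a strong dependence.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x1 = rng.normal(size=n)
x3 = rng.normal(size=n)
x2 = x1 + x3 + 0.1 * rng.normal(size=n)  # collider: x1 -> x2 <- x3

print(np.corrcoef(x1, x3)[0, 1])  # ~0: marginally independent

def residualize(a, b):
    """Remove the least-squares projection of a onto b."""
    return a - (a @ b / (b @ b)) * b

# Partial correlation given x2 is strongly negative: conditionally dependent.
r1, r3 = residualize(x1, x2), residualize(x3, x2)
print(np.corrcoef(r1, r3)[0, 1])  # ~ -0.99
```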
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their time and insightful feedback when reviewing our paper. We appreciate the constructive criticisms and suggestions which will serve to improve the overall quality of our paper. Our method focuses on Bayesian structure learning over cyclic graphs for application in gene regulatory network (GRN) inference. We consider a dynamical systems perspective to model cycles through time while leveraging advancements in generative flow networks (GFlowNets) to learn complex posteriors over explanatory structure given the observational data. In general, reviewers found our work to address an important and long-standing problem in biology while employing a probabilistic/Bayesian perspective ([uDCE], [2wco]), to be well motivated for tackling the problem of cyclic Bayesian structure learning for GRNs ([PVQy, b9zX, uDCE]), and to provide a good quality contribution to the area ([b9zX, uDCE, 2wco]). Reviewers’ primary concerns consist of clarifying questions. Here we address some general concerns raised across reviewers. 1. **Cyclic dependencies in synthetic and real experiments:** Reviewers asked about cyclic dependencies in the synthetic and real experiments ([PVQy], [b9zX]). We highlight that all our experiments contain cyclic dependencies in the ground truth graphs, and we amend the accompanying figures (Fig. 2 and Fig. 3) to appropriately demonstrate this. Please see the attached document for reference. In Fig. 2 we have added a 3D realization showing the difference between the dynamic graph and the static graph. We show how we model cyclic dependencies present in the static graph through directed edges in the dynamic graph. This corresponds to the motivation of using dynamic data of the form (x, dx). In Fig. 3 we have adjusted the presented GRN to include the cyclic dependencies of the ground truth GRN that were considered in the experiment. 2. **Comparison to DAG learning method(s):** Reviewers raised points about demonstrating the effectiveness of DynGFN to learn cyclic dependencies relative to counterpart directed acyclic graph (DAG) learning methods ([PVQy], [2wco]). To show this, we have added an additional experiment (toy example) that considers a DAG-based system and an accompanying system with cyclic dependencies. We demonstrate that the DAG structure learning method (NOTEARS [1]) cannot learn the cyclic dependencies in the cyclic system, compared to DynGFN which performs very well in the cyclic setting. We show this result in Fig. 4 (included in the attached document) which we will include in the revised manuscript. 3. **Figure clarifications:** Some reviewers asked clarifying questions regarding the presentation of Fig. 1 ([b9zX], [2wco]) and Fig. 3 ([PVQy], [b9zX], [uDCE], [2wco]). To clarify Fig. 1, we will add subscripts to the $Q$ variable to separate its usage for modeling the posterior over graphs and the posterior over parameters, to outline the difference between the two $Q$’s. Specifically, for the posterior over graphs we will state $Q_\Psi(G | D)$ (this is consistent with the notation in the figure), while for the posterior over parameters we will state $Q_\phi(\theta | G, D)$. Since we model $Q_\phi(\theta | G, D)$ as a Dirac, this collapses to $\theta = h_\phi(G)$, as shown in the figure (a brief illustrative sketch is included below). For Fig. 3, we have amended the ground truth GRN (stated in item 1 above) and the accompanying heatmaps. Please see the attached document for the amended Fig. 3. [1] Zheng, Xun, et al.
"Dags with no tears: Continuous optimization for structure learning." Advances in neural information processing systems 31 (2018) Pdf: /pdf/b4707cdf91c14b59d03e5fda32b53eb0127af075.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Accessing Higher Dimensions for Unsupervised Word Translation
Accept (poster)
Summary: The authors argue that relying solely on low-dimensional vectors misses out on better denoising methods and valuable world knowledge present in high dimensions. Therefore, they propose coocmap, a method that can exploit high-dimensional co-occurrence counts. Through extensive experiments, the authors show coocmap works very well under different data sizes and data from various domains. Strengths: - The paper is well-presented and easy to follow. - The figures are illustrative and informative. - The authors conduct extensive experiments on different language pairs, including similar and dissimilar languages. Besides, the authors also explore the influence of data sizes and domain mismatch. - The method proposed, coocmap, is straightforward but effective. Compared with some baselines, the method seems to work surprisingly well on small data sizes. Weaknesses: There is no apparent weakness in this paper. However, there are some questions/suggestions (see below). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Line 12: "similar data", the authors should specify: similar in terms of what? - As vocabulary size is an important hyperparameter of the method, the authors should mention it in the main content (probably in the first paragraph of Section 4). The authors are also encouraged to explore the influence of vocabulary size on performance. - The authors should provide more intuition on why clip and drop are very helpful for harder cases, e.g., en-zh, as the vanilla coocmap is not stable on en-zh. - Line 194: "NewsCrawl 2019.es and ..." -> remove "and" - Figure 2: It would be great if the authors could note in the caption that the upper three figures are for similar languages while the lower three are for less similar languages. - I have a more general question: the study itself is interesting and demonstrates how a simple higher-dimensional co-occurrence matrix could largely help unsupervised word translation. However, in the era of LLMs, when and where can the method be possibly applied? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors discussed the limitation and broader impact of the paper. It is encouraged to include some additional experiments to explore the influence of vocabulary size. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the helpful review and corrections, which will help us improve the revision. > Line 12: "similar data", the authors should specify: similar in terms of what? We will specify similar in domains. > The authors should provide more intuition on why clip and drop are very helpful for harder cases, e.g., en-zh, as the vanilla coocmap is not stable on en-zh. Thanks for the suggestion, we will discuss more. Clip can be motivated by robust statistics, and some clipping happens automatically with SVD truncation. We also motivated clip intuitively in the section where it was introduced, but not specifically for harder cases. As for drop, we provided two citations that did it to a lesser extent, and we admittedly do not understand drop as well as clip. One additional intuition is that drop already happened for free in word vectors (a minimal sketch of clip and drop follows this response). > As vocabulary size is an important hyperparameter of the method, the authors should mention it in the main content (probably in the first paragraph of Section 4). The authors are also encouraged to explore the influence of vocabulary size on performance. We will add a summary of the main points about vocabulary size to the first paragraph of Section 4, in addition to a reference to the limitations. > the study itself is interesting and demonstrates how a simple higher-dimensional co-occurrence matrix could largely help unsupervised word translation. However, in the era of LLMs, when and where can the method be possibly applied? It is easier to use the findings than the method directly. Retrieval augmented LMs and knn LMs are perhaps two places where the findings may apply. Transformer LMs still have a low dimensional embedding layer which is typically a few thousand dimensions for each subword. Previous work has shown that knn and retrieval can improve the perplexity, but did not explain why. Our results suggest that the information lost in dimension reduction is probably why retrieval can be beneficial. Note that retrieval is just indexing for higher-order co-occurrences, and this raises the possibility that simply having the full-dimensional co-occurrences may also be helpful and capture information not already captured by transformer models. It would be pretty interesting to see if the drop and clip techniques are applicable in LLMs directly, since we show that even low-dimensional vectors can benefit from clip, which suggests that other activations that approximate co-occurrences may also benefit from clipping. On the other hand, drop was needed to actually get the improvement with higher dimensions, which may also be helpful to projects attempting to augment LLMs with a sparse high dimensional object. --- Rebuttal Comment 1.1: Comment: Thank you very much for your detailed response. All my questions have been answered now.
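As referenced above, a minimal sketch of clip and drop on a V x V association matrix (the full normalization pipeline and exact thresholds in the paper differ; `clip_pct` and `drop_k` are made-up defaults for illustration):

```python
import numpy as np

def clip_and_drop(X, clip_pct=99.0, drop_k=2):
    """Denoise an association matrix: cap extreme entries (clip), then
    remove the top singular directions (drop)."""
    X = X.astype(np.float64)
    hi = np.percentile(X, clip_pct)  # clip: robust-statistics view; the
    X = np.minimum(X, hi)            # largest entries become identical
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[:drop_k] = 0.0                 # drop: top directions tend to encode
    return (U * s) @ Vt              # corpus/domain-specific effects
```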
Summary: This paper proposes using high-dimensional co-occurrence statistics for unsupervised word translation rather than relying on low-dimensional vector embeddings. They propose using coocmap (which uses an association matrix of co-occurrences) and combining this with regularization operations (clip/drop) that eliminate noise. The goal is to achieve good performance on this task more efficiently (less data+compute) and good cross-domain performance. Experiments are performed on 7 languages and 3 domains, measuring precision@1 on the 5K most frequent words against the MUSE dictionary. The baseline uses fasttext vectors. Coocmap+variants outperform fasttext vectors across languages that are not very similar, and across domains in some cases, using an order of magnitude less data. Analysis shows that high-dimensional co-occurrence data is more robust than its low-dimensional counterparts. Strengths: 1. This work is interesting and identifies that for the task of unsupervised word translation the raw learning signal generalizes better than compressing into low-dimensional vectors which are lossy. 2. The domain mismatch experiments perhaps highlight this point well by relying on more fine-grained information that may be lost in the embeddings. 3. The proposed denoising functions are clever and simple. Weaknesses: The current draft is at times not very articulate and it can take a few passes to understand the intended point. Some revisions might help! Is the following an accurate characterization of the work: low-dimensional representations are not expressive enough for this task, so we start by overfitting with co-occurrences and use some heuristics to regularize? If yes, then does this feel more like gaming the task? And would that make the claim about being able to transfer to other monolingual tasks unfair? Doesn’t seem too convincing yet. Scaling does still seem like a challenge: would switching to BPE preserve the multilingual distributional hypothesis? Nit: including low-resource languages in the experiments might make the paper stronger. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Could some of the headroom in the vectors be covered by distilling from coocmap to low-dimensional vectors? i.e. is it an optimization issue or a capacity issue? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: See above for concerns regarding transferring to other tasks and being able to scale easily. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the review and feedback. We are glad that you liked the content of the paper and we will try harder to make the paper more articulate in future revisions. > Is the following an accurate characterization of the work: low-dimensional representations are not expressive enough for this task, so we start by overfitting with co-occurrences and use some heuristics to regularize? > If yes, then does this feel more like gaming the task? And would that make the claim about being able to transfer to other monolingual tasks unfair? Doesn’t seem too convincing yet. This is not accurate. As there is no training data to fit but only an unlabeled corpus, there is no overfitting here and no regularization. As for expressivity, low-dimensional representations are how previous work solved unsupervised word translation, showing the power and ease-of-use of low-dimensional vectors. However, the community then drew the wrong conclusion from this success, concluding that vectors are also better in every way and that there is only noise in high dimensions to be removed. We show that a few simple processing steps actually allow us to use the full high-dimensional co-occurrences, leading to similar and then better results than low-dimensional vectors. No, we could not game the task and do not see a good way to game it, as the task is fully unsupervised. Speculations were clearly labelled and our reasoning is this: 1) naively using high dimension may indeed lead to conclusions like in [1] because no denoising was used at all 2) here is a way to use the higher dimensional information for unsupervised MT so that higher dimension is helpful Thus we speculate that the actual situation is that the necessary but simple denoising operations were not used in previous work, which leads to the conclusions in [1]. This proposed explanation is actually much more intuitive than the current "vectors make all NLP tasks better". The reviewer can draw their own conclusions about this, but the author is willing to bet on it. We will try to make this more clear. [1]: Baroni et al. Don’t count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors > Scaling does still seem like a challenge: would switching to BPE preserve the multilingual distributional hypothesis? Scaling coocmap to more data is easier than with word vectors. Scaling to more word types is more challenging than with word vectors, but some incremental approach (limiting the search space to words of similar frequency) and approximations (including truncation and vectorization) can be used. BPE with a large enough vocabulary should preserve at least some of the multilingual distributional hypothesis, but probably not as well as words, which are better units for translation than subwords. BPE can be combined into coocmap by featurization. > Could some of the headroom in the vectors be covered by distilling from coocmap to low-dimensional vectors? i.e. is it an optimization issue or a capacity issue? Experiments included in the paper show the gap is probably more of a "capacity" issue, in the sense that low-dimensional vectors lost useful information. In the experiment of coocmap-fasttext vs coocmap in figure 4, the same optimization method was used; only their "capacity" differs. Not sure how distilling applies here exactly, but word vectors are probably doing as good a job as they can given their low dimensions. We showed their superiority over SVDs at the same dimension.
--- Rebuttal Comment 1.1: Title: Author response Comment: I should have used quotes---"overfitting" and "regularization"---to highlight that I meant this less as precise mechanics and more as a loose analogy...rather than lossy compression as in low-dimensional vectors, this approach starts out with all co-occurrence information and uses clip/drop etc. to remove the noise. Anyway, I buy that this is a clever approach for this task but am not yet convinced about robustly transferring to and generalizing for more complex tasks---perhaps you're right and the lost fine-grained information is helpful for generalization. Either way, the paper was an interesting read and hope the revisions can make it a bit easier to follow. Thanks for your response. I still recommend accepting. --- Reply to Comment 1.1.1: Comment: Thanks for the continued discussions and great to hear you found the paper interesting. > perhaps you're right and the lost fine-grained information is helpful for generalization. Both vectors and drop/clip lose some information. However, d-dimensional vectors lose a lot more information by parameter count than drop and clip. In summary: * full: V^2 * d-dim vectors: V d * clip: V^2, but an epsilon fraction of the largest entries are now tied * drop k: (V-k) * V (a tiny worked example of these counts follows below) > not yet convinced about robustly transferring to and generalizing for more complex tasks Personally I'm willing to bet that this will generalize to other word vector evaluations that may be past their prime as well. For the important task of language modeling, transformers already use much higher and increasing dimensions to represent subwords. They still might benefit from even higher dimensions, but it is less clear if the increased compute will be worth it. On this point it is natural not to be convinced based on this paper! Curious if there is any particular complex task that you would be interested to see.
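To make the counts above concrete (V = 5000 matches the paper's vocabulary cutoff; d = 300 is a typical fasttext dimension; k is an illustrative choice):

```python
V, d, k = 5000, 300, 10
print(f"full co-occurrences: {V * V:,} entries")        # 25,000,000
print(f"{d}-dim vectors:     {V * d:,} parameters")     # 1,500,000
print(f"drop-{k}:            {(V - k) * V:,} entries")  # 24,950,000
```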
Summary: This paper introduces coocmap, an approach for unsupervised word translation that uses high-dimensional co-occurrence statistics instead of lower-dimensional word embeddings. The approach is generally analogous to vecmap, alternating between distance computations and a matching phase. Coocmap is often more data efficient than other methods, and its performance may be improved by regularizing the co-occurrence matrix (normalization, value clipping, dropping large singular vectors). Strengths: The proposed approach goes against the conventional wisdom by using high-dimensional co-occurrence vectors instead of word embeddings. The method is generally more data efficient than previous work, which may be surprising. The paper describes strategies to improve upon using the raw co-occurrence matrices directly. The experiments cover multiple languages (although English is part of every pair) and domains. Weaknesses: Given that unsupervised word translation has been introduced many years ago, but arguably has limited practical applications, the authors should more clearly motivate their work. In addition to data efficiency, final accuracy performance should also be discussed (although we can read it from the figures). It might have been interesting to analyze how well the approach works based on word frequency (and with vocabulary sizes beyond 5000). The paper can be difficult to read at times. vecmap should arguably be described before coocmap, although presenting them in parallel may still be acceptable. The paper slightly exceeds the 9-page limit. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Why use 50% as a threshold for "works"? Given that you use high-dimensional co-occurrence statistics, what are the compute and memory requirements compared to approaches that use low-dimensional vectors? [L45] retain -> retaining [L48] have -> having [L286] Fragment "Actually showing the same behaviors as fasttext." [L292] retain [L303] modeling (or modelling, or "to model") [L336] focuses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes, the authors discussed limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the review and concrete corrections. We will make the corrections in the final version. > Given that unsupervised word translation has been introduced many years ago, but arguably has limited practical applications, the authors should more clearly motivate their work. The particular MUSE evaluation was indeed introduced "many years ago" and the underlying task of word translation has been around for even more years, which shows how intuitive the task is. Our main contribution is to show that unsupervised translation is also possible without word vectors and that the data requirement is surprisingly little using a simple method. More generally, as LLMs clearly demonstrated unsupervised abilities, future MT systems are likely to use more unsupervised data. The path from research to practical application may take time, and immediate practical application should not be a requirement for research. > In addition to data efficiency, final accuracy performance should also be discussed (although we can read it from the figures). The final accuracy was judged to be unreliable, as several previous works warned against taking this benchmark too seriously. On the other hand, MUSE is absolutely good enough to tell if the unsupervised method is successful at all. > It might have been interesting to analyze how well the approach works based on word frequency (and with vocabulary sizes beyond 5000). Yes, though we needed a cutoff to develop the method quickly and keep it simple. An incremental approach, where the most common words are solved first and the search is then extended to a larger vocabulary, is fairly promising but still introduces complications in search strategy and implementation. This point is stated in limitations and we will emphasize it more. > The paper slightly exceeds the 9-page limit. Thanks for pointing this out and for overlooking this mistake; we followed instructions that "additional pages containing only broader impact statement and references are allowed", which seem outdated. Will fix for the final version. > Why use 50% as a threshold for "works"? This is just an arbitrary threshold so we can measure the data requirement. The accuracy is never 100%, and the outcomes are essentially binary: accuracy either stays near 0% or eventually exceeds 60%. It would have been fine to pick anywhere between 10% and 60% to establish the comparison. One analogy supporting the choice of 50% is the half-life of an exponential decay, which is admittedly more precise than our use case. > Given that you use high-dimensional co-occurrence statistics, what are the compute and memory requirements compared to approaches that use low-dimensional vectors? This is discussed in limitations for compute as a function of vocabulary size V, which favors d-dimensional vectors by a factor of V/d. For memory, both methods are O(V^2). In terms of data size scaling, both methods are constant. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I am inclined to increase my rating from 5 to 6. > The path from research to practical application may take time, and immediate practical application should not be a requirement for research. I agree that this is not strictly necessary, but it could increase the impact of the paper.
>The final accuracy was judged to be unreliable, as several previous works warned against taking this benchmark too seriously In the camera-ready version, you could mention this and cite the papers you refer to here (some of which may already be in the references). > Will fix for the final version. If the paper is accepted, you should be allowed a 10th content page.
Summary: This paper proposes coocmap, a method for unsupervised word translation that uses high-dimensional co-occurrence statistics instead of low-dimensional vectors. The authors show that relying on low-dimensional vectors can lead to suboptimal denoising methods and overlook useful world knowledge in high dimensions. The authors demonstrate that unsupervised translation can be accomplished with a smaller amount of data and in a wider range of scenarios than previously believed. They also suggest that co-occurrence-based methods may outperform low-dimensional vectors in other tasks. Strengths: 1. This paper successfully demonstrates the effectiveness of high-dimensional co-occurrence statistics and provides empirical evidence that unsupervised translation using only co-occurrence statistics is feasible. 2. The authors provide a detailed comparison with vecmap and present coocmap in a clear and easy-to-follow manner. 3. The paper showcases the trend of retrieval accuracy during the training process on the data size, which is valuable for understanding the training dynamics of word translation capacity. Weaknesses: 1. The experimental setup in the paper is somewhat ambiguous, as the authors state that the experiment was conducted on the top 5000 most common words in each language using the MUSE dictionary, but do not provide a clear definition of what they mean by "most common." Additionally, the MUSE dictionary contains a significant number of "identical translation" pairs, where the source word and corresponding target word are the same. It is unclear how these types of translation pairs were processed in the experiment. 2. The method is only compared with vecmap. Maybe more baseline methods such as LNMap [1], FIPP [2] and [3] should be included for a comprehensive understanding of model performance. 3. The paper could benefit from a more comprehensive discussion of prior works in the field, which would help readers contextualize the authors' contributions. It is unclear whether this is the first paper to use co-occurrence statistics to induce an unsupervised word translation dictionary, and it would be helpful to know if there are any previous works in this area. [1] LNMap: Departures from isomorphic assumption in bilingual lexicon induction through nonlinear mapping in latent space. [2] Filtered Inner Product Projection for Crosslingual Embedding Alignment. [3] Improving Word Translation via Two-Stage Contrastive Learning. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Figure 2, there is a sharp surge of performance for vecmap-fasttext on enwiki-frwiki and enwiki-eswiki. Can you explain this? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: see weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the review, we are glad to hear that you found the paper easy to follow. In particular, we claim that we are the first to do unsupervised word translation with just co-occurrences, and there are several difficulties with doing this which we identified in the paper. More details are in the answer to the particular question; please do reconsider your score as this is an important claim. > The experimental setup in the paper is somewhat ambiguous, as the authors state that the experiment was conducted on the top 5000 most common words in each language using the MUSE dictionary, but do not provide a clear definition of what they mean by "most common." The most common words are determined by word frequency in the corpus and are returned by the WordLevel tokenizer. In python, `Counter(corpus).most_common(5000)` gets the 5000 most common words when `corpus` is a list of words. For example, the most common word in `corpus=['cat', 'cat', 'dog']` would be `'cat'` (a runnable version appears below). We will add "by word frequency" to further clarify. > Additionally, the MUSE dictionary contains a significant number of "identical translation" pairs Yes, indeed, identical translations are a known issue with the evaluation set. We avoided the worst cases for Chinese / English by filtering out all Latin characters from the Chinese corpus, so there are at least no identical words for the en-zh pair. For other languages where words may actually be identical, we ignore this issue but track how many identical pairs were matched in the experiments. This and other problems with the benchmark are only important if we claim that small improvements on MUSE are an important contribution, which we do not. On the other hand, the MUSE evaluation is good enough to establish if unsupervised translation is happening at all and at how much data is needed, which is our main evaluation. > The method is only compared with vecmap... more baseline methods such as LNMap [1], FIPP [2] and [3] should be included Because establishing acc(coocmap) > acc(other method) is not a main point of the paper. All of the cited methods are also vector space methods that depend more on which vectors were used and on what data they are trained on. Those methods also compared with vecmap in their own papers. We share your concerns that MUSE/other word translation evaluations should not be taken too seriously, and we do not use MUSE to claim that method X has slightly higher accuracy than method Y. Instead we use MUSE to test which method works at all and how much data was needed. Note that we do not even claim that acc(coocmap) > acc(vecmap), as vecmap with enough data outperformed the basic coocmap as shown in Figure 2. It is only with "drop" and high dimensions that coocmap has higher accuracy than vecmap, where drop changes the inputs but not the method. We compared to other normalization methods and other word vectors in the appendix, which was interesting but did not make the main paper. > It is unclear whether this is the first paper to use co-occurrence statistics to induce an unsupervised word translation dictionary, and it would be helpful to know if there are any previous works in this area. Thanks for the feedback. As far as we know, this is indeed the first paper to use co-occurrences to do unsupervised word translation successfully. We cited works on previous attempts that ultimately did not work unsupervised on general data.
These previous works are cited prominently in the first sentence of the introduction (Rapp 1995, Ravi and Knight 2011) and with more details in the discussion section and appendix. Rapp 1995 suggests that this might be possible but did not have the search methods. Ravi and Knight actually used parallel data in narrow domains like time expressions and subtitles for their method to work. Their method also does not include the normalization and relative measurements that were crucial for coocmap to work. In fact, we probably could not do this successfully without knowing that it is possible and without some of the advances from vector-based methods such as normalization and relative measurements, but vectors themselves were not necessary. > In Figure 2, there is a sharp surge of performance for vecmap-fasttext on enwiki-frwiki and enwiki-eswiki. Can you explain this? All methods need a sufficient amount of data to work, and the transition is usually fairly sharp. The intuition is that once you can figure out an initial set of word translations, other sufficiently frequent words should be easy to translate based on context. --- Rebuttal 2: Title: Thanks for the rebuttal Comment: I read the rebuttal and concerns from other reviewers as well. I keep my recommendation score. --- Rebuttal Comment 2.1: Comment: Did we address the weaknesses raised in the review? We believe weakness 1 and weakness 3 were completely addressed and an argument was made about weakness 2. In particular, it seems addressing weakness 3 could be important enough to improve the recommendation.
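As an addendum to the vocabulary question above, a runnable version of the `most_common` example:

```python
from collections import Counter

corpus = ["cat", "cat", "dog"]
print(Counter(corpus).most_common(1))  # [('cat', 2)] -> most common word: 'cat'
vocab = [w for w, _ in Counter(corpus).most_common(5000)]  # top words by frequency
print(vocab)  # ['cat', 'dog']
```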
Rebuttal 1: Rebuttal: We thank all reviewers for the insightful and helpful reviews. We will aim to improve the draft using the feedback and make it more clear. A few common points are emphasized here and other points and questions are addressed in individual responses. 1. We are the first to use just co-occurrences to achieve fully unsupervised word translation. To do this, we needed the normalization and relative measurements which were developed for vector methods but turned out to be even more crucial for co-occurrence methods. Interestingly, it turns out that these methods rather than vectors were the key. In addition to normalization and relative measurements, which enabled co-occurrences to work at all, though slightly less accurately than vectors, clip and drop were needed to really make the higher-dimensional representations more robust and more accurate than well-trained low-dimensional vectors on exactly the same data. 2. The main conceptual contribution is to change our understanding from "dense vectors work better in every NLP task than sparse vectors" (Jurafsky and Martin, 2023, 6.8) to "there is useful information in higher dimensions" accessible after applying simple techniques such as normalization and relative measurement, plus drop and clip. We back up this new view with detailed analysis and experiments on dimensions, which is in retrospect more plausible/intuitive than the prevailing view of the community. On the other hand, we do not consider achieving slightly higher accuracy as a main contribution nor very interesting. In fact, figure 2 shows that vecmap often has slightly better accuracy than the basic coocmap with enough data. We do not think the word translation task is sensitive enough such that gaining a few percentage points more accuracy is important/interesting. We are aware of its limitations both pointed out by some reviewers and by previous work (identical words, lots of proper nouns). On the other hand, we are still happy to use word translation as the evaluation because it is very intuitive and it is absolutely good enough for testing if unsupervised translation is succeeding at all. These known issues are not relevant if we only focus on the binary phenomenon and data efficiency instead of a few percentage points of difference in accuracy.
NeurIPS_2023_submissions_huggingface
2,023
Summary: In this paper, the authors aim to solve unsupervised word translation, also called lexicon or dictionary induction, using their proposed method, coocmap, a simplified version of the conventional approach, vecmap. Different from vecmap, coocmap can estimate word mappings without using rotation weights for row vectors. Instead, coocmap uses an association matrix to represent source and target word relations. Thus, coocmap can estimate source and target word mappings by rearranging the columns of the association matrix to reflect the updated word mappings during training time. Experimental results on language pairs from English to Spanish, French, German, Hungarian, Finnish, and Chinese show that coocmap outperforms vecmap, especially when the data size or dimension size increases. Strengths: - The proposed method, coocmap, outperformed the conventional approach, vecmap, in word translation accuracy on language pairs from English to Spanish, French, German, Hungarian, Finnish, and Chinese. - Coocmap does not require a rotation matrix for projecting word embeddings to the ones of other languages; thus, it's simple. Weaknesses: - Not using the rotation matrix means coocmap cannot handle unseen words because their representation is not included in the association matrix of coocmap. Thus, coocmap is less versatile than vecmap. - Many translation tasks require translations for texts rather than word translation. Thus, investigation of downstream tasks, especially machine translation, is necessary to claim the advantage of coocmap, even though it is not conducted in the paper. - The motivation for using the unsupervised method needs to be clarified. In practice, we can use the MUSE [1] dictionaries you used to evaluate your approach, to train word translation models in a supervised manner. [1] Lample, G., Conneau, A., Ranzato, M. A., Denoyer, L., & Jégou, H. (2018, February). Word translation without parallel data. In International Conference on Learning Representations. (paper: https://openreview.net/forum?id=H196sainb, code:https://github.com/facebookresearch/MUSE) Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - In the evaluation, why didn't you compare the performance of your coocmap with the reported scores of conventional methods? You can use Semeval 2017 [2] data for this purpose, similar to the paper of MUSE dictionaries [1] you used. - Could you expand your approach to using subwords? This is because current natural language generation tasks, including machine translation, heavily rely on subwords rather than words. This part relates to the contribution of your work to downstream tasks. - In the current draft, you use MB to show the size of the data. How did you calculate them? Is the data you used compressed or not? - It's difficult to estimate the computational complexity or space of both coocmap and vecmap by MB of the used data. Showing the vocabulary size is more helpful for readers to understand. [2] Camacho-Collados, J., Pilehvar, M. T., Collier, N., & Navigli, R. (2017, August). Semeval-2017 task 2: Multilingual and cross-lingual semantic word similarity. In Proceedings of the 11th international workshop on semantic evaluation (SemEval-2017) (pp. 15-26). (https://aclanthology.org/S17-2002/) Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: You need to include the treatment of the unseen words in the limitation part of your paper. Furthermore, you need to refer to the fact that current translation models commonly use subwords, not words. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the review with many concrete points that may help us avoid possible confusions. We first address the weaknesses and then the questions. We emphasize that the most substantial proposed weakness about unseen words is wrong. > Not using the rotation matrix means coocmap cannot handle unseen words... coocmap is less versatile than vecmap > You need to include the treatment of the unseen words in the limitation part of your paper. There is no difference in the treatment of unseen words. For words unseen during the matching process but seen in the corpus, you can use the co-occurrences of the unseen words with seen words to form the association matrix. The case of adding a new corpus containing new words can be treated similarly. For words unseen in any corpus during training, we also cannot get vectors for them, and any rotation matrix is also useless. > In practice, we can use the MUSE [1] dictionaries you used to evaluate your approach, to train word translation models in a supervised manner. It has been shown in previous work that supervised translation can work to some extent in the 90s and 00s, but it is only after vectors/pretraining that unsupervised word translation started working in the late 10s. So the community inferred that there is some magic about low-dimensional representations that must be necessary. Thus it is only interesting if we show that the full unsupervised self-learning loop can work in high dimensions. In fact, table 1 of your citation [1] shows that supervised vs unsupervised does not make a big difference as long as the self-learning loop gets started unsupervised. As LLMs are shown to perform translation unsupervised, it is increasingly difficult to argue that supervised is always the better choice in practice. > investigation of downstream tasks, especially for machine translation, is necessary to claim the advantage of coocmap even though it's not conducted in the paper. The main contribution of coocmap is conceptual: it shows that low dimensions are not necessary and that denoising can be performed effectively in high dimensions. Note you can use the same objection against the cited MUSE, vecmap, word2vec etc., and the evaluation that you cited. While it would be nice if we could also get results on translating sentences or documents, we thought translating words unsupervised is an interesting enough problem that has a very intuitive evaluation. Thanks for the questions. > In the evaluation, why didn't you compare the performance of your coocmap with the reported scores of conventional methods? You can use Semeval 2017 [2] data for this purpose The included evaluations were deemed sufficient to support the main points claimed by the paper: that you do not need vectors and that keeping high dimensions is useful. We do not claim that this method is better than reported scores of "conventional methods" when the amount of data is big enough that everything works. > Could you expand your approach to using subwords? > Furthermore, you need to refer to the fact that current translation models commonly use subwords, not words. Yes, you can build the subword association matrix as well, and even more generally use featurized co-occurrences that may include subwords as features. Regardless of how the latest NLP model is tokenized, the word is a basic concept recognized by almost everyone, is a better unit for translation than subwords, and is used by previous works / evaluations.
While subwords represent a good practical trade-off point for transformers, they do not render evaluating on words invalid and would only be a complication for this work. > In the current draft, you use MB to show the size of the data. How did you calculate them? Is the data you used compressed or not? MB is megabytes of uncompressed text data in utf-8 encoding. The paper clearly states that any data is decompressed if it came compressed. The caption of table 1 also states how to convert the MB to token counts at roughly 0.2 million tokens per MB. It can be measured by various unix and python tools, e.g. `f.read(1000)` reads 1000 bytes of data. As you pointed out, the token count depends on how you tokenize; thus we picked MB but would not object to token counts. > It's difficult to estimate the computational complexity or space of both coocmap and vecmap by MB of the used data. In both methods, estimating occurrences or vectors is linear in the size of the data. The remaining matching process is a function of the vocabulary size and independent of the data size. In practice, the data size dependent part of coocmap is very fast, just requiring a counting pass over the data, whereas estimating vectors tends to run much slower. --- Rebuttal Comment 1.1: Title: Thank you for answering my questions and comments. Comment: Thank you for answering my questions and comments. >> Not using the rotation matrix means coocmap cannot handle unseen words... > There is no difference in the treatment of unseen words... When using substrings like fastText to represent word embeddings, we can approximately consider vector representations of unseen words. In this case, vecmap can map unseen word embeddings between languages by projecting with its rotation matrix, whereas coocmap cannot. Thus, there is a significant difference regarding the treatment of unseen words. - [3] Bojanowski, Piotr, et al. "Enriching Word Vectors with Subword Information." Transactions of the Association for Computational Linguistics 5 (2017): 135-146. >> In practice, we can use the MUSE [1] dictionaries you used to evaluate your approach, to train word translation models in a supervised manner. > It has been shown in previous work that supervised translation... I agree with you regarding the difficulty of arguing that supervised learning is always the better choice in practice. This is because we need to compare performance in supervised and unsupervised learning for each task. We must do that to judge how essential unsupervised learning is for each task, if the comparison is possible. The following papers use recent models to compare supervised and unsupervised learning in machine translation. These comparisons indicate the importance of comparing supervised and unsupervised learning for target tasks. - [4] Word alignment (the extended approach for the translation work of the 90s to 00s): Dou, Zi-Yi, and Graham Neubig. "Word Alignment by Fine-tuning Embeddings on Parallel Corpora." Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. 2021. - [5] Machine Translation by LLMs: Zhu, Wenhao, et al. "Multilingual machine translation with large language models: Empirical results and analysis." arXiv preprint arXiv:2304.04675 (2023). >> investigation of downstream tasks, especially for machine translation, is necessary to claim the advantage of coocmap even though it's not conducted in the paper.
> The main contribution of coocmap is conceptual: it shows that low dimensions are... I have the same opinion that unsupervised word translation is interesting because of my expectations for low-resource languages. However, the current manuscript does not cover results on pairs of low-resource languages. Thus, I could not judge coocmap from a practical viewpoint. Regarding the conclusion that low dimensions are unnecessary, you should consider the performance when the training data is large. Figure 2 shows that vecmap-fasttext outperforms coocmap in some cases. This result is in line with the following theoretical paper, which describes the relationship between dataset size and word embedding performance via a signal-to-noise ratio. - [6] Yin, Zi, and Yuanyuan Shen. "On the dimensionality of word embedding." Advances in neural information processing systems 31 (2018). > Thanks for the questions. >> In the evaluation, why didn't you compare the performance of your coocmap with the reported scores of conventional methods? You can use Semeval 2017 [2] data for this purpose >The included evaluations were deemed sufficient to support the main points... Based on the theoretical paper I shared [6], you need to vary both dimension size and data size to check the usefulness of high dimensions. >> Could you expand your approach to using subwords? Furthermore, you need to refer to the fact that current translation models commonly use subwords, not words. > Yes, you can build the subword association matrix as well... Thank you for answering my question with detailed explanations. To show the usefulness of word information in Transformers, you may cite the fact that whole-word masking in BERT can improve performance on downstream tasks. This suggests word boundaries are still crucial in Transformers. - [7] Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. "BERT: Pre-training of deep bidirectional transformers for language understanding." Proceedings of NAACL-HLT. Vol. 1. 2019. - [8] Cui, Yiming, et al. "Pre-training with whole word masking for Chinese BERT." IEEE/ACM Transactions on Audio, Speech, and Language Processing 29 (2021): 3504-3514. >> In the current draft, you use MB to show the size of the data. How did you calculate them? Is the data you used compressed or not? Showing the vocabulary size is more helpful for readers to understand. > MB is megabytes of uncompressed text data in utf-8 encoding... >> It's difficult to estimate the computational complexity or space of both coocmap and vecmap by MB of the used data. > In both methods, estimating occurrences or vectors is linear in the size of the data... Now, I understand that MB in your paper actually corresponds to the vocabulary size for each setting. I appreciate your detailed explanation. --- Reply to Comment 1.1.1: Comment: Thank you for the continued discussions and for the pointers to relevant previous work. We have good answers to your new and old concerns upon clarification. If these address your concerns sufficiently, please consider increasing the score! >> Not using the rotation matrix means coocmap cannot handle unseen words... > When using substrings like fastText to represent word embeddings... Thus, there is a significant difference regarding the treatment of unseen words. We worried/wondered about the effect of the subword information in fasttext as well. Fortunately, we saw no difference at all for this task between fasttext with and without subword information.
In the appendix, we noted that "we also checked that the subwords information made no difference by turning them off in fasttext hyperparameters." The main result of the paper did not need these findings: if the substrings were important, that would only strengthen the fastText baseline, which uses more information than the co-occurrence matrix, whereas coocmap is more unsupervised. So any potential difference is in the direction of favoring the baseline, yet it made no difference at all on this benchmark, which does not test on unseen words but could still use substrings to improve representations of seen words. Beyond this particular benchmark, co-occurrences can also use subwords if you like. Instead of vector averages, you can count the co-occurrences of each substring of word w with context c towards w, in the style of the cited paper [3] (for example, if "country" is not seen but "count" and "try" were seen, then "country" also gets the co-occurrences of "count" and "try"). You can also use BPE subwords instead of substrings, which is more modern. (A small sketch of this substring counting scheme is given at the end of this reply.) > I share the view that unsupervised word translation is interesting because of my expectations for low-resource languages > However, the current manuscript does not cover results on pairs of low-resource languages. We did cover some low(-ish) resource languages like Finnish and Hungarian; these two paired with English were reported not to work unsupervised by Søgaard et al., 2018, but they work with coocmap and with improved vecmap, with a very low data requirement of < 100MB. While Chinese is not low-resource, we filtered out all Latin characters from its corpus and again measured the data requirement, showing it to be 100s of MB depending on the domain mismatch. The implications for low-resource languages are 1) < 100MB is likely fine if domains match, otherwise more is needed, and 2) coocmap improves the ability to handle mismatched domains. > [6] Yin, Zi, and Yuanyuan Shen. "On the Dimensionality of Word Embedding." Advances in Neural Information Processing Systems 31 (2018). > Based on the theoretical paper I shared [6], you need to vary both the dimension size and the data size to check the usefulness of high dimensions. We did; varying the data size was the main experimental setup of the paper! Note the plots are in log of MB of data, which goes up to 2000 MB, and dimensions range from 5 to 5000. In figure 4 and the appendix, we also varied the dimension to show that with coocmap higher dimensions are better, whereas fasttext has a low optimal dimension. In fact we cited [6] and provided a better explanation for their results in the analysis section. In figure 4, we show that indeed if you train vectors you get an optimal dimension, but with the full-dimensional co-occurrences plus a sensible matching method, the higher the dimension the better. It would be interesting to see if this finding applies to their basic word vector evaluation tasks -- actually testing on these monolingual tasks is out of the scope of this paper, but I am willing to bet they are wrong. > Regarding the conclusion that low dimensions are unnecessary, you should consider performance when the training data is large. We tested on a sufficient amount of data (300MB in figure 4) to observe the benefits of higher dimensions. Indeed we only went up to 2GB in figure 3, which you may not consider large. However, [6] argues that lower dimensions are needed with less data (not more data), so the existing experiments are sufficient to test their hypothesis.
Note [6] never varied the data size for their experiments. Thanks again for the questions and discussions.
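As a concrete illustration of the substring counting scheme mentioned above, here is a minimal sketch (ours, not the authors' code): it counts word co-occurrences in a window and backs off to seen substrings for an unseen word. The whitespace tokenization, window size, and minimum substring length of 3 are simplifying assumptions.

```python
from collections import defaultdict

def cooc_counts(tokens, vocab, window=5):
    """Count co-occurrences of vocabulary words within a +/- window."""
    counts = defaultdict(lambda: defaultdict(int))
    for i, w in enumerate(tokens):
        if w not in vocab:
            continue
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            c = tokens[j]
            if j != i and c in vocab:
                counts[w][c] += 1
    return counts

def unseen_row(word, counts, seen):
    """Approximate a co-occurrence row for an unseen word by summing the
    rows of its seen substrings (length >= 3), e.g. "country" inherits
    the co-occurrences of "count" and "try"."""
    row = defaultdict(int)
    for a in range(len(word)):
        for b in range(a + 3, len(word) + 1):
            sub = word[a:b]
            if sub in seen:
                for c, n in counts[sub].items():
                    row[c] += n
    return row
```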
Gacs-Korner Common Information Variational Autoencoder
Accept (poster)
Summary: The authors present a concept of common information that assesses and distinguishes shared information from unique information between two random variables. This notion, a variant of the Gacs-Korner common information, is easier to optimize and can be approximated experimentally from sample data. Strengths: I believe this paper makes a good contribution by proposing to relax the requirement that the representation must be a deterministic function, instead allowing it to be a stochastic mapping. The results show that such an approach is more beneficial for quantifying and interpreting the latent representation. Weaknesses: - The authors experimentally demonstrate that adding unsupervised viewpoints improves disentanglement, which is likely due to limitations of single-sample analysis (active interaction, not passive observation, fosters better learning of environment representations), but the observation has not been sufficiently explained or explored in the current study. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Could you describe how the multi-view VAE model's unique latent variables capture the individual factors that cause variation, and how this contributes to the efficiency of inferring common and unique components from data? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors address the limitations adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and address their questions below. > The authors experimentally demonstrate that adding unsupervised viewpoints improves disentanglement... but the observation has not been sufficiently explained or explored in the current study. We agree with the reviewer’s observation and intuition, and we will happily elaborate on this in the text. Indeed, as we briefly mentioned in the text, a source of inspiration for our work was an old experiment in which neuroscientists showed that the ability of animals to act within an environment, rather than passively observe it, strongly impacts their representations of it and their ability to identify relevant environmental variables (Held & Hein, 1963). One explanation for these findings is that the correlations between the actions and the environmental stimuli are meaningful and important for the animal to leverage. Here, in a similar spirit, we indeed found that the addition of an extra viewpoint can provide a guiding signal for finding the common information between the views, which can identify and reveal interesting structure in the data. > Could you describe how the multi-view VAE model's unique latent variables capture the individual factors that cause variation, and how this contributes to the efficiency of inferring common and unique components from data? In our optimization, adding the unique latent variables (with the constraint that $\beta_u > \beta_c$) was important so that unique information would not be encoded in the common latent factors. While this is penalized by $\lambda_c$, as well as by randomly sampling $z$ from $q_1(z_c|x_1)$ and $q_2(z_c|x_2)$ with $p = 0.5$, without the addition of the unique components the unique information could leak into the common latents if it helped reconstruct the output more than the cost of not satisfying the (approximate) equality constraint. By adding in the unique latent variables (with $\beta_u > \beta_c$), the common and unique information will be partitioned, as we show in Thm 3.1 and our experiments. --- Richard Held and Alan Hein. Movement-produced stimulation in the development of visually guided behavior. Journal of Comparative and Physiological Psychology, 56(5):872, 1963. --- Rebuttal Comment 1.1: Comment: Thanks for your responses and explanations. I keep my score as it is.
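To make the training scheme described in this rebuttal concrete, here is a minimal PyTorch-style sketch (our reading of the rebuttal, not the authors' code) of sourcing the common latent from either view's encoder with $p = 0.5$ and weighting the unique latents with $\beta_u > \beta_c$. The encoder/decoder callables and the `recon`, `kl_prior`, and `kl_pair` helpers are hypothetical placeholders.

```python
import torch

def reparam(mu, logvar):
    # Standard reparameterization trick.
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

def training_step(x1, x2, enc1, enc2, dec1, dec2,
                  recon, kl_prior, kl_pair,
                  beta_c=1.0, beta_u=4.0, lambda_c=0.1):
    # Each encoder returns posteriors for the common and unique latents.
    (mu_c1, lv_c1), (mu_u1, lv_u1) = enc1(x1)
    (mu_c2, lv_c2), (mu_u2, lv_u2) = enc2(x2)

    # Sample the common latent from q1 or q2 at random (p = 0.5), biasing
    # z_c towards information both encoders can produce.
    if torch.rand(()) < 0.5:
        z_c = reparam(mu_c1, lv_c1)
    else:
        z_c = reparam(mu_c2, lv_c2)

    z_u1, z_u2 = reparam(mu_u1, lv_u1), reparam(mu_u2, lv_u2)

    # beta_u > beta_c makes unique latents costlier, so shared information
    # is pushed into z_c; lambda_c matches the two common posteriors.
    return (recon(dec1(z_c, z_u1), x1) + recon(dec2(z_c, z_u2), x2)
            + beta_c * (kl_prior(mu_c1, lv_c1) + kl_prior(mu_c2, lv_c2))
            + beta_u * (kl_prior(mu_u1, lv_u1) + kl_prior(mu_u2, lv_u2))
            + lambda_c * kl_pair(mu_c1, lv_c1, mu_c2, lv_c2))
```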
Summary: The basis of the proposed work is a practical means to decompose the information contained in two random variables into common and unique components. With a couple tweaks to a vanilla VAE setup, data that has been paired such that certain factors of variation have a constant value within the pair can be used to train latent spaces that separate the components of the information. The common component of the information -- the Gacs-Korner common information -- is the primary goal here, and it’s generally difficult to acquire because it requires finding a random variable that is simultaneously a deterministic function of both input variables. Except for highly constrained joint distributions, such a variable generally does not exist (beyond the vacuous solution). The current work proposes a relaxation where the common component can be a stochastic function of the inputs, and this facilitates optimization as well as makes the method applicable to cases without cleanly shared information (at least in theory; it’s not shown here). As a route to Gacs-Korner common information, the paper offers a principled and pragmatic methodology that (as far as I can tell) offers something original and of value. The example scenarios and applied metrics, however, paint the work more as a route to partially disentangled latent spaces given grouped data, and in that regard it’s lacking novelty and missing relevant comparisons. The following papers leverage the same weak supervision (paired/grouped data with a subset of factors held constant) to learn latent spaces that separate common from unique information: - “Multi-Level Variational Autoencoder: Learning Disentangled Representations from Grouped Observations”, Bouchacourt 2018 - “Disentangling Factors of Variation with Cycle-Consistent Variational Auto-Encoders”, Jha 2018 - “Unsupervised Robust Disentangling of Latent Characteristics for Image Synthesis”, Esser 2019 - “NestedVAE: Isolating Common Factors via Weak Supervision” Vowels 2020 And one that does so without a generative model: - “Learning Disentangled Representations via Mutual Information Estimation”, Sanchez 2020 In summary, the method is an interesting contribution but the experimental results are not very effective. Strengths: The principled route to extracting the common information in a pragmatic methodology is great. Though it is not demonstrated in this work, the authors motivate the work in terms of multi-modality data, which could be a rich area of application of the method. Weaknesses: See the summary for the primary weakness. I can imagine a couple routes to make the experimental results more strongly support the method. First, the current experiments could include direct comparisons to some of the attached methods with a demonstration of what GKVAE brings. However, if GKVAE is not more effective in the scenarios currently in the manuscript, qualitatively different experiments could help show its merit. Paired data streams from different modalities could highlight the strengths of GKVAE, or perhaps an example where the content of the common information allows for insight about the relationship between the variables (ie, through inspection of the learned latent variable). Or some example where GK information is desirable. 
On a much more minor scale: I doubt the majority of readers will have a better intuition for “usable information” than for standard mutual information; it seems a weakness to me to rely on that rather than any of a number of mutual information estimators or bounds that should work fine on the low-entropy examples of this work. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - As common information is a lower bound to mutual information (L73), how can a lower bound to MI be used as a lower bound for common information (L194)? - Was $\lambda_c$ shown to be necessary given the random swapping of the common representation (L225)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and address their concerns below. To understand the **difference between GK information and mutual information and why GK information is desirable**, consider the following settings. **Case 1:** Let $X_1 = C + N_1$ and $X_2 = C + N_2$, where $C$ and $N_i$ are independent variables, but the noise terms $N_1$ and $N_2$ are correlated. Suppose $C$ is discrete (e.g. a 0 or 1 feature), and let $N_i$ be a relatively small amount of Gaussian noise. Now $C$ can still be recovered deterministically from $C + N_i$ (just threshold), but the mutual information between $C + N_1$ and $C + N_2$ is $1 + 0.5 \log\left(\frac{\det(\Sigma_1)\det(\Sigma_2)}{\det(\Sigma)}\right)$ (the first bit comes from the MI of $C$; the log term comes from the MI between the Gaussian variables). Then the MI can be made arbitrarily large by increasing the correlation between $N_1$ and $N_2$ (making $\det(\Sigma) \to 0$). (A small numeric illustration of this appears at the end of this discussion thread.) In contrast, the GK common information only identifies the common component $C$. In a sense, the GK common information is able to recover the underlying "discrete" or "symbolic" information that is common between two different continuous sources. This is a valuable property to have in settings like neuroscience, where one wants to measure what is actually encoded by both representations and not just how much they are correlated (which could be due to any number of spurious factors). Additionally, with our relaxation of the problem we can measure how much is almost encoded in both, which makes the method more applicable to real-world cases where it is unlikely the same exact thing is encoded in both. **Case 2:** Let $X_1 = f_1(C; U_1)$ and $X_2 = f_2(C; U_2)$ where $C$ and $U_i$ are independent but $U_1$ and $U_2$ are correlated. Suppose $f_i$ is an invertible data-generating function mapping latent factors to observations (such as pixels). By maximizing mutual information between views, one implicitly cares about those correlations (based on a similar computation to Case 1 above), whereas in the GK sense one cares only about the fully shared components between views. These different perspectives can offer different utility depending on the use case. We will clarify this distinction in the main text, which we hope is helpful for readers. In addition to the benefits of interpreting a well-defined common core between variables, the GK common information can be helpful for compression schemes, as we mention in the discussion. We appreciate the reviewer noting that our contribution is interesting in bridging the efforts in information theory to define a well-defined notion of common information and recent progress in machine learning to exploit such knowledge coming from multiple data streams. We agree that there is a growing literature on multi-view representation learning that aims to partition the information between views (which we discuss in part in our paper) and we will update our citations with the references you mention, which share a similar goal -- thank you for suggesting them! The existing methods appear to focus on qualitatively partitioning the information through different objectives; however, these setups appear to make it difficult to prove theoretical guarantees (as they lack a formal definition) or to quantify the information contained in the representation.
In contrast, our setup allows us to formalize the decomposition of information using a well-defined information-theoretic notion of common/unique information; we show both formally and empirically that we can optimize the objective with a simple modification to a traditional VAE setup, which also enables quantification of such information. We compare our method against contrastive training, finding that contrastive approaches only identify the common information (Fig. 3). We also agree with the reviewer that the ultimate goal is to leverage our framework for application on real-world high-dimensional multi-modal data, or for interpreting high-dimensional scientific data (such as neural recordings from different brain areas), and are excited about the insights this can yield. We believe showing that a well-defined partitioning of information exists and can be approximated from data using our framework is an important step towards this goal. For example, accurately modeling high-resolution RGB + depth data requires task-specific architectures, which we leave to future work; we believe our work is an important step towards this ultimate goal. Q1: We are looking at the mutual information contained in the representation that satisfies the constraint that it is the common information (approximately). We then look at the usable/mutual information that the representation contains about the ground-truth latent variable. Q2: In terms of the necessity of $\lambda_c$, the random sampling of the latent may not be enough to enforce that only common information is encoded in $z_c$, as in principle $z_c$ could encode unique information about a view that is then discarded by the view-specific decoder. The constraint on $\lambda_c$ guarantees that this will not be the case. In Fig. 1 in the rebuttal material, we run an ablation experiment setting $\lambda_c=0$. We observe that the implicit bias due to the random swapping suggested by the reviewer is enough to give a qualitative separation between common and unique information in the experiment (Fig. 1, left). However, in Table 1 of the rebuttal we observe that in this same setting the divergence between the distributions $q_1(z_c|x_1)$ and $q_2(z_c|x_2)$ was exceedingly large ($7.5 \times 10^5$), which would lead to severe mis-estimation of the GK common information. Increasing the value of $\lambda_c$ solves the problem (Table 1 of rebuttal material). Higher values of $\lambda_c$ will increasingly satisfy the constraint of the GK information, but may destabilize training. Empirically, we found that $\lambda_c=0.1$ provided a good trade-off in our experiments. --- Rebuttal Comment 1.1: Comment: Thank you for the effort put into the rebuttal and the clarity on the role of $\lambda_c$ provided by the additional experiments. After consideration of the rebuttal and the other reviews, I am still of the opinion that the premise of this work is interesting and valuable but the experimental support is ineffective. I strongly recommend more relevant baselines on the current experiments and/or use cases where the distinction between common and shared/mutual information is more meaningful -- even a simple synthetic one as suggested in the above response would be helpful. GK common information is far enough off the beaten path that the onus lies heavily on the authors to demonstrate why the goal is worthwhile, especially in the midst of a variety of other methods that accomplish qualitatively similar separation of information.
--- Reply to Comment 1.1.1: Comment: Thank you for your comment and feedback! In line with the use cases we described in the above response, we plan to include a supporting synthetic experiment in the camera-ready version to better highlight the use cases and the difference between Gacs-Korner common information and mutual information. --- Rebuttal 2: Title: Acknowledging the rebuttal Comment: Dear reviewer, Thank you for your time and effort. The authors have tried to address your comments in their rebuttal. What do you think about their response? Could you please acknowledge the rebuttal as well as the other reviews. Thanks, The AC
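As a numeric illustration of Case 1 from the rebuttal above (ours, not the authors'): with $X_1 = C + N_1$ and $X_2 = C + N_2$, the mutual information grows without bound as the noise correlation $\rho \to 1$, while the GK common information stays at the 1 bit carried by the shared discrete $C$ (recoverable by thresholding when the noise is small). The closed form for scalar correlated Gaussians, $-0.5\log(1-\rho^2)$, is standard.

```python
import math

# MI ~= 1 bit from C plus the MI between the correlated Gaussian noises;
# GK common information stays at the 1 bit encoded by C.
for rho in [0.0, 0.9, 0.99, 0.999]:
    mi_noise_bits = -0.5 * math.log(1 - rho**2, 2) if rho > 0 else 0.0
    mi_total_bits = 1.0 + mi_noise_bits
    print(f"rho={rho:<6} MI ~= {mi_total_bits:5.2f} bits, GK = 1.00 bit")
# rho=0.999 gives MI ~= 5.5 bits even though only C is deterministically shared.
```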
Summary: The paper proposes the Gacs-Korner common information to measure common information between two random variables and proposes a variational relaxation to make the loss practical to compute. The paper then derives an objective that eases the disentanglement requirement. Strengths: 1. The paper proposes the Gacs-Korner common information to measure common information between two random variables and proposes a variational relaxation to make the loss practical to compute. To this point, the paper is novel to me. Empirical evidence demonstrates the advantages of the proposed method. Weaknesses: 1. The proposed method is only compared with a basic VAE, whereas the advantage of the proposed method is not significant. 2. The proposed method seems to be loosely connected with Gacs-Korner common information. The derivation is a minor variation of a vanilla VAE. In this regard, the impact of the work seems limited: the modification introduces an additional encoder architecture but brings only limited empirical advantage. The final loss further decomposes the distance between prior and posterior into “common components-prior distances” and “unique components-prior distances”. The driving force that pushes the DNN to learn the decomposition remains unclear. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please see weaknesses above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and address their concerns below. > The proposed method is only compared with basic VAE where as the advantage of the proposed method is not significant. The purpose of our study was to show formally and empirically that we can partition the latent representation of multi-view data into a common and unique component, and also provide a tractable approximation for the Gacs-Korner common information between high dimensional random variables. In comparison against a (Beta)-VAE, in addition to our empirically observed higher disentanglement scores, the (Beta)-VAE cannot provide insight into which factors are shared and which are unique, whereas our optimization directly provides this separation. Additionally, even theoretically it has been shown that VAE cannot disentangle components without additional signals (like, in our case, the common information between different sensors) (Locatello et al., 2019). > The proposed method seems to be loosely connected with Gacs-Korner Common Information. The derivation is a minor variation of a vanilla VAE. In this regard, it seems the impact of the work is limited, where the modification introduces additional encoder architecture but only brings limited empirical advantage. The final loss further decompose the distance between prior and posterior inference into “common components-prior distances” and the “unique components-prior distances". The driven force that separates the DNN to learn the decomposition remains unclear. In Theorem 3.1, we prove that our notion of common information is a direct generalization of the original Gacs-Korner common information. Moreover, we show that our variational relaxation of the problem provides a tractable way to compute the common information between high-dimensional random variables, which is a major obstacle with the original proposal of GK-Common Information. We show that our approach recovers the correct value of the GK common information in all benchmarks. Moreover, we introduce for the first time a series of benchmarks with high-dimensional random variables (which previous algorithms could not address) and we show that even here we can obtain the correct theoretical value of the common information. Most results in structured representation learning with VAE involve loss functions modified with additional constraints. We believe that the fact that the changes need to decompose the representation in common and unique factors are relatively simple to implement, theoretically grounded, and widely applicable to any existing VAE architecture are an advantage of our method, not a downside. The key parameters enabling this optimization are the relative values of $\beta_u$ and $\beta_c$, with $\beta_u > \beta_c$, which encourages common information to be encoded in the common latents by paying a smaller cost. $\lambda_c$ encourages the distributions of the encoders to be similar so that the factor encodes common information (we additionally bias the information to be common through random sampling of the common latents from either encoder during training). --- Locatello et al. Challenging common assumptions in the unsupervised learning of disentangled representations. ICML, 2019 --- Rebuttal Comment 1.1: Title: thanks for your reply Comment: Thanks for the reply! I will maintain my score. Best --- Rebuttal 2: Title: Acknowledging the rebuttal Comment: Dear reviewer, Thank you for your time and effort. 
The authors have tried to address your comments in their rebuttal. What do you think about their response? Could you please acknowledge the rebuttal as well as the other reviews. Best, The AC
Summary: This paper proposes a new way to partition common and unique information in multi-view data, by leveraging and optimizing the objective of common information defined by Gacs and Korner. The authors extend the deterministic setup to a stochastic scenario and use a variational autoencoder to realize the objective. The authors also carefully designed experiments on both static and time-series data to validate the effectiveness of their architecture. Strengths: 1. The introduction of the Gacs-Korner common information to partition common and unique information in realistic multi-view data is novel. 2. A straightforward application of Gacs-Korner common information is hard. The stochastic relaxation made by the authors makes sense to me. It is also good that the authors constructed different datasets to validate the effectiveness of their method. Weaknesses: I find the biggest issue of this manuscript to be that some points are very unclear or need more explanation. Below I list a few of them: 1) Can you elaborate more on the difference between common information and mutual information? I can understand that mutual information has no clear interpretation in terms of information decomposition, but it would be much better to describe more differences. For example, in Eq. (15), what would be the mutual information between $x_1$ and $x_2$, and why should the mutual information be much larger than the common information $z_c$? Are there more illustrative examples? 2) In terms of implementation, I appreciate that the authors relax the constraint $Z=f(X_1)=g(X_2)$ with a conditional divergence minimization term $D(p(z|x_1);p(z|x_2))$. How is this term implemented in your VAE objective? 3) Also in implementation, it is required that $\beta_u > \beta_c$. How is the trade-off between $\lambda_c$ and $\beta$ balanced? In Eq. (13), shouldn't there be two $\beta_u$ corresponding to the two views, i.e., $\beta_{u1} KL(q_{\phi_{u1}}||...) + \beta_{u2} KL(q_{\phi_{u2}}||...)$? 4) Proposition 3.1 requires an invertible mapping from $z$ to $x$. How is this condition reflected in your implementation? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please refer to weaknesses above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors did not discuss potential limitations and negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and address their concerns below. Q1: We elaborate on the difference between common information and mutual information in our response to [Reviewer iAUJ](https://openreview.net/forum?id=e4XidX6AHd&noteId=HqfjeyTDaR) (see Case 1 and Case 2 in particular). Q2: This was achieved through a combination of using $\lambda_c$ and also sampling independently from both encoders with $p=0.5$ (Line 226). As per the request of reviewer iAUJ, we ran the ablation with $\lambda_c = 0$ and found that the implicit bias due to the random swapping suggested by the reviewer is enough to give a weak qualitative separation between common and unique information (Fig. 1 of rebuttal pdf). However, in Table 1 of the rebuttal we observe that in this same setting the divergence between the distributions $q_1(z_c|x_1)$ and $q_2(z_c|x_2)$ was exceedingly large ($7.5 \times 10^5$), which would lead to severe mis-estimation of the GK information in the common component. Increasing the value of $\lambda_c$ solves the problem (Table 1 of rebuttal material). Higher values of $\lambda_c$ will increasingly converge to the exact GK information, but may destabilize training. Empirically, we found that $\lambda_c = 0.1$ provided a good trade-off. We parametrized the encoders $p(z|x_1)$ and $p(z|x_2)$ using neural networks that output the mean and (diagonal) covariance of a Gaussian distribution. In this case, the KL divergence (which we denote by $D$) has a closed form, which we minimize during training. Q3: Related to the above, the purpose of $\lambda_c$ was to ensure that the information was common. We found that we could set $\lambda_c$ to be a large value provided we started it off at a small value at the beginning of training (Fig. 2 of rebuttal). In this way, the effect of the parameter $\lambda_c$ appears to depend more strongly on its value during training than on the values of $\beta_u$ and $\beta_c$. Indeed there should be two $\beta_u$ in Eq. (13) corresponding to the two views (in our experiments they were equal). We will clarify the text. Q4: This assumption refers to a condition on the data-generating model so that the latents can be recovered: it says that we need to be able to recover the latents from the data in order for our optimization to recover the ground-truth latents. In the datasets we examine, this appears to hold, as indeed we can decode the common and unique latents from the common and unique factors. Note that even if this assumption does not hold in practice, our method will still recover the common and unique information present in the observed data. We discussed limitations in Appendix D, and we do not foresee any negative societal impacts. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. I do not have more concerns. It is good that the authors use two illustrative examples to demonstrate the merits of CI over MI. I suggest including this discussion also in the main text. My score remains the same, since I agree with Reviewer iAUJ that there is a growing body of literature on multi-view (disentangled) representation learning, which may define its own ways to separate unique and common information. I understand that this literature may lack a rigorous definition of what is common and what is unique. However, a comparison to the state of the art would enhance the quality of this paper.
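For the closed-form divergence mentioned in the answer to Q2, here is a minimal sketch (assuming diagonal Gaussian posteriors, as stated in the rebuttal; not the authors' code) of the standard KL between two diagonal Gaussians:

```python
import torch

def gauss_kl(mu1, logvar1, mu2, logvar2):
    """KL( N(mu1, diag(exp(logvar1))) || N(mu2, diag(exp(logvar2))) ),
    summed over the latent dimensions."""
    v1, v2 = logvar1.exp(), logvar2.exp()
    kl = 0.5 * (logvar2 - logvar1 + (v1 + (mu1 - mu2) ** 2) / v2 - 1.0)
    return kl.sum(dim=-1)
```

This is the usual per-dimension formula $\log\frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1-\mu_2)^2}{2\sigma_2^2} - \frac{1}{2}$, and it is differentiable, so it can be minimized directly as the $\lambda_c$-weighted term during training.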
Rebuttal 1: Rebuttal: We thank all the reviewers for their thoughtful comments and suggestions, which we believe will lead to a clearer and improved manuscript. We have responded to each reviewer's questions individually. Please see the attached pdf for figures and a table that we refer to in our responses to specific reviewers. Pdf: /pdf/b218dd06265b01d529ae23d64a68beaa8fbd7c6c.pdf
NeurIPS_2023_submissions_huggingface
2023
CLadder: Assessing Causal Reasoning in Language Models
Accept (poster)
Summary: The paper collects a new dataset, named CLadder, that aims at evaluating the causal reasoning abilities of language models. The dataset contains 10,000 boolean questions that require intensive causal reasoning. The dataset is constructed based on the idea of a causal inference engine, which covers the three rungs of causal inference. Specifically, the authors first sample formal representations of causal inference problems by sampling causal graphs as well as query types. Next, the variables in the formal representations are named with concepts from stories collected from commonly cited causal inference books. Finally, the representations are verbalized into natural language problems using fixed templates. Using the collected dataset, the paper evaluates a range of language models’ causal reasoning abilities. Experimental results suggest that these problems are still hard for language models in the zero-shot setting (if I understand correctly). In addition, the paper proposes CausalCoT, which lets LMs explicitly state the causal problem and then solve it. The results suggest CausalCoT is better than the baseline. Strengths: The paper works on an interesting problem, causal inference, and collects a useful dataset consisting of 10,000 examples. The data creation process is well-designed. The dataset covers multiple rungs and multiple causal graphs. In addition, the verbalization also includes anti-commonsense and nonsensical settings, which is essential for controlling for possible effects of memorization in LLMs. The paper includes additional error analysis highlighting the LM’s capabilities in performing different steps in the process of solving causal inference problems. The paper is well-written and easy to follow. Weaknesses: 1: The tested baselines (in Table 2) need clarification. Also, the baselines might be somewhat weak. Are the methods tested in Table 2 implemented in a zero-shot setting? (I am guessing so based on Section 5.6.) I wouldn’t be surprised if these language models utterly fail in a zero-shot setting. While the paper tries to frame providing in-context examples as orthogonal to the paper (Section 5.6), I believe it is necessary to benchmark what much better systems can do (e.g., few-shot CoT prompting or few-shot CausalCoT prompting) to understand the capabilities of LMs as well as the potential headroom. 2: The language is synthetic. As suggested in 3.2, the verbalization of the formal causal problem is rule-based, which could result in synthetic and formulaic language. The paper does not provide much evaluation of the language side of the dataset in the main body. 3: No human performance. The paper does not provide human performance for this dataset. Although many problems are adapted from books, the template-based verbalization as well as the nonsensical setting could make some problems tricky even for humans. Thus, human evaluation is still valuable. Also, it can help verify the quality of the dataset. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Are the baselines all in a zero-shot setting? In addition, it would be helpful if the authors could include some full prompt examples in the appendix. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The paper discusses the limitations in Section 7. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the overall positive comments about the meaningfulness of the problem (“The paper works on an interesting problem”), dataset contribution (“a useful dataset consisting of 10,000 examples”), “well-designed” “data creation process” that “covers multiple rungs and multiple causal graphs” and “includes anti-commonsense and nonsensical settings”, comprehensive experiments with “additional error analysis”, and “well-written and easy to follow” writing. In the following, we address the three major comments raised by the reviewer. ### 1. Additional Few-Shot Experiments > Are the baselines all in a zero-shot setting? Yes, our main results (Section 5.2) are zero-shot, and our Section 5.6 analyzes few-shot/in-context learning. Following your suggestions, we also ran a dedicated few-shot experiment and report the performance below: ||0-Shot|Few-Shot (10-Shot)| |----|---|---| |GPT-3 Non-Instr. (davinci)|47.42|49.83| |GPT-3 Instr. (davinci-001)|57.07|57.19| |GPT-3 Instr. (davinci-002)|56.24|57.36| |GPT-3 Instr. (davinci-003)|62.69|63.50| |GPT-3.5|61.71|61.85| |GPT-4|64.28|65.43| |+CausalCoT|66.64|70.09| Several key takeaways are: (1) few-shot prompting brings some improvement, though relatively minor for most models, maybe because the task is genuinely complex; (2) the strongest model, few-shot CausalCoT, reaches 70.09%. The 4-point improvement could be because CausalCoT uses much richer information to complete the task, thus benefiting more from learning from examples. > I believe it is necessary to benchmark what much better systems can do (e.g., few-shot CausalCoT) to understand the capabilities of LMs as well as the potential headroom. Thank you for your advice. We implemented the suggestion and find that few-shot CausalCoT reaches 70.09%, a +4-point increase over 0-shot CausalCoT. We hope this result helps to better understand the capabilities of LMs as well as the potential headroom. ### 2. Evaluation of the Natural Language > The paper does not provide much evaluation of the language side of the datasets In the composition of this work, we followed the standards of dataset creation and experiments in previous work on formal tasks for LLMs, such as this recent [ICLR 2023 paper](https://openreview.net/forum?id=qFVVBzXxR2V) on formal reasoning in logic; their language is similarly formulaic. Nonetheless, we are happy to add evaluations of the quality of the natural language in our data below. **Evaluation:** For our benchmark dataset, we think the appropriate criteria are: (1) grammaticality, (2) human readability, (3) formal correctness, and (4) naturalness, whose results we report below. 1. **Grammaticality:** We ran a grammar check on our dataset using the LanguageTool package, and get on average 1.26 grammatical errors per 100 words (i.e., 98.74% correct), which shows that most of our language follows English grammar. 2. **Human readability** (i.e., how comprehensible/intelligible the questions are to, e.g., a student who has taken a causality course): 96%. We obtain this score by sampling 50 random questions and letting a grad student annotator go through them to mark any questions they cannot understand. 3. **Formal correctness:** 100%, because of our systematic generation process together with the formal solutions. 4. **Naturalness/Perplexity:** In addition, we also measure how natural the questions sound using the automatic metric of perplexity (since human evaluation might be subjective).
We use GPT-2 to calculate the perplexity scores, and obtained a score of 21.17 (the lower, the more natural the language sounds to GPT). For comparison, the perplexity of the MATH Dataset (NeurIPS 2021, by [Hendrycks et al.](https://arxiv.org/abs/2103.03874)) is 57.45, which means that the language in our data is more natural-sounding. For a more intuitive sense, please check the examples of our dataset in Appendix Tables 5-12. **Camera Ready:** We are happy to add the above evaluation scores to the camera-ready version of our dataset. ### 3. Reporting the Human Performance > Some problems could be tricky even for humans. Thus, human evaluation is still valuable. We agree with the reviewer’s intuition that the problems can be challenging for humans too. In addition to the readability score, we also conducted a small experiment getting our coauthors to work through 50 random samples. In these preliminary results, we obtained a correctness score of around 80% as the human performance score. We plan to extend this to a larger cohort for the camera-ready version. **Camera Ready Plan:** We plan to use the human evaluation score in the following way: - The score indicates human performance, against which we can show whether LLMs surpass humans or not. - We want to emphasize that human performance will only be used as a baseline, not as a judgment of the dataset quality. Imagine a university exam that is valid and good-quality, but challenging for the students. Hence, **a good-quality dataset** (i.e., formally correct and readable) **can also be challenging for humans**, and can require specific skills to solve. That’s another motivation for using models to automate these tasks that are challenging for humans. - Another disclaimer is that, as in any cognitive science study, human performance will be a function of the subjects’ background, including their previous education, familiarity with causal formulations, and carefulness when solving these problems. Our reported performance is based on a cohort of researchers familiar with the study who took the questions with care. > it would be helpful to include some full prompt examples in the appendix. Sure, we can provide the full prompts in our camera-ready version. We composed several examples of our prompts at this anonymous link: https://anonymous-link.notion.site/CLadder-031085fc56854955bcf3d30d499f0f42 --- Rebuttal Comment 1.1: Comment: Thanks for your clarification. I appreciate the added experiments and human evaluation, which help address my concern. I am happy to raise my score. --- Reply to Comment 1.1.1: Comment: Thank you for reading our rebuttal materials. We are glad that the newly added experiments and human evaluations help clarify your concerns. We appreciate your raised score and positive support. Have a good weekend :)!
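For the perplexity evaluation described above, a minimal sketch (assuming the standard HuggingFace `transformers` API; not the authors' exact script) would score each question with GPT-2 and exponentiate the mean token cross-entropy:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean next-token cross-entropy
    return torch.exp(loss).item()

# e.g. average perplexity(q) over the benchmark questions yields the
# dataset-level score (lower = more natural-sounding to GPT-2).
```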
Summary: The paper introduces a large dataset, CLADDER, designed to evaluate the ability of LLMs to perform causal inference. The dataset includes 10K binary questions about variable association and treatment efficacy, along with a ground truth explanation. Questions are answered in the context of a hypothetical closed world in which the available measurements allow the questions to be answered unambiguously. Most of the classical causality-related NLP tasks focus on commonsense reasoning and domain knowledge, assessing the ability of the model to make deductions based on knowledge of properties of an object or human behavior in everyday life situations. The CLADDER data goes beyond domain knowledge, requiring the model to perform causal inference: inferring the causal graph, describing the treatment effect of interest related to the question (the estimand), choosing an estimator, and expressing the numerical results. To build the dataset, a causal graph derived from a list of 10 classical structures is sampled. A query type, describing the treatment effect of interest, is then derived. The underlying estimand is then derived and an estimate is obtained using the rules of do-calculus and the abduction-action-prediction steps of counterfactual prediction. All these components (causal graph, query, and results) are then translated into natural language to be ingested by classical models. For each causal graph, a set of 2 to 5 stories taken from the literature is used to ensure that the variable names and their relations are not nonsensical. A template is also used to generate step-by-step explanations. Eight models are then evaluated on the task, with an overall accuracy of 66.64 being reached by GPT-4 with CausalCoT, a chain-of-thought prompting strategy that relies on 4 sub-questions, corresponding to natural reasoning steps, to guide the model. A nonsensical evaluation, accounting for potential data contamination, is also performed. The evaluation is further refined for the CausalCoT model by including an ablation study and assessing the effect of in-context learning. Strengths: Classical causality-related NLP tasks mostly focused on commonsense reasoning and domain knowledge. The paper goes beyond that, introducing a new original task, CLADDER, which theoretically requires the model to infer the causal graph, describe the treatment effect of interest, choose an estimator, and provide an estimate. The paper nicely builds on the framework proposed by Pearl with a dataset of 10K questions covering a large variety of causal graph structures and estimands. Additionally, the explanation generated together with the questions allows further assessment of the reasoning ability of the model. The evaluation of state-of-the-art models on the task is well conducted, with a particular methodology to isolate the effect of data contamination. Additionally, a methodology to boost model performance using chain-of-thought prompting is proposed. While the improvement is marginal, it allows further investigation of the ability of the model to perform the natural reasoning steps needed to complete the task. Weaknesses: To better understand how the provided information is used, it could be interesting to mention a new, completely unrelated attribute in the text for which you provide the marginal probability. Given the low performance, adding noise to the available information is not the priority, but it could be nice for future work.
On top of the ablation study, it would be interesting to know how the CausalCoT model performs if it were given the question for step 1 along with its answer, and then only the questions for steps 2 to 4; then doing the same but providing the answers for steps 1 and 2; and finally the same for steps 1, 2 and 3. It would provide insight into the ability of the model to perform the task when it receives partial answers. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Line 156 _"processes we define specify all necessary information"_ A bit heavy with two verbs following one after the other. I suggest rephrasing, maybe as: "the causal processes defined encapsulates all necessary..." In Figure 4, _Translate the question to a formal estimand_, please note that the notation used with a _\sum_ can be ambiguous, as this is only valid for an infinite amount of data (ambiguous outside of Pearl's community). The estimand doesn't depend on the data observed; the estimator does. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The conclusions made are only valid in a self-contained, hypothetical world without any unmentioned factor or causal relationship. A potential negative impact would be for the model to wrongly infer causal effects in the presence of many unmeasured confounders when conditionals and marginals are available for only a few variables. If widely deployed, that kind of model should be able to state the hypotheses it is relying on, or state that not enough information is available to draw conclusions on efficacy. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing very detailed comments and an elaborate summary of the work. We appreciate the reviewer’s comments that “The paper nicely builds on the framework proposed by Pearl with a dataset of 10K questions covering a large variety of causal graph structures and estimands.” and “The evaluation of state of the art models on the task is well conducted with a particular methodology to isolate the effect of data contamination.” Here we address some remaining questions and comments of the reviewer. > To better understand how the provided information is used, it could be interesting to mention a new, completely unrelated attribute in the text for which you provide the marginal probability. Given the low performance, adding noise to the available information is not the priority, but it could be nice for future work. This is a perfect suggestion! Actually, we are working on a follow-up paper using this idea to see what perturbations in the prompt will distract the model. And we agree, these explorations (in the vein of adversarial attacks or LLM performance analysis) are very interesting to work on. > On top of the ablation study, it would be interesting to know how the CausalCoT model performs if it were given the question for step 1 along with its answer, and then only the questions for steps 2 to 4; then doing the same but providing the answers for steps 1 and 2; and finally the same for steps 1, 2 and 3. It would provide insight into the ability of the model to perform the task when it receives partial answers. This is a nice thought. We conducted the aforementioned experiments and report the results below: | Providing the answer of | Asking the question of | Performance | | ----------------------- | ---------------------- | ----------- | | None | Step 1, 2, 3, 4 | 66.64 | | Step 1 | Step 2, 3, 4 | 68.10 | | Step 1, 2 | Step 3, 4 | 71.03 | | Step 1, 2, 3 | Step 4 | 72.49 | | Step 1, 2, 3, 4 | None (Just final Q) | 74.51 | There is a very nice monotonically increasing trend as we provide more partial answers in the CausalCoT process. We are happy to also put this result table in the camera-ready version of the paper. > In Figure 4, Translate the question to a formal estimand, please note that the notation used with a \sum can be ambiguous, as this is only valid for an infinite amount of data (ambiguous outside of Pearl's community). In our work, the word “data” is used to refer to population-level information, as opposed to finite samples: this is indeed unusual, although consistent with how it is often used by Judea Pearl et al. to stress that certain key problems of causal inference are orthogonal to estimation from finite samples. We will add a footnote and a clarification in the image caption to stress this. > Line 156 "processes we define specify all necessary information" A bit heavy with two verbs following one after the other. I suggest rephrasing, maybe as: "the causal processes defined encapsulates all necessary..." Thank you for the suggestion. We will improve the presentation of the camera-ready version of our paper accordingly. Thank you for the two notes! --- Rebuttal Comment 1.1: Comment: Thanks a lot for adding this analysis to the ablation study. It's reassuring to observe a monotonically increasing trend when adding the different steps. I really appreciate your answers and the quality of the paper. I recommend it for acceptance.
--- Reply to Comment 1.1.1: Comment: Thank you very much for taking the time to go through our rebuttal materials. We are glad that the ablation study results provide more insight into CausalCoT, and thank you for suggesting it in the first place. Also, we appreciate your endorsement of the paper, which encourages us to continue exploring this line of research.
Summary: The authors propose a math-word-problem-style formal causal reasoning dataset, CLADDER, which is the first that can be used to formally evaluate the causal reasoning/inference ability of LLMs. The authors also propose a new prompting strategy, CausalCoT, which improves the zero-shot performance of GPT-4 by 2.4% on the proposed dataset. Strengths: 1. The paper is well-written and easy to follow. Illustrations are helpful. 2. It is the first formal causal inference/reasoning dataset in a natural language, math-word-problem-like format, which provides a nice addition to the current logical/math reasoning datasets. 3. It provides a way to formally evaluate the causal reasoning ability of LLMs for the first time. Weaknesses: 1. The causal inference questions used in the dataset seem to be a bit simple for evaluating GPT-3/GPT-4-scale LLMs. To my understanding, they only require a one-step application of a causal inference equation to binary variables. It would be great if hard questions with multi-step reasoning could be included. 2. To my understanding, only zero-shot prompting is used to evaluate the performance of LLMs in Table 2, while the common evaluation paradigm for the reasoning abilities of LLMs is through few-shot CoT prompting. It would be helpful if the authors provided another set of results with few-shot prompting. 3. It would be helpful to actually fine-tune a small LLM (e.g., 7B LLaMA) on the proposed dataset, since the dataset is large enough to be trained on (10K). In this case, we could have a better understanding of the difficulty of the proposed dataset, and of whether there are spurious correlations/shortcut features in the dataset that can be captured by a statistical model. 4. The authors only apply the proposed CausalCoT method to GPT-4, and the improvement seems to be small (2.4%). The evaluation of the proposed prompting technique seems to be insufficient. I'm wondering what the results are when CausalCoT is applied to other models and when it is applied with few-shot in-context learning. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please see weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback that “The paper is well-written and easy to follow”, that the dataset is valuable as “It is the first formal causal inference/reasoning dataset in a natural language, math-word-problem-like format”, which constitutes “a nice addition to the current logical/math reasoning datasets”, and that “It provides a way to formally evaluate the causal reasoning ability of LLMs for the first time.” Below we address several comments. > To my understanding, they only require a one-step application of a causal inference equation to binary variables. It would be great if hard questions with multi-step reasoning could be included. This comment might stem from the misunderstanding that our questions require one-step reasoning: we explain below that this is not the case, and we will clarify this in a revised version of our paper. As shown in Figure 1, correctly solving our causal questions requires five steps. The last step (Step 5) of the solution corresponds to the idea of “a one-step application of a causal inference equation,” which might have led to the confusion. The remaining four steps involve different subskills, such as (1) parsing the causal graph by causal relation extraction, (2) classifying the query type, (3) deriving the estimand by causal inference (e.g., through backdoor adjustment), and (4) collecting available data by semantic parsing. In particular, to generate ground truth solutions, step (3) is taken care of by the causal inference engine, which in general requires non-trivial reasoning about an underlying causal graph and additional assumptions. This 5-step reasoning process makes our CLadder dataset actually very challenging for current LLMs, with GPT-4 reaching only 64.28% accuracy. As for the connection with other multi-step reasoning works such as HotpotQA, it could be an interesting extension to perform causal inference several times to get the final answer, or to combine formal causal inference with commonsense reasoning by letting the LLM use common knowledge (e.g., age affects experience but not vice versa) to build the causal graph. > It would be helpful if the authors provided another set of results with few-shot prompting. Our Section 5.6 has a small-scale experiment to analyze the effect of few-shot/in-context learning. Following your suggestions, we also ran a dedicated few-shot experiment and report the performance below: ||0-Shot|Few-Shot (10-Shot)| |----|---|---| |GPT-3 Non-Instr. (davinci)|47.42|49.83| |GPT-3 Instr. (davinci-001)|57.07|57.19| |GPT-3 Instr. (davinci-002)|56.24|57.36| |GPT-3 Instr. (davinci-003)|62.69|63.50| |GPT-3.5|61.71|61.85| |GPT-4|64.28|65.43| |+ CausalCoT|66.64|70.09| Several key takeaways are: (1) few-shot prompting brings some improvement, though relatively minor for most models, maybe because the task is genuinely complex; (2) the strongest model, few-shot CausalCoT, reaches 70.09%. The +4-point improvement could be because CausalCoT uses much richer information to complete the task, thus benefiting more from learning from examples. > It would be helpful to actually fine-tune a small LLM (e.g., 7B LLaMA) on the proposed dataset, since the dataset is large enough to be trained on (10K). In this case, we could have a better understanding of the difficulty of the proposed dataset, and of whether there are spurious correlations/shortcut features in the dataset that can be captured by a statistical model.
We carefully designed the dataset to avoid trivial shortcut features by, for example, balancing the number of times the correct answer is “yes” vs “no” across all stories and query types. Additionally, to minimize any correlation between surface wording and the correct answer, for every question we generate both “polarities” (such that the same question can be asked with the correct answer being “yes” or “no”). For example, for a question like “Is ringing alarm less likely than silent alarm overall?” we also generate the positive polarity version “Is ringing alarm more likely than silent alarm overall?” and then randomly select one of the verbalizations with the corresponding answer. Furthermore, note that our benchmark is a challenge set rather than a train+test set, much like the other challenge sets TruthfulQA (ACL 2022) and MoralExceptQA (NeurIPS 2022). For challenge sets, such spurious correlations are generally less of a concern, because no training is involved to let the model learn/overfit to any undesirable “shortcuts.” --- Rebuttal Comment 1.1: Comment: Dear reviewer, we have read your review carefully, conducted the requested experiment, and included a detailed reply in the rebuttal. Could you let us know if you have further questions? Happy to follow up :)!
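To make the polarity-balancing scheme described in the rebuttal above concrete, here is a minimal illustrative sketch in Python; the function name and question templates are hypothetical stand-ins, not the authors' actual generation code:

```python
import random

def make_polarity_pair(event_a: str, event_b: str, answer_if_less: str):
    """Generate both polarities of a comparison question so that surface
    wording is decorrelated from the correct answer. (Illustrative only.)"""
    q_neg = f"Is {event_a} less likely than {event_b} overall?"
    q_pos = f"Is {event_a} more likely than {event_b} overall?"
    a_neg = answer_if_less                              # e.g., "yes"
    a_pos = "no" if answer_if_less == "yes" else "yes"  # opposite answer
    # Randomly keep one verbalization so that "yes"/"no" stay balanced.
    return random.choice([(q_neg, a_neg), (q_pos, a_pos)])

question, answer = make_polarity_pair("ringing alarm", "silent alarm", "yes")
print(question, "->", answer)
```

Because each underlying fact is verbalized with both polarities and one is sampled uniformly, a model that latches onto the wording alone cannot do better than chance.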
Summary: The research question is whether large language models are capable of doing formal causal reasoning as formalized by the approach with SCMs popularized by Pearl. The authors provide a benchmark of 10k examples to evaluate such abilities; they describe in detail the design choices for this benchmark. They then propose a chain-of-thought prompting strategy (CausalCoT) to enhance LLMs with causal reasoning abilities; evaluations are carried out both on OpenAI's proprietary LLMs and open-source LLMs. Strengths: * The design principles behind the benchmark are well-explained. In particular, since LLMs tend to struggle with arithmetic-heavy tasks, the authors took care to use binary variables and simple causal graphs. * A variety of topologically distinct treatment-effect pairs is considered, covering thus multiple causal graphs. * Non-sensical and anti-commonsense variants of stories are injected to reduce the effect of LLMs having memorized commonsense causal knowledge. Results are reported for the different splits (commonsense, anti-commonsense, etc.). * Good quality figures and diagrams; sufficient background material in the appendix for those not familiar with Pearl's causal framework. * A new application of CoT to formal causal inference. Weaknesses: * I have some concerns about whether the research question is appropriate for the conference audience. For example, the cited (in the paper) concurrent work of (Kiciman et al 2023) considers model evaluation on several ways of formalizing causal discovery; this broadens the scope of the potentially interested audience. * In my opinion it would be very important to discuss why this specific approach to evaluating causality matters and why the ML community should care. For example, one might argue that human beings have deeply advanced their causal understanding of the world for centuries without using Pearl's framework to draw causal conclusions, so why should we care about LLMs using this specific framework? To draw another analogy, mathematics keeps progressing, yet almost no professional mathematician can write a formal proof or uses a formal proof to discover new math; the interest in understanding if LLMs can write formal proofs or can assist in writing formal proofs should be well-motivated. * If formal causal reasoning abilities matter for discovering new science (e.g. the quote opening the Introduction), then it would be good to discuss why a setting in which the model uses software tools for causal inference was not considered. Do such tools exist? For example, for formal mathematical proofs this is the case and a natural approach would be to make LLMs make use of such tools. * CausalCoT makes Rung 3 worse (Table 2). IIUC Rung 3 is the most challenging task (counterfactuality), so it would be good to explain why CausalCoT makes the accuracy worse. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * L. 75; is it correct that Rung i+1 is more difficult than Rung i? Maybe it could be emphasized also in this section. * L. 264: please add an explanation of why temperature=0 * Table 2: Can you explain why CausalCoT makes performance drop on Rung 3, which should be the most challenging one in the Ladder? * I am having trouble understanding how CausalCoT works in practice. Could you point me to where I can find an interaction with the model in the supplementary materials? It would be great if one such example could make it to the paper or the appendix. I am sorry if I missed it somewhere.
More specifically, according to this [post](https://www.lesswrong.com/posts/yZb5eFvDoaqB337X5/investigating-causal-understanding-in-llms) some form of few-shot examples are needed to successfully prompt models, so I am wondering if one needs to do the same to get the causal graph representation when using CausalCoT. Overall, my main concern is a lack of discussion or argument for why evaluation on this specific formalization of causal reasoning should matter. I am happy to increase the score if my concerns get addressed. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Good. During the rebuttal period the authors agreed to discuss a couple of extra limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing that “The design principles behind the benchmark are well-explained”, the dataset is diverse with “A variety of topologically distinct treatment-effect pairs” “covering thus multiple causal graphs”, the experiments are multi-faceted with “Non-sensical and anti-commonsense variants of stories”, and the paper has “Good quality figures and diagrams”. In the following, we will focus on a few main points: 1. The value of the paper, including (a) the relation with the concurrent paper Kiciman et al. (2023), and (b) the value of automating formal reasoning abilities for LLMs/society, and 2. Several technical details to clarify. ### 1. Clarification of the Value of the Paper > the cited concurrent work of (Kiciman et al 2023) considers model evaluation on several ways of formalizing causal discovery. Thank you for bringing up the comparison between our paper and this concurrent work (published in May, two weeks before the NeurIPS deadline). Firstly, note that under the [NeurIPS policy](https://nips.cc/Conferences/2023/PaperInformation/NeurIPS-FAQ#:~:text=Authors%20are%20not%20expected%20to%20compare%20to%20work%20that%20appeared%20only%20a%20month%20or%20two%20before%20the%20deadline.), “Authors are not expected to compare to work that appeared only a month or two before the deadline” including self-published work such as arXiv submissions. Nonetheless, we are happy to highlight the _orthogonality_ between Kiciman et al. (2023) and our work. Their work falls under the umbrella of “causality as knowledge”, whereas our work focuses on “causality as formal reasoning”, following the categorization in our Abstract and Intro, L4-6 and L33-39. To illustrate the difference between the two approaches, we use the formulation of “Counterfactual” (Rung 3) as an example: **Example from our data:** Q: Effort has a direct effect on college admission. [...] For nonsmokers who are lazy, the probability of college admission is 55%. For nonsmokers who are hard-working, […] If we disregard the mediation effect through effort, would smoking positively affect college admission? A: Yes. (This answer is obtained by applying the causal inference engine.) **Example from Kiciman et al (2023)’s data (Table 8):** Q: A woman does not order Chinese food. What would have happened if she had ordered Chinese food? A: The woman would have become less hungry. **Difference:** Crucially, our answer follows deductively from the rules of formal causal inference and the information given in the prompt, whereas a commonsense question relies on a commonsense understanding of actions and consequences, which is far more susceptible to implicit biases, e.g., if the woman was ordering for someone else, then her hunger level wouldn’t change at all. > it would be very important to discuss why this specific approach to evaluating causality matters and why the ML community should care. We appreciate the reviewer’s thoughtfulness concerning the value of our work to the ML community. Here are some of the main aspects to better motivate the value of our contributions: Our causal inference dataset relies on _formal reasoning_, but not _generating proofs_. Our benchmark occupies a hitherto unaddressed niche of evaluating the formal reasoning abilities of LLMs (much like math word problems or university math exams), but for problems concerning causality. Now, one might question to what extent it is necessary to evaluate the formal causal reasoning abilities of LLMs.
Aside from (1) the academic benefits of understanding the strengths and weaknesses of the current state-of-the-art for LLMs, we believe our work is also a step towards some important practical applications. (2) From analyzing medical studies to risk assessments, formal causal inference problems occur in many high-value tasks. Here, as the use of LLMs to automate certain tasks becomes more common, it is important to (3) understand to what extent LLMs can reliably understand and solve such problems. It has been claimed that LLMs understand causality well (e.g., Kiciman et al. report very high performance numbers such as 97% and 92%). In contrast, we are the first to show that LLMs have a far longer way to go, reaching only 60+% now on CLadder. It is scientifically worthwhile to study what LLMs can already do and what they cannot. > it would be good to discuss why a setting in which the model uses software tools for causal inference was not considered. Do such tools exist? We concur that it could be interesting to connect LLMs to causal inference tools (e.g., TETRAD). However, we argue that one important step towards augmenting LLMs is understanding to what extent they already solve certain reasoning tasks. ### 2. Technical details > Table 2: Can you explain why CausalCoT makes performance drop on Rung 3 In short, Rung 3 remains difficult even with the assistance of CausalCoT because the model fails severely at one of the steps. In “5.4 Error Analysis by Subquestions”, we can see that in Table 3, one of the trickiest steps for LLMs is Step 2, to identify the query type (e.g., ATE, NDE, …), which only reaches 50% F1 in general, and is worst for Rung 3, with only 42% F1. > why temperature=0? For reproducibility; otherwise the LLMs' performance will fluctuate. > Could you point me to where I can find an interaction with the CausalCoT model in the supplementary materials? Thank you for the good suggestion; we will add the example prompts and more experimental details in the Appendix of the improved version of the paper. We will also show here the prompt of our zero-shot and 10-shot experiments. An example of our **zero-shot prompt** is at this [anonymous link](https://anonymous-link.notion.site/CLadder-031085fc56854955bcf3d30d499f0f42#0df909ae3e064b89afc78817332d6798). An example of our **10-shot prompt** is at this [anonymous link](https://anonymous-link.notion.site/CLadder-031085fc56854955bcf3d30d499f0f42#bda58d0c708e43c4bb8417bb58b647b8). --- Rebuttal Comment 1.1: Title: Further Questions Comment: Thanks for the explanations and the example prompts. I have some more questions and comments. `Comparison to Kiciman et al.` I am aware of the venue's policy on comparison to concurrent work. While reading the paper under review, the question of why studying this particular formalization of causality is important has come to my mind multiple times. In the related work the authors cite Kiciman et al. and I had a look at that work to get a better understanding of the current debate around causality and LLMs. I found the way the research topic is *presented* in Kiciman et al. quite compelling. In the paper under review I feel the part from L30-50 would need expansion to make the research question look compelling. For example, understanding the difference between correlation and causation (L36-37) or proposing plausible novel hypotheses (L38-39) does not really need Pearl's framework.
However, one might point out some examples of the necessity or new deep insights/discoveries brought by this causality framework and improve the motivation for the research question. In summary, I do think a bit more work should be put into presenting and motivating the research question. `Table 2: Can you explain why CausalCoT makes performance drop on Rung 3` I am still confused. I understand that there is a drop in Step 2. However, in Table 3 it seems that adding CausalCoT results in about a 5-point drop from GPT-4 on Rung 3, which I find surprising as adding formal causal reasoning with CoT should help on the most challenging Rung 3. Also on these tasks, what magnitude of accuracy difference is statistically significant? `However, we argue that one important step towards augmenting LLMs is understanding to what extent they already solve certain reasoning tasks.` I am worried that if models use the right tools, they might solve these causality tasks easily. By analogy, an LLM can make simple arithmetic mistakes, but if it uses a calculator tool in the right way it can get many arithmetic questions right. While I do not think there is a need to add experimental results with tools, I do think this should be pointed out as a potential limitation of this study and its conclusions on the abilities of causal understanding of LLMs. --- Reply to Comment 1.1.1: Comment: `Comparison to Kiciman et al.` We appreciate your interest in our work. And we appreciate the effort you've taken to delve into the Kiciman et al. work to gain a deeper understanding of our research. However, we believe that we might be on a slippery slope if we start drawing parallels between our work and that work. This might inadvertently contradict the concurrent work policy. So we would prefer for our work to be judged on its own merit. Independent of the above, we agree that the difference between correlation and causation is the most obvious and best-known aspect of causality; however, Pearl's framework is much richer than this. While Pearl's framework is not the only one dealing with causality, it is probably the most common one in the ML community, so it is a very natural framework to use here. For example, another approach to causal inference which may be considered, apart from the one based on structural causal models (SCMs), is the potential outcomes (PO) framework. However, while the two approaches dedicate more or less emphasis to different aspects of causal inference, it has been argued that the SCM framework "does not exclude any concept or relationship definable in the PO approach" (see http://causality.cs.ucla.edu/blog/index.php/2020/01/29/on-imbens-comparison-of-two-approaches-to-empirical-economics/). Moreover, the SCM framework was developed by AI researchers (as opposed to the PO framework, which was mostly developed in the context of economics): it therefore puts particular emphasis on algorithmic aspects of causal reasoning (see, e.g., https://ftp.cs.ucla.edu/pub/stat_ser/r360.pdf for counterfactuals). This makes it particularly suited for our objective, where we want to algorithmically generate ground truth answers to causal questions, without having to assess the correctness of an answer based on common sense. We will be happy to elaborate on the aspects above in the Introduction, to better contextualize our work. In case the above does not answer your questions, maybe you could further let us know what you dislike about using the Pearl framework here?
Why do you think it would be intellectually preferable not to use it? `Table 2: Can you explain why CausalCoT makes performance drop on Rung 3:` We agree that the result is in some sense counterintuitive: it is an empirical finding, and our explanation is that it might be related to difficulties in distinguishing the query type, as our investigation through CausalCoT shows, since this sub-task is harder for counterfactuals (Rung 3) than for lower rungs, due to a broader range of queries to be considered. We can include this in the Limitations section too, in that a precondition for CoT to work well is that each subquestion can be well-addressed by LLMs, which is a bit difficult for the query-type questions in Rung 3. As for the significance test, yes, we can run one and add it to the next version of the paper. `one important step towards augmenting LLMs is understanding to what extent they already solve certain reasoning tasks.` We would agree that investigating the performance with plugins would be great, and it would be an extremely interesting aspect to investigate in the future. As you suggested, we will discuss this future research direction in the paper. There is, as you mentioned, a thread of work attempting to improve LLM math abilities, e.g., https://arxiv.org/abs/2308.05713, where it is suggested that “the plugins significantly enhance GPT’s ability to solve these problems”. The problem is not solved though: as the authors remark, "there are still often ”interface” failures; that is, GPT often has trouble formulating problems in a way that elicits useful answers from the plug-ins". We expect that something similar will happen for causal inference, once suitable plugins are built, where the language-to-tool interface will still be a non-trivial research question. Moreover, the possible future existence of such plugins will make it even more important to have systematic benchmarks that assay causal inference capabilities in a broad set of tasks (not just correlation vs. causation), so we would view this as nicely complementary to what we do.
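As background for the query types discussed throughout this thread (ATE, NDE, etc.), the standard formalizations in Pearl's framework are as follows; these are textbook definitions, not quotations from the paper:
$$\text{Rung 1 (association):} \quad P(Y = y \mid X = x)$$
$$\text{Rung 2 (intervention):} \quad \mathrm{ATE} = \mathbb{E}[Y \mid do(X=1)] - \mathbb{E}[Y \mid do(X=0)]$$
$$\text{Rung 3 (counterfactual):} \quad \mathrm{NDE} = \mathbb{E}[Y(X=1, M(X=0))] - \mathbb{E}[Y(X=0, M(X=0))]$$
where $M$ denotes a mediator. The college-admission example quoted earlier in this thread, which asks the reader to "disregard the mediation effect through effort," is exactly an NDE query with effort playing the role of $M$.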
Rebuttal 1: Rebuttal: Firstly, we would like to thank all reviewers for a wealth of feedback. Four out of five reviewers recommended acceptance, and we are encouraged by the large number of thoughtful comments and suggestions, which demonstrate not only that all reviewers largely understood our focus, but also that they expressed interest in our work. We appreciate the acknowledgement of our work’s contributions in the following ways: 1. **Relevance:** This work “focuses on an important problem” (bMQ8) and “provides a way to formally evaluate the causal reasoning ability of LLMs for the first time” (hcc4). 1. **Dataset contribution:** The created benchmark is “the first formal causal inference/reasoning dataset” (hcc4), thus “a valuable evaluation resource” (bMQ8) and “The paper nicely builds on the framework proposed by Pearl with a dataset of 10K questions covering a large variety of causal graph structures and estimands” (rbbm). 1. **Method contribution:** Our CausalCoT is “a novel structure that is based on causal reasoning steps” (bMQ8) and “which lets LMs explicitly state causal problems and then solve [them]” (YWrR). 1. **Comprehensive evaluation results:** “The evaluation of state of the art models on the task is well conducted with a particular methodology to isolate the effect of data contamination.” (rbbm) and addressing "effects of memorization in LLMs" (YWrR). A question shared across most reviewers asks for clarification of whether we used a zero-shot or few-shot evaluation. We clarified in our response that our main experiments are zero-shot to simplify analysis, while we include some few-shot results in the paper. Upon the request of the reviewers, we have also conducted several new experiments to report the few-shot performance of all models. Additionally, some reviewers asked about the quality of the dataset, for which we conducted new human evaluations and automatic evaluations to gain better insight into the quality of our data. Finally, we also supplemented our rebuttal with ablation studies and various experiments requested by the reviewers (discussed in the corresponding rebuttals). We look forward to the discussions to make any further clarifications to help the reviewers reach a consensus on our submission's value to the community.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper evaluates the causal reasoning ability of language models and proposes a structured prompting approach to improve this ability. For this goal, the authors create a benchmark. The benchmark includes synthesized causal graphs and related queries. The answers to queries are computed based on a causal reasoning formalism given the causal graph. The graph and queries are verbalized for creating the corpus to provide natural language context and questions. For evaluation purposes, a variety of current large language models such as GPT-3.5 and GPT-4 are used. The language models are given the questions that need causal reasoning to be answered. In addition to the new benchmark and the LLM evaluations, the authors propose a structured chain of thought prompting strategy (called CausalCoT) to improve the ability of LLMs in causal reasoning. The idea is to provide step-by-step prompts based on the steps of formal reasoning. The steps include prompting for extraction of the causal graph, classifying the query type, and several others. The results show that structured prompting improves the causal reasoning ability of the large language models. Especially for the questions that are anti-commonsensical and are not contained in the training data of large language models, the models show a more considerable extent of improvement compared to the baselines. Strengths: The paper focuses on an important problem which is causal reasoning with LLMs. The created benchmark is potentially a valuable evaluation resource. Structured prompting is a recent research trend for improving LLMs, and the authors follow the same trend but with a novel structure that is based on causal reasoning steps. The findings show the effectiveness of structured prompting that is in line with other related research. Weaknesses: --It is not clear to me how the LLMs follow each step of reasoning: Is this totally zero-shot? or do you do in-context learning with a few examples? if this is zero-shot, it is a bit hard to see why the model knows about causal graphs and specific query types for example. --What is the lexicon used for verbalization of the causal model? How many distinct works are used? How do you evaluate if the text that is generated based on a template is in line with commonsense or not? --The data set is synthetic which is fine. However, do you have any realistic scenarios to test the model on? I mean some realistic problem setting that shows this causal reasoning is required and the structured prompting is helpful in that realistic setting. More specifically, do we have real natural language explanations that explain complex causal graphs? Can you add more discussions in this direction to the paper? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the above weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: there is a section on this in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s comment that this work “focuses on an important problem”, “The created benchmark is potentially a valuable evaluation resource”, and our CausalCoT is “a novel structure that is based on causal reasoning steps”. > Is this totally zero-shot? or do you do in-context learning with a few examples? To avoid effects of arbitrary choices of hyperparameters of a few-shot evaluation (e.g. number of samples and sampling method), we report the zero-shot performance of all models in the main results (Section 5.2), since most LLMs are trained on a large internet corpus that covers basic causality knowledge. In the paper, we also did a small-scale analysis of the few-shot setting in Sec 5.6 “Effect of In-Context Learning”. During the rebuttal period, we extended the experiments, and obtained the performance for few-shot learning using 10 examples to instruct CausalCoT, leading to an accuracy of 70.09% (a +4 point increase over the 0-shot CoT in Table 2), as the model learns more correct reasoning from the examples. > if this is zero-shot, it is a bit hard to see why the model knows about causal graphs and specific query types for example. As shown in Figure 1, the first step is causal relation extraction (i.e., text-to-causal-graph parsing). In each input prompt, all the necessary causal relations are explicitly mentioned (in natural language), so instead of inferring causal relations from training data, the model only has to parse the input and then solve the causal inference problem. This way we focus our evaluation on formal reasoning, rather than commonsense-based causal reasoning as in related work. Hence, in the error analysis by subquestions (Section 5.4), the first step tests whether models can comprehend causal text expressions well enough to ground them into symbolic forms. And the second step, to classify a question into specific query types, is also a skill that LLMs should have knowledge of given their training data spanning much of the internet and many book corpora. For example, GPT-3.5 and GPT-4 can act as a causality textbook to explain the definition of each query type and give examples. And this step is to test the reverse ability. For example, the question “Is the chance of YY larger when observing XX?” is asking about correlations (Rung 1), and the question “Will XX increase the chance of YY?” is asking about the average treatment effect (Rung 2). We regard such skills as within the current capabilities of LLMs given their large training data. > How many distinct works are used? As listed in Appendix A.1, we collected causal questions from a list of nine commonly-used causality books and papers (Pearl et al., 2000; Glymour et al., 2016; Peters et al., 2017; Pearl and Mackenzie, 2018; Neal, 2020; Halpern and Pearl, 2005a; Halpern and Pearl, 2005b; Hopkins and Pearl, 2007; Pearl, 2009a). > What is the lexicon used for verbalization of the causal model? We report the question statistics in Table 1, where we have on average 94.47 words per question, and 539 vocabulary words in total. We also show data examples in Appendix Tables 5-12. > How do you evaluate if the text that is generated based on a template is in line with commonsense or not? As in lines 171-183, we collect commonsensical causal stories from common examples in causal inference books and papers (in Appendix A.1), which always set up the problems in a realistic way, such as the drug-gender-recovery example of Pearl and Mackenzie (2018) to illustrate Simpson’s paradox.
And for the non-commonsensical stories, their detailed generation process is described in lines 177-183. > The data set is synthetic which is fine. However, do you have any realistic scenarios to test the model on? I mean some realistic problem setting that shows this causal reasoning is required and the structured prompting is helpful in that realistic setting. More specifically, do we have real natural language explanations that explain complex causal graphs? Can you add more discussions in this direction to the paper? Yes, we can add more discussions about extending the dataset to more real-world scenarios in the camera-ready version of the paper, such as fake news debunking and logical fallacy correction. In the meantime, when composing the dataset’s commonsensical part, we took care to set many problems more realistically. Some of the “commonsensical” questions included in our dataset were inspired by real-world examples of policy-relevant causal inference questions. For example, the dataset includes questions on vaccine efficacy (see the example in our Figure 1) inspired by similar questions which arose in the context of the COVID-19 pandemic, and where incorrect causal reasoning resulted in fallacies where vaccinations were considered to be harmful instead of beneficial in preventing severe cases (see, e.g., https://www.covid-datascience.com/post/israeli-data-how-can-efficacy-vs-severe-disease-be-strong-when-60-of-hospitalized-are-vaccinated). Another note on the general usage of our theoretical framework is that the notions of NDE and NIE are very applicable to cases like bias and discrimination. Let’s take the example of how the double-blind review process for papers works: Potentially, we have the causal graph of author identity->paper quality->acceptance and also author identity->acceptance. The unfair causal effect is the natural direct effect, i.e., NDE(author identity->acceptance), and that’s why author identities are anonymized to avoid this problem. --- Rebuttal Comment 1.1: Comment: I have read the reviews and the author's response. I was already positive about this work but I also see the limitations in verifying the validity of the generated model and the extension to the realistic domain. My score stays unchanged. --- Reply to Comment 1.1.1: Comment: Thank you very much for your positive feedback on our work. Appreciate it!
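For concreteness, the estimand-derivation step mentioned throughout these rebuttals often reduces to applying a standard identification formula; the backdoor adjustment cited above is the textbook example (this is the general formula, not a result specific to the paper):
$$P(Y = y \mid do(X = x)) = \sum_{z} P(Y = y \mid X = x, Z = z)\, P(Z = z)$$
where $Z$ is a set of covariates satisfying the backdoor criterion relative to $(X, Y)$. Substituting the conditional probabilities stated verbatim in a CLadder question then yields the numeric ground-truth answer, which is how the causal inference engine can produce labels without appealing to common sense.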
null
null
null
null
null
null
Estimating and Controlling for Equalized Odds via Sensitive Attribute Predictors
Accept (poster)
Summary: This work provides tight and computable bounds on the EOD violation of a classifier in a setting where the sensitive attributes are unknown. In addition, a post-processing technique is proposed to provably yield classifiers that maximize prediction power while achieving minimal worst-case EOD violations with respect to unobserved sensitive attributes. Experiments are done on both synthetic and real data. Strengths: Significance: The problem under study is interesting and important. It would be useful to be able to reduce the upper bound of certain fairness violations in models when sensitive attributes are unknown. Originality and quality: the proposed solution is novel and technically sound. Clarity: the presentation is clear with limitations analyzed in detail. Weaknesses: 1. One assumption of the proposed approach is that there exists a dataset of (X,A) to learn $\hat{A} = h(X)$ from. This could limit the use of the approach and should be discussed. 2. It would be beneficial to demonstrate the verification of Assumption 1 on the two datasets. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Q1: Can you elaborate on the potentially bad outcome where applying $f_{opt}$ yields a less accurate and more biased (in terms of $\Delta_{FPR}$) model, as in Figure 3? Q2: Is it possible to use a different dataset of similar features (but not the same features as in the experiments) to learn $\hat{A} = h(X)$ (in some form of transfer learning)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Results on the CheXpert data show that, although the proposed method successfully reduces the worst-case TPR and FPR, it may actually increase the true TPR or FPR. Figure 3b shows an increase of $\Delta_{FPR}$ of $f_{opt}$ over $f$. This should be discussed as the potential outcome of applying $f_{opt}$ can be a less accurate and more biased model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
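For reference, the equalized-odds violations $\Delta_{TPR}$ and $\Delta_{FPR}$ used in this review and the rebuttals below follow the standard definitions for a binary sensitive attribute (written here to match the rebuttals' notation, not quoted from the paper):
$$\Delta_{TPR} = P(\hat{Y} = 1 \mid A = 1, Y = 1) - P(\hat{Y} = 1 \mid A = 0, Y = 1)$$
$$\Delta_{FPR} = P(\hat{Y} = 1 \mid A = 1, Y = 0) - P(\hat{Y} = 1 \mid A = 0, Y = 0)$$
Equalized odds requires both quantities to vanish; equal opportunity relaxes this to $\Delta_{TPR} = 0$ alone.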
Rebuttal 1: Rebuttal: **Comment/Concern 1: Assumption of the proposed approach is that there exists a dataset of $(X,A)$ to learn $\widehat{A} = h(X)$. This could limit the use of the approach and should be discussed** Answer: We completely agree. All the contributions of our work rely on there being a dataset of $(X,A)$ to learn $\widehat{A} = h(X)$. As we explain in the Introduction, this is a milder assumption (or often an alternative assumption) to having a dataset from the entire joint distribution over $(X,Y,A)$. Our results are limited to such cases, however, so they do not apply when a predictor $\widehat{A}$ for $A$ cannot be constructed. **Comment/Concern 2: Verification of Assumption 1 on the two datasets** Answer: We thank the reviewer for mentioning that we do not formally demonstrate that Assumption 1 holds on the datasets used in our experiments. We refer the reviewer to the global author rebuttal where there is a **pdf** attached. In this pdf we provide Table 1, which demonstrates that Assumption 1 (or its relaxations) holds for the datasets in the paper along with new datasets (descriptions of the new datasets/experiments are detailed in the global author rebuttal). **Comment/Concern 3: Can you elaborate on the potentially bad outcome where applying $f_{opt}$ yields a less accurate and more biased model** Answer: Certainly. The reviewer is correct: it is clear from Figure 3 that applying $f_{opt}$ yields a less accurate and more biased (in terms of $\Delta_{FPR}$) model. Using such a model could thus be seen as controversial because it might perpetuate unfairness. However, in some settings, one may have the goal of constructing a classifier, $f$, that has a fairness violation that does not exceed a budget $\epsilon$ (for example, as required by regulatory agencies). In these cases, our method allows one to potentially achieve this goal by reducing the worst-case violation so that it is less than $\epsilon$. Nonetheless, we recommend thoroughly examining the problem at hand with domain experts to assess if the potential (small) bias introduced by our method constitutes a manageable risk. This is because in a demographically scarce setting one cannot calculate the true fairness violation, and thus, one cannot have an idea of how the true violation ultimately changes as $f$ is modified. **Comment/Concern 4: Is it possible to use a different dataset of similar features (but not the same features as in the experiments) to learn $\widehat{A} = h(X)$ (in some form of transfer learning)** Answer: In some restricted sense, yes: one could learn a $\widehat{A} = h(X)$ from $\mathcal{D}_1$ using whatever features are available to the user. However, these features must be present in the dataset $\mathcal{D}_2$ over $(X,Y)$ that is used to learn $\widehat{Y} = f(X)$. If this is not the case, then the user with access to $\mathcal{D}_2$ would not be able to impute proxy sensitive attributes $\widehat{A}$ in lieu of $A$ and compute the bounds on the true fairness violations. However, the features in $\mathcal{D}_1$ might include more features than those present in $\mathcal{D}_2$. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' revision. I would recommend accepting this paper.
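A minimal sketch of the two-dataset workflow described in this rebuttal (learn $\hat{A} = h(X)$ on a dataset over $(X, A)$, learn $\hat{Y} = f(X)$ on a separate dataset over $(X, Y)$, then impute proxy attributes); the synthetic data and the choice of logistic regression are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic stand-ins: D1 = (X1, A1) carries sensitive attributes,
# D2 = (X2, Y2) carries labels but no sensitive attributes.
X1, A1 = rng.normal(size=(500, 5)), rng.integers(0, 2, 500)
X2, Y2 = rng.normal(size=(500, 5)), rng.integers(0, 2, 500)

h = LogisticRegression().fit(X1, A1)   # sensitive-attribute predictor A_hat = h(X)
f = LogisticRegression().fit(X2, Y2)   # label classifier Y_hat = f(X)

A_hat = h.predict(X2)                  # impute proxy attributes in lieu of A
Y_hat = f.predict(X2)
# The triple (A_hat, Y2, Y_hat) is all that is needed to evaluate the paper's
# worst-case bounds and to run the post-processing correction.
```

As the rebuttal notes, this only works when the features used by $h$ are also present in $\mathcal{D}_2$.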
Summary: The paper explores the fairness problem in machine learning when sensitive attributes (e.g., demographics and gender) are unavailable for practitioners. The authors focus on the Equalized-Odds (EOD), a well-known definition of fairness for classifiers. The authors provide bounds for EOD violation in a setting where sensitive attributes are unavailable. Additionally, they propose a post-processing method to control the worst-case EOD. Finally, the authors illustrate their results via synthetic and real-world datasets experiments. Strengths: * The paper is very well written. Particularly the related work section is very informative. * The authors explore an interesting problem for the community and propose a different view of the problem, i.e., a worst-case optimization. * The theoretical guarantees provide a method for practitioners to access the quality of the method (assuming a worst-case scenario). * The proposed method is simple, intuitive, and computationally inexpensive. Weaknesses: * The evaluation in the main paper is limited. The authors only apply their method to one real-world example. I suggest adding at least two more examples and as many examples as possible in the appendix. These examples can convince us that: (i) Assumption 1 is followed in multiple scenarios and (ii) your method helps to improve fairness. * The fact that the Naïve approach had a better "true" EOD is interesting, and it might indicate that your method is not very efficient, i.e., using the sharp upper bound as a proxy for the true value is too pessimistic. Performing more experiments in real-world datasets can help the authors and readers understand when minimizing the upper bound is better than the naïve approach. * It needs to be clarified how to extend the approach to a setting where the sensitive attribute is not binary. In the introduction, the authors list examples of sensitive attributes as "demographics and gender," which are not binary. A subsection is recommended to discuss how your results would change when $A$ is not binary. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * Is it possible to extend this worst-case analysis to other post-processing methods? Combining it with the optimized method proposed in [1] would be very interesting. * In Theorem 2 the authors assume that $\mathcal{F}$ is such that $ \forall f \in \mathcal{F}$ the Assumption 1 is followed. After that, they conclude that then, there exists a fair model in $\mathcal{F}$. This doesn’t seem correct. For example, take $\mathcal{F} = \{f_0\}$ where $f_0$ follows Assumption 1 but is not worst-case optimal. Then, there is no fair model in $\mathcal{F}$. Perhaps, the authors mean $\mathcal{F}$ to be the set of *all* models that follow Assumption 1. However, in this case, assuming knowledge of $\mathcal{F}$ does not sound reasonable. Could the authors clarify this point? * Could you describe (in the main paper) when the worst-case bound is achieved? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors discuss their work limitations. I also suggest discussing the ethics of predicting users' sensitive attributes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Comment/Concern 1: More real-world experiments to convince us that Assumption 1 is followed in many scenarios and your method helps to improve fairness.** Answer: Thank you for the suggestion. We have performed additional experiments to show that Assumption 1 holds in multiple scenarios and showcase the utility of our results. We refer the reviewer to the global author rebuttal, which provides details of the experiments, and an attached **pdf** which contains the results. We would also like to comment on the point the reviewer made that our method "helps to improve fairness". It actually reduces the worst-case fairness violation, and we make no claim that it reduces the true violation. As we explain in our paper, doing so is impossible in general: one cannot claim that performing any correction of $\hat Y$ with respect to $\hat A$ provably reduces the true violation unless strict and unverifiable assumptions are made about $\hat A$ and $\hat Y$ (see Awasthi et al. [2021] and Kallus et al. [2022] for further details). **Comment/Concern 2: Naïve approach had a better true EOD** We thank the reviewer for bringing this up. On one hand, improvements in the true fairness violation are not verifiable (as explained above). On the other hand, Awasthi et al. [2019] study the properties of a classifier $\tilde{Y}$ that comes from using the post-processing method by Hardt et al. [2016] with noisy sensitive attributes $\hat A$ in lieu of $A$ (i.e. the "naive" approach). They provide conditions on $\hat A$ that are necessary for $\tilde{Y}$ to have a smaller true fairness violation than the original classifier, $\hat{Y}$. One is that $\hat{Y} \perp \hat{A} \mid A,Y$, which is strict and not verifiable in demographically scarce regimes. So when this condition does not hold, no guarantees exist that show that correcting with respect to $\hat{A}$ reduces the true fairness violation. While our post-processing method also does not provide any guarantee about the true violation, it is guaranteed to provably reduce the worst-case violation. **Comment/Concern 3: Extending the result to settings with non-binary sensitive attributes** We agree that cases where $A$ is not binary are interesting. Definitions of fairness can be extended to these situations. For simplicity, suppose one wants to enforce Equal Opportunity and $A \in \{1, \dots, k\}$. Then one can require $P(\widehat{Y} = 1 \mid A = i, Y = 1) = P(\widehat{Y} = 1 \mid A = j, Y = 1)$ for all $i \neq j$. Enforcing this using the method in Hardt et al. [2016] would mean including $\frac{k(k-1)}{2}$ constraints. In our setting, to reduce the worst-case violations using $\hat{A}$ we would also have to include $\frac{k(k-1)}{2}$ constraints since there are that many bounds to reduce. While this can be done, it will naturally cause the expected loss to increase since there are more constraints. **Comment/Concern 4: Extending worst-case analysis to other post-processing methods/Combining it with the optimized method in [1].** In Theorem 2 we provide necessary conditions for $\hat Y$ to have minimal worst-case fairness violations. As long as we can add these constraints to other post-processing methods, we believe we can combine our analysis with other methods. The reviewer mentions a method proposed in [1] without a citation. We will gladly answer any questions about combining our results with this method once the reviewer clarifies which work they are referencing.
**Comment/Concern 5: Clarity of Theorem 2** We thank the reviewer for pointing out this inaccuracy in our original statement of Theorem 2 -- we meant that $\mathcal F$ simply parametrizes estimators for the random variable $Y$. Here is the rewritten version of the result in its correct form. Theorem 2: Let $\hat{A} = h(X)$ be a fixed sensitive attribute classifier with errors $U_0$ and $U_1$ that produces rates $\hat{r}\_{i,j} = P(\hat{A} = i, Y = j)$. Let $\mathcal{F}$ be the set of all predictors of $Y$ parametrized by the rates $\hat{r}\_{i,j}$ that, paired with $h$, satisfy Assumption 1. Then, $\exists \overline{Y} \in \mathcal{F}$ with group conditional probabilities $\widehat{\underline{\alpha}}_{i,j} = P(\overline{Y} = 1 \mid \hat{A} = i, Y = j)$ that satisfy the following condition: $$\frac{\hat{r}\_{0,j}}{\hat{r}\_{0,j} - \Delta U}\widehat{\underline{\alpha}}\_{0,j} - \frac{\hat{r}\_{1,j}}{\hat{r}\_{1,j} + \Delta U}\widehat{\underline{\alpha}}\_{1,j} = \frac{\Delta U}{2}\\left(\frac{1}{\hat{r}\_{1,j} + \Delta U} + \frac{1}{\hat{r}\_{0,j} - \Delta U}\\right)$$ Furthermore, any such $\overline{Y}$ has minimal maximal-fairness violation: $$|\Delta\_{TPR}(\overline{Y})| \leq B\_{TPR}(\overline{Y}) \leq {B}\_{TPR}(\hat{Y}) \quad \text{and} \quad |\Delta\_{FPR}(\overline{Y})| \leq B\_{FPR}(\overline{Y}) \leq {B}\_{FPR}(\hat{Y})$$ This states that any predictor of $Y$ that satisfies Assumption 1 AND the condition above has a worst-case fairness violation that is less than or equal to that of any predictor of $Y$ that only satisfies Assumption 1. In other words, this condition is a necessary condition for the worst-case fairness violation to be minimal. **Comment/Concern 6: Could you describe when the worst-case bound is achieved?** Answer: Certainly, we will add this in the revised version of the paper. Under Assumption 1, for $\Delta_{TPR}$, the bound is achieved when, for $i \in \\{0,1\\}$, the unobserved quantities $P(\hat{Y} = 1, A = 1, Y = 1 \mid \hat{A} = i)$ and $P(\hat{Y} = 0, A = 1, Y = 1 \mid \hat{A} = i)$ achieve their maximal and minimal feasible values, respectively. To be more precise, when: $$P(\hat{Y} = 1 , A = 1 , Y = 1 \mid \hat{A} = 1) = P(\hat{Y} = 1 , Y = 1 \mid \hat{A} = 1)$$ $$P(\hat{Y} = 1 , A = 1 , Y = 1 \mid \hat{A} = 0) = P(A = 1 \mid \hat{A} = 0)$$ $$P(\hat{Y} = 0 , A = 1 , Y = 1 \mid \hat{A} = 1) = P(\hat{Y} = 0 , Y = 1 \mid \hat{A} = 1) - P(A = 0 \mid \hat{A} = 1)$$ $$P(\hat{Y} = 0 , A = 1 , Y = 1 \mid \hat{A} = 0) = 0$$ A similar result holds for $\Delta\_{FPR}$. --- Rebuttal Comment 1.1: Comment: I want to thank the authors for their careful answers. The paper substantially improved after the revision. I am suggesting that the papers get accepted, increasing the score to 7 and the contribution score to 3. Sorry for not sharing the citation. I probably had a problem with the format I copied. However, I still think it could be interesting for the authors to explore the connections between [1] and their work. [1] Alghamdi, W. et al. Beyond Adult and COMPAS: Fair Multi-Class Prediction via Information Projection. NeurIPS 2022.
Summary: The paper proposes tight upper bounds for the equalized odds violation of a predictor in a setting without sensitive attributes. It also presents a post-processing correction method to control the worst-case equalized odds violation and presents results on a variety of synthetic and real datasets. Strengths: 1. The paper addresses a critical issue in fair ML. 2. The tight and computable upper bounds for the equalized odds (EOD) violation are a valuable contribution, as they allow for a precise understanding of the worst-case EOD violation of a predictor. 3. The proposed post-processing correction method for controlling worst-case EOD is interesting. 4. The paper is backed by experiments on both synthetic and real datasets, strengthening the validity of the results. Weaknesses: 1. Assumption 1 may limit its applicability in scenarios where the proxy sensitive attributes are not accurate. 2. The paper's results are limited to EOD and its relaxations as definitions of fairness. Extending the results to other notions of fairness may be non-trivial and deserves further consideration. 3. The paper does not consider how one could train a classifier from scratch to have minimal violations, which could be an important direction to explore. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Could you provide more insight into the limitations of Assumption 1 and how these might be addressed? 2. Could you elaborate on how the proposed post-processing correction method could be implemented in real-world scenarios? 3. How could your results be extended to other definitions of fairness, and what challenges might be encountered in this process? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The paper relies on Assumption 1, which may limit its applicability in scenarios where the proxy sensitive attributes are not accurate. Despite this limitation, this work is a nice contribution. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Comment/Concern 1: Limitations of Assumption 1 and how they might be addressed** Answer: Absolutely. For clarity, we re-write Assumption 1 below: For $i,j \in \\{0,1\\}$ the classifiers $\hat{Y} = f(X)$ and $\hat{A} = h(X)$, with errors $U_i = P(\hat A = i, A \neq i)$, rates $r_{i,j} = P(\hat A = i, Y = j)$, and conditional errors $\hat {\alpha}\_{i,j} = P(\hat Y = 1 | \hat{A}=i, Y = j )$, satisfy $$\frac{U_i}{\hat{r}\_{i,j}} \leq {\hat \alpha}\_{i,j} \leq 1 - \frac{U_i}{\hat{r}\_{i,j}}$$ In short, this assumption requires the errors of $\hat A$ to be small relative to the estimated group-conditional rates of $\hat Y$. Naturally, all works that focus on using proxy sensitive attributes to measure fairness make some assumptions, and our work is no different in this regard. However, our assumption is significantly milder than those in existing results. For example, [Awasthi et al., 2021] assume that $\hat{Y} \perp \hat{A} \mid A,Y$, which is strict and not verifiable in demographically scarce regimes. Our assumption is directly verifiable in practice, and does not require independence -- it simply requires $\hat{A}$ to be reasonably accurate. In fact, it has been shown in many settings that accurate $\hat{A}$ can be developed [Baines and Courchane, 2014, Elliott et al., 2009, Imai and Khanna, 2016, Gichoya et al., 2022], as required by Assumption 1. Lastly, we would like to point out that if Assumption 1 holds for $i \in \\{0,1\\}$ and $j = 1$ (only), then all the results for $\Delta_{TPR}$ hold (similarly for $j = 0$ and $\Delta_{FPR}$). Thus, if one only cares about the equal opportunity definition of fairness, the assumption only needs to hold for $i \in \\{0,1\\}$ and $j = 1$, and one only needs to add this constraint to the post-processing algorithm. Limitations: When the features $X$ are not predictive of $A$, yielding a poor $\hat{A}$, Assumption 1 may not hold. In these settings, we can still derive tight upper bounds. However, they are no longer linear in the group estimated TPRs and FPRs. Thus, the characterization of classifiers with minimal worst-case bounds is more involved, and the methods to minimize these violations will likely not be as succinct and elegant as the results we presented. We will include this comment in the revised version of our manuscript. Lastly, we performed additional experiments to demonstrate that Assumption 1 holds in various settings. We refer the reviewer to the global author rebuttal for the details of these experiments along with the results in the attached **pdf**. **Comment/Concern 2: Post-processing method in the real world** Answer: We envision our method being used in a scenario where one does not have access to $A$ but requires $\hat{Y}$ to have an Equalized Odds (or its relaxations) violation no greater than a budget $\epsilon$ (for example, as determined by regulatory agencies). In this scenario, with the use of $\hat{A}$ and our Theorem 1, one can still compute the worst-case fairness violation of $\hat{Y}$. Furthermore, by means of Theorem 2 and our post-processing algorithm, one can reduce this quantity. Note, this provides a provable certificate, since if the worst-case fairness violation is less than $\epsilon$, so is the true fairness violation. **Comment/Concern 3: Extension to other definitions of fairness** Answer: Our results apply to equalized odds, equal opportunity, predictive equality, and can be trivially extended to demographic parity.
While this is a restriction, note that these are the main fairness definitions considered in many previous works. Extending our results to other definitions, such as causal or individual fairness, is challenging for a variety of reasons. For example, in causal fairness, a key quantity of interest is the average causal effect (ACE), $E[Y(A=1)] - E[Y(A=0)]$, where $Y(A)$ is a counterfactual quantity. Under certain assumptions on the data generating process, techniques from causal inference can be used to identify the ACE [Pearl, 2009]. However, in all such scenarios, the treatment $A$ (i.e. the sensitive attribute) is **observed**. In our setting, $A$ is not observed, and so how to apply techniques from causal inference remains an unclear but interesting research direction. As another example, individual fairness requires that "similar" individuals are classified "similarly" by $\hat{Y}$. Similarity is measured with respect to a fair metric, $d_x$. If $d_x$ does not involve $A$, then one can easily check if individual fairness holds. If $d_x$ does involve $A$, then evaluating individual fairness relies on how $d_x$ changes when proxies $\hat{A}$ are used in place of $A$. These are indeed interesting research avenues that will constitute future work. **Comment/Concern 4: Training a classifier from scratch to have minimal worst-case fairness violations** Answer: Empirically, one could do this via an in-processing training method that minimizes the expected loss of a classifier subject to the constraints we provide in Assumption 1 and Theorem 2. In doing this, in principle, one would end up with a classifier $\tilde{Y}$ with minimal worst-case fairness violations and an expected loss potentially smaller than that of the classifier $\overline{Y}$ returned by our post-processing algorithm. This is because in the post-processing algorithm, the set of classifiers one is optimizing over is constrained by the initial predictor $\hat{Y}$, meaning $\overline{Y}$ cannot have a smaller expected loss than $\hat{Y}$. The in-training optimization problem would have no such restriction. However, note that our post-processing algorithm is computationally inexpensive and provably optimal, whereas the in-processing counterpart could be computationally expensive and come with no guarantees (e.g. it would result in a non-convex optimization problem if the classifier is a deep neural net). We will add these discussions to the revised version of our manuscript. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed rebuttal. All my questions have been addressed.
null
null
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for the insightful questions about our work. All the reviewers had questions about and wanted to see a verification of Assumption 1. For clarity, we re-write Assumption 1 below: For $i,j \in \\{0,1\\}$ the classifiers $\widehat{Y} = f(X)$ and $\widehat{A} = h(X)$, with errors $U_i = P(\hat A = i, A \neq i)$, rates $r_{i,j} = P(\hat A = i, Y = j)$, and conditional errors $\hat {\alpha}\_{i,j} = P(\hat Y = 1 | \hat{A}=i, Y = j )$, satisfy $$\frac{U_i}{\hat{r}\_{i,j}} \leq {\hat \alpha}\_{i,j} \leq 1 - \frac{U_i}{\hat{r}\_{i,j}}$$ We first would like to point out that the assumption can be relaxed, meaning that if it holds for $i \in \\{0,1\\}$ and for $j = 1$ (only), then all the results for $\Delta_{TPR}$ still hold. Similarly, if it holds for $j = 0$, the results for $\Delta_{FPR}$ hold true. As a result, if one only cares about the equal opportunity definition of fairness, one only needs a relaxation of the assumption to hold (it just needs to hold for $i \in \\{0,1\\}$ and $j = 1$) and only this constraint needs to be added to the post-processing algorithm. In the attached **pdf**, the results in Table 1 verify that Assumption 1 (or its relaxations) indeed holds for various datasets. We also present Figures 1 and 2, which extend our results to these new datasets. We will discuss the table and figures in more detail, but we first provide a brief summary of the new experiments below: **FIFA Experiment** (Adopted from Awasthi et al. [2021]): We use the FIFA 20 player dataset and aim to predict whether a soccer player’s wage is above ($Y = 1$) or below ($Y = 0$) the median wage based on the player’s age and their overall attribute. We consider nationality as the sensitive attribute $A$ and use the player's name to predict this attribute. Assumption 1 holds for various pairs of nationalities, and in Table 1 we demonstrate this for the Argentine/English and France/Spain pairs. **ACSPubCov Experiment** (Adopted from Ding et al. [2021]): The task is to predict, using 2018 state census data, whether a low-income individual, not eligible for Medicare, has coverage from public health insurance. We consider sex to be the sensitive attribute $A$. We determine that for 28 out of 50 states, Assumption 1 holds for $\Delta_{TPR}$, while the assumption for $\Delta_{FPR}$ does not hold for any. We showcase that Assumption 1 holds for the state of Illinois. We now discuss the figures and tables. **Table 1** We provide Table 1, which demonstrates that Assumption 1 (or its relaxations) holds for a variety of experiments/datasets. The *Actual Value* column of Tables 1a and 1b lists the ${\hat \alpha}\_{i,j}$, and the left and right columns list $\frac{U_i}{\hat{r}\_{i,j}}$ and $1 - \frac{U_i}{\hat{r}\_{i,j}}$, respectively, for all the datasets. From Table 1, it is clear that the ${\hat \alpha}\_{i,j}$ lie in between $\frac{U_i}{\hat{r}\_{i,j}}$ and $1 - \frac{U_i}{\hat{r}\_{i,j}}$ as is required by Assumption 1. Notice that we only list the estimated group TPRs, ${\hat \alpha}\_{i,1}$, for the ACSPubCov Illinois dataset. This is because, as mentioned, Assumption 1 fails to hold for $\Delta_{FPR}$ for all 50 states. The reason for this is that, while $\widehat{A}$ is indeed very accurate (e.g. $U \approx 0.08$ on the Illinois data), the label classifier $\widehat{Y}$ has very low estimated group FPRs ${\hat \alpha}\_{i,0}$.
In conclusion, these new results show that our assumption holds in several real datasets and cases, while also showcasing when (and why) it doesn't. **Figures 1 and 2** In Figure 1, we present the results for the FIFA 2020 player dataset restricted to English and Argentine nationalities. Similar to the experiments in the manuscript, we learn a sensitive attribute predictor $\hat{A} = h(X)$, which achieves an error of $U = 0.025$ with $U_1 \approx 0.02$ and $U_0 \approx 0.005$, and we learn a label classifier $\hat{Y} = f(X)$. On a test data set, we generate our predictions $\hat{Y} = f(X)$ and $\hat{A} = h(X)$ to yield a dataset over $(\hat{A}, Y, \hat{Y})$ and use the bootstrap method to generate 1000 samples from this dataset; for each sample, we perform the correction algorithms to yield $f_{\widehat{\textrm{fair}}}$ and $f_{\textrm{opt}}$. Figures 1a and 1b display $B\_{TPR}$ and $\Delta\_{TPR}$ (not identifiable) for the various classifiers. We additionally add Figures 1d and 1e, which are normalized histograms constructed from the bootstrapped samples of $B\_{TPR}(f_{\widehat{\textrm{fair}}}) - B\_{TPR}(f_{\textrm{opt}})$ and $B\_{FPR}(f_{\widehat{\textrm{fair}}}) - B\_{FPR}(f_{\textrm{opt}})$. Observe how both quantities are greater than or equal to 0, indicating that $B\_{TPR}(f_{\textrm{opt}})$ and $B\_{FPR}(f_{\textrm{opt}})$ are smaller than $B\_{TPR}(f_{\widehat{\textrm{fair}}})$ and $B\_{FPR}(f_{\widehat{\textrm{fair}}})$ respectively. This indicates that our post-processing method is better than simply correcting with respect to $\hat A$ at reducing the worst-case fairness violation (in fact our method is optimal, as noted in Theorem 2). Lastly, Figure 1c shows that the expected loss increases for $f_{\textrm{opt}}$, but that the increase is minimal. We perform the same experimental process on the ACSPubCov Illinois dataset, and the results for the TPR quantities are depicted in Figure 2. Note that we do not show FPR-related quantities because Assumption 1 fails to hold. Pdf: /pdf/c1879d58e55ddfb9542ef3e4bfe9fcea7818608b.pdf
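As a quick aid for readers who want to reproduce the verification above on their own data, the following is a minimal sketch of checking Assumption 1 from finite-sample estimates. This is our illustration; the function and variable names are hypothetical and do not come from the paper or the attached pdf.

```python
import numpy as np

def check_assumption_1(A_hat, A, Y, Y_hat):
    """Empirically check Assumption 1 from 0/1 arrays: predicted attribute
    A_hat, true attribute A, true label Y, and label prediction Y_hat.

    Returns {(i, j): (lower, alpha_hat_ij, upper, holds)} for the bound
    U_i / r_hat_{i,j} <= alpha_hat_{i,j} <= 1 - U_i / r_hat_{i,j}.
    """
    results = {}
    for i in (0, 1):
        U_i = np.mean((A_hat == i) & (A != i))       # attribute-classifier error mass
        for j in (0, 1):
            r_ij = np.mean((A_hat == i) & (Y == j))  # r_hat_{i,j} = P(A_hat = i, Y = j)
            mask = (A_hat == i) & (Y == j)
            alpha_ij = Y_hat[mask].mean()            # P(Y_hat = 1 | A_hat = i, Y = j)
            lo, hi = U_i / r_ij, 1.0 - U_i / r_ij
            results[(i, j)] = (lo, alpha_ij, hi, lo <= alpha_ij <= hi)
    return results
```

Checking only the $(i, 1)$ entries corresponds to the relaxation for $\Delta_{TPR}$ mentioned in the rebuttal.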
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Likelihood-Based Diffusion Language Models
Accept (poster)
Summary: This paper concentrates on training a Diffusion LM. The authors chose the Variational Lower Bound loss used in Diffusion LM as the core of the proposed framework, named Plaid, along with some improvements to the training procedure and architecture of the model (usage of categorical reparametrization, use of a trained categorical mapping for learning the conditional likelihood term, and usage of the self-conditioning method). The authors evaluated the proposed changes in an ablation study against a reimplemented CDCD, measured by NLL, and derived scaling laws for their model. They trained the Plaid 1B model, outperforming GPT-2 124M in NLL. Also, the authors provided samples from the Plaid 1B model and showed the ability of the model to perform conditional and unconditional sampling. Strengths: - The authors proposed improvements on top of the VLB loss, which appear to be useful as measured by NLL on test data within the ablation study. - Categorical reparametrization is an exciting way to mix the strengths of the score interpolation objective from CDCD with the VLB loss, which does not impose heuristic constraints on training (e.g., normalization of the embedding matrix). - The paper is well organized and mainly clear (see weaknesses section) Weaknesses: Experiments: - The specific weightings A and B in Section 4.1 are too artificial. It would be nice to have an explanation in the paper of why such weightings were selected. - Information regarding pre-trained embeddings in Section 4.1 is necessary. Stating "all models use fixed embeddings obtained from a previously-trained known-good model" does not provide any helpful information regarding the details of the experiment. - There needs to be more information regarding the human resources used for crowd working in Section 4.1, while the authors claimed that Human Subjects are N/A. - The authors stated that the results from Section 4.1 suggest that VLB performs at least comparably to other choices. However, I fail to see how such a conclusion can be drawn from this experiment. To me, it states that if one tries to replace a theoretically justified weighting with a specific hand-crafted one, the quality of samples will drop. If that was the aim of this experiment, other parts of the VLB could also be changed to understand how important they are. - The evaluation in the other experiments is poor. Plaid is compared to CDCD and GPT-2 only by NLL, which does not show any information on the quality of samples. E.g., removing self-conditioning leads to an increase of NLL by 0.09 points – how bad is that? Is this a minor reduction of sample quality or not? This is a critical point for me, since if we want to use Plaid for text generation, it is not important what NLL values it achieves on test data. It is much more important how good samples from the model are. For autoregressive models, it is conventional to expect that lower NLL leads to better samples, but is there such a correspondence for Diffusion LMs (even trained with VLB loss)? I strongly believe there is not. E.g., if one dramatically reduces the noise scale, making embeddings easily distinguishable, then it is easy to make NLL equal to 0, while samples will be poor. It is necessary to include a numerical evaluation of model samples (e.g., text repetition, evaluating perplexity of samples with a third-party language model, and others). - The baselines used for the experiments are also limited. E.g., if Plaid is built on top of the VLB from Diffusion LM, it is necessary to include information on its performance in the paper.
- The list of insights on the performance of Plaid could be more extensive. E.g., sampling from Plaid could be performed with a different number of steps. How does sample quality differ as this number changes? How does the number of sampling steps affect inference speed? The only numerical information regarding Plaid that is available is NLL on test data, which is not sufficient. Motivation: - I see the leitmotif of the first three sections as being that the VLB allows us to reduce dependency on heuristic design choices (e.g., the authors claim other methods to be workarounds (L109)). Though the design choices of other works are heuristic, the VLB framework seems to be much more complex and harder to implement (e.g., the usage of double-precision parameters for everything except transformer layers and the usage of parameter-specific learning rates indicate that making Plaid work stably required a lot of effort). Why should a practitioner use a method that requires parameter-specific LRs and spend more time selecting the best-performing LR over a simple yet heuristic method? Doing so could be justified by higher sample quality, but in its current state, the paper lacks details on sample quality. - Considering the usage of double precision and parameter-specific learning rates, stating that the likelihood-based approach "simplifies training" (L167) does not seem to be true. Reproducibility - The paper needs to include information regarding the training infrastructure and the time necessary to reproduce the experiments. Also, the authors claimed "Yes" within the "compute" box for submission while not providing information regarding compute. - The self-conditioning used in the code is not common. As I understand it, self-conditioning is only added for specific token positions with some offset (line 194 of train.py). What is the motivation for doing so? Are there any other works that used such a scheme? If so, I do not see a citation. If not, I need sufficient information on this scheme in the paper. - I do not see the trained model in the supplementary materials, though the authors claim that the model is publicly available. Narrative: - The paper feels like a merge of two short papers: the authors proposed architectural improvements for training a Diffusion LM with the VLB objective without precisely studying the limits of the Plaid framework, and studied the scaling laws of Plaid. These scaling laws could have been an interesting complement to the paper if there were more experiments on Plaid's performance and behaviour. But as it stands, the scaling laws experiments look like studying the scaling laws of a model we do not understand. While at this point I vote for rejection of the paper, I still see potential in this work, since some of the architectural choices seem interesting to me. I would happily increase the score if the drawbacks were fixed. The first possible changes include adding more metrics for Plaid, since the NLL value alone does not provide enough information on Plaid's performance. Once these metrics are added, including more experiments to provide more insights into Plaid's behavior would be highly beneficial. E.g., the authors could study what the trained posterior $f_{\theta}(z)$ looks like (its statistics, etc.). Technical Quality: 1 poor Clarity: 3 good Questions for Authors: Please refer to the points in the weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 1 poor Presentation: 3 good Contribution: 2 fair Limitations: L320 explicitly states, "Our ablations show that maximizing likelihood does not substantially harm sample quality," though this claim is not supported. The ablation experiments were performed with only NLL on test data, which does not include any sample-quality information. The only experiment that evaluated sample quality was the comparison with hand-crafted weighting functions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for a very thorough review. We’d like to respond to some of your points below: > For autoregressive models, it is conventional to expect that lower NLL leads to better samples, but is there such a correspondence for Diffusion LMs [...]? I strongly believe there is not. E.g., if one dramatically reduces the noise scale [...] then it is easy to make NLL equal to 0, while samples will be poor. This is not correct: we use the discrete NLL for training and eval, which is the same quantity that autoregressive LMs use. Your claim would apply to a continuous NLL on the embeddings, which we do not consider in this work. Rescaling the embeddings would not minimize our loss, as the reconstruction term p(x|z_0) in eqn (3) would increase. More generally, it is impossible to "cheat" the discrete NLL: a generative model that minimizes the discrete NLL also minimizes the KL divergence, since $$\mathbb{E}_{p_{\text{data}}}[-\log p_\theta(x)] = H(p_{\text{data}}) + \mathrm{KL}(p_{\text{data}} \| p_\theta),$$ and the unique minimizer of the KL is the original data-generating distribution. This is a main reason we emphasize NLL-based training and evaluation in the paper. We have updated the paper writing to clarify this. > Plaid is compared to CDCD and GPT-2 only by NLL, which does not show any information on the quality of samples. We have performed a human study on Amazon Mechanical Turk, comparing unconditional samples from Plaid-1B to samples from GPT-2 124M. The turkers find the samples comparable, with the 95% CI for the win rate of Plaid ranging from 0.47 to 0.55, with a mean win rate of 0.51. We find this to be consistent with our likelihood evaluations. Regarding CDCD, we would have liked to do a sample comparison, but we corresponded with the CDCD authors and they were unable to provide us with CDCD-generated samples. > The specific weightings A and B in Section 4.1 are too artificial. It would be nice to have an explanation in the paper of why such weightings were selected. We will update the final draft of the paper with a more detailed discussion. We selected these heuristics because they downweight small noise levels relative to the likelihood weighting, which is what has been used successfully in image diffusions (we adapt these schedules to the text setting instead of using them directly because they assume image inputs, which are scaled differently than our text embedding vectors). One advantage of our likelihood-based formalism is that the weightings are specified by the objective itself, rather than through a heuristic. > Information regarding pre-trained embeddings in Section 4.1 is necessary. Stating "all models use fixed embeddings obtained from a previously-trained known-good model" does not provide any helpful information regarding the details of the experiment. The fixed embeddings were obtained by training a Plaid model and validating that it attains strong likelihoods and generates coherent samples. We then use the embeddings from this Plaid model in the ablations. We will provide training hyperparameters and evaluations of this model in the final draft of the paper. > There needs to be more information regarding the human resources used for crowd working in Section 4.1, while the authors claimed that Human Subjects are N/A. We apologize for the oversight. We agree, and will update the paper draft to explain our human experiment protocol in detail. The overall experimental design follows a blinded randomized A/B test. The details are as follows: We recruited crowd workers using Amazon Mechanical Turk (selection criteria: US location, 95% HIT approval rate, >1000 HITs approved).
Workers were shown two random samples in random order and given the following prompt: “Given two short pieces of text, decide which is more coherent overall (i.e. has the fewest grammar mistakes and makes the most sense).” Workers were paid $0.15 per task, which we estimated to take less than 30 seconds on average. > Though the design choices of other works are heuristic, the VLB framework seems to be much more complex and harder to implement (e.g., the usage of double-precision parameters for everything except transformer layers and the usage of parameter-specific learning rates indicate that making Plaid work stably required a lot of effort). Double precision and parameter-specific LRs were convenience decisions in our implementation. We will update the camera-ready with the following result: our method trains stably and performs well when single-precision floats and a fixed learning rate for all parameters are used. > The self-conditioning used in the code is not common. Our self-conditioning implementation directly follows the original work (Chen et al. 2022). We apply self-conditioning to a random subset of each batch (again following Chen et al.). The line you reference is an implementation detail, where we choose that subset by picking an offset in {0,1,2,3} and picking `batch[offset::4]`, noting that the examples are randomly ordered within the batch. > The paper needs to include information regarding the training infrastructure and the time necessary to reproduce the experiments. Also, the authors claimed "Yes" within the "compute" box for submission while not providing information regarding compute. We provide FLOP counts for all models trained. We will additionally update the paper with the actual hardware used and wall-clock times: all of our small runs take less than 24 hours on a single A100, and Plaid 1B took 30 days on 8 A100s. > I do not see the trained model in the supplementary materials, though the authors claim that the model is publicly available. The model is currently publicly available, but we have redacted the link in the paper in order to comply with anonymity guidelines. We will un-redact the link in the camera-ready. > The paper feels like a merge of two short papers The scaling law study, which is the first of its kind for diffusion models, was made possible only by the VLB framework: smooth power-law scaling is a unique property of likelihoods (see Kaplan et al. 2020). --- Rebuttal Comment 1.1: Comment: Thank you for the detailed answer. I want to apologize for the misunderstanding regarding the significance of NLL in the setup of your model. It indeed provides useful information on the likelihood of the data under Plaid. While you have provided an upper bound on the NLL, it is clear that the tightness of this bound depends on the number of samples of $t$. Am I understanding correctly that during generation you could still use a different number of generation steps to follow the path from $z_1$ to $z_0$? If so, it seems like this number of steps will affect the final quality metrics. How does this quality depend on the number of steps? You have provided a human evaluation of the comparison with GPT-2, but what if we use two times fewer steps than were used in this experiment? What if we use two times more steps? I believe that providing such information on the quality of Plaid samples as a function of step count would significantly improve the paper, such that I will have no more concerns about your submission.
Also, while for Plaid it is valid to estimate the NLL, as you pointed out in your answer, for CDCD this estimation does not provide useful information regarding the performance of CDCD; yet, in Table 1, you still compared Plaid to CDCD purely based on NLL. How was this evaluation performed? If you have a reproduction of CDCD (I understand that full reproduction is not possible), then you could use it to compare samples from Plaid and CDCD using automatic metrics. --- Reply to Comment 1.1.1: Comment: Thank you for your response! Regarding step counts: We train and evaluate NLLs using equation 5 (via equation 3), which is the infinite-timestep limit of the NLL bound, so our NLL evaluation results do not depend on a chosen number of steps. When generating samples, we are similarly interested in the infinite-step limit behavior, so we use a naive sampling algorithm (ancestral sampling) combined with a much larger number of steps than we believe to be necessary (4000). To confirm that 4000 steps approximates the infinite-step limit, we will run a human study comparing samples generated with 2000 and 4000 steps and include it in the camera-ready version. Given that many more efficient sampling algorithms for diffusion models exist (e.g. DDIM, 2nd-order ODE solvers, diffusion distillation), we focus in this paper on building the strongest models we can without inference compute constraints. Investigating these sampling algorithms with Plaid would be exciting future work. Regarding CDCD: Even though we don’t use it for training, we can compute an NLL bound for CDCD using equation (3) and, as with the Plaid models, it is a correct bound on the discrete data log-likelihood. We developed, debugged, and tuned the hyperparameters of our CDCD reimplementation against the NLL bound, so we don’t think it would be fair to CDCD to evaluate that model on sample quality. We will update the draft to make this point more clear.
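As an aside for readers puzzled by the `batch[offset::4]` detail discussed in this thread, the described subsetting could look roughly like the sketch below. This is our illustration under the stated assumptions, not the authors' train.py; the `denoiser(x, self_cond)` interface is hypothetical.

```python
import torch

def forward_with_self_conditioning(batch, denoiser, num_groups=4):
    """Apply self-conditioning to a random strided subset of the batch.

    A random offset in {0, ..., num_groups - 1} plus a stride of num_groups
    picks a fixed fraction of the batch; since examples are randomly ordered
    within the batch, this amounts to choosing a random subset.
    """
    self_cond = torch.zeros_like(batch)           # default: no self-conditioning
    offset = int(torch.randint(num_groups, (1,)))
    idx = torch.arange(offset, batch.shape[0], num_groups)
    with torch.no_grad():
        # Gradient-free first pass produces predictions for the chosen subset.
        self_cond[idx] = denoiser(batch[idx], torch.zeros_like(batch[idx]))
    # Second pass: the subset is conditioned on its own first-pass output.
    return denoiser(batch, self_cond)
```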
Summary: This paper formalises variational diffusion models for text and makes several algorithmic contributions to such models. The authors show that these models are able to model the likelihood of text well, and they run several other evaluations to validate the likelihood approach, quantify the computational requirements, perform ablations, and evaluate a method for doing conditional generation. Strengths: I found this perspective on diffusion enlightening and was very interested to see how to use it to model text. The contributions are substantial and the evaluations are informative. Weaknesses: This paper covers a lot of material for its length, which makes it challenging to read. Several references are made to Appendices, but I couldn't find any; these would have been useful. The model specifications in sections 3.3 - 3.5 are very terse; perhaps more citations or a longer explanation in an appendix would be helpful to many readers. Section 3.2 seems related to simplex diffusion, so some additional citations would be appropriate. The validation of the likelihood-based approach (sec 4.1) only addresses whether the model puts high probability on good texts, not whether it covers the full distribution of texts accurately. The latter is addressed by the likelihood evaluations themselves, but this is exactly what the model is trained to do. It would be better to include previous measures of generation quality, such as both forward and reverse cross-entropy (https://arxiv.org/abs/1804.07972). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Where are the Appendices? Would it be possible to run a forward-cross-entropy evaluation on the generated texts, to see if they cover the full distribution? Suggestion: The caption of table 2 does not define what the numbers are. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Only briefly mentions computational issues. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review. > [Appendices] Our appendices are included in the supplementary material that is uploaded. > The model specifications in sections 3.3 - 3.5 are very terse; perhaps more citations or a longer explanation in an appendix would be helpful to many readers. Section 3.2 seems related to simplex diffusion, so some additional citations would be appropriate. Our main-text exposition was limited by the page limit, but we agree that additional material on S3.3-3.5 and the simplex diffusion connection could be helpful, and we intend to revise our manuscript to expand these points. > [Forward likelihood and evaluations] Our likelihood evaluations are exactly E_{p_data}[log(p_model(x))], which is the forward cross-entropy that you have asked for, and Plaid shows substantial gains on these evaluations. We agree that additional evaluations could be helpful, especially to measure the quality of samples, and so we performed an Amazon Mechanical Turk study on generated samples and found Plaid-1B to be comparable to GPT-2 124M, with a mean win rate of 0.51 and a 95% CI of 0.47-0.55. > Suggestion: The caption of table 2 does not define what the numbers are. > Only briefly mentions computational issues. We will correct the caption of table 2 and include a discussion of compute details and challenges.
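For concreteness, the forward cross-entropy discussed in this exchange is simply the average held-out negative log-likelihood. Below is a minimal sketch; the `model_log_prob` callable is a hypothetical stand-in for any model's discrete sequence log-likelihood, and the per-token normalization is our choice of convention.

```python
def forward_cross_entropy(model_log_prob, test_sequences):
    """Monte Carlo estimate of E_{p_data}[-log p_model(x)] in nats per token.

    model_log_prob(x): log-likelihood (in nats) of a full token sequence x.
    Dividing the result by ln(2) would give bits per token instead.
    """
    total_nll = sum(-model_log_prob(x) for x in test_sequences)
    total_tokens = sum(len(x) for x in test_sequences)
    return total_nll / total_tokens
```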
Summary: This paper aims to close the gap between autoregressive and diffusion-based language models on standard perplexity-based language modeling benchmarks. To achieve this goal, the paper proposes several algorithmic improvements for maximum-likelihood training and studies the scaling laws of the diffusion models to find optimal training regimes. Finally, based on the improvements, the paper shows results with a 1B diffusion-based language model that outperforms GPT-2 in perplexity, with various analyses. Strengths: Solid experiments and analysis. The derived recipe for training a diffusion-based LM, which differs substantially from the usual autoregressive recipe, is particularly meaningful. The explored design space and open-sourced model are a good contribution to the community for the further development of diffusion-based LMs as an alternative line of foundation model research. Weaknesses: The basic algorithm mostly follows VDM (Kingma et al.), and most of the proposed improvements can also be found in the existing literature. Therefore, the novelty on the algorithmic side is somewhat limited. Part of the description is confusing, and it is unclear what exactly the authors do (see Q1). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: In line 111, the paper mentions that the “Plaid loss function is a bound on discrete data”. Why is this the case? Do you mean the objective in Eq. 5, where x is discrete or x is a one-hot vector? In line 63, the paper mentions that each x will be transformed into embedding vectors with an invertible token-wise embedding function Embed(.). Is this invertible function the learnable embedding mentioned in line 111? Why is it invertible? Why is there no $\tilde{x}$ in the final objective in Eq. 5? For the categorical reparameterization, how do you keep $\tilde{x}$ as the original embedding? Will there be some mismatch? How does this relate to preventing the model from memorizing the embedding vectors? I can follow the method but find it confusing how it is motivated. Can you also do the same in the target embedding (e.g., label smoothing)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The paper does not seem to discuss limitations. Since the findings on diffusion-based LMs seem to suggest that training them is much harder and may require more resources than their autoregressive counterparts, it would be interesting to discuss future steps or potential combinations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! We'd like to respond to a few points below: > The basic algorithm mostly follows VDM (Kingma et al.), and most of the proposed improvements can also be found in the existing literature. Therefore, the novelty on the algorithmic side is somewhat limited. We are the first to find an algorithmic recipe for compute-efficient likelihood-based text diffusions. As you correctly point out, this recipe leverages lots of prior work, but our results couldn’t be achieved just by carefully implementing prior work: no prior text diffusion work has achieved any nontrivial likelihoods on any standard benchmark, and our recipe achieves more than a 2x improvement in compute efficiency over our CDCD implementation (see Table 1). We have updated our paper draft to make this more explicit in the introduction. > In line 111, the paper mentions that the “Plaid loss function is a bound on discrete data”. Why is this the case? Do you mean the objective in Eq. 5, where x is discrete or x is a one-hot vector? Eq 5 should have $\tilde{x}$ instead of $x$; we will correct this. The overall loss (eq 3) is indeed a bound on the discrete data: the second term of eqn (3) measures the conditional likelihood of the discrete tokens given the final (continuous) embedding in the diffusion process. > In line 63, the paper mentions that each x will be transformed into embedding vectors with an invertible token-wise embedding function Embed(.). Is this invertible function the learnable embedding mentioned in line 111? Why is it invertible? Why is there no $\tilde{x}$ in the final objective in Eq. 5? We don’t need the embedding matrix to be invertible in order for the embedding function to be invertible, since the embedding function is only defined over the set of vertices of the vocabulary simplex and not the interior of the vocabulary simplex. Our embedding function is therefore indeed invertible even though our embedding matrix is low-rank. We will update the paper with a more careful discussion of this. Eqn 5 should indeed have a $\tilde{x}$ in place of $x$ (see above) and we will fix this.
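A toy illustration of the invertibility point above (ours, under the stated assumption that the embedding rows are distinct): because Embed(.) is defined only on the one-hot vertices, a rank-16 embedding matrix still admits exact inversion by lookup against its finite set of rows.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, emb_dim = 32_000, 16                 # low rank: emb_dim << vocab_size
E = rng.standard_normal((vocab_size, emb_dim))   # rows are almost surely distinct

def embed(token_id: int) -> np.ndarray:
    # Defined only on token ids (simplex vertices), not on the simplex interior.
    return E[token_id]

def invert(vec: np.ndarray) -> int:
    # Exact inversion: match against the finite set of embedding rows.
    return int(np.argmin(np.linalg.norm(E - vec, axis=1)))

assert invert(embed(12_345)) == 12_345           # round trip succeeds despite rank 16
```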
Summary: This paper proposes a variational diffusion language model based on the variational diffusion model. The work is related to prior work such as combining VAEs with language models. Strengths: The application of variational diffusion models to language modeling makes sense, and the experimental results verify the effectiveness of the proposed model. However, the proposed model has little novelty, being similar to the application of VAEs to language models. Weaknesses: The proposed model is an autoregressive language model and adds more parameters compared with existing language models. At the same time, the sampling time for each token becomes expensive. It is known that autoregressive language models generate each token slowly, and many works try their best to construct non-autoregressive language models based on diffusion models. While their results are not perfect, these works can speed up the generative process. Thus, in my opinion, this paper does a simple combination to improve language model performance, which is not of much worth. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see the weaknesses Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The sampling time is expensive. The model's parameter count is large. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We would like to clarify some points in our work, as the review seems to have some misconceptions. - our work does use a variational characterization of diffusion models, but has very little in common with VAEs. - our work does not propose an autoregressive language model - finally, our work develops new changes to the diffusion model architecture and proposes a scaling law for diffusions. What our work does do is build a new class of likelihood-based diffusions that have a clear, principled objective and use this as a way to build computationally efficient training for large diffusion models. --- Rebuttal Comment 1.1: Comment: Sorry for the misunderstanding. My other concern is: why not use BLEU to evaluate the model's performance?
null
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper explores the likelihood-based training of diffusion language models as an alternative to autoregressive models like GPT-2. Several algorithmic improvements are proposed for maximum likelihood training of diffusion LMs, including a learned noise schedule, learned embeddings, categorical reparameterization, an output prior, a learned conditional likelihood, and self-conditioning. Scaling laws are analyzed to derive a compute-optimal regime for training diffusion LMs that differs substantially from autoregressive LMs. The methods enable training Plaid 1B, a 1 billion parameter diffusion LM that outperforms GPT-2 124M in likelihood on benchmarks. The qualitative analysis shows that Plaid 1B generates fluent unconditional samples and demonstrates controlled generation abilities. Strengths: - The paper shows that diffusion language models can achieve non-trivial likelihoods on standard language modeling benchmarks, outperforming a widely used autoregressive model like GPT-2 124M. This helps establish diffusion models as a promising alternative to autoregressive models. - The analysis of scaling laws and the derivation of a compute-optimal training regime are insightful. Following this recipe likely enabled training a large model like Plaid 1B. - The trained model, if released, helps move the field forward, demonstrating fluent unconditional and controlled generation results. Weaknesses: - More analysis could be provided on the sample quality of the Plaid 1B model beyond likelihood benchmarks. Samples from the model should be evaluated quantitatively. The paper focuses on likelihood, but other metrics, like human evaluation of samples, could better highlight the benefits of diffusion models over autoregressive ones. - The ablations studying design choices are limited to a single model scale (1B). Ablations at multiple scales could better validate the conclusions. - There is a lack of experimental support (or theoretical explanation) for design choices such as the pre-training sequence length, the word embedding dimension, the ratio of examples used to compute the conditional loss, etc. - Only CDCD is compared; additional comparison to other diffusion language models (such as Diffusion-LM [1]) would add context about progress in this area. [1] https://arxiv.org/abs/2205.14217 Technical Quality: 3 good Clarity: 3 good Questions for Authors: - More detailed derivations are needed from Eq. (4) to Eq. (5) when $T\rightarrow \infty$. - Why is Section 3.1 titled "learned embeddings", given that prior diffusion models also use learned embeddings? What is the meaning of the "learned embedding" ablation in Table 1? Does it use fixed pretrained embeddings? - How do you choose the ratio of examples between the two terms in Section 3.4? Why use $\sqrt{\frac{Var(L_{\infty})}{Var(\log p_{\theta})}}$? - Why use an embedding dimension of 16 when training? Would it be too small to represent the information of the tokens, with a vocabulary size of 32K tokens? - Plaid 1B is trained using a sequence length of 1024. Why use 1024, and what would happen if we wanted to use a shorter or longer sequence length, without considering the memory consumption? Some preliminary experiments show that the longer the training sequences are, the harder it is for models to converge. Did you encounter this, and how did you solve it? - Plaid 1B is trained using a sequence length of 1024; does this mean that you chunk the pretraining text data into 1024-token pieces and batch them? If so, how do you determine the [EOS] of a sampled sentence?
You truncate a small random subset of examples to shorter sequence lengths, but how do you batch these samples? Did you use a [PAD] token to pad the sequences? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have not addressed the limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful comments. We give some detailed responses to your questions below. > [Sample evaluation of Plaid 1B] We have performed a study on Amazon Mechanical Turk, comparing samples from Plaid-1B to samples from the GPT-2 124M model. We generate unconditional samples of length 128 from both GPT-2 and Plaid 1B, and repeatedly ask Mechanical Turk crowdworkers to choose the most coherent sample from a pair (one GPT-2, one Plaid, blinded and randomly ordered). The turkers find the samples comparable, with the 95% CI for the win rate of Plaid ranging from 0.47 to 0.55, with a mean win rate of 0.51. We find this to be consistent with our likelihood evaluations. > [Ablations at different scales] We agree that this would be useful follow-up; however, we were limited in our computational resources and focused on scaling up the main diffusion model (Plaid 1B) first to show the validity of our design choices at scale. > [Experimental support of other design choices] Hyperparameters for Plaid were chosen on small-scale hyperparameter tuning runs, consistent with our scaling approach. We will include a discussion of these choices and details in our revision. > [Diffusion-LM comparison] We have performed Diffusion-LM comparisons (using the likelihood-based Diffusion-LM in Appendix F of the Diffusion-LM paper) and found the Diffusion-LM likelihood performance to be worse than our worst and smallest models. We did not initially include these results in the paper as Diffusion-LM did not focus on likelihoods; we will include them in the final version of our paper. > [Derivations eq4-5]. These derivations follow from the Kingma et al. variational diffusion formalism. We will make that clear in our revision. > [Learned embeddings] We called this section learned embeddings, as we wanted to discuss how a likelihood-based formalism changes how we learn embeddings (i.e. we do not need heuristics to prevent the representation from collapsing). We will make this clearer. > How do you choose the ratio of examples between the two terms in Section 3.4? This is the closed-form solution which minimizes the variance of the sum of the two terms. We will update the paper to clarify this. > [Embedding dim] Both our generated samples and our likelihood evaluations confirm that the 16-dimensional embeddings remain sufficiently powerful to capture a vocabulary of 32k tokens. We were similarly surprised, but optimizing against the likelihood (which is defined in the original discrete space) made it clear that low-dimensional embeddings were computationally efficient with little quality loss in the model. > [Sequence length] We trained with 1024 as a reasonable standard context window length that would be comparable to GPT-2 (which also uses sequence length 1024). In earlier experiments, we also tested context lengths 256 and 512, and found them to work fine with no hyperparameter changes needed. We suspect that longer sequence lengths could also be used without any changes. > [Training for sequences] We take random 1024-token sequences from OpenWebText, including end-of-sequence tokens at the ends of documents, and continuing on to the next document. This approach makes it so that every batch has the same length, and no padding is necessary. For the random subset with shorter sequence lengths, we don't need to use [PAD] tokens because our implementation supports variable-length sequences in the same batch directly.
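For readers wondering what "the closed-form solution which minimizes the variance of the sum of the two terms" means concretely, one plausible reading (our reconstruction under stated assumptions, not a quote from the paper) is the standard optimal-allocation argument: if $n_1$ examples in a batch of $n$ estimate the diffusion term and $n_2 = n - n_1$ estimate the conditional term, then minimizing the variance of the combined estimator, $$\min_{n_1 + n_2 = n} \; \frac{\mathrm{Var}(L_\infty)}{n_1} + \frac{\mathrm{Var}(\log p_\theta)}{n_2},$$ by setting the derivative with respect to $n_1$ to zero yields $$\frac{n_1}{n_2} = \sqrt{\frac{\mathrm{Var}(L_\infty)}{\mathrm{Var}(\log p_\theta)}},$$ which matches the square-root ratio quoted in the review.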
Summary: This research studies diffusion-inspired language models with a primary focus on narrowing the perplexity gap between autoregressive and diffusion-based LMs. To achieve this goal, the paper systematically explores various design choices within diffusion-based language models, addressing questions such as the best method for learning token embeddings and the impact of self-conditioning on performance, among others. Through this comprehensive investigation into design choices, the paper successfully scales up the diffusion language model to an impressive 1 billion parameters. Experimental results conclusively demonstrate that the 1B diffusion LM outperforms GPT-2 124M in terms of perplexity across diverse benchmark datasets. Strengths: [Motivation] The focus of this research on further enhancing diffusion-based LMs is particularly intriguing, given that considerable effort is often required to bring about fundamental changes to the dominant trend of autoregressive LMs. [Experiments] The successful scaling-up of a diffusion-based LM is also a strength of this paper. Weaknesses: [Limited Novelty] While the Plaid framework claims to contribute by exploring the design space of diffusion-based language models and introducing compute-optimal MLE training of language diffusion models, its technical novelty remains limited. Many of its design choices, including self-conditioning, have been extensively studied in the past, as evident in Table 1 and previous work such as CDCD. [Comparison to CDCD] It is difficult to determine whether the Plaid framework produces better samples than CDCD based solely on Table 1. While the table shows that Plaid outperforms CDCD in terms of perplexity, it lacks a direct comparison of the generated texts. Without such a comparison, it is challenging to ascertain which model generates higher quality samples or exhibits more desirable language generation characteristics. Additional evaluation and comparison of the generated texts would provide a more comprehensive assessment of the two models' performance. [Comparison to GPT-2] The performance of Plaid 1B is comparable to GPT-2 124M. However, it is challenging to assess the significance of this achievement, as Plaid requires a significantly larger number of parameters to match the performance of GPT-2 124M. To better understand the importance of this result in the context of diffusion LM research, the paper should provide further elaboration. It would be helpful to highlight more benefits of diffusion-based LMs, such as their performance on in-filling tasks and their ability to capture long-range dependencies more effectively than autoregressive models with a left-to-right order. By emphasizing these advantages, the authors can better demonstrate the value of the Plaid framework beyond simple perplexity comparisons with GPT-2. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: #1. (ll. 102-105). I can’t comprehend the rationale behind why the loss functions used in CDCD result in ill-posed problems when optimized over the embedding. Since CDCD is learned by minimizing the cross-entropy loss between the output of the diffusion and the input tokens, it’s not an ill-posed problem. #2. (ll. 62). The embedding function should be invertible, but there is no constraint on the objective function. After training, is the embedding matrix full-rank? #3. (ll. 139). In what situations is it beneficial to allow \sigma^2(0) to take a large value?
It appears that when the first forward step can utilize a large noise scale, the overall process's length is reduced, leading to faster sampling speed. Is this the intention of the authors? #4. In section 4.1, the paper lacks a clear description of the rationale behind the choice of heuristics on the weighting for the ablation study. Could you provide a detailed explanation for selecting these particular heuristics? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The paper does not explicitly outline the limitations of the proposed framework. However, a significant drawback of the Plaid framework is its requirement for a significantly higher number of parameters compared to AR (Autoregressive) language models. This contrasts with the trend observed in diffusion models for image generation tasks. For example, when comparing DALL-E 1 to DALL-E 2 or Parti to Imagen, AR-based image generation models typically necessitate a much larger number of parameters than diffusion-based generative models. It would be beneficial to include an explanation in the paper regarding why this observed trend in parameter efficiency is not consistent with diffusion language models, or at least, why it is not evident in the Plaid framework. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful and thorough review! We’d like to respond to some of your points below: > [Limited Novelty] We are the first to find an algorithmic recipe for compute-efficient likelihood-based text diffusions. As you correctly point out, this recipe leverages lots of prior work, but our results couldn’t be achieved just by carefully implementing prior work: no prior text diffusion work has achieved any nontrivial likelihoods on any standard benchmark, and our recipe achieves more than a 2x improvement in compute efficiency over our CDCD implementation (see Table 1). We have updated our paper draft to make this more explicit in the introduction. > [Comparison to CDCD] We agree that a comparison to CDCD in terms of sample quality would have been helpful. We did attempt to perform these types of evaluations by contacting the CDCD authors with a request for samples, but were unable to obtain a sufficient number of samples to perform human evaluation. We have updated the draft with a note that makes this clear: We were unable to compare Plaid to CDCD in sample quality, as we were unable to obtain CDCD samples from the original authors. > [Comparison to GPT-2] We agree that Plaid lags substantially behind autoregressive models in likelihood performance under a fixed compute or parameter budget, and we discuss this fact in detail in our paper (see Fig 1 and Sec 5.2). The significance of our comparison to GPT-2 is that until now, no prior work on diffusion language models had achieved any nontrivial likelihood at all. > #1. (ll. 102-105). I can’t comprehend the rationale behind why the loss functions used in CDCD result in ill-posed problems The CDCD objective has the following degenerate solution: if we take the embedding norms to infinity, then (because the maximum noise variance is a constant) predicting the tokens becomes trivial. For this reason the CDCD authors require a hard norm constraint on the rows of their embedding matrix. They discuss this in Sec 3.2 of their paper. > #2. (ll. 62). The embedding function should be invertible, but there is no constraint on the objective function. We don’t need the embedding matrix to be invertible in order for the embedding function to be invertible, since the embedding function is only defined over the set of vertices of the vocabulary simplex and not the interior of the vocabulary simplex. Our embedding function is therefore indeed invertible even though our embedding matrix is low-rank. > #3. (ll. 139). In what situations is it beneficial to allow \sigma^2(0) to take a large value? The intention is not faster sampling speed but rather improved likelihoods under a fixed training FLOP budget, as demonstrated in Table 1. Truncating the process makes it so that there’s less that the model needs to learn. > #4. [...] Could you provide a detailed explanation for selecting these particular heuristics? We have updated the paper with a more detailed explanation. We selected these heuristics because they downweight small noise levels relative to the likelihood weighting. This is the same motivation behind the schedules that have been proposed in image diffusions (we cannot copy those schedules directly because they assume image inputs which are differently scaled than our text embedding vectors). > It would be beneficial to include an explanation in the paper regarding why this observed trend in parameter efficiency is not consistent with diffusion language models, or at least, why it is not evident in the Plaid framework. 
We agree! We have updated the paper with a discussion. In short, likelihoods have historically been less relevant in image modeling than in text, and image diffusion models have never been evaluated in our setting (likelihood under a fixed compute budget), so it is unclear whether similar trends would be observed if they were. --- Rebuttal Comment 1.1: Title: Increasing my score from BR to BA. Comment: Thank you for your comprehensive response. Most of my initial questions about the paper have been clarified. As the authors recognize the absence of human evaluation and the inefficiencies in the network parameters, the manuscript could be further enhanced by another round of revisions. Nonetheless, since my primary concerns have been addressed, I've adjusted my score to BA (borderline accept). Thank you!
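As an editorial aside spelling out the degeneracy argument from the reply to #1 above under simplifying assumptions (our sketch, not the authors' derivation): if every embedding is scaled by a factor $c$ while the noise standard deviation stays bounded by a constant $\sigma_{\max}$, then for any two tokens $x \neq x'$, $$\frac{\lVert c\, e_x - c\, e_{x'} \rVert}{\sigma_{\max}} = \frac{c\, \lVert e_x - e_{x'} \rVert}{\sigma_{\max}} \to \infty \quad \text{as } c \to \infty,$$ so recovering the token from a noisy embedding $z = c\, e_x + \sigma \varepsilon$ becomes trivial, and the cross-entropy loss can be driven toward zero by inflating norms alone. This is consistent with CDCD's hard norm constraint on the embedding rows, and with the authors' earlier point that under the likelihood objective such rescaling is penalized by the reconstruction term.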
null
null
null
null
Multinomial Logistic Regression: Asymptotic Normality on Null Covariates in High-Dimensions
Accept (poster)
Summary: The paper aims to study the statistics of the multinomial logistic MLE on null covariates in a multiclass classification setting in the asymptotic proportional regime, i.e., in the limit in which the size of the dataset $n$ and the dimension of the covariates $p$ are both sent to infinity but with a finite ratio. The work extends previous results by Sur and Candès, in which the normality of the MLE on null covariates was proven in the case of binary classification, and introduces a proper statistic for a $\chi^2$ test on the relevance of a feature. Strengths: The paper is clearly and carefully written, the validity conditions of the results are clearly stated, and the outcome of the theoretical analysis is supported by robust numerical evidence. The characterization of the statistics of the MLE on null covariates in the case of multiclass classification in the high-dimensional limit is timely. The authors also compare the high-dimensional and the classical theory, showing the remarkable difference in the provided predictions. Finally, they also test the validity of their results beyond the Gaussian hypothesis adopted for the derivation of their theorem. Weaknesses: A possible theoretical weakness of the paper is the fact that the existence of the MLE is assumed (Assumption 2.4) and not fully characterized. This "weakness" is acknowledged by the authors themselves in the final section of the main text. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Out of curiosity, the authors state that they expect their results to hold for covariates with distributions having "sufficiently light" tails. Could they comment on this? Do they expect, for example, sub-Gaussianity to be sufficient for their results to hold? As a (very) minor remark, Theorem 2.1 and Theorem 2.2 exhibit the quantities $\mathsf y_i-\hat{\mathsf p}_i\equiv -\mathsf g_i$, but a different notation is used. The analogy of the two results might appear more evident at first sight using the same notation in both theorems (e.g. replacing $-\mathsf y_i+\hat{\mathsf p}_i$ with $\mathsf g_i$ in the first theorem). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The work is of a theoretical nature and all limitations are within the well-specified hypotheses of the reported theorems. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the insightful and encouraging feedback. Here, we respond to your comments point by point. > **Q1:** Out of curiosity, the authors state that they expect their results to hold for covariates with distributions having "sufficiently light" tails. Could they comment on this? Do they expect, for example, sub-Gaussianity to be sufficient for their results to hold? **A1:** This is a great question. Regarding this question of universality with respect to the distribution of $x_i$, the associated challenges, and some recent references, we refer to our answer to question **Q2** from Reviewer uvEK. > **Q2:** As a (very) minor remark, Theorem 2.1 and Theorem 2.2 exhibit the quantities $\mathsf{y}_i - \mathsf{\hat p}_i = -\mathsf{g}_i$, but a different notation is used. The analogy of the two results might appear more evident at first sight using the same notation in both theorems (e.g. replacing $-\mathsf{y}_i + \mathsf{\hat p}_i$ with $\mathsf{g}_i$ in the first theorem). **A2:** Thank you very much for your attention to detail. We have adopted your suggestion and revised the manuscript by replacing $-\mathsf{y}_i + \mathsf{\hat p}_i$ with $\mathsf{g}_i$ in Theorem 2.1 to make the analogy between the two results more evident.
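For intuition, here is a toy version (ours; it is not the paper's code, and it omits the paper's variance correction $\Omega_{jj}$ and the $\chi^2$ statistic) of the kind of quantile-quantile check described in Section 3: simulate Gaussian covariates with a null feature, refit the multinomial logistic MLE many times, and compare the null coordinate's empirical quantiles to a normal fit. All settings below are hypothetical.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p, K, reps = 400, 40, 3, 200       # toy proportional regime, p/n = 0.1
j = p - 1                              # the null covariate: row j of B* is zero

null_coefs = []
for _ in range(reps):
    X = rng.standard_normal((n, p))
    B = np.zeros((p, K))
    B[: p // 2] = rng.standard_normal((p // 2, K))   # signal on first p/2 rows only
    logits = X @ B
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    y = np.array([rng.choice(K, p=pr) for pr in probs])
    mle = LogisticRegression(penalty=None, max_iter=2000).fit(X, y)
    # Class-0 coefficient on the null feature (sklearn's multinomial parametrization).
    null_coefs.append(mle.coef_[0, j])

(osm, osr), (slope, intercept, r) = stats.probplot(np.asarray(null_coefs), dist="norm")
print(f"QQ correlation vs normal: {r:.4f}")   # close to 1 suggests approximate normality
```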
Summary: This paper studies the asymptotic distribution of multinomial logistic regression in the high dimensions. Under the assumption of a Gaussian design (and a few other assumptions), it establishes and characterizes the asymptotic normality of the coefficient of a null feature. This extends previous results for high-dimensional binary logistic regression to $K \geq 3$ number of classes. This result enables proper significant testing in high-dimensions, for which the classical fixed-$p$ asymptotic fails to control the type-I error. Strengths: 1. The paper extends previous results for high-dimensional logistic regression to the multi-class setting. 2. The paper is well-written and the technical content is presented rigorously. 3. The simulations results seem to match the asymptotic theory very well. Weaknesses: 1. The hypothesis $H_0$ seems stronger than the hypothesis $H_0'$: the population coefficient (in terms of KL projection) for feature $j$ is zero. Supposedly $H_0'$ reduces to $H_0$ when the model is well-specified. In terms of the data generating mechanism given by Assumption 2.1, it seems possible that when $y_i = f(U_i, x_i^{T} B^{\ast})$ does not hold, a feature that does not satisfy $H_0$ may still have zero as its population "true" coefficient. I am wondering whether $H_0$ can be suitably weakened. 2. The result relies on a random Gaussian design, although it is suggested that the result probably extends to other random designs with light tails. 3. The paper does not characterize the distribution for non-null features, so the study of parameter inference for high-dimensional multinomial logistic regression is still incomplete. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Per my first comment on the "Weaknesses", can authors comment on the extent to which $H_0$ can be weakened? 2. To what extent is the "universality" expected to hold for this problem? Does the asymptotic normality fail under a random, heavy-tailed design? What about fixed designs? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I do not foresee any potential negative societal impact of this work. In terms of limitations, I think obtaining results for the non-null features would make this work a much stronger paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback to help us improve our paper. Here, we respond to your comments point by point. > **Q1**: Per my first comment on the "Weaknesses", can authors comment on the extent to which $H_0$ can be weakened? **A1**: The question regarding the possibility to weaken $H_0$ to $H_0'$ is interesting. We do not have an answer right away. The proof is written with a generative model of the form $y_i = f(U_i, x_i^T B^*)$ in mind, and it is not clear at this point that all steps in the proof where this model is used could be generalized to a model-free setting. (Note that many choices for $f$ are allowed, which provides significant model flexibility; the stronger assumption here is indeed that the response depends on $x_i$ only through $x_i^TB^*$.) > **Q2**: To what extent is the "universality" expected to hold for this problem? Does the asymptotic normality fail under a random, heavy-tailed design? What about fixed designs? **A2**: Thanks for the great question. We expect our asymptotic normality results (Theorems 2.1 and 2.2) to hold when the covariate $x_i$ is iid sub-Gaussian or has bounded moments of sufficiently large order. This is expected because: 1. This is empirically confirmed by the simulation results using the Rademacher distribution and the SNP example in Section 3. However, asymptotic normality fails to hold under a random, heavy-tailed design; we conducted additional simulations using heavy-tailed covariates (e.g., Pareto and Weibull), and the corresponding quantile-quantile lines deviate significantly from the 45-degree line. 2. In the same regime (proportional asymptotics for p/n), several recent works show that universality holds under a set of assumptions that typically includes iid entries with bounded moments of sufficiently large order: - "Universality of empirical risk minimization" by Montanari and Saeed (arxiv:2202.08832), cf. section 3.3 there for iid sub-gaussian entries, - "Universality laws for Gaussian mixtures in generalized linear models" by Dandi, Stephan, Krzakala, Loureiro, Zdeborová (arxiv:2302.08933) - "Universality of regularized regression estimators in high dimensions" by Han and Shen (arxiv:2206.07936), which handles iid entries with $6+\epsilon$ bounded moments. These three works provide hope for extending our results to iid entries (or other design distributions treated in these works). However, all these works assume some form of "delocalization" of $\hat B$ in the form of an $L_\infty$-norm constraint in the minimization problem (see, e.g., equation (29) in Montanari and Saeed). This is fine if $\hat B$ is already known to satisfy such an $L_\infty$-norm bound; however, in logistic regression (even with two classes) such an $L_\infty$-norm bound is not known, and these universality results cannot yet be applied. Given the intricacies involved in proving the delocalization and universality results, we prefer to leave this problem open for future research, as these challenging universality questions warrant their own papers. > I think obtaining results for the non-null features would make this work a much stronger paper. We agree that an analysis for non-null features would be very interesting. However, a major obstacle to obtaining such results is that we operate under the model $y_i=f(U_i,x_i^TB^*)$, allowing for a general function $f$ ($f$ must output a one-hot encoded vector, but is otherwise unrestricted).
In our setting, $B^*$ is thus not identifiable, because $y_i=\tilde f(U_i, x_i^T\tilde{B}^*)$ also holds with $\tilde{B}^*=B^* M$ and $\tilde{f}(u,v^T)=f(u,v^TM^{-1})$ for any invertible matrix $M$. Consequently, turning to confidence intervals for entries of $B^*$ or confidence sets for rows of $B^*$ would require either a model more specific than the current model of the paper (in order to address the identifiability issue), or considering that the parameter of interest is the column space of $B^*$. Both of these avenues appear interesting for future work. The rationale for leaving such extensions for future work is that they would require mathematics significantly different from the present work (mathematics relying on additional assumptions compared to our model $y_i=f(U_i,x_i^TB^*)$), and we are not aware of known techniques that could advance the analysis for non-null features. --- Rebuttal Comment 1.1: Comment: I appreciate the reply from the authors. I feel that my concerns have been properly addressed. I am raising my score to "Accept".
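As a side note, the reparametrization invariance invoked in the rebuttal above is easy to verify numerically. The following minimal numpy sketch is our illustration, not the authors' code: a toy argmax labelling rule stands in for the general $f$, and the noise argument $U_i$ is omitted. It checks that $B^*$ and $\tilde{B}^*=B^*M$ produce identical responses once $f$ is composed with $M^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, K = 6, 4, 3

X = rng.standard_normal((n, p))
B_star = rng.standard_normal((p, K))
M = rng.standard_normal((K, K))           # a generic M is invertible almost surely

def f(V):
    # Toy labelling rule standing in for the general f: pick the argmax logit.
    return V.argmax(axis=1)

B_tilde = B_star @ M
f_tilde = lambda V: f(V @ np.linalg.inv(M))   # tilde-f(v) = f(v M^{-1})

# Same responses from (f, B*) and (f-tilde, B-tilde): B* is not identifiable.
assert np.array_equal(f(X @ B_star), f_tilde(X @ B_tilde))
```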
Summary: In this paper, the authors provide an asymptotic characterization of the behavior of the maximum likelihood estimator (MLE) of the multinomial logistic model (with more than two classes), in the high-dimensional regime where the dimension and the sample size of the data go to infinity at the same rate. Under some technical assumptions (that may need some further elaboration), this paper develops asymptotic normality and asymptotic chi-square results for the multinomial logistic MLE on null covariates (see Theorems 2.1 and 2.2). The proposed results can be used for statistical inference and significance testing of specific features. Numerical experiments on synthetic data are provided in Section 3 to validate the proposed theory. Strengths: This paper focuses on the fundamental and important problem of MLE for the multinomial logistic model in the modern high-dimensional regime. The proposed theory improves prior art in characterizing the asymptotic normality and asymptotic chi-square results, both of significance to statistics and ML. The paper is in general well written and easy to follow. Weaknesses: I do not have strong concerns to raise for this paper. See below for some detailed comments and/or questions. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. While I am almost fine with Assumptions 2.1 and 2.2 (just being curious, is some upper bound on the spectral norm of the covariance $\Sigma$ needed? or is it just a matter of scaling with respect to $n,p$?), I am a bit confused by Assumptions 2.3 and 2.4: are they something intrinsic or for the ease of technical analysis? What happens if, say, Assumption 2.3 is violated? Can we have something similar but just more involved, or is the MLE totally different? Also, Assumption 2.4 is a bit misleading, in the sense that the assumption is not intrinsic, and should perhaps be reduced to some assumption on the dimension ratio $p/n$ and/or statistics of the data? I believe it makes more sense to assume something like "the dimension ratio $p/n$, covariance $\Sigma$ and xxx satisfy that xxx". I am also confused by the paragraph after Assumption 2.4, and I am not sure the convergence of some multinomial regression solver can be used as a rigorous theoretical indicator. The algorithm may converge (or be believed to converge) due to many reasons. Perhaps some better (numerical) criterion can be proposed by, e.g., checking the gradient and/or Hessian at the point of interest. 2. For the sake of presentation and use, it would be helpful to present Theorem 2.2 and the estimation of $\Omega_{jj}$ in the form of an algorithm. 3. Almost nothing is mentioned about the proof of the theoretical results (Theorems 2.1 and 2.2): is the proof technically challenging, or does it contain some ingredients and/or intermediate results that may be of independent interest? Could the authors elaborate more on this? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: This paper is primarily of a theoretical nature, and I do not see any potential negative societal impact of this work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable and encouraging feedback to help us improve our paper. Here we provide answers to your concerns point by point. > **Q1:** While I am almost fine with Assumptions 2.1 and 2.2 (just being curious, is some upper bound on the spectral norm of the covariance $\Sigma$ needed? or is it just a matter of scaling with respect to $n,p$?), I am a bit confused by Assumptions 2.3 and 2.4: are they something intrinsic or for the ease of technical analysis? What happens if, say, Assumption 2.3 is violated? Can we have something similar but just more involved, or is the MLE totally different? Also, Assumption 2.4 is a bit misleading, in the sense that the assumption is not intrinsic, and should perhaps be reduced to some assumption on the dimension ratio $p/n$ and/or statistics of the data? I believe it makes more sense to assume something like "the dimension ratio $p/n$, covariance $\Sigma$ and xxx satisfy that xxx". **A1:** The upper bound on the spectral norm of $\Sigma$ is not needed in our theory thanks to rotational invariance: a key step in our proof involves rotating and scaling the covariate with covariance $\Sigma$ to the identity matrix $I_p$ (see page 20 in the supplement). Assumption 2.3 is indeed necessary for our technical analysis. One intuition that this is a fairly mild assumption is that if for some label $k_0$ we observe $o(n)$ observations, the dataset obtained by dropping all observations with label $k_0$ still has sample size $\tilde n = (1-o(1))n$ because we dropped $o(n)$ observations. We end up with a dataset with one fewer label, and almost the same sample size. Furthermore, if $y_i$ for $i=1,...,n$ are iid from a fixed distribution independent of $n,p$, and if $P(y_{ik}=1)>0$, then by the weak law of large numbers the probability of observing at least $\gamma n$ responses $y_i$ with label $k$ converges to 1 for any $\gamma\in(0,P(y_{ik}=1))$. On the other hand, if $P(y_{ik}=1)=0$ then this label $k$ may be removed as it will not appear at all. For Assumption 2.4, we agree with you that imposing conditions on the MLE $\mathsf{\hat B}$ is not intrinsic. It is a reasonable assumption to us because (a) the event in Assumption 2.4 can be observed directly, and (b) the goal of this paper is to study the properties of $\mathsf{\hat B}$ provided it exists (it would not make sense to study such tests based on the multinomial MLE in settings where this MLE does not exist with high or positive probability; nothing can be done with the MLE in the event that it does not exist). We suspect Assumption 2.4 could be reduced to "the dimension ratio p/n, covariance $\Sigma$ and xxx satisfy that xxx" as suggested by the referee, and as Candès and Sur (2021) did for binary logistic regression. However, such an analysis for 3 or more classes does not exist yet (only partial answers are known, cf. lines 326-328, page 9) and a complete answer to this phase transition for multinomial logistic regression is beyond the scope of our paper. > I am not sure the convergence of some multinomial regression solver can be used as a rigorous theoretical indicator. The algorithm may converge (or be believed to converge) due to many reasons. Perhaps some better (numerical) criterion can be proposed by, e.g., checking the gradient and/or Hessian at the point of interest. We agree that checking the gradient/Hessian at the current iterate is a good criterion. 
We will clarify this paragraph: Our stance is to leverage (and not reinvent the wheel on) the well-studied convex solvers from optimization, which often have their own built-in methods to assess convergence, for instance as returned in the ``success`` and ``message`` attributes of ``scipy.optimize.OptimizeResult`` for L-BFGS, which is the default solver in scikit-learn. In practice, a definitive sign that convergence fails and the MLE does not exist is when the solver returns a $\hat B$ that perfectly separates the data, in the sense that the corresponding softmax$(x_i^T\hat B)$ perfectly interpolates $y_i$ for all $i$. This holds for $K=2$: gradient descent is known to converge to such an interpolator (the max-margin classifier) on separable data (Soudry, Hoffer, Nacson, Gunasekar and Srebro, 2018), although we are not aware of similar works for $K>2$. > **Q2:** For the sake of presentation and use, it would be helpful to present Theorem 2.2 and the estimation of $\Omega_{jj}$ in the form of an algorithm. **A2:** Thanks. For the camera-ready version, we will include another equation in Theorem 2.2 with $\Omega_{jj}$ replaced by its estimator (2.5). > **Q3:** Almost nothing is mentioned about the proof of the theoretical results (Theorems 2.1 and 2.2): is the proof technically challenging, or does it contain some ingredients and/or intermediate results that may be of independent interest? **A3:** The rigorous proof is non-trivial and, due to page limits, we decided to present it in the supplement. The first few pages of the supplement have a diagram explaining the relationship between the different theorems/lemmas, and how the lemmas in the supplement are combined to obtain the theorems stated in the main text. The main idea of the method used in the supplement consists of applying Lemma S5.3 to a carefully chosen function $F(\tilde z)$ taking values in the set of matrices with all singular values equal to 1, which is a new result of independent interest for exhibiting asymptotic chi-square distributions in such problems. This poses unique challenges to ensure that the derivatives are well-behaved despite the normalization requiring all singular values to be equal to 1, and we solve some of these challenges specific to the cross-entropy loss in Section S6 of the supplement. We expect some of these tools to be of independent interest and useful more broadly (beyond the cross-entropy loss) when one wishes to prove asymptotic chi-square distributions in related problems. --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal and detailed clarifications. I maintain my positive rating on this paper.
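To make the convergence check described above concrete, here is a minimal sketch (our illustration with synthetic data and hypothetical dimensions, not code from the paper) that fits the multinomial MLE by minimizing the cross-entropy with SciPy's L-BFGS, inspects the solver's ``success``/``message`` attributes, and applies the perfect-separation check mentioned in the rebuttal:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import softmax

rng = np.random.default_rng(0)
n, p, K = 200, 20, 3                       # hypothetical sizes
X = rng.standard_normal((n, p))
Y = np.eye(K)[rng.integers(0, K, size=n)]  # one-hot responses

def neg_loglik(b_flat):
    # Multinomial cross-entropy (negative log-likelihood) at B.
    P = softmax(X @ b_flat.reshape(p, K), axis=1)
    return -np.sum(Y * np.log(P + 1e-300))

res = minimize(neg_loglik, np.zeros(p * K), method="L-BFGS-B")
print(res.success, res.message)            # the solver's built-in convergence report

# Heuristic sign that the MLE does not exist: fitted probabilities
# perfectly interpolate the observed one-hot labels.
P_hat = softmax(X @ res.x.reshape(p, K), axis=1)
print("perfectly separated:", np.allclose(P_hat, Y, atol=1e-3))
```

With labels drawn independently of $X$ and $n$ well above $pK$ as here, the data are not separable, so the separation flag should print False; pushing $p$ up relative to $n$ moves toward the non-existence regime discussed around Assumption 2.4.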
Summary: This paper studies the asymptotic distribution of the MLE of the multinomial logistic regression model when the sample size and the number of parameters are of the same order. The validity of the asymptotic theory is evaluated through extensive simulation studies. The paper is overall very well written and the presentation is clear. Strengths: The paper is very well written, and the problem under consideration is of great interest. The theoretical results are sound and the numerical experiments are sufficient. Weaknesses: One of the motivating examples of the paper is to study classification with 3 or more classes. And one of the most important goals in classification is the classification error or AUC. In my opinion, the impact of the proposed method on classification errors should be evaluated through simulation studies and some real data examples. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Related to my question in the weakness section, one could use the proposed tests to exclude some noise covariates in the multinomial regression and see how much improvement can be achieved in terms of classification errors. Preferably, one should compare its performance to some existing methods. 2. I would strongly suggest applying the proposed methodology to some real-world benchmark data set. The statistical tests may provide additional insights and the classification errors can be evaluated through cross-validation. 3. The paper's theoretical framework appears to heavily rely on the seminal work of Sur and Candes (2019), because of which I still have some reservations about the novelty of the theory. To address concerns about the novelty of the theory, it would be wise to explicitly highlight the unique theoretical challenges posed by the multinomial logistic regression model compared to the more commonly studied binary logistic regression model. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the insightful and positive feedback. > **Q1**: Related to my question in the weakness section, one could use the proposed tests to exclude some noise covariates in the multinomial regression and see how much improvement can be achieved in terms of classification errors. Preferably, one should compare its performance to some existing methods. **A1:** The tests we propose provide strong guarantees for controlling Type I error: rejecting a null covariate (i.e., wrongly making a discovery regarding an insignificant covariate) happens with probability approximately $\alpha$ for any fixed $\alpha\in(0,1)$. However, this does not guarantee that the features for which the test does not reject can be safely removed from the model (which would require Type II error control). There could be instances where $B^*$ has many small coefficients on all features, so small that the test would not reject $H_0$ for any feature; but overall the MLE $\hat B$ may still be more accurate (say, in prediction error) than a random guess in such cases. In such situations, removing the features for which $H_0$ was not rejected would lead to a model with dimension 0 (with prediction probabilities $1/K$ for all classes), which would perform worse than the MLE for estimation of $B^*$ or prediction on a test set. In short, while the method suggested by the referee could be implemented, there are situations where using the MLE on the full model performs better. To our knowledge, there is no existing testing method with the same goal as ours for classification problems with 3 or more classes. > **Q2**: I would strongly suggest applying the proposed methodology to some real-world benchmark data set. The statistical tests may provide additional insights and the classification errors can be evaluated through cross-validation. **A2:** Thanks for the suggestion. We conducted a real data analysis by applying the proposed test to the heart disease data from the UCI Machine Learning Repository. After standard data cleaning, the dataset has 297 instances with 13 features, including age, sex, and other attributes. The response variable was reduced to 3 classes (0, 1, and 2) by merging the original classes 3 and 4 into class 2. To demonstrate the validity of the proposed significance test, we generated a noise variable from a standard normal distribution, resulting in a dataset with 297 instances and 14 variables. We tested this noise variable using both the proposed test and the classical test, repeating the experiment 10,000 times. The type I error of our proposed test is 0.0508, aligning well with the desired type I error of 0.05. In contrast, the classical test exhibits a type I error of 0.0734, significantly exceeding the desired rate of 0.05. These results confirm that the classical test tends to include noise variables, leading to false discoveries. On the other hand, our proposed test is more reliable and achieves the desired type I error. Regarding > The statistical tests may provide additional insights and the classification errors can be evaluated through cross-validation we refer to our response to the previous question, where we explained that the proposed method cannot be readily used to safely drop features from the model. Our method provides a strong guarantee for Type I error control, but this does not readily lead to improved classification error on unseen data. 
> **Q3**: The paper's theoretical framework appears to heavily rely on the seminal work of Sur and Candes (2019), because of which I still have some reservations about the novelty of the theory. To address concerns about the novelty of the theory, it would be wise to explicitly highlight the unique theoretical challenges posed by the multinomial logistic regression model compared to the more commonly studied binary logistic regression model. **A3:** The proposed theory is novel because it differs from the seminal work of [SC19] in at least three aspects: (i) The results in [SC19] only apply to binary classification problems, while our proposed theory also applies to classification problems with three or more classes. (ii) The techniques used in the proof are significantly different from those of [SC19]. Case in point: even for $K=2$, our argument does not rely on or involve the nonlinear system with 3 equations and 3 unknowns which is central to [SC19] for binary classification (or its generalization to $K\ge 3$ classes). The Approximate Message Passing proofs used by [SC19] are also not employed here. Our result for asymptotic multivariate normal and chi-square distributions is based on applying Lemma S5.3 of the supplement to a carefully chosen function $F(\tilde z)$ valued in the set of matrices with all singular values equal to 1, which is a new result of independent interest for exhibiting asymptotic chi-square distributions in such problems. This further poses unique challenges to ensure that the derivatives are well-behaved despite the normalization requiring all singular values to be equal to 1, and we solve some of these challenges specific to the cross-entropy in Section S6 of the supplement. (iii) From the computational side, all the quantities involved in our results can be readily computed from the data, while the relevant quantities in [SC19] require the estimation of the solutions to the nonlinear system mentioned in (ii). For $K=2$, this is not an issue, as [SC19] proposed the ProbeFrontier method to estimate the solutions to the nonlinear system. Any extension of their work to 3 or more classes (which, as far as we are aware, does not currently exist for the multinomial logistic model) would also require solving a new nonlinear system. We expect the ProbeFrontier method to break down since the new nonlinear system would now include several scalar parameters due to the matrix nature of $B^*$ (as opposed to the single unknown scalar $\gamma=Var[x_i^T\beta^*]$ in [SC19]). [SC19]: Sur and Candès (2019), PNAS --- Rebuttal Comment 1.1: Comment: Thank you for your response. I have no further comments and will keep my initial score.
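For readers who want to reproduce the flavor of the type-I-error comparison above, the following sketch simulates the *classical* likelihood-ratio test on a pure-noise covariate. This is our illustration with hypothetical sizes and a synthetic multinomial-logit data-generating process; the paper's corrected test, which requires the variance estimate $\hat\Omega_{jj}$, is not reproduced here:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n, p, K, alpha = 300, 13, 3, 0.05   # sized loosely after the heart-disease data
n_rep = 200                          # the rebuttal uses 10,000 repetitions

rejections = 0
for _ in range(n_rep):
    X = rng.standard_normal((n, p))
    B = 0.5 * rng.standard_normal((p, K))
    # Random-utility trick: argmax of logits plus Gumbel noise yields
    # multinomial-logit class probabilities.
    y = (X @ B + rng.gumbel(size=(n, K))).argmax(axis=1)
    noise = rng.standard_normal((n, 1))              # null covariate under test
    full = sm.MNLogit(y, np.hstack([X, noise])).fit(disp=0, maxiter=500)
    reduced = sm.MNLogit(y, X).fit(disp=0, maxiter=500)
    lrt = 2 * (full.llf - reduced.llf)               # classical LRT, K-1 df
    rejections += chi2.sf(lrt, df=K - 1) < alpha

print("empirical type-I error of the classical test:", rejections / n_rep)
```

With $p/n$ this small the classical test is only mildly miscalibrated; growing $p$ proportionally with $n$ is exactly the regime where the paper shows it over-rejects and the proposed correction is needed.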
NeurIPS_2023_submissions_huggingface
2023
MathNAS: If Blocks Have a Role in Mathematical Architecture Design
Accept (poster)
Summary: The paper introduces MathNAS, a novel approach to Neural Architecture Search (NAS) that utilizes mathematical programming techniques. MathNAS aims to improve the efficiency and accuracy of architecture search by dividing the search space into smaller building blocks and predicting network performance based on the performance of these blocks. The authors validate the effectiveness of MathNAS on computer vision (CV) and natural language processing (NLP) tasks, showcasing its superior performance compared to state-of-the-art models. Strengths: 1. Efficient Performance Evaluation: MathNAS proposes a general framework for evaluating candidate networks by estimating block performance first and then combining them to predict network performance. This approach greatly improves evaluation efficiency. 2. Reduced Search Complexity: By establishing a mapping between block performance and network performance, MathNAS transforms NAS into an Integer Linear Programming (ILP) problem, reducing the search complexity to polynomial time. 3. Superior Results: The authors demonstrate MathNAS's capabilities by considering key performance indices such as accuracy, latency, and energy. The results show that MathNAS outperforms state-of-the-art models in terms of these metrics. Weaknesses: 1. Lack of Detailed Implementation: The paper provides a high-level overview of MathNAS but lacks detailed implementation information. More specific details about the mathematical programming techniques used and the specific algorithms employed would have been beneficial. 2. Section 2.2 of the paper presents some significant ambiguities that complicate understanding of the proposed method: a. The paper does not clearly explain the definitions of "inherent capability" and "interactive capability". This makes it challenging to understand their intended meanings and how they function within the given neural network context. b. There is a lack of clarity on how the changes in these capabilities, represented as $\Delta \phi\left(\mathcal{B}_{(i, 1) \rightarrow (i, j)}\right)$ and $\Delta \Phi\left(\mathcal{B}_{(i, 1) \rightarrow (i, j)}\right)$, are evaluated. The methodology for these calculations needs to be explained more explicitly. c. Equation 3 lacks clarity due to the aforementioned uncertainties. Without clear definitions and calculation methods for the terms on the right-hand side, it is impossible to meaningfully interpret or apply this equation. d. Additionally, the processes of module swapping and internal adjustment are not well-defined. More details on how these operations are performed, both in terms of selecting which modules to swap and how the internal adjustments are made post-swap, are needed for a comprehensive understanding and replication of the method. Does every module have the same input/output shape? 3. Limited Discussion on Generalization: Although the paper mentions MathNAS's remarkable generalization capabilities in designing efficient architectures for NLP tasks, there is limited discussion on the factors contributing to this generalization and how it compares to other NAS methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can you clarify the approach or strategy employed to divide the networks into individual modules or blocks? 2. Based on observations from Figure 1b, floating point operations (FLOPs) appear to have minimal impact on latency. Could you provide an explanation or insight into why this phenomenon occurs? 3. 
Have you considered implementing a Graph Neural Network (GNN) model designed to predict performance variances that may arise as a result of interchanging different modules within the network? What outcomes or implications might this approach yield? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review of our work. --- # Clarification about Section 2.2 ## Clarifications on Definition and Calculation of Equation 3 *The answers here correspond to parts a, b, and c of the original question.* Both theoretically and empirically, we have shown that the effects of a specific block switch differ across various architectures. Delving into these discrepancies, we define the calculation of block performance as: $$b^{A}\_{i,j}=\overline{\Delta Acc\left(\mathcal{N}^{\Omega}\_{(i,1) \rightarrow (i,j)}\right )}$$. **This formula is designated for the computation of the performance of $b\_{i,j}$**. Furthermore, we propose that the effect of block performance on network accuracy should be approached from two angles: inherent capability and interactive capability. These two concepts help illuminate the meaning behind our block performance formula (**though it's important to note they serve explanatory purposes and aren't directly employed in the calculation**). Hence, we have: $$\overline{\Delta Acc\left(\mathcal{N}^{\Omega}\_{(i,1) \rightarrow (i,j)}\right )} = \Delta\phi(\mathcal{B}\_{(i,1)\rightarrow (i,j)}) + \Delta\Phi(\mathcal{B}\_{(i,1)\rightarrow (i,j)})$$. Wherein: - The term *inherent capability* refers to the intrinsic ability of the block. - The term *interactive capability* denotes the capacity of a block when considering its influence on other blocks within the network. To draw an analogy, consider a team where an individual's performance is not just a reflection of their inherent skill but also the influence they exert on their teammates. Similarly, a network can be viewed as a team composed of blocks. ## Details on Module Switching and Internal Adjustments *The answers here correspond to part d of the original question.* For all replacement blocks within the search space, a performance evaluation is required. Specifically, to evaluate the performance of block $b\_{ij}$, we consider the switch from $b\_{i1}$ to $b\_{ij}$. After a block swap, there is no need for internal adjustments, because for all candidate blocks contained within the $i$-th block node, the input and output tensor spatial dimensions match those at the corresponding position in the search space. --- # Other Questions ## Detailed implementation: In our *common response*, we've provided detailed technical specifics regarding the solution of the ILP equation. These technical details will be added in our revised version. ## Discussion of generalization on NLP tasks: Here, we'd like to elucidate the generalization capability of MathNAS across diverse tasks. As mentioned in the paper, MathNAS is applicable to various block architectures, including convolutional blocks and transformer blocks. **The architectural applicability of MathNAS is task-agnostic.** This suggests that, for a range of tasks spanning CV and NLP, MathNAS is theoretically applicable as long as the search space can be structured in a block format. Moreover, the Transformer architecture has demonstrated significant potential in NLP tasks. Experiments further revealed that MathNAS exhibits exceptional performance within the search spaces of Transformer architectures, including SuperViT and SuperTransformer. This, to some extent, highlights the commendable task generalization of MathNAS on NLP tasks. ## Division of blocks: A detailed description of the block division for different search spaces can be found in Appendix D.1. 
Here, we provide a brief overview of our block partitioning strategy: - For the **MobileNetV3** search space, we adhere to the original block division, considering one inverted residual structure as a single block. - For the **SuperTransformer** search space, we classify two encoder layers as one block and one decoder layer as another block. - For the **SuperViT** search space, we again follow the original block categorization, defining either a CNN or a Transformer block as a single block. - For the **NAS-Bench-201** search space, our concept of a block corresponds to an edge operation in a GNN, where each operation on an edge equates to a block. ## Interpretation of latency experimental results: Figure 1 illustrates the impact of FLOPs on network performance (accuracy, latency, energy) during block switches. In other words, it reveals the contribution of blocks to the performance of networks with varying FLOPs. The minimal variations in latency shown in Figure 1b suggest that **the contributions of different blocks to the network's latency are independent**, which aligns with our understanding. Taking a network composed of sequentially connected blocks as an example, the network's required latency is roughly the sum of the independent latencies needed for each block. ## Implementation on the GNN model: In fact, we believe that our experimental verification of MathNAS on the NAS-Bench-201 search space can serve as a testament to its compatibility with GNN networks. NAS-Bench-201 represents a micro search space, consisting of five identical GNN blocks where the edges of a GNN can select different operations. In the context of the paper, the block switch operation, when applied to NAS-Bench-201, essentially refers to changing the operation on a particular edge while keeping the operations on other edges constant. This effect mirrors the application of MathNAS to a complete GNN model. Hence, from the validation of MathNAS on NAS-Bench-201, we can infer its suitability for GNN networks. We will clarify this point in our revised version. --- Rebuttal 2: Title: Anticipating Your Feedback Comment: With the rebuttal deadline fast approaching, we kindly request your feedback. If you have any questions or concerns, please don't hesitate to reach out to us.
Summary: The paper proposes a way to perform neural architecture search (NAS) by solving an integer program to identify the constituents of a block-modular neural net, having pre-computed estimations of performance for the separate blocks by a novel method. The resulting "MathNAS" approach is demonstrated in various experiments to be able to yield competitive neural network designs compared to other state-of-the-art NAS methods. -- update: I have read and acknowledged all other reviews and the authors' rebuttals, see discussion. -- Strengths: The novel performance estimation method appears to be well-motivated and leads to the possibility of applying mixed-integer programming to compute an optimal architecture (w.r.t. the utilized performance estimates). The numerical results indicate that this works well in practice, giving somewhat better or comparable results with notable savings in training time. This new NAS pipeline also shows promise for future refinements, e.g., regarding the MIP problems used for the search once performance estimates have been pre-computed, given the inherent flexibility of MIP to model a variety of different aspects. Weaknesses: The paper must be thoroughly proofread for English language and grammar/typos. The supplementary document is as long as the main paper, but provides mostly only some small additional information; some things I hoped to find in the supplement were however not included, e.g., an explanation for the MIP objective function and a description or at least a literature reference for how to turn the fractional objective into a linear one. Math typesetting, especially of optimization programs, can be improved. Also, I got the impression that the authors are trying a bit too hard to "sell" their ideas; maybe tone down the language a bit (there are very many usages of words like "impressive" or "remarkable" and similar phrasings praising the paper's results). Finally, there are several parts that remained somewhat unclear to me (see "Questions"), so the presentation can also be improved in terms of clarity/rigor. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - What exactly do you mean by "polynomial-time search complexity"? It is quite confusing to read something like "...can be transformed into an integer programming problem, which further reduces the search complexity to polynomial time" -- solving IPs is generally not possible in polynomial time! I suspect the authors mean that they need to pre-compute polynomially many values (for their performance estimation scheme), but this does not really become clear. - In the same vein, it is mentioned that some neural network training is necessary, but it is never clearly stated what and when. Again, I guess training is required to pre-compute the block performance scores somehow, but the details of this are too vague; this certainly needs to be clarified. - Could you describe a bit more how flexible the block/modular architecture is? (i.e., mention earlier that blocks can be constructed from essentially any (finite) number of sub-networks; this is never clearly stated and only becomes more clearly apparent in the supplementary document, where details on the search spaces in the different experiments are given). - You claim at several points that the integer programs were solved with Gurobi (that is the solver name; Gurobipy is merely its Python interface...) on a GPU. I am quite sure that Gurobi does not run on GPUs. MIP solvers currently do not benefit from GPU vs. 
CPU computations; Gurobi even explains this on their website. So, please clarify and correct/elaborate in the paper! (Also, state the Gurobi version you used.) - Please clarify what exactly is being sampled (and to what purpose exactly) in Section 2.3. - Please justify the IP objective function and clarify/give a reference for how the fractional IP can be turned into a linear IP. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: Limitations have by and large been addressed sufficiently. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review of our work. --- # Polynomial-time Search Complexity As you surmised, the polynomial time referenced here does not pertain to the duration required to solve the ILP problem. Instead, **it denotes the time complexity of the MathNAS algorithm in identifying the optimal architecture, or more precisely, the number of networks that need evaluation within the search space**. Only after evaluating the requisite networks do we proceed to solve the ILP problem. Similar forms of time complexity descriptions have appeared in other MP-NAS works, such as LayerNAS [1]. [1] Yicheng Fan et al. LayerNAS: Neural Architecture Search in Polynomial Complexity. 2023. --- # Network Training Here, we clarify the network training aspects within the MathNAS algorithm. MathNAS operates in two stages: an offline block evaluation stage and an online search stage. In the offline stage, we first assess the accuracy of $m \times n$ sampled networks and then calculate block performance based on these evaluations. **[WHEN]** This network evaluation stage (offline stage) indeed encompasses the network training process. **[WHAT]** Specifically, for our experiments in the MobileNetV3 [2], SuperViT [3] and SuperTransformer [4] search spaces, we utilize networks pretrained in prior work. For the NAS-Bench-201 [5] search space, we rely on the validation-set accuracy of networks independently trained for 200 epochs. In the revised version, we will elaborate on these offline costs and incorporate them into the overall cost of the MathNAS algorithm. [2] Andrew Howard et al. Searching for mobilenetv3. 2019. [3] Chengyue Gong et al. Nasvit: Neural architecture search for efficient vision transformers with gradient conflict aware supernet training. 2021 [4] Hanrui Wang et al. Hat: Hardware-aware transformers for efficient natural language processing. 2020. [5] Xuanyi Dong et al. Nas-bench-201: Extending the scope of reproducible neural architecture search. 2020. --- # Block Architecture Flexibility Here, we provide a detailed description of block architecture flexibility. Given a search space composed of $m$ block nodes, each block node has $n$ candidate blocks. These blocks can be of any neural network type, including CNNs and Transformers, with no structural constraints. Additionally, a candidate block can be a 'skip', indicating variability in the number of network blocks. For instance, in the case of an inverted residual block, one can choose different numbers of network layers, kernel sizes, and expansion ratios. For a transformer block, one can select varying numbers of encoder/decoder layers, hidden dimensions, and attention heads. **As long as the spatial dimensions of the input and output tensors of the candidate blocks match the dimensions at the corresponding positions in the search space, the MathNAS algorithm can effectively conduct the search.** --- # Sampling strategy in Section 2.3 To compute the accuracy of block $b\_{i,j}$, we must evaluate the mean change in network accuracy when all networks in the search space containing block $b\_{i,1}$ undergo a specific block replacement process - switching from $b\_{i,1}$ to $b\_{i,j}$ (ref: Equation 3). This implies that in order to calculate $b\_{i,j}^A$, we need to determine the accuracy of $n^{(m-1)}$ networks. 
However, given the observed inverse relationship between network FLOPs and delta-Accuracy, we can estimate $b\_{i,j}^A$ from the accuracy variation of a network with average FLOPs (ref: paper Section 2.3, Single Network Sampling Strategy). **In summary, to reduce algorithmic complexity, we sample a specific network from $\mathcal{N}^{\Omega}\_{(i,j)}$ (i.e., all $n^{(m-1)}$ networks containing $b\_{i,j}$) whose FLOPs equal the average FLOPs of $\mathcal{N}^{\Omega}\_{(i,j)}$.** We use the change in accuracy of this sampled network as a proxy for the average accuracy change of all networks, thereby deriving the accuracy of block $b\_{i,j}$. Experimental results show this sampling approach introduces minimal error. --- # Other Questions ## Writing Problems: Thanks for your patient and meticulous review. We have rechecked and corrected the grammar and spelling throughout the paper. Additionally, we've reorganized the Appendix and provided corresponding references in the paper. We have also considered improvements in the formatting of mathematical formulas and refined our choice of words. These writing issues will be addressed in the revised version. ## Use of Gurobi: Upon re-examining our code related to the dynamic network experiments, we discovered that when running MathNAS on the TX2 GPU, the equations were actually being solved on the CPU using Gurobipy. Subsequently, the solved architecture was loaded onto the GPU for execution. Hence, the reported search time primarily reflects the equation-solving time on the TX2 CPU, which explains why the reported CPU and GPU solution times are identical. We apologize for any confusion our initial description may have caused and will clarify this matter in the revised version. ## ILP objective function: In our *common response*, we've provided the technical details of how to convert to an ILP problem and solve it. We will add these technical details in our revised version. --- Rebuttal Comment 1.1: Comment: I thank the authors for their explanations and additional information in response to my own and the other reviews. All issues seem to have been addressed appropriately, so I uphold my opinion that this paper should be accepted.
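As a toy illustration of the single-network sampling strategy described in this rebuttal (all names and numbers below are hypothetical, not from the paper): FLOPs are cheap to compute without training, so only the network whose FLOPs sit closest to the average of $\mathcal{N}^{\Omega}_{(i,j)}$ is actually trained and evaluated:

```python
import numpy as np

# Hypothetical FLOPs (in MFLOPs) of the networks in N^Omega_{(i,j)};
# computing FLOPs requires no training, so this list is cheap to build.
flops = np.array([120.0, 150.0, 185.0, 210.0, 240.0])

def evaluate_delta_acc(k):
    """Placeholder for the expensive step: train/evaluate network k and
    return its accuracy change under the block switch (i,1) -> (i,j)."""
    return 0.42  # stand-in value

# Single-network sampling: evaluate only the network whose FLOPs are
# closest to the average, and take its accuracy change as b^A_{i,j}.
proxy = int(np.argmin(np.abs(flops - flops.mean())))
b_A_ij = evaluate_delta_acc(proxy)
print(f"proxy network: {proxy}, estimated b^A_ij: {b_A_ij}")
```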
Summary: This paper introduces MathNAS, a mathematical programming NAS algorithm. MathNAS maps network performance to block performance, enabling the prediction of accuracy, latency, and energy consumption of large networks based on their constituent modules. This algorithm reduces the search space from n^m to m*n, allowing the NAS problem to be solved as an Integer Linear Programming problem. Strengths: Overall, this paper is well-written and organized, making it easy to follow along. The authors did a great job explaining their methods in detail, complete with loads of equations and clear explanations. Plus, the proposed algorithm has undergone numerous experiments under different models and application fields, which consistently showcase its effectiveness. Weaknesses: 1) In all the experiments, MathNAS takes significantly less "search time" than the other methods; in particular, some methods need thousands of hours while MathNAS only requires several seconds, which is very impressive. Does the time for all the other methods include the model training time (either the training time of the supernet or the subnet)? If yes, for a fair comparison, should the time needed to train the base model be added to this time for MathNAS as well? Also, as Algorithm 1 shows that there are three steps for MathNAS to determine the best architecture, should the time taken to calculate each block's performance also be added to the total "search time"? 2) The order of the tables and the description of the experimental results in the paper do not follow the same order. For example, the text follows GNN -> CNNs (Table 2) -> ViT -> NLP (Table 1) -> Dynamic Networks (Table 3), which makes it hard to follow. 3) A typo in the paragraph "MathNAS for Mobile CNNs ...": the top-1 accuracy of MathNAS-MB1 is 75.9% as shown in the table, not 76.4%. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In all the experiments, MathNAS takes significantly less "search time" than the other methods; in particular, some methods need thousands of hours while MathNAS only requires several seconds, which is very impressive. Does the time for all the other methods include the model training time (either the training time of the supernet or the subnet)? If yes, for a fair comparison, should the time needed to train the base model be added to this time for MathNAS as well? Also, as Algorithm 1 shows that there are three steps for MathNAS to determine the best architecture, should the time taken to calculate each block's performance also be added to the total "search time"? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review of our work. Here we respond to your questions point by point. --- # Search Time We explain the search times mentioned in the paper in the *common response*. To ensure a fair comparison, as you rightly pointed out, we will include the training time for the MathNAS base model in the revised version. --- # Description Order We are aware of this issue and will address and rectify it in the revised version. We sincerely apologize for any inconvenience caused during your reading. --- # Misspelling Once again, we greatly appreciate your patient and meticulous review. Indeed, this oversight was our mistake. We have rechecked the number and will incorporate the necessary corrections during our revision. Accurate data presentation is vital for the advancement of NAS research. To ensure our data is correctly represented, we have reviewed all other data points in addition to the ones you highlighted.
Summary: This paper introduces MathNAS, a blockwise NAS framework. It begins by estimating the performance of the building blocks within the search space and then predicts the overall performance of a network based on the performance of its individual building blocks. This approach effectively reduces the complexity of network search from O(n^m) to O(nm). Additionally, a block performance evaluation scheme, utilizing average FLOPs, is proposed to further reduce the time complexity from O(n^m) to O(nm). Lastly, the network search process is formulated as an Integer Linear Programming (ILP) problem. Strengths: * Clarity: The paper is effectively written, employing appropriate mathematical notations throughout. * Quality: The experiments conducted in the paper are of high quality, as they include a comprehensive comparison with well-known networks in various domains, such as tabular search space, mobile CNN, ViT, and NLP. * Originality and significance: While the paper presents valuable contributions, there are some concerns that should be addressed. Please refer to the weaknesses section outlined below. Weaknesses: The paper does not cite a few highly relevant papers on blockwise NAS. * LANA: Latency Aware Network Acceleration. https://arxiv.org/pdf/2107.10624.pdf * Distilling Optimal Neural Networks: Rapid Search in Diverse Spaces. https://arxiv.org/pdf/2012.08859.pdf * BLOX: Macro Neural Architecture Search Benchmark and Algorithms. https://arxiv.org/pdf/2210.07271.pdf The authors should thoroughly discuss the distinctions between their proposed approach and the previously mentioned works, while also considering the possibility of conducting experiments to compare them. Specifically, LANA adopts a constrained ILP approach that relies on block metrics derived from factors such as delta-accuracy and delta-latency. The proposed approach bears resemblance to LANA. While the authors have made an effort to explain the validity of their approach, as depicted in Figure 1 and Section C of the appendix, it remains unconvincing that Equation 2 holds true under all circumstances. Additionally, Equations 3 and 4 are similar to those presented in LANA. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Figure 1 in the appendix requires clarification. It appears to suggest the presence of three levels of blocks, with each level accommodating parallel blocks. Could you please clarify whether the blocks are connected sequentially or in parallel? Additionally, how many levels of blocks are there? * In Algorithm 1, there is an inconsistency with the existence of b_{i,0}, which does not align with the range specified for i (i={1, m}). Is this a typographical error? * Regarding the results on NB-201, it would be beneficial to know if multiple runs were performed to obtain averages using different seeds. Similarly, for the Mobile CNN, ViT, and NLP networks, were multiple runs conducted to calculate the means and variances? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper does not explicitly mention the limitations and potential negative social impact of their work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review of our work. --- # Originality and Contribution #### Regarding the comparison of the effectiveness of MathNAS and blockwise methods, we responded in the *common response*. At the same time, we supplemented the experimental comparison between MathNAS and the block-wise methods; refer to the *rebuttal PDF*. Here we mainly clarify the difference between our work and LANA. ### Block Performance Evaluation **LANA** switches blocks on the teacher net (typically the largest network within the search space) and directly uses the resultant accuracy change as the block's performance metric. **MathNAS** conducts block switches on a network whose FLOPs represent the average within the search space. This particular network is chosen based on the observed inverse relationship between FLOPs and delta-Accuracy. We estimate block performance using the accuracy changes observed on this network. Experimental results indicate that this is a more efficient and accurate method for evaluating block performance. ### Network Accuracy Evaluation **LANA** estimates network performance by simply summing up the block performances. It assumes that the impact of a specific block switch remains consistent across different architectures. **MathNAS** employs a weighted evaluation of block performances based on the network's FLOPs, thereby offering a more precise assessment of each block's influence on overall network accuracy. This method subsequently aids in more accurate network performance predictions. Furthermore, both theoretically and empirically, we've demonstrated that the effects of a specific block switch vary across different architectures. While exploring these variances, we suggest that the influence of block performance on network accuracy can be explored from two perspectives: inherent capability and interactive capability. ### Network Latency/Energy Consumption Evaluation **LANA** estimates network latency by summing up block latencies, which restricts LANA's prediction method to macro search spaces with sequential block connections. **MathNAS** calculates the delta-Latency of each block based on the obtained block latency. It considers the network latency to be equal to the difference between the latency of the initial architecture and the delta-latency of each block comprising the network. This makes our latency prediction method applicable to micro search spaces, such as NAS-Bench-201. #### LANA and MathNAS both employ the concept of delta-Accuracy for estimating network performance. This similarity might lead one to believe that Equations 3 and 4 in MathNAS resemble those in LANA. However, it's crucial to note that the block performance evaluation method and the network performance prediction formula in MathNAS are fundamentally distinct from those in LANA. --- # Other Questions ## Figure 1 in the appendix: **In fact, this figure does not represent a specific search space.** Our intention is to demonstrate the architectural versatility of MathNAS, that is, it can be applied to search spaces where the block structure is serial or parallel. Specifically, we used four classic search spaces during the experiments: NAS-Bench-201, MobileNetV3, SuperViT, and SuperTransformer. A detailed description of their architectures is in Appendix D.1. We are aware of the possible misinterpretations that our diagrams can cause. In the revised version, we will remove this figure. 
## Typographical error in Algorithm 1: Thank you for your meticulous review. Indeed, **this is a typographical error**. In the revised version, we will make the necessary correction. ## Obtaining averages using different seeds: Given that the essence of our search algorithm is to solve an ILP (Integer Linear Programming) equation, we employed the Gurobipy and Linprog libraries in Python for the solution. Within a fixed time frame, the solution obtained by the solver is unique and deterministic; hence, **there was no need for multiple computations to obtain average values and variances** in our experiments. Details on solving the ILP equation can be found in the "common response" section. ## Limitations and negative societal impact discussion: In the revised article, we will add a discussion of the limitations of our work and potential negative societal impacts. Here we give a brief overview: ### Limitations and future work: - The theoretical explanation of the relationship between FLOPs and accuracy changes proposed by MathNAS needs to be strengthened. We intend to continue exploring it and attempt to give a complete theoretical proof from the perspective of network interpretability. - Zero-shot NAS algorithms have been proven to be more efficient. Our future goal is to investigate the potential of applying MathNAS to zero-shot NAS algorithms. ### Potential negative societal impact: Our proposed technique for rapidly designing network architectures may reduce the demand for human network architecture designers. The technology could also be misused to create harmful artificial intelligence. --- Rebuttal Comment 1.1: Comment: The authors have put in a significant amount of effort to respond to my concerns. It has clarified the difference with prior works. Please make sure the new materials and citations will be included in the revision. Based on the response, I have updated my score.
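One possible reading of the delta-latency scheme described in this rebuttal thread, as a toy numpy sketch (all latencies are hypothetical, and the sign convention is our assumption rather than the paper's exact formula): measure the base network once, measure it again with each single block swapped in, and predict any candidate network's latency additively from those deltas:

```python
import numpy as np

# Hypothetical measured latencies (ms): lat_swap[i][j] is the base network
# with block node i switched to candidate j (candidate 0 = the base block).
lat_base = 12.0
lat_swap = np.array([[12.0, 13.5, 11.2],
                     [12.0, 12.8, 12.3],
                     [12.0, 14.1, 11.7]])

delta_lat = lat_swap - lat_base          # per-block latency contributions b^L

def predict_latency(choice):
    """Additive prediction: base latency plus each chosen block's delta."""
    return lat_base + sum(delta_lat[i, j] for i, j in enumerate(choice))

print(predict_latency([1, 0, 2]))        # 12.0 + 1.5 + 0.0 - 0.3 = 13.2
```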
Rebuttal 1: Rebuttal: Thank you all for your thoughtful reviews. Here, we respond to some common concerns across all reviewers. --- # Paper Citations, Novelty, and Supplementary Experiments Existing blockwise methods such as DNA [1], DONNA [2], and LANA [3] use block distillation techniques to decompose the teacher model into blocks, obtaining the architecture of the blocks to be replaced and their performance evaluation. Following this, they use the performance of each block to guide the algorithm in finding models with superior performance. Recent work [4] has pointed out the limitations of such methods: they depend on an excellent teacher network due to their use of distillation techniques; previous block performance estimation methods are unable to effectively predict actual block performance; and these methods are only suitable for the macro search space. MathNAS overcomes the above limitations: - It does not depend on distillation but proposes a new block evaluation method based on the observed relationship between FLOPs and delta-Accuracy. This method is mathematically efficient and succinct, and it has been theoretically validated across different search spaces. - It applies to a wider variety of search spaces beyond the classical macro search space, such as the micro search space of NB201 (ref: rebuttal PDF). - Its evaluation of block performance and network accuracy prediction is more precise (ref: rebuttal PDF). - It can find superior architectures based on a full-space search, and this holds true even when compared to non-blockwise NAS methods. Moreover, the time complexity of the search algorithm is at a polynomial level. *We have conducted supplementary experiments to compare the applicability of MathNAS and previous blockwise methods across different search spaces, as well as the efficacy of network accuracy predictions. Please refer to the attached PDF for details.* In the revised version, we will include these references, discuss the differences, and provide supplementary experimental validation. [1] Changlin Li et al. Blockwisely supervised neural architecture search with knowledge distillation. 2020. [2] Bert Moons et al. Distilling optimal neural networks: Rapid search in diverse spaces. 2021. [3] Pavlo Molchanov et al. LANA: latency aware network acceleration. 2022. [4] Thomas Chau et al. BLOX: Macro Neural Architecture Search Benchmark and Algorithms. 2022. --- # Search Cost The search cost of MathNAS consists of two stages: offline network pre-training (conducted only once) and online real-time search. - During the offline network pre-training, MathNAS evaluates block performance once. - During online searching, MathNAS is capable of multiple real-time searches based on the current hardware resource constraints. To negate the influence of variations in GPU models and versions on the pre-training time, and to facilitate comparisons by future researchers, we have adopted pre-trained networks provided by existing works (MobileNetV3 [5], SuperViT [6], SuperTransformer [7], NAS-Bench-201 [8]). Consequently, all mentions of MathNAS's search cost in the paper refer solely to the real-time search time on edge devices (i.e., the time taken to solve the ILP problem). In the revised version, we will consider adding the offline network pre-training time and will adjust the existing statements accordingly for greater clarity. [5] Andrew Howard et al. Searching for mobilenetv3. 2019. [6] Chengyue Gong et al. 
Nasvit: Neural architecture search for efficient vision transformers with gradient conflict aware supernet training. 2021 [7] Hanrui Wang et al. Hat: Hardware-aware transformers for efficient natural language processing. 2020. [8] Xuanyi Dong et al. Nas-bench-201: Extending the scope of reproducible neural architecture search. 2020. --- # Searching Equation Solving Details In this section, we describe in detail the solution of the fractional objective function programming equation proposed in the paper. The solution is divided into two steps. 1. Convert the original equation into an integer linear programming equation. 2. Solve the ILP equation. ### Equation Transformation In order to transform the equation into an ILP problem, we first perform a variable substitution on the original equation [9]. $$ \text{let}\quad b^{\widetilde{B}}\_{i,j}=\cfrac{b^B\_{i,j}}{\sum\_{i=1}^m\sum\_{j=1}^nb^F\_{i,j}*b^B\_{i,j}},\quad z=\cfrac{1}{\sum\_{i=1}^m\sum\_{j=1}^nb^F\_{i,j}*b^B\_{i,j}} $$ Then the original equation can be transformed into the following integer linear programming problem: $$ \begin{split} &O = \min\limits\_{b^{\widetilde{B}},z}{ (\sum\_{i=1}^m\sum\_{j=1}^nb^A\_{i,j}\*b^{\widetilde{B}}\_{i,j}\*\overline{\mathcal{F}(\mathcal{N})})} \\\\ &s.t. \\\\ &(Lat(\widetilde{\mathcal{N}})-\hat{L})*z \leq \sum\_{i=1}^m\sum\_{j=1}^nb^L\_{i,j}*b^{\widetilde{B}}\_{i,j}, (Eng(\widetilde{\mathcal{N}})-\hat{E})*z \leq \sum\_{i=1}^m\sum\_{j=1}^nb^E\_{i,j}*b^{\widetilde{B}}\_{i,j} \\\\ &\forall{1\leq i \leq m}, \sum\_{j=1}^nb^{\widetilde{B}}\_{i,j}=z, b^{\widetilde{B}}\_{i,j}\in\left\\{0,z\right\\}. \end{split} $$ ### ILP Solving To solve the ILP equations, we use the off-the-shelf Linprog Python package and the Gurobipy Python package to find feasible candidate solutions. - Linprog is a basic linear programming solver that can be used on almost all edge devices, even on the resource-constrained Raspberry Pi. We use it to implement the branch-and-bound method and solve the ILP problem. - Gurobipy is a more powerful solver, which has a variety of advanced solving algorithms built in, such as heuristics, and can flexibly utilize all available hardware resources on the device. Although Gurobipy is powerful, it requires more hardware resources than Linprog. Therefore, for devices with limited hardware resources, we use Linprog for searching. For well-resourced devices, we use Gurobipy. [9] Siegfried Schaible et al. Fractional programming. 1983. Pdf: /pdf/9983e92fa0cd8e4ce26e324c718be767f52a8249.pdf
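To make the search step tangible, here is a toy version of the (non-fractional) block-selection problem solved with `scipy.optimize.milp` — our illustrative sketch, with random stand-ins for the block scores $b^A$, block latencies $b^L$, and the latency budget; the FLOPs-weighted fractional objective above would first be linearized via the substitution the authors show before taking this form:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
m, n = 5, 4                                   # block nodes x candidates per node

acc = rng.uniform(0.0, 1.0, (m, n))           # stand-ins for block scores b^A
lat = rng.uniform(1.0, 5.0, (m, n))           # stand-ins for block latencies b^L
lat_budget = 1.2 * lat.min(axis=1).sum()      # a feasible latency budget

c = -acc.ravel()                              # milp minimizes; negate to maximize

# Exactly one candidate block per block node.
one_hot = np.zeros((m, m * n))
for i in range(m):
    one_hot[i, i * n:(i + 1) * n] = 1.0

constraints = [
    LinearConstraint(one_hot, lb=np.ones(m), ub=np.ones(m)),
    LinearConstraint(lat.ravel()[np.newaxis, :], ub=lat_budget),
]

res = milp(c, constraints=constraints,
           integrality=np.ones(m * n), bounds=Bounds(0, 1))
choice = res.x.reshape(m, n).argmax(axis=1)
print("selected candidate per block node:", choice)
```

The same one-hot-plus-knapsack model can of course be handed to Gurobi through gurobipy on better-resourced devices, as the rebuttal describes; the SciPy version is shown only because it is dependency-light.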
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents a divide-and-conquer neural architecture search methodology that estimates DNN performance based on the individual performance of each block within a DNN. Accuracy is estimated based on the perturbed accuracy of replacing a block within the NN, and latency/energy are also used within a linear model to predict NN characteristics.
Strengths: The paper is generally well-written and the results are strong compared to the chosen baselines from both CNNs and transformer-based vision models. The efficacy of the method on 4 search spaces is impressive.
Weaknesses: There are 4 main weaknesses / questions that I have regarding the paper:
1. As a blockwise NAS approach, this paper largely ignores much of the prior literature on the topic, for example, [1,2,3]. These prior works also rely on blockwise layer statistics (sometimes obtained through distillation), and then the layerwise performance is assembled either with a learned predictor [1,2] or ILP [3], similar to what is proposed in this work.
2. A large part of the evaluation is done on NB201, which isn't a macro search space. I am a bit confused as to how that evaluation on NB201 demonstrates the presented blockwise methodology -- all blocks within NB201 are identical, just sized differently. Perhaps this is something the authors can clarify in the rebuttal? I understand that the plots in Fig. 1 are showing the accuracy delta when one block is replaced with another, but this would switch out the entire network basically, right?
3. Some conclusions (e.g. line 156) may be too specific to NB201. The inverse correlation of flops and accuracy is not a hard rule and many counterexamples can be presented. On NB201 specifically, it is widely known that this holds.
4. Search cost is written as "10 minutes"; does this include all the time to evaluate individual blocks and build up the model? I think this cost should be quantified clearly in the paper.
[1] Distilling Optimal Neural Networks: Rapid Search in Diverse Spaces
[2] Blockwisely Supervised Neural Architecture Search with Knowledge Distillation
[3] LANA: Latency Aware Network Acceleration
[4] BLOX: Macro Neural Architecture Search Benchmark and Algorithms
Technical Quality: 3 good Clarity: 3 good
Questions for Authors: see above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good
Limitations: see above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review of our work.

---

# Citations

We have carefully reviewed the mentioned works and discussed the distinctions between MathNAS and them in the *common response*. Furthermore, we compared the accuracy prediction performance of MathNAS with these works. The experimental results can be found in the *rebuttal pdf*.

---

# Experiments on NAS-Bench-201

First, let us clarify the specific application of MathNAS in the NAS-Bench-201 search space. As you rightly pointed out, NAS-Bench-201 is not a macro search space, and during a block switch, all blocks undergo changes. The "block" concept mentioned in the paper corresponds to the "edge operations" in NAS-Bench-201. Consequently, in Figure 1, a block switch within this space refers to a change in one edge operation while keeping the other edge operations constant, and the plot shows the resulting delta accuracy. This is consistent with the essential nature of switching in MathNAS. Since the paper also considers macro search spaces beyond NAS-Bench-201, we adopted the "block" concept for modeling and derivation for the sake of uniform representation. NAS-Bench-201 can be viewed as a special case of the MathNAS model. In the revised version, we will elaborate and clarify this matter further.

---

# Generality of Conclusions

We concur with your observation that conclusions drawn from prior works in the NAS field concerning NAS-Bench-201 may not necessarily hold in other search spaces. However, we believe our experiments demonstrate the generalizability of the proposed rule in two respects:

- First, we validated the inverse proportionality rule between delta-Accuracy and FLOPs on both MobileNetV3 and NAS-Bench-201, as depicted in Figure 1 of the paper. MobileNetV3, being a macro search space constructed from sequentially connected inverted residual blocks, contrasts starkly with NAS-Bench-201. The verification in the MobileNetV3 search space underscores the applicability of the stated rule across diverse architectures.
- Second, we tested the network performance prediction formula derived from the inverse relationship between FLOPs and delta-Accuracy (i.e., Equation 5) across four distinct search spaces: NAS-Bench-201, MobileNetV3, SuperViT, and SuperTransformer, as showcased in Figure 3 of the paper. These spaces encompass varied network architectures (detailed descriptions of the search space structures can be found in Appendix D.1), including CNN, GCN, and Transformer, representing the majority of contemporary NAS search space architectures. Hence, the validation across these four spaces suggests that the introduced inverse rule and the network performance prediction formula apply broadly across most search spaces.

Given the above, we feel that the statement, "A large part of the evaluation is done on NB201," does not quite align with our experimental setup. We referenced NAS-Bench-201 more frequently in our paper because it offers detailed and reliable performance information for its contained architectures, which might have inadvertently led to misunderstandings.

---

# Search Cost

After a thorough review, we could not locate any mention of a "10 minutes" search cost. Perhaps you are referring to the "10 seconds" mentioned in Table 1. In the *common response*, we provide an explanation of the search cost. In the revised version, we will provide a clearer elucidation of this aspect.
--- Rebuttal Comment 1.1: Comment: The rebuttal text and additional experiments have addressed most of my comments. Comparisons to other blockwise approaches would be a great addition to the paper. Citing and comparing the method to DONNA, DNA, BLOX, LANA is also very important in my opinion. I still don't fully understand how a perturbation of an edge operation in NAS-Bench-201 can be considered a block swap. This changes the whole model (all 9 cells in the case of NAS-Bench-201) and not just 1 block/cell, since this is a micro search space. Because most evaluations in this paper are based on this dataset, I think this is an important thing to clearly explain and flesh out. Can you please explain more on this point? I will raise my score by one point assuming I receive an adequate response to the above concern. Thanks for the rest of your work on this rebuttal.

---

Reply to Comment 1.1.1: Comment: Thanks for your kind comment. In the revised version, we will include the comparison between MathNAS and other blockwise methods.

---

To address your concerns, we clarify the rationale and methodology behind treating perturbations of edge operations in NAS-Bench-201 as block swaps from two perspectives:

* Practical Implementation (How): **During experiments within the NAS-Bench-201 space, we identify the set of edges in the same position across multiple GNN cells as a single "block".** Given that the structure of the cells in the network remains consistent, our focus is on a single cell. Hence, any alteration of an edge operation essentially translates to a corresponding change in all cells, while the other edge operations remain static.
* Theoretical Framework (Why): Regardless of the distinction between macro and micro search spaces, the networks in both are assembled from multiple mutable modules, be they blocks or edges. The capability of the entire network can be represented by the capabilities of these individual modules. In MathNAS, to explore the contribution of module capabilities to the network's performance, we evaluate changes in inherent module capabilities and their interactive capacities during module switches (i.e., Equation 3). **This module evaluation methodology is applicable to our definition of blocks in NAS-Bench-201: alterations in edge operations impact not only the specific edges' output data (inherent capability) but also the input and output data of other edges in the network (interactive capability).**

Empirical evidence from the macro search spaces (MobileNetV3, SuperViT, and SuperTransformer) and the micro search space (NAS-Bench-201) also underscores the universality of our block definition and its adaptability across various search spaces. We will clarify this matter in our revised version. Please do not hesitate to reach out with any additional queries.
Large Language Models of Code Fail at Completing Code with Potential Bugs
Accept (poster)
Summary: In general, the code completion task commonly used to evaluate LLMs assumes a clean, non-buggy starting point. However, in real-world software development, this assumption is not always valid. In this paper, the authors investigate code completion given a buggy prefix. For this, they derive two benchmarks from existing datasets. Namely, they build buggy-HumanEval by introducing semantics-altering operator changes into the HumanEval dataset and buggy-FixEval based on user submissions to coding contests. They find that performance is lower in the presence of a buggy prefix, and they introduce different techniques for mitigating this to some extent.
Contributions: - Evaluate a handful of code LLMs on code completion with a buggy prefix. - Introduce two benchmarks that will be publicly released. - Investigate different post-hoc mitigation techniques which demonstrate some improvement in performance.
Strengths: - The authors introduce two useful datasets, with synthetic and real examples, that could serve as benchmarks of buggy code completion for the research community. - The task of buggy code completion is quite interesting and also important, though it may not be completely novel, as there is prior work ("Learning to Extend Program Graphs to Work-in-Progress Code") outside the context of LLMs that studies a similar setting. - The post-hoc mitigation techniques are useful in understanding the current limits. - There are nice ablation studies and qualitative analysis with interesting findings. Specifically, it's cool to see that sometimes the completion changes to adapt to the potential bug.
Weaknesses: - To run the test suite, both the prefix and predicted suffix are needed. The bug is in the prefix that is given to the model (not generated by the model). So, it is possible that tests fail only because of the bug in the prefix and not because of anything the model generated. In lines 96-101, the authors explain that they loosen the constraints such that they allow the model to make changes to the prefix; however, this is an additional bug-fixing task that the model needs to perform, on top of code completion. In other words, in the buggy code completion setting, there are two possible ways to pass the tests: (1) fix the bug in the prefix and then complete, or (2) try to complete the code by adapting to the buggy code. Therefore, it doesn't seem fair to compare directly with completion based only on a clean prefix. Since the main claim of this paper is that the task of code completion suffers as a result of buggy prefixes, I feel that it is important to evaluate the completion component alone. Would it be possible to decouple things a bit? For example, when running the test suite only, would it be possible to replace the buggy prefix with the clean prefix to evaluate whether the completion would have been correct if the prefix (which is not model generated) was correct? - For the post-hoc mitigation techniques, the rewriting uses a BERT-based model, which is a considerably weaker model, and even though some code pre-training is done on top of it, it likely does not reach the level of modern code LLMs. Perhaps using a stronger model would have yielded better performance.
Technical Quality: 2 fair Clarity: 3 good
Questions for Authors: - Please clarify the point I raised in the section above. - The authors claim multiple times that code repair on short context is an ill-defined problem.
No references are provided here, and actually, people have studied such tasks in the literature (e.g., bug-fixing with a single statement, such as ManySStuBs4J, or a handful of statements, such as the Tufano et al. BFP dataset). Could the authors provide an explanation for their argument? - When introducing new benchmarks, it is good to provide dataset statistics (e.g., average lengths). - Missing reference: "Learning to Extend Program Graphs to Work-in-Progress Code" (they consider code completion on work-in-progress code) [Li, Maddison, and Tarlow; 2021] - Consider placing Figure 2 closer to the results. - It would be interesting to run your experiments on GPT-3.5 or GPT-4.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair
Limitations: One limitation that the authors have not mentioned is the assumption that FixEval closely aligns with a real-world scenario. Coding contest problems do not necessarily align with a more general software engineering setting. However, it is challenging to get a test suite for such a setting, and so it is more difficult to evaluate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the detailed review and constructive suggestions. We are encouraged that the Reviewer finds our task interesting and important, and that our studies and analysis provide interesting and cool findings. We appreciate your acknowledgment of the thoroughness of our evaluation and insights, as well as the contribution of new benchmark datasets and post hoc mitigation techniques. Please find our answers to your comments and questions below.

**Comment:** *It is possible that tests fail only because of the bug in the prefix and not because of anything the model generated.*; *it doesn't seem fair to compare directly with completion only based on a clean prefix.*; *Would it be possible to decouple things a bit?*

Thanks for your suggestion. First, we note that:

* The performance with a "clean prefix" is considered a reference to see how potential bugs affect the LLM rather than a direct method for comparison, so our purpose is to measure the gap between scores when using buggy and clean prefixes, which reflects how well models react to the change.
* The "potential bugs" in buggy prefixes do not necessarily make the model fail. Instead, a good model should manage to complete buggy prefixes with a suitable completion that satisfies the test cases. We showed such interesting examples in Section 4.6.

Per your suggestion, we have evaluated and report the following results (pass@1 score):

| | buggy-HumanEval (codegen-2B-mono) | buggy-HumanEval (codegen-350M-mono) | buggy-FixEval (codegen-2B-mono) | buggy-FixEval (codegen-350M-mono) |
|:---:|:---:|:---:|:---:|:---:|
|Clean prefix + clean-based completion| 54.9 | 43 | 37.8 | 27.6 |
|Buggy prefix + buggy-based completion| 3.1 | 0.7 | 4.3 | 2.4 |
|Clean prefix + buggy-based completion| 40.8 | 31.6 | 4.6 | 2.5 |

Here, the bottom row (clean prefix + buggy-based completion) is when we combine the clean prefix with the completion generated from the buggy prefix. As we can see, the result reflects that models fail to react to the change in the buggy prefix, as discussed in Section 4.2 (*Why do Code-LLMs fail at bCC?*).

**Comment:** *For the post-hoc mitigation techniques, the rewriting uses a BERT-based model which is a considerably weaker model ... Perhaps using a stronger model would have yielded better performance.*

In the rewriting method, we use InCoder [Fried et al., 2022] because, to the best of our knowledge, it is the latest and best LLM trained for infilling tasks — suitable for fixing bugs in the middle of a program. In fact, InCoder supports both left-to-right generation and editing (via masking and infilling). Nevertheless, our design allows us to easily replace InCoder with a better infilling model in the future for better repairing performance.

[Fried et al., 2022] InCoder: A generative model for code infilling and synthesis. arXiv preprint arXiv:2204.05999, 2022.

**Q:** *The authors claim multiple times that code repair on short context is an ill-defined problem. … Could the authors provide an explanation for their argument?*

Thanks for your question. In fact, our statement is that bugginess is only meaningful as a property of a **complete program**; thus (in line 31), we meant that given an **incomplete and short** code context (i.e., without a full program), detecting or fixing bugs is ill-defined. Thanks for the reference: in [Tufano et al., 2019], we believe that while the context is short, it is a full program, e.g., the Java programs in their Figure 2. We will revise the statement to make it transparent.
**Q:** *When introducing new benchmarks, it is good to provide dataset statistics (e.g., average lengths).*

Thanks. We provided detailed statistics of the datasets in the supplementary material (Section A). Per your suggestion, we also report the length statistics (in number of tokens) as follows.

|Percentiles|50th|90th|95th|98th|99th|100th|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|buggy-HumanEval|214|470|550|595.6|606.7|617|
|buggy-FixEval|171|242.7|262|330|402.4|566|

We will add more statistics in the final version.

**Q:** *Missing reference: "Learning to Extend Program Graphs to Work-in-Progress Code" (they consider code completion on work-in-progress code) [Li, Maddison, and Tarlow; 2021]*

Thanks for bringing this work to our attention. We note that this work focuses on transfer learning of token-relation prediction for code completion in a work-in-progress code scenario, which differs from our setting and our training-free post-hoc techniques. We have added this work to the related work.

**Q:** *It would be interesting to run your experiments on GPT-3.5 or GPT-4.*

Thanks for your suggestion. Due to limited time and flexibility, we have conducted extra experiments with the very large model CodeGen-16B-mono instead of GPT-4. We are also planning to run our experiments on GPTs and incorporate the results in the next version. Here are the results for CodeGen-16B-mono:

|buggy-HumanEval (200 instances) | pass@1| pass@10|pass@100|
|:---:|:---:|:---:|:---:|
|Clean prefix| 78.5 | 93.1 | 95.7 |
|Buggy prefix| 8.9 | 20.4 | 25.2|
|Removal| 43.2 | 70.9 | 80.1 |
|Completion->rewriting| 14.4 | 29.7 | 35.6|
|Rewriting->completion|70.9 | 80.2 | 86.9 |

|buggy-FixEval (100 instances)| pass@1| pass@10|pass@100|
|:---:|:---:|:---:|:---:|
|Clean prefix| 59.7 | 72.6 | 75.7 |
|Buggy prefix| 8.5 | 18.8 | 37.7 |
|Removal| 20 | 49.8 | 68 |
|Completion->rewriting| 8.6 | 18.4 | 37.7|
|Rewriting->completion|8.9 | 19 | 32.9 |

As we can see, CodeGen-16B-mono exhibits a similar phenomenon to our observations.

**Minor**: Thanks. We revised.

**Limitation:** *Coding contest problems do not necessarily align with a more general software engineering setting*

Thanks. We will elaborate on this limitation in our manuscript.

---

**Final notes:** We hope our responses and new experimental results can help you better appreciate our work and consider increasing your score and supporting the acceptance of our paper. Thanks again for providing valuable feedback!

---

Rebuttal Comment 1.1: Comment: Thank you for providing detailed explanations and new results. Just a few more points:

- In the first table you provided, why are the results for FixEval surprisingly low for clean prefix + buggy-based completion, compared to HumanEval?
- For the last two CodeGen tables, could you also include this clean prefix + buggy-based completion? From the HumanEval results in the first table, it seems like replacing the buggy prefix with the clean prefix helped to significantly close the gap in performance for HumanEval. I'd like to see how that affects stronger models as well.

---

Reply to Comment 1.1.1: Title: Re: Official Comment by Reviewer 8UBb Comment: Thank you for your comments.
Please find our answers as follows.

**Question:** *In the first table you provided, why are the results for FixEval surprisingly low for clean prefix + buggy-based completion, compared to HumanEval?*

In buggy-FixEval (bFE), a buggy prefix may be very different from its corresponding clean prefix, e.g., differences in more than one place, variable changes, etc., while in buggy-HumanEval (bHE), the difference between a buggy and a clean prefix is guaranteed to be a single operator change. This means that a code language model is more likely to generate different completions for a buggy prefix in bFE than for one in bHE. Thus, simply concatenating the clean prefix with a model completion from the buggy prefix is less likely to result in a functional program for bFE.

**Question:** *For the last two CodeGen tables, could you also include this clean prefix + buggy-based completion? … I'd like to see how that affects stronger models as well.*

Here we include "clean prefix + buggy-based completion" pass@1 results using CodeGen-16B-mono.

|buggy-| HumanEval (200 instances) | FixEval (100 instances)|
|:---:|:---:|:---:|
|Clean prefix + clean-based completion| 78.5 | 59.7|
|Buggy prefix + buggy-based completion| 8.9 | 8.5|
|Clean prefix + buggy-based completion| 49.4| 10.6|

We see that the trend is consistent: CodeGen-16B-mono is able to reach a relatively high pass rate compared to clean-prefix completions on buggy-HumanEval, but not so on buggy-FixEval.

**Comment:** *It seems like replacing the buggy prefix with the clean prefix helped to significantly close the gap in performance for HumanEval.*

We would like to remark that a high pass rate on bHE says more about a model's inability to adapt to the potential bugs injected into the prefix than about the difficulty of the buggy prefix itself. In particular, a success case for "clean prefix + buggy-based completion" is almost always a failure case for "buggy prefix + buggy-based completion", as is guaranteed by the tested semantic-altering operator flip in the prefix part. Meanwhile, the failure of all "buggy prefix + buggy-based completion" cases for a buggy prefix does not imply that a functionally correct buggy-based completion is impossible. Instead, it suggests the model's inability to come up with such completions. Nevertheless, while access to clean prefixes is unavailable in our setting (we use them only for relative comparison), the "clean prefix + buggy-based completion" results shed light on the counterfactual performance of buggy code completion and help quantitatively demonstrate the current models' insensitivity to potential bugs. Thanks again for the great questions and suggestions. We will incorporate this discussion into our revised version.
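For reference, the pass@k numbers quoted throughout this thread presumably follow the standard unbiased estimator introduced with HumanEval [Chen et al., 2021]. Below is a minimal sketch of that estimator under the assumption that n samples are drawn per problem, c of which pass the tests; the sample counts in the example are made up.

```python
# Hedged sketch of the unbiased pass@k estimator (Chen et al., 2021), which
# we assume underlies the pass@1/10/100 numbers above; not taken from the
# paper's released code.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Estimate P(at least one of k samples passes), given n samples
    per problem of which c pass. Assumes k <= n."""
    if n - c < k:  # every size-k subset must contain a passing sample
        return 1.0
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable product
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Hypothetical example: 200 samples per problem, 18 of them pass.
print(round(pass_at_k(200, 18, 1), 3))   # 0.09, i.e., exactly c/n
print(round(pass_at_k(200, 18, 10), 3))  # well above pass@1
```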
Summary: The paper studies the ability of large language models for code to handle prompts which contain "buggy" code. To this end, it introduces two datasets: (i) buggy-HumanEval, which is constructed by injecting synthetic wrong binary operator bugs into HumanEval problems, and (ii) buggy-FixEval, which is constructed from real bugs in user submissions to competitive programming tasks. The evaluation finds that "buggy" prompts significantly degrade LLMs' performance. The paper also investigates mitigation methods for "buggy" prompts.
Strengths: The strengths of the paper are as follows:
1. The paper is well-written and easy-to-follow. It provides useful examples for illustration.
2. The paper introduces two new datasets, which can be useful for future research.
3. The evaluation is thorough and provides new insights.
Weaknesses:
### Main weaknesses
Definition 2.1 is a core definition in the paper but has two major issues. First, defining bugginess over code prefixes sounds incorrect to me, because bugginess is a property for complete programs. Second, the original prefix $s$ can also be viewed as a "buggy prefix": there likely exists $c'$ such that $t' = s' :: c'$ satisfies $h$ and $t = s :: c'$ does not satisfy $h$. In fact, the paper shows the existence of such cases (e.g., Figure 12).
Consider a "buggy prefix" $s'$ and a program $t' = s' :: c'$. Under what conditions does there exist $c'$ such that $t'$ satisfies $h$? Or when is $t'$ unlikely to satisfy $h$? For these two questions, the paper provides some empirical evidence (e.g., Figure 12 and Lines 640-642) but lacks a theoretical analysis.
A highlight of the paper is the discussion over synthetic and real bugs (buggy-HumanEval vs. buggy-FixEval). However, the paper does not capture data imbalance in real bug distribution. That is, bugs occur very infrequently in practice, as shown in [1] and [2] below (not discussed in the paper). Ignoring such data imbalance can threaten the validity of some results. For example, Tables 2 and 3 are obtained on datasets with a balanced ratio of "buggy" and "non-buggy" prefixes.
- [1] He and Vechev, On Distribution Shift in Learning-based Bug Detectors, ICML 2022.
- [2] Karampatsis and Sutton, How Often Do Single-Statement Bugs Occur? The ManySStuBs4J Dataset, MSR 2020.
For buggy-FixEval, the absolute improvement from rewriting->completion or completion->rewriting is small, suggesting that these methods might be of limited practical effectiveness.
### Other questions:
1. In Table 1, the performance of removal->completion on buggy-HumanEval is very low. This is surprising to me, because I would imagine the performance to be much better if the original HumanEval prompt is used as input (as in Figure 5(a)). How do you perform the removal exactly?
2. In Section 4.3, the results of completing "clean prefixes" are treated as the upper bounds for completing "buggy prefixes". This does not sound correct to me. Again, I think the correct upper bound is 100 minus the percentage of "buggy prefix" $s'$ for which no program $t' = s' :: c'$ satisfies $h$.
### Small issues:
1. Line 235: I could not find Appendix C.3.
2. Line 665: figure 8 -> Figure 8.
3. Table 5 is not referenced in Appendix B.3.
Technical Quality: 2 fair Clarity: 4 excellent
Questions for Authors: Please consider addressing the points raised in the "Weakness" section.
Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 4 excellent Contribution: 2 fair Limitations: The paper provides a sufficient discussion of limitations and potential negative impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer Cn5Q for the detailed review and constructive suggestions. We are encouraged that the Reviewer finds our paper well-written and easy to follow with illustrative examples. We appreciate your acknowledgment of the thoroughness of our evaluation and insights and the contribution of new benchmark datasets.

**Comment:** *bugginess is a property for complete programs.*

Thanks. We fully acknowledge this property of bugs in the manuscript: "bugginess" is a property of complete programs, not code prefixes (and the bugginess of a code prefix can only be assessed in conjunction with a completion). Therefore, our work refers to the studied patterns as "potential bugs", not "bugs" per se. For brevity, we sometimes drop "potential" or "potentially" when mentioning potential bugs and potentially buggy prefixes where the context is clear. We will make this point clearer near the definition.

**Comment:** *The original prefix can also be viewed as a "buggy prefix".... For these two questions, the paper provides some empirical evidence ... but lacks a theoretical analysis.*

Thanks for a great comment. Generally speaking, the bugginess level of a prefix is relative to a reference prefix; thus, our definition regards a prefix as potentially buggy w.r.t. a reference prefix (which we call a "clean" prefix). Specifically, we view a "potential bug" as a pattern occurring unnaturally w.r.t. a naturally written program. However, under the current definition in the manuscript, it is indeed possible that the original prefix can also be viewed as a "buggy prefix". We will refine these terms (clean as the reference prefix, and buggy as relatively/potentially buggy). Here, we provide a detailed clarification and hope this helps the Reviewer better assess our intuition.

* Intuitively, the reference program $t := s::c$ captures the natural distribution of programs that correlate $s$ with a preferred implementation of a certain programming task, e.g., functionally correct, easy to read, following software development best practices, etc. So, deviations from it are likely bad, especially when they introduce functional incorrectness.
* A more holistic definition would quantify naturalness and various aspects of preferred quality traits and define "potential bugginess" as a likelihood value. However, quantifying the properties and distribution of counterfactual implementations $t'$ and completions $c'$ is inherently difficult, if not impossible. Even a full-scale empirical analysis would require significant effort in collecting and reviewing cases manually. We provide empirical observations with model completions. We adopt the current operational definition for its simplicity and verifiability. It suffices to guide us through this initial exploration of the impact of a previously under-investigated scenario. Nevertheless, we will refine the terminology (buggy, clean) to reflect our definition and intuition precisely.

Regarding the analysis, our work aims to answer these two questions empirically first, leaving theoretical analysis to future studies.

**Comment:** *Imbalance in real bug distribution: Ignoring such data imbalance can threaten the validity of some results.*

Thanks. Our work identifies and focuses on studying the behavior of LLMs in a setting with potential bugs. How often potential bugs appear in actual code development when a user triggers auto-completion is indeed an open question and worth studying. It is, however, beyond the scope of this study.
We provide preliminary observations on balancing the performance on potentially buggy and clean prefixes in Tables 2 and 3. We hope our study will raise awareness about the existence of potential bugs and call for methods that deal well with both cases, whether a potential bug exists or not.

**Comment:** *methods might be of limited practical effectiveness*

Thanks. We propose these simple strategies to help understand the limits of current LLMs rather than as advanced techniques. We hope our study can motivate further work on more advanced methods.

**Q:** *How do you perform the removal exactly?*

For the *removal* method, we remove the partial code from the prefix to keep only the problem statement and the function header (with optional examples), so the input for removal is the same as the prompt in the original HumanEval. We note that:

* We double-checked that, on HumanEval, our removal method reproduces the pass@k performance of CodeGen models reported in [Nijkamp et al., 2022].
* buggy-HumanEval is derived from a subset of considerably longer and harder problems in HumanEval. In particular, the average number of lines of our selected problems is 8.2, compared to 2.9 for the rest, resulting in much lower pass@k scores on our subset.

**Q:** *The correct upper bound is 100 minus the percentage of "buggy prefix" s' for which no program t' = s'::c' satisfies h*

Thanks. We agree that the term "upper bound" is improperly used here. We will refine the term "upper bound" to "expectation from reference prefixes". While the suggested bound is theoretically the upper bound, an empirical estimate seems implausible because of the number of such buggy s'. Thus, we use the results on the "clean prefix" as the expectation against which to evaluate how well the model can do with the corresponding "buggy prefix".

**Minor:** Thanks. We fixed these issues.

---

**Final note:** We want to thank you again for such exemplary and detailed comments. We will integrate these answers into our new version. We hope our responses can help you better appreciate our work and consider increasing your score and supporting the acceptance of our paper. We agree that our current definition has some ambiguity, though it serves the purpose of empirical verification. Still, given its timely development, novelty, significance, and potential for future research, we believe our work can have a good impact on the research community. Thanks again for your careful reading!

---

Rebuttal Comment 1.1: Comment: I have read the other reviews and the author rebuttals. I would like to thank the authors for providing helpful clarifications. However, I think two fundamental issues are not fully addressed, as discussed below. I don't see how they can be addressed without major rewriting. Therefore, I still consider this paper borderline and do not raise my rating.

### Definition

I still don't find "potential bugs" a reasonable term because LLMs can potentially generate bugs from any prefix. I think (as also suggested in the author rebuttal) the property studied in this paper should be the "naturalness" of the prefix, in the context of functional correctness for code completion. That is, LLMs are accurate at generating functionally correct code for natural prefixes (e.g., the original prefixes in HumanEval), but fail to generalize to unnatural prefixes (e.g., those with anti-patterns applied).

### Data imbalance

I understand the results in Tables 2 and 3. But they are done in an unrealistic setting with a balanced ratio of "buggy" and "clean" prefixes.
In a real-world setting, it is critical to consider an imbalanced buggy-clean ratio (e.g., 1:1000 as shown in the paper I cited). Due to the large number of "clean" prefixes, even a slight decrease of pass@1 on "clean" prefixes will drastically increase the absolute number of bugs. While your approach improves pass@1 for "buggy" prefixes, it might only translate to a minor reduction in the absolute number of bugs. As a result, the approach might end up introducing more bugs. I suggest the authors at least cite the two papers I mentioned and provide a discussion on this, so that readers can have a correct understanding of the practical usefulness of your approach.

---

Reply to Comment 1.1.1: Title: Re: Official Comment by Reviewer Cn5Q Comment: Thanks for your prompt response. Please find our response as follows.

### **Data imbalance**

Thanks for the suggestion. We have carefully read the papers and added them to our manuscript. Due to the limited length of the rebuttal this year, we couldn't present our discussion there. Here is our detailed discussion of the related works:

> [1] finds that distribution shifts between the distributions of artificial and real bugs make trained models unusable in practice (<10% precision). Thus, [1] exposes and attempts to mitigate the negative impact on bug repairers caused by the domain shift between synthetic training bugs and realistic test bugs with a proposed two-stage training approach: first on a balanced synthetic dataset, then on imbalanced data with a small number of real bugs. It is relevant to us in 1) that we assess the naive completion and baseline mitigations on both synthetic and realistic buggy-code completion benchmarks, and 2) that there is possibly a similar impact of domain shift on the program rewriters used in our baseline methods, as they may not have been trained on such examples. [2] focuses on single-statement bug fixes. They report an interesting data point on how prevalent single-statement bug fixes are in popular open-source Java projects. There are more bugs beyond "simple bug fixes". Finished code in popular open-source projects is likely to have much higher quality than work-in-progress draft code, and there may be unidentified bugs. The bug density may also differ between Java and Python. Thus, the presence of potential bugs can be much more frequent.

* [1] He, Jingxuan, Luca Beurer-Kellner, and Martin Vechev. "On distribution shift in learning-based bug detectors." In International Conference on Machine Learning, pp. 8559-8580. PMLR, 2022.
* [2] Karampatsis, Rafael-Michael, and Charles Sutton. "How often do single-statement bugs occur? The ManySStuBs4J dataset." In Proceedings of the 17th International Conference on Mining Software Repositories, pp. 573-577. 2020.

In the revised version, we will add a discussion on practical usefulness under real-world imbalanced bug distributions. *However*, it's worth noting that our work is largely orthogonal to and complements the existing (suggested) works:

1. **Scope of our paper:** The main point of the paper is to expose the issue and study the impact when potential bugs are present. The baseline methods are first attempts to demonstrate the task's difficulty rather than to offer a well-rounded solution that works in all cases. Also, the current assessment can tell us how well a method deals with buggy prefixes in relative terms.
2. Beyond our scope, how can our provided datasets and evaluation be used in imbalanced settings?
* (i) *How about the current evaluation?* > One can use a weighted version of pass@k between clean and buggy prefixes, given prior knowledge about the clean/bug ratio (a small illustrative sketch follows at the end of this reply).
* (ii) *Regarding our methods:* the distribution shift [1] significantly affects training-based approaches, as mentioned in [1], while our approaches are post hoc, without any training.
* (iii) Furthermore, one can use our balanced dataset for training. For instance, the approach in [1] can use our balanced dataset for the first stage of their two-stage approach. Given prior knowledge about the clean-buggy ratio, one can sample from our clean and buggy datasets to create an imbalanced dataset.

### **Potential Bug Definition**

As mentioned in our response, we will revise the definition to make it more precise, with more elaboration on its meaning. Our team debated using "naturalness" or "potential bugs" to describe this phenomenon and eventually settled on "potential bug", since we believe it to be more descriptive of the phenomenon than "naturalness", which would be even more ambiguous. Furthermore, while the terms can be refined, they do not significantly affect the rest of the paper and its results.

---

Given these two points, we will refine the definition of potential bugs and the discussion on imbalance in the final revision. We hope that our responses will further address your concerns.
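As a small illustration of the weighted pass@k idea in point (i) above: given a prior probability of encountering a potentially buggy prefix, one could report a single expected score. The sketch below is ours, not from the paper, and the 1:1000 buggy:clean prior is borrowed from the reviewer's cited figure purely for illustration.

```python
# Illustrative weighted pass@k: a convex combination of clean- and
# buggy-prefix scores under an assumed prior p_buggy. The default prior
# (1 buggy per 1000 clean prefixes) is a hypothetical placeholder.
def weighted_pass_at_k(pass_clean: float, pass_buggy: float,
                       p_buggy: float = 1.0 / 1001.0) -> float:
    return (1.0 - p_buggy) * pass_clean + p_buggy * pass_buggy

# e.g., pass@1 of 54.9 on clean prefixes and 3.1 on buggy ones:
print(weighted_pass_at_k(54.9, 3.1))  # ~54.85, dominated by the clean score
```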
Summary: This paper identifies and analyses a limitation of LLMs for code: they fail at completing code with potential bugs. This is a problem because LLMs are deployed in real-world software projects, which may contain many bugs. The authors attempt to bridge this gap between the test environment and the real world by creating buggy versions of two popular datasets used for testing LLMs' code generation capabilities. They experiment with two models, in two sizes each, and show how their performance is impacted by introducing potential bugs into the test dataset. The authors also try different techniques for addressing the limitations of LLMs when completing potentially buggy code.
Strengths: The authors have highlighted a relevant problem with LLMs. It is a well-written paper, and the authors demonstrate a good understanding of the problem. They consider many different possibilities when generating the data and run thorough experiments covering many scenarios. They also explore intuitive solutions to overcome the limitation of LLMs that they have highlighted. The strengths of the paper are:
1. The paper highlights an interesting and a very real problem.
2. They suggest good post-hoc strategies that cover some of the gap in model performance introduced by the buggy dataset.
3. They experiment with multiple datasets.
4. They experiment with multiple open-source models of different sizes.
5. They will make the new buggy versions of the datasets available to the public.
Weaknesses: The paper has the following weaknesses:
1. The scope of the problem highlighted is very narrow. It is limited to code generation, and even within code generation it is limited to code completion. It does not even apply to natural-language-to-code generation. This is entirely because of the nature of the problem highlighted.
2. They have experimented with multiple models of different sizes, but they did not try a very large LLM. As we know, as model size increases, new capabilities get added to the model. This leaves the question open as to whether a much larger model, like the 16B-parameter CodeGen, would exhibit the same limitation or have the same gap in performance after post-hoc techniques.
Technical Quality: 3 good Clarity: 3 good
Questions for Authors: Do you think a much larger model will have the same limitation? Do you plan to experiment with larger models? For code completion applications deployed in IDEs, rewriting user code is generally not expected. How do this limitation, and the fact (which you have mentioned in the paper) that it may be impossible for the model to avoid some potential bugs, affect the scope of the problem? More generally, how do you expect code completion applications to deal with buggy code?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good
Limitations: The authors should write more about the scope of the problem and its impact on code completion applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the detailed review and constructive suggestions. We are encouraged that the Reviewer finds our problem interesting and practical, and our paper well-written and well-demonstrated. We appreciate your acknowledgment of the contribution of new benchmark datasets and repair methods to the community.

**Question:** *Do you think a much larger model will have the same limitation? Do you plan to experiment with larger models, like the 16B-parameter CodeGen?*

Thanks. To better answer your question, we have conducted extra experiments with CodeGen-16B-mono. We note that, due to limited time and computation, we provide results on a subset of the original datasets instead. We will continue our experiments and incorporate results on all datasets in the final version.

|buggy-HumanEval (200 instances) | pass@1| pass@10|pass@100|
|:---:|:---:|:---:|:---:|
|Clean prefix| 78.5 | 93.1 | 95.7 |
|Buggy prefix| 8.9 | 20.4 | 25.2|
|Removal| 43.2 | 70.9 | 80.1 |
|Completion->rewriting| 14.4 | 29.7 | 35.6|
|Rewriting->completion|70.9 | 80.2 | 86.9 |

|buggy-FixEval (100 instances)| pass@1| pass@10|pass@100|
|:---:|:---:|:---:|:---:|
|Clean prefix| 59.7 | 72.6 | 75.7 |
|Buggy prefix| 8.5 | 18.8 | 37.7 |
|Removal| 20 | 49.8 | 68 |
|Completion->rewriting| 8.6 | 18.4 | 37.7|
|Rewriting->completion|8.9 | 19 | 32.9 |

As we can see, CodeGen-16B-mono exhibits a similar phenomenon to our observations in the manuscript (with the smaller versions), i.e., potential bugs make it harder for the LLM to generate correct programs, and post-hoc techniques still suffer from a large gap to clean-prefix completion.

**Question:** *How do the practice (rewriting user code is generally not expected) and the unavoidability of potential bugs affect the scope of the problem?*

Great point! We do not commonly expect IDE smart tools to rewrite or change user code when recommending completions because no one has implemented such functionality. We believe work-in-progress code is a draft: less refined and more error-prone. Thus, it should be viewed as a hint of user intent rather than a golden reference. From this view, it follows naturally that if a pair programmer or a smart tool identifies, or believes, that a certain part of the partial code is not what the developer meant, they should suggest the developer change their draft code rather than blindly continue it. From a user-experience perspective, the IDE can display code change suggestions for a user's existing code if the smart tool believes there is an error and some part of the code should be changed to something else. Similar functionality already exists for other types of code change suggestions, such as spelling corrections or missing imports.

**Question:** *More generally, how do you expect code completion applications to deal with buggy code?*

Thanks for a great question. The response to the previous point described a potential user experience with smart completion tools. On the modeling side, we would like to see code completion solutions take a more holistic view of the program specification, user intent, code and project context, and similar code and coding conventions, rather than the narrow task of predicting what follows next from a string of code. This work is motivated by treating partial code not just as code context but as a hint of user intent.
We explore the impact of "anti-patterns" induced by errors or drafts and present results on how we can possibly do better if we allow completion solutions to deviate from given code prefixes.

---

**Final notes**: We would like to thank the Reviewer for the constructive comments and appreciation of our findings. Per your suggestion, we will incorporate the new experiments on a very large model and our answer about the scope of the problem and its impact on code completion applications. We hope our response makes you consider increasing your score and further supporting the acceptance of our paper.

---

Rebuttal Comment 1.1: Title: Thanks for the update. Increasing the score to Accept. Comment: Thanks for the explanation. I am satisfied with the new experiment that you have added and your explanations. Because you have agreed to incorporate the new experiment and the discussion on problem scope, I am increasing the score to Accept. Good luck!

---

Reply to Comment 1.1.1: Title: Re: Thanks for the update. Increasing the score to Accept. Comment: Thank you for the positive feedback!
Summary: Understanding the buggy code completion abilities of large language models (LLMs) is a vital topic considering the increasing trend of adopting LLMs for programming assistance. This paper studies the buggy code completion problem on different LLMs and datasets and takes a step toward investigating three post-hoc methods to mitigate the effect of potential bugs. The authors evaluate the capabilities of four LLMs on buggy code with semantic changes and further compare the effectiveness of three repairing methods from multiple aspects.
Strengths: + Timely and vital problem. + The overall presentation of the paper is sound. + The experiments are conducted in a good manner.
### Significance: This topic itself is motivating, as the presence of potential bugs in code completion is common and unavoidable in program development. The authors construct two datasets for buggy code completion, which can be an asset for subsequent research in this domain.
### Novelty: Understanding the capabilities of LLMs on buggy code completion and the corresponding repairing methods is novel.
### Verifiability and Transparency: Although the replication package is not yet available, the authors provide rich information in the supplementary material (appendix) to support the findings and discussions in the manuscript.
### Presentation: Overall, the manuscript is written in a good manner, and I enjoyed reading this paper.
Weaknesses: - Lack of justification for some technical details. - Some experimental results are not sufficiently explained.
This manuscript is generally easy to follow; however, some technical details and experimental results still lack explanations and justifications. Please see the comments below:
1. The definition of "potential bugs" is somewhat ambiguous. In line 43, the authors state that "potential bugs are defined in relation to specific code completions and are not truly bugs per se without completions." However, during dataset construction, the authors create candidate buggy samples by substituting operators with their semantic opposites, which raises at least one violation of the test cases. These statements can cause some confusion about the definition of potential bugs.
2. In addition to the previous comment, the authors mention the injected bugs are not necessarily "incorrect." More justification is needed to clearly explain the definition of the correctness of the bugs introduced in buggy code completion.
3. For the Buggy-FixEval dataset, the authors apply a limit on the character-level edit distance between the clean prefix and the buggy one. Does that mean the constructed buggy prefix can have semantic changes at multiple spots? If so, this may raise more challenges for the subsequent Rewriting->Completion repairing method, since it contains a buggy line localization process.
4. The Rewriting->Completion method needs more details and justification, specifically for the likelihood-based measurement used in buggy line selection. Although Appendix B.1 provides additional information on the likelihood measurement, no evaluation or discussion is found to justify the effectiveness of such a measurement for line selection. Although the authors state a heuristic oracle is used for comparison, limited information is available to give a concrete understanding. To the best of my knowledge, the token-level probability distribution in LLMs comes with relatively high predictive uncertainties, which may not be reliable as an indicator for buggy line localization.
5.
In the manuscript, the authors report the performance of different repairing methods with the pass@1 metric. From Appendix D Figure 6, I observe some interesting trends at pass@100 that differ from the performance at pass@1. Specifically, the Rewriting->Completion approach shows performance on par with, or even beyond, the other two methods. The authors only discuss the findings for pass@1 in the manuscript; more insights from Figure 6 would be an asset to enrich the soundness of the paper.
6. In terms of the effect of bug and split locations, a more in-depth discussion could help deliver a better understanding of the characteristics of different repair methods. Namely, the authors may present some thoughts on why Rewriting->Completion and Completion->Rewriting have distinct performances w.r.t. different bug and split locations.
Technical Quality: 3 good Clarity: 3 good
Questions for Authors:
1. Are there any experiments conducted to evaluate the effectiveness of the likelihood-based measurement for buggy line localization? Is this method effective for both single-line and multi-line detection?
2. Can the constructed buggy prefixes in the Buggy-FixEval dataset have semantic changes at multiple spots?
3. From Figure 6, are there any thoughts on why the Rewriting->Completion method has relatively better performance at higher k values?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good
Limitations: The overall manuscript is sound. More implementation details about the dataset construction (Buggy-FixEval) and repair methods (likelihood-based measures) would help to enhance the adoptability of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer XA1K for the detailed review and constructive suggestions. We are very excited that you enjoyed reading our work and found our problem timely, vital, and motivating, and our manuscript well-written. We also appreciate your acknowledgment of the novelty and the contribution of new benchmark datasets and repair methods to the community. Please find our detailed responses below.

**Comment:** *... These statements (line 43 and data construction) can somehow cause confusion about the definition of potential bugs.*

Thanks for the comment. During the construction of buggy-HumanEval, we alter the code prefix so that the prefix becomes buggy, i.e., fails test cases, with respect to the code suffix, i.e., the reference completion. The altered prefix is not itself "buggy per se" because bugginess is mostly meaningless without a complete program. It IS buggy, however, "in relation to specific code completions" — in this case, the reference completion, i.e., the original suffix. We are revising these statements in the updated version.

**Comment:** *More justifications are needed to clearly explain the definition of the correctness of the bugs introduced in buggy code completion.*

Thanks for a good question. This again relates to the point that it is meaningless to say whether partial code is "buggy per se" or "incorrect by itself". It can be incorrect with one completion and at the same time correct with a different completion. This ambiguity is inherent and is exactly what makes the problem difficult and interesting. We will add more justifications and examples to make the explanation more precise.

**Question:** *Can the constructed buggy prefixes in the Buggy-FixEval dataset have semantic changes at multiple spots?*

Yes, theoretically speaking, a buggy prefix can have more than one semantic change. However, through our manual inspection, most buggy prefixes contain only one potential bug, given that the programs are mostly short. Regarding rewriting->completion, we heuristically assign higher priority to lines that appear early in the program among lines with high probabilities of containing bugs.

**Question:** *Justifications for the likelihood-based measurement used in buggy line selection*

* **Experimental results**: Thanks for the suggestion. The accuracies of localizing the line with a potential bug (with the same setting described in the manuscript) are approximately 82% and 53% on buggy-HumanEval and buggy-FixEval, respectively. For buggy-FixEval, we compare the detected line with the line of the first semantic difference between the buggy and clean prefixes.
* **comment 1**: *" … the token-level probability distribution in LLMs comes with relatively high predictive uncertainties, which may not be reliable as an indicator for buggy line localization."*
> This is a good point. Indeed, our algorithm mitigates this variance by (i) measuring, instead of the softmax score, the maximal margin between the scores of the target token and the argmax token (similar to popular approaches in uncertainty quantification), and (ii) aggregating the score gap along the line and setting a high threshold (to be more conservative). Our underlying idea is to treat the potential bug as an outlier in the generation flow of the LLM, based on our observation that most clean code has lower perplexity than the corresponding buggy code: 94% and 70% of cases for HumanEval and FixEval, respectively. (A small illustrative sketch of this line-scoring heuristic appears at the end of this thread.)
* **comment 2**: *" … Limited information is available to give a concrete understanding of the heuristic oracle … "*
> The heuristic oracle only serves as a reference against which to compare our bug localization method in practice. The oracle assumes access to the clean prefix to better localize the buggy line. For instance, on buggy-HumanEval, it detects the potential bug by a string-based comparison of the buggy and clean prefixes to find the first difference. We will provide more details in our revision.

**Question:** *3. From Figure 6, are there any thoughts about the Rewriting->Completion method having a relatively better performance at higher k values?*

Thanks for a very good catch and suggestion. We hypothesize that higher-k results reflect more the "diversity" of the completions generated by each method. In particular,

* *Removal* has no guidance about the completion, so the model tends to generate more diverse solutions. Thus, pass@1 can be high, but pass@k may be limited.
* In *completion->rewriting*, only one prefix is used, making it harder for the repairer to fix the program if the buggy prefix steers the LLM in the wrong direction.
* *Rewriting->completion* balances the two: the rewriting phase first helps fix the prefix while still providing good guidance, generating solutions that are more precise but less diverse (than removal), thus achieving better pass@k at higher k.

We will incorporate these discussions in our revised version.

**Comment:** *More discussion on bug and split locations. Why do Rewriting->Completion and Completion->Rewriting have distinct performances w.r.t. different bug and split locations?*

In Figure 3d, rewriting->completion achieves higher scores when potential bugs appear later. In contrast, completion->rewriting (Figure 3c) achieves high scores when the code prefix is longer, indicating that this method can fix potential bugs that appear early in the program. We suspect that this is because longer prefixes leave less flexibility for the completion, making the generated completion less likely to deviate from the "mode" completion – the completion as if the injected bug were not in the prefix. In this case, the completed code, i.e., the input to the subsequent code repairer, better resembles the kind of input the repairer was trained on and thus makes the repairer more likely to repair the program successfully.

---

**Final notes**: We are excited that you find our paper sound, and thanks again for carefully reading our work and providing encouraging feedback!

---

Rebuttal Comment 1.1: Title: Follow with the response Comment: Dear Reviewer XA1K, We appreciate your time and effort in reviewing our paper. If any concern/question remains unaddressed, we are happy to provide further clarification. Thanks.
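As referenced above, here is a minimal NumPy sketch of the margin-based line scoring described in this thread: per-token gaps between the model's top score and the observed token's score are aggregated per line, and the earliest line whose aggregate gap exceeds a threshold is flagged. The logits, token ids, line boundaries, and threshold are synthetic placeholders, not the actual models or values used in the paper.

```python
# NumPy sketch of the margin-based line scoring described above: per-token
# gaps between the model's top score and the observed token's score are
# aggregated per line; the earliest line whose aggregate gap exceeds a
# threshold is flagged. All inputs below are synthetic placeholders.
import numpy as np

def locate_suspicious_line(logits, target_ids, line_ids, threshold=4.0):
    """logits: (T, V) per-position scores; target_ids: (T,) observed tokens;
    line_ids: (T,) line index per token. Returns a line index or None."""
    gaps = logits.max(axis=1) - logits[np.arange(len(target_ids)), target_ids]
    scores = np.zeros(line_ids.max() + 1)
    np.add.at(scores, line_ids, gaps)  # aggregate the score gap per line
    flagged = np.flatnonzero(scores > threshold)
    return int(flagged[0]) if flagged.size else None  # prefer earlier lines

# Synthetic example: 6 tokens spanning 3 lines; line 1 gets poorly-ranked
# (i.e., "unnatural") tokens, so it should receive the largest aggregate gap.
rng = np.random.default_rng(0)
logits = rng.normal(size=(6, 50))
target_ids = logits.argmax(axis=1)            # most lines follow the model
target_ids[2:4] = logits[2:4].argmin(axis=1)  # make line 1 look unnatural
line_ids = np.array([0, 0, 1, 1, 2, 2])
print(locate_suspicious_line(logits, target_ids, line_ids))  # likely 1
```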
Rebuttal 1: Rebuttal: We are thrilled that the reviewers appreciate our paper, and we want to extend our gratitude for their insightful feedback and constructive comments. These invaluable suggestions will undoubtedly enhance the quality of our work. First and foremost, it is encouraging that the reviewers have identified the following: * (i) Our proposed problem is important, interesting, and practical (R-2xun, R-XA1K, R-h8sG, R-8UBb) with a novel setting (R-2xun, R-XA1K). * (ii) Our evaluation is thorough, comprehensive, and well-conducted (R-XA1K, R-h8sG, R-Cn5Q) with rich insights and findings (R-2xun, R-XA1K, R-Cn5Q, R-8UBb). * (iii) Our new public benchmark datasets are helpful for future research (R-2xun, R-h8sG, R-Cn5Q, R-8UBb). * (iv) The mitigation techniques are intuitive and helpful for understanding the current limits (R-2xun, R-h8sG, R-8UBb). * (v) Our paper is well-written and easy to follow (R-XA1K, R-h8sG, R-Cn5Q) with useful examples for illustration (R-Cn5Q). We believe we have clarified and addressed all questions to the best of our ability. Please find our answers in each reviewer's section.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces a new code-completion problem: potential bugs exist in the completion prefixes. The authors construct two benchmark datasets: one is created by manually injecting bugs into the HumanEval dataset, and the other is based on actual bugs in the FixEval dataset. Three baseline methods are proposed, including (1) removing the prefix and then generating from scratch, (2) completing first and then repairing the whole solution, and (3) repairing the prefix first and then finishing the completion. The performances of the different approaches on the two benchmark datasets are analyzed. Strengths: (1) The authors propose a new, challenging, and practical scenario in the field of code completion: how to complete code in scenarios with potential errors in the context. (2) Based on the setting of the paper, the authors construct two benchmark datasets from HumanEval and FixEval. (3) Combined with a code repair model, this paper proposes three baseline methods to deal with the problems in this scenario. (4) The authors conduct a comprehensive analysis of the performance of the baseline methods. Weaknesses: (1) The baseline methods provided in the paper are somewhat simple, mainly a straightforward combination of code completion and repairing. (2) The repairing process also introduces performance degradation; a mechanism to decide when to apply code repair may be needed during completion. (3) Although the authors argue that their new scenario is not a simple combination of code repair and completion, their baseline methods are still a simple combination of the two tasks. (4) The authors do not report the repairing performance on the two benchmark datasets. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In Section 3.1.2, when constructing the Buggy-FixEval dataset from the FixEval dataset, the authors split rejected solutions in half and regard the first halves as prefixes containing bugs, thereby creating samples with buggy prefixes. However, how can one ensure that the first half of a rejected solution contains at least one bug? Please give more details. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: (1) When constructing a dataset whose prefixes have potential bugs, the authors select the first halves of rejected solutions in the FixEval dataset as prefixes with bugs. However, this process cannot ensure that the prefixes contain bugs; for example, bugs may exist in the second half of a solution. (2) I am unsure whether applying these three baseline methods to code completion will hurt clean prefixes; the authors did not attempt to analyze this issue. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer 2xun for the detailed review and constructive suggestions. We appreciate your acknowledgment of the novelty, challenge, and practicality of our problem setting, as well as of the contribution of the new benchmark datasets, baseline methods, and our comprehensive analysis. Please find our answers to your comments and questions below. **Comment:** *proposed baselines are somehow simple, mainly a straightforward combination of code completion and repairing* Thanks for the comment. In fact, "simplicity" and "modularity" (a combination of existing modules) are key design choices for these baselines for understanding the current limits of code language models. These designs have two advantages: (i) they can incorporate any LLM with the latest repairing modules without extra training, and (ii) each model can be updated separately, enabling more independent tests of the target LLM. Nevertheless, the rewriting->completion baseline is more sophisticated, with an additional bug localization algorithm. Furthermore, we are experimenting with two advanced techniques: (i) fine-tuning a CodeT5 model pre-trained for neural program repair, and (ii) few-shot in-context learning for fixing the program with loop repairing. We hope to release these investigations soon in a follow-up study. **Limitation/Comment:** (i) *unsure if baseline methods hurt the clean prefixes*, (ii) *"... may need a mechanism to judge how to use the code repair …"* This is a good point. Indeed, we acknowledged this phenomenon (baseline methods may hurt clean prefixes) in Section 4.4 (see Tables 2 and 3). Furthermore, we showed our initial effort at controlling when to apply the repairing method by thresholding. For instance, in Table 3, given prior knowledge about the distribution of buggy programs, we may select a proper threshold to balance the clean and buggy settings. Moreover, we want to emphasize that the primary purpose of these baselines is to study the limits of code language models in our setting rather than to provide **a perfect code completion solution**. Thus, we hope these are considered initial efforts toward more advanced repairing techniques in future research. We will add more interpretation in the revised version. **Comment:** *No repairing performance among the two benchmark datasets* By our best interpretation of your comment, 'repairing performance' means "repairing" without completion. If so, for the *completion -> repairing* and *removal* methods, the repairing performance can only be measured by the final performance (pass@k) compared to buggy-prefix completion, as shown in the manuscript. For the *rewriting -> completion* method, we can report bug-line localization accuracies of 82% and 53% on buggy-HumanEval and buggy-FixEval, respectively. Please let us know if this answer addresses your comment. **Question:** *In the FixEval dataset, how to ensure the first half of rejected solutions contain bugs?* Thanks. This is a good question that sparked our discussion when designing the buggy-FixEval dataset. Indeed, we manually inspected all pairs to select the final 292 pairs that satisfy our conditions. Note that each pair has already been chosen to be similar (our supplementary A.2, page 2). We further execute the program that concatenates the buggy half with the completion half of the correct code to ensure that it fails at least one test case (see the sketch following this exchange). --- **Final notes:** We are grateful that you find our idea novel and challenging.
We will integrate our clarifications into the final version. We hope our response helps you better understand our paper, and that you will consider increasing your score and further supporting the acceptance of our paper. --- Rebuttal Comment 1.1: Comment: Hi Authors, Thank you for your response and we will take those into consideration. Best, --- Reply to Comment 1.1.1: Title: Follow with the response Comment: Dear Reviewer 2xun and Area Chair, We thank Area Chair n3xV for being so considerate. We appreciate your time and effort in reviewing our paper. We are happy to provide further clarification if any concern/question remains unaddressed. Thanks.
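A minimal sketch of the filtering step just described: concatenate the buggy first half with the second half of the accepted solution and check that the result fails at least one test. The stdin/stdout test protocol and all names here are our assumptions for illustration, not the authors' pipeline.

```python
import subprocess
import tempfile

def fails_at_least_one_test(buggy_prefix, clean_suffix, test_cases):
    """test_cases: list of (stdin_text, expected_stdout) pairs.
    Returns True if the concatenated program fails any test, which is the
    condition a candidate pair must meet to enter buggy-FixEval."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(buggy_prefix + clean_suffix)
        path = f.name
    for stdin_text, expected in test_cases:
        try:
            result = subprocess.run(["python", path], input=stdin_text,
                                    capture_output=True, text=True, timeout=10)
        except subprocess.TimeoutExpired:
            return True  # a hang also counts as a failure
        if result.returncode != 0 or result.stdout.strip() != expected.strip():
            return True
    return False
```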
null
null
null
null
null
null
Aligning Gradient and Hessian for Neural Signed Distance Function
Accept (poster)
Summary: The authors propose a new smoothing loss to better regularize the estimated Signed Distance Field (SDF) from unoriented point clouds. By aligning the SDF's gradient with its Hessian, the quality of the reconstructed surfaces improves significantly. Compared with the commonly used Eikonal loss, the proposed loss is more effective. Experiments validate the effectiveness of the proposed method as well as its robustness to input noise and hyper-parameters. Strengths: 1. The proposed loss is novel and theoretically reasonable. 2. The loss can easily work with different methods as a plugin. 3. The experiments are almost complete, and the large improvements validate the effectiveness. Weaknesses: The paper is poorly written. The authors write more from a mathematical point of view than from a computer-vision point of view. Many details are missing, and some claims described as "obvious" or "evident" are not clear at all. For a computer vision paper, figures are much more important for illustration, along with detailed explanations of figures and equations, especially where to look in a figure and why an equation intuitively works. However, if the authors can provide details in the rebuttal process and the final version, I believe that the paper could be accepted. If not, it should be rejected, as this is not a math paper. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. The first paragraph in Sec 3.3 requires a figure illustration. It is difficult to understand why Eq. 4 intuitively works. What is Eq. 4 enforcing? What do you mean by "alignment"? A few figures are necessary. It is not clear at all where Eq. 4 comes from. 2. In Figure 2, are the 100-point inputs different for the three methods? Please show results on the same inputs as comparisons; otherwise, it is not convincing. 3. In Figure 3, it is NOT evident that the alignment loss can suppress the ghost geometry, as mentioned in Line 173. The authors need to clearly show which region in Figure 3 is problematic and why the last column is good. It seems the alignment loss works similarly to the Dirichlet loss, and I do not see any benefit. Are the black lines the zero-isosurface? Then the last column's result seems quite bad. 4. The big Q in Eq. 7 is not defined. 5. Table 5's caption needs explanations of each row, especially the difference between rows sharing the same name. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The current experiments only show surface reconstruction from point clouds, but I think the loss could benefit other scenarios, e.g., if incorporated into NeuS; the authors do not show its effectiveness there. Also, does the loss work for unsigned distance fields (UDFs) as well? This could be an interesting future topic. The current point-cloud scenario is somewhat limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for considering our method novel and theoretically reasonable. **Q1: Explanation of Sec 3.3 and Eq. 4.** We are sorry for the unclear writing; we will further improve the readability in the revision. Recall that one of the basic properties of an SDF is the Eikonal condition (line 116) $\lVert\nabla f_\theta\rVert_2^2=1$. Differentiating both sides of this identity yields the eigen-equation of the Hessian matrix (Eq. 2): $\mathbf{H}_{f_\theta}\nabla f_\theta(\boldsymbol{q})=\mathbf{0}$, which implies that the gradient of the SDF is an eigenvector of the Hessian matrix of $f$ with corresponding eigenvalue 0. A more detailed proof is available in ref. [30]. This property implies that the gradient must align with an eigenvector of the Hessian matrix whose eigenvalue is 0. Our regularization term (Eq. 4) is exactly based on this fundamental property. Therefore, we use "alignment" to mean enforcing the gradient to become an eigenvector of the Hessian matrix with corresponding eigenvalue 0, i.e., the second derivative along the gradient direction is zero (a minimal sketch follows this rebuttal). Experimental results show that the alignment requirement is effective: it not only effectively suppresses the ghost geometry but also preserves geometric details. **Q2: The issue of Fig. 2** Thanks for pointing this out! We follow the 2D toy-problem settings of DiGS, which samples 100 points from analytical equations at each epoch, so the points leveraged by the different methods are indeed different. We agree that this is misleading, and we conducted experiments with fixed points in **Fig. 1 in the PDF file**, which will replace Fig. 2. **Q3: The issue of Fig. 3** We are sorry for this issue. We will replace it with our new figure (**Fig. 2 in the PDF file**) to demonstrate our benefits. **Q4: The definition of Q** Q is the set of sampled points, defined in **Sec. A in the supplementary material**. We follow IGR [ICLR 2020] to sample point clouds, and directly leverage the points sampled along rays by NeuS for multi-image input. Specifically, suppose that $p_i$ is a point in the input $P$; we define a Gaussian distribution centered at $p_i$ and take the distance to its $k$-th nearest neighbor ($k = 50$ by default, following IGR) as the standard deviation. We then sample points from each such distribution. **Q5: Explanation of Table 5** Thanks for the question. In the paper, all methods marked with '*' leverage normals in the training stage, and those marked with '+' are supervised. We compared different optimization-based methods under DFAUST, and the different symbols indicate their training settings. In all, our method can learn a shape space without requiring input normals or additional supervision, but still produces more faithful shapes than the others. We will revise the caption following your advice. **Q6: Extension of our method** Thanks for your constructive comments! It is interesting to explore our method with multi-image input and to fit UDFs. We have incorporated our loss into NeuS and conducted experiments on the DTU dataset in **Sec. D.5 in the supplementary material** (we enclosed these experimental results in the submission phase). It can be seen that our method supports multi-image inputs and effectively improves the quality of the results compared to the original NeuS. For UDFs, unfortunately, the Eikonal term is not satisfied since the norm of the gradient is 0 on the 0-isosurface.
Therefore, the key alignment property no longer holds, and thus our algorithm may not work well for fitting UDFs.
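To make the alignment idea concrete, here is a minimal PyTorch sketch of one plausible form of such a regularizer. The paper's exact Eq. 4 may differ (e.g., in normalization or in using a cosine formulation); this version simply penalizes the Hessian-vector product that, per the derivation above, must vanish for a true SDF.

```python
import torch

def alignment_loss(f, q):
    """One plausible gradient-Hessian alignment regularizer (not necessarily
    the paper's exact Eq. 4): for a true SDF, H_f(q) @ grad f(q) = 0, so we
    penalize the squared norm of this Hessian-vector product.
    f: neural SDF mapping (B, d) points to (B,) values; q: sample points."""
    q = q.detach().requires_grad_(True)
    grad = torch.autograd.grad(f(q).sum(), q, create_graph=True)[0]  # (B, d)
    v = grad.detach()  # treat the direction as a constant for the HVP
    # A second backward pass gives H_f(q) @ v for each sample in the batch.
    hvp = torch.autograd.grad((grad * v).sum(), q, create_graph=True)[0]
    return (hvp ** 2).sum(dim=-1).mean()
```

Note that the two `autograd.grad` calls with `create_graph=True` are what make the regularizer second-order, consistent with the timing overhead relative to SIREN reported elsewhere in the rebuttals.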
Summary: The paper under review proposes a method for signed distance function (SDF) reconstruction from point cloud data without the use of normal data at the points. The paper proposes a new term as a regularization to complement the Eikonal loss, which is a necessary condition for an SDF. The purpose of this regularization is to reduce "ghosting" effects. The term is obtained by differentiating the Eikonal constraint, resulting in a second-order constraint, namely that the Hessian times the gradient is zero. The paper shows experimental comparison to SOA on multiple benchmarks with favorable results. Strengths: - New regularization term for neural SDFs; original to the best of my knowledge; neural SDFs are a very relevant research topic of interest in vision/graphics - Paper is mostly clear - Experiments on many benchmark datasets, with results favorable to the authors' approach Weaknesses: - Experimental results on SRB and ShapeNet use different metrics than recent papers in the area, e.g., DiGS [6]; it is not clear why they have been changed. Please report results with previous protocols. - A new sampling strategy is reported in the supplementary for the alignment loss. It is not clear what the effect of this is, and why it is needed. - Theoretical justification is weak, e.g., lines 156-174; there is no clear understanding of this new term, and no clear explanation of why the ghosting is reduced with this term. -- The authors attribute ghosting to the Eikonal constraint being applied, in practice, only to a sampling of points; it is unclear whether this is due to random sampling or only part of the problem. The Eikonal equation does not have a unique solution, which is in part why there is ghosting. -- It is unclear why the smoothness term not having minimum zero means it is difficult to regulate. -- It is stated that the norm and direction of the gradient are de-coupled. However, a minimization of the loss is being considered: the optimization flow for both the Eikonal and alignment terms will change the direction and norm of the gradient during minimization. - Lines 186-187: the constraint is true a.e., so why is the adaptive weighting needed? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Address the weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable comments. We will incorporate all feedback into the revised version. **Q1: Different metric compared to DiGS** To our knowledge, there are two commonly used evaluation settings under SRB: one used by DiGS [CVPR 2022] and the other proposed by Shape as Points [NeurIPS 2021]. The DiGS version operates at a resolution of $512^3$ and uses Chamfer distance and Hausdorff distance at the original scale, while the Shape as Points version operates at a resolution of $256^3$ and uses Chamfer distance, F-Score, and Normal Consistency, but does not give the number of evaluation points. Therefore, we sample 100K evaluation points and follow the settings of Shape as Points in this paper. Following your advice, we also show the results under the settings of DiGS; see **Tab. 2 in the global response box**. It can be seen that our method outperforms DiGS in terms of Hausdorff distance. To summarize, our method consistently outperforms DiGS in terms of both Hausdorff distance and F-Score (the latter indicated in the main paper). **Q2: Effect of sampling strategy for alignment** The sampling strategy is not new. We use the sampling strategy of IGR [ICLR 2020] and NeuralPull [ICML 2021], sampling more points near the input points rather than uniformly in the bounding box. The rationale is that the SDF is only guaranteed to be differentiable in a narrow region near the surface (points far from the surface may have two or more equidistant closest surface points, where the SDF is not differentiable). Besides, for reconstruction methods, the 0-isosurface is of more interest than other level sets. We compared different sampling strategies in **Tab. 3 in the global response box** (we disable the adaptive weight there to validate the sampling strategies only). It can be seen that sampling points near the surface is better than sampling randomly in the bounding box. **Q3: Understanding of ghost geometry and our term** Thanks for pointing out this issue. As you observed, the Eikonal term admits many invalid solutions (with ghost geometry) that cannot be filtered out using finite, discrete points, as discussed in DiGS. By contrast, second-order information can effectively limit the solution space. Secondly, the number of network parameters is typically larger than the number of input points, leading to over-parameterized neural networks whose extensive array of parameters can further complicate the optimization process, as described in Empirical Analysis of the Hessian of Over-Parametrized Neural Networks [Sagun et al., 2017]. Our regularization term is based on the fundamental geometric property of SDFs; it limits the solution space and reduces the number of possible solutions, and the occurrence of ghost geometry contradicts this fundamental property. We construct a toy problem similar to DiGS, with the shape 'L' and 100 points, to demonstrate the effect of our term. We set the weighting coefficient of all regularization terms to 100 for fairness (the default coefficient used by DiGS), as shown in **Fig. 1 in the PDF file**. The experimental results validate our observation. (Note that DiGS leverages 15K points in its main paper for the toy example.) To further validate the effect of our term, we increase the weighting coefficient of the regularization term to 1e3, 1e5, and 1e7, respectively, as shown in **Fig. 2 in the PDF file**.
It can be seen that even with large weights, our regularization term consistently yields high-fidelity surfaces without introducing over-smoothing or ghost geometry, whereas smoothness energy terms degenerate easily. In essence, other general smoothness energies, which do not vanish at a true SDF, tend to force derivatives to 0 and smooth in unwanted ways. Instead, our regularization term is more relaxed, which avoids over-smoothed results: it enforces the gradient to be an eigenvector of the Hessian matrix with corresponding eigenvalue 0, i.e., only the second derivative along the gradient direction is required to vanish, which is exactly the fundamental property of an SDF. To illustrate the feature-preserving ability of our method, we conducted experiments on a lion shape with 100K points as input. As shown in **Fig. 3 in the PDF file**, our method not only suppresses ghost geometry but also recovers high-fidelity geometric details, whereas smoothness energy terms tend to produce over-smoothed results. Finally, we agree that the norm and direction of the gradient cannot be totally de-coupled; we shall make this more rigorous in the revision. **Q4: Effect of adaptive weighting** As discussed in #Q2, it is better to emphasize the points near the underlying surface. We conducted an ablation study on how adaptive weighting influences the reconstruction result; **Tab. 7 in the main paper** demonstrates that the adaptive weighting scheme is the better strategy ($\delta = 0$ disables it, i.e., the adaptive weight is constantly 1). --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for the rebuttal. It addressed my main concern about the different metrics compared to DiGS. I think this paper is a good contribution with good empirical evidence, but still a bit lacking in theory. I have increased my rating. --- Reply to Comment 1.1.1: Title: Thanks for your response Comment: Thanks for your quick response. We really appreciate your decision to increase the rating. At the same time, we shall follow your advice to further consolidate the theory. Thanks!
Summary: The authors provide a novel regularizing term for training implicit neural SDF representations, motivating the function gradient to be an eigenvector of the Hessian with zero eigenvalue near the zero level set (the surface). They demonstrate that it can be used to reduce the appearance of ghost geometry and increase the quality of the representations. Strengths: - A principled, novel regularization term that seems to significantly improve the learned representations. - It tackles an important application domain, as neural geometry representations are broadening the application domain of deep learning techniques. Weaknesses: - It seems unclear to me how the method is specifically designed to handle unoriented point clouds. If the claim is that the regularizer somehow encourages consistent alignment of the resulting normals, it would be nice to have some commentary on this. - The evaluation metrics are a little strange to me: - It is unclear how they compare normals, as there would need to be a correspondence between the reconstructed surface and the ground-truth surface. - F-Score seems a bit of an odd choice. I initially assumed that they use the interior of the isosurface to specify points classified as inside the volume, but the default threshold 0.005 seems quite low for this. So perhaps they are asking the network output to classify points on the shape, but I am not sure how they do this either, as nearly all points will evaluate to nonzero values. - In my opinion, Hausdorff distance would be good to include, as it would give a sense of the maximum error. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately address the limitations of the method, which are mostly that there is still room for further improvement in its output. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate that the reviewer finds our paper novel. We address the additional comments below. **Q1: How to handle unoriented point clouds?** Yes, our alignment term encourages consistent orientations of the normal vectors. Because our activation functions (sine and softplus) are $C^{\infty}$, when we require the Hessian matrix to degenerate along the eigenvector corresponding to the zero eigenvalue (that is, the second derivative in the gradient/normal direction is zero), the degeneration direction of the Hessian matrix in a neighborhood also tends to be consistent, leading to consistent normals. **Q2: Confusion about the evaluation metrics** In the paper, we use the metrics commonly used for evaluating reconstruction methods, including Chamfer distance, F-Score, and Normal Consistency, as proposed by OccupancyNet [CVPR 2019] and Convolutional Occupancy Networks [ECCV 2020]. We give their definitions in **Sec. C in the supplementary material**. For comparing normals, we sample the same number of points from the reconstructed surface and the ground-truth surface, respectively. For each sample point on the reconstructed surface, we find the nearest point in the other point set and keep the dot product of the two normals. Note that the computation is bi-directional. The original F-Score is defined as the harmonic mean of precision and recall. To define the F-Score in our scenario, we sample the same number of points from the reconstructed surface and the ground-truth surface, as mentioned above. For recall, we count how many points on the GT mesh lie within a certain distance (threshold t) of the reconstructed surface. For precision, we count the percentage of points on the reconstructed mesh that lie within a certain distance (threshold t) of the GT. The F-Score is then the harmonic mean of precision and recall; see the definitions of precision and recall in **Sec. C in the supplementary material**. Furthermore, we agree that Hausdorff distance is a good choice. We compared ours and DiGS using Chamfer distance and Hausdorff distance under SRB; see **Tab. 2 in the global response box**. We will add it in the revision. --- Rebuttal Comment 1.1: Title: Thank you for the clarifications Comment: After reading the other reviews and rebuttals, I'd like to keep my score where it is. --- Reply to Comment 1.1.1: Title: Thanks for your reply Comment: We appreciate your decision to keep your score. We are glad we clarified your doubts and gave you a better view of our work. We will follow your advice and revise our paper.
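The bidirectional F-Score computation described above is straightforward to write down; the following is a minimal sketch under our own assumptions (point sets as (N, 3) NumPy arrays, the 0.005 threshold mentioned in the review), not the authors' evaluation code.

```python
import numpy as np
from scipy.spatial import cKDTree

def f_score(pred_pts, gt_pts, t=0.005):
    """precision: fraction of reconstructed-surface samples within distance
    t of the GT surface; recall: fraction of GT samples within distance t
    of the reconstruction; F-Score: their harmonic mean."""
    d_pred_to_gt, _ = cKDTree(gt_pts).query(pred_pts)
    d_gt_to_pred, _ = cKDTree(pred_pts).query(gt_pts)
    precision = (d_pred_to_gt < t).mean()
    recall = (d_gt_to_pred < t).mean()
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```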
Summary: The paper considers the task of surface reconstruction from point clouds without normals using neural signed distance functions. The authors consider previous smoothness losses and derive their own by taking the derivative of the Eikonal equation, thus guaranteeing that their loss constrains a fundamental property of SDFs. They motivate this loss in various other ways (which to me are fairly unclear). They also show good results on many datasets and compare to many different types of approaches. Strengths: - Using second-order information derived from the Eikonal equation makes sense - Lots of experiments and comparisons to many types of methods Weaknesses: - One of the benefits of the ShapeNet split that you use is that you can compute IoU with it, which is a very different but important measure. - Your explanation for why L_{align} is different from the Eikonal term is essentially that it works better for finitely many points, which is not an explanation. You should explain that, with finitely many points, constraining second-order information reduces the number of possible solutions, and maybe allude to the toy problem in DiGS (or even construct the same toy problem with your loss term). - Your explanation for why not to use a smoothness energy is not very compelling either. Your main argument is that other energies are not 0 at the optimum, hence it is hard to choose how much regularisation to apply. A much better argument is your earlier point that the loss is derived from a fundamental property of SDFs, while the others are general smoothing losses that are not specifically guiding towards a proper SDF and will be smoothing in unwanted ways. Overall great idea and results, but not great presentation/soundness. Your explanation for your loss needs to be clearer; keeping it to "it is a fundamental property of an SDF" works just fine. A toy problem to back this up could be to see what happens when you regularise the 2D shapes with a high weight for the different smoothness energies; most likely the other methods would stop obeying the points and go for a smoother surface, while yours would still obey the input points. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: - Is there a reason you are using most of DiGS' datasets but not their version of the metrics? - Your explanation of the implication of Rodrigues' formula is not clear. I could not find the formula name in your citation [34], and looking elsewhere online it seems that the formula states that one of the eigenvalue-eigenvector pairs of the Hessian should be a principal curvature / principal direction pair, not all eigenvalue-eigenvector pairs. Why does it imply equation (3)? - What is Figure 3 trying to show us? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 1 poor Contribution: 3 good Limitations: Not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback. In the following, we address the main concerns carefully and seriously. **Q1: Compute IoU under ShapeNet** Following your advice, we report the IoU statistics under ShapeNet; see **Tab. 1 in the global response box**. We will also add these statistics to the paper. It can be seen that our method is a bit inferior to the supervised method POCO (note that ours outperforms POCO in terms of Normal Consistency, Chamfer distance, and F-Score, as shown in the main paper). However, our method is consistently better than the other optimization-based methods. **Q2: Compared to the Eikonal term and other smoothness energy terms** Thanks for pointing out this issue. As you observed, when one enforces the Eikonal term on finite, discrete points, a large number of invalid solutions (with ghost geometry) still cannot be filtered out. By contrast, second-order information can effectively limit the solution space. Secondly, the number of network parameters is typically larger than the number of input points, leading to over-parameterized neural networks whose extensive array of parameters can further complicate the optimization process, as described in Empirical Analysis of the Hessian of Over-Parametrized Neural Networks [Sagun et al., 2017]. As for the various smoothness energy terms (e.g., the Laplacian energy used by DiGS), they have a negative effect (missing geometric details) although they can reduce the number of possible solutions; in other words, the ability to accurately represent geometric details is greatly diminished. By contrast, our regularization term is based on the fundamental geometric property of SDFs and does not enforce unwanted smoothness. We construct a toy problem similar to DiGS, with the shape 'L' and 100 points, to demonstrate the effect of our term. We set the weighting coefficient of all regularization terms to 100 for fairness (the default coefficient used by DiGS), as shown in **Fig. 1 in the PDF file**. The experimental results validate our observation. (Note that DiGS leverages 15K points in its main paper for the toy example.) To further validate the effect of our term, we increase the weighting coefficient of the regularization term to 1e3, 1e5, and 1e7, respectively, as shown in **Fig. 2 in the PDF file**. It can be seen that even with large weights, our regularization term consistently yields high-fidelity surfaces without introducing over-smoothing or ghost geometry, whereas smoothness energy terms degenerate easily. In essence, other general smoothness energies, which do not vanish at a true SDF, tend to force derivatives to 0 and smooth in unwanted ways. Instead, our regularization term is more relaxed, which avoids over-smoothed results: it enforces the gradient to be an eigenvector of the Hessian matrix with corresponding eigenvalue 0, i.e., only the second derivative along the gradient direction is required to vanish, which is exactly the fundamental property of an SDF. To illustrate the feature-preserving ability of our method, we conducted experiments on a lion shape with 100K points as input. As shown in **Fig. 3 in the PDF file**, our method not only suppresses ghost geometry but also recovers high-fidelity geometric details, whereas smoothness energy terms tend to produce over-smoothed results.
**Q3: Different metrics compared to DiGS** To our knowledge, there are two commonly used evaluation settings under SRB: one used by DiGS [CVPR 2022] and the other proposed by Shape as Points [NeurIPS 2021]. The DiGS version operates at a resolution of $512^3$ and uses Chamfer distance and Hausdorff distance at the original scale, while the Shape as Points version operates at a resolution of $256^3$ and uses Chamfer distance, F-Score, and Normal Consistency, but does not give the number of evaluation points. Therefore, we sample 100K evaluation points and follow the settings of Shape as Points in this paper. Following your advice, we also show the results under the settings of DiGS; see **Tab. 2 in the global response box**. It can be seen that our method outperforms DiGS in terms of Hausdorff distance. To summarize, our method consistently outperforms DiGS in terms of both Hausdorff distance and F-Score (the latter indicated in the main paper). **Q4: Misleading reference to Rodrigues' formula** We are sorry for this misleading statement; it is actually not related to Rodrigues' formula. A more suitable explanation and reference is ref. [30], Chapter 2. When the situation reduces to 2D, it coincides with your observation: one of the eigenvalue-eigenvector pairs of the Hessian should be a principal curvature / principal direction pair. We will fix this in the revision. **Q5: The objective of Figure 3** Thanks for pointing out the misleading figure. We shall replace it with a more informative figure (**see Fig. 1 in the PDF file**). --- Rebuttal Comment 1.1: Title: Thank you for your clarifications. I still have some concerns about the IoU results. Comment: Thanks for giving the results on ShapeNet for IoU. POCO performing better is not a concern; it is a supervised method and thus an unfair comparison. However, the actual IoU values are very suspicious to me. The values for IoU vary greatly from the DiGS paper: in that paper the mean IoU is 0.939, whereas you report 78.44? How are you calculating the IoU? Fig. 1 and Fig. 2 in the PDF file are a much better representation of what is going on. It would be good to have at least one of them in the main paper. Thanks for providing the metrics used in DiGS (this protocol was actually started by Neural Splines in CVPR 2021). Regarding Section 3.2, do you really need the eigenvectors to define the principal directions? By taking the derivative of the Eikonal equation you get (2), which is what you use in practice. I don't understand why the principal directions are important and/or how they are used in the paper? --- Reply to Comment 1.1.1: Title: Thanks for your questions. We hope that our answers can address your concerns. Comment: Thanks for your questions. We hope that our answers can address your concerns. **Question about IoU** For the computation of IoU, we adopt the evaluation code from POCO, which is derived from the widely used IoU evaluation code of ConvOccupancyNet [ECCV 2020]. This code retrieves the volume data (occupancies) from 'points.npz' and computes the IoU using 100K evaluation points. In contrast, DiGS employs its own evaluation code (which is not widely used). In their main paper, DiGS does report superior IoU values; however, its performance degrades for some categories with concave features (see Tab. 1 and Fig. 1 in the supplementary material), such as lamps.
In fact, the inconsistency is due to different experimental settings: DiGS directly utilizes 100K points as input to conduct the experiment, while our experiments use only 3K points. We hope our answer addresses your concern. **Question about eigenvectors** We do not employ the eigenvectors corresponding to principal directions in our paper, but they are insightful for understanding the relationship between the Hessian matrix of the SDF and the surface. When we enforce that the gradient aligns with the kernel space (the space spanned by the eigenvector corresponding to the eigenvalue 0), the other two eigenvectors naturally align with the principal directions. While directly constraining the directions of the principal curvatures of the surface might produce interesting results, our method does not impose any constraints on the curvatures or their directions, which is why our approach better recovers geometric details. Thanks.
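For reference, here is a minimal sketch of the volumetric IoU evaluation described above; the occupancy-label interface is our assumption (the actual POCO/ConvOccupancyNet code also handles loading 'points.npz').

```python
import numpy as np

def volumetric_iou(occ_pred, occ_gt):
    """occ_pred / occ_gt: boolean occupancy labels for the same set of
    ~100K evaluation points (e.g., sign of the predicted SDF vs. the GT
    occupancies stored in points.npz)."""
    intersection = np.logical_and(occ_pred, occ_gt).sum()
    union = np.logical_or(occ_pred, occ_gt).sum()
    return float(intersection) / max(float(union), 1.0)
```

Under this convention, IoU lies in [0, 1]; the tables in the global response report it as a percentage (hence 78.44 rather than 0.7844), on top of the different input settings discussed above.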
Rebuttal 1: Rebuttal: We sincerely appreciate all the valuable comments and suggestions, which helped us improve the quality of the article. We have carefully considered all your advice and addressed the questions raised; see the details in the separate responses and the attached PDF files. Specifically:
- We clarify the motivation and insight of the method compared to the Eikonal term and other general smoothness energy terms. Moreover, we conducted 2D experiments comparing against them (the black bold line represents the 0-isosurface); see the figures in the attached PDF files.
- We report the IoU under ShapeNet:

| | | $\text{SPSR}^*$ | $\text{NSP}^*$ | $\text{SAL}$ | $\text{IGR}$ | $\text{SIREN}$ | $\text{DiGS}$ | $\text{OSP}$ | $\text{iPSR}$ | $\text{PGR}$ | $\text{POCO}^{+}$ | $\textbf{Ours (SIREN)}$ |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| $\text{IOU}$ | $\text{mean}\uparrow$ | $81.01$ | $67.34$ | $48.89$ | $30.28$ | $35.65$ | $78.44$ | $57.86$ | $75.53$ | $69.26$ | $\textbf{84.25}$ | $81.19$ |
| | $\text{std.}\downarrow$ | $15.10$ | $21.85$ | $30.93$ | $33.46$ | $25.96$ | $18.72$ | $27.28$ | $19.78$ | $20.22$ | $\textbf{13.62}$ | $16.37$ |

Table 1: IoU under ShapeNet

- We compare with DiGS under SRB using the evaluation settings of DiGS:

||$\text{Mean}$||$\text{Anchor}$||||$\text{Daratech}$||||$\text{DC}$||||
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
||$\text{GT}$||$\text{GT}$||$\text{Scans}$||$\text{GT}$||$\text{Scans}$||$\text{GT}$||$\text{Scans}$||
|$\text{Method}$|$d_C$|$d_C$|$d_C$|$d_H$|$d_{\vec{C}}$|$d_{\vec{H}}$|$d_C$|$d_H$|$d_{\vec{C}}$|$d_{\vec{H}}$|$d_C$|$d_H$|$d_{\vec{C}}$|$d_{\vec{H}}$|
|$\text{DiGS}$|$\textbf{0.19}$|$3.52$|$0.29$|$7.19$|$0.11$|$1.17$|$\textbf{0.20}$|$3.72$|$0.09$|$1.80$|$0.15$|$\textbf{1.70}$|$0.07$|$2.75$|
|$\textbf{Ours}$|$\textbf{0.19}$|$\textbf{2.98}$|$\textbf{0.28}$|$\textbf{4.79}$|$0.24$|$1.78$|$\textbf{0.20}$|$\textbf{2.52}$|$0.13$|$1.84$|$\textbf{0.14}$|$1.88$|$0.10$|$2.77$|

||$\text{Gargoyle}$||||$\text{Lord Quas}$||||
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
||$\text{GT}$||$\text{Scans}$||$\text{GT}$||$\text{Scans}$||
|$\text{Method}$|$d_C$|$d_H$|$d_{\vec{C}}$|$d_{\vec{H}}$|$d_C$|$d_H$|$d_{\vec{C}}$|$d_{\vec{H}}$|
|$\text{DiGS}$|$\textbf{0.17}$|$\textbf{4.10}$|$0.09$|$0.92$|$\textbf{0.12}$|$\textbf{0.91}$|$0.06$|$0.70$|
|$\textbf{Ours}$|$0.19$|$4.56$|$0.15$|$1.82$|$0.14$|$1.13$|$0.09$|$0.95$|

Table 2: Comparison under the Surface Reconstruction Benchmark.

- We explore the effects of different sampling strategies under SRB and the evaluation settings of DiGS.
Note that Bbox-sampling means we sample randomly in the bounding box ($[-1, 1]$) of the input and then use those samples for our alignment term, and $k$ means the distance to the $k$-th nearest neighbor is used for sampling, following IGR [ICLR 2020]:

||$\textbf{Mean}$||
|:---|:---:|:---:|
||$\textbf{GT}$||
|$\text{Method}$|$d_C$|$d_H$|
|$\text{Bbox-sampling}$|$0.32$|$4.51$|
|$k=1$|$0.32$|$4.90$|
|$k=25$|$0.28$|$5.87$|
|$\mathbf{k=50}$ **(ours)**|$\textbf{0.19}$|$\textbf{2.98}$|
|$k=75$|$0.26$|$4.31$|
|$k=100$|$0.28$|$5.34$|

Table 3: Effects of different sampling strategies.

---

Please let us know if you have any other questions. We would be pleased to engage in further dialogue with you. Thanks for your time, The Authors

Pdf: /pdf/9c4b3202ea2d549b2addb0585007aa940d1bd0bf.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces a novel approach to learning the SDF directly from point clouds without the use of surface normals. The key insight behind this is that aligning the gradient of the SDF with its Hessian allows for better training of the distance field. Through extensive experiments, the paper demonstrates the effectiveness of the proposed approach in accurately recovering the underlying shape and effectively reducing geometry artifacts. Strengths: - The paper exhibits a good writing style, effectively conveying the ideas and concepts to the readers. The content is clear, concise, and well-structured, allowing for easy understanding of the proposed method. - The paper introduces a novel method for learning the signed distance function directly from unoriented point clouds. - The paper presents extensive experimental results that showcase the effectiveness of the proposed approach. Weaknesses: - The paper does not explicitly discuss potential limitations or challenges of the proposed method. Addressing and acknowledging these limitations would provide a more comprehensive understanding of the method's applicability and potential areas for improvement. - The paper lacks a time analysis of the proposed method. While the experimental results demonstrate the effectiveness of the approach, the paper does not provide insights into the computational efficiency or time complexity of the method. Understanding the time requirements would be valuable for assessing its practicality and potential for real-time applications. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can the proposed method guarantee the reconstruction of watertight surfaces? For large-scale scenes, are the reconstructed surfaces watertight? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper lacks a discussion of potential limitations and the negative social impact of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the appreciation and the invaluable comments. We will incorporate all feedback into the revised version. **Q1: Limitations and negative social impact** We shall point out the following limitations in this revision: (1) The method is weak in handling sparse input such as sketch point clouds or LiDAR scans. In particular, most LiDAR data in the KITTI dataset are partial scans, which presents significant challenges in closing the gaps between the stripes and completing large missing parts. We enclosed the relevant experimental results in the **PDF file** (**Fig. 4**) and will include them in the supplementary. (2) Its potential negative social impact includes the unauthorized replication of mechanical designs and intentional (malicious) content creation, as with other reconstruction methods. **Q2: Time analysis of the method** We made a detailed comparison with IGR, SIREN, and DiGS on the timing cost and the number of parameters for optimizing a single point cloud in **Sec. B.2** and **Tab. 3** in the supplementary material. See the table below for more information.

| |$\text{IGR}$|$\text{SIREN}$|$\text{DiGS}$|$\text{Ours}$|
|:---|:---:|:---:|:---:|:---:|
|#$\text{parameters}$|$\text{1.86M}$|$\text{264.4K}$|$\text{264.4K}$|$\text{264.4K}$|
|$\text{time [ms]}$|$50.73$|$11.52$|$36.28$|$40.10$|

The optimization time statistics are measured on a single RTX 3090 GPU. It can be seen that the timing costs of DiGS and ours are higher than that of the conventional SIREN, since DiGS and ours require second-order derivative computations. However, ours is more computationally efficient than IGR. **Q3: Guarantee about the watertight reconstructed surface?** Our approach fits the underlying SDF, which naturally guarantees that the reconstructed surface (the 0-isosurface) is watertight. It must be pointed out, however, that when the points are too sparse or have many missing parts, the produced surface may consist of many separate connected components; see **Fig. 4 in the PDF file**. Furthermore, for large-scale scenes (such as those from Matterport3D), it is extremely hard to represent the scene within a single network due to the catastrophic-forgetting issue of neural networks. The sliding-window strategy proposed by recent works (e.g., DeepLS [ECCV 2020] and Block-NeRF [CVPR 2022]) seems helpful; we shall list this as interesting future work. Thanks. --- Rebuttal Comment 1.1: Title: Comment by Reviewer XXzk Comment: Having reviewed the rebuttal and taken into account the other assessments, the provided response has addressed my concerns. I appreciate the authors' feedback. --- Reply to Comment 1.1.1: Title: Thanks for your reply Comment: Thank you for taking the time to review the rebuttal and the other assessments. We are pleased to see that the provided response has effectively addressed your concerns. Your feedback is greatly appreciated, as it helps refine the quality of our work.
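The watertightness claim above rests on extracting the 0-isosurface of a continuous SDF; a minimal sketch of that standard pipeline follows. The grid resolution, bound, and callable interface are placeholders, not the authors' setup.

```python
import numpy as np
from skimage import measure

def extract_zero_isosurface(sdf, res=256, bound=1.0):
    """Evaluate a fitted SDF on a dense grid and run marching cubes to get
    the 0-isosurface mesh. `sdf` is any callable mapping (N, 3) points to
    (N,) signed distances; the mesh is watertight up to grid resolution."""
    axis = np.linspace(-bound, bound, res)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    values = sdf(grid.reshape(-1, 3)).reshape(res, res, res)
    verts, faces, normals, _ = measure.marching_cubes(values, level=0.0)
    verts = verts / (res - 1) * (2 * bound) - bound  # back to world coords
    return verts, faces
```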
null
null
null
null
null
null
Private (Stochastic) Non-Convex Optimization Revisited: Second-Order Stationary Points and Excess Risks
Accept (spotlight)
Summary: This paper provides a private algorithm for (Stochastic) Non-Convex Optimization using gradients, differences between two successive gradients, and the exponential mechanism for privacy (added to the gradient). It leverages the SpiderBoost algorithm and its private version to find (privately) stationary points. The main challenge addressed by the paper is to keep the quality of the gradient estimations at all steps. Strengths: - simple modification of SpiderBoost to maintain the quality of the gradient estimations. - interesting properties of the algorithm, i.e., the algorithm is capable of escaping the saddle point with high probability, and a large drift implies significant decrease in the function value, limiting the number of gradient queries. Weaknesses: - there is something weird about Algorithm 1. drift_t is set to zero in lines 7 and 10, but then overwritten in line 14. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - traditional hypotheses on these types of results Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your time reviewing and commenting on our paper, and for pointing out the typo. Line 14 should read $\mathrm{drift}_{t+1}=\mathrm{drift}_t+\eta^2\lVert\nabla_t\rVert^2$. Please let us know if you have further questions or concerns. --- Rebuttal Comment 1.1: Title: Respond to authors Comment: Dear reviewer, please read the authors' response to your review and reply to them regarding how it changed (or did not change) your evaluation of this work. Thank you in advance, Your AC
Summary: This paper addresses the problem of private stochastic optimization for general non-convex functions. For smooth functions, the authors propose a combination of the Gaussian mechanism and a variance-reduction technique (the Spider algorithm) to find an SOSP of the loss function. Their results for the empirical risk improve upon previous state-of-the-art bounds by a factor of $n\varepsilon$, and their results for the population risk are new to the private optimization literature. For general Lipschitz non-convex problems, the authors establish improved bounds on the excess risk using exponential mechanisms. Notably, they provide a new exponential-time algorithm that matches the proposed lower bound. Strengths: The paper is well-written and clear in its claims. The authors propose a novel approach to private stochastic optimization for both smooth and general non-convex loss functions by combining existing private mechanisms with the Spider algorithm. They provide new results for both upper and lower bounds on the excess population risk, which are technically solid and contribute to the literature on private stochastic optimization. Overall, the results presented in the paper make a valuable contribution to the field. Weaknesses: 1. The paper makes a significant contribution to improving the bounds for private optimization, but there is still a gap between the upper and lower bounds that needs to be explored in future work. 2. While the paper focuses on theoretical studies, the authors could consider conducting experimental studies to validate their theoretical findings. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I have no further questions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your kind reviews and comments. In our future work, we will both explore the tightness of our bounds, and evaluate our algorithms empirically. --- Rebuttal 2: Title: Respond to authors Comment: Dear reviewer, please read authors' response to your review and reply to them regarding how it changed (or did not change) your evaluation of this work. Thank you in advance, Your AC
Summary: The paper introduces a non-convex optimization framework that integrates two types of gradient oracles, together with an application of the exponential mechanism for finding the global minima of non-convex objectives, requiring minimal assumptions and showcasing remarkable population-risk bounds. The paper also highlights the ability to recover previous empirical and population risk bounds effectively, removing the necessity for smoothness assumptions. Strengths: This paper presents robust theoretical insights into a clearly defined problem, and its logical flow makes it accessible and easy to comprehend. It commendably surveys and incorporates the existing related literature. Weaknesses: The paper appears to conclude abruptly, with the discourse ending unexpectedly after a theorem; the absence of a conclusion or summary weakens the paper's overall structure. There is also an absence of numerical experiments. Additionally, the authors seem to have relied solely on Advanced Composition for privacy accounting. There are, however, various alternative privacy accounting methods available that could potentially improve the overall performance. For instance, the 'Better Privacy Accounting' method [1], the utilization of GDP [2], or the application of tighter composition theorems [3] might be considered for future research and improvements. [1] Altschuler, J., & Talwar, K. (2022). Privacy of noisy stochastic gradient descent: More iterations without more privacy loss. Advances in Neural Information Processing Systems, 35, 3788-3800. [2] Liu, Y., Sun, K., Jiang, B., & Kong, L. (2022). Identification, amplification and measurement: A bridge to Gaussian differential privacy. Advances in Neural Information Processing Systems, 35, 11410-11422. [3] Kairouz, P., Oh, S., & Viswanath, P. (2015, June). The composition theorem for differential privacy. In International Conference on Machine Learning (pp. 1376-1385). PMLR. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: How does the performance of the Stochastic Spider, as presented in this paper, compare to that of SpiderBoost in practical applications? While the theoretical results indicate a rate improvement, is this enhancement accompanied by an increase in constants and an impact on finite-sample performance? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. We will add a detailed conclusion section to the paper. In more detail, our conclusion will review the main contributions of the paper and then propose some future directions that we feel are important/interesting, including: - Can we close the gap between our upper and lower bounds for SOSP? - We used a "gradient difference oracle" O_2, which has lower sensitivity, and thus we can tolerate a higher noise multiplier on this oracle. Are there other settings where such a technique can improve the privacy-utility tradeoff? - Can we obtain our SOSP bounds using a batch gradient oracle instead of a full gradient oracle? - For population risk, do any assumptions "between" convexity and LSI admit better excess loss bounds in polynomial time? Comment about the numerical experiments: The objective of our paper was to improve the current SOTA analytical bounds (see [1] and [2] in the response to Reviewer nTrG, which are also purely theoretical in nature). We acknowledge that a thorough empirical study is important; we leave it for future exploration. Our results seamlessly extend to privacy measures such as Gaussian DP, Rényi DP, and zCDP. We will update the presentation to state the main results in terms of Gaussian DP and Gaussian DP composition (as opposed to $(\epsilon,\delta)$-DP). One reason for choosing the $(\epsilon,\delta)$-DP variant was to be consistent with the prior literature. We want to emphasize that changing the notion of privacy does not make any meaningful changes to our results, as (i) our results are based solely on the Gaussian mechanism and the AboveThreshold algorithm (which are known to satisfy all the above notions of DP seamlessly), and (ii) we give polynomial improvements in the dependence on n and d, so improving log factors via tighter composition does not qualitatively change the results. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I believe the authors will add a detailed conclusion section and update the presentation to state the main results in terms of Gaussian DP and using Gaussian DP composition. I have adjusted my score to 5 accordingly.
Summary: This paper studies differentially private second-order optimization problems, where the loss function is assumed to be second-order smooth. The objective is to identify second-order stationary points (SOSP) while ensuring privacy guarantees. The authors present results for both empirical risk and population risk, with notable advancements over existing state-of-the-art outcomes. Specifically, the empirical risk minimization (ERM) bound improves upon the state-of-the-art rate, while the population risk result stands as the first of its kind in the literature on differentially private SOSP optimization. Furthermore, the paper contributes a lower bound analysis for private SOSP optimization, further strengthening its significance in this domain. Strengths: The strength of this paper lies in its highly comprehensive study of private second-order optimization. The authors study both empirical risk minimization and population risk, and the obtained bounds improve upon the state-of-the-art results in both cases. The inclusion of a lower bound also helps close the gap for this problem. Weaknesses: The authors could enhance the writing by providing additional discussion regarding the algorithm employed, the obtained results, and the insights derived from their findings. Furthermore, providing more detailed explanations for the technical results cited from previous works, such as Lemma 3.4 and Alg. 2, would help clarify any confusion and prevent ambiguity. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have a few questions regarding the general SOSP framework (Algorithm 1): - What role does $\text{frozen}_t$ play? - Line 8: why is additional noise $g_t$ added to the private oracle $\mathcal{O}_1(x_t)$? - Since Alg. 1 only serves as a non-private SOSP optimization framework, is it possible to choose other variance-reduction algorithms (e.g., [1])? If not, what is the key characteristic that distinguishes SpiderBoost from other optimizers in the private setting? Also, Section 5 focuses on private excess risk minimization, which seems to deviate from the previous discussion about SOSP. I'm wondering how Sec. 5 relates to the previous parts of the paper. Reference: 1. Ashok Cutkosky, Francesco Orabona. Momentum-Based Variance Reduction in Non-Convex SGD. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive review and questions. The following are responses to the specific questions: Q1. We use Lemma 3.4 for escaping saddle points directly, which requires us to add “extra” Gaussian noise whenever the gradient norm of the current point is small (meaning it is possibly a saddle point). Notice that Section 3 discusses the general optimization framework and has nothing to do with DP, and hence the oracle does not necessarily need to add Gaussian noise and can even be exact. frozen_t is used to control the “extra” Gaussian noise we add to escape saddle points, as adding too much extra Gaussian noise may worsen the algorithm's performance. For example, when the initial point is already a good SOSP, ${\rm frozen}_t$ ensures that further extra noise is added at most once every $\Gamma$ steps. Q2: As we mentioned in Q1, the Gaussian noise $g_t$ helps the iterate escape saddle points. (A similar idea was used in [2,3].) Q3: This was also asked by other reviewers, and we will add a discussion on it. We chose SpiderBoost mainly because [1] used it for finding first-order stationary points privately. We examined whether this algorithm can be used to find second-order stationary points (SOSP), and eventually resolved the challenges in achieving SOSP guarantees. It is true that the current Section 5 can be considered an interlude. (We will make this clear in the paper.) Prior works ([1] and [2]) have considered zeroth order (excess risk), FOSP, and SOSP. To be consistent with the literature, we improve the excess risk bounds too. [1] Arora, R., Bassily, R., González, T., Guzmán, C. A., Menart, M., & Ullah, E. (2023, July). Faster rates of convergence to stationary points in differentially private optimization. In International Conference on Machine Learning (pp. 1060-1092). PMLR. [2] Wang, D., Chen, C., & Xu, J. (2019, May). Differentially private empirical risk minimization with non-convex loss functions. In International Conference on Machine Learning (pp. 6526-6535). PMLR. [3] Jin, C., Ge, R., Netrapalli, P., Kakade, S. M., & Jordan, M. I. (2017, July). How to escape saddle points efficiently. In International conference on machine learning (pp. 1724-1732). PMLR. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the detailed response, and I maintain my rating.
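To make the Q1/Q2 answers above concrete, the following is a minimal sketch of one descent step with a rate-limited escape perturbation; all names (`grad_threshold`, `gamma`, etc.) are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def perturbed_descent_step(x, grad_fn, eta, grad_threshold, perturb_std,
                           frozen, gamma, rng):
    """One step of gradient descent with an occasional isotropic perturbation.

    When the (possibly privatized) gradient is small, x may be near a saddle
    point, so extra Gaussian noise is injected to help escape it; the `frozen`
    counter rate-limits these perturbations to at most once every `gamma` steps.
    """
    g = grad_fn(x)
    if np.linalg.norm(g) <= grad_threshold and frozen == 0:
        x = x + rng.normal(0.0, perturb_std, size=x.shape)  # escape perturbation
        frozen = gamma  # no further perturbation for the next `gamma` steps
    else:
        x = x - eta * g
        frozen = max(frozen - 1, 0)
    return x, frozen
```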
Rebuttal 1: Rebuttal: **General Reply:** We thank all the reviewers for their reviews and comments. We will diligently address all the presentation issues raised by the reviewers and enhance the writing quality in future versions, for example by providing more explanation of the intuitions. In the following, we respond to the specific questions raised by the reviewers.
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper studies the problem of non-convex optimization under differential privacy constraints. The goal of the paper is to find first- and second-order stationary points of the function. The proposed algorithm, which is inspired by SpiderBoost [1], is a sampling-based method that generates a private estimator minimizing both the excess empirical and population risks.  The authors study two cases: when only polynomial run time is allowed, and when exponential run time is permitted. In the first case, their proposed algorithm outperforms the previous best and tightens the gap between the provided upper bound and the existing lower bound. For the case when exponential run time is allowed, the paper provides nearly matching upper and lower bounds.  [1] Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, and Vahid Tarokh. Spiderboost and momentum: Faster variance reduction algorithms. Advances in Neural Information Processing Systems, 32, 2019. Strengths: The paper is a collection of elegant results. From a technical point of view, the paper is solid. It builds on advanced techniques to fulfill its objective. While the paper outperforms the previous best algorithm [2] with polynomial run time, it also relaxes the smoothness assumption made in [2].  [2] Di Wang, Changyou Chen, and Jinhui Xu. Differentially private empirical risk minimization with non-convex loss functions. In International Conference on Machine Learning, pages 6526–6535. PMLR, 2019. Weaknesses: For a reader who is not fluent in the literature, the paper is not easy to follow. Several insights are left unexplained. The paper is quite dense, without enough intuition provided for the proposed algorithm. The authors did a good job of mentioning the similarities with previous work, but they did not discuss their technical novelties. Some expressions and notation are left undefined (e.g., $W_2$ at line 308).  As a side note, I think P at line 18 should be P*. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. What is the main technical novelty of the work? 2. Is the proposed algorithm a privatized version of SpiderBoost? If yes, apart from the added Gaussian noise, what additional elements are introduced? And can one similarly construct a privatized version of other algorithms like Spider or SARAH? 3. Can you please provide an intuition for why your technique can relax the smoothness assumption? 4. Apart from the relaxed smoothness assumption, can you please provide an intuition for what the difference in performance would be if one used Spider or SARAH instead of SpiderBoost? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The limitation of the work is the optimality gap between the upper and lower bounds, which the authors mention and refer to as an open problem.  Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable review. We use $P^*$ because the input dataset can be of arbitrary size. For example, we consider a set D of size n in the paper, meaning $D \in P^n$. Q1: We discuss our techniques and the challenges of previous techniques in Subsection 1.2 (our techniques) and the related work (page 2), and more details can be found there. In particular, our main technical ideas are: 1. To use the notion of drift to decide when to add the noise in SpiderBoost, so as to get good gradient estimations all the time, which enables it to escape saddle points and reach an SOSP, and 2. To design a privacy analysis of the exponential mechanism and a sampling method based on the log-Sobolev inequality (as opposed to smoothness) to obtain the excess population risk guarantees. Q2: Yes, our algorithm can be treated as a privatized version of SpiderBoost. Besides adding noise, the main added element is the method for choosing when to recompute the gradient from scratch; SpiderBoost does this after a fixed number of rounds, whereas we dynamically choose to do this based on the quantities frozen and drift. We did not try to construct private versions of other algorithms like Spider or SARAH, but we believe our methods and intuitions can also be helpful for those algorithms. However, we do not expect the theoretical bounds there to be better, as all these algorithms are morally similar. Q3: Sorry for the confusion. For context, we only relax the smoothness assumption for the excess risk, but not for finding the SOSP. The intuition for relaxing the assumption for the excess risk is that we are directly providing our bound via LSI (the log-Sobolev inequality), rather than via a run of LMC, which explicitly requires smoothness for convergence. Q4: SpiderBoost was used by [1] to obtain DP-FOSP, and it was natural to ask whether it is able to find DP-SOSP. SpiderBoost allows a larger learning rate and hence may have better practical performance. An empirical evaluation comparing these different algorithms is left for future work. [1] Arora, R., Bassily, R., González, T., Guzmán, C. A., Menart, M., & Ullah, E. (2023, July). Faster rates of convergence to stationary points in differentially private optimization. In International Conference on Machine Learning (pp. 1060-1092). PMLR. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response. I maintain the score that I gave.
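The Q2 answer above (drift-triggered full-gradient refreshes, with low-sensitivity gradient differences in between) can be illustrated with a rough sketch; this is not the paper's algorithm, and all names and noise scales are illustrative assumptions:

```python
import numpy as np

def dp_spiderboost_sketch(x0, full_grad, stoch_grad_diff, steps, eta,
                          drift_threshold, noise_full, noise_diff, rng):
    """SPIDER/SpiderBoost-style loop with drift-triggered restarts.

    A fresh (noisier) full gradient is recomputed only when the accumulated
    drift ||x_t - x_ref|| exceeds a threshold; between restarts the estimate
    is updated with gradient differences, whose lower sensitivity tolerates
    a smaller noise scale.
    """
    x, x_ref = x0.copy(), x0.copy()
    v = full_grad(x) + rng.normal(0.0, noise_full, size=x.shape)
    for _ in range(steps):
        x_new = x - eta * v
        if np.linalg.norm(x_new - x_ref) > drift_threshold:
            # drift too large: restart with a private full gradient
            v = full_grad(x_new) + rng.normal(0.0, noise_full, size=x.shape)
            x_ref = x_new.copy()
        else:
            # gradient difference has low sensitivity -> less noise needed
            v = v + stoch_grad_diff(x_new, x) + rng.normal(
                0.0, noise_diff, size=x.shape)
        x = x_new
    return x
```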
Hypernetwork-based Meta-Learning for Low-Rank Physics-Informed Neural Networks
Accept (spotlight)
Summary: The paper presents a novel low-rank PINN architecture to solve parametrized partial differential equations (PDEs). Based on a hypernetwork framework, they compute solutions to PDEs in a rank-adaptive way. Their approach is able to overcome, for certain test systems, known failure modes of PINNs. The method is extensively evaluated and benchmarked on various tasks for PDEs in 1D and 2D. Strengths: Disclaimer: I am not up to date with the most recent architectural improvements regarding PINNs and state-of-the-art accuracy. To the best of my knowledge, the architectural improvements regarding the low-rank approximation are novel, and the idea of predicting the singular values with a hypernetwork is an interesting approach to learning a single neural network that represents multiple parametrizations of PDEs: 1. The method section is well written and easy to follow. All details are sufficiently explained, and the paper is well organized. 2. The authors carried out extensive experiments to support their architectural changes and compare them against different methods. 3. The results seem promising. Especially the results regarding the failure modes are impressive, with an order of magnitude better accuracy (rel. and abs.) in Table 2 and significantly fewer parameters. Weaknesses: The results are impressive, as mentioned above, but to me it is not clear why the hypernetwork is so crucial for overcoming failure cases: 1. In my opinion the paper would benefit from a more prominent motivation of why the proposed approach is more robust against failure modes. Is it connected to the motivation in appendix D.2 and the remark in appendix B? 2. In this regard, an ablation on the architectural changes would also help to better understand the proposed improvements. To my understanding the closest PINN architecture is the HyperPINN method. Would it make sense to compare it against the newly proposed ansatz, for example in Table 2, as another method w/o pre-training? 3. In section 4.2.2 (“computational cost”) a fair comparison would be against the meta-learning strategies, especially since some of the parametrizations of $\beta$ are in the training set (for Phase 1). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Why is your model more robust against the known “failure modes”? In Table 2 it seems a Naïve LR-PINN is not able to reach the same accuracy, but considering a full-rank version, it should be as expressive as the Hyper-LR-PINN approach. Could you elaborate? (Related to the first bullet point in the weakness section.) 2. Is it clear why in the general case of CDR equations (Table 4) the improvements diminish compared to the benchmark methods? 3. Is the PINN-P approach also pre-trained on multiple parametrizations in appendix N? Judging from Table 5 it seems that PINN-P is randomly initialized, but wouldn’t it be possible to pre-train it? This could potentially also improve results for PINN-P. 4. Overall, I had problems understanding how the datasets are decomposed, especially when comparing Hyper-LR-PINN with a vanilla PINN or variations (PINN-P). As far as I understood, Hyper-LR-PINN is pre-trained on various parametrizations including the target parametrization and then in Phase 2 fine-tuned on the target. Does this mean Hyper-LR-PINN sees more datapoints on the target than the vanilla PINN? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are addressed in the main text and also the supplementary information. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: In my opinion the paper would benefit from a more prominent motivation, why the proposed approach is more robust against failure modes. Is it connected to the motivation in appendix D.2 and the remark in appendix B?** **A1:** Your understanding of our motivation is correct; we were motivated by the observations in appendix D.2 and the remark in appendix B, which led to the development of the proposed approach. Although we empirically presented that the proposed algorithm is effective in solving the “failure modes”, we lack a theoretical explanation of the phenomena we are observing. We conjecture that learning the set of basis vectors (close to orthogonal) that works well for a range of PDE parameter values seems to be the key aspect that enables the model to avoid spectral bias and, thus, facilitates approximating highly oscillatory (cf. convection equations) and very stiff (cf. reaction equations) solutions. \ **Q2: In this regard an ablation on the architectural changes would also help to better understand the improvements proposed. To my understanding the closest PINN architecture is the HyperPINN method. Would it make sense to compare it against the newly proposed ansatz for example in Table 2, as another method w/o pre-training?** **A2:** We indeed have a result of applying HyperPINN on the “failure mode” of the convection equations, shown in Appendix S. This result suggests that the improved performance of Hyper-LR-PINN comes from more than just using a hypernetwork. As elaborated in the response to the previous comment, we believe the training algorithm accounts more for the improved performance. \ **Q3: In section 4.2.2 a fair comparison would be the meta-learning strategies, especially since some of the parametrizations of $\beta$ are in the training set (for Phase 1).** **A3:** Please see global comment #1. \ **Q4: Why is your model more robust against the known “failure modes”? In Table 2 it seems a Naïve LR-PINN is not able to reach the same accuracy, but considering a full-rank version, it should be as expressive as the Hyper-LR-PINN approach. Could you elaborate?** **A4:** For Naïve-LR-PINN, the orthogonal basis is obtained from the randomly initialized weights of the layers. In contrast, for Hyper-LR-PINN, the basis is learned from the weights of the layers that represent the parameterized PDEs. Consequently, Hyper-LR-PINN can obtain a basis that is well-suited for representing the target parameterized PDEs. This hypernetwork-based meta-learning methodology provides a favorable starting point for the model before entering Phase 2 (see appendix R). The experiments in this paper empirically demonstrate that the obtained basis can more easily generalize to PDE coefficients that were not trained during Phase 1, as compared to Naïve-LR-PINN. \ **Q5: Is it clear why in the general case of CDR equations (Table 4) the improvements diminish compared to the benchmark methods?** **A5:** We believe that equations containing diffusion phenomena smooth out the solution snapshots and, thus, make all considered methods somewhat comparable to each other. In such situations, full-rank PINNs have a tendency to produce slightly better accuracy as they have higher expressivity. As shown in the additional comparisons (cf. Figure 1 of the attached PDF), we also expect that the other competing baselines (MAML, Reptile) would have trouble finding good initializations for PINNs when the ranges of PDE coefficients become larger. 
\ **Q6: Is the PINN-P approach also pre-trained on multiple parametrizations in appendix N? Judging from Table 5 it seems that PINN-P is randomly initialized, but wouldn’t it be possible to pre-train it? This could potentially also improve results for PINN-P.** **A6:** Please see global comment #2. \ **Q7: Overall, I had problems understanding how the datasets are decomposed, especially when comparing Hyper-LR-PINN with a vanilla PINN or variations (PINN-P). As far as I understood, Hyper-LR-PINN is pre-trained on various parametrizations including the target parametrization and then in Phase 2 fine-tuned on the target. Does this mean Hyper-LR-PINN sees more datapoints on the target than the vanilla PINN?** **A7:** Thanks for the comments. To address this concern, we’d like to emphasize that this is why we included baselines such as MAML and Reptile and put all the considered methods in the same experimental environment. These are PINN variants for solving parameterized PDEs, so we can use the same setups for training/test data. MAML, Reptile, and the proposed Hyper-LR-PINN all see the same number of coordinates (x,t) at the same number of PDE parameters ($\mu$) during training. For PINN-P, thanks to the reviewer’s suggestion, we were able to perform the experiments with the same training data setup, and we report the results in Tables 1 and 2 of the attached PDF. Along with those additional experiments, we will add this description of the datasets to the manuscript to address the concern. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response and the additional experiments! All my concerns were addressed and I will raise my score.
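The architecture discussed in this review thread (shared basis matrices with hypernetwork-predicted singular values, basis frozen in Phase 2) can be sketched in a few lines of PyTorch. This is an illustrative reconstruction under our own names and sizes, not the authors' code:

```python
import torch
import torch.nn as nn

class LowRankLayer(nn.Module):
    """Sketch of a rank-r layer W = U diag(s) V^T.

    U and V are shared basis matrices meta-learned in Phase 1; the singular
    values s come from a hypernetwork conditioned on the PDE parameter mu.
    In Phase 2, only s (plus biases) would be fine-tuned while U, V stay frozen.
    """
    def __init__(self, d_in, d_out, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d_out, rank) / d_out**0.5)
        self.V = nn.Parameter(torch.randn(d_in, rank) / d_in**0.5)
        self.b = nn.Parameter(torch.zeros(d_out))

    def forward(self, x, s):  # s: (rank,) predicted by the hypernetwork
        W = self.U @ torch.diag(s) @ self.V.T
        return x @ W.T + self.b

# hypothetical hypernetwork: PDE coefficient -> nonnegative singular values
hyper = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 8),
                      nn.Softplus())
layer = LowRankLayer(d_in=2, d_out=50, rank=8)
mu = torch.tensor([[30.0]])  # e.g. a convection coefficient beta
out = layer(torch.randn(4, 2), hyper(mu).squeeze(0))  # (4, 50)
```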
Summary: In this paper, a meta-learning method is proposed for learning parameter-dependent dynamical systems. In particular, similarly to the reduced order method, the proposed method uses a low-rank neural network to construct a model, and the parameter-dependent coefficients are learned separately by a neural network. By combining these two, it is possible to retrieve the important modes, depending on the parameters of the system. This makes it possible to build models that perform well despite the small model size. Strengths: As far as I know, modeling differential equations by low-rank neural networks is certainly new. Based on the idea of the reduced order method, I suppose that this is a natural and reliable approach. The construction of a small model is also desirable for simulation and other applications. In addition, the paper is clearly written. Weaknesses: Although it may not be a weakness, it is difficult to understand theoretically that learning singular vectors does not improve performance when the matrix is decomposed into lower ranks. Based on the idea of model order reduction, the singular vectors should correspond to the important modes for expressing the solution and should be learned according to the data. In this regard, there appears to be a discrepancy between the idea and the results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Why does learning singular vectors not improve performance? I have an impression that this situation is similar to the reservoir computing, in which multiple random modes are first constructed and only a part of the parameters are adjusted to form a model. Should this method be understood as such a method rather than as a reduced order method? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No potential negative societal impact is expected. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Although it may not be a weakness, it is difficult to understand theoretically that learning singular vectors does not improve performance when the matrix is decomposed into lower ranks. Based on the idea of model order reduction, the singular vectors should correspond to the important modes for expressing the solution and should be learned according to the data. In this regard, there appears to be a discrepancy between the idea and the results.** **Why does learning singular vectors not improve performance? I have an impression that this situation is similar to the reservoir computing, in which multiple random modes are first constructed and only a part of the parameters are adjusted to form a model. Should this method be understood as such a method rather than as a reduced order method?** **A1:** Thank you for the comment. Indeed, this is a valid point; making the basis vectors learnable in Phase 2 would increase expressivity. We believe, however, the difficulty comes from the training algorithm; similar phenomena have been observed in matrix decomposition methods. Consider finding a low-rank decomposition of a matrix such that min $|| X - UDV^T ||_A$ with a norm induced from an inner product. In such a problem, updating $U,D,V$ all together in an iterative solver typically introduces more complexity. To avoid such difficulty, special solvers have been developed, such as direct linear algebraic decompositions [Golub and Van Loan, 1996], which do not use a gradient-based optimization method, or, notably, alternating minimization for matrix completion [Jain, et al, 2013]. In the context of low-rank approximation of solutions of PDEs, similar alternating approaches (e.g., alternating least-squares or alternating energy minimization) have been shown to be more effective [Doostan and Iaccarino, 2009], [Dolgov and Savostyanov, 2014]. Developing such a specialized solver in the setting of low-rank MLPs can be considered out of scope. We will mention this aspect in the limitation section, where we discuss potential future directions. We agree that there is a very high-level analogy that can be made to reservoir computing (RC), since in the second phase of our two-phase training procedure only part of the parameters are trained. However, this is a feature shared with all models making use of encoding-decoding schemes. Moreover, an important distinguishing feature of RC is the recurrent neural network, whereas our neural network architecture is feedforward. The fact that we are specifically solving parametrized partial differential equations also makes our model more specialized. We believe these points make our model much more similar to a reduced order model, where a dimension-reduced implicit representation is learned in the first phase. In the second phase, a family of functions is quickly approximated by training a few parameters. [Golub and Van Loan, 1996] Golub, Gene H. and Van Loan, Charles F., Matrix Computations, The Johns Hopkins University Press, 1996. [Jain, et al, 2013] Jain, P., Netrapalli, P., Sanghavi, S.: Low-rank matrix completion using alternating minimization. In: Proceedings of the forty-fifth annual ACM Symposium on Theory of Computing, pp. 665–674. ACM (2013) [Doostan and Iaccarino, 2009] Doostan, A., Iaccarino, G.: A least-squares approximation of partial differential equations with high-dimensional random inputs. J. Comput. Phys. 
228(12), 4332–4345 (2009) [Dolgov and Savostyanov, 2014] Dolgov, S.V., Savostyanov, D.V.: Alternating minimal energy methods for linear systems in higher dimensions. SIAM J. Sci. Comput. 36(5), A2248–A2271 (2014) --- Rebuttal Comment 1.1: Comment: Thank you very much indeed for the detailed explanation! My concern is fully addressed and I have raised my score.
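The alternating-minimization idea contrasted with joint gradient descent in the rebuttal above is easy to illustrate for a plain Frobenius-norm objective. The following is a minimal sketch (not tied to the paper's setting; names and the ridge term are ours):

```python
import numpy as np

def als_low_rank(X, rank, iters=50, reg=1e-6):
    """Alternating least squares for X ~= U @ V.T (Frobenius norm).

    Each subproblem is a linear least-squares solve; this is the kind of
    alternating update contrasted above with jointly descending on all
    factors at once. `reg` is a small ridge term for numerical stability.
    """
    m, n = X.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    eye = reg * np.eye(rank)
    for _ in range(iters):
        V = np.linalg.solve(U.T @ U + eye, U.T @ X).T    # fix U, solve for V
        U = np.linalg.solve(V.T @ V + eye, V.T @ X.T).T  # fix V, solve for U
    return U, V
```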
Summary: This work aims to improve the efficiency and performance of adapting physics-informed neural networks (PINNs) in many-query settings, given that the field is currently limited by computationally demanding retraining on new PDE parameters. It specifically focuses on learning low-rank versions of PINNs that enable low-parameter models through a hypernetwork-based meta-learning framework. A two-stage learning framework is proposed, in which 1) the hypernetwork and a low-rank PINN (LR-PINN) are trained across varying PDE parameters to learn a general set of basis vectors, and 2) the basis vectors of the LR-PINN are fixed and the hypernetwork is used to generate an initial set of diagonal values for the novel PDE parameter configuration, which is then trained on the new collocation points. In a set of experiments and ablations on challenging 1-D and 2-D PDEs that are known to have common "failure modes" for PINNs, the Hyper-LR-PINN demonstrated significant computation and performance improvements over both traditional PINN and meta-learning PINN baselines. Strengths: The blend of hypernetworks and meta-learning techniques in the PINN space in order to build efficient models that can work or adapt across a variety of PDE parameters is of increasing interest and importance. This work is positioned well against similar recent works and provides an intuitive and thought-out approach for handling all of these components together, especially the smart use of hypernetworks in outputting a smaller set of important parameters as a component of the low-rank decomposition. - The writing is clean and clear and the sections are structured well. - The algorithm is explained clearly and it is easy to follow all the moving components. - The research questions are clearly laid out and have corresponding answers in the model design. - The experiments are robust, with ablations that support each of the research questions, and a comprehensive suite of baselines is considered. - The appendix is similarly robust, providing both visualizations and analogies to relevant background techniques that enhance understanding. The inclusion of the preliminary experiments that motivated the proposed techniques is additionally welcome. - The code is clean and reproducible, given the provided environment files and a detailed README with instructions. All data generators are provided as well. Weaknesses: - The lack of explanation of the meta-learning inclusion to the framework in the main paper is a bit confusing if a reader isn't coming from a meta-PINN perspective, as there are no definitions given in the standard form of meta-train and meta-test environments or details on the context/query sets. It is elaborated on more in the Appendix and it can be pieced together in the work, but perhaps a clearer reference to that section in the main work or an elaboration on how meta-learning is formulated in this domain may provide more general clarity. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Societal impact and limitations of the model/experimentation are described appropriately, with further information on future work provided in the supplementary material. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The lack of explanation of the meta-learning inclusion to the framework in the main paper is a bit confusing if a reader isn't coming from a meta-PINN perspective, as there are no definitions given in the standard form of meta-train and meta-test environments or details on the context/query sets. It is elaborated on more in the Appendix and it can be pieced together in the work, but perhaps a clearer reference to that section in the main work or an elaboration on how meta-learning is formulated in this domain may provide more general clarity.** **A1:** We appreciate the suggestion. We agree that readers who are not coming from a meta-PINN perspective may be confused. Considering the page limitation, we think it would be better to put a clear reference to the relevant sections in the main text. We will make this change in the camera-ready version of the manuscript.
Summary: The paper proposes an approach for solving PDEs by using a low-rank parameterized NN. While traditional PINNs are restricted to a single PDE instance, the authors use a hypernetwork to be able to solve PDEs with different parameters. The hypernetwork predicts the weight matrices of a PINN network in a low-rank fashion: each weight matrix is decomposed via SVD, and only the diagonal matrix of singular values is modulated by the hypernetwork, while the other two matrices are shared across all the different PDEs. This allows quick inference on unseen PDEs, by learning only the diagonal terms. Performance is evaluated on 1D convection-reaction-diffusion and 2D Helmholtz PDEs. It is shown that this low-rank hypernetwork architecture allows learning on previous failure cases of the vanilla PINN (e.g., high convection or high reaction). The hypernetwork method shows strong performance compared to other meta-learning methods (MAML, Reptile), as well as other PINN variants. Strengths: - The paper is well-written and provides an extensive set of experiments and ablations showing the relevance of the low-rank decomposition and the proposed loss terms - The performance of the proposed model is very competitive, while also providing much faster inference compared to the other meta-learning algorithms, and generalizing better than the other PINN variants Weaknesses: - “low-rank PINNs containing only hundreds of model parameters” (l. 10): the parameter comparison seems unfair, since only the parameters learned during phase 2 are counted (not the hypernetwork or basis-vector parameters) - The adaptive rank section (Section K) in the appendix is missing Technical Quality: 3 good Clarity: 3 good Questions for Authors: - "For higher values (20), it appears that a higher rank is required to achieve a certain level of accuracy, presumably due to numerical reasons.” (l. 503): what do you mean by numerical reasons? - How does the proposed architecture scale to higher dimensions? - How does the Phase 1 computation compare to the PINN variants? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The method is currently limited to generalizing to new PDE parameters only; it would be interesting to see if it can be extended to different initial / boundary conditions Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: “low-rank PINNs containing only hundreds of model parameters” (l. 10): the parameter comparison seems unfair, since only the parameters learned during phase 2 are counted (not the hypernetwork or basis-vector parameters)** **A1:** Thank you for the comment. We will make the statement more precise to be fair with the baselines. Our planned edit is as follows: “To address this issue, we propose lightweight low-rank PINNs containing only hundreds of model parameters, enabled by meta-learning a set of common basis vectors for weights and an associated hypernetwork, which allow efficient solution approximations for varying PDE input parameters.” \ **Q2: The adaptive rank section (Section K) in the appendix is missing** **A2:** Thank you for pointing this out. The content of Section K was presented in a less-than-ideal place, causing the confusion. We have reorganized the sections in the Appendix for better legibility. \ **Q3: "For higher values (20), it appears that a higher rank is required to achieve a certain level of accuracy, presumably due to numerical reasons.” (l. 503): what do you mean by numerical reasons?** **A3:** We wrote "numerical reasons" to mean that, while there is a theoretically optimal approximation that guarantees equivalent error rates for different parameter values, there is no known learning procedure that finds this optimal approximation numerically at equivalent rates of convergence. That is to say, the learning problem we consider here is known to have more challenging and non-convex loss landscapes for optimization at higher parameter values [18]. Therefore, we speculated that representing these more complex phenomena would require a higher rank, and we observed such behavior in experiments, as shown in Figure 4 of the main paper. In the model reduction and numerical analysis literature, there is a heuristic argument based on numerical experiments suggesting a trade-off between the rank and the ease of numerical approximation [Welper, 2017], unless some nonlinear transformation is applied [Rim et al., 2023]. We plan to edit the text to make this clearer, as follows: "For higher values (20), it appears that a higher rank is required to achieve a certain level of accuracy, presumably due to the difficulty of the numerical optimization problem." [18] Aditi Krishnapriyan, Amir Gholami, Shandian Zhe, Robert Kirby, and Michael W Mahoney. Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems, 34:26548–26560, 2021 [Welper, 2017] G. Welper, Interpolation of Functions with Parameter Dependent Jumps by Transformed Snapshots, SIAM Journal on Scientific Computing 2017 39:4, A1225-A1250 [Rim et al., 2023] D. Rim, B. Peherstorfer, and K.T. Mandli, Manifold Approximations via Transported Subspaces: Model Reduction for Transport-Dominated Problems, SIAM Journal on Scientific Computing 2023 45:1, A170-A199 \ **Q4: How does the proposed architecture scale to higher dimensions?** **A4:** For higher dimensional PDEs, we expect the specification of our LR-PINN architecture should match the PINN architectures that are shown to be effective in approximating solutions (for example, PINNs for 1D/2D/3D PDEs in the original PINN paper). They employ relatively small neural networks (e.g., an MLP with 9 layers of 20 neurons), and we expect that employing a similar architecture in a low-rank format would allow approximation of solutions with reasonable accuracy. 
In our main paper, we used the same model architecture to learn both the 1D Convection-Diffusion-Reaction (CDR) equations and the 2D Helmholtz equation, and successfully trained on the equations in both cases. Also, we expect the size of the hypernetwork can be kept relatively small, as we interpret the hypernetwork as playing the role of an interpolation function. The effectiveness of lightweight hypernetworks has been demonstrated in the experiments in the manuscript. \ **Q5: How does the Phase 1 computation compare to the PINN variants?** **A5:** Please see global comment #1. --- Rebuttal Comment 1.1: Comment: Thank you for the explanations, I will keep my score
Rebuttal 1: Rebuttal: **Global comment #1: [Additional results on computational cost]** Assuming that we solve PDEs for the same number of PDE parameters with our method and the other baselines, the computation needed for each forward pass in Phase 1 is slightly higher than that of training all individual vanilla PINNs, because the proposed method involves training the hypernetwork. At the cost of marginally increased computation, the proposed method exhibits many benefits; most prominently, (1) for test PDE parameters, an error level of about 5% can be achieved within a few epochs (see Figure 1 of the attached PDF), while the traditional method requires several thousand epochs, and (2) we resolve the difficulties that PINNs have in the failure modes. Regarding benefit (1) mentioned above, we conducted additional analyses of computational cost in a many-query scenario for four models, PINN, MAML, Reptile, and Hyper-LR-PINN, for a fair comparison. In the experiments, we trained the models for convection equations on various $\beta$ ranges, i.e., [1, 5], [1, 10], and [1, 20], and measured the number of epochs required to reach an L2 error of less than 0.05. We trained the three meta-learning models, MAML, Reptile, and Hyper-LR-PINN, for the convection equations with $\beta$ values at an interval of 1 in Phase 1, whereas we tested the models at an interval of 0.5 in Phase 2, i.e., the trained and tested $\beta$ settings do not overlap. The experimental results are summarized in Figure 1 of the attached PDF. According to Figure 1, when learning convection equations with $\beta \in$ [1, 20] and $\beta \in$ [1, 10], the two meta-learning baselines are observed to perform worse than PINN models without pre-training. For some beta values, the minimum threshold, i.e., an error level of about 5%, was not even reached within 10,000 epochs, so those runs are not plotted. However, with $\beta \in$ [1, 5], the meta-learning baselines demonstrate superior performance compared to PINN for all beta values. On top of that, Hyper-LR-PINN stands out as the most effective approach, achieving the target with the least computational cost across all ranges. These experimental results indicate that existing meta-learning methodologies seem to have difficulties in finding a good “global/common” initialization for PINNs as the range of the PDE coefficients becomes larger. On the other hand, Hyper-LR-PINN does not suffer from the same issue and is shown to be very effective over a wider range of PDE coefficients compared to existing meta-learning methods. We will include the additional experimental results and discussions in the paper. \ **Global comment #2: [Additional ablation studies on PINN-P]** We have verified that PINN-P is amenable to pre-training, since it models the parameterized solution. Following the reviewers’ suggestion, we integrated an additional pre-training phase into PINN-P and conducted new experiments on convection equations with $\beta$ = {30, 35, 40}, which are PINN’s failure modes (cf. Tables 2 and 7). The fine-tuned results of PINN-P are summarized in Tables 1 and 2 of the attached PDF. When training PINN-P, we followed the same experimental setups as the experiments in Table 2. 
To be specific, in phase 1, we trained PINN-P using convection equations with $\beta \in$ [30, 40], and then, in phase 2, fine-tuned the models for six specific convection equations where $\beta$ = {30, 35, 40} and $\beta$ = {41, 42, 43} (within and outside the learning range of phase 1, respectively), employing the pre-trained model from phase 1. As shown in Tables 1 and 2 of the attached PDF, PINN-P with a pre-training phase shows overall improvements compared to the previous PINN-P experiments in Table 7. However, there still exists a substantial performance gap between PINN-P and Hyper-LR-PINN (Full rank, Adaptive rank). Both PINN-P and Hyper-LR-PINN demonstrate that the pre-training process can be beneficial in improving model performance. We will include further discussion of the pre-trained PINN-P model. Pdf: /pdf/d1085e64cb522981eb6a9545d3694461da7dc754.pdf
NeurIPS_2023_submissions_huggingface
2023
Characterizing Out-of-Distribution Error via Optimal Transport
Accept (poster)
Summary: In this work, the authors aim to predict a model’s performance on out-of-distribution (OOD) data without access to labels during testing. Specifically, they identify pseudo-label shift, which gauges the difference between the predicted and the true OOD label distributions, as an indicator of the under-estimation issue suffered by existing methods. Based on this observation, they propose Confidence Optimal Transport (COT) and an empirically-motivated variant of COT, Confidence Optimal Transport with Thresholding (COTT), for more robust OOD error estimation. Experiments are conducted on 10 datasets with various types of distribution shift to verify the effectiveness of the proposed method. Strengths: 1. The motivation in this work is clear. The authors explain the under-estimation issue encountered by previous studies through the lens of pseudo-label shift, and propose a method leveraging optimal transport theory based on this observation. 2. The code is available with the submission, which improves the reproducibility of the paper. Weaknesses: 1. Since this paper contains many mathematical symbols, the meaning of some symbols is confusing, which increases the difficulty of understanding. For example, in Section 2.1, you mention predicting the error $\epsilon = 1-\alpha$, but later the predicted error is denoted as $\hat{\epsilon}$. Comparing Figure 1 with Section 3.1, I find the assumption changes from $P_T(\vec{y}) \approx P_S(\vec{y})$ to $P_T(\vec{c}) \approx P_S(\vec{y})$. Is the meaning of $P_T(\vec{y})$ equivalent to that of $P_T(\vec{c})$? In addition, in Section 3.1, what is the meaning of $\hat{P}_S(\vec{y})$ and $\hat{P}_T(\vec{y})$? I suggest that the authors explain the symbols after every equation and use harmonized notation. 2. The relationship between pseudo-label shift and the true target error is not clear. In Section 2.4, you briefly mention that the pseudo-label shift is a lower bound on the true target error of the model. Can you explain this property in detail? Specifically, I would like to know the relationship between your method (or miscalibration) and the true target error. Can you give some insights? 3. This work is potentially vulnerable to two cases: 1) an imbalanced test set; 2) the label space of the test set being only a subset of that of the training set. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See the weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors discuss the limitations of their work. It would be better if they discussed more cases, such as the two mentioned in the main review. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable review of our work! In particular, we are excited that our motivation reads clearly to the reviewer and that our code submission alongside this contribution improves the reproducibility of our method. ### On Weakness 1 We thank the reviewer for pointing out the confusion and will clarify below. We have updated our paper to incorporate the suggestions and corrections proposed by the reviewer to make it more readable. - Regarding the notation for test error, we would like to clarify that ${\epsilon}$ is the true test error (which is assumed to be inaccessible and must be estimated/predicted), while $\hat{\epsilon}$ is the predicted test error (given by an error estimation algorithm), as defined in line 84 of our paper. Our goal is to output $\hat{\epsilon}$ as close to ${\epsilon}$ as possible. - The $\vec{c}$ in line 188 should be $\vec{y}$. - In Section 3.1, the hat notation denotes the empirical distributions, that is, the distributions over the finite samples. ### On Weakness 2 We thank the reviewer for asking for more details and intuitions on pseudo-label shift. We provide a more detailed explanation below and have updated our paper correspondingly. Pseudo-label shift is a lower bound on the true error because, as we mentioned in Section 2.4, it is the Wasserstein distance between $P_{pseudo} (y)$ and $P _T (y)$, defined as the minimum transport cost over all transport maps between these two distributions. Meanwhile, the true error corresponds to a specific transport map: the one where the pseudo label of each sample is mapped to its true label, which is why the true error is lower bounded by the pseudo-label shift. For the relationship between our method and the true target error, we would like to direct the reviewer to Proposition 2 (line 194) of our paper, where we establish that the error estimate of COT is never smaller than half the pseudo-label shift, which is a lower bound on the true target error. ### On Weakness 3 We thank the reviewer for pointing out potentially vulnerable cases and will address them below. We have incorporated the discussion of these two cases into our paper. - Imbalanced test set: Theoretically, our method is agnostic to different label distributions, assuming no/mild label shift between in-distribution and out-of-distribution label distributions. Empirically, we would like to note that RxRx1-WILDS, Amazon-WILDS, and CivilCom.-WILDS are all imbalanced datasets, and COT/COTT provides competitive performance on these datasets. - The label space of the test set is only a subset of that of the training set: If a big fraction of the training set labels do not show up in the test set, this will cause a large label shift, which would lead our methods to overestimate the error. However, this setting is outside the scope of this paper, as we currently consider no/mild label shift between in-distribution and out-of-distribution test sets. In the future, we plan to explore the use of Partial Optimal Transport for this more challenging setting. --- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you for your detailed response to my reviews. My concerns have been addressed, so I chose to raise my score. Best regards, Reviewer chpC
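To make the optimal-transport idea in this thread concrete, here is a rough empirical sketch of a COT-style estimate: match each target softmax vector to a one-hot label drawn from the assumed source label marginal and report the average matching cost. The exact cost and normalization in the paper may differ, and all names are ours:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cot_style_error_estimate(confs, source_label_marginal, rng=None):
    """Sketch of a COT-style OOD error estimate.

    confs: (n, k) array of softmax vectors on unlabeled target data.
    source_label_marginal: (k,) assumed label distribution (no/mild label shift).
    Solves a discrete OT (assignment) problem with L-infinity ground cost and
    returns the average transport cost as the error estimate.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, k = confs.shape
    labels = rng.choice(k, size=n, p=source_label_marginal)
    onehots = np.eye(k)[labels]
    # pairwise cost c(conf, onehot) = ||conf - onehot||_inf
    cost = np.abs(confs[:, None, :] - onehots[None, :, :]).max(axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()
```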
Summary: Authors develop a methodology for estimating the accuracy of a classifier in the case where the distribution of the test set is different from the one used in the training data. Their methodology is based on computing a certain Wasserstein distance between suitable distributions on labels. Authors argue this definition is robust to pseudo-label shift and validate their claims through extensive experimental validation. Strengths: I think overall it is a good, strong paper. It addresses an important problem and it provides a highly innovative perspective on that problem. Experiments are compelling and make a strong point that the method is able to combat the otherwise pervasive underestimation of error for out-of-distribution samples. Weaknesses: The main weakness is the lack of sufficient clarity. The entire argument is subtle and requires a very clear explanation. Unfortunately, the way the manuscript is written does not help much. Another weakness is that while this is promoted as an OT-based method, the theoretical results (while seemingly correct) don’t enlighten much about what the role of optimal transport is. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1) Proposition 2 is nice but it is unclear how such a lower bound can ever be applied. In other words, the estimated error could always be 1, making the result useless. Authors should elaborate more on the significance of this result. 2) Similarly, Corollary 1 is helpful (line 189) but again, it doesn't say the proposed method is always good. What if AC got it just right while COT overestimates the error? The fact that there is a gap between the great experimental results and these somewhat weak theoretical results is unfortunate, and the authors (although perhaps not here) should aim to clarify this. 3) Authors may try to better explain the role of the Wasserstein infinity distance here. Is there anything special about this metric space that makes things work, or is it just a mathematical convenience? The proofs are based on geometrical intuitions, but is there anything special about the Wasserstein geometry here? 4) It would be great if the authors could elaborate more on why the pseudo-label shift is the right quantity to focus on. By line 147 I am worried that the observed correlation shown in the experiments could be an artifact of the fact that the pseudo-label shift is bounded by $\epsilon$. How can this be ruled out? 5) The definition of $P_{pseudo}$ in line 95, page 3, is awkward as it is not immediately clear whether it refers to domain X or Y. 6) The inline equation in line 188, page 5, seems wrong. Could you please correct it? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: n.a. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review and are encouraged to hear that our work was perceived as a good and strong contribution, providing a highly innovative perspective on the problem of OOD error estimation. Further, we appreciate the reviewer’s comment that our experiments are compelling in supporting our claims. In the following paragraphs, we will provide further clarifications of our method, which we have also incorporated into the updated version of our manuscript. ### On Weakness: We thank the reviewer for the advice to improve the writing of our paper to make our argument stronger. We have added more explanations and details to make our messages clearer. On the role of OT, we think Figure 1 provides some intuition. Essentially, using OT, our method can provide an error estimate that is aware of the assumed label distribution. From here, we prove formal guarantees for our estimate, i.e., COT provides provably more conservative estimates than AC, and COT is never lower than half the pseudo-label shift, which is a lower bound on the true target error. We hope this gives a clearer picture of our work, but we welcome any follow-up questions. ### On Questions 1 and 2: The reviewer insightfully pointed out that both Corollary 1 and Proposition 2 are lower bounds for COT and that there is no upper bound guarantee. These lower bounds are useful because NNs tend to be overconfident [1], resulting in overly optimistic performance estimates for existing confidence-based methods. Given this observation, lower bounds are actually more useful than upper bounds. On the other hand, we believe upper bounds are difficult to obtain and require a much deeper theoretical understanding of issues like calibration. While more calibration methods have been proposed over the years, the reason why and when neural networks are miscalibrated is still not well understood [2]. We admit that it is indeed possible for AC to be just right and COT to overestimate. However, we rarely observe this phenomenon empirically: when the model makes a lot of mistakes, very low confidence is required for a large overestimation of the correct set, but we know neural nets are more prone to be overconfident than not [1]. As a result, AC almost always underestimates the error. Thus, we think the two lower bounds we prove for COT are helpful in explaining the success of our methods on a large number of datasets. We hope this explanation helps clarify things, but we welcome any follow-up questions during the discussion period. ### On Question 3: On the role of the Wasserstein infinity distance: the reviewer’s observation is astute. The infinity norm is the most natural distance for AC-MC, which computes the maximum confidence of a confidence vector. This is precisely the definition of the infinity norm of a confidence vector. The infinity norm enables us to shine a new light on AC that was previously unexplored. Bringing AC into this abstract geometric framework allows us to obtain results previously unattainable: 1. Understanding the well-known failure mode of AC-MC via projection optimality, an inherently geometric notion. 2. Designing COT and providing a guarantee that safeguards the method from similar issues. While similar results may be obtainable with enough effort from other perspectives, we believe our novel geometric insights are interesting to the community. As a side note, it is a simple exercise to show that the L-1 norm and the L-infinity norm are equivalent for COT. 
However, the L-1 norm does not provide a unified perspective on AC-MC, which is why the L-infinity norm is important here. ### On Question 4: The reasons why we consider the pseudo-label shift are the following: - When the source and target label marginals are the same, the pseudo-label shift is a lower bound on the true error because, as we mentioned in Section 2.4, it is the Wasserstein distance between $P_{pseudo} (y)$ and $P_T (y)$, defined as the minimum transport cost over all transport maps between these two distributions. Meanwhile, the true error corresponds to a specific transport map: the one where the pseudo label of each sample is mapped to its true label, which is why the true error is lower bounded by the pseudo-label shift. - Under the assumption that the target label distribution is close to the source one, pseudo-label shift is a quantity that we can actually estimate in practice, and this quantity provides us with additional information about the true target error. - It also has an intuitive triangular relationship with AC-MC and COT, as illustrated in Figure 1. Using the additional information about the target label distribution allows us to get a more accurate prediction when the model is miscalibrated, as we discussed in Sections 2.3 and 2.4. On how to interpret the correlation between pseudo-label shift and AC-MC’s estimation error: - The reviewer is correct that the pseudo-label shift lower bounds the true error. Assuming this refers to Figure 2, we would like to clarify that the y-axis is AC-MC’s estimation error rather than the true error. We hope this helps clarify things, but if not, we welcome any follow-up questions during the discussion period. ### On Questions 5 and 6: We thank the reviewer for their writing suggestions. The $\vec{c}$ in line 188 should be $\vec{y}$. We have incorporated the suggestions and corrected the typos in our paper. ### References: [1] Ovadia, Yaniv, et al. "Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift." Advances in neural information processing systems 32 (2019). [2] Wang, Cheng. "Calibration in Deep Learning: A Survey of the State-of-the-Art." arXiv preprint arXiv:2308.01222 (2023). --- Rebuttal Comment 1.1: Title: Thank you Comment: Thanks a lot for your in-depth rebuttal. All my concerns have been addressed and I have maintained my already positive assessment of this paper.
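The geometric link drawn above, that AC-MC's max confidence is exactly the L-infinity norm of each softmax vector, fits in a few lines. A minimal sketch of the Average-Confidence baseline (function name is ours):

```python
import numpy as np

def ac_mc_error_estimate(confs):
    """Average-Confidence baseline: predicted error = 1 - mean max confidence.

    The max entry of each (nonnegative) softmax vector equals its L-infinity
    norm, which is the geometric link to COT drawn in the rebuttal above.
    Overconfident (miscalibrated) networks make this estimate too optimistic.
    """
    return 1.0 - np.linalg.norm(confs, ord=np.inf, axis=1).mean()

# e.g., three 4-class softmax vectors
confs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.4, 0.3, 0.2, 0.1],
                  [0.9, 0.05, 0.03, 0.02]])
print(ac_mc_error_estimate(confs))  # 1 - mean(0.7, 0.4, 0.9) = 1/3
```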
Summary: This paper proposes Confidence Optimal Transport (COT) and a variant, Confidence Optimal Transport with Thresholding (COTT), which build on pseudo-label shift to characterize the error between the predicted and true OOD label distributions. Strengths: 1. The idea of using pseudo-label shift to characterize error is natural and straightforward. 2. Experiment results are reasonable. Weaknesses: 1. This work does not have sufficient theoretical support for the method. 2. I am not fully convinced that the main assumption of the same source label distribution and target label distribution is reasonable. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. I don't understand the assumption: "the target label distribution is close to the source label distribution" and think this assumption conflicts with the purpose of detecting OOD error --the test set should be assumed to behave differently from the training set and that's where the error comes from. I wonder if the authors could help clarify this assumption. 2. Figures 2 and 3 do not seem clear to me; could the authors elaborate more on the purpose of including the types of corruption in the figures? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes, the authors have addressed their limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback and for the recognition that our idea is natural and straightforward, and that our experiment results are reasonable. We address concerns and questions below. ### On Weakness 1 Previous works have established that no theoretical guarantee is possible for OOD error estimation on arbitrary distribution shifts (see Section 3.1 in Garg et al. 2022 [1]). Therefore, assumptions are essential to make progress on theoretical guarantees. We believe that one significance of this work is that we specify a concrete assumption (no/mild label shift) that, when it holds, makes our method more desirable than other methods. Most previous methods rely on the assumption that the model is well-calibrated. However, modern neural networks are well known to be overconfident and miscalibrated. While calibration methods such as temperature scaling help in the in-distribution setting, they fail miserably under distribution shift. See Ovadia et al. 2019 [2] for a comprehensive study. Under the no/mild label shift assumption, we additionally showed that COT provides provably more conservative estimates than AC, and established the relationship between COT and pseudo-label shift. We would appreciate it if the reviewer could elaborate on what their ideal theoretical support looks like, so we could further discuss whether such desired results are attainable. ### On Weakness 2 and Question 1 To start with, we would like to clarify that we mainly focus on covariate shifts in this work. For example, different hospitals might use different stain colors for their pathology samples. This has the effect that their images look very different, but their respective label marginals (distribution of classes of diseases) should be very similar. For another example, when a self-driving car operates on rainy days, the images it collects might be very different from those from sunny days, but the occurrence of objects, for example, traffic lights, stop signs, etc., should still be the same. We hope these examples help to clarify why the assumption that the source and target label distributions are the same does not defeat the purpose of estimating OOD errors. Additionally, building upon our response regarding Weakness 1, since some assumptions are required, we have chosen an assumption that is frequently satisfied, theoretically convenient to work with, and produces strong empirical results. We again thank the reviewer for raising this concern over our assumption. We have updated our paper to explicitly state the distribution shifts we aim to tackle. ### On Question 2: We thank the reviewer for pointing out their confusion over Figures 2 and 3 and have updated their captions to make them clear. In Figures 2 and 3, we show the corruption types and severities to inform readers of the specific distribution shifts we test on in the CIFAR-10-C and CIFAR-100-C datasets. ### References [1] Garg, S. and Balakrishnan, S., 2022. Leveraging Unlabeled Data to Predict Out-of-Distribution Performance. ICLR. [2] Ovadia, Yaniv, et al. "Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift." Advances in neural information processing systems 32 (2019). --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my concerns and have decided to stick to my rating of 5.
Summary: The paper proposes to compute OOD error estimates using a Wasserstein distance between the target label distribution and the distribution of confidence vectors. This is in contrast to AC-MC, which compares the pseudo-label distribution and the confidence distribution. The paper motivates COT (confidence optimal transport) and COT with thresholding (COTT) to handle outliers and shows they are useful metrics in understanding OOD error. The paper does a variety of experiments to evaluate how the proposed methods do in fact estimate OOD error better than existing work. Strengths: 1. The experiments in the paper show a clear advantage of COT and COTT for predicting OOD error using unlabelled samples. 2. The presentation in the paper is clear, barring some organization issues. 3. The paper tackles the important problem of estimating OOD error without labels, and the proposed method is not very complicated. Weaknesses: 1. The paper does not specify the kind of distribution shift under which the method works; causal shift, anticausal shift, covariate, etc. For example, there are distribution shifts where p_S(X) = p_T(X), which, given good prediction, would mean small COT and COTT (because confidences are concentrated around the correct label). Could the authors explain how this may not be an issue? Better yet, could the authors evaluate the methods on a dataset like ColoredMNIST with a flipped spurious correlation between train and test. 2. The writing could be improved by introducing COT and COTT before showing plots with them. 3. I'm a little fuzzy on the intuition for why AC-MC does not work as well as COT. Is it that the confidence distribution becomes more diffuse with distribution shift and AC-MC does not scale with that "diffusion" because the pseudo-label distribution also depends on the confidence distribution, but COT, because it looks at $p_T(Y)$, breaks that dependence? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: In one sense. The weaknesses section outlines the issue of understanding what kind of OOD the method is really testing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback and for the compliment that the experiments demonstrate clear advantages of our method, the presentation of the paper is clear, and we tackle an important problem of estimating OOD error without a complicated solution. We would like to address your questions and concerns as follows: ### On Weakness 1 We thank the reviewer for this great point and have added clarifications about the shift we studied in the updated version of our paper. To clarify, we mainly experimented with covariate shift datasets. For cases where $P_{S}(X) = P_T(X)$, we are essentially estimating the model’s performance on in-distribution data. Since we calibrate the model using temperature scaling on the in-distribution validation set, as mentioned in Section 4.2 of our paper, the confidence scores will be close to the in-distribution accuracy on average. As a result, the particular case brought up by the reviewer would not be an issue. Per the reviewer’s suggestion, we performed additional experiments on the ColoredMNIST dataset. We found that COT/COTT provides consistently strong performance there as well. Please refer to the PDF file attached to our global response for more detailed experimental results. ### On Weakness 2 We thank the reviewer for this great suggestion! We have updated the paper accordingly. ### On Weakness 3 The reviewer’s intuition on why AC-MC does not work as well as COT sounds mostly correct to us. Here is a more detailed elaboration on this intuition. Modern neural networks are well known to be overconfident and miscalibrated. While calibration methods such as temperature scaling help in the in-distribution setting, they fail miserably under distribution shift. See Ovadia et al. 2019 [1] for a comprehensive study. Now, AC-MC by definition only works when the calibration is good, which, as we mentioned, is a poor assumption in practice under OOD settings. This corresponds to the reviewer’s intuition where the confidence becomes “diffused”, but not enough, due to overconfidence and the failure of OOD calibration. By contrast, COT is able to adjust for these overconfident mispredictions using the assumed label distribution. To provide a more concrete example, a model might have 80% accuracy on in-distribution data, and since it is calibrated, its average confidence (AC) is around 80%. Now, on the OOD data, it can have an accuracy of 50% but an AC of 70% due to overconfidence. But if pseudo-label shift exists on this OOD data, which we empirically observed to be true in extensive experiments, COT will be able to leverage this information to provide a more accurate estimate of the performance. (A toy numeric illustration of this example is given at the end of this thread.) ### References [1] Ovadia, Yaniv, et al. "Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift." Advances in neural information processing systems 32 (2019). --- Rebuttal Comment 1.1: Comment: Thanks for the response. I have updated my score. I had one major concern left that another reviewer pointed out: the assumption that "the target label distribution is close to the source label distribution". I am inclined to not consider this a very serious issue after 1) the authors pointed out their main concern is covariate shift and also that label shift correction is well-studied and could be applied orthogonally to the method.
Could the authors comment on my thinking here and write their own response to the concern that " the target label distribution is close to the source label distribution" is a strong assumption? --- Reply to Comment 1.1.1: Title: Reply to Reviewer 1Jqy's Inquiry Comment: We thank you for acknowledging our rebuttal and for raising the score! We are glad that you requested further clarifications of our assumption that “the target label distribution is close to the source label distribution”, as this assumption is an important part of our paper. Your thoughts on the assumption make sense to us. We also think that for covariate shifts, assuming the target label distribution is close to the source label distribution is a natural one. However, while we do agree that label shift correction is well-studied, we would like to note that these correction methods [1] assume there are no covariate shifts, i.e., p(x|y) does not change. Thus, we do not think we can directly use existing methods to relax our assumption. While this is unfortunate, we think this also helps to support our claim that finding the right assumption is critical in making progress on tackling distribution shifts. We would address the concern by pointing out: 1) the assumption that “the target label distribution is close to the source label distribution” is a natural assumption for covariate shifts as we observe it to be true in various real-world benchmark datasets we experimented with. 2) the assumption is a powerful one that, when held true, allows us to compute pseudo-label shift, a lower bound of true error, and to propose COT/COTT that provides strong empirical performance and some theoretical guarantee. 3) proposing this assumption has great value, as the community currently working on the problem of estimating OOD error finds it challenging to specify under what circumstances their proposed methods are guaranteed to work. Confidence-based methods mainly rely on the model being well-calibrated, which is rarely true on OOD datasets [2] and can be extremely unreliable as shown in our paper. 4) we plan to work on methods to relax this assumption in the future to make COT/COTT robust to even extreme distribution shifts. Thanks again for your reply. Please let us know if our reply addressed your concerns. [1] Lipton, Zachary, Yu-Xiang Wang, and Alexander Smola. "Detecting and correcting for label shift with black box predictors." International conference on machine learning. PMLR, 2018. [2] Ovadia, Yaniv, et al. "Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift." Advances in neural information processing systems 32 (2019).
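For completeness, here is the toy numeric illustration promised in the Weakness 3 response earlier in this thread. All numbers are the hypothetical ones from that example, not measurements from our experiments:

```python
# Toy illustration of the overconfidence failure mode discussed in this
# thread; the numbers are the hypothetical ones from the example above.
avg_conf_ood = 0.70                    # average confidence (AC) on OOD data
true_acc_ood = 0.50                    # actual accuracy on OOD data
ac_error_estimate = 1 - avg_conf_ood   # AC predicts a 30% error
true_error = 1 - true_acc_ood          # the real error is 50%
print(f"AC underestimates the error by {true_error - ac_error_estimate:.2f}")
```

A method that can exploit the assumed target label marginal, as COT does, has a chance to close this gap; AC by construction cannot.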
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their time and effort in reviewing our work. We are glad that our work receives the following compliments from our reviewers: - It shows “clear advantage of COT and COTT for predicting OOD error” (1Jqy) over “most other metrics that are sensitive to miscalibration” (HB65), and “provides some performance guarantee” (HB65). - “The motivation in this work is clear” (chpC), “addresses an important problem and provides a highly innovative perspective” (dn4Z), “the idea of using pseudo-label shift to characterize error is natural and straight-forward” (1ruP), and “the proposed method is not very complicated” (1Jqy), "this manuscript is timely and important" (HB65). - “The authors present impressive experiments” (HB65), “experiments results are reasonable” (1ruP), “experiments are compelling and make a strong point” (dn4Z), “the code is available which improves the reproducibility” (chpC). We have responded to all reviewer concerns and completed all the additional experiments that were requested. The attached PDF file contains experimental results for ColoredMNIST, per reviewer 1Jqy’s request. Pdf: /pdf/ce07ea7bbed13f20f3954638eae6bc9b117f968e.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The manuscript studies the challenges caused by out-of-distribution (OOD) data in machine learning models. They identify "the difference between the predicted and true OOD label distributions" (called pseudo-label shift) as the main reason many existing works underestimate OOD errors. They propose a new approach called Confidence Optimal Transport (COT), which uses optimal transport theory to provide more robust error estimates in the presence of pseudo-label shift. Another approach they propose is COT with Thresholding (COTT), which further applies thresholding to individual transport costs to improve the accuracy of COT's error estimates. The authors present impressive experiments to conclude that the COTT approach outperforms existing state-of-the-art OOD error-predicting methods. Predicting a classifier's performance on out-of-distribution (OOD) data is a significant challenge in machine learning, so this manuscript is timely and important. Strengths: - Due to the use of the Wasserstein distance, COT is robust to miscalibration, meaning it performs well even when the predicted probabilities of the model are not well-calibrated. This is a significant advantage, as most other metrics are sensitive to miscalibration. - It looks like COT and COTT are better at handling pseudo-label shifts, i.e., when there are differences between the predicted and true OOD label distributions. Table 1 extensively compares COT and COTT against other metrics for measuring the OOD errors. - It is very welcome that COT provides some performance guarantee. Under the assumption that the true and predicted label distributions are the same, COT predicts an error of less than half the Wasserstein distance between the pseudo-label distribution and the true label distribution. Weaknesses: - The computation of COT and COTT involves solving a linear program of optimal transport. Yes, the authors propose a batching technique to make their methods scalable, but there is a concern that COTT is computationally infeasible for large-scale inputs. - In most practical setups, COT doesn't have any performance guarantees because the assumption that the true and predicted label distributions are the same is rarely valid. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The performance of COTT depends on the threshold, and the choice of the threshold could impact its performance. How sensitive is COTT to the threshold value? Is thresholding effective in reducing the impact of outliers? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper has a particular section addressing the limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful review and for the recognition that our methods “have a significant advantage” over other metrics, our experiments are extensive, and we provide performance guarantees. We would like to address the concerns and questions raised by the reviewer as follows: ### On Weakness 1 We thank the reviewer for raising concerns over the scalability of our algorithm. Indeed, using COT/COTT requires solving a linear program with $O(n^3)$ time complexity. That is why we proposed the batched version of COT and conducted all experiments with this setup. For a batch of 10,000, COT finishes the prediction in 7 seconds when using the Python Optimal Transport (POT) library [1]. Thus, we think batched COT is reasonably fast, especially considering the current problem setting, where we want to estimate the performance of a set of samples rather than individual ones, which most likely does not require a real-time response. We have added this discussion to our paper. (A minimal code sketch of the batched computation is given at the end of this exchange.) ### On Weakness 2 We assume the reviewer refers to our assumption that the target label marginal is similar to the source label marginal. In that case, we agree that this assumption may not always hold in real-world cases. That is why we conducted experiments where we simulate mild label shift in Section B4 of our supplemental material and demonstrated that COT/COTT still provides superior performance. We also agree that COT/COTT might fail in extreme cases, but ultimately, all works tackling distribution shift have to make some assumptions about the data, as it is theoretically impossible to have an OOD error prediction algorithm that works for any distribution shift (see Section 3.1 in Garg et al. 2022 [2]). Thus, we think one significance of our work is that we specify a concrete assumption that, when it holds, makes our method more desirable than other methods. ### On Question 1 We thank the reviewer for this great question regarding the threshold of COTT. We would like to point out that in our implementation, the threshold of COTT is not a hyperparameter but is learned from the transportation costs, such that the number of validation samples whose costs are above the threshold matches the number of mistakes (see Section 3.2, paragraph 3 of our paper). While it would be interesting to see the impact of varying the threshold, the current way of setting the threshold makes it a consistent estimator of in-distribution test errors (see Supp. D.3 in Garg et al. 2022 [2]). Empirically, thresholding is effective in reducing the impact of outliers, as evidenced by the performance improvement of COTT over COT, and of ATC over AC. ### References [1] Flamary, Rémi, et al. "Pot: Python optimal transport." The Journal of Machine Learning Research 22.1 (2021): 3571-3578. [2] Garg, S. and Balakrishnan, S., 2022. Leveraging Unlabeled Data to Predict Out-of-Distribution Performance. ICLR. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Dear Authors, thanks a lot for the detailed responses. I have decided to keep my scores.
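To make the batched computation concrete, here is a minimal sketch using POT's exact solver `ot.emd2`. This is our own illustration under simplifying assumptions, not the authors' released code: we take the ground cost between a softmax vector $s$ and a one-hot label $e_y$ to be $1 - s_y$ (half their L-1 distance), materialize the assumed target label marginal as a discrete sample, and use uniform sample weights; all function and variable names are ours.

```python
import numpy as np
import ot  # Python Optimal Transport (POT) library

def cot_estimate(probs, label_marginal, batch_size=10000):
    """Batched COT sketch. probs: (n, k) softmax outputs on unlabeled target
    data; label_marginal: (k,) assumed target label distribution."""
    n, k = probs.shape
    costs = []
    for start in range(0, n, batch_size):
        batch = probs[start:start + batch_size]
        m = len(batch)
        # Materialize a discrete label sample matching the assumed marginal.
        counts = np.floor(label_marginal * m).astype(int)
        labels = np.repeat(np.arange(k), counts)
        # Ground cost: one minus the confidence assigned to the candidate label.
        M = 1.0 - batch[:, labels]                   # (m, len(labels))
        a = np.full(m, 1.0 / m)                      # uniform mass on predictions
        b = np.full(len(labels), 1.0 / len(labels))  # uniform mass on labels
        costs.append(ot.emd2(a, b, M))  # exact OT cost via a linear program
    return float(np.mean(costs))
```

COTT would then replace the average transport cost by the fraction of individual transport costs exceeding a threshold, with the threshold chosen on an in-distribution validation set so that the number of validation costs above it matches the number of validation mistakes, as described in the response above.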
null
null
null
null
null
null
GMSF: Global Matching Scene Flow
Accept (poster)
Summary: Previous scene flow estimation methods require complicated coarse-to-fine or recurrent architectures as a multi-stage refinement. In contrast, this paper proposes a simpler single-scale one-shot global matching to address the problem. To this end, this paper decomposes the feature extraction step via a hybrid local-global-cross transformer architecture. Extensive experiments show that the proposed method achieves SOTA performance on multiple scene flow estimation benchmarks. Strengths: 1. This paper proposes Global Matching Scene Flow (GMSF) and achieves state-of-the-art performance on multiple scene flow estimation benchmarks. 2. The proposed pipeline is simple and effective. 3. The authors have provided the code in the submission. Weaknesses: 1. Some related studies have been neglected. Please compare with the previous study [cite1] in experiments. Besides, previous studies [cite2-3] need to be cited and discussed. [cite1] Lang I, Aiger D, Cole F, et al. SCOOP: Self-Supervised Correspondence and Optimization-Based Scene Flow[J]. arXiv e-prints, 2022: arXiv: 2211.14020. [cite2] Li X, Kaesemodel Pontes J, Lucey S. Neural scene flow prior[J]. Advances in Neural Information Processing Systems, 2021, 34: 7838-7851. [cite3] Li R., Zhang C., Lin G., Wang Z., Shen C.: Rigidflow: Self-supervised scene flow learning on point clouds by local rigidity prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, Los Alamitos, CA, USA (June 2022), pp. 16959–16968. 2. The compared methods are not consistent between Table 2 and Table 3. Please clarify the reason. 3. The motivation of Eq. (11) is unclear. Please clarify why this term can be viewed as a smoothing procedure. 4. The ablation study is not sufficient. I suggest the authors conduct an ablation study on Eq. (13). Specifically, two versions of the model need to be compared, i.e., model A trained with only the first term and model B trained with the full Eq. (13). 5. The authors need to compare the FLOPs, GPU memory, and run-time of recent scene flow methods. Although the proposed method is simple and effective, the computational cost needs to be compared. 6. It seems that the tokenization is too complex and redundant. Specifically, Table 5 shows that DGCNN+PT achieves almost the same performance as MLP+PT. Therefore, I think only using PT is enough, and I suggest the authors conduct experiments to report the performance of GMSF with only PT as the tokenization process. In this way, GMSF would become more efficient. 7. Whenever an abbreviation appears for the first time, its full form should be given. For example, GMSF and LiDAR. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the Weaknesses. If these concerns are addressed, I am willing to improve the rating. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive comments and helpful suggestions. Below are our clarifications for the questions. **Q1: Related work.** We thank the reviewer for the suggestion of the unmentioned related work. We will add the following discussion in the related work. Different from the proposed method, which is fully supervised and trained offline, some other works focus on runtime optimization, prior assumptions, and self-supervision. Li $et\ al.$ [2] revisit the need for explicit regularization in supervised scene flow learning. The deep learning methods tend to rely on prior statistics learned during training, which are domain-specific. This does not guarantee generalization ability during testing. To this end, Li $et\ al.$ propose to rely on runtime optimization with a scene flow prior as strong regularization. Based on [2], Lang $et\ al.$ [1] propose to combine runtime optimization with self-supervision. A correspondence model is first trained to initialize the flow. Refinement is done by optimizing the flow refinement component at runtime. The whole process can be done under self-supervision. Pontes $et\ al.$ [4] use the graph Laplacian of a point cloud to force the scene flow to be "as rigid as possible". As in Li $et\ al.$ [2], this constraint can be optimized at runtime. Li $et\ al.$ [3] propose a self-supervised scene flow learning approach with a local rigidity prior assumption for real-world scenes. Instead of relying on point-wise similarities for scene flow estimation, region-wise rigid alignment is enforced. The comparison with SCOOP is provided here and will be added in the experiments section.

|Method|$EPE_{3D}\downarrow$|$ACC_{S}\uparrow$|$ACC_{R}\uparrow$|$Outliers\downarrow$|
|-|-|-|-|-|
|SCOOP|0.063|79.7|91.0|24.4|
|SCOOP+|0.047|91.3|95.0|18.6|
|**GMSF**|0.051|79.9|92.3|22.9|

SCOOP only reports results on KITTI$\_o$. The symbol $+$ indicates that all the points in the test point clouds are used during evaluation. Without the symbol, the same data as ours, preprocessed by Liu $et\ al.$ in FlowNet3D [5], is used. Different from our proposed method, which is fully trained offline and generalizes to KITTI without fine-tuning, SCOOP combines offline training with runtime optimization. Its performance is increased by utilizing more data during evaluation (SCOOP$+$). Our proposed method does not require expensive online optimization and performs slightly worse in terms of $EPE_{3D}$, which may point to future work. Thanks for the suggestion. **Q2: Compared methods are not consistent in Table 2 and Table 3.** The datasets used in Table 2 and Table 3 are the same (FlyingThings3D and KITTI Scene Flow). However, the preprocessing procedures are different. We follow the preprocessing procedure of Liu $et\ al.$ in FlowNet3D [5] to prepare F3D$\_o$/KITTI$\_o$ in Table 2, and follow the preprocessing procedure of Gu $et\ al.$ in HPLFlowNet [6] to prepare F3D$\_s$/KITTI$\_s$ in Table 3. The difference is that F3D$\_o$/KITTI$\_o$ keeps all valid points, with an occlusion mask available during training and testing, while F3D$\_s$/KITTI$\_s$ simplifies the task by removing all occluded points (see paper L 234-240). The numbers are not consistent because of the different experimental settings. We take the results directly from the published papers. Some of the methods only conduct experiments on one version of the datasets. For better comparison with state-of-the-art methods, we conduct experiments with both settings.
**Q3: Motivation of Eq. (11) and why it can be viewed as a smoothing procedure.** Please refer to the global response to all reviewers C. **Q4: Ablation study on Eq. (13).** Please refer to the global response to all reviewers D. **Q5: FLOPs, GPU memory, and Runtime.** Please refer to the global response to all reviewers A. **Q6: GMSF with only PT as the tokenization process.** Thanks for the suggestion. We interpret the suggestion as follows: since tokenization with DGCNN and with MLP gives almost the same results, the question is whether further simplifying the architecture, e.g., by removing the MLP and employing only PT, would work. This is not the case, since the purpose of DGCNN and MLP is to project the 3D coordinates into a high-dimensional feature space. The same procedure is employed in the original PT paper [7], where the input feature vectors are acquired by applying an MLP to the 3D coordinates. Removing the MLP and applying PT directly to the 3-dimensional coordinates would leave the model with insufficient capacity. To support our claim that reducing the number of channels leads to a lack of model capacity, we conducted a small ablation study on the number of channels, shown in the table below:

|Channels|$EPE_{3D}\downarrow$|$ACC_{S}\uparrow$|$ACC_{R}\uparrow$|$Outliers\downarrow$|$EPE_{3D}\downarrow$|$ACC_{S}\uparrow$|$ACC_{R}\uparrow$|$Outliers\downarrow$|
|-|-|-|-|-|-|-|-|-|
||$all$||||$nonocc$||||
|64|0.059|87.06|93.43|17.13|0.032|92.64|96.98|13.57|
|128|0.049|90.08|94.72|13.08|0.025|94.98|97.78|9.87|

**Q7: Abbreviations.** Thanks for pointing this out. We have made modifications based on the comment. **References** [1] Lang I, Aiger D, Cole F, et al. Scoop: Self-supervised correspondence and optimization-based scene flow[C]//CVPR2023. [2] Li X, Kaesemodel Pontes J, Lucey S. Neural scene flow prior[J]. NeurIPS2021. [3] Li R, Zhang C, Lin G, et al. Rigidflow: Self-supervised scene flow learning on point clouds by local rigidity prior[C]//CVPR2022. [4] Pontes J K, Hays J, Lucey S. Scene flow from point clouds with or without learning[C]//3DV2020. [5] Liu X, Qi C R, Guibas L J. Flownet3d: Learning scene flow in 3d point clouds[C]//CVPR2019. [6] Gu X, Wang Y, Wu C, et al. Hplflownet: Hierarchical permutohedral lattice flownet for scene flow estimation on large-scale point clouds[C]//CVPR2019. [7] Zhao H, Jiang L, Jia J, et al. Point transformer[C]//ICCV2021. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I still have the following concerns about GMSF. (1) The comparison with SCOOP [cite1]. Because the authors have provided code, I trained both SCOOP and GMSF myself. However, SCOOP is faster, data-efficient (one-tenth of the whole training set), and easy to train (less than 1 h on FT3D with a single 3090). In contrast, the GPU requirement of GMSF is too large, and I cannot train GMSF on a GeForce RTX 3090 (24 GB) even with bs = 1. More crucially, SCOOP is an unsupervised method without ground-truth flows. Please clarify the strength of GMSF. (2) About the tokenization process. I am confused about the table in the answer to Q6. Which row of the table represents the performance of GMSF with only PT as the tokenization process? Given Concern (1), I strongly suggest the authors simplify the architecture of GMSF. Otherwise, it is impossible to use GMSF for real-world applications. --- Reply to Comment 1.1.1: Title: Thanks for the reply and for the effort of running the experiments. Comment: Thanks for the reply and for the effort of running the experiments.
Q1: The strength of GMSF. Methods such as SCOOP, which are based on prior information and runtime optimization, are usually easier to train but more time-consuming during inference ($10\times$–$100\times$ slower; see [1]). Therefore, when there is a requirement on inference time, runtime-optimization-based methods are not suitable. For example, SCOOP [2] achieves its best performance with an inference time of \~20s, and Neural Prior [1] with an inference time of \~40s (see Figure 7 in SCOOP). This is around two orders of magnitude higher than our inference time (371.8 ms). Moreover, during inference, our method requires 5 GB of memory and can thus run on a GeForce RTX 3090 or lower-end graphics cards. Q2: Table in the answer to Q6. The result of GMSF with only PT is not in the table. The table is with DGCNN+PT, but the dimensions of the feature space are reduced from 128 to 64 in the first row. As mentioned, the function of DGCNN and MLP is to project the 3D coordinates into a high-dimensional feature space. Without them, applying PT directly to the 3D coordinates would leave the model with insufficient capacity. (A small code sketch of such k-NN-based tokenization is given at the end of this thread.) The table shows a performance drop with fewer feature dimensions, and presumably a further simplification could result in a larger performance drop. (We have tried training GMSF with only PT, but this has problems with convergence.) However, another way to simplify our model is to reduce the number of global-cross transformer layers (see Table 4 in the paper). The table shows that our proposed method has good performance even with only 4 global-cross transformer layers. (We have tried global-cross transformer layers = 4, batch size = 1. It converges well with 14.5 GB memory usage.) Besides, since real-world use is dominated by inference, inference time is arguably the most important factor for the practical applicability of a method. We are on par with the other state-of-the-art methods (**two orders of magnitude faster than SCOOP**). Moreover, during inference, our method only requires 5 GB of memory; therefore, it can even be run on lower-end consumer graphics cards. We hope this helps with real-world applications. References [1] Li X, Kaesemodel Pontes J, Lucey S. Neural scene flow prior[J]. NeurIPS2021. [2] Lang I, Aiger D, Cole F, et al. Scoop: Self-supervised correspondence and optimization-based scene flow[C]//CVPR2023.
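For readers unfamiliar with this tokenization step, below is a minimal EdgeConv-style sketch of how a DGCNN-type module lifts raw 3D coordinates into a higher-dimensional feature space using k nearest neighbors. It is our own illustration: the names, layer sizes, and exact grouping are illustrative, not the GMSF configuration.

```python
import torch

def edge_conv_tokenize(points, mlp, k=16):
    """points: (n, 3) xyz coordinates; mlp maps (n, k, 6) -> (n, k, c).
    Returns one local token per point."""
    dists = torch.cdist(points, points)          # (n, n) pairwise distances
    # k nearest neighbors per point (the nearest is the point itself,
    # kept here for simplicity).
    idx = dists.topk(k, largest=False).indices   # (n, k)
    neighbors = points[idx]                      # (n, k, 3)
    center = points.unsqueeze(1).expand(-1, k, -1)
    # Edge features: the point itself plus its offsets to the neighbors.
    edges = torch.cat([center, neighbors - center], dim=-1)  # (n, k, 6)
    # Shared MLP on each edge, then max-pooling over the neighborhood.
    return mlp(edges).max(dim=1).values          # (n, c)

mlp = torch.nn.Sequential(torch.nn.Linear(6, 128), torch.nn.ReLU())
tokens = edge_conv_tokenize(torch.randn(1024, 3), mlp)  # (1024, 128)
```

The max-pooled edge features give each point a token encoding local geometry (e.g., curvature), which is what the reply above argues cannot be recovered by applying PT directly to raw 3-dimensional coordinates.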
Summary: The paper presents GMSF, a transformer-based method that matches dense features to estimate the scene flow from point clouds. The proposed method uses a transformer-based architecture that matches two point clouds and calculates the scene flow leveraging the cross- and self-attention modules. The paper presents experiments on the FlyingThings3D and KITTI Scene Flow datasets, where the proposed method consistently improves the benchmarks. Strengths: In sum, I think the solution using cross- and self-attention mechanisms is an interesting application to solve the scene-flow problem. Although this solution has been applied to sparse feature matching (see LoFTR), I think it is also good to see it works for the scene-flow problem. Here are the details of the strengths: S1. I find the solution an interesting application of cross- and self-attention mechanisms to solve the scene flow problem. Though in general I think the novelty is not that high, since the overall idea has been used for feature matching in LoFTR CVPR 2021 [36]. S2. The clarity of the narrative is quite good. The description of the architecture and attention modules is clear. Also, the description of the losses makes sense and is easy to understand. Thus, I think the paper can be reproducible. Weaknesses: Overall, while the paper shows another application of attention to compute 3D scene flow, I think the paper is lacking more thorough experiments and ablations. Here are the details: W1. Missing ablations. First, the paper is not showing the performance impact of the parameters of the KNN component shown in Figure 2. I am sure this is also crucial, since it allows the encoder to capture local information. How are the KNN parameters set? What is the effect of this parameter on the final performance? Second, what is the behavior when setting $\lambda$ to a different value? Why was $\lambda=0.9$ chosen, and what is the performance of the method when varying this parameter? W2. Is the KITTI Scene Flow dataset challenging enough? Since this is a dataset for autonomous driving, the motion of the vehicle is mainly planar, which limits the motion degrees of freedom (only 1 for rotation, and mostly 1 for translation, since the car moves linearly most of the time). I think this greatly reduces the complexity of finding correspondences of any type (e.g., scene flow) in these autonomous driving datasets. W3. Lastly, I am concerned about the novelty of the approach. I think previous works have shown that attention mechanisms are useful for matching tasks in vision, and thus I struggle to find novel components or ideas in the paper. I think the paper should discuss its novelties in more depth. ---- Post Rebuttal After reading the rebuttal and discussion with the authors, my concerns have been addressed and I will increase my rating. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. As stated in the Weaknesses section, how are the KNN parameters of the method set? 2. In line 158, the paper mentions "stable tokens". However, the paper never defines what a "stable token" is. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: I think limitations are stated adequately.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive comments and helpful suggestions. Below are our clarifications for the questions. **Q1a: Ablation study of the KNN component.** For the Point Transformer in Figure 2, we follow the setting in Zhao $et\ al.$ [1] and set $k$ to 16. We thank the reviewer for the suggestion to provide an ablation study on the $k$ parameter. The results are given in the following table:

| $k$ | $EPE_{3D}\downarrow$ | $ACC_{S}\uparrow$ | $ACC_{R}\uparrow$ | $Outliers\downarrow$ | $EPE_{3D}\downarrow$ |$ACC_{S}\uparrow$ | $ACC_{R}\uparrow$ | $Outliers\downarrow$ |
|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| | $all$ | $all$ | $all$ | $all$ | $nonocc$ | $nonocc$ | $nonocc$ | $nonocc$ |
| $k=8$ | 0.050 | 89.06 | 94.25 | 14.33 | 0.027 | 94.25 | 97.50 | 10.94 |
| $k=16$ | 0.049 | 90.08 | 94.72 | 13.08 | 0.025 | 94.98 | 97.78 | 9.87 |
| $k=32$ | 0.049 | 89.53 | 94.42 | 13.89 | 0.026 | 94.61 | 97.58 | 10.59 |

The result confirms the finding of the Point Transformer paper [1] that the best performance is obtained for $k=16$ (Section 4.4, Ablation Study, Number of neighbors). Setting $k$ to 8 or 32 results in slightly worse performance. **Q1b: Ablation study for the parameter $\gamma$ in Eq. (13).** We test the two extreme cases: with only the first term $\hat{V}\_{final}$, and only the second term $\hat{V}\_{inter}$, respectively. Thanks to this recommendation, we could slightly improve the performance by simplifying Eq. (13), which was previously introduced in RAFT [2]. Please refer to the global response to all reviewers D for more details. **Q2: KITTI Scene Flow dataset not challenging enough.** In autonomous driving scenarios, the vehicles are mostly moving in a plane. However, the scene flow is often estimated from the view of a moving vehicle instead of a static view. The largest camera motion is yawing, but there is also a rotation, though smaller, around the other two axes. Therefore, the degrees of freedom of the relative movement of the other vehicles are not restricted to 1 translation and 1 rotation. Moreover, pedestrians, which are also important in autonomous driving scenarios, are much more complex, since they are non-rigid objects. Although such autonomous driving scenarios are sometimes simplified by assuming the moving vehicles are rigid objects, explicitly modeling this fact oversimplifies the task and introduces a strong and potentially misleading motion prior. By contrast, our method does not exploit any such biases or prior information, thus being applicable to more general cases, as underlined by the good performance of our approach. Furthermore, we complement KITTI with the more challenging datasets FT3D (see Tables 2 and 3 in the paper) and Waymo (see response to reviewer QN7Z, Q1). FT3D has more complicated and less constrained object movements and is larger than the KITTI dataset. The Waymo dataset is another larger autonomous driving scene flow benchmark. We achieve superior performance on both datasets compared to previous methods. **Q3: Novelty.** Cross- and self-attention mechanisms have proven effective in 2D image matching [3]. However, their application to the task of scene flow estimation has not been thoroughly studied yet. We propose a "simple and effective" (vPo9) architecture to tackle scene flow estimation, which is proven effective on multiple scene flow benchmarks, including the new challenging Waymo-open dataset (cf. response to Q1 of QN7Z).
Moreover, the novelty and contribution of our work is not only the architecture itself but also the evidence that it is applicable and works very well in a new field. Similarly, Reviewer D7Yn commented that the proposed architecture, and the motivation behind it, are intuitive and novel, and its application to the task of scene flow seems conceptually new. **Q4: Definition of a "stable token".** We thank the reviewer for pointing out our inaccurate wording. "Stable token" was meant to indicate that tokens produced with the Local Point Transformer are more powerful and improve both accuracy and robustness. **References** [1] Zhao H, Jiang L, Jia J, et al. Point transformer[C]//ICCV2021. [2] Teed Z, Deng J. Raft: Recurrent all-pairs field transforms for optical flow[C]//ECCV2020. [3] Sun J, Shen Z, Wang Y, et al. LoFTR: Detector-free local feature matching with transformers[C]//CVPR2021. --- Rebuttal Comment 1.1: Title: RE: Rebuttal by Authors Comment: Thanks for the clarifications. A few more questions: Q1a: It seems like the parameter $k$ has a marginal impact, and the $k=16$ "best" performance may be within the margin of error. Any insight as to why $k$ has a marginal impact on the method? Q2: The reason I raised this concern is to understand the generalization of the proposed method. Although the results from FT3D partially answer my question, the FT3D dataset is synthetic and may pose a synthetic-real gap. --- Reply to Comment 1.1.1: Comment: Thanks for the reply. Q1: Marginal impact of $k$. We believe that the marginal impact of $k$ is due to the design choice of our tokenization process. The tokenization consists of a DGCNN and a Point Transformer, both of which encode local information. Table 5 in the paper shows that as long as there is some local information encoded, the performance remains competitive (EPE=0.049 for DGCNN+PT, EPE=0.055 for DGCNN). Only changing the $k$ parameter in the Point Transformer does not affect the local information encoded by DGCNN and thus has a limited impact. Moreover, local information from only 4 points already includes features such as curvature. These features become more distinct the more neighboring points are considered, but there is a natural limit beyond which additional neighbors add no further information. Q2: Performance on the real dataset. Since the annotation of real data is very expensive in scene flow estimation, synthetic datasets are usually employed during training [1, 2]. This is also the case for optical flow estimation [3, 4], where the same FT3D dataset is used during training and shows good generalization performance on the KITTI dataset. However, we agree that when training only on synthetic datasets, the synthetic-to-real gap should be carefully analyzed. Following the suggestion of Reviewer QN7Z, we extend our experiments to the larger and more challenging Waymo-open autonomous driving dataset, which contains 798 training and 202 validation sequences (each 20 seconds of 10 Hz point cloud data) with more complex scenes compared to the KITTI dataset. The results in comparison with state-of-the-art methods are given in the following table. We outperform the state of the art by a large margin. We hope this addresses your concern about the applicability of our method in real-world scenarios.
| method | $EPE_{3D}\downarrow$ | $ACC_{S}\uparrow$ | $ACC_{R}\uparrow$ | $Outliers\downarrow$ | |-----------|-----------|-----------|-----------|-----------| | FlowNet3D[1] | 0.225 | 23.0 | 48.6 | 77.9 | | PointPWC[5] | 0.307 | 10.3 | 23.1 | 78.6 | | FESTA[6] | 0.223 | 24.5 | 27.2 | 76.5 | | FH-Net[7] | 0.175 | 35.8 | 67.4 | 60.3 | | **GMSF** | 0.086 | 73.9 | 84.7 | 43.9 | References: [1] Liu X, Qi C R, Guibas L J. Flownet3d: Learning scene flow in 3d point clouds[C]//CVPR2019. [2] Gu X, Wang Y, Wu C, et al. Hplflownet: Hierarchical permutohedral lattice flownet for scene flow estimation on large-scale point clouds[C]//CVPR2019. [3] Teed Z, Deng J. Raft: Recurrent all-pairs field transforms for optical flow[C]//ECCV2020. [4] Jiang S, Campbell D, Lu Y, et al. Learning to estimate hidden motions with global motion aggregation[C]//ICCV2021. [5] Wu W, Wang Z Y, Li Z, et al. Pointpwc-net: Cost volume on point clouds for (self-) supervised scene flow estimation[C]//ECCV2020. [6] Wang H, Pang J, Lodhi M A, et al. Festa: Flow estimation via spatial-temporal attention for scene point clouds[C]//CVPR2021. [7] Ding L, Dong S, Xu T, et al. Fh-net: A fast hierarchical network for scene flow estimation on real-world point clouds[C]//ECCV2022.
Summary: This work aims to address the task of scene flow estimation for 3D point clouds. The authors propose a hybrid architecture based on local-global-cross transformers. Given as input a source and a target point cloud, first, the local transformer extracts geometric features within a patch, then the global transformer analyzes each point cloud individually using self-attention to capture the overall context, and finally the cross transformer exchanges information between the point clouds to generate the final feature representation for each point. The scene flow is predicted by performing pointwise matching with the cross similarity matrix, while occlusions are handled through a self-similarity matrix applied to the predicted scene flow. To evaluate the effectiveness of their approach, the authors conduct experiments on two benchmarks for scene flow estimation, demonstrating better performance compared to existing methods. Strengths: - This work introduces a straightforward architecture for scene flow estimation, utilizing transformers. Without bells and whistles, the authors show that the proposed network can produce discriminative per-point features that can be robustly matched for scene flow computation. - The authors conduct extensive experiments on the FlyingThings3D and KITTI Scene Flow benchmarks in different preprocessing and occlusion settings. The results suggest that the proposed method has significant performance improvement on FlyingThings3D, while also performing competitively on KITTI Scene Flow when compared to existing methods for generalization test. - The paper is overall well written and easy to follow. Weaknesses: - One concern for this work is its limited technical contributions: the hybrid network uses the well-established transformer architecture, while the scene flow is estimated by a common probabilistic point matching approach. One seemingly interesting proposal is the occlusion handling with the self-similarity matrix. However, it lacks in-depth explanations for why it helps with occlusion handling, and its ablation study is also missing in Sec. 4.5. - In the generalization test on KITTI-S (Tab. 3), the proposed method exhibits a performance gap compared to PT-FlowNet [8]. More detailed analysis and explanation would be helpful for gaining a better understanding of this discrepancy. - For the ablation study in Tab. 4, it is unclear whether the performance saturates with eight global-cross transformer layers, and whether more layers would be beneficial or not. To provide a comprehensive assessment, it would be good to include comparisons of runtime and memory usage, as the backbone is built upon transformers, which may not scale well with more points, and the paper mentions that input point clouds are resampled to 8K points. - Minor: L267, “Although we don’t” => Note that we do not Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the Weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors discussed limitations at the end of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive comments and helpful suggestions. Below are our clarifications for the questions. **Q1a: Limited technical contributions.** The transformer architecture and the probabilistic matching approach have been studied and proven effective in many NLP and image understanding tasks. However, their application to the task of scene flow has not been thoroughly studied. We propose a "simple and effective" (vPo9) architecture to tackle scene flow estimation, which is proven effective on multiple scene flow benchmarks, including the new challenging Waymo-open dataset (cf. response to Q1 to QN7Z). Moreover, the novelty and contribution of our work is not only the architecture itself but also the evidence that it is applicable and works very well in a new field. Similarly, Reviewer D7Yn commented that the proposed architecture, and the motivation behind it, are intuitive and novel, and its application to the task of scene flow seems conceptually new. **Q1b: Why occlusion handling works and can be viewed as a smoothing procedure.** Please refer to the global response to all reviewers C. **Q1c: Ablation study on occlusion handling.** Please refer to the global response to all reviewers D. **Q2: Performance gap compared to PT-FlowNet.** We would like to first mention that PT-FlowNet was officially published in May 2023, with a preliminary online version available already in March (the NeurIPS submission deadline was on May 11th). For fairness and comprehensiveness, we chose to include the paper and compare it with our proposed method. The proposed method shows a significantly better performance on the FlyingThings3D dataset compared to PT-FlowNet, especially in terms of $ACC_S$ and $Outlier$ (accuracy: 98.9\% vs. 91.4\%; outlier: 17.4\% vs. 3.2\%). When generalizing to the KITTI dataset, the proposed method exhibits a performance gap in terms of $EPE_{3D}$. However, the $ACC_R$ is on par with PT-FlowNet, and the other metrics only differ slightly. Besides, KITTI is a relatively small dataset with fewer than 200 scenes for evaluation. One aspect that could explain our performance gap compared to PT-FlowNet is that our proposed method is not specifically designed for the experimental setting without occlusion. In real-world scenarios, we never get clean, non-occluded point pairs. Therefore, we aim for the more general cases where occlusion exists. The generalization ability on KITTI$_o$ (see Table 2 in the paper) supports our claim well. However, PT-FlowNet only provides results on the F3D$_s$ and KITTI$_s$ datasets. Moreover, during testing, PT-FlowNet requires 32 refinement iterations, which slows down inference. PT-FlowNet needs 898.5 ms to process each scene, while our method only needs 371.8 ms, more than twice as fast. Increasing the number of attention blocks in our model likely results in better performance at the cost of speed (see the additional experiments on the number of transformer layers in the global response to all reviewers B). In summary: except for the only partially realistic KITTI$_s$ setting, where we almost match PT-FlowNet's performance, we outperform them by a large margin while requiring lower computational resources. **Q3a: Ablation study on the number of transformer layers.** Please refer to the global response to all reviewers B. **Q3b: FLOPs, GPU memory, and Runtime.** Please refer to the global response to all reviewers A.
**Q3c: Point clouds are resampled to 8K points.** For clarification, this is a common setting for scene flow estimation from point clouds. We follow the same setting for a fair comparison. **Q4: Reformulation.** We thank the reviewer for the suggested reformulation and will use it in the final version. --- Rebuttal Comment 1.1: Title: Comments on Authors' Rebuttal Comment: I appreciate the authors' effort in addressing my questions in the rebuttal. After reading through the review comments from other reviewers, although I am still concerned about the technical novelty, as commented by the fellow reviewers, I think this work can serve as a stepping stone for future works on scene flow given its comprehensive and improved results. It would be great if the authors could release the code and trained models to benefit the community. As of now, I would like to keep my rating.
Summary: This paper proposes a hybrid local-global-cross transformer scene flow estimation model, achieving state-of-the-art results on the FlyingThings3D and KITTI Scene Flow datasets. Strengths: 1. This work has a mature model design based on Transformers, achieving superior results on the FlyingThings3D and KITTI Scene Flow datasets. 2. The structure and writing of this paper do not have major issues. Weaknesses: 1. Scene flow estimation has been developed for many years, and the autonomous driving industry has already introduced Occupancy and Flow Prediction techniques. Researchers should not be limited to designing a toy model solely to maximize the benchmarks on FlyingThings3D and KITTI Scene Flow. These two datasets have dense point clouds and clear correspondences, which are far from practical applications. For this paper, I hope the authors can conduct more experiments on the Waymo motion data (https://waymo.com/open/data/motion/). 2. Undoubtedly, Transformers will bring more powerful feature modeling capabilities and higher latency. How does the latency of this method compare to competitors? Please add this item to the main experimental table. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please refer to the weaknesses section; refute the views, answer the questions, or improve the paper. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The limitations have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the comments and suggestions. We improve the paper based on the two weaknesses pointed out. We show how we improve the paper below: **Q1: Experiments on the Waymo-open dataset.** As suggested, we conduct additional experiments on the Waymo-open dataset for scene flow. The dataset contains 798 training and 202 validation sequences. Each sequence consists of 20 seconds of 10 Hz point cloud data. Due to the large scale of the dataset and the limited time, we trained our model on 1/8 of the training sequences (the first 100 training sequences) and tested on all 202 validation sequences. We use the same parameter setting and training scheme as for training on FlyingThings3D, and train for 300k iterations. The results in comparison with other state-of-the-art methods are given in the following table:

| method | $EPE_{3D}\downarrow$ | $ACC_{S}\uparrow$ | $ACC_{R}\uparrow$ | $Outliers\downarrow$ |
|-----------|-----------|-----------|-----------|-----------|
| FlowNet3D | 0.225 | 23.0 | 48.6 | 77.9 |
| PointPWC | 0.307 | 10.3 | 23.1 | 78.6 |
| FESTA | 0.223 | 24.5 | 27.2 | 76.5 |
| FH-Net | 0.175 | 35.8 | 67.4 | 60.3 |
| **GMSF** | 0.086 | 73.9 | 84.7 | 43.9 |

Our proposed method already demonstrates a massive improvement over all the other methods when trained on only 1/8 of the training set. We are convinced that training on the full dataset would result in an even better performance. The results of the full training will be added to the main paper. **Q2: FLOPs, GPU memory, and Runtime.** Please refer to the global response to all reviewers A. Since the additional results on the Waymo dataset show promising performance, and all our reported memory and runtime numbers are on par with previous approaches while significantly improving the performance, we hope to have fully answered all questions the reviewer has. We are open to elaborating on these points further during the discussion phase. --- Rebuttal Comment 1.1: Title: Has the rebuttal addressed your concerns Comment: Dear Reviewer QN7Z, Could you please read the author rebuttal and acknowledge if your concerns have been addressed? The discussion period will end very soon on Monday, August 21. Thank you for your time in reviewing this submission! Best, AC --- Rebuttal Comment 1.2: Comment: Sorry for my late reply. If the method can bring such a significant improvement, I will change the score to positive. Please merge the new Waymo experiment into the camera-ready version. Thanks.
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable feedback and questions. We are happy that the reviewers find our proposed approach and its motivation "intuitive and novel" (D7Yn). The architecture is "straightforward without bells and whistles" (YAmp), "mature" (QN7Z), and "simple and effective" (vPo9). The solution is an "interesting application" of cross- and self-attention mechanisms to solve the scene flow problem (UZxB). The paper is well-written and easy to follow (D7Yn, YAmp, UZxB). The strong performance on the standard benchmarks of scene flow estimation is highlighted by (D7Yn, QN7Z, YAmp, vPo9). In the following, we address the questions asked by multiple reviewers. Individual questions are addressed below the respective reviews. The additional results provided are mostly ablation studies suggested by the reviewers that complement our existing experiments and provide deeper insight into our approach.

**A: FLOPs, GPU memory, and Runtime (D7Yn, QN7Z, YAmp, vPo9).** Several reviewers requested meta-information on our experiments. Adding to our ablation results, we report runtime (ms per scene) during testing on an NVIDIA A40 GPU, FLOPs (G), number of parameters (M), and GPU memory (GB) during testing (batch size 1) and training (batch size 8) under different numbers of transformer layers in the following table:

|layers|0|2|4|6|8|10|12|16|24|
|-|-|-|-|-|-|-|-|-|-|
|Runtime|207.9|242.7|289.3|330.7|371.8|417.3|454.6|540.2|701.4|
|FLOPs|482.0|516.5|550.9|585.4|619.8|654.3|688.7|757.6|895.4|
|Parameters|1.82|2.87|3.92|4.97|6.02|7.07|8.12|10.22|14.42|
|Memory (test)|4.78|4.92|4.94|4.83|4.97|4.99|5.00|5.03|4.97|
|Memory (train)|56.9|91.7|106.2|122.3|142.3|162.3|185.5|225.5|305.6|

As expected, the computational effort increases with a larger number of transformer layers. A runtime (ms) comparison with other state-of-the-art methods is shown in the following table. The results show that the runtime is on par with our strongest competitors, 3DFlow and PT-FlowNet (on the same NVIDIA A40 GPU). We also report the runtime of other methods for a comprehensive comparison.

|method|GPU|Runtime|method|GPU|Runtime|
|-|-|-|-|-|-|
|FlowNet3D|TITAN V|130.8|FlowStep3D|TITAN RTX|820.8|
|HPLFlowNet|TITAN V|98.4|CamLiFlow|Tesla V100|118.0|
|PointPWC|GTX 1080Ti|117.4|CamLiPWC|Tesla V100|110.0|
|HCRF-Flow|GTX 1080Ti|228.2|CamLiRAFT|Tesla V100|216.0|
|RAFT3D|GTX 1080Ti|386.0|PV-RAFT|NVIDIA A40|740.0|
|FLOT|GTX 2080Ti|389.3|3DFlow|NVIDIA A40|121.6|
|SCTN|GTX 2080Ti|242.7|PT-FlowNet|NVIDIA A40|898.5|
|Bi-PointFlow|TITAN RTX|40.5|**GMSF**|NVIDIA A40|371.8|

**B: Ablation study on the number of transformer layers (D7Yn, YAmp).** We thank the reviewers for pointing out that the performance may not be saturated with 8 global-cross transformer layers. We initially aimed for a method that works with limited compute resources, in this case 4 NVIDIA A100 GPUs. We agree that, in order to understand our approach better, a larger number of layers should be evaluated. We extend our experiment with more transformer layers. The performance increases with more layers and saturates at 24 layers. The trade-off is the compute resources, as shown in the table above.
Detailed results are given in the following table:

|Layers|$EPE_{3D}\downarrow$|$ACC_{S}\uparrow$|$ACC_{R}\uparrow$|$Outliers\downarrow$|$EPE_{3D}\downarrow$|$ACC_{S}\uparrow$|$ACC_{R}\uparrow$|$Outliers\downarrow$|
|-|-|-|-|-|-|-|-|-|
||$all$||||$nonocc$||||
|8|0.049|90.08|94.72|13.08|0.025|94.98|97.78|9.87|
|10|0.046|90.53|94.97|12.57|0.024|95.37|97.92|9.43|
|12|0.046|90.94|95.05|12.20|0.024|95.75|98.03|9.05|
|16|0.041|92.52|95.76|10.31|0.020|96.84|98.38|7.53|
|24|0.041|92.38|95.75|10.16|0.020|96.73|98.39|7.36|

**C: Motivation of Eq. (11) and why it can be viewed as a smoothing procedure (YAmp, vPo9).** In addition to the explanation in L197-201, we provide further clarification in the following, which will be added to the main paper: The self-similarity matrix $M_{self}$ encodes the similarity information for each pair of points in the source point cloud. Nearby points tend to share similar features and thus have higher similarities. Multiplying $M_{self}$ with the predicted scene flow $\hat{V}_{inter}$ can be seen as an averaging procedure, where for each point, its predicted scene flow vector is updated as the weighted average of the scene flow vectors of the nearby points that share similar features.

**D: Ablation study on occlusion handling / Eq. (13) (YAmp, UZxB, vPo9).** Given the similarities between optical flow and scene flow, we followed the previous optical flow method RAFT [1] by using Eq. (13) with increasing weights for the refined prediction. We agree with the reviewers that an additional ablation study on the training loss in Eq. (13) provides further insights and compare three models in the following table, as suggested by reviewer vPo9: Models A, B, and C are trained with only the second term $\hat{V}\_{inter}$, only the first term $\hat{V}\_{final}$, and Eq. (13), respectively.

|Model|$EPE_{3D}\downarrow$|$ACC_{S}\uparrow$|$ACC_{R}\uparrow$|$Outliers\downarrow$|$EPE_{3D}\downarrow$|$ACC_{S}\uparrow$|$ACC_{R}\uparrow$|$Outliers\downarrow$|
|-|-|-|-|-|-|-|-|-|
||$all$||||$nonocc$||||
|A|0.150|73.11|82.91|34.11|0.047|84.37|94.12|24.71|
|B|0.045|91.43|95.38|11.61|0.025|95.36|97.95|8.99|
|C|0.049|90.08|94.72|13.08|0.025|94.98|97.78|9.87|

The results show that the presence of $\hat{V}_{final}$ in the loss produces good results, which means the occlusion handling improves the performance by a large margin. More interestingly, Model B performs slightly better than Model C. This reveals that scene flow estimation might differ from optical flow estimation regarding the required loss formulation, and that supervision of intermediate results is not necessary. (We thank the reviewers for the comments that helped us improve the paper!) **References** [1] Teed Z, Deng J. Raft: Recurrent all-pairs field transforms for optical flow[C]//ECCV2020. Pdf: /pdf/a31b7b5e703ab8805d76432eca29033edf8068a0.pdf
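To make the smoothing interpretation in response C concrete, here is a minimal PyTorch sketch of the described weighted-averaging step. The attention-style softmax normalization and all names are illustrative assumptions on our part, not the authors' implementation of Eq. (11).

```python
import torch
import torch.nn.functional as F

def smooth_flow(feats, v_inter):
    """feats: (N, d) source point features; v_inter: (N, 3) predicted flow.
    Each point's flow becomes a similarity-weighted average of the flows of
    feature-similar (typically nearby) points, as described for Eq. (11)."""
    d = feats.shape[1]
    # self-similarity matrix M_self, normalized so each row sums to one
    m_self = F.softmax(feats @ feats.t() / d ** 0.5, dim=-1)  # (N, N)
    return m_self @ v_inter  # (N, 3) smoothed flow

# usage with random data
feats, v_inter = torch.randn(1024, 128), torch.randn(1024, 3)
v_smoothed = smooth_flow(feats, v_inter)
```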
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes GMSF for scene flow estimation from point clouds. As far as the authors are aware, GMSF is the first to address scene flow estimation with global matching - GMSF is formulated as a single-scale, one-shot global matching. GMSF incorporates a novel local-global-cross transformer architecture to extract high-quality feature representations, and finally computes the scene flow between point clouds via global matching. GMSF outperforms existing methods on F3D_c, F3D_o, F3D_s and KITTI_o, while performing competitively on KITTI_s. The ablative results show that increasing the number of global-cross transformer layers - thus increasing the capacity - is beneficial to the performance, and that the presence of local information is crucial for the performance of GMSF. Strengths: * The paper is overall well written and easy to follow. * The proposed architecture, and the motivation behind it, is intuitive and novel. While the architectural novelty itself is not entirely new (e.g., the sequence of local and global attention has first been introduced by SuperGlue [1] and LoFTR [2] for 2D image matching, and has been applied to 3D point cloud registration through methods including CoFiNet [3] and GeoTransformer [4]), its application to the task of scene flow seems conceptually new. * Strong performance on the standard benchmarks of scene flow estimation. [1] PE Sarlin et al., SuperGlue: Learning Feature Matching with Graph Neural Networks, CVPR 2020 [2] J Sun et al., LoFTR: Detector-Free Local Feature Matching with Transformers, CVPR 2021 [3] H Yu et al., CoFiNet: Reliable Coarse-to-fine Correspondences for Robust Point Cloud Registration, NeurIPS 2021 [4] Z Qin et al., Geometric Transformer for Fast and Robust Point Cloud Registration, CVPR 2022 Weaknesses: * Insufficient ablative experiments. What if the number of transformer layers increases beyond 8? The given results show that the performance improves gracefully with the number of layers, which naturally raises the question of how far this trend continues. Also, what if the global and cross transformers are decoupled, such that they can have varying numbers of layers? * Lack of latency and computation analyses. The authors emphasize that GMSF is a single-scale, one-shot method; how then does it compare to existing methods in terms of latency and computation (FLOPs, memory)? * Lack of analysis. How does incorporating global-cross transformer layers improve the performance, and how does incorporating **more** layers **further** improve the performance? This has been partially answered by Table 4, but a visual comparison / analysis would be more convincing. * Lack of mention of 3D point cloud registration methods. While the task at hand is different, the architectural design and motivation are closely related to 3D point cloud registration, which I believe is therefore worth mentioning in the related work section. Also, it might be a bit of an overstatement to claim that GMSF is the 'first' to address scene flow estimation with global matching, as scene flow estimation and point cloud registration are seemingly closely related tasks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weaknesses section. The motivation and the proposed method are intuitive and novel, but the design choices of GMSF should be better substantiated, with analyses included for clarity. Confidence: 5: You are absolutely certain about your assessment.
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have addressed the limitations of GMSF in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive comments and helpful suggestions. Below are our clarifications for the questions.

**Q1a: Ablation study on the number of transformer layers.** Please refer to part B of the global response to all reviewers.

**Q1b: Ablation study on decoupling global transformers and cross transformers.** We follow SuperGlue [1] and LoFTR [2] and do not decouple the global-cross transformer layers, since this combination has been proven effective. We now add an ablation study on decoupling the transformer layers. Three architectures are compared:

| Model | $EPE_{3D}\downarrow$ | $ACC_{S}\uparrow$ | $ACC_{R}\uparrow$ | $Outliers\downarrow$ | $EPE_{3D}\downarrow$ |$ACC_{S}\uparrow$ | $ACC_{R}\uparrow$ | $Outliers\downarrow$ |
|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| | $all$ |||| $nonocc$ ||||
| global | 0.141 | 64.65 | 81.17 | 40.89 | 0.069 | 72.11 | 88.52 | 34.84 |
| cross | 0.051 | 88.65 | 94.04 | 14.73 | 0.027 | 93.97 | 97.41 | 11.20 |
| global-cross | 0.049 | 90.08 | 94.72 | 13.08 | 0.025 | 94.98 | 97.78 | 9.87 |

The results show that the cross transformer layers contribute significantly to the performance. This coincides with the finding in SuperGlue [1], where the authors note in Section 5.4 and Table 4 that "cross-attention is critical to strong gluing".

**Q2: FLOPs, GPU memory, and Runtime.** Please refer to part A of the global response to all reviewers.

**Q3: Visual comparison / analysis of the influence of incorporating more transformer layers.** We provide an additional visual analysis of our model with 0, 2, and 24 transformer layers in the attached PDF to show how incorporating global-cross transformer layers improves performance and how incorporating more layers further improves performance. The results show that without global-cross transformer layers, the prediction performance drops when there is a lack of texture information, e.g., on planar and curved surfaces or repetitive patterns. By incorporating only 2 transformer layers, the performance is significantly improved, which is also shown in Table 4 of the paper. Without global-cross transformer layers, the learned features are purely local. Thus, when there is a planar or curved surface or a repetitive pattern, the matching process tends to be inaccurate. More transformer layers further help with such cases. However, this might not be easy to see in the visualization, since the difference in $EPE_{3D}$ is small (0.081m and 0.041m for 2 and 24 transformer layers, respectively).

**Q4: Related work in 3D point cloud registration.** We agree that point cloud registration is a topic closely related to scene flow estimation. We will discuss this relationship in the paper, also with regard to our contributions, and add it to the related work section. A brief summary is provided below: Related to scene flow estimation, there are some correspondence-based point cloud registration methods. Such methods separate the point cloud registration task into two stages: finding the correspondences and recovering the transformation. PPFNet [3] and PPF-FoldNet [4], proposed by Deng $\textit{et al.}$, focus on finding sparse corresponding 3D local features. Gojcic $\textit{et al.}$ [5] propose to use a voxelized smoothed density value (SDV) representation to match 3D point clouds. These methods only compute sparse correspondences and are not capable of handling the dense correspondences required in scene flow tasks.
More related works are CoFiNet [6] and GeoTransformer [7], both of which find dense correspondences employing transformer architectures. Yu $\textit{et al.}$ in CoFiNet [6] propose a detection-free learning framework and find dense point correspondences in a coarse-to-fine manner. Qin $\textit{et al.}$ in GeoTransformer [7] further improve the accuracy by leveraging geometric information. Some works further introduced superpoint matching to resolve the ambiguity introduced by self-attention layers. RoITr [8] introduces a Rotation-Invariant Transformer to disentangle geometry from pose and tackles point cloud matching under arbitrary pose variations. PEAL [9] introduces a Prior-Embedded Explicit Attention Learning model and, for the first time, explicitly injects an overlap prior into the Transformer to solve point cloud registration under low overlap. However, the goal of point cloud registration is not to estimate a translation vector for each of the points, which makes our work different from these approaches. **References** [1] Sarlin P E, DeTone D, Malisiewicz T, et al. Superglue: Learning feature matching with graph neural networks[C]//CVPR2020. [2] Sun J, Shen Z, Wang Y, et al. LoFTR: Detector-free local feature matching with transformers[C]//CVPR2021. [3] Deng H, Birdal T, Ilic S. Ppfnet: Global context aware local features for robust 3d point matching[C]//CVPR2018. [4] Deng H, Birdal T, Ilic S. Ppf-foldnet: Unsupervised learning of rotation invariant 3d local descriptors[C]//ECCV2018. [5] Gojcic Z, Zhou C, Wegner J D, et al. The perfect match: 3d point cloud matching with smoothed densities[C]//CVPR2019. [6] Yu H, Li F, Saleh M, et al. Cofinet: Reliable coarse-to-fine correspondences for robust point cloud registration[J]. NeurIPS2021. [7] Qin Z, Yu H, Wang C, et al. Geometric transformer for fast and robust point cloud registration[C]//CVPR2022. [8] Yu H, Qin Z, Hou J, et al. Rotation-invariant transformer for point cloud matching[C]//CVPR2023. [9] Yu J, Ren L, Zhang Y, et al. PEAL: Prior-Embedded Explicit Attention Learning for Low-Overlap Point Cloud Registration[C]//CVPR2023. --- Rebuttal Comment 1.1: Comment: Thank you authors for the detailed responses to my concerns. They seem to have been well addressed. I still appreciate the conceptual novelty and the strong performance of the paper, but am still concerned about the limited technical novelty of the paper. I would like to maintain my current rating of borderline accept.
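As background for the coupled design defended in Q1b above, the following is a schematic PyTorch sketch of one global-cross layer: self-attention within each point cloud followed by cross-attention between the two clouds. Layer widths, weight sharing between clouds, and all names are our assumptions for illustration, not the GMSF implementation.

```python
import torch
import torch.nn as nn

class GlobalCrossLayer(nn.Module):
    """One coupled layer: global self-attention within each point cloud,
    then cross-attention from each cloud to the other (attention weights
    are shared between the two clouds in this sketch)."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, f1, f2):  # f1, f2: (B, N, dim) features of the two clouds
        f1 = f1 + self.self_attn(f1, f1, f1)[0]   # global: attend within cloud 1
        f2 = f2 + self.self_attn(f2, f2, f2)[0]   # global: attend within cloud 2
        g1 = f1 + self.cross_attn(f1, f2, f2)[0]  # cross: cloud 1 queries cloud 2
        g2 = f2 + self.cross_attn(f2, f1, f1)[0]  # cross: cloud 2 queries cloud 1
        return g1, g2
```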
null
null
null
null
null
null
Compositional Foundation Models for Hierarchical Planning
Accept (poster)
Summary: The paper presents Hierarchical Planning with Foundation Models (HiP), a framework that addresses long-horizon decision-making in novel environments, using multiple modalities of data and reasoning at various levels of hierarchy. The key elements of this framework are the use of a large language model for constructing symbolic plans, a video diffusion model to generate observation trajectory plans, and a large pre-trained robot ego-centric action model to map these plans to a robot's control space. The authors propose an iterative refinement approach for feedback incorporation, promoting consensus among the different models without the need for large-model finetuning. Experimental results on two long-horizon tabletop manipulation tasks are presented, demonstrating promising results for the proposed strategy. The authors also mention the potential for including other modalities, like touch and sound, in the future. Strengths: The technical content of the paper appears to be sound, and the proposed framework has shown promising results in the experiments provided. The methodology of combining language models, video diffusion models, and ego-centric robot control models into one system is comprehensive. The paper is well-structured and clearly explains the approach and the reasoning behind the decisions made. The paper's flow from problem statement to proposed solution, and finally to experimental evaluation, is logical and easy to follow. Weaknesses: 1. The baselines compared are all relatively weak. It would be more appropriate to compare with foundation robotics models, such as SayCan, Gato [1], and PaLM-E [2]. The authors argue that, compared to SayCan, they can generalise to new skills. However, they didn't demonstrate generalization to new skills either. (They evaluated on unseen tasks but not new skills.) 2. While the iterative refinement approach is interesting, it could be further scrutinized to understand its limitations better, particularly concerning computational efficiency and robustness. 3. As this is mostly an experimental paper, please provide code to ensure reproducibility. 4. Formatting issues in lines 9 and 45. [1] Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, Tom Eccles, Jake Bruce, Ali Razavi, Ashley Edwards, Nicolas Heess, Yutian Chen, Raia Hadsell, Oriol Vinyals, Mahyar Bordbar, and Nando de Freitas. A generalist agent. In Transactions on Machine Learning Research (TMLR), November 10, 2022. [2] Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Pete Florence. Palm-e: An embodied multimodal language model. In arXiv preprint arXiv:2303.03378, 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above. Could the authors discuss more about the efficiency and robustness, and also compare with a few foundation robotics models to better situate the work? This paper presents a potentially significant step towards more effective long-horizon decision-making. The presented results are promising. I'm happy to raise my scores if the authors address my concerns. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors discussed the limitations. They didn't comment on the broader societal impacts, but I didn't see any concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer zvNJ for their constructive feedback. We now answer the concerns raised in the review. >The baselines compared are all relatively weak. It would be more appropriate to compare with foundation robotics models, such as SayCan, Gato [1], and PaLM-E [2]. The authors argue that, compared to SayCan, they can generalise to new skills. However, they didn't demonstrate generalization to new skills either. (They evaluated on unseen tasks but not new skills.) We add comparisons to 2 existing foundation models for decision making: Gato [1] and SayCan [2]. In training the visual and action planning for HiP, we use 100k robot videos for the visual planner, and the inverse dynamics model, when trained from scratch, utilizes 10k state-action trajectory pairs. In order to ensure a fair comparison, we use 110k datapoints for training Gato and SayCan. *Gato:* We borrow the Gato architecture from the [Vima Codebase](https://github.com/vimalabs/VIMA/tree/main) and use it to train a language-conditioned policy with imitation learning. We use 110k (language, observation trajectory, action trajectory) datapoints in each of the three domains for training Gato. Furthermore, we provide oracle subgoals to Gato. We compare Gato with HiP and other baselines on all three domains in Table 1 of the rebuttal document. We see that HiP outperforms Gato on both seen and unseen tasks in all three domains. *SayCan:* We borrow the SayCan algorithm from the [SayCan codebase](https://github.com/google-research/google-research/blob/master/saycan/SayCan-Robot-Pick-Place.ipynb) and adapt it to our settings. Following the recommendations of the SayCan codebase, we use CLIPort policies as primitives. CLIPort policies take in a top-down RGBD view and output pick and place pixel coordinates. Then, an underlying motion planner picks the object from the specified pick coordinate and places it at the specified place coordinate. We train CLIPort policies on 110k (language, observation, action) datapoints in the *paint-block* and *object-arrange* domains. The SayCan paper uses a value function as an affordance function to select the correct subgoal given the current observation and high-level goal. However, CLIPort policies don't have a value function. The SayCan codebase uses a hardcoded scoring function, which doesn't apply to the *object-arrange* domain. To overcome these issues, we use the LLM grounding strategy from Huang et al. [3]. It uses the unnormalized logits over the pixel space given by CLIPort policies as affordances and uses them to ground the LLM in the current observation and thus predict the subgoal. We then compare SayCan with HiP and other baselines on the *paint-block* and *object-arrange* domains in Table 1 of the rebuttal document. While SayCan outperforms the other baselines, HiP still outperforms it on both seen and unseen tasks of the *paint-block* and *object-arrange* domains. We couldn't run SayCan on the *kitchen-tasks* domain as there is no clear-cut primitive in that domain. This points to a limitation of SayCan, which requires tasks to be expressed in terms of primitives, with each primitive paired with an affordance function. PaLM-E [4] is based on finetuning PaLM models and associated vision encoders on robotics data. However, PaLM models aren't available, and finetuning a large LLM (540B parameters) along with a vision encoder (22B parameters) requires a large number of GPUs that aren't available in an academic lab. >As this is mostly an experimental paper, please provide code to ensure reproducibility.
We have uploaded an anonymized preliminary version of the code. We also provided architectural details in Appendix B of our submission and further details on the training and evaluation procedures in Appendix C of our submission. We will work on cleaning up the code over the coming weeks and provide a clean version with detailed instructions in the camera-ready version of the paper. >While the iterative refinement approach is interesting, it could be further scrutinized to understand its limitations better, particularly concerning computational efficiency and robustness. **Computational efficiency of iterative refinement:** We provide the average runtime of HiP for a single episode in all three domains in Table 2 of the rebuttal document. As we can see, visual planning takes the majority of the runtime. The prediction time of the subgoal classifier $f_\phi$ is minimal, and gradients from the observation trajectory classifier $g_\psi$ add minimal runtime overhead. However, the subgoal classifier and observation trajectory classifier add memory overhead. **Robustness of iterative refinement:** By robustness, we assume you are referring to a sensitivity analysis with respect to hyperparameters. The subgoal classifier doesn't introduce any test-time hyperparameters, and we use standard hyperparameters (1e-3 learning rate, 1e-6 weight decay, 256 batch size, 50 epochs, Adam optimizer) for its training, which remain fixed across all domains. We observed that the performance changes minimally across different hyperparameters, given a learning rate decay over training. However, the observation trajectory classifier $g_\psi$ introduces an additional test-time hyperparameter $\omega'$, which weights the gradient from the observation trajectory classifier. Table 4 in the rebuttal document varies $\omega'$ between 0.5 and 2 in intervals of 0.25 and shows the success rate of HiP. We see that HiP gives the best performance when $\omega' \in \{1, 1.25\}$, but its performance degrades for higher values of $\omega'$. We will include this analysis in the camera-ready version of our paper. **References:**\ [1] Reed et al. "A Generalist Agent." TMLR, 2022.\ [2] Ahn et al. "Do As I Can, Not As I Say: Grounding Language in Robotic Affordances." CoRL, 2022.\ [3] Huang et al. "Grounded Decoding: Guiding Text Generation with Grounded Models for Robot Control." arXiv:2303.00855, 2023.\ [4] Driess et al. "PaLM-E: An Embodied Multimodal Language Model." ICML, 2023. --- Rebuttal Comment 1.1: Comment: Dear reviewer zvNJ, Thank you again for your comments and suggestions on our paper. We hope that our responses and new results have addressed your questions and concerns. We still have a few days left in the discussion period. If you have any further questions, please don't hesitate to let us know and we'll be happy to address them. Thank you! Best, Authors
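For context on the test-time weight $\omega'$ discussed above: in classifier-guided diffusion sampling, such a weight typically scales the classifier gradient added during each denoising step. The following is a generic, simplified sketch under our own assumptions (noise-schedule scaling factors omitted), not the authors' exact refinement code.

```python
import torch

def guided_noise_prediction(x_t, t, eps_model, classifier_logp, omega_prime):
    """Predicted noise shifted by the gradient of the observation trajectory
    classifier g_psi, scaled by the test-time weight omega'. Schematic only:
    the usual sqrt(1 - alpha_bar) factor is omitted for brevity."""
    x_t = x_t.detach().requires_grad_(True)
    logp = classifier_logp(x_t, t)  # log-probability that x_t is consistent
    grad = torch.autograd.grad(logp.sum(), x_t)[0]
    return eps_model(x_t, t) - omega_prime * grad  # fed into the DDPM update
```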
Summary: This work presents a new approach that combines foundation models from different modalities into a hierarchical system to solve long-horizon robotics problems. The performance of the system is evaluated on two long-horizon tasks and largely outperforms the other baselines. Strengths: The approach of this work is an effective combination of previously available models. The effectiveness of iterative refinement is particularly interesting (it seems to be one of the main performance boosters in this approach). The framework details are well supported by the writing and supplementary materials. Weaknesses: I don't find weaknesses in this work, but I do have some questions in terms of the dataset, experiments, and analysis; see the questions below. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What do the dataset stats look like? 2. Can you provide more description to rationalize the use of training data to evaluate HiP (Table 1)? 3. line 213: What are the benefits of pre-training the video diffusion model in terms of success rate? The description seems to only indirectly describe the benefits. 4. It would be interesting to see some analysis on the effectiveness of subgoal generation in this approach; for example, if the granularity of the subgoal generation varies, what effect does it have on the overall approach? (This is interesting to me; leaving it unaddressed will not affect my review score.) 5. What does the computation speed look like for this approach; how fast is it to generate a plan for an episode? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are addressed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer z2ah for evaluating our work. We now answer the concerns raised in the review. >What do the dataset stats look like? We described our evaluation environments in Section 3.1 of our original submission and provided further details about our datasets for task planning, visual planning, and action planning in Appendix C of our original submission. We now summarize these dataset statistics here: 1. Task planning: (a) LLM prompting: For all three domains, we prompt the LLM with 5 examples of (high-level goal, set of desirable subgoals) pairs sampled from tasks in the given domain. (b) Training the multi-class classifier $f_\phi$ for subgoal classification: We used the classification dataset $D_{classify}$, consisting of observation $x_{i,1}$, goal $g$, candidate subgoals $(w_j)\_{j=1}^M$, and the correct subgoal label $i$, for training the multi-class classifier $f_\phi$. The classification dataset for *paint-block*, *object-arrange*, and *kitchen-tasks* consists of 58k, 82k, and 50k datapoints, respectively. 2. Visual planning: After pretraining our visual model on the Ego4D dataset, consisting of 344k datapoints, we finetune our visual model on the dataset $D_{video} \coloneqq (\tau_x^i, w_i)$ consisting of (observation trajectory, subgoal) pairs. The video dataset consisted of 100k datapoints for all the domains. 3. Action planning: We train a VC-1-initialized [1] inverse dynamics model on the dataset $D_{\text{inv}} \coloneqq (\tau_x^i, \tau_s^i)$ consisting of paired observation and robot state trajectories. The inverse dynamics dataset consisted of 1k datapoints in the *paint-block* and *object-arrange* domains and 3.5k datapoints in the *kitchen-tasks* domain. >Can you provide more description to rationalize the use of training data to evaluate HiP (Table 1)? We are not sure we understand this question completely, but we understood it as asking about the evaluation metrics used in Table 1. We measure the task completion rate (i.e., whether the final goal has been achieved) for evaluating HiP and other baselines on the *paint-block* and *object-arrange* domains. To be consistent with the evaluation metrics used in [1,2], we use the subtask completion rate (i.e., the percentage of subtasks completed) for evaluating HiP and other baselines on the *kitchen-tasks* domain. To evaluate a model on Seen and Unseen tasks, we sample 1000 tasks from $T_{train}$ and $T_{test}$ respectively, and obtain the average task completion rate on the *paint-block* and *object-arrange* domains and the average subtask completion rate on the *kitchen-tasks* domain. We repeat this procedure over 4 different seeds and report the mean and the standard error over those seeds in Table 1. We will update our draft to better describe our evaluation metrics in the camera-ready version of the paper. We apologize if we misinterpreted your question and would be happy to offer further clarifications. >What are the benefits of pre-training the video diffusion model in terms of success rate? The description seems to only indirectly describe the benefits. Figure 2 in the rebuttal document provides the success rate of HiP when the video diffusion model is trained with 100%, 75%, and 50% of the training data, with or without Ego4D [2] pretraining. We see that pretraining with Ego4D consistently yields a better success rate, even with reduced training dataset sizes.
>It would be interesting to see some analysis on the effectiveness of subgoal generation in this approach; for example, if the granularity of the subgoal generation varies, what effect does it have on the overall approach? We appreciate that the reviewer suggested this interesting axis to explore. We conduct a study in the *paint-block* domain to analyze how the granularity of subgoals affects HiP. In our current setup, a subgoal in the *paint-block* domain is of the form "Place <block color> block in/on/to <final block location>" and involves a pick and a place operation. We refer to our current setup as HiP (standard). We introduce two additional levels of subgoal granularity: 1. Only one pick or place operation: The subgoal will be of the form "Pick <block color> block in/on <initial block location>" or "Place <block color> block in/on/to <final block location>". It will involve either one pick or one place operation. We refer to the model trained in this setup as HiP (more granular). 2. Two pick and place operations: The subgoal will be of the form "Place <1st block color> block in/on/to <final 1st block location> and Place <2nd block color> block in/on/to <final 2nd block location>". It will involve two pick and place operations. We refer to the model trained in this setup as HiP (less granular). Note that UniPi has the least granularity in terms of subgoals, as it tries to imagine the entire trajectory from the goal description. Table 3 in the rebuttal document compares HiP (standard), HiP (more granular), HiP (less granular), and UniPi on seen and unseen tasks in the *paint-block* domain. We observe that HiP (standard) and HiP (more granular) have similar success rates, while HiP (less granular) has a lower success rate. UniPi has the lowest success rate amongst these variants. We hypothesize that the success rate of HiP remains intact when we decrease the subgoal granularity as long as the performance of the visual planner doesn't degrade. Hence, HiP (standard) and HiP (more granular) have similar success rates. However, when the performance of the visual planner degrades as we further decrease the subgoal granularity, we see a decline in success rate as well. That's why HiP (less granular) sees a decline in success rate and UniPi has the lowest success rate amongst all variants. **References:**\ [1] Majumdar et al. "Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence?" arXiv:2303.18240, 2023.\ [2] Grauman et al. "Ego4D: Around the World in 3,000 Hours of Egocentric Video." CVPR, 2022. --- Rebuttal Comment 1.1: Comment: Dear reviewer z2ah, Thank you again for your comments and suggestions on our paper. We hope that our responses and new results have addressed your questions and concerns. We still have a few days left in the discussion period. If you have any further questions, please don't hesitate to let us know and we'll be happy to address them. Thank you! Best, Authors --- Rebuttal Comment 1.2: Comment: Thanks for addressing my questions! I think this work makes a good contribution toward combining models and robot planning. I will keep my score.
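To make the subgoal-classifier training on $D_{classify}$ described in this rebuttal concrete, the following is a minimal sketch of the per-example loss; the interface of $f_\phi$ (a scalar score per candidate) is our assumption, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def classifier_loss(f_phi, x1, goal, candidates, label):
    """One training example from D_classify: score every candidate subgoal
    with f_phi and apply cross-entropy against the correct subgoal index."""
    logits = torch.stack([f_phi(x1, goal, w) for w in candidates])  # (M,)
    return F.cross_entropy(logits.unsqueeze(0), torch.tensor([label]))
```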
Summary: The paper introduces a framework called Hierarchical Planning with Foundation Models (HiP), which is designed to improve decision-making in new environments with long-term goals. HiP uses hierarchical reasoning to plan subgoals, visually reason about plans, and execute actions through visual-motor control. The framework utilizes different kinds of knowledge to support various levels of decision-making. It employs a large language model for constructing symbolic plans, a large video diffusion model for grounding the plans in the environment, and an inverse dynamics model for inferring actions from generated videos. The models are kept consistent through iterative refinement. The effectiveness of HiP is demonstrated through its application to two long-horizon table-top manipulation tasks. The paper's introduction explains that successful execution of tasks in unfamiliar environments requires reasoning at abstract, geometric, and control levels. The authors propose using multiple modalities of Internet-scale data to reason across these levels. They explain the limitations of existing models and propose HiP as a solution, which uses three different large foundation models to construct a physically executable plan for long-horizon tasks. The models are iteratively refined based on feedback from downstream models, ensuring that the final plan satisfies constraints at all levels. The authors also discuss the potential of training foundation models for videos and ego-centric actions, and the limitations of their current computational resources. They conclude by highlighting the promise of their proposed approach in long-horizon decision-making tasks and hope to inspire future work on more complex real-world tasks. Strengths: - Hierarchical Planning with Foundation Models (HiP) leverages different modalities of knowledge, which can improve decision-making in novel environments with long-horizon goals. - The framework uses a large language model, a large video diffusion model, and an inverse dynamics model, allowing it to construct symbolic plans, ground those plans in the environment, and infer actions from generated videos. - The iterative refinement process ensures consistency between the models and enables hierarchically consistent plans that are responsive to the goal and executable given the current state and agent. - The approach is computationally efficient to train as it doesn't require any large-model finetuning. - The authors demonstrate promising results on two long-horizon tabletop manipulation environments, illustrating the efficacy and adaptability of the approach. Weaknesses: - The paper is technically similar to UniPi. The technical novelty is potentially incremental: it adds a language model to decompose the language goal into language subgoals and then applies a video prediction model to them. - The work relies on foundation models for video prediction and ego-centric action prediction. Concern: Although it can be expected that video prediction models will be available in the future, it may not be assumed that they are suitable for robotics applications. Additionally, it seems to be an overclaim that robotics should rely on these models. 1. Applications in robotics may need to model multiple aspects of physical environments, which the “expected” video foundation models can struggle to learn. How could forces be modeled in a video model? How can one guarantee that the generated videos are physically feasible and can be executed for unseen tasks?
The shown pick-and-place is a useful environment, but doesn't support all the claims. Either the claims need to be revised or more experiments are needed. 2. It is also likely that vision (image/video) is only one of several important modalities in real-world robotic decision-making. For example, for some insertion tasks, tactile sensing could be important. 3. The computational cost of these models is not known yet at all. Although GPT-4 is available for use, it only needs to transfer text over the internet. If the needed models were to transfer high-quality images or even videos, how much bandwidth would they need? Would that allow real-time robotic decision-making? It seems unlikely that in the near future we will have pretrained action/video models for all scenarios that can give real-time results over the internet. If the goal of this paper is to explore this direction, I think these aspects should all be considered, instead of assuming these challenges are nonexistent and simply claiming the contributions. - Overall, I think the paper is a meaningful initial exploration of this direction, but the claims of the paper should be clear about this, as some assumptions are not necessarily realistic. The title “Hierarchical Planning with Foundation Models” does not necessarily match the contributions and novelties of this paper. - L68 — The step 2 “visual planning” is the key step for planning at the physical level, but the image level is not necessarily the correct level of abstraction in many cases. When does it apply and when not? It seems to include lots of unnecessary details of the physical world for tasks other than tabletop manipulation. - More general concern: The approach relies on the availability of large pretrained models in different domains. Currently, these models are only readily available in the language domain. Furthermore, as mentioned in the paper, the approach is demonstrated on smaller-scale video and ego-centric action models trained in simulation, which serve as proxies for larger pretrained models. This might limit the generalizability of the results to more complex real-world tasks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer 3JTC for their constructive feedback. We now answer the concerns raised in the review. >Novelty Prior works using foundation models for robotics typically learn a single large model on Internet/robotics data. However, LLMs cannot construct visually grounded plans, and UniPi cannot construct long-horizon plans. The novelty of our approach is to use a hierarchy of different foundation models in combination, to reason hierarchically across different modalities, enabling us to scale to long-horizon plans. This hierarchy further allows us to use passive pre-training on different modalities of data, significantly reducing the need for domain-specific data. We further propose to use iterative refinement to connect the models across the different levels of the hierarchy and ensure compatibility across modalities. We demonstrate our results on three long-horizon robot manipulation domains. >Use Case of Video Models in Robotics Video prediction models trained on Internet data can provide us with a rich source of motion information and physics. Given a text subgoal and current image observation, a synthesized video can tell us the precise hand motions necessary to open a door. Our submission shows that video models pretrained on Internet data (Ego4D) generate better videos (in terms of FVD; see Figure 5 in our submission) and lead to a higher success rate (see Figure 2 in the rebuttal document). As a result, while large video prediction models do not eliminate the need for robotics data, we believe they provide a rich prior of semantic and physics information that reduces the amount of in-domain robotics data needed, by teaching us priors on how objects move and their physics. We will clarify this claim and update our introduction accordingly. >How to model forces and tactile sensing? We agree that the current instantiation does not integrate force and tactile sensing and will discuss this in our limitations. However, we note that a video model can be used for both force and tactile sensing. For instance, [2] proposed a visual model to predict future visual observations and GelSight observations given current visual and GelSight observations and planned actions. Building on this work, we can extend our visual planner to additionally generate future GelSight observations given a subgoal. Then, these predicted GelSight observations can be used as input to our inverse dynamics model. While our submission doesn't focus on touch sensing, we believe future works that build foundation models for it can easily be integrated. We will update Section 5 to discuss the limitations of our current instantiation of the hierarchy and how our framework can be expanded to include additional sensory modalities like touch. >Computational Cost / Bandwidth of Video **Bandwidth Cost of Images/Videos:** In our submission, we generate videos of size 64x48 consisting of 50 frames, which amount to ~15 KB. Even if we generate 200 frames at a resolution of 256x256, they would still be under 1 MB. Hence, we do not expect the transfer of these data to become a bottleneck. Also, by generating a video and action plan for each subgoal instead of at each timestep, we further reduce the bandwidth requirement. However, this approach may not be suitable for dynamically changing environments that require new subgoals at a high frequency; we will discuss this limitation in Section 5 of the camera-ready version.
**Computational Cost of Running Models:** We provide the average runtime of HiP for a single episode in all three domains in Table 2 of the rebuttal document. Please see our common response for more details. We ran all our inference experiments on an A6000, which is more powerful than the on-board GPUs available on robotics platforms. However, we believe that improvements in both hardware and software (e.g., quantization techniques for faster inference) will help realize our models on on-board GPUs. >Claim of paper / Contributions As discussed in the comments above, we will clarify some assumptions that our approach makes (e.g., video models serve as a prior and don't necessarily eliminate the need for in-domain robotics data), further expand upon the limitations of our work in Section 5 (e.g., scalability when new subgoals must be generated at a high frequency), and tone down our overall claims. Our work focuses on leveraging hierarchies for long-horizon decision making, with each level of the hierarchy making use of a foundation model pre-trained on non-robotics data. Hence, we titled our paper "Hierarchical Planning with Foundation Models". However, we are happy to modify this as the reviewer sees fit. >L68 — Images as Abstraction of Physical World In general, we agree that the image level may not be the most efficient abstraction for physical planning and can capture unnecessary detail. In settings without ego-motion, which covers many manipulation tasks, image-level motion corresponds well to physical movement. In settings with ego-motion, image-level plans may include unnecessary details from the ego-motion, but will still capture the relevant physical motion. However, we note that the image-level representation is still sufficient for this setting, and the best text-conditioned video generation models can capture relatively accurate physical motion even with ego-motion [3]. We did a study where we replaced our video diffusion model with a latent RSSM and modeled physical-level plans in the latent space. As seen in Table 1 of the rebuttal document, video generation outperforms this baseline, with a larger performance gap on more visually complex domains. **References**: [1] Majumdar et al. "Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence?" arXiv, 2023. [2] Calandra et al. "More than a feeling: Learning to grasp and regrasp using vision and touch." RAL, 2018. [3] Ho et al. "Imagen Video: High Definition Video Generation with Diffusion Models." arXiv, 2022. --- Rebuttal Comment 1.1: Comment: Dear reviewer 3JTC, Thank you again for your comments and suggestions on our paper. We hope that our responses and new results have addressed your questions and concerns. We still have a few days left in the discussion period. If you have any further questions, please don't hesitate to let us know and we'll be happy to address them. Thank you! Best, Authors
Summary: This paper presents a way to leverage large foundation models to perform long-horizon hierarchical planning tasks. Specifically, the authors leverage an LLM for subgoal generation from the text instruction, a video generation model for generating a plan for each subgoal, and an action prediction model that outputs an action given the current and next observation generated by the video generation model. To make the predictions of the different models consistent, the authors propose an iterative refinement procedure to refine the prediction of a model based on joint distributions. The paper includes results in simulated manipulation environments and shows improvement over prior goal-conditioned/action-planner/video-planner-based methods. The authors also include ablation studies to demonstrate the effectiveness of various design choices in the method, such as the use of iterative refinement. Strengths: 1. The authors present a nice way to combine three different large foundation models for the problem of hierarchical planning. The iterative refinement method seems very effective in aligning the predictions of the various models, which is of great significance for research that combines multiple foundation models. 2. The authors perform rigorous empirical evaluations by comparing to various baselines and conducting a large number of ablation studies to show the good performance of the proposed method. Weaknesses: 1. While the iterative refinement method is intuitively reasonable and a natural choice for making the predictions of the models in the pipeline consistent, the authors use approximations of the target joint distribution in the iterative refinement step. It is unclear if some of the approximations fully capture the target distribution. For example, I'm not sure if Eq. (3) is a good approximation of (2), since Eq. (3) doesn't have the reachability part and thus doesn't fully capture the feasibility of achieving the language goal. 2. It is unclear if the method can outperform previous hierarchical planning methods such as [1, 2, 3, 4, 5, 6]. Since the authors only evaluate the method in simple simulated pick-and-place tasks, I'm not sure if we need large-scale video diffusion models for video generation; a rather simple latent-space world model like [1] is very likely to work. 3. Following the comment in 2, I think the authors should evaluate the method in more domains included in [1,2,3,4,5,6] and also more realistic domains such as real-world robot manipulation settings to fully demonstrate the necessity of using big video foundation models. [1] Hafner, Danijar, Kuang-Huei Lee, Ian Fischer, and Pieter Abbeel. "Deep hierarchical planning from pixels." Advances in Neural Information Processing Systems 35 (2022): 26091-26104. [2] Hafner, Danijar, Jurgis Pasukonis, Jimmy Ba, and Timothy Lillicrap. "Mastering diverse domains through world models." arXiv preprint arXiv:2301.04104 (2023). [3] Zhao, Chao, Shuai Yuan, Chunli Jiang, Junhao Cai, Hongyu Yu, Michael Yu Wang, and Qifeng Chen. "ERRA: An Embodied Representation and Reasoning Architecture for Long-horizon Language-conditioned Manipulation Tasks." IEEE Robotics and Automation Letters (2023). [4] Kujanpää, Kalle, Joni Pajarinen, and Alexander Ilin. "Hierarchical Imitation Learning with Vector Quantized Models." arXiv preprint arXiv:2301.12962 (2023). [5] Mendonca, Russell, Oleh Rybkin, Kostas Daniilidis, Danijar Hafner, and Deepak Pathak. "Discovering and achieving goals via world models."
Advances in Neural Information Processing Systems 34 (2021): 24379-24391. [6] Tian, Stephen, Suraj Nair, Frederik Ebert, Sudeep Dasari, Benjamin Eysenbach, Chelsea Finn, and Sergey Levine. "Model-based visual planning with self-supervised functional distances." arXiv preprint arXiv:2012.15373 (2020). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please address the concerns raised in the section above. Update after rebuttal: I decided to raise my score after reading the authors' response. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer 7a1H for their constructive feedback. We now answer the concerns raised in the review. >unclear if some of the approximations fully capture the target distribution. For example, I'm not sure if Eq. (3) is a good approximation of (2), since Eq. (3) doesn't have the reachability part and thus doesn't fully capture the feasibility of achieving the language goal. In Appendix D.1 of our supplemental material, we compare optimizing the correct target distribution with our approximations. We find that directly optimizing Equation (2) achieves an average success rate of $54.3 \pm 7.2$, compared to optimizing Equation (3), which achieves $98.2 \pm 1.5$ on the *paint-block* domain. This suggests that our approximations reasonably capture the target distribution. Furthermore, if we consider the rewrite of Equation (3), $\max_{w_i} \log p_\text{LLM}(w_i|g) + \log\left(\frac{p(x_{i,1} | w_i, g)}{p(x_{i,1} | g)}\right)$, we see that the first part $\log p_\text{LLM}(w_i|g)$ ensures reachability, as it makes the LLM produce subgoals that make progress towards the high-level goal, and the second part $\log\left(\frac{p(x_{i,1} | w_i, g)}{p(x_{i,1} | g)}\right)$ ensures feasibility, as the classifier approximating the log density ratio selects subgoals feasible from $x_{i,1}$. >It is unclear if the method can outperform previous hierarchical planning methods such as [1, 2, 3, 4, 5, 6]. Since the authors only evaluate the method in simple simulated pick-and-place tasks, I'm not sure if we need large-scale video diffusion models for video generation; a rather simple latent-space world model like [1] is very likely to work. To show the benefits of the video diffusion model, we perform an ablation where we use a (text-conditioned) recurrent state space model (RSSM), taken from Dreamer-v3 [2] (and used in other prior works [1,3]), as the visual model for HiP. We borrow the RSSM code from the [dreamerv3-torch codebase](https://vitalab.github.io/article/2023/01/19/DreamerV3.html). To adapt the RSSM to our setting, we condition it on the subgoal (i.e., the subgoal encoded into a latent representation by Flan-T5-Base) instead of actions. Hence, the sequence model of the RSSM becomes $h_t = f(h_{t-1}, z_{t-1}, w)$, where $w$ is the latent representation of the subgoal. Furthermore, we don't predict any reward, since we aren't in a reinforcement learning setting, and don't predict the continue vector, since we decode for a fixed number of steps. Hence, we remove reward prediction and continue prediction from the prediction loss. To make the comparison fair, we pretrain the RSSM with Ego4D data as well. We report the results in Table 1 of the rebuttal document. We see that HiP with the video diffusion model outperforms HiP with RSSM in all three domains. While the performance gap between HiP (RSSM) and HiP (i.e., using video diffusion) is small in the *paint-block* domain, it widens in the *object-arrange* and *kitchen-tasks* domains as the domains become more visually complex. Note that we didn't directly compare to Dreamer-v3, as Dreamer-v3 operates in the online RL setting. Since we wanted to show the benefits of using video diffusion, we took the RSSM from Dreamer-v3 and used it with HiP for a fair comparison.
We will add this ablation study on the choice of video model to the camera-ready version of our paper. >Following the comment in 2, I think the authors should evaluate the method in more domains included in [1,2,3,4,5,6] and also more realistic domains such as real-world robot manipulation settings to fully demonstrate the necessity of using big video foundation models. We evaluate HiP on a third domain, *kitchen-tasks*, which is inspired by the *kitchen-shift* domain in Xing et al. [4] and is a generalization of the *Robokitchen* domain used in Mendonca et al. [3]. Please see our common response for more details on the task setup and results. **References**\ [1] Hafner et al. "Deep hierarchical planning from pixels." NeurIPS, 2022.\ [2] Hafner et al. "Mastering diverse domains through world models." arXiv, 2023.\ [3] Mendonca et al. "Discovering and achieving goals via world models." NeurIPS, 2021.\ [4] Xing et al. "KitchenShift: Evaluating Zero-Shot Generalization of Imitation-Based Policy Learning Under Domain Shifts." --- Rebuttal Comment 1.1: Comment: Dear reviewer 7a1H, Thank you again for your comments and suggestions on our paper. We hope that our responses and new results have addressed your questions and concerns. We still have a few days left in the discussion period. If you have any further questions, please don't hesitate to let us know and we'll be happy to address them. Thank you! Best, Authors --- Rebuttal Comment 1.2: Title: Reply to rebuttal Comment: Thank you for the rebuttal! I appreciate the additional experiments and clarifications, which address most of my concerns. One question is why directly optimizing the correct target distribution (Eq. (2)) leads to much poorer performance. Is there something off? Overall, I'm satisfied with the authors' response and would like to raise my score. --- Reply to Comment 1.2.1: Comment: Thank you for your time reviewing the paper. Please see our clarification: >One question is why directly optimizing the correct target distribution (Eq. (2)) leads to much poorer performance. Is there something off? We believe the main reason optimizing the target distribution (Eq. (2)) leads to poor performance is that the underlying learned probability densities do not fully accurately capture each probability distribution in Eq. (2). For instance, DDPMs are typically trained with loss weightings that preferentially produce high-quality samples, as opposed to accurately modeling the underlying probability distribution. As models get larger, each learned probability density will more accurately model each probability distribution in Eq. (2), and we believe the performance will then rise significantly. We will update Appendix D.1 to better clarify this point in the camera-ready version of our paper.
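A minimal sketch of the subgoal-selection rule corresponding to the rewrite of Equation (3) discussed above, with the log density ratio supplied by the trained classifier; the function names and interfaces are our assumptions, not taken from the paper.

```python
def select_subgoal(candidates, x_first, goal, llm_logprob, log_ratio):
    """Implements max_w log p_LLM(w | g) + log(p(x_1 | w, g) / p(x_1 | g)),
    where log_ratio is the learned classifier approximating the ratio."""
    return max(candidates,
               key=lambda w: llm_logprob(w, goal) + log_ratio(x_first, w, goal))
```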
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful suggestions. We want to start by addressing the common concerns brought up by the reviewers and dive into the remaining points in the individual responses. ### **Evaluation on a more realistic robotic domain**: We evaluate HiP on an additional third domain, *kitchen-tasks*, which is inspired by the *kitchen-shift* domain in Xing et al. [1] and is a generalization of the *Robokitchen* domain used in Mendonca et al. [2]. Description of the *kitchen-tasks* domain: A robot has to complete kitchen subtasks given in language goal instructions, such as opening the microwave and lighting the kitchen area. However, the environment may contain objects irrelevant to the subtasks, which the robot must ignore. Furthermore, some kitchen subtasks may already be completed, and the robot needs to ignore those tasks when completing the goal. There are 7 possible kitchen subtasks: opening the microwave, moving the kettle, switching on the lights, turning on the bottom knob, turning on the top knob, opening the left drawer, and opening the right drawer. A new task T is generated by constructing a random sequence of 4 out of the 7 possible kitchen subtasks. In generating the random sequence, we randomly select an instance of the microwave out of 3 possible instances and an instance of the kettle out of 4 possible instances, choose textures for the counter, floor, and drawer independently out of 3 possible textures, and randomize the initial poses of the kettle and microwave. With 50% probability, one of the 4 selected kitchen subtasks is completed before the start of the task. Hence, tasks usually have 3–4 subtasks (i.e. subgoals). **Results**: We evaluate HiP and the new baselines on *kitchen-tasks* in Table 1 of the rebuttal document. We use the subtask completion rate as our evaluation metric for the *kitchen-tasks* domain, to be consistent with the evaluation metric used in [1,2]. We see that HiP significantly outperforms all other baselines on both seen and unseen tasks in the *kitchen-tasks* domain. ### **New Baselines**: We add comparisons to 2 existing foundation models for decision making: Gato and SayCan. We compare HiP against these new baselines on our experimental domains in Table 1 of the rebuttal document. We see that HiP outperforms Gato and SayCan on both seen and unseen tasks. SayCan specifically has an inherent limitation in that it requires tasks to be expressed in terms of primitives, each of which is paired with an affordance function. However, CLIPort policies don't have a value function. To overcome this issue in our experimental domains, we use the LLM grounding strategy from Huang et al. [9]. ### **Ablation on Video Model**: To show the benefits of the video diffusion model, we perform an ablation where we use a (text-conditioned) recurrent state space model (RSSM), taken from Dreamer-v3 [10], as the visual model for HiP. We report the results in Table 1 of the rebuttal document. We see that HiP with the video diffusion model outperforms HiP with RSSM in all three domains. The performance gap increases in the *object-arrange* and *kitchen-tasks* domains, which are more visually complex. ### **Runtime characteristics of HiP**: We provide the average runtime of HiP for a single episode in all three domains in Table 2 of the rebuttal document. We average across 1000 seen tasks in each domain. We break down the average runtime by component: task planning (subgoal candidate generation and subgoal classification), visual planning, action planning, and action execution.
We execute the action plan for a subgoal in open loop and then get an observation from the environment for deciding the next subgoal. From Table 2, we see that the majority of the planning time is taken by visual planning. ### **Technical contributions**: Our submission makes the following technical contributions: (i) proposing a hierarchical task planning framework for long-horizon decision-making, divided into high-level task planning in text, visual planning with videos, and action planning with joint state trajectories; (ii) designing an iterative refinement strategy to ensure consistency among the different levels of planning; (iii) demonstrating that pre-training each level of the hierarchy on different modalities of easily accessible Internet data, together with small amounts of domain-specific data, can be effective. In summary, we propose a general robot planning strategy that leverages a hierarchy of foundation models, which can be learned separately on different modalities of Internet and robotics data, to construct long-horizon plans. This is different from existing works leveraging foundation models, which typically use a single model (typically an LLM or video model) to make decisions; our approach instead reasons hierarchically across different modalities, enabling us to scale better to long-horizon plans. We illustrate the efficacy of our approach on three long-horizon robot manipulation domains. **References**: [1] Xing et al. "KitchenShift: Evaluating Zero-Shot Generalization of Imitation-Based Policy Learning Under Domain Shifts." [2] Mendonca et al. "Discovering and achieving goals via world models." NeurIPS, 2021. [3] Salimans et al. "Progressive Distillation for Fast Sampling of Diffusion Models." ICLR, 2022. [4] Zheng et al. "Fast Sampling of Diffusion Models via Operator Learning." arXiv:2211.13449, 2023. [5] Janner et al. "Planning with Diffusion for Flexible Behavior Synthesis." ICML, 2022. [6] Ho et al. "Imagen Video: High Definition Video Generation with Diffusion Models." arXiv, 2022. [7] Yu et al. "Video Probabilistic Diffusion Models in Projected Latent Space." arXiv, 2023. [8] Singer et al. "Make-A-Video: Text-to-Video Generation without Text-Video Data." arXiv, 2022. [9] Huang et al. "Grounded Decoding: Guiding Text Generation with Grounded Models for Robot Control." arXiv:2303.00855, 2023. [10] Hafner et al. "Mastering diverse domains through world models." arXiv, 2023. Pdf: /pdf/c3525fbe4aafdd56516885437062f68615d14884.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Equivariant Single View Pose Prediction Via Induced and Restriction Representations
Accept (poster)
Summary: This paper shows that algorithms which learn 3D representations from 2D images must satisfy certain consistency properties, which are equivalent to SO(2)-steerable constraints. To this end, a differentiable induction layer is proposed to map signals on the plane into signals on the sphere. The method is evaluated on the PASCAL3D+ and SYMSOL datasets to verify the validity of performing orientation prediction. Strengths: 1) A theoretical foundation for learned equivariant mappings from 2D to 3D is presented. 2) The theoretical background and analyses presented in this paper are thorough. Weaknesses: Major: 1) The implementation part is too short. It is unclear how the implementation details reflect the theoretical parts. Could the authors elaborate on the connection between the theory and the implementation in more detail? 2) There is no experimental analysis of how the learning-based framework satisfies the mentioned consistency properties. There could be some visualizations and ablation studies to show the effectiveness of the proposed method. 3) The paper mentions that some previous methods are special cases of the proposed learning-based method. However, from the experimental results, we can see that the proposed method does not always outperform the compared methods. In some cases, the learned layer's performance is even much worse than that of other methods. Moreover, it is not convincing that the learned layer can avoid learning nuisance transformations. Analyses of failure cases might be helpful (for example, to explain why the proposed method performs much worse than other methods on some objects). Minor: 1) The size of Figure 3 is strange. 2) Many references are not complete. 3) Some references should be their conference/journal version instead of the arXiv version. 4) Some typos: -- L35: "then" -- L64: "a SO(2)-steerable constraints" -- L158: "an standard" -- L195: "a the" -- L206: "spaces .Then" -- L209: "respectively" -- L230: "directions. which" -- L300: "stero" -- L306: "have show" -- Figure 7: "a image" -- Figure 7: "a induced" -- Figure 7: "with and SO(3)" Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1) L33: what does "group truth" mean? 2) What is the number of layers of the default ResNet in the experiments? Why does the ResNet-50 version have very poor performance for the boat category? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: From the experimental results, the learned layer can be much worse than some hand-coded methods. However, this is not well addressed in the paper. This could be mentioned as a limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough feedback. We address the questions posed by the reviewer below. $\textcolor{red}{ \text{Weaknesses: } }$ **The implementation part is too short. It is unclear how the implementation details reflect the theoretical parts. Could the authors elaborate on the connection between the theory and the implementation in more detail?** Understood. This point was mentioned by multiple reviewers; we have added additional sentences on the actual implementation of our proposed method. Specifically, we have expanded line 244 in the main text: "By Theorem 2 in the main text, we can expand any linear map $\Phi$ that satisfies the geometric constraint in equation 1 (line 143) as $\Phi(f)(\hat{n}) = \int_{r \in \mathbb{R}^{2}} dr \, \kappa(\hat{n}, r) f(r)$, where the kernel can be written as $\kappa(\hat{n}, r) = \sum_{\ell=0}^{\infty} F_{\ell}(r)^{T} Y_{\ell}(\hat{n})$ and each $F_{\ell}(r)$ is an $SO(2)$-steerable kernel that depends on the chosen input and output representations (which are user inputs). The terms $F_{\ell}(r)$ can be instantiated using the e2nn [39] package. Using the definition of $\kappa$, the decomposition of $\Phi(f)$ in terms of spherical harmonics is given by $\Phi(f)(\hat{n}) = \int_{r \in \mathbb{R}^{2}} dr \, \kappa(\hat{n}, r) f(r) = \int_{r \in \mathbb{R}^{2}} dr \, \big[ \sum_{\ell=0}^{\infty} F_{\ell}^{T}(r) Y_{\ell}(\hat{n}) \big] f(r) = \sum_{\ell=0}^{\infty} \big[ \int_{r \in \mathbb{R}^{2}} dr \, F_{\ell}^{T}(r) f(r) \big] Y_{\ell}(\hat{n})$. Thus, the $\ell$-th spherical harmonic coefficient of $\Phi(f)$ is given by $\Phi_{\ell}(f) = \int_{r \in \mathbb{R}^{2}} dr \, F_{\ell}^{T}(r) f(r)$. This can be computed as a tensor contraction. The inputs to the spherical convolution are then the set of spherical harmonic coefficients $\Phi_{\ell}(f)$. Spherical convolutions are performed with the e3nn [44] package." We hope that this addendum gives the reader a better understanding of how our proposed method is implemented. **There is no experimental analysis of how the learning-based framework satisfies the mentioned consistency properties. There could be some visualizations and ablation studies to show the effectiveness of the proposed method.** We thank the reviewer for this excellent point. We have added one additional numerical experiment that replaces our proposed layer with a linear layer. We then train the linear layer on the SYMSOL I dataset. Post-training, we measure the $SO(2)$-equivariance properties of the trained model and compare the performance of the linear layer with our equivariant layer. We include an additional section on this numerical experiment in the attached pdf. **The paper mentions that some previous methods are special cases of the proposed learning-based method. However, from the experimental results, we can see that the proposed method does not always outperform the compared methods. In some cases, the learned layer's performance is even much worse than that of other methods. Moreover, it is not convincing that the learned layer can avoid learning nuisance transformations. Analyses of failure cases might be helpful (for example, to explain why the proposed method performs much worse than other methods on some objects).** Our method is agnostic with respect to the image formation model that the data is collected with. The exact image formation model in the ModelNet-SO(3) dataset is an orthographic projection.
Thus, on the ModelNet-SO(3) dataset our model has to learn the correct image formation model, while [12] already uses the correct image formation model. It should be noted that on the PASCAL3D+ dataset, where the image formation model is not described by orthographic projection, our method achieves SOTA results. By adding a residual connection to our method, we are able to achieve much better performance on the ModelNet-SO(3) dataset. We address this point in more depth in the attached pdf. $\textcolor{red}{ \text{Questions: } }$ **L33: what does "group truth" mean?** We apologize for the typo. That should read “ground truth”. This has been changed in the text. **What is the number of layers of the default ResNet in the experiments?** We always choose the encoder to agree with existing baselines in order to create a fair comparison. On the PASCAL3D+ dataset, the encoder was chosen to be a ResNet-101. On SYMSOL and ModelNet-SO(3), the encoder is a ResNet-50. **Why does the ResNet-50 version have very poor performance for the boat category?** All existing baselines use the ResNet-101 on the PASCAL3D+ dataset. The poor performance of our model with the ResNet-50 encoder probably reflects the fact that the encoder is not deep enough, and the boat category is one of the most difficult due to pose ambiguity. We decided to include the results using the ResNet-50 encoder because they illustrate that even with a weaker encoder we can still achieve competitive results. $\textcolor{red}{ \text{ Minor: } }$ We thank the reviewer for their editing vigilance. We have addressed all of the minor issues and typos in the updated version of the paper. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal. -- Regarding the performance boost on ModelNet-SO(3) in the rebuttal, it seems the authors need to change the network structure from the original submission, which weakens the generalizability of the original design. -- To better illustrate that the learning-based framework is effective, there should be some necessary visualizations, as most of the previous works on equivariance provided. -- The authors did not explain why the proposed method performs much worse than other methods on some objects. This weakens the claim that some previous methods are special cases of the proposed learning-based method. -- There are some typos in the rebuttal. Besides, it is very confusing that the caption of the last table of the attachment does not correspond to the content. --- Reply to Comment 1.1.1: Title: Response to Official Comment of Reviewer hj9M Comment: We thank the reviewer for the additional comments. **Regarding the performance boost on ModelNet-SO(3) in the rebuttal, it seems the authors need to change the network structure from the original submission, which weakens the generalizability of the original design** Correct. Incorporating inductive bias into a learning algorithm is helpful, so long as the bias is correct with respect to the ground truth. Our proposed layer is agnostic with respect to the image formation model, while [12] assumes that orthographic projection is the correct image formation model. On a synthetic dataset like ModelNet-SO(3), the bias of [12] accurately reflects the image formation model of the dataset. This explains why [12] outperforms our proposed layer on ModelNet-SO(3), which is a synthetic dataset. However, on a dataset of real images like PASCAL3D+, the orthographic projection assumption is incorrect and using a model with no bias (i.e.
ours) yields better results. However, when our model includes the additional image-formation inductive bias (which, as the reviewer points out, changes the network structure), we are able to achieve better results than [12]. **To better illustrate that the learning-based framework is effective, there should be some necessary visualizations, as most of the previous works on equivariance provided.** Point well taken. We want this paper to be written in an intuitive way that can be understood easily. In Figure 1, we included an ‘equivariance diagram’, although in hindsight this figure does not do a good job of illustrating our idea. Specifically, the key idea is that the output of the neural network can be rotated in three dimensions. We have modified this figure and have tried to make an additional, more illustrative diagram. We make an additional ‘equivariance diagram’ showing both input SYMSOL images and predictions of orientations (post-training). We think this visualization is clearer than Figure 1. Lastly, at the risk of sounding slightly pompous, we think that the conditions that we derived might better be denoted ‘Virtual Equivariance’ or ‘Holographic Equivariance’. Specifically, the idea is that the output of the neural network (a virtual model or hologram) should be rotatable in three dimensions. This requirement forces the fibers of the neural network output to transform as an $SO(3)$ representation. We think that this nomenclature may be more conceptually meaningful than “Equivariant Induced and Restricted Representations”. We could not attach figures to this comment. The additional visualizations are available at: https://anonymous-visual.github.io. **The authors did not explain why the proposed method performs much worse than other methods on some objects. This weakens the claim that some previous methods are special cases of the proposed learning-based method.** As explained in the main rebuttal, [12] assumes that the correct image formation model is an orthographic projection, which is the true image formation model used in the data generation of the ModelNet-SO(3) dataset. Furthermore, we claim that some previous methods are special cases of the proposed learning-based method because they can be realized as special cases of our architecture. There is no guarantee (and nowhere do we claim) that our architecture will achieve monotonic improvement over [12] or [14]. We design the most general architecture that respects the desired symmetry constraints of the problem. We then observe experimentally that said architecture outperforms existing methods. **There are some typos in the rebuttal. Besides, it is very confusing that the caption of the last table of the attachment does not correspond to the content.** Following the request of reviewer Nd1u, we added an additional ablation study that compared the layers proposed in section E and section F. We reran the experiments on the PASCAL3D+ dataset and compared both layers. This is shown in Table 13 in the attached pdf response. We have changed the caption of Table 13 to: “Comparison of $S^{2}$ and $SO(3)$ induction/restriction for rotation prediction on PASCAL3D+. The first column is the average over all categories. The feature encoder is either a ResNet-50 or a ResNet-101 head.” We hope that this improves the clarity of presentation.
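The SO(2)-equivariance measurement described in the ablation above amounts to a short check: compare the layer applied to a rotated input against the rotated output of the unrotated input. The sketch below is illustrative only; `rotate_input` and `rotate_output` (e.g. an in-plane image rotation and a Wigner-D action on the output fibers) are assumed callables, not part of the authors' released code.

```python
import torch

def so2_equivariance_error(layer, x, rotate_input, rotate_output, angles):
    """Mean relative error || layer(g.x) - g.layer(x) || / || g.layer(x) ||
    over a set of in-plane rotation angles g."""
    errors = []
    with torch.no_grad():
        y = layer(x)
        for g in angles:
            target = rotate_output(y, g)           # act on the output fibers
            diff = layer(rotate_input(x, g)) - target
            errors.append((diff.norm() / target.norm()).item())
    return sum(errors) / len(errors)
```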
Summary: This is a theory paper. The paper presents the general form of H-equivariant linear maps that lift signals defined on R2 to signals defined on S2, which is defined by convolution with kernel constraints. The main paper focuses on the case where H=SO(2) and G=SO(3). Under this setting, the authors evaluate an SO(3) distribution regression task and show better performance there. Strengths: - The paper proposes a theoretical framework to study group equivariance under projection, which is important when studying 3D-reconstruction-related tasks - The formulation for the SO(2)-SO(3) setting is clean and easy to implement with existing equivariant network libraries. - The theory proves somewhat useful for the SO(3) distribution regression task. Weaknesses: - One main limitation of the proposed theory is that the authors only extensively demonstrate the case of SO(2)-SO(3), but lack more general application settings on other groups, for example at least SE(2) and SE(3), which have wider application settings for vision. Line 134 has an equation for the semi-direct product SE(2) but it is not further illustrated, as far as the reviewer understands. - Another inevitable drawback of studying the single-view SO(2)-SO(3) case is that only rotation along the camera optical axis strictly obeys the equivariance: when the object is rotated in 3D or the camera is rotated towards another direction (viewpoint), the content of the image will change due to the nature of the projection, and in such cases a single view has no way to guarantee any equivariance. The reviewer thinks studying the multi-view case is more reasonable to address the view content change due to 3D rotation. This limitation is mentioned by the authors, and the reviewer thinks it is treated relatively fairly. - For an equivariance theory paper, to make it more rigorous, people usually try to include the case of higher-order tensor features, not just trivial (zero-order) features. In Line 246, as far as the reviewer understands, the paper implementation only tests with zero-order features; the reviewer is curious about the performance when including higher-order features. Technical Quality: 3 good Clarity: 3 good Questions for Authors: see weakness Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The reviewer hasn't found an explicit social impact claim. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textcolor{red}{\text{Weaknesses}}$ **One main limitation of the proposed theory is that the authors only extensively demonstrate the case of SO(2)-SO(3), but lack more general application settings on other groups, for example at least SE(2) and SE(3), which have wider application settings for vision. Line 134 has an equation for the semi-direct product SE(2) but it is not further illustrated, as far as the reviewer understands.** In this work, we choose to focus on pose prediction tasks only. The semi-direct product SE(2) appears in line 134 because the method that we implement is invariant with respect to in-plane translations. We have added additional sentences to the paper to make this clearer. **Another inevitable drawback of studying the single-view SO(2)-SO(3) case is that only rotation along the camera optical axis strictly obeys the equivariance: when the object is rotated in 3D or the camera is rotated towards another direction (viewpoint), the content of the image will change due to the nature of the projection, and in such cases a single view has no way to guarantee any equivariance. The reviewer thinks studying the multi-view case is more reasonable to address the view content change due to 3D rotation. This limitation is mentioned by the authors, and the reviewer thinks it is treated relatively fairly.** In the multi-view case, there is a more intricate set of equivariance constraints that must be satisfied. However, in this work, we chose only to consider single-view pose estimation. Although we acknowledge that much of current research focuses on multi-view problems, single-view pose estimation is an interesting problem in its own right and has applications to robotics and autonomous driving (see, e.g., the works cited in our reply below). **For an equivariance theory paper, to make it more rigorous, people usually try to include the case of higher-order tensor features, not just trivial (zero-order) features. In Line 246, as far as the reviewer understands, the paper implementation only tests with zero-order features; the reviewer is curious about the performance when including higher-order features.** Apologies, this should read 'spherical', not 'trivial'. The 'spherical' representation of $SO(3)$ consists of one copy of each $SO(3)$ irreducible. This choice of irreducibles was also used in [12]. We have fixed this typo in the text. --- Rebuttal Comment 1.1: Comment: The authors answered my main questions. However, from a practical perspective of studying 3D vision, the single-view pose equivariance application is a little limited; I hope the authors will include a clearer clarification of the current application limitations and highlight the multi-view case. Although the application has limitations, I still believe the theory part is worth a publication, so I lean toward keeping my score. --- Reply to Comment 1.1.1: Title: Response to comment of reviewer Uxrh Comment: We acknowledge the reviewer's point. However, we would like to emphasize that single-view pose estimation is an interesting and applicable problem in itself. For example, single-view pose estimation is an integral task in autonomous driving (https://ieeexplore.ieee.org/document/6248074, https://arxiv.org/pdf/2211.11962.pdf), where one needs to build models of the car's environment from a single vantage point. Furthermore, pose prediction is an important task in robotic grasping (https://arxiv.org/abs/2202.03631, https://arxiv.org/abs/1809.10790). These problems are inherently single-view.
Another problem where single view estimation is important is cryoEM, where the goal is to disentangle intrinsic molecular degrees of freedom from unknown orientational degrees of freedom (https://pubmed.ncbi.nlm.nih.gov/33542510/). For this reason, we would prefer to keep the focus on the single view case.
Summary: This work tackles the challenging problem of achieving SO(3) equivariance over 2D projected images for pose estimation tasks. The authors propose a unified 2-step framework: an induction layer turns SO(2)-equivariant representations into a spherical signal, and an SO(3)-equivariant convolution then produces the final pose distribution prediction. They lay out a general mathematical derivation of SO(3) equivariance with restriction and induction representations that satisfies the universal property, which subsumes existing works as special cases. Comparable results have been achieved on 3 (one from supp.) well-adopted public benchmarks for pose estimation. Strengths: - A novel and universal theory has been proposed to achieve SO(3) equivariance over the challenging 2D image-based pose estimation task, which covers several previous works as special cases - The proposed framework is able to extend to the 6D pose estimation and monocular density reconstruction problems, whose impact is potentially significant for many other 2D-to-3D learning tasks - Compared to the main paper, the additional full derivation in the supp. is clear and detailed Weaknesses: - My major concern is the significance of the contribution; the authors might want to give a better clarification of the original contribution of the paper. - The claimed major contribution of the induced layer seems to mainly come from the [Image to Sphere (Klee et al.)](https://arxiv.org/abs/2302.13926) paper. - Given that we already have a lot of works tackling SO(2)-equivariant convolutions on image space and SE(3)-equivariant operations on 3D space, it seems a trivial contribution to get an SO(3)-equivariant framework for 2D images by simply connecting existing tools from the e2nn and e3nn packages. For example, one simple solution might be to first lift the 2D image using a pre-trained monocular depth model and then use SE(3)-equivariant convolutions for 6D pose estimation in 3D space, so we would not need this complex spherical-signal lifting and the follow-up neural operations. - The theory on equivariant 6D pose estimation and monocular density volume seems interesting, but it is put in the supp. without any experimental support. - Though we have this nice equivariance property, the results are just comparable to previous SOTA methods, and on ModelNet40 the method is obviously worse than the Klee et al. results, which does not seem to match the conclusion drawn in the main paper. - Sec 4.1 seems to be poorly written, and the logical flow connecting the 3 paragraphs is very confusing; for example, the part mapping F to S2 is missing, and I couldn't find which one is the exact induced representation. - The theory on the universal property in Sec. 5 should be put after Sec. 3, so that we can better understand the unique purpose of having both the H-equivariant restriction representation and the H-to-G induction representation. Also, all designs in Sec. 4 could be better supported theoretically before listing them out, and the completeness property should be put in the main paper instead of only in the supp - Line 114 should be $R^2 \to S^2$, since we are mapping features from a plane onto a sphere Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. It seems that S2-conv and SO(3)-conv are both spherical convolutions according to Sec. 6.2; I am wondering what the major difference between these two operations is. 2.
The authors mention that implicit functions seem to have a unique advantage in pose distribution estimation; can the proposed framework potentially be extended to support implicit function prediction in the near future? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for an especially detailed and thorough review. We provide a point-by-point response to the comments and questions below: $ \textcolor{red}{ \text{ Weaknesses: }} $ **The claimed major contribution of the induced layer seems to mainly come from the Image to Sphere (Klee et al.) paper** We would respectfully disagree with this statement. Although much of our work is heavily inspired by [12], the constraint that we formulate is a geometric statement that should be applicable to any architecture that learns consistent three-dimensional models of the world from two-dimensional images. This statement is geometric in nature and is not unique to [12]. **Given that we already have a lot of works tackling SO(2)-equivariant convolutions on image space and SE(3)-equivariant operations on 3D space, it seems a trivial contribution to get an SO(3)-equivariant framework for 2D images by simply connecting existing tools from the e2nn and e3nn packages. For example, one simple solution might be to first lift the 2D image using a pre-trained monocular depth model and then use SE(3)-equivariant convolutions for 6D pose estimation in 3D space, so we would not need this complex spherical-signal lifting and the follow-up neural operations.** We would emphatically disagree with the reviewer on this point. There is an active area of research that aims to learn three-dimensional models from images. The problem of ‘stitching’ together SO(2)-equivariant convolutions and SO(3)-equivariant convolutions is not trivial. We would ask the reviewer to find a previous paper which observes that the SO(2) $\subseteq$ SO(3) subgroups need to ‘align’, which naturally means that the map between two-dimensional and three-dimensional features must be an intertwiner of restricted/induced representations. In addition, the ‘lifting’ procedure proposed by the reviewer can be dependent on the camera model. For example, for some computer vision tasks, such as cryoEM, the image formation model involves Radon transformations, which is drastically different from the pinhole camera model that is applicable to telephoto lenses. Our proposed construction allows the camera model to be learned instead of assumed. The only constraint imposed is geometric consistency. **The theory on equivariant 6D pose estimation and monocular density volume seems interesting, but it is put in the supp. without any experimental support.** We did not consider 6D pose estimation in detail because the benchmarks that we consider only measure orientation estimation. We agree that this is a natural continuation of our work, and we believe that our work on orientation estimation is a promising first step in this direction. **Though we have this nice equivariance property, the results are just comparable to previous SOTA methods, and on ModelNet40 the method is obviously worse than the Klee et al. results, which does not seem to match the conclusion drawn in the main paper.** The performance of our model on PASCAL3D+ and SYMSOL I is the current state of the art. Using a residual connection, we can achieve better results on the ModelNet-SO(3) dataset. We address this point in more detail in the attached pdf. **Sec 4.1 seems to be poorly written, and the logical flow connecting the 3 paragraphs is very confusing; for example, the part mapping F to S2 is missing, and I couldn't find which one is the exact induced representation** We have added to section 4.1 and section E. Please see our response to reviewer mVf3.
**The theory on the universal property in Sec. 5 should be put after Sec. 3, so that we can better understand the unique purpose of having both the H-equivariant restriction representation and the H-to-G induction representation. Also, all designs in Sec. 4 could be better supported theoretically before listing them out, and the completeness property should be put in the main paper instead of only in the supp** We cannot fit the full derivation of the completeness property in the main text. We will include a statement of the completeness property in the main text and then refer the reader to the proof in the appendix. **Line 114 should be $R^2 \to S^2$, since we are mapping features from a plane onto a sphere** We are interested in mapping the space of functions $f: \mathbb{R}^{2} \rightarrow \mathbb{R}^{d}$ into the space of functions $g : S^{2} \rightarrow \mathbb{R}^{d'}$. We have changed line 114 to "For convenience we ignore discretization and treat the feature maps as having continuous inputs $f : \mathbb{R}^{2} \rightarrow \mathbb{R}^{d}$. To leverage spatial symmetries in 3D, we would like to map features $f$ into the space of features defined on a sphere: $g : S^{2} \rightarrow \mathbb{R}^{d'}$. We thus need to consider the space of maps between these two function spaces." $ \textcolor{red}{ \text{Questions: }} $ **It seems that S2-conv and SO(3)-conv are both spherical convolutions according to Sec. 6.2; I am wondering what the major difference between these two operations is.** Spherical convolutions and SO(3) convolutions are distinct operations. Spherical convolution (Cohen et al., https://arxiv.org/abs/1801.10130) is an operation that convolves two signals defined on $S^{2}$ and returns a signal defined on SO(3). SO(3) convolution is an operation that takes as inputs two signals defined on the group SO(3), and its output is a signal defined on SO(3). In the appendix, we include an ablation study that compares mapping directly to the sphere and performing spherical convolutions vs. mapping directly to SO(3). **The authors mention that implicit functions seem to have a unique advantage in pose distribution estimation; can the proposed framework potentially be extended to support implicit function prediction in the near future?** Yes, it is possible to generalize our work to implicit function prediction. This observation was raised by a few of the reviewers, and we address this point in the attached pdf. --- Rebuttal Comment 1.1: Title: Referenced Citation Comment: The referenced citation should be: 1. Spherical convolution: https://arxiv.org/pdf/1801.10130.pdf --- Rebuttal Comment 1.2: Comment: Thanks for the authors' rebuttal; my concerns on the significance and writing are moderately addressed. The updated results on ModelNet-40 look interesting to me; does that mean general improvements to the architecture actually matter more for improving performance? Given the potential impact of such a work on several downstream tasks, I am also wondering about the efficiency of the proposed method: does the performance boost from introducing different equivariant convolutions also sacrifice a lot of memory and inference speed compared to Klee et al.? --- Reply to Comment 1.2.1: Title: Response to comment of reviewer beVj Comment: We thank the reviewer for the additional comments on our manuscript. We are a bit unsure what the sentence **does that mean general improvements to the architecture actually matter more for improving performance?** means. Could the reviewer please rephrase the question?
If the reviewer is asking whether there are ways to combine existing neural network methods with the geometric constraint derived in our text, the answer is most likely yes. We would expect that combining more advanced vision architectures, such as feature pyramid networks (c.f. https://arxiv.org/pdf/1612.03144.pdf) or vision-transformer-based methods (https://arxiv.org/abs/2103.14030), with induced/restricted equivariance constraints would lead to improved performance, although this is outside the scope of this work. **Given the potential impact of such a work on several downstream tasks,** We are pleased that the reviewer recognizes the importance of designing equivariant neural networks for monocular vision tasks. **I am also wondering about the efficiency of the proposed method: does the performance boost from introducing different equivariant convolutions also sacrifice a lot of memory and inference speed compared to Klee et al.?** Our proposed method incurs a slight overhead relative to Klee [5] in terms of memory and inference speed. Our proposed layer is implemented efficiently by a matrix multiplication and a pooling operation, which can be done in roughly the same time as the orthogonal projection in [5]. Naively, one would think that making the projection of [5] learnable adds many additional free parameters. However, the equivariance constraints drastically limit the number of allowed parameters. Specifically, for the PASCAL3D+ architecture, [5] has roughly 42.663M trainable parameters (including the backbone), and our proposed method has roughly 52.222M trainable parameters (including the backbone). This is a ~20% increase in the number of trainable parameters.
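As a side note, the trainable-parameter counts quoted above can be reproduced with a standard one-liner; this is generic PyTorch, not code specific to either method, and the model names in the comment are hypothetical.

```python
import torch.nn as nn

def count_trainable(model: nn.Module) -> int:
    # Sum of element counts over parameters that receive gradients.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Relative overhead, e.g. count_trainable(ours) / count_trainable(baseline) - 1.0
```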
Summary: The paper studies learning equivariant SO(3) representations from 2D images. The authors first propose a generalized theory that gives an SO(2)-equivariance constraint for mapping images to spherical signals. They then propose a construction, called the induction layer, to implement the equivariance. The proposed method outperforms relevant baselines on pose estimation on SYMSOL and PASCAL3D+. Strengths: The paper studies an important problem in 3D vision (equivariant SO(3) from 2D input) and shows its empirical advantage in 3D pose estimation. This has a potential impact on lots of downstream tasks. They provide a unified theory that shows previous works are their special cases. The empirical results are convincing, especially the results with 10% training data. Weaknesses: 1. It is not clear how to construct the induction layer from the paper. More details may be provided on how to implement the layer, e.g. how to solve equation 2. 2. The experiments are object-centric 6D pose estimation. I wonder if the method can generalize to cluttered scenes like those in the BOP challenges (https://bop.felk.cvut.cz/challenges/) Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see weakness section Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The limitation is discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the feedback. Regarding your questions about our work, here is our response: $\textcolor{red}{ \text{Weaknesses: }} $ **It is not clear how to construct the induction layer from the paper. More details may be provided on how to implement the layer, e.g. how to solve equation 2** We have added additional details on the form of the solution to the geometric constraint and how this is implemented in PyTorch. We would also like to mention that, if reading code is more helpful than reading mathematics, our code is attached in the original submission. We have tried to make the code as user-friendly as possible. We have expanded line 244 in the main text: "The filters in the induction layer were instantiated using the e2nn [39] package." By Theorem 2 in the main text, we can expand any linear map $\Phi$ that satisfies the geometric constraint in equation 1 (line 143) as $\Phi(f)(\hat{n}) = \int_{r \in \mathbb{R}^{2}} dr \, \kappa(\hat{n}, r) f(r)$, where the kernel can be written as $\kappa(\hat{n}, r) = \sum_{\ell=0}^{\infty} F_{\ell}(r)^{T} Y_{\ell}(\hat{n})$ and each $F_{\ell}(r)$ is an $SO(2)$-steerable kernel that depends on the chosen input and output representations (which are user inputs). The terms $F_{\ell}(r)$ can be instantiated using the e2nn [39] package. Using the definition of $\kappa$, the decomposition of $\Phi(f)$ in terms of spherical harmonics is given by $\Phi(f)(\hat{n}) = \int_{r \in \mathbb{R}^{2}} dr \, \kappa(\hat{n}, r) f(r) = \int_{r \in \mathbb{R}^{2}} dr \, \big[ \sum_{\ell=0}^{\infty} F_{\ell}^{T}(r) Y_{\ell}(\hat{n}) \big] f(r) = \sum_{\ell=0}^{\infty} \big[ \int_{r \in \mathbb{R}^{2}} dr \, F_{\ell}^{T}(r) f(r) \big] Y_{\ell}(\hat{n})$. Thus, the $\ell$-th spherical harmonic coefficient of $\Phi(f)$ is given by $\Phi_{\ell}(f) = \int_{r \in \mathbb{R}^{2}} dr \, F_{\ell}^{T}(r) f(r)$. This can be computed as a tensor contraction. The inputs to the spherical convolution are then the set of coefficients $\Phi_{\ell}(f)$. Spherical convolutions are performed with the e3nn [44] package. **The experiments are object-centric 6D pose estimation. I wonder if the method can generalize to cluttered scenes like those in the BOP challenges** We thank the reviewer for bringing this benchmark to our attention. In its current form, our method would be unable to deal with pose estimation for multiple objects. Most equivariant methods have a similar problem, as they are equivariant with respect to global transformations. Specifically, there is no natural way to rotate a single object in a scene without rotating the whole scene. One natural approach to pose estimation in cluttered scenes would be to first segment objects and then apply pose estimation to each object. This is an interesting problem and will be left for future work.
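For readers who prefer code to the integrals above, the induction layer reduces to the tensor contraction $\Phi_{\ell}(f) = \sum_{r} F_{\ell}^{T}(r) f(r)$ on a discretized grid. The sketch below assumes the steerable filters $F_{\ell}$ have already been instantiated (e.g. via a steerable-CNN library); the shapes and names are illustrative, not the authors' actual implementation.

```python
import torch

def induction_layer(f: torch.Tensor, filters: list) -> list:
    """f: (C, H, W) planar feature map; filters[l]: (C, 2l+1, H, W)
    steerable kernel F_l for degree l. Returns the spherical harmonic
    coefficients Phi_l(f), one (2l+1,)-vector per degree l."""
    return [torch.einsum('chw,cmhw->m', f, F_l) for F_l in filters]

# Toy usage with random filters up to degree l_max = 2:
f = torch.randn(16, 32, 32)
filters = [torch.randn(16, 2 * l + 1, 32, 32) for l in range(3)]
coeffs = induction_layer(f, filters)  # shapes: (1,), (3,), (5,)
```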
Rebuttal 1: Rebuttal: We thank the reviewers for their comments on our manuscript. We are pleased that the majority of reviewers recognize that our theory provides a unifying perspective for a set of relevant computer vision tasks: - Reviewer Nd1u: "the paper proposes a unifying theory that encompasses previously proposed architectures" - Reviewer mVf3: "The paper studies an important problem in 3D vision (equivariant SO(3) from 2D input) and shows its empirical advantage in 3D pose estimation. This has a potential impact on lots of downstream tasks." - Reviewer beVj: "A novel and universal theory has been proposed to achieve SO(3) equivariance over the challenging 2D image-based pose estimation task, which covers several previous works as special cases" - Reviewer Uxrh: "The paper proposes a theoretical framework to study group equivariance under projection, which is important when studying 3D-reconstruction-related tasks" We recognize the criticism of the reviewers with regard to clarity of presentation, numerical performance on benchmarks, and the dearth of ablation studies. In order to address these concerns, we have added an additional description of the implementation (found in section 4.2) of our proposed method. In the attached pdf, we include tables for additional numerical experiments. Specifically, - We include an additional experiment on the ModelNet-SO(3) dataset [33]. A major concern of many of the reviewers was that the performance of our architecture was worse than [12] on ModelNet-SO(3). In some ways, this may be expected, as [12] assumes that the correct image formation model is an orthographic projection, which is the true image formation model used in the data generation of the ModelNet-SO(3) dataset. Our architecture needs to learn the correct image formation model. By including additional biases about the image formation model, we can achieve state-of-the-art results on the ModelNet-SO(3) dataset. We added a residual connection to our induction/restriction layer that is an orthographic projection. This reflects the assumption that for ModelNet-SO(3), the true image formation model is close to orthographic projection, which is common for pinhole camera models [2]. With this additional bias, our model achieves SOTA when averaged over each ModelNet-SO(3) category. These results are shown in the attached pdf in Table 10. - We include an ablation study where we replace our construction with a linear layer. We replaced the induction/restriction layer with a linear layer and trained on the SYMSOL I dataset [14]. We chose the SYMSOL dataset as it consists of rotated solids, and any model that performs well on SYMSOL should be approximately equivariant. We choose the spherical layer to have fibers transforming in the $\rho_{\text{spherical}} = \bigoplus_{\ell=0}^{6} D^{\ell}$ SO(3)-representation. Post-training, we tested the SO(2)-equivariance properties of the output spherical layer and found a relative error of about 18%, with the output SO(2)-representation approximately $\mathrm{Res}_{SO(2)}^{SO(3)}(\rho_{\text{spherical}})$. This simple numerical experiment shows that the trained linear layer approximately satisfies the geometric constraint derived in the main text. These results are shown in Table 12. - A concern of the reviewers was the performance of our architecture on the SYMSOL II dataset. This phenomenon is expected and was also observed in [12]. Unlike SYMSOL I, on the SYMSOL II dataset a different model is trained on each class independently.
Thus, the SYMSOL I task is more concerned with learning, while the SYMSOL II task is more concerned with representational power. The method proposed in [14] has significantly more parameters than both our method and [12]. Generalization of our proposed framework to implicit models is an interesting direction for future work. These results are shown in Table 11. - We include an ablation study which compares $S^{2}$ and $SO(3)$ convolutions. We rerun the experiments in the main text using an induction layer that maps images directly to SO(3). The direct induction to SO(3) slightly underperforms the induction to $S^{2}$ on the PASCAL3D+ dataset. We believe that this adds some depth to sections E and F. These results are shown in Table 13. We addressed specific comments directly in the response section. Minor: There is a mistake in the appendix in section J.4. Specifically, the constraints imposed by restricted/induced representations are conditions on the intertwiners between layers, not on the content of irreducibles in each layer. Lines 836–845 should be deleted and replaced with: "Specifically, the linear map between boundary layers must satisfy $\Psi \in \mathrm{Hom}_{H}\big( (\rho_{i}, \mathcal{F}_{i}^{H}), \, \mathrm{Res}_{H}^{G}(\sigma_{1}, \mathcal{F}_{1}^{G}) \big) \cong \mathrm{Hom}_{G}\big( \mathrm{Ind}_{H}^{G}(\rho_{i}, \mathcal{F}_{i}^{H}), \, (\sigma_{1}, \mathcal{F}_{1}^{G}) \big)$. Specifically, if $(\rho_{i}, \mathcal{F}_{i}^{H})$ and $(\sigma_{1}, \mathcal{F}_{1}^{G})$ decompose into irreducibles as $(\rho_{i}, \mathcal{F}_{i}^{H}) = \bigoplus_{\rho \in \hat{H}} m_{i\rho} \, (\rho, V_{\rho}), \quad (\sigma_{1}, \mathcal{F}_{1}^{G}) = \bigoplus_{\sigma \in \hat{G}} n_{1\sigma} \, (\sigma, W_{\sigma})$, then we can write the induced and restricted representations in terms of the branching and induction rules as $\mathrm{Res}_{H}^{G}\big( (\sigma_{1}, \mathcal{F}_{1}^{G}) \big) = \bigoplus_{\rho \in \hat{H}} \Big( \big( \textstyle\sum_{\sigma \in \hat{G}} n_{1\sigma} B_{\sigma\rho} \big) \, (\rho, V_{\rho}) \Big), \quad \mathrm{Ind}_{H}^{G}\big( (\rho_{i}, \mathcal{F}_{i}^{H}) \big) = \bigoplus_{\sigma \in \hat{G}} \Big( \big( \textstyle\sum_{\rho \in \hat{H}} m_{i\rho} I_{\rho\sigma} \big) \, (\sigma, W_{\sigma}) \Big)$, and intertwiners $\Psi$ can be computed by considering the decomposition into irreducibles." Pdf: /pdf/122148fc53b56c615aa4727cc5c368e1a70fe5c6.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper focuses on the task of 3D pose prediction. The paper proposes a set of consistency constraints for learning 3D representations and shows that a few of the previously proposed neural architectures follow these constraints. Further, using these constraints, the authors propose a new architecture with an induction layer that maps feature maps to a feature sphere. Using this new architecture, they show that their method can achieve state-of-the-art performance on the PASCAL3D+ and SYMSOL II datasets. Strengths: -> the paper proposes a unifying theory that encompasses previously proposed architectures -> the paper achieves strong performance on PASCAL3D+ and SYMSOL II -> the paper is well written and described Weaknesses: -> I couldn't find any ablations/analysis in the paper -> No results on ModelNet, which has been included in previous papers -> The differences between baselines in terms of architecture/pre-training/losses used have not been made very clear Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: -> How does the method compare on datasets such as ModelNet or CO3D? I would be interested in seeing comparisons against recent methods like these: https://arxiv.org/abs/2208.05963 -> Do baselines use the same backbone/pre-training? -> What happens if you drop the induction layer while still modelling the uncertainty? -> How does increasing/decreasing the training data affect the results? (currently only 10%) -> Why are the results on SYMSOL II poor compared to the baseline? Anything specific about SYMSOL II? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the useful feedback. Please find our point-by-point response below. $\textcolor{red}{ \text{ Weaknesses: }} $ **I couldn't find any ablations/analysis in the paper** We have added two ablation studies. The first compares an architecture that uses a spherical convolution followed by an SO(3) convolution against an architecture that utilizes just SO(3) convolutions. We have also added an ablation study that compares a naive linear layer with our proposed layer, comparing both the performance and the equivariance error of the two layers. These results are included in the attached pdf. **No results on ModelNet which has been included in previous papers** We have included results on ModelNet-SO(3) in the appendix. Other reviewers pointed out that the performance of our architecture was not SOTA on ModelNet-SO(3). Using a residual connection, we can achieve better results on the ModelNet-SO(3) dataset. We address this point in more detail in the attached pdf. **The differences between baselines in terms of architecture/pre-training/losses used have not been made very clear** In order to ensure that the reader understands our work, we have added additional clarifications to the main paper. We used the same backbone (ResNet-50 for SYMSOL and ModelNet, ResNet-101 for PASCAL3D+) and pre-training (ImageNet-1K) as the baselines. The model was trained using a cross-entropy loss on SO(3), which was also used in [14]. $ \textcolor{red}{ \text{ Questions: }} $ **How does the method compare on datasets such as ModelNet or CO3D?** As mentioned above, we have included the results on ModelNet in the appendix. These results are also commented on in the attached pdf. We did not have time to evaluate our method on the CO3D dataset, but we agree this would be an interesting experiment. **Do baselines use the same backbone/pre-training?** Yes. In order to make a fair comparison, we used the same backbone (ResNet-50 for SYMSOL and ModelNet, ResNet-101 for PASCAL3D+) and pre-training (ImageNet-1K) as the baselines. Interestingly, we are able to achieve very competitive results on PASCAL3D+ even when using the ResNet-50 backbone. This is somewhat surprising, as the ResNet-50 backbone does not have the same representational power as the ResNet-101. **What happens if you drop the induction layer while still modelling the uncertainty?** We are a bit unsure about what the reviewer means by this question. The uncertainty is important for modeling objects with intrinsic symmetries. Many of the existing baselines use models which model uncertainty in some way; see for example [14]. In the attached pdf, we replaced the induction/restriction layer with a linear layer and compared performance. Does this address the stated question? **How does increasing/decreasing the training data affect the results? (currently only 10 percent)** In general, equivariant methods perform better than non-equivariant methods in the low-data regime. This is because equivariant methods can generalize to unseen data if said data is related to previously seen data by a symmetry transformation. In the regime of very large amounts of data, equivariant and non-equivariant methods tend to have similar performance [4]. **Why are the results on SYMSOL II poor compared to the baseline? Anything specific about SYMSOL II?** This question was asked by multiple reviewers.
Unlike SYMSOL I, where a single model is trained on all classes, in SYMSOL II a separate model is trained on each distinct class. We address this point further in the attached pdf.
null
null
null
null
null
null
Few-shot Generation via Recalling Brain-Inspired Episodic-Semantic Memory
Accept (poster)
Summary: In this article the authors propose to augment state-of-the-art few-shot generative models with a memory module inspired by the brain. The memory module combines episodic and semantic memory. The authors demonstrate, through extensive experiments, the benefits of their module for few-shot generation: whether taken separately or combined, each type of memory improves the generation performance. Strengths: The claims of the article are well supported by extensive experiments. The experiments involve varied and diverse datasets, and the baselines are well chosen. The memory module is well described in Fig. 2. Weaknesses: The writing of the paper makes it unpleasant to read (despite interesting results) and should be improved: some sentences simply have no meaning! I suggest the authors take the time for an in-detail re-reading to improve the writing (I have reported just a few typos, but there are plenty of them!). As a general comment, the presented article seems to be scientifically correct, but I quickly lost track of the message because the reading flow was constantly interrupted by the poor writing! Really sad! Technical Quality: 3 good Clarity: 1 poor Questions for Authors: * In the experimental section (section 5), the authors compare the CNS (and the SCHA-VAE) with their model with different types of memory. This comparison is very interesting, but have you controlled for the number of parameters? To my understanding, there are more parameters in the VSM-CNS (and in the VSM-SCHA) than in the CNS (and SCHA), which makes the comparison not as informative as it should be. Would it be possible to find a way to equate the number of parameters in both types of networks? * Concerning the writing: some sentences are just not properly formed. Sometimes a noun is used instead of a verb, which makes the reading really hard. For example: "which can simultaneously store both episodic and semantic memories to assistant existing generative models efficiently recall memories during generation. » (Line 8-9). In the core text of the article these kinds of examples are numerous! Tenses are also misused, and typos are all over the article! Please re-read. Here are a few examples: Line 24-25 : A word is missing ?? \ Line 113 : Eq 9 —> Eq 5 \ Line 135 : TheMechanism —> The Mechanism \ Line 190 : initialed —> initialized \ ... Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: The authors have correctly addressed the limitation part Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive comments and suggestions, which will help us further improve the quality of our paper. Following your comments, we have carefully fixed the mentioned typos and also rewritten several parts of our paper to improve the reading experience. Q1: Concern about the model's parameters. A1: Yes, it is correct that introducing the proposed variational structured memory module brings additional model parameters, so the parameter counts of VSM-CNS/VSM-SCHA are slightly larger than those of CNS/SCHA, respectively. However, we would like to highlight that the number of parameters added by the proposed variational structured memory is small compared to the parameter counts of the original baselines, as listed in the following tables.

| Models | Parameters (M) | Rate |
| ----- | ------- | ------- |
| CNS | 7.34 | 0% |
| CNS + Semantic | 7.39 | +0.68% |
| CNS + Episodic | 7.38 | +0.54% |
| VSM-CNS | 7.44 | +1.36% |

| Models | Parameters (M) | Rate |
| ----- | ------- | ------- |
| SCHA | 5.37 | 0% |
| SCHA + Semantic | 5.60 | +4.28% |
| SCHA + Episodic | 5.55 | +3.35% |
| VSM-SCHA | 5.77 | +7.44% |

| Models | Parameters (M) | Rate |
| ----- | ------- | ------- |
| sDDPM | 34.0 | 0% |
| vDDPM | 35.5 | +4.41% |
| vFSDM | 34.8 | +2.35% |
| VSM-Diffusion | 35.6 | +4.70% |

Q2: Concern about the writing. A2: Thanks for your valuable suggestions. Following your comments, we have carefully revised our paper. For Line 8-9: we have modified it to ``Inspired by the memory mechanism of the human brain, in this work we carefully design a variational structured memory module (VSM), which can simultaneously store both episodic and semantic memories to assist existing generative models in efficiently recalling these memories during sample generation``. For Line 24-25 (a word is missing): we have modified it to ``While there has been promising progress in few-shot adaptation for classification tasks, less work has been done on few-shot generation [7–10], mainly due to the challenging nature of learning the generative process with only a few samples in an unsupervised manner [11, 12]``. For Eq. 9 -> Eq. 5: thanks for the notification; we have deleted Eq. 9 from the manuscript. For typos: we have fixed the listed typos and carefully revised our paper. We appreciate your thoughtful comments on our paper once more, and we look forward to having more conversations with you. --- Rebuttal Comment 1.1: Title: Engage in the discussion with the authors Comment: Dear Reviewer, The authors have provided responses to your questions and concerns. Could you please read their responses and ask any follow-up questions, if any? Thank you! --- Rebuttal Comment 1.2: Title: Response to authors Comment: Sorry for the late feedback, and thanks to the authors for the detailed answers. In general, the authors have addressed my main concerns; I have increased my rating from 6 to 7.
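As a side note on how parameter counts like those in the tables above are typically obtained, a generic PyTorch sketch follows (not the authors' actual script; `base_model` and `vsm_model` in the comment are hypothetical names):

```python
import torch.nn as nn

def count_parameters_m(model: nn.Module) -> float:
    """Total trainable parameters of a model, in millions (M)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

# Relative overhead of a memory-augmented variant over its baseline:
# rate = (count_parameters_m(vsm_model) / count_parameters_m(base_model) - 1) * 100
```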
Summary: This paper introduces a novel approach to few-shot generation inspired by the human memory mechanism. The authors propose a Variational Structured Memory module (VSM) to store and recall episodic and semantic memories. They also introduce a bionic memory updating strategy to model the conversion between these memory types. The effectiveness of this approach is demonstrated through its integration with various existing generative models and evaluation on few-shot generation tasks. Strengths: **Originality**: The paper presents a highly original approach by mimicking the human memory mechanism in the context of few-shot generation. The introduction of the Variational Structured Memory module (VSM) and the bionic memory updating strategy represents a creative combination of cognitive science principles with AI models. **Quality**: The paper is of high quality, reflected in the meticulous design of the VSM and its integration into existing generative models. The execution of the bionic memory updating strategy and its evaluation through few-shot generation tasks further attest to the paper's quality. **Clarity**: The paper is well-written. The authors have clearly explained their inspiration, the design of the VSM, and the results of their evaluation, making the paper accessible to readers of varying familiarity with the subject matter. **Significance**: The significance of this paper is considerable. By demonstrating a novel approach to few-shot generation and opening up new directions for integrating memory mechanisms into generative models, this work contributes valuable insights to both the cognitive science and artificial intelligence fields. Weaknesses: **Computation Efficiency**: Although the integration of episodic and semantic memory for few-shot generation is intriguing, the use of memory modules inherently increases the computational overhead in terms of extra storage space and prediction time. It would be valuable if the paper could address and analyze these aspects of computational efficiency. **Acronym Overlap**: The use of the acronym VSM for Variational Structured Memory might cause confusion, as it has previously been used in the same context in a paper by Zhen et al. [18]. It would be beneficial to consider using a different acronym to avoid any potential confusion for readers. **Missing Reference and Cross-Domain Adaptation**: The paper has overlooked a reference to the Hierarchical Variational Memory model (HVM) [1]. I would be happy if the authors could discuss whether their approach could be applied to utilize memories from different neural network layers for achieving cross-domain few-shot generation. This discussion could provide additional depth and broaden the applicability of the method. [1] Du et al. "Hierarchical variational memory for few-shot learning across domains." ICLR 22. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) Can the authors elaborate on how the parameter $\alpha$ impacts the model's performance? (2) Is it feasible to apply your method to discriminative models, such as for few-shot classification tasks? Expanding the discussion to these applications would provide a broader view of your method's potential. (3) Could the authors provide more detailed information regarding the additional model parameters introduced by your method compared to the baseline? Additionally, how does the inference time compare to the baseline?
This information could provide more clarity regarding the computational efficiency of your method. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your constructive comments and suggestions, which are helpful for us to further improve the quality of our paper. The concerns are addressed below. W1: Concern about computational efficiency. A1: Thanks; we answer this question in our response to Q3. W2: Acronym overlap. A2: Thanks for your valuable suggestion. We will update the acronym of our developed method from VSM to VSMM (Variational Structured Memory Module) in the revision. W3: Missing reference and cross-domain adaptation. A3: Thanks for bringing this excellent work to our attention; we will cite it and discuss its relevance to our work in the revision. For cross-domain adaptation, we believe that our memory-augmented method can be naturally extended to the few-shot cross-domain generation task, and the most straightforward solution could be simply replacing the generative model for few-shot generation with one for few-shot cross-domain generation as the backbone. We are willing to explore this promising direction in the future. Q1: Can the authors elaborate on how the parameter $\alpha$ impacts the model's performance? A1: Thanks for your suggestion. As shown in the following table, we have conducted an experiment to elaborate on the impact of the hyper-parameter $\alpha$, and the experimental results show that the setting of $\alpha$ does not heavily influence the performance of our method.

| Model | $\alpha$ | Omniglot | | Double-MNIST | | MNIST | |
| -------- | -------- | -------- | ---- | ------------ | ---- | ----- | ---- |
| | | ELBO | NLL | ELBO | NLL | ELBO | NLL |
| VSM-CNS | 0.1 | 83.1 | 69.5 | 49.3 | 42.7 | 93.2 | 77.6 |
| VSM-CNS | 0.5 | 83.2 | 69.9 | 48.1 | 42.3 | 93.5 | 77.8 |
| VSM-CNS | 0.7 | 81.1 | 68.2 | 47.8 | 40.6 | 92.9 | 77.0 |
| VSM-CNS | 0.9 | 84.1 | 70.8 | 48.7 | 42.6 | 93.9 | 78.5 |
| VSM-SCHA | 0.1 | 74.6 | 57.0 | 42.4 | 35.6 | 89.7 | 68.1 |
| VSM-SCHA | 0.5 | 73.4 | 56.2 | 42.2 | 35.3 | 88.4 | 67.1 |
| VSM-SCHA | 0.7 | 74.0 | 56.8 | 42.4 | 36.5 | 89.9 | 68.2 |
| VSM-SCHA | 0.9 | 74.8 | 57.5 | 43.8 | 37.9 | 90.1 | 69.7 |

Q2: Is it feasible to apply your method to discriminative models, such as for few-shot classification tasks? Expanding the discussion to these applications would provide a broader view of your method's potential. A2: Thanks for your insightful suggestion! Actually, as you mentioned, the proposed variational structured memory module can be naturally extended from few-shot generation to other few-shot discrimination tasks by simply replacing the generative models with discriminative ones. We believe that our method can also improve the performance of few-shot classification tasks, and we will include this discussion of the method's potential in our revision. Thanks again for your reminder. Q3: Could the authors provide more detailed information regarding the additional model parameters introduced by your method compared to the baseline? Additionally, how does the inference time compare to the baseline? This information could provide more clarity regarding the computational efficiency of your method. A3: Thanks for your suggestion. We have listed the additional model parameters introduced by our method in the following tables.
| Models | Parameters (M) | Rate |
| ----- | ------- | ------- |
| CNS | 7.34 | 0% |
| CNS + Semantic | 7.39 | +0.68% |
| CNS + Episodic | 7.38 | +0.54% |
| VSM-CNS | 7.44 | +1.36% |

| Models | Parameters (M) | Rate |
| ----- | ------- | ------- |
| SCHA | 5.37 | 0% |
| SCHA + Semantic | 5.60 | +4.28% |
| SCHA + Episodic | 5.55 | +3.35% |
| VSM-SCHA | 5.77 | +7.44% |

| Models | Parameters (M) | Rate |
| ----- | ------- | ------- |
| sDDPM | 34.0 | 0% |
| vDDPM | 35.5 | +4.41% |
| vFSDM | 34.8 | +2.35% |
| VSM-Diffusion | 35.6 | +4.70% |

We have also provided the additional inference time caused by introducing our method in the following tables.

| Models | Inference time (s), batch size 32 | Δ (s) |
| ----- | ------- | ------- |
| CNS | 2.77 | 0 |
| CNS + Semantic | 2.81 | +0.14 |
| CNS + Episodic | 3.37 | +0.60 |
| VSM-CNS | 3.09 | +0.32 |

| Models | Inference time (s), batch size 32 | Δ (s) |
| ----- | ------- | ------- |
| SCHA-VAE | 4.17 | 0 |
| SCHA + Semantic | 4.32 | +0.15 |
| SCHA + Episodic | 6.47 | +2.30 |
| VSM-SCHA | 4.50 | +0.33 |

| Models | Inference time (s), batch size 32 | Δ (s) |
| ----- | ------- | ------- |
| sDDPM | 98.52 | 0 |
| vDDPM | 92.23 | -5.67 |
| vFSDM | 103.98 | +5.46 |
| VSM-Diffusion | 107.59 | +9.07 |

We appreciate your insightful suggestions on how to improve our paper once more, and we look forward to having more conversations with you about this. --- Rebuttal Comment 1.1: Comment: I thank the authors for their careful responses, which addressed most of my concerns. I hope the authors include the mentioned experiments and the acronym change in the modified version. I opt to accept this paper. --- Reply to Comment 1.1.1: Comment: Thanks for your time and valuable comments; we will continue this direction of research. Best wishes.
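For context, wall-clock inference numbers like those in the tables above are usually measured along the following lines (a generic sketch under assumed names, not the authors' benchmarking code; note that GPU timing requires a synchronize before reading the clock):

```python
import time
import torch

@torch.no_grad()
def mean_inference_time(model, batch, n_warmup=3, n_runs=10):
    """Average wall-clock time of a forward pass over n_runs trials."""
    model.eval()
    for _ in range(n_warmup):      # warm-up runs are excluded from timing
        model(batch)
    if torch.cuda.is_available():
        torch.cuda.synchronize()   # flush pending GPU work before timing
    start = time.perf_counter()
    for _ in range(n_runs):
        model(batch)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_runs
```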
Summary: The paper proposes a variational structured memory module (VSM) that can be combined with both VAE- and diffusion-type models to improve their performance on few-shot generation tasks. The memory structure is both hierarchical and organized into episodic and semantic components, so that the model can learn to retain detailed experiences as well as higher-level general information useful for the task. Strengths: - VSM is technically sound and builds on a hierarchical conditioning of latent variables and an attention structure to retrieve information at inference time. - The experiments show that VSM can be combined with existing few-shot generation models (both VAE- and diffusion-type) to improve their performance, making it a widely applicable, modular addition. - An ablation study shows the efficacy of the memory types and the update strategy. Weaknesses: - The computational overhead of applying VSM is not discussed. In particular, VSM-VAE contains many autoregressive conditionals that would have to be evaluated in sequence. How does the inference runtime compare to not applying VSM? - How sensitive is the optimization procedure of VSM to hyperparameters in comparison to not using VSM? Intuitively, the addition of ELBO terms for the memory components could make optimization trickier. Is this observed in practice? How do you deal with this added complexity? Technical Quality: 3 good Clarity: 3 good Questions for Authors: - It would be interesting to see whether the approach also scales to more challenging datasets such as ImageNet. Are the limitations purely computational, or would you expect other challenges? - Why is a top-K retrieval strategy used when attention could be applied over all available elements instead? Based on the hyperparameters in the supplementary, the size of the memory (even including episodic memory) does not seem prohibitively large. How would the results change without retrieval, or as you change K? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - How well does the method scale to larger model sizes and/or dataset sizes? - The influence of the number of few-shot examples is not discussed. What are the challenges as the number of examples decreases/increases? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive comments and suggestions, which are helpful for us to further improve the quality of our paper. The concerns are addressed below. **W1**: Concern about the computational overhead of applying VSM. **A1**: Thanks for your constructive suggestion. We have provided the additional inference time caused by introducing VSM in the following tables. From the results, we can see that the developed VSM-based methods add no more than 10% to the inference time compared to the baselines without VSM, which is quite modest in our understanding.

| Models | Inference time (s), batch size 32 | Δ (s) |
| ----- | ------- | ------- |
| CNS | 2.77 | 0 |
| CNS + Semantic | 2.81 | +0.14 |
| CNS + Episodic | 3.37 | +0.60 |
| VSM-CNS | 3.09 | +0.32 |

| Models | Inference time (s), batch size 32 | Δ (s) |
| ----- | ------- | ------- |
| SCHA-VAE | 4.17 | 0 |
| SCHA + Semantic | 4.32 | +0.15 |
| SCHA + Episodic | 6.47 | +2.30 |
| VSM-SCHA | 4.50 | +0.33 |

| Models | Inference time (s), batch size 32 | Δ (s) |
| ----- | ------- | ------- |
| sDDPM | 98.52 | 0 |
| vDDPM | 92.23 | -5.67 |
| vFSDM | 103.98 | +5.46 |
| VSM-Diffusion | 107.59 | +9.07 |

**W2**: Concern about hyperparameter sensitivity in comparison to not using VSM. Could the addition of ELBO terms for the memory components make optimization trickier? Is this observed in practice? How do you deal with this added complexity? **A2**: The optimization of VSM-based models is stable, and we did not face any challenges in practice. Moreover, the selection of hyperparameters does not heavily influence the performance of our method. For the hyper-parameter $\alpha$ in the memory update mechanism, we have conducted an experiment to elaborate on its impact, which, as shown in the following table, does not influence the model performance much.

| Model | $\alpha$ | Omniglot | | Double-MNIST | | MNIST | |
| ------ | ------- | ------ | ------- | ------ | ------- | ------ | ------ |
| | | ELBO | NLL | ELBO | NLL | ELBO | NLL |
| VSM-CNS | 0.1 | 83.1 | 69.5 | 49.3 | 42.7 | 93.2 | 77.6 |
| VSM-CNS | 0.5 | 83.2 | 69.9 | 48.1 | 42.3 | 93.5 | 77.8 |
| VSM-CNS | 0.7 | 81.1 | 68.2 | 47.8 | 40.6 | 92.9 | 77.0 |
| VSM-CNS | 0.9 | 84.1 | 70.8 | 48.7 | 42.6 | 93.9 | 78.5 |
| VSM-SCHA | 0.1 | 74.6 | 57.0 | 42.4 | 35.6 | 89.7 | 68.1 |
| VSM-SCHA | 0.5 | 73.4 | 56.2 | 42.2 | 35.3 | 88.4 | 67.1 |
| VSM-SCHA | 0.7 | 74.0 | 56.8 | 42.4 | 36.5 | 89.9 | 68.2 |
| VSM-SCHA | 0.9 | 74.8 | 57.5 | 43.8 | 37.9 | 90.1 | 69.7 |

For the other hyperparameters of the generative models, we directly follow the default settings of the baselines to make a fair comparison. **Q1**: Scaling to more challenging datasets such as ImageNet. **A1**: For scaling our method to more challenging datasets such as ImageNet, in our understanding the computation cost would be the only challenge; it would be extremely time-consuming to evaluate our method on a dataset of such scale. Moreover, to improve the generative performance, we may need to increase the number of blocks in the developed structured memory module. **We have provided some initial results on MiniImageNet in our response to all reviewers** and will try to include more results on ImageNet in the revision. **Q2**: Why is a top-K retrieval strategy used when attention could be applied over all available elements instead? ... How would the results change without retrieval? **A2**: Actually, the top-K retrieval strategy can be seen as a way to make the attention weights sparser, which is effective at alleviating overfitting in attention models [3].
Top-K retrieval [4] can also speed up training, filtering out noise without losing information and improving model performance. Indeed, top-K retrieval strategies have been widely applied in various memory-augmented models [1, 2]. To evaluate the effectiveness of the top-K retrieval strategy, we provide additional experimental results with various settings of K on the Omniglot, Double-MNIST, and MNIST datasets.

| top-K proportion | Omniglot | | Double-MNIST | | MNIST | | Training time (mini-batch) |
| ----- | ------- | ------- | ----- | ------- | ------- | --------- | --------- |
| | ELBO | NLL | ELBO | NLL | ELBO | NLL | |
| 1% | 73.8 | 55.4 | 43.2 | 36.8 | 89.3 | 68.5 | 79.3s |
| 50% | 77.2 | 59.9 | 46.8 | 39.6 | 92.2 | 73.8 | 86.4s |
| 100% | 78.3 | 59.6 | 47.9 | 41.5 | 92.2 | 76.7 | 95.2s |

[1] Zhen, X., Du, Y., Xiong, H., et al. Learning to learn variational semantic memory. Advances in Neural Information Processing Systems, 2020. [2] Blattmann, A., Rombach, R., Oktay, K., et al. Retrieval-augmented diffusion models. Advances in Neural Information Processing Systems, 2022. [3] Fan, X., et al. Bayesian attention modules. Advances in Neural Information Processing Systems, 2020. [4] Wang, P., et al. KVT: k-NN attention for boosting vision transformers. European Conference on Computer Vision, 2022. **L1**: How well does the method scale to larger model sizes and/or dataset sizes? **A1**: We have discussed scaling our method to larger dataset sizes in the response to Q1. As for larger model sizes, we believe that the high-level idea of our method can be flexibly applied to large-scale models, such as Stable Diffusion, but many technical details would need to be considered during implementation. We will try to explore this direction of introducing memory modules into large-scale foundation models. **L2**: What are the challenges as the number of examples decreases/increases? **A2**: Actually, a previous work [5] has discussed the influence of the number of examples on few-shot generation. Generally speaking, providing more data samples for a new task helps the generative model understand the statistical properties of the dataset more comprehensively, further improving the performance of few-shot generation. [5] Giannone, G., Winther, O. SCHA-VAE: Hierarchical context aggregation for few-shot generation. International Conference on Machine Learning, PMLR, 2022. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for clarifying my questions and adding additional experiments that support the claims and contributions of the paper. I encourage the authors to incorporate the additional insights into the paper and supplementary material. My recommendation remains to accept the paper. --- Reply to Comment 1.1.1: Comment: Thanks for your time and valuable comments. Best wishes.
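To make the retrieval step in A2 concrete, a minimal sketch of top-K attention over a memory bank might look as follows (an illustration of the general technique under assumed shapes, not the authors' implementation; `query`, `memory`, and `k` are hypothetical names):

```python
import torch
import torch.nn.functional as F

def topk_memory_recall(query, memory, k):
    """Recall from a memory bank using sparse top-k attention.

    query:  (B, D)  latent representations of the current task
    memory: (M, D)  stored memory slots
    k:      number of slots to attend over (k << M sparsifies attention)
    """
    # Scaled similarity between each query and every memory slot.
    scores = query @ memory.t() / memory.shape[-1] ** 0.5   # (B, M)
    # Keep only the k most similar slots; the rest get zero weight.
    topk_scores, topk_idx = scores.topk(k, dim=-1)          # (B, k)
    weights = F.softmax(topk_scores, dim=-1)                # (B, k)
    # Weighted sum of the retrieved slots.
    retrieved = memory[topk_idx]                            # (B, k, D)
    return (weights.unsqueeze(-1) * retrieved).sum(dim=1)   # (B, D)
```

Setting k to 100% of the memory recovers full attention, which is the comparison reported in the table above.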
Summary: This work is focused on the problem of few-shot generation using a memory mechanism. A few prior works have leveraged semantic memory or short-term (episodic) memory for such tasks. In this work, a new architecture is proposed, named Variational Structured Memory (VSM), that leverages both types of memory mechanisms. Two variants are proposed: VSM-VAE and VSM-Diffusion. The paper discusses the training objective used to train the model and also presents ways to recall from the memory (using attention and kNNs) when a new task is available. Lastly, the paper discusses how the memory is updated and initialized. Experiments conducted on existing datasets show that the method outperforms the baselines. Furthermore, the paper also reports the generated images and studies the effect of memory size and the memory update process. Strengths: 1. The proposed idea of using both types of memory for recall is interesting. 2. The experiments are thorough and support the claims made in the paper. 3. The paper is well-written and easy to follow. Weaknesses: 1. Some design choices of the algorithm could be explained better (more on this in Questions below). 2. Limitations of the current method are not discussed. The discussion should include what other strategies could be used for updating semantic and episodic memories. The paper should also talk about how to use such an architecture for generating images that can be out of distribution for the new dataset. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Semantic Memory Update: The target for the mean of the semantic memory vector seems to favor examples that are close to the current semantic memory vector via the g(.) function. Why not simply move the semantic memory towards the mean of all values in the vector $H_n$? 2. Adding to the previous question, the update will favor examples that are closer to the current value of the semantic memory. Will this lead to a mode collapse where the model is unable to generate diverse samples for a category? Is there a way to validate whether the model has diversity in the generated samples? 3. From my understanding, the number of semantic memory modules is the number of categories (N). What if we need a different size of semantic memory than the number of categories? Will the model still work? 4. Will the method work for out-of-distribution datasets during evaluation, given that the memories are updated with higher weights for examples closer to the current memory? 5. At line 113, should Eq. 5 be referred to instead of Eq. 9? 6. Minor Typos: - Line 135- space between ‘themechanism’ - Line 151- ‘distribution’ spelling is incorrect Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations are not discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your constructive comments and suggestions, which are helpful for us to further improve the quality of our paper. The concerns are addressed below. **W1**: Some design choices of the algorithm could be explained better. **A1**: Thanks; we explain the mentioned questions one by one in the following responses. **W2**: Limitations of the current method are not discussed. **A2**: Thanks for your insightful suggestions. We have discussed other strategies for updating semantic and episodic memories in the response to Q1 and clarified the setting of OOD generation in the response to Q4. We will include these discussions in our revision. **Q1**: Semantic Memory Update: The target for the mean of the semantic memory vector seems to favor examples... Why not simply move the semantic memory towards the mean of all values in the vector? **A1**: Thanks for your valuable suggestion. Actually, we initially considered utilizing the average of the vectors stored in episodic memory to update the semantic memory, but we expected that introducing an attention mechanism would highlight the contributions of typical data samples and yield better model performance. For instance, a typical data sample can serve as the semantic memory of a specific category in the corresponding memory block. To verify this, we conducted the following experiments to compare the two memory update methods, and the experimental results shown in the table demonstrate the superiority of the attention mechanism. Similar conclusions can be found in [1].

| | Omniglot | | Double-MNIST | | MNIST | |
| ----- | ------- | ------- | --------- | --------- | --------- | --------- |
| Update method | ELBO | NLL | ELBO | NLL | ELBO | NLL |
| Mean | 77.7 | 60.1 | 46.0 | 39.5 | 93.8 | 70.3 |
| Ours | 74.0 | 56.8 | 42.4 | 36.5 | 89.9 | 68.2 |

**Q2**: The update will favor examples that are closer to the current value of the semantic memory. Will this lead to a mode collapse where the model is unable to generate diverse samples for a category? Is there a way to validate whether the model has diversity in the generated samples? **A2**: We note that the latent representations of data samples retrieved from our memory module only serve as a prior for the downstream generation task and would not lead to **mode collapse**. From the qualitative results shown in Fig. 3, for a novel unseen generation task, we can see that the visualized data samples generated by our model are diverse. Moreover, the precision metric (P) in our experiments can be used to measure diversity [2]. From the results in the following table, we can see that our developed VSM-based models outperform the other baselines on the precision metric.

| Update method | FID | SFID | P | R |
| ----- | ------- | ------- | --------- | --------- |
| Base | 13.52 | 27.75 | 0.71 | 0.38 |
| Mean | 12.33 | 25.82 | 0.72 | 0.40 |
| GAT | 12.27 | 25.79 | 0.72 | 0.40 |

[1] Zhen, X., Du, Y., Xiong, H., et al. Learning to learn variational semantic memory. Advances in Neural Information Processing Systems, 2020. [2] Kynkäänniemi, T., Karras, T., Laine, S., Lehtinen, J., Aila, T. Improved precision and recall metric for assessing generative models. Advances in Neural Information Processing Systems, 2019. **Q3**: What if we need a different size of semantic memory than the number of categories? Will the model still work? **A3**: This is an interesting question.
Actually, each vector in the semantic memory module can be treated as the clustering center of the corresponding memory block, and the optimal setting for the number of clusters is the number of categories (N), which is also used in [1]. However, memory-augmented models should also work with fewer memory blocks, where several similar categories are summarized by a single memory block. We conducted the following experiment to verify this, and the experimental results demonstrate that our model still works when the semantic memory size is set smaller than the number of categories.

| Proportion | Omniglot | | Double-MNIST | | MNIST | |
| ----- | ------- | ------- | --------- | --------- | ------- | ------- |
| | ELBO | NLL | ELBO | NLL | ELBO | NLL |
| SCHA + Episodic | 82.5 | 63.4 | 53.0 | 47.2 | 95.3 | 79.1 |
| 0.1% | 81.8 | 62.7 | 52.2 | 46.9 | 95.1 | 78.6 |
| 10% | 78.7 | 61.5 | 46.0 | 39.3 | 94.6 | 72.5 |
| 30% | 78.6 | 60.2 | 46.1 | 40.2 | 94.2 | 71.8 |
| 50% | 76.4 | 60.8 | 46.0 | 39.7 | 92.3 | 71.9 |
| 70% | 77.0 | 60.2 | 44.5 | 39.4 | 92.2 | 71.6 |
| 90% | 74.6 | 57.5 | 43.3 | 36.4 | 90.7 | 69.1 |
| 100% | 74.0 | 56.8 | 42.4 | 36.5 | 89.9 | 68.2 |

**Q4**: Will the method work for out-of-distribution datasets during evaluation, given that the memories are updated with higher weights for examples closer to the current memory? **A4**: Thanks for your question. Actually, in the setting of few-shot generation, evaluation is conducted on out-of-distribution datasets, as described in Section 3.1. Generally speaking, the generative model is first trained on a series of old tasks (data samples from seen classes) in the training stage and then tested on new tasks (data samples from unseen classes). For example, the data samples of old tasks may consist of birds, tigers, leopards, and elephants, while the data samples of new tasks include wolves, dogs, and other unseen animals. Thus, in the setting of few-shot generation, the model can learn to generate OOD data samples by utilizing the information stored from old tasks. **Q5**: At line 113, should Eq. 5 be referred to instead of Eq. 9? **A5**: Yes, thanks for your careful checking. We will delete Eq. 9 in our revised manuscript. **Q6**: Minor Typos: Line 135- ‘themechanism’; Line 151- ‘distribution’. **A6**: Thanks. We will fix these typos in our revision. Thanks again for your valuable advice on improving our paper, and we hope to have more discussions with you to improve this work. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed clarification of my doubts. I have a doubt regarding A3: were there any major changes to the architecture when the number of memory blocks was less than the number of categories? --- Reply to Comment 1.1.1: Comment: Thanks for your further discussion. After reducing the number of memory blocks to fewer than the number of categories, another mechanism is needed to map each training sample to its corresponding memory block. In our case, we simply adopt the idea of clustering and assign each training sample to its nearest clustering center, i.e., the memory block whose semantic vector is closest to the sample's latent representation. As for the network architecture, to fairly conduct the ablation study on the impact of the number of memory blocks, we did not modify the main body of VSM-SCHA and only reduced the number of memory blocks, whose proportion accounts for 0.1%-90% of the memory size of our original model.
As the results in A3 show, although the model performance becomes worse as the number of memory blocks is reduced, our method still outperforms the baseline SCHA-VAE shown in Table 1 of our paper, even in the case with only one memory block (the 0.1% setting). In other words, although performance degrades with the reduction of memory blocks, our memory method still works.
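To make the two update rules compared in A1 of this thread concrete, here is a minimal sketch of a single update step (our illustrative reading, not the authors' code; `semantic`, `episodic`, and `alpha` are assumed names, with `alpha` playing the role of the interpolation hyper-parameter studied earlier). The 'Mean' baseline moves the semantic vector toward the unweighted mean of the stored episodic entries, while the attention rule weights entries by similarity to the current semantic vector, so typical samples contribute more:

```python
import torch
import torch.nn.functional as F

def update_semantic_memory(semantic, episodic, alpha, use_attention=True):
    """One update step for a category's semantic memory vector.

    semantic: (D,)    current semantic memory vector
    episodic: (N, D)  episodic entries stored for this category
    alpha:    interpolation weight between old memory and new target
    """
    if use_attention:
        # Entries similar to the current semantic vector get larger weights.
        weights = F.softmax(episodic @ semantic, dim=0)   # (N,)
        target = weights @ episodic                       # (D,)
    else:
        # "Mean" baseline: unweighted average of all episodic entries.
        target = episodic.mean(dim=0)
    return alpha * semantic + (1 - alpha) * target
```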
Rebuttal 1: Rebuttal: We really appreciate all the reviewers for their constructive and helpful comments, which greatly help us improve the quality of our paper. For the mentioned typos, grammar mistakes, unclear notations, and missing citations, we promise that we will carefully fix them in our revised paper. We have responded to all reviewers' questions and suggestions one by one below; each response can be found in the corresponding block. Due to the page limitation, we put here a small part of the supplementary few-shot generation results, where the generative models are trained on FS-CIFAR100 and then tested on MiniImageNet.

| Model | **FID** | SFID | P | R |
| ------------------ | --------- | --------- | -------- | -------- |
| DDPM | 63.13 | 33.23 | 0.61 | 0.30 |
| sDDPM | 47.73 | 32.90 | 0.56 | 0.37 |
| vFSDM | 61.57 | 32.65 | 0.59 | 0.31 |
| FSDM | 42.32 | 25.74 | 0.61 | **0.37** |
| VSM-Diffusion (ours) | **38.57** | **24.80** | **0.65** | 0.36 |
NeurIPS_2023_submissions_huggingface
2,023
Summary: The authors propose to incorporate 'episodic' and 'semantic' memory, inspired by the human memory system, into existing generative modeling frameworks to improve few-shot generation abilities. Specifically, they propose a variational structured memory module (VSM) and show that its incorporation into existing VAE-based models and diffusion-based models can improve few-shot generation across multiple datasets. The memory mechanism is introduced as a prior on the context variables in traditional neural statistician (NS) and diffusion modeling frameworks. The 'semantic' memory is modeled as a 'general', lightweight storage with quicker retrieval (a single embedding for each category), while 'episodic' memory is modeled to be more detailed/'vivid' (a set of embeddings for each category) and to provide context-relevant information. Learning involves an attention-based recall/lookup from prior memory states and a subsequent update of the semantic and episodic memory for each category. Strengths: 1. The integration and formulation of the proposed memory modules (and the relevant storage, recall, and update mechanisms) to improve few-shot generation is relatively novel. 2. The experiments are performed on established datasets, and the results support the claim that the proposed VSM module can benefit performance on few-shot generative/diffusion-based tasks. 3. The method and underlying components are largely adequately described in Sections 3 and 4, with relevant references or background provided. Weaknesses: 1. The work's claim of modeling the memory mechanism like a 'human being' for few-shot generation tasks seems far-fetched and is not adequately backed by literature or explanation. While episodic and semantic memory are indeed components of human memory (and some references for this are provided), it is not clear how they are related to static-image few-shot generation (specifically, episodic memory is largely related to an agent's experiences/events and seems more relevant to episode/reinforcement-learning-based tasks/frameworks). 2. Further, it remains unclear what the authors mean when they mention 'context' in relation to the proposed semantic and episodic memory. E.g., on L146 it is mentioned that 'semantic memory in human brain can provide context information', while L139 mentions 'episodic memory can provide context information'. So it is unclear whether both semantic and episodic memory are meant to provide 'context' and, if so, what 'context' means here. Again, in relation to my first point, backing statements such as L146 with references would be beneficial. 3. Experiment setup: It is unclear whether all models were evaluated with the same vision encoder and processing. (In appendix L401, the authors mention they draw inspiration from vision transformers in processing image features and suggest their process differs from existing memory-based models.) Were other models evaluated in a similar setup for fair comparison? Otherwise, performance benefits might be due to such differences in experimental setup and not necessarily to the proposed memory method itself. 4. Further, the authors should consider indicating the parameters of their method and other relevant factors to provide a better comparison to existing models (e.g., see Table 1 in SCHA-VAE: Hierarchical Context Aggregation for Few-Shot Generation, Giorgio Giannone, Ole Winther, International Conference on Machine Learning, ICML, 2022). It is currently unclear whether the performance benefits may merely be due to more parameters or to the proposed method. 5.
While source code is provided in the appendix, will it be released publicly for reproducibility? (Currently there is no indication.) Also, how many trials were performed for each experiment? Minor: 1. The introduction has minor grammar mistakes and missing words / wrong phrasing (e.g. L9: 'to assistant' -> 'to assist'; L18: 'that never encountered before' -> 'that it has never encountered before'; L24: 'which mainly' -> 'which is mainly'; L49: 'to assistant' -> 'to assist', etc.) Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see the weaknesses section above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: A brief limitations section is provided in appendix L449. Perhaps it could be expanded to discuss potential negative applications of generative/diffusion-based methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive comments and suggestions. Q1. VSM for static-image few-shot generation. A1: Thanks for your valuable question. Actually, the motivation of our work is to develop a series of memory-augmented generative models that imitate the creative process of humans. For instance, when a painter is asked to paint an image of a bird, he will first recall the conceptual information (semantic memory) of a bird together with a series of realistic bird samples (episodic memory), and then fuse these two kinds of memory information to accomplish the creative task. This is also the reason why we want to imitate the memory mechanism of humans to redefine the generative process of existing generative models. Similar ideas of introducing memory mechanisms can be found in other works [1,2,3,4]. Moreover, memory modules have also been widely used in few-shot learning tasks. For example, LSTMs trained to meta-learn can quickly learn never-before-seen quadratic functions from a low number of data samples [5]. And [6] builds an effective memory-augmented meta-learning framework for few-shot learning with an attention module. Recently, [7] proposed variational semantic memory and applied it to the few-shot classification task, where the intuition is that the model can utilize semantic information (semantic memory) gained from past tasks to solve a new task. These two aspects inspired us to develop a variational structured memory module and apply it to the few-shot generation task. As for the episodes in RL: in our understanding, the episodes in RL, which are stored in a buffer, are only used to update the policy network and are not recalled during testing. But we believe that the idea of introducing a memory module could also be extended to improve the performance of existing RL-based methods. Q2. It is unclear if both semantic and episodic memory are to provide 'context'...? A2. We are sorry for the confusion caused by our unclear description. Indeed, there are two advanced forms of memory in the human brain: semantic memory allows the storage of general conceptual information [7], and episodic memory allows the collection of detailed episodes [8]. We have highlighted these descriptions in the main paper and modified the sentence ``episodic memory can provide context information that will later be stored in semantic memory`` into ``episodic memory can provide detailed episodes that will be converted into conceptual information and stored in semantic memory``. We have also revised the other unclear descriptions of context information in our paper. Q3. Fair comparison? A3. Sorry for the misunderstanding. Actually, the only difference between our method and the other baselines is the introduction of the proposed variational structured memory module; the remaining factors, such as network structures, experimental settings, and evaluation metrics, are exactly the same as described in their papers. Specifically, we introduce the variational structured memory module into VAE-based models [9] and diffusion models [10], leading to VSM-VAE and VSM-Diffusion, respectively. And we directly follow and implement our method on their released code to make a fair comparison, which can be checked in our submitted supplemental material.
We have also listed the network structures used in the comparison as follows:

| Model | Encoder |
| -------- | --------- |
| CNS | ResNet |
| SCHA | ResNet |
| VSM-CNS | ResNet |
| VSM-SCHA | ResNet |
| sDDPM | ViT |
| vDDPM | UNet |
| vFSDM | ViT |
| VSM-Diffusion | ViT |

The description in Appendix L401 only illustrates the implementation details of the variational structured memory module, which can be treated as another small contribution of our method relative to existing memory-augmented models, and it does not cause unfairness in the experimental comparisons. Q4. More parameters. A4. Thanks for your valuable advice. Actually, as discussed in Q3, the main body (encoder and decoder) of the network structures used in VSM-VAE and VSM-Diffusion is the same as in the corresponding baselines, and the only additional parameter cost is brought by the proposed variational structured memory module, which does not add much memory cost. For further investigation, we list the parameter counts of the whole generative models and their memory modules in the following tables; the proposed variational structured memory does not bring much increase in the number of parameters.

| Models | Parameters (M) | Rate |
| ----- | ------- | ------- |
| CNS | 7.34 | 0% |
| CNS + Semantic | 7.39 | +0.68% |
| CNS + Episodic | 7.38 | +0.54% |
| VSM-CNS | 7.44 | +1.36% |

| Models | Parameters (M) | Rate |
| ----- | ------- | ------- |
| SCHA | 5.37 | 0% |
| SCHA + Semantic | 5.60 | +4.28% |
| SCHA + Episodic | 5.55 | +3.35% |
| VSM-SCHA | 5.77 | +7.44% |

| Models | Parameters (M) | Rate |
| ----- | ------- | ------- |
| sDDPM | 34.0 | 0% |
| vDDPM | 35.5 | +4.41% |
| vFSDM | 34.8 | +2.35% |
| VSM-Diffusion | 35.6 | +4.70% |

Q5. Source code; how many trials? A5. Thanks for your notification. As shown in the supplemental material, we have included the shell files needed to reproduce the results reported in the paper. And we promise that we will release a more formal version of the source code on GitHub, together with a README file to guide the reproduction of our work. To make a fair comparison, for each experiment we ran five trials and report the average of the obtained results. [1] Variational memory addressing in generative models. [2] Variational memory encoder-decoder. [3] Memory transformer. [4] Recurrent memory transformer. [5] Learning to learn using gradient descent. [6] Meta-learning with memory-augmented neural networks. [7] Learning to learn variational semantic memory. [8] Retrieval-augmented diffusion models. [9] SCHA-VAE: Hierarchical context aggregation for few-shot generation. [10] Few-shot diffusion models. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response and clarifications. Regarding point 1, perhaps the distinction between semantic and episodic memory in the specific context of few-shot learning could be better described in the main paper, as is currently done in the rebuttal. Regarding point 4, the parameter cost could be included in the original table to highlight the performance benefits of the method with relatively low additional parameters. Point 3 could be mentioned in the supplemental material to inform the reader. As my concerns have been addressed, I've raised my score to 6. --- Reply to Comment 1.1.1: Comment: Thanks for your time and valuable comments. Best regards.
null
null
null
null
null
null
Deep Patch Visual Odometry
Accept (poster)
Summary: This paper proposes a deep-learning-based method for monocular visual odometry with two key advantages: (i) a deep feature-based patch representation for keypoints that encodes local context, and (ii) a novel recurrent architecture designed for patches along with a differentiable bundle adjustment layer. The proposed DPVO outperforms the state of the art across several common benchmarks while running 1.5-8.9x faster than the DROID-SLAM system and using less memory. Strengths: The proposed monocular visual odometry method estimates the camera pose accurately on several datasets. The proposed system also shows strong generalization: it is trained on the synthetic TartanAir dataset but validated on real datasets. Weaknesses: 1. The paper claims that the proposed method outperforms all prior work on several evaluation datasets, but only a few works are listed, which is not enough. Can the authors give more comparisons with SOTA? For example, DytanVO [1], RAM-VO [2], DF-VO [3]. [1] https://arxiv.org/pdf/2209.08430v4.pdf [2] https://arxiv.org/pdf/2107.02974v1.pdf [3] https://arxiv.org/pdf/2103.00933v1.pdf 2. The results on various datasets are measured by ATE (Absolute Trajectory Error), but some other evaluation criteria are also essential, for example the relative pose error (RPE), the average translational error, and the rotational error. Can the authors provide more comparison information on these? 3. The tracking of patches is achieved by designing an RNN, but the paper fails to provide a clear explanation. Why not a CNN? 4. Punctuation is necessary at the end of each equation. Please check this carefully. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See Weaknesses part Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The paper claims that current learning-based VO systems are impractical for resource-constrained devices. However, the proposed method still cannot be applied to mobile GPU-free devices. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **The paper claims that the proposed method outperforms all prior work on several evaluation datasets, but only a few works are listed, which is not enough. Can the authors give more comparisons with SOTA? For example, DytanVO [1], RAM-VO [2], DF-VO [3].** Our claim was intended to mean that we outperform all prior work that is known to us and has reported results on the 4 datasets, which are standard benchmarks for VO/SLAM. We will clarify this point in our revision. We include an additional Table F in the rebuttal PDF comparing DPVO against DF-VO [Zhan et al. '20]. We observe that DPVO outperforms DF-VO on average. Unfortunately, we do not yet have direct comparisons with DytanVO and RAM-VO, which reported no results on any of the 4 datasets we use. Using their open-source code requires us to develop nontrivial dataloading code for each dataset. We are in the process of doing so, but have not been able to complete it before the deadline. **The results on various datasets are measured by ATE (Absolute Trajectory Error), but some other evaluation criteria are also essential, for example the relative pose error (RPE), the average translational error, and the rotational error. Can the authors provide more comparison information on these?** Table F in the rebuttal PDF includes comparisons using relative pose error (RPE). It shows that our method is superior also in terms of RPE. We primarily used the ATE metric for evaluation in the main paper since this enabled us to easily compare against DROID-VO/SLAM and ORB-SLAM on more datasets, such as TartanAir, TUM-RGBD, and EuRoC. We agree that more metrics are always better; however, we evaluated using ATE as it is the primary metric used in several prior works [DROID-SLAM, ORB-SLAM3], which allowed us to compare to prior work on a large number of datasets without the need to reproduce the results ourselves. It is worth pointing out that ATE is popular because the final camera positions (which ATE evaluates) are effectively a function of the accumulated error in both translation magnitude and direction (which is related to the predicted rotation); i.e., it is extremely unlikely to have very low ATE but high rotation error. **The tracking of patches is achieved by designing an RNN, but the paper fails to provide a clear explanation. Why not a CNN?** The update operator is recurrent (i.e., weight-tied) since it continually improves its own predictions based on its current state through repeated applications. A CNN would not be a natural choice. Some explanations are already provided in L147-L158 + Figure 2 in the main paper and L53-L58 + Figure C in the supplement, but we will revise and clarify. As mentioned in L171-L177, one component of DPVO's update operator is a 1D convolution across the temporal dimension of each patch trajectory. Therefore, our update operator is an RNN with a CNN layer inside it. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. I am satisfied with your reply.
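To illustrate the recurrent structure described in the last answer — the same weight-tied operator applied repeatedly to refine its own state, with a 1D convolution along the temporal dimension of each patch trajectory — a minimal sketch follows (purely illustrative; the dimensions, module choices, and names are assumptions, not DPVO's actual architecture):

```python
import torch
import torch.nn as nn

class RecurrentUpdate(nn.Module):
    """Weight-tied update operator: the same weights are applied at every
    iteration, each time refining a hidden state and emitting a residual
    correction (here, a 2D patch-motion update)."""

    def __init__(self, dim=128):
        super().__init__()
        # 1D convolution across the temporal dimension of each trajectory.
        self.temporal = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.gru = nn.GRUCell(dim, dim)
        self.head = nn.Linear(dim, 2)  # residual 2D motion per patch/frame

    def forward(self, hidden, inputs):
        # hidden, inputs: (num_patches, num_frames, dim)
        P, T, D = hidden.shape
        ctx = self.temporal(inputs.transpose(1, 2)).transpose(1, 2)
        h = self.gru(ctx.reshape(P * T, D), hidden.reshape(P * T, D))
        hidden = h.reshape(P, T, D)
        return hidden, self.head(hidden)

# Applied repeatedly (RNN-style) rather than once (feed-forward style):
op = RecurrentUpdate()
hidden, inputs = torch.zeros(8, 10, 128), torch.randn(8, 10, 128)
for _ in range(4):
    hidden, delta = op(hidden, inputs)  # delta: (8, 10, 2) motion residuals
```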
Summary: The authors propose a learning-based approach for visual odometry. It is essentially a sparse version of DROID-SLAM, without the loop-closure capability. Instead of predicting the dense flow field relating the different frames as in DROID-SLAM, DPVO operates on sparse image patches, which are extracted at random from each frame. In more detail, a random set of patches is extracted from each video frame and a patch graph is built. A recurrent neural network predicts the residual motion of each patch in each other frame. These residual motions are then used in a differentiable BA layer, which minimizes the reprojection error to optimize the sparse depth of each patch and the camera poses. This process is repeated iteratively until convergence. The paper proposes a new architecture, adapted to patches, to predict the residual motion. Strengths: 1) Thorough and convincing evaluation: The proposed approach leads to a convincing improvement in terms of speed and accuracy compared to DROID-SLAM. The approach is evaluated on many different datasets. 2) The ablation study also pinpoints the essential contributions. Weaknesses: In DROID-SLAM, predicting the dense flow allowed optimizing for full dense depths along with the camera poses with accuracy, providing advantages over both direct and indirect approaches. However, the proposed approach DPVO is based on jointly estimating the motion of sparse patches and refining camera poses and sparse depth (of the patches). In that sense, it is not obvious what advantages are brought by coupling the match estimation with the pose/depth prediction. Classical indirect approaches first predict sparse matches, triangulate 3D points to get the depth (thus providing initialization for the depth of ‘patches’), and then track further frames by performing PnP with multiple local and global BA. The paper compares to ORB-SLAM and outperforms it. However, there are now many sparse matching approaches that significantly outperform hand-crafted detectors like ORB. How would DPVO compare to a ‘classical’ indirect approach with matching performed by a state-of-the-art deep-learning-based approach like SuperPoint-SuperGlue? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1) See weaknesses. 2) A better figure for the patch graph would help the understanding, showing what the nodes are, what the edges are, where the updates happen, and so on. Fig. 1 is not very informative. It would significantly help with reading Sec. 3.1. I also find using the same index notation for the patches and for the images very confusing, e.g., L159: (k, i) refers to an edge between patch k and image i. It would be easier to read if the patches had a different notation, like bold k. 3) Some implementation details: how large are the patches? 4) In the ablation study, it is shown that using a patch is better than a point feature. According to the formulation for estimating the motion of the patch, using a patch effectively increases the search window of the correlation in the other image. Does this difference still hold if the point feature is correlated with a larger window in the other image to estimate its motion? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **In DROID-SLAM, predicting the dense flow allowed optimizing for full dense depths along with the camera poses with accuracy, providing advantages over both direct and indirect approaches. However, the proposed approach DPVO is based on jointly estimating the motion of sparse patches and refining camera poses and sparse depth (of the patches). In that sense, it is not obvious what advantages are brought by coupling the match estimation with the pose/depth prediction.** The advantages of DPVO over other methods come from a combination of (1) sparse matches, (2) optimizing pose and depth (bundle adjustment) on sparse matches, (3) coupled iterations of sparse matches and bundle adjustment, and (4) end-to-end differentiability and training. It is true that (1) + (2) are also used by many existing methods, particularly indirect methods. Thus it may appear that DPVO has lost its advantages by using sparse matches. However, the full combination of (1) to (4) is unique to DPVO. In other words, the advantages of DPVO over indirect methods come from integrating (3) + (4) with (1) + (2). **Classical indirect approaches first predict sparse matches, triangulate 3D points to get the depth (thus providing initialization for the depth of ‘patches’), and then track further frames by performing PnP with multiple local and global BA. The paper compares to ORB-SLAM and outperforms it. However, there are now many sparse matching approaches that significantly outperform hand-crafted detectors like ORB. How would DPVO compare to a ‘classical’ indirect approach with matching performed by a state-of-the-art deep-learning-based approach like SuperPoint-SuperGlue?** In Table E in the rebuttal PDF, we show that DPVO achieves 62% lower error on TartanAir than an approach based on COLMAP with SuperGlue+SuperPoint matching, while running 80x faster. This SuperGlue-based result is also reported in the DROID-SLAM paper. We will add this comparison to our next revision. The comparison is on the TartanAir test set, following the evaluation in the ECCV 2020 SLAM competition and in [DROID-SLAM]. The score is computed using the normalized relative pose error for all possible sequences of length {5, 10, 15, ..., 40} meters. The SuperGlue + SuperPoint approach runs 40x slower than real-time, which is expected, since this approach is designed for offline structure-from-motion, not visual odometry. **A better figure for the patch graph would help the understanding, showing what the nodes are, what the edges are, where the updates happen, and so on. Fig. 1 is not very informative. It would significantly help with reading Sec. 3.1. I also find using the same index notation for the patches and for the images very confusing, e.g., L159: (k, i) refers to an edge between patch k and image i. It would be easier to read if the patches had a different notation, like bold k.** In Figure L in the rebuttal PDF, we’ve included a new figure illustrating a single edge in the patch graph connecting two nodes (a frame and a patch), and a hypothetical update that could be made to that edge. Thank you for the suggestions; we will include them in our revision. **Some implementation details: how large are the patches?** They are 3 x 3 at ¼ the image resolution, so they effectively cover 12 x 12 pixels in the original image. **In the ablation study, it is shown that using a patch is better than a point feature.
According to the formulation for estimating the motion of the patch, using a patch effectively increases the search window of the correlation in the other image. Does this difference still hold if the point feature is correlated to a larger window in the other image to estimate its motion?** This is an empirical question about the combination of two hyperparameters: patch size and correlation radius. We found patch size 3x3 to work better than 1x1 with other hyperparameters held constant (as is common practice for hyperparameter ablations), but we did not exhaustively test all combinations, so as to avoid overfitting to the validation set. Due to an initial misunderstanding of this question (we mistook it for a conceptual rather than an empirical question), we did not start the necessary experiments early enough to be able to report this specific ablation by the rebuttal deadline, but we will post the result as soon as we are able. --- Rebuttal Comment 1.1: Comment: Thank you for your answer. I agree with other reviewers about the limited novelty of the paper. Nevertheless, I also agree with the authors that providing a fast and equally accurate alternative is interesting. For this reason, I will maintain my score, but I also won't oppose if the paper is not accepted. --- Rebuttal 2: Title: Response to Question #4 Comment: Thank you for your comment. As promised, we provide the requested empirical study in response to question #4. Specifically, we re-trained a second DPVO model (hence the delay) using point features instead of patches, but with the features correlated to a larger window (radius 8, instead of 7) in the other image to match the effective search window of 3x3 patches, and compared the models on the "hard" sequences of the TartanAir test set. The conclusion in the following table is the same as in our ablations: patch features still significantly outperform point features overall, even when the latter are correlated to a larger window in the other image to compensate. We will include this comparison in our revision.

| _Method_ | MH000 | MH001 | MH002 | MH003 | MH004 | MH005 | MH006 | MH007 | Avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DPVO (Patches, Grid-radius=7) | **0.21** | **0.04** | **0.04** | **0.08** | **0.58** | **0.17** | **0.11** | **0.15** | **0.173** |
| DPVO (Points, Grid-radius=8) | 0.73 | 0.05 | 0.09 | 0.09 | 0.76 | 0.52 | 0.14 | 0.24 | 0.328 |
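To make concrete what the ablation above varies, here is a minimal PyTorch sketch of correlating a P x P patch feature against a search grid of a given radius; a 1x1 "point" feature is simply the special case P = 1. This is an illustrative reconstruction under our own assumptions (tensor shapes, function name, and in-bounds indexing), not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def patch_correlation(patch_feat, target_fmap, center, radius):
    """Correlate a PxP patch feature against a (2*radius+1)^2 grid of
    candidate locations around `center` in the target feature map.

    patch_feat:  (C, P, P) feature patch from the source frame
    target_fmap: (C, H, W) feature map of the target frame
    center:      (row, col) current match estimate (assumed in-bounds)
    Returns a (P, P, 2*radius+1, 2*radius+1) correlation volume.
    """
    C, P, _ = patch_feat.shape
    r = radius
    row, col = int(center[0]), int(center[1])
    # Crop a window large enough to slide the patch over the search grid.
    win = target_fmap[:, row - r : row + r + P, col - r : col + r + P]
    # unfold extracts every PxP sub-window of the cropped region.
    cols = F.unfold(win.unsqueeze(0), kernel_size=P)  # (1, C*P*P, (2r+1)^2)
    grid = cols.view(C, P, P, 2 * r + 1, 2 * r + 1)
    # Dot-product correlation between the patch and each shifted window.
    return (patch_feat[..., None, None] * grid).sum(dim=0)
```

Under this view, patch size and correlation radius both enlarge the information available per match, which is exactly the combination the experiment above disentangles.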
Summary: In this paper, the authors propose a new method to solve monocular visual odometry (VO) in an end-to-end fashion. In contrast to the previous work DROID-SLAM, which utilizes dense optical flow to build correspondences across frames, the proposed method leverages sparse patches to avoid the redundancy of dense pixels and therefore runs faster. Experiments on public datasets including TartanAir, TUM-RGBD, EuRoC and ICL-NUIM demonstrate its higher efficiency and competitive accuracy. Strengths: 1. Good motivation. Dense correspondences provided by optical flow estimation are not necessary for visual odometry, because basically several good matches are enough to find a good pose (more matches may increase the robustness). Therefore, I agree with the authors' choice of using sparse patches to achieve higher time and memory efficiency. 2. Extensive results. The proposed method is evaluated on the TartanAir, TUM-RGBD, EuRoC and ICL-NUIM datasets to show its efficiency and accuracy. 3. The paper is well-written and easy to read. Weaknesses: 1. Limited novelty. The key idea of the paper is relatively straightforward. If we review classic monocular VO/SLAM systems, we find most of them are based on sparse keypoints (ORB-SLAM) or semi-dense pixels (DSO) to avoid too much computation. From my point of view, the major contribution of the paper comes from the engineering part. The recurrent module designed to update the trajectories of patches is somewhat novel, as it automatically updates the covisibility graph constructed by observed patches and past frames. However, the overall novelty is still limited. 2. Related works. Another limitation is the discussion of previous works. In addition to DROID-SLAM, there are lots of excellent learning-based VO/SLAM systems proposed in the past few years, such as BeyondTracking [r1], NICE-SLAM [r2], iMAP [r3], and Li et al. [r4], to name a few. These works have achieved SOTA performance. However, they are neither discussed nor compared. It would be better to give a discussion of these works in the paper. r1: Xue et al., Beyond Tracking: Selecting Memory and Refining Poses for Deep Visual Odometry. CVPR 2019 r2: Zhu et al., NICE-SLAM: Neural Implicit Scalable Encoding for SLAM. CVPR 2022 r3: Sucar et al., iMAP: Implicit mapping and positioning in real-time. ICCV 2021 r4: Li et al., Dense RGB SLAM with Neural Implicit Maps. ICLR 2023. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. L141: random sampling. It is not very clear that random sampling of patches works better. Intuitively, with the predicted camera motion, sampling according to the predicted pose could give more correct matches between patches. Even classic VO/SLAM systems like ORB-SLAM and DSO sample keypoints/pixels from regions with rich textures, because these regions can be better observed in following frames. Comparisons of random sampling with other strategies could solidify the claim. 2. In the Differentiable Bundle Adjustment module, the camera poses and inverse depths are optimized with pixel coordinates fixed. In practice, the matches between patches across frames have outliers, impairing the performance. Basically, BA and outlier removal are jointly updated in classic systems to guarantee accuracy. It seems that this could be achieved in the recurrent module, and as this step is very important, especially for monocular VO/SLAM systems, it would be great to show how the module 'corrects' wrong matches.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Limited novelty. The key idea of the paper is relatively straightforward. If we review classic monocular VO/SLAM systems, we find most of them are based on sparse keypoints (ORB-SLAM) or semi-dense pixels (DSO) to avoid too much computation. The recurrent module designed to update the trajectories of patches is somewhat novel, as it automatically updates the covisibility graph constructed by observed patches and past frames. However, the overall novelty is still limited.** Regarding the novelty of our approach, please see our message to everyone, in which we highlight why using sparse matches within the framework of DROID-SLAM is novel and challenging. **From my point of view, the major contribution of the paper comes from the engineering part.** We believe a well-engineered system is still a valuable contribution to NeurIPS, especially given our exceptionally fast runtime, low cost, and performance on par with or better than DROID. **Related works. Another limitation is the discussion of previous works. In addition to DROID-SLAM, there are lots of excellent learning-based VO/SLAM systems proposed in the past few years, such as BeyondTracking [r1], NICE-SLAM [r2], iMAP [r3], and Li et al. [r4], to name a few. These works have achieved SOTA performance. However, they are neither discussed nor compared. It would be better to give a discussion of these works in the paper.** **r1: Xue et al., Beyond Tracking: Selecting Memory and Refining Poses for Deep Visual Odometry. CVPR 2019** **r2: Zhu et al., NICE-SLAM: Neural Implicit Scalable Encoding for SLAM. CVPR 2022** **r3: Sucar et al., iMAP: Implicit mapping and positioning in real-time. ICCV 2021** **r4: Li et al., Dense RGB SLAM with Neural Implicit Maps. ICLR 2023.** We did not compare to r1, r2, r3, or r4 for several reasons: 1) r2 and r3 assume RGB-D input, whereas DPVO is RGB-only. 2) r1 trains on TUM-RGBD sequences that share the same scenes with the test sequences, whereas DPVO is evaluated on TUM-RGBD zero-shot without any fine-tuning. 3) r2, r3 and r4 use global optimization and are not typically considered VO methods. Following standard practice [e.g., TartanVO '21], we focused our comparisons on strictly VO methods, i.e., those that do not perform any global optimization. Global optimization can substantially improve results on input sequences with loops, but at the expense of speed. Regardless, we've included comparisons of DPVO against r1, r2, r3 and r4 in Tables B, C and D in the rebuttal PDF. *Comparison to r2, r3, r4 on TUM-RGBD*: We compare to r2, r3 and r4 in Table B on the popular [fr1/desk, fr2/xyz, fr3/office] split of the TUM-RGBD dataset. These sequences have been used for comparisons by multiple papers, including iMAP and NICE-SLAM. These results show that our approach can outperform others on some sequences, even when they use additional depth sensor input or global optimization. In Table B, we observe that DPVO outperforms all methods on fr2/xyz, and underperforms Li et al. on fr1/desk. DPVO underperforms others on fr3/office, which is expected because the entire fr3/office sequence is a loop, which favors methods with global optimization. DPVO with default settings runs nearly 3x faster than all three, and our model runs more than 8x faster in its "fast" configuration. r3 (iMAP) and r2 (NICE-SLAM) assume an RGB-D input sequence, while r4 (Li et al.) and our approach do not require depth, only RGB input. Li et al.
and NICE-SLAM do not globally optimize all poses, but they still use the global keyframe list and/or the global 3D map to optimize the keyframe poses in the optimization window. DPVO also has near-constant memory usage in all cases, as opposed to iMAP, NICE-SLAM, and Li et al., whose memory grows proportionally to the sequence length in an unbounded scene. Unlike these other approaches, DPVO's memory is always bounded; the RGB sequence could go on forever and DPVO would never run out of memory, because it never touches old keyframes or the old 3D map. *Comparison to r4 on EuRoC*: We again compare our VO system to r4 (Li et al.) on the EuRoC test dataset in Table C in the rebuttal PDF. Li et al. outperforms DPVO on 2/6 of the sequences, catastrophically fails on another 2/6, and underperforms DPVO on the remaining 2/6. *Comparison to r1 on TUM-RGBD (separate split)*: We compare to r1 (Xue et al.) in Table D, on a separate split of TUM-RGBD used for evaluation by Xue et al., since they did not report results on the sequences used by r2-r4. DPVO outperforms Xue et al. on three of the sequences, while they outperform us on the remaining ones. It is important to note that Xue et al. is trained on sequences that share the same scenes with the test sequences, whereas our approach is trained only on synthetic data and is tested on TUM-RGBD zero-shot. **Response to Q1** These comparisons were already provided in Figure 5c of the main paper, where we show evidence that random sampling does indeed work better than many other approaches. One of these approaches is sampling areas with high image gradients, which is analogous to sampling from pixels with rich textures. Note that selecting patch centroids randomly is the simplest possible approach (no keypoint detector required). We do not claim that it is the optimal choice, but the fact that such a simple scheme works well is a strength, not a weakness. **Response to Q2** Our method does indeed reject outliers in the recurrent module, by predicting a confidence weight associated with each predicted match. These confidences are then treated as constants in the BA step. This enables DPVO to exclude unlikely matches from the optimization entirely, without needing to correct them. We follow DROID-SLAM in this regard, and don't claim this as a contribution of our method. We show an example of the predicted per-match confidence in Fig. E of the supplement. --- Rebuttal Comment 1.1: Title: rebuttal Comment: Thanks for the responses. The newly added comparisons with previous VO methods (r1, r2, r3, r4) make the results of the proposed method more convincing. I don't have any more questions about the experiments, and I hope the analysis and discussion of these results can be included in the revised manuscript. My major concern, however, is still the novelty. As claimed by the authors in the rebuttal and also mentioned by reviewer Uwmu, the major contribution is to use randomly sampled patches to replace dense optical flow in DROID-SLAM to achieve higher efficiency. I don't see more novelty in this design, in spite of the better performance, which could be attributed to engineering. --- Reply to Comment 1.1.1: Title: Response to reviewer 5bXL Comment: Thank you for your comment. Regarding novelty, although sparse matching in itself is not new, it was not obvious whether it would work at all in the framework of DROID-SLAM, because the SOTA performance of DROID-SLAM was understood to be dependent on the dense flow.
While it may look simple and obvious in hindsight, making sparse matches work in DROID-SLAM was far from straightforward, for reasons we detailed in the global message. In addition to novelty, the value of this work also lies in a practical, open-source system that achieves a level of accuracy and runtime previously only achievable with much more computing resources. DROID-SLAM required a large-memory GPU to run in real time, making it impractical for many applications. DPVO achieves the same or better accuracy while being much faster and more memory-efficient. Building well-engineered practical systems is an important contribution in the space of VO/SLAM. We believe this type of contribution should not be trivialized, because online, real-time, resource-efficient processing is a basic requirement that differentiates the VO/SLAM task from other 3D reconstruction tasks. We believe that even though the perception of novelty can vary, a practical open-source system that achieves important, previously unavailable capabilities can be sufficiently valuable to the NeurIPS community.
Summary: This paper proposes a deep-learning-based visual odometry (VO) system which estimates the 6DoF pose of each frame and outputs a sparse reconstruction of the scene from an input monocular video sequence. The pipeline extracts deep features from sampled image patches, updates the deep optical flow of the patches with a recurrent module, and then performs deep BA with patch correspondences within local optimization windows on the fly. The reported statistics show that this system is able to run at 60 FPS (Default) and 120 FPS (Fast), respectively, while maintaining camera pose estimation accuracy. Results show that the proposed method outperforms DROID-VO and ORB-SLAM in terms of accuracy on the TartanAir, EuRoC MAV, and ICL-NUIM datasets. Strengths: 1. One of the main contributions of this work is to alleviate the large computational cost of the existing deep dense VO method (DROID-SLAM) while maintaining its pose estimation accuracy. 2. The paper shows the contribution of 1D Temporal Convolution and SoftMax Aggregation on neighboring frames to the pose estimation, which are novel. 3. The network is trained purely on synthetic data, and it shows the ability to perform well on real datasets, including TUM-RGBD and EuRoC MAV. 4. The paper is clearly written and easy to follow. Weaknesses: 1. The novelty is limited; the iterative update mechanism and differentiable bundle adjustment are already used in DROID-SLAM, even with the introduction of the 1D Temporal Convolution and SoftMax Aggregation modules in the update operator. 2. Another major contribution of the proposed method is to track image patches instead of using dense correspondence over the entire image. However, it is not theoretically sound to claim that the patch-based approach improves the accuracy of pose estimation over the dense-flow-based approach when the patches are selected randomly rather than by feature-based methods. 3. I suspect that the model overfits the TartanAir dataset. That's why there is a big performance gap between the results on synthetic and real datasets for the classic methods. When it comes to real datasets, the results are mixed. 4. The selection of baseline methods is inconsistent and questionable in Tables 1-4: the ORB-SLAM version used differs, and the authors did not compare to ORB-SLAM or D3VO on the EuRoC MAV dataset, while other VO papers do include them as baseline methods. Also, better results can be found for running ORB-SLAM on the TUM-RGBD fr1 and TartanAir sequences (reported in https://paperswithcode.com/paper/droidslam-deep-visual-slam-for-monocular/review/). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Are there any insights into why randomly sampled patches outperform feature-based patch selection mechanisms (ORB, SIFT, and SuperPoint)? Although the empirical results in this paper favor randomly sampled patches, it is still counter-intuitive, since many patches will be randomly sampled on regions with repetitive patterns or occlusion. Are there any additional experiments to demonstrate the robustness of random patch selection? 2. As mentioned in the Weaknesses, why are ORB-SLAM and D3VO not included when comparing results on the EuRoC MAV dataset? Are these methods not considered visual odometry systems? 3.
Both DROID-SLAM and DPVO are trained on synthetic datasets; are there any insights into why DROID-SLAM performs worse than DPVO on the ICL-NUIM dataset, even with global bundle adjustment? 4. Are the running parameters (window size, patch size, number of update iterations) of the system tuned for different datasets, or are they fixed? Does patch size matter (not including the 1x1 point feature)? 5. Are there any comparisons between the system and classical methods in terms of efficiency, with different input frame sizes? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The model can only be trained on synthetic datasets with ground-truth optical flow. Although the system shows the ability to generalize to real data, its performance and robustness may be limited in real scenarios. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **The novelty is limited; the iterative update mechanism and differentiable bundle adjustment are already used in DROID-SLAM, even with the introduction of the 1D Temporal Convolution and SoftMax Aggregation modules in the update operator.** We do not claim the idea of an iterative update mechanism nor the differentiable bundle adjustment to be contributions of our method. Please see above. **It is not theoretically sound to claim that the patch-based approach improves the accuracy of pose estimation over the dense-flow-based approach when the patches are selected randomly rather than by feature-based methods.** Our empirical results support the claim that DPVO, which uses randomly selected patches, outperforms DROID-VO, which uses dense flow (see Tables 1-4). This result may be counter-intuitive, but it is not a weakness of our approach. We believe one possible theoretical reason is that random patches reduce redundant computation, allowing us to effectively re-allocate the computation and network capacity to learn better features for matching. We will revise our paper to clarify this point. **When it comes to real datasets, the results are mixed.** On real datasets, we outperform other methods in terms of *average error*, which is a standard way to compare performance. The results are "mixed" only in the sense that we are not No. 1 on every single test sequence, but requiring superiority on every single test example would be an unusually strict standard. **The selection of baseline methods is inconsistent and questionable in Tables 1-4; the ORB-SLAM version used differs.** ORB-SLAM has multiple versions, and on each dataset we use the version that produces the best results. The appearance of inconsistency, as mentioned in the caption of Table 4, is because ORB-SLAM3 would not work on TUM-RGBD despite our best efforts; the v1.0 implementation (the latest) fails to produce any poses for the keyframes. **The authors did not compare to ORB-SLAM or D3VO on the EuRoC MAV dataset.** We omitted ORB-SLAM from the EuRoC table because we follow the standard practice of VO evaluation [e.g., TartanVO '21], which focuses on comparing strictly VO methods, i.e., those that do not perform global optimization, as opposed to SLAM methods that include global optimization. Global optimization can substantially improve results on input sequences with loops (such as those in EuRoC). As a result, VO methods are typically not expected to outperform SLAM methods and are not evaluated with SLAM methods as baselines. We omitted D3VO as it would not be a fair comparison: D3VO performs unsupervised training on 5 of our 11 test sequences, and then evaluates on the remaining ones, which contain the same scenes. This is a non-standard setting and gives D3VO an unfair advantage, because unsupervised training on the same test scenes means that a method can essentially perform offline 3D reconstruction of those scenes in advance and memorize the solutions. In offline reconstruction, a system can access all frames, including those from the future, whereas a VO system cannot use future frames. In contrast, our method is trained on synthetic data only, and is evaluated on each test sequence zero-shot, on the fly, without prior exposure to the same scenes. With the above caveats, in Table A in the rebuttal PDF, we nonetheless report results of D3VO and ORB-SLAM3 on EuRoC. We observe that D3VO outperforms DPVO on a subset of the evaluation sequences.
This is expected because D3VO performs test-time training on the remaining sequences, which contain the same scenes, effectively allowing offline 3D reconstruction of the same scenes in advance. In contrast, DPVO is evaluated on all the EuRoC test sequences zero-shot, trained only on synthetic data. ORB-SLAM3 outperforms DPVO on all sequences (except that it catastrophically fails on V202), but this is to be expected because it uses global optimization. **Better results can be found for running ORB-SLAM on the TUM-RGBD fr1 and TartanAir sequences (reported in https://paperswithcode.com/paper/droid-slam-deep-visual-slam-for-monocular/review/).** It is not clear that there are better results. ORB-SLAM has three versions (ORB-SLAM1, ORB-SLAM2, & ORB-SLAM3), and the cited URL reports results from ORB-SLAM1. We reported ORB-SLAM3 results because it is newer and has fewer catastrophic failures (0/16 versus 2/8) than ORB-SLAM1. Considering the catastrophic failures, it is not clear which version is better. Regardless, we include both versions in Table G in the rebuttal PDF and show that this choice has no impact on our conclusions. Our approach has better results than both versions of ORB-SLAM on TUM-RGBD and TartanAir. Specifically, we outperform or match ORB-SLAM 1 & 3 on all TartanAir sequences. Both DPVO and DROID-SLAM report identical results for running ORB-SLAM3 on TUM-RGBD: [X 0.017 0.210 X 0.034 X X X 0.009] DROID-SLAM also reports ORB-SLAM2 results on TUM-RGBD, which are even worse than ORB-SLAM3 (it fails on more sequences): X 0.071 X 0.023 X X X X 0.010 **Response to Q1** Yes, we provide a discussion of this exact question in Section G of the supplement. TL;DR: we suspect that random patches have the most uniform coverage out of the 5 tested methods, which may help robustness. **Response to Q2** Addressed above. **Response to Q4** The running parameters are fixed across all datasets in the default and fast variants, respectively. **Limitation: The model can only be trained on synthetic datasets with ground-truth optical flow.** This is not true; our method is not limited to synthetic datasets. It can be trained on any video dataset with poses and depth (e.g., ScanNet, SUN3D, MannequinChallenge, among many others). As stated on L85-86 of the supplement, we only train on a synthetic dataset to compare fairly with DROID-SLAM. <ran out of space> --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal; I am satisfied with the newly added experimental results. They are very comprehensive. However, I still wish to raise the following concerns: 1. The novelty: the theoretical contribution of this paper is limited, as sparse patch alignment is already used in other sparse direct methods, such as SVO, and neural iterative updates plus differentiable bundle adjustment are the design of DROID-SLAM. The system is undeniably valuable from the perspective of engineering, and it brings some new insights into integrating traditional VO methods with deep neural networks. 2. The authors mentioned in the rebuttal message that it is affordable for DROID-SLAM to make more mistakes with dense flow fields, which means DPVO is prone to incorrect matches and is potentially less robust than DROID-SLAM. To compensate for the reduced redundancy, they used a larger spatial resolution and optimization window than DROID-SLAM. To be more specific, are there any differences in spatial resolution and optimization window size between DROID-SLAM and DPVO in the experiments?
Would it be unfair to compare their accuracy under different optimization window sizes? --- Reply to Comment 1.1.1: Comment: **On novelty:** Thank you for your comment. Regarding novelty, although sparse matching in itself is not new, it was not obvious whether it would work at all in the framework of DROID-SLAM, because the SOTA performance of DROID-SLAM was understood to be dependent on the dense flow. While it may look simple and obvious in hindsight, making sparse matching work in DROID-SLAM was far from straightforward, for reasons we detailed in the global message. **On fair comparisons between DROID-SLAM (frontend) and DPVO:** DPVO does use larger optimization windows and higher spatial resolution than DROID-SLAM, but these are not unfair advantages: they would be prohibitively expensive in DROID-SLAM, yet are feasible in DPVO precisely because of the novel design with sparse matches. High GPU memory cost is already a major drawback of DROID-SLAM. Increasing its optimization window and resolution to those of DPVO would increase its already high memory cost by additional factors of ~14x and ~4x, respectively. This would require ~476GB of GPU memory and make DROID-SLAM practically useless. The main contribution of DPVO over DROID-SLAM is the efficiency improvement, so it is fair to compare efficiency at the same level of accuracy, allowing each system to pick the hyperparameters appropriate for its design. It is therefore reasonable to use the default settings of DROID-SLAM, pick the window size and resolution for DPVO that match the accuracy of DROID-SLAM, and then compare efficiency. This comparison is valid and fair for the purpose of supporting our claim that DPVO has better efficiency than DROID-SLAM (frontend). Note that the estimated memory cost of increasing the optimization window size/density stems from the increase in the number of unique pairs of connected frames (383 on average in DPVO versus 28 on average in DROID-VO, both measured on TUM-RGBD).
Rebuttal 1: Rebuttal: **Message to all reviewers regarding the novelty of DPVO:** Our novelty is sparse matches *integrated with* neural iterative updates and differentiable bundle adjustment, which has not been done before and is nontrivial to design. While sparse matches have been used in prior work, they were not integrated with neural iterative updates and differentiable bundle adjustment. Our method can be understood as a sparsification of DROID-SLAM, a state-of-the-art SLAM approach, to improve efficiency, but such sparsification is nontrivial. DROID-SLAM performs its iterations through recurrent 2D convolutions on 2D feature maps, producing a dense flow field between existing and incoming video frames. Naive sparsification schemes would encounter the following difficulties: - Simply sparsifying the feature maps and applying 2D convolutions does not work, because we would be doing 2D convolutions on a feature map that is 95% empty. - Simply replacing K x K convolution filters with 1x1s means losing the ability to use neighboring spatial context when making predictions, a fundamental benefit of 2D convolutions. - Using 95-99% fewer keypoint matches means there is less redundancy against incorrect matches. DROID-SLAM can afford to make many more mistakes since it produces a dense flow field rather than sparse matches. Our solutions to these challenges are: - Incorporating contextual information using patch-based correlation, and message passing between patches. - Adding 1D convolutions along the temporal dimension of a trajectory to pass around additional contextual information (see the sketch below). - Offsetting the potential accuracy loss from the reduced redundancy by using the significant memory savings to afford a higher spatial resolution (¼ instead of ⅛) and a larger optimization window. Another contribution of this paper is a VO system that is open-source, runs at 60-120 FPS, uses little GPU memory, and is as accurate as or better than the current state-of-the-art open-source VO system (which is 2x-3x slower and 2x as expensive). We believe a well-engineered open-source VO system is valuable and of interest to the research community. Pdf: /pdf/035189c71da4b7b69a998476bcd060bf02469b64.pdf
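To illustrate the second solution above, here is a minimal PyTorch sketch of a 1D convolution applied along the temporal dimension of patch trajectories. The module name and tensor shapes are our own assumptions for illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn

class TemporalTrajectoryConv(nn.Module):
    """Pass context along each patch trajectory with a 1D convolution
    over the time axis (hypothetical shapes; illustrative only)."""

    def __init__(self, dim, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, traj_feats):
        # traj_feats: (num_patches, num_frames, dim), one feature per
        # (patch, frame) edge in the patch graph.
        x = traj_feats.transpose(1, 2)   # (N, dim, T) layout for Conv1d
        x = torch.relu(self.conv(x))
        return x.transpose(1, 2)         # back to (N, T, dim)
```

The point of such a layer is that a 1x1 spatial update can still aggregate context, just along time (a patch's own history) rather than along the 2D image plane.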
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
SmoothHess: ReLU Network Feature Interactions via Stein's Lemma
Accept (poster)
Summary: For smooth functions, the joint interaction between pairs of features and their impact on the function output can be modeled and quantified via second-order partial derivatives. The Hessian of a neural network can therefore provide insight into how interactions between pairs of features impact predictions made by the network. ReLU networks are piecewise linear and therefore have a zero Hessian almost everywhere; as a result, a different approach to quantifying interactions is needed. One way to overcome this issue is to consider a smoothed approximation of the network. Prior works substitute Softplus for ReLU and compute the Hessian of the corresponding smooth network; however, the sensitivity to the $\beta$ parameter of the Softplus activation is pronounced. Furthermore, this method affords poor control over the level of smoothing. This work instead estimates interactions by computing the Hessian of the convolution of the network with a Gaussian; this Hessian is referred to as SmoothHess. Furthermore, using Stein's Lemma, it is shown that one can efficiently estimate SmoothHess from first-order information alone using Monte Carlo simulation. Non-asymptotic bounds on the error of this estimate are also provided in Theorem 3. Experimental results demonstrating improvements of this method over existing techniques, at least in the context of relatively simple problems such as MNIST and CIFAR10, are provided. Strengths: This paper appears to provide a new and natural method with which to estimate the impact of interactions between pairs of features on the network output, in the setting in which the Hessian of the network is zero almost everywhere. The method seems principled and outperforms existing methods on a number of benchmarks. The technical results appear correct, although I confess I did not check them thoroughly. The paper is generally well written and clear. I am not an expert in this space, so I cannot say much as to its broader context with respect to prior work. Weaknesses: One minor concern might be around how one actually chooses the covariance matrix in practice. However, choosing hyperparameters is a problem for the other methods as well, it seems, and furthermore this method appears to give more direct control over the smoothing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Are these techniques limited to ReLU networks, or could one in fact apply them to a more general class of functions for which the first- and second-order partial derivatives are not defined everywhere (non-smooth) and/or zero almost everywhere? 2. How does one choose the covariance matrix and hence the degree of smoothing? 3. Can you identify any (potentially pathological) examples in which SmoothHess really fails to model interaction effects? In short, could you provide further comment on potential failure modes of the method and highlight any warnings one should be aware of when using it? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations of the work, in particular the computational complexity with regard to the input dimension, are highlighted and discussed in the conclusion. A link to code is included in the supplementary material.
Potential social impacts are also discussed in the supplementary material: the authors are careful to point out that, as with any method used to explain decisions made by a model, one needs to treat such outputs with care and due oversight, and that it is best to work in tandem with domain experts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work and for the positive assessment. Below, we respond to the weaknesses and questions brought up. --- ## Weaknesses **W1: Choosing the covariance** As you mention, SmoothHess gives the user more direct control over smoothing compared to the competing SoftPlus method. In our experiments, we use isotropic covariance matrices of the form $\sigma^2 I \in \mathbb{R}^{d \times d}$. In this work we recommend a straightforward way of choosing the level of smoothing $\sigma$. As $d \rightarrow \infty$, it can be seen that the $d$-dimensional Gaussian distribution $\mathcal{N}(0, \sigma^2 I_{d \times d})$ converges to a uniform distribution over the sphere of radius $\sigma \sqrt{d}$ [1]. In practice, we have found that for the finite values of $d$ used in this paper (300, 784, 3072), the samples from the $d$-dimensional Gaussian $\mathcal{N}(0, \sigma^2 I)$ are indeed close to the sphere of radius $\sigma \sqrt{d}$. Thus, given that one wishes to model the interactions of some network $F : \mathbb{R}^d \rightarrow \mathbb{R}^c$ in a radius-$\varepsilon$ ball around a point $x_0 \in \mathbb{R}^d$ with SmoothHess, they may select $\sigma = \frac{\varepsilon}{\sqrt{d}}$. On the other hand, there is no clear-cut way to choose the smoothing parameter $\beta$ for SoftPlus to draw information from a specific locality. We validate the utility of this perspective in our P-MSE experiments, shown in Table 1. In this experiment we aim to model network outputs over three neighborhoods of size $\varepsilon = 0.25, 0.50, 1.00$ for MNIST, FMNIST and CIFAR10. SmoothHess, for which $\sigma$ was chosen from a set of three values based on validation performance curated using the intuition above (described in lines 271-275), outperformed the SoftPlus Hessian in each experiment. This is despite the fact that the best values of the smoothing parameter $\beta$ were chosen for the SoftPlus Hessian from over one hundred options, each checked on a validation set. We additionally give an example of a use case of an alternative, non-isotropic covariance, and illustrate how such a covariance may be set. Here the goal is to focus smoothing on particular directions of input space. Consider a network $F: \mathbb{R}^d \rightarrow \mathbb{R}^c$ and two points $x, y \in \mathbb{R}^d$ between which one wishes to quantify the interactions of $F$. These points may have some interesting relationship: for instance, $y$ may be an adversarial example for $x$ under $F$, and one may wish to find the interactions which influence this to be the case. Let us consider the sub-network $f : \mathbb{R}^d \rightarrow \mathbb{R}$, which outputs the logit (or, if so desired, SoftMax probability) of the predicted class for $x$: $\text{argmax}_{c} F(x)$. Concretely, we aim to find interactions which affect $f$ between $x$ and $y$. This can be accomplished in the following way: intuitively, we wish to estimate SmoothHess at the point between $x$ and $y$, $z = \frac{x + y}{2}$, and to focus smoothing along the direction $v = \frac{y - x}{\lVert y - x \rVert_2}$. One may use a procedure such as Gram-Schmidt to obtain an orthonormal basis for $\mathbb{R}^d$ whose first vector is $v$: $v, a_1, \ldots, a_{d-1} \in \mathbb{R}^d$. Let us choose a relatively large eigenvalue $\sigma_l > 0$ with which to smooth along $v$ and a small $\sigma_s \approx 0$ with which to smooth along the other directions.
One may construct the eigenvector matrix $Q = [v | a_1 | \ldots | a_{d-1}] \in \mathbb{R}^{d \times d}$ and the eigenvalue matrix $\Lambda = \text{diag}(\sigma_l, \sigma_s, \ldots, \sigma_s) \in \mathbb{R}^{d \times d}$, and the covariance may be set simply by matrix multiplication: $\Sigma = Q\Lambda Q^T$. Finally, SmoothHess may be fit at the mid-point $z$ using $\Sigma$. One may see that this affords a higher weight to samples which are on, or close to, the line containing $x$ and $y$. Conceptually, this is an example of the more general approach we allude to in lines 163-165: directions of interest can be encoded with eigenvectors and their relative weighting with eigenvalues. We will include this explanation in the camera-ready Appendix (a code sketch of the construction appears after this exchange). --- ## Questions **Q1: Limitation to ReLU networks** This is a great point. As we explain in the global response (G1), these techniques can be applied to any function which satisfies the assumptions of Proposition 1, namely that the function is Lipschitz continuous. Thus, as long as one has access to a first-order gradient oracle, SmoothHess may be estimated for any Lipschitz continuous function. **Q2: Choosing the covariance** We kindly direct the Reviewer to our response to W1 above. **Q3: Potential pathological examples** As illustrated in our discussion regarding the choice of covariance, SmoothHess allows for fine-grained control over the locality of the information that is taken into account when computing interactions. Specifically, this locality is determined by the eigenvectors and eigenvalues chosen for the covariance matrix. However, in our current work, we assume that the SmoothHess smoothing is determined by a Gaussian distribution. Given that one wishes to represent interactions from a specific region, they should take into account how well a Gaussian smoothing can agglomerate information from that region. Future work might investigate parameterizing the SmoothHess smoothing with different distributions, which would allow for estimating interactions over a broader range of regions for a given sample. **References** [1] Roman Vershynin. High-dimensional probability: An introduction with applications in data science, volume 47. Cambridge University Press, 2018. --- Rebuttal Comment 1.1: Comment: Thanks for answering my questions, I will keep my current score.
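As a concrete illustration of the directional covariance construction in the response to W1 above, here is a minimal NumPy sketch. The function name is ours, and QR factorization stands in for the explicit Gram-Schmidt procedure the rebuttal describes; column signs from QR do not affect the resulting $\Sigma$:

```python
import numpy as np

def directional_covariance(x, y, sigma_l, sigma_s, seed=0):
    """Covariance with eigenvalue sigma_l along v = (y - x)/||y - x||
    and sigma_s along all remaining directions, so smoothing focuses
    on the line through x and y. Illustrative sketch only."""
    d = x.shape[0]
    v = (y - x) / np.linalg.norm(y - x)
    # Complete v to an orthonormal basis: QR of [v | random columns]
    # keeps span(v) as the first basis direction (full rank a.s.).
    rng = np.random.default_rng(seed)
    M = np.column_stack([v, rng.standard_normal((d, d - 1))])
    Q, _ = np.linalg.qr(M)
    lam = np.full(d, sigma_s)
    lam[0] = sigma_l
    return Q @ np.diag(lam) @ Q.T  # Sigma = Q Lambda Q^T

# Hypothetical usage: fit SmoothHess at the midpoint z with this Sigma.
x, y = np.zeros(10), np.ones(10)
Sigma = directional_covariance(x, y, sigma_l=1.0, sigma_s=1e-3)
z = (x + y) / 2.0
```

The isotropic case from the same response is the special case $\Sigma = \sigma^2 I$ with $\sigma = \varepsilon / \sqrt{d}$ for a target radius-$\varepsilon$ neighborhood.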
Summary: The goal of this paper is to model the point-wise interactions between input features in a ReLU network. ReLU networks are piecewise linear, which inhibits using the Hessian to model feature interactions (since it is zero almost everywhere). This paper proposes SmoothHess, a new method to compute feature interactions via the Hessian of a smooth surrogate of the ReLU network. The authors prove that SmoothHess can be estimated using only gradient calls, and prove non-asymptotic sample complexity bounds. One result that is particularly elegant is the connection between SmoothHess and SmoothGrad: combined, they define a local, second-order Taylor expansion of the network. The authors designed several evaluations of the proposed method based on this connection. The main baseline that SmoothHess is compared against is the SoftPlus Hessian, which is based on an alternative smooth surrogate of ReLU. These methods are evaluated as local second-order estimates of the ReLU model: they are evaluated by the accuracy of their local approximation, as well as their capability to guide adversarial attacks. SmoothHess performs best in both tasks on three standard image classification datasets. The authors also performed a qualitative analysis on a regression problem from the medical domain. Strengths: Although feature interaction is a well-studied topic, I find the proposed method to be quite novel. The motivation of the problem, i.e., the piecewise linearity of ReLU networks, is very clear, and the survey of related work seems thorough. The proof sketches provided in the main paper are explained clearly, with detailed references to pieces of proofs from other work. Overall, I find the paper to be very well-written and easy to follow. It strikes a great balance between having detailed theoretical results and clearly conveying the intuition. Weaknesses: The main weakness of the paper is the comprehensiveness of the experiments. Due to the computational cost of computing SmoothHess, all experiments are conducted on relatively small-scale datasets and small-ish models. Although I do not see reasons why this method would not work on larger models, it would make the paper much stronger to have empirical evidence. I also find the qualitative analysis a bit dry. Part of this is due to my lack of familiarity with the spirometry regression task, but I cannot confidently evaluate the significance of that result, in particular regarding what it means for the interpretability of ReLU networks in general. Can this method improve tasks that can benefit from interpretability, like error detection, trust calibration, debugging, auditing, or knowledge discovery? I'm not sure. This is a bit disappointing since interpretability is a primary motivation for this work. It would help to at least have some qualitative comparison (and with more competing methods) on more familiar datasets, to help us understand the behavioral differences between SmoothHess and existing methods. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Regarding Equation 10: if my understanding is correct, what Equation 10 does is define how SmoothHess (and competing methods) can be used in conjunction with gradient methods to define a local, second-order model of the underlying ReLU model. This equation is then used as the basis of the experiments, which evaluate the goodness of fit from several aspects. Is this understanding correct? Are there alternatives to Equation 10? Is there an alternative way to do similar experiments without Equation 10?
Is it possible for this specific definition (for example, by using $f$ instead of $g$ as the first term) to bias the results? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your enthusiasm about, and careful reading of, our work. We are especially glad to note your appreciation of the connection between SmoothHess and SmoothGrad. Below, we respond to the weaknesses and questions you have brought up. --- ## Weaknesses **W1: Generalization to larger models** Following your recommendation, we ran an experiment validating that SmoothHess can capture interactions for a ResNet101 trained on CIFAR10. We kindly point you toward the global response (G4) for an explanation, and Table 2 of the global response PDF for results. Thank you for bringing this concern up; this new evidence improves the generalizability of our work. **W2: Spirometry regression and qualitative comparison** We provide an updated explanation of the spirometry case study in the global response (G2). We agree that there are many possible real-world applications for interpretability methods. As our quantitative results in Section 5.2 show, SmoothHess outperforms existing methods of estimating Hessians as well as first-order approaches, suggesting that using SmoothHess should improve downstream applications that use Hessian-based interactions. In addition, SmoothHess is very flexible in that $\sigma$ may be set to capture local interactions very effectively, at many different localities around a point. Thus, there is some reason to believe SmoothHess may outperform other methods in applications where the Hessian is important, e.g., validating that the model is picking up on interactions which domain experts know to be in the data. We agree that having additional qualitative comparisons will be beneficial. We plan to add a qualitative comparison with other methods, on image examples, to the camera-ready Appendix. Specifically, we plan to segment images, agglomerate SmoothHess interactions within each super-pixel, and carefully compare the difference in behavior with other methods. --- ## Questions **Q1: Equation 10, second-order Taylor expansion** We appreciate this thoughtful question regarding our experimental methodology. Your understanding of Equation 10 and its use in our experiments is correct. As you hint at, the most straightforward modification to Equation 10 would be using $g(x_0)$ instead of $f(x_0)$ as the first term. Our motivation for using $f(x_0)$ as the first term stems from the fact that our goal is to evaluate the ability of the smooth surrogate's _corresponding gradient-Hessian explainer pair_ to model the network locally. We do not consider the quantity $g(x_0)$ (the smoothed function output) to be an explainer of interest; it does not indicate feature interactions, as SmoothHess does, or feature importance, as SmoothGrad does. For this reason we elected to use $f(x_0)$ as the first term for all models. From our perspective this, in some sense, actually removes the bias that could be introduced by the term $g(x_0)$, which we do not aim to evaluate. This allows us to isolate the impact of the gradient-Hessian pair in modeling local changes in the function. This is opposed to the different, but related, goal of _assessing the smooth surrogate's second-order Taylor expansion_ in its ability to model the network, which would require replacing $f(x_0)$ with $g(x_0)$. While the experiments based on Equation 10 seem the most natural to us, other related experiments can also be run without Equation 10.
For instance, as Reviewer FahQ mentioned, one may leverage the interpretation of the Hessian as a first-order approximation of the gradient (or of some scalarization of the gradient) to create experiments which model the gradient to first order, as opposed to modeling the network itself to second order, as we have focused on. We find Equation 10 to be more in line with our interpretability goals, as generally we hope to quantify the effects of interactions on the function $f$.
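To make the role of Equation 10 concrete, here is a minimal PyTorch sketch of the local second-order model being evaluated; `smooth_grad` and `smooth_hess` stand for precomputed SmoothGrad/SmoothHess estimates, and the function name is hypothetical rather than taken from the paper's code:

```python
import torch

def local_quadratic_model(f_x0, smooth_grad, smooth_hess, x, x0):
    """Local model in the spirit of Equation 10:
    f(x0) + grad^T d + 0.5 * d^T H d, with d = x - x0.
    Note the zeroth-order term is f(x0), not g(x0), per the
    response above."""
    d = (x - x0).flatten()
    return f_x0 + smooth_grad @ d + 0.5 * d @ smooth_hess @ d
```

A P-MSE-style evaluation would then sample perturbed points $x$ in an $\varepsilon$-neighborhood of $x_0$ and compare this model's output against the true network output $f(x)$.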
Summary: This paper introduces SmoothHess, a method to define a smooth approximate Hessian for networks where many components of the Hessian may be zero (e.g., ReLU networks). This can aid in tasks where interactions between features are needed, but where those components of the exact Hessian are zero. SmoothHess is defined as the convolution of the output of the network with a Gaussian. It is easy to show, using Stein's lemma, that this is equivalent to the expectation of the gradient of a perturbed output over that Gaussian distribution. They then run a number of experiments. The first two sets of experiments (perturbation MSE and adversarial attacks), on three datasets, show the advantages of using SmoothHess over alternative smoothings of ReLU, such as SoftPlus. In both experiments, SmoothHess significantly outperforms SoftPlus. They also test this on a four-quadrants dataset with similar results. They also apply this to a real-world medical dataset of spirometry to measure feature interactions. Strengths: The paper shows the benefit of using a smoothed Hessian of ReLU networks for tasks involving feature interactions. The benefit of the method is that it is post-hoc and can be applied to the output of a trained network without a modification to the architecture. The theoretical derivations are easy to follow and well organized. The experiments also make a good case for using their construction, showing superiority to SoftPlus. Overall, the simplicity of the method is its main strength. Weaknesses: My main reservation is how impactful the work can be. It is a neat idea, but with a rather limited scope. For example, the expense of computing SmoothHess and the fact that we only need it for diagonal blocks of the Hessian of ReLU networks (explained next) make the impact rather limited for me. Maybe the authors can explain or emphasize more the breadth of the problems where this would be useful or required? One minor issue to raise is that, as I understand it, the Hessian of ReLU networks is *not* zero almost everywhere. In multi-layer networks, only the Hessian components involving weights in the same layer are zero. All cross-layer Hessian terms should be nonzero. Let $W^l_i$ denote a component indexed by $i$ in the weights of layer $l$. The Hessian term $${\partial^2 F(x) \over \partial W^l_i \partial W^k_j}$$ is in general not zero, even in ReLU networks. In other words, the paper's claims about a vanishing Hessian only hold for block-diagonal components of the Hessian. They need to argue for why these components are more important than off-diagonal blocks. Additionally, the baselines in the experiments are rather limited. SoftPlus is not the only smooth alternative to ReLU. The Swish activation and similar functions could have been compared as well. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Why only consider diagonal Hessian components, while off-diagonal blocks of ReLU networks can be non-zero? 2. How do baselines other than SoftPlus (e.g., Swish) perform? 3. In the spirometry experiment, what is the conclusion? Can you trust or validate that the interactions revealed by SmoothHess are real? Does it help predict which datapoints to discard, or is it about which ones not to drop? 4. Regarding the quadratic computation of SmoothHess, is there an alternative for feature interactions which is not quadratic? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The paper does discuss limitations, notably the quadratic cost in feature dimensions, which is prohibitively expensive for large models/datasets. I don't see an immediate societal impact issue. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work. Below we respond to the weaknesses and questions brought up. --- ## Weaknesses **W1: Impactfulness and non-zero Hessian** We kindly wish to clarify any misunderstandings about the Hessian we are computing and its use in our work. We are interested in the Hessian used in interpretability [1,2,3,4], which is the Hessian with respect to the input $x \in \mathbb{R}^d$, with elements of the form $\frac{\partial^2 F(x)}{\partial x_i \partial x_j}, \ i,j \in \{1, \ldots, d\}$. The interpretation of this quantity is that, following the intuitions from calculus, $\frac{\partial^2 F(x)}{\partial x_i \partial x_j}$ represents the interaction between features $x_i$ and $x_j$ in affecting the model's prediction for any specific point $x$, i.e., $F(x)$. In the particular comment, the Reviewer refers to the Hessian with respect to the weights ($\frac{\partial^2 F(x)}{\partial W_i^l \partial W_j^k}$), which is also a very interesting quantity and important for optimization, e.g., for second-order methods. However, in this work we are not interested in this particular Hessian. In this respect, we neither need to assume nor have a block structure. For the fact that the Hessian (with respect to inputs) of ReLU networks is 0 almost everywhere, please see, e.g., [1]. We convey a proof sketch here: first, observe that any affine function $f : \mathbb{R}^d \rightarrow \mathbb{R}^c$ of the form $f(x) = Wx + b, \ x \in \mathbb{R}^d, W \in \mathbb{R}^{c \times d}, b \in \mathbb{R}^{c}$ has a Hessian that is 0. From here one may see that any piecewise linear function has a 0 Hessian wherever it is defined. Further, note that the composition of piecewise linear functions is piecewise linear. Deep ReLU networks are proven to be compositions of piecewise linear functions [5]. **W2: Limited scope of baselines** Following the Reviewer's suggestion, we ran new perturbation mean-squared-error (P-MSE) experiments comparing with Swish-smoothed networks. We kindly point you toward the global response (G3) for an explanation of our experiment, and Table 1 of the global response PDF for the results. SmoothHess outperforms Swish in all 9 permutations of the P-MSE experiment we have run. --- ## Questions **Q1 and Q2** Please see the responses to W1 and W2 above, respectively. **Q3: Regarding Spirometry Case-Study** We kindly point you toward the global response (G2) for clarification on the spirometry experiment, and the conclusions that may be drawn from it. **Q4: Regarding quadratic cost** Methods which aim to quantify second-order interactions between every pair of features for $d$-dimensional inputs are generally $\Omega(d^2)$, given that each pair of the $d$ elements is compared. We are not aware of works which circumvent this inherent issue. --- ## References [1] Joseph D Janizek, Pascal Sturmfels, and Su-In Lee. Explaining explanations: Axiomatic feature interactions for deep networks. J. Mach. Learn. Res., 22:104–1, 2021. [2] Michael Tsang, Dehua Cheng, Hanpeng Liu, Xue Feng, Eric Zhou, and Yan Liu. Feature interaction interpretability: A case for explaining ad-recommendation systems via neural interaction detection. In International Conference on Learning Representations, 2019. [3] Samuel Lerman, Charles Venuto, Henry Kautz, and Chenliang Xu. Explaining local, global, and higher-order interactions in deep learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1224–1233, 2021. [4] Sahil Singla, Eric Wallace, Shi Feng, and Soheil Feizi.
Understanding impacts of high-order loss approximations and features in deep learning interpretation. In International Conference on Machine Learning, pages 5848–5856. PMLR, 2019. [5] Randall Balestriero and Richard Baraniuk. Mad max: Affine spline insights into deep learning, 2018. --- Rebuttal Comment 1.1: Title: Thanks for clarifications Comment: Thank you for clarifying which Hessian you are using, which I had misinterpreted. For the Hessian w.r.t. the input, it is clear that the off-diagonal terms are zero for ReLU networks. Also, thanks for the Swish experiments. I will increase my score.
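For reference, the input-space SmoothHess discussed in this exchange can be estimated with only gradient calls. Below is a bare-bones Monte Carlo sketch for an isotropic Gaussian $\mathcal{N}(0, \sigma^2 I)$, using the Stein's-Lemma identity $\nabla^2 (f * q)(x_0)_{ij} = \frac{1}{\sigma^2}\mathbb{E}[\partial_i f(x_0 + \delta)\, \delta_j]$; it is our own illustrative rendering and omits the variance-reduction details of the paper's estimator:

```python
import torch

def smoothhess_isotropic(f, x0, sigma, n_samples=1000):
    """Monte Carlo estimate of the Hessian of the Gaussian-smoothed
    network at x0, using first-order (gradient) information only.
    `f` maps a flat input tensor to a scalar output (e.g., a logit)."""
    d = x0.numel()
    H = torch.zeros(d, d)
    for _ in range(n_samples):
        delta = sigma * torch.randn_like(x0)
        x = (x0 + delta).detach().requires_grad_(True)
        grad = torch.autograd.grad(f(x), x)[0].flatten()
        H += torch.outer(grad, delta.flatten()) / sigma ** 2
    H /= n_samples
    return 0.5 * (H + H.T)  # the smoothed Hessian is symmetric
```

Note that for a ReLU network, the unsmoothed input Hessian computed by double backpropagation would be exactly zero at almost every $x_0$, which is precisely why the smoothing is needed.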
Summary: The paper proposes to compute the Hessian of a ReLU neural network by convolving the network function with a Gaussian distribution. It is equivalent to (or covers) some prior work, e.g., SmoothGrad. The method is generally easy to understand and requires no modification of the network structure. A non-asymptotic bound on the sample complexity is provided.

Strengths:
-- The proposed smooth surrogate is easy and straightforward to understand. The computation method, which uses Stein's lemma, is somewhat novel, following [40] with a mild extension to all Lipschitz continuous functions; it addresses the challenge posed by the zero Hessian of piecewise-linear ReLU networks. The paper provides a non-asymptotic bound on the sample complexity of the estimation procedure.
-- The proposed SmoothHess can be applied post-hoc and does not require any modifications to the ReLU network architecture. This allows analysis and interpretation of the interactions of existing ReLU neural networks without retraining or redesigning the model.
-- Efficient sampling: SmoothHess utilizes an efficient sampling algorithm, which only requires network gradient calls, making it computationally feasible for large-scale networks and complex datasets.

Weaknesses:
-- Scope limited to ReLU networks: While SmoothHess addresses the challenge of the zero Hessian in ReLU networks, its applicability is limited to this specific activation function. It would be useful to discuss the potential generalizability of the method to other types of neural networks with different activation functions.
-- Generalization beyond simple datasets: While SmoothHess can be applied to large models and datasets thanks to its simple computations, it would be beneficial to clarify the generalizability of SmoothHess and the potential limitations in terms of dataset diversity.

Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Some notation is confusing. E.g., in the caption of Figure 2, is it $\log(10\sigma)$ or $\log_{10}{\sigma}$? Also, in the text, "Aside from at minute...", what does "minute" mean here?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work. Below we respond to the weaknesses and questions brought up.

---

## Weaknesses

**W1: Limitation of scope to ReLU**

We appreciate the comment as we feel this is an important point to clarify in our work. Our method is not limited only to ReLU networks; in fact, it is of use in far broader settings. We focus our presentation on ReLU networks because, for their logits and internal neurons, smoothing _is required_ for the computation of non-zero Hessians. Proposition 1 states an equivalency between the expectation our method estimates and SmoothHess for _any Lipschitz continuous function_. Thus our method is applicable to any Lipschitz continuous network or model for which one has access to the gradients (i.e. can backpropagate through). This covers a wide range of networks, for instance networks composed of convolutional layers, fully connected layers, max pooling, common activation functions such as sigmoid, skip connections, and batch normalization, among other frequently used network building blocks [1].

SmoothHess still provides useful information for Lipschitz networks which have a non-zero Vanilla (unsmoothed) Hessian. Although the Vanilla Hessian does provide information pertaining to the behavior of such functions, the locality of this information is infinitesimally small. On the other hand, SmoothHess has a smoothing parameter $\Sigma$ which may be chosen to model the interactions over neighborhoods of varying size, providing the user great flexibility. We plan to emphasize this point in our camera-ready paper.

**W2: Generalization to larger datasets and models**

Indeed, the input size matters, which we point out in the limitations section. Specifically, we note that the most computationally expensive aspect of SmoothHess is the outer products, which are $\mathcal{O}(d^2)$. For large $d$, this becomes expensive. We gave the example of ImageNet, which has $d^2 \approx 10^{10}$. However, we do not view this as a limitation relative to other works, since interaction methods are generally $\Omega(d^2)$ [2,3,4], as each pair of $d$ features must be compared. In our work, we have run SmoothHess for inputs of up to size $d=3072$, although we have not exhaustively checked the limit for which SmoothHess is feasible.

From a qualitative perspective of dataset diversity, we have managed to capture feature interactions for a number of different datasets: MNIST, FMNIST, CIFAR10, real-world spirometry regression, the synthetic Four Quadrant dataset, and the synthetic Nested Interactions dataset found in Appendix F.3. We also wish to highlight the new experimental results reported in the global response (G4), which provide evidence that SmoothHess can generalize to larger models. Specifically, our results show that SmoothHess can capture feature interactions for a ResNet101. Table 2 of the global response pdf shows our results for this experiment.

---

## Questions

**Q1: Figure 2 caption.** Thank you for pointing this out. We mean $\log_{10} \sigma$ and will add the subscript to make this more explicit in the camera-ready version.

**Q2: Meaning of “minute”** Here, “minute” means “very small”.

---

## References

[1] Gouk, Henry, et al. "Regularisation of neural networks by enforcing Lipschitz continuity." Machine Learning 110 (2021): 393-416.
[2] Joseph D Janizek, Pascal Sturmfels, and Su-In Lee. Explaining explanations: Axiomatic feature interactions for deep networks. J. Mach. Learn. Res., 22:104–1, 2021.
[3] Michael Tsang, Sirisha Rambhatla, and Yan Liu. How does this interaction affect me? Interpretable attribution for feature interactions. Advances in Neural Information Processing Systems, 33:6147–6159, 2020.
[4] Dhamdhere, Kedar, Ashish Agarwal, and Mukund Sundararajan. "The Shapley Taylor interaction index." arXiv preprint arXiv:1902.05622 (2019).

---

Rebuttal Comment 1.1: Comment: I have read the authors' reply. Many thanks to the authors for the clarifications. My concerns and questions are mostly satisfactorily addressed. I have no further questions and will remain positive about this paper.
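Since the rebuttals above lean on the fact that only gradient calls are needed, a sketch of a Stein's-lemma-style Monte Carlo estimator of the smoothed Hessian may help. This is our own illustration under isotropic smoothing $\Sigma = \sigma^2 I$, not the authors' code; it uses the identity $\nabla^2 (f * q_\sigma)(x) = \mathbb{E}[\nabla f(x+\delta)\, \delta^\top] / \sigma^2$ for $\delta \sim \mathcal{N}(0, \sigma^2 I)$, which follows from applying Stein's lemma to each gradient coordinate. The exact estimator and variance-reduction choices in the paper may differ.

```python
import torch

def smooth_hessian(f, x, sigma=0.1, n_samples=10_000):
    """Monte Carlo estimate of the Hessian of the Gaussian-smoothed f at x,
    via the Stein-type identity
        H_sigma(x) = E[ grad f(x + delta) delta^T ] / sigma^2,
    delta ~ N(0, sigma^2 I). Only first-order (gradient) oracle calls are
    needed. Assumes f maps an (n, d) batch to (n,) per-sample scalars.
    """
    d = x.numel()
    delta = sigma * torch.randn(n_samples, d)
    xs = (x.unsqueeze(0) + delta).detach().requires_grad_(True)
    grads = torch.autograd.grad(f(xs).sum(), xs)[0]          # (n, d) grads
    # Outer products grad_k delta_k^T, averaged over samples: (d, d).
    H = (grads.unsqueeze(2) * delta.unsqueeze(1)).mean(0) / sigma**2
    return 0.5 * (H + H.T)                                   # symmetrize
```

Applied to the ReLU network from the earlier sketch (e.g. `smooth_hessian(lambda z: net(z).squeeze(-1), torch.randn(4))`), this returns a non-zero matrix even though the unsmoothed input Hessian is zero almost everywhere.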
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their thoughtful questions and comments, which have helped us to improve our work.

---

## Recurring Questions

Below we respond to recurring questions among the reviewers. We also give individual responses to all the reviewer questions under each reviewer’s comments.

**(G1) Generality of results:** Reviewers GnHq and bZYQ raised questions about the generality of our results for non-ReLU networks or functions. As highlighted in Proposition 1 in the paper, SmoothHess may be estimated for any Lipschitz continuous function, given one can query the gradient. We focus our presentation on ReLU networks as smoothing _is required_ in order to evaluate a non-zero Hessian. SmoothHess is still useful for Lipschitz continuous networks which have a non-zero Hessian. The reason for this is that this Hessian reflects only a very small region, while SmoothHess can incorporate information from larger regions by adjusting the smoothing parameter $\Sigma$.

**(G2) Spirometry experiment:** Reviewers yoRr, FahQ, and HbQW expressed a need for clarification of the spirometry regression case-study. The purpose of the spirometry experiment is to qualitatively evaluate whether SmoothHess can be used to understand how ReLU models make predictions. FEV$_1$, a metric frequently used to evaluate lung health, is always calculated within the first 2 seconds of the spirometry curve, and _is known to be strongly affected by the presence of coughing_ [1]. In each curve, coughing may occur during, or after, the first 2 seconds. The time when coughing occurs, indicated by plateaus in the spirometry curves in Figure 3, should be an important signal for any model trained to predict FEV$_1$. We aim to test whether SmoothHess can detect interactions between the initial segment of the curve and segments that indicate coughing (as characterized by plateauing in the curve). To this end, we train a network to predict FEV$_1$ for spirometry samples that exhibit coughing. When we apply SmoothHess in Figure 3, we find that indeed the strongest interactions for the first 0.5-second curve segment are where plateaus occur in the curve, indicating coughing. We will add this clarification to the camera-ready revision.

---

## New Results

We also present new results in the global response pdf, which we explain below.

**(G3) Table 1**: Reviewer HbQW asked for a comparison with the Hessian and gradient of a Swish smoothed network. Swish is a smooth activation function that may be parameterized by a smoothing value $\beta$ [2]. We compare the P-MSE of the Hessian + Gradient and Gradient of the Swish smoothed network with our previous results for the predicted class logit on MNIST, FMNIST, and CIFAR10, over three different radii $\varepsilon = 0.25, 0.50, 1.00$. Following the methodology used for SoftPlus in our work, we select the best value of $\beta$ for Swish based on the validation set performance from a set of over one hundred options. This is done separately for Swish Hessian + Swish Gradient (Swish (H + G)) and Swish Gradient (Swish G), for each dataset, and for each locality. Our new results indicate that the second-order Taylor expansion using SmoothHess and SmoothGrad (our method) outperforms both Swish (H + G) and Swish G in all cases. Swish (H + G) performs particularly poorly on CIFAR10. We will include these results in our camera-ready Appendix.

**(G4) Table 2**: Reviewers yoRr and GnHq were both interested in the ability of SmoothHess to generalize to larger networks.
We train a ResNet101 ($\approx 44.5$M parameters) on CIFAR10 and calculate P-MSE results for SmoothHess + SmoothGrad (our method), SmoothGrad, and the vanilla (unsmoothed) gradient at three radii $\varepsilon = 0.25, 0.50, 1.00$. Our method significantly outperforms both SmoothGrad and the vanilla gradient in each case. This provides evidence that SmoothHess can model interactions in large networks such as ResNet101, improving upon first-order models.

Due to the size of ResNet101, we did not have the time to evaluate SoftPlus or Swish results for it. This is due to the expensive validation procedure (mentioned in G3) that must be used for SoftPlus and Swish. In our original P-MSE experiment, we validate over $100$ different values of the SoftPlus parameter $\beta$ on a held-out set, before selecting the best to use on the test set. This is because there is no clear-cut way to choose $\beta$ given a desired locality for smoothing. In contrast, our procedure for choosing $\sigma$ for SmoothHess before computing P-MSE exploits well-known properties of the Gaussian to curate a set of only three values of $\sigma$ for validation (as described in lines 171-177 and 269-275). We plan to validate the best values of $\beta$ for SoftPlus and Swish on the ResNet101 and add the results to Table 2, which we will include in our camera-ready Appendix.

We note that there is little reason to suspect SoftPlus or Swish will be competitive with SmoothHess. As described in (G3), SmoothHess outperforms the Swish Hessian in all cases. Further, our original results show SmoothHess outperforming SoftPlus on 17 of the 18 P-MSE experiments, with one tie. However, Reviewer FahQ inquired as to the printing precision (number of significant figures) of the reported tie, and, once checked, we found that SmoothHess achieved a lower P-MSE.

---

If any reviewers have further questions or wish us to elaborate on our responses, we would be happy to address them over this upcoming discussion period.

---

## References

[1] Luo AZ, Whitmire E, Stout JW, Martenson D, Patel S. Automatic characterization of user errors in spirometry. Annu Int Conf IEEE Eng Med Biol Soc. 2017 Jul;2017:4239-4242. doi: 10.1109/EMBC.2017.8037792. PMID: 29060833.
[2] Ramachandran, Prajit, Barret Zoph, and Quoc V. Le. "Searching for activation functions." arXiv preprint arXiv:1710.05941 (2017).

Pdf: /pdf/485e367d3276af4eb81904ce822155e26d2e2344.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The work proposes an approach for measuring feature interactions for neural networks that have no useful second-order gradients or Hessians (notably those employing ReLU). The approach convolves a smooth normal with the network as a proxy with usable Hessians and then offers a means of estimating them using sampling of first-order gradients, thereby making it more tractable than methods that rely on computing Hessians naively. Experiments evaluate the proposed method and several adaptations of existing methods for computing 1) the third term (second-order gradient) of the Taylor expansion of the output of a neural network around an input, and 2) adversarial inputs via optimization using (1). In these experimental terms, the proposed approach (aided by SmoothGrad for the 2nd Taylor term) performs better than those based on only the first-order gradient or those that include both first- and second-order gradients but proxy ReLU models using SoftPlus activation.

Strengths:
+ Almost strictly better experimental results than the compared-to methods.
+ Hessians for networks which don't naively have them due to technicalities should be generally useful in a wide variety of applications.

Weaknesses:
- Experimentation focuses on agreement between a network and one evaluated as per Taylor expansion using SmoothHess. While this may be a fairly general test, it does not offer any insight into benefits in cases where the method would actually be useful, with no trivial alternatives. Presumably, in most cases we can already evaluate the network regardless of Hessian non-existence, so at surface level there is no need for the Taylor expansion and thus the Hessian of the smooth proxy. The adversarial attack is also based on use of the Taylor expansion of the smoothed model as a proxy, so it does not improve this problem. The final case-study does present interaction results but is difficult to follow (see questions below). I suggest, then, that the paper include evaluation of the methods that focus on the Hessian, instead of network output. One example is to test adversarial attacks on gradient-based attributions by gradient descent on the gradient (i.e. using the Hessian). Compare to other methods for attacking attributions.
- Tradeoffs with respect to the parameters are not explored or described. The use of smoothing serves to increase the impact of network regions further away from each point, but smoothing is a double-edged sword. If smoothing (and subsequent sampling) is insufficient compared to the sizes of linear network regions, the results may not be useful. On the other hand, large enough smoothing reduces the network to a constant. One impact of this tradeoff is how fast gradient descent can arrive at good solutions. Thus, in addition to the above suggestion of testing the usefulness of the Hessian, I also suggest studying the effect the parameters (or smoothness) have on that usefulness. For example, I expect sufficiently significant smoothness to result in useless gradients of gradients for attacking attributions.

Smaller things:
- What is δ (without subscript) in Equation 8a? Did you mean δᵢ?
- On line 117, the expectation and perturbation are presumed not significant to the point that the Hessian is 0 almost everywhere.
- By "second-order interactions" did you mean interactions via second-order gradients? Interactions seem already second-order, so saying "second-order interactions" suggests third-order gradients.
- In Table 1, are the two best (bolded) results in the 4th numerical column identical, or is it just that the printout is identical?
- Around line 255, you write "We denote Δ* ... as the output after x₀ is attacked". This may be a bit confusing as I think you mean Δ* as the perturbed input, not the network output. Is that right?
- Line 349 writes "Interestingly, the smaller kernel width constraints the interactions to features within a small locality." Why is that interesting? Isn't that expected?
- Typo near "between between".
- The Adversarial Attacks paragraph on line 254 and Equation 12a are missing the target class of the attack.

Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
- Conceptually, when sampling as per MC-estimation, if samples end up not spanning more than 1 linear region of the original network, would the estimate be zero in the same sense as the second-order gradient is zero?
- What is the impact of smoothness/covariance on the sample bound as per Equation 9?
- Does "SP H + G" mean "SoftPlus (Hessian + Gradient)" or "(SoftPlus Hessian) + (ReLU Gradient)"? Line 285 suggests the latter but the label/caption suggest the former. If the former, the comparison may not be as fair as it could be.
- The example of Figure 3 is difficult to understand. Why are the arrows labeled "interaction" crossing from one method to another? Also, SmoothGrad vs. SmoothHess is indicated by both color and the separate graphs? Did you mean that green/red are indicators of positive or negative interactions for both of those methods or for one respectively? Or perhaps the second of each pair of graphs shows the interaction of the other timesteps with the first timestep only? Also, what is the kernel width for the example shown in Figure 3?
- The covariance matrix is set in a specific way in the text, but the benefit of the option to set it otherwise is mentioned. How would one go about setting it alternatively and for what purposes?

Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The work describes experiments with adversarial attacks which could use some mention in relation to negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our work. **We've addressed your main concerns in our official rebuttal. Other concerns have been addressed in a comment due to the character limit.**

---

**W1 Experimentation Focus**

We emphasize that, while we agree as to the potential for use in optimizations, SmoothHess is foremost meant to quantify feature interactions. Assessing how well any given interpretability method explains network behavior is non-trivial: it is often subjective and dependent on expert knowledge. A number of techniques have been introduced to evaluate univariate methods [1,2]. Note that these methods don’t necessarily correspond to real-world uses. They are used as a quantitative proxy for interpretation quality, but are not the final goal. As SmoothHess and SmoothGrad together define a second-order Taylor expansion, we developed P-MSE and the adversarial attack experiment as intuitive quantitative measures of interaction quality, using this Taylor expansion as a proxy. Low values of P-MSE may indicate that the average interactions are captured. Successful adversarial attacks may indicate that extremal interactions are captured.

We find your experiment valuable, but believe it may be subject to the same comments given regarding our adversarial attack experiment. While the use of SmoothHess to attack the function $f$ presupposes access to $f$, the use of SmoothHess to attack $\nabla f$ presupposes access to $\nabla f$, which is used for SmoothHess computation. We believe both experiments are interesting, but note that the adversarial attack on $f$ may be more in line with our goal of interpreting interactions for $f$. This is as opposed to interpreting first-order importance for $\nabla f$, which is also interesting. We leave this for future work. As stated above, the main application of SmoothHess is to interpret network behavior. This is exemplified by the spirometry case study, which we clarify in the global response (see G2).

**W2 Smoothing Tradeoffs**

We agree with this need and note that we consider the benefits of SmoothHess to be highlighted from this perspective. First, from our viewpoint, the trade-off from smoothing is not inherent, i.e. “smoothing level $\sigma$ is too high for a given point/network”, but rather predicated on the information one is searching for. E.g., assume one is given a point $x \in \mathbb{R}^d$ and network $f : \mathbb{R}^d \rightarrow \mathbb{R}$, and they wish to find the interactions in the radius 0.001 ball: $B_{0.001}(x) \subset \mathbb{R}^d$. If there exists a linear region $Q \subseteq \mathbb{R}^d$ s.t. $B_{0.001}(x) \subseteq Q$ then, from the Hessian perspective of interactions, there is _no interaction occurring_ in $B_{0.001}(x)$. In this scenario, using a tiny $\sigma$ would actually be appropriate for answering the user's question, _precisely because it would result in a near zero Hessian._

Herein lies the relative benefit of our method over SoftPlus. Given a ball of radius $r$ over which a user wishes to model interactions, $\sigma$ can be chosen so that SmoothHess draws samples from near the boundary of this ball. This is because the $d$-dimensional Gaussian with covariance $\sigma^2 I$ converges to a uniform distribution over the sphere of radius $\sigma \sqrt{d}$ as $d$ goes to $\infty$ [3]. Thus $\sigma$ may be set to $\frac{r}{\sqrt{d}}$, ensuring that SmoothHess uses samples near this sphere. This is explained in lines 171-177.
Although this convergence occurs as $d$ goes to $\infty$, we have observed that it is close to reality for the datasets used, $d = 300, 784, 3072$. We validate this approach in our P-MSE experiment: SmoothHess with $\sigma$ chosen based upon $\frac{r}{\sqrt{d}}$ achieves the best P-MSE over competing methods. In contrast, the smoothing parameter $\beta$ for SoftPlus has no clear-cut connection to a locality that can be exploited.

Lastly, we point to the Nested Interactions Experiment in App. F.3, in which we assess the ability of SmoothHess to pick up interactions as $\sigma$ is varied. We sample a set of points $x_1, \ldots, x_N \in \mathbb{R}^2$ uniformly from $[-2,2] \times [-2,2]$ and set targets $y(x)$ by: $x \in B_{0.6}(0) \implies y(x) = \frac{1}{2} x_1^2 + x_1 x_2$; $x \in B_{1.2}(0) \backslash B_{0.6}(0) \implies y(x) = x_1 x_2$; $x \in \mathbb{R}^2 \backslash B_{1.2}(0) \implies y(x) = -5 x_1 x_2$. We train a network to “memorize” this dataset. Thus, we have created “ground truth” interactions for _network behavior_ which change with distance to the origin. We estimate SmoothHess at $x_0 = 0$ with $\sigma \in \{10^{-6}, \ldots, 10\}$ and plot the interactions in Figure 5a. The interaction between $x_1$ and $x_2$ should be $\approx 0$ within a single region. At slightly larger localities one should expect to capture the noisy behavior of the network. Then an interaction of $1$ should occur, followed by a decrease to $-5$. Our results in Figure 5 capture this behavior. The behavior at the largest $\sigma$ is due to the many samples incorporated outside of the training distribution.

---

**S4 Precision**

This is due to precision. Checking with higher precision, we found SmoothHess P-MSE = 4.9e-8 and SoftPlus Hess P-MSE = 5.5e-8. Thus, SmoothHess achieves the lowest P-MSE in all cases. We will add this to the caption.

**Q3 Fair comparison**

We use SP H + SP G. We feel this choice is fairer. Table 1 in the response pdf shows SP G significantly outperforms (ReLU) G. We will make this clear in the final version.

---

**References**

[1] Hooker, Sara, et al. "A benchmark for interpretability methods in deep neural networks." Advances in Neural Information Processing Systems 32 (2019).
[2] Chen, Jianbo, et al. "Learning to explain: An information-theoretic perspective on model interpretation." International Conference on Machine Learning. PMLR, 2018.
[3] Roman Vershynin. High-dimensional probability: An introduction with applications in data science, volume 47. Cambridge University Press, 2018.

---

Rebuttal Comment 1.1: Title: Re: Supplementary Clarifications
Comment: In the official rebuttal, we referenced a comment we would be making below our official response to the Reviewer, clarifying more points, which we had already prepared. However, upon further re-reading of the guidelines and the new 6,000 character limit rule, we are now uncertain whether providing this would violate any policy and have thus decided not to post it. Thank you for your thoughtful questions, which we have tried to answer within the given character limit. If you have any specific questions please feel free to reach out during the discussion phase and we would be happy to respond.
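The $\sigma = r/\sqrt{d}$ rule discussed above is easy to sanity-check numerically. A minimal sketch (ours, with an assumed radius and a CIFAR10-like dimension): samples from $\mathcal{N}(0, \sigma^2 I_d)$ concentrate tightly at norm $\sigma\sqrt{d} = r$, so the smoothing distribution indeed probes near the sphere of radius $r$.

```python
import torch

torch.manual_seed(0)
r, d = 1.0, 3072                  # target radius; CIFAR10-like dimension
sigma = r / d ** 0.5              # the sigma = r / sqrt(d) rule

delta = sigma * torch.randn(100_000, d)
norms = delta.norm(dim=1)
print(norms.mean().item(), norms.std().item())
# mean ~ 1.0 (= r), std ~ sigma / sqrt(2) ~ 0.013: the samples
# concentrate near the sphere of radius r, as claimed.
```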
Summary: The author introduces a novel method to approximate the Hessian of a smoothed loss function at a specific point. By incorporating a variation of Stein’s lemma, they cleverly estimate the gradient over a smoothed surrogate of a ReLU network at that point. To gauge the method’s effectiveness, they measure its performance by testing the normalized L2 distance between the loss function and its first-order and second-order approximations within a ball centered around the given point.

Strengths: The method offers a straightforward and comprehensible approach, presenting a powerful and novel tool for approximating the Hessian of a ReLU network at a specific point. The author’s analysis is comprehensive and remarkably easy to follow. The experiments conducted in the paper appear relevant, and the obtained results show promising potential.

Weaknesses: While the ideas and methods of the paper are solid, I came across some readability issues.
1. The statement of Theorem 3 (line 211) is confusing. The real positive $\delta$ is used without declaration, while the random variables $\delta_1,\ldots,\delta_n$ are declared. This makes the reader wonder how one can compare a probability with a random vector, and what is the meaning of this inequality, which is essentially a random variable itself. I had to read the proof in the appendix to fully understand the statement of the theorem. I think using a different symbol for $\delta$ might increase readability.
2. I would add the definitions of the oracles used in the article for clarity.

Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors:
1. line 103 "An general L-hidden..." is a typo?
2. line 287 "with the non-smooothed..." is a typo?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The article addresses the time and space complexity of the method in the presence of a high-dimensional input.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your positive assessment of our work. Below, we address the weaknesses and questions that you have brought up.

## Weaknesses

---

**W1: Regarding the statement of Theorem 3.**

Thank you for your careful reading of the statement of Theorem 3. We agree with the reviewer’s suggestion. In the camera-ready version, we will change the real positive $\delta$ to $\gamma$ in the statement and proof of Theorem 3. We will also explicitly declare $\gamma$ in the statement of Theorem 3.

**W2: Regarding oracle definitions**

We will add the definitions of both oracles to the camera-ready version. Namely, a zeroth-order oracle is a function call. That is, it is an oracle that, given a function $f : \mathbb{R}^d \rightarrow \mathbb{R}$ and input $x \in \mathbb{R}^d$, returns $f(x)$. In the context of our work, this amounts to a neural network forward pass. Likewise, a first-order oracle is an oracle which, given $x$, returns $\nabla_x f(x)$. In the context of our work, this amounts to a network backpropagation call.

## Questions

---

**Q1 and Q2: Typos: “An general L-hidden … “, “non-smoothed”**

Thank you for catching these. We will change the sentence to “A general L-hidden … “ and “non-smoothed” to “unsmoothed” in the camera-ready version.
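The two oracle definitions in W2 map directly onto one forward pass and one backpropagation call. A minimal sketch (the network here is a hypothetical stand-in, not the paper's model):

```python
import torch

# A hypothetical network standing in for f; any differentiable model works.
net = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(),
                          torch.nn.Linear(32, 1))

def zeroth_order_oracle(x):
    """f(x): one network forward pass."""
    return net(x).squeeze()

def first_order_oracle(x):
    """grad_x f(x): one forward pass plus one backpropagation call."""
    x = x.clone().requires_grad_(True)
    return torch.autograd.grad(net(x).squeeze(), x)[0]

x = torch.randn(8)
print(zeroth_order_oracle(x).item(), first_order_oracle(x).shape)
```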
Generalization in Neural Operator: Irregular Domains, Orthogonal Basis, and Super-Resolution
Reject
Summary: This paper studies the generalization error of neural operators that contain kernel operations. Under the basic setting of neural operators (such as FNO), this paper establishes upper bounds on the excess risk of neural operators. How to apply NOs to irregular domains, and the error analysis of super-resolution, are also discussed. The techniques used in this paper are rather standard. Some parts of the upper bounds are not clear. I think the contribution of this work is incremental.

Strengths:
1. This paper provides an upper bound for the excess risk of neural operators.
2. The upper bound in this paper improves the one in [14].
3. Extension of NOs to irregular domains is discussed.
4. The super-resolution error is studied.

Weaknesses:
1. The technique used in this paper is pretty standard. The authors should emphasize their novelties.
2. The theorems in this paper are not impressive and are unclear. For example, the error bound in Theorem 3.1 depends on the network structure, parameters, and the covering number of the space $B$, which is infinite-dimensional. If the norms of network parameters are all larger than 1, the upper bound can be very large. Since the space $B$ is infinite-dimensional, the covering number can also be very large, which makes the result less attractive. On the other hand, the magnitude of the training loss is also unclear. I believe the training loss depends on the network's width and depth, which relate closely to the upper bound in this paper. It would make the paper much stronger if these relations were analyzed clearly and a clearer upper bound were derived.
3. For applying NOs to problems with irregular domains, the authors only give an example on an unbounded domain. The case of arbitrary irregular domains is only briefly discussed. However, the authors claim the construction of NOs on arbitrary domains as the second contribution. More details on this part should be given.

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
1. The upper bound in Theorem 3.1 depends on the covering number of the set $B$, which is an infinite-dimensional space. Could the authors discuss how to estimate and bound the covering number?
2. The bounds in this paper are only for excess risks, which also depend on the training loss. Could the authors discuss how to bound the training loss?
3. For the super-resolution error, why does the upper bound not depend on $N_{grid,test}$?
4. In Section 6.1, how is the upper bound computed? The upper bound in this paper requires a tradeoff between $\gamma$ and $N_{train}$. How is this tradeoff made and how is the covering number computed?

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are not discussed in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Questions are addressed as below.

>Q1: Novelty?

**A1**: See general response.

>Q2: Theorems are not impressive and are unclear...

**A2**: **Network parameter norm larger than 1**. First, this aligns with the effectiveness of the commonly used L2 regularization. Generally, employing regularization tends to lead to smaller norms of model parameters, which in turn correspond to reduced generalization error. Furthermore, our intention is to convey to the readers that, by selecting an appropriate basis, the norm of the model parameters does not need to be excessively large (meaning the function represented by the model is not overly complex) to achieve a good fit of the target function. Thus, simpler models generalize better to unknown test data. This is consistent with the performance of various bases in our experiments (Experiment 6.4). Additionally, our derived bound outperforms prior bounds (Experiment 6.1).

**Covering number of input space $B$**. See A4.

**Training loss**. See our answer in A5.

>Q3: On irregular domains

**A3**: We chose to test our approach on an unbounded domain due to its notably challenging nature, which has not been tackled before by others. For existing methods, dealing with an entire infinite domain poses considerable difficulties in selecting function-value sampling points for model input. Our methodology, facilitated by the orthogonal basis setup, provides an easily manageable solution for point selection using the quadrature points on unbounded domains, which are in turn strongly correlated with the problem nature and the input functions, as validated by our experiment on DFT in Section 6.3. Regarding the conventional irregular domain scenario, due to constraints of paper length, we devoted limited discussion to it. Nevertheless, we underscore the universality of our framework in addressing such cases, since an orthogonal basis can be constructed on arbitrary domains (see lines 246-256 on page 6). While our treatment may be concise in the current presentation, we assert that our approach is inherently adaptable and applicable to arbitrary irregular domains, representing a significant facet of our second contribution.

>Q4: On the covering number of the set $B$

**A4**: Owing to the finite precision at which computers handle input, functions are essentially represented by a series of discrete values. Consequently, the input space is of finite dimensionality, leading to a finite covering number. It is also important to note that Theorem 3.1 works for the discretized neural operators implemented in practice, by substituting the infinite-dimensional $l^2$ space with the corresponding finite-dimensional Euclidean space, in which the input set $B$ has finite dimension. When considering continuous function spaces, two primary scenarios arise. Firstly, in cases where high-dimensional data resides on a lower-dimensional manifold, the input set $B$ assumes a lower dimensionality, thus resulting in a finite covering number. Secondly, in situations where the input set $B$ genuinely exists in an infinite-dimensional space and possesses an infinite covering number, our theory accommodates this circumstance by revealing the impracticality of machine learning in such a context. To elaborate, when $B$ has an infinite covering number, finite point sampling inadequately represents the distribution of training data.
Consequently, any learning procedure grounded in finite training data is insufficient for deducing the inherent characteristics of the operator itself. This scenario embodies the essence of our theoretical framework, serving as an indicator of when operator learning becomes unattainable.

>Q5: On the training loss.

**A5**: Fundamentally, this paper provides posterior bounds, and generalization is based on the training loss and model complexity. This setting is widely adopted within the field of statistical learning theory. The bound for the training loss constitutes a consideration distinct from the focus of our investigation. Existing theoretical results demonstrate the universal approximation capabilities of neural operators (arxiv.org/abs/2107.07562). Consequently, with appropriate optimization, the training loss can indeed approach minimal values; this assertion of low training loss is supported by arxiv.org/abs/2002.08709.

>Q6: For the super-resolution error, why does the upper bound not depend on $N_{grid,test}$?

**A6**: We assume the true super-resolution error is evaluated using continuous integration, i.e., corresponding to the case where $N_{grid,test}$ tends towards infinity. This represents the true essence of the super-resolution error. However, in practical situations, $N_{grid,test}$ is finite, offering an approximation to the continuous integration. We have defined the above-mentioned setting in line 83 on page 2. For any given finite $N_{grid,test}$, our theoretical framework can be effectively extended.

>Q7: In Section 6.1, how is the upper bound computed?

**A7**: See **A4** for bounding the covering number of $B$. We choose $\gamma = 0.1$, and computing the covering number of a discretized finite-dimensional function space is the same as computing the covering number of a finite-dimensional vector space containing all the data points. Supposing that the input space is normalized to $[0,1]^d$ ($d$ is the input dimension), which is a common practice in deep learning for training stability, we simply need to place $1/\gamma$ different grid points at intervals of $\gamma$ along each dimension of the input space. This set of grid points forms a $\gamma$-cover for the input space, and the cardinality of this set can be calculated. Furthermore, the techniques in https://arxiv.org/abs/2206.13497 can be adopted to further reduce our bound's numerical value. Meanwhile, the terms $B_{l,i}, W_l$ are the trained model parameters, $M$ is the upper bound of the loss function, and $N_{train}$ is the number of training data points.

---

Rebuttal Comment 1.1: Comment: I thank the authors for the detailed response. I suggest the authors add the computation of the covering number to the main paper. I still have some questions. Regarding the network parameter norm, Experiment 6.4 only provides the experimental error. We still have no idea how large the parameter norm is. I understand that the authors want to show that simpler models have better performance. But it is unclear whether a small parameter norm will give a small training loss. The two references provided by the authors do not mention the scale of parameters. But in many existing works that discuss parameter scale, such as https://arxiv.org/abs/1610.01145, the parameters are very large. Consider an extreme case: if we require all parameters to be very close to 0, then when $N_{train}$ is large enough, your error bound will be almost 0. However, under such a setting, I think the training loss will be very large. Even though we have a very small upper bound, it does not make any sense.
Perhaps I am wrong. Could the authors provide some insights or evidence (either theoretical or experimental) on this point?

---

Reply to Comment 1.1.1: Title: Response to the reviewer
Comment: We would like to thank the reviewer for the helpful discussion. Your further question is addressed below. Let us know if you need more explanation.

>Q8: I suggest the authors add the computation of the covering number to the main paper

**A8**: Sure! Thanks for pointing this out. Actually, we refer the reviewer to lines 172-177 in the main paper, where we already explained it in the submission. We will certainly add more details in the revision.

>Q9: Consider an extreme case: if we require all parameters to be very close to 0, then when $N_{train}$ is large enough, your error bound will be almost 0. However, in such a setting, I think the training loss will be very large. Even though we have a very small upper bound, it does not make any sense.

**A9**: The phenomenon pointed out by the reviewer is actually a trade-off between model complexity and a lower training error. In practical model training, we often encounter a situation where shorter training times lead to higher training errors, resulting in less optimized models that are comparatively simpler and have smaller parameter matrix norms. On the other hand, longer optimization times tend to reduce training errors to near zero, but the resulting models can be more complex and may even lead to overfitting, corresponding to larger parameter matrix norms. Thus, this situation represents a trade-off. In practice, this trade-off is often managed through validation data. We typically set aside around 10\% of the data as validation data. During training, we select the epoch with the lowest validation error for testing. In theoretical terms, our established bounds precisely describe this trade-off. Specifically, the smaller the training loss, the higher the risk of overfitting, and conversely, a larger training loss may lead to underfitting. Therefore, we need to strike a balance using the validation loss, aiming to identify a middle point where the model performs optimally.

>Q10: But it is unclear whether a small parameter norm will give a small training loss. The two references provided by the authors do not mention the scale of parameters. But in many existing works that discuss parameter scale, such as arXiv 1610.01145, the parameters are very large.

**A10**: We greatly appreciate the reviewer for highlighting this issue, as it delves into the enigma of generalization in deep learning. Deep learning is often able to learn intricate models that exhibit the capability to generalize to unseen data, and this paradox has been a subject of significant interest. Albert Einstein has a famous quote: “In theory, theory and practice are the same. In practice, they are not.” The main objective of theoretical work, in this context, is to facilitate a comparison of the generalization capabilities of two models under identical training conditions, providing a preliminary understanding before conducting tests on unknown datasets. For instance, for the models presented in Experiment 6.4, we now supplement the computed values below. This information from the theoretical perspective enables us to discern which model is likely to perform better in terms of generalization, thereby guiding the practical decision-making process on the model choice.
Every theory has its applicable scope, and our aim is to equip users of the neural operator framework with insights into which model could exhibit superior performance on test data under the same training environment. This objective extends to our other theorems as well, such as those concerning discretization and super-resolution errors. Through theoretical analyses, we delineate three influential factors: model complexity, the numerical format of integration, and grid density. These factors have demonstrated remarkable alignment with real-world scenarios. We are confident that our theoretical framework offers superior guidance for comprehending the generalization, super-resolution, and discretization errors intrinsic to neural operators. In conclusion, the aim is to endow practitioners with intuitive insights and a mechanism for comparing different models under similar conditions. **This related work validates the relation between bound and empirical performance too, serving as a justification: arXiv 2109.09444.**

Here are the additional results:

**Model performance in Table 2 of the original paper**

| | NO-Sin/Sin | NO-Poly/Poly | NO-Sin/Poly |
| --- | --- | --- | --- |
| Advection (1) | 8.34E-3 | 1.96E-2 | 1.01E-2 |
| Advection (2) | 1.00E-2 | 1.76E-2 | 7.66E-3 |

**Computed bound for the models**

| | NO-Sin/Sin | NO-Poly/Poly | NO-Sin/Poly |
| --- | --- | --- | --- |
| Advection (1) | 100% | 259% | 173% |
| Advection (2) | 100% | 195% | 82% |

Here we normalize the bound of the first model, NO-Sin/Sin, to 100% for clear comparison. As you can see, the theoretical bound and the empirical results are consistent.

---

Reply to Comment 1.1.2: Title: Looking forward to more discussion
Comment: Dear Reviewer 59nv,

We want to express our thanks once more for your comprehensive and perceptive input. We have addressed your additional considerations. **As we approach the end of this phase of response, we are looking forward to your further response.** Specifically, **we summarize our response for you; for the full version, kindly refer to our previous response.**

>Q9: Consider an extreme case: if we require all parameters to be very close to 0, then when $N_{train}$ is large enough, your error bound will be almost 0. However, in such a setting, I think the training loss will be very large. Even though we have a very small upper bound, it does not make any sense.

**A9**: This scenario reflects a trade-off between model complexity and a lower training error. Shorter training can lead to higher training errors but simpler models with smaller parameter norms, while longer training reduces the training errors but can result in complex models and overfitting. Validation data helps strike a balance. Our bound can reflect this trade-off and is thus informative.

>Q10: But it is unclear whether a small parameter norm will give a small training loss. The two references provided by the authors do not mention the scale of parameters. But in many existing works that discuss parameter scale, such as arXiv 1610.01145, the parameters are very large.

**A10**: Theoretical insights aid in comparing generalization capabilities under identical conditions, guiding model selection. Models in Experiment 6.4 illustrate the relationship between parameter norms and generalization. Additional numerical results also show that our theoretical bounds align with empirical results. Consider a practical scenario involving the application of a neural operator.
In this context, the nature of the testing dataset remains unknown, and prior to its deployment into production, our derived bounds can be employed to facilitate the process of model selection. **Here is an important reference showing that theoretical bounds based on the parameter matrix norm basically align with the empirical performance in most cases:**

When Do Extended Physics-Informed Neural Networks (XPINNs) Improve Generalization? SIAM Journal on Scientific Computing (SISC). arXiv 2109.09444

The authors of that paper validate the bounds of physics-informed neural networks (PINNs) for solving partial differential equations (PDEs), which is closely related to neural operators solving PDEs. Another justification for the parameter matrix norm-based bounds in our paper is that the bounds can help inspire new regularization methods to prevent overfitting in deep neural operators. L2 and L1 regularizations are proven to be successful in deep learning, which is closely related to the theory in the related work (see arXiv 1712.06541). One related work designed a novel regularization based on the parameter matrix norm bounds they derived (see arXiv 2205.11359). In sum, our bound provides a means to compare different models to anticipate their test performance before real-world deployment, inspires novel regularization for neural operators, and is consistent with practice in most cases.

Warm regards, The authors

---

Reply to Comment 1.1.3: Title: Last message before rebuttal ends
Comment: Dear Reviewer 59nv,

We hope you can refer to our rebuttal at the decision stage. We are grateful for your helpful discussion.

Warm regards, The authors
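The γ-cover construction described in A7/A8 is explicit enough to compute directly. A minimal sketch (ours; the input dimension below is a hypothetical value for a discretized function, and we report the log-covering number, since that is the form in which $K$ typically enters a bound):

```python
import math

def log_grid_cover_size(gamma: float, d: int) -> float:
    """Log-cardinality of the grid cover of [0,1]^d described in A7:
    ceil(1/gamma) grid points per dimension, at spacing gamma, giving
    K = ceil(1/gamma)**d points and log K = d * log(ceil(1/gamma))."""
    return d * math.log(math.ceil(1.0 / gamma))

# gamma = 0.1 as in the experiments; d = 64 is a hypothetical input
# dimension for a discretized input function.
print(log_grid_cover_size(0.1, d=64))   # 64 * ln(10) ~ 147.4
```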
Summary: A theoretical analysis of neural operators (NOs) is presented, which provides further insight into the construction and performance of NOs. The theoretical insights are validated by numerical experiments.

Strengths: A thorough theoretical analysis of NOs is presented, with significant improvements in the tightness of generalization bounds compared to prior work. The implications of the theory and insights that may be drawn are discussed. While some of these may seem obvious, it is important to base such insight on robust underlying theory, as presented. The insights gained from the theory are validated by numerical experiments.

Weaknesses: The summary of the overall model (Equation 1) is not the most clear; it suggests the non-linear activation is applied first, rather than following the kernel and linear transforms.
Typo: "project" -> "projection"
Typo: sometimes $\mathcal{L}$ is used to represent the loss, sometimes $L$.
Typo: "Conbining" -> "Combining"

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: While Figure 1 shows the generalization bounds are much tighter than in alternative work, how can one have confidence that the bounds are valid?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No special societal concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the valuable comments. Your questions are addressed as follows.

>Q1: The summary of the overall model is not the most clear (Equation 1), which suggests the non-linear activation is applied first, rather than following the kernel and linear transforms.

**A1**: In equation (1), our intention is to convey that for the 0-th layer, which corresponds to the input layer, activation is not required for the input to undergo the subsequent kernel transformation and linear transformation. However, for layers with $l \geq 1$, we first apply activation to the input from the preceding step, followed by the kernel transformation and linear transformation. Hence, we emphasize that activation is needed for $l \geq 1$, and no activation is performed for $l=0$. This distinction arises fundamentally from the fact that activation is unnecessary for the 0-th layer, while subsequent layers necessitate activation. To enhance clarity regarding the overall structure of the model, we may consider incorporating a diagram in the revision.

>Q2: On the typos.

**A2**: Thank you for pointing out the typos; we will carefully check all the content in the revision to ensure high-quality writing.

>Q3: While Figure 1 shows the generalization bounds are much tighter than alternative work, how can one have confidence the bounds are valid?

**A3**: Our generalization bound can be explicitly computed based on the model parameters, the quantity of training data, and the characteristics of the training data. Therefore, the results presented here are a manifestation of this explicit computation. Importantly, our bound is remarkably tight owing to the substantial enhancement we have achieved in reducing the dependency of the bound on model parameters. In particular, in terms of the parameter matrix norm contributed by each layer, please see lines 224-231 in the main paper, which demonstrate that our bound is much tighter than in previous work.

---

Rebuttal Comment 1.1: Title: Response to Authors
Comment: Many thanks to the Authors for their response. I appreciate that the generalization bound can be directly computed and compared to other works. I was more interested in whether there has been any validation of the bound, e.g. mathematical consistency checks, derivation by an alternative approach, or numerical validation. I am not suggesting there is any error but wanted to understand to what extent the results are validated.

---

Reply to Comment 1.1.1: Title: Response to the reviewer
Comment: We would like to thank the reviewer for the helpful discussion. Your further question is addressed below. Let us know if you need more explanation.

>Q4: I was more interested in whether there has been any validation of the bound, e.g. mathematical consistency checks, derivation by an alternative approach, or numerical validation.

**A4**: The bound in Theorem 3.1 is mathematically rigorous. Our bound constrains the test error on the left side of the inequality, and it encompasses three components: the first is the training error, the second is the model complexity, and the third is a probabilistic term. A model with smaller training errors and lower complexity is more likely to generalize well. A low model complexity results in a smaller numerical value for the complexity term on the right side, thus enhancing its ability to generalize to unseen test data. The third term is a probabilistic factor that includes the probability of the bound holding true, denoted as $1-\delta$.
Typically, we select $\delta=0.1$ to ensure a 90\% probability of the bound holding. The stochastic nature of the bound arises from randomly selecting training points from an unknown data distribution.

The bounds presented in Theorem 3.2 and Theorem 3.3 are also rigorously established in mathematical terms. They pertain to the discretization error and super-resolution error of the neural operator. Consequently, once the model and data are provided, we can employ theoretical analysis to indicate the specific factors upon which these errors depend. This can serve as inspiration for devising models that exhibit smaller errors. Specifically, both of these bounds primarily rely on the accuracy of the numerical integration scheme employed within the neural operator, the density of grid points, and the complexity of the functions represented by the model, which in turn depends on the model parameter matrices. If a model represents functions that are more intricate or steep, the accuracy of numerical integration might decrease, leading to an increase in discretization error.
Summary: Image super-resolution (SR) is a frequently used task nowadays, since SR images can improve the accuracy of downstream tasks like object detection. Many proposals rely on heavy deep learning models or lightweight models based on efficiently designed architectures. This work studies neural operators by examining the orthogonal basis. The operations proposed in the manuscript, according to their theorems and demonstrations with the appropriate orthogonal bases and grid points, reduce the time to convergence and make the network adapt faster to irregular domains.

Strengths:
* The proposal section in the manuscript is easy to read and follow.
* Theorems 3.2 and 3.3 are properly defined. Theorem 3.3 will assist in planning new proposals for improving image SR accuracy.

Weaknesses: Despite a short evaluation showing better performance of the proposal, I feel it is not sufficient to validate the generalization ability in the super-resolution task. To this end, it is necessary to validate with the standard evaluation metrics used in the SR domain. Personally, I feel that more explanation and motivation are needed for equation 3.
Minor error: Machine learning articles are mostly organized as introduction, related works, proposal, materials, and experiments. It would be convenient to use this structure in the manuscript.

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: No questions; please see the weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No limitations are addressed in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Your questions are addressed as follows.

>Q1: Despite a short evaluation performed that shows a better performance of the proposal, I feel it is not sufficient to validate the generalization ability in the super-resolution task. To this end it is necessary to validate with the standard evaluation metrics used in the SR domain. Personally, I feel that more explanation and motivation are needed for equation 3.

**A1**: In assessing the super-resolution error, we indeed adhere to the standard procedure, specifically encapsulated by the definition on the left side of equation (5) in Theorem 3.3. We would like to stress that this paper focuses on the theory of neural operators (NOs), which learn mappings between functions, not image classifiers. We would also like to emphasize that we propose a novel and more accurate definition of super-resolution in NOs based on orthogonal bases. Our research contributes to the NO literature, which has not delved into NOs' super-resolution before. So, we compare with the standard definition of super-resolution in NOs; see Definition 4 and Theorem 8 in the related work https://arxiv.org/pdf/2108.08481.pdf (JMLR paper).

On Definition 4 and Theorem 8 in the JMLR paper: These definitions all pertain to the characterization of the super-resolution error. Notably, our definition of super-resolution error diverges significantly from that presented in that paper. Their focus is primarily geared towards the extrapolation of model super-resolution under extreme conditions, specifically when the grid points of integration and the vector size of model inputs both tend towards infinity. This arises from the requirement in super-resolution for broader pointwise evaluations across the grid, leading to an escalation in the dimensions of the input vector. Consequently, their approach hinges on a limit-based perspective.

Actually, our framework maintains a greater degree of flexibility than that of the JMLR paper. We avoid the utilization of such limiting considerations to define the model. Our approach affords us the capability to investigate various influencing factors, such as the impact of the model's numerical integration precision and the norm of the model parameters on the super-resolution error. In their limit-based framework, regardless of the choice of integration scheme, input functions, or model parameters, the super-resolution error converges to zero as the grid size tends to infinity. In other words, they establish the convergence to this limit, but unlike our approach, they do not provide explicit error analyses or quantify the rate of convergence to zero under different settings within their framework.

This super-resolution error quantifies the disparity between the test error on a more refined grid, $L_{\text{test-sr}}(\theta_S)$, and the test error on a sparser grid, $L_{\text{test-reg}}(\theta_S)$. The concept of "super-resolution" materializes as a consequence of employing dissimilar grids during testing. Our approach aligns with this established definition, rooted in the traditional practices of the field.

Regarding equation (3), it encapsulates our proposed notion of generalization error. This metric serves as an indicator of the model's ability to generalize its learned behaviors beyond the training data, capturing the essence of our theoretical framework.
Revisiting equation (3) in the main paper: $L_{\text{test-reg}}(\theta_S)$ is the regular test error without considering super-resolution, and $L_{\text{train}}(\theta_S)$ is the training loss. $B_{l,i}$ and $W_l$ are the model parameters in $\theta_S$, where the model structure is defined in equations (1) and (2) of the paper. The bound holds for all $\gamma > 0$, and we choose $\gamma = 0.1$ in our numerical experiments. $K$ (see line 166 of the main paper for details) is the $\gamma/2$-covering number of the input space $\mathcal{B}$ under the l2 norm, which can be computed analytically once the training data set and the value of $\gamma$ are given. $M$ is the upper bound of the loss function, which can be set to the maximal training loss value over multiple training instances; empirically, after a proper choice of learning rate and model initialization, the loss remains bounded throughout training. $N_{train}$ is the number of training samples. $\delta$ controls the probability with which our bound holds; we choose $\delta=0.1$ in the experiments so that our bound holds with probability 0.9 over the random draw of training data. >Q2: Minor error: Machine learning articles are mostly organized into introduction, related work, proposal, materials, and experiments. It would be convenient to use this structure in the manuscript. **A2**: Thank you for your suggestion. However, our manuscript intentionally diverges from the conventional structure of papers in the field. We have chosen to present our content in a distinct manner, aiming to underscore the significance of our theory and its practical applications. As such, we have dedicated a single section to comprehensively outline the practicality of our theory, covering aspects such as generalization, super-resolution error, discretization error, and irregular domains. Each of these aspects is connected to specific experiments, demonstrating the thorough validation of each of our claims. This approach serves to establish the comprehensiveness and robustness of our paper, even though it deviates from the standard organization you mentioned. --- Rebuttal Comment 1.1: Comment: Dear Authors: thank you for addressing my review. After reading the other rebuttals, which provide further clarification, I can say it is a good contribution in the Neural Operator domain. I have changed my rating accordingly.
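To make the roles of these quantities concrete, here is a small, purely illustrative sketch of how such a bound could be evaluated numerically. The exact form of equation (3) is given in the paper; below we only assume a generic robustness-style confidence term of the form $M\sqrt{(2K\log 2 + 2\log(1/\delta))/N_{train}}$, consistent with the $2K\log 2$ numerator and the roles of $M$, $\delta$, and $N_{train}$ described in the rebuttal. All names and values are hypothetical.

```python
import numpy as np

def covering_number(train_inputs, gamma):
    """Greedy gamma/2-covering of the inputs under the l2 norm: a maximal
    gamma/2-separated set covers at radius gamma/2, so its size upper-bounds
    the minimal covering number K."""
    centers = []
    for x in train_inputs:
        if all(np.linalg.norm(x - c) > gamma / 2 for c in centers):
            centers.append(x)
    return len(centers)

def confidence_term(K, M, N_train, delta):
    # assumed robustness-style term matching the quantities named above
    return M * np.sqrt((2 * K * np.log(2) + 2 * np.log(1 / delta)) / N_train)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))   # hypothetical stand-in for input coefficients
gamma, delta = 0.1, 0.1         # the values quoted in the rebuttal
M = 5.0                         # assumed upper bound on the loss
K = covering_number(X, gamma)
print(K, confidence_term(K, M, len(X), delta))
```

The greedy construction returns a valid $\gamma/2$-covering, so it gives an upper bound on the minimal covering number; a smaller $K$ directly shrinks the confidence term, matching the intuition above.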
Summary: In this paper, the authors propose to analyze neural operators from the orthogonal bases in the kernel operators, which helps to guide designing kernel operators and choosing grid points, analyzing generalization and super-resolution capabilities, and adapting neural operators to irregular domains. Strengths: This is an interesting paper providing not specific models but new analysis perspectives and design principles on Neural Operators. Weaknesses: - I feel there are still gaps between the overall claims, the theoretical results in Section 3, and the implication and experiments in Sections 4&5. For example, the "NOs on Unbounded Domain" section is not what I would expect for "adapting neural operators to irregular domains"; the "Combining Multiple Bases" section seems more like analyzing kernel operators, instead of providing insight and principles in "designing kernel operators". - The writing is not always good. For example, in the abstract, similar contents reappear three times. Technical Quality: 3 good Clarity: 3 good Questions for Authors: None. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This work is a bit beyond my capability. I think it is an interesting and important work, but I don't feel like it has reached its perfect state. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Your questions are addressed as follows. >Q1: I feel there are still gaps between the overall claims, the theoretical results in Section 3, and the implication and experiments in Sections 4&5. For example, the "NOs on Unbounded Domain" section is not what I would expect for "adapting neural operators to irregular domains"; the "Combining Multiple Bases" section seems more like analyzing kernel operators, instead of providing insight and principles in "designing kernel operators". **A1**: On irregular domains: We chose to test our approach on an unbounded domain due to its notably challenging nature, which has never been tackled before. For existing methods, dealing with an entire infinite domain poses considerable difficulties in selecting function-value sampling points for the model input. In contrast, our methodology, facilitated by the orthogonal basis setup, provides an easily manageable solution for point selection using the quadrature points on unbounded domains, which are in turn strongly correlated with the problem's nature and the input functions, as validated by our DFT experiment in Section 6.3. Regarding the conventional irregular-domain scenario, due to paper-length constraints, we devoted limited discussion to it. Nevertheless, we underscore the universality of our framework in addressing such cases, since an orthogonal basis can be constructed on arbitrary domains (see lines 246-256 on page 6). While the full depth of our treatment may be concise in the current presentation, we assert that our approach is inherently adaptable and applicable to arbitrary irregular domains, representing a significant facet of our secondary contribution. In the context of defining an orthogonal basis on any irregular domain for constructing the neural operator, the methodology is as follows. Firstly, we establish a set of linearly independent functions on the arbitrary domain, such as polynomials of varying degree and trigonometric polynomials of different frequencies; these are commonly employed bases widely used in interpolation and similar applications. However, these functions are not orthogonal. The subsequent step is the orthogonalization process. For this purpose, a numerical integration scheme is introduced on the domain. A common numerical integration approach, for instance, is based on the Monte Carlo method, wherein a sufficient number of points are randomly sampled within the domain, and the function values at these points are employed to estimate the integral. Alternatively, a more accurate quadrature method can be employed, involving partitioning the irregular domain into a series of triangular grids, akin to finite element methods. A quadrature rule is then defined within each triangular grid, and the summation of all quadrature computations over these grids constitutes the overall quadrature integration scheme. Leveraging this numerical integration, we can employ the Gram-Schmidt orthogonalization process to orthogonalize any set of linearly independent functions. On combining multiple bases: In fact, the process of combining multiple bases not only involves analyzing kernel operators but also encompasses the design of novel kernel operators. Specifically, as demonstrated in Experiment 6.4, the features of the target function differ between the x and t axes; the former is periodic while the latter is non-periodic.
To accommodate these distinct function characteristics, we utilize polynomial bases on the non-periodic axis and trigonometric bases on the periodic axis. This strategic choice enables the creation of a more effective neural operator. Another illustration of our theoretical framework's design philosophy can be found in the DFT experiment within Section 6.3. Here, we tailor our approach to the unique characteristics of the DFT target function, employing a multi-center Gaussian-type basis that closely aligns with the function's nature, yielding superior results. In essence, our proposed method serves the dual purpose of both analyzing and designing kernel operators, as exemplified by these instances. By strategically combining various bases, we demonstrate our approach's capacity for sophisticated analysis and intentional design, thereby substantiating its broader utility and significance. >Q2: The writing is not always good. For example, in the abstract, similar contents reappear three times. **A2**: Thank you for your suggestion. We will certainly address this concern and work on refining the writing in our revision. Your feedback is greatly appreciated as we strive to enhance the quality of the manuscript. --- Rebuttal Comment 1.1: Comment: Thank you very much for your detailed responses. The comments and discussions have been very helpful. I have decided to maintain my original positive rating.
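As a concrete companion to the orthogonalization procedure described in A1 above (pick linearly independent functions, define a numerical integration scheme on the irregular domain, then apply Gram-Schmidt), here is a minimal sketch. The domain (a quarter annulus), the Monte Carlo quadrature, and the monomial starting functions are all illustrative choices, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def in_domain(p):                      # irregular domain: a quarter annulus
    r = np.hypot(p[0], p[1])
    return 0.5 < r < 1.0 and p[0] > 0 and p[1] > 0

# Rejection-sample Monte Carlo integration points from the bounding box [0,1]^2
pts = []
while len(pts) < 2000:
    p = rng.uniform(0, 1, size=2)
    if in_domain(p):
        pts.append(p)
pts = np.array(pts)

def inner(f_vals, g_vals):             # MC inner product; the constant volume
    return np.mean(f_vals * g_vals)    # factor cancels after normalization

# Linearly independent starting functions: monomials x^i * y^j
monomials = [lambda p, i=i, j=j: p[:, 0]**i * p[:, 1]**j
             for i in range(3) for j in range(3)]
values = [f(pts) for f in monomials]

basis = []                             # Gram-Schmidt on the value vectors
for v in values:
    for b in basis:
        v = v - inner(v, b) * b
    basis.append(v / np.sqrt(inner(v, v)))

gram = np.array([[inner(a, b) for b in basis] for a in basis])
print(np.max(np.abs(gram - np.eye(len(basis)))))  # ~0: orthonormal under MC quadrature
```

Because the operator only ever evaluates functions on the quadrature points, representing each function by its vector of point values is consistent with the inner product used here.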
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their valuable comments. Here, we present general responses for understanding the manuscript better. Your specific questions are answered respectively. >P1 (reviewer 59nv): On the contribution. We present two primary theoretical contributions. First, we innovatively transform the generalization and super-resolution problems in infinite function spaces into analyzable problems within the more tractable infinite-dimensional $l^2$ space. In this context, elements are represented as infinite-dimensional numerical vectors rather than abstract functions, facilitating a more comprehensive analysis. This transformation is inherently innovative. Secondly, we investigate the generalization and super-resolution properties of neural operators in the $l^2$ space. The extension of finite-dimensional theories to infinite-dimensional domains is far from trivial. As a culmination of our efforts, we derive the generalization error, discretization error, and super-resolution error of the neural operator as an infinite-dimensional mapping. Notably, our work stands as the pioneering analysis conducted within an infinite-dimensional space. >P2 (reviewer 2NvK): Intuitions of the theorems' proofs. Theorem 3.1: The proof of Theorem 3.1 primarily delves into the examination of the neural operator model's robustness and Lipschitz continuity. In Proposition 3.1, we establish that the activation functions in the coefficient space maintain the Lipschitz continuity they have in the original space. Subsequently, the Lipschitz continuity of the linear mappings within the neural operator is determined by the matrix norms of these linear mappings. Combining these norms through multiplication yields the Lipschitz continuity of the entire model. By inserting these findings into the relationship (provided by Ref 29 in the main paper) connecting robustness, Lipschitz continuity, and generalization, the proof is concluded. Theorem 3.2: The proof of Theorem 3.2 establishes the discretization error of the neural operator. That error essentially originates from the numerical integration component within the neural operator, since in an ideal scenario a continuous neural operator would utilize continuous integration rather than numerical methods. Thus, fundamentally, this proof aims to bound the disparity between numerical and continuous integration. Consequently, the ultimate outcome indicates that the discretization error is determined by the accuracy of the numerical integration and the nature of the integrand. This integrand effectively represents the outputs of the model's intermediate states, and its complexity is expressed through the matrix norm of the model parameters. Theorem 3.3: Firstly, we bound the prediction error of continuous NOs across all points in the domain $\Omega$ (i.e., the super-resolution error of continuous NOs). During the training phase, NOs are trained on a finite training grid, leading to this error. Subsequently, we bound the discrepancy between continuous and discrete NOs, corresponding to the discretization error stated in Theorem 3.2. These two terms are reflected in Theorem 3.3. >P3 (reviewer 2NvK): Intuition of the 3 theorems. Theorem 3.1: The generalization bounds, similar to those for vanilla neural nets, rely on the products of multilayer parameter norms. This aligns with the effectiveness of the commonly used L2 regularization.
Generally, employing regularization tends to lead to smaller norms of the model parameters, which in turn correspond to reduced generalization error. Additionally, our bound suggests that increasing the number of modes does not necessarily increase the complexity of the model. On the right-hand side of Theorem 3.1, the second term contains $N_{train}$, indicating that as the training data increases, the generalization error diminishes. This aligns with our intuition. Furthermore, the factor $2K\log 2$ in the numerator signifies the vastness of the entire input data space, and our approach employs the concept of the covering number. Here, $K$ represents the so-called covering number, and a larger $K$ indicates that the input space is considerably expansive. Consequently, a greater amount of training data is required to effectively cover it and thereby yield enhanced training outcomes. Conversely, if $K$ is smaller, the input space is relatively more compact, and a reduced amount of data can still result in a well-performing model. In essence, $K$ intuitively reflects the efficiency of training data for a machine learning problem. Theorem 3.2: The discretization error of discrete NOs relies on the norm of the model parameters ($B_{k,i}, W_k$) and the accuracy of the integration method employed for integrating intermediate output functions (i.e., $e_{grid}, e_{func}$). The dependence of the discretization error on the accuracy of the integration scheme is inherently intuitive: as the integration scheme becomes more precise, fewer grid points are required to achieve the same level of accuracy. On the other hand, the norm of the model's parameters reflects the complexity of the integrated functions that the model characterizes. A larger norm signifies heightened complexity in the functions represented by the model. This complexity implies that these functions are less easily approximated by values at finitely many points, leading to a potentially increased discretization error. Theorem 3.3: The first term pertains to the integral error in Theorem 3.2, while the second term relates to the interpolation and generalization ability of NOs across the entire domain. In other words, the super-resolution error is essentially determined by two factors within the neural operator framework: the numerical error introduced by the employed numerical integration, and the capability of the model's predictions to generalize beyond the training grid points.
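The intuition behind Theorems 3.2 and 3.3, namely that the error is driven by the accuracy of the numerical integration scheme, can be checked in a few lines. The following illustrative snippet (not from the paper) compares a uniform-grid trapezoidal rule, whose error decays like $O(1/N^2)$, against Gauss-Legendre quadrature, which converges much faster for a smooth integrand:

```python
import numpy as np

f = lambda x: np.exp(np.sin(3 * x))          # smooth test integrand on [-1, 1]

# high-accuracy reference value via a dense Gauss-Legendre rule
xg, wg = np.polynomial.legendre.leggauss(200)
exact = np.dot(wg, f(xg))

for N in (4, 8, 16, 32):
    # uniform grid + trapezoidal rule: error decays like O(1/N^2)
    fu = f(np.linspace(-1, 1, N))
    h = 2.0 / (N - 1)
    trap = h * (fu.sum() - 0.5 * (fu[0] + fu[-1]))
    # Gauss-Legendre with the same number of points: near-exponential decay
    xq, wq = np.polynomial.legendre.leggauss(N)
    gauss = np.dot(wq, f(xq))
    print(f"N={N:3d}  trapezoid err={abs(trap - exact):.2e}  "
          f"gauss err={abs(gauss - exact):.2e}")
```

At the same grid size, the quadrature error is orders of magnitude smaller, which is exactly the grid-type (rather than grid-size) effect the authors emphasize elsewhere in the rebuttals.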
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors consider a new perspective on neural operators, by examining the role of orthogonal bases. The kernel operators are constructed such that their eigenfunctions are predefined orthogonal bases, with the eigenvalues as trainable parameters. That is, a neural operator can be seen as a mapping from input coefficients to output coefficients of the orthogonal basis functions. The authors show several theoretical results using this new perspective, backed by empirical results: - Improved generalization bounds. - Super-resolution error bounds. - Irregular domains - they show that neural operators can be extended to irregular domains using random Fourier features and polynomials on irregular domains. - Choosing other orthogonal bases. The authors show that Fourier bases can be combined with wavelet or polynomial bases. Strengths: This paper can be impactful not only because of the four concrete results it shows (generalization bounds, super-resolution bounds, irregular domains, and choice of orthogonal bases), but also because of its novel perspective on neural operators. In particular, the novel eigenvalue / orthogonal-basis-based perspective of neural operators can be a useful view for studying neural operators in the future, both theoretically and empirically. While several other works have studied neural operators for irregular domains / topologies, the other results by the authors are much more novel. Not much prior research has studied generalization bounds, super-resolution bounds, or combining bases before this work. Weaknesses: - Some of the results are fairly obvious or have only limited empirical value. For example, regarding the super-resolution theorem/experiments: although it is novel to study super-resolution, the main punchline is that super-resolution depends on the accuracy of the integration method and the density of the low-resolution grid (i.e., the results in Figure 2), which is not so surprising. - Section 4 shows implications of the theory, motivating the experiments in the next section. However, some of these experiments have only tangential ties to the theory. For example, "Guiding the choice of Orthogonal Basis" is justified because Theorem 3.1 impacts their expressiveness. I would argue that we would decide to research the choice of orthogonal basis even if we didn't see Theorem 3.1 first. - It would be nice if there were a bit more intuition for each of the proofs in the main text. - Limitations are not discussed. - The authors could have released the code (anonymously during submission). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - It would be good to discuss the relation of this work to other works on the theory of neural operators (beyond the discussion in Section 5). For example, how does your work relate to the JMLR paper https://arxiv.org/pdf/2108.08481.pdf, e.g. your super-resolution results compared to discretization invariance in Definition 4 and Theorem 8, and your perspective of neural operators compared to the set of neural operators used in their theory, defined in Section 9.1? - Wavelet bases are not necessarily discretization invariant. The definition used in the above JMLR paper says that we fix a finite set of weights for the architecture, and then we can take any discretization of the domain as input. The FNO accomplishes this by truncating to a fixed number of frequency modes, which stays the same at higher resolutions. But it is not clear how a wavelet basis can satisfy this.
- What are the limitations of your work (see below)? - Can you add a bit more intuition to all the theoretical results? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors did not discuss the limitations of their work, and they answered “no” without explanation in the OpenReview paper checklist to that question. The NeurIPS call for papers states that authors can answer no to a checklist question, provided they give a good explanation. But I cannot think of any explanation why an author would be justified in not describing the limitations of their work; I think it is strictly better to do so. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Your questions are addressed below. >Q1: Some of the results are fairly obvious. **A1**: The implications of our theory for practical applications, particularly in **grid-type selection rather than grid size**, carry profound significance. Conventionally, generated data often adhere to uniform grids. However, our theory states that the super-resolution error is intricately linked to the chosen integration scheme. The commonly used uniform-grid integration yields an error proportional to $1/N_{grid}^2$, where $N_{grid}$ is the grid size. Alternatively, employing high-precision quadrature diminishes this error at an exponential rate with increasing grid size. This revelation has substantial implications, as it implies that training an equally accurate neural operator requires fewer grid points on a quadrature grid, thus helping to reduce computational costs during training. >Q2: Experiments have only tangential ties to the theory. **A2**: The theory we propose serves three main purposes. Firstly, it provides theoretical underpinnings that offer assurances for observed phenomena. Secondly, it offers practical recommendations to improve neural operator (NO) training. These recommendations include guidance on grid selection, orthogonal basis selection, and modeling NOs on irregular domains. The third is the innovative nature of the theoretical tools themselves. Despite your concern, the theory and experiments are closely related. For instance, the investigation into guiding the choice of an orthogonal basis is justified by Theorem 3.1 and its implications for their expressiveness. However, what makes our approach innovative is that we directly handle functions in infinite-dimensional spaces, while most previous studies discretize the infinite-dimensional input onto finite grids before applying traditional finite-dimensional theories. Our theory is versatile and applies even to discretized models. This innovation opens doors for future research to build upon our framework, advancing the field with more refined and comprehensive theoretical insights. >Q3: Intuition for each of the proofs. **A3**: See the general response. >Q4: Limitations and code. **A4**: We will discuss the limitations in the revision and publish the code. >Q5: Comparison with the JMLR paper. **A5**: On Definition 4 and Theorem 8 in the JMLR paper: These definitions all pertain to the characterization of the super-resolution error. Notably, our definition of super-resolution error diverges significantly from that presented in their paper. Their focus is primarily geared towards the extrapolation of model super-resolution under extreme conditions, specifically when the grid points of integration and the vector size of model inputs both tend towards infinity. This arises from the requirement in super-resolution for broader pointwise evaluations across the grid, leading to an escalation in the dimensions of the input vector. Consequently, their approach hinges on a limit-based perspective. In contrast, our framework maintains a greater degree of flexibility. We avoid the utilization of such limiting considerations to define the model. Our approach affords us the capability to investigate various influencing factors, such as the impact of the model's numerical integration precision and the norm of the model parameters on the super-resolution error.
In their limit-based framework, regardless of the choice of integration scheme, input functions, or model parameters, the super-resolution error converges to zero as the grid size tends to infinity. In other words, they establish the convergence to this limit, but unlike our approach, they do not provide explicit error analyses or quantify the rate of convergence to zero under different settings within their framework. On Section 9.1 in the JMLR paper: A significant departure between our formulation and theirs lies in how we delve into understanding the neural operator within the coefficient space. Notably, in their paper, the neural operator continues to be treated as a mapping between infinite-dimensional functions, thereby rendering many theoretical analyses inapplicable to such infinite-dimensional functions. In contrast, we ingeniously employ an orthogonal basis to transform the input functions into an equally infinite-dimensional vector space, albeit with real-valued elements. This strategic move enables the adoption of numerous tools from learning theory since the inputs are now an infinite-dimensional real number vector so that we can nontrivially extend traditional results on finite-dimensional real number vector learning. Consequently, this approach facilitates the derivation of more intricate and insightful theories, which underscores the substantial contribution we make in comparison to their work. >Q6: Wavelet bases are not necessarily discretization invariant. **A6**: The discussion on wavelets is refs 8 and 28. In essence, the neural operator corresponding to wavelet or any other basis functions entails computing the projection coefficients of the input function onto this basis through numerical integration. These coefficients are then mapped by the parameters of the neural network. Since the coefficients are derived from numerical integration, the concept of super-resolution involves employing different numerical formats for integration. In this context, the coarse grid entails employing coarse discretization points for numerical integration, while super-resolution necessitates more precise and finer discretization points. So, wavelet basis has no difference with other orthogonal bases, the same theoretical framework can still be applied. Therefore, with a well-defined basis and a numerical integration scheme, the concept of super-resolution can indeed be applied. >Q7: Intuition to all the theoretical results? **A7**: See the general response. --- Rebuttal Comment 1.1: Comment: Thank you very much for preparing the rebuttal for my review and the other reviews. I especially appreciate the longer discussion on related work, and the discussion on the intuition and contribution of your theoretical results. I encourage you to add these to the paper. Overall I have a better opinion of your work. However, I have a few remaining questions. - Limitations: a couple reviewers mentioned that there was not a discussion of the limitations, even though the NeurIPS checklist specifically brings up that authors should discuss limitations. You said that you would include limitations in the revised manuscript. Please be more specific: what will you say about the limitations of this work? - Relation to prior work. Thank you for going through the relation to prior work. Some of your responses are a bit high-level. Can you be a bit more specific? For example, while the JMLR paper only establishes convergence of resolution invariance, can you comment on the convergence rate using your theory? 
And in the paragraph, "On Section 9.1 in the JMLR paper", can you be more specific, for example, can the new perspective in your theory give better results compared to those in Section 9? --- Reply to Comment 1.1.1: Title: Response to the reviewer Comment: We would like to thank the reviewer for the helpful discussion. Your further questions are addressed below. Let us know if you need more explanation. >Q10: For example, while the JMLR paper only establishes convergence of resolution invariance, can you comment on the convergence rate using your theory? **A10**: Our Theorem 3.3, in contrast to the JMLR paper, provides a more precise convergence rate along with its associated constants. Specifically, the super-resolution error depends on two components: the discretization error and the inherent super-resolution error of the continuous model itself. These correspond to the first and second terms on the right-hand side of Equation (5). For the former term, the convergence rate is determined by $e_{grid}(N_{grid})$. This rate is contingent upon the chosen integration scheme, and we furnish relevant discussions in lines 195-201 of the original paper. As an example, in the case of the integration used in FNO, the convergence rate is $O(1/N_{grid}^2)$. Note that as the number of grid points $N_{grid}$ approaches infinity, the super-resolution error diminishes towards zero. This is because, with an infinitely increasing grid-point count, the model achieves predictions across all points and becomes continuous, so there is no discretization error, resulting in a zero super-resolution error. The second term pertains to the inherent super-resolution error of the continuous model, and its convergence rate is $O(1/N_{grid})$. Furthermore, we provide an accompanying constant for each of these convergence rates. These constants, concealed within the $O(\cdot)$ notation, are influenced by the matrix norm of the model parameters and the Lipschitz continuity of the chosen basis and input functions. To encapsulate, our work presents a finer-grained convergence rate in comparison to the JMLR paper, along with the disclosure of the constants within these convergence rates. >Q11: And in the paragraph, "On Section 9.1 in the JMLR paper", can you be more specific, for example, can the new perspective in your theory give better results compared to those in Section 9? **A11**: Recall that we discuss Section 9.1 of the JMLR paper because the reviewer asked "how does your perspective of neural operators compare to the set of neural operators used in their theory, defined in Section 9.1?" Here we provide more details. Equation (1) in our paper is essentially similar to the model $NO_n$ defined in the second equation on page 54 of the JMLR paper. Both adhere to the conventional paradigm of defining neural operators. Consequently, in our paper, Equation (1) falls under the category of preliminary groundwork. Our pivotal contribution resides in introducing the formulation of neural operators in the infinite-dimensional $l^2$ coefficient space, as outlined by Equation (2) in our work. Equation (2) in our paper constitutes the pivotal point of innovation. We employ an orthogonal basis to transform the input functions into an equally infinite-dimensional vector space, albeit with real-valued elements.
This strategic move enables the adoption of numerous tools from learning theory, since the inputs are now infinite-dimensional real-valued vectors, allowing us to nontrivially extend traditional results on learning with finite-dimensional real-valued vectors. Conversely, if we were to adopt the conventional approach of defining models through function mappings, the analysis of such abstract infinite-dimensional functions would become exceedingly challenging within a theoretical framework. >Q12: Detailed limitations? **A12**: Deep learning is often able to learn intricate models that exhibit the capability to generalize to unseen data, and this paradox has been a subject of significant interest. Albert Einstein is credited with a famous quote: "In theory, theory and practice are the same. In practice, they are not." Although we are confident that our theoretical framework offers superior guidance for comprehending the generalization, super-resolution, and discretization errors intrinsic to neural operators, there exist gaps between our theory and practice. For instance, in the generalization analysis of Theorem 3.1, the matrix norm of the model parameters could potentially be quite large in practical scenarios, which might lead to a comparatively loose bound. Therefore, the underlying assumption of our theorem is that, when the same training procedure is applied to two models, the obtained bound is predictive of the models' relative generalization performance. However, if entirely different training approaches are employed for the two models, there might exist a certain gap between theory and practice.
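To illustrate the coefficient-space formulation the authors describe (analysis onto an orthogonal basis, a map on the coefficients, synthesis back to function values), here is a minimal hypothetical sketch of one such layer with an orthonormal Legendre basis and a Gauss-Legendre grid. The random weight matrix is a stand-in for trained parameters; none of this is the paper's code.

```python
import numpy as np

N_grid, N_modes = 64, 8
x, w = np.polynomial.legendre.leggauss(N_grid)       # quadrature grid on [-1, 1]

# Orthonormal Legendre basis: P_k(x) * sqrt((2k+1)/2)
basis = np.stack([
    np.polynomial.legendre.legval(x, np.eye(N_modes)[k]) * np.sqrt((2*k + 1) / 2)
    for k in range(N_modes)
])                                                   # shape (N_modes, N_grid)

def project(u_vals):
    """Analysis: coefficients c_k = <u, phi_k> via Gauss-Legendre quadrature."""
    return basis @ (w * u_vals)

def synthesize(coeffs):
    """Synthesis: rebuild function values on the grid from coefficients."""
    return coeffs @ basis

u = np.exp(-x**2) * np.sin(4 * x)                    # example input function
c = project(u)                                       # function -> l2 coefficient vector
W = np.random.default_rng(0).normal(size=(N_modes, N_modes)) / N_modes
v = synthesize(W @ c)                                # linear map in coefficient space,
print(c.round(3), v.shape)                           # then back to function values
```

Once the operator is viewed this way, its input is an (in practice truncated) real-valued coefficient vector, which is what makes the finite-dimensional learning-theory tools extendable, as argued above.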
Summary: The paper attempts to provide theoretical analyses of Neural Operators (NOs), mainly considering the following aspects: 1. The generalization bound of NOs; 2. The discretization error of NOs; 3. The "super-resolution" error of NOs on uniform grids (trained on a low-resolution grid, then evaluated on a high-resolution grid). Several numerical experiments are carried out to validate the theory. The theoretical results also motivate the authors to come up with improved designs for existing NOs, including bases/integration schemes that suit specific PDEs better, and generalization to unbounded domains. Strengths: * A tighter and more general generalization bound compared to prior works; * Advances in the understanding of discretization errors and super-resolution errors in NOs; * The improved design of NOs is validated by extensive numerical experiments. The paper in general analyzes several crucial properties of NOs, which is timely, and its practical implications will, I believe, benefit the scientific ML community, specifically given that the concept of doing super-resolution with NOs is often only vaguely studied in many relevant works. Weaknesses: The overall presentation of the paper is fairly clear and easy to follow. However, some of the discussion in the experiment section is relatively vague, especially in Section 6.3. As revealed by Theorem 3.3, the integration scheme along with the basis selection has a crucial effect on the super-resolution error. The authors only briefly cover what basis and quadrature rule are used for the harmonic oscillator example, but then skim through the 3D DFT experiment, which makes it difficult to interpret the improvement in the results shown in Table 1. These details might be trivial for someone who is an expert in DFT, but they can help other audiences better understand the practical implication of the theorems. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * One advantage of FNO is its superior computational efficiency on a uniform grid thanks to the FFT. What is a rough estimate of the increase in computational cost when using other integration schemes and bases? * Following Theorem 3.3, will an increase in truncated modes harm the super-resolution performance? Possible typos: * line 197, page 5: $e_{grid}(N_{grid}) = 1/N^2_{grid}$ -> $e_{grid}(N_{grid}) = 1/N_{grid}$ * line 353, page 9: $u^2(x, t)$ -> $u(x, t)$ Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The paper did not discuss any limitations. While this is somewhat understandable for a theory paper, the authors could discuss the limitations of their theoretical results in terms of the application scope and assumptions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Your questions are addressed as follows. >Q1: Some of the discussion in the experiment section is relatively vague, especially in section 6.3. As revealed by Theorem 3.3, the integration scheme along with basis selection has a crucial effect on the super-resolution error. The authors only briefly covered what basis and quadrature rule are used for the harmonic oscillator example, but then skim through the 3D DFT experiment, which makes it difficult to interpret the improvement in the results shown in Table 1. **A1**: We provide more details on the numerical integration scheme and the orthogonal basis in the DFT experiments here and will add them to the revision. Quantum harmonic oscillator: The orthogonal basis is the 1D Hermite basis $\phi_i(x) = \dfrac{1}{\pi^{\frac{1}{4}} 2^{\frac{i}{2}}\sqrt{i!}}H_i(x)e^{-\frac{x^2}{2}}, \ x\in\mathbb{R}$. The training grid consists of the points of the Gauss-Hermite quadrature with 32 points, and the super-resolution testing grid uses degree 64. 3D DFT: Kohn-Sham DFT offers natural basis sets. Additionally, grids with varying resolutions, characterized by levels, are provided based on the multi-center Gaussian nature of the functions in DFT. In our experiments, we select level 1 for training and level 2, which has more points, for testing. More concretely, for basis sets, common choices are the STO-3G basis, where several primitive Gaussian orbitals are fitted to a single Slater-type orbital (STO), and the 6-31G basis, containing 6s6p Gaussian functions, 3d polarization functions, and 1d diffuse functions. In the experiment, we used the STO-3G basis set, which can be orthogonalized using Gram-Schmidt orthogonalization. For the STO-3G basis, the choice of quadrature points is related to the number of Gaussians used to construct each basis function. The process of selecting these quadrature points involves finding a set of points and weights that accurately approximate the integrals while minimizing computational cost. Generally, due to the distribution of atoms throughout space, the electron density of a molecule (i.e., the input function for a molecule) resembles a mixture of Gaussians centered around the different atoms within the molecule. As a result, quadrature points are selected densely around each atomic center to accommodate the multi-centered Gaussian nature of the molecular density function. Our sampling approach follows the interface provided by the commonly used quantum chemistry software PySCF (https://pyscf.org/) and D4FT (ref 17 in the main paper). Given the molecular formula and the desired grid density, PySCF returns the appropriate quadrature points. For the practical implications of the theorems as demonstrated by the experiments, we kindly refer the reviewer to lines 340-349 on page 9 of the main paper. >Q2: One advantage of FNO is its superior computation efficiency on a uniform grid thanks to FFT. What is the rough estimate of the increase in the computation cost when using other integration schemes and bases? **A2**: In FNO, the Fast Fourier Transform (FFT) is used to compute integrals, with a time complexity of $O(N_{\text{grid}}\log N_{\text{grid}})$. However, in practice, only the first $N_{\text{modes}}$ integrations between the basis and the input need to be calculated. This reduces the complexity to $O(N_{\text{modes}}N_{\text{grid}})$.
Since $N_{\text{modes}} \ll N_{\text{grid}}$, even computing the numerical integration without the FFT ensures that the integration step does not become a bottleneck in the NO model's time complexity. The computational cost of other integration schemes and bases is still $O(N_{\text{modes}}N_{\text{grid}})$, which has negligible additional cost compared to the FFT and may even be faster, since $N_{\text{modes}} \ll N_{\text{grid}}$. In practice, the follow-up work on FNO by the same authors, Geo-FNO (ref 19 in the main paper), adopts direct integration with $O(N_{\text{modes}}N_{\text{grid}})$ complexity and does not use the FFT. In our computational approach, performing the kernel transformation requires a total of $N_{\text{modes}}$ numerical integrations to obtain $N_{\text{modes}}$ coefficients. The complexity of each numerical integration is proportional to $N_{\text{grid}}$, resulting in an overall complexity of $O(N_{\text{modes}}N_{\text{grid}})$. It is important to note that each numerical integration amounts to computing a weighted average of the integrand's values at $N_{\text{grid}}$ grid points. In contrast, the FFT is obligated to compute all $N_{\text{grid}}$ integrals, which introduces considerable redundancy. Although its time complexity per integral reduces from $N_{\text{grid}}$ to $\log N_{\text{grid}}$, the need to compute all integrals adds significant overhead. >Q3: Following theorem 3.3, will the increase in truncated modes harm the super-resolution performance? **A3**: Yes, introducing more modes could potentially lead to an increase in the super-resolution error. This is because the bound on the super-resolution error includes, on its right-hand side, the Lipschitz constants of the orthogonal basis functions. If we incorporate larger modes, i.e., higher-frequency or higher-order basis functions such as $\phi_k(x) = \sin(kx)$ with a large $k$, these typically exhibit steeper profiles with larger Lipschitz constants. Consequently, this could result in less accurate numerical integration on the training grid, hindering generalization to more accurate testing grids. The lack of precision in the numerical integration during training could consequently lead to significant discrepancies between predictions in the training and testing phases. >Q4: On the typos and limitations. **A4**: Thanks for pointing out the typos and the missing discussion of limitations; we will carefully check all the content and add a discussion in the revision. --- Rebuttal Comment 1.1: Title: Reply to the author Comment: I would like to thank the authors for their detailed responses and the efforts they have made during the rebuttal period. Most of my concerns about the experiment section are addressed. In general, I think the theoretical results presented in the paper will be helpful for practitioners in the neural operator community. The authors also further clarify the difference between their work and some of the existing works (in particular the JMLR paper) and what the new implications are. I remain positive towards the paper and thus keep my score unchanged.
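A2's complexity claim is easy to verify numerically: with $N_{\text{modes}} \ll N_{\text{grid}}$, the truncated spectral coefficients can be obtained by a direct $O(N_{\text{modes}}N_{\text{grid}})$ weighted sum that matches the FFT result on a uniform grid. An illustrative check (not the paper's code):

```python
import numpy as np

N_grid, N_modes = 1024, 12
x = np.arange(N_grid) * 2 * np.pi / N_grid
u = np.sin(3 * x) + 0.5 * np.cos(7 * x) + 0.1 * np.sin(11 * x)

# Direct numerical integration: c_k = (1/N) * sum_j u(x_j) e^{-i k x_j},
# computed only for the first N_modes modes -> an (N_modes x N_grid) matmul.
k = np.arange(N_modes)[:, None]                 # shape (N_modes, 1)
direct = (np.exp(-1j * k * x[None, :]) @ u) / N_grid

# The FFT computes all N_grid coefficients; keep only the first N_modes.
via_fft = np.fft.fft(u)[:N_modes] / N_grid

print(np.max(np.abs(direct - via_fft)))        # ~1e-16: the two agree exactly
```

The direct scheme also works unchanged for non-uniform quadrature grids and non-Fourier bases, which is what makes the cost argument basis-agnostic.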
null
null
null
null
Actively Testing Your Model While It Learns: Realizing Label-Efficient Learning in Practice
Accept (poster)
Summary: The paper introduces the Active Testing while Learning (ATL) framework, which efficiently collects testing samples for online model risk estimation during the active learning process. The online risk estimation enables early termination of active learning. Additionally, ATL establishes a connection between Active Learning (AL) and Active Testing (AT) through an active feedback mechanism that transfers selected testing data points to the training dataset, thereby enhancing model performance. Strengths: 1. The paper tackles two crucial problems in the AL domain: efficient sampling of testing data points and the combination of AL with AT. Both of these problems are highly relevant and significant in AL research. 2. The paper introduces an unbiased estimator of the model risk, and also proposes a unique method to combine active quizzes into the final exam, maximizing the utilization of testing samples. 3. The paper provides a comprehensive set of experiments to evaluate the proposed estimator and feedback methods. The experimental results demonstrate the superior performance of the proposed approaches compared to existing techniques. Weaknesses: In the first half, the paper focuses extensively on optimizing the data sampling process to obtain an unbiased estimator of the model risk. However, a flaw arises in the feedback mechanism, which moves data points from the testing dataset to the training dataset based on the rule described in equations (8) or (9). This feedback undermines the unbiasedness of the risk estimator by introducing dependence in the selection of testing samples. Specifically, when we extract samples from the testing dataset based on a specific rule such as (8) or (9), the remaining samples are no longer independently sampled, thereby introducing bias. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: The paper needs more discussion of the potential bias introduced by the feedback mechanism. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 4 excellent Contribution: 4 excellent Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Bias introduced by the feedback mechanism.** Thank you very much for raising this insightful point! As we agree with your comment, we elaborate further on the problem here, following the answer to Q5. The key objective of feedback is to improve model learning without sacrificing the risk estimation too much. In Section 3.5, we investigate active feedback as a way to optimize a (novel) joint objective of learning and testing. Our theoretical result reveals that it is possible to achieve an optimal balance between the two sub-goals (i.e., (I) and (II) as specified in Eq. (8) of the paper) by choosing a suitable feedback set, such that we can further improve the model's learning performance while maintaining the risk estimation quality from our quizzes-testing process. Meanwhile, given the AL-agnostic nature of the framework, no further assumptions on the actual AL algorithms are available, thus leaving only a general analysis possible. Following the general analysis of the combined objective of learning and testing, we focus on proposing a practical solution. To minimize the negative impact of the feedback process on the risk estimation, we propose a practical solution in Eq. (9) that considers the corresponding test sampling importance $q({\bf x})$ of test samples. It aims to improve the combined objective of learning and testing in active feedback. The empirical results in Table 3 also show that we maintain the risk estimation better than random feedback sampling. In the revised paper, we will further clarify the limitations of the feedback process and clearly state that a further analysis of the feedback impact is part of future work. --- Rebuttal 2: Comment: Thank you again for your review and suggestions. We appreciate the comments about the importance of the unbiasedness analysis. We have incorporated your suggestions and further clarified the contribution, including the unbiased risk estimator for integrated quizzes over AL and the high-level analysis plus practical solution for active feedback. We sincerely hope that you find our answers satisfactory and consider updating your assessment. We are happy to provide any additional information if needed.
Summary: This paper considers an active testing procedure to assess the risk. The keys are 1) a proposal of a quiz using importance sampling, 2) combining the historical quizzes to make a test set for the final model test, and 3) a proposal of a feedback algorithm to strengthen the model using the labeled test data. Experiments focus on the performance of risk estimation, and some concern test accuracy with feedback algorithms. Strengths: The proposal of an active testing procedure addresses realistic problems. The quiz usage is impressive, and the set of quizzes is fully used. Weaknesses: Maybe we can consider a more straightforward example, such as an explicitly calculated risk. The performance of the proposed algorithms is validated; however, the experimental results remain limited, since active learning algorithms are diverse but only a few are considered. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have some questions. 1) What is the exact AL algorithm used in the experiments? 2) Is there any effect of different AL algorithms on ATL? 3) The feedback algorithm seems to be a hybrid of AT and AL. Therefore, it seems that the benefit of feedback cannot be larger than that of pure AL with the same budget (not the same budget as in the experiments)? 4) Is the algorithm possibly adaptive to the quiz sets (test sets)? I want to know how the test set in Table 2 is constructed. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Moderately explained. The connection between AL and AT seems weaker than claimed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Explicitly calculated risk and limitations from the AL algorithm.** We would like to clarify that our framework is designed to be AL-agnostic, and we do not assume any fixed distribution for AL sampling. We further discuss the AL strategies in Q2 and Q3 below. As for the test risk specification, in the evaluation we do have the hold-out test risk as the reference true risk (we explain further in Q5 below). **Q2 & Q3: AL strategy specification and various strategies.** Please see Q2 in the general response. Since our integrated risk estimator is AL-agnostic, different AL strategies do not have an effect on the risk estimation. However, the choice of AL strategy might affect how much the active feedback approach can improve the training. As we see from the results, the relative comparisons among different risk estimations are similar when using various AL strategies. Although random feedback might not be as effective when using BALD, our proposed feedback still mostly outperforms the fair-comparison baseline. **Q4: The benefit of feedback cannot be larger than pure AL.** We would like to clarify that in our framework, both learning and validation are considered to be sample-efficient, and we consider the total labeling budget. As we are not certain about the "pure AL with the same budget" setting described, we would like to discuss two different cases in response to the comment: 1) pure AL with the full budget vs. ATL; 2) a fair comparison between AL and ATL. (For confusion about the hold-out test set used for evaluation, please see Q5 below.) 1) In the first case, we assume that pure AL uses the entire budget $N_B$ for training, while ATL needs to divide the budget into $N_L$ and $N_T$ ($N_{FB}$ will be taken from $N_T$ and added to $N_L$). Indeed, the benefit of feedback cannot be larger than using the entire labeling budget for AL. However, we are assuming that some validation is needed during the AL process, and there is no way to perform validation if the labeling budget is drained by AL. Thus, our goal is a combined objective: improving the training performance while not sacrificing too much of the estimation quality. 2) In the second case, we compare against AL using the fair budget $(N_{L})^*=N_{L}+N_{FB}$. This is the setting we use in Table 2 and Appendix D.2.3-D.2.4, Tables 7 and 8. In this case, whenever we compare the model performance (hold-out test risk), the model is trained using the same number of labels for ATL-NF and all feedback methods. Here, we do not think a clear conclusion can be drawn about whether the benefit of feedback can be larger than that of pure AL. On the one hand, we have a limited selection range. On the other hand, we have already obtained labels for all samples within $N_T$, and thus could potentially choose better samples. We acknowledge that this could be tightly related to the specific AL strategy, but at least in our experiments using entropy sampling, feedback results can in fact surpass pure AL results in most cases (Table 7). We also provide an additional study on varying the percentage of the labeling budget used for active feedback in Tables 9 and 10 in Appendix D.2.4, which shows that the relationship is not trivial. We consider a further study of the trade-off between learning performance and risk estimation quality an important future direction.
**Q5: Constructing test sets and the evaluation setting.** For the first part of the question, we do make an adaptation of active risk estimation approaches to the AL setting, where we formulate the integrated risk estimator from individual quizzes, aiming to minimize the estimation variance. For the second part of the question, the hold-out test set is fixed using a large number of samples to provide the reference true risk. The hold-out test risk is solely used for evaluation purposes. (Similar to AL works that still use the hold-out test accuracy as the evaluation method, we also need to have a hold-out test set to provide the reference true risk. The difference is that in pure AL, the hold-out test set only provides reference for the risk/accuracy performance, while here it is also used to compute the risk estimation error.) This should come naturally to any AL/AT work. --- Rebuttal 2: Comment: Thank you again for your review and suggestions. In particular, we have further clarified the problem setting and the ATL framework. We appreciate the question about the benefit of active feedback and hope we have addressed it well. As we have also provided the missing results and necessary information in the rebuttal, we sincerely hope that you can find our answers satisfactory and consider updating the assessment. We are happy to provide any additional information if needed. --- Rebuttal Comment 2.1: Title: Response Comment: I carefully read your responses. All concerned issues are well addressed, and I have no further questions. I'll raise my score. --- Reply to Comment 2.1.1: Comment: Thank you very much for your positive feedback. We are happy to know that our responses have addressed your concerns. Your insights are instrumental in shaping the final version of our work.
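For readers unfamiliar with how the per-quiz estimates can be merged into one integrated risk estimate with minimal variance, here is a hypothetical sketch. The paper defines the exact weights $v_t$; below we only assume the classic inverse-variance weighting of unbiased estimators, with each quiz $t$ contributing an estimate whose variance is roughly $C_t/n_t$, as the rebuttals suggest.

```python
import numpy as np

def integrated_risk(quiz_estimates, quiz_variances):
    """Combine per-quiz unbiased risk estimates with normalized
    inverse-variance weights (approximately variance-optimal)."""
    R = np.asarray(quiz_estimates, dtype=float)     # per-quiz estimates R_t
    V = np.asarray(quiz_variances, dtype=float)     # estimated Var[R_t] = C_t / n_t
    v = 1.0 / V
    v = v / v.sum()                                 # weights sum to 1 -> unbiased
    est = np.dot(v, R)
    var = np.dot(v**2, V)                           # variance of the combination
    return est, var

# Example: three quizzes; later quizzes test a better-trained model and tend
# to have smaller estimated variance, so they receive larger weight.
est, var = integrated_risk([0.42, 0.35, 0.33], [0.010, 0.004, 0.002])
print(est, var)
```

Because the weights are normalized, the combination stays unbiased whenever the individual quiz estimates are, which is the property the feedback discussion above tries to preserve.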
Summary: The paper provides a relatively new flavor to recent active learning works by considering active testing. In particular, the authors evaluate the learned model on the fly on adaptively acquired sets, called quizzes. The model learns based on those quizzes, and part of each quiz is added to the training set as feedback. At the end, the model is evaluated on the final exam set (the cumulative quizzes excluding feedback examples). Statistical convergence guarantees are provided, and low-variance (unbiased) estimators are considered. A proof of concept is provided on a test dataset with a GP model, and common classification datasets are considered for the real-world setting. The main paper reports how far the true risk (estimated from a large number of held-out samples) is from the empirical risk. Strengths: As summarized above, the paper brings active testing into the active learning loop, with statistical convergence guarantees and low-variance unbiased estimators. The theory and practical aspects seem interesting to the broad community. It is generally well written. Weaknesses: Many of the weaknesses I list are my own confusions and doubts. I would *strongly* appreciate the authors adding an algorithmic block which shows how each and every step is done in the actual implementation. Lastly, I haven't verified the correctness of the proofs line by line, but the theorems seem sound. On AL acquisition functions: - I don't see any discussion at all about which AL algorithm is being used in the experiments. Can the authors please add that discussion. On theory: - How is the sampling from the optimal $q_t$ done to get the quizzes at each testing round? The expression in equation (4) involves an integral over the squared error between the loss function and the true risk, under the true underlying posterior distribution over labels. While the true risk is approximated using a multi-source estimate, I am not sure where we get the labels to even define the distribution. - The multi-source estimate is a function of the time step; ideally equation (5) should have $t$ on the LHS. Secondly, right after the first round, how do we get $\hat{R}_{Q_t}$ for the multi-source estimate? - Wherever expectations (integrals) are approximated using empirical estimates in the quizzes or the final test, or to get $C_t$, what is the number of examples? I feel that for a small number of examples, $C_t$'s estimate can be pretty noisy, therefore making the $\tilde{R}$ estimate noisy. - For the active feedback setting, is only one example chosen from the quiz set $\mathcal{Q}_t$? If not, how is the batch selection done? Equation (9) only seems to provide information for one example. If it is more than that, I'd appreciate writing this as an algorithm (greedy procedure), or at least some discussion. On experiments: - Can the authors provide absolute accuracy numbers, and how do they change over the course of AL rounds? - Why is the scale of error in Fig 3g so different from the others? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weakness section. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I feel that more datasets/architectures should be considered to study the real impact of the early stopping method; in particular, it should be compared (in a fair manner) against a fixed-accuracy-based criterion. While I appreciate the provided framework, it needs much more discussion as to how it would fit in the current paradigm, in particular in the low-budget active learning regime, or when we already start with a pretrained backbone (to be sample efficient). Lastly, I feel skeptical about the empirical estimates of the expectations, as the number of examples used for testing would usually be small (since it counts toward the labeling budget as well). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: AL strategy specification.** Please see Q2 in the general response. **Q2: Sampling using $q_t$ and distribution estimation.** As we explained in Q1 in the general response, the test sampling is done in a sequential way. In each test sampling round, we compute $q({\bf x})$ over the entire unlabeled pool according to Eq. (4). We use a deterministic way of summing over all classes based on the posterior distribution predicted by the model (a toy sketch of this proposal computation appears after this thread). The multi-source estimate only utilizes the labels of the training samples and previously selected test samples, and also uses the predictions for unlabeled samples. Later, in the active feedback stage, if a test sample is selected to be added to the training set, we remove it from the testing set. Our newly provided ATL algorithm pseudocode also demonstrates the entire process (see our answer to Q3 in the general response). **Q3: Multi-source risk estimate.** Thank you for the suggestion; we will add $t$ to the terms in Eq. (5). In the first round, $\tilde{R}$ is not considered because there are no test samples yet. In each round after the first, all the terms in Eq. (5) are based on the current stage of the ATL process. That is, at time $t$, we use the current training loss $R_{train}$, the current test risk $\tilde{R}$ (we will fix the notation in the paper; we cannot add $t$ in OpenReview), and the currently computed $R_{\theta}$ (computed in the same way as in ref [19]). **Q4: Estimate of $C_t$ and scale.** Thank you for the question. As there is indeed a notation issue in the description of $C_t$, we would like to clarify that it should be $\frac{1}{n_t}\sum_{i=1}^{n_t}\sum_y\frac{p(x)}{q_t(x)}[\mathcal{L}(f_T(x),y)-R]^2p(y|x)$ (with $R$ estimated by $(R^{multi}_{\theta})_T$). This definition is based on $\sigma_t(F_T)$ as previously mentioned, which scales with $n_t$. Although we do not consider the final $\tilde{R}$ noisy because of this, a small number of test samples can indeed raise concerns about estimation quality; this, however, aligns with the fundamental challenge studied in this work. We have taken approaches to make the result as accurate as possible, e.g., the estimate of $(R^{multi}_{\theta})_T$ is done over the remaining unlabeled set. Since $v_t$ is always used after normalization, the key idea is to compare the confidence of quizzes at different $t$ and apply weights accordingly, serving the purpose of achieving the approximately variance-optimal result in the end. **Q5: Active feedback setting and algorithm box.** Please see Q3 in the general response about the algorithm. For test and feedback selections, we sequentially sample $n$ times to obtain each batch. The testing process does not involve re-training the model, thus the batch mode has no effect. For the feedback process, unlike AL where obtaining the label and re-training the model could have a large impact, we already have the labels for both training and testing samples at this moment. Re-training may have an impact, but is probably not worth the cost. **Q6: Test accuracy results.** Thank you for the suggestion.
We here present the test accuracy results corresponding to the tables in the paper:

| Dataset | AL strategy | Feedback | Iter4 | Iter8 | Iter12 | Iter16 | Iter20 |
| -------- | --------- | ----- | ----- | ----- | ----- | ----- | ----- |
| MNIST | Entropy | NF | $83.6\%$ | $91.2\%$ | $93.2\%$ | $94.8\%$ | $95.9\%$ |
| MNIST | Entropy | RF | $83.3\%$ | $91.4\%$ | $94.2\%$ | $94.6\%$ | $96.1\%$ |
| MNIST | Entropy | ATL | $84.3\%$ | $92.2\%$ | $94.3\%$ | $95.3\%$ | $96.1\%$ |
| FashionMNIST | Entropy | NF | $69.9\%$ | $77.0\%$ | $78.1\%$ | $81.5\%$ | $83.2\%$ |
| FashionMNIST | Entropy | RF | $73.0\%$ | $76.1\%$ | $78.6\%$ | $82.0\%$ | $83.3\%$ |
| FashionMNIST | Entropy | ATL | $72.9\%$ | $77.5\%$ | $80.8\%$ | $82.1\%$ | $84.3\%$ |
| CIFAR10 | Entropy | NF | $42.5\%$ | $46.8\%$ | $52.4\%$ | $57.0\%$ | $57.8\%$ |
| CIFAR10 | Entropy | RF | $42.1\%$ | $50.5\%$ | $49.1\%$ | $57.3\%$ | $58.2\%$ |
| CIFAR10 | Entropy | ATL | $43.3\%$ | $50.4\%$ | $51.4\%$ | $56.8\%$ | $59.5\%$ |

As we can see from the table, the accuracy results are mostly consistent with the true risk (lower loss coincides with higher accuracy), with a few exceptions. We will further explain the early-stopping-related accuracy results in Q8: limitations. **Q7: Scale of error in Fig 3g.** Thank you for noticing this detail and raising the question. Figure 3 demonstrates the learning-testing-feedback process on a synthetic dataset. Because the dataset is simpler, the true risk of the model is very low, especially at a later stage such as quiz 18. The estimation error is also low in this case, particularly in the no-feedback case (ATL-NF), as shown in Figure 3(g). However, in this case, there is no benefit to model training from the feedback. **Q8: Limitations.** Thank you for the suggestion. We acknowledge that the current early stopping results have limitations as mentioned. However, fixed-accuracy stopping results in a much higher variance in the stopping iteration.

| Dataset | Method | Stopping Iteration | Iteration Variance | Test Accuracy | Accuracy Variance |
| -------- | --------- | ----- | ----- | ----- | ----- |
| Fashion MNIST | Fixed | 15.8 | 1.36 | $81.33\%$ | $4.4\times 10^{-5}$ |
| SVHN | Fixed | 16 | 1.6 | $85.20\%$ | $6.1\times 10^{-5}$ |
| SVHN | Combined | 15.6 | 0.64 | $84.02\%$ | $3.2\times 10^{-4}$ |

We agree that the expectations make the problem difficult. In this paper, we use the integration of quizzes, the improved intermediate estimate, and practical feedback solutions to address these challenges and conduct empirical studies. However, as the reviewer mentioned, there is still much room for improvement in the current framework. Again, we would like to re-state our contribution as proposing the first ATL framework, and presenting a practical solution that improves the learning performance while maintaining the quality of risk estimation given a limited total labeling budget. --- Rebuttal 2: Comment: Thank you again for your review and suggestions. We hope that our responses have cleared up the previous confusions. We appreciate the comments about the novelty of the problem and have incorporated your suggestions to further clarify the contribution as well as the limitations. As we have also clarified the AL strategy and provided the missing results and necessary information in the rebuttal, we sincerely hope that you find our answers satisfactory and will consider updating the assessment. We are happy to provide any additional information if needed. --- Rebuttal Comment 2.1: Title: Thanks for the rebuttal. Comment: I thank the authors for the rebuttal; I'd retain my rating.
--- Reply to Comment 2.1.1: Comment: Thank you very much for reading our responses and keeping the positive rating. We will make sure to incorporate your suggestions in the revised paper.
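The sketch referenced in Q2 above: a toy computation of a variance-reducing test-sampling proposal over the unlabeled pool, summing deterministically over classes under the model posterior. The square-root form is the standard choice from the active testing / active risk estimation literature and is only assumed here; the paper's Eq. (4) may differ in detail.

```python
import numpy as np

def test_sampling_proposal(posteriors, loss_matrix, risk_estimate):
    """
    Toy variance-reducing proposal q over an unlabeled pool.
      posteriors:    (N, K) model posterior p(y|x) for each pool point
      loss_matrix:   (N, K) loss L(f(x), y) for each point and candidate label
      risk_estimate: scalar intermediate estimate of the true risk (e.g. R^multi)
    """
    # Deterministic sum over classes of the squared deviation of the loss from the risk.
    sq_dev = np.sum(posteriors * (loss_matrix - risk_estimate) ** 2, axis=1)
    q = np.sqrt(sq_dev)
    return q / q.sum()

rng = np.random.default_rng(0)
posteriors = rng.dirichlet(np.ones(3), size=100)          # toy 3-class posteriors
loss_matrix = -np.log(np.clip(posteriors, 1e-12, None))   # cross-entropy per candidate label
q = test_sampling_proposal(posteriors, loss_matrix, risk_estimate=1.0)
next_quiz_point = rng.choice(len(q), p=q)                 # one sequential quiz draw
```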
Summary: This paper proposes a framework for integrating active learning and active testing in an online fashion. The proposed algorithm incrementally samples training and testing examples for each batch and can leverage all of the testing examples so far for evaluation. Moreover, the sampling distribution for testing is designed based on an estimate of the variance of the risk. Experiments are conducted for both Gaussian processes and neural networks on multiple datasets. Strengths: The paper studies a very important problem and provides an effective solution. I find the method to be novel. Weaknesses: 1. I am not sure about the claim that the test risk estimate is unbiased. It is indeed unbiased if one knows the true risk, but when using the estimate proposed in section 3.3 to substitute the true risk, is this still unbiased? Moreover, if the feedback set is used in training, the estimator doesn't seem to be unbiased either. I think the authors should play down the claim that the proposed risk estimate is unbiased, since it is not when combined with all of the tricks. 2. In the experiments, the results of random sampling are not shown in the tables. 3. There are quite a few heuristics proposed in the paper from sections 3.3 to 3.5. Although the authors provided intuitive explanations, the design of the algorithm may have overfitted to the three datasets the authors tested on. For example, the weighting between different components of the intermediate estimate seems quite arbitrary (section 3.3). It is also not clear how C is chosen in practice in section 3.5. I strongly encourage the authors to further test their design across a wider range of settings. 4. The paper assumes the active learning algorithm samples based on a probability distribution. However, many practical AL algorithms do not. This can cause the test risk estimate to be really high for those algorithms. I strongly encourage the authors to address this problem. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. The active learning strategy used in this paper seems to be very vague. What's the algorithm being used? 2. There are quite a few components to the proposed algorithm. Can you give an algorithm box to make it clearer? Also, can you provide a time complexity analysis? 3. In the appendix of [1], the authors used a water-filling algorithm to make sure the examples sampled up to time t follow roughly the distribution $q_t$, even though $q_1, ..., q_{t-1}$ may differ from $q_t$. I wonder how this technique compares against what's proposed in 3.4. 4. Please also see the weakness section. I think this work studies an important problem and proposes a cool solution. I am willing to raise my score if the authors sufficiently address (part of) my concerns. [1] Katz-Samuels, J., Zhang, J., Jain, L., & Jamieson, K. (2021, July). Improved algorithms for agnostic pool-based active classification. In International Conference on Machine Learning (pp. 5334-5344). PMLR. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Sufficient. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The unbiased risk estimate.** Thank you very much for this insightful question! Please see the answers to Q4 and Q5 in the general response. Adding to Sections 3.2 to 3.4 in the paper, we discussed in detail the improved multi-source intermediate estimator (which impacts the variance-optimal solution but does not change the unbiased nature) in Q4, and the effect of feedback (for which we provide a practical solution and empirical studies) in Q5. **Q2: Random sampling results.** Thank you for raising this concern. We have already included the random feedback results in Table 3. In Appendix D.2.2, Table 6, we also provide results using random test sampling and a detailed analysis. Please note that this result is quite different from the AT comparisons in previous works without the AL setting. First, the AL changes the proposal distributions, and the straightforward ARE/AT/ASE integrations actually introduce biases. Second, the base model is not accurate enough, making the test sampling proposal less optimal. From the results, we can see that random test sampling is comparable to the baselines, as it also asymptotically converges to the true risk, but ATL performs better. **Q3: Analysis from Sections 3.3 to 3.5 and the constant C.** We have further explained the problem setting in the general response. More specifically, we have established the quiz using existing results in Section 3.2, improved upon the existing intermediate estimate of the true risk in Section 3.3, theoretically constructed an unbiased low-variance integrated estimator in Section 3.4, and provided further analysis in Section 3.5. Most parts of Sections 3.4 and 3.5 are strict theoretical analyses using properties of the multivariate Gaussian and other theoretical results. We have to introduce some heuristics in Section 3.3 and the later part of 3.5 to have practical solutions or discussions. The weighting between different components of the intermediate estimate is based on the size of each component at the current stage. Each component (training loss, predictive uncertainty, and test loss) is assigned a weight according to the number of samples we have. The idea is to improve upon only using $R_{\theta}$, since $R_{\theta}$ usually underestimates the true pool risk (in a poorly calibrated case, overestimating is also possible). We agree that the current $R^{multi}_{\theta}$ is still heuristic, but in future work we might further improve the estimate using some theoretical insights (which usually require more assumptions to be made about the model and AL). In Section 3.5, the constant $C$ is not a specific parameter, and it is not utilized in the practical solution. It is used for the purpose of theoretical analysis, and what matters is that in general settings it can be inferred to be a constant of $\mathcal{O}(1)$, meaning that a balance can indeed be achieved between the two components in the joint learning-testing objective. **Q4: Practical AL algorithm and setting clarification.** We would like to clarify that we work in a setting where the unlabeled pool is significantly larger than the labeled training set. Such a setting is commonly adopted and covers many practical scenarios where the labeling budget is limited or the labeling cost is prohibitive. The proposed risk estimate does not depend on (or make assumptions about) the AL sampling distribution.
We agree that practical AL strategies might not have clearly defined sampling distributions, but given the scale of our problem setting, we assume the test risk estimation to be mostly orthogonal to the AL. As we explained about the unbiased risk estimation (see the answer to Q4 in the general response), the estimation error is expected to be low when we use the approximately variance-optimal test sampling approach. **Q5: AL strategy specification.** Please see Q2 in the general response. **Q6: Algorithm box and time complexity.** Please see Q3 in the general response. As for the time complexity, we should study each component separately as the number of samples varies. The AL and training components are standard. In test selection, we first compute $R_{\theta}$ ($\mathcal{O}(N_U)$), then evaluate $q(\mathbf{x})$ over the unlabeled pool ($\mathcal{O}(N_U)$). The integrated risk estimation takes $\mathcal{O}(N_UTN_T)$. The feedback might require an additional step of computing the distance ($\mathcal{O}(N_TN_L)$). The sampling complexity should be comparable to AT/ASE methods, which require additional complexity $\mathcal{O}(N_LTN_T)$ for training surrogate models. **Q7: Comparison with reference [1].** Thank you for suggesting the reference! The method mentioned is a very interesting approach to achieving optimal sampling in multiple rounds. However, since the scope of that method is only active learning, full replacement is allowed and the entire sampling serves the single purpose of ensuring the learning performance. In our setting, since our test sampling follows AL sampling and model training, the change of target is much more significant between rounds. As we show in Section 3.3, the sampling in each quiz is theoretically variance-optimal for the current model. However, as the model changes, the goal also changes, and we expect the sampled data instances needed to compensate for the change will be greater than in the pure-AL case, meaning more labels might be required to achieve a globally optimal sampling result than a locally optimal one. Unfortunately, due to the vast difference in settings (AL vs ATL, binary vs general, among others), it is not likely we can provide a reasonable comparison between the methods. Our integration in Section 3.5 has a similar effect to adjusting toward the global optimum (but using the fixed budget for each round) by adding another layer of weights to the local optimum according to a confidence-based metric. --- Rebuttal 2: Comment: Thank you again for your review and suggestions. We hope that our responses have clarified the confusions raised in the review. We have incorporated your suggestions and further clarified the contribution, including the unbiased risk estimator for integrated quizzes over AL and the high-level analysis plus practical solution for active feedback. As we have also clarified the AL strategy and provided the missing results and necessary information in the rebuttal, we sincerely hope that you find our answers satisfactory and will consider updating the assessment. We are happy to provide any additional information if needed. --- Rebuttal Comment 2.1: Comment: I would like to thank the authors for their detailed rebuttal. Most of my concerns are addressed and I have raised my score. I would like to suggest that the authors distinguish what is heuristic versus principled by restructuring section 3. Currently, it is not clear which of the estimates are unbiased.
Perhaps explicitly stating the biases and heuristics could really help the reader understand. I believe we should not shy away from admitting things are heuristic in our papers. --- Reply to Comment 2.1.1: Comment: Thank you very much for the response. We are happy to find that we have addressed most of your concerns. We will carefully follow your suggestions to update the paper. We will further clarify Section 3 to distinguish the unbiased integrated estimator from the heuristics used in the practical solution.
Rebuttal 1: Rebuttal: In this general response, we address a set of important questions that commonly occur in multiple reviewers' comments, to avoid repeating the same response in each individual rebuttal. **Q1: Clarification of the problem setting. (To all reviewers)** In this paper, we aim to construct an integrated framework of active-testing-learning. This problem is novel but highly practical. Unlike the many active learning (AL) works or the few existing active testing (AT) works that focus on efficient sampling designed solely for training or testing, respectively, the proposed work is the first effort that simultaneously considers both active testing and learning in an integrated framework. It tackles a much more challenging but highly practical setting, where the total labeling budget for the entire learning-testing (validation) process is limited. Within the scope of the proposed work, we focus on studying risk estimation and the active feedback process, which can work seamlessly with a wide range of different AL algorithms (we elaborate on the AL-agnostic results below). Following this rationale, our evaluation mainly covers the following perspectives: (1) for the first objective, we compare the estimated risk with a hold-out test risk (representing the true risk); (2) for the second objective, we evaluate how well we can maintain an accurate risk estimation while improving the model performance using active feedback. **Q2: AL strategy. (To reviewers LqNc, bQCr, and 5t4X)** The ATL framework is designed to be AL-agnostic so that it can be easily integrated with a wide range of commonly used AL algorithms. To demonstrate its general applicability, the main results are obtained using uncertainty-based (i.e., entropy-based) sampling as the AL strategy, given its great popularity in many AL models. We will more clearly describe and explain the chosen AL strategy in the revised paper. Additionally, we have gathered risk estimation results using two other commonly used AL strategies: *margin sampling* (Wang et al., A new active labeling method for deep learning) and *BALD* (Houlsby et al., Bayesian Active Learning for Classification and Preference Learning). The results (in the attached PDF) show that the integrated risk estimation performs similarly regardless of the AL strategy. The relative scales of the different risk estimations are also similar to the entropy case in the main paper. As for the feedback results, we do see some different effects. Depending on the AL strategy, random feedback performs unstably. However, our proposed feedback still mostly outperforms the fair comparison baseline. **Q3: Algorithm block. (To reviewers LqNc and bQCr)** Following the reviewers' suggestion, we have provided an algorithm block (in the attached PDF) that includes and organizes all the key steps in a systematic way. **Q4: The unbiased risk estimator and the multi-source estimate. (To reviewers LqNc and bQCr)** The unbiased risk estimation achieved by each quiz is guaranteed asymptotically by Lemma 1 in Active Risk Estimation [19], which we referenced in Eqs. (2)-(3). We then extended the unbiased quiz-wise risk estimator to the integrated version (Section 3.4) through the multivariate Gaussian analysis before Theorem 2 in our paper. Since the asymptotic relationship does not hold well when $N$ is small, the need to reduce the variance of the estimation becomes even more important in the AL setting. The key idea of Theorem 1 is to analyze the best variance of the integrated estimator given the quizzes.
To achieve this, the test sampling proposal in each quiz is kept as in Eq. (4). Guided by our theoretical analysis, the multi-source estimate $R_\theta^\text{multi}$ of the true risk is introduced to construct the test sampling proposal distribution $q$ in practice. It is an improvement upon the $R_\theta$ proposed in Active Risk Estimation [19]. Ideally, if $R_\theta$ is close to the true risk $R$, the test sampling proposal will have the lowest variance while unbiasedly converging to the true risk. Similarly, in Active Testing [13], if the true risk is known, the optimal test sampling proposal can be directly constructed using the individual losses, resulting in an immediately converging estimator. However, in reality, the labels are unknown for the samples to be selected. Thus, an approximation has to be made, and it will affect the quality of the risk estimation. We would like to clarify (similar to [13, 19]) that since the estimators are weighted sums (by importance weighting or AT-unbiased weighting), they always asymptotically converge to the true risk (in the same way that random sampling will also converge to the true risk, just slowly). Thus, the quality of the estimation really depends on the variance of the estimator. Again, the true variance cannot be known beforehand because we do not have access to the labels; thus, the practical solutions are proposed to achieve good test sampling. In Proposition 1, we show that the existing $R_\theta$ is not optimal. Combining the current loss information that we do have access to ($R_{train}, R_{\theta}, \hat{R}_{Q_t}$) allows us to bring the intermediate estimate closer to the true risk. **Q5: The effect of feedback on the unbiased risk estimator. (To reviewers LqNc and 2XK2)** We want to clarify that our paper explicitly claims unbiasedness only for the integrated estimator $\tilde{R}$. The active feedback process may indeed affect the unbiased risk estimator in a negative way. We show that the negative impact can be controlled in a combined learning-testing objective. In a general analysis, we show that active feedback can indeed achieve an optimal solution for the joint optimization problem. We then provide a practical solution and empirically verify the theory by showing that active feedback can indeed improve model learning without sacrificing too much risk estimation quality. Please see the answer to Q5 from reviewer 2XK2 for more details. Pdf: /pdf/d51704f3c3c9a41a3f734c8c380c2c54333305a4.pdf
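A self-contained toy check of the convergence claim above (any full-support importance-weighted proposal yields an estimate that converges to the true risk; a better proposal only converges faster). This is illustrative only, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
losses = rng.gamma(2.0, 1.0, size=N)   # per-example losses over a finite pool (toy)
p = np.full(N, 1.0 / N)                # target distribution over the pool
q = losses + 0.1                       # a loss-aware, full-support proposal
q /= q.sum()

true_risk = float(np.sum(p * losses))
idx = rng.choice(N, size=5_000, p=q)
is_estimate = float(np.mean(p[idx] / q[idx] * losses[idx]))  # importance-weighted estimate
print(true_risk, is_estimate)          # both approximate E_p[loss]; the IS estimate is unbiased
```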
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Counterfactual-Augmented Importance Sampling for Semi-Offline Policy Evaluation
Accept (poster)
Summary: The paper proposed a novel method for off-policy evaluation with offline data that entails human input. Specifically, the paper assumes access to human annotations for counterfactual state-action pairs, in the form of counterfactual immediate rewards (in bandits) or Q values (in RL) under the behaviour policy (as a relaxation of Q values under the evaluation policy). They used a bandit example to show that simply incorporating counterfactual annotations into the offline data can shift the state distribution and therefore bias the overall estimates. To keep the state distribution unchanged, they further assume access to a human-provided weight for each visited / annotated counterfactual state-action pair. Effectively this creates an augmented behaviour policy, as if smoothed towards a uniform policy. They are then able to reweight the importance sampling ratios and conduct OPE as a weighted sum of the conventional backwards bootstrapping and the human annotations. They then move on to give theoretical analyses of the bias and variance of the proposed OPE estimator. Experiments conducted on a synthetic bandit problem and a healthcare problem with a simulator demonstrated overall better performance than vanilla IS and ablated versions of the proposed method itself. Strengths: 1. proposed a novel method for OPE with offline data, which seems increasingly timely as machine learning is marching into real life 2. experiments have tested thorough aspects of the proposed idea Weaknesses: 1. even though the paper goes to great lengths to keep the state distribution unaffected by incorporating human annotations, it still does not address the distribution shift between the state distributions induced by the behaviour policy and the evaluation policy 2. expert annotations for counterfactual state-action pairs, along with their weights, are also expensive to get (though arguably safer / easier than rolling out the evaluation policy in the real world) and inevitably subjective 3. unclear how the promising results could extend to real-world scenarios Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. how would human experts provide Q values under the behaviour policy in practice? It does not seem any easier than providing Q values under the evaluation policy, both of which entail a sequence of counterfactual state-action pairs that are not their own choices 2. Appendix D.2, to relax Q under the evaluation policy to Q under the behaviour policy in the annotation, why is simply considering a bias term in the annotation enough? It seems to me you would also need to reweight the annotation based on how relatively likely the counterfactual action is selected. Yes, you have indeed reweighted for this between the evaluation policy and the augmented behaviour policy, but now you have a third policy, i.e. the original behaviour policy before augmentation (under which you have the Q values). I would imagine that in addition to the IS ratio you have in place already, you would also need to reweight the Q value with, e.g., the product of subsequent IS ratios between the evaluation policy and the original behaviour policy. However, I see in your experiments you showed that the performance is not much affected by biased annotations, so I suppose not reweighting the biased annotations is not a major issue, but it would still be good to see further justification for that. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: please see above Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer rPY7 for the time spent reading our paper (including the appendix) and their helpful feedback. We are also encouraged by their overall positive assessment of our paper. Below we address the reviewer's questions and concerns. **1. The proposed method does not address state distribution shift between $\pi_e$ and $\pi_b$** Vanilla IS (and PDIS) are unbiased under the common support assumption (Assumption 3 in our paper) and do not require explicit handling of the state distribution shift due to different policies. Our paper points out the bias issue of naive incorporation of counterfactual annotations (where states with more annotations are over-represented) and provides a simple reweighting solution to address it. - We also acknowledged in related works (L562-L564) that there is a family of approaches based on importance weighting on marginalized state distributions $d_{\pi}(s)$. We believe that similar modifications to the MIS/DICE estimator family can be made, but they require separate full analyses of their bias/variance, which is outside the scope of our current paper. **2. Is it even practical to collect counterfactual annotations? “How would human experts provide Q values under the behaviour policy in practice?”** (see also: [overall response](/forum?id=dsH244r9fA&noteId=jKEOBV6cyp), [noPk](/forum?id=dsH244r9fA&noteId=l1RyMRJL8s)) One domain that the authors have the most experience in is healthcare, where the goal is to optimize sequential treatment policies (e.g. Komorowski et al. 2018). In this domain, clinicians are constantly evaluating alternative treatment paths in their minds when making treatment decisions, but we only observe what was actually done; counterfactual annotations can be seen as a mechanism to elicit this information in their thought process that is otherwise not recorded. We envision clinicians being asked questions such as “given that the patient received treatment A, if treatment B was used instead, what do you think would happen to the patient”. [1] Komorowski et al. “The Artificial Intelligence Clinician learns optimal treatment strategies for sepsis in intensive care”. Nature Medicine, 2018. - Re subjectivity of annotations: while an inevitable property of human annotations, we want to point out two potential remedies that could improve the usefulness of our approach: (i) we can average the annotations from multiple annotators to reduce annotation variance, (ii) our approach is relatively robust to high levels of annotation noise (as seen in the Fig 5 experiments), and, informally speaking, if the annotation noise is similar to the observed reward variance, then the variance reduction we get is as if we had collected more offline data. We also believe there is opportunity for future research, especially in the HCI space (which we mention on L530-L534), to improve annotation solicitation (e.g. better question phrasing to reduce subjectivity in responses). **3. Question about Appendix D.2, why simply considering a bias term in the annotation is enough?** Thank you for raising this detailed question; we are happy to provide more clarification (main points are summarized below). > “why simply considering a bias term in the annotation is enough? ... you would also need to reweight the Q value with e.g.
product of subsequent IS ratios between the evaluation policy and the original behaviour policy…” We would like to refer the reviewer to the unbiasedness proof of C-PDIS, which we have sketched in L272-L279 and illustrated in Figure 4, and fully derived in Appendix C.4 (starting L725 on page 21). In Figure 4 (ii) and (iii), what we require for unbiasedness is that both the factual and counterfactual branches provide unbiased estimates of Q under $\pi_e$, for the factual $a_t$ and the counterfactual $\tilde{a}$ respectively. When using the bias-correction approach in Appx D.2, we are directly mapping the annotation from Q of $\pi_b$ (the original behavior policy) to Q of $\pi_e$ (the evaluation policy) by estimating their difference (i.e. the annotation bias). Indeed, an alternative approach to correcting this bias is to apply IS, as proposed by the reviewer, using the “product of subsequent IS ratios between the evaluation policy and the original behaviour policy”. However, the main challenge here is that we only have the annotation for $\tilde{a}$ but no subsequent sub-trajectory information (recall Fig 4(iii)). We have experimented with a version of this approach where we search for all matching sub-trajectories that start with $(s_t, \tilde{a})$ and use the average subsequent IS ratio to reweight, but we found that it introduced a lot more variance (due to the limited number of matching sub-trajectories) than the model-based bias-correction procedure described in the paper. --- Rebuttal Comment 1.1: Comment: Thank you authors for your response. 1. I acknowledge that my state distributional shift concern is addressed. The authors did point out that reweighting with marginalized state distributions is an orthogonal direction. 2. I'm half-convinced by the accuracy and ease of getting counterfactual human annotations; they are mental processes, after all. But this aside, the technical part of the paper seems of value and self-contained. 3. Regarding the bias term in Appendix D.2, do you mean you are not learning/correcting for this term but only pointing out its existence? --- Reply to Comment 1.1.1: Comment: Dear Reviewer rPY7, thank you for the reply. For **3.**: yes, we are learning/correcting the bias term. Sorry if our initial response wasn't clear. In Appendix D.2, - the first paragraph (L772-L777) points out the bias issue, - the second paragraph (L778-L786) describes a model-based procedure to learn and correct this bias. In the equation after L783, $\hat{g}^{(i),\tilde{a}}_t = g^{(i),\tilde{a}}_t - \hat{\epsilon}_{G_t}(s_t, \tilde{a})$, the learned bias is subtracted from the annotation $g$ so that it has the correct, “unbiased” expectation. - We also acknowledge (L785) the case when this is not possible (e.g. due to missing support, when the annotated counterfactual action did not appear in the offline data), in which case we recommend using the obtained annotation as is.
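For illustration, a minimal sketch of the bias-correction step described in this thread, $\hat{g} = g - \hat{\epsilon}_{G}(s, \tilde{a})$, using a simple tabular average in place of the paper's model-based procedure; all names and the tabular simplification are assumptions:

```python
import numpy as np

def fit_annotation_bias(pairs):
    """
    Estimate eps(s, a) = E[annotation | s, a] - Q^{pi_e}(s, a) from
    (s, a, g, q_pie) tuples with offline support, where g is the annotation
    and q_pie an estimate of Q under pi_e.  Tabular stand-in for the
    model-based procedure sketched in Appx D.2.
    """
    buckets = {}
    for s, a, g, q_pie in pairs:
        buckets.setdefault((s, a), []).append(g - q_pie)
    return {k: float(np.mean(v)) for k, v in buckets.items()}

def correct_annotation(g, bias, s, a_tilde):
    """g_hat = g - eps_hat(s, a_tilde); without support, use the raw annotation as is."""
    return g - bias.get((s, a_tilde), 0.0)
```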
Summary: - Propose a setting in OPE where experts can annotate the value of counterfactual actions. - A new importance sampling scheme for the setting is proposed. - Theoretical analysis and experiments confirm superiority over the unweighted case, etc. Strengths: 1. The proposed setting of experts annotating the counterfactual values seems like a direction worth exploring 1. Clearly written, with some theoretical guarantees. In particular, the bias and variance are carefully discussed for cases where assumptions such as coverage are not met, and benefits such as bias reduction compared to importance sampling (IS) are shown. 1. A useful heuristic is proposed (equal weighting) Weaknesses: 1. The method itself and the baselines are fairly naive. 1. Comparisons with standard OPE baselines, such as the direct and doubly robust methods as well as the IS method, may be of value to help readers better estimate the benefit of annotating counterfactuals. 1. I also have a question about the claimed problem with the naive IS method, which is pointed out in section 3.1. See Question 2 below. 1. Other similar annotation augmentation approaches (e.g., [1,2], but not limited to these) are compared neither theoretically nor experimentally. I believe that the proposal to add counterfactual annotations is a costly and significant change to the problem setting and should be compared widely against similar approaches. [1] Srivastava, Megha, Tatsunori Hashimoto, and Percy Liang. "Robustness to spurious correlations via human annotations." International Conference on Machine Learning. PMLR, 2020. [2] Kaushik, Divyansh, Eduard Hovy, and Zachary Lipton. "Learning The Difference That Makes A Difference With Counterfactually-Augmented Data." International Conference on Learning Representations. 2019. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: 1. What is the exact definition of "naive weighted"? What is the theoretical advantage of the proposed method? 1. The problem with the naive IS method is shown in section 3.1, but is this not simply because the state visitation probability $d_\pi(s)$ is not corrected? The problem could be solved by simply applying any OPE method that takes $d_\pi(s)$ into account, such as DualDICE [3], to the augmented dataset. [3] Nachum, Ofir, et al. "Dualdice: Behavior-agnostic estimation of discounted stationary distribution corrections." Advances in neural information processing systems 32 (2019). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 4 excellent Contribution: 2 fair Limitations: 1. The proposed method requires costly, accurate annotation, which is properly investigated theoretically and experimentally. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to read and evaluate our work. Below, we provide clarifications and answers to your questions. **1. “The method itself and the baselines are fairly naive”** (see also: [overall response](/forum?id=dsH244r9fA&noteId=jKEOBV6cyp), [71Lt](/forum?id=dsH244r9fA&noteId=tJFHuJCKwM)) In this paper, we establish a formal framework of ‘semi-offline’ evaluation (offline data + counterfactual annotations), and propose a new estimator for this setting based on importance sampling. While the C-IS estimator is simple, our analysis **reveals new theoretical insights and important practical considerations** for practitioners to keep in mind, e.g. it’s better to impute missing annotations and use equal weights. Therefore, we believe our paper makes non-trivial contributions to the community and is an important step toward enabling RL applications in high-stakes domains such as healthcare. - Comparison to other OPE methods: We believe the relevant comparison is C-IS against IS (and C-PDIS against PDIS), as **the improvement of C-PDIS over PDIS directly illustrates the benefit of counterfactual annotations**. In other words, whenever good counterfactual annotations are available, our results (theoretical + empirical) suggest that the counterfactual-augmented version C-PDIS should be preferred over vanilla PDIS. Results for other OPE methods are not shown as they are not directly relevant to supporting our main claim on the utility of counterfactual annotations (and how to incorporate them into IS-based estimators). In addition, past benchmarking works have found that the best OPE method is environment-specific and have recommended that multiple OPE methods be used in practice [Fu et al. 2021, Voloshin et al. 2021]. We believe there is opportunity to augment other classes of OPE methods with counterfactual annotations and have highlighted this potential in related work (in Appendix A.1 on page 12). [1] Voloshin et al. “Empirical Study of Off-Policy Policy Evaluation for Reinforcement Learning”. NeurIPS D&B 2021. [2] Fu et al. “Benchmarks for Deep Off-Policy Evaluation”. ICLR 2021. **2. Relationship to other annotation/augmentation approaches** We appreciate these additional references. It is certainly true that the annotations required in our approach will share similar challenges to those in the cited works (e.g. subjectivity, inter-rater disagreement), and we will expand our discussion on this in the revised version. Here, we would like to point out a few key distinctions between our approach and these past works: compared to Kaushik et al. ICLR 2019 (annotators alter text to match a counterfactual target label) and Srivastava et al. ICML 2020 (humans provide annotations of a potential unmeasured confounder), our formulation of counterfactual annotations is more naturally suited to sequential decision making in RL, focused on the unobserved trajectories in the offline setting. This is an underexplored area with great potential to enable practical RL in high-stakes domains. **3. What’s the exact definition of naive weighted?** We believe the reviewer is referring to the “naive weighted” baseline that was mentioned on L361 in the sepsis simulator experiments, whose results are shown in Table 2 on page 9. We have informally defined “naive weighted” on L361 as an approach that “reweights the annotations at the trajectory level instead of per-decision” (we weren’t able to elaborate in the main text due to space constraints).
More formally (wlog), assuming a binary action space $\{0,1\}$, given a trajectory of length $T$ with counterfactual annotations at each step, $\tau = [s_t, a_t, r_t]_{t=1}^{T}$, $\boldsymbol{g} = \{g_t^{1-a_t}\}_{t=1}^{T}$ where $1-a_t$ is the counterfactual action for $a_t$, the naive weighted estimator is defined as $(1-\sum_{t=1}^{T} w_t) \rho_{1:T} \big(\sum_{t=1}^{T} r_t\big) + \sum_{t=1}^{T} \big( w_t \rho_{1:t-1} \rho_t^{1-a_t} (\sum_{t'=1}^{t-1} r_{t'} + g_t) \big)$. Intuitively, this first converts each annotation into a sub-trajectory that terminates at the annotation step with the counterfactual action, $[s_1, a_1, r_1, \cdots, s_t, 1-a_t, g_t]$, then performs IS on each sub-trajectory (including the original trajectory), and finally computes a weighted sum of these $T+1$ estimates (1 factual estimate, $T$ counterfactual estimates) using the weights $(1-\sum_{t=1}^{T} w_t), w_1, \cdots, w_T$ (a runnable sketch of this baseline appears after this thread). We have elaborated on the reason why it does not work in Appendix E.2, L886-L891 on page 28: > The reason why “naive weighted” does not work is more subtle: while reweighting the (partial) trajectories constructed from the counterfactual annotations correctly maintains the initial state distribution, it does not correctly maintain the intermediate state distributions. **4. Why not apply OPE methods that take the state distribution $d_\pi(s)$ into account?** We chose to focus on importance sampling based OPE methods due to their simplicity of implementation and common use, as well as their comprehensive theoretical guarantees as discussed in past work [1][2]. In the paper, we do provide a solution to the bias issue with the naive incorporation of counterfactual annotations into IS. On the other hand, we acknowledged in related works (L562-L564) that there is a family of approaches based on importance weighting on marginalized state distributions $d_{\pi}(s)$. We believe that similar modifications to the MIS/DICE estimator family can be made - and are also a valid solution to the overall problem - but they require separate full analyses of their bias/variance properties, which is outside the scope of our current paper. [1] Levine et al. “Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems”. arXiv 2020. [2] Precup. “Eligibility traces for off-policy policy evaluation”. 2000. --- Rebuttal Comment 1.1: Comment: My concern is about the nontriviality of this work. It is obviously better to incorporate annotation-augmented data if available; thus it is difficult to believe that the lack of such practice is due to people not knowing that it is better to incorporate annotation-augmented data. Rather, it would be natural to assume that this is because of the high cost or inaccuracy of obtaining such annotation. Therefore, the difference made by augmentation with annotation, i.e., the difference from PDIS alone, does not seem worthy of such a top venue. In my understanding, the motivation that might answer the above question is described in Section 3.1. The author(s) describe an issue when weighting (IS) is simply applied to the augmented data. However, it does not take the difference in state visitation probability $d_\pi(s)$ into account (since they compare only within the "per-decision" approaches). Therefore, a natural question would be "What if one simply applies (trajectory-level) weighting methods such as DICE to the augmented datasets?" If such a strategy were enough, the contribution of this work would be limited.
Contrasting this with such a naive baseline would make the contribution of this paper easier to understand. According to the author(s)' response (3), the compared baselines ("naive weighted") are also per-decision, so my concern is not yet resolved. --- Reply to Comment 1.1.1: Comment: Thank you Reviewer fzzo for the reply. We appreciate your engagement in the discussion and would like to address your concerns regarding our work's contributions: - **"it is difficult to believe that the lack of such practice is due to people not knowing that it is better to incorporate annotation-augmented data"**: to our knowledge, we are the first to pose the task of semi-offline evaluation using the specific combination of *'offline data + counterfactual annotations'* in the RL OPE setting - we're happy to discuss the distinctions wrt further related work if it exists, in addition to the references you already provided. We therefore believe what we propose constitutes a *new task* of interest to the NeurIPS community and has the potential to be *adopted by practitioners* in high-stakes offline RL domains as a safe mechanism to gain more confidence in new policies. - **"high cost or inaccuracy of obtaining such annotation"**: We agree that, as with any data annotation, there will be cost and variability associated with our proposed counterfactual annotations. However, compared to online deployment, which is the gold standard for evaluation in RL, collecting such annotations is arguably a safer, less costly alternative, especially in high-stakes domains such as healthcare or autonomous driving, and is an important step toward realizing the real-world impact of ML in these domains. - Re cost: The success of modern supervised learning is built upon large labeled/annotated datasets, which are associated with high costs. We do not believe cost alone should deter research; rather, it needs to be justified with respect to the potential benefits and ethical/safety considerations. In high-stakes RL domains, the dangers and uncertainty of online evaluation (e.g. recommending potentially suboptimal or wrong treatments to clinicians) can often make it justifiable to collect counterfactual annotations instead. That said, there are also ways to reduce cost: one way is to incorporate the counterfactual annotation prompt(s) into existing supervised annotation pipelines, e.g. when annotating a patient (retrospectively/offline), we can simultaneously ask clinician annotators to provide disease diagnosis(es) as well as predicted effect(s) of alternative treatment(s); another possibility is to collect counterfactual annotations on-the-fly in a "semi-online" fashion, e.g. at the end of daily grand rounds, ask clinicians to record the predicted effect(s) of alternative treatment(s) they did not select for patients they just saw. - Re inaccuracy: it is certainly true that human annotations involve the challenges of subjectivity, ambiguity, and annotator bias; we also agree that our new framing of "counterfactual reasoning" in annotations may introduce additional variability. It is thus important to incorporate existing remedies (provide clear annotation guidelines and training sessions, and recruit diverse, representative annotator teams) and to conduct human experiments in the domains of interest so we can understand any additional challenges our approach introduces and their downstream impact.
For example, in the context of healthcare, it would be interesting to measure the inter-rater reliability of disease diagnosis labels (used in supervised learning) vs counterfactual annotations (used in semi-offline eval) across multiple annotators -- which in our opinion is a research question complementary to our present paper and worthy of a separate investigation. That said, our experiments (Sec 5.2) provide some initial evidence that imperfect counterfactual annotations can still be valuable despite being noisy or biased. - **"difference from PDIS alone ... does not seem worthy of such a top venue"**: we'd like to point out that our contribution is not simply the C-PDIS estimator *alone* - our contributions include (1) framing the 'semi-offline evaluation' task, (2) the C-PDIS estimator, which is a simple modification of existing IS estimators, and (3) perhaps most importantly, theoretical analyses that justify how to best apply this estimator in practice. Specifically, (3) includes the bias-correction procedure (L293; Appx D.2) as well as the recommendation of using equal weights after imputing missing annotations (L286; Appx D.1). These insights -- rooted in our extensive theoretical analyses (Sec 4) and otherwise unknown to readers -- are substantiated by empirical evidence (Sec 5) demonstrating their advantages over applications of C-PDIS without these strategies. Our work thus provides unique findings and valuable guidance for practitioners. We further respond to the question about DICE-based approaches in a follow-up comment below.
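The runnable sketch referenced earlier in this thread, implementing the 'naive weighted' baseline exactly as defined in the rebuttal (binary actions, 0-indexed arrays; variable names are illustrative):

```python
import numpy as np

def naive_weighted_estimate(rho, rho_cf, rewards, annotations, weights):
    """
    'Naive weighted' baseline: IS over the factual trajectory plus T
    counterfactual sub-trajectories, each terminating at its annotation
    step with the flipped action.

      rho:         (T,) per-step ratios pi_e(a_t|s_t) / pi_b(a_t|s_t)
      rho_cf:      (T,) the same ratios for the counterfactual action 1-a_t
      rewards:     (T,) observed rewards r_t
      annotations: (T,) counterfactual annotations g_t
      weights:     (T,) weights w_t with sum(weights) <= 1
    """
    T = len(rewards)
    # factual term: (1 - sum_t w_t) * rho_{1:T} * sum_t r_t
    est = (1.0 - np.sum(weights)) * np.prod(rho) * np.sum(rewards)
    for t in range(T):
        # counterfactual term: w_t * rho_{1:t-1} * rho_t^{1-a_t} * (sum_{t'<t} r_{t'} + g_t)
        est += weights[t] * np.prod(rho[:t]) * rho_cf[t] * (np.sum(rewards[:t]) + annotations[t])
    return est
```

As the rebuttal notes, this trajectory-level reweighting preserves the initial state distribution but not the intermediate ones, which motivates the per-decision C-PDIS construction instead.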
Summary: This paper offers a new OPE estimator leveraging additional annotations of counterfactual actions. The paper provided an extremely thorough and comprehensive discussion of the theoretical properties of this estimator, which helps us develop an understanding of how it operates. This paper also discussed fully the limitations of the approach and how the estimator would behave if different assumptions were violated. I generally find this to be a good paper, with limited experiments and not enough real-world motivation, but I can potentially see an application of this method, even though, for now, I don’t yet know what it could be. Strengths: The theoretical discussion of the estimator is very clear and very thorough. The paper discussed how bias of the annotator functions would impact the bias of the estimator. The paper also discussed when and how annotations could help reduce the bias and variance of the estimator. The paper is also very clearly written. The proofs in the appendix are easy to follow (although I have not checked all of them). The experiments in the appendix are well documented. Weaknesses: There are a few weaknesses of this paper that prevent it from having a higher impact. 1. No human experiment has been offered. There is no actual investigation of what human inter-annotator variance would look like. It is somewhat known that in a few important domains, such as dialogue evaluation (perhaps not surprisingly, this is a huge application field where RL is applied to natural language processing), inter-annotator variance on certain metrics can be huge [1] (Table 2). The authors didn’t collect real human annotations at all, and the paper lacks key data that can shed light on whether this approach can work with real human annotations. Collecting $\omega$ would seem to be difficult as well. It is unfortunate that the authors didn’t opt to do this, but I don’t think this eclipses other amazing parts of this paper. 2. The experiments are too synthetic. This is a complaint similar to but slightly different from the previous point. Most RL/Bandit tasks have real-world motivations. By using a synthetic bandit experiment and Sepsis, the authors did not convince me that these two tasks would clearly benefit from human annotations. When are human annotations desirable in an RL/bandit task? How hard is it to obtain human annotations, especially in a sequential setting where some level of “imagining what the policy would do in the future” is involved? Can the authors shed some light on what kind of real-world task they hope to apply this OPE estimator to? [1] Lowe, Ryan, et al. "Towards an automatic turing test: Learning to evaluate dialogue responses." arXiv preprint arXiv:1708.07149 (2017). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Section 4.2 discusses how the variance of the annotation function impacts the variance of C*-IS. It assumes that the annotation function has the same variance as the reward function. Can the authors provide some insights on what happens when the annotation function has a lower variance than the reward function (this is possible because human annotations are malleable) or a higher variance? Confidence: 4: You are confident in your assessment, but not absolutely certain.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors addressed the limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are encouraged that the reviewer appreciated our clear/thorough theoretical analyses, as well as well-documented experiments. We also want to point out that we have provided full code implementation in the supplement. Below, we address the feedback from the reviewer: **1. Regarding human experiments** We resonate strongly with the reviewer that, in order to further realize the utility and potential impact of our work, human experiments on the annotation quality/variability would be necessary; we have explicitly mentioned this under limitations and as future work (L520). **We are grateful that the reviewer acknowledges the contributions of our paper even without a user study.** Importantly, our work sheds light on desirable properties of useful counterfactual annotations, which points to exciting directions of future research in RL as well as HCI (L530-L534); there is also opportunity to draw lessons from the NLP community about human annotation quality/variability (such as ref [1] mentioned by Reviewer noPk). Re the side note about “collection of w”: please see [overall response](/forum?id=dsH244r9fA&noteId=jKEOBV6cyp) paragraph **4.** (and we also responded to Reviewer [71Lt](/forum?id=dsH244r9fA&noteId=LuHvfFOj0Q) regarding this); to summarize, in the paper we explored different weight settings both theoretically and empirically, and we suggest using equal weights as per C*-IS as a useful heuristic. **2. What are the real-world application domains of our approach?** (see also: [overall response](/forum?id=dsH244r9fA&noteId=jKEOBV6cyp), [rPY7](/forum?id=dsH244r9fA&noteId=6fGRmonfcP)) Our approach is best suited for high-stakes RL domains where online evaluation is risky, but domain experts can provide additional feedback on new policies. One domain that the authors have the most experience in is healthcare, where the goal is to optimize sequential treatment policies (e.g. Komorowski et al. 2018). In this domain, clinicians are constantly evaluating alternative treatment paths in their minds when making treatment decisions but we only observe what was actually done; counterfactual annotations can be seen as a mechanism to elicit this information in their thought process that is otherwise not recorded. We envision clinicians to be asked questions such as “given that the patient received treatment A, if treatment B was used instead, what do you think would happen to the patient”. The sepsis simulator experiment is our best attempt at using an existing RL domain with strong healthcare motivations, where we have full control over the simulation parameters (various policy configurations etc) and online access (to evaluate OPE methods with ground-truth). While admittedly imperfect (which we acknowledged as a limitation on L528), our simulation experiments do attempt to cover various realistic scenarios including missing/noisy/biased annotations and show the robustness of our approach. [1] Komorowski et al. “The Artificial Intelligence Clinician learns optimal treatment strategies for sepsis in intensive care”. Nature Medicine, 2018. **3. What happens if annotation variance differs from reward variance?** (see also: [overall response](/forum?id=dsH244r9fA&noteId=jKEOBV6cyp), [Qcjd](/forum?id=dsH244r9fA&noteId=uEp9MwDHCX)) For clarity, in Theorems 6&13 we assume annotation variance $\sigma_G^2$ is the same as reward variance $\sigma_R^2$. More generally, we can assume some relationship between the two such that $\sigma_G(s,a)^2 = \sigma_R(s,a)^2 + \Delta_{\sigma}(s,a)^2$. 
Then, in the proof of Theorem 13 (Appendix C.3, page 13), the majority of the derivation stays the same except for term (3), where we now apply a different assumption on annotation variance (L712). The resulting variance decomposition will have an additional term $\mathbb{E}\_{s \sim d_1} \mathbb{E}\_{a \sim \pi_b(s)} [\sum\_{\tilde{a} \in \mathcal{A} \setminus \\{a\\}} \rho^{+}(\tilde{a}|s)^2 \bar{W}(\tilde{a}|s,a)^2 \Delta_{\sigma}(s,\tilde{a})^2]$ related to the difference in variance $\Delta_{\sigma}^2$. Note that this also suggests that when the annotations have a larger variance than the rewards, we may want to assign them a smaller weight, and vice versa -- which makes sense intuitively. Similarly, the modified version of Theorem 6 will contain an additional third term:

$\mathbb{V}[\hat{v}^{\textup{C*-IS}}] = \mathbb{V}\_{s\sim d_1}[V^{\pi_e}(s)] + \mathbb{E}\_{s\sim d_1} \mathbb{E}\_{a\sim \pi(s)}[\pi_b(a|s) \rho(a|s)^2 \sigma_R(s,a)^2] + \mathbb{E}\_{s\sim d_1}[\sum\_{\tilde{a} \in \mathcal{A} \setminus \\{a\\}} \pi_e(\tilde{a}|s)^2 \Delta_{\sigma}(s,\tilde{a})^2]$

where the original Theorem 6 can be obtained by setting $\Delta_{\sigma}^2$ to 0, making the last term vanish. We will add the full derivations to the appendix.

---

Rebuttal Comment 1.1: Comment: The additional derivation is highly appreciated. This is, in fact, quite interesting, and now the impact of human annotator variance is easily seen in the estimator. I hope the authors would consider adding a synthetic experiment that shows different settings of annotator variance (technically, lower human annotator variance than reward variance could also make the estimator variance lower, right? Similar to control variate techniques?) It would be nice to see this being shown in an experiment (even though the derivation already helps a lot). I will keep my score as is. I think the lack of real-world datasets or applications is definitely concerning, but the paper has made enough contributions to convince me that it can generate more discussions and inspire follow-up works.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer noPk, thank you for the reply! Re your suggestion on additional experiments with different settings of annotation variance: absolutely - this is a great idea! We already have some preliminary versions of this:

- Fig 7(c-d): bandits where annotations have a larger or smaller variance than the reward function,
- Fig 5-center: SepsisSim with varying annotation noise,

but we agree this is worth exploring more systematically (e.g. in the SepsisSim experiments, we will add direct comparisons of annotator variance against reward variance), and we will include those results in the revision.
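For concreteness, here is a minimal synthetic sketch of the kind of annotation-variance experiment discussed in this thread. It is *not* the paper's exact C-IS estimator: it mixes a standard IS term with a fully annotated direct term (each unbiased on its own), and all policies, reward means, and noise levels below are illustrative choices, not values from the paper.

```python
# A one-state Gaussian bandit: mix a factual IS estimate with an
# annotation-based estimate via weight w, and observe how the
# variance-minimizing w shifts with the annotation noise sigma_G.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, 2.0])        # true mean rewards per action (illustrative)
pi_b = np.array([0.7, 0.3])      # behavior policy
pi_e = np.array([0.2, 0.8])      # evaluation policy
sigma_R, n, trials = 1.0, 200, 2000

def estimator_var(w, sigma_G):
    ests = []
    for _ in range(trials):
        a = rng.choice(2, size=n, p=pi_b)           # logged actions
        r = rng.normal(mu[a], sigma_R)              # factual rewards
        g = rng.normal(mu, sigma_G, size=(n, 2))    # annotations for all actions
        is_term = (pi_e[a] / pi_b[a]) * r           # standard IS term (unbiased)
        dm_term = g @ pi_e                          # annotation term (unbiased)
        ests.append(np.mean(w * is_term + (1 - w) * dm_term))
    return np.var(ests)

for sigma_G in [0.5, 1.0, 2.0]:                     # vs. sigma_R = 1.0
    ws = np.linspace(0.0, 1.0, 11)
    best_w = ws[np.argmin([estimator_var(w, sigma_G) for w in ws])]
    print(f"sigma_G = {sigma_G}: variance-minimizing w ~ {best_w:.1f}")
```

Consistent with the derivation above, noisier annotations (larger $\Delta_{\sigma}$) push the optimal weight toward the factual IS term, while less noisy annotations push it the other way, much like a control variate.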
Summary: The paper considers OPE with annotations to improve the performance of OPE. The paper explains why naively incorporating such information may lead to a biased estimate. And the paper provides a theoretical analysis of the bias and variance. Finally, the paper demonstrates the performance of the proposed method using synthetic bandit and healthcare experiments.

Strengths: OPE is well-known to suffer from high variance. And incorporating other information is important and relevant. The paper studies an interesting and important question and proposes useful solutions with theoretical guarantees.

Weaknesses: The clarity of the paper needs to improve. The main concept of this paper, annotation, is first introduced in Section 2. And in the Introduction, the authors only mention this notion in Figure 1. I think the authors should introduce and discuss "annotation" in the introduction to facilitate the reader's understanding. There are some other confusing points that may hinder comprehension, which I list in the Questions section. Assumptions 1 and 2 seem very strong. Could the authors discuss the impact of the bias of the annotation? It seems that in the current formulation, a behavior trajectory with annotations is very similar to many behavior trajectories. At least in the bandit case, I think they are equivalent. If this is correct, it will affect the paper's novelty and contribution. Related to the clarity, "distribution shifts" are mentioned in the abstract, but never show up again in the paper. I think it is better to discuss the issues of distribution shifts induced by different policies in RL.

Technical Quality: 3 good Clarity: 2 fair

Questions for Authors:
1. In Section 3, are weights state-dependent? And it seems the weights are random. How should I understand this?
2. Why does the variance not depend on the variance of the annotation, var(g)?

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors discussed some limitations in Section 4.4. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to read and evaluate our work. Below, we list specific updates on which we plan to focus our efforts, as well as provide answers to the questions raised.

**1. “The clarity of the paper needs to improve”**

Thank you for your detailed suggestions on improving the clarity of the paper. In our revision we will address the following points:

> **Paper should introduce the concept of annotation earlier**
>
> We agree. We plan to update the introduction to (i) provide concrete examples of what a counterfactual annotation means, specifically in the context of healthcare: “asking doctors what they think would happen to the patient if the opposite treatment was used instead, assuming the standard care procedure was followed afterwards”, and (ii) incorporate related works on annotations/augmentations mentioned by Reviewer fzzo, and point out that our definition of counterfactual annotation is specific to RL and differs from existing notions of annotations.

> **Are weights state-dependent?**
>
> No. Weights are specific to **each sample**, not necessarily to each state. Here’s a clarifying example: consider an MDP with a single state $s$ and two actions {↗, ↘}. Suppose we have two samples: sample #1 is simply ($s$,↗) with no annotation for ↘, so the weights for the two actions must be $[1,0]$; sample #2 is also ($s$,↗) but with an annotation for ↘, and the weights can be any two non-negative numbers that sum to 1, as specified by the user, e.g. $[0.8, 0.2]$. (Note that this doesn’t mean the weights are “random”, as the user may *deliberately* set weights to be equal $[0.5,0.5]$, or to $[1,0]$, which ignores the annotation if they believe it is of poor quality.) The average weight (L159) of these two samples is $\bar{W}(↗|s,↗) = 0.9$, $\bar{W}(↘|s,↗) = 0.1$, and is used in computing the augmented behavior policy per Definition 1 (a short numeric sketch of this computation follows this thread). We hope this example, together with the description in Sec 3.2 as well as our code implementation in the supplement, can help clarify the definitions, and we will add this example to the appendix.

> **Paper should discuss the issues of “distribution shift” induced by different policies in RL**
>
> Absolutely, we will expand the discussion of distribution shift in (offline) RL and how it is the main challenge of OPE, providing additional references such as [1][2][3][4]. Thank you for this suggestion.
>
> [1] Kumar, et al. "Stabilizing off-policy q-learning via bootstrapping error reduction." NeurIPS 2019.
> [2] Kumar, et al. "Conservative q-learning for offline reinforcement learning." NeurIPS 2020.
> [3] Laroche, et al. "Safe policy improvement with baseline bootstrapping." ICML 2019.
> [4] Parbhoo, et al. "Generalizing off-policy evaluation from a causal perspective for sequential decision-making." arXiv 2022.

**2. Assumptions seem very strong, what’s the impact of annotation bias?**

We agree with the reviewer that Assumptions 1&2 are rather strong, and this is exactly why our paper also presents analyses and experiments for **when the assumptions are violated, i.e., annotations are biased**.

- Under Sec 4.1, in Proposition 3 we provide a detailed theoretical analysis of the effect of bias due to imperfect annotations, where the bias is $\epsilon_G$.
- In Sec 4.4 (L293) and Appendix D.2 we further provide a model-based approach to directly correct for annotation bias.
- In Sec 5.2 experiments on the sepsis simulator (L374), we empirically studied the impact of bias and found our approach to be robust to annotation bias.

**3. Behavior trajectory with annotations vs many behavior trajectories, are they the same?**

No, they're not the same. If we understand the reviewer correctly, the interpretation of “many behavior trajectories” corresponds to the naive approach that we describe in Sec 3.1 - this approach is the first thing that came to our mind as well, but it turns out to be incorrect, as it does not handle state distributions properly: states that receive more annotations will be overrepresented in the augmented dataset, and this will bias the final estimate.

**4. What happens if annotation variance differs from reward variance?** (see also: [overall response](/forum?id=dsH244r9fA&noteId=jKEOBV6cyp), [noPk](/forum?id=dsH244r9fA&noteId=l1RyMRJL8s))

For clarity, in Theorems 6&13 we assume annotation variance $\sigma_G^2$ is the same as reward variance $\sigma_R^2$. More generally, we can assume some relationship between the two such that $\sigma_G(s,a)^2 = \sigma_R(s,a)^2 + \Delta_{\sigma}(s,a)^2$, which leads to an additional term that depends on $\Delta_{\sigma}(s,a)^2$ in the variance decomposition expressions of Theorems 6&13. **Due to space constraints, please see paragraph 2 of the [overall response](/forum?id=dsH244r9fA&noteId=jKEOBV6cyp) (above) for detailed derivations, which we plan to add to the appendix.**

---

Rebuttal Comment 1.1: Title: Thanks for your explanation and clarification Comment: Thanks for the authors' explanation and clarification, especially for 3.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer Qcjd, we really appreciate your timely reply! Given that we have provided answers to all the points the reviewer mentioned under "Weaknesses" and "Questions", we kindly ask the reviewer to reassess the paper, or outline any further concerns that the reviewer might have. Once again, we extend our sincere thanks to you for your time and valuable feedback!
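To make the average-weight computation in the clarifying example above fully explicit, here is a tiny numeric sketch. It only reproduces the arithmetic stated in the rebuttal; the augmented behavior policy itself (the paper's Definition 1) is not reproduced here.

```python
# Two logged samples, both (s, up): sample 1 has no annotation for 'down'
# (all weight on the factual action), sample 2 annotates 'down' with
# user-chosen weights [0.8, 0.2]. Columns are actions [up, down].
import numpy as np

sample_weights = np.array([
    [1.0, 0.0],   # sample 1
    [0.8, 0.2],   # sample 2
])
W_bar = sample_weights.mean(axis=0)
print(W_bar)  # [0.9 0.1] -> W_bar(up|s,up) = 0.9, W_bar(down|s,up) = 0.1
```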
Rebuttal 1: Rebuttal: We thank all reviewers for taking the time to read our paper and providing valuable feedback. Overall, reviewers found our paper to be well-written (71Lt, noPk, fzzo), addressing an interesting (71Lt), important (Qcjd) and timely problem (rPY7), and proposing a novel solution (rPY7) with comprehensive theoretical guarantees (noPk) and thorough experiments (rPY7). Reviewers also commented that the paper has carefully considered limitations and the effects when assumptions are violated (noPk), suggesting our work points to a direction worth exploring (fzzo). Below we respond to each review with a separate rebuttal, and here we summarize the common themes mentioned by multiple reviewers.

**1. Clarifying our contributions: concerns about “novelty” (71Lt), proposed method is “fairly naive” (fzzo)**

In this paper, we establish a formal framework of ‘semi-offline’ evaluation (offline data + counterfactual annotations), and propose a new estimator for this setting based on importance sampling. While the C-IS estimator is simple, our analysis **reveals new theoretical insights and important practical considerations** for practitioners to keep in mind, e.g. it’s better to impute missing annotations and use equal weights. Therefore, we believe our paper makes non-trivial contributions to the community and is an important step toward enabling RL applications in high-stakes domains such as healthcare.

**2. What happens if annotation variance differs from reward variance?** (Qcjd, noPk)

For clarity, in Theorems 6&13 we assume annotation variance $\sigma_G^2$ is the same as reward variance $\sigma_R^2$. More generally, we can assume some relationship between the two such that $\sigma_G(s,a)^2 = \sigma_R(s,a)^2 + \Delta_{\sigma}(s,a)^2$. Then, in the proof of Theorem 13 (Appendix C.3, page 13), the majority of the derivation stays the same except for term (3), where we now apply a different assumption on annotation variance (L712). The resulting variance decomposition will have an additional term $\mathbb{E}\_{s \sim d_1} \mathbb{E}\_{a \sim \pi_b(s)} [\sum\_{\tilde{a} \in \mathcal{A} \setminus \\{a\\}} \rho^{+}(\tilde{a}|s)^2 \bar{W}(\tilde{a}|s,a)^2 \Delta_{\sigma}(s,\tilde{a})^2]$ related to the difference in variance $\Delta_{\sigma}^2$. Note that this also suggests that when the annotations have a larger variance than the rewards, we may want to assign them a smaller weight, and vice versa -- which makes sense intuitively. Similarly, the modified version of Theorem 6 will contain an additional third term:

$\mathbb{V}[\hat{v}^{\textup{C*-IS}}] = \mathbb{V}\_{s\sim d_1}[V^{\pi_e}(s)] + \mathbb{E}\_{s\sim d_1} \mathbb{E}\_{a\sim \pi(s)}[\pi_b(a|s) \rho(a|s)^2 \sigma_R(s,a)^2] + \mathbb{E}\_{s\sim d_1}[\sum\_{\tilde{a} \in \mathcal{A} \setminus \\{a\\}} \pi_e(\tilde{a}|s)^2 \Delta_{\sigma}(s,\tilde{a})^2]$

where the original Theorem 6 can be obtained by setting $\Delta_{\sigma}^2$ to 0, making the last term vanish. We will add the full derivations to the appendix.

**3. Is it even practical to collect counterfactual annotations?** (noPk, rPY7)

Our approach is best suited for high-stakes RL domains where online evaluation is risky, but domain experts can provide additional feedback on new policies. One domain that the authors have the most experience in is healthcare, where the goal is to optimize sequential treatment policies (e.g. Komorowski et al. 2018).
In this domain, clinicians are constantly evaluating alternative treatment paths in their minds when making treatment decisions but we only observe what was actually done; counterfactual annotations can be seen as a mechanism to elicit this information in their thought process that is otherwise not recorded. As acknowledged by Reviewer noPk, even without results from a user study, our paper provides valuable contributions to the community, where we formalize a framework for semi-offline evaluation using counterfactual annotations and study the theoretical properties of a new estimator for this setting. Importantly, our work sheds light on desirable properties of useful counterfactual annotations, which points to exciting directions of future research in RL as well as HCI (L530-L534); there is also opportunity to draw lessons from the NLP community about human annotation quality/variability (such as ref [1] mentioned by Reviewer noPk). Komorowski et al. “The Artificial Intelligence Clinician learns optimal treatment strategies for sepsis in intensive care”. Nature Medicine, 2018. **4. How to set weights for optimal estimation error reduction?** (71Lt, noPk) This is indeed an interesting question that arises from our approach, and **in the paper we have provided a partial answer**. - We explored this empirically in Fig 7 (Appendix E.1, page 27), where we sweep the weights and measure the resulting estimator variance. We found that (L839) “the ideal weighting scheme is problem-specific, and certain weights may lead to higher variance compared to standard IS”. We also found that (L841) “C*-IS, though not always variance-minimizing, consistently achieves lower variance than standard IS”. Unfortunately due to space constraints we couldn’t include the full results and summarized as (L350) “Equal weights (in C*-IS) is a good heuristic though not always ‘optimal’ ” in the text. Since it is an important takeaway of our paper, we will expand upon this part given the one extra content page at camera-ready. - We also considered this problem analytically (solving for variance-minimizing weights in the bandit case, which turned into a quartic equation) and numerically (optimizing variance objective using SGD). However, unless problem-specific parameters are known, we have yet to come up with a general way to minimize variance. Therefore, we suggest C*-IS (with equal weights) as a heuristic and highlighted this as a direction for future research (L311) that would build upon our semi-offline framework.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper studies offline policy evaluation in reinforcement learning, where the behavior policy, some interaction trajectories, and some human-annotated trajectories (modified from interaction trajectories) are available. The paper points out that, since the number of human-annotated trajectories is not the same across interaction trajectories, naively treating these human-annotated trajectories the same as interaction trajectories can introduce bias (for example, in bandits, the context distribution changes), even if human annotations are perfect. So the paper proposes to weight the trajectories such that, for each interaction trajectory, the weights of this trajectory and the trajectories annotated from this trajectory sum to one. This ensures estimating the value of a policy from the correct state distribution. The paper analyzed the bias and variance of the proposed estimator in the bandit setting and the bias of the estimator in the RL setting, under some specific settings (sometimes when the weights are the same and every trajectory is annotated, sometimes human annotations are perfect) and conducted empirical evaluation to show that the proposed estimator can reduce estimation error.

Strengths: The paper is well-written and easy to follow. The paper studies an interesting problem of off-policy evaluation in reinforcement learning with the help of both interaction data and human annotations. This setting makes sense when data is sparse and human annotations are accurate and affordable. The paper both theoretically and empirically analyzes the bias and variance of the proposed estimator.

Weaknesses: My major concern is about the novelty of the paper. The paper did not answer the question of how to set the weights of interaction data and human annotations. To me, it seems that weighting is an obvious approach to try when correcting for the state distribution. I believe there are other works using a similar idea, for example [1]. The interesting question to me is how to set the weights for optimal estimation error reduction. Unfortunately, this is not discussed in the paper. It would be nice to discuss the relationship between this work and policy evaluation from multiple loggers (e.g. [2]). To me, they are similar problems since human annotations are like data collected from another logging policy (except we need to correct for the state distribution). I am a bit confused about the definition of $C^\star$-IS. The authors mentioned that $C^\star$-IS corresponds to w = 1/|A|. The authors also mentioned that the weight should be zero when there are no annotations. This means we can use $C^\star$-IS only when we have full-information data. First, this seems a less interesting setting to analyze since the fundamental problem of missing data in RL is gone. Second, in the experiments, some actions were not annotated, but $C^\star$-IS is still used. There seems to be a contradiction?

Minor: In line 50, the paper mentioned that the estimator requires a weaker condition. To me, this is an overclaim, since in reality, only the support condition is weaker, but an additional strong assumption that human annotations are perfect is also required.

[1] Xuanhui Wang, Michael Bendersky, Donald Metzler, Marc Najork. Learning to Rank with Selection Bias in Personal Search. SIGIR 2016.
[2] A. Agarwal, S. Basu, T. Schnabel, T. Joachims. Effective Evaluation using Logged Bandit Feedback from Multiple Loggers. KDD 2017.

Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their overall positive assessment of our paper and valuable feedback, including providing additional references. We are encouraged that the reviewer found our paper to be well-written and recognized its potential impact. We address the main questions below: **1. How to set weights for optimal estimation error reduction?** (see also: [overall response](/forum?id=dsH244r9fA&noteId=jKEOBV6cyp), [noPk](/forum?id=dsH244r9fA&noteId=l1RyMRJL8s)) We agree with the reviewer that this is indeed an interesting question that arises from our approach, and **in the paper we have provided a partial answer**. - We explored this empirically in Fig 7 (Appendix E.1, page 27), where we sweep the weights and measure the resulting estimator variance. We found that (L839) “the ideal weighting scheme is problem-specific, and certain weights may lead to higher variance compared to standard IS”. We also found that (L841) “C*-IS, though not always variance-minimizing, consistently achieves lower variance than standard IS”. Unfortunately due to space constraints we couldn’t include the full results and summarized as (L350) “Equal weights (in C*-IS) is a good heuristic though not always ‘optimal’ ” in the text. Since it is an important takeaway of our paper, we will expand upon this part given the one extra content page at camera-ready. - We also considered this problem analytically (solving for variance-minimizing weights in the bandit case, which turned into a quartic equation) and numerically (optimizing variance objective using SGD). However, unless problem-specific parameters are known, we have yet to come up with a general way to minimize variance. Therefore, we suggest C*-IS (with equal weights) as a heuristic and highlighted this as a direction for future research (L311) that would build upon our semi-offline evaluation framework. **2. Clarifying C\*-IS and missing annotations, is there a contradiction?** You are correct in noting that (i) C*-IS corresponds to w = 1/|A|, (ii) weights should be zero when there are no annotations, and (iii) in the experiments, some actions were not annotated i.e. missing. However, there is **no contradiction** because we proposed an approach to impute missing annotations, described in Sec 4.4 (L286-292) and Appendix D.1, and applied it in our experiments (L385). - Our variance analysis (Sec 4.2, Appendix C.3) suggests that variance may increase if non-uniform weights are used (illustrated empirically in Fig 8 on page 27 of Appendix), motivating the idea of imputing missing annotations. This can reduce variance at the potential cost of increasing bias (L768-770), and in experiments we have empirically observed it to be effective overall. This is an important part of the approach to make it more practical, and in the final version we plan to expand upon this part to make sure readers do not miss it. **3. “My major concern is about the novelty of the paper”** (see also: [overall response](/forum?id=dsH244r9fA&noteId=jKEOBV6cyp), [fzzo](https://openreview.net/forum?id=dsH244r9fA&noteId=SS3U84sY4C)) In this paper, we establish a formal framework of ‘semi-offline’ evaluation (offline data + counterfactual annotations), and propose a new estimator for this setting based on importance sampling. While the C-IS estimator is simple (using reweighting, similar to e.g. ref [1] in the review), our analysis **reveals new theoretical insights and important practical considerations** for practitioners to keep in mind, e.g. 
it’s better to impute missing annotations and use equal weights. Therefore, we believe our paper makes non-trivial contributions to the community and is an important step toward enabling RL applications in high-stakes domains such as healthcare.

**4. Relationship to OPE with multiple loggers**

This is indeed an interesting connection. We agree with the reviewer that a possible interpretation of counterfactual annotations is “data” from another logging policy, i.e., the “annotation policy”. There is also a nice parallel in the policy definitions between Agarwal et al. (ref [2] in the review) and our paper: in Agarwal et al. (Definition 5.1) the average policy $\pi_{avg}$ is the weighted average of multiple logging policies, whereas our augmented behavior policy (Definition 1, L162) $\pi_{b^+}$ can be seen as the weighted average of the behavior policy and “annotation policy”. However, there are two key distinctions between our framework and Agarwal et al.: (i) our formulation of annotation is a single number rather than a (sub-)trajectory in the RL setting, (ii) we need to correct for state distributions, which a naive approach fails to account for. We will add this discussion to the related works section (currently in Appendix A.1).

**5. Regarding the language around weaker condition**

Thank you for this suggestion. We will change the claim "weaker condition" -> "weaker condition on support" so it is more accurate.

---

Rebuttal Comment 1.1: Comment: I thank the authors for the explanation and clarification. The rebuttal addressed most of my concerns. I would like to keep my evaluation. I would not increase my evaluation because I still think that the proposed method is not that novel (in particular, weighting seems obvious to me and the authors did not answer how to set the weights theoretically) and that there are no real-world experiments.

---

Reply to Comment 1.1.1: Comment: Dear reviewer 71Lt, thank you very much for the response! We are glad that we were able to address most of your concerns and clarify a few misunderstandings. We would like to share a few more thoughts in response to your latest comment:

- **"weighting seems obvious"**: while in retrospect our weighting approach may be seen as a "straightforward" modification, our paper brings important contributions because (1) we are the first to formalize a new "semi-offline evaluation" task using offline data + counterfactual annotations, (2) under this setting, we point out the bias issue with the naive approach that *many people* may instinctively adopt without reading our paper -- "just add the annotations as new data points" -- and propose a theoretically sound, simple-to-implement solution.
- **"did not answer how to set the weights theoretically"**: we agree with the reviewer that this is an important question, and we do not claim our paper provides a complete answer to this non-trivial question. Instead, our paper provides useful guidance as to how to use our estimator effectively.
- Our experiments show that the C-IS estimator combined with our proposed heuristic (of using equal weights) can reduce both bias and variance, which speaks to the practical utility of our proposed estimator even without a theoretically satisfactory answer.
- To further demonstrate the non-trivial nature of this problem, in addition to the Sec 4 + Appx C theoretical analyses (which lay important groundwork for answering this question theoretically) and subsets of the experiments (Appx E, Fig 7) which showed no $w$ is always best, we will update the appendix to include our analytical derivations (finding the derivative and solving for its zeros) and numerical optimization experiments (using gradient descent) that optimize $w$ to minimize the Theorem 13 variance expression when *all* problem parameters are known, with the caveat that in practical problems where not all the problem parameters are known, neither can be applied (a textbook special case is sketched after this reply).
- On L311 we stated "We believe that optimizing the weights can further improve OPE performance and is an interesting direction for future work" and we will make sure to emphasize this again in our limitations section.
- **no real-world experiments**: we have explicitly acknowledged this fact in our limitations section (L519). As pointed out by Reviewers noPk and rPY7, the technical part of our paper (formalizing the problem setting and estimator, detailed theoretical analyses, and proof-of-concept experiments) is a self-contained, complete contribution. In our future work, we are excited to collaborate with experts in other research areas (e.g., HCI, healthcare) to carry out human experiments and design methods that can solicit meaningful/useful counterfactual annotations, and we believe those are outside the scope of the main research question we seek to address in our current paper.

Lastly, we would like to express our gratitude for your constructive and insightful feedback, and we hope these clarifications can help better contextualize the contributions of our paper.
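For intuition on why the weight-optimization problem above has no simple general answer but a clean special case: when mixing just two unbiased estimators $X$ and $Y$ of the same quantity, the variance-minimizing weight is a textbook identity (shown below); the Theorem 13 objective couples many such per-action terms, which is what produces the quartic equation the authors mention. This identity is standard and is not taken from the paper.

$$\mathbb{V}[wX + (1-w)Y] = w^2\,\mathbb{V}[X] + (1-w)^2\,\mathbb{V}[Y] + 2w(1-w)\,\mathrm{Cov}(X,Y) \;\;\Longrightarrow\;\; w^{*} = \frac{\mathbb{V}[Y] - \mathrm{Cov}(X,Y)}{\mathbb{V}[X] + \mathbb{V}[Y] - 2\,\mathrm{Cov}(X,Y)}$$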
The Memory-Perturbation Equation: Understanding Model's Sensitivity to Data
Accept (poster)
Summary: This paper proposes measuring sensitivity to data in some problems in machine learning through what they refer to as the “memory perturbation equation” (MPE), derived using the Bayesian learning rule. They argue that sensitivity properties of machine learning models are not as well understood as they could be in the literature, and therefore propose a remedy to this issue that generalizes some prior approaches. Much of the paper is about justifying the equation and showing that some prior works are special cases. There is also an experimental section that provides some empirical validation of some equations.

Strengths: Although I am not familiar with this general area of research, which seems to be related to influence functions in machine learning, I found the paper interesting and novel. I also found the topic to be highly relevant for NeurIPS and an important one for machine learning in general. The fact that the approach generalizes some prior approaches in a non-trivial way is powerful.

Weaknesses: While I’m generally supportive of this paper, I had a hard time following many of the details in this paper and cannot verify aspects of the approach. This is partly because of my lack of familiarity with the subject, but also because I think the authors have not provided enough background and explanation for a reader with little know-how around related topics. Also, I think the presentation can be improved significantly – please see my detailed questions/comments for some suggestions. I will rely heavily on more knowledgeable reviewers to provide a more thorough assessment.

Technical Quality: 3 good Clarity: 2 fair

Questions for Authors: Some questions/comments follow:
- For the title, I don’t think “model’s” is the most suitable choice of term. I suggest using either “model” or “models’”.
- “MPE” also stands for most probable explanation in Bayesian modeling, so this could be a point of confusion for some, without context.
- I’m not sure how the word “memory” is suitable for what the authors intend in this work. This is what is written in Section 1: “These highly sensitive examples can be seen as those characterizing the model’s memory; the model is highly sensitive to them and perturbing them can make the model forget its essential knowledge”. Perhaps the authors could explain and justify their choice of term? It may help to connect any other work that they deem to be relevant or inspirational.
- For Fig 1, on which dataset are panels a) and b) based? Are these all for FMNIST?
- What is the relation between the output function f_i(.) and the terms in equation 1? I don’t think this is ever specified. Is f_i the same as l_i?
- Several of the equations (such as those in Section 3) are stated with pointers to prior work but without clear derivation. I understand that there are space limitations, but I had trouble following the correctness of the equations.
- There is a claim in line 207 about how the proposed approach is more principled than some prior work – as someone not familiar with the field, I did not observe much justification for this claim, or perhaps just did not understand it clearly.
- “Sharpness” is mentioned in line 227 but never defined. There are other such terms that don't get enough explanation.
- There are typos, grammatical errors or other such issues in at least the following lines: 39, 41, 59, 61, 63, 68, 110, 113, 124, 161, 221, 236, 244, 249, 268, etc. Note that many of the early typos are due to the incorrect use of verbs for singular vs.
plural nouns. I suggest the authors carefully review the paper and fix these errors. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations are not mentioned in enough detail. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

>Q1: For the title, I don’t think “model’s” is the most suitable choice of term. I suggest using either “model” or “models’”....“MPE” also stands for most probable explanation in Bayesian modeling, so this could be a point of confusion for some, without context.

A1: Thanks for the suggestions. We will try to avoid the confusion.

>Q2: I’m not sure how the word “memory” is suitable for what the authors intend in this work… perhaps the authors could explain and justify their choice of term? It may help to connect any other work that they deem to be relevant or inspirational.

A2: We use the word “memory” due to a connection to Bayesian methods in psychology where similar concepts are used, e.g., see “Bayesian sets” by Ghahramani and Heller, 2005, where log-ratios of posteriors are used to retrieve “relevant” examples. There is work in psychology (e.g., representativeness) to connect these ideas to human memory. We did not go into detail about this, but hope to write about this connection in a future study.

>Q3: For Fig 1, on which dataset are panels a) and b) based? Are these all for FMNIST?

A3: Panel b) is for MNIST and panel c) is for FMNIST, while panel a) is just an illustration. We will fix the caption to clarify this.

>Q4: What is the relation between the output function f_i(.) and the terms in equation 1? I don’t think this is ever specified. Is f_i the same as l_i?

A4: $f_i$ is the model’s output, while $\ell_i$ is the loss function. Line 60 shows an example for linear regression and the mean-squared error loss, where $f_i = x_i^{T} \theta$.

>Q5: Several of the equations (such as those in Section 3) are stated with pointers to prior work but without clear derivation. I understand that there are space limitations, but I had trouble following the correctness of the equations.

A5: For the final version, we will add derivations to the appendix to make the work more self-contained.

>Q6: There is a claim in line 207 about how the proposed approach is more principled than some prior work – as someone not familiar with the field, I did not observe much justification for this claim, or perhaps just did not understand it clearly.

A6: We will expand this part. Essentially, the previous work uses an ad-hoc smoothing mechanism to make the loss differentiable, while in our work smoothing is naturally done with the posterior distribution (by using the expectation before taking the derivative, as shown in line 206).

>Q7: “Sharpness” is mentioned in line 227 but never defined. There are other such terms that don't get enough explanation.

A7: By sharpness we mean the measures studied in the cited paper by Jiang et al. ([11] in the submitted draft). Sharpness of a local minimum captures the sensitivity of the empirical risk to perturbations in model parameters. We will clarify the confusion for the final version. Please let us know the other terms that cause confusion, so we can add explanations.

>Q8: Limitations are not mentioned in enough detail.

A8: We will add this. The major limitation is that better sensitivity may require better variances, which may not always be computationally feasible.

---

Rebuttal Comment 1.1: Title: Thanks for the response Comment: I thank the authors for their response to my comments. Again, I suggest they edit the paper slightly to make it more readable for a generic reader. I also think more details about terms such as "memory" and "sharpness" are important; hopefully they will make such edits in a revision.
--- Reply to Comment 1.1.1: Comment: Thanks for your feedback. We will take it into account to improve this aspect of our work.
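To spell out the $f_i$ vs. $\ell_i$ distinction from A4 above in one line, for the linear-regression / mean-squared-error example the rebuttal points to (a restatement only, with the constant-factor convention left open):

$$f_i(\theta) = x_i^{\top}\theta, \qquad \ell_i(\theta) = \big(y_i - f_i(\theta)\big)^2,$$

i.e., $f_i$ is the model's output at input $x_i$, while $\ell_i$ measures how far that output is from the label $y_i$.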
Summary: This paper studies the problem of the model’s sensitivity to its training data (e.g., the counterfactual of how the model's performance will change if trained without certain data), commonly referred to as the "data influence" problem. This paper presents the "memory perturbation equation (MPE)" to study the model’s sensitivity to such perturbations and uses the Bayesian learning rule (BLR), approximated using an exponential-family distribution. Then, the BLR can be used to estimate the deviation caused by the removal of a group of data. The paper shows that the MPE with a Gaussian posterior recovers the celebrated Influence Functions (INF) in both linear and nonlinear cases, and INF can be considered as a special case of the MPE. The work also provides a variety of empirical results on MNIST to CIFAR-10 that show the estimated sensitivity highly correlates with the actual deviation that would be caused by removing the data and thus can be used as its prediction.

Strengths: This paper is an interesting addition to the research line on "data influence" problems. The paper is well-written. The conceptual development of this paper is clear and smooth. The methodology development and its derivations are solid and the paper also provides insightful remarks supporting the narrative. The method presented in the paper is general and can be potentially applied to a variety of work. Experiment results are nicely presented, clear and interesting.

Weaknesses: I am listing some minor issues that may be improved. In this work, although strong correlations to the Influence Function have been shown, the comparison to data influence methods in general isn't comprehensive. There are often works talking about recovering the linear influence function or using Taylor approximations to extend to nonlinear cases. It would be nice to see how it relates to or compares to broader works, such as [1] or TracIn [2], just to name a few.

[1] Repairing Neural Networks by Leaving the Right Past Behind. NeurIPS 2022.
[2] Estimating training data influence by tracing gradient descent. NeurIPS 2020.

The empirical results of the proposed approach do not seem especially strong. As we can see, the correlation between prediction and the actual counterfactual deteriorates quickly as the model becomes bigger. It works quite well for MNIST/MLP, but less satisfactorily for LeNet/FMNIST and CNN/CIFAR-10. I appreciate the honesty of the authors that show these results in a straightforward manner such that the reader can directly see the real capability of these methods. Because of that, I would think the relatively weak empirical results do not pose a major compromise to the contribution of this work.

Technical Quality: 3 good Clarity: 3 good Questions for Authors: I was wondering about the approximation error for using exponential family distributions or Gaussian priors. It would be nice if the paper could provide some discussion on this to help better understanding. What is the computational overhead of the proposed method, in terms of time complexity and memory demand? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The empirical results are relatively weak. The computational demand and its scalability are unclear.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

>Q1: Although strong correlations to the Influence Function have been shown, the comparison to data influence methods in general isn't comprehensive…. It would be nice to see how it relates to or compares to broader works, such as [1] or TracIn [2]..

A1: This is a good point. The MPE unifies many such approaches (e.g., those mentioned in lines 75-77) by essentially choosing an appropriate posterior form and an approximation to the natural gradient (these same approximations are also made in the BLR). Currently, we only give detailed derivations for linear regression and neural networks, but we will expand and rewrite these parts to explain the unification of other approaches. To summarize:

- methods that use Hessians (Koh and Liang 2017, Hara et al. 2019) are obtained by using a Gaussian with an unknown “full” covariance; also see the note in line 171.
- deep learning optimizers are obtained by using a Gaussian with an unknown “diagonal” covariance.
- methods that use prediction error or the gradient (see the references in line 76) can often be obtained by using a Gaussian with an unknown mean but a fixed known covariance.

We will expand the paper to explain these connections in detail. We will also add the references you suggested.

>Q2: The empirical results of the proposed approach do not seem especially strong. As we can see, the correlation between prediction and the actual counterfactual deteriorates quickly as the model becomes bigger. It works quite well for MNIST/MLP, but less satisfactorily for LeNet/FMNIST and CNN/CIFAR-10. I appreciate the honesty of the authors that show these results in a straightforward manner such that the reader can directly see the real capability of these methods. Because of that, I would think the relatively weak empirical results do not pose a major compromise to the contribution of this work.

A2: We stress that our measures can work on bigger models as well; see Fig. IIa in the 1-page PDF where we show on ResNet-20 (270K parameters) a perfect match between true and estimated generalization error. Also, the MLP used for MNIST in Fig. 2a has 545K parameters. We have several such experiments now. The less satisfactory results in Fig. 2b and 2c are due to suboptimal training when producing the ground truth on the x-axis. The ground truth requires retraining with individual examples removed. We will fix this in the next version of the paper. We do not observe these problems in other experiments (e.g., when predicting generalization error as shown in Fig. IIa).

>Q3: I was wondering about the approximation error for using exponential family distributions or Gaussian priors. It would be nice if the paper could provide some discussion on this to help better understanding.

A3: We mention this briefly in line 183 but we will expand this. As mentioned in our response to Q1, by making an appropriate exponential-family approximation we can recover different types of measures. So the quality of the approximation affects the quality of the sensitivity measure. See the results shown in Fig. I of the 1-page PDF.

>Q4: “What is the computational overhead of the proposed method, in terms of time complexity and memory demand?”

A4: For diagonal Gaussian approximations, we get deep-learning optimizers that estimate weight-space variances for free (see Sec. 4 of the BLR paper [1]). These can be turned into sensitivity estimates with almost no additional overhead, for instance, using a few Jacobian vector products; see the results shown in Fig.
I of the 1-page PDF. In general, as mentioned in line 184, the computational overhead depends on the chosen posterior approximations.

[1] M.E. Khan, H. Rue. The Bayesian learning rule. arXiv 2107.04562, 2023.

---

Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: I have read through the reviews as well as the authors' response. I appreciate the authors' dedicated work and thank them for the response to my comments as well as for providing additional results. As has been pointed out by multiple reviewers, the presentation of this paper could be improved for better accessibility to a broader audience. The questions on approximation errors and computational overhead are not directly addressed. The runtime for the experiments conducted in this work, or its order of magnitude, is unknown. Its practicality cannot be readily assessed. I recommend the authors better clarify these parts should the paper be published. I would keep my score as is. I hope the authors compile the new results and additional discussions into the paper or its Appendix. Regards, Reviewer YYDA

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for the feedback and we appreciate their comments! We will improve the presentation and add new results and additional discussions to the paper.

>The questions on approximation errors and computational overhead are not directly addressed.

In the rebuttal, we show that a better estimate of variance (better approximation quality) gives better results; see Fig. I in the 1-page PDF.

>The runtime for the experiments conducted in this work, or its order of magnitude, is unknown. Its practicality cannot be readily assessed.

The method is practical and scales well to large problems. For example, in Fig. I, we used Adam, which requires less computation than iBLR but gives slightly worse variance estimates. Both require the computation of Jacobian-vector products but can be scaled to large problems as they both use diagonal weight-space covariances. For Fig. 2, 3, and 4a, we analyzed trained models (e.g., to compare to ground-truth deviations), so we used KFAC-Laplace, which is a bit more computationally expensive but can still scale to large problems.
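To make the "prediction error times prediction variance" reading of the Gaussian-posterior sensitivity (as summarized elsewhere in these reviews) concrete, here is a heavily hedged sketch for plain logistic regression with a diagonal Laplace-style posterior. The diagonal variance below is our stand-in assumption; the paper's experiments use BLR/iBLR or KFAC-Laplace variances instead, and none of the names here come from the paper's code.

```python
# Sketch: per-example sensitivity ~ prediction error * prediction variance,
# with a diagonal Gaussian posterior over logistic-regression weights.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
y = (rng.random(n) < 1 / (1 + np.exp(-X @ theta_true))).astype(float)

# MAP fit by full-batch gradient descent with an L2 prior (illustrative).
theta, lam = np.zeros(d), 1.0
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ theta))
    theta -= 0.1 * (X.T @ (p - y) + lam * theta) / n

# Diagonal Laplace-style posterior variance over weights:
# precision_j = sum_i h_i * x_ij^2 + lam, with h_i = p_i (1 - p_i).
p = 1 / (1 + np.exp(-X @ theta))
h = p * (1 - p)
var_diag = 1.0 / ((h[:, None] * X**2).sum(axis=0) + lam)

pred_error = np.abs(p - y)           # e_i: prediction error
pred_var = (X**2) @ var_diag         # v_i = x_i^T diag(var) x_i (function space)
sensitivity = pred_error * pred_var  # error times variance
print("five most sensitive examples:", np.argsort(-sensitivity)[:5])
```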
Summary: In this paper, the authors introduce the Memory Perturbation Equation (MPE), a generalization of sensitivity measures of models that uses the recently proposed Bayesian Learning Rule (BLR) as a foundation. Thanks to its Bayesian foundation, the MPE can be used for non-converged models and for non-differentiable loss functions.

Strengths: The biggest strength of the paper is the writing. This is an *extremely* well written paper that conveys the idea beautifully. It flows extremely well and is very easy to read and follow along. The generality of the MPE is also mind-boggling. This is an extremely general method that I think will be pivotal for deep learning. Lastly, the experiments section did a great job verifying the theory of the method and giving a taste of all of the potential uses for it.

Weaknesses: While I think the writing is fantastic, I think a small exposition on the BLR would do wonders, especially because the paper cites specific equations from the BLR paper. To make the paper more readable, a small section in the appendix with details relevant to the paper and its experiments would do wonders. While I understand the theory of the method, I have a slight practical disconnect between the experiments from this paper and the BLR. Since Adam naturally arises from the BLR, the posterior variances can be constructed easily and, crucially, the posterior covariance would be diagonal, sidestepping messy matrix inversions. But, from reading the appendix section, it seems like the authors are doing the opposite and are instead getting the Hessian of the loss function (in the appendix the authors write: "Variance computation requires inversion of matrices. For large problems, we use a Kronecker factored Laplace variance approximation as implemented in the laplace package[5]"). If this is indeed the case, then I do find this slightly concerning, and it is a major limitation of the method, as computing Hessians is expensive and practically infeasible for modern architectures. Moreover, it doesn't fall in line with the mantra of the paper, which is to leverage the BLR to get these sensitivity measures practically for free. Please let me know if I have misunderstood what is going on.

Technical Quality: 3 good Clarity: 2 fair Questions for Authors: N/A Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: A limitation section is missing from the paper, which I think is important to have. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

>Q1: I think a small exposition on the BLR would do wonders, especially because the paper cites specific equations from the BLR paper. To make the paper more readable, a small section in the appendix with details relevant to the paper and its experiments would do wonders.

A1: Thanks for the suggestion. We will add a section on this.

>Q2: I have a slight practical disconnect between the experiments from this paper and the BLR. Since Adam naturally arises from the BLR, the posterior variances can be constructed easily and, crucially, the posterior covariance would be diagonal, sidestepping messy matrix inversions….. it seems like the authors are …..getting the Hessian of the loss function…. I do find this slightly concerning, and it is a major limitation of the method, as computing Hessians is expensive and practically infeasible for modern architectures. Moreover, it doesn't fall in line with the mantra of the paper, which is to leverage the BLR to get these sensitivity measures practically for free.

A2: This is a great question, and what you said is exactly the idea: the MPE suggests that better estimates of uncertainty give better estimates of sensitivity; see the 1-page PDF on the experiments, where we compare sensitivity at iterations of methods like SGD, Adam, AdamW, iBLR, etc. Previously, we did KFAC only for some experiments where trained networks were used. For iterations, we always use diagonal versions, but we will fix the writing and make the experiments clearer (a hedged numerical sketch of this "variances for free" reading follows below).

>Q3: A limitation section is missing from the paper, which I think is important to have.

A3: We will add a section to the paper.
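As a concrete (and heavily hedged) illustration of the "variances for almost free" point: one can read Adam's second-moment state as a diagonal curvature proxy and map it to a per-parameter variance. The $1/(N(\hat{h} + \delta))$ mapping below is our assumption, inspired by diagonal-Gaussian BLR variants, and is not the exact recipe used in the paper's experiments.

```python
# Turn Adam's accumulated second moments into an assumed diagonal
# posterior variance after a short training run.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(10, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
X, y = torch.randn(256, 10), torch.randn(256, 1)

for _ in range(100):  # a few steps so the optimizer state is populated
    opt.zero_grad()
    torch.nn.functional.mse_loss(model(X), y).backward()
    opt.step()

N, delta = X.shape[0], 1e-4
for p in model.parameters():
    h_hat = opt.state[p]["exp_avg_sq"].sqrt()  # Adam's scale vector as curvature proxy
    var = 1.0 / (N * (h_hat + delta))          # assumed diagonal posterior variance
    print(p.shape, var.mean().item())
```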
Summary: The paper studies the sensitivity of machine learning models to training data. Previously, sensitivity was often studied through empirical investigations, which are costly and do not always generalize across models. In this paper, the authors present the memory perturbation equation (MPE), based on the Bayesian learning rule, as a unifying equation to understand the sensitivity of generic ML algorithms to training data. The MPE has two features: sensitivity to examples is estimated by using natural gradients of those examples alone, and examples with larger natural gradients contribute more to the sensitivity. The authors show that the MPE, when specialized to Gaussian posteriors, recovers influence functions in linear models and deep learning, and that examples with high prediction error and prediction variances are the most influential, with the sensitivity obtained by multiplying the two; the sensitivity can be estimated cheaply whenever natural gradients are cheap to compute. Finally, the authors show empirically that the MPE can be used to accurately estimate generalization performance on image classification tasks.

Strengths:
- The paper is well written and easy to follow for non-experts.
- The proposed MPE formulation is well supported by theoretical foundations based on the Bayesian learning rule. And according to the authors, the proposed MPE has several nice and intuitive theoretical properties for Gaussian posteriors.
- The experiments are well designed to show various aspects of the properties of the proposed sensitivity measure, which is interesting and clear.

Weaknesses:
- I don’t see much discussion on related work or empirical comparisons to state-of-the-art methods
- The example datasets used in the empirical evaluations seem to be mostly smaller datasets like MNIST and CIFAR.

Technical Quality: 3 good Clarity: 3 good Questions for Authors: Out of curiosity, can the proposed MPE or part of the proposed perturbation-based formulation be used to perform machine learning data valuation, i.e., to evaluate the importance of groups of data samples towards a machine learning task (e.g., https://bair.berkeley.edu/blog/2019/12/16/data-worth/) Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations in their discussions section. I don’t see any potential negative impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

>Q1: I don’t see much discussion on related work or empirical comparisons to state-of-the-art methods

A1: Thanks, we will improve this point, but could the reviewer mention some related works they would like us to add? For now, we considered influence functions as the state of the art, and we have new experiments comparing influence functions at iterations of SGD, Adam, AdamW, etc. The experiments clearly show the limitation of existing approaches that do not work well during iterations. See Fig. I in the attached 1-page PDF.

>Q2: The example datasets used in the empirical evaluations seem to be mostly smaller datasets like MNIST and CIFAR.

A2: We agree, but this is because our focus is on “unifying existing measures”. Our experiments are essentially validating the theory. We will take experiments on the TinyImageNet dataset into consideration. In the future, we plan to work on large-scale models such as GPT-2.

>Q3: “Out of curiosity, can the proposed MPE or part of the proposed perturbation-based formulation be used to perform machine learning data valuation, i.e., to evaluate the importance of groups of data samples towards a machine learning task (e.g., https://bair.berkeley.edu/blog/2019/12/16/data-worth/)”

A3: Absolutely! We hope to consider such applications in a follow-up study, where we use the framework developed here for larger problems.

---

Rebuttal Comment 1.1: Comment: Thanks to the authors for your rebuttal! I don’t have further questions.
Rebuttal 1: Rebuttal: We appreciate the feedback provided by all reviewers. The main weaknesses seem to be that (1) the relevant background and related work can be improved; and (2) the experimental results are weak at times and not on large models. For issue 1, we believe it can easily be fixed, and we have given relevant responses above. For issue 2, we provided additional results in the 1-page PDF. We also include an experiment on ResNet-20 (270K parameters). The experiments support our main message that (1) better estimates of uncertainty give better estimates of sensitivity, and (2) sensitivity can be estimated well during training. We will further improve the writing and presentation of the paper to take the reviewers' suggestions into account. Pdf: /pdf/bd7afdb849d59b35290fc5111c3c91142964f038.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper presents a generic method, relying on some Bayesian analogy, to perform instance-wise sensitivity analysis of learned models. A generic updating rule and derivative is presented, as well as its different variations on popular models (neural networks and Gaussian models). One notable use of the proposed sensitivity indices is to estimate the generalisation capabilities of a given model. Experiments are performed on classical MNIST/FMNIST data sets to confirm that the behaviour of the method is the expected one.

Strengths:
+: a versatile method to perform sensitivity analysis
+: a strong technical proposal, at least from what I can assess

Weaknesses: See questions for more detailed comments
-: the link with the Bayesian approach remains to some extent unclear in the general case.
-: the approach mainly considers instance-wise sensitivity indices, and it is unclear whether this is sufficient in general?
(minor) -: the paper contains a number of typos, and a final read could be useful to correct them (e.g., L41 "can estimated", L63 "the expressions shows", L59, "data as large", L73, "such topics as rare", ...)

Technical Quality: 3 good Clarity: 2 fair

Questions for Authors:
* Bayesian approximation validity: In L92, I found it very strange that one can assume that there is always a proportional link between a Bayesian posterior and the exponential of a loss function. Sure, the exponential allows one to turn the additivity of the loss function into a product form mimicking Bayesian updating, but is it guaranteed that any loss function can be related to a legitimate (proper) prior/likelihood pair in a standard Bayesian framework? Similarly, except for the Gaussian case, what is the quality one can expect from an approximation using the exponential family? This is a bit hard to figure out, as all examples given in the rest of the paper rest on the (specific) Gaussian case. A good summary of this whole question would be "How Bayesian (in the sense of satisfying the axioms and coherence of Bayesian approaches, as expressed by, e.g., De Finetti) actually is the BLR rule?"
* Echoing my previous question, do we have an idea of how often the premises of Theorem 2 ($q_\lambda=p(\theta|\mathcal{D})$) are true? Verifying this seems highly non-trivial to me.
* Thanks to the underlying assumptions made, it makes perfect sense to consider that perturbation effects are additive and that samples have independent effects on the model. However, it would seem more reasonable to consider that, in practice, samples do have interactions between them and that the impact on the learned model is not merely additive, hence the need to include some possible dependencies/interactions between the samples. Maybe the authors could clarify a bit the situations where the additivity assumption makes sense?
* While the removal of an observation is perfectly interpretable from the data perspective, I would appreciate some clarification about the $\epsilon_i$ perturbation. From what I get, this would correspond to weighting the data in the learning scheme and seeing what happens in this case (a textbook formulation of this weighting reading is sketched after this review). Is this correct, or would it be more accurate to consider that the data can move within a given neighbourhood in the input space (a perturbation that would probably make more sense from a physical viewpoint, even if I am fine with the weighting interpretation)?
* Suggestion: I do not really see the advantage of repeating figures that can be found in some other places in Figure 1, especially as a reader will not have the necessary elements to properly interpret them at this stage. Maybe remove them to better discuss some other points? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: See questions about some possible limitations, not especially discussed/mentioned in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
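The reviewer's question about when $\exp(-\text{loss})$ defines a legitimate posterior has at least one clean, concrete instance: in the conjugate Gaussian case, the "generalized" posterior built from a squared loss coincides exactly with the standard Bayesian posterior. A minimal numerical sketch of this (our illustration, not from the paper; data sizes, seed, and prior precision are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + rng.normal(size=n)
delta = 1.0  # prior precision: theta ~ N(0, I/delta)

# Exact conjugate posterior for a Gaussian likelihood: N(m, S^{-1})
# with S = X'X + delta*I and m = S^{-1} X'y.
S = X.T @ X + delta * np.eye(d)
m = np.linalg.solve(S, X.T @ y)

# Generalized posterior: exp(-sum_i 0.5*(y_i - x_i' t)^2) * prior(t).
log_gen = lambda t: -0.5 * np.sum((y - X @ t) ** 2) - 0.5 * delta * t @ t
# Exact posterior log-density, up to its normalizing constant.
log_post = lambda t: -0.5 * (t - m) @ S @ (t - m)

# The two agree up to an additive constant, i.e. they are the same density.
pts = rng.normal(size=(5, d))
diffs = [log_gen(t) - log_post(t) for t in pts]
print(np.allclose(diffs, diffs[0]))  # True
```

For non-quadratic losses the correspondence is exactly what the generalized-Bayes literature cited in the rebuttal below formalizes.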
Rebuttal 1: Rebuttal: >Q1: Bayesian approximation validity: In L92, I found it very strange that one can assume that there is always a proportional link between a Bayesian posterior and the exponential of a Loss function… is it guaranteed that any loss function can be related to a legitimate (proper) prior/likelihood pair in a standard Bayesian framework? A1: Yes, this is the so-called “generalized” Bayesian framework where an arbitrary loss can be used in place of a likelihood. A short description of this can be found in Sec. 1.2 of the BLR paper [1]. It is well-known in the Bayesian community and several papers exist on this topic; see [2-4]. We will add a short description to the paper. [1] M.E. Khan, H. Rue. The Bayesian learning rule. arXiv 2107.04562, 2023.\ [2] T. Zhang. Theoretical analysis of a class of randomized regularization methods. COLT, 1999.\ [3] O. Catoni. PAC-Bayesian supervised classification: The thermodynamics of statistical learning. Institute of Mathematical Statistics Lecture Notes—Monograph Series 56. IMS, 2007.\ [4] P. G. Bissiri, C. C. Holmes, and S. G. Walker. A general framework for updating belief distributions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2016 >Q2: Similarly, except for the Gaussian case, what is the quality one can expect from an approximation using the exponential family? This is a bit hard to figure out, as all examples given in the rest of the paper rest on the (specific) Gaussian case? …. do we have an idea of how often the premises of Theorem 2 ($q_\lambda = p(\theta \mid \mathcal{D})$) are true? A2: We will add a non-Gaussian example for the Beta-Bernoulli Mixture model where the posterior is exact. There are plenty of such cases where the posterior is non-Gaussian and exact. There are no issues with the “quality” of the approximations because they merely reveal the implicit approximations made in the previous non-Bayesian approaches. Our work simply figures out the exact form of the approximations to recover these other approaches from the MPE. >Q3: How Bayesian actually is the BLR rule? A3: Yes, the BLR is entirely Bayesian. We encourage the reviewer to refer to the BLR paper, where it is shown that the BLR coincides with Bayes’ rule for well-known conjugate models and, in other cases, gives the closest approximation to the posterior according to the KL divergence. For Bayesian models, such as in Bayesian deep learning, this is just variational inference. We will add some text to explain this. >Q4: Thanks to the underlying assumptions made, it makes perfect sense to consider that perturbation effects are additive and that samples have independent effects on the model. However, it would seem more reasonable to consider that, in practice, samples do have interactions between them and that the impact on the learned model are not merely additive, hence the need to include some possible dependencies/interactions between the samples. Maybe the authors could clarify a bit the situations where the additivity assumption makes sense? A4: Thanks for the question. We are not sure which “additivity assumption” the reviewer is referring to. In Eq. 6 we make an additive assumption in the “natural-parameter space”, but this still takes the interactions into account when estimating effects on the predictors. For example, see Eq. 19, where the covariance matrix (in the first approximation) takes care of the interaction and collinearity between the examples (similarly to the influence function). In Fig. 3a, we show that ignoring the interaction still works on some problems, but considering the full covariance is better in general, as suggested by Eq. 19. >Q5: While the removal of an observation is perfectly interpretable from the data perspective, I would appreciate some clarification about the perturbation. From what I get, this would correspond to weighting the data in the learning scheme, and see what happens in this case…. A5: Yes, you are right! For example, we can multiply the loss function of the respective data by $\epsilon_i$, perturb the label $y_i$, or apply some other perturbation of this kind. >Q6: Suggestion: I do not really see the advantage of repeating figures that can be found in some other places in Fig. 1, especially as a reader will not have the necessary elements to properly interpret them at this stage. Maybe remove them to better discuss some other points? A6: Thanks, we will think about this, and also improve the caption to make it more understandable. >Q7: the paper contains a number of typos… A7: Thanks, we have fixed them now. --- Rebuttal Comment 1.1: Title: Thanks for the clarification + additional elements Comment: Dear authors, Thank you for your various clarifications and pointers linking Bayesian approaches and loss functions, after which I would gladly raise my score. Regarding the additivity, I gave a look at Equation (19); however, I have the feeling that my point still remains after this. To clarify a bit what I wanted to say: I did not mean additivity and interactivity in the predictive model, but in the way sensitivities with respect to data removal are computed: from what I get, the MPE obtained by removing two pieces of data amounts to the sum of removing each data point individually. This is the additivity assumption I was mentioning. It may be the case that the covariance matrix mentioned does take care of that, but even in this case it is then approximated by a summative form over each removed data point. Best regards --- Reply to Comment 1.1.1: Comment: We are happy to see an increase in the score. Thank you! Regarding the “additivity assumption”, this does not cause a problem in the first approximation in Eq. 19 but (as you point out) only in the second approximation where the sum over examples is used. This is why we did the experiments in Fig. 3a and 3b to understand the effect on a real problem. We will improve our writing to avoid the confusion. To understand why the assumption does not cause a problem, we give the equation below for linear regression where the “exact” deviations are indeed additive. That is, the following holds exactly for the new model $\theta_*^{\backslash \mathcal{M}}$ obtained by removing a group of examples $\mathcal{M}$ from $\theta_*$, $$S_{*}^{\backslash \mathcal{M}}\theta_*^{\backslash \mathcal{M}} - S_* \theta_* = - \sum_{i \in \mathcal{M}} x_i y_i, \qquad -\frac{1}{2} S_*^{\backslash \mathcal{M}} + \frac{1}{2} S_* = \frac{1}{2} \sum_{i \in \mathcal{M}} x_i x_i^T,$$ where $S_*^{\backslash \mathcal{M}}$ denotes the precision of the new model. The expression, when written in terms of the parameters or the function values, has a covariance matrix that correlates examples.
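The exact group-removal identities quoted above for linear regression are straightforward to check numerically. A minimal sketch, assuming a Gaussian prior with precision $\delta$ (i.e., ridge regression); the removal set and problem sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 40, 4
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + rng.normal(size=n)
delta = 1.0                     # Gaussian-prior (ridge) precision
M = [3, 7, 19]                  # removed group of examples
keep = np.setdiff1d(np.arange(n), M)

def fit(Xs, ys):
    # MAP / posterior mean with precision S = X'X + delta*I.
    S = Xs.T @ Xs + delta * np.eye(d)
    return S, np.linalg.solve(S, Xs.T @ ys)

S_full, th_full = fit(X, y)
S_rm, th_rm = fit(X[keep], y[keep])

# Natural parameters S*theta change by the sum of removed x_i*y_i terms ...
print(np.allclose(S_rm @ th_rm - S_full @ th_full, -X[M].T @ y[M]))  # True
# ... and precisions change by the sum of removed x_i*x_i' terms.
print(np.allclose(S_full - S_rm, X[M].T @ X[M]))                     # True
```

Both identities hold to machine precision, consistent with the claim that the additivity is exact in the natural-parameter space for this model.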
null
null
null
null
null
null
Large-Scale Distributed Learning via Private On-Device LSH
Accept (poster)
Summary: This paper presents a novel framework for on-device locality-sensitive hashing (LSH) that addresses the limitations of existing algorithms in terms of computational efficiency, memory constraints, and privacy concerns. The authors introduce a new family of hash functions that enables each device to generate hash tables independently, without relying on a central host. This approach allows for personalized and private LSH analysis while conserving memory and computational resources. Strengths: 1. The proposed PGhash family is novel. It addresses computational and memory challenges introduced by repeated randomized projection of full-layer weights in distributed settings. 2. The authors claim to have proven several statistical and sensitivity properties of their PGhash functions, ensuring the reliability and robustness of the proposed framework. They also present experimental results that demonstrate the competitiveness of their approach in training large-scale recommender networks compared to existing LSH frameworks that assume unrestricted on-device capacity. 3. The paper is well-written. The theoretical formulation is well-organized. Weaknesses: NA Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Is there any intuition on choosing between the PGhash version of DWTA and SimHash in practice? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. We address all questions and concerns below. --- > Is there any intuition on choosing between the PGHash version of DWTA and SimHash in practice? DWTA and SimHash approximate similarity in fundamentally different ways: * DWTA computes similarity between vectors based on their relative attributes (similar ordering of indices by magnitude). * SimHash approximates cosine similarity between vectors. In practice, the choice between the PGHash versions of DWTA and SimHash stems from the desired speed of LSH analysis as well as the label sparsity of the data. 1. Speed: DWTA is faster than SimHash (one of the fastest known hashing schemes) since matrix multiplication is not needed. DWTA can also generate multiple hashes at once (SimHash requires multiple random projections to get multiple hashes). 2. Label sparsity: We can further speculate on the choice of hash based on some interesting empirical data: Table 4 in our supplemental material demonstrates that SimHash is inferior for training on Amazon-670K, and we also require DWTA to train over Wiki-325K (Figure 7). The average number of labels per sample is as follows, Delicious-200K: ~75 labels, Amazon-670K: ~5 labels, Wiki-325K: ~3 labels, which suggests that DWTA should be used when the expected number of labels per point is **extremely low.**
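The distinction drawn in this rebuttal can be made concrete with toy versions of the two hash families. The sketch below is a simplified illustration, not the paper's implementation: DWTA's densification step is omitted, and all sizes, seeds, and names are our own choices:

```python
import numpy as np

rng = np.random.default_rng(2)
d, k, n_hashes = 64, 8, 16

# SimHash: one Gaussian projection per bit; hash = sign pattern.
# Needs a dense (n_hashes x d) matrix multiply per query.
P = rng.normal(size=(n_hashes, d))
simhash = lambda x: (P @ x > 0).astype(int)

# WTA-style hash (densification omitted): each hash is the argmax among
# k randomly chosen coordinates -- relative ordering only, no multiply.
idx = [rng.permutation(d)[:k] for _ in range(n_hashes)]
wta_hash = lambda x: np.array([np.argmax(x[p]) for p in idx])

x = rng.normal(size=d)
y = x + 0.1 * rng.normal(size=d)   # vector similar to x
z = rng.normal(size=d)             # unrelated vector
print((simhash(x) == simhash(y)).mean(), (simhash(x) == simhash(z)).mean())
print((wta_hash(x) == wta_hash(y)).mean(), (wta_hash(x) == wta_hash(z)).mean())
```

Both toy hashes agree far more often on the similar pair than on the unrelated one, which is the collision property LSH-based pruning relies on; the WTA variant achieves it without any matrix multiplication.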
Summary: The paper proposes a family of hashing functions that reduces the memory cost. Let W be a weight matrix. It is observed that in many applications, one needs to draw a Gaussian matrix T and compute T * W when constructing hash tables. Also, given a query x, one needs to compute T * x, which requires the storage of the entire matrix T. These two steps will be less practical on memory-limited devices. The main contribution of the work is a more efficient hashing scheme. The key idea is to perform a low-dimensional projection B of x onto dimension c, and use a smaller Gaussian matrix S. It is shown that the cosine distance on Bx preserves the LSH property. Strengths: + The problem and results are clearly presented. + The saving of memory is important to many applications. + The techniques are easy to follow. + A set of experiments were included to support the theory. Weaknesses: - My primary concern is the novelty of the approach. It turns out that Theorem 1 is the main result of the paper, which shows that projection with the matrix B = [I_c | I_c | ... | I_c] would preserve cosine distance in the sense of LSH. I am however uncertain whether the design of B is novel, or has been considered before in a similar context. (I guess it appeared somewhere but I cannot recall where.) I defer to the authors' clarification and other reviewers' comments. - The parameter 'c' controls the new dimension, which is supposed to be lower bounded by some quantity relevant to d or the training size, or will change the performance of the similarity function Sim_c, but there was no such discussion or condition. - Algorithm 1 looks superficial and lacks a guarantee. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: see weakness Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. We address all questions and concerns below. >My primary concern is the novelty of the approach. To the best of our knowledge, **our LSH approach is novel** for the following reasons: - Existing literature is not focused on the serial generation of pseudo-random projections, namely because there is no memory concern -- SLIDE [R1], Mongoose [R2], [R3], G-SLIDE [R4], etc., are designed for single-machine setups. - Projections over federated settings are a relatively underexplored setup -- Federated SLIDE [R5] performs LSH analysis of hashed client data at the server (and therefore avoids memory constraints), which we seek to avoid due to privacy concerns. - Furthermore, our on-device approach allows for the personalization of LSH analysis by allowing devices to select their own LSH hyperparameters (which other approaches lack). > I am however uncertain whether the design of $B$ is novel, or has been considered before in a similar context. To the best of our knowledge, the structure of our $B$ matrix is novel in the context of LSH methods. The design of $B$ comes from the consideration of a nontrivial problem: - Given a **fixed** $c \times d$ matrix $B$ with $c \ll d$, produce a sequence of $c \times c$ matrices $S_i$ for $1\leq i\leq n$ such that the $S_iB$ *resemble* random $c \times d$ Gaussian matrices. Of course, it is not mathematically possible to obtain a sequence of independent, random Gaussian matrices given a fixed $B$, so we will try to produce near-Gaussian matrices. It is clear that the $S_i$ should be $c\times c$ random Gaussian matrices. Since a random variable $\mathcal{Z}\sim\mathcal{N}(0,I_c)$ is invariant under orthogonal transformations, we should have $B=[O_1 | O_2 |\cdots | O_{d/c}]$, where the $O_i$ are orthogonal matrices. We set $O_i=I_c$ since: 1. It is an **effective discriminator** of angular similarity according to Figures 2 and 8, 2. Its simple structure makes it possible to analyze the distribution of $||Bv||^2$, 3. For instances where dissimilar vectors are grouped as similar, it simply results in an increased number of weights to train, which according to Figure 6(a) is **never a significant amount**. --- > The parameter 'c' controls the new dimension, which is supposed to be lower bounded by some quantity relevant to d or the training size, or will change the performance of the similarity function Sim_c, but there was no such discussion or condition. In Proposition 1, the distribution of $||Bv||^2$ critically depends on the sketch dimension $c$. Namely, a higher value of $c$ (i.e., less aggressive compression) produces a generalized Beta distribution with less variance. The altered magnitude of vectors under the multiplication of $B$, i.e., $||Bv||^2$, will affect the extent of angle distortion presented in Theorem 2 (explained in lines 238-243), so the degree of distortion is affected by the sketch dimension $c$. --- > Algorithm 1 looks superficial and lacks a guarantee. To the best of our knowledge, no LSH-based dynamic pruning approach, such as SLIDE [R1] or its predecessor MIPS-dropout [R3], G-SLIDE [R4], or Federated SLIDE [R5], contains a guarantee, because it is difficult to theoretically establish a convergence rate when the parameters are dropped out in a structured, continually-changing manner as opposed to, say, random dropout. Similar to these other works, we are focused on the similarity estimation/sensitivity of our LSH strategy. 
Although this is beyond the purview of our work, a starting point for establishing the convergence of adaptive dropout algorithms is to use the theory established by [R6], which demonstrates that federated training on heterogeneous subnetworks can succeed if the overlap of parameters from round-to-round and client-to-client is significant enough. --- Thank you for your review. If we have addressed your questions, we would appreciate it if you would consider updating your score. If any other questions or concerns remain, please let us know. **References:** [R1] Chen, Beidi, et al. "Slide: In defense of smart algorithms over hardware acceleration for large-scale deep learning systems." (2020) [R2] Chen, Beidi, et al. "Mongoose: A learnable lsh framework for efficient neural network training." 2020. [R3] Spring, Ryan, and Anshumali Shrivastava. "Scalable and sustainable deep learning via randomized hashing." (2017) [R4] Pan, Zaifeng, et al. "G-SLIDE: A GPU-Based Sub-Linear Deep Learning Engine via LSH Sparsification." (2021) [R5] Yan, Minghao, et al. "Distributed slide: Enabling training large neural networks on low bandwidth and simple cpu-clusters via model parallelism and sparsity." (2022). [R6] Zhou, Hanhan, et al. "Federated Learning with Online Adaptive Heterogeneous Local Models." (2022)
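A minimal sketch of the serial-table idea as we read it from this rebuttal: the fixed folding matrix $B=[I_c|\cdots|I_c]$ reduces each vector to dimension $c$, after which each hash table needs only a freshly seeded $c \times c$ Gaussian matrix rather than a full $c \times d$ projection. Sizes, seeds, and function names below are illustrative assumptions, not the authors' code:

```python
import numpy as np

d, c = 1024, 64                     # full and sketch dimensions (d % c == 0)

def fold(x):
    # Bx with B = [I_c | I_c | ... | I_c]: sum the length-c blocks of x.
    return x.reshape(-1, c).sum(axis=0)

def hash_table_codes(x, seed):
    # Each table only needs a fresh c x c Gaussian S drawn on demand,
    # instead of storing a full c x d projection -- the memory saving at issue.
    S = np.random.default_rng(seed).normal(size=(c, c))
    return (S @ fold(x) > 0).astype(int)   # SimHash-style sign pattern

rng = np.random.default_rng(3)
x = rng.normal(size=d)
y = x + 0.2 * rng.normal(size=d)    # vector similar to x
# Serially generate ten tables from per-table seeds and compare codes.
agree = [(hash_table_codes(x, s) == hash_table_codes(y, s)).mean()
         for s in range(10)]
print(np.mean(agree))               # close to 1 for similar inputs
```

The design choice being debated is precisely the `fold` step: it costs no storage and no multiply, at the price of the folding collisions discussed in the review below.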
Summary: **Main Idea**: If you are using LSH-based sparsity, then why not project the weight matrices down using random / even structured projections and then use the LSH. **Potential applications**: device-based training / inference (resource-constrained) / federated learning. The authors provide a theoretical analysis of the proposed PGHash and provide some experiments in single- and multi-device settings. **[UPDATE AFTER REBUTTAL]** Sorry for the late response. Due to unavoidable reasons, I could not get to this earlier. 1. As agreed by the authors on most questions on the theoretical results, there were issues in the provided theory. While the authors provide quick fixes to the theorems, in order to evaluate these fixes, one would have to reconsider the entire theory again. 2. I do not agree with the authors' argument in favor of the structured matrix B. I still believe it has issues, and a detailed proof must be provided to support it. I am afraid the structured matrix B will kill any best bounds you can give. For example, in the example I gave in the review, how can one ever cope with the error caused there? For such an example, the bounds have to be the trivial $[-1,1]$ for cosine similarity. 3. I am skeptical of the statement that "The block identity structure solves the following problem: given a fixed $c \times d$ matrix $B$ with $c \ll d$, produce a sequence of $c \times c$ matrices $S_i$ for $1\leq i \leq n$ such that the $S_iB$ resemble random $c \times d$ Gaussian matrices". "Resemble" is not what we are looking for in the theory. **Would request ACs to take a serious look at the theory to confirm its correctness.** About the experiments, I would request the authors to show results after the peak as well to get the full picture. I will be maintaining my score for the submission in its current form. Strengths: The idea of using projections before LSH might be useful in practice depending on the sensitivity of the model to the compression. Weaknesses: See question section. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: **Theoretical soundness of approach and provided theorems.** 1. Theorem 1 does not provide any information on the approximation. The statement of Theorem 1 is obvious, as the similarity is defined as cosine similarity over folded vectors. After folding, PGhash is just SimHash, so it is not surprising that it is an LSH on folded vectors. 2. Theorem 2 does provide some information, but has issues. a) The $\alpha$ is the infimum of a set and can be equal to $0$ even if the set itself does not contain $0$. (On a related note, I believe the definition of $\alpha$ can be improved in terms of how the set is written.) Is it that $\alpha = \inf \\{ ||Bv|| : v \in S_{x,y} \\}$? I assume this is what the authors meant. b) If $\alpha$ is 0, then the range of distortion is trivially $[-1,1]$. c) Proposition 1 is used to comment on the value of $\alpha$, but it's not exactly clear how they are related: the set used in Proposition 1 is the unit sphere in $\mathbb{R}^d$, whereas that in Theorem 2 is the linear span of $x$ and $y$ where the coefficients are from a unit circle. Clearly the two sets are very different. Also, $\mathbb{E}||Bu|| = 1$ cannot be directly extended to understand the range of $\alpha$, as $\alpha$ is the infimum of $\\{ ||Bv|| : v \in S_{x,y}\\}$. d) It is possible that writing issues in this section might have misled me a bit. It would be good to get clarification from the authors. e) The structured form of matrix B is actually troublesome for theoretical guarantees. Consider a simple example: $d=20$, $c=5$, and let the vectors be $S = \\{ [x_1, 0,0,0,0, x_2, 0,0,0,0, x_3, 0,0,0,0, x_4, 0,0,0,0] \mid x_1+x_2+x_3+x_4 = 1 \\}$. Note that two vectors from S can have cosine similarities that vary from $-1$ to $1$, but under PGHash, they will always have similarities of $1$. **Experiments:** 1. From both Figure 4 and Figure 5, it seems like a higher number of devices causes PGHash to perform worse. Is there any reason why this occurs? 2. I do expect that using projections to reduce the size of weight matrices is going to lose accuracy. However, the hope for this paper is that the trade-off only significantly kicks in after large amounts of compression. a. The plots provided for Amazon-670K and Delicious-200K are not shown to convergence. So it is impossible to know what the eventual loss of accuracy is. b. With multi-device settings, the plots already show that PGhash is losing accuracy as compared to the baseline. Given that the multi-device setting is the proposed application area of PGhash, it is unclear if the overall proposal has value. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
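The counterexample in point 2(e) is easy to reproduce numerically. The sketch below (our illustrative values) picks two clearly dissimilar vectors from the family $S$ and shows that both fold to the same point, so any hash applied after the folding must treat them as identical:

```python
import numpy as np

d, c = 20, 5
fold = lambda v: v.reshape(-1, c).sum(axis=0)  # Bv with B = [I_5|I_5|I_5|I_5]

def embed(coeffs):
    # Place x_1..x_4 at positions 0, 5, 10, 15; zeros elsewhere.
    v = np.zeros(d)
    v[::c] = coeffs
    return v

u = embed([1.0, 0.0, 0.0, 0.0])
w = embed([-1.0, 1.0, 1.0, 0.0])   # coefficients still sum to 1
cos = u @ w / (np.linalg.norm(u) * np.linalg.norm(w))
print(cos)                          # approx -0.577: clearly dissimilar
print(fold(u), fold(w))             # both fold to [1. 0. 0. 0. 0.]
```

For such vectors $\alpha = 0$ (their difference lies in the kernel of $B$), which is exactly the regime where the distortion bound in Theorem 2 degenerates to $[-1,1]$.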
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. Below, we address all questions and concerns. ## Theory > [Q1]: Misleading impression of Theorem 1 being the main result Theorem 1 and its corollary are warmup results intended to illustrate a simple connection between traditional SimHash and our PGHash, while Theorem 2 and Proposition 1 are our core and novel contributions. In our revision, we rename Theorem 1 as a Fact or Proposition. --- > [Q2(a)]: $\alpha$ in Theorem 2 might be 0 Indeed, our intended definition is $\alpha=\inf\\{||Bv||^2: v\in S_{x,y}\\}$. The observation that $\alpha$ might be 0 under our current definition is true and a fixable caveat in our formulation. To remedy this, we now assume that $\alpha=\inf\\{||Bv||: v\in S_{x,y}\\}>0$, which trivially absorbs the assumption of no 0-eigenvectors in $S_{x,y}$ and eliminates the possibility of an infinitesimally-vanishing norm. This updated definition and assumption on $\alpha$ will be included in the revision. > [Q2(b)]: If $\alpha$ is 0, then the range of distortion is [-1,1] Correct, if $\alpha$ is 0 then the range of distortion is trivially [-1,1]. We assume that $\alpha>0$ to provide meaningful distortion bounds in Theorem 2. Empirically, grouping dissimilar vectors as similar is statistically rare for randomly drawn unit vectors (Figures 2 and 8). **Furthermore, in practice it doesn't hinder training**: this simply results in less pruning, since more weight vectors are deemed similar to the input and therefore updated. --- > [Q2(c)]: Relation of Proposition 1 to $\alpha$ First, we clarify that $S_{x,y}$ is defined as the set $\\{ v\in \mathrm{span}(x,y): ||v||=1\\}$. This was a technical typo whose correction should resolve much of the confusion on relating Proposition 1 and Theorem 2. It is useful to know how the norms of arbitrary unit vectors are distorted by $B$, which, according to Proposition 1, follows a generalized beta distribution with mass concentrated around 1 (see Figure 10 for a simulation). Indeed, $S_{x,y}$ is not the unit sphere, but since $x$ and $y$ are assumed to be random unit vectors ($\alpha>0$ is assumed), Proposition 1 provides intuition on how we probabilistically expect random, unit linear combinations of $x$ and $y$ to shrink/stretch under $B$. --- > [Q2(e)]: Justifying the efficacy and structure of $B$ The $B$ matrix we suggest: 1. Is an **effective discriminator** of angular similarity according to Figures 2 and 8. 2. Has a **simple structure** that makes it possible to analyze the distribution of $||Bv||^2$. 3. For instances where dissimilar vectors are grouped as similar (like the example you provided), it simply results in an increased number of weights to train, which according to Figure 6(a) is **never a significant amount.** The block identity structure solves the following problem: given a **fixed** $c \times d$ matrix $B$ with $c \ll d$, produce a sequence of $c \times c$ matrices $S_i$ for $1\leq i \leq n$ such that the $S_iB$ *resemble* random $c \times d$ Gaussian matrices. It is clear that we should have $S_i\sim \mathcal{N}(0,I_c)$. Since the standard normal distribution is invariant under orthogonal transformations, $B$ should contain blocks of orthogonal matrices, i.e., $B=[O_1|O_2|\cdots|O_{d/c}]$, where the $O_i$ are orthogonal matrices. Since $B$ and the $O_i$ are fixed, for any Gaussian $S$ the blocks $\{SO_i\}$ are always correlated, so examples of dissimilar vectors grouped as similar **will always exist**, and thus we simply choose $O_i=I_c$. 
--- ## Experiments > [Q2(b)] Novelty of PGHash and performance versus baselines The novelty of our work is that PGHash is the first: 1. On-device and memory-efficient LSH approach, 2. Privacy-preserving and personalizable LSH method in the distributed setting. As mentioned by the reviewer, it is common to "expect that using projections to reduce size of weight matrices is going to lose accuracy". Even so, we empirically showcase: * PGHash matches (Figure 5 for Amazon) or is competitive with full-memory baselines **using only 6.25% of the full weight matrix**. * PGHash matches baselines while allowing on-device, private LSH analysis with **personal compression rates and hash code lengths** (something that has never been allowed previously). --- > [Q1] Performance with higher numbers of devices worse than single device? Slightly degraded performance with more devices is typical and expected in FL, since we are averaging gradients trained on partitions of the dataset. This is sub-optimal compared to optimizing over the entire dataset (especially with large datasets like ours). Even so, the final accuracy for Delicious degrades by a negligible amount (<1%) in the FL setting. Nonetheless, - Peak accuracy should also be taken into consideration when evaluating performance, and multi-device achieves a **higher** peak accuracy on Delicious. - In contrast to Delicious, multi-device **performs better** on Amazon for both peak and final accuracy (+4.5% as shown in Figure 5). > [Q2(a)] Convergence of Amazon and Delicious accuracies Peak accuracies for our given architecture (with 1 device) are approximately 45% and 33% for Delicious and Amazon, respectively. These are the values reported in SLIDE [R1] and validated within our own work. For Delicious-200K, peak accuracy is reached between 2-3 thousand iterations. This is why [R1] only plots the first 3,000 iterations within their experiments. For Amazon, we ended our accuracy plots after the peak distributed accuracy was achieved. What is important to show, which we do in Figures 4 and 5, is that **PGHash achieves peak distributed accuracy for Delicious and Amazon**. --- Thank you for your review. If we have addressed your questions, we would appreciate it if you would consider updating your score. If any other questions or concerns remain, please let us know. **References:** [R1] Chen, et al. "Slide: In defense of smart algorithms..." 2020.
null
null
Rebuttal 1: Rebuttal: ### Summary of Rebuttal: **General Comments:** We thank the reviewers for their valuable feedback and questions on theory and experiments. **Reviewer highlights:** Reviewer 7yAa believes that our "approach allows for **personalized** and **private** LSH analysis while conserving **memory** and **computational** resources," that "the proposed PGhash family is **novel**," and that "the theoretical formulation is **well-organized**." Reviewer QaZe remarks "the saving of memory is important to **many applications**" and that our "techniques are **easy** to follow." Reviewer QZPn notes that our approach has several **potential applications** while also mentioning that "the idea of using projections before LSH might be **useful in practice** depending on the sensitivity of the model to the compression." **Summary of Core Contributions:** We summarize our key contributions as follows: * We introduce a **novel** LSH family and framework for **dynamic pruning** of a massive final-layer weight in a distributed setting. Our approach allows for serial generation of hash tables, enabling **memory-** and **computationally-efficient** on-device LSH analysis. Appealing to federated learning principles, PGHash-based LSH analysis is **personalizable** and **private**. * Our novel LSH family, PGHash, estimates angular similarity between "foldings"/deterministic hashings of the weight vectors and the layer input. We theoretically establish the sensitivity and statistical properties of PGHash, and empirically demonstrate it is a **good discriminator** of similarity over randomly-drawn unit vectors. * Our framework enables **competitive multi-device training** of a network on the extreme multi-label datasets Delicious-200K and Amazon-670K. Multi-device PGHash training: 1. **Matches** federated SLIDE training [R1,R5,R6] on Amazon-670K, 2. **Outperforms** single-device training on Amazon-670K, 3. **Closely matches** the peak accuracies of SLIDE and federated SLIDE training on Delicious-200K. The results above are accomplished all while storing **only 6.25%** of the final massive layer weight (a reduction of **tens of millions of parameters compared to SLIDE**). **Summary of Changes**: Guided by the helpful suggestions of our reviewers, our changes are as follows: * Amended the definition of and assumption on $\alpha$ to improve readability and eliminate an edge case ($\alpha=0$) pertinent to Theorem 2. The core result and proof of Theorem 2 **are preserved.** * Corrected a technical typo in the formulation of $S_{x,y}$ to stress that it should contain unit vectors. This correction is needed to appropriately link Proposition 1 and Theorem 2. The core results and proofs of Theorem 2 and Proposition 1 **are preserved**. **References:** [R1] Chen, Beidi, et al. "Slide: In defense of smart algorithms over hardware acceleration for large-scale deep learning systems." 2020. [R2] Chen, Beidi, et al. "Mongoose: A learnable lsh framework for efficient neural network training." 2020. [R3] Spring, Ryan, and Anshumali Shrivastava. "Scalable and sustainable deep learning via randomized hashing." 2017. [R4] Pan, Zaifeng, et al. "G-SLIDE: A GPU-Based Sub-Linear Deep Learning Engine via LSH Sparsification." 2021. [R5] Yan, Minghao, et al. "Distributed slide: Enabling training large neural networks on low bandwidth and simple cpu-clusters via model parallelism and sparsity." 2022. [R6] Xu, Zhaozhuo, et al. "Adaptive Sparse Federated Learning in Large Output Spaces via Hashing." 2022.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Revisit Weakly-Supervised Audio-Visual Video Parsing from the Language Perspective
Accept (poster)
Summary: The goal of this paper is to create a model that can solve the task of audio-visual video parsing in a weakly-supervised way. For this, the authors propose to use the language modality, which proves to provide benefits and improve the performance of the proposed model over the previous baseline. The language modality is used to provide additional supervision for the system. The paper provides an extensive comparison with previous baselines. Also, the authors provide extensive ablations showing the importance of each component and some qualitative examples which provide another perspective of how the model behaves. Strengths: - The proposed system performs better than the previous baselines. - Multiple baselines were used to compare against the proposed method. - Extensive ablation showing the importance of each component - Presence of qualitative results. Weaknesses: - Line 2: typo “andvisual”->”and visual” - Figure 1: last line of the description: the cross product looks different than in the figure. Make them consistent. - Line 127: Do the authors also train the ResNet and VGGish? Or are they pre-trained and then frozen? - Equation 1. I have some questions here. What is the shape of w^a_t and w^v_t? Are these scalars or vectors? Moreover, p_t^a/p_t^v is a vector. What will p^a and p^v be? Also vectors? If w is a scalar then p^a/p^v will be vectors. If w is a vector, then I would assume there is a dot product between w and p, which may be wrong depending on the dimension of w. Is there a dot product in the p^a and p^v? Moreover, if there is a dot product then p^a and p^v will be a single scalar, and not a tensor. However, I think the authors want to obtain p^a which will have the same shape as p^a_t, but it will aggregate the scores from the whole video. I also observed that p^{av} uses a Hadamard product. The first thing would be to clarify the shape of the weights, and that would also clarify a lot of things in the equations. Right now, for me, it is very hard to understand what is going on there. - Line 154: How are the features per segment obtained? CLIP gives features per frame. Do the authors average the per-frame features from CLIP to obtain per-segment features? - Line 179: How is the overlap region chosen? What are the thresholds? It is said before that “...some segments in orange are labeled as without car, their similarity is still high…”. How high and in comparison with what? Thus, provide some details on how it is judged if the similarity is still too high or too low. - Line 182: is this t-th segment one of the unreliable segments, or is it just the t-th segment of the video? Make this explicit, as right now it seems that it is just the t-th segment of a video, but I understood that the re-weighting is only applied to the unreliable segments, as mentioned in Line 170. - Line 184: What are the values of alpha and beta? - Line 191-194. I am a bit confused by these lines. First, I will refer to the figure. In the upper part of Figure 1 (d), it is seen that the method is learning in a supervised way to label segments correctly, based on the labels that are provided by the denoising and the re-weighting mechanism. However, for the lower part of Figure 1 (d), only the VGGish features are used for the MMIL. Is this correct? Shouldn’t both the visual and the audio features be used by MMIL, as in equation (1)? As far as I understand, the only thing that changes is the loss of the video modality, which now can provide segment-level supervision. But everything else is staying the same. - Line 193: Here it is said that CLIP is used to extract the visual features during training. Does this mean that the authors train HAN but replace the ResNet with CLIP? If yes, is CLIP trained, or is it frozen? - Line 230: typo “leave the label” -> “leave out the label” Technical Quality: 3 good Clarity: 3 good Questions for Authors: Most of the above questions can be easily fixed and clarified. For now, I think this paper looks good, so my score is weak accept. However, I would encourage the authors to clarify the mathematical equations, as this will make the proposed method easier to understand. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discuss the limitations of their work and how the work can be extended. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your reviews and address your concerns as follows. **Q1**: Do the authors also train the ResNet and VGGish? Or are they pre-trained and then frozen? **A1**: They are pre-trained and then frozen. The ResNet is pre-trained on ImageNet and VGGish is pre-trained on AudioSet. **Q2**: What is the shape of $\mathbf{w}^a_t$ and $\mathbf{w}^v_t$? Are these scalars or vectors? Moreover, $\mathbf{p}_t^a$/$\mathbf{p}_t^v$ is a vector. What are $\mathbf{p}^a$ and $\mathbf{p}^v$? Also vectors? Is there a dot product in the $\mathbf{p}^a$ and $\mathbf{p}^v$? Moreover, if there is a dot product then $\mathbf{p}^a$ and $\mathbf{p}^v$ will be a single scalar and not a tensor. **A2**: We apologize for not describing this information clearly in the paper. $\mathbf{w}_t^a,\mathbf{w}_t^v\in\mathbb{R}^{1\times{C}}$ are both vectors, whose shape is the same as that of $\mathbf{p}_t^a$ and $\mathbf{p}_t^v$. $\mathbf{p}^a$ and $\mathbf{p}^v$ are the weighted averages of $\mathbf{p}_t^a$ and $\mathbf{p}_t^v$, respectively, so $\mathbf{p}^a$ and $\mathbf{p}^v$ are also vectors. We will describe these weights and equations more explicitly in the paper. **Q3**: How are the features per segment obtained? CLIP gives features per frame. Do the authors average the per-frame features from CLIP to obtain per-segment features? **A3**: Yes, since a segment/second is composed of 8 frames, we average the features of the 8 frames to get the feature of a segment. **Q4**: How is the overlap region chosen? What are the thresholds? It is said before that “...some segments in orange are labeled as without car, their similarity is still high…”. How high and in comparison with what? Thus, provide some details on how it is judged if the similarity is still too high or too low. **A4**: We apologize that we didn't describe it clearly. For positive samples ($\mathbf{\widetilde{y}}_t^v[c] = 1$), the unreliable ones are those whose prompt similarity score $\mathbf{s}_c$ is lower than the maximum similarity score of the negative data ($\mathrm{Max}_{w/o}$). For negative samples, the unreliable ones are those whose prompt similarity score $\mathbf{s}_c$ is higher than the minimum similarity score of the positive data ($\mathrm{Min}_{w}$). Indeed, the unreliable data are the intersection of the yellow region (negative data) and the blue region (positive data) as shown in Fig. 2 of the original manuscript. We then change the labels of unreliable samples to a soft value (proportional to their prompt similarity), rather than the hard values (0 or 1). For the reliable ones, we keep their original labels unchanged. The detailed procedure for applying dynamic re-weighting to unreliable segments is thus: $$\mathbf{\hat{y}}_t^v[c]= \begin{cases} \min(1,\ \alpha \times \mathbf{s}_c) & \text{if } \mathbf{\widetilde{y}}_t^v[c] = 1 \text{ and } \mathbf{s}_c \leq \mathrm{Max}_{w/o}, \\ \min(1,\ \beta \times \mathbf{s}_c) & \text{if } \mathbf{\widetilde{y}}_t^v[c] = 0 \text{ and } \mathbf{s}_c \geq \mathrm{Min}_{w}, \\ \mathbf{\widetilde{y}}_t^v[c] & \text{otherwise}, \end{cases}$$ where $\mathbf{s}_c$ is the similarity between the segment and the event, $\mathrm{Min}_{w}$ is the lowest similarity among segments with the event, $\mathrm{Max}_{w/o}$ is the highest similarity among segments without the event, and $\alpha$ and $\beta$ are two scalars. **Q5**: Is this t-th segment one of the unreliable segments, or is it just the t-th segment of the video? Make this explicit, as right now it seems that it is just the t-th segment of a video, but I understood that the re-weighting is only applied to the unreliable segments, as mentioned in Line 170. **A5**: Thanks for pointing this out! In the original Eq. (4), the t-th segment is one of the unreliable segments. Eq. (4) will be updated as described in the answer above (A4). **Q6**: What are the values of alpha and beta? **A6**: We set $\alpha$ to 4 and $\beta$ to 0.4, because the similarity of segments with the event is supposed to be higher than that of segments without the event. We'll add the values of these two parameters to the implementation details. Besides, the analysis of different $\alpha$ and $\beta$ values is shown in Table 4 (a) of the original manuscript. **Q7**: I am a bit confused by these lines. First, I will refer to the figure. In the upper part of Figure 1 (d), it is seen that the method is learning in a supervised way to label segments correctly, based on the labels that are provided by the denoising and the re-weighting mechanism. However, for the lower part of Figure 1 (d), only the VGGish features are used for the MMIL. Is this correct? Shouldn’t both the visual and the audio features be used by MMIL, as in equation (1)? As far as I understand, the only thing that changes is the loss of the video modality, which now can provide segment-level supervision. But everything else is staying the same. **A7**: Yes, only the VGGish features (i.e., audio features) are used for the MMIL. The reason for using MMIL is to aggregate the segment-level predictions into video-level predictions for weakly supervised optimization. Since we generate segment-level labels for the visual modality, we have already turned its weakly supervised labels into fully supervised labels, and thereby do not need to perform the MMIL mechanism again. Sorry we didn't make it clear in Figure 1; actually, we use the BCE (binary cross-entropy) loss for the video modality, and the final loss function is: \begin{equation} \mathcal{L} = \mathrm{BCE}(\mathbf{y}^{av}, \mathbf{p}^{av}) + \frac{1}{T}\sum_{t=1}^{T}\mathrm{BCE}(\mathbf{\hat{y}}_t^v, \mathbf{p}^v_t) + \mathrm{BCE}(\overline{\mathbf{y}}^a, \mathbf{p}^a). \end{equation} **Q8**: Here it is said that CLIP is used to extract the visual features during training. Does this mean that the authors train HAN but replace the ResNet with CLIP? If yes, is CLIP trained, or is it frozen? **A8**: Yes, we replace the ResNet with the CLIP image encoder, and CLIP is frozen during training. --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my concerns in their rebuttal. I think the paper is technically strong and it provides interesting results. However, one of the reviewers pointed out that the evaluation is done only on one dataset, LLP. This is not ideal, and a comparison on multiple other datasets would have made the paper stronger. Thus, I will not increase my score (even if all my concerns were addressed). However, I will also not decrease my score because of this drawback, as I still think that even with one single dataset, the paper provides some interesting insights that could benefit the research community. --- Reply to Comment 1.1.1: Title: Thanks for acknowledging the technical strength and interesting insight of our paper! Comment: We greatly appreciate your acknowledgment of the technical strength and interesting insight of our paper! Unfortunately, only the LLP dataset currently has fine-grained (segment-level and modality-level) annotations and labels for audio-visual understanding. 
That's why state-of-the-art works (e.g., HAN [1], MA [2], CVCMS [3], MGN [4], JoMoLD [5], DHHN [6], MM-Pyramid [7], CMPAE [8]) and our work solely tested their models on a single dataset. In the future, we will try to collect and annotate more new fine-grained audio-visual datasets to facilitate the research community. [1] Tian et al., Unified multisensory perception: Weakly-supervised audio-visual video parsing. In ECCV, 2020. [2] Wu et al., Exploring heterogeneous clues for weakly-supervised audio-visual video parsing. In CVPR, 2021. [3] Lin et al., Exploring cross-video and cross-modality signals for weakly-supervised audio-visual video parsing. In NeurIPS, 2021. [4] Mo et al., Multi-modal grouping network for weakly-supervised audio-visual video parsing. In NeurIPS, 2022. [5] Cheng et al., Joint-modal label denoising for weakly-supervised audio-visual video parsing. In ECCV, 2022. [6] Jiang et al., Dhhn: Dual hierarchical hybrid network for weakly-supervised audio-visual video parsing. In ACM MM, 2022. [7] Yu et al., Mm-pyramid: Multimodal pyramid attentional network for audio-visual event localization and video parsing. In ACM MM, 2022. [8] Gao et al., Collecting Cross-Modal Presence-Absence Evidence for Weakly-Supervised Audio-Visual Event Perception. In CVPR, 2023.
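The dynamic re-weighting rule spelled out in A4 and A6 above can be summarized in a short sketch (our paraphrase of the rule, with hypothetical variable names and toy values, not the authors' code):

```python
import numpy as np

alpha, beta = 4.0, 0.4   # values reported in A6

def reweight(y, s):
    """Soft-relabel unreliable segments for one event class.
    y: (T,) hard segment labels in {0, 1}; s: (T,) prompt similarities.
    Assumes both positive and negative segments are present."""
    y, s = np.asarray(y, float), np.asarray(s, float)
    max_neg = s[y == 0].max()   # highest similarity among negatives (Max_w/o)
    min_pos = s[y == 1].min()   # lowest similarity among positives (Min_w)
    out = y.copy()
    unrel_pos = (y == 1) & (s <= max_neg)   # positives in the overlap region
    unrel_neg = (y == 0) & (s >= min_pos)   # negatives in the overlap region
    out[unrel_pos] = np.minimum(1.0, alpha * s[unrel_pos])
    out[unrel_neg] = np.minimum(1.0, beta * s[unrel_neg])
    return out

# Toy example: segments 2 and 3 fall in the overlap region.
y = [1, 1, 1, 0, 0, 0]
s = [0.30, 0.28, 0.20, 0.22, 0.15, 0.10]
print(reweight(y, s))   # [1.    1.    0.8   0.088 0.    0.   ]
```

Only labels in the overlap between the positive and negative similarity ranges are softened; reliable hard labels pass through unchanged, matching the "otherwise" branch of the cases equation.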
Summary: This paper tackles the audio-visual video parsing task by dividing the video-level label into segment-level labels with the help of language, based on CLIP. Training with the fine-grained segment-level labels, instead of the coarse video-level labels, makes the model perform better. In the process, the handling of noisy and unreliable labels brings an obvious improvement on the task. Strengths: The presentation is clear and it’s easy to follow. The manner of introducing CLIP to generate segment-level labels to replace video-level labels for training is smart, simple and effective. Weaknesses: I think the main contribution in this paper is to introduce large models such as CLIP to help the audio-visual video parsing task, and the exploration could be more thorough. The main task is the “audio-visual” video parsing task, then how would the relationship between the two modalities influence the task? In the default case, would the audio modality perform better than the visual modality? In Tables 1, 2 and so on, the results show that “A” always performs better than “V” when V has not been helped by CLIP; and even when introducing CLIP, the “HAN +CLIP” results also show that “A” is better than “V”. Does this mean that “A” always performs better than “V” by default? Also, comparing the results of “A-V” and others with “A”, the improvements are small. If so, enhancing the audio modality may be more effective for this task, while the audio modality in this work is weakened in the process. Is this because there is no proper audio-based large model like CLIP? Technical Quality: 3 good Clarity: 3 good Questions for Authors: How are the prompts generated? How are A and B obtained? Randomly choose a negative class from the 25 categories together with the original class to compose A and B respectively? And the paper says that “the language prompts of the video [could] be xx”. Are there any other formats besides the “other/none/notclass” format in the experiments? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: One main motivation of this paper is to tackle the problem of unmatching between the audio and visual modality. Then what’s the actual situation in the evaluated data? It would be better to provide statistics of this unmatching in the data to show that the data is proper to evaluate this point, together with a quantitative comparison of the performance on the data with or without this unmatching. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your reviews and address your concerns as follows. **Q1**: The main task is the “audio-visual” video parsing task, then how would the relationship between the two modalities influence the task? **A1**: Both modalities contribute to the recognition of events in audio-visual parsing. Even audio signals are still useful in visual event prediction. When we treat the audio and visual modalities separately, i.e., only using audio signals to predict audio events and only using visual frames to predict visual events, we found the segment-level overall metric drops from 48.9 to 43.1, and the event-level overall metric also decreases by 7.5 points, compared to the model with the interaction and cross attention of audio-visual modalities. In addition, as can be seen from Table 1 of the manuscript, when we use the CLIP visual feature in HAN and MGN, all the metrics, including pure audio recognition performance, are improved, which means better visual features could benefit not only visual recognition but also audio recognition. **Q2**: In the default case, would the audio modality perform better than the visual modality? **A2**: Thanks for the question. Audio doesn't always perform better than visual, but it indeed has better performance in more cases for audio-visual events. Visual frames usually have occlusion, camera movement, low resolution, and more variance, so the event may be hard to see from the visual modality. In contrast, audio signals are relatively easier to capture and clear to hear, thus it is easier to recognize different events from them. **Q3**: Comparing the results of “A-V” and others with “A”, the improvements are small. If so, enhancing the audio modality may be more effective for this task, while the audio modality in this work is weakened in the process. Is this because there is no proper audio-based large model like CLIP? **A3**: Thanks for the question. The audio modality was weakened because we only perform visual-track denoising in the manuscript. To better address your concern, we further conduct experiments with a large-scale audio-language pre-trained model called LAION-CLAP [1] for audio denoising, and observe significant improvement in audio-related metrics. LSLD* means we perform both audio and visual label denoising.

|Method||| Segment-Level||||| Event-Level|||
| :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: |:--: |
| | A | V| A-V|Type|Event |A| V |A-V| Type |Event|
| HAN | 60.1| 52.9| 48.9 |54.0 |55.4 |51.3| 48.9 |43.0| 47.7 |48.0|
| LSLD* | 62.7| 67.1| 59.4| 63.1| 62.2| 55.7|64.3| 52.6 |57.6 |55.2|
|HAN+Clip+Clap| 66.9| 54.3| 50.0| 57.1| 60.2| 59.1| 50.4 |43.9 |51.2 |55.8|
|LSLD*+Clip+Clap| **68.7**| **71.3**| **63.4** |**67.8** |**68.2** |**61.5** |**67.4** |**55.9** |**61.6**| **60.6**|

**Q4**: How are the prompts generated? How are A and B obtained? Randomly choose a negative class from the 25 categories together with the original class to compose A and B respectively? And the paper says that “the language prompts of the video [could] be xx”. Are there any other formats besides the “other/none/notclass” format in the experiments? **A4**: We apologize that we didn't describe this clearly. Here is an example. Suppose the label annotation of a video is that the video contains Violin (denoted as A) and Cello (denoted as B). Then, we construct the prompts for this video as [Violin and Cello, Violin, Cello, Other]. There is no other format used in the paper. We tried to replace "Other" with "None" and "Notclass" and found "Other" achieves the best performance. **Limitations**: One main motivation of this paper is to tackle the problem of unmatching between the audio and visual modality. Then what’s the actual situation in the evaluated data? And it would be better to provide statistics of this unmatching in the data to show that the data is proper to evaluate this point, together with a quantitative comparison of the performance on the data with or without this unmatching. **A5**: Since the training set only has video-level labels, we investigate the unmatching percentage of videos and segments on the validation set. We discover that the unmatching ratio is 73% for videos and 48% for segments, which indicates that unmatching between the audio and visual modality appears extensively in the dataset. Since the training set does not have segment-level labels, we cannot train the model on the matching and unmatching data respectively. However, existing works have studied the audio-visual synchronization [2][3] and asynchrony [4][5][6] phenomena. **References** [1] Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, and Shlomo Dubnov. Large-scale contrastive language-audio pretraining with feature fusion and keyword-to-caption augmentation. In ICASSP, 2023. [2] Naji Khosravan, Shervin Ardeshir, and Rohit Puri. On attention modules for audio-visual synchronization. In CVPR Workshops, pages 25–28, 2019. [3] Yasheng Sun, Hang Zhou, Ziwei Liu, and Hideki Koike. Speech2talking-face: Inferring and driving a face with synchronized audio-visual representation. In IJCAI, volume 2, page 4, 2021. [4] Juergen Luettin, Gerasimos Potamianos, and Chalapathy Neti. Asynchronous stream modeling for large vocabulary audio-visual speech recognition. In ICASSP, pages 169–172, 2001. [5] Chuang Gan, Yi Gu, Siyuan Zhou, Jeremy Schwartz, Seth Alter, James Traer, Dan Gutfreund, Joshua B Tenenbaum, Josh H McDermott, and Antonio Torralba. Finding fallen objects via asynchronous audio-visual integration. In CVPR, pages 10523–10533, 2022. [6] Lee, Sangmin, Sungjune Park, and Yong Man Ro. Audio-Visual Mismatch-Aware Video Retrieval via Association and Adjustment. In ECCV, pages 497-514, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response from the authors. Most of my concerns have been addressed, and I have a few suggestions for the authors. Firstly, regarding the results after incorporating the audio-language pre-trained model and the discussion about enhancing the audio modality, they could be included in either the main body of the text or the appendix. Secondly, concerning the design of prompts and the observation that using "Other" yields better results than "None" and "Notclass," it would be beneficial to include or briefly mention this in the main text to provide readers with more information. --- Reply to Comment 1.1.1: Title: Thanks for your suggestions! Comment: Thanks for your recognition and suggestions. We will include the above discussions in the revised main text.
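The prompt construction and similarity-based relabeling described in A4 can be sketched as follows. The random features below stand in for real CLIP/CLAP embeddings, and all function names are our hypothetical choices:

```python
import numpy as np

def build_prompts(events):
    # e.g. ['Violin', 'Cello'] -> ['Violin and Cello', 'Violin', 'Cello', 'Other']
    prompts = [' and '.join(events)] if len(events) > 1 else []
    return prompts + list(events) + ['Other']

def denoise_segments(seg_feats, txt_feats, prompts):
    """Label each segment with its most similar prompt.
    seg_feats: (T, D) segment embeddings; txt_feats: (P, D) prompt embeddings;
    both are assumed L2-normalized, so the dot product is cosine similarity."""
    sims = seg_feats @ txt_feats.T          # (T, P) similarity matrix
    return [prompts[i] for i in sims.argmax(axis=1)]

prompts = build_prompts(['Violin', 'Cello'])
rng = np.random.default_rng(0)              # random stand-ins for CLIP features
seg = rng.normal(size=(5, 512))
seg /= np.linalg.norm(seg, axis=1, keepdims=True)
txt = rng.normal(size=(len(prompts), 512))
txt /= np.linalg.norm(txt, axis=1, keepdims=True)
print(prompts)
print(denoise_segments(seg, txt, prompts))
```

With real encoders, the visual branch would use CLIP features and the audio branch CLAP features, each scored against the same video-level prompt list.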
Summary: This paper proposes LSLD, which leverages the CLIP model to denoise unreliable segment-level video labels for audio-visual video parsing. Strengths: $+$ The proposed method achieves state-of-the-art results on the LLP dataset in several metrics. $+$ The improvement on video and audio-visual events is significant. Weaknesses: $-$ The proposed method can only denoise labels for the visual domain. This will limit the overall accuracy of audio-visual video parsing. Although the authors mentioned that the CLAP model is not applicable to the proposed pipeline in L.59-L.60, the audio waveform can be simply blocked out to get segment-level audio information as well. $-$ Introducing the language model might not be necessary. Since the LLP dataset is collected from AudioSet, pre-trained models from AudioSet can find the corresponding labels for LLP for both audio and visual labels. These models may contribute to stronger label-denoising. $-$ The effectiveness of LSLD is not clear. * In L.231, it is mentioned that LSLD could also benefit audio accuracy. However, LSLD causes an accuracy drop from 62.4 to 62.3 (Segment-Level) and from 53.9 to 53.0 (Event-Level). Since LSLD is based on HAN, which leverages audio-visual feature aggregation, LSLD would be expected to improve audio results as well. * Does LSLD also benefit other approaches (e.g., MGN)? The proposed approach should also benefit other baselines, including the CLIP encoder or ResNet encoder. * Is LSLD complementary to other denoising approaches (i.e., MA and JoMoLD)? These approaches also denoise audio labels. They should be complementary to LSLD. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: $-$ Do the baselines with label denoising (e.g., MA) also leverage CLIP to denoise? For example, for the MA+CLIP baseline, are the refined labels from CLIP or ResNet? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The authors mentioned this in the paper. The proposed method can only work on visual segment-level labels. However, this claim might not be entirely correct. A possible way to address this is mentioned in the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your review and address your concerns as follows. In the following answers, LSLD means visual-only denoising and LSLD* means denoising of both audio and visual labels.

**Q1**: The proposed method can only denoise labels for visual domains.

We have now extended our denoising method to the audio modality, which leads to further improvement compared to the visual-only denoising results (e.g., from 51.3 to 55.7 on event-level audio accuracy), even with the VGGish feature. Specifically, the audio denoising is based on the audio-text similarity computed by LAION-CLAP [1], a large-scale audio-language model pre-trained on AudioSet. As in the denoising process for visual labels (Sec. 3.3), the similarity between prompts and segments is calculated, and the event of the most similar prompt is taken as the denoised segment-level label.

| Method | Segment-Level ||||| Event-Level |||||
| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
|  | A | V | A-V | Type | Event | A | V | A-V | Type | Event |
| HAN | 60.1 | 52.9 | 48.9 | 54.0 | 55.4 | 51.3 | 48.9 | 43.0 | 47.7 | 48.0 |
| LSLD* | 62.7 | 67.1 | 59.4 | 63.1 | 62.2 | 55.7 | 64.3 | 52.6 | 57.6 | 55.2 |
| HAN+Clip+Clap | 66.9 | 54.3 | 50.0 | 57.1 | 60.2 | 59.1 | 50.4 | 43.9 | 51.2 | 55.8 |
| LSLD*+Clip+Clap | **68.7** | **71.3** | **63.4** | **67.8** | **68.2** | **61.5** | **67.4** | **55.9** | **61.6** | **60.6** |

'Clip + Clap' means we use visual features extracted from CLIP and audio features extracted from LAION-CLAP.

**Q2**: Introducing the language model might not be necessary.

An AudioSet pre-trained model learns from audio-visual correlations, making it better suited to videos whose audio and visual events are temporally aligned. However, in AVVP, audio and visual events are assumed to be misaligned, and the model needs to predict different labels for the two modalities. Thus an AudioSet pre-trained model cannot be directly used for label denoising, nor can it generate segment-level labels. In contrast, a language model is a more general yet effective way to denoise the audio and visual tracks individually. Introducing CLIP or CLAP is crucial, since language models are flexible and can describe all events occurring in a video, enabling the generation of segment-level labels. Furthermore, using CLIP makes our method capable of extending to other datasets and tasks, not confined solely to a subset of AudioSet. Moreover, our audio backbone (VGGish) is pre-trained on AudioSet, and using CLIP still leads to better performance on top of it.

**Q3**: LSLD would be expected to improve audio results as well.

Since the original LSLD only performs visual denoising, an improvement in audio accuracy is not guaranteed. Although the audio accuracy decreases slightly in Table 1 of the manuscript, LSLD does improve the audio accuracy of DHHN [3] and MM-Pyramid [2], as shown in Sec. A of the appendix. Please see the table below.
| Method | Segment-Level ||||| Event-Level |||||
| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
|  | A | V | A-V | Type | Event | A | V | A-V | Type | Event |
| MM-Pyramid | 60.9 | 54.4 | 50.0 | 55.1 | 57.6 | 52.7 | 51.8 | 44.4 | 49.9 | 50.5 |
| MM-Pyramid + Clip | 62.8 | 56.3 | 51.6 | 56.9 | 60.0 | 53.1 | 52.2 | 45.5 | 50.3 | 50.7 |
| MM-Pyramid + LSLD + Clip | **63.3** | **66.8** | **60.4** | **63.5** | **62.4** | **54.5** | **62.5** | **52.6** | **56.5** | **53.2** |
| DHHN | 61.3 | 58.3 | 52.9 | 57.5 | 58.1 | 54.0 | 55.1 | 47.3 | 51.5 | 51.5 |
| DHHN + Clip | 62.6 | 59.1 | 53.4 | 58.4 | 59.1 | 54.3 | 55.2 | 46.8 | 52.1 | 52.5 |
| DHHN + LSLD + Clip | **64.1** | **69.0** | **60.7** | **64.6** | **64.0** | **56.2** | **66.2** | **53.7** | **58.7** | **56.4** |

In addition, we also performed LSLD* by extending our method to audio denoising, which leads to significant improvements on the audio metrics. Please refer to the table in Q1.

**Q4**: Does LSLD also benefit other approaches? The proposed approach should also benefit the CLIP encoder or ResNet encoder.

In the original supplementary files, we reported the results of applying LSLD to MM-Pyramid and DHHN; they are shown in the table of Q3, where LSLD clearly improves both architectures. Following your suggestion, we also tested the ResNet encoder with LSLD*; please see the 'LSLD*' results in the table of Q1.

**Q5**: Is LSLD complementary to other denoising approaches?

Yes. We apply LSLD* to MA and JoMoLD, and both obtain significant improvements. All experiments are conducted with VGGish audio features and ResNet visual features for a fair comparison. Please see the table below.

| Method | Segment-Level ||||| Event-Level |||||
| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
|  | A | V | A-V | Type | Event | A | V | A-V | Type | Event |
| MA | 60.3 | 60.0 | 55.1 | 58.9 | 57.9 | 53.6 | 56.4 | 49.0 | 53.0 | 50.6 |
| MA + LSLD* | **61.2** | **66.6** | **59.0** | **62.3** | **61.1** | **54.3** | **64.0** | **52.5** | **56.9** | **54.2** |
| JoMoLD | 61.3 | 63.8 | 57.2 | 60.8 | 59.9 | 53.9 | 59.9 | 49.6 | 54.5 | 52.5 |
| JoMoLD + LSLD* | **62.3** | **66.1** | **58.7** | **62.4** | **61.8** | **55.8** | **63.6** | **52.2** | **57.2** | **55.0** |

**Q6**: Do the baselines with label denoising also leverage CLIP to denoise? For the MA+CLIP baseline, are the refined labels from CLIP or ResNet?

No, MA and JoMoLD do not leverage CLIP to denoise labels. For MA+CLIP, the refined labels are from CLIP visual features, but CLIP similarity (e.g., computing the similarity between the image and our constructed prompts) is not used in the denoising process.

[1] Yusong Wu et al. Large-scale contrastive language-audio pretraining with feature fusion and keyword-to-caption augmentation.

[2] Jiashuo Yu et al. MM-Pyramid: Multimodal pyramid attentional network for audio-visual event localization and video parsing.

[3] Xun Jiang et al. DHHN: Dual hierarchical hybrid network for weakly-supervised audio-visual video parsing.

---

Rebuttal Comment 1.1:

Comment: Thanks for providing the response. It addressed my main concern, so I increase my rating. I also encourage the authors to include the baselines (mentioned in the 2DCt response, [1-8]) in Table 1.

---

Reply to Comment 1.1.1:

Title: Thank you for recognizing our responses!

Comment: Thanks for recognizing our responses. We will definitely include these baselines in our revision. Thank you again for helping to improve our work!
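As a companion illustration of the CLAP-based audio denoising described in Q1, one plausible way to score a 1-second audio segment against the prompt set with LAION-CLAP and keep the most similar event is sketched below. The calls mirror the public `laion_clap` package, but the exact usage is an assumption rather than the authors' implementation.

```python
# Hypothetical sketch of segment-level audio denoising with LAION-CLAP;
# the API usage follows the public laion_clap package but is an
# illustrative assumption, not the authors' code.
import torch
import torch.nn.functional as F
import laion_clap

model = laion_clap.CLAP_Module(enable_fusion=False)
model.load_ckpt()  # loads a default pretrained checkpoint

def denoise_audio_segment(segment_wav_path, prompts):
    """Return the event prompt most similar to the audio segment."""
    audio_emb = model.get_audio_embedding_from_filelist(
        x=[segment_wav_path], use_tensor=True)                     # (1, d)
    text_emb = model.get_text_embedding(prompts, use_tensor=True)  # (P, d)
    sims = F.cosine_similarity(audio_emb, text_emb, dim=-1)        # (P,)
    return prompts[sims.argmax().item()]
```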
Summary: The authors observe that weakly-supervised labels in audio-visual tasks are noisy at the segment level. To solve this problem, they introduce language as an additional source of information to assign soft labels to each 1-second video segment. More specifically, a similarity score is computed between each segment and a set of possible prompts, and segment-level labels are softly assigned by the proposed dynamic re-weighting method. The experimental results show a performance boost from the re-weighting.

Strengths: 1. The paper is mostly well written, with details and figures. 2. The idea of dynamic re-weighting for segment-level label noise is interesting. 3. The authors show various experimental results, including an ablation study of different prompts.

Weaknesses: 1. Some parts of the manuscript need to be clarified. * Section 3.4, especially Equation (4), needs clarification. According to Equation (4), $\tilde{\mathbf{y}}_t^v[c]$ is already assigned 1 or 0. Where does this value come from? * Moreover, there is a discrepancy between the descriptions in Lines 171-179 and the equation. For example, how is *relatively high similarity* applied in the equation? * Figure 1 (c): Does ```w/o Cello``` stand for a prompt "w/o Cello"? * In Table 1's caption, it is written, "The last 3 lines are all label denoising methods ..." Do the last 3 lines indicate ```MA+Clip```, ```JoMoLD+Clip```, and ```LSLD (ours)```? 2. Although Section 4 shows multiple experimental results, all of them are on a single dataset (the LLP dataset). Ideally, the proposed approach should be demonstrated on various datasets, especially those with no segment-level labels or longer video clips.

Technical Quality: 3 good

Clarity: 3 good

Questions for Authors: 1. Does ```other``` include the case where the segment is similar to some label other than class A or class B?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.

Soundness: 3 good

Presentation: 3 good

Contribution: 3 good

Limitations: The authors discussed the limitations of the work (e.g., segment-level labels for audio) in Section 5.

Flag For Ethics Review: ['No ethics review needed.']

Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.

Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your review and address your concerns as follows.

**Q1**: Section 3.4, especially Equation (4), needs to be clarified. According to Equation (4), $\widetilde{\mathbf{y}}_t^v[c]$ is already assigned with 1 or 0. Where does the value come from?

**A1**: Sorry for the confusion. The value comes from Sec. 3.3, where each visual segment is assigned a segment-level label $\widetilde{\mathbf{y}}_t^v$ by segment-level denoising. After denoising, for each probable class $c$, $\widetilde{\mathbf{y}}_t^v[c]$ is either 0 (not present) or 1 (present). An example of the label denoising process: suppose we have a video containing two events, Violin and Cello; its video-level label is assigned as $\mathbf{y}^v[\text{Violin}]=1$, $\mathbf{y}^v[\text{Cello}]=1$. The label of each segment is initially the same as the video-level label. To perform label denoising, the prompts are created as [Violin and Cello, Violin, Cello, Other], where 'Other' means neither Violin nor Cello appears. Then we calculate the similarity between segments and prompts, and take the prompt with the highest similarity as the event that appears in the segment. Suppose that for the $t$-th segment, the prompt with the highest similarity is Violin; we then set $\widetilde{\mathbf{y}}_t^v[\text{Cello}]$ to 0 while keeping $\widetilde{\mathbf{y}}_t^v[\text{Violin}]$ at 1. In this way, we obtain the segment-level denoised label $\widetilde{\mathbf{y}}_t^v[c]$ for each video segment.

**Q2**: There is a discrepancy between the descriptions in Lines 171-179 and the equation. For example, how is relatively high similarity applied to the equation?

**A2**: We apologize for not describing this clearly. Lines 171-179 describe the procedure for finding unreliable segments; after that, we apply dynamic re-weighting to these unreliable segments via Eq. (4). For positive samples ($\widetilde{\mathbf{y}}_t^v[c] = 1$), a segment is unreliable if its prompt similarity score $\mathbf{s}_c$ is lower than the maximum similarity score of the negative data ($\mathrm{Max}_{w/o}$). For negative samples, a segment is unreliable if its prompt similarity score $\mathbf{s}_c$ is higher than the minimum similarity score of the positive data ($\mathrm{Min}_{w}$). Indeed, the unreliable data are the intersection of the yellow region (negative data) and blue region (positive data) shown in Fig. 2 of the original manuscript. We then replace the labels of unreliable samples with a soft value (proportional to the prompt similarity) rather than the hard values (0 or 1). For reliable segments, we keep the original label unchanged. The detailed procedure for applying dynamic re-weighting to unreliable segments is

$$\hat{\mathbf{y}}_t^v[c]=
\begin{cases}
\min\left(1,\,\alpha \times \mathbf{s}_c\right) & \text{if } \widetilde{\mathbf{y}}_t^v[c] = 1 \text{ and } \mathbf{s}_c \leq \mathrm{Max}_{w/o}, \\
\min\left(1,\,\beta \times \mathbf{s}_c\right) & \text{if } \widetilde{\mathbf{y}}_t^v[c] = 0 \text{ and } \mathbf{s}_c \geq \mathrm{Min}_{w}, \\
\widetilde{\mathbf{y}}_t^v[c] & \text{otherwise,}
\end{cases}$$

where $\mathbf{s}_c$ is the similarity between the segment and the event, $\mathrm{Min}_{w}$ is the lowest similarity among segments labeled with the event, $\mathrm{Max}_{w/o}$ is the highest similarity among segments labeled without the event, and $\alpha$ and $\beta$ are two scalars.

**Q3**: Does w/o Cello stand for a prompt "w/o Cello"?

**A3**: As illustrated in Q2, w/o Cello stands for segments that are labeled without the Cello event. We will make this clearer in our manuscript.
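To make the case analysis of Eq. (4) concrete, here is a minimal numeric sketch of the re-weighting rule; the values of $\alpha$, $\beta$, and the similarity statistics below are invented for illustration and are not from the paper.

```python
# Minimal numeric sketch of Eq. (4)'s re-weighting; alpha, beta, and
# the similarity statistics are made-up values for illustration.
def reweight(y_tilde, s_c, min_with, max_without, alpha=2.0, beta=1.0):
    if y_tilde == 1 and s_c <= max_without:   # unreliable positive
        return min(1.0, alpha * s_c)
    if y_tilde == 0 and s_c >= min_with:      # unreliable negative
        return min(1.0, beta * s_c)
    return float(y_tilde)                     # reliable: keep hard label

# A positive segment whose similarity (0.30) falls below the best
# negative similarity (0.35) is softened instead of kept at 1:
print(reweight(1, 0.30, min_with=0.25, max_without=0.35))  # -> 0.6
# A negative segment below the positive minimum stays reliable:
print(reweight(0, 0.10, min_with=0.25, max_without=0.35))  # -> 0.0
```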
**Q4**: In Table 1's caption, it is written, "The last 3 lines are all label denoising methods ..." Do the last 3 lines indicate MA+Clip, JoMoLD+Clip, and LSLD (ours)?

**A4**: Yes, MA, JoMoLD, and LSLD (ours) are all label denoising methods, and MA+Clip and JoMoLD+Clip mean that we reproduce MA and JoMoLD with features extracted from CLIP.

**Q5**: Although Section 4 shows multiple experimental results, all of the results are on a single dataset (the LLP dataset). Ideally, the proposed approach should be demonstrated on various datasets, especially those with no segment-level labels or longer video clips.

**A5**: We are sorry that we have not carried out experiments on additional datasets. Since LLP is the only dataset that has both segment-level and modality-level labels for testing, all state-of-the-art AVVP methods are evaluated exclusively on the LLP dataset. In the future, we will try to conduct experiments on more general datasets.

**Q6**: Does 'other' include the case where the segment is similar to some label other than class A or class B?

**A6**: Yes, 'Other' means that neither class A nor class B appears.

---

Rebuttal Comment 1.1:

Title: Welcome to discuss!

Comment: Dear reviewer, we are following up on our paper rebuttal. In summary, we elaborated on how the value of $\widetilde{\mathbf{y}}_t^v[c]$ is derived and on modifying Eq. 4 to incorporate *relatively high similarity*. We explained the meanings of 'w/o Cello' and 'other', and explained the reason for experimenting solely on the LLP dataset. Your response and feedback are highly valued. Thank you for your time and consideration.
NeurIPS_2023_submissions_huggingface
2023
Fully Dynamic $k$-Clustering in $\tilde O(k)$ Update Time
Accept (poster)
Summary: This paper studies dynamic algorithms for k-median (and k-means) in general metrics. The main result is an $\tilde{O}(k)$ update time, poly(k) query time algorithm that achieves an O(1)-approximation. Here, the update model is point insertion/deletion, and the algorithm has access to a distance oracle. A query operation asks to return a list of k points that is O(1)-approximate. The result improves over the $k^2$ update time achieved in HK20. This is achieved using a somewhat different but more direct approach: whereas HK20 employed a general reduction to coresets via the merge-and-reduce framework, this work employs a direct dynamization of MP04. Comprehensive experiments are also provided. Compared with HK20 as a baseline, the new algorithm runs faster and achieves overall better accuracy.

Strengths: - The paper achieves near-linear-in-k update time, which is a nice improvement over the previous $k^2$ - The experiments are convincing and justify the theoretical improvement - A nearly-matching lower bound is provided

Weaknesses: - This work only yields an O(1)-approximation, while HK20 uses the coreset approach and hence can build an eps-coreset even for general metrics (even though it does not imply an efficient algorithm for finding a solution). This also means this work cannot be easily generalized to the Euclidean case, where a near-linear time PTAS was known. - The query time is $k^2$, which may be improved

Technical Quality: 3 good

Clarity: 3 good

Questions for Authors: - The first paragraph of "our techniques" tries to explain why HK20 has a $k^2$ dependence, but it's not very clear. Here, the black-box coreset is linear in k, and the depth of the binary tree is O(log n). Then where does the other factor of k come from? - Follow-up to the above question: is it true that HK20 relies on a bi-criteria solution? Does that part constitute the bottleneck? Also, does HK20 give a bi-criteria solution whose update time is near-linear in k? If so, then it may also be an interesting angle to compare with their result, since yours may be viewed as a (strict) improvement of it. - At the end of Sec. 1, it is mentioned that an algorithm for weighted k-median is presented in Sec. 2, but I didn't find it. - In Sec. 3.1, it seems the algorithm only returns a bi-criteria solution. What is the procedure for making it a real feasible solution? How does that procedure affect the final time complexity?

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.

Soundness: 3 good

Presentation: 3 good

Contribution: 3 good

Limitations: I didn't find these explicitly discussed. A discussion of limitations, and possibly a mention of future directions, could be helpful. This paper is a theory paper, and I don't see a potential negative societal impact.

Flag For Ethics Review: ['No ethics review needed.']

Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.

Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their effort and the review.

Weakness 1: We would like to point out that in the HK20 result, if one wishes to ensure that $\epsilon$ is small, then the resulting coreset has to be very large. For example, for the HK20 guarantee to hold with probability $1/2$ and $\epsilon=1$, the output coreset would need to contain at least $3.3$ million points when $n=2000$ and $k=100$. So in many real-world scenarios, the HK20 algorithm would need a coreset that contains the full dataset in order to retain the theoretical guarantees.

Weakness 2: The ultimate goal in this line of research is to obtain an $O(1)$-approximate dynamic $k$-median algorithm with $\tilde{O}(k)$ update time and $\tilde{O}(k)$ query time. We made substantial progress towards this goal by bringing the update time down from $\tilde{O}(k^2)$ to $\tilde{O}(k)$. Bringing the query time down to $\tilde{O}(k)$ remains a tantalising open question.

Question 1: While the sizes of the coresets are $\tilde{O}(k)$, the time taken to compute these coresets (on inputs of size $\tilde{O}(k)$) is $\tilde{O}(k^2)$, which is why the update time is $\tilde{O}(k^2)$. This is because HK20 needs to recompute the coresets after each update, to ensure that the output is a valid coreset at all times.

Question 2: Our focus in this paper has been the original $k$-median problem, without any bi-criteria approximation. But it is true that we can also maintain a bi-criteria approximation in $\tilde{O}(k)$ update time. It is also indeed the case that the bottleneck in the static coreset construction used by HK20 is the computation of a bi-criteria solution (while the rest of the coreset construction can be done in $\tilde{O}(k)$ time), which is why HK20 takes $\tilde{\Omega}(k^2)$ time to handle an update. While the output coreset of HK20 can also be viewed as a bi-criteria solution, it is still not clear how this can be maintained in $o(k^2)$ update time using the HK20 framework.

Question 3: We apologise for the confusion. We actually intended to cite the definition of the weighted $k$-median problem, not the algorithm. We use a weighted $k$-median algorithm as a black box, as described in the proof of Corollary 3.5. We will clarify this point in the final version.

Question 4: This is answered in line 177 (towards the end of Section 3.1). In the final version of the paper, we will take this comment into account and make appropriate changes.

Limitations: Thanks for pointing this out. In the final version of the paper, we will add a paragraph at the very end, pointing out the key remaining open problem in this topic, namely to get a constant-approximate dynamic $k$-median algorithm with $\tilde{O}(k)$ update time AND $\tilde{O}(k)$ query time.

---

Rebuttal Comment 1.1:

Comment: Thanks for the response. I'm fine with most of your comments. But there's one thing I'm not sure about: why does HK20 with eps = 1 and 0.5 success probability (and n, k as you set them) require 3.3 million points? How did you derive this number? Did you obtain it from some experiment, or by a simple calculation from their worst-case bounds?

---

Reply to Comment 1.1.1:

Comment: This number follows from the bounds in the theoretical guarantees of the HK algorithm and the bound on the number of points needed by the static coreset construction of Braverman et al. (which HK uses as a black box) in order to get a good approximation.
Since this coreset approach is not practical if we want to maintain theoretical guarantees (as illustrated by this calculation), in order to give a fair comparison of the algorithms, in our experiments we work in a regime which does not guarantee any worst case bound.
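For readers wanting to sanity-check this point, a back-of-the-envelope sketch may help. Coreset size bounds for $k$-median in general metrics are commonly stated in a form like the one below; the exact constant $C$ from Braverman et al.'s union-bound analysis is not reproduced here, and this asymptotic shape is an assumption for illustration only:

$$|S| \;=\; C \cdot \varepsilon^{-2}\, k \left(\log n + \log \tfrac{1}{\delta}\right).$$

With $\varepsilon = 1$, $\delta = 1/2$, $n = 2000$, and $k = 100$, the factor $k(\log n + \log \tfrac{1}{\delta})$ is only on the order of $10^3$, so a 3.3-million-point requirement is driven almost entirely by the constant $C$, which is exactly why the worst-case guarantee is impractical at this scale.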
Summary: This paper studies the $k$-median/means problems in fully dynamic settings. Clustering in the fully dynamic setting is a recent hot topic: fully dynamic $k$-center has been well studied in the literature, but little was known for fully dynamic $k$-median/means. The paper is inspired by the static framework proposed by Mettu and Plaxton, where a minimum-radius ball coverage strategy is used to obtain $t=O(\log n)$ layers of representations that give a good approximation of the given $k$-median/means instance. This paper modifies the classic framework and presents an $O(1)$-approximation algorithm for the fully dynamic $k$-median/means problems with $\tilde{O}(k)$ amortized update time and $\tilde{O}(k^2)$ worst-case query time, which improves the previous coreset-based method with $\tilde{O}(k^2)$ worst-case update time.

Strengths: 1. This paper proposes a simple but efficient approximation algorithm for the fully dynamic $k$-median/means problems with $\tilde{O}(k)$ amortized update time and $\tilde{O}(k^2)$ worst-case query time, improving the previous result with $\tilde{O}(k^2)$ update time and $\tilde{O}(k^2)$ query time. The authors also show that the update time of their algorithm is optimal up to polylogarithmic factors. 2. This paper gives a detailed experimental evaluation of fully dynamic k-median algorithms for general metrics and shows that the proposed framework is more efficient than previous ones.

Weaknesses: 1. The techniques used in this paper seem to rely heavily on the minimum ball coverage method of Mettu and Plaxton. 2. The challenges in obtaining good update and query times are not well discussed. 3. The theoretical analysis is not an easy read in a limited time. The intuition behind the algorithm and analysis should be given before going into the details of the lemmas and proofs.

Technical Quality: 3 good

Clarity: 2 fair

Questions for Authors: 1. On each insertion, why should the data point being inserted be added to every layer $i \in [t]$ instead of just one of the $t$ layers? 2. Can the authors discuss the challenges in applying Mettu and Plaxton's method to the fully dynamic $k$-clustering problems?

Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.

Soundness: 3 good

Presentation: 2 fair

Contribution: 2 fair

Limitations: Since this is a theoretical paper, I don't think there is potential negative societal impact.

Flag For Ethics Review: ['No ethics review needed.']

Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.

Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their effort and the review.

Regarding the challenges behind getting our result: Typically, the first challenge in designing a dynamic algorithm for a given problem is to identify a suitable static algorithm which is flexible enough that it can be adapted to the dynamic setting. This is emphasised in line 83 (and the paragraph preceding it), where we explain that the static algorithm we build upon is completely different from prior work. Next, even after identifying the Mettu-Plaxton algorithm as our starting point, we have to note that simply maintaining the output of the static algorithm at every time-step would lead to a very large update time. To circumvent this difficulty, we suitably modify the Mettu-Plaxton algorithm so that it lazily waits until it accumulates sufficiently many updates at some layer $j$, and only then reconstructs all the layers $i \geq j$. Thus, we need to be lazy in a "fine-grained" manner: we cannot simply wait for a certain period of time and then rerun the static algorithm from scratch on the whole input. We will clarify this point in the final version.

Regarding handling an insertion: Note that the layers are nested within one another (see line 4 of Algorithm 1). Thus, if we add the point $x$ being inserted to layer $U_i$, then we must add it to all layers $j \leq i$. Now, one might ask why we can't just add the point $x$ only to $U_1$ and be done with it. The reason is this: then $x$ will necessarily become part of the set $C_1$ (see line 4 of Algorithm 1), but there will be no guarantee that this point $x$ in $C_1$ is covered by some ball of radius $\nu_1$ around $S_1$ (see lines 124-131).

Regarding the presentation in the paper: In Section 3.1, we described the static algorithm by Mettu-Plaxton, which was our starting point. In Section 3.2, we provided some intuition regarding the modifications needed to make it dynamic (see lines 169-172), and alongside it concretely described the dynamic algorithm. Due to space constraints, we had to defer the complete proofs of the analysis to the appendix.

---

Rebuttal Comment 1.1:

Comment: Thank you for the clarification. I keep my evaluation of the paper unchanged.
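A schematic sketch of the lazy, nested-layer bookkeeping described in this rebuttal may help; the data structures, the fixed threshold, and the rebuild placeholder below are illustrative assumptions, not the paper's exact algorithm.

```python
# Schematic sketch of the nested-layer bookkeeping from this rebuttal:
# the layers satisfy U_1 >= U_2 >= ... >= U_t (nesting), so an inserted
# point is added to every layer, and a per-layer update counter lazily
# triggers a rebuild of layers j..t once enough updates accumulate.
class LazyLayers:
    def __init__(self, t: int, rebuild_threshold: int):
        self.layers = [set() for _ in range(t)]  # U_1 ... U_t
        self.dirty = [0] * t                     # updates since last rebuild
        self.threshold = rebuild_threshold       # fixed, illustrative

    def insert(self, x):
        for j in range(len(self.layers)):        # preserve the nesting
            self.layers[j].add(x)
            self.dirty[j] += 1
        self._maybe_rebuild()

    def _maybe_rebuild(self):
        for j, d in enumerate(self.dirty):
            if d >= self.threshold:              # rebuild layers j..t
                self._rebuild_from(j)
                break

    def _rebuild_from(self, j: int):
        # Placeholder: the real algorithm reruns the static ball-cover
        # on layers j..t, recomputing the sets S_i and radii nu_i.
        for i in range(j, len(self.dirty)):
            self.dirty[i] = 0
```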
Summary: Fully dynamic k-clustering in $\tilde O(k)$ update time.

This paper studies fully dynamic k-clustering. It gives a fully dynamic algorithm that maintains O(1)-approximate solutions to k-median and k-means with $\tilde O(k)$ amortized update time and $\tilde O(k^2)$ worst-case query time. On the negative side, the authors show that $\Omega(k)$ amortized update time is required if one wants an O(1)-approximation ratio and poly(k) query time. So the update time is optimal, and it improves the prior best-known update time of $\tilde O(k^2)$. The authors also did experiments to complement the theoretical analysis.

The dynamic algorithm is built on the static algorithm of [27]. It runs for many iterations. In each iteration i, the algorithm samples a set S_i of points and creates a set of smallest-radius balls around the samples so that the balls cover at least a beta fraction of the remaining points, for a constant beta. The algorithm then removes the covered points and repeats the process. This gives a partition of the points into $\tilde O(k)$ balls. They construct an assignment that maps every point to its ball center. Then a k-median or k-means solution can be constructed using the centers only. The dynamic algorithm maintains the tree structure constructed in the static algorithm in a lazy manner by allowing some slack in many places. It needs to rebuild the tree from some layer if the cost of the assignment at that layer or above becomes bad. On average, many updates are needed before the algorithm rebuilds the tree from some layer, giving a good amortized update time.

Strengths: Overall, the paper gives a tight update time of $\tilde O(k)$ for the fully dynamic k-clustering problem, using elegant techniques. This is a solid accept.

Weaknesses: The hidden approximation ratio is a little big.

Technical Quality: 4 excellent

Clarity: 3 good

Questions for Authors: 1. Can you give a rough bound on the approximation ratio of the algorithm? 2. What happens if each update contains a batch of p points? Can you achieve an update time of O(p + k), instead of O(pk)?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.

Soundness: 4 excellent

Presentation: 3 good

Contribution: 4 excellent

Limitations: No limitations.

Flag For Ethics Review: ['No ethics review needed.']

Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.

Code Of Conduct: Yes
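To illustrate the static covering loop this summary describes, here is a simplified sketch in the Mettu-Plaxton spirit; the sampling size and the radius search are assumptions made for brevity, not the implementation from [27].

```python
# Illustrative sketch of the static layered ball-cover described above:
# repeatedly sample ~k centers, find the smallest radius whose balls
# cover a beta fraction of the remaining points, remove the covered
# points, and recurse. Constants are simplified assumptions.
import random

def layered_ball_cover(points, k, dist, beta=0.5, sample_factor=4):
    layers = []
    remaining = list(points)
    while remaining:
        sample = random.sample(remaining,
                               min(sample_factor * k, len(remaining)))
        # Smallest radius nu such that balls around the sample cover
        # at least a beta fraction of the remaining points.
        d_to_sample = sorted(min(dist(p, s) for s in sample)
                             for p in remaining)
        nu = d_to_sample[max(0, int(beta * len(remaining)) - 1)]
        covered = [p for p in remaining
                   if min(dist(p, s) for s in sample) <= nu]
        layers.append((sample, nu, covered))
        remaining = [p for p in remaining
                     if min(dist(p, s) for s in sample) > nu]
    return layers  # removing a beta fraction per round gives O(log n) layers
```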
Rebuttal 1: Rebuttal: We thank the reviewer for their effort and the review. Q1: The approximation ratio of our algorithm is (very close to) a factor $4$ off the approximation ratio of the static algorithm by Mettu-Plaxton. This is about $83 + 168\alpha$, where $\alpha$ is the approximation ratio of the static algorithm used to handle queries. However, we observe in our experiments that the quality of the solution returned by our algorithm is significantly better than the one predicted by the analysis. Q2: There is an existing lower bound which asserts that for any $k$, a constant approximate k-median algorithm takes $\Omega(nk)$ time in the static setting. If our dynamic algorithm could handle an update consisting of a batch of $p$ points in $o(pk)$ time, then this would imply the existence of a static algorithm that runs in $o(nk) + O(k^2)$ time (by passing all the n points to the dynamic algorithm in one batch and making a query). In the event that $k = o(n)$, this leads to a contradiction. Hence, any constant approximate dynamic algorithm must take $\Omega(pk)$ time to handle a batch update consisting of $p$ points (as long as the query time is polynomial in $k$). --- Rebuttal Comment 1.1: Comment: Thank the authors for the responses. I have no more questions.
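The Q2 reduction can be spelled out in one line. Suppose a dynamic algorithm handled a batch of $p$ points in $o(pk)$ time while answering queries in $O(k^2)$ time. Inserting all $n$ points as a single batch and issuing one query would yield a static $O(1)$-approximate $k$-median algorithm with running time

$$o(nk) + O(k^2) \;=\; o(nk) \qquad \text{whenever } k = o(n),$$

contradicting the known $\Omega(nk)$ static lower bound. Hence $\Omega(pk)$ time per batch of $p$ points is unavoidable for constant-approximate algorithms with $\mathrm{poly}(k)$ query time.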
Summary: This paper considers the dynamic version of the $k$-median and $k$-means clustering problems in an arbitrary metric space. The authors provide an $\tilde O(k)$ amortized update time (which is near-optimal based on a lower bound that the authors provide) for insertions and deletions of points, and a query time of $\tilde O(k^2)$. This improves the recent result by Henzinger and Kale (ESA 2020), which provided a dynamic algorithm with a worst-case $\tilde O(k^2)$ update time. The algorithm in this paper is based on making the algorithm of Mettu and Plaxton dynamic. The authors also provide implementations of their algorithm as well as prior works (including a coreset construction algorithm) and then provide an empirical performance analysis of these algorithms.

Strengths: This paper provides a near-optimal bound on the amortized update time for dynamic k-means and k-median clustering. The authors provide implementations of their algorithm as well as prior works, which is very useful.

Weaknesses: In terms of techniques, the methods used are an adaptation of an existing algorithm of Mettu and Plaxton and are perhaps somewhat incremental in nature.

Technical Quality: 4 excellent

Clarity: 4 excellent

Questions for Authors: My questions were answered in a previous review cycle

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.

Soundness: 4 excellent

Presentation: 4 excellent

Contribution: 3 good

Limitations: NA

Flag For Ethics Review: ['No ethics review needed.']

Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.

Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their effort and the review. We agree that we extend the Mettu-Plaxton algorithm to the dynamic setting. We believe this to be an important contribution, since our algorithm is simple and practical to implement, and improves upon the prior bound for this fundamental clustering problem in the dynamic setting.
NeurIPS_2023_submissions_huggingface
2023
Efficient Multi-Task Reinforcement Learning via Selective Behavior Sharing
Reject
Summary: This work presents Q-switch Mixture of Policies (QMP), which identifies shareable behaviors and incorporates them. The authors propose to utilize each task's learned Q-function to evaluate shareable behaviors and to incorporate helpful behaviors from other tasks to aid the exploration of the current task. Experiments on manipulation and navigation tasks are done to validate the proposed method.

Strengths: 1. The paper is well written, and the MTRL framework has the potential to generalize to many kinds of RL tasks because of its simplicity. The idea of the Q-switch is straightforward but seems to work in the experiments. 2. The analysis is comprehensive and validates the effectiveness of the behavior identification and incorporation. Moreover, behavior sharing seems to be a good complement to parameter sharing; by combining them, the sample efficiency would improve further.

Weaknesses: Though compared with many benchmarks, the experimental environments are somewhat over-simplified. I would suggest testing the framework in the Meta-World environment with more tasks, like insert-peg and pick&place, to make the results more convincing and reliable.

Technical Quality: 3 good

Clarity: 3 good

Questions for Authors: In Figure 2, the value of Q(s, a_3) is the highest, but why is policy 1 chosen for data gathering? Besides, in Section 3, action space $\mathcal{S}$ -> action space $\mathcal{A}$.

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.

Soundness: 3 good

Presentation: 3 good

Contribution: 2 fair

Limitations: As discussed above in the weaknesses; it's still a good paper to accept.

Flag For Ethics Review: ['No ethics review needed.']

Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.

Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and feedback, which helped us improve the paper. We address each of your concerns below, including new experiments.

### More complex environment added: Meta-World MT10

As advised, we have added an MT10 experiment to test QMP's contribution to parameter-shared and parameter-separated policy architectures in Rebuttal Fig. P6.

Discussion: **The Meta-World state space is not ideal for sharing.**

- As noted in Sec 5.1, we chose the 4-task setup of Yu et al. (2021) [3] to ensure a consistent state space. It is important to recognize that MT10's task set is peculiarly constructed over separate environments that are *artificially connected* through overloaded state dimensions for different objects. This makes behavior-sharing inapplicable, because there is no shared state structure over which to generalize the behaviors.
- In fact, we find that **even parameter-sharing is not a good solution to MT10**, despite the Meta-World paper reporting its best results with parameter-sharing SAC. Our results in Rebuttal Fig. P6 show that while parameter-sharing accelerates learning at the start, it converges to a suboptimal success rate of 68.3% (our numbers match the Meta-World paper). In contrast, naively training parameter-separate SAC policies reaches around a 75% success rate! Crucially, when we add QMP's behavior sharing, the success rate rises to over 80% while sample efficiency also improves.
- Parameter-sharing makes learning unstable, likely due to gradient interference between tasks; thus, when QMP is combined with parameter-sharing, it also suffers from similar gradient interference.

### The experiments are computationally complex because of online multi-task RL from scratch

Since we do not assume demonstration data, most of our experiments take 1.5 days to train on an RTX 3090, as all the tasks need to be trained with SAC. Some of the MT10 experiments we added took about **7 days to complete** per run! Given the state of RL methods and our non-industry-scale compute budget, we find this to be the most feasible environment setup we can train on while still performing exhaustive experiments over 5 seeds and 6 baseline methods. Such computational complexity is precisely why behavior-sharing methods such as QMP are important: they improve the sample efficiency of online multi-task RL.

### Clarifying Fig. 3

As you rightly interpret, the data-gathering step is indeed done by policy 3. We intended to show this by the robot pose matching policy 3's action proposal. To eliminate any confusion, we have now labeled the best behavior with "$a_3$" to show that policy 3 was chosen for data gathering. We also hope the added Algorithm 1 makes it easier to follow our approach.

Please let us know if there are any other concerns we can address to help increase your score.

---

Rebuttal Comment 1.1:

Comment: Thanks for your responses. I totally understand the adaptation issues and computational power concerns. I may keep my score for now.
Summary: The paper studies sharing behaviors between tasks in multi-task reinforcement learning. In the proposed method, each task maintains an independent policy network. During online exploration, it selects the action maximizing the task's Q-function among the actions proposed by all the tasks' policies. Experimental results show the method improves multi-task performance in three continuous control environments.

Strengths: 1. The paper is well written. The motivation of behavior sharing for MTRL is clear. 2. In the experiments, figures and results are clearly presented.

Weaknesses: 1. The paper makes a strong assumption that tasks differ only in their reward functions. Many complementary methods, like parameter sharing, tackle a wider problem setting where the transition functions and state spaces can be diverse. 2. Regarding baseline methods: the Fully-Shared-Behaviors baseline, a single policy without any task information as input, is an odd choice for multi-task RL. A fully-shared baseline with a task identifier as input makes more sense. The two ablation methods seem unnecessary, since the proposed method is simple enough. 3. In the experimental results, the proposed method does not outperform the baselines very significantly.

Technical Quality: 2 fair

Clarity: 3 good

Questions for Authors: As shown in Figure 6(c), the parameter sharing method contributes more than the proposed method. The proposed method + parameter sharing only improves a little over the method sharing parameters only. What are the results of parameters+behaviors sharing in the other two environments?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.

Soundness: 2 fair

Presentation: 3 good

Contribution: 2 fair

Limitations: The authors discussed some of the limitations, and they should be addressed in future work.

Flag For Ethics Review: ['No ethics review needed.']

Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.

Code Of Conduct: Yes
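As an editorial illustration of the selection rule this summary describes, a minimal sketch of the Q-switch follows; the policy and Q-function interfaces are assumed for illustration and are not the authors' code.

```python
# Minimal sketch of the Q-switch selection rule: every task policy
# proposes an action for the current state, and the current task's own
# Q-function picks the proposal with the highest value. The callables'
# interfaces are illustrative assumptions.
import torch

def qmp_action(state, task_id, policies, q_functions):
    """policies[j](state) -> action proposal; q_functions[i](s, a) -> Q."""
    proposals = [pi(state) for pi in policies]            # one per task
    q_i = q_functions[task_id]
    values = torch.stack([q_i(state, a) for a in proposals])
    return proposals[values.argmax().item()]              # arg-max switch
```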
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive feedback and for the suggested additional parameter-sharing experiments, which we have added. We address each of your concerns below.

### Shared state space and transition function

As we address in more detail in the general response, a shared state space and transition function are common assumptions in many prior works [1-7], and we believe they form an important subset of MTRL problem setups. Furthermore, as you mention, parameter sharing is a complementary approach: it can be combined with, and does not detract from, our work in MTRL settings with a shared environment. We elaborate empirically below that the applicability of parameter-sharing to a setting does not mean it necessarily improves learning; it can even hurt. In that sense, behavior-sharing can also be "applied" to different environments, but it is more likely to help when tasks are conducted in the same environment, which is true for a large family of single-agent RL settings.

### Fully-Shared-Behaviors baseline

This baseline in Section 6.1 answers the question "How does exploration sharing compare to other forms of behavior sharing?". The Fully-Shared-Behaviors baseline represents one end of the spectrum of behavior sharing, which would be ideal when the agent's optimal behaviors across tasks do not conflict at all. Providing a task identifier as input makes this a parameter-sharing approach, because all task policies then share all parameters except the task ID. We already evaluate a multi-head shared SAC policy (equivalent to a shared policy with task ID) in Figure 6c and show that the benefits of parameter sharing are complementary to behavior sharing.

### Parameter sharing vs. behavior sharing

As suggested, we have added the QMP + parameter sharing experiments in Rebuttal Fig. P4-P6. We thank the reviewer for this suggestion and discuss the crucial insights about parameter-sharing below:

- **Parameter-sharing is suboptimal with conflicting tasks**: In Maze (Paper Fig 6), Reacher (Fig. P4), and the new MT10 results (Fig. P6), parameter-sharing SAC converges suboptimally. In particular, in Reacher, where Task 4 is behaviorally different (stay at the same position) from the other tasks (reach subgoals), parameter-sharing is highly suboptimal (60%). On closer inspection, we find that because Task 4 is easier to learn, parameter-sharing often converges suboptimally to that task, which hurts its learning on the other tasks. Crucially, parameter-sharing is even worse than no sharing at all on Reacher (60% vs. 80%) and MT10 (68% vs. 75%), which shows that **parameter-sharing can hurt performance**. A similar phenomenon has been observed in prior work [R3-1].
- **QMP can help deal with conflicting tasks**: QMP + parameter-sharing on Maze and Reacher improves performance over the parameters-only results. In Reacher, the influence of parameter-sharing is so negative that QMP-separate is better than QMP-parameter-sharing. But QMP consistently improves performance over no-parameter-sharing in all 4 environments we tested: Reacher, Maze, MT4, and MT10. In MT10, QMP + no-parameter-sharing achieves 85% success, compared to the Meta-World paper's reported 68.3% with parameter-sharing. While other approaches exist that can improve MT10 performance, our claim is simply that when the influence of parameter-sharing is disentangled, behavior-sharing is consistently shown to help.
- **QMP + parameter-sharing is not always complementary.**
While we could not tune hyperparameters sufficiently in the MT4 and MT10 experiments given the short rebuttal timeframe, our preliminary results show that the combination of QMP with parameter-sharing can interfere negatively. Parameter-sharing makes QMP worse on Reacher (Fig. P4), while QMP makes parameter-sharing worse on MT4 and MT10 (Fig. P5-P6). We hope that stabilizing parameter-sharing with approaches such as gradient surgery [R3-1] can improve the performance of QMP's combination with it, just as QMP consistently helps over no-parameter-sharing. However, given the open challenges of parameter-sharing itself (as demonstrated above), this is beyond the scope of our work's focus on behavior sharing.

### Significance of experimental results

QMP employs a simple, hyperparameter-free approach of utilizing a Q-function to share behaviors, without introducing any additional method-specific hyperparameter tuning. We demonstrate a 3x sample-efficiency improvement on Reacher and 15-20% gains in final performance on the Maze and Meta-World tasks over all existing approaches. In Fig. 4 (right) and Fig. 6 (c), QMP is the only method that achieves a 100% success rate, while the best baselines plateau at a suboptimal 85% and 90%. Finally, in the newly added MT10 results in Fig. P6, QMP helps achieve an 85% success rate while parameter-sharing converges at 68%. We hope the simplicity of QMP, which requires no new hyperparameters, and its consistent performance improvements are valuable.

### Need for ablations

The ablations demonstrate the need for an adaptive behavior sharing scheme via our Q-filter, by comparing against fixed behavior-sharing schemes, including a manually crafted scheme and a uniform sharing scheme. As Reviewer R1 pointed out, it is important to assess the question of static task similarity and whether that by itself is enough to perform behavior-sharing. We would be happy to add any additional ablations that the reviewer thinks are necessary, in place of or in addition to these ablations.

### [References]

[R3-1] Yu, Tianhe, et al. "Gradient surgery for multi-task learning." NeurIPS 2020.

We hope this clarifies and addresses the concerns raised. We again thank the reviewer for the new insights derived on parameter-sharing, and we would be happy to answer further questions, if any remain.
Summary: The paper introduces a new exploration mechanism for MTRL. The authors suggest training a different policy for each task and "sharing the behaviors" between them. To do so, each policy, in a given state, chooses the most suitable action using its own Q-function. The authors evaluate on several MTRL benchmarks and show increased sample efficiency and final performance over behavior-sharing baselines.

Strengths: $\underline{\text{Clarity:}}$ 1. Overall, the paper is coherent and easy to follow 2. The introduction is well written and the motivation for the work is clear $\underline{\text{Significance:}}$ 1. The method seems quite general and might be useful in many cases 2. To the best of my knowledge, the idea of using the Q-function as a metric for policy selection is novel

Weaknesses: My main issue is with the technical soundness of the paper. Throughout the paper, the level of formulation is relatively low, and I spotted some inaccuracies in notation and claims. In general, I understood the motivation for the Q-switch, but theoretical analysis or an empirical study to support it is lacking. I think this method might be suited to some sets of tasks, but it probably has a limitation, due to the generalization capabilities of the Q-function, that is not discussed in the paper. Here are some more specific examples regarding the technical quality of the paper:

$\underline{\text{In the problem formulation section:}}$ 1. The MDP components are not defined. Are the state and action spaces continuous or discrete? The paper should also state that $\gamma \in \mathbb{R}$ 2. In line 104 you state that $T$ is the number of tasks in the task set, and in line 109 you denote the i'th task in the set as $T_i$. This is a confusing abuse of notation. 3. In line 107 - "We parameterize the multi-task solution as…" - what does a solution to a multi-task problem mean? 4. In lines 109-110 - the objective is not formulated. "the tasks are uniformly sampled during training" - what does that mean? From which distribution are the tasks sampled? Does the sampling occur once at the beginning of the training phase? 5. In line 112 - what is $\pi_i^*$? It is not defined.

$\underline{\text{In section 4.3:}}$ 1. Line 173 - "over all the task policies $\pi_j$" -> "over all the task policies $\left[\pi_j\right]_{j=1}^T$" 2. In line 173 - how is $\pi_i^*$ defined? I believe it is an abuse of notation from Section 3.

$\underline{\text{Related work and baselines:}}$ 1. Limited coverage of the skill-learning and intrinsic-reward literature. I don't think the statement in line 101 (".. assume that the optimal behaviors of different tasks do not conflict with each other") is true for many skill-learning methods 2. Although this work's approach is quite different from intrinsic reward/skill learning, I believe that a standard state-visitation bonus should be competitive (or at least a good baseline) on the evaluation benchmarks

$\underline{\text{Experiments:}}$ 1. In section 6.2, the chosen baselines (QMP-uniform and QMP-domain knowledge) are too simplistic; please provide a more solid baseline, e.g., the one you suggested in line 178 (a probabilistic mixture). 2. In Figure 6(c) you show that the best performance was achieved when incorporating both parameter sharing and behavior sharing, and that using only parameter sharing beats using only behavior sharing. This raises the question of what would happen if we used different kinds of exploration mechanisms together with your method (e.g., intrinsic exploration).
Overall, I feel that this evaluation is limited, both in the variation of testing environments and in the exploration combinations.

Technical Quality: 2 fair

Clarity: 3 good

Questions for Authors: 1. Typo - line 106: action space $\mathcal{S}$ -> action space $\mathcal{A}$ 2. Can you clarify whether the set of policies that $\pi^{mix}_i$ is defined over includes $\pi_i$? 3. Line 76 - "We do not require pre-defined task similarity or exploration bonus" - can you clarify this? 4. For which task families do you believe your approach will have the upper hand over simple exploration bonuses (e.g., state visitation)? 5. Can you provide pseudo-code for your algorithm? 6. Can you provide a version of the graphs in Figure 4 with more env-steps? I find it quite surprising that the no-shared-behavior baseline doesn't converge to the optimal performance. 7. Can you provide a graph for the phenomenon discussed in lines 192-193? Something like the rate at which the Q-switch of task $i$ chooses $\pi_i$ as a function of the number of training env steps. 8. In Figure 6(a), shouldn't every row sum to 1? I'm not entirely sure I understood this figure.

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.

Soundness: 2 fair

Presentation: 3 good

Contribution: 2 fair

Limitations: Although the authors raised a valid point in the Limitations section of the paper, I believe other limitations of the method exist and aren't addressed (please see the weaknesses section for further details).

Flag For Ethics Review: ['No ethics review needed.']

Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.

Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your rigorous, thoughtful, and constructive feedback on the technical soundness of the Q-switch and for the suggestions for additional baselines. We address the main concerns below and the questions after.

### Technical soundness of the Q-switch

While theoretical guarantees are outside the scope of this paper, we believe there is ample empirical evidence to support a Q-function-based Q-switch. As we note in Section 4.2, Q-functions have been widely used in imitation learning and offline RL (Yu et al., 2021; Nair et al., 2018; Sasaki & Yamashina, 2020) to filter for high-quality data, which provides evidence for their effectiveness in evaluating actions from other policies. We performed empirical studies on the Q-switch in our paper showing that it:

- successfully identifies shareable behaviors (Fig 6a)
- becomes more selective over the course of training (Fig 6b)
- learns not to share from conflicting tasks (Supp. Fig 9)
- effectively switches over different policies within an episode (Supp. Fig. 10)

Due to the prior work and our own empirical findings, we believe that our proposed method is well supported. The reviewer also correctly points out that the Q-switch can be limited by error in the Q-function, which we already discuss in Section 4.3 (line 185). We reiterate that the Q-switch has a **self-correcting** characteristic: if the Q-function generalizes poorly and the Q-switch chooses some non-optimal action, the agent will gather data on it and train its Q-switch accordingly, similar to how standard Q-learning corrects its Q-function by exploring and making mistakes.

### Potentially related work

- **Skill learning**: We appreciate the reviewer's suggestion; however, skill learning can broadly refer to anything from single-task RL to hierarchical RL. To the best of our knowledge, we do not miss any relevant baselines for MTRL behavior-sharing. We would greatly appreciate references to papers that the reviewer identifies as missing and are happy to discuss them in the revision.
- **Exploration / intrinsic reward**
  + We highlight that our evaluation benchmarks are already dense-reward tasks (except one task in Reacher); thus, specialized exploration is not expected to help on these tasks. In response to the reviewer's comment, "I believe that a standard state visitation bonus should be competitive", we add a new experiment in Rebuttal Fig. 2, where SAC with an increased exploration bonus (via a larger entropy coefficient) does not benefit in performance.
  + Intrinsic rewards help exploration in tasks with sparse rewards and can be complementary to QMP. The challenge we address in this work is not that individual tasks are hard to learn, but how to learn multiple related tasks together efficiently. The key idea of behavior-sharing is that it *shortcuts the need to explore* by exploiting similar experiences made in other tasks.
  + An investigation of combinations with different exploration mechanisms is tangential to the focus of this work but would be an interesting extension. For example, a sparse-reward multi-task problem would likely require both multi-task behavior sharing and intrinsic exploration for each task.

### Potential baselines

- **More solid baselines**: Sections 5.2 and 6.1 contain solid baselines, including all prior approaches to behavior sharing. Section 6.2 specifically ablates the need for an adaptive behavior-sharing scheme via our Q-filter, by comparing against a manually crafted and a uniform sharing scheme under our method.
We would appreciate it if the reviewer could list any other baselines or ablations from prior work that they believe are missing.

- **Probabilistic mixture**: A probabilistic mixture of policies is a design variant of our approach in which the arg-max is replaced with a softmax. However, in our initial experiments, we found no significant improvement in performance, and it came with an additional hyperparameter to tune, the temperature coefficient. We attach that result in Rebuttal Fig. P3 to justify the design choice of arg-max over softmax, due to its reliable performance and simplicity.

### Questions

We greatly appreciate the detailed feedback on the typos and improper notation. We are glad this did not hinder the general understanding and flow of the method; we have fixed these issues in our manuscript thanks to the reviewer's detailed comments.

1. Fixed the typo.
2. $\pi_i^{mix}$ is a mixture over all task policies, including $\pi_i$; we will clarify this in the paper.
3. Our method does not require any prior knowledge about the relationships between tasks, unlike Kalashnikov et al. (2021b), which assumes given "task-skill groupings". We also do not use exploration bonuses like Bangaru et al. (2016).
4. While simple exploration bonuses are typically applied to a single task, our method improves exploration by sharing information between tasks. Therefore, our method would be more effective in task families with more shared behavior between tasks.
5. Thanks; we added pseudo-code in Rebuttal Algorithm 1.
6. We have provided this graph in Rebuttal Fig. P7. The no-shared-behavior baseline does converge to optimal performance but takes around 2x the number of samples to do so.
7. In Supp. Fig. 9 of the paper, we already provide the proportion of shared behavior from each policy over training on the Multistage Reacher task and discuss how behavior sharing decreases over training, as hypothesized in lines 192-193.
8. Fig. 6a: the caption notes that the diagonal is zeroed out. Yes, each row would sum to 1 otherwise, so the diagonal = 1 - (sum of the non-diagonal row entries). We zero out the diagonal for color scaling: diagonal elements are on the order of 0.8-0.9 while non-diagonal elements are 0.0-0.1, so including the diagonal values removes the contrast we want to show between the low-valued elements.

We hope this clarifies and addresses the concerns raised, and we would be happy to answer any further questions.

---

Rebuttal Comment 1.1:

Comment: I appreciate the clarifications and additional results. Your reply on the technical soundness of the Q-switch relaxed my concern, especially the part regarding the self-correcting characteristic. I also appreciate the response regarding exploration and intrinsic rewards. Regarding baselines, I mainly aimed at methods along the lines of, e.g., Pertsch et al. - Accelerating Reinforcement Learning with Learned Skill Priors, CoRL 2020. There are many extensions to this line of work, some of which you referred to but haven't discussed in detail. Can you please explain why these are not valid baselines? Due to the clarifications and additional results, I have raised my score.

---

Reply to Comment 1.1.1:

Title: Adding discussion on skill-based RL (offline data; temporal abstraction) vs. MTRL (online RL; behavior-sharing)

Comment: We sincerely thank the reviewer for acknowledging our rebuttal and pointing out the skill-based RL works. The key premise of [Pertsch et al.
2020] and its extensions, often referred to as skill-based RL, is how to extract temporally extended behaviors from an **offline, task-agnostic dataset** of agent trajectories of *meaningful* interaction with environments (e.g., human-collected data). There are two phases: 1. Use the offline data to learn a skill space (where a skill is usually a sequence of N actions: $a_1, ... a_N$), and 2. On **one** downstream task, train a new policy whose action space is now the pretrained skill space. This allows efficient downstream learning in a sparse reward task as the effective horizon of the task is sharply reduced, thanks to learning meaningful skills from a good behavioral offline prior dataset. We actually consulted with the author of the prior work you listed, and the key conclusion was that these methods are not comparable, but rather complementary. Specifically, skill-based RL like [Pertsch et al. 2020] can learn a skill space and QMP can do **online multi-task RL** with this skill space as the new action space. So, QMP + Skills can perform **temporally extended behavior-sharing** between the multiple policies learned on top of a skill space! Creating new benchmarks on *long-horizon multi-task RL augmented with offline data* would be a very exciting future research direction! We thank the reviewer for this interesting perspective. To sum up, skill-based RL methods are not ideal baselines for our work because: - Our problem formulation of online MTRL does not assume access to any **dataset** with agent trajectories, unlike skill-based RL methods which tackle a totally different problem. - Skill-based RL methods mainly consider a **single target task**, while our proposed method aims to simultaneously learn **a set of multiple tasks** with improved overall efficiency. - Skill-based RL’s key focus is enabling a challenging sparse-reward task with temporal abstraction, while behavior-sharing methods, like ours, focus on selectively sharing action proposals from other tasks. As we show, our approach shows benefits in dense reward tasks too. Since offline datasets are a key requirement for such skill-based RL methods as [Pertsch et al. 2020], the closest baseline we can formulate is to compare with an approach that “consider the other tasks as unlabeled datasets for our current task”. This is **exactly** the UDS (Data Sharing) baseline [Yu et al. 2022] in our experiments, which we already show doesn’t work well for online MTRL. We will add the above insightful discussions to the revised paper: (a) the complementary nature of skill-based RL + MTRL behavior sharing and (b) trying to apply skill-based RL makes it equivalent to UDS. Thank you again for genuinely helping us improve our paper’s discussion, and we hope this would alleviate any leftover concerns. [Pertsch et al. 2020] Pertsch et al. "Accelerating Reinforcement Learning with Learned Skill Priors" CoRL 2020. [Yu et al. 2022] Yu et al. How to leverage unlabeled data in offline reinforcement learning. ICML 2022.
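To make the Q-switch selection rule discussed in this thread concrete, here is a minimal sketch of QMP-style behavior sharing. The `policies[j](state)` and `q_functions[i](state, action)` interfaces are illustrative assumptions, not the paper's actual API; the softmax branch corresponds to the probabilistic variant discussed above, while the paper's reported results use the arg-max branch:

```python
import numpy as np

def qmp_select_action(state, task_id, policies, q_functions, temperature=None, rng=None):
    """Each task policy proposes an action for the current state; the current
    task's own Q-function (the "Q-switch") scores all proposals and picks one."""
    rng = rng or np.random.default_rng()
    proposals = [pi(state) for pi in policies]            # one proposal per task policy
    q_i = q_functions[task_id]                            # Q-switch = current task's Q-function
    scores = np.array([q_i(state, a) for a in proposals])
    if temperature is None:
        j = int(np.argmax(scores))                        # arg-max variant
    else:
        z = scores / temperature                          # softmax variant with a
        z -= z.max()                                      # temperature hyperparameter
        probs = np.exp(z) / np.exp(z).sum()
        j = int(rng.choice(len(proposals), p=probs))
    return proposals[j], j
```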
Summary: This paper proposes Q-switch Mixture (QMP) for identifying shareable behaviors across tasks and incorporating them to guide exploration. QMP identifies shareable behaviors from other tasks and incorporates them to make exploration efficient. The proposed framework is tested on three different multi-task domains and compared with other methods. Strengths: The paper introduces the problem of selective behavior sharing for improving exploration in multi-task reinforcement learning requiring different optimal behaviors. The proposed method consists of a Q-switch for identifying shareable behaviors and is used to guide an exploration scheme incorporating a mixture of policies. The Q-function of each task is used to assess the quality of other task policies’ behaviors when applied to the task. The Q-switch acts as a filter to evaluate the potential relevance of explorative behaviors from other tasks. Weaknesses: 1. The proposed method aims to simultaneously learn multiple tasks. Do they share the same observation space and action space? If so, the contribution of the work is limited. If not, the authors should consider how to measure the similarity of two tasks. If two tasks are quite different, it is hard to transfer samples from one task to the other. 2. For incorporating shareable behaviors, the number of training samples from other tasks may be much smaller than the number of training samples generated for the current task. It would be hard to learn from training samples from other tasks. 3. The scenarios used in the experiments are simple tasks. It would be better to see the performance of the proposed method on complex problems. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: I mainly have the following concerns. 1. The proposed method should work for training a set of similar tasks. How can the similarity of those tasks be measured? 2. How can a task be trained if the number of samples from other tasks is much smaller than the number of training samples for the current task? 3. Is the proposed method effective when applied to multi-agent problems, for example, the StarCraft Multi-Agent Challenge (SMAC)? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors have discussed the limitations of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful feedback on the multi-task problem setting and the complexity of the environments. We refer to the general response where we address these concerns in detail. To summarize again: - our multi-task setup with a shared observation and action space, i.e., single-agent RL, is a common assumption shared by many prior works [1,2,3,4,5] and one we believe to be an important subset of MTRL problems. - With regard to environment complexity, online MTRL is a computationally expensive problem, so we chose tasks that were feasible for our compute budget while introducing complexity in the differing behavior between tasks. Environments like Meta-World are the standard complex choice for MTRL in prior works [3,4,8,9,10]. - “If the two tasks are quite different, it is hard to transfer samples from one task to the other.” - This is precisely the problem that our proposed idea of **selective behavior-sharing** in QMP addresses. We recall that Multitask Reacher has Task 4 of “staying at the initial position” conflicting with all other tasks of “goal-reaching”. QMP learns to ignore Task 4’s proposals (Appendix Fig. 9), which helps it learn optimally on all tasks (Appendix Fig. 8). In contrast, all sharing baselines suffer, especially on the sparse-reward Task 3, because of the conflicts from Task 4. We address your remaining questions below: ### 1. How to measure the similarity of task sets? Measuring similarity in multi-task learning is an important line of research where prior works [5, R1-1, R1-2] learn it end-to-end, learn it in an unsupervised manner, or simply define it manually. However, we posit that it is not enough to know similarity at the task level; the agent requires similarity at the task+state level. Fig. 5 shows that a hand-crafted task similarity measure is not enough to solve the Reacher task. Our proposed Q-filter induces an **implicit similarity metric** by evaluating the proposals from other tasks for the current task and state. Thus, our selective behavior sharing method works for a wide range of task sets without having to explicitly measure task similarity. We do observe that tasks that share a greater percentage of optimal behavior benefit most from behavior-sharing methods such as ours. ### 2. Fewer samples from other tasks than from the current task? - We would like to clarify that no “training samples” are transferred from other tasks, because the reward labels are incompatible. What is transferred is “action proposals”, obtained by querying the other tasks’ policy networks with the current state. - Indeed, different numbers of training samples are selected from other tasks. The Q-filter ensures that the other tasks are utilized only when they are helpful. Fig. 9 in the Appendix addresses your question and demonstrates the extent to which other policies are selected by the current task. Thus, despite the differences in tasks, QMP identifies the shareable behaviors, which results in the performance improvement. We hope this addresses the concern: it is, in fact, **not** *hard to learn from training samples from other tasks* despite their number being smaller than for the current task. ### 3. Applicability to multi-agent problems? While multi-agent RL is not the focus of our paper, many multi-agent problems could be posed as simultaneous multi-task problems where our method would be applicable. Specifically, multiple similar agents could benefit from shared behavior that other agents explored. 
However, our method is not specifically targeted towards the multi-agent setting, where there is more interest in challenges like non-stationarity and communication. ### References [R1-1] Achille, Alessandro, et al. "Task2vec: Task embedding for meta-learning." ICCV 2019. [R1-2] Zamir, Amir R., et al. "Taskonomy: Disentangling task transfer learning." CVPR 2018. We hope this clarifies and addresses the concerns raised and we would be happy to answer any further questions. --- Rebuttal Comment 1.1: Title: Thanks for your explanation. Comment: Thanks for your explanation. --- Reply to Comment 1.1.1: Title: Thank you for responding. Comment: We appreciate your response and are glad our explanation clarifies the concerns. We are happy to discuss/experiment with any leftover concerns, and otherwise, we would really appreciate it if you could consider raising your score. ---- PS: Apart from our explanation, we would like to highlight and reiterate the **new MT10 experiments** that were added during the rebuttal that can help address your feedback: - **“Complex Environment”**: MT10 — QMP improves success rates and sample efficiency, even where parameter-sharing converges suboptimally (Fig. P6). - **“When observation spaces are not shared”**: the go-to solution is parameter-sharing, so we report results on QMP + parameter-sharing on all 4 tasks (Fig P4-P6). Crucially, we show that parameter-sharing can be limiting in Reacher because the tasks can be conflicting (Fig P4), and QMP works even better than parameter-sharing. - **“Different number of training samples across tasks”**: Appendix already demonstrates QMP can flexibly use different tasks with different frequencies over training (Appendix Fig. 9) and within the episode (Fig. 10).
Rebuttal 1: Rebuttal: We thank the reviewers for the constructive feedback. We appreciate the positive notes on the motivation for selective behavior sharing, the novelty and simplicity of the Q-switch as a policy selection metric, the comprehensive analysis of identified behaviors and the combination with parameter sharing, and the clarity and easy-to-follow structure of the paper. We address the questions about the scope of the problem setup, the complexity of MTRL environments, the potential need for exploration baselines, and extending existing results below and in the attached figures, with additional experimental results where possible: [Reviewer order → R1: npBg, R2: xBdd, R3: cxcr, R4: uAJb] ### Constrained Problem Setting? [R1, R3] - Our work follows a large body of MTRL research with the same assumptions of a single agent learning multiple tasks together, thus having a shared observation and action space: Distral [1], DNC [2], CDS [3], UDS [4], MT-Opt [5]. This is a widely accepted subset of MTRL problems, where tasks differ in terms of rewards and initial states, but the agent and the transition function stay the same. We would like to emphasize that our work **further relaxes the assumption from prior work** in behavior sharing that tasks do not conflict with each other. - Further, R2 and R4 have positively commented on the generality and utility of the method. Some examples of multi-task learning on a single agent include humans, robotics (manipulation, locomotion, navigation), autonomous vehicles, dialog agents, and game agents for titles like Minecraft and Doom. Furthermore, generalist agents such as Gato [6] and Perceiver IO [7] share observation and action spaces across modalities and tasks. Selective behavior sharing can be crucial in assisting the shared learning of tasks in such applications as RL capabilities scale in the future. ### Complexity of Environments? [R1, R4]: Online MTRL from scratch is a computationally expensive problem because it is simultaneous RL on several tasks. For example, our Maze and Meta-World experiments take 1.5 days to run on an RTX 3090 with SAC. We report results on 5 seeds for all 6 baselines, which amounts to 45 GPU days just for Meta-World. Given the state of RL methods and our non-industry-scale computing budget, we find this to be the most complex yet feasible environment setup. We would like to emphasize that Meta-World has been widely used as the main benchmark in [8,9,10]. Furthermore, we make our tasks challenging for multi-task behavior sharing by including a diversity of behaviors: - Reacher: Includes tasks with *drastically different* optimal behaviors. - Maze: Scales to a *larger number* (10) of simultaneous tasks. Rebuttal Fig. P1 shows that the benefit of behavior sharing increases when the number of tasks increases from 3 to 10. - Meta-World: *Different objects* to be manipulated and *less direct* shared behavior. ### Exploration techniques needed? [R2] - We clarify that all tasks (except one in Reacher) have dense rewards. Thus, our benchmarks alleviate the need for specialized exploration and isolate the impact of behavior-sharing across tasks without confounding factors. As suggested by R2, we add a simple experiment in Rebuttal Fig. P2 that shows SAC with increased exploration (via entropy) does not improve performance. This shows that our problem cannot be sufficiently addressed by simply employing exploration heuristics. 
Our work suggests that behavior-sharing in MTRL helps beyond exploration because it shortcuts the need to explore by exploiting similar experiences made in other tasks. - R2 mentions a limited coverage of the skill-learning literature, but to the best of our knowledge, we are not missing any relevant baselines for MTRL behavior-sharing. We would greatly appreciate references to the papers the reviewer identifies as missing and are happy to discuss them in the revision. ### Result Extensions - [R2] arg-max Q-filter v/s probabilistic Q-filter: Rebuttal Fig. P3 shows arg-max performs better and does not require the temperature parameter, which justifies our design choice. - [R2] more env steps on results: Rebuttal Fig. P4 shows the no-sharing baseline saturates at suboptimal performance in Maze. - [R2] Q-switch becomes selective to its own task over training: please refer to Appendix Fig. 9. - [R3] Parameter+Behavior sharing for Reacher and Meta-World: Parameter-sharing leads to worse convergence in Reacher (Fig. P4) and MT10 (Fig. P6) than even no-sharing. While QMP consistently gains sample efficiency in isolation, its combination with parameter-sharing gives mixed results: (i) Maze: the combination helps, (ii) Reacher: parameter-sharing hurts QMP, (iii) MT4: QMP hurts parameter-sharing, (iv) MT10: QMP > parameter-sharing > both. ### Misc - We thank R2 and R4 for pointing out the notational issues, figure clarity, and typos. We have fixed them in our revision. ### [References] [1] Teh et al. Distral. Robust multitask reinforcement learning. NeurIPS, 2017. [2] Ghosh et al. Divide-and-conquer reinforcement learning. ICLR, 2018. [3] Yu et al. Conservative data sharing for multi-task offline reinforcement learning. NeurIPS, 2021. [4] Yu et al. How to leverage unlabeled data in offline reinforcement learning. ICML, 2022. [5] Kalashnikov et al. Scaling up multi-task robotic reinforcement learning. CoRL 2021b. [6] Reed, Scott, et al. "A Generalist Agent." TMLR 2022. [7] Jaegle, Andrew, et al. Perceiver IO: A General Architecture for Structured Inputs & Outputs. ICLR 2021. [8] Yang et al. Multi-task reinforcement learning with soft modularization. NeurIPS, 2020. [9] Sodhani et al. Multi-task reinforcement learning with context-based representations. ICML 2021. [10] Yu et al. Gradient Surgery for Multi-Task Learning. NeurIPS 2020. Please find further answers in the responses to individual reviews below. We hope our responses address the concerns and the questions from the reviewers. We are happy to answer any further questions. Pdf: /pdf/04e87eb005414d028cf2e89957e78da1c006b4ea.pdf
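For readers unfamiliar with where the entropy coefficient from the exploration experiment (Rebuttal Fig. P2) enters SAC, the sketch below shows the standard SAC actor objective. The `policy.sample` and `q_function` interfaces are assumed for illustration; this is a generic SAC sketch, not the paper's training code:

```python
import torch

def sac_actor_loss(policy, q_function, states, alpha):
    """SAC policy objective: maximize E[Q(s, a) - alpha * log pi(a|s)].
    A larger entropy coefficient `alpha` rewards higher-entropy, i.e. more
    exploratory, policies; we minimize the negated objective."""
    actions, log_probs = policy.sample(states)   # reparameterized actions and log pi(a|s)
    q_values = q_function(states, actions)
    return (alpha * log_probs - q_values).mean()
```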
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
The Rank-Reduced Kalman Filter: Approximate Dynamical-Low-Rank Filtering In High Dimensions
Accept (poster)
Summary: This paper develops a method for performing approximate Kalman filtering and smoothing using a low-rank approximation. Fully low-rank predict, update and smoothing equations are derived. Under certain reasonable assumptions, these equations reduce the cubic complexity to linear complexity (and quadratic if the assumptions are violated), while being “approximately exact” compared to Monte Carlo-based ensemble methods. The method is evaluated on a range of problems and appears to perform favourably. Sensible limitations and future opportunities are highlighted. Strengths: This paper addresses a critical issue – the super-linear (cubic) complexity of exact Kalman filtering/smoothing (KF/S). This makes stable and closed-form estimation intractable for anything other than small problems. The proposed solution is appealing: introduce a low-rank factorization that, through some linear algebra and recent solvers, is linear (or quadratic) in the rank. While low-rank factorizations are common, I have not seen them applied to KF/S, a domain where they are very naturally appealing. The detail in which the authors walk through the derivation is excellent, and it appears to be mathematically sound (although I am not super familiar with this application domain). The purely theoretical contributions of the paper are not huge, but they are well-judged, interesting, apt for the problem at hand (modulo Limitations), and dovetail nicely with recent literature. The experimental evaluation touches on a number of interesting examples that are high dimensional, and for which the proposed method seems to excel. The final application, although a little off-the-wall, really does a good job in highlighting how high-dimensional this method can go. Other experiments do a good job of comparing asymptotic to non-asymptotic behaviour in a range of domains. I think there is definitely an audience at NeurIPS that will enjoy and really benefit from this work. I also believe this work will engender follow-up work that further builds on this direction. The paper is on the whole well-written (modulo Weakness B) and is nicely visually presented. Figures are very well prepared, with clearly legible fonts and clear colours. Weaknesses: The paper, on the whole, is very good, but I do think there are some areas for improvement. - A: The experimental evaluation is, on the whole, good. Figure 1 is excellent in visually depicting how covariance matrices can be low rank. With that in mind, I wonder if there is a simple, low-dimensional model that the authors can present that can be easily visualised. I don’t have a great “feel” for any of the examples presented. The results for this experiment certainly don’t need to be SoTA (since existing methods will probably also perform well). Some visualisations (PCA projections of the state, low-rank factors, etc.) could then be used to highlight the differences between methods and build reader intuition. This example could be as simple as a synthetic LDS with dimension ~10. Note: I wouldn’t expect to see this during the rebuttal period, but I would _strongly_ encourage the authors to consider including it in a camera-ready. - B: [I am aware this is a very general and somewhat hard-to-address comment, so please take these more as suggestions] The paper itself is very notationally dense, to the point of obfuscating the simplicity and applicability of the method. 
While the authors are to be commended for their attention to detail, I think parts of the paper can be thinned out, with more technical content being moved to the supplement. This will result in a paper that is more immediately digestible, and will have a larger reach and impact. Some examples: - I think Sec 3.1 is great for building reader intuition, but this intuition is then lost a little bit by having to wade through long math equations, where the details aren’t actually super important. - I wonder if some of the smoothing equations can be cut to the supplement, with just the final forms retained and a note directing the interested reader to the supplement. This newly cleared space can then be used for including higher-level sketch proofs, intuition-building worked examples or text, algorithm blocks, diagrams, experiment visualisations, etc. - Including a banner figure (and maybe even an algorithm block) explaining the whole method would be nice, just to visually complement the math. - Maybe consider including a table outlining the positives and negatives of each approach under particular operating conditions? Then the benefits are stated clearly and concisely outside of the body text. - **The key takeaway** is that you have already solved the hard equations and derived an elegant solution! Therefore, I would encourage you to strip out as much complexity from the main text as possible, and add extra emphasis to the end result – even something as cartoonish as a callout box with the key equations (akin to how Sarkka has a full equation block per model/inference methodology). ## Minor weaknesses: - It would be nice if the DLRA/BUG integrator/solver were introduced in the main text. You highlight it as a key component, but there is not really much discussion of its implementation, assumptions, limitations, complexity, etc. - I don't quite understand the $(\Phi \Sigma \quad Q)$ notation in (16) (specifically the white space between the matrices). Please explicitly define it somewhere. If these changes (and the limitations) were made/commented on, then I would consider upgrading my review. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Also see weaknesses and limitations. Is operating in the continuous-time domain strictly necessary? My understanding is that a lot of complexity (at least notationally) goes away when you consider discrete time/equally spaced observations. I think all of your experiments used equally spaced observations as well. It could be simpler to present the material in the discrete-time domain, and then have a paragraph outlining that this also extends to the continuous-time/SDE domain. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: A limitation is that the authors do not consider bounding the error introduced by the low-rank approximation. For instance, the LML in (12) is for the approximate model. There is no comment on the approximation gap between the approximate model and the full-rank model. I am not fully sure on this, but I think it is possible to construct a bound based on the truncated parts of the spectrum of the matrix being truncated. If this isn’t possible, some qualitative discussion would help reinforce this point. If you were able to construct a bound – any bound! 
– on the error, then that would put a nice bow on the approximation, as opposed to relying on evaluating the error empirically. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your strong review and your detailed, positive assessment of our work! We greatly appreciate your appraisal that "there is definitely an audience at NeurIPS that will enjoy and really benefit from this work". In the following we will address your remaining concerns, which we hope further improves and solidifies your valuation of the quality of our work. **Q: Writing suggestions** A: Many thanks for taking the time to formulate concrete suggestions in order to help improve our work! We will consider them for the camera-ready version. **Q: Is there a simple, low-dimensional model that can be easily visualized, in order to give more intuition into the workings of the method?** A: We have given this idea some thought. It is not easy to construct an instructive example that is low-dimensional (and thus easily visualizable) and can be further compressed with a low-rank method. The full potential of our contribution unfolds exactly in very large systems in which the state-covariance can be tracked approximately on a low-dimensional manifold. Thus, we went in the opposite direction to focus on this by showcasing the method on a very large dynamical system with the example on rainfall prediction. There, we wanted to convey the intuition that a lot of locations have to be tracked but neighboring points will exhibit similar error estimates, which justifies the low-rank assumption. **Q: Is operating in the continuous time domain strictly necessary?** A: In many cases, continuous formulations of dynamics are sparse, which does not in general transfer to the discretized model. This would thus not allow for exploiting the sparse properties of the continuous system for computational tractability. **Q: Could a bound to the error of the method be provided?** A: This brings up an interesting point. However, it is not at present clear how to approach an error analysis in a tractable way. While there are initial ideas for _local_ error bounds, we saw value in postponing their systematic investigation to future work in order to place more focus on reader intuition behind the approach. **Q: It would be nice if the DLRA/BUG integrator/solver were introduced in the main text.** A: We give an intuition behind the method in Section 2.2 and the concrete algorithm as we use it in Section C of the supplementary material. We further provide all necessary references to the DLRA literature. In our opinion this is a good usage of the limited space for an existing method with a large body of existing literature. **Q: The notation for square-root factors of covariances (e.g. Eq. (16)) is somewhat unclear.** A: We will clarify the notation in a revised version of the paper. For example, $(\Phi \Sigma^{1/2} \quad Q^{1/2})$ denotes a block matrix that is a square-root factor of the predicted covariance matrix, in that the outer product $(\Phi \Sigma^{1/2} \quad Q^{1/2})(\Phi \Sigma^{1/2} \quad Q^{1/2})^\top = \Phi\Sigma\Phi^\top + Q$. If $\Sigma^{1/2}$ and $Q^{1/2}$ are $n\times r$ - matrices -- i.e. low-rank square-root factors of $\Sigma$ and $Q$, respectively -- then the block matrix is in turn a low-rank square-root factor of the predicted covariance matrix. --- Rebuttal Comment 1.1: Title: Staying Put. Comment: Thank you to the authors for their response. After reading your response, and the response to the other reviewers, I will maintain my scores as they were. 
With that said, I am very interested to see what the other reviewers have to say with respect to comparisons to existing literature. I am not overly familiar with this domain, so I defer to the other reviewers in that respect. However, even if there is overlap with existing works, I believe the clear presentation of a usable method is still valuable enough for inclusion, so long as the links to this existing work are thoroughly explored in the main text. Good work, and good luck. e5WD
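As a numerical sanity check of the square-root notation clarified in the rebuttal above, the following sketch (with arbitrary random matrices, purely for illustration) verifies that the block matrix $(\Phi \Sigma^{1/2} \quad Q^{1/2})$ is indeed a square-root factor of the predicted covariance $\Phi\Sigma\Phi^\top + Q$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 5

Phi = rng.standard_normal((n, n))        # transition matrix (dense here, just for the check)
S_half = rng.standard_normal((n, r))     # low-rank square-root factor of Sigma
Q_half = rng.standard_normal((n, r))     # low-rank square-root factor of Q

# Block matrix (Phi Sigma^{1/2}  Q^{1/2}): an n x 2r square-root factor
# of the predicted covariance Phi Sigma Phi^T + Q.
block = np.hstack([Phi @ S_half, Q_half])

predicted = Phi @ (S_half @ S_half.T) @ Phi.T + Q_half @ Q_half.T
assert np.allclose(block @ block.T, predicted)
```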
Summary: The paper proposes an approximation method for Kalman filtering of intractable, high-dimensional Gauss-Markov settings with linear observations. It adopts existing DLRA methods to compute a discrete-time representation of the sampled process and suggests recursive filtering and smoothing methods in a low-rank formulation. Strengths: To the best of my knowledge, the proposed filtering scheme is novel and is shown to outperform existing EnKFs under the same configurations. As opposed to EnKF approximation methods, this method is deterministic and capable of accurately recovering the optimal filter given the right parameterization. Derivation of the recursive filter seems correct, and filtering is done in a square-root fashion, which is stable and efficient. The recursion step is shown to be efficient (although under very strong assumptions). The method additionally suggests an efficient calculation for the observation likelihood. Weaknesses: The method assumes the low rank $r$, which is a drawback of many low-rank approximation methods. Even if it is beyond the scope of the paper, there should be a short discussion about choosing this parameter, or it should be mentioned as a limitation. In Figs. 2 and 3 the RRKF is compared with two other methods, both of which are Monte Carlo based and are not expected to recover the exact filter. A fair comparison should include some deterministic algorithms, e.g., [10] and [r1] below. [r1] Farrell, Brian F., and Petros J. Ioannou. "State estimation using a reduced-order Kalman filter." Journal of the Atmospheric Sciences 58.23 (2001): 3666-3680 Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The recursive method assumes knowing $\Sigma_{l-1}$ at each step, but doesn't mention how to initialize $\Sigma_{1}$ at $t_1$. Is it just $Q_{t_1}$? 2. The estimates in the Rainfall in Australia regression (Fig. 5) don't seem very accurate, especially at spatial points where data deviates from the mean value. The authors claim that "the RRKF achieves high-quality approximate estimates", but I'm not convinced. Can they support this claim quantitatively? Minor comments: Eqs. (4) and (16) are somewhat unclear. It would be helpful to clearly state the matrix dimensions. In l.146 it is stated that the truncated SVD complexity is $O(nr^2)$, which seems to be right. But can you explain/reference this claim? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The method assumes the low rank $r$. This limitation is not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your detailed and positive assessment of our work. In the following we will address your remaining concerns, which we hope further improves and solidifies your valuation of the quality of our work. **Q: The method introduces the low-rank dimension $r$ as a tunable parameter of the method. How would one choose $r$?** A: Thank you for raising this important point. It is true that the method introduces the low-rank dimension $r$ as a tunable parameter. We will emphasize and discuss this in our revision. In fact, the dynamic low-rank approximation (DLRA) literature offers a potentially quite interesting direction for follow-up work, which might automate the choice of $r$ [2]. **Q: The method should be compared to reduced-order Kalman filters ([1]).** A: [1], like us, takes the model as given but heavily relies on realization theory of stable linear time-invariant systems to obtain a reduced-order model. In contrast, the dynamic low rank method that we employ can be applied to any matrix differential equation. Consequently, we expect our present method to be easier to extend to other model classes (i.e. nonlinear). Therefore, we view a comparison to the method presented in [1] as not critical to demonstrate the value of our method. Nevertheless, we will take the opportunity to more clearly situate the present contribution within the larger body of work in low-rank methods for estimation and model identification. **Q: How is the state covariance matrix initialized?** A: In our experiments, we compute the stationary mean and a low-rank factorization of the stationary covariance matrix of the prior and condition the stationary moments on the first measurement in the respective time-series dataset. We will add this detail to the revised version. **Q: The rainfall predictions in the final experiment could be improved** A: You are right in noticing that the rainfall model yields less accurate predictions, as opposed to solving a PDE. Essentially, the rainfall data is interpolated approximately with a simple Matérn Gaussian-process model. This, of course, makes the prediction significantly cheaper, as well. Most importantly, the focus of this experiment is the high dimensionality of the state-estimation problem, which is solved efficiently and -- given the simple model and the significant rank compression -- accurately. We will clarify the focus of the experiment accordingly in the paper. **Q: The notation for square-root factors of covariances (e.g. Eq. (4) and (16)) is somewhat unclear.** A: We will clarify the notation in a revised version of the paper. For example, $(\Phi \Sigma^{1/2} \quad Q^{1/2})$ denotes a block matrix that is a square-root factor of the predicted covariance matrix, in that the outer product $(\Phi \Sigma^{1/2} \quad Q^{1/2})(\Phi \Sigma^{1/2} \quad Q^{1/2})^\top = \Phi\Sigma\Phi^\top + Q$. If $\Sigma^{1/2}$ and $Q^{1/2}$ are $n\times r$ - matrices -- i.e. low-rank square-root factors of $\Sigma$ and $Q$, respectively -- then the block matrix is in turn a low-rank square-root factor of the predicted covariance matrix. **Q: Why is the truncated SVD complexity $O(nr^2)$?** A: On tall and wide matrices the rank is upper-bounded by the smaller of both dimensions. It is therefore sufficient to compute only a subset ($r$) of the singular vectors of such a matrix, since the remaining singular values are zero. This is sometimes referred to as "compact SVD" (or "reduced SVD" (RSVD), or "economic SVD"). This is detailed, e.g. in Chapter 5 in [3]. 
We will add this reference to the camera-ready version. [1] B. F. Farrell and P. J. Ioannou, "State estimation using a reduced order Kalman filter," 2001. [2] G. Ceruti, J. Kusch, and C. Lubich, “A rank-adaptive robust integrator for dynamical low-rank approximation,” 2022. [3] Golub, Gene H., and Charles F. Van Loan. Matrix computations. JHU press, 2013. --- Rebuttal Comment 1.1: Comment: Thank you for your response.
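To illustrate the compact-SVD argument in the rebuttal above, the following sketch (with random matrices, as an assumed stand-in for the predicted factor) truncates an $n \times 2r$ square-root factor back to rank $r$; since such a matrix has at most $2r$ nonzero singular values, the economy SVD, and hence the truncation, costs $O(nr^2)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 50, 5

F = rng.standard_normal((n, 2 * r))                 # e.g. a predicted n x 2r square-root factor
U, s, Vt = np.linalg.svd(F, full_matrices=False)    # economy SVD: U is n x 2r, cost O(n r^2)
F_r = U[:, :r] * s[:r]                              # best rank-r factor (Eckart-Young)

# F_r @ F_r.T is the best rank-r approximation of the covariance F @ F.T; its
# Frobenius error equals the root sum of the truncated squared singular values.
err = np.linalg.norm(F @ F.T - F_r @ F_r.T)
assert np.isclose(err, np.sqrt(np.sum(s[r:] ** 4)))
```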
Summary: The paper presents a deterministic low-rank approximation of the Kalman filter for estimation in a high-dimensional linear Gaussian setting. This setting is important for many problems that arise in meteorology and are typically solved by the Ensemble Kalman Filter (EnKF). The paper presents formulas for low-rank approximation of the covariance for both the prediction and correction steps and demonstrates the proposed approach in different scenarios in comparison with the EnKF. Strengths: - paper is well-written, has a clear message - it considers an important problem in meteorology - the proposed approach is motivated well - comprehensive numerical experiments are given Weaknesses: - the main weakness is the fact that in most high-dimensional data assimilation problems, one only has access to a simulator for the forward dynamic update. It is not clear how the proposed approach performs low-rank approximation in that case. In the EnKF, one simulates each member of the ensemble according to the dynamic model - the numerical code for reproducing the results is not given - the relevance to the NeurIPS audience is weak. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1- What is a reference for the claim that stochasticity in the EnKF introduces unfavorable properties? It would be good to be more concrete here and explain exactly what those unfavorable properties are, since this is a central motivation for your approach. 2- The computational time plot in figure 4 is a bit surprising. How is it that the low-rank approximation has a smaller computational time compared to the EnKF? 3- Can you produce a figure that demonstrates error as a function of raw computational time? 4- What is the reference for the O(nr^2) complexity of the truncated SVD? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your detailed and positive assessment! We hope that we can additionally resolve your main point of criticism in the following and thereby convince you to raise your score. **Q: It is not clear how to apply this method in case one has to forward the dynamics with a numerical simulator.** A: This is a very important point, the relevance of which we are very aware. Since dynamics that require a numerical simulator for forward propagation are usually nonlinear, this question will receive systematic treatment in follow-up work, while this paper establishes the foundations of the method on a purely linear setup. Note that a simple, linear example of this case is showcased by the linear-advection example (Section 4.1; Fig. 1), which is a common baseline in papers that propose ensemble Kalman filter methods (e.g. [1], [2]). There, the simulation step is a simple matrix-vector product with a circulant (shift) matrix. Instead of individual ensemble members, the mean is simulated forwards and the linear operator is used to forecast the low-rank covariance matrix (as described; a short sketch of this forecast step appears at the end of this thread). **Q: The code for reproducing the results is not given** A: The code is ready to be made public and will be released upon acceptance of the paper. **Q: The relevance to NeurIPS audience is weak.** A: We strongly disagree with this statement and hope to convince you of the opposite by addressing it here. The recent momentum in the domain of ML in the sciences, like, for instance, meteorology, geophysics, oceanography, etc., calls for efficient, yet accurate tools for the analysis of high-dimensional time series. Especially in sparse-data regimes, Gaussian-process models and hybrid physics-informed/data-driven ideas from the field of "data assimilation" become more and more attractive and often are preferred over purely data-driven approaches. Our method (and planned follow-up work) aims to serve as a well-founded, simple-to-use drop-in replacement for approximate Bayesian filtering/smoothing for exactly such applications, and we hope it will gain momentum in both the ML community and the sciences. **Q: What do you mean by "unfavorable properties" that are introduced by the stochasticity of the EnKF?** A: Adding sampled noise to the ensemble introduces sampling error [3,5], which leads to erroneous estimates of the approximate moments and motivates works like [3,1] and generally all work on deterministic ensemble Kalman filters; and it causes spurious correlations in the covariance estimator, which then have to be mitigated by localization techniques (e.g. [4]). The main point is that it is preferable to approximately target the Karhunen-Loève truncation rather than sampling a random subspace. **Q: How is the RRKF faster than the EnKF in Fig. 4?** A: It is always difficult to compare absolute runtimes, due to nuances in the respective implementation. Even though we implemented all the methods ourselves and took great care in optimizing each of them to the same degree, there are likely variations. The point of our runtime analysis lies in the **asymptotic behavior**. This plot is not to show that our algorithm is faster than the EnKF. It is to show that the asymptotic runtimes of the ensemble-based algorithms are preserved by our method, while it has some major advantages over the stochastic counterparts, as described in the paper. **Q: Can you produce a figure that demonstrates error as a function of raw computational time?** A: Thanks for this suggestion. 
We included a draft of such figures in the PDF file attached to the general author rebuttal above. We would like to add those to the supplementary material of the camera-ready version. **Q: Why is the truncated SVD complexity $O(nr^2)$?** A: For tall or wide matrices, the rank is upper-bounded by the smaller of the two dimensions. It is therefore sufficient to compute only a subset ($r$) of the singular vectors of such a matrix, since the remaining singular values are zero. This is sometimes referred to as "compact SVD" (or "reduced SVD" (RSVD), or "economic SVD"). This is detailed, e.g., in Chapter 5 in [6]. We will add this reference to the camera-ready version. [1] P. Sakov and P. R. Oke, “A deterministic formulation of the ensemble Kalman filter: an alternative to ensemble square root filters,” _Tellus A: Dynamic Meteorology and Oceanography_, vol. 60, no. 2, p. 361, Jan. 2008. [2] G. Evensen, F. C. Vossepoel, and P. J. van Leeuwen, _Data Assimilation Fundamentals: A Unified Formulation of the State and Parameter Estimation Problem_. in Springer Textbooks in Earth Sciences, Geography and Environment, 2022. [3] J. S. Whitaker and T. M. Hamill, “Ensemble Data Assimilation without Perturbed Observations,” _Mon. Wea. Rev._, vol. 130, no. 7, pp. 1913–1924, Jul. 2002. [4] A. Carrassi, M. Bocquet, L. Bertino, and G. Evensen, “Data assimilation in the geosciences: An overview of methods, issues, and perspectives,” _WIREs Clim Change_, vol. 9, no. 5, Sep. 2018. [5] W. Sacher and P. Bartello, “Sampling Errors in Ensemble Kalman Filtering. Part I: Theory,” _Monthly Weather Review_, vol. 136, no. 8, pp. 3035–3049, Aug. 2008. [6] Golub, Gene H., and Charles F. Van Loan. Matrix computations. JHU press, 2013. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response to my questions and for providing clarifications. My review stays positive, but I cannot increase my score because I still find the assumed model a major practical limitation that underlies the contribution. And I find the run-time results surprising, as the ETKF and EnKF have almost the same curve, although the ETKF involves solving a linear program scaling with the number of particles. --- Reply to Comment 1.1.1: Comment: Many thanks for continuing to argue in favor of acceptance and for detailing your remaining concerns! We would like to take this opportunity to address in particular your second concern briefly, in the hope that it can be satisfactorily resolved in the course of a brief discussion. It is correct that the ETKF scales cubically in the number of particles $r$, which does not show in the experiment on asymptotic runtime. However, in this experiment, we investigated the asymptotic runtime with respect to the _state-dimension_ $n$ and let $r$, the number of particles, be fixed at a very small number $r = 5$, in order to take this quantity out of the analysis. This experiment was to show that the proposed method - like the ETKF - has the important quality that its asymptotic complexity does _not_ scale cubically in a typically very large number ($n$). As for your other concern, we agree with the assessment that the assumed linear state-space model is a remaining limitation of the approach, as addressed in the limitations section. We continue to argue that this extension is best followed up on in future work after the foundations of the method have been established in the present contribution, on a linear-dynamics setup. 
We would be very pleased to continue the discussion if there are open questions remaining and thank you again for your follow-up comment.
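A minimal sketch of the linear-advection point made in this thread: the "simulation" step is a matrix-vector product with a circulant shift matrix, applied once to the mean and once per column of the low-rank square-root factor. Process noise is omitted here as an assumption; in general the $Q^{1/2}$ block from the predict step would be appended, as in the square-root check shown earlier:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 100, 5

Phi = np.roll(np.eye(n), 1, axis=0)     # circulant shift: one advection step on a periodic grid

mean = rng.standard_normal(n)
S_half = rng.standard_normal((n, r))    # low-rank square-root factor of the state covariance

# Forecast step: the same linear operator advances the mean and each factor
# column directly -- no stochastic ensemble members are needed.
mean_pred = Phi @ mean
S_half_pred = Phi @ S_half              # Phi Sigma Phi^T stays rank r
```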
Summary: The paper proposes a deterministic low-rank approximation to the Kalman filter in high-dimensional settings where the computational complexity of the full Kalman filter is very expensive. They compare to the ensemble Kalman filter and the ensemble transform Kalman filter. Strengths: The paper is well written. Weaknesses: See questions. The biggest issue is that the comparisons are not against “fair” competing methods (i.e., any low-rank Kalman filter). They also do not mention anything about the large swath of low-rank variants of the Kalman filter. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Is the scenario for this problem linear dynamical systems? Ensemble-style Kalman filters are often used in non-linear dynamical systems and have the greatest advantage there, including computational ones, but that is in comparison to particle filters, etc. The authors cite using ensemble Kalman filters for computational cheapness, but the vast majority of those papers are from not very good journals and their accuracy is suspect... In contrast, the real-data experiments are for atmospheric problems that generally have non-linear dynamics? How are you computing the low-rank dimension of the ensemble Kalman filter, which is not a low-rank method (Figure 2)? Neither the ensemble Kalman filter nor the ensemble transform Kalman filter is explicitly attempting to estimate a low-rank approximation. It would be more appropriate to compare against a variant of the Kalman filter that is a low-rank approximation (or a sparse one at least). There are tons of variants from just a quick Google search. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your critical assessment of our work! We identified your main points of contention to be (a) that the ensemble Kalman filter is not a low-rank method and (b) we are not comparing our method to a low-rank filter. Both of these points are based on a misconception, which we will explain below, followed by addressing your remaining concerns. We believe that our response demonstrates that the points raised do not warrant a rejection of our work. **Q: The ensemble Kalman filter is "not a low-rank method" and thus not a fair comparison.** A: This statement is incorrect. The EnKF *is* indeed a low-rank method, where the rank of the covariance approximation is determined by the number of ensemble members. Concretely, $\Sigma \approx (\frac{1}{\sqrt{r-1}}X)(\frac{1}{\sqrt{r-1}}X)^\top$, where the ensemble $X \in \mathbb{R}^{n \times r}$ is of rank $r$ and $r \ll n$. When comparing our method to the EnKF in our experiments, the "low-rank dimension" of the EnKF thus corresponds to the ensemble size. In view of this, we would agree that our experimental comparisons are not exhaustive. However, we do maintain that our experimental results are certainly sufficient to demonstrate the merits of our approach. Furthermore, on the basis of this review and that of dWp4 we are confident that we can more clearly situate the present contribution within the larger body of work in low-rank methods for estimation and model identification. **Q: The problem statement contains only linear systems, whereas the EnKF is often used for nonlinear systems.** A: This paper is the first in what we hope to be a line of work. The method is motivated by being entirely deterministic -- which clearly has benefits over Monte-Carlo-based approaches in some relevant scenarios -- and it is established here on linear systems. Indeed our algorithm, as presented in the paper, only applies to linear systems. As you are aware, we explicitly discuss this in the limitations section of our work. Indeed, many state-estimation problems are nonlinear, and this setting will certainly be followed-up upon in future work. The experiment on rainfall estimation succeeds in showcasing the computational feasibility in a high-dimensional state-estimation problem; the assumption of linear dynamics does not impede this argument. **Q: Cited literature that uses ensemble Kalman filters for computational cheapness is questionable** A: We fundamentally reject this criticism. The foundational paper by Evensen [1], establishing the first version of the EnKF, presents the method as "a better alternative than solving the traditional and computationally extremely demanding approximate error covariance equation used in the extended Kalman filter". Our method solves precisely this problem. Further, in an important foundational piece on the EnKF [2], Houtekamer and Mitchell write: > "In particular, to make the ensemble approximation feasible, we have to use a fairly small ensemble with many less members than either the number of model coordinates, or the number of independent observations, or the (unknown) dimension of the dynamical system. To nevertheless obtain good results, we must (i) counter the tendency of the ensemble spread to underestimate the true error, and (ii) localize the ensemble covariances. The localization is severe and leads to imbalance in the initial conditions." 
This clearly states the agenda of developing a computationally tractable approximate inference scheme and the need to counteract degeneracies that come with the naïve ensemble-based approximation. [1] G. Evensen, (1994), Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics, _J. Geophys. Res._, 99(C5), 10143–10162. [2] P. L. Houtekamer and H. L. Mitchell, “Ensemble Kalman filtering,” Q. J. R. Meteorol. Soc., vol. 131, no. 613, pp. 3269–3289, Oct. 2005. **Q: Much of the existing low-rank Kalman filtering literature is not mentioned.** A: We will extend the related work section to position the present contribution in the broader context of dimensionality reduction in high-dimensional state-estimation problems. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thanks for the response. While the ensemble Kalman filter indeed produces a low-rank covariance where the rank is the ensemble size, it is not purposefully estimating a "good" low-rank covariance. The ensemble Kalman filter produces a low-rank matrix in the same fashion as the sample covariance matrix is a "low-rank matrix" when there are fewer samples than dimensions. But just as the SCM is not a "good" estimator of the true covariance matrix (and is not even positive definite, which violates the required properties of covariance matrices), the standard ensemble Kalman filter is not producing a "good" estimator of whatever is the true prior covariance. There are numerous other methods that assume that the hidden states of the dynamical system lie on a lower-dimensional manifold and explicitly attempt to estimate a good low-rank estimator of the covariance. The current comparison to the ensemble Kalman filter is like comparing an explicitly estimated low-rank covariance estimator (there are a lot of methods that do this, e.g., with regularization) against a baseline sample covariance matrix that has insufficient samples. That is our issue with comparing to the ensemble Kalman filter. In response to the quote: yes, using fewer ensemble members makes the process computationally cheaper, but at what cost? The original quote downplays the cost of this "computational cheapness". Personally, I find the original argument extremely weak (this is not the fault of the authors, but choosing to use it is). Normally, approximate methods / optimization relaxations / etc. always compare against the original solution to show the cost of the cheaper computations. Otherwise a similar argument could be that using fewer samples / data points makes any method computationally cheaper... --- Reply to Comment 1.1.1: Title: An attempt to clear up remaining misconceptions Comment: We would like to thank the reviewer for engaging in further discussion. However, we find it strange that the reviewer's main objection appears to be based on their dislike of the ensemble Kalman filter, rather than an assessment of the quality of our proposed alternative. We further note some remaining misconceptions. We elaborate on these issues below. **On the ensemble Kalman filter** It is indeed the case that the ensemble Kalman filter leaves a lot of room for improvement in terms of tracking a low-rank approximation to the state covariance matrix. In fact, the situation is even worse, as ad-hoc tricks to improve the approximation quality often need to be employed in practice. 
Nevertheless, the ensemble Kalman filter is often used in practice, which is precisely why we find it pressing to address these deficiencies in algorithm design. The reviewer is entitled to their opinions on the arguments adduced in the literature on the ensemble Kalman filter. However, it is rather strange to take our citation of the arguments for developing the ensemble Kalman filter as an endorsement of the same. The purpose of the citation was to demonstrate that, contrary to the reviewer's belief, a primary reason behind the development of the ensemble Kalman filter was indeed to reduce computational cost - this is correct. **On comparisons against the original solution** The reviewer suggests that approximate methods should be compared against the original solution to show the cost (in terms of accuracy) paid for the lessened cost in computation. We completely agree! In fact, we have done this in the original manuscript; the reviewer is invited to examine Figures 2 and 3 to see the result. **On alternative methods** The reviewer mentions "numerous other methods" for attacking the present problem formulation. However, not a single specific example is given. We would of course be happy to appropriately address specific suggestions for related work if examples were given; see, e.g., the responses to dWp4 and 2SHM. **On positive definiteness of covariance matrices** Lastly, we would like to do away with the misconception that covariance matrices need to be positive definite; indeed, it is sufficient that they are positive semi-definite. This property (positive semi-definiteness) of course holds for the sample covariance matrix, the ensemble Kalman filter approximation, and our reduced-rank method.
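The rank and positive-semi-definiteness points from this exchange can both be checked numerically. The sketch below (random data, purely illustrative) forms the EnKF sample covariance from scaled ensemble anomalies; note that centering reduces the rank to at most $r - 1$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 200, 10                          # state dimension n >> ensemble size r

X = rng.standard_normal((n, r))         # ensemble members as columns
A = (X - X.mean(axis=1, keepdims=True)) / np.sqrt(r - 1)   # scaled anomalies

Sigma_hat = A @ A.T                     # EnKF sample covariance: n x n but low rank
print(np.linalg.matrix_rank(Sigma_hat))             # at most r - 1 after centering

eigvals = np.linalg.eigvalsh(Sigma_hat)             # PSD by construction:
assert eigvals.min() > -1e-10                       # eigenvalues >= 0 up to rounding
```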
Rebuttal 1: Rebuttal: We are very grateful to all reviewers for their high-quality reviews and for providing constructive feedback. We are inspired by the reviewers' confirmation that our work addresses a "critical issue" (e5WD) and an "important problem" (U69V) to which our "proposed solution is appealing" (e5WD). We would like to thank the reviewers for judging our manuscript as "nicely visually presented" (e5WD), "well-written" (e5WD, U69V, HH4H, dWp4), conveying a "clear message" (U69V); and for highlighting the novelty of the method (e5WD,2SHM) and the "excellent" (e5WD) and "mathematically sound" (e5WD) derivation. We are particularly encouraged by the assessment that "there is definitely an audience at NeurIPS that will enjoy and really benefit from this work" (e5WD). Further, we are grateful to the reviewers for suggesting improvements, which will greatly benefit a revised version of the manuscript. The most significant objection is that the relation of the proposed method to other existing approaches to low-rank filtering should be made more explicit. We will take the opportunity to situate the present contribution within the larger body of work in low-rank methods for estimation and model identification in the camera-ready version. Pdf: /pdf/b191e30de768fcdc8e363879f1c2ab5bbbaebd0b.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: Filtering (estimation) and smoothing of large-dimensional state-space models are computationally challenging. Exact Kalman filtering, at least insofar as LTI SDEs are considered, is a mature solution for such tasks, but the cubic complexity of such methods makes them computationally intractable. This paper proposes an approximate Gaussian filtering and smoothing method that propagates low-rank approximations of the covariance matrices. To enable this, i) Lyapunov equations are projected onto a manifold of lower rank in the prediction step, combined with ii) square-root filtering, which together offer a numerically stable and tractable solution. Four examples are provided, with increasing complexity, and in one case a runtime analysis is also carried out to support the claims. Two measures, RMSE and covariance deviation, are used to assess the method's performance. Corollary 1 provides a "Kalman-like" result (an important contribution of the paper), while Section 3.2 discusses the filtering algorithm (Section 3.4 for smoothing) and Section 3.3 provides the time complexity of the method. Strengths: The proposed method differs from existing ensemble-based approaches in that the low-rank approximations of the covariance matrices are deterministic, rather than stochastic. This is important because it allows the method to reproduce the exact Kalman filter as the low-rank dimension approaches the true dimensionality of the problem. The time complexity analysis is well written, and the authors clearly demonstrate that their method reduces computational complexity from cubic (for the Kalman filter) to at most quadratic in the state-space size (and linear in some cases; refer to Proposition 3 and the assumptions therein). Weaknesses: My major issue with this paper is that there is a large body of work on estimation of high- and infinite-dimensional dynamical systems, and this paper does not position itself in that context. There are numerous works using sparse identification (Sparse reduced-order modeling: Sensor-based dynamics to full-state estimation), Koopman-based methods (A Robust Data-Driven Koopman Kalman Filter for Power Systems Dynamic State Estimation), balanced-truncation ROMs with Kalman filtering (State Estimation Using a Reduced-Order Kalman Filter), DMD-based ROMs with Kalman filtering (Dynamic mode decomposition and robust estimation: Case study of a 2D turbulent Boussinesq flow), etc. In the above-mentioned works, the high-dimensional governing equations are usually first projected onto some appropriate (low-rank) basis, and then an appropriate filter is designed. Once estimation is done, the reconstruction is projected back into the physical domain. This also enables the application of the method to i) nonlinear dynamical systems and ii) unknown systems. Such approaches benefit from lower computational cost, similar to the proposed method in the paper. In a sense, the authors acknowledge, more or less, this line of work in the last paragraph of Section 1. However, when they write "In contrast to our work, their method assumes that the dynamics unfold entirely in a lower-dimensional space, and conditioning on measurements happens by transporting the low-dimensional diffusive process to the full space by a projection that is also assumed to be known", there is no quantitative or qualitative comparison to support their claim.
Take, for example, Example 1 considered in this paper: suppose the authors first project the PDE onto a low-rank ODE, design a standard Kalman filter on that manifold, and, once the estimate has been propagated in time, map the low-dimensional state back into the actual space. What are the advantages and disadvantages? Please note I am not simply asking for more comparisons (in addition to, say, EnKF) but rather posing a conceptual question about the real advantage of the proposed method over the state of the art in ROM-based estimation. I'm sure the authors are aware of works like "Krylov subspace methods for solving large Lyapunov equations", where large-scale Lyapunov equations are solved with efficient algorithms. I hope the authors clarify their novelties in the rebuttal. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In Section 1, please be more specific about such unfavorable properties (line 45). Overall, the issue of GP is not discussed clearly in this section or elsewhere in the manuscript. Any insights on whether such an approach can be applied to nonlinear systems? What if the dynamics are unknown (A and B)? I'd suggest moving Section 4.4 to either the last section or right after 4.1. Currently the flow of the results section is hard to follow: first, three examples are given to demonstrate the accuracy of the model with the given two measures, then a discussion of runtime, and finally yet another result for rainfall data. It would be helpful to include a discussion on the impact of snapshot noise (measurement noise) on the results. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
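For concreteness, the ROM-then-filter pipeline sketched in this review could look as follows; this is a hedged illustration assuming a POD basis computed from snapshots and known system matrices A, H, Q, R, with all names and shapes chosen for exposition rather than taken from the paper.

```python
# Illustrative sketch of ROM-based state estimation: project a high-dimensional
# linear system onto a POD basis, run a standard Kalman filter in the reduced
# coordinates, and lift the estimate back to the full space. Assumed inputs:
# a snapshot matrix (columns are states) and system matrices A, H, Q, R.
import numpy as np

def pod_basis(snapshots, r):
    """Leading r left singular vectors of the snapshot matrix."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]                                 # (d, r), orthonormal columns

def reduced_kf_step(V, A, H, Q, R, z, P, y):
    """One predict/update step of a Kalman filter in the r-dim. POD coordinates."""
    Ar, Hr, Qr = V.T @ A @ V, H @ V, V.T @ Q @ V    # Galerkin-reduced operators
    z_pred = Ar @ z                                 # predict in reduced space
    P_pred = Ar @ P @ Ar.T + Qr
    S = Hr @ P_pred @ Hr.T + R                      # innovation covariance
    K = np.linalg.solve(S.T, Hr @ P_pred.T).T       # gain K = P_pred Hr^T S^{-1}
    z_new = z_pred + K @ (y - Hr @ z_pred)          # measurement update
    P_new = (np.eye(V.shape[1]) - K @ Hr) @ P_pred
    return z_new, P_new, V @ z_new                  # lift the mean back: x = V z
```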
Rebuttal 1: Rebuttal: Many thanks for your careful reading of our paper as well as your thoughtful critique. In our response, we would like to divide the major points of contention into two categories: "Address literature on reduced order methods" and "model order reduction versus our approach". These will be responded to in order, followed by answers to the more specific questions raised. It is our assessment that an elaboration on related work in the camera-ready would be sufficient to address your concerns, and it is our hope you come to the same conclusion upon reading our response. Before proceeding we shall refer to the papers cited in your review as follows - [1] Sparse reduced-order modeling: Sensor-based dynamics to full-state estimation - [2] A Robust Data-Driven Koopman Kalman Filter for Power Systems Dynamic State Estimation - [3] Dynamic mode decomposition and robust estimation: Case study of a 2D turbulent Boussinesq flow - [4] State Estimation Using a Reduced-Order Kalman Filter - [5] Krylov subspace methods for solving large Lyapunov equations and we will also refer to some of the references of our paper by - [6] P. Sakov and P. R. Oke, "A deterministic formulation of the ensemble Kalman filter: an alternative to ensemble square root filters," 2008. - [7] J. S. Whitaker and T. M. Hamill, "Ensemble Data Assimilation without Perturbed Observations," 2002. - [8] A. Carrassi, M. Bocquet, L. Bertino, and G. Evensen, "Data assimilation in the geosciences: An overview of methods, issues, and perspectives," 2018. - [9] W. Sacher and P. Bartello, "Sampling Errors in Ensemble Kalman Filtering. Part I: Theory," 2008. ### Address literature on reduced order methods The cited papers [1,2,3] are concerned with low-rank modelling of dynamical systems from measured data. The paper [4] is concerned with low-rank approximations of a given model based on classical theory of linear time-invariant systems. Once a low-rank model is obtained for the phenomenon under study, state estimation becomes computationally trivial, in the sense that the model is not very large. These contributions are of course related to the present paper under the broader context of low-rank/dimensionality-reduction methods. However, they do differ on some key points: - [1,2,3] allow themselves to construct a (low-rank) model from measured data, whereas we assume the model class is fixed and the objective is to make inference tractable with minimal violence done to the model. - [4], like us, takes the model as given but heavily relies on realization theory of stable linear time-invariant systems to obtain a reduced-order model. In contrast, the dynamic low-rank method that we employ can be applied to any matrix _differential_ equation. Consequently, we expect our present method to be easier to extend to other model classes (i.e., nonlinear). Finally, we will touch on the mention of paper [5], which develops methods for obtaining low-rank approximations to the solutions of _algebraic_ Lyapunov equations, whereas the problem we tackle with the dynamic low-rank method is to obtain low-rank approximations to the solutions of Lyapunov _differential_ equations. Consequently, the methods of [5] cannot directly be brought to bear on our problem. ### Model order reduction versus our approach The review brings up Example 1 in the paper to support a discussion of model order reduction (i.e., [1,2,3,4]) versus the present approach of simply tracking a low-rank approximation to the covariance matrix while keeping the model intact.
Broadly speaking, the approaches of [1,2,3,4] are expected to work well under the following assumption: - [A1] The state evolves on a low-dimensional manifold (approximately). Whereas our approach is expected to work well under the following assumption: - [A2] The error in the state estimate evolves on a low-dimensional manifold (approximately). These two assumptions are quite different indeed and of course come with certain advantages and disadvantages. We cannot say how the quality of state estimation would differ between the two approaches under [A1]; we do, however, note that the model reduction approach would be more economical, certainly in storage, but perhaps also in computation. On the other hand, under [A2] it is not clear that the model reduction approach would be appropriate in all cases. In particular, it could be the case that the state of the system really does evolve in a high-dimensional space whereas the uncertainties are concentrated in a lower-dimensional subspace -- in this case we expect our proposed approach to be more successful. **Q: What is meant by "unfavorable properties" of the stochastic nature of the EnKF?** A: Adding sampled noise to the ensemble introduces sampling error [7,9], which leads to erroneous estimates of the approximate moments and motivates works like [6,7] and further work on deterministic ensemble Kalman filters. It further causes spurious correlations in the covariance estimator, which have to be mitigated, e.g., using localization techniques (e.g., [8]). The main point is that it is preferable to approximately target the Karhunen-Loève truncation rather than sampling a random subspace. **Q: Your work only regards linear dynamical systems, whereas existing methods are usually applied to nonlinear dynamics** A: Indeed our algorithm, as presented in the paper, only applies to linear systems. As you are aware, we explicitly discuss this in the limitations section of our work. Indeed, many state-estimation problems are nonlinear, and this setting will certainly be followed up on in future work. **Q: Writing suggestions** A: Many thanks for taking the time to formulate concrete suggestions in order to help improve our work! We will consider those for the camera-ready version. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their detailed response. I am in the process of reading it carefully.
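To make the differential-versus-algebraic distinction above concrete, here is a crude illustrative sketch (assumptions: explicit Euler time stepping and a naive eigenvalue truncation; this is only meant to convey the idea of tracking a low-rank solution of a Lyapunov differential equation and is not the paper's scheme):

```python
# Illustrative sketch: integrate the *differential* Lyapunov equation
# dP/dt = A P + P A^T + Q in time while truncating P to rank r each step.
# Krylov-type methods, by contrast, target the *algebraic* equation
# A P + P A^T + Q = 0. Euler stepping and eigenvalue truncation are naive
# placeholders, not the scheme proposed in the paper.
import numpy as np

def lyapunov_rhs(P, A, Q):
    return A @ P + P @ A.T + Q

def truncate_rank(P, r):
    """Best rank-r approximation of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(P)
    idx = np.argsort(w)[::-1][:r]              # keep the r largest eigenvalues
    return (V[:, idx] * w[idx]) @ V[:, idx].T

def integrate_low_rank(A, Q, P0, dt, steps, r):
    P = P0
    for _ in range(steps):
        P = truncate_rank(P + dt * lyapunov_rhs(P, A, Q), r)
    return P
```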
Trust Your $\nabla$: Gradient-based Intervention Targeting for Causal Discovery
Accept (poster)
Summary: The paper proposes to exploit gradient information to efficiently select intervention targets. The novel approach is called Gradient-based Intervention Targeting (GIT); it is claimed to be the first gradient-based intervention targeting method. A number of realistic benchmarks are used to validate the approach. Strengths: The approach is sound, and it is reported that it performs quite well. Weaknesses: The idea of making use of gradient information is not new. The actual method relies on scores (Eq. 4) that are proportional to the expected magnitude of the gradient. Although it might be novel to apply gradient information to identify the nodes for interventions, gradient magnitude information was already well studied in a number of theoretical and practical publications. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I would like a clarification: the parameters of the model are associated with graph edges. So, the gradient information is information on the edges. Once a batch of targets with the highest scores is identified, what information (observation) is added exactly? If I remember correctly, BnLearn generates data as a matrix: number of observations (rows) \times variables (nodes, in columns). So, if you decide that you need more information on a particular node (column), how can you add such partial information? Aren't you going to get missing values for the other nodes in this case? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the feedback. We are glad that the Reviewer appreciates that GIT is sound and that the experiments are well tailored to the problem. Below we provide answers to the Reviewer's questions and concerns: 1. [Lack of novelty] - It is true that the idea of using the gradient signal to guide data selection has proven fruitful in many contexts throughout machine learning, especially in active and curriculum learning. However, it is not immediately clear how to apply these ideas to intervention targeting, and even if so, there is still a gap between expectations and how to achieve them. Our work bridges this gap: we are the first to associate gradients of the structural loss used by causal discovery methods with intervention targeting. We show how to bring all the necessary components together in a simple manner that achieves strong empirical performance and is agnostic to the underlying gradient-based causal discovery framework. GIT is accompanied by a theoretical analysis, which underpins its validity. We believe all this is a proper scientific contribution with value to the community. *Questions:* 1. [Clarification of interventional data acquisition] - Once the target node with the highest score is identified, an intervention is performed on that node (in the real, underlying model). This effectively changes the joint distribution of the data (see Equation 2 in our paper and lines 107-116). We sample observations of all random variables in the graph from this *new* distribution. We then add this new information to the set of interventional data $D_{int}$ (see Algorithm 1) and retract the intervention in the real model. Apart from datasets, BnLearn also provides models that allow conducting such operations and collecting arbitrarily large datasets. Please let us know if the above answers the Reviewer's comments and questions. We would also be more than happy to address any other suggestions. In case there are none, and given the Reviewer's positive outlook on soundness, presentation, and contribution, we would gently ask the Reviewer to consider increasing the score. --- Rebuttal Comment 1.1: Comment: I acknowledge that I read the authors' response. --- Reply to Comment 1.1.1: Comment: We thank the Reviewer for reading our response and their previous constructive feedback that further helped to improve our work. We would be happy to answer any open questions and, otherwise, would kindly ask the Reviewer to consider raising their score.
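A hypothetical sketch of the acquisition round described in this reply; `true_model` (with `do`/`undo`/`sample`) and `score_fn` are illustrative stand-ins for the real underlying model and the gradient-magnitude scores of Equation 4, not the authors' actual API.

```python
def acquire_interventional_round(true_model, score_fn, D_int, batch_size=32):
    """One GIT-style acquisition round, following the reply above (illustrative).

    `true_model` stands in for the real underlying model (e.g., a BnLearn
    network): `do(node)` performs a hard intervention, `sample(n)` draws n
    joint observations of *all* variables (hence no missing values), and
    `undo(node)` retracts the intervention. `score_fn()` returns a dict of
    per-node gradient-magnitude scores.
    """
    scores = score_fn()                            # one score per candidate node
    target = max(scores, key=scores.get)           # node with the highest score
    true_model.do(target)                          # intervene in the real model
    batch = true_model.sample(batch_size)          # rows contain every variable
    true_model.undo(target)                        # retract the intervention
    D_int.extend((target, row) for row in batch)   # enrich the interventional set
    return target
```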
Summary: This paper proposes a method for efficient acquisition of interventional data. Specifically, given observational data and a causal discovery algorithm A, the authors propose a method (GIT: Gradient-based Intervention Targeting) that chooses particular intervention(s) to enrich the data and enhance the causal discovery process. The norm of the gradient (of the structural loss) is used as the acquisition function. The paper indeed tackles an important problem (causal discovery with active experiment design), with no apparent restrictions on the random variables (though to varying degrees of success in the empirical evaluation). The paper provides a fair number of empirical evaluations and compares the method to a decent number of baselines. Strengths: 1. The paper tackles an important problem and provides a clean solution that seems to be compatible with many causal discovery frameworks. 2. The paper is generally well-written and easy to digest. 3. Extensive empirical evaluations are provided, which show that the proposed method is superior especially in the low-data regime. Weaknesses: 1. I believe the main weakness of the paper is that I do not fully get why the gradients estimated from “imaginary” interventional data approximate those you get from “real” interventional data. It would be great to: (i) give a hypothesis for why this is the case; and/or (ii) state clearly (in the main text) that it is indeed surprising and is an interesting direction of future work. I think this is an interesting phenomenon that should be highlighted more in the text. 2. There is a clear improvement of GIT over baselines; however, I find it surprising that the random baseline is doing much better than the other baselines. You state that this is due to approximation errors and model mismatches. Can you elaborate? Is it safe to conclude that in this setting, Random is the SOTA method (other than GIT)? 3. While the framework seems generic and not specific to categorical variables, the experiments suggest that GIT has minimal advantage over Random in the continuous settings. Figure 6 suggests that GIT only outperforms Random-fixed (which I’m not sure is a reasonable baseline). 4. Nitpicks: - Line 291: section 5.3 - Line 300: Did you mean SAUSHD instead of EAUSHD? - The result in Appendix A is not really specific to GIT; I believe it is true for all algorithms. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I like the results where you show the correlation of the scores produced by different acquisition methods. However, I think further exploration here would be valuable. For example: - How the correlation increases/decreases with an increasing/decreasing number of samples in the Monte Carlo approximation ($|\mathcal{D}_{G,i}|$). This may help understand the gap between the privileged and simulated gradients (i.e., does approximation error play a big role?). - Is the correlation in earlier batches lower than that of later ones? And by how much? I suspect the answer to the first question is yes, since for later batches the estimated DAG is "closer" to the true DAG. 2. I'd be interested to see an evaluation of the performance of $\epsilon$-greedy GIT. My presumption is that it could serve as a useful safeguard against possible inaccuracies in gradient estimation or discrepancies with the "real" privileged gradient. 3. I'd like to understand the rationale behind choosing Random-fixed as a baseline in the experiments under the DiBS framework.
From my viewpoint, this baseline selection seems somewhat arbitrary. Could you elaborate on why you chose this specific baseline? 4. In Algorithm 2, you choose a set of interventions in each round (32, as stated in Section 5). Do these correspond to the number of data samples generated? Or are they the number of nodes intervened on? If the latter, does that mean only one point is generated per intervention? Would increasing the number of samples per intervention help? 5. From a quick read of Appendix B, my understanding leads me to believe that these theorems are specific to GIT-Privileged and not to GIT with “imaginary” interventional samples. Could you confirm if my interpretation is correct or if I may have misunderstood the points made? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No negative societal impact is expected. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the Reviewer's feedback. It's gratifying to know that the Reviewer recognizes GIT as a robust solution to the crucial problem of intervention targeting, and acknowledges its compatibility with various causal discovery frameworks. We're also pleased that the Reviewer acknowledges the thoroughness of our empirical evaluation. Below, we address the raised concerns: 1. [“Imaginary” gradients approximate the “real” ones] - Consider an algorithm (e.g., ENCO) that throughout training consecutively learns to better approximate the real solution. In consequence, the “imaginary” data sampled from such a model progressively becomes a better approximation of the real interventional data. In Appendix F.5 we also empirically observe that the correlation between the scores of GIT and GIT-privileged is high. Note that the approach of using “imaginary” gradients is also well grounded in the active learning literature (Ash, Jordan T., et al. 2019). At the same time, we agree that the properties of this process are far from being understood. We add this point to the future work section. 2. [Strength of Random baseline] - Indeed, at first this came as a surprise to us as well, and, in our view, it reinforces the need for a theoretical understanding of the fail cases of those methods. We suspect that the observed behavior stems from the poor quality of the approximations used internally by AIT and CBED. For example, the MI estimator used in CBED might be susceptible to errors in high-dimensional situations. Note, however, that random selection is a strong baseline in some active learning scenarios. For instance, (Lowell et al. 2018) observe that an actively acquired dataset does not consistently outperform training on i.i.d. sampled data. 3. [Continuous variable setup and Figure 6] - Thank you for bringing up this point. Indeed, we only show that GIT is applicable to the continuous situation but not necessarily effective. We now add this to the limitations. Note that in this continuous setup, all the baseline methods (not only GIT) perform very similarly (except for Random-fixed). We speculate that in the continuous case, it is perhaps more important to identify the values of the nodes than to properly select the intervention nodes (consider the difference between Random-fixed and Random-uniform in Figure 6 in our Appendix, and Figure 3a/b in Tigas et al. 2022). A detailed explanation is an interesting direction for future work. Questions: 1. [Further exploration of correlations] - Following the Reviewer's suggestion, we analyzed how correlations change over time. The results are shown in Figure R.3 in the Rebuttal PDF. Surprisingly, we noticed that although the correlation between GIT and GIT-privileged remains high, the overall correlation between different methods decreases as the number of acquired batches increases. We hypothesize that early in the training process, identifying the best target nodes is relatively easy, leading to similar solutions across methods. However, as the number of steps increases, finding nodes that significantly improve performance becomes more challenging (this idea is also supported by the initial rapid SHD convergence shown in Figure 7 in the Appendix). 2. [Evaluation of $\epsilon$-greedy GIT] - Thank you for your suggestion. We evaluated GIT, Random, and $\epsilon$-GIT ($\epsilon$=0.33) on collider and jungle graphs. The results are presented in Figure R.2 in the rebuttal PDF.
Interestingly, for the jungle graph (where GIT performed well, as shown in Figure 2 of the main paper), GIT and $\epsilon$-GIT exhibit similar performance. Additionally, as pointed out by the Reviewer, $\epsilon$-GIT seems to enhance performance in cases where GIT struggles due to inaccuracies or disparities with the true gradient. This is evident from the performance on the collider graph in Figure R.2. 3. [Rationale behind choosing Random-fixed] - We have chosen to include Random-fixed to maintain compatibility with CBED (Tigas et al. 2022). 4. [Number of interventions] - In Algorithm 2 we acquire data from only one intervention. The quantity 32 corresponds to the number of data samples drawn from the selected interventional distribution. We have also experimented with increasing this number (see Section 5.3). Those experiments show that increasing the number of samples to 1024 can significantly improve the results. 5. [Theorems in Appendix B] - The results are applicable to the GIT method. More specifically, Theorems 5-9 guarantee the “local convergence” of ENCO, meaning that when enough real interventional data is acquired from a specific node, the parameters of all neighboring edges will converge and stop generating gradients. Based on this observation we conclude, in Theorem 10, that if we choose any node that has a positive gradient magnitude, we will be able to discover more of the graph structure. We understand that the current proof is hard to follow. We will clarify the whole section in the camera-ready version. We again thank the Reviewer for the useful feedback. We hope that our answers clarify the issues. We would be happy to answer more questions, if needed; otherwise, if everything is clear, we gently ask that the Reviewer consider raising the score. References: * Lowell, David, Zachary C. Lipton, and Byron C. Wallace. "Practical obstacles to deploying active learning." arXiv preprint arXiv:1807.04801 (2018). * Tigas, Panagiotis, et al. "Interventions, where and how? Experimental design for causal models at scale." Advances in Neural Information Processing Systems 35 (2022): 24130-24143. * Ash, Jordan T., et al. "Deep batch active learning by diverse, uncertain gradient lower bounds." arXiv preprint arXiv:1906.03671 (2019). --- Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed response. I have revised the assigned score accordingly. --- Reply to Comment 1.1.1: Title: Thank you Comment: We again thank the reviewer for useful questions and suggestions.
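The $\epsilon$-greedy variant discussed above admits a very short sketch; this is an illustrative reconstruction (the `scores` dict and the default $\epsilon = 0.33$ mirror the setup in the reply, but none of this is the authors' code):

```python
import random

def epsilon_greedy_target(scores, epsilon=0.33, rng=random):
    """Pick an intervention target: random with prob. epsilon, else top GIT score.

    `scores` maps candidate nodes to gradient-magnitude scores (illustrative).
    """
    if rng.random() < epsilon:
        return rng.choice(list(scores))        # exploration: uniformly random node
    return max(scores, key=scores.get)         # exploitation: highest-scoring node
```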
Summary: This paper addresses the problem of targeting interventions to learn the causal graph. The ideal goal is to use the minimum number of interventions necessary for identifiability. The authors propose a Gradient-based Intervention Targeting method (GIT) that utilizes the gradient estimator of a gradient-based causal discovery framework to provide signals for the intervention acquisition function. The paper includes extensive experiments on simulated and real-world datasets, demonstrating that GIT performs on par with competitive baselines, surpassing them in the low-data regime. Strengths: 1. The paper addresses a significant problem in causality and machine learning, presenting a new approach to intervention targeting. 2. The proposed GIT method is model-agnostic and leverages the gradient estimator of a gradient-based causal discovery framework. 3. The authors provide extensive experiments on both simulated and real-world datasets, demonstrating the effectiveness of their approach, especially in low-data scenarios. 4. The paper is well written. Weaknesses: 1. The paper could benefit from a more detailed explanation of the GIT method and its underlying principles. A summary of gradient-based methods and why they work is also important to make the paper self-contained and easier to follow for people with a general background in causality and machine learning. 2. The authors could provide more insights into the limitations of their method, including potential biases and assumptions, especially as there is no solid theoretical justification for the convergence of the method. To my understanding, the convergence claim only holds with ENCO and under hard-to-interpret assumptions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The majority of the proofs in the appendix are just proof sketches; I think it is better to detail them so a more solid theory for the proposed method can be developed. 2. While the method is intuitively appealing and the extensive experiments provide very good results, is it possible for it to fail for some edge-case joint probability distributions? 3. Is there an empirical estimate of the number of samples needed? Or a theoretical upper bound? 4. As neural causal discovery is more on the heuristic side (at least to my understanding), if the method badly fits the data distribution, how does the error propagate to GIT? 5. Do the authors think (as a conjecture) that GIT captures the minimum number of interventions necessary for identifiability? Or do they think that there is a limitation to it; if so, is it possible to give some examples? 6. Can the authors provide details on the assumptions of Theorems 5 to 10 (also please note that Theorem 9 is missing in the appendix)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The empirical results are extensive and show great performance of the proposed algorithm. A more theoretical understanding of the method would provide more insight into its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the time and effort put into providing valuable feedback on our work. We are pleased to hear positive comments on the novelty of our work, the clarity of the manuscript, and the extensiveness of our experiments. We appreciate the constructive critique and questions, which can help improve the publication. Below we provide answers to the Reviewer's questions and concerns: 1. [More detailed explanation] - We agree with the Reviewer that a detailed description of the method benefits any paper. We feel that we put effort into explaining GIT in detail and would like to bring the Reviewer's attention to the appropriate parts in the main body of the text: (a) intuition (lines 147-150) with grounding in existing literature (lines 91-98); (b) a pictorial representation of GIT (Figure 1); (c) formalism (lines 169-183); (d) assumptions and pseudo-code (lines 184-195); and (e) an intuitive argument for theoretical justification (lines 196-205). Furthermore, in Section 3 we offer brief descriptions of two related gradient-based causal discovery methods: DIBS and ENCO (with more details postponed to Appendix C). We are committed to making the description as clear and easy to digest as possible, so if the Reviewer still feels that some details are missing, we kindly ask for suggestions, so we can address them accordingly. 2. [Insights into the limitations] - Thank you for this suggestion. We add the following limitations and future work section: * The theoretical grounding of the method involves multiple hard-to-interpret assumptions. Further work that simplifies the assumptions and identifies fail cases would benefit the community. * We provide proof that epsilon-greedy GIT converges with any causal discovery framework. As for pure GIT, we show its convergence only with the ENCO framework. The development of a more general theory that solidifies the approach is a promising future work direction. * Our method can be applied in the soft-intervention case, and providing an appropriate experimental evaluation would be an interesting follow-up to this work. * Our method may need more interventions than the minimal number required to identify the causal structure. For example, GIT can be biased towards high-degree nodes, as interventions on them tend to affect a larger number of structural parameters and result in larger gradients, which might cause suboptimal choices. We would also like to bring the Reviewer's attention to the following insights described in the paper: the method's reliance on the interventional batch size (Section 5.2) and on the number of graph samples (Appendix F.4), a thorough analysis of the method's behavior (Section 5.3), and detailed discussions in Appendices F.5 and F.6. Questions: 1. [Proof sketches] - Thank you for raising this issue. We will clarify Appendix B in the camera-ready version and formalize the GIT convergence theorem. 2. [Edge cases] - The proof in Appendix A shows that epsilon-greedy GIT has the same convergence guarantees as the underlying framework. Thus, for a discussion of fail cases, we refer the Reviewer to the underlying causal discovery algorithm (for example, Appendix B.2.4 in ENCO). 3. [Estimation of samples needed] - Yes, we experimented with different interventional batch sizes. For the specific graphs we tested, 1024 samples allow the method to intervene on each node at most once before SHD converges to 0 (see Section 5.3 and Figure 9 in Appendix F).
We found this result sufficient to conclude that 1024 samples are enough for the purpose of our experiments. 4. [Error propagation] - Unsurprisingly, GIT is susceptible to the errors of the underlying methods. Due to a lack of proper theoretical frameworks, these are hard to quantify. However, our positive empirical results let us hold an optimistic belief that there is significant resilience to these errors. We conjecture that this might be due to the fact that the probabilistic nature of the causal frameworks may, on average, shield us from the worst-case scenarios. We also identified a case in which the errors might be hard to overcome. Consider the collider graph from Figure 7 in the Appendix. The collider graph is difficult to learn because it is hard to precisely model the conditional distribution at the collider node. Our method, together with BOED, struggles to extract useful information from the model in this example, as neither is able to match the Random baseline. 5. [Required number of interventions] - Thank you for this question. We do not expect that GIT needs only the minimum number of interventions. We add this to the limitation list. This is for two reasons: the underlying framework may not have this property (which is, for example, the case for ENCO), and GIT might not optimally choose the required interventions. On the positive side, our empirical results clearly demonstrate that GIT performs well and in practice outperforms existing methods. 6. [Assumptions details] - Theorems 5-10 are a direct adaptation of the convergence results for the ENCO method. In consequence, apart from Observations 1-3 mentioned in our Appendix B, we also have to assume all the requirements made by ENCO - we refer to Section B.1 of the ENCO appendix (Assumptions 1-5) and the intuition explained in ENCO Appendix B.2.1 and B.2.2. We understand that the current proof is hard to follow. We will clarify the whole section in the camera-ready version. We again thank the Reviewer for the valuable feedback. Should you have any other questions or concerns, please let us know. If the answers are satisfactory, we gently ask the Reviewer to consider raising the score. References: [ENCO] Phillip Lippe et al. Efficient neural causal discovery without acyclicity constraints. 2021. [DIBS] Lars Lorch et al. DiBS: Differentiable Bayesian structure learning. NeurIPS, 2021. --- Rebuttal Comment 1.1: Comment: Thank you for your response! However, I believe the current score honestly reflects the quality of this work and will keep my score for now. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you again for many useful comments. If you have any further suggestions or questions, please let us know.
Summary: The paper studies the problem of efficiently inferring causal structure from data. Observational data often falls short in providing a full picture of the causal structure, while obtaining interventional data tends to be costly. Therefore, optimizing the collection of interventional data to reduce the number of necessary experiments becomes crucial. In this context, the authors present Gradient-based Intervention Targeting (GIT), a novel approach to enhancing causal discovery from data. Uniquely, GIT takes advantage of gradient estimators from gradient-based causal discovery frameworks, paving the way for efficient intervention targeting. Thanks to its plug-and-play nature, GIT is compatible with various frameworks. The authors validate the effectiveness of GIT via extensive tests on synthetic and real-world datasets, demonstrating its superior performance, especially in low-data scenarios. Strengths: - The problem studied in this paper is important. - The paper is written in a very clear way. The background, method, and empirical results are all presented nicely. - The proposed method demonstrates very good empirical performance. Weaknesses: Typos: - In the final sentence of the first page, the in-text citation should use \citep instead of \citet. - On line 156, there appears to be a large space between 'A' and the period. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: NA Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the positive feedback on our work. We are glad that the Reviewer considers the studied problem to be important and that they appreciate the very good empirical performance of our method. It is good to hear that we managed to clearly describe our work and present the results nicely. We thank the Reviewer for identifying the typos and will correct them in the camera-ready version of the paper.
Rebuttal 1: Rebuttal: We would like to thank all the Reviewers for taking the time to review our work and providing us with insightful feedback. We are glad that the Reviewers appreciate the strengths of our paper, including clear writing and good presentation (Reviewers 6LvC, qbh5, 8Vao, E5qP), extensive experimental results (Reviewers 6LvC, qbh5, 8Vao, E5qP), significance of the studied problem (Reviewers 6LvC, qbh5, 8Vao, E5qP), and novelty of our approach (Reviewers 6LvC, 8Vao). It is good to hear that our study has no serious weaknesses (Reviewer 6LvC), is technically sound (Reviewers 6LvC, zVWz), and presents a clean solution (Reviewer E5qP). We provide specific answers to the Reviewers' individual concerns posted as separate comments below and attach a pdf file with figures referenced in individual responses. Pdf: /pdf/0156785b8b334bee9748caa7e9e2efb8c7a4f8e1.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors present an approach for inferring the most informative intervention targets for the task of causal discovery, where they propose an alternative to Bayesian experimental design approaches. They use the gradients of the causal discovery algorithm's loss function to find the intervention targets, which helps the algorithm learn about the regions of the graph that it is most uncertain about. They experiment with the ENCO causal discovery algorithm and show how their approach for interventional target acquisition outperforms other approaches on both synthetic and real data benchmarks. Strengths: * Since we can only learn causal graphs up to an equivalence class using observational data, it is important to collect interventional data in an efficient manner for causal discovery. Hence, the authors consider a very relevant problem in the area. * The proposed approach is novel to the best of my knowledge and also technically sound, with extensive experimentation to back it. The authors compare against a good set of baselines and perform several ablation studies to understand the effectiveness of their approach. * The paper is very well written, with good presentation of the experiment results and details of the proposed approach. * The proposed approach could be especially significant for large graphs with more nodes, where estimators for mutual information might not work well and the approaches based on them could suffer. Weaknesses: I do not think the paper has any serious weaknesses, but I have listed some of them in the questions section ahead. One suggestion for the authors would be to test their approach with synthetic data containing more nodes, as it could help us understand how the proposed intervention target acquisition approach scales with the size of the graph. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * The approach works with causal discovery algorithms that maintain a distribution over the structure of the graph. Is that assumption limiting in some manner? What about approaches where we do not maintain a distribution over graphs; could the approach be applied with a single-point estimate for the graph? * A related question to the previous one: how does the approach perform when we change the number of graphs sampled from the distribution over graphs? Are there ablation studies that test the effect of changing $|G|$ on the performance of the causal discovery algorithm? * The authors consider only single-node hard interventions for target acquisition; what are the reasons for not considering soft interventions? Is it a limitation of the proposed approach, or are soft interventions not as informative as hard interventions for the task of causal discovery? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes, the authors have addressed any potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for a very encouraging review. We are glad that the Reviewer appreciated the importance of the problem, novelty, soundness of the approach and experiments, and clear presentation. Below we answer the specific questions asked by the Reviewer: - [Scalability of the method]: We thank the Reviewer for suggesting studying scalability. For the rebuttal, we ran an additional experiment on the jungle graph with 100 nodes, in which we compared different acquisition methods used with ENCO (see Figure R.1 in the additional rebuttal PDF). We can indeed observe that GIT significantly outperforms Random and MI-based approaches given the intervention budgets as in our main text experiments. For the camera-ready version, we will prepare a section about scalability, with results on other synthetic graphs. - [Maintaining distribution over graphs]: In order to be compatible with GIT, the underlying causal discovery method needs to maintain a distribution over the graphs (causal DAGs). This assumption is explicitly mentioned in the paper, in Section 4, paragraph “Requirements for causal discovery algorithm A”. That being said, we note that the assumption is met for a broad class of recent causal discovery algorithms, including ENCO [1], DIBS [3], SDI [2], DCDI [4], and DECI [5]. In fact, we are not aware of any modern neural network-based approach for which this requirement would not be satisfied. - [Impact of number of graphs sampled]: We performed the mentioned ablation and reported the results in Figure 10 in Appendix. We varied the number of sampled graphs from 10 to 70 and observed no significant impact on the results. In the experiments in the main text, we always use 50 samples. - [Using soft interventions]: We thank the Reviewer for this interesting question. The main reason we considered hard interventions is easier comparability with prior works such as [1], which used hard interventions in their experiments. However, this is not a limitation of our method, and there is no reason for soft interventions not to work with GIT. We leave an empirical evaluation in such a scenario for future work. We would like to express our gratitude once more for the Reviewer's positive rating and constructive feedback, which have contributed to the improvement of our paper. [1] P. Lippe, T. Cohen, E. Gavves. Efficient neural causal discovery without acyclicity constraints. arXiv:2107.10483. [2] N. R. Ke, O. Bilaniuk, A. Goyal, S. Bauer, H. Larochelle, B. Schölkopf, M. Mozer, C. Pal, and Y. Bengio. Learning neural causal models from unknown interventions, arXiv:1910.01075. [3] L. Lorch, J. Rothfuss, B. Schölkopf, A. Krause. Dibs: Differentiable bayesian structure learning. Advances in Neural Information Processing Systems 34, 2021. [4] Ph. Brouillard, S. Lachapelle, A. Lacoste, S. Lacoste-Julien, A. Drouin. Differentiable causal discovery from interventional data. Advances in Neural Information Processing Systems, 33, 2020. [5] T. Geffner, J. Antoran, A. Foster, W. Gong, Ch. Ma, E. Kiciman, A. Sharma, A. Lamb, M. Kukla, N. Pawlowski, M. Allamanis, Ch. Zhang. Deep End-to-end Causal Inference. arXiv:2202.02195 --- Rebuttal Comment 1.1: Comment: Thanks for the good response during the rebuttal! My concerns are addressed and I think my original rating is still fair enough with regards to the submission. --- Reply to Comment 1.1.1: Title: Thank you Comment: Again, thanks for your work towards making our paper better.
Sample Complexity of Forecast Aggregation
Accept (spotlight)
Summary: The authors study the problem of forecast aggregation in a Bayesian setting. Here, n experts each observe an individual signal that is correlated with the truth, and each expert reports their posterior belief to the principal. The principal aggregates these reports and outputs a prediction, whose quality is assessed using the square loss. The object of interest is the minimax sample complexity necessary to obtain an additive excess error of $\epsilon$ compared to the best aggregator that knows the conditional distribution of the truth given the reports. The authors derive results for arbitrary distributions $P(\omega, s)$ on truth and signals, as well as distributions that factorize as $P(\omega,s) = P(\omega) \prod_i P(s_i|\omega)$. They show that there is an exponential gap in sample complexity between these two cases, and that in the latter case the complexity does not depend on the number of experts $n$. Strengths: - Well written and appears technically sound - Introduces the study of sample complexity to the forecast aggregation literature - Interesting and general results that cover multiple natural settings - A novel lower bound construction for distribution estimation that allows reduction from estimation to aggregation Weaknesses: - Non-matching upper and lower bounds in most cases - The main technical difficulty lies in the construction of the lower bound; the upper bounds don't require significant new ideas - Techniques seem limited to the specific loss function and the discrete truth/signal setting. Commenting on possible extensions in these directions would be interesting Technical Quality: 3 good Clarity: 3 good Questions for Authors: Have the authors thought about results for alternative loss functions (e.g., different weights for type-2 vs. type-1 errors) or continuous truth/signal distributions where, for example, the experts report the conditional mean of the truth given their signal? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors address limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Alternative loss functions:** Our response to Reviewer YcHV mentions several reasons why we focus on the squared loss $E[ | f(r) - \omega |^2 ]$ in this work. We did think about other loss functions, like the logarithmic loss $-E[\omega \log(f(r)) + (1-\omega)\log(1-f(r))]$ and the absolute loss $E[ | f(r) - \omega | ]$. Some of our techniques for the squared loss can be applied to other loss functions. *Logarithmic loss:* Because the logarithmic loss can be unbounded, giving an upper bound on its sample complexity is not easy and may need significantly different techniques than our current work. For the lower bound, our results for the squared loss can be applied. Due to Pinsker's inequality, the logarithmic loss difference is greater than or equal to 2 times the squared loss difference, so if we get an $\epsilon$-optimal aggregator under the logarithmic loss, then this aggregator is automatically $\epsilon/2$-optimal under the squared loss. This means that our sample complexity lower bound for the squared loss is automatically a sample complexity lower bound for the logarithmic loss. *Absolute loss:* Our upper bound argument for the squared loss can be adapted to the absolute loss, but the lower bound argument cannot. The squared loss has the property that the difference between the squared losses of any aggregator and the optimal aggregator can be conveniently written as their expected squared difference: $E[|f(r)-\omega|^2] - E[|f^*(r)-\omega|^2] = E[|f(r)-f^*(r)|^2]$ (Lemma 2.1). We use this property in the argument. But the absolute loss $E[ | f(r) - \omega | ]$ does not have this property, and its sample complexity lower bound may need a different argument. We believe that analyzing other loss functions is an interesting direction for future work. **Continuous signal/truth distribution:** We did think about continuous distributions for the signal or the truth. For a continuous signal distribution, our results for conditionally independent distributions do cover this case -- we make no assumption on the signal space. For distributions that are not conditionally independent, note that our results show that the sample complexity grows with the size of the discrete signal space. This means that with a continuous signal space the sample complexity is infinite in the worst case. So, to obtain a meaningful sample complexity result, one has to make some assumptions on the continuous signal distribution, for example, that the distribution belongs to some parameterized family. Then, one may obtain very different results than the results in this paper, using different techniques. With a continuous truth distribution, the problem will be very different if the experts only report the posterior means of the truth instead of the posterior distributions of the truth. And similar to continuous signals, to obtain a meaningful result one has to make some assumptions on the continuous truth distribution. Depending on the assumptions, one may get different results. We believe continuous truth/signal distributions are an interesting direction to explore in future work.
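The Pinsker-based reduction invoked above can be spelled out; the following short derivation is our reconstruction for binary $\omega$ (with $f^*(r) = \mathbb{E}[\omega \mid r]$ the optimal aggregator), not text quoted from the paper:

```latex
% Reconstruction of the log-loss-to-squared-loss reduction (binary outcome).
% Here KL(p || q) denotes the KL divergence between Bernoulli(p) and
% Bernoulli(q); Pinsker's inequality gives KL(p || q) >= 2 (p - q)^2.
\begin{align*}
\mathbb{E}\big[\ell_{\log}(f(r),\omega)\big]
  - \mathbb{E}\big[\ell_{\log}(f^{*}(r),\omega)\big]
 &= \mathbb{E}\big[\mathrm{KL}\big(f^{*}(r)\,\|\,f(r)\big)\big] \\
 &\ge 2\,\mathbb{E}\big[|f(r)-f^{*}(r)|^{2}\big] \\
 &= 2\Big(\mathbb{E}\big[\ell_{\mathrm{sq}}(f(r),\omega)\big]
        - \mathbb{E}\big[\ell_{\mathrm{sq}}(f^{*}(r),\omega)\big]\Big),
\end{align*}
% where the last equality is Lemma 2.1. Hence an epsilon-optimal aggregator
% under the logarithmic loss is epsilon/2-optimal under the squared loss, and
% a squared-loss sample complexity lower bound transfers to the log loss.
```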
Summary: The paper initiates the study of the sample complexity of forecast aggregation under a Bayesian forecasting model. In this problem, n experts each observe a private signal about an unknown binary event and then report their posterior beliefs about the event to a principal, who then aggregates the reports into a single prediction. The underlying joint distribution is unknown to the principal, but he has access to i.i.d. samples from the distribution. Using these samples, the principal aims to find an epsilon-approximately optimal aggregator, where optimality is measured in terms of the mean squared error between the aggregated prediction and the real event. The authors show that the sample complexity grows exponentially in the number of experts n, but that if the experts' signals are conditionally independent, then the sample complexity does not depend on the number of experts at all. They further consider the case of non-binary events and weakly/strongly informative experts. Strengths: The paper is very elegantly written. It presents the setup of the problem clearly, motivates it thoroughly, and initiates an interesting discussion on the fundamental limits of the problem. The proof sketches are quite intuitive and convincing, and their implications are reflected upon and discussed. Weaknesses: One weakness for me is the particular choice of mean squared error as an optimality measure. It seems counter-intuitive, given that the experts report their posterior beliefs, which is in essence a minimum-error-probability optimality measure. It would seem more natural for the principal to look for an aggregation that minimizes the probability of error given the experts' reports. Another weakness is the gap between the upper and lower bounds with respect to epsilon, which might follow from the relatively simple upper bound proposed in the paper. This also bleeds over to the conditionally independent variant. Finally, the choice of averaged squared error as an optimality measure in the multi-outcome events section brings forth quite bizarre-looking results. It follows, under this measure, that if the number of events is $\omega(1/\epsilon^2)$, our task succeeds with zero samples! The authors do clarify this in the appendix, but I would remove it altogether from the paper (or change the measure to additive MSE). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Did you consider using the maximum a posteriori optimality measure for the principal? An added explanation of why MSE is a good choice would benefit your work. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations of their work Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to Questions:** Maximum a posteriori (MAP), or posterior mode, is the point estimate $f(r)$ that minimizes the absolute loss $E[ | f(r) - \omega | ]$. We choose to use the squared loss $E[ | f(r) - \omega |^2 ]$ instead in our work for a few reasons: (1) Squared loss is a very popular loss function used in many learning problems. (2) Squared loss is a proper loss function for eliciting probability distributions. Our model requires the experts to report posterior beliefs to the principal. To incentivize the experts to do so, the principal needs to reward the experts according to a proper loss, for example the squared loss $-E[|r_i - \omega|^2]$. The experts maximize their expected rewards by reporting their beliefs (posterior distribution of $\omega$) truthfully to the principal. (We wrote this in Footnote 3.) So, to be consistent with the experts' loss, we measure the principal's loss by the squared loss $E[|f(r) - \omega|^2]$ as well. A non-proper loss like the absolute loss $-E[|r_i - \omega|]$, on the other hand, does not elicit the beliefs from the experts. (3) Squared loss has the property that the difference between the squared losses of any aggregator and the optimal aggregator can be conveniently written as their expected squared difference: $E[|f(r)-\omega|^2] - E[|f^*(r)-\omega|^2] = E[|f(r)-f^*(r)|^2]$ (Lemma 2.1). This property is used in our sample complexity lower bound argument. The absolute loss doesn't have this property, and its sample complexity lower bound may need a different argument. We think it will indeed be interesting to consider other losses as directions for future work, for example the absolute loss $E[|f(r)-\omega|]$, which is minimized by the posterior mode (MAP). **Response to Weaknesses:** Regarding the "bizarre-looking result" in the multi-outcome case, this result is indeed an artifact of the use of the *average* squared loss (where we divide the loss by $|\Omega|$). (You can see our response to Reviewer ey4x for details.) We will change the loss to the additive squared loss (not dividing by $|\Omega|$) to avoid this confusion. Thanks for suggesting this! --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I have read the rebuttal; the authors addressed the comments and questions raised thoroughly.
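The decomposition (Lemma 2.1) cited in both replies has a short derivation via the tower property; the following is our reconstruction (assuming $f^*(r) = \mathbb{E}[\omega \mid r]$), not text from the paper:

```latex
% Reconstructed derivation of the squared-loss decomposition (Lemma 2.1),
% using a^2 - b^2 = (a - b)(a + b) with a = f(r) - omega, b = f^*(r) - omega:
\begin{align*}
\mathbb{E}\big[|f(r)-\omega|^2\big] - \mathbb{E}\big[|f^{*}(r)-\omega|^2\big]
 &= \mathbb{E}\big[\big(f(r)-f^{*}(r)\big)\big(f(r)+f^{*}(r)-2\omega\big)\big] \\
 &= \mathbb{E}\big[\big(f(r)-f^{*}(r)\big)\big(f(r)+f^{*}(r)-2\,\mathbb{E}[\omega\mid r]\big)\big] \\
 &= \mathbb{E}\big[|f(r)-f^{*}(r)|^2\big].
\end{align*}
% The second line conditions on r (both f(r) and f^*(r) are r-measurable),
% and the third substitutes E[omega | r] = f^*(r).
```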
Summary: The paper studies the aggregation of expert opinions in a Bayesian setting for discrete (binary) distributions. The setting further limits each expert to have at most m different opinions, making the problem fully discrete. This reduction enables the analysis of the sampling complexity of the problem, i.e., the minimum number of observed aggregation events (expert opinions and true outcome) required to get an epsilon-tight estimate of the true distribution in the total variation norm. Strengths: The paper approaches the increasingly important problem of how to combine different "expert" opinions in a thorough and interesting way. It is well written, clear and, as far as I managed to go into detail, correct. Their approach sparks many new ideas on how to approach aggregation problems and opens up interesting questions on the relevance of (historical) data in aggregation. Weaknesses: The proof of the lower bound in Section 4.2 doesn't seem intuitive. The mere construction of an example is of course sufficient for a formal proof, but an explanation of why the authors had the idea for this specific construction might be more illuminating. The authors sometimes speak of the "number of signals an expert can possibly observe." Maybe this is better explained as the "cardinality of the signal the experts observe." Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Is it essential in the proof that the experts decide on Bayesian arguments? Or does it suffice to assume that each expert always just makes one of m possible reports, i.e., can we focus only on the joint distribution of r^t and omega and ignore s^t? In Section 6, does bigger Omega really reduce the complexity? This is counterintuitive. What is the source of Theorem 7.1? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors clearly state the limitations of their work, are honest and clear about the current mismatch in their bounds in epsilon, and don't oversell their results. I do not see potential societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Experts decide on Bayesian arguments? This is an interesting observation! Indeed, our results for the case of general distributions do not need the experts to be Bayesian. It is OK if an expert reports 0.1 even if its posterior belief given the signal is 0.8, as long as the expert reports different numbers given different signals. What matters is the correlation between all experts' reported numbers and the event outcome. The principal can learn such correlation from samples and then do aggregation. However, for the case of conditionally independent distributions, we do need the experts to be Bayesian in order to apply Lemma 5.1 to get the smaller sample complexity upper bound. 2. Does $|\Omega|$ really reduce the sample complexity? No. As we explain at the end of Appendix C, this is an artifact of normalization. When defining the loss of an aggregator for multi-outcome events, we normalize the loss by dividing by $|\Omega|$ to make sure the loss is bounded in $[0, 1]$. But for an aggregator that always outputs a probability distribution, its loss is actually bounded by $[0, 1/|\Omega|]$, causing the illusion that the sample complexity decreases as $|\Omega|$ increases. We will change the loss to the unnormalized loss (not dividing by $|\Omega|$) to avoid this confusion. 3. Theorem 7.1. The source of this theorem is that the minimum joint probability $P(s, \omega)$ is bounded away from $0$ (by a margin of $c/m^n$ with constant $c$). In the proof of this theorem (Appendix I) we used the "empirical Bayes aggregator" $\hat f(r) = \frac{\hat P(r, \omega)}{\hat P(r)}$, namely, the Bayes rule with empirical probabilities estimated from samples. For this aggregator to work well, the denominator $\hat P(r)$ must be bounded away from $0$. This is guaranteed if the minimum joint probability is bounded away from $0$. In contrast, Theorem 4.1, which is more general but has a looser upper bound, does not need this assumption. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed answer, I don't have any further questions.
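To make point 3 concrete, the "empirical Bayes aggregator" can be sketched in a few lines. This is a hypothetical illustration for binary outcomes; the function names and the fallback value for unseen report profiles are our assumptions, not the paper's:

```python
from collections import Counter

def empirical_bayes_aggregator(samples):
    """samples: iterable of (reports, omega) pairs, where `reports` is a
    hashable tuple of the n experts' reports and omega is 0 or 1.
    Returns f_hat with f_hat(r) = P_hat(r, omega=1) / P_hat(r), i.e. the
    Bayes rule under the empirical distribution."""
    joint = Counter()  # counts of report profiles with omega = 1
    marg = Counter()   # counts of report profiles
    for r, w in samples:
        if w == 1:
            joint[r] += 1
        marg[r] += 1

    def f_hat(r):
        if marg[r] == 0:
            return 0.5  # unseen report profile: an uninformative fallback (our choice)
        return joint[r] / marg[r]

    return f_hat

# Example usage:
# f = empirical_bayes_aggregator([((0.9, 0.8), 1), ((0.1, 0.2), 0)])
```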
Summary: This paper studies the problem of forecast aggregation: There are n experts who are each given signals about some unknown binary event, and each output a posterior probability based on these signals. We have access to k such reports (corresponding to k events). The task is to aggregate these reports to produce a prediction that is close to the unknown event in expected squared distance. The paper shows that there is an exponential gap in sample complexity (i.e., in k) between the cases when the experts' signals are independent conditioned on the event, versus the general case. In addition, a (slightly loose) upper bound is provided for the arbitrary case, and a lower bound is provided for the conditionally independent case. Strengths: This is a very clearly written paper, and it studies an interesting problem. The story of the exponential sample complexity difference in the arbitrary signals vs. conditionally independent signals setting is a compelling one. Overall, I think this is an interesting paper that deserves to be accepted to NeurIPS. More comments: - It studies a natural extension where the aggregator sees multiple samples from each expert. This is a more realistic setting than the one-shot problem. - There are natural connections to distribution estimation here that are interesting. For this reason, I think this paper will appeal to the CS theory community. - The future directions mentioned are compelling, and will likely lead to several interesting results. Weaknesses: I don't see any real weaknesses. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Is it possible to say something about when you expect the signals to be conditionally independent in the real world? For instance, your paper points to a Kaggle dataset - is there any evidence that there is conditional independence there? Do you expect your algorithms to work well on that dataset? - Can you draw any connections between (variants of) this problem and distribution testing? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Question 1: Signals are conditionally independent if they are independent draws from some distributions but the distributions are determined by the unknown state of the world. For example, the unknown state of the world can be the quality of a school, which is either high or low. If the school quality is high, the probability for a student from the school to pass a state exam is 0.9. If the school quality is low, the probability for a student to pass the state exam is only 0.6. This can be viewed as an example of conditionally independent signals, as given the quality of the school, students' exam outcomes are independent draws from a distribution that depends on the true quality of the school. Conditionally independent signals capture many real-world settings where agents have independent observations but the distribution of the observations depends on the state of the world. We think Kaggle competitions can be reasonably modeled using conditionally independent signals. However, the dataset contains heterogeneous non-repeated tasks and hence is not a good match for our algorithm, which needs i.i.d. repeated tasks. Question 2: Many distribution testing problems (e.g., testing whether a distribution is uniform) can be solved by distribution estimation: first use samples to compute the empirical distribution as an estimate of the true distribution, then check whether the empirical distribution satisfies the property we are testing. This means that distribution estimation is more difficult than (or as difficult as) those distribution testing problems, in the sense that the number of samples needed for distribution testing is at most the number of samples needed for distribution estimation. Our results show that forecast aggregation for general distributions is essentially as difficult as distribution estimation, so it is also more difficult than those distribution testing problems. But the forecast aggregation problem for conditionally independent distributions is not directly comparable to distribution estimation and we don't see a direct connection with distribution testing, either. Nevertheless, testing whether a distribution is conditionally independent is itself a property testing problem. --- Rebuttal Comment 1.1: Comment: Hi, Thanks for your responses. For 2, I was wondering if there's a different but related problem that you can come up with that directly relates to distribution testing (say uniformity testing). In any case, I am impressed with this paper, and maintain my rating.
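The school example above translates directly into a small simulation of conditionally independent signals. This is a minimal sketch; the uniform prior over school quality is an assumption we add for illustration, and only the pass probabilities come from the response:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_school(n_students):
    """Conditionally independent signals: given the school quality (the state
    of the world), each student's exam outcome is an independent Bernoulli draw."""
    quality_high = rng.random() < 0.5            # assumed prior over the state
    p_pass = 0.9 if quality_high else 0.6        # pass probabilities from the example
    outcomes = rng.random(n_students) < p_pass   # independent given the quality
    return quality_high, outcomes

quality_high, outcomes = sample_school(10)
print(quality_high, outcomes.astype(int))
```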
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Scalable Transformer for PDE Surrogate Modeling
Accept (poster)
Summary: The paper presents a high-dimensional function decomposition technique for mitigating the complexity encountered by Transformers when dealing with high-dimensional data. The authors have developed a clear logical argument and a comprehensive review of the relevant literature. The paper conducts extensive experiments on several benchmark 3D problems governed by NS equations. It uses latent marching, which shows better empirical performance compared with autoregressive models. However, the paper's novelty appears to be somewhat limited, particularly as similar ideas were presented at ICLR the previous year, including Factorized Fourier Neural Operator (FFNO) and Tensorized FNO. Overall, the paper lies on the borderline. It presents a clear motivation and technical soundness, and has shown some empirical advantages. However, the limitations highlighted above, particularly the lack of comprehensive experimental comparisons and some confusing notation, hinder its overall quality. I lean towards accepting this paper if the authors could strengthen the experimental results and address the raised concerns. Strengths: 1. The motivation for the research is clear, with the Introduction section being easy to read and understand. 2. From a technical perspective, the paper appears sound, with a well-articulated methodology that would be easily replicable, especially if the authors decided to open-source their code and datasets. 3. Empirically, the proposed method has shown some advantages. Weaknesses: 1. The experimental section is somewhat weak. The authors considered 3D datasets only within regular geometric areas and did not compare their approach with many of the available baselines such as U-FNO, FFNO, HT-Net, Tensorized FNO, and Wavelet NO. The paper's baselines only included standard FNO and ResNet, making it challenging for readers to ascertain whether the Transformer-based approach indeed excels in high-dimensional data processing. I suggest the authors incorporate additional representative baselines (at least one or two) during the rebuttal period. 2. The use of notation in the paper is a little confusing. Does 'd' represent the dimension of the hidden layer? How is it selected? It seems like it would be beneficial to annotate some of the intermediate variables' dimensions or shapes, as the current description in the paper is not very clear. 3. In equation (7), the 'v' kernel appears to still increase exponentially with the dimension, which begs the question: how does this overcome the curse of dimensionality? 4. While the proposed method seems to involve some sort of tensor decomposition on the data, the paper does not investigate the relationship between these processes in detail. It would be beneficial to explore this theoretically, as it could shed light on the model's representational capacity. After all, certain tensor decompositions have theoretical guarantees for approximating a high-dimensional tensor. 5. I suggest you include the following closely related works in the references if you have not mentioned them in the paper. FFNO, HT-Net and Tensorized FNO were proposed in the last year to deal with high-dimensional data. WNO and U-FNO are also popular architectures for operator learning. And you might need to consider citing a more relevant survey that lists many related works on neural operators. Ref: 1. Factorized Fourier Neural Operators (https://arxiv.org/abs/2111.13802) 2.
Multi-Grid Tensorized Fourier Neural Operator for High Resolution PDEs (https://openreview.net/pdf?id=po-oqRst4Xm) 3. HT-Net: Hierarchical Transformer based Operator Learning Model for Multiscale PDEs (https://openreview.net/forum?id=UY5zS0OsK2e) 4. Multiwavelet-based Operator Learning for Differential Equations (https://arxiv.org/abs/2109.13459) 5. U-FNO -- An enhanced Fourier neural operator-based deep-learning model for multiphase flow (https://arxiv.org/abs/2109.03697) 6. Physics-Informed Machine Learning: A Survey on Problems, Methods and Applications (https://arxiv.org/abs/2211.08064) Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: None. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer TtUN for the very detailed comments and suggestions. We also appreciate the recognition of our work. Here we would like to address your concerns and questions as below. --- > *The experimental section is somewhat weak......I suggest the authors incorporate additional representative baselines (at least one or two) during the rebuttal period.* We appreciate the reviewer's suggestion to make the experiment section more comprehensive. The primary reason for choosing FNO and Dil-ResNet as main baselines is that FNO is a representative neural-PDE solver that has shown decent performance on a wide array of problems, while Dil-ResNet has shown good accuracy [1, 2] on the class of fluid problems investigated in this work. Nevertheless, following the reviewer's suggestion, we added experiments on Multi-wavelet NO, Factorized-FNO, and Tensorized-FNO in our newly uploaded PDF (**Table 2,3,4**). In addition, we also added experiment results (**Table 1**) of our model on datasets from FNO, which are used widely in the neural-PDE solver literature. Due to the one-page limit we only listed FNO/Linear Transformer/Dil-ResNet for reference, but we will incorporate more results from the relevant literature in the future. [1] Learned Simulators for Turbulence, ICLR 2023 [2] Towards Multi-spatiotemporal-scale Generalized PDE Modeling, 2023 --- > *The use of notation in the paper is a little confusing. Does 'd' represent the dimension of the hidden layer? How is it selected? > It seems like it would be beneficial to annotate some of the intermediate variables' dimensions or shapes, as the current description in the paper is not very clear.* Yes, 'd' can be considered the hidden dimension in the network. We set it to 128 by default and we invite the reviewer to check out **Figure 3** in our **Appendix**, where we show that the model's performance can be further improved if we adopt a larger dimension (*We apologize for a typo in Figure 3's x-axis; the correct labelling should be: 128 -> 64, 64 -> 128*). We also added an illustrative diagram in the newly uploaded PDF (**Figure 2**) with annotated tensor shapes. --- > *In equation (7), the 'v' kernel appears to still increase exponentially with the dimension, which begs the question: how does this overcome the curse of dimensionality?* It is true that the 'v' kernel has a shape that grows exponentially, and we actually acknowledged in the Conclusion section that our model is still not free from the curse of dimensionality. Our main goal is to alleviate the curse of dimensionality when using attention to parametrize the learnable kernel integral. More specifically, we exploit the low-rank structure of the full-attention matrix and replace it with a set of much smaller attention matrices, which improves the computational efficiency and also the numerical stability when applied to higher-dimensional problems with a large number of grid points. --- > *While the proposed method seems to involve some sort of tensor decomposition on the data, the paper does not investigate the relationship between these processes in detail. It would be beneficial to explore this theoretically, as it could shed light on the model's representational capacity. After all, certain tensor decompositions have theoretical guarantees for approximating a high-dimensional tensor.* We thank the reviewer for the suggestion to explore the theoretical properties of the decomposition process. While SVD-based methods (e.g.
Tucker/CP decomposition) come with a strong theoretical foundation, they are not used in our work as it is prohibitively expensive to compute the online SVD for a high-dimensional input tensor in each layer during training iterations. Instead, the decomposition from a high-dimensional tensor to vectors is accomplished through the cheap (yet effective) learnable projector we proposed. Investigating the theoretical capacity of the proposed projection method can be an interesting future direction. Following this, we would like to also elaborate a bit more on the difference between existing works and our work in terms of how the kernel is parameterized and computed. * Tensorized-FNO: Apply low-rank decomposition (e.g. Tucker/CP/Tensor-train) to the dense weight in the spectral convolution layer and store it in factorized form. * Factorized-FNO: Apply 1D spectral convolution along each dimension separately. * FactFormer: Use learnable projection to project the tensor into vectors along different axes, and then use these projected vectors to compute a data-dependent kernel. In general, our work explored a new way to compute a *data-dependent* kernel in a multi-dimensional, factorized way. --- > *I suggest you include the following closely related works in the references if you have not mentioned them in the paper...* We thank the reviewer for the valuable references and we will include them in a future version of the manuscript (and the camera-ready version if the paper gets accepted). --- Rebuttal 2: Title: Feedback Comment: Thanks for the response from the authors. Most of my concerns are resolved and I have increased the rating.
Summary: This paper uses separable axis attention to reduce the complexity from being exponential in spatial dimension to linear. Strengths: NA Weaknesses: The standard Transformer has two components: attention and MLP. This paper only addresses the attention. But the MLP complexity O(Nd^2) would still make the overall complexity exponential in spatial dimension. This seems to defeat the purpose of the paper. By design, the attention without softmax has rank d, where d is the hidden size and is typically much smaller than the sequence length N. This is shown in Fig 8a. For A^(1) and A^(2) in Fig 8b and 8c, the sequence length along each separate axis is much smaller. This gives the impression that matrices A^(1) and A^(2) are less rank deficient. But this does not mean that the proposed attention is better than vanilla attention. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Table-4 only reports the encoding or attention layer runtime. What is the overall runtime including MLP layers? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer EAHP for the effort spent reviewing our work. However, we believe there are some misunderstandings and misinterpretations of our work, and we would like to provide some clarifications below. --- > *The standard Transformer has two components: attention and MLP. This paper only addresses the attention. But the MLP complexity O(Nd^2) would still make the overall complexity exponential in spatial dimension. This seems to defeat the purpose of the paper.* We believe the reviewer has misunderstood the goal of the proposed model. The goal of the proposed approach is to alleviate the curse of dimensionality of applying attention to higher-dimensional PDE problems, and our approach is not completely free from the exponential complexity in spatial dimension (we actually acknowledge that our model requires evaluating the function value at all $N$ grid points in the Conclusion section). We achieve this goal by exploiting the low-rank structure of the kernel matrix $A=QK^T$, replacing it with the product of a set of much smaller axial kernels. Through extensive experiments, we have showcased that our proposed approach enjoys notably better computing efficiency and numerical accuracy than the model using softmax-free attention. In addition, despite the asymptotic cost of the MLPs being $O(Nd^2)$, they are not the major bottleneck of attention-based PDE solvers in practice. For example, on the $128 \times 128$ problem, MLPs only account for roughly 16% of the total calculation time for FactFormer, and less than 5% for Linear Transformer. --- > *Table-4 only reports the encoding or attention layer runtime. What is the overall runtime including MLP layers?* We actually include the runtime of most MLPs in Table-4 and we will make this point clearer in the future version of the manuscript. The only MLP that has been excluded is the one that's used for propagating dynamics in the latent space. More specifically, the model has 4 attention layers followed by 4 MLPs, and an additional propagating MLP. The *Enc. time* corresponds to the total runtime of 4 attention layers + 4 MLPs. The *Prop. time* corresponds to calling the propagating MLP 4 times to propagate the system state from $z_{t}$ to $z_{t+4}$. --- > *By design, the attention without softmax has rank d, where d is the hidden size and is typically much smaller than the sequence length N. This is shown in Fig 8a. For A^(1) and A^(2) in Fig 8b and 8c, the sequence length along each separate axis is much smaller. This gives the impression that matrices A^(1) and A^(2) are less rank deficient. But this does not mean that the proposed attention is better than vanilla attention.* The experiment here is to provide heuristic motivation for replacing the full attention with axial factorized attention. When parametrizing the learnable kernel integral with vanilla attention, the kernel matrix $A=QK^T$ has very low rank by design (since $rank(AB)\leq \min[rank(A), rank(B)]$ and, as the reviewer points out, $rank(Q)$ and $rank(K)$ are upper bounded by the channel number $d$). This motivates us to propose an axial-factorized kernel integration scheme and replace $A$ with a set of much smaller (but higher-rank) matrices $A^1, A^2, ..., A^n$. The experimental results confirm that $A^1, A^2, ..., A^n$ indeed exhibit higher-rank structures after training. Based on the reviewer's feedback, we will revise the description here to avoid giving an ambiguous impression.
Essentially, the proposed approach doesn't differ from vanilla linear attention in terms of how the attention score is calculated (both are dot-product attention); the difference lies in how the kernel in the learnable kernel integral transform is parameterized and computed with attention. --- Rebuttal 2: Comment: Thanks to the authors for the clarification and for answering my questions. I have increased my rating.
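The rank bound at the heart of this exchange is easy to verify numerically: for $Q, K \in \mathbb{R}^{N \times d}$ with $N \gg d$, the softmax-free kernel $A = QK^T$ has rank at most $d$. A minimal sketch (with arbitrary sizes of our choosing, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 1024, 64                  # sequence length >> channel width

Q = rng.standard_normal((N, d))
K = rng.standard_normal((N, d))
A = Q @ K.T                      # softmax-free attention kernel, shape (N, N)

# rank(QK^T) <= min(rank(Q), rank(K)) <= d, far below N
print(np.linalg.matrix_rank(A))  # prints 64
```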
Summary: This paper proposes a new method to scale Transformer-based models for PDE surrogate modeling to higher dimensions. The new method combines several "tricks" from previous works, and most importantly, uses factorization to further reduce the computational cost of training and evaluating Transformer-based operator learners, while not sacrificing the accuracy too much, and even achieves superior performance in certain benchmark problems. Even though the idea of using factorization (tensor decomposition) for end-to-end problems that feature a spectrum decay and/or sparse structures is not new, I checked the Transformer-based PDE modeling papers, and this is the first time someone has implemented it. I think this is a worthy contribution to the literature; however, there are apparent weaknesses in the paper as well (detailed below). I lean toward acceptance if the authors can address or answer some of the concerns below. Strengths: - The authors proposed a simple way to further reduce the computational cost of the linearization of attention. The method is actually quite simple: it uses a different projection and iteratively evaluates the attention "integral", which is essentially a tensor decomposition when the un-integrated kernel matrices are written together and moved to the innermost integral. The reasoning behind this tensor decomposition makes more mathematical sense than the one featured in the Axial Transformer. - Dramatic savings versus even the linear attention. - I actually appreciated that the authors spelled out some techniques that are usually deemed "tricks" explicitly in Section 3.3, and used them across all baselines, instead of hiding them in the source code as some works on Transformers do. - The authors enhanced many existing PDE benchmarks (in the anonymized GitHub repo), and created several new ones that test the scalability of the models. Weaknesses: - The biggest weakness is perhaps some lack of theoretical foundation, but I guess this is fine for a methodology paper. - The computational savings in the presentation of formula (7) may not be that obvious to the community of Transformer research, who may not be that familiar with the integral representation. This is especially so considering the presentation of (7) uses $n$-D, but compares against (2), which is for a 1-D problem. - The presentation in laying out the factorization could use some improvement for people more familiar with linear algebra than differential equations. For example, on line 179, - If only the authors could add a more informative diagram comparing the factorized approach with the Axial Transformer. ### Misc small typos - line 144: there should be a space between RoPE and its reference. - line 152: this is just a suggestion: the tilde notation could be clearer if it is referred back to equation (3), saying something like "$\tilde{\mathbf{z}}$ denotes a matrix $\mathbf{z}$ whose row vectors $\mathbf{z}_j$ are RoPE encoded as in (3)". - line 154: "learnable projection" -> "learnable projections". - line 209: in (9), $Z$ should be set to $\operatorname{Att}(U)$. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: - In the comparison of SVD for attention matrices, what is the value of $k$? What specifically are $A^{(1)}$ and $A^{(2)}$? From different model problems? - In Section 4.2, the FNO models have significantly higher parameter counts. Is there a specific reason for this? Confidence: 5: You are absolutely certain about your assessment.
You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: The author properly acknowledged the curse of dimensionality. Moreover, the dimension-bound factorization limits the domain type to tensor-product type (i.e., rectangular). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer y1WA for the helpful comments and greatly appreciate the recognition of our work. Here we would like to address your concerns and questions as below. --- > *The computational savings in the presentation of formula (7) may not be that obvious to the community of Transformer research, who may not be that familiar with the integral representation. This is especially so considering the presentation of (7) uses n-D, > but compares against (2), which is for a 1-D problem.* > > *The presentation in laying out the factorization could use some improvement for people more familiar with linear algebra than differential equations...* We appreciate the reviewer's suggestion to make this paper more accessible to a broader audience. In (2), we are trying to define a general domain $\Omega$ without specifying its dimension; it can also be greater than 1-D. In the next version of the manuscript, we will make this point clearer, and incorporate a more concrete expression explaining the difference between the standard attention and the axial factorized one; that is, our proposed scheme amounts to replacing $z=AV$ with the tensor-matrix product $z=V\times_1 A^1 \cdots \times_n A^n$, where $A^1, ..., A^n$ are much smaller matrices than the full attention matrix $A$. --- > *If only the authors could add a more informative diagram comparing the factorized approach with the Axial Transformer...* In the newly uploaded PDF (**Figure 2**), we added a diagram that describes the differences between how the proposed approach and the Axial Transformer deal with 2D inputs. We will also add this to the Appendix of the paper in the future version. --- > *Misc small typos...* Thanks for the catch; we will correct them in the next version of the manuscript. --- > *In the comparison of SVD for attention matrices, what is the value of k? What specifically are A^{(1)} and A^{(2)}?* $k$ is the index of the singular value (used to compute the fraction of singular values in the figure, i.e., k divided by the total number of singular values). $A^{(1)}, A^{(2)}$ are sub-attention matrices corresponding to the two axes $y$ and $x$, respectively. --- > *In Section 4.2, the FNO models have significantly higher parameter counts. Is there a specific reason for this?* This is because the number of parameters of FNO grows exponentially with respect to the problem dimension. The parameter count of the kernel in each FNO kernel integral layer is $O(M^nd^2)$, where $M$ is the number of truncated modes and $n$ is the problem dimension. For example, using 16 modes and a hidden dimension of 128 in a FNO3D layer will result in a parameter count of $O(16^3 \times 128^2)$. --- Rebuttal Comment 1.1: Comment: The reviewer appreciates the authors adding many experiments and the diagram in such a short response window. I think it is a worthy addition to the literature, for both the operator learning and Transformer architecture research communities. Therefore, I raised the score from 6 to 7.
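The tensor-matrix product $z = V \times_1 A^1 \cdots \times_n A^n$ described in this rebuttal can be written compactly with einsum. The sketch below uses our own notation for a 2D field with one small attention matrix per axis; it is meant only to illustrate the shapes and the mode products, not the paper's exact layer:

```python
import numpy as np

rng = np.random.default_rng(0)
S1, S2, d = 64, 64, 16  # grid sizes along the two spatial axes, channel width

V = rng.standard_normal((S1, S2, d))  # value tensor on the 2D grid
A1 = rng.standard_normal((S1, S1))    # axial attention matrix for axis 1
A2 = rng.standard_normal((S2, S2))    # axial attention matrix for axis 2

# z = V x_1 A1 x_2 A2: contract each spatial axis with its own small kernel,
# instead of forming one (S1*S2) x (S1*S2) full attention matrix.
z = np.einsum('ia,jb,abd->ijd', A1, A2, V)
print(z.shape)  # (64, 64, 16)
```

Each axial matrix here is $64 \times 64$, versus $4096 \times 4096$ for the flattened full-attention kernel on the same grid.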
Summary: The proposed work presents a scalable transformer architecture for modeling partial differential equations (PDEs). The input function is linearly projected to multiple functions with one-dimensional domains using a learnable integral projection operator. The attention mechanism is then applied to these projected functions. This factorized attention mechanism requires less time and memory. The proposed FactFormer shows comparable or better performance while requiring fewer parameters and less memory. Strengths: 1. The proposed method is effective and scalable, making it suitable for high-resolution PDE modeling. 2. The paper is well-written, and the method is described in detail. Weaknesses: 1. The experiments in this work use fluids with very high viscosity, resulting in simulations of flows with low Reynolds numbers. As the model uses linear projection operations and performs softmax-free attention in the projected functions, it is crucial to demonstrate the method's effectiveness in modeling complex systems, such as flows with high Reynolds numbers. For instance, consider the set of experiments performed in FNO to model the Navier-Stokes equation with varying viscosities. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is there any non-linearity or activation function used in the architecture? 2. How well does the model perform in the zero-shot super-resolution task compared to the FNO baseline? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer UCqx for the helpful comments and suggestions on improving the paper. Here we would like to address your concerns and questions as below. --- > *The experiments in this work use fluids with very high viscosity, resulting in simulations of flows with low Reynolds numbers. As the model uses linear projection operations and performs softmax-free attention in the projected functions, it is crucial to demonstrate the method's effectiveness in modeling complex systems, > such as flows with high Reynolds numbers. For instance, the set of experiments performed in FNO to model the Navier-Stokes equation with varying viscosities.* Based on the reviewer's suggestion, we have added experiments on datasets from FNO with varying viscosities (please refer to **Table 1** in the newly uploaded PDF). We observe that FactFormer also has competitive performance on these datasets. In addition, the 2D Kolmogorov dataset in our work also has a relatively low viscosity (with Re=1000). --- > Is there any non-linearity or activation function used in the architecture? Yes, as shown in Figure 2 of the main manuscript, we apply a pointwise feedforward network to the output of each attention layer, which uses the GELU activation function. --- > How well does the model perform in the zero-shot super-resolution task compared to the FNO baseline? The proposed model can also generalize to unseen resolutions like FNO. Below we provide an experiment on Darcy flow (metric: relative L2 error):

| Resolution | 211 (train) | 421 |
|------------|-------------|-----|
| FNO2D | 0.0073 | 0.0141 |
| FactFormer | 0.0058 | 0.0133 |

--- Rebuttal Comment 1.1: Title: Response to the Rebuttal Comment: I thank the authors for their response. The proposed method also achieves comparable performance in modeling fluid flow with high Re. I am increasing the score.
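For reference, the relative L2 error used as the metric in the table above is conventionally computed per sample as $\|\text{pred} - \text{true}\|_2 / \|\text{true}\|_2$ and averaged over the test set. The sketch below assumes this standard convention; it is not code from the paper:

```python
import numpy as np

def relative_l2(pred, true):
    """Per-sample relative L2 error, averaged over the batch (first axis)."""
    pred = pred.reshape(pred.shape[0], -1)
    true = true.reshape(true.shape[0], -1)
    err = np.linalg.norm(pred - true, axis=1) / np.linalg.norm(true, axis=1)
    return err.mean()
```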
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for the effort spent reviewing our work and for the suggestions that helped us improve it. Below we summarize the newly added results that are included in the one-page PDF. #### **Table 1**: We tested our proposed model and Dilated-ResNet on two benchmark problems from FNO [1], which are the 2D Navier-Stokes equation with different viscosities and 2D Darcy flow at different resolutions. We also included the result of the Linear Transformer [2] for comparison. In general, we observe that our proposed FactFormer also has competitive performance on these problems. #### **Table 2/3/4**: We added three new baselines on problems studied in the paper. The newly added baselines are the Multi-wavelet neural operator (MWT) [3], and two newly proposed variants of FNO: Tensorized-FNO [4] and Factorized-FNO [5]. Tensorized-FNO (T-FNO) factorizes the weight in a standard FNO layer with a tensor decomposition technique. Factorized-FNO (F-FNO) introduces a group of tricks for improving standard FNO and applies spectral convolution in an axial-factorized way. *Note: We opt not to include the results of MWT and T-FNO on 3D problems to avoid misleading conclusions. The current MWT implementation does not include an architecture for applying the 3D wavelet transform/basis projection and can only operate on resolutions that are a power of 2. The original T-FNO in [4] is applied to 1D/2D steady-state prediction problems. Our direct application of T-FNO to 3D time-dependent problems shows a notable degradation compared to FNO.* #### **Table 5**: Detailed runtime comparison between different baseline models. #### **Figure 1**: Averaged rollout error trend of models using the linear/factorized attention scheme. #### **Figure 2**: A diagram comparing the Axial Transformer [6] and the proposed model. We have also addressed the detailed comments of every reviewer in separate replies. Please don’t hesitate to let us know if there are any further questions or additional comments. *References*: [1] Fourier neural operator for parametric partial differential equations, 2021. [2] Choose a transformer: Fourier or galerkin, 2021. [3] Multiwavelet-based operator learning for differential equations, 2021. [4] Multi-Grid Tensorized Fourier Neural Operator for High Resolution PDEs, 2023. [5] Factorized Fourier Neural Operators, 2022. [6] Axial Attention in Multidimensional Transformers, 2019. Pdf: /pdf/ee0a51070e6e77e484372988da34dee119bb554f.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes Factorized Transformers (FactFormer) for surrogate modeling of partial differential equations (PDEs). FactFormer first projects the input function to multiple one-dimensional projected functions, which are then used for multi-dimensional factorized attention. The factorized attention greatly reduces the model complexity, while still achieving competitive performance on three turbulence problems. Strengths: 1. Improving the efficiency of the transformer is important to the development of transformers in PDE modeling. 2. The proposed FactFormer achieves superior model efficiency with competitive performance. Weaknesses: 1. The design of learnable projection and factorized kernel integral may limit the model capacity, especially for problems that do not have such an underlying low-rank structure. For example, the projection block basically gets $n$ projections of the input function on $n$ dimensions, which could drop crucial information about the input function. 2. In principle, full attention should have better model capacity than factorized attention. However, the experimental results in this paper show that full attention is much worse. The authors hypothesize that it is due to the instability of rollout. An error trend plot of full attention and factorized attention is needed to verify this hypothesis. 3. The authors hypothesize that CNN variants are better than FNO on turbulence problems because CNN filters capture high-frequency patterns but FNO truncates the high-frequency modes. However, the nonlinear activation function and the local linear transform branch in FNO allow FNO to model high-frequency patterns. [1] has shown it can efficiently approximate operators in incompressible Navier-Stokes equations. Further justification and an ablation study are needed to support this hypothesis. 4. Inference with FactFormer is 1.5-1.8x slower than FNO while the number of parameters is much smaller. Why is that? Can you also report the FLOPs of the FactFormer and compare it with other baseline models? 5. Overall, the accuracy of FactFormer is not consistently better than Dil-ResNet. Minor issues: 1. Line 179: $L_N$ should be $L_n$. 2. Line 302: "time cost for ..." -> "time cost of ..." 3. Line 315: "time cost by ..." -> "time cost of ..." [1]: Kovachki, Nikola, Samuel Lanthaler, and Siddhartha Mishra. "On universal approximation and error bounds for Fourier neural operators." The Journal of Machine Learning Research 22.1 (2021): 13237-13312. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. How is the error trend plot computed exactly? I thought it was the cumulative error, but the error is not monotonically increasing. 2. Can you also compare the model efficiency with tensorized FNO, which leverages tensor factorization to improve the efficiency of the original FNO? 3. Section 4.3 claims that factorized attention has higher ranks so it is more efficient. Can you elaborate more on how you arrive at that conclusion? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Listed in weaknesses and questions sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation.
Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer hvcj for the insightful comments and suggestions on the paper. Here we would like to address your concerns and questions as below. --- >*The design of learnable projection and factorized kernel integral may limit the model capacity, especially for problems that do not have such an underlying low-rank structure. > For example, the projection block basically gets n projections of the input function on n dimensions, which could drop crucial information about the input function.* The projection part exploits the low-rank structure and drops information along the other axes, but the value branch in the attention layer preserves the information of all dimensions, and then a pointwise feed-forward network is applied to the output of the attention layer (somewhat similar to the local branch in FNO). --- > *The authors hypothesize that it is due to the instability of rollout. An error trend plot of full attention and factorized attention is needed to verify this hypothesis.* We've added a figure of the average rollout error trend of linear/factorized attention in the newly uploaded PDF (**Figure 1**). --- > *The authors hypothesize that CNN variants are better than FNO on turbulence problems because CNN filters capture high-frequency patterns but FNO truncates the high-frequency modes. However, the nonlinear activation function and the local linear transform branch in FNO allow FNO to model high-frequency patterns [1]...* We thank the reviewer for the valuable reference. We will revise our description of the high-frequency bias hypothesis and incorporate the reference in the next version of the paper. We agree that theoretically FNO has the capacity to capture high-frequency patterns, but in practice, this also depends on the model's training dynamics and the problems it is applied to. [1] posits that learning the linear transform of Fourier coefficients can potentially suffer from spectral bias [2] and chooses to preserve lower-frequency modes during training. [1] Incremental Fourier Neural Operator, 2022. [2] On the spectral bias of neural networks, 2019. --- > *Inference with FactFormer is 1.5-1.8x slower than FNO while the number of parameters is much smaller. Why is that? Can you also report the FLOPs of the FactFormer and compare it with other baseline models?* The major reason is that the number of parameters of FNO grows exponentially with respect to the problem dimension while FactFormer's does not (the projection operator of FactFormer grows linearly). The parameter count of the kernel in each FNO kernel integral layer is $O(M^nd^2)$, where $M$ is the number of truncated modes and $n$ is the problem dimension. For example, using 16 modes and a hidden dimension of 128 in a FNO3D layer will result in a parameter count of $O(16^3 \times 128^2)$. We also list the FLOPs (on a 3D grid) measured by the DeepSpeed library below:

| Model | Tensorized-FNO | Factorized-FNO | FNO | Dil-ResNet | Linear Transformer | FactFormer |
|-------|----------------|----------------|-----|------------|--------------------|------------|
| GFLOPs | 33 | 101 | 33 | 3501 | 1685 | 596 |

--- > *How is the error trend plot computed exactly? I thought it was the cumulative error, but the error is not monotonically increasing.* Frame-wise error is shown in the error trend plot. LM models predict multiple future steps within one model call, and thus the error is not always monotonically increasing within a prediction window.
--- > *Can you also compare the model efficiency with tensorized FNO...* In the newly uploaded one-page PDF, we added tensorized FNO to the runtime comparison (**Table 5**) and we also tested it on the 2D problem (**Table 2**). --- > *Section 4.3 claims that factorized attention has higher ranks so it is more efficient. Can you elaborate more on how you arrive at that conclusion?* The low-rank property of the large full attention matrix $A$ hints that it is possible to simplify the original kernel integral computation without too much loss of information. We approach this simplification by replacing the original kernel integral with an axial factorized integral that involves only a group of much smaller (but higher-rank) attention matrices $A^1, ..., A^n$, which improves the computational efficiency. Based on the reviewer's feedback, we will revise the discussion in Section 4.3 to improve clarity. --- > *Spotted typos...* We thank the reviewer for the catch. We will correct them in the next version of the manuscript.
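To make the $O(M^n d^2)$ parameter count quoted in this rebuttal concrete, a quick back-of-the-envelope check (our arithmetic, ignoring the extra factor for complex-valued spectral weights):

```python
modes, dim = 16, 128
fno3d_layer_params = modes**3 * dim**2  # O(M^n d^2) with n = 3
print(f"{fno3d_layer_params:,}")        # 67,108,864 -- roughly 67M per spectral layer
```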
Summary: The authors propose FactFormer, a neural PDE solver that is based upon the transformer, but scales much better with the dimensionality of the PDE problem. The authors compare FactFormer to several other neural PDE solvers on three different problems and find that FactFormer achieves similar performance to state-of-the-art PDE solvers while offering lower computational costs. This property may also allow FactFormer to scale to higher-dimensional problems than existing PDE solvers. Strengths: 1) The paper is generally well-written 2) The authors use three challenging PDE problems, and they compare their approach to two state-of-the-art existing models, providing reasonably good empirical evidence for the performance of their approach. 3) The authors do provide some analysis comparing FactFormer to other attention mechanisms, which is helpful for proving their advantage. Weaknesses: 1) (Major) Upon inspection, it doesn't appear that these baseline models (FNO and Dil-ResNet) have been previously applied by other authors to any of the three benchmark problems (with the particular settings used in this work), so that the authors here had to implement and apply the baseline methods themselves. This introduces a major potential source of bias, because it is unclear whether the authors made equal effort to optimize the performance (e.g., by adjusting model size, batch sizes, learning rates, etc.) of the baseline methods and their proposed approach. It would be better if the authors could test FactFormer on one or two PDEs where the baseline methods (FNO and Dil-ResNet) have previously been optimized and tested by other authors, helping to ensure a fairer comparison. For example, why don't the authors use the data/benchmarks from [84]? 2) (Major) The main claim of this paper is that attention is good, but we cannot use it because of the computation time. Therefore we make it more efficient, resulting in FactFormer, but the FactFormer model doesn't perform any better than existing models (e.g., Dil-ResNet). Therefore, if there was an advantage to using Transformers for PDE solvers, apparently that advantage disappears when using the factorization in FactFormer. So then the main contribution here is presumably the computational time of FactFormer? FactFormer does seem to be significantly more computationally efficient than Dil-ResNet for inference - which is great - but what about training time? If it is not also superior during training, then this is a major limitation. 3) (Minor) The main stated contribution of this paper is to extend transformers to solve higher-dimensional problems, rather than outperform other PDE solvers. Indeed, the Dil-ResNet often outperforms the FactFormer, and therefore there seems to be little, if any, performance advantage against recent PDE solvers. The authors show some theoretical big-"Oh" analysis of runtime, but these are asymptotic bounds. In Table 4 the authors report some computation times, but it is just for one problem, and it is unclear what problem they are testing on (perhaps I missed it?). Since this is a main claimed contribution of this work, it seems quite important to me to report the empirical computation time (e.g., per epoch or iteration) for PDE problems with some varying mesh sizes and dimensionalities, so that we can see how the runtimes compare for some common settings of these parameters.
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1) Per Weakness #2 above, what is the computational cost of training FactFormer compared to Dil-ResNet and FNO? 2) Is there a reason the authors did not use existing benchmark PDE problems to test FactFormer (per my Weakness #1 comment above)? It is important to me that there is a good reason, or alternatively, that the authors can either (i) provide evidence that they fairly optimized all competing models (this is very hard to do), or (ii) add another benchmark that was used in a prior study, and compare FactFormer's performance to that reported for the baselines in that prior study. If these questions are (convincingly) addressed, I am willing to significantly increase my score, potentially to "Accept". Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: na Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer YpQV for the detailed comments and suggestions on improving the paper. Here we would like to address your concerns and questions as below. --- > *The motivation for creating new datasets.* One of the major goals of our work is to improve the scalability of attention-based PDE learning models on higher-dimensional PDE problems, and many existing benchmarks consider only 1D or 2D problems. To this end, we created a dataset with two challenging 3D problems and a high-resolution 2D problem to stress-test our newly proposed model and existing neural PDE solvers. --- > *Why don't the authors use the data/benchmarks from [84]?* To our knowledge, the data and code of [84] are not publicly available yet. --- > *Is there a reason the authors did not use existing benchmark PDE problems to test FactFormer (per my Weakness #1 comment above)? It is important to me that there is a good reason, or alternatively, that the authors can either > (i) provide evidence that they fairly optimized all competing models (this is very hard to do), or (ii) add another benchmark that was used in a prior study, > and compare FactFormer's performance to that reported for the baselines in that prior study.* We appreciate the reviewer's suggestion of a fair and reliable benchmark. To address your concern, we have added new experiments of FactFormer on commonly used benchmark datasets including 2D Navier-Stokes and 2D Darcy flow from FNO (*please refer to **Table 1** in the newly uploaded one-page PDF*). We observe that FactFormer also has competitive performance on these datasets. In addition, we plan to open-source the code and datasets proposed in this work in the future to facilitate the development of other models in this area. --- > *Indeed, the Dil-ResNet often outperforms the FactFormer, and therefore there seems to be little, if any, performance advantage against recent PDE solvers.* We agree with the reviewer that our proposed model does not consistently outperform Dil-ResNet. However, we do want to point out that as there exists a large variety of PDEs, there is no guarantee that one type of model can rule them all. Prior studies have reported that Dil-ResNet excels on 3D turbulence [1] and 2D smoke buoyancy [2]. In our newly added experiments, we show that our proposed model actually has better accuracy than Dil-ResNet on some other 2D problems. In addition, unlike FNO and FactFormer, without changing the architecture, Dil-ResNet's performance deteriorates significantly when the resolution increases (**Table 1 - Darcy flow 2D** in the newly uploaded PDF). In this case, our proposed model has roughly the same level of accuracy under varying resolutions. [1] Learned Simulators for Turbulence, ICLR 2023 [2] Towards Multi-spatiotemporal-scale Generalized PDE Modeling, 2023 --- > *In Table 4 the authors report some computation times, but it is just for one problem, and it is unclear what problem they are testing on (perhaps I missed it?)* The problem tested is $128 \times 128$ Kolmogorov flow (*mentioned in line 331 of the manuscript; we will make this clearer based on the reviewer's feedback*). On top of that, we would like to invite the reviewer to also check out **Figure 1** and **Figure 2** in the **Supplementary material**, which include runtime reports under different model sizes and mesh sizes/dimensionalities.
--- > *What is the computational cost of training FactFormer compared to Dil-ResNet and FNO?* We have added the full runtime (forward+backward) in **Table 5** of the newly uploaded PDF. The relative order is similar to the inference time we reported in the manuscript. --- Rebuttal Comment 1.1: Title: Thank you for your detailed response, and quick follow-up questions Comment: I thank the authors for their detailed response to my comments. I find most of the argumentation and new results convincing. I have increased my rating by one point. I will consider increasing it by another point or two, but I have one more follow-up question regarding the fair comparison of FactFormer with prior competing approaches. In Table 1, the authors added additional experimental results for the Navier-Stokes and Darcy-Flow problems, which I appreciate, and which I believe invariably strengthens the results of the paper. Per my previous commentary, however, these results would be even more convincing (much more in my opinion) if the results of the competing methods (e.g., FNO, Dil-ResNet) on this problem were reported from prior work, where the authors of the competing methods had optimized these models for the problems under study. With this in mind I have three (hopefully quick) follow-up questions: 1) Is it still the case that you (the authors) implemented the competing models yourself on these new problems (Navier-Stokes and Darcy)? 2) If yes, then why? E.g., is the data for these problems from prior studies not publicly available so you could run FactFormer on the same data? 3) Even if data from prior studies is not available, how do your results with the competing models compare with those obtained from prior studies? Are they similar? If the authors could provide a short description and/or point me to a particular study, and the relevant Table/Figure numbers, that would be helpful. --- Reply to Comment 1.1.1: Title: Reply to reviewer YpQV Comment: Thanks for your consideration and additional comments. The results from FNO and Linear Transformer are based on their original papers, except that we re-ran FNO's results on Darcy using its latest, improved official implementation for a fairer comparison. FNO reported around 1.09e-2 relative error on Darcy in its original paper, whereas we have reported around 0.70e-2 relative error using an updated and improved implementation from the official repo. For the official results from FNO and Linear Transformer (LT), we kindly refer the reviewer to the following papers: * Table 1 (Navier-Stokes) and Table 4 (Darcy flow) in the paper: "*Fourier Neural Operator for Parametric Partial Differential Equations*" (arxiv version) * Table 2(b) (Darcy flow) in the paper: "*Choose a Transformer: Fourier or Galerkin*" (arxiv version) * Table 1 (Navier-Stokes) in the paper: "*Transformer for Partial Differential Equations’ Operator Learning*" (arxiv version) On top of that, we implemented and ran the experiments for Dil-ResNet, as there is no other work reporting its performance on these datasets to our knowledge. We have also uploaded the scripts for reproducing FactFormer on these datasets in the anonymous repo (link in Section 1 of the Appendix).
Improving Adversarial Transferability via Intermediate-level Perturbation Decay
Accept (poster)
Summary: The paper addresses the limitations of intermediate-level attacks, highlighting how their two-stage training scheme leads to perturbations that deviate from directional guides, significantly impairing attack performance. To overcome this limitation, the authors propose a single-stage optimization method that ensures intermediate-level perturbations have larger magnitudes and precisely follow the directional guides, effectively increasing model prediction loss. Extensive experiments on ImageNet and CIFAR-10 demonstrate that the proposed method outperforms existing techniques when attacking various victim models. Furthermore, the paper shows that the proposed method can be easily combined with prior approaches to craft more transferable adversarial examples. Strengths: This paper recognizes the limitations of current intermediate-level attacks and proposes a novel method named ILPD. This method crafts adversarial examples using a single stage of optimization, encouraging the intermediate-level perturbation to be both in an effective adversarial direction and possess a significant magnitude simultaneously. A large number of experiments demonstrate that ILPD outperforms SOTA methods by large margins. Weaknesses: The method seems sensitive to $\gamma$ and the intermediate layer chosen to split $h$ and $g$. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Lines 200-201. It is reasonable that the gradients of $h$ are not concerned, but why is $W_{\beta+1}$ (after the intermediate layer) not considered? 2. If you change the surrogate model, do you need to adjust the intermediate position and $\gamma$? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The method appears to be sensitive to the parameter $\gamma$ and the choice of intermediate layer for splitting $h$ and $g$. However, in real-world scenarios, it is often impractical to tune these parameters effectively based on the victim models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive feedback. Our response to the comments is provided as follows. &nbsp; > The method seems sensitive to $\gamma$ and the intermediate layer chosen to split $h$ and $g$. **Response:** Just like all other intermediate-level attacks, although our ILPD needs to tune the position to achieve its optimal performance, we can observe from our discussions in Section 5.4 that our method is less sensitive to the choice of position compared to other methods. Moreover, according to experiments in the ILA paper, the optimal choice of position can actually be found on the substitute model itself, thus it is generally not a concern. As for $1/\gamma$, Figure 6 in the paper shows that our method outperforms all its competitors with a wide range of $1/\gamma$ ($[0, 0.3]$ and $[0.1, 0.6]$ on ImageNet and CIFAR-10, respectively), showing that the choice of $1/\gamma$ is not a big concern for ensuring its superiority over the state of the art. &nbsp; > Lines 200-201. It is reasonable that the gradients of $h$ are not concerned, but why is $W_{\beta + 1}$ (after the intermediate layer) not considered? **Response:** We appreciate the comment. Indeed, the difference in $W_{\beta + 1}$ between the substitute model and the victim models also affects the transferability. However, such a gap cannot be easily bridged, unless we modify the parameters of the substitute model to make it more similar to the victim parameters, which seems infeasible from our perspective. By contrast, the difference in activation masks in $\{D_i\}$ and/or diversity in the gradients of $L$ _w.r.t._ the logits can be somehow reduced by modifying the intermediate-level representations on the substitute model, thus we mainly discuss them in our paper. &nbsp; > If you change the surrogate model, do you need to adjust the intermediate position and $\gamma$? **Response:** To show whether the same hyper-parameters work on different substitute models, we conducted an experiment on ImageNet to compare the performance of ResNet-50, ViT-B, DeiT-B, Swin-B, and the MLP Mixer-B as the substitute model. $h$ is always split to be the first two blocks of these models and we fix $1/\gamma=0.1$ for them. For NAA and ILA++, we tested the results of all possible intermediate layer selections and report the best results among them. The average success rates are shown as follows and results are compared with the I-FGSM baseline, NAA, and ILA++. The victim models are the same as those in Table 1 in our paper. It can be seen that the obvious superiority of our ILPD holds. | Substitute model | NAA | ILA++ | ILPD | |:------------------:|:--------:|:-----:|:-------:| | ResNet-50 | 46.36% | 46.99% | **57.06%** | | ViT-B | 43.52% | 35.02% | **46.63%** | | DeiT-B | 35.35% | 23.66% | **44.28%** | | Swin-B | 21.41% | 13.23% | **36.77%** | | MLP Mixer-B | 33.18% | 20.81% | **41.72%** | --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: The authors have addressed my question. I intend to maintain my current rating. --- Reply to Comment 1.1.1: Title: Thanks to the reviewer Comment: Dear Reviewer ektX, Thanks for responding to our rebuttal. We are glad to know that your questions have been addressed! Best regards, Authors
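As a concrete reading of the success-rate tables in this thread, here is a minimal sketch of how a transfer attack success rate is typically computed and then averaged over victim models. The `victims` list and tensor names are hypothetical placeholders, not the authors' evaluation code.

```python
import torch

@torch.no_grad()
def success_rate(victim, x_adv, y_true):
    """Fraction of adversarial examples misclassified by one victim model."""
    pred = victim(x_adv).argmax(dim=1)
    return (pred != y_true).float().mean().item()

def average_success_rate(victims, x_adv, y_true):
    """Average transfer success rate over a list of victim models,
    as reported in the tables above (hypothetical `victims` list)."""
    return sum(success_rate(v, x_adv, y_true) for v in victims) / len(victims)
```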
Summary: In this paper, the authors propose an intermediate-level attack method with a single stage of optimization to improve attack transferability. When computing gradients, it adopts intermediate-level perturbation decay so that the intermediate-level perturbation possesses a larger magnitude. Experimental validation on the ImageNet and CIFAR-10 datasets is given to demonstrate its effectiveness in improving attack transferability. Strengths: 1. The proposed method presents a single stage with a larger magnitude rather than two-stage training. 2. The proposed method is a pluggable module that can be deployed in other attack methods. 3. The experiments are thorough with various victim models, including CNNs, MLP, vision transformers, and robust models. Weaknesses: 1. From Figure 1, there is no obvious difference between the victim model and the substitute model. The reviewer is confused that these two figures are less related to the statement in Section 3.1. 2. The organization of this paper is disordered. The authors introduce the existing intermediate-level attack method together with the proposed method, which obscures the contribution of the proposed method. The authors should highlight their own contribution. 3. For the experiments, the authors only use the ResNet-50 model as the substitute model to test the attack transferability on other models. However, a cross-model evaluation could be better, i.e., using MLP or transformers to test on other types of models. 4. The writing of this paper needs to be improved. - For the introduction, the authors do not provide a brief introduction of the proposed method. - For the related work, white-box attacks are less related to this paper. In addition, existing intermediate-level attack methods are missing. - There are also some typos. a) In line 31, be discuss -> be discussed. b) In line 327, the phrase ‘to all competitors’ appears twice. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Figure 1 does not give an intuitive visualization of the problem of existing intermediate-level attacks. The motivation of this paper should be stated clearly. 2. Related work on existing intermediate-level attack methods is lacking. The overall organization of this paper should be improved. 3. How about the transferability with other types of models as the substitute model? 4. Fix the typos and polish this paper again. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Limitations are not clearly elaborated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback. Our responses to the comments are provided as follows. &nbsp; > From Figure 1, there is no obvious difference between the victim model and the substitute model. The reviewer is confused that these two figures are less related to the statement in Section 3.1. **Response:** Figure 1 is given to show the negative impact of the directional deviation from the guide in middle layers. In Figure 1, we can observe that with a similar scalar projection, more severe deviation leads to lower prediction loss on both the substitute model (Figure 1(a)) and the victim model (Figure 1(b)). We use blue and orange colors to indicate different ranges of prediction loss in the two subfigures, and it can be seen that lower prediction loss is generally achieved on the victim model. The intermediate-level perturbation obtained by ILA is marked by the purple box in the figure. The figure further demonstrates that ILA leads to directional deviation from the guide (as its x-axis value, _i.e._, the cosine similarity, is smaller than 1.0), and, in particular, if such a deviation can be addressed, higher attack performance can be achieved by adversarial examples whose intermediate-level perturbations are even smaller (as darker colors, _i.e._, higher prediction loss, can be achieved with points on the y-axis whose scalar projections are even smaller than that of the ILA point). We have discussed these points in detail in the last paragraph in Section 3.1. &nbsp; > The organization of this paper is disordered. The authors introduce the existing intermediate-level attack method together with the proposed method, which obscures the contribution of the proposed method. The authors should highlight their own contribution. **Response:** Section 3.1 not only introduces previous intermediate-level attacks, but also discusses and analyzes their limitations, which inspired our work, _i.e._, ILPD. We believe such a discussion is also part of our contributions and it will inspire future work in this field. Moreover, without such a gentle introduction and discussion, it would be more difficult to capture the motivation of our method. We are more than glad to highlight more about the contribution of this paper in our revision. &nbsp; > For the experiments, the authors only use the ResNet-50 model as the substitute model to test the attack transferability on other models. However, a cross-model evaluation could be better, i.e., using MLP or transformers to test on other types of models. **Response:** We appreciate the suggestion and have conducted such a comparison on ImageNet, using ViT-B, DeiT-B, Swin-B, and MLP Mixer-B as the substitute models. $h$ is always split to be the first two blocks of these models and we fix $1/\gamma=0.1$. For NAA and ILA++, we tested the results of all possible intermediate layer selections and report the best results among them. The average success rates are compared as follows, and the victim models are the same as those in Table 1 of our paper. | Substitute model | NAA | ILA++ | ILPD | |:------------------:|:-----:|:-------:|:-------:| | ViT-B | 43.52% | 35.02% | **46.63%** | | DeiT-B | 35.35% | 23.66% | **44.28%** | | Swin-B | 21.41% | 13.23% | **36.77%** | | MLP Mixer-B | 33.18% | 20.81% | **41.72%** | &nbsp; > For the introduction, the authors do not provide a brief introduction of the proposed method.
**Response:** As has been discussed in the paper, since the name of the proposed method itself contains much information, we have briefly introduced our method in the introduction section as "In this paper, we propose a method that encourages the intermediate-level perturbation to possess a greater magnitude than a directional guide by its nature and to be in the same adversarial direction as that of the guide. This is achieved by introducing intermediate-level perturbation decay (ILPD) in a single stage of optimization." We are more than glad to introduce more in an updated version of the paper. &nbsp; > For the related work, white-box attacks are less related to this paper. In addition, existing intermediate-level attack methods are missing. **Response:** Since I-FGSM is considered the baseline and it was first introduced as a white-box method, we briefly introduce white-box attacks in Section 2. Notations are also given during such a brief introduction. Following your suggestion, we will consider revising Section 2 to further simplify such an introduction, and we will perform a more comprehensive literature review in this section. &nbsp; > There are also some typos. a) In line 31, be discuss -> be discussed. b) In line 327, the phrase ‘to all competitors’ appears twice. **Response:** The authors would like to thank the reviewer for pointing out typos. All of them will be addressed in an updated version of the paper. --- Rebuttal Comment 1.1: Comment: Thanks for the response to my questions. The authors clarified some of my concerns, especially with the additional supporting experiments. However, Figure 1 is still hard for me to understand. I think the authors can resolve the remaining questions in this rebuttal, and I am willing to upgrade my score. I hope the authors will provide the complete results in the revised paper and polish this paper again. --- Reply to Comment 1.1.1: Title: Thanks to the reviewer Comment: It's great to know that your concerns have been addressed! Of course, our paper will be revised accordingly to include the experimental results from our rebuttal and to clarify confusing points. Thanks to the reviewer.
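To make the Figure 1 axes discussed in this thread concrete, below is a small sketch of the two quantities in question: the cosine similarity and the scalar projection of an intermediate-level perturbation onto a directional guide, computed on flattened feature maps. This is our reading of the setup, not the authors' analysis code; `h` is assumed to be the first part of the substitute model.

```python
import torch

@torch.no_grad()
def deviation_stats(h, x, x_adv, v):
    """Cosine similarity and scalar projection of the intermediate-level
    perturbation h(x_adv) - h(x) onto a directional guide v.

    h: callable mapping inputs to intermediate feature maps;
    v: guide tensor with the same shape as the feature maps.
    """
    delta = (h(x_adv) - h(x)).flatten()
    v = v.flatten()
    cos = torch.dot(delta, v) / (delta.norm() * v.norm())  # x-axis in Fig. 1
    proj = torch.dot(delta, v) / v.norm()                  # y-axis: scalar projection
    return cos.item(), proj.item()
```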
Summary: This paper presents an approach to enhancing the transferability of adversarial examples in a black-box scenario. Previous studies have typically followed a two-stage process involving the derivation of effective guiding directions and subsequently maximizing the perturbation magnitudes accordingly. Unfortunately, this approach often deviates from the intended guides, resulting in suboptimal performance. In contrast, the authors introduce a one-stage method that utilizes a decayed perturbation function and an analysis tool to investigate the sources of deviation. Through extensive experiments, the authors demonstrate the effectiveness of the proposed approach. Strengths: 1. The paper is well-written, and the motivation is sound. 2. The analysis provides insights and could serve as a tool for future work. 3. Extensive experiments verify the effectiveness of the proposed method. Weaknesses: 1. Despite the empirical effectiveness, the proposed method is not well-explained. Or, at least, it is unclear to me. The authors may like to elaborate more on how the "dual" equation (Eq. 3) is derived and provide insights into the equation. There seem to be some alternatives. For example, one can maximize $\mathcal{L}(h(x+\Delta x))$. What would be the role of $h(x)$ in this formulation? 2. Following the previous point, the proposed method looks like an incremental or empirical finding to me if it lacks proper insights. 3. In L152, what is the link to the mixup? The authors mentioned it, but it needs further in-depth discussion. 4. In Section 4, the analysis lacks the baseline ILA. The audiences will be interested in whether the proposed method improves the prior work as motivated by the authors in the introduction. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The experiment settings seem to be different from prior work. What does the *different* mean in Sec 5-1? Why didn't the authors report the same architecture attack performance? 2. Meanwhile, the performance of ILA++ reported in the paper is different from the original paper. What would be the root reason for it? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: 1. The authors do not report the variance of the experimental results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback. Our responses to the comments are provided as follows. &nbsp; > Despite the empirical effectiveness, the proposed method is not well-explained. Or, at least, it is unclear to me. The authors may like to elaborate more on how the "dual" equation (Eq. 3) is derived and provide insights into the equation. There seem to be some alternatives. For example, one can maximize $L (h(x+\Delta x))$. What would be the role of $h(x)$ in this formulation? **Response:** Our motivation is summarized as follows to make it clearer. * The common belief in intermediate-level attacks, _e.g._, ILA, ILA++, and NAA, is that a larger magnitude of intermediate-level perturbations along the directional guide leads to improved transferability of adversarial examples. These methods achieve their goals via a two-stage mechanism, _e.g._, performing I-FGSM first to obtain the directional guide and then enlarging the projection of the intermediate-level perturbation onto the directional guide. * However, in this paper, we point out that the directional deviation of the intermediate-level perturbation from the guide, even if subtle, does great harm to the transferability. Therefore, we seek an intermediate-level perturbation $h(\mathbf{x}+\mathbf{\Delta x}) - h(\mathbf{x})$ that is in an adversarial direction $\mathbf{v}$ and simultaneously possesses a larger magnitude than $\mathbf{v}$. * This is achieved by defining $\mathbf{v} = \frac{1}{\gamma} (h(\mathbf{x}+\mathbf{\Delta x}) - h(\mathbf{x}))$ as the directional guide and optimizing it to be adversarial in Eq. (3) in the paper. From such a definition, we know that the achieved intermediate-level perturbation naturally has a larger magnitude than $||\mathbf{v}||$, thus the goal is achieved. We appreciate the suggestion about alternative options, but $L(h(\mathbf{x}+\mathbf{\Delta x}))$ (if $L$ is defined to be the cross-entropy loss as in our paper) can NOT be directly optimized. We are not sure if the reviewer actually meant to say $L (g(h(\mathbf{x}+\mathbf{\Delta x})), y)$. If yes, we would like to remind the reviewer that this is equivalent to the baseline attack, _i.e._, I-FGSM, and it is not beneficial to transferability. We agree that there may exist alternatives, yet, from our perspective, the chosen formulation is the most straightforward and effective implementation according to our motivation. &nbsp; > In L152, what is the link to the mixup? The authors mentioned it, but it needs further in-depth discussion. **Response:** As has been discussed in lines 151-153 and lines 172-178, after rewriting the objective, our solution resembles performing mixup on the intermediate-level feature representations between adversarial and benign examples. Yet, in our solution, there is no randomness for $\gamma$, and the mix ratio $1/\gamma$ of the optimized subject is suggested to be relatively small ($\leq 0.5$). If we follow mixup and use a random mix ratio sampled from a Beta distribution, then the optimization of adversarial examples hardly converges, as the ratio on the optimized subject changes drastically during optimization. &nbsp; > In Section 4, the analysis lacks the baseline ILA. The audiences will be interested in whether the proposed method improves the prior work as motivated by the authors in the introduction. **Response:** We would like to politely remind the reviewer that we have discussed ILA from the perspective of gradient alignment in Section 4 already (specifically, in lines 228-234).
Moreover, we have conducted comprehensive empirical comparisons between our method and other intermediate-level attacks (including ILA++ and NAA) in Section 5. These methods outperform ILA according to our experimental results, consistent with similar results in many previous papers. To be concrete, in our experiment, ILA achieves an average attack success rate of 44.80% on ImageNet, while ILA++, NAA, and our solution achieve 46.99%, 46.36%, and 57.06%, respectively. &nbsp; > The experiment settings seem to be different from prior work. What does the different mean in Sec 5-1? Why didn't the authors report the same architecture attack performance? **Response:** In Section 5.1, since a VGG-19 is used as the substitute model, we adopt an independently trained VGG-19, which has different weights but the same architecture as one of the victim models, to make the evaluation more practical. Note that, in a black-box setting, it is less likely that the victim adopts exactly the same model as the substitute model, but more likely that the architecture is the same. &nbsp; > Meanwhile, the performance of ILA++ reported in the paper is different from the original paper. What would be the root reason for it? **Response:** As has been discussed in lines 257-258, this is because of differences in pre-processing. In this paper, we don't crop images before feeding them to the models, following some recent work [1][2], while in the ILA++ paper, the images were cropped. &nbsp; > The authors do not report the variance of the experimental results. **Response:** For compared methods that involve randomness, the standard deviation of their multiple-run results is relatively small (generally smaller than 0.30%, to be specific) compared to their performance gaps, thus it does not affect the conclusions. We will add such results to our paper. &nbsp; &nbsp; [1] Jiadong Lin, et al. Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks. In ICLR 2019. [2] Xiaosen Wang, et al. Admix: Enhancing the Transferability of Adversarial Attacks. In ICCV 2021. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: I appreciate the detailed and informative response, which has clarified most of my concerns. I will raise my score afterwards. The reason and further comments are listed below. > We appreciate the suggestion about alternative options ... I apologize for the typo. I was actually contemplating a substitute for the term $h(x+\Delta x) - h(x)$ in Equation (3) and meant to ask whether it is the optimal choice. But, I acknowledge that it is minor and agree with the authors' motivation on magnitude maximization while retaining directional information. > Link to mixup After reading the response, I still found it a weak link, but it's also minor. I respect the authors' opinion to keep it. > In Section 4, the analysis lacks the baseline ILA. The audiences will be interested in whether the proposed method improves the prior work as motivated by the authors in the introduction. I would like to further elaborate on this point. It would be of interest to readers to witness a quantitative comparison between the proposed method and ILA concerning the alignment of gradient angles, which could be incorporated into, e.g., either Figure 3 or 4. As highlighted in Section 3-1 (especially around Line 100), the deterioration in ILA's performance stems from deviations in directional angles, while the proposed method addresses the issue via a novel objective.
Despite the comprehensive experimental validation of attack performance, this particular aspect is somewhat unverified within the paper. Beyond these comments, the paper seems novel and provides extensive experiments proving its effectiveness. --- Reply to Comment 1.1.1: Title: Thanks to the reviewer Comment: It is great to know that most of your concerns have been addressed! As for the quantitative comparison between our ILPD and ILA, we appreciate the further elaborated suggestion and we will add more ILA results to our paper, in addition to what has been given in the rebuttal (i.e., ILA achieves an average attack success rate of 44.80% on ImageNet, while our solution achieves 57.06%). --- Reply to Comment 1.1.2: Comment: Dear Reviewer XQBy, Thanks again for your positive feedback on our rebuttal and for your intention to raise your score! However, it seems that currently the score (on the system) is still the same as pre-rebuttal, and we would like to gently remind you that the rating can now be altered. As there are three days left for author-reviewer discussion, we would be more than happy to address any remaining concerns you may have. Best regards, Authors
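Pulling together the formulation discussed across these threads (the guide $\mathbf{v} = \frac{1}{\gamma}(h(\mathbf{x}+\mathbf{\Delta x}) - h(\mathbf{x}))$ and its mixup-style dual), here is a minimal sketch of a single-stage ILPD-style update as we reconstruct it from the rebuttals; it is not the authors' released code. The $\ell_\infty$ budget, step size, and the `h`/`g` split are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def ilpd_attack(h, g, x, y, eps=8/255, alpha=1/255, steps=100, gamma=10.0):
    """Single-stage ILPD sketch: decay the intermediate-level perturbation
    by 1/gamma (equivalently, mix adversarial and benign features) and
    maximize the prediction loss of the composed model g(h(.))."""
    h_clean = h(x).detach()                      # benign features, fixed
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        z = h_clean + (h(x_adv) - h_clean) / gamma   # decayed perturbation
        loss = F.cross_entropy(g(z), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()      # I-FGSM-style step
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
            x_adv = x_adv.clamp(0.0, 1.0)            # stay a valid image
    return x_adv.detach()
```

Note that $\gamma = 1$ recovers a plain I-FGSM step on the composed model, which matches the rebuttal's remark that the un-decayed objective is equivalent to the baseline attack.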
Summary: The authors first introduce the observation that the traditional Intermediate-Level Attack (ILA) and some subsequent works deviate from the directional guides, resulting in sub-optimal transferability. With such observation, the authors propose Intermediate-Level Perturbation Decay (ILPD), a one-stage optimization that decays the intermediate perturbation strength in return for an amplified directional guide, and thus higher adversarial transferability. Experiments on ImageNet and CIFAR-10 show that the proposed ILPD method outperforms SOTA transfer-based attacks on different types of models as well as robust models. Strengths: 1. Claims and speculations in the paper are well grounded by theoretical analysis as well as empirical studies. From the hypothesis of directional deviation in Figure 1, then the introduction of the ILPD and verification in Figure 2, studies and discussion are organized layer by layer, which is an enlightening experience to read through. 2. Apart from showing the analysis of ILPD, the authors extend the discussion to previous works such as ILA++, NAA and LinBP. I particularly like the authors’ attempt to replace $D_i$ and $\nabla_{z_g}L(\mathbf{z}_g, y)$ to study the transferability in the gradient aspect. 3. Although the modification the authors propose is a simple math trick by scaling up the influence from the directional guide, the optimization falls back to a single stage and no longer resembles ILA, which is sufficiently novel. 4. The experiments are conducted on a large variety of models, both normal and robust, both CNNs and ViTs. An ablation study and hyper-parameter studies are performed to understand the method in an in-depth manner. Weaknesses: I could not spot any major weaknesses in this paper. The following points are mostly minor and understandable: 1. Although the authors perform evaluations on robust models, the choice of models does not align with what used to be evaluated in the previous works (i.e. the NeurIPS 2017 Competition, some models with ensemble adversarial training, etc.). 2. The number of attack iterations is fixed to be 100, which is quite large considering most baselines perform only 10-20 iterations in their experiments. Combined with the observation in Figure 4(b), does this imply the proposed method has slow convergence and requires more iterations to yield transferable attacks? 3. The performance of ILPD seems to be quite sensitive to the choice of $\gamma$. For example, Figure 2 suggests that setting $1/\gamma \approx 0.5$ may obtain good transferability. However, in Figure 6(a), setting $1/\gamma = 0.5$ can have a success rate >15% lower than the optimal choice. For a new dataset, it is hard to determine a good starting $\gamma$ due to the difference between Figures 6(a) and 6(b). #### Grammar mistakes: - Line 148: the method **seems decays** - Line 212: **a** I-FGSM example - Line 214: the difference **become** - Line 254 **a** Inception v3 Here are only a few. The authors are recommended to check the paper thoroughly for the rest. --- In summary, this paper is very well-written and provides a lot of evidence to support its claims. The only weaknesses that I could identify are largely outweighed by its strengths. Therefore I recommend accepting this paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Regarding the ILA++ baseline, I would like more clarification on the base attack in ILA++ for directional guidance. Since two attacks are applied in series, how are they balanced to sum up to 100 iterations?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Weakness #3 is briefly discussed in the paper. To me, the search for hyper-parameters will be the major limitation of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive feedback. Our responses to the comments are provided as follows. &nbsp; > Although the authors perform evaluations on robust models, the choice of models does not align with what used to be evaluated in the previous works (i.e. the NeurIPS 2017 Competition, some models with ensemble adversarial training, etc.). **Response:** We appreciate the comment. The NeurIPS 2017 Competition provides a robust Inception v3 and a robust Inception ResNet v2, which is trained to be robust to an ensemble of adversarial examples. The performance on the robust Inception v3 has been shown in our paper already (in Table 3). We now provide comparative results on the robust Inception ResNet v2 (with ensemble adversarial training) as follows. It can be seen that the advantage of our method is still obvious. | ensemble adv. train. | I-FGSM | DI$^2$-FGSM | NAA | ILA++ | ILPD | |:---------------------:|:-------:|:---------------:|:----:|:------:|:-------:| | Inception ResNet v2 | 6.28% | 8.68% | 8.06% | 10.02% | **13.12%** | &nbsp; > The number of attack iterations is fixed to be 100, which is quite large considering most baselines perform only 10-20 iterations in their experiments. Combined with the observation in Figure 4(b), does this imply the proposed method has slow convergence and requires more iterations to yield transferable attacks? **Response:** Although some previous work used 10-20 iterations to evaluate transfer-based attacks, in practice, many attacks cannot converge well within only 10-20 iterations (as observed in many other papers, _e.g._, [1]). Thus, to ensure a fair comparison, we used 100 iterations with a step size of $1/255$ for each method to guarantee convergence of all methods, just like in many papers, _e.g._, [2][3]. In Figure 3 and Figure 4, we used a smaller step size of $\epsilon / 100$ to observe the trend of overfitting before exhausting the perturbation budget, as mentioned in the paper (lines 262-263). Given such a small step size, more iterations are indeed required to demonstrate superiority. However, in practice, our method does not exhibit slow convergence, and we have compared it with I-FGSM, NAA, and ILA++ on ImageNet as follows. We show how the average success rate of different methods varies with the maximum number of optimization iterations. The victim models are the same as those in Table 1 in our paper. | | 10 iterations | 20 iterations | 40 iterations | 60 iterations | 80 iterations | 100 iterations | |:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | I-FGSM | 16.95% | 20.86% | 22.38% | 22.34% | 22.40% | 22.36% | | NAA | 35.11% | 42.78% | 45.27% | 45.91% | 46.16% | 46.36% | | ILA++ | 41.76% | 45.98% | 46.54% | 46.88% | 46.94% | 46.99% | | ILPD | **42.16%** | **50.64%** | **55.50%** | **56.51%** | **56.83%** | **57.06%** | &nbsp; > The performance of ILPD seems to be quite sensitive to the choice of $\gamma$. For example, Figure 2 suggests that setting $1 / \gamma \approx 0.5$ may obtain good transferability. However, in Figure 6(a), setting $1/\gamma=0.5$ can have a success rate >15% lower than the optimal choice. For a new dataset, it is hard to determine a good starting $\gamma$ due to the difference between Figures 6(a) and 6(b). **Response:** In the paper, we suggest using $\gamma \geq 2$ (_i.e._, $1/\gamma \leq 0.5$, line 172) to narrow the search for such a hyper-parameter.
Figure 6 further shows that our method outperforms all its competitors with $1/\gamma$ in a wide range of $[0, 0.3]$ and $[0.1, 0.6]$ on ImageNet and CIFAR-10, respectively, showing that the choice of $1/\gamma$ is not a big concern for ensuring its experimental superiority over all state-of-the-art methods. In practice, the attacker could tune the hyper-parameters using a variant of the substitute model as the victim on a small validation set of, for instance, 200 examples of their data to ensure performance. As for Figure 2, it adopts the same victim model as in Figure 1, which shares the same $h$ with the substitute model, and thus might be slightly different from practical victim models and require a different value of $1/\gamma$ to reach the optimal attack. In general, the evaluated victim models in, for instance, Table 2 share a similar optimal $1/\gamma$ to be fully compromised. &nbsp; > Grammar mistakes. **Response:** Thanks for pointing out the typos. We will fix them in the updated version. &nbsp; > Regarding the ILA++ baseline, I would like more clarification on the base attack in ILA++ for directional guidance. Since two attacks are applied in series, how are they balanced to sum up to 100 iterations? **Response:** For previous intermediate-level attacks that require two-stage optimization, we do not count the number of iterations in their first-stage optimization in the 100 iterations. That is, if we counted them all together, these methods would require more than 100 iterations to reach the performance reported in our paper. &nbsp; &nbsp; [1] Zhengyu Zhao, et al. Towards Good Practices in Evaluating Transfer Adversarial Attacks. In arXiv 2022. [2] Yi Huang, et al. Transferable Adversarial Attack Based on Integrated Gradients. In ICLR 2021. [3] Yiwen Guo, et al. Backpropagating Linearly Improves Transferability of Adversarial Examples. In NeurIPS 2020. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I would like to thank the authors for the responses and the extra experiments. My concerns regarding convergence rate and choice of $\gamma$ are well addressed. Just a clarification here: I believe the robust Inception v3 (from timm) used by the authors originally is trained by standard adversarial training (AT), which uses white-box attacks. By the NeurIPS 2017 Competition, I indeed referred to the models - Inc-v3$_{ens3}$ - Inc-v3$_{ens4}$ - IncRes-v2$_{ens}$ (added in the rebuttal) as used in citations [11, 12, 28, 42, 46] in the paper. These models are trained using **ensemble** adversarial training (EAT), which is claimed to be more robust against black-box attacks. Nevertheless, I don’t think the results of said models will influence the conclusion, given the extensive experiments in other sections and the appended result of IncRes-v2$_{ens}$. Therefore, having no further concerns, I am inclined to keep my rating. --- Reply to Comment 1.1.1: Title: Thanks to the reviewer Comment: We would like to thank the reviewer for responding to our rebuttal. It is great to know that your concerns have been addressed.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper notes that ILA (intermediate level attack), which maximizes projection onto a guide attack, suffers when the direction of the attack ends up differing from the direction of the guide, even when the projection is large. Thus, they propose that instead of allowing the direction to vary from a guide, we should simply look for attacks that have large intermediate magnitudes. This motivates their approach (called Intermediate-Level Perturbation Decay). They then demonstrate that this approach results in significantly improved performance across a wide range of models. Strengths: Overall, I think the issue the authors identify with ILA is well motivated, and the improvements are fairly significant. In particular, the experiments investigating gradient alignment are fairly convincing about what ILPD is doing, as well as the analysis of existing attacks. Weaknesses: My primary issue with this paper was that I found the presentation confusing. Although the motivation for why we might prefer an approach that doesn't differ in direction from an original "guide" attack is solid, the actual motivation of the approach used was much less clear to me, and not particularly intuitively presented. From my understanding, the attack is essentially "simulating" reducing the strength of the perturbation at some intermediate stage, but this interpretation isn't presented in the paper. In particular, upon first reading, this line was particularly confusing to me. > seeks intermediate-level perturbations to be in an adversarial direction and to possess larger norms (or say larger magnitudes) compared to a directional guide in the same direction simultaneously Since if there were a perturbation of the input that could lead to a strictly larger perturbation along the same direction as a guide, I would expect it to already be found by standard optimization procedures. I believe the part I missed from my first reading is that there is no "directional guide" - in comparison with ILA (a 2-step procedure), this is a 1-step attack. Although I think I now have some intuitive understanding of the paper's attack, it doesn't quite match the authors' explanation, and I think the paper would benefit from improving the clarity of section 3.2 Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive feedback. Our response to comments about our presentation is provided as follows. **Response:** The common belief in intermediate-level attacks, _e.g._, ILA, ILA++, and NAA, is that a larger magnitude of intermediate-level perturbations along the guide leads to improved transferability of adversarial examples. Thus, these methods achieve their goals via a two-stage mechanism, _e.g._, performing I-FGSM first to obtain the directional guide and then maximizing the scalar projection of the intermediate-level perturbation onto the directional guide. However, in this paper, we point out that the learning objective of these methods inevitably leads to intermediate-level perturbations deviating from the guide, and such a directional deviation of the intermediate-level perturbation, even if subtle, does great harm to the transferability. Therefore, we seek an intermediate-level perturbation $h(\mathbf{x}+\mathbf{\Delta x}) - h(\mathbf{x})$ whose direction is already aligned with an adversarial directional guide $\mathbf{v}$ and which, simultaneously, possesses a larger magnitude than it. This is achieved by defining $\mathbf{v} = \frac{1}{\gamma} (h(\mathbf{x}+\mathbf{\Delta x}) - h(\mathbf{x}))$ as the directional guide and optimizing Eq. (3) in the paper. In the paragraph below Eq. (3), we have mentioned that the method seems to reduce the strength of the intermediate-level perturbation during optimization, from a dual perspective, and this is actually why we call it intermediate-level perturbation decay (ILPD). The effectiveness of such an intermediate-level perturbation decay, from another perspective, is explained in Section 4.2. Although the method can be interpreted superficially as decaying intermediate-level perturbations during optimization, our in-depth motivation is to encourage the finally achieved intermediate-level perturbation to be larger than a directional guide and to have the same direction as the guide. We are more than glad to follow the suggestions from the reviewer and revise Section 3.2 to make it clearer.
Fast Exact Leverage Score Sampling from Khatri-Rao Products with Applications to Tensor Decomposition
Accept (poster)
Summary: This work studies fast, exact leverage score sampling for Khatri-Rao product matrices (given access to its factor matrices) and an application to CP tensor decomposition via alternating least squares with row sampling. The main theoretical contribution is a data structure for sampling from the leverage score distribution of a Khatri-Rao product by iteratively sampling from the correct conditional marginal distribution as it constructs the row sample (one factor at a time). To apply this most effectively to CP decompositions, the authors propose a two-step version of this sampler that first samples a rank-1 eigenspace of the partial Gram factor matrices. The authors provide a good comparison of their work to the leverage score-based CP decomposition algorithm of Larsen and Kolda (Journal of Matrix Analysis and Applications, 2022). Strengths: - The sparse factor matrix version of Theorem 1.1 complements the paper. - The comparison with Woodruff-Zandieh [27] is valuable and discusses the shortcomings of the theory-heavy results (i.e., hidden second-order constants that blow up). - The paper is very nicely organized. - The two-stage eigenspace sampling idea in Section 3.2 is a solid research contribution. Weaknesses: - Theorem 3.1 is one of the main theory components of the paper, but largely comes from [Malik, ICML 2022]. - Section 3.1 could benefit from a formal description of what the leaf nodes represent. In particular, what is $S_{0}(v)$ and how does this partitioning get decided (i.e., where to draw the cut points)? It seems reasonable that the result is correct, just not immediately implementable by the reader. - In Corollary 3.3, it would be better to use the original expression $J=O(R \max\{\log(R/\delta), 1/(\varepsilon \delta)\})$ samples so as to not cause confusion by the implicit case assumption. - Re experiments: It would be good to compare the running time of this sampling-based method with both CP-ARLS-LEV and an out-of-the-box ALS implementation (e.g., MATLAB or Tensorly). Analyzing the fit of the decomposition in isolation doesn't tell the full story. Technical Quality: 3 good Clarity: 3 good Questions for Authors: **Questions** - Re Theorem 1.1: If a single entry of $U_j$ changes, is there a faster update time? - Should $I$ in Table 1 be $I_j$ to represent the $j$-th factor update? - In Line 81, we have $S \in \mathbb{R}^{J \times I}$, so shouldn't we have $S[j,i]$ in the following line? - In Line 95, what is the tilde in the lower bound for $J$? Related: Line 99 should be $\Omega(R/(\varepsilon \delta))$ since this is a lower bound. - In the discussion about Kronecker regression, Cheng et al. [4] and Larsen-Kolda [12] sample from the leverage score distribution of the Kronecker product for their CP decomposition algorithms (i.e., the product of the leverage scores). This connection seems worth pointing out. - In Line 337, shouldn't the fixed sample count increase with the rank? This is needed for the leverage score sampling guarantees. **Typos and suggestions** - [line 14] "denoted" --> "denoted by" - [line 43] Suggestion: Can remove the parentheses around the summand in the second part of Theorem 1.1. The sentence "The structure can also draw samples..." can probably be generalized to say "... for any subset of factor matrices"? - [line 49] You say that "our applications deal with dense inputs," but this is immediately followed by mention of the sparse Amazon tensor experiments. The writing here could be improved.
- [line 55] Suggestion: Add a citation for CP-ARLS-LEV after you first mention it. - [line 58] Table 1 could benefit from $\tilde{O}$ notation to account for missing constant factors. The description of what the complexity is for could be made more explicit too: "Complexity of factor matrix $U_j$ for $N$-dimensional dense CP decomposition..." - [line 64] suggestion: $j$'th --> $j$-th, same for later occurrences - [line 79] suggestion: "sampling operators" --> "row sampling operators" - [line 131] "Kronecker regression is distinct" --> "is a distinct" - [line 141] suggestion: "autoregressive fashion" is an indirect way to explain the procedure. Consider dropping this phrase and merging the two sentences. - [line 143] $I_n$ have been scalars so far. Therefore, it is better to use $(i_1, \dots, i_N) \in [I_1] \times \dots \times [I_N]$ to denote the set of indices. - [line 144] suggestion: consider restating the dimension of $G_k$ and $G$ to help the reader remember what some of these operators mean, e.g., $G := (\text{expression}) \in \mathbb{R}^{R \times R}$. - [line 153] Please include the citation to Malik 2022 too, so that the reader can easily click through to the references. - [line 162] Typo: "theorem 1.1" --> "Theorem 1.1" - [line 165] Suggestion: Rewrite the sentence as: Let $h \in \mathbb{R}^{R}$ be a vector and let $Y \in \mathbb{R}^{R \times R}$ be a ... [delete "respectively"]. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer, and we have already updated our draft to address the typos and suggestions. The table below gives the requested comparisons to out-of-the-box methods. All times reported are average seconds per single ALS iteration; randomized algorithms were benchmarked with $2^{16}$ samples per LSTSQ solve. We used Python Tensorly 0.81 and Matlab Tensor Toolbox version 3.5 on an identical system configuration to other experiments. OOM denotes out-of-memory; for Tensorly, this is due to explicit materialization of the Khatri-Rao product. Our method requires significantly less runtime across all tensors. Due to the slow performance of CP-ARLS-LEV in Matlab, we developed a high-performance multithreaded version in C++, which serves as the baseline in the rest of the paper. See Figures 9 and 10 in the appendix (supplement to the original submission) for more runtime comparisons. | Method | Uber | Enron | NELL-2 | Amazon | Reddit| | --------- | ---- | ----- | ------ | ------ | ----- | | Tensorly, Sparse Backend | 64.2 | OOM | 759.6 | OOM | OOM | | Matlab TToolbox Standard | 11.6 | 249.4 | 177.4 | >3600 | OOM | | Matlab TToolbox CP-ARLS-LEV | 0.5 | 1.4 | 1.9 | 34.2 | OOM | | **STS-CP (Ours)** | 0.2 | 0.5 | 0.6 | 3.4 | 26.0 | Other Responses: **Questions:** 1. [Single Element Factor Updates] Yes! There is a faster update time, which is mentioned in Theorem 1.1 (line 41 of the submitted draft): “If a single entry in a matrix $U_j$ changes, it [the data structure] can be updated in time $O(R \log(\left| I_j \right| / R))$”. The procedure is detailed in Algorithm 5 in Appendix A.7, “Efficient Single-Element Updates”. The relatively simple method involves selectively updating matrices stored by binary tree nodes on the path from a leaf to the root, with the elements updated in each cached matrix depending on the index of the entry changed within $U_j$. Due to space constraints, we did not highlight this contribution more in the main body of the draft; we are happy to do so in the revision. 2. [Table 1, I vs I_j] We intended (but did not make clear in the caption) that the input tensor has dimensions $I \times I \times … \times I$ (i.e. all mode sizes $I_j = I$) to simplify the complexities. Thanks, we will clarify this. 3. [S[j, i] vs S[i, j]] Corrected, thanks. 4. [Tilde on lower bound for J] The tilde denotes a hidden constant multiplying the right-hand side $R \max (\log (R / \delta), 1 / (\varepsilon \delta))$. Woodruff [1, Theorem 2.11] reports the constant value as 144 multiplying the term $R \log (R / \delta)$, while Malik [2, S1 supplement] reports a value of $8 / 3$ as sufficient. Line 99: agreed, we have changed this to $\Omega$. 5. [Kronecker Sketching Connection] Thanks for pointing out this connection - we plan to include this in the section on Kronecker sketching. 6. [Increasing Sample Counts] We agree that the sample count should increase with the target rank in theory. However, in Figure 4, we would have to increase the sample count at different rates for different algorithms, since our algorithm STS-CP requires $O(R)$ samples, while CP-ARLS-LEV requires $O(R^{N-1})$ samples (see also Figure 6). Furthermore, our algorithm STS-CP performs better than worst-case analysis suggests when the sample budget is fixed, achieving 99.7% of the fit of exact ALS even at rank 125 on the Amazon tensor.
To avoid these confounding effects that vary between different algorithms and tensors, we used a fixed sample count throughout Figure 4 to **directly compare the sample efficiency of our methods for a fixed budget**, as well as quantify the accuracy degradation as the rank increases. See Figures 6, 8, 9, and 10 for experiments that vary the sample counts. **Weaknesses** 1. [Theorem 3.1 from Malik 2022]: You are correct. The novelty in our approach lies in the strategy to sample from the distribution given in Theorem 3.1, a critical improvement that enables scaling to massive sparse tensors with up to **several thousand times** less compute for decompositions of *identical* quality. Suppose the algorithm in [Malik 2022] was applied to the Amazon Reviews tensor in our experiments with dimensions $4,821,207 \times 1,774,269 \times 1,805,187$ to produce a rank $R=25$ decomposition. After extra non-asymptotic improvements made in their paper, the exact floating point operation (FLOP) count to draw a sample from the KRP excluding the third mode is lower-bounded by $\left| I_2 \right| R^2$, or 1.12 gigaFLOPs *per row*. Including multiplicative constants but excluding the small $(25 \times 25)$-sized eigendecompositions (performed only per batch of rows, not once per row), **our** approach requires only 53.7 kiloFLOPs per row sample, a more than **20,000x reduction** that is due to the asymptotic improvement from $\left| I_j \right|$ to $\log \left| I_j \right|$. We have revised our draft to emphasize these points. 2. [Interval Endpoints] Agreed, we revised our draft to make the cut points explicit. If we let $v_1, \ldots, v_{\lceil I / F \rceil}$ be leaf nodes such that the intervals $S(v_1), \ldots, S(v_{\lceil I / F \rceil})$ are ordered from left to right, the explicit formula for $S_0(v_i)$ is $S_0(v_i) = (i-1)F$, so that each segment has at most $F$ rows. Our draft has been updated to reflect this. The method to choose the cut points for sparse factors is slightly more involved, and the procedure is given in Appendix A.8. 3. [Corollary 3.3] Corrected, thank you. 4. [Runtime Comparison] Agreed! Beyond Figures 6, 8a, and 8b, a thorough runtime comparison against CP-ARLS-LEV is made in Appendix A.12.4, Figures 9 and 10, which measure the speedup of our algorithm over CP-ARLS-LEV. We tested a range of sample counts for both CP-ARLS-LEV and our algorithm, testing the former method on a significantly larger range of sample counts for fairness (see Table 5). --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response, especially the runtime comparison table. I also read all the other reviews and author responses, and am updating my rating to 5 (borderline accept).
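As background for the exchange above, here is a brute-force sketch of the quantity being sampled: the leverage scores of a Khatri-Rao product, using the facts that its Gram matrix is the Hadamard product of the factor Grams and that each row is the elementwise product of one row per factor. This enumerates all rows, which is exactly what the paper's logarithmic-time data structure avoids; the factor shapes are toy assumptions.

```python
import numpy as np
from itertools import product

def krp_leverage_scores(factors):
    """Brute-force leverage scores of a Khatri-Rao product, feasible only
    for small sizes; the paper's sampler never materializes the rows.

    factors: list of arrays U_j with shapes (I_j, R).
    """
    R = factors[0].shape[1]
    # Gram of the KRP is the Hadamard product of the factor Grams.
    G = np.ones((R, R))
    for U in factors:
        G *= U.T @ U
    G_pinv = np.linalg.pinv(G)
    scores = {}
    for idx in product(*(range(U.shape[0]) for U in factors)):
        row = np.ones(R)
        for U, i in zip(factors, idx):
            row *= U[i]          # KRP row = Hadamard product of factor rows
        scores[idx] = row @ G_pinv @ row
    return scores                # values sum to rank(A); normalize to sample
```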
Summary: The paper studies algorithms to efficiently sample a row from a matrix $A = U_1 \odot U_2 \odot ... \odot U_N$ with probability proportional to the leverage scores of $A$. Here, $B \odot C$ denotes the _Khatri-Rao_ product of $B$ and $C$, or the column-wise Kronecker product of the columns of $B$ and $C$. In particular, this means that $A$ has a small number of columns $R$, but has an exponential number of rows $\prod_{j=1}^N |I_j|$ (where $U_j \in \mathbb{R}^{|I_j| \times R}$). This exponential blowup in the number of rows of $A$ is the core computational focus of this paper, as well as several related papers in the area. This paper's technical contribution is in proposing a data structure that can preprocess $U_1, ..., U_N$ such that a row of $A$ can be sampled with probability proportional to its leverage score. This data structure's space complexity matches the sizes of the input, and at query time it takes only $O(NR^2 \log \max \{I_j, R\})$ amortized time per row sampled. This new data structure is based on "binary-tree inversion sampling", a binary search algorithm that can help sample from distributions efficiently, and is used to speed up a computational bottleneck in a known approach to sample leverage scores from Khatri-Rao products. Theoretical results demonstrate the correctness of this algorithm, and experiments demonstrate its effectiveness. Strengths: The paper is well written, nicely motivated, pretty easy to understand, has a clear presentation of the subproblem it speeds up, and provides good context relative to existing work in the area (afaik; I'm no expert in tensor algorithms). **Originality and Quality:** A fine line needs to be drawn here, and this is the subtlest point of my review. Per the authors' account, prior work on leverage score sampling from Khatri-Rao products exists and provides nice results. A good example is Theorem 3.1 from this paper, which is an adaptation of an observation made by prior work about a particular way to compute the leverage scores of $U_1 \odot ... \odot U_N$. The core of this paper is taking Theorem 3.1 and finding a faster way to compute its right-hand side. This means that the prior work gave much of the framework around this paper's proposed algorithm, and we can sorta think of this paper's contribution as finding a novel way to compute a subroutine more quickly. That said, the approach to computing the subroutine is novel and interesting. I've not seen prior work in this area use a binary tree in such a way. There's clearly a good deal of effort put into the math (though I only verified bits and pieces of the appendix). The paper very much stands as sufficiently original, but I want to be clear that the originality lands firmly in section 3.1 of the paper -- how to efficiently sample following this framework that existed in prior work. **Clarity:** The paper is well written. I have only a couple mild gripes about writing and presentation (listed later in the "Questions" section), and I feel that I understood all the big ideas of this paper. **Significance:** This paper feels well motivated, and the improvement to sample a row with runtime that depends logarithmically on the number of rows in $A$ is quite strong and interesting. The experiments suggest that this truly is a state-of-the-art algorithm for a meaningful suite of metrics.
I do have some gripes with the presentation of the experiments, which undercuts my confidence in the experimental evidence a smidge, but I think this is both not essential and easily corrected. Weaknesses: The weaknesses are few and far between in this paper. A couple of sections lack a bit of clarity, like how the big-Oh rates for some of the downstream tensor algorithms are insufficiently explained. These are issues that can easily be fixed for a camera-ready version of the paper. Out of these mild gripes, there are three that do stand out a bit: 1. Several experiments lack confidence intervals. This muddies the story of the algorithm's empirical efficacy, especially in Figure 3, where it feels very confusing to see the error of the green curve in the left figure jump up significantly. 2. In a few places, the discussion of big-Oh runtime isn't fully explained. For instance, I don't understand how the complexity of STS-CP was derived on Table 1 (appendix A.1). More details are in the "Questions" section. 3. The last paragraph from section 3.2 takes the core technical ideas and is supposed to summarize the algorithm. However, it instead introduces some new notation and makes me wonder if it's actually summarizing an algorithm or if it's instead trying to briefly explain some of the deeper technical edge cases the algorithm has to handle. Either way, it's pretty confusing of a paragraph for me to read, and undercuts my confidence in understanding the full algorithm. None of these are serious gripes, and can all be fixed with mild updates to the presentation. So I'm still very positive on this paper! Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I don't really have deep concerns to ask about here. I'll just enumerate a long list of minor typos and confusions. 1. Why use (afaik) non-standard notation, like having $d$ be the dimension (number of cols) of A, or $p_i$ be the probability of sampling row $i$, or $m$ or $c$ or $s$ being the number of sampled rows? 1. [Line 30] As someone familiar with leverage scores, I was surprised not to see a log appear in the sample complexity of leverage score sampling. I'd make it into a $\tilde O$. 1. [Lines 49 and 52] Line 49 says that the applications mainly deal with dense inputs. Line 52 says that the most practical benefit is on sparse matrices. These aren't necessarily statements that are at-odds, but it does feel like a bit of whiplash to read these statements back-to-back. The argument on lines 279-280 would be good to add here. 1. [Line 55] Do you have any understanding of why your algorithm, which boasts a new and much smaller big-Oh term, has 2% slower runtime? 1. [Table 1] I dunno how you came to the complexity-per-iteration here. As a reader, I felt like I should be able to pattern-match between Theorem 1.1 and Table 1 to understand the complexity of STS-CP, but I really couldn't make them line up. 1. [Line 131] "is **a** distinct" 1. [Line 132] "There" not "Here" 1. [Line 194] Get rid of the square on $[0,1]^2$ 1. [Line 132] Explicitly argue why $F=1$ is affordable space-wise now, but was too expensive in the setting at the start of section 3.2 1. [Line 241] What is $Z_j$? 1. [Figures 2,3,5] Add confidence intervals 1. [Figure 4] Swap the x and y axis. It's hard to read with fit being on the x-axis. Add some space between the subplots -- they're real strange shoved together like that, making it harder to read. 1. [Line 344] Well... STS-CP does have an oscillating error pattern on Amazon too. Maybe mention it?
Eh, I'm sorta torn here. 14. [Algo 6, line 5] Missing a period in $1,..,N$ Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your comments, which have strengthened the quality of our draft. The global rebuttal response to all reviewers includes an extra PDF that contains updated figures (2, 3, 4, 5) with the error bars / axis swaps mentioned in this review. It also includes a frequency spectrum analysis (figure 6 in the extra PDF) regarding the error oscillation in CP-ARLS-LEV (see below). Responses: **Questions:** 1. [Notation] We chose the notation for the column count following [Larsen & Kolda 2022] and [Malik 2022], which use $R$ or $r$ as the column count (more standard in the tensor decomposition literature). We see your point with the other notation; happy to change $q_i$ to $p_i$. We were a bit worried that $s$ as the sample count could cause confusion with $\hat s_1, ..., \hat s_N$, denoting random variables for the multi-indices of a single sample, so we took $J$ as the sample count from [Malik 2022]. Open to changing this if necessary. 2. [O tilde] Agreed, and pointed out by another reviewer. We have updated our draft. 3. [Dense vs. Sparse Inputs] Agreed, thanks. We meant in lines 48-49 that the matrices $U_j$ in the Khatri-Rao product are typically dense, and will clarify this. 4. [2% Slower Runtime] This is because the sample counts for our algorithm vs CP-ARLS-LEV were equal for this first experiment (line 53), and CP-ARLS-LEV has faster runtime per sample but poorer accuracy for the same sample budget, both in theory and practice. In hindsight, this was not the best way to highlight our contributions, since the runtime, accuracy, and sample counts must all be accounted for. Our revised draft makes reference to the last section of the appendix and reads as follows: "On the billion-scale Amazon and Reddit tensors, our algorithm STS-CP can achieve 95% of the fit of exact ALS between 1.5x and 2.5x faster than the high-performance, state-of-the-art competing CP-ARLS-LEV method. Our algorithm is significantly more sample-efficient; on the Enron tensor, only $2^{16}$ samples were required to achieve the 95% accuracy threshold above a rank of 50, which could not be achieved by CP-ARLS-LEV with even 54 times as many samples." 5. [Complexity Derivation] Per the comment above, we will work in $\tilde O$ notation. A version of the derivation below appears in Appendix A.9, and we have added a reference to it in the main body of the text. a. Each ALS iteration contains $N$ least-squares problems. The $j$'th problem, for $1 \leq j \leq N$, is $\min_X \left| \left| U_{\neq j} X - B \right| \right| $, where $U_{\neq j}$ is the Khatri-Rao product of all matrices but $U_j$. The matrix $B$ has $\left| I_j \right|$ columns. b. For each problem, we need $\tilde O(R / (\epsilon \delta))$ samples from $A$ at a cost of $O(R^2 \sum_{k \neq j} \log \left| I_k \right|)$ per sample, plus a one-time cost (not per-sample) of $O(NR^3)$ for the eigendecompositions. To avoid the dependence on $j$, we round up the cost per sample to $O(R^2 \sum_{k=1}^N \log \left| I_k \right|)$. The cost to compute a $QR$ decomposition on the downsampled design matrix is $\tilde O(R^3 / (\epsilon \delta))$, a lower-order term. Because the observation matrix $B$ has $\left| I_j \right|$ columns, the cost of multiplying by $Q$ and back-substituting with respect to $R$ is $\tilde O(\left| I_j \right| R^2 / (\epsilon \delta))$.
The total complexity per solve is: $\tilde O\left(NR^3 + (R / (\epsilon \delta)) \left(R^2 \sum_{k=1}^N \log \left| I_k \right| \right) + \left| I_j \right| R^2 / (\epsilon \delta)\right).$ For $\epsilon, \delta < 1$ and $\left|I_k\right| \geq 2$ for all $k$, observe that the first term $NR^3$ is at most the second term in the expression above, and we can eliminate it; simplifying slightly gives $\tilde O\left((1 / (\epsilon \delta)) \left( \sum_{k=1}^N R^3 \log \left| I_k \right| \right) + \left| I_j \right| R^2 / (\epsilon \delta)\right)$ c. Finally, we sum the expression above over $1 \leq j \leq N$ to get: $\tilde O\left((N / (\epsilon \delta)) \left( \sum_{k=1}^N R^3 \log \left| I_k \right| \right) + \sum_{j=1}^N \left| I_j \right| R^2 / (\epsilon \delta)\right)$ Combining the index variables $k$ and $j$ over independent sums into a single summation over $j$ gives: $\tilde O\left((1 / (\epsilon \delta)) \sum_{j=1}^N \left( N R^3 \log \left| I_j \right| + \left| I_j \right| R^2 \right) \right)$ and setting all $\left| I_j \right|$ equal to $I$ gives the last row of Table 1. 6-9. Thanks, fixed. 10. The variables $Z_j$ are the segment trees (with cached matrices) for the second-stage eigensampling from each factor matrix, and the variables $E_k$ are the segment trees for the first stage involving the small $R \times R$ gram matrices. We clarified this and also revised the last paragraph of Section 3.2. See the global rebuttal for the revision text. 11, 12. Agreed, see PDF. We observe, in the updated Figure 3, that the median error of our method (orange bars) across several randomly generated input matrices actually decreases as the tensor dimension $N$ increases. On the other hand, there are more outliers (defined in the caption) that drive up the mean at increased dimension. We carefully checked that this behavior does not arise from experimental error. Both algorithms run on identical matrices regenerated once per new trial, with identical solve procedures after the sample indices are identified. 13. [Oscillating error pattern on Amazon] Our algorithm does exhibit some oscillation on Amazon, but we find no clearly-defined period. The period for CP-ARLS-LEV **exactly matches the dimension of the tensor**. This is an artifact that the worst-case theoretical guarantees for CP-ARLS-LEV do not capture. Figure 6 in the attached PDF is a frequency spectrum of the errors. Observe that CP-ARLS-LEV exhibits a clear peak at 4 for the Uber tensor and 3 for Amazon, while our algorithm does not suffer from such an artifact. 14. Thanks, corrected. --- Rebuttal Comment 1.1: Title: Thanks for the response! Comment: Sorry for the delay in my response. I enjoyed the authors' message, and happily maintain my score. The paper should be accepted. I'm glad to see the change in the figures, and the detailed responses in the bullet points above are well written. Some quick notes: 1. If this is standard in parts of the tensor literature, then feel no need to change that notation. I'm not used to it, but that's a me problem. 4. I really like that paragraph you inserted. Maybe say ~65000 instead of $2^{16}$, just because $2^{16}$ feels like it should be a much larger number than it actually is? Eh, kinda torn on this. 11. I enjoy the updated figures, and they contain nice trend data. It's perfectly sufficient for a rebuttal, but for a camera-ready draft I think I'd recommend swapping the stars out -- they're hard to see unless I zoom in quite a bit. 13.
Strange, a good point from the frequency plot perspective. It might be worth acknowledging the distinction on line 345, but it's really up to you.
Summary: This work proposes a new algorithm to efficiently perform exact leverage score sampling on Khatri-Rao products. This work is built on top of the TNS-CP algorithm (Malik et al., 2022), and a more efficient data structure is used to achieve a better sampling computational cost. This algorithm can be used to accelerate sketching-based alternating least squares (ALS) for CP decomposition, and it is shown to achieve state-of-the-art complexity per ALS sweep. Experimental results show that this sampler is efficient and yields better accuracy than previous leverage score sampling based CP-ALS algorithms for large real sparse tensors. Strengths: 1. Accelerating leverage score sampling and large scale CP decomposition is an important topic, and this work achieves state-of-the-art results in both the theoretical analysis and the experimental results. 2. The data structure used to accelerate leverage score sampling of the Khatri-Rao product is novel. Weaknesses: Overall I believe this is a good contribution. I only have one minor comment: it took me a while to follow the logic in Sections 3.1 and 3.2, and I think adding a figure to summarize the two-stage sampling process would be good. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: n/a Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your review of our paper. Per the instructions, the PDF attached to our global response at the top of these reviews has a three-part figure that illustrates the sampling process, including the matrices involved, the significant operations, and the two-stage sampling procedure. These diagrams have been added to our draft. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response, I will keep my score.
Summary: This paper designs a method for efficiently minimizing least squares regression when the design matrix is equal to the Khatri-Rao product of multiple matrices. Because this Khatri-Rao product can have a huge dimension, the paper is motivated to study sampling techniques to reduce the dimension of the product matrix. This least squares problem is motivated by applications to tensor decomposition, in particular when alternating least squares is used for finding a CP tensor decomposition. Each step of the alternating least squares corresponds to a least squares regression, where the design matrix is the Khatri-Rao product of the other tensor factors in the CP decomposition. To solve this sketching problem, the paper adapts the leverage score sampling technique to this Khatri-Rao product design matrix. The algorithm scales linearly in the sum of the dimensions of each matrix in the product. Experiments are provided, comparing the approach to methods from a prior work by Larsen and Kolda (SIAM J. Matrix Analysis and Applications (2022)). The results show moderate improvement in terms of sparse tensor decomposition. Strengths: S1) The main contribution is of a technical nature: the algorithm, which is based on detailed calculations of the Khatri-Rao product, appears sound, and this is validated in the experiments. S2) The sampling from the Khatri-Rao product relies on a binary-tree inversion sampling technique, and this tree construction has time cost $O(I R^2)$ and storage cost $O(R^2 I / F)$. S3) The experiments compared against the method of Larsen and Kolda appear significant, especially at higher target rank values, for several sparse tensors with as many as $10^9$ nonzeros. Weaknesses: W1) From my reading, the paper does not do a good job of motivating the problem. For instance, the main motivating example is tensor decomposition, but the connection is spelled out explicitly only in Section 3.3. Are there any other plausible examples for motivating the proposed problem? W2) I find the derivation steps to be quite laborious and tedious to follow. I find the illustration of Figure 1 to be very useful, and I think that a better presentation that conveys the key steps would be very helpful, for instance, in a special case of the Khatri-Rao product. W3) The significance of the results is quite limited; in my opinion, I think the paper could benefit from stronger statements about how significant the contributions are. Solely judged based on the experiments, the improvement is quite weak since only one baseline is considered in the experiments. From a theoretical standpoint, the comparison stated in the related work could be made more clear as well. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - It would be nice to give some description of the tensors used in the experiments (without having to refer to the appendix). - I understand this question may be out of scope, but it would be nice to compare fit against other tensor decompositions, e.g., the Tucker decomposition. Would the newly developed techniques apply to Tucker decomposition? - It is mentioned in the related work that this paper is very closely related to a method by Woodruff and Zandieh (2022). I wonder how the methods would compare against each other in the experiments. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, the paper discussed limitations in Section 5. Due to the technical nature, I do not see any potential societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and useful feedback, which have strengthened the quality of our draft. Regarding the third weakness [W3] about the novelty of our contribution: our approach offers a significant advance in the best theoretical complexity for sketched ALS CP decomposition. By improving the per-leverage-score sample complexity from linear in $\left| I_j \right|$ (Malik 2022) to $\log \left| I_j \right|$, we enable computational cost savings of **several thousand times** for approximations of **identical** quality. This is the critical improvement that allows exact leverage-score sketching for large sparse tensor decomposition, which we illustrate by the FLOP complexity calculation below. Suppose the algorithm in [Malik 2022] were applied to the Amazon Reviews tensor in our experiments with dimensions $4,821,207 \times 1,774,269 \times 1,805,187$ to produce a rank $R=25$ decomposition. After extra non-asymptotic improvements made in their paper, the exact floating point operation (FLOP) count to draw a sample from the KRP excluding the third mode is lower-bounded by $\left| I_2 \right| R^2$, or 1.12 gigaFLOPs *per row*. Including multiplicative constants but excluding the small $(25 \times 25)$-sized eigendecompositions, **our** approach requires only 53.7 kiloFLOPs per row sample, a more than **20,000x reduction** that is due to the asymptotic improvement from $\left| I_j \right|$ to $\log \left| I_j \right|$. We plan to add the statements above to the paper to highlight our contributions over existing work. Other Responses: **Questions** 1. [Tensor Descriptions] Agreed. We omitted these descriptions due to space constraints in the main body, but have added them in the extra space the revision affords. 2. [Tucker Decomposition] The techniques developed here are less applicable to ALS Tucker decomposition due to the different properties of the Kronecker product on which it is based. The leverage scores on the rows of $A \otimes B$ are products of the leverage scores of $A$ and $B$, a property that the Khatri-Rao product does not share (a short numerical check of this property appears at the end of this response). Drawing a sample from the Kronecker product, therefore, is simpler than sampling from a Khatri-Rao product, although the number of samples required to achieve least-squares solution guarantees is exponential in the tensor dimension for even state-of-the-art sketched ALS Tucker methods (see, e.g. https://arxiv.org/abs/2209.04876). As a result, the runtime to compute the Tucker decomposition may be significantly higher than CP decomposition for even 4-dimensional tensors. 3. [Woodruff Zandieh Comparison] The algorithm developed by Woodruff and Zandieh is intricate (requiring three distinct sketching operators composed in non-trivial ways) with no publicly available code, to the best of our knowledge. Furthermore, their near input-sparsity time algorithm confers no benefit for tensor decomposition. The least-squares problems $\min_X \left| \left| AX - B \right| \right|$ in CP decomposition have observation matrices $B$ with $\left| I_j \right|$ columns for each $1 \leq j \leq N$. Regardless of the efficiency of the sketching mechanism, the column count of matrix $B$ introduces an unavoidable cost $O(\left| I_j \right| R^2)$, destroying the benefit of the $\tilde O(\left| I_j \right| R)$ input-sparsity time cost to form the sketching data structure for each factor matrix $U_j$.
Furthermore, Woodruff and Zandieh do not provide a readily evident method to update their sketch when entries of each factor matrix change, although their approach could likely be adapted. Efficient factor matrix updates are essential for CP decomposition, and our algorithm provides methods to update each factor when even a single entry changes. **Weaknesses** 1. [W1] Our introduction includes several motivating examples for linear systems with Khatri-Rao design matrices, including compressed sensing and signal processing besides tensor decomposition. Systems of this exact form also occur in PDE-inverse problems (see, for example, equations (2) and (4) from https://arxiv.org/pdf/1909.11290.pdf); like us, they also consider the case with low column count and hundreds of thousands of rows per matrix forming the Khatri-Rao product. We have added this reference to the paper. 2. [W2] In the global response at the top of these reviews, the PDF attached contains figures that illustrate Theorem 3.1 and our two-stage leverage-score sampling procedure. We have added these to our draft. We hope these aid the reader and are happy to engage in further discussion. 3. [W3] The main baseline that we selected for comparison, CP-ARLS-LEV, is a state-of-the-art algorithm for sparse tensor decomposition with performance exceeding well-known software packages on the market. The table below compares the runtime per iteration of our algorithm with three other pieces of software for sparse tensor decomposition. Figure 4 (original document) and Table 4 demonstrate that our accuracy is comparable to non-sketched tensor decomposition, recovering 99.7% of the fit consistently for the Amazon tensor. | Method | Uber | Enron | NELL-2 | Amazon | Reddit| | - | - | - | - | - | - | | Tensorly, Sparse Backend | 64.2 | OOM | 759.6 | OOM | OOM | | Matlab TToolbox Standard | 11.6 | 249.4 | 177.4 | >3600 | OOM | | Matlab TToolbox CP-ARLS-LEV | 0.5 | 1.4 | 1.9 | 34.2 | OOM| | **STS-CP (Ours)** | 0.2 | 0.5 | 0.6 | 3.4 | 26.0 | All times reported are average seconds per single ALS iteration; randomized algorithms were benchmarked with $2^{16}$ samples per LSTSQ solve. We used Python Tensorly 0.81 and Matlab Tensor Toolbox version 3.5 on an identical system configuration to other experiments. OOM means out-of-memory. **The timings demonstrate that our algorithm STS-CP can quickly decompose tensors requiring hundreds of gigabytes of disk space in a fraction of the time required by standard packages**. We have added these clarifications to a revised copy of our draft.
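As the numerical check promised above: the following snippet (an illustration written for this response, using a generic QR-based leverage computation rather than our pipeline) verifies that leverage scores factorize across a Kronecker product but not, in general, across a Khatri-Rao product.

```python
import numpy as np

rng = np.random.default_rng(1)

def leverage_scores(M):
    # Squared row norms of an orthonormal basis for the column space
    # (assumes M has full column rank).
    Q, _ = np.linalg.qr(M)
    return np.sum(Q ** 2, axis=1)

A = rng.normal(size=(4, 2))
B = rng.normal(size=(3, 2))

# Kronecker product: the score of row (i, j) is the product of the factor scores.
assert np.allclose(leverage_scores(np.kron(A, B)),
                   np.kron(leverage_scores(A), leverage_scores(B)))

# Khatri-Rao product (column-wise Kronecker): the factorization fails in general.
KR = np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])
print(np.allclose(leverage_scores(KR),
                  np.kron(leverage_scores(A), leverage_scores(B))))  # False
```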
Rebuttal 1: Rebuttal: We thank all the reviewers for their constructive comments. The PDF attached to this rebuttal contains figures that were requested by some reviewers. These include diagrams that illustrate our sampling procedure and updated graphs with error bars. Below, you can find a summary of some revisions to our draft common to multiple reviewers, with specific responses in the individual rebuttals to each review. 1. We revised our draft to use $\tilde O$ notation instead of $O$ notation, where applicable, to account for the factors $\log(1 / \delta)$ that appear in leverage-score sampling. 2. We provided diagrams illustrating Theorem 3.1 and the two-stage sampling process, Figure 1 in the attached PDF. 3. We have provided baseline comparisons to state-of-the-art tensor decomposition libraries, such as Tensorly and TensorToolbox, that demonstrate the dramatic efficiency of our algorithm while maintaining approximation quality. The responses to reviewers u8zc and LxVj contain this table with average runtime per ALS iteration for our method vs. Tensorly and Matlab TensorToolbox 3.5. We hope that these changes improve the clarity of the draft, and are happy to engage in further discussion. Finally, reviewer qocR requested a revision of the last paragraph of section 3.2, which we include here due to space constraints in the comment below that review. Point taken, this last paragraph is dense and can be simplified. Here is our revision of that paragraph: "To summarize, Algorithms 1 and 2 give the construction procedure and two-stage sampling algorithm described above. The subroutines "BuildSampler" and "RowSample" relate to the procedure to build and sample, respectively, from the data structure in Lemma 3.2. The construction procedure builds the samplers $Z_j$, $1 \leq j \leq N$, for the second phase of sampling. The construction cost is $O(\left| I_j \right| R^2)$ per matrix $U_j$. The sampling algorithm returns $J$ samples from the Khatri-Rao product of all matrices (possibly excluding one matrix $U_j$, a useful feature for tensor decomposition applications). Lines 2-5 construct the sampling data structures for the first phase of sampling, while lines 9-11 implement the two-stage procedure by calling RowSample twice in succession and updating the running vector $h_{<k}$." Pdf: /pdf/47b1e4f1f321a0b1d53a093d047cff0e09e41701.pdf
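For concreteness, here is a naive, runnable rendition of the conditional sampling framework that the revised paragraph above summarizes (a simplified illustration with invented names, not Algorithms 1 and 2 themselves). It scans each factor in $O(\left| I_k \right| R^2)$ time per draw; this per-mode scan is exactly the step the segment-tree data structure replaces with an $O(R^2 \log \left| I_k \right|)$ binary search.

```python
import numpy as np

def krp_leverage_sample(U, rng):
    # Draw one row multi-index of U[0] (x) ... (x) U[N-1] (Khatri-Rao product)
    # with probability exactly proportional to its leverage score, mode by mode.
    R = U[0].shape[1]
    grams = [Uk.T @ Uk for Uk in U]
    G = np.ones((R, R))
    for Gk in grams:
        G = G * Gk                          # Hadamard product of the Gram matrices
    Gp = np.linalg.pinv(G)
    H = np.ones((R, R))                     # running Hadamard product over drawn modes
    index = []
    for k, Uk in enumerate(U):
        T = np.ones((R, R))                 # Hadamard product over modes not yet drawn
        for Gk in grams[k + 1:]:
            T = T * Gk
        W = Gp * H * T
        scores = np.einsum('ip,pq,iq->i', Uk, W, Uk)  # conditional score of every row
        scores = np.maximum(scores, 0.0)    # guard tiny negative round-off
        i_k = rng.choice(len(Uk), p=scores / scores.sum())
        H = H * np.outer(Uk[i_k], Uk[i_k])
        index.append(i_k)
    return tuple(index)

# Sanity check on a tiny case: empirical frequencies match the exact
# leverage-score distribution of the explicitly formed Khatri-Rao product.
rng = np.random.default_rng(0)
U = [rng.normal(size=(3, 2)), rng.normal(size=(4, 2))]
A = np.einsum('ir,jr->ijr', U[0], U[1]).reshape(-1, 2)
Q, _ = np.linalg.qr(A)
exact = np.sum(Q ** 2, axis=1) / 2.0        # leverage scores sum to R = 2
counts = np.zeros(len(A))
for _ in range(20000):
    i, j = krp_leverage_sample(U, rng)
    counts[i * 4 + j] += 1
print(np.round(exact, 3))
print(np.round(counts / counts.sum(), 3))   # should roughly match
```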
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Functional Equivalence and Path Connectivity of Reducible Hyperbolic Tangent Networks
Accept (poster)
Summary: The paper investigates the functional equivalence class for reducible neural network parameters and its connectivity properties. The authors focus on single-hidden-layer hyperbolic tangent networks, but the findings can be generalized to other feed-forward network components. Strengths: The paper provides a comprehensive understanding of the functional equivalence class for reducible neural network parameters. This can be key to understanding the structure of the parameter space and the loss landscape on which deep learning takes place. The authors describe a complex union of manifolds, displaying rich qualitative structure. This includes a central discrete array of reduced-form parameters, connected by a network of piecewise linear paths, and various manifolds branching away from this central network. The paper establishes that with a majority of blank units, the diameter of this parameter network becomes a small constant number of linear segments. This can be beneficial in understanding the trade-offs between shortest path length and rank for different unit permutations. The paper also discusses the relevance of their findings to modern architectures and deep learning, suggesting that understanding reducible functional equivalence classes may be key to understanding these topics. Weaknesses: The paper is highly theoretical and may not provide immediate practical applications for those working with neural networks. The exact relevance of reducible parameters to these topics remains to be clarified. The paper focuses on single-hidden-layer hyperbolic tangent networks, which may limit the applicability of the findings to more complex architectures. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In the paper, the authors focus on single-hidden-layer hyperbolic tangent networks. How do the authors think the results would change if we considered networks with multiple hidden layers or different activation functions? The paper discusses the trade-offs between shortest path length and rank for different unit permutations. Could the authors provide more insights into how these trade-offs could impact the performance of neural networks in practical applications? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer Wcpf for their thorough review of our submission. We think the strengths section provides a fair summary of the main contributions of the work and that the weaknesses section provides a fair summary of the main scope limitations of the work. We are pleased to reply to the reviewer's questions as follows: 1. **Beyond single-hidden-layer tanh:** We refer the reviewer to the top-level author rebuttal where we outline a roadmap for generalising our algorithms and analysis beyond this simple setting. In summary, we think that the results we have presented will form part of the picture in the more general case, but there is additional structure to redundant parameterisations in multiple layers and in other architectures (such as transformer architectures) that is yet to be explored. 2. **Trade-offs between rank, permutation, and path length and their impacts on learning:** We appreciate this question as the topic seems interesting to us. * We note that the impact on learning of the existence of short paths is through its implications for the structure of the loss landscape. Of interest in particular is the 'local' structure, in this case the number and arrangement of equivalent parameters a small number of path segments away from the current parameter. (See also the discussion in the top-level author rebuttal and the response to some other reviewers). * We have identified two extreme points along this tradeoff: * When the rank of the parameter is one less than the number of units, there are a small number of parameters nearby, accessible within a small number of linear segments. * When the rank of the parameter is half of the number of units, all equivalent parameters are within a small number of linear segments. * The fundamental driver for these path lengths is the number of 'swap' manoeuvres required to implement the permutation separating the reduced form of two parameters. Some permutations require a large number of serial transpositions to implement, while all permutations can be implemented with a small serial depth if we can perform enough parallel transpositions in each serial step (this parallel transposition approach is at the heart of the diameter result). * The ability to execute transpositions in parallel relies on the number of blank units that can be used as temporary registers for the weight swap manoeuvres (see the sketch following this thread). * Putting it together, this suggests that the more blank units we have in the reduced form (the lower the rank of the original parameter), the more permutations we will be able to implement with a small serial depth (due to more parallelisation), and therefore the more equivalent parameters will be reachable with a small number of linear path segments. --- Rebuttal Comment 1.1: Comment: Thanks for the author's response.
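As a concrete illustration of the 'blank unit as temporary register' manoeuvre sketched in the rebuttal above (a toy sketch with made-up weights, not code from the paper), the network function is preserved exactly along each linear segment of the weight-transfer path:

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 101)

def f(a, b):
    # Single-hidden-layer tanh network: f(x) = sum_i a_i * tanh(b_i * x)
    return np.tanh(np.outer(x, b)) @ a

a0 = np.array([1.5, -0.7, 0.0])   # unit 3 is blank: its outgoing weight is 0
b0 = np.array([0.8, 2.0, 0.0])
target = f(a0, b0)

# Segment 1: slide the blank unit's incoming weight onto unit 1's value.
# The output is unchanged because the blank unit contributes a_3 = 0.
for t in np.linspace(0.0, 1.0, 11):
    assert np.allclose(f(a0, np.array([0.8, 2.0, t * 0.8])), target)

# Segment 2: with b_3 = b_1, linearly transfer the outgoing weight from unit 1
# to unit 3; the two units compute the same feature, so the sum is constant.
b1 = np.array([0.8, 2.0, 0.8])
for t in np.linspace(0.0, 1.0, 11):
    assert np.allclose(f(np.array([(1 - t) * 1.5, -0.7, t * 1.5]), b1), target)

print("function preserved along both segments")
```

Composing such segments yields transpositions of units, and parallel transpositions (one per available blank unit) are what drive the constant-diameter result discussed above.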
Summary: The paper introduces an algorithm to calculate the canonical equivalent parameter for a tanh feed-forward network. Further, the full equivalence class of each parameter is given by a union of subsets constructed out of (inverse-)reductions and permutations of the canonical parameter. Finally, the paper shows that each functional equivalence class is path-connected and that the diameter of a functional class with rank at most half of the number of units in the hidden layer is 7. Strengths: The canonicalisation algorithm is novel for feed-forward networks with a tanh transfer function. Furthermore, this is the first result that characterizes the diameter of some of the equivalence classes of these networks. The paper fully proves each theorem and includes useful visualizations for the path structure of a functional equivalence class. The paper is overall very clearly written, with only a few possible minor typos. The results presented here could be of importance to loss minimization problems. Weaknesses: The scope of applications of the algorithm and the theorems seems to be rather narrow, or at least is not properly motivated to be broad. The paper has very limited evaluation of the proposed algorithm and further uses of the results. It is for example unclear in which practical application Algorithm 4.1 could be used, because the real-valued parameters are usually all different in a trained network, see Questions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 196: What is a branch? 265: Should it be ``connect $w'$ to $u$"? What are the practical applications of this algorithm? Could you apply it to a loss function example you mention in 354? If the parameters are real values, wouldn't they (and their absolute values) all be different from each other with probability equal to 1? Could you say anything about the loss function at different parts of the functional class that belongs to zero loss? Is it steeper at certain points in parameter space than others? Could you clarify and justify the remark in 362? What is this approximation? Could you give an example of a network that has rank higher than half of $h$ that is part of a functional equivalence class with a diameter more than 7? Why does it matter what the diameter is of a functional equivalence class? Why are networks that are in a functional equivalence class of diameter at most 7 of particular importance? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: They are adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer QPQA for the thorough review of our work. We appreciate in particular the kind comments about the clarity, novelty, and potential importance of the paper. We hope to be able to sway the reviewer's recommendation to reject the paper by defending the relevance of the results in answer to the reviewer's weaknesses/concerns and specific questions about the motivations. (We are also happy to respond to the reviewer's technical questions.) **Weaknesses:** * We refer the reviewer to the top-level author rebuttal on the relevance of the results for a simple architecture. In summary, we believe that the results in this paper are a meaningful component of a generalisation to more practical architectures, and future work can explore the *additional* redundancy structure that emerges as new architectural components are added. * We refer the reviewer to our response to **weakness 2** of **reviewer HCLr** for a discussion of the relevance of the canonicalisation algorithm. In summary, this algorithm has clear theoretical utility (for characterising sets of equivalent parameters), which is the main motivation, but it also already has some practical utility for measuring approximate equivalence despite numerical issues, and this could be developed further in future work. * Regarding the claim in the third question about real weights being unequal with probability 1 (which is about the existence/relevance of reducible parameters at all, not just about the canonicalisation algorithm), this depends on the sampling distribution. We view it as unsettled whether weights in trained networks are actually unequal, since they are determined based on data rather than uniformly at random. As a weaker claim, it may also be true that weights are unequal but the course of training is still influenced by the local structure of the parameter space, which may involve reducible networks. If the reviewer has in mind a direct empirical evaluation of the hypothesis that reducible parameters and their functional equivalence classes are at all relevant to modern deep learning, we agree that such evaluation is an important direction for future work. Motivated by indirect evidence (such as empirical phenomena of compressibility in learned neural networks), we view our contribution as laying the groundwork for a principled study of reducibility and its role in practical deep learning. **Questions:** 1. By 'branch' we simply mean 'if' or 'else if' in the algorithm. So the 'second branch' spans lines 8 to 11 of the algorithm (lines 144--147 of the manuscript). We apologise for being unclear. This appears to be standard programming terminology and we are not sure how to clarify it---we welcome specific suggestions. 2. Yes. Thank you for pointing out this mistake and please accept our apologies for the confusion. 3. We agree that with floating point parameters the canonicalisation algorithm will not usually face exact equality of any parameters. We refer the reviewer to the second dot point in response to weaknesses above. In summary, the algorithm has theoretical utility and potentially still has practical utility. 4. It does appear that the steepness of the loss function varies for different equivalent parameters. As a simple example, in a one-unit network *a.tanh(bx)*, if *b* is perturbed around zero for large *a*, the function will vary much faster than if *b* is perturbed around zero for small *a*. (A toy numerical illustration appears at the end of this response.) 5.
If a parameter *w* achieves loss *L(w)* and is very close in parameter space to some reducible parameter *u*, then *L(u) ~= L(w)* by smoothness of *L*. Then there are paths of parameters with loss equal to *L(u)* throughout the whole functional equivalence class of *u*. In aggregate this implies a path of parameters from *w* to many other parameters with approximately equal loss to *L(w)*. If the reviewer has further questions we will be happy to elaborate. 6. An example of such a high-diameter set is as follows. Consider a parameter with 10 units but rank 9, meaning there is exactly one 'spare' unit. There is a two-segment (at most) path from this parameter to a 'reduced form' of the parameter where a single unit is blank. Now to reach other equivalent parameters that are separated from this parameter by a permutation, it appears that the permutation must be implemented 'one swap at a time'. Some permutations can be implemented in a single swap. However, other permutations require some large minimum number of serial transpositions to be carried out in order to get all the units into the right place. Implementing one of these permutations to get to an equivalent reduced-form parameter will in general require potentially many more than 7 segments. Note: we haven't formally ruled out the existence of shorter paths than those we construct, which give only upper bounds on shortest path lengths. However, we conjecture that shorter paths do not exist in this architecture based on our experience varying parameters, and we think we could probably prove lower bounds if we invested effort into this. It's not something we have considered yet. 7. (and 8.) As discussed in the top-level author rebuttal, we do not actually think that the diameter itself or the number 7 are crucially important. Rather, by proving that the diameter is some small constant, we establish that all equivalent parameters are reachable with a small number of path segments. This suggests an extremely tightly connected network of equivalent parameters for these very-reducible parameters. Put another way: what matters is not the low maximum shortest path length but rather the number of short paths. With this in mind, we are attempting to respond to the empirical literature observing piecewise linear paths of low loss connecting learned networks (the so-called 'mode connectivity' literature). We think our theoretical findings, by identifying one source of such paths, implicate (highly-)reducible networks in these phenomena. See also our Discussion section.
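As promised in item 4 above, a toy numerical illustration (our own sketch, not from the paper): both parameters below realise the zero function at $b = 0$, yet the function, and hence the loss, is far more sensitive to a perturbation of $b$ when $a$ is large.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 201)
eps = 1e-3  # small perturbation of b away from zero

for a in (1.0, 100.0):
    # One-unit network a * tanh(b x); at b = 0 it is the zero function.
    deviation = np.max(np.abs(a * np.tanh(eps * x)))
    print(f"a = {a:5.1f}: max |f| after perturbing b by {eps} is {deviation:.4f}")
# prints ~0.0010 for a = 1 and ~0.1000 for a = 100: functionally equivalent
# parameters at b = 0, but very different local steepness.
```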
Summary: This paper focuses on the study of functional equivalence classes in neural networks, specifically in fully-connected, single-layer networks with hyperbolic tangent activation. It presents a Canonicalisation algorithm for reducible networks to determine canonical representative parameters for different functional equivalence classes. Finally, it explores the concept of connectivity between these classes. Strengths: The paper gives a well-motivation exploration of functional equivalence classes in neural networks. It provides a well-written definition of these classes, introducing an analysis of connectivity between them. The article references foundational works on simple neural networks and incorporates recent literature in the field of deep learning, enhancing the credibility and relevance of the study. Weaknesses: The weaknesses of this article stem from the focus on a very theoretical network type with implausible parameters. While a useful theoretical case, further discussion on the applicability of this analysis to modern architectures would benefit the article. Specifically, a drawback of the article is its narrow focus on a specific architecture type, namely single-layer networks with tanh activation, a single input and single output. While it does discuss expanding the input and output space, it fails to address how the findings extend to more complex multi-layer networks or different activation functions commonly used in modern architectures. By limiting the scope of investigation, the article has limited application to contemporary neural network design and learning. The proposal of the Canonicalisation algorithm also presents limitations. The algorithm relies on exact equivalence between weights or to 0, which is rarely achievable in practice, especially for weights trained through backpropagation. This raises questions about the adaptability of the Canonicalisation algorithm to networks trained through backpropagation, and whether it can provide meaningful results. The article would greatly benefit from addressing these concerns and exploring potential adaptations for networks trained through backpropagation. It appears to me that the Canonicalisation algorithm is a unit pruning / neuron removal algorithm. While Kuditipudi et al. is cited, the article would benefit from considering and discussing other neuron removal methods, specifically the following which focus on individual neurons in fully-connected layers: + Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, and Qiang Liu. Good subnetworks provably exist: Pruning via greedy forward selection. In Proceedings of the 37th International Conference on Machine Learning, pp. 10820–10830. PMLR, 2020. + Xiaocong Du, Zheng Li, Yufei Ma, and Yu Cao. Efficient network construction through structural plasticity. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 9(3):453–464, 2019. + Lemeng Wu, Bo Liu, Peter Stone, and Qiang Liu. Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks. In Advances in Neural Information Processing Systems, volume 33, 2020. Additionally, GradMax is a method which adds new neurons, similarly to Fukumizu and Amari, which was cited. The new neurons do not impact the function of the network upon addition by setting fan-in weights to 0. This case doesn't seem covered by remarks 5.3-5.5. Evci, Utku, et al. "GradMax: Growing Neural Networks using Gradient Information." International Conference on Learning Representations. 2021. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: The practical contribution of the path connectivity analysis was furthermore unclear to me. What does this analysis mean for learning or architecture design? Is it an argument for sparse training or initialization to reach an architecture with a majority of blank units? I believe the reference format is incorrect; NeurIPS uses number-based references instead of (name, year). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper does not adequately address the limitations induced by the narrow focus on a highly theoretical architecture. The sections on "towards modern architectures" and "functional equivalence and deep learning" offer insight into the potential application of the analysis to contemporary deep neural networks, but are not comprehensive. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer HCLr for taking the time to thoroughly review our work. We aim to respond here to the several weaknesses and questions raised. Our question for the reviewer is, if the discussion of limitations and practical considerations in the paper were expanded along these lines, would the reviewer reconsider their recommendation to reject the paper, in line with some of the other reviewers who have recommended varying degrees of acceptance while acknowledging similar limitations? Or, is the reviewer's assessment of our contribution that it is fundamentally insufficient? **Weakness 1 (limited architecture):** Please see the top-level author rebuttal. In summary, we believe that our results form a meaningful step towards answers for more practical architectures. **Weakness 2 (exact canonicalisation):** We acknowledge the limitation of the canonicalisation algorithm based on exact equivalence. We do think of the canonicalisation algorithm as primarily a theoretical tool (as a basis for the characterisation of the full functional equivalence class), as the reviewer has noted. We agree that two neural networks trained to approximate functional equivalence via backpropagation are unlikely to be detected as equivalent by this algorithm for numerical reasons. However, we still believe the canonicalisation algorithm has the potential to be developed in a direction that makes it more applicable as a diagnostic tool for this purpose. We offer the following observations: * After applying canonicalisation to two such networks, if they really are almost functionally equivalent, then their canonicalisations are likely to be close in parameter space, which can be easily detected (e.g. simply computing the $L_\infty$ distance). * One 'corner case' where the above claim does not hold is when the networks are near to the boundary of the 'canonical region', the slice of parameter space into which the canonicalisation algorithm maps parameters. Then it is possible that by slightly perturbing a network so as to cross this boundary, and then canonicalising the network, the resulting network will appear far away near another boundary of the region. Such networks will be approximately equivalent but not enough to be detected. * It is simple to compute the $L_\infty$ distance from a parameter to this boundary, which corresponds to finding the nearest reducible parameter. * This leads to a reliable test of approximate functional equivalence for parameters that are closer to each other than they are to the boundary of the canonical region. If the reviewer finds this sketch compelling, then if accepted, we could add a formal version of these observations as an appendix. **Weakness 3 (missing discussion of pruning, GradMax):** We thank the reviewer for the detailed reference suggestions, which are highly appreciated. We must admit we are not deeply familiar with the literature on pruning, having come to this problem from the perspective of the literature on functional equivalence. We acknowledge the apparent similarities between canonicalisation (which involves exact neuron removal) and unit-based network pruning, and also GradMax. One difference we have observed, stemming from the practical motivations of the pruning literature, appears to be that pruning methods usually accept an approximation of functionality (as long as loss is mostly preserved) while we have studied a setting of maintaining exact functional equivalence. We believe that important future work will bridge these settings.
* In our case that means studying approximate functional equivalence. We have another paper under blind review that makes this connection more precise, discussing a similar algorithm to the canonicalisation algorithm under the framing of 'lossless network compressibility' (we also study approximate relaxations of this problem and their computational complexity). * On the other hand, we think our systematic study of functional equivalence can inform the pruning literature. Most pruning methods we have seen are based on removing units with little unilateral impact on the function output/the loss. Our study shows that sometimes units can be 'merged', but not unilaterally removed. Is the reviewer aware of any pruning approaches that take advantage of such 'higher-order' opportunities for unit removal? The closest we have seen is: Casper et al., 2019, "Frivolous Units: Wider Networks Are Not Really That Wide", discussing merging units with correlated outputs (cf. our reducibility condition (iii)). **Question 1 (relevance of path connectivity to learning or architecture design):** The main lessons we hope to draw from this kind of analysis are for learning (rather than architecture design). In the Discussion, we have summarised the main takeaways of our analysis in terms of the bulk structure of the parameter space, and therefore of the loss landscape. The main insight is in the large number of equivalent parameters that are reachable through short, simple paths from any given reducible parameter (see also: discussion in the top-level author rebuttal about how exactly we expect these insights to generalise). The main relevance of these results draws on the fact that in the overparameterised learning setting, many interpolating solutions are reducible, so this rich structure is inherited by the set of zero-loss parameters. One could explore training methods that encourage reducibility directly; however, we are more interested in understanding whether or not existing learning methods already cause reducibility to emerge. This is one of our main motivations for theorising these sets of parameters: now that we have characterised these sets, we can (in future work) conduct experiments aiming to observe their role in learning. **Question 2 (reference format):** The 2023 formatting instructions say "Citations may be author/year or numeric, as long as you maintain internal consistency." We will follow any updated advice on the matter. --- Rebuttal Comment 1.1: Comment: Thank you for your extensive response. After discussion with other reviewers, there are a few points I'd like to discuss. Apologies that this response comes so close to the end of the discussion period. In the overall response, you note the lack of "direct evidence of reducible parameters being encountered or approached during training in practical settings, which would be the basis for their relevance to modern deep learning theory." In my view, this appears similar to the question about exactness in the canonicalisation algorithm. Are there reducible parameters in deep learning, given the use of backpropagation and stochastic gradient descent? Are there more if the definition is expanded to include approximate equivalence? The expansive pruning literature would seem to indicate that there are: parameter-based removal and merging work when using approximate equivalence. The relation between parameters which can be removed without changing the function of the network and *reducible* parameters as used in this article appears worthy of exploration.
One important distinction is that this article considers reducibility based only on the network parameters, and not on any distribution of data. The insight from architecture search, pruning, and other neural structure literature is that the information from the data distribution can often be useful, as a neural network is ultimately used as a function on data from a certain distribution. That being said, data-free initialization and pruning methods exist, based on functional analysis of the network parameters: Namhoon Lee, Thalaiyasingam Ajanthan, and Philip H. S. Torr. 2019. SNIP: Single-shot Network Pruning based on Connection Sensitivity. In International Conference on Learning Representations (ICLR). arXiv:cs.CV/1810.02340 Namhoon Lee, Thalaiyasingam Ajanthan, Stephen Gould, and Philip H. S. Torr. 2020. A Signal Propagation Perspective for Pruning Neural Networks at Initialization. In International Conference on Learning Representations (ICLR). arXiv:cs.LG/1906.06307 Soufiane Hayou, Jean-Francois Ton, Arnaud Doucet, and Yee Whye Teh. 2021. Robust Pruning at Initialization. In International Conference on Learning Representations (ICLR). arXiv:stat.ML/2002.08797 The closest works that I'm familiar with to the canonicalization method are the following, which use data-free analysis of network parameters to merge *similar* parameters: Suraj Srinivas and R. Venkatesh Babu. 2015. Data-free parameter pruning for Deep Neural Networks. In British Machine Vision Conference (BMVC). arXiv:cs.CV/1507.06149 Ben Mussay, Daniel Feldman, Samson Zhou, Vladimir Braverman, and Margarita Osadchy. 2020. Data-Independent Structured Pruning of Neural Networks via Coresets. In International Conference on Learning Representations (ICLR). arXiv:cs.LG/2008.08316 The coreset article is particularly close to the proposed canonicalisation, as it removes entire neurons and not only individual weights. The analysis used for coreset identification in this work may be useful here: instead of considering a single distribution, the functional equivalence of the network is analyzed over an arbitrary vector. A full review of pruning and sparse architectures is given here: Hoefler, T., Alistarh, D., Ben-Nun, T., Dryden, N., & Peste, A. (2021). Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks. The Journal of Machine Learning Research, 22(1), 10882-11005. Given this extensive literature and its relevance to the canonical architectures studied in this work, I would appreciate a deeper contextualization of the proposed analysis. The theoretical framework proposed here could be directly linked to these approaches, which have been applied to contemporary deep architectures, thus bringing this work closer to understanding such architectures. Finally, the clarification about the path connectivity analysis is appreciated. In a similar way to my above comments, this approach of studying learning from canonical architectures appears to me to be related to the lottery ticket hypothesis: Jonathan Frankle and Michael Carbin. 2019. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. In International Conference on Learning Representations (ICLR). arXiv:cs.LG/1803.03635 Do the authors believe that canonical networks are lottery tickets? Could understanding their path connectivity shed light on training sparsely initialized networks?
--- Reply to Comment 1.1.1: Title: Response to Reviewer HCLr Comment: **Thank you.** First, thank you so much for these valuable additional literature recommendations. As noted, we are highly interested in exploring the relation between our functional equivalence perspective and the perspective from the literature on pruning and sparsity in deep learning. We appreciate the time you have taken to cite these papers for us. **On the relevance of reducible parameters.** We concur about the potential relation between approximate pruning and approximate reducibility, and the value of future exploration of this direction. In the overall response where we mentioned a 'lack of *direct* evidence' for reducible parameters arising or being approached in learning, we indeed had in mind a number of *indirect* sources of evidence for this hypothesis, including: 1. the literature on pruning (as you point out) broadly suggesting that learned neural networks are often prunable without dramatic changes in their functionality; 2. the broader phenomenon of neural network compressibility (beyond pruning), for example the possibility of model distillation through teacher--student learning; 3. the common use of dropout to enforce redundant functionality within learned networks; and 4. the lottery ticket phenomenon (see also below). **On data-sensitive pruning:** We appreciate the reviewer pointing out this distinction. What we can say on the matter is that, clearly, in practice we care about a restricted form of functional equivalence that is only sensitive to changes in functionality for realistic inputs. This seems similar to the data-dependent pruning approaches you mentioned. Of course, functional equivalence for all inputs is a sufficient (not necessary) condition for such data-sensitive functional equivalence. This is the case we study, which we agree seems related to data-independent pruning approaches, particularly those based on coresets. **On lottery tickets:** As discussed, the existence of sparse subnetworks containing (most of) a model's functionality appears related to reducibility and canonicalisation. In particular, it seems to us that the existence of a lottery ticket implies that a network is (approximately) reducible. We haven't thought about this particular connection before, but it seems that lottery tickets could potentially be related to canonical parameters, though the connection is not immediate. * Roughly, the correspondence would say that the canonical parameter existed as a sparse subnetwork at initialisation. During training, the remaining units would be brought to cancel out so that the behaviour of the sparse subnetwork / canonical parameter could determine the overall function's behaviour. Possibly, the canonical parameter could also distribute its computation by 'splitting' (un-merging) its units into nearby blank units, possibly along paths like those constructed in our paper (e.g. the reverse of the reduction paths). * To the extent that canonicalisation involves eliminating units that do not contribute, or collections of units that can be merged together and then fail to contribute (because they 'cancel each other'), it seems that the units that are left may look like a sparse subnetwork that performs well before and after training. But another important part of canonicalisation is merging units that each contribute meaningfully to the function.
If two such units jointly contribute to the function, it would appear that removing either of them from a subnetwork would meaningfully alter the function. This suggests that finding the hypothetical canonical parameter = lottery ticket after training could be difficult if using pruning methods that do not consider merging proportional units. **Overall comments:** > The relation between parameters which can be removed without changing the function of the network and reducible parameters as used in this article appears worthy of exploration. > Given this extensive literature and its relevance to the canonical architectures studied in this work, I would appreciate a deeper contextualization of the proposed analysis. The theoretical framework proposed here could be directly linked to these approaches, which have been applied to contemporary deep architectures, thus bringing this work closer to understanding such architectures. Our above comments are essentially all we have to say so far. If accepted, we would be willing to expand upon the high-level discussion of these related literature(s) in the paper, to better acknowledge this related literature and to call for future work systematically exploring these connections. Unfortunately we are not in a position to promise any detailed exploration of these connections with this submission. We thank reviewer HCLr once again for their detailed review, discussion, and literature recommendations, and for their consideration of our submission. (Note: unfortunately, we are unavailable for further discussion between now and the end of the discussion period in a few hours.)
Summary: This paper deals with functional equivalence problems in neural networks, i.e., the characterization of all neural networks that lead to the same given output function. This is a problem that has been studied since the early 1990s, including work by the Fields medalist Charles Fefferman. The present paper considers single-layer networks with tanh nonlinearity and puts forward a completely new perspective by paying attention to reducible parameters. Strengths: The paper tackles a problem that has been studied in various guises for over 3 decades, and puts forward a completely new vantage point, by considering reducible parameters. This leads to rich insights into the functional equivalence problem. In addition, the paper exhibits a strong algorithmic component, specifically by providing an algorithmic characterization of redundancies and connecting the underlying theory to the beautiful concept of piecewise-linear path connected sets. Weaknesses: could not find any Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: it would be interesting to see the authors' thoughts on whether the algorithmic component of the paper can be extended to multi-layer networks Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: none Flag For Ethics Review: ['No ethics review needed.'] Rating: 10: Award quality: Technically flawless paper with groundbreaking impact, with exceptionally strong evaluation, reproducibility, and resources, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer NhDx for their review of our work and for their generous praise of our contributions. We appreciate the reviewer's acknowledgement of our novel perspective on the functional equivalence problem in particular. We do acknowledge that our work has important limitations in scope (as discussed in the top-level author rebuttal, and as raised by the other reviewers and acknowledged in our rebuttals). Chief among these limitations is the restriction to the simple architecture, which speaks directly to reviewer NhDx's question. We refer the reviewer to our top-level author rebuttal for our detailed thoughts on the path to extending these results to more general architectures. We will add, for the interest of reviewer NhDx, that our intuitions in this direction are influenced by the work of Fefferman (studying the *irreducible* multi-layer tanh case) and also the recent work of Vlačić and Bölcskei (e.g., "Affine Symmetries and Neural Network Identifiability", *Advances in Mathematics* 376, DOI 10.1016/j.aim.2020.107485) generalising Sussmann's reducibility conditions substantially (to networks with arbitrary feed-forward connection graphs). We are excited to draw on these works to push beyond the assumption of irreducibility and continue to explore the rich equivalence structure of reducible networks in more general architectures in future work.
Rebuttal 1: Rebuttal: We thank the reviewers for their time spent thoroughly reviewing our submission. We have responded individually, and we wanted to take the opportunity here in the author rebuttal to expand on the discussion of limitations in the paper. **Limitations of architecture generality:** All reviewers raised questions or concerns about the limitations in the scope of the study. As the reviewers have acknowledged, we discuss these limitations in the paper, but we are happy to expand here, and if accepted, to expand this discussion in the paper itself. The main concern appears to be that the results for the simple architecture (single-hidden-layer hyperbolic tangent networks) are not relevant to modern practice (with much more complex architectures), and it is unclear how our results will inform a study of the more practical setting. Concretely, we are interested in generalising the main results (canonicalisation algorithm, characterisation of functional equivalence class, construction of piecewise linear paths) to architectures with features such as ReLU activation, multiple feed-forward layers, residual connections, or even other arrangements of neurons such as attention heads / transformer blocks. Our claims are as follows: 1. A 'framing'-level contribution of our paper is a readily-generalisable set of questions for studying canonicalisation and functional equivalence beyond irreducible parameters. This appears to be a novel contribution to the theoretical literature. 2. **Our technical results on canonicalisation and functional equivalence are indeed meaningfully informative for future work** studying analogous questions in more complex architectures, as follows: * When it comes to functional equivalence in a single hidden layer, while our results have studied the tanh case specifically, most of the structure in the results is not specific to tanh. In particular, reducibility conditions (i) through (iii) are generic to feed-forward layers with any activation function, and only (iv) makes special use of the tanh 'odd' property. Other activation functions, such as ReLU, have their own additional symmetries giving rise to analogous conditions, such as positive linear scaling symmetry for ReLU. However, the methods used in algorithms and proofs for the ReLU case will be similar in how they account for reducibility conditions (i), (ii), and (iii). Therefore, parts of our algorithms and proofs will generalise. * When it comes to studying the multi-layer case, we have made some headway by generalising our results to the single-layer case with multiple inputs and outputs, as a multi-layer feed-forward network can be viewed as a composition of such functions. For this reason, part of canonicalisation in multi-layer architectures will involve canonicalising each layer independently, and then one must also account for interactions between layers. The per-layer canonicalisation will involve a straightforward application of our single-layer results, to which further algorithmic and theoretical analysis could be applied to resolve inter-layer redundancy. In other words, our results will form a significant part of the picture for the multi-layer case. * Even transformer blocks are composed of 'pieces' that look like feed-forward network layers, and so our results may partially inform generalisations in this direction. 3.
When it comes to considering extensions of our path connectivity results, we expect our results to generalise, but not in the strong sense of 'the same connectivity properties will hold for other architectures'. We should clarify this point (here and in the paper): * We do *not* have any specific reason to expect that the global connectivity properties or the O(1) diameter result will generalise to other architectures. When additional sources of redundancy are introduced by extending the architecture, they may expand the functional equivalence class in areas that are not connected by piecewise linear paths. * However, while these 'global' connectivity results are the framing we use for technical statements, we believe the interesting implications of these results are *the fact that from a given (highly) reducible parameter, there are many equivalent parameters reachable with (a small number of) linear path components.* This 'local' perspective should not be disrupted by the addition of further equivalent parameters to the set, even if they are globally disconnected. * Along these lines we can already conjecture that modern architectures have rich 'local' connectivity properties in the functional equivalence classes of their reducible parameters, simply due to the presence of the same kinds of redundancy between neighbouring units within some layer of the architecture (a toy numerical illustration of such a function-preserving linear path follows at the end of this response). **Empirical relevance of reducible parameters:** We are grateful to our reviewers for following our motivation for the study of reducible parameters at all. Our reviewers do not appear to have taken major issue with the main gap in our motivation (for the most part), which is that so far we lack direct evidence of reducible parameters being encountered or approached during training in practical settings, which would be the basis for their relevance to modern deep learning theory. Our personal research agenda involves developing experiments to directly test our hypothesis that these parameters are relevant for learning. We consider this a priority over, say, investing more theoretical effort to systematically study new sources of redundancy arising from richer architectures. --- Overall, we view hyperbolic tangent networks as a 'toy' architecture in which we have given a complete answer for *one part* of the general topic of redundancy in neural networks, and this is a concrete-enough architecture (historically significant) that we might also hope to be able to probe the relevance of reducible networks in practice. Once again we thank our reviewers for their attention and their consideration.
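As referenced above, here is a toy numerical illustration of the 'local' connectivity we have in mind (hypothetical code in the single-hidden-layer tanh setting of the paper; an illustration only): starting from a reducible parameter with two duplicated hidden units, redistributing their outgoing weights is an exact function-preserving linear path.

```python
import numpy as np

def f(x, w, b, a):
    # f(x) = sum_i a_i * tanh(w_i * x + b_i)
    return np.tanh(np.outer(x, w) + b) @ a

# A reducible parameter: units 0 and 1 share incoming weight and bias.
w = np.array([2.0, 2.0, -0.5])
b = np.array([0.1, 0.1, 0.4])
a = np.array([1.0, -0.3, 0.6])

x = np.linspace(-3.0, 3.0, 200)
base = f(x, w, b, a)

# Linear path a(t) = a + t * (e_0 - e_1): only the sum of the duplicated
# units' outgoing weights matters, so the function is constant along it.
for t in np.linspace(0.0, 2.0, 5):
    at = a + t * np.array([1.0, -1.0, 0.0])
    assert np.max(np.abs(f(x, w, b, at) - base)) < 1e-12
print("function unchanged along the linear path")
```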
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper addresses fully connected feed-forward neural networks (NNs) with a single hidden layer and the hyperbolic tangent activation function. A parameter is thus a vector of weights and biases. It considers reducible parameters, which means that a NN with strictly fewer neurons can implement the same function. The paper provides a "canonicalise" procedure taking a parameter as input and yielding a canonical parameter that implements the same function. Also, if two parameters implement the same function then the output of the procedure is the same for them. Based on this procedure, the paper characterizes the set of parameters that implement the same function as a given parameter. It shows that this set is piecewise linear path-connected, and if the set is defined with respect to a ``sparse'' parameter, then the diameter of the set, measured in the number of linear pieces, is bounded by 7. Strengths: The paper is well written. The figures are very clear and help understanding. The theoretical topic is well motivated and well connected to the literature. The theoretical results are interesting, and so are the proofs in my opinion. Weaknesses: The main weakness is that the paper is restricted to a single hidden layer. Also, the hyperbolic tangent activation function is studied, while ReLU would have been more relevant, in my opinion. For these two weaknesses, it seems they cannot really be fixed in the frame of this paper, as considering multiple hidden layers and/or ReLU would probably change the nature of the results and the proofs. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The content presented is overall clear. The statements of the theoretical results seem clear. Some proofs are a bit technical and more difficult to follow. The authors could consider adding pedagogical content/details on the proofs in the appendix, if they see fit. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately discuss the main limitation (a single hidden layer) in the discussion. I do not see potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer virJ for their thorough review and for their kind words about the clarity of the paper. We appreciate the reviewer's feedback that some of the more technical elements of the proofs were difficult to follow. We are interested in broadening the accessibility of the paper to the extent possible. If possible, we would appreciate it if the reviewer could note any specific sections of the proofs that were particularly difficult to follow; this would allow us to efficiently allocate our resources towards clarifying the presentation where it is most needed. Nonetheless, we are exceedingly pleased to hear that the reviewer found our results and proofs interesting. The main concern of the reviewer appears to be the limited relevance of the setting studied in the paper. We refer the reviewer to the top-level author rebuttal where we have outlined how we see the results as forming an important component of a more general study of redundancy in more modern architectures. It is true that the results and proofs will have to change in future work in this direction. However, we contend that these changes will be more like 'expansions' than fundamental changes. New methods will of course be needed, but we believe our contributions will still be useful. We will reiterate that with this paper we believe we have identified a self-contained sub-problem (fully analysing the simple architecture) which fits (with clear presentation) into a single conference paper, and that addressing the additional complexity arising from other architectural extensions is best achieved with separate submissions. We hope that this defense of the limited scope of the paper might earn the reviewer's reconsideration of their 'borderline' recommendation to accept the paper. Either way, once again, we thank the reviewer for their attention and consideration.
null
null
null
null
null
null
Differentially Private Decoupled Graph Convolutions for Multigranular Topology Protection
Accept (poster)
Summary: Graph neural networks have privacy leakage in both their topology information and node attribute information. This paper proposes a differential privacy framework to protect both graph topology and node attributes. A model that decouples graph convolution and node attribute embedding is proposed. Strengths: A graph differential privacy (GDP) framework is proposed for GNN models. Theoretical GDP guarantees are provided. Weaknesses: 1. A weakness of using the differential privacy (DP) metric is the significant deterioration of utility (in this paper, it is the node classification accuracy) for even a very generous privacy budget. As seen in Table 3 and Figure 3, \epsilon=16 has to be set to achieve reasonable accuracy (except for the simplest case of edge-level privacy). However, even with such a generous budget, the test accuracy drops significantly compared to the non-private case. One should question if DP is indeed the proper framework to use in GNN (despite its popularity in database privacy). For example, there are frameworks on *inference* privacy that specifically protect certain private attributes instead of the full "data" (graph topology + features), which are more applicable in practice. 2. By decoupling the graph adjacency information A from the node attributes X, the model can no longer benefit from graph aggregation and local node processing. This also explains why the proposed model does not perform well on homophily datasets. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. It was not immediately clear by the end of Section 5 that the output of DP-MLP_W is also designed to be DP, hence the overall framework is DP due to the composition theorem. It caused some confusion for me and I would appreciate if this point is emphasized since DP-MLP_W is never discussed in detail throughout the paper. 2. How is the MLP in Table 3 trained to achieve GDP? If the MLP is GDP and protects graph information (i.e., using the individual outputs from the MLP applied to individual nodes, one cannot easily infer if there are edges between them), why does it perform significantly better than DPDGC or DP-SAGE? 3. On the heterophily datasets, the reduction in test accuracy is very significant compared to the non-private scenario. What is causing this? A detailed discussion should be added. 4. How tight are the bounds in the theoretical results in Section 5 and are these used directly in the DPDGC model or is the model tuned instead based on an empirical estimate of its GDP? 5. What is the computational complexity compared to baselines? Is a distributed implementation using message passing possible? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer fPNi for their thoughtful feedback, comments, and positive assessments of our work. All questions are addressed below. - W1: ``A weakness of using the differential privacy (DP) metric is the significant deterioration of utility … for even a very generous privacy budget. … One should question if DP is indeed the proper framework to use in GNN (despite its popularity in database privacy).`` This is indeed an interesting comment. Please see G3 of our general response. - W2: ``By decoupling the graph adjacency information A from the node attributes X, the model can no longer benefit from graph aggregation and local node processing.`` Thank you for pointing this out. We have indeed discussed this issue and described it in our "limitation" section in the Appendix (lines 506-512). The key contributions of our work are to introduce new differential privacy criteria for graph learning tasks and establish theoretical privacy guarantees for different learning settings. Decoupling the graph adjacency matrix allowed us to better control the privacy-utility trade-offs and it was fundamental for motivating partial topology privacy concepts such as $k$-neighbor-level GDP. Note that traditional message-passing designs mostly focus on the utility aspect of GNNs: whether there exists a GNN design that can achieve our $k$-neighbor-level GDP and offer utility similar to standard message-passing GNNs remains a challenging open problem. It is also worth noting that LINKX [38] adopts a similar decoupled structure and empirically demonstrates strong (nonprivate) performance on heterophilic datasets. Nevertheless, that work does not study privacy-related features of decoupling. - Q1: ``It was not immediately clear by the end of Section 5 that the output of DP-MLP_W is also designed to be DP, …`` We apologize for the confusion. Once we obtain the cached intermediate embedding $Z$, the remaining DP-MLP modules are trained with DP-SGD in an end-to-end fashion. We will clarify this point in our revision (a schematic sketch of this two-stage pipeline is included at the end of this response). - Q2: ``Questions about DP-MLP`` Please see G1 in our general response. - Q3: ``Why, on the heterophily datasets, is the reduction in test accuracy so significant compared to the non-private scenario?`` This is a good question. We conjecture that the beneficial information in the Squirrel and Chameleon datasets is relatively delicate. That is, most node embeddings are close to the classification boundaries of GNNs even if they are correctly classified in the non-private case. Thus, adding noise during the computation of the $Z$ embedding may greatly obfuscate these beneficial signals. It is unclear at this point why this phenomenon happens, and we hope to further investigate it in the future. - Q4: ``How tight are the bounds in the theoretical results in Section 5 and are these used directly in the DPDGC model?`` Note that DP is defined with respect to the worst-case scenario, and following our proof in Section 5, one can easily construct a worst-case pair ($(X, A)$ and $(X', A')$) such that the bounds are tight, implying our bounds are worst-case optimal. On the other hand, we use a tight upper bound on sensitivity in our experiments to obtain a practical GDP guarantee. As stated in Appendix I (line 677), we use autodp for privacy accounting (which adopts privacy amplification via subsampling and composition via Renyi DP). We note that the conversion from RDP to approximate DP (Lemma F.1) is known to be nearly optimal and has been used in most practical DP frameworks.
- Q5: ``What is the computational complexity compared to baselines?`` The computational complexities of DPDGC and GAP are roughly the same. Note that the computational bottleneck is the operation $AX$, $AH$ (GAP), or $AW$ (DPDGC). We set the (hidden) dimension of $W$ to be 64 (Appendix I, line 680), which is usually smaller than the feature dimension of $X$. As a result, the computational complexities of DPDGC, GAP, and the other GNN baselines are similar. Note that the implementations of DPDGC and message-passing in PyTorch Geometric already use sparse tensor/matrix multiplication to speed up the computation of $AX$, $AH$, or $AW$. Furthermore, similar to LINKX [38], DPDGC by default supports arbitrary mini-batch sizes during training and is thus scalable. Please see G2 in our general response for further discussion. Please feel free to let us know if there are follow-up questions. We will try our best to address them in a timely manner. --- Rebuttal Comment 1.1: Comment: Thank you for your clarifications. I appreciate the authors' efforts in responding to the reviews. I have no further questions. --- Reply to Comment 1.1.1: Comment: Thank you for the acknowledgment!
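As promised in our answer to Q1 above, here is a schematic sketch of the two-stage pipeline (hypothetical, simplified code; the sensitivity constant shown is a loose illustrative bound, not the exact constant derived in Section 5):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_z, k = 100, 8, 4

A = (rng.random((n, n)) < 0.05).astype(float)    # toy binary adjacency
W_A = rng.normal(size=(n, d_z)) / np.sqrt(n)     # fixed, data-independent weights

# Stage 1 (topology branch): privatize the cached embedding Z once.
# Loose illustrative L2 sensitivity of A -> tanh(A @ W_A) when up to k entries
# in one row and k entries in one column of A may change (tanh is 1-Lipschitz);
# the exact, tighter constants are derived in Section 5 of the paper.
sensitivity = 2 * k * np.linalg.norm(W_A, axis=1).max()
noise_multiplier = 1.0                           # set by the privacy accountant
Z = np.tanh(A @ W_A) + rng.normal(0.0, noise_multiplier * sensitivity, size=(n, d_z))

# Stage 2: Z is cached; the remaining MLP modules are trained on (X, Z) with
# DP-SGD (per-sample gradient clipping + Gaussian noise). The overall GDP
# guarantee then follows by composing the two stages.
print(Z.shape)
```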
Summary: This paper introduces a new framework called Differentially Private Decoupled Graph Convolutions (DPDGC) for graph learning settings that ensures both provably private model parameters and predictions. The framework is designed to protect sensitive user information and interactions in graph-structured data. The authors highlight the limitations of standard Differential Privacy techniques in graph learning settings and propose a novel notion of relaxed node-level data adjacency to establish guarantees for different degrees of graph topology privacy. The paper also includes an analysis of the framework and its performance compared to existing methods. Strengths: 1. This paper focuses on the important problem of graph differential privacy, which is critical in protecting sensitive user information and interactions in graph-structured data. 2. This paper conducts an experimental evaluation on seven node classification benchmarking datasets. Weaknesses: 1. The paper's presentation is difficult to follow, which may make it hard for readers, especially those without strong background knowledge on this topic, to understand the proposed method and its contribution. 2. The proposed method has poor performance, which is reflected in the experimental results presented in the paper. 3. Some relevant literature has not been cited in the paper, which could suggest that the authors have not conducted a thorough review of the existing research in this area. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. How can one determine the appropriate value of K for a given dataset, in order to achieve meaningful results considering both privacy protection and utility? 2. In the case of the Pubmed and Cora datasets, why do the results of graph-based machine learning methods remain unchanged across different values of K, and what implications does this have for the use of these datasets in research? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: 1. The paper's presentation could be improved to make it easier for the reader to follow the content and understand the proposed method. 2. The utility of the proposed method is limited, as the privacy protection achieved with a privacy budget of eps=16 is too weak to be meaningful for many datasets. Furthermore, even with this level of privacy protection, the performance of the proposed method is significantly worse than that of non-private baseline models on most datasets, indicating poor utility. 3. The paper could benefit from a more comprehensive review of related works, as some relevant studies are not cited or discussed in the text, including [1]. [1] Zhang, Q., Ma, J., Lou, J., Yang, C., & Xiong, L. (2022). Towards Training Graph Neural Networks with Node-Level Differential Privacy. arXiv preprint arXiv:2210.04442. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer ssFv for their comments. We addressed all questions below. - W1: ``The paper's presentation is difficult to follow.`` We appreciate the reviewer's comments regarding readability. We spent significant effort finding a good order of exposition and length of explanations under the page-limit constraints, but there is clearly always room for improvement. Nevertheless, we would like to point out that all other reviewers mentioned that our presentation is good (score 3). Moreover, Reviewer XfoH even explicitly mentioned that ``The problem and contributions discussed in the paper are clearly mentioned and the illustrations do a good job of conveying them.`` Reviewer tLab also mentions that ``The authors have effectively conveyed complex concepts and ideas in a well-structured manner.`` and lists this as one of the strengths of our paper. We are already working on improving the presentation of our paper even further. - W2: ``The proposed method has poor performance.`` We respectfully disagree with this comment. Note that our proposed method outperforms or matches the prior state-of-the-art method GAP and all the other baselines in almost all cases. Even for the few cases in which DP-MLP has the best performance, our approach is still competitive with the other DP-GNN baselines. Note that it is reasonable that in some cases, DP-MLP can outperform DP-GNNs. See G1 in our general response for further discussion and examples. In summary, we would like to emphasize again that our DPDGC model already has the overall best performance compared to other DP-GNN baselines. It is important to bear in mind how difficult it is in practice to ensure provable privacy guarantees for graph-based learners with satisfactory utility. - W3: ``Some relevant literature has not been cited in the paper, which could suggest that the authors have not conducted a thorough review of the existing research in this area.`` We thank reviewer ssFv for bringing to our attention the paper of Zhang et al. [ref 1]. We will include it in the related work section of our revision. However, we would like to point out that [ref 1] considers the case in which the training graph and test graph are disjoint. Hence, they only need to ensure that the GNN weights are DP. Their method cannot be extended to the more challenging setting where training nodes are reused for inference, as is the case in our work and the common node classification scenario. See page 7 in [ref 1]. As a result, we cannot compare [ref 1] with GAP and our approach. Furthermore, their method can be viewed as a privatized version of APPNP [ref 2], which is unable to learn well on heterophilic datasets [ref 3]. In contrast, our DPDGC can work well on heterophilic datasets. - Q1: ``How can one determine the appropriate value of K for a given dataset?`` One should treat the parameter $k$ in our $k$-neighbor-level GDP similarly to $(\epsilon,\delta)$ in approximate DP (see our discussion in lines 348-352); it serves as a design parameter specifically tailored to graph datasets. The appropriate value of $k$ can be determined by considering the level of sensitivity in revealing a portion of edge information. In practical implementations, the model holder (e.g., the server) should proactively choose privacy parameters $k$, $\epsilon$, and $\delta$ in compliance with regulations or users' agreements. While we demonstrate the utility-privacy trade-off for different choices of $k, \epsilon, \delta$, the final choice should be made by the practitioners in the field.
- Q2: ``In the case of the Pubmed and Cora datasets, why do the results of graph-based machine learning methods remain unchanged across different values of K, and what implications does this have for the use of these datasets in research?`` Note that GAP and the other DP-GNN baselines are unaffected by the choice of $k$, as already discussed in Section 5. The main reason behind this finding is their "coupling" design, i.e., the use of the product of $A$ and (a function of) $X$. We also give a detailed discussion and simple example explaining why prior DP-GNN models are not affected by the choice of $k$; see lines 244-251 (a toy numerical sketch of this effect follows the references below). On the other hand, we would like to point out that our DPDGC ***does offer different performance*** for different choices of $k$. Please check our Table 3 and Figure 3 for details. Also check our updated Table in the general response. - L1: ``The utility of the proposed method is limited, as the privacy protection achieved with a privacy budget of eps=16 is too weak ...`` We would like to point out that our method already outperforms the state-of-the-art DP-GNN method, GAP, across different datasets and privacy parameter settings. We also want to point out that the authors of GAP also choose $\epsilon=16$ to demonstrate the utility of DP-GNNs for node-level DP experiments. While we agree that $\epsilon=16$ may be too weak in practice, we are not aware of any other methods that outperform our DPDGC, and we do test for different $\epsilon \in \\{ 1,2,4,8,16 \\}$ in Figure 3. In fact, our novel $k$-neighbor-level GDP definition partially addresses the issue that node-level GDP with low $\epsilon$ results in poor utility. We hope the reviewer ssFv can appreciate the challenging nature of the problem of graph privacy and our contribution given the pointers above. Please feel free to let us know if there are follow-up questions. We will try our best to address them in a timely manner. ### Reference [ref 1] Towards Training Graph Neural Networks with Node-Level Differential Privacy. Zhang et al. arXiv preprint arXiv:2210.04442. [ref 2] Predict then Propagate: Graph Neural Networks meet Personalized PageRank. Gasteiger et al. ICLR 2019. [ref 3] Adaptive Universal Generalized PageRank Graph Neural Network. Chien et al. ICLR 2021. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications. It's worth noting that there's relevant literature missing, specifically reference [1]: [1] Epasto, Alessandro, et al. "Differentially Private Graph Learning via Sensitivity-Bounded Personalized PageRank." Advances in Neural Information Processing Systems 35 (2022): 22617-22627. --- Reply to Comment 1.1.1: Title: About the additional reference Comment: We thank reviewer ssFv for providing the additional reference [ref 4]. However, we would like to emphasize that [ref 4] is not about DP-GNNs but specifically the DP PageRank algorithm. We believe that there can be much more literature about DP graph algorithms. Due to the space limitation, we choose to focus our discussion on DP-GNNs in the related work section, which is the most relevant to our manuscript. Note that neither our DPDGC nor the discussed DP-GNN baselines leverage PageRank algorithms. The only exception is the reference Zhang et al. [ref 1] provided by the reviewer ssFv, where their work is a privatized version of APPNP and thus relevant to [ref 4]. We will try to include [ref 4] along with [ref 1] if there is still space in our revision.
### Reference [ref 1] Towards Training Graph Neural Networks with Node-Level Differential Privacy. Zhang et al. arXiv preprint arXiv:2210.04442. [ref 4] Epasto, Alessandro, et al. "Differentially Private Graph Learning via Sensitivity-Bounded Personalized PageRank." Advances in Neural Information Processing Systems 35 (2022): 22617-22627.
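To make the "coupling" point in our answer to Q2 above concrete, here is a toy numerical sketch (hypothetical illustration only; `np.tanh` stands in for the learned feature encoder $H(X)$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, r = 50, 4, 7
A = (rng.random((n, n)) < 0.1).astype(float)
X = rng.normal(size=(n, d))
W = rng.normal(size=(n, d))          # fixed weights of the decoupled branch

# Neighboring dataset: flip three of node r's edges and shift its features.
A2, X2 = A.copy(), X.copy()
A2[r, :3] = 1 - A2[r, :3]
X2[r] += 1.0

# Decoupled (A @ W): the edit to row r of A perturbs only row r of the product.
diff = np.abs(A @ W - A2 @ W).sum(axis=1)
print("decoupled: rows affected =", np.flatnonzero(diff))          # -> [7]

# Coupled (A @ H with H a function of X, as in standard message passing):
# changing node r's features also perturbs every row that aggregates over r.
H, H2 = np.tanh(X), np.tanh(X2)
diff = np.abs(A @ H - A2 @ H2).sum(axis=1)
print("coupled: #rows affected =", np.flatnonzero(diff).size)      # node r and its in-neighbors
```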
Summary: The paper presents a well-written and easily understandable framework called Graph Differential Privacy (GDP) tailored for graph learning methods. The proposed framework aims to address the privacy challenges associated with GNNs by ensuring both model parameter and prediction privacy. The authors introduce the Differentially Private Decoupled Graph Convolution (DPDGC) model, which offers superior privacy-utility trade-offs compared to existing approaches. The paper includes theoretical analysis that provides a solid foundation for the proposed model. The authors evaluate the DPDGC and compare it with existing differentially private GNN (DP-GNN) methods, as well as non-private models. By achieving SOTA performance, the experimental results validate the effectiveness and utility of the proposed DPDGC model in graph learning tasks. Strengths: 1. A key strength of the paper is its clarity and coherence. The authors have effectively conveyed complex concepts and ideas in a well-structured manner. 2. The theoretical analysis provided in the paper adds good value to the research, supporting the proposed model and enhancing its credibility. 3. The incorporation of the DPDGC model, which leverages decoupled graph convolution, achieves SOTA results. Weaknesses: 1. Lack of comparison with other methods: The paper focuses primarily on comparing the performance of the proposed GDP-based methods (including DPDGC) against other differentially private graph learning methods. However, it would be beneficial to include a comparison with SOTA non-private graph learning methods to better understand the tradeoffs between privacy and utility. 2. Limited evaluation on larger and more diverse datasets: The experimental evaluation of the proposed methods is conducted on a relatively small set of benchmark datasets. The generalizability and scalability of the methods to larger and more diverse datasets are not extensively explored. Including a broader range of datasets would provide a more comprehensive evaluation of the proposed methods' performance and generalizability. 3. Lack of detailed analysis on privacy guarantees: While the paper mentions the privacy guarantees provided by the GDP framework and the DPDGC model, the detailed analysis of these guarantees is not thoroughly discussed. Providing more in-depth analysis, proofs, and discussions of the privacy guarantees would strengthen the paper's claims about the privacy properties of the proposed methods. 4. Limited exploration of alternative privacy mechanisms: The paper primarily focuses on GDP as the privacy framework and DPDGC as the corresponding graph learning model. However, there are various other privacy mechanisms and techniques available in the field of differential privacy. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See Weaknesses Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors do not adequately acknowledge potential limitations in their work, which may indicate a lack of comprehensive understanding of the challenges and constraints associated with the proposed framework.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer tLab for their thoughtful feedback and comments. We addressed all questions below. - W1: ``Lack of comparison with other methods: ... include a comparison with SOTA non-private graph learning methods to better understand the tradeoffs between privacy and utility.`` We appreciate the comment. We agree that comparing our method with SOTA ***non-private*** graph learning methods would reveal the exact price we need to pay for privacy (GNN GDP). However, we would like to point out that studying GDP for graph learning methods is very different from studying standard DP neural networks, as incorporating (modified) DP-SGD can only ensure that the GNN weights are DP, but not the final output predictions. A direct injection of DP noise typically leads to very poor, if not meaningless, results. See the DP-SAGE approach and the discussion in the GAP paper [19] for an example. For standard (graph-free) classification problems, one can simply adopt DP-SGD during training to make any neural network model DP. That is, the model design and the privatization approach are decoupled. Unfortunately, this is not the case for graph learning, as we explain in lines 41-46. When requiring both GNN weights and the output to be GDP, we also have to specifically design DP-GNNs, and thus not all GNN models can be privatized easily to satisfy GDP. As we also mentioned in our limitation section (Appendix A), DPDGC designs may not be the ultimate solution for DP-GNNs. Due to the complex nature of the graph learning problem, one may need to jointly design the GNN and the privacy mechanism, as in GAP and our DPDGC. Nevertheless, ours is currently the best GNN model that offers GDP guarantees. As a result, we only compare GNNs that can achieve GDP with the method described in our manuscript. - W2: ``Limited evaluation on larger and more diverse datasets`` Thank you for this comment. While we agree that evaluating our model on larger datasets would be beneficial, we believe that our tested datasets are the most diverse ones known in the DP-GNN literature. Note that we are the first to test DP-GNNs on heterophilic datasets and various homophilic datasets. Due to the diversity of our dataset choices, we were able (for the first time) to establish that DP-MLP can outperform all other existing DP-GNNs in some cases. This phenomenon also shows the importance of our novel $k$-neighbor-level GDP definition, as it provides a new trade-off between utility and graph-structure privacy. Regarding the experiments on large datasets, please check G2 in our general response. - W3: ``Lack of detailed analysis on privacy guarantees`` Due to space limitations, we were only able to provide sketches of proofs in the main text (Section 5). Complete proofs are presented in Appendices B to H, as is standardly done with ML conference submissions. We have tried our best to explain the key steps of our analysis in Section 5, especially regarding why the "coupling" graph convolution design fails to explore the trade-off of $k$-neighbor-level GDP and utility, in lines 244-251 (i.e., the sensitivity remains unchanged for different $k$). The key ingredient for establishing a GDP guarantee is to derive tight sensitivity bounds for GAP and DPDGC, which are discussed in detail in Section 5, lines 222-236 and lines 287-292. For the proofs of our main theorems (i.e., Theorems 5.1, 5.3, and 5.4), please check Appendices E, C, and D, respectively. Nevertheless, we will try our best to make our proof sketches and discussion more transparent in the revision.
- W4: ``Limited exploration of alternative privacy mechanisms: The paper primarily focuses on GDP as the privacy framework and DPDGC as the corresponding graph learning model. However, there are various other privacy mechanisms and techniques available in the field of differential privacy.`` We believe that our privacy definitions for graph datasets are common and appropriate in both theory and practice. It is worth noting that the notions of edge-level and node-level DP have been widely adopted and studied in numerous previous works [18,19,29]. In our work, we further extend DP definitions by introducing the concept of $k$-neighbor-level privacy, which offers a novel way to capture practical privacy considerations specific to graph datasets. Regarding the techniques employed to achieve GDP, we have designed a GNN architecture with small sensitivity, compatible with the DP noise addition mechanism. Our choice of Gaussian noise as the mechanism to achieve DP is well-founded for several reasons: (1) Gaussian noise allows for tighter privacy accounting, (2) the Gaussian mechanism has been proven to achieve optimal MSE under a z-CDP constraint [ref 1] (note that z-CDP can be viewed as a variant of Renyi DP), and (3) the Gaussian mechanism is easy to implement in practice. While we believe that our technique based on the Gaussian mechanism is well-suited for the task, we are also open to alternative suggestions and would be happy to implement and compare with other solutions proposed by the reviewer. [ref 1] Mark Bun and Thomas Steinke, "Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds", TCC 2016 ### Comments regarding the limitation Due to the space limit, we have put the limitation section in Appendix A. We have mentioned that current SOTA non-private GNNs on homophily datasets are still using "coupling" graph convolution designs. Thus, our proposed DPDGC may not be the ultimate solution, but it is the best currently known solution under the GDP constraint. We hope to further investigate whether there is a new GNN design that can have the merits of both worlds. Hence, we respectfully disagree with the comment about us not adequately acknowledging the limitations of our work. Please feel free to let us know if there are follow-up questions. We will try our best to address them in a timely manner.
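For completeness, here is a minimal sketch of the Gaussian mechanism referenced in our answer to W4 (hypothetical code; the classical calibration shown below is valid for $\epsilon < 1$, whereas our experiments rely on tighter Renyi-DP accounting):

```python
import numpy as np

def gaussian_mechanism(v, sensitivity, eps, delta, rng):
    # Classical calibration: sigma = Delta * sqrt(2 ln(1.25/delta)) / eps
    # gives (eps, delta)-DP for eps in (0, 1); tighter accountants exist.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return v + rng.normal(0.0, sigma, size=np.shape(v))

rng = np.random.default_rng(0)
z = np.ones(8)                        # e.g., one row of a cached embedding
print(gaussian_mechanism(z, sensitivity=0.5, eps=0.8, delta=1e-5, rng=rng))
```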
Summary: The paper introduces a differentially private GNN model that allows for different privacy requirements for node attributes and graph structure. The model decouples graph convolutions from node attributes and graph topology and provides provable privacy guarantees. Experimental results are provided to show the proposed methodology's superiority. Strengths: The problem and contributions discussed in the paper are clearly mentioned and the illustrations do a good job of conveying them. Graph differential privacy is an interesting topic and the idea of providing flexibility for node attributes and graph structure is promising. Weaknesses: Some discussion regarding the questions mentioned in the next section would add to the paper greatly. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. From line 310: "we also test (DP-)MLP and several DP-GNN baselines that can achieve GDP guarantees, including RandEdge+SAGE [29] and DP-SAGE [18] for edge and node GDP, respectively.". However, it is not clear what MLP refers to in Table 3. 2. There is a large jump in performance for the non-private setting for DPDGC but the other methods do not exhibit this behavior. Any insight into this phenomenon? 3. From line 341 - DPDGC starts to outperform GAP when privacy budget increases but lags behind when privacy budget is small. What is the typical scenario in real world situations? 4. Regarding the comment on line 330: does homophily alone decide for which datasets utility loss from privacy noise compensates graph structure information? Basically, what are the things that one has to consider before picking the right algorithm to achieve GDP? Minor: On line 109, what is T? Table 3: change "none" to "non" Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer XfoH for their thoughtful feedback, comments, and positive assessment of our work. We addressed all questions below. - Q1: ``From line 310: "...". However, it is not clear what MLP refers to in Table 3.`` We apologize for the confusion. MLP in Table 3 refers to both non-private MLP and DP-MLP, depending on whether it is in the first row of Table 3 or not (i.e., having $\epsilon>0$ or not). We will change MLP to DP-MLP in Table 3 when it is trained with DP-SGD. See our pdf in the general response. - Q2: ``There is a large jump in performance for the non-private setting for DPDGC but the other methods do not exhibit this behavior. Any insight into this phenomenon?`` We assume that reviewer XfoH refers to results on the heterophily datasets (Squirrel, Chameleon, and Facebook). It is known in the literature that standard GNN designs (such as GCN) do not work well on heterophilic datasets. In contrast, specially designed GNNs such as LINKX [38] work much better in this setting. Thus, we conjecture that GAP (and the other DP-GNN baselines), which are similar to GCN, cannot perform well on heterophilic datasets even in the nonprivate setting, as described in [38] and our Table 3. As a result, adding privacy noise to these models will not result in a huge performance drop, as they already have poor performance in nonprivate settings. In contrast, since LINKX can learn well on heterophilic datasets in nonprivate settings, injecting large privacy noise into LINKX can result in a huge performance drop. Please let us know if we have misinterpreted your question; we will follow up in a timely manner. - Q3: ``From line 341 - DPDGC starts to outperform GAP when privacy budget increases but lags behind when privacy budget is small. What is the typical scenario in real world situations?`` In practice, the privacy parameters $\epsilon,\delta$ and $k$ should be determined based on the agreement between the model holder and the users or privacy regulators. That is, the model holders and users should first determine the ***strength*** of privacy (i.e., $\epsilon, \delta$) they agree with even before training models. Given such GDP constraints, we can then identify the DP-GNNs with the best utility. Our experimental results only demonstrate the privacy-utility trade-off, but do not assume any particular privacy-level requirements. - Q4: ``Regarding the comment on line 330: does homophily alone decide for which datasets utility loss from privacy noise compensates graph structure information? Basically, what are the things that one has to consider before picking the right algorithm to achieve GDP?`` This is an interesting question. We believe that there are multiple factors that affect this phenomenon. We conjecture that the homophily level and the edge density are two good indicators. Our conjecture is based on the analysis of the Stochastic Block Model (SBM) with two clusters [ref 1]. Consider a graph that has two even-size clusters ($n/2$). "In-cluster" edges are sampled i.i.d. from a Bernoulli distribution with probability $p=a\log(n)/n$. "Inter-cluster" edges (between the two clusters) are sampled i.i.d. from a Bernoulli distribution with probability $q=b\log(n)/n$ for some non-negative reals $a,b$. It is known from the literature that the fundamental limit of exact recovery (i.e., recovering the clusters with high probability) is $|\sqrt{a}-\sqrt{b}|>\sqrt{2}$ [ref 1].
Interestingly, this simplified case reveals two facts about how strong the graph information alone is: 1) when the homophily measure is close to $1$ or $0$ (i.e., $|p-q|$ is large, where $p$ and $q$ are the edge densities of in-cluster and inter-cluster edges, respectively), the graph structure has strong information about the labels (clusters); 2) the edge density needs to be large enough, which in the case of the SBM means $\Omega(\log(n)/n)$. If the edge density is too low (e.g., there are only $n/2$ edges), then it is impossible to achieve exact recovery even if $q=0$, as the graph is not even connected. Indeed, this only characterizes the strength of the graph information in a simplified case (the 2-cluster SBM). In real-world data, we also have node features and training node labels to learn from. We refer you to [ref 2] for a study of GNN utility on a generalized version of the SBM, where both node features and training node labels are considered. Nevertheless, the privacy aspect of GNNs was not considered in [ref 2]. We also hope to further study the question of "when do the benefits of graph information compensate for the price of privatizing it" (a toy sampler for this two-cluster SBM is included at the end of this response). Please feel free to let us know if there are follow-up questions. We will try our best to address them in a timely manner. ### References [ref 1] Community detection and stochastic block models: recent developments, Emmanuel Abbe, JMLR 2017. [ref 2] Graph Convolution for Semi-Supervised Classification: Improved Linear Separability and Out-of-Distribution Generalization, Baranwal et al., ICML 2021 --- Rebuttal Comment 1.1: Title: Thank you for the thorough response Comment: Changed the score to 7 --- Reply to Comment 1.1.1: Comment: Thank you for the positive feedback and raising the score! We really appreciate the fruitful discussion with reviewer XfoH.
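As referenced in our answer to Q4 above, here is a toy sampler for the two-cluster SBM (hypothetical illustration only):

```python
import numpy as np

def two_block_sbm(n, a, b, rng):
    # Two even clusters; in-cluster edge probability p = a*log(n)/n,
    # inter-cluster edge probability q = b*log(n)/n.
    labels = np.repeat([0, 1], n // 2)
    same = labels[:, None] == labels[None, :]
    p, q = a * np.log(n) / n, b * np.log(n) / n
    A = (rng.random((n, n)) < np.where(same, p, q)).astype(int)
    A = np.triu(A, 1)
    return A + A.T, labels            # undirected, no self-loops

rng = np.random.default_rng(0)
A, y = two_block_sbm(1000, a=9.0, b=1.0, rng=rng)
# |sqrt(a) - sqrt(b)| > sqrt(2): exact recovery of the clusters is possible.
print(abs(np.sqrt(9.0) - np.sqrt(1.0)) > np.sqrt(2))   # True
```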
Rebuttal 1: Rebuttal: We appreciate the time and effort of all reviewers and the AC. The reviews indeed provided helpful feedback. In this general response, we elaborate on some common questions raised by the reviewers (G1, G2). We also highlight some thoughtful questions and comments that initiate valuable discussions (G3, G4). - G1: ``Why simple DP-MLP has the best performance in some cases and how is it trained`` (Reviewer fPNi, WwFs) Note that the DP-MLP model does not leverage the graph structure information $A$. Hence, it does not need to pay an extra privacy price for protecting the graph structure. DP-MLP is simply trained with DP-SGD to achieve the same $(\epsilon,\delta)$ guarantees from the DP definition. Roughly speaking, DP-MLP can spend the whole privacy budget $(\epsilon,\delta)$ on DP-SGD training. However, both GAP and DPDGC need to spend some privacy budget to account for protecting $A$, such as the PMA module and multi-step training. As a result, DP-MLP requires less noise during training compared to GAP and DPDGC. Hence, DP-MLP might outperform GDP-GNNs such as our DPDGC when the benefit of graph structure information is insufficient to compensate for the additional noise needed for a GDP guarantee. Consider an extreme example, where $X = Y$ and $A$ is purely random (i.e., generated from Erdos-Renyi random graph model with edge probability 1/2). Clearly, introducing $A$ will not benefit the node classification accuracy. However, in order to make the graph algorithm GDP, we still need to pay an additional privacy budget to “protect” the privacy of $A$. It is obvious that DP-MLP will outperform any possible DP-GNNs in this case. For further discussion on how to characterize graph structure information, see our response to Q4 of reviewer XfoH. - G2: ``Performance on large homophilous graph datasets`` (Reviewer fPNi, tLab, WwFs) We are also very interested in conducting experiments on them. Unfortunately, the Opacus library (library for DP training) currently does not support SparseTensor (see issue #579 of Opacus GitHub repository) so we are unable to run tests on larger datasets. We believe that Opacus will support SparseTensor in a future version (as indicated by issue #350 of the Opacus GitHub repository by the creators) and we are willing to test on large graph datasets such as those in the Open Graph Benchmark repository. - G3: `` A weakness of using the differential privacy (DP) metric is the significant deterioration of utility… One should question if DP is indeed the proper framework to use in GNN.`` (Reviewer fPNi) While we acknowledge that there might be alternative definitions of privacy that could be considered "proper" for Graph Neural Networks (GNNs), we want to emphasize that Differential Privacy (DP) has emerged as the most widely accepted and implemented privacy standard across various fields, including machine learning, database & privacy research communities as a whole. The adoption of DP algorithms and models by prominent tech companies like Apple and Google, as well as the U.S. Census Bureau's embrace of Differential Privacy for data privacy, all further highlight its practical relevance and applicability in real-world settings. Given the widespread acceptance and deployment of DP, we firmly believe that investigating GNN performance under DP requirements is both valuable in theory and practice. 
On the other hand, we also acknowledge that requiring node-level Differential Privacy (DP) can significantly impact the utility of GNNs or necessitate a high privacy budget to maintain reasonable utility. In fact, we are the first to highlight this limitation of DP-GNNs, as prior works (e.g., GAP) only demonstrate the case where GAP outperforms MLP on three datasets. In contrast, our extensive testing on seven datasets reveals this phenomenon, underscoring the importance of addressing this challenge. It remains an interesting and open problem to explore whether a DP-GNN design can achieve node-level DP with a moderate privacy budget while maintaining comparable utility, or whether there is a fundamental price to pay for node-level DP. The above observations motivated us to introduce the concept of "k-neighbor-level" DP. As mentioned in our paper (lines 56-64), selecting $k = 0$ implies no privacy protection on the graph structure, while $k=n$ indicates node-level DP. In other words, it offers a trade-off between the edge information and utility and mitigates the possibly overly pessimistic node-level DP. It is also worth noting that in many practical scenarios, edge information can be less sensitive than node features, making this notion of privacy particularly useful in various applications. Lastly, we recognize that further research is necessary to determine whether the extent of utility drop is indeed the fundamental privacy price we must pay in the graph learning scenario. Nonetheless, exploring this aspect lies outside the scope of this single paper, but we hope that future studies will provide new answers to the challenging graph DP problem. - G4: `` The novelty of DPDGC with respect to LINKX`` (Reviewer WwFs) Note that we cited LINKX [38] in the original manuscript on line 198, where we also mentioned that our DPDGC is motivated by LINKX. We would like to emphasize the key novelty of DPDGC with respect to LINKX. LINKX focuses on learning with heterophilic graph datasets, and the authors purely focus on the utility aspect of the graph learning problem. In contrast, we are the first to identify and study the theoretical privacy benefit of this decoupled graph convolutional design and propose a corresponding privatization design. None of these ideas appear in the LINKX [38] paper or other prior literature. Furthermore, all prior DPGNNs leverage the “standard coupling graph convolution” design as their building block, which is significantly different from our approach. Thus, we believe that our DPDGC model is significantly novel. Pdf: /pdf/ce1df6b98b5e0525a8f7d082ae4fce338d1d8bb1.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors introduce a new model for Graph Differential Privacy (GDP) that ensures a parametric level of topological privacy through decoupling of the graph convolution mechanism (i.e., preventing direct neighborhood aggregation of features = standard $AXW$ aggregation). The key definition is that up to $k$ in- and out-neighbours of a randomly selected single node $r$ can be modified to create a new adjacency matrix. This makes this a hybrid between edge ($k=1$) and node ($k=n$) GDP; the parameter $k$ allows a tradeoff between topology privacy and accuracy. All GNN training is done using this modified graph $D'$ to create the model parameters for inference. The main contribution of the paper is three-fold: 1) proving theoretical bounds on the differential privacy (DP) of the state-of-the-art DP-GNN called GAP [19]; 2) intuitions from analyzing the DP weakness of GAP to develop a novel Differentially Private Decoupled Graph Convolution (DPDGC) model, which benefits from decoupling graph convolution while providing GDP guarantees; 3) theoretical bounds on the DP of their proposed DPDGC. The key intuition is that GAP has greater privacy leakage because they compute $A'H'$ where both adjacency matrix and features change. Motivated by this, the authors propose DPDGC, in which the $A'H'$ product is avoided, thus providing more privacy than GAP. Strengths: Sound theoretical analysis of the two GDP models. Theoretical analysis of GAP, i.e., the presence of the $A'H'$ product, which is shown from Theorem 1 to be a contributing factor to the DP limitation of GAP. This leads to their derivation of DPDGC, which applies a DP-MLP to adjacency matrix $A$ using a non-linear operation on $AW^{(A)}$ to create the adjacency matrix embedding $Z$, where $W_A$ are fixed model weights. This is opposed to GAP, which computes $AH$ for the $Z$ embeddings. By ensuring $W^{(A)}$ is DP, they only need to look at $A'W^{(A)}$ versus $A'H'$ in GAP. Weaknesses: A big portion of the paper is spent defining the problem and setting up conventions for future Differential Privacy studies to be more suitable to the GNN field/setting. The novelty of the decoupling method is not very convincing. The idea of decoupling is previously found in this paper [1] "Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods", D. Lim, F. Hohne et al., NeurIPS 2021, which defines a method called LINKX for non-homophilous graphs. LINKX separately embeds the adjacency $A$ to $h^A$ and the features $X$ into $h^X$ before mixing adjacency and feature information. The decoupling method proposed here seems to be a modification of the LINKX strategy. From Table 3, the proposed DPDGC model works well compared to the other differential privacy models when the graphs are heterophilic. This is not surprising because the decoupling idea is known to be beneficial for heterophilic graphs [1]. However, the other baseline methods work better in a homophily setting (columns on the right side of the table) because the proposed decoupled design is not specifically suited to that kind of setting. I would want to see a more adaptive method that works in both settings to be more convinced about the utility of this method. Also from Table 3, simple MLP outperforms DPDGC in accuracy for higher $k$ on many datasets, especially homophilic ones. As pointed out by the authors themselves, protecting the graph information (higher privacy requirements) quite drastically reduces the utility of these decoupling methods. So the practical utility of this seems questionable.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: What is the practical meaning of high-$k$ topology protection? In other words, what are the additional security benefits obtained by going from $k$ to $k+1$? It is not clear what the marginal utility of increasing $k$ is, in terms of the increased cost to an adversary who wants to break privacy. This is an important question, as varying $k$ is the major difference between this work and GAP. How well will the decoupling method DPDGC work on large-scale homophilous graphs with moderately high privacy budgets? I would like to see more extensive experimental results to justify the cost of decoupling versus simple MLP methods. Fig 1 is hard to understand (contextualize), with a lot of undefined terms (e.g., $k$-neighbor-level adjacency), especially as it comes so early in the paper. Consider moving it later or defining the terms better in-text. Def 4.3 seems ambiguous: $k$ entries of $A_{rj}$ and $A_{lj}$ are modified, but what is the size of $j+l$ (is $j+l=k$ or $2k$)? The text only says "some" $j$ and $l$. There are multiple variations of row normalization; are you doing Euclidean-norm normalization of each row? It is not clear from the text, and it obviously makes a big difference in the proof, as it is known that Euclidean row normalization dampens the effect of outliers [1] "Sign and rank covariance matrices", J. of Statistical Planning and Inference, Dec. 2000. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have addressed the limitations of this work in the appendix: "we do not believe that the current DPDGC model is the ultimate solution for GDP-aware graph learning methods. To support this claim, we note that the nonprivate state-of-the-art performance for learning on large-scale homophilic graphs is achieved by standard graph convolution models [47, 48]." The authors have stated that the proposed topic doesn't have any negative societal impacts; instead, GDP can potentially protect user data and is beneficial. However, this is a generic statement and needs to be substantiated (to what extent would breaking the GDP of a graph impact individual users?). Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
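To make the $k$-neighbor adjacency relation discussed in this review concrete, here is a minimal sketch (the helper name and shapes are hypothetical, not code from the paper) that builds a neighboring adjacency matrix $A'$ by flipping up to $k$ out-edges and up to $k$ in-edges of a single node $r$, so that at most $2k$ entries change:

```python
import numpy as np

def k_neighbor_adjacent(A, r, k, seed=None):
    """Flip up to k out-edges (row r) and up to k in-edges (column r)
    of node r, so that at most 2k entries of A change."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    A_prime = A.copy()
    out_idx = rng.choice(n, size=min(k, n), replace=False)  # out-neighbors
    in_idx = rng.choice(n, size=min(k, n), replace=False)   # in-neighbors
    A_prime[r, out_idx] = 1 - A_prime[r, out_idx]
    A_prime[in_idx, r] = 1 - A_prime[in_idx, r]
    return A_prime

A = np.random.default_rng(0).integers(0, 2, size=(6, 6))
A_prime = k_neighbor_adjacent(A, r=2, k=2, seed=1)
print((A != A_prime).sum())  # at most 2k = 4 entries differ
```

Under this reading, $k=1$ recovers roughly edge-level adjacency and $k=n$ approaches node-level adjacency, which matches the hybrid interpretation above.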
Rebuttal 1: Rebuttal: We thank reviewer WwFs for their thoughtful feedback and comments. We address all questions below. - W1: ``A big portion of the paper is spent defining the problem and setting up conventions for future Differential Privacy studies to be more suitable to the GNN field/setting.`` We believe this is in fact one of our most important contributions. Note that we need a rigorous definition and proof for GDP to truly ensure the privacy of graph datasets. As first pointed out by the authors of [19], merely ensuring GNN weight DP is insufficient to protect the privacy of graph datasets. We are the first to formally define GDP and lay rigorous theoretical foundations for future DP-GNN studies. - W2: ``The novelty of DPDGC with respect to LINKX`` Please see G4 in our general response. - W3: ``DPDGC only works well on heterophilic graphs but not homophilic graphs`` We agree that part of the reason DPDGC works well on heterophilic graphs is the same reason LINKX works well on heterophilic graphs. However, we would like to emphasize that our contribution is to propose the privatization design of DPDGC and to identify the theoretical privacy benefits of the decoupled graph convolution design. Note that ensuring the GDP guarantee is not trivial, as a direct extension of DP-SGD training to GNNs does not work (see lines 41-46). GAP is the only (and the prior state-of-the-art) DP-GNN that satisfies GDP guarantees (our Corollaries H.4 and H.5), but it does not work for heterophilic graph datasets, as demonstrated in Table 3. Our DPDGC is currently the only method that can achieve nontrivial performance on heterophilic graph datasets with GDP guarantees. We agree that DPDGC is only on par with GAP under the node-level GDP setting for homophilic datasets. Nevertheless, DPDGC still significantly outperforms GAP in the $k$-neighbor-level GDP setting on three out of four homophilic datasets. While DP-MLP can outperform all DP-GNNs in certain scenarios, we conjecture this is because the benefit of graph information cannot compensate for the utility loss induced by the privacy noise that protects the graph information (line 330). Please check our general response G1 for more information. Still, we agree that there should be a more adaptive design of DP-GNNs that works across all settings and datasets, which we also mention in our limitations section (Appendix A). Such a design appears hard to find, and work on this problem is ongoing. - W4: ``simple MLP outperforms DPDGC in accuracy for higher $k$ on many datasets`` Please see G1 in our general response. - Q1: ``What is the practical meaning of high-$k$ topology protection?`` This is an excellent question. One can think of the parameter choice $k=1$ as providing roughly the same privacy protection on $A$ as edge-level GDP (albeit $k$-neighbor-level GDP also protects the node features and labels). For general $k$, this implies that the adversary cannot simultaneously infer the existence of $k$ neighbors for each node. Let's consider the simplest case of $k=1$: according to Definition 4.4, the GDP algorithm output prevents the adversary from inferring the existence of any edge, even with access to the remaining $n-1$ nodes and their edges. However, the adversary may still be confident that there is a true edge between certain node pairs. For instance, even though the adversary cannot individually infer whether $A_{12}$ or $A_{13}$ is $1$ or $0$, they might be confident that $A_{12}=1$ or $A_{13}=1$ in the worst case.
The case $k=2$ additionally protects against this, but the adversary might still be confident that $A_{12}=1$ or $A_{13}=1$ or $A_{14}=1$ in the worst case. A similar explanation holds for general $k$. Selecting an intermediate value of $k$ (where $1 < k < n$) may reveal some portion of the edge information, but it represents a trade-off between the potentially sensitive edge information and utility. It's worth noting that in many practical scenarios, edge information can be less sensitive than node features, making this notion of privacy particularly useful in various applications. For instance, if we are satisfied with the graph structure protection of edge-level GDP, we can simply use $k=1$ to additionally protect sensitive node features and labels with similar graph-structure privacy. Please let us know if further clarification is needed; we are happy to provide a more intuitive explanation. - Q2: ``How well will the decoupling method DPDGC work on large-scale homophilous graphs?`` This is a good question. Please check G2 in our general response. - Q3: ``Fig 1 is hard to understand; consider moving it later`` Thank you for the suggestion. We will either try to make it clearer or move it to a later part of the paper, as suggested. - Q4: ``Def 4.3 seems ambiguous`` Sorry for the confusion. We meant to say that the total number of replaced entries is $2k$ ($k$ for in-edges and $k$ for out-edges). We will make this clear in our revision. - Q5: ``question about row normalization`` We apologize for the confusion. We mean $\ell_2$-norm (Euclidean) row normalization. That is, for a matrix $H\in \mathbb{R}^{n\times d}$, we normalize each row of $H$ to have $\ell_2$ norm equal to one (i.e., $||H_i||_2 = 1$ for all $i\in [n]$). We will make this clear in our revision. It is interesting to investigate the outlier effect mentioned by reviewer WwFs as a future direction. However, in this work we focus solely on the privacy aspect, and this future direction is out of scope. We thank reviewer WwFs for this intriguing comment. Please feel free to let us know if there are follow-up questions. We will try our best to address them in a timely manner. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. No further questions as of now. --- Reply to Comment 1.1.1: Comment: Thank you for the notification and response!
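To illustrate the structural difference that drives the privacy discussion above, a minimal sketch (hypothetical shapes and names; the LINKX-style concatenation used for the late mixing step is our assumption, not necessarily the authors' exact design) contrasting the coupled product $AH$ with a decoupled adjacency embedding built from $AW^{(A)}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 5, 8, 4
A = rng.integers(0, 2, size=(n, n)).astype(float)  # adjacency matrix
X = rng.normal(size=(n, d))                        # node features

# Coupled (GAP-style) aggregation: the product A @ H couples structure
# and features, so on adjacent graph datasets *both* A' and H' change.
W_X = rng.normal(size=(d, h))
H = np.tanh(X @ W_X)
Z_gap = A @ H

# Decoupled (DPDGC-style) embedding: A is passed through fixed weights
# W_A, independently of the features; only A' W_A varies with A.
W_A = rng.normal(size=(n, h))
Z_struct = np.tanh(A @ W_A)
Z_dpdgc = np.concatenate([Z_struct, H], axis=1)  # late, LINKX-style mixing
```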
Deep Gaussian Markov Random Fields for Graph-Structured Dynamical Systems
Accept (poster)
Summary: This paper proposes a Gaussian model for graph-temporal data, with two sorts of sparseness in the precision matrix: blockwise, due to an assumed Markov property in time, and within blocks, due to assumed sparse graph structures. To achieve inference and parameter learning, the paper proposes a two-stage approach, with parameter learning done via VI and final posterior (mean) inference with CG. The variational distribution has a suitable structure that reflects the temporal sparseness of the model. Strengths: The presented model is a very natural and ubiquitous one, so the setting is well motivated. There are no other approaches that I am aware of that solve this particular situation. The approach is certainly valid. I found this paper easy to read and well explained. Rather than solving the inference problem with a Kalman-style algorithm (as in [1]), this paper proposes a two-stage approach of using VI for parameter learning and then CG for the final posterior (mean). I initially thought this was a rather strange thing to do, but on reflection I see that it is quite a good idea in this setting. I wonder whether this has more general application. The design of the posterior distribution is well chosen to capture the (exact?) posterior while reducing the computational complexity of the naive approach. Weaknesses: The experiments section is rather weak and comes across more as a unit test than a proper evaluation: the data is drawn from exactly the correct model class, and the primary comparison is to a model that is the same but missing the temporal dependencies. There is no comparison on real data, or on simulated data with any other approach except extremely simple AR/MA/ARMA models. The approach of [1] is applicable for the temporal part of this setting, and for a 30x30 grid it is feasible to take the 900-dimensional dense representation. This would have made a useful comparison. In terms of novelty, the extension to the spatio-temporal regime is not a huge leap, and is arguably already in the DGMRF model class. Most of the VI is introduced in [40], so the novelty there is not that great. While the setting is general, to enable fast determinant computation the form of the spatial layer is actually quite restricted. For a general spatial graph the approach would be no better than Kalman approaches. Technical Quality: 3 good Clarity: 3 good Questions for Authors: With the extension proposed on line 267, it would seem to me that the variational distribution contains the exact posterior. Is this correct? If so, then the CG could surely be omitted if the VI was run to convergence (e.g. with an annealing learning rate). I assume the variational distribution is learned by direct gradient descent on the parameters. This would be rather inefficient if the matrices are ill-conditioned. Does the natural gradient approach of [1] work here? The motivation for the two-stage approach is that "parameter estimation becomes computationally prohibitive in high dimensional settings". [57] shows that the (truncated) CG algorithm also gives the marginal likelihood log-det term with a good approximation. I feel that this is relevant work which should be discussed. [57] Scalable Log Determinants for Gaussian Process Kernel Learning, Dong, Eriksson, Nickisch, Bindel, Wilson, NeurIPS 2017. Typo: 371 extend -> extent Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The approach is limited to Gaussian models, and in this case exact inference is available if the problem is small enough, so the method is only relevant for larger data. It is not really made clear in the experiments how large a problem is feasible, or whether the two-stage approach really works on real rather than simulated data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Official Rebuttal to Upf6 We would like to thank Reviewer Upf6 for the positive assessment, the thorough feedback and the interesting questions. In the following, we respond to each point separately, and refer to our general response to all reviewers whenever there is overlap with other reviewers. ## Responses to Weaknesses > The experiments section is rather weak and comes across more as a unit test than a proper evaluation: the data is drawn from exactly the correct model class We have added additional experiments on a real-world air quality monitoring dataset, showing that our approach generalizes well to more complex real-world settings. For more details, please see the general response to all reviewers. > and the primary comparison is to a model that is the same but missing the temporal dependencies. There is no comparison on real data, or on simulated data with any other approach except extremely simple AR/MA/ARMA models. The approach of [1] is applicable for the temporal part of this setting, and for a 30x30 grid it is feasible to take the 900-dimensional dense representation. This would have made a useful comparison. We have added an AR model with spatial noise (as suggested by reviewer LAkB) as an additional baseline, taking both spatial and temporal dependencies into account. For more details, please see the general response to all reviewers. While we agree that the approach of [1] could serve as an interesting baseline, we are not sure how feasible it is in practice. To the best of our knowledge there is no code publicly available for this approach, and given the remaining time frame we do not think it is doable to implement this ourselves. However, we would happily include this baseline if the reviewer could point us to a working implementation. > While the setting is general, to enable fast determinant computation the form of the spatial layer is actually quite restricted. For a general spatial graph the approach would be no better than Kalman approaches. It is true that for general dense base graphs $G_{\text{spatial}}$ and $G_{\text{temporal}}$, there is no increase in computational efficiency. However, the idea is that in such a setting the dense graph is likely the result of a chain of more local (and thus sparse) interactions, and the dense adjacency matrix $\mathbf{A}$ can thus be approximated by a composition of sparse matrices. For further discussion and experimental validation of the spatial DGMRF approach for general graphs, we would like to point the reviewer to [40]. ## Responses to Questions > With the extension proposed on line 267, it would seem to me that the variational distribution contains the exact posterior. Is this correct? If so, then the CG could surely be omitted if the VI was run to convergence (e.g. with an annealing learning rate). Although this seems to be the case at first sight, the variational distribution is defined in terms of a sparse and factorized covariance matrix, while the true posterior has a sparse precision matrix $\mathbf{\Omega}^+=\mathbf{F}^T\mathbf{S}^T\mathbf{SF} + \mathbf{H}^T\mathbf{R}^{-1}\mathbf{H}$. The corresponding covariance matrix $\mathbf{\Sigma}^+=(\mathbf{\Omega}^+)^{-1}$ will generally be dense. To facilitate fast sampling from the variational distribution, the idea is to approximate the true dense covariance matrix $\mathbf{\Sigma}^+$ with a sparse and factorized covariance matrix $\mathbf{\Lambda}$. > I assume the variational distribution is learned by direct gradient descent on the parameters.
This would be rather inefficient if the matrices are ill-conditioned. Does the natural gradient approach of [1] work here? Thank you for this good suggestion. While we did not run into convergence issues in our experiments, it is definitely worth considering this in the future. > The motivation for the two-stage approach is that "parameter estimation becomes computationally prohibitive in high dimensional settings". [57] shows that the (truncated) CG algorithm also gives the marginal likelihood log-det term with a good approximation. I feel that this is relevant work which should be discussed. The advantage of using CG only to compute the final posterior distribution is that the associated iterations need to be run only once. In contrast, using CG to approximate log-determinants would require running CG repeatedly until convergence during the training loop. Although CG usually converges fast, we would expect that for large systems this would still be a major bottleneck. ## Responses to Limitations > The approach is limited to Gaussian models, and in this case exact inference is available if the problem is small enough, so the method is only relevant for larger data. It is not really made clear in the experiments how large a problem is feasible, Figure 3 in the appendix shows the general scaling behaviour of our method, supporting our theoretical analysis in the main paper. Of course, the scalability to even larger systems also depends on additional factors such as hardware constraints. To address this concern, we are planning to explore the limits of our approach during the discussion period, and will report our findings as soon as possible. > and whether the two-stage approach really works on real rather than simulated data. Our additional experiments on the real-world air quality dataset show that our method works well also in more challenging settings with unknown and complex dynamics (see the general response to all reviewers for more details). ## Conclusion We thank reviewer Upf6 again for the time and effort spent on this review and for posing interesting and thought-provoking questions. Similar to the other reviewers, concerns were mainly raised about the experimental evaluation. We addressed this by adding experiments on a real-world dataset and by including a spatiotemporal ST-AR baseline model. If there are any other changes you would like to see, we are more than happy to discuss them with you. --- Rebuttal Comment 1.1: Title: Comment Comment: I thank the authors for the extensive and detailed reply. My questions were addressed satisfactorily. An implementation of [1] is at https://github.com/secondmind-labs/markovflow, though it may be quite difficult to use. I continue to be broadly in favour of the paper.
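As a small illustration of the CG-based posterior mean computation discussed above, a minimal dense sketch (stand-in matrices with hypothetical sizes; the actual method works with sparse $F$, $S$, $H$ at much larger scale, and a zero-mean prior is assumed here for brevity):

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
N, M = 50, 30                                  # latent dim, observed dim
F = np.eye(N) - 0.1 * rng.normal(size=(N, N))  # stand-in prior map
S = np.eye(N)                                  # stand-in noise scaling
H = np.eye(M, N)                               # selection-style observation
R_inv = (1.0 / 0.01) * np.eye(M)               # R = sigma^2 I, sigma = 0.1
y = rng.normal(size=M)

# Posterior mean solves Omega_plus x = H^T R^{-1} y, with
# Omega_plus = F^T S^T S F + H^T R^{-1} H.
rhs = H.T @ (R_inv @ y)

# CG only needs matrix-vector products with Omega_plus, so the matrix
# never has to be formed explicitly in the real, sparse setting.
def matvec(v):
    return F.T @ (S.T @ (S @ (F @ v))) + H.T @ (R_inv @ (H @ v))

op = LinearOperator((N, N), matvec=matvec, dtype=float)
posterior_mean, info = cg(op, rhs, maxiter=1000)
assert info == 0  # 0 means CG converged
```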
Summary: The paper extends Gaussian Random Fields with precision-based spatiotemporal structure. Strengths: Clarity: The paper is professionally written, with flawless math. The method is technical and dense, and there are many parts to it. It would have been useful to include illustrations, perhaps in the supplements. Originality: The paper has moderate novelty. The idea of using spatial and temporal precisions to model random fields is very apt, but these ideas are also well known in general. Quality: The method is principled and well derived. Weaknesses: The experimental evaluation is insufficient. There is only a single simple advection example, which is effectively a toy case. There are no real-world experiments. Unfortunately, a paper like this really needs a real-world experiment to show that the method has transferable performance. The paper is really good, but I'm leaning towards rejection for this reason alone. The performance is good, but incremental. I'm happy with the neural network variant's performance, but there is something wrong if the advection model (which matches the true system perfectly!) can't fit the simple dynamics, nor improve over the neural network. I wonder if this is a sampling or data scarcity issue. The results are a black box. The purpose of this work is to learn temporal causalities and spatial couplings. Yet neither is shown! The paper needs to illustrate what kind of structures the system has learnt, show whether they are accurate or useful, and demonstrate transferable insights. Technical Quality: 3 good Clarity: 3 good Questions for Authors: None Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No issues Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Official Rebuttal to k13T We would like to thank Reviewer k13T for praising the quality and clarity of our method and for providing constructive feedback. In the following, we respond to each point separately, and refer to our general response to all reviewers whenever there is overlap with other reviewers. ## Responses to Weaknesses > The experimental evaluation is insufficient. There is only a single simple advection example, which is effectively a toy case. There are no real-world experiments. Unfortunately, a paper like this really needs a real-world experiment to show that the method has transferable performance. The paper is really good, but I'm leaning towards rejection for this reason alone. We have added additional experiments on a real-world air quality monitoring dataset, showing that our approach generalizes well to more complex real-world settings. For more details, please see the general response to all reviewers. > The performance is good, but incremental. I'm happy with the neural network variant's performance, but there is something wrong if the advection model (which matches the true system perfectly!) can't fit the simple dynamics, nor improve over the neural network. I wonder if this is a sampling or data scarcity issue. We are happy to report that we resolved this issue. Our updated results (see Table 1 in the provided PDF) now show a significant improvement of the ST-DGMRF models over the baselines, with the advection model with $L_{\text{temporal}}=4$ temporal layers (matching the true transition model) clearly performing best. Nevertheless, please note that the issue was not as severe as it seems: in the original version the table showed results for $L_{\text{temporal}}=1$, which represents a simplification of the true transition model that does not account for longer-range dependencies. > The results are a black box. The purpose of this work is to learn temporal causalities and spatial couplings. Yet neither is shown! The paper needs to illustrate what kind of structures the system has learnt, show whether they are accurate or useful, and demonstrate transferable insights. Please note that we are not claiming to learn causal structures, but rather aim at exploiting prior knowledge about such structures in order to improve inferences about unobserved system states. While in general learning the true (causal) structure of the system will definitely aid in obtaining better state estimates, in practice it may be sufficient to learn a "good enough" approximation. However, we agree with the reviewer that an evaluation of the learned transition and precision matrices is interesting and could provide useful insights into both the model and the system itself. Unfortunately, we haven't had the time to do this, but we are hoping to do so during the discussion period. ## Conclusion We thank reviewer k13T again for taking the time to provide thoughtful and constructive feedback, which has helped us to improve our experimental evaluation significantly. We hope that we have resolved your major concerns with regard to the experimental evaluation by adding additional experiments on a real-world dataset and by improving the results on the simulated dataset such that they now align well with the expected outcome. If these changes are to your satisfaction, we kindly ask you to consider revising your initial rating accordingly. We are more than happy to discuss possible solutions to any remaining issues with you.
--- Rebuttal Comment 1.1: Title: resp Comment: Thanks for the response. Including a new experiment is beneficial for the paper, but I am worried that this is too many changes to the paper during the review period. Similarly, I would prefer the method and results not to change during this time either: the paper should have converged before submission. Finally, my concerns about learned structural insights remain. For these reasons I still vote for rejection, although I am raising my score to 4. --- Reply to Comment 1.1.1: Title: Response to k13T Comment: Thank you for taking the time to carefully consider our changes and engage in a discussion with us. First, we would like to point out that while the results (Table 2 in the original paper, Table 1 in the provided PDF) did change, our proposed method as described in the submitted paper did not. The improved numbers are merely a result of improving the actual implementation (addressing issues like numerical instabilities). Second, we included the real-world experiment and the additional spatiotemporal AR baseline during the rebuttal phase to directly address concerns raised by the reviewers. We would like to point out that NeurIPS facilitates uploading an extra PDF page with figures and tables for this exact purpose. This, in our opinion, indicates that additions and changes that improve the submitted paper are clearly encouraged. In any case, we appreciate that our changes were acknowledged as beneficial for the paper in general. Regarding the structural insights that can be gained using our method, we will get back to you soon to provide details on how we are planning to evaluate the learned structures. We would be glad to engage in further discussion with you about this.
Summary: The authors extend work on deep Gaussian Markov random fields to a spatio-temporal setting; earlier, this has been explored in spatial and graph settings. The authors show how to retain the sparsity structure of the previous work, which is needed for computational feasibility. They also show that various types of temporal structure can be built into this framework. Strengths: The extension to spatio-temporal models is of course very important, as many datasets have both a spatial and a temporal structure. Also, that the methods are computationally feasible is paramount, as the complexity of spatio-temporal models tends to be very large. Each part of the paper is clearly written. Weaknesses: The main weakness for me is the validation of the method; only a single simulated data set is rather weak. If I understand correctly, you are comparing your method to either a pure time series model (AR, MA, ARMA) or a pure spatial model (DGMRF?); this does not seem like a fair comparison. Something simple like an AR process with spatial noise is standard in statistics. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In Figure 2 (right), should $w$ be from 6-10? The text says 6-9, 12. Also, how come the MAE decreases when $w$ increases? Why is the average value of the simulations very negative? In the appendix, the colorbar values range from -35 to -40. Have you added a mean in the other models, so they can compensate for this? You should also be able to get the marginal posterior distributions for your model; how do they look for the simulations? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: . Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Official Rebuttal to LAkB We would like to thank Reviewer LAkB for the positive assessment and good suggestions. In the following, we respond to each point separately, and refer to our general response to all reviewers whenever there is overlap with other reviewers. ## Responses to Weaknesses > The main weakness for me is the validation of the method; only a single simulated data set is rather weak. We have added additional experiments on a real-world air quality monitoring dataset, showing that our approach generalizes well to more complex real-world settings. For more details, please see the general response to all reviewers. > If I understand correctly, you are comparing your method to either a pure time series model (AR, MA, ARMA) or a pure spatial model (DGMRF?); this does not seem like a fair comparison. Something simple like an AR process with spatial noise is standard in statistics. We have added such a process as an additional baseline. For more details, please see the general response to all reviewers. ## Responses to Questions > In Figure 2 (right), should $w$ be from 6-10? The text says 6-9, 12. Well spotted! We have updated the text with the correct values $w$=6-10. > Also, how come the MAE decreases when $w$ increases? As mentioned in lines 346-347, this is an artifact of the different sets of pixels used for evaluation. E.g., if the smallest mask exactly covers the "mode" of the advection-diffusion process (yellow pixels in Figure 2 (left)) and a model estimates all pixels to have some low "base state" (blue pixels), the average error between the true state and the model estimate will decrease as the mask size increases and starts covering more of the surrounding pixels that are close to the "base state". > Why is the average value of the simulations very negative? In the appendix, the colorbar values range from -35 to -40. Have you added a mean in the other models, so they can compensate for this? The simulated data is generated by drawing a sample from the initial state distribution and then simulating forward in time. Even though this initial distribution has zero mean, the initial sample happens to be drawn from the "negative side" of the distribution. We indeed made sure that the overall mean is subtracted before fitting the baseline models. > You should also be able to get the marginal posterior distributions for your model; how do they look for the simulations? We indeed have access to the true marginal posterior of the simulations. As explained in the general response to all reviewers, we have extended our evaluation to include a direct comparison between our marginal posterior estimates and these true marginal posterior distributions. ## Conclusion We thank reviewer LAkB again for taking the time to provide thoughtful feedback and for suggesting a feasible spatiotemporal baseline. We hope that we have answered your questions to your satisfaction and resolved your concerns w.r.t. the evaluation of our method by adding experiments on a real-world dataset. If there are any other changes you would like to see, we are more than happy to discuss them with you. --- Rebuttal Comment 1.1: Comment: Thank you for your response. You have answered my questions.
Summary: This work extends DGMRF to account for temporal dependencies in data via an SSM reformulation. Under the necessary assumptions, the proposed method produces an accurate posterior, performance competitive with DGMRF, and faster inference than the Kalman smoother. Strengths: - The method extends DGMRF to account for temporal dependencies in data. - It has been shown that the method has low computational complexity and good scalability. - The method produces a more accurate posterior compared to DGMRF and other traditional TS models. Weaknesses: - Since the method extends to dynamical systems, DGMRF is a bit of a "strawman". The method should also be compared with other filters/smoothers, e.g., deep KF, ensemble KF, particle smoothing, etc. - Though the posterior was evaluated in the manuscript, the dynamical model ($F$) wasn't. It happens very often that one gets a decent posterior but learns a "bad" $F$ (in terms of forecasting). - Only the posterior mean was evaluated. It is unclear how good the posterior variance is. - The variational posterior should be compared to the true posterior in an example where the latter is accessible analytically or via sampling. - Only one synthetic example is not sufficient for evaluating the method thoroughly. It is unclear how it generalizes. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Were the observation parameters $H$ and $\xi$ trainable or fixed at the true values? - Were the hyperparameters such as state noise and observation noise trainable or fixed at the true values? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The technical limitations were discussed in the last section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Official Rebuttal to 9F96 We would like to thank Reviewer 9F96 for the positive feedback and the good suggestions. In the following, we respond to each point separately, and refer to our general response to all reviewers whenever there is overlap with other reviewers. ## Responses to Weaknesses > Since the method extends to dynamical systems, DGMRF is a bit of a "strawman". The method should also be compared with other filters/smoothers, e.g., deep KF, ensemble KF, particle smoothing, etc. We added a spatiotemporal ST-AR baseline (see the general rebuttal) to address this limitation. Unfortunately, other more complex filter/smoother approaches are more difficult to apply in the setting we are considering. For example, deep Kalman filtering with KVAE [17] requires the definition of a suitable encoder mapping from high-dimensional observations to a latent space. In settings where the pattern of missing observations varies over time, this requires some initial imputation or a smart way to inform the encoder about this pattern, and thus falls beyond the scope of this paper. On the other hand, ensemble or particle filter approaches typically rely on an established dynamics model (as discussed in lines 82-88). While we have such a dynamics model available for the advection-diffusion dataset, applying ensemble or particle filtering would simply converge towards the true posterior distribution, which we have access to, and would therefore, in our opinion, not form a very useful comparison. > Though the posterior was evaluated in the manuscript, the dynamical model ($F$) wasn't. It happens very often that one gets a decent posterior but learns a "bad" $F$ (in terms of forecasting). This is indeed a very interesting point. Unfortunately, we haven't had the time to evaluate the learned transition models, but we are hoping to do so during the discussion period. > Only the posterior mean was evaluated. It is unclear how good the posterior variance is. We do evaluate the posterior variance in terms of the CRPS metric. However, as the CRPS is a combined evaluation of the mean and variance, we now also include direct evaluations of the marginal standard deviations for the advection-diffusion dataset, where the true posterior distribution is available (see Table 1 in the provided PDF). > The variational posterior should be compared to the true posterior in an example where the latter is accessible analytically or via sampling. Thank you for this good suggestion. We will include this comparison in the appendix of the camera-ready paper. To do this, we will perform the same evaluation as is done with the final posterior estimate (see above). > Only one synthetic example is not sufficient for evaluating the method thoroughly. It is unclear how it generalizes. We have added additional experiments on a real-world air quality monitoring dataset, showing that our approach generalizes well to more complex real-world settings. For more details, please see the general response to all reviewers. ## Responses to Questions > Were the observation parameters $H$ and $\xi$ trainable or fixed at the true values? > Were the hyperparameters such as state noise and observation noise trainable or fixed at the true values? Thank you for pointing out that this crucial information is missing. In all experiments, the observation model parameters $\mathbf{H}$ and $\mathbf{R}$ ($\boldsymbol{\xi}$ is the random variable, not a parameter) are fixed.
In particular, $\mathbf{H}$ is a selection matrix defined according to the training/validation/test masks used in each experiment, and the observation noise is defined as $\mathbf{R}=\sigma^2\mathbf{I}$. For the advection-diffusion experiments, $\sigma$ matches the value used to generate the data, while for the newly added air quality dataset it is set to $\sigma=0.01$. We will add this information to the description of the experimental setup. ## Conclusion We thank reviewer 9F96 again for taking the time to provide thoughtful feedback and for pointing out some missing information about the experimental setup. We hope that our responses and the actions taken w.r.t. the evaluation of posterior variances and additional experiments on a real-world dataset are satisfactory. If there are any other changes you would like to see, we are more than happy to discuss them with you. --- Rebuttal Comment 1.1: Comment: Thank the authors for the response. > We added a spatiotemporal ST-AR baseline (see the general rebuttal) to address this limitation. Unfortunately, other more complex filter/smoother approaches are more difficult to apply in the setting we are considering. For example, deep Kalman filtering with KVAE [17] requires the definition of a suitable encoder mapping from high-dimensional observations to a latent space. In settings where the pattern of missing observations varies over time, this requires some initial imputation or a smart way to inform the encoder about this pattern, and thus falls beyond the scope of this paper. On the other hand, ensemble or particle filter approaches typically rely on an established dynamics model (as discussed in lines 82-88). While we have such a dynamics model available for the advection-diffusion dataset, applying ensemble or particle filtering would simply converge towards the true posterior distribution, which we have access to, and would therefore, in our opinion, not form a very useful comparison. - It could serve as the example for comparing with the true posterior. - Missing observations are not a must for every example, are they? - There are variational methods learning the dynamics, e.g., [Frigola 2014](https://papers.nips.cc/paper_files/paper/2014/hash/139f0874f2ded2e41b0393c4ac5644f7-Abstract.html) and [Naesseth 2017](https://arxiv.org/abs/1705.11140). A dual Kalman filter could also learn the dynamical model. Fig. 1, bottom: the three methods perfectly overlap for the input data? --- Reply to Comment 1.1.1: Comment: >- It could serve as the example for comparing with the true posterior. To do this, we have now implemented the Ensemble Kalman Smoother with an advection-diffusion transition model matching the data-generating process. We used $10^4$ ensemble members (the maximum feasible on our machine). Fixing the velocity and diffusion parameters to the true values, we obtain the following results: $MAE_{\mu}$ = 0.0512$\tiny{\pm 0.0019}$, $RMSE_{\mu}$ = 0.0654$\tiny{\pm 0.0028}$, $MAE_{\sigma}$ = 0.0041$\tiny{\pm 0.0000}$, $RMSE_{\sigma}$ = 0.0045$\tiny{\pm 0.0000}$, $CRPS$ = 0.1025$\tiny{\pm 0.0029}$ As expected, the estimated posterior is very close to the true posterior. For a more appropriate comparison with our approach, we used a state augmentation approach to estimate the velocity and diffusion parameters jointly with the system states. In contrast to the ST-DGMRF approach, we consider the initial and transition noise parameters to be fixed, in order to avoid divergence of the EnKS.
This yields the following results: $MAE_{\mu}$ = 0.1249$\tiny{\pm 0.1423}$, $RMSE_{\mu}$ = 0.1925$\tiny{\pm 0.2504}$, $MAE_{\sigma}$ = 0.0046$\tiny{\pm 0.0011}$, $RMSE_{\sigma}$ = 0.0061$\tiny{\pm 0.0031}$, $CRPS$ = 0.1624$\tiny{\pm 0.1226}$ While $MAE_{\sigma}$ and $RMSE_{\sigma}$ are lower than with our approach, $MAE_{\mu}$ and $RMSE_{\mu}$ are slightly higher than with the ST-DGMRF model with $L_{\text{temporal}}=4$. Note that $MAE_{\sigma}$ and $RMSE_{\sigma}$ are expected to increase as well when considering the noise parameters unknown. Reducing the ensemble size by half increases $MAE_{\mu}$ and $RMSE_{\mu}$ further, to 0.1647$\tiny{\pm 0.1274}$ and 0.2244$\tiny{\pm 0.1765}$ respectively. >- Missing observations are not a must for every example, are they? You are right, missing observations are not a must. However, settings with partially observed system states are a focus of our paper and one of the main motivations for developing our approach (see lines 15-25), which is why we designed our experiments accordingly. >- There are variational methods learning the dynamics, e.g., [Frigola 2014](https://papers.nips.cc/paper_files/paper/2014/hash/139f0874f2ded2e41b0393c4ac5644f7-Abstract.html) and [Naesseth 2017](https://arxiv.org/abs/1705.11140). Thank you for sharing these very interesting papers with us. Using Gaussian processes in combination with variational inference, as done in Frigola 2014, is indeed a very promising approach. However, their application seems (so far) limited to small systems (e.g., one variable and its derivative). Applying such an approach to large systems with interacting components would require designing a suitable GP transition model with both high-dimensional input and high-dimensional output. This seems beyond the scope of this paper. > Fig. 1, bottom: the three methods perfectly overlap for the input data? Yes, this is expected, because the input data points $\mathbf{y}$ are given to all these methods at inference time. Unless the observation noise is set very high, the estimated posterior $p(\mathbf{x}\mid\mathbf{y})$ will thus be very close to the observed values at these points, for all considered methods. If you have any further questions, we would be happy to answer them.
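For readers less familiar with the ensemble approach used in the comparison above, here is a minimal sketch of one perturbed-observation EnKF analysis step (generic textbook form with hypothetical dimensions, not the exact configuration used in the reported experiments):

```python
import numpy as np

def enkf_analysis(ensemble, H, y, sigma, rng):
    """One perturbed-observation EnKF analysis step.
    ensemble: (m, n_state); H: (n_obs, n_state); y: (n_obs,)."""
    m = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)     # state anomalies
    Y = X @ H.T                              # predicted-observation anomalies
    P_yy = Y.T @ Y / (m - 1) + sigma**2 * np.eye(H.shape[0])
    P_xy = X.T @ Y / (m - 1)
    K = np.linalg.solve(P_yy, P_xy.T).T      # Kalman gain, P_xy P_yy^{-1}
    # Each member assimilates its own perturbed copy of the observation.
    y_pert = y + sigma * rng.normal(size=(m, H.shape[0]))
    return ensemble + (y_pert - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
ens = rng.normal(size=(1000, 4))             # 1000 members, 4 state variables
H = np.eye(2, 4)                             # observe the first two states
updated = enkf_analysis(ens, H, np.array([1.0, -0.5]), sigma=0.1, rng=rng)
print(updated.mean(axis=0))                  # posterior ensemble mean
```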
Rebuttal 1: Rebuttal: # General rebuttal to all reviewers First of all, we would like to thank all the reviewers for their thoughtful and constructive feedback. We are delighted to hear that our paper is "clear" (LAkB), "easy to read" (Upf6) and "professionally written" (k13T), and that the reviewers found our proposed method to be "well explained" (Upf6), "well motivated" (Upf6), "principled and well derived" (k13T) and "quite a good idea" (Upf6). Concerns were raised mainly about the experimental evaluation, which was limited to a single simulated dataset and to relatively simple (purely temporal and purely spatial) baseline models. Here, we provide a short summary of the major actions we have taken to address these concerns. 1. We agree with the reviewers that it is important to show how our method generalizes to real-world settings. Therefore, we performed additional experiments on real air quality sensor data. 2. We acknowledge the lack of a suitable spatiotemporal baseline in our evaluation. To address this limitation, we have included an AR process with unconstrained spatial noise in the transition model (as suggested by reviewer LAkB) as an additional baseline. We are happy to report that our method improves on the performance of this spatiotemporal baseline. 3. Reviewers pointed out that while the true posterior is available for the simulated advection-diffusion dataset, we did not actually use it in the performance evaluation. This is a good point, and we have adjusted Table 2 accordingly. 4. Reviewer k13T correctly pointed out that the performance gain reported in the submitted paper is good but incremental. We are happy to report that we were able to improve the performance of all ST-DGMRF variants by a significant amount (see Table 1). An important ingredient was to replace the standard CG method by a regularized version [60], which allows for more stable posterior estimation, in particular for ST-DGMRFs with a large number of temporal layers. In the following, we provide detailed information on each of these points. Minor changes and clarifications are addressed in the responses to the individual reviewers. ## Additional experiments We performed additional experiments on real air quality sensor data obtained from [58]. The dataset contains hourly PM2.5 measurements taken at 246 sensors distributed around Beijing. We considered a time period of T=400 hours in March 2015 and extracted relevant weather covariates from the ERA5 reanalysis [59]. The base graph is defined based on the Delaunay triangulation of the sensor locations. ### Experimental setup To define our test set, we randomly draw 10 time points $t_k$ and mask out all measurements within a spatial block (containing 50% of all sensors) for time steps $t_k, …, t_k+20$, mimicking partial network failures. Since the transport of pollutants is a complex process involving many different temporal scales, we hypothesize that increasing both the Markov order $p$ and the number of temporal layers will improve the estimated posterior. To test this, and to validate that ST-DGMRF is indeed able to capture such higher-order dependencies, we perform an ablation study where we vary $L_{\text{temporal}}$ from 1 to 4, for Markov order $p=1$ and $p=2$ respectively. We consider two ST-DGMRF variants: one with simple "diffusion" layers, and one neural network variant taking edge features and weather covariates as inputs.
### Main results (see Table 2 and Figure 1) - We find that ST-DGMRF estimates unobserved system states more accurately than all considered baseline models. - As expected, increasing $p$ from 1 to 2 results in more accurate posterior estimates, for both variants. - Similarly, increasing $L_{\text{temporal}}$ results in continuous improvement of the posterior estimates, for both variants. ## Additional baseline models The spatiotemporal AR model with unconstrained spatial noise (ST-AR) takes the form $x_k = \alpha\cdot x_{k-1} + \epsilon_k$, where $x_0 \sim \mathcal{N}(\mu_0, \Sigma_0)$ and $\epsilon_k \sim \mathcal{N}(0, \mathbf{Q}^{-1})$ (an illustrative sketch of this model follows after this response). We fix $\Sigma_0 = 10\cdot I$ to encode high uncertainty about the initial state $x_0$, and fit $\alpha$, $\mu_0$ and $\mathbf{Q}^{-1}$ to data using closed-form EM updates. We initialize the EM algorithm with $\alpha=1$, $\mu_0=\mathbf{0}$ and $\mathbf{Q}^{-1}=diag(\mathbf{q})$, where the elements $\mathbf{q}_i \in [3, 4]$ are initialized randomly. After convergence of the EM algorithm, the final state estimates are obtained with the Kalman smoother. ## Additional evaluations For the advection-diffusion dataset, for which the true posterior distribution is available, we now evaluate the estimated posterior mean and std w.r.t. their ground truth. That is, we replaced the MAE and RMSE computed based on the system state (Table 2 in the submitted paper) with $MAE_{\mu}$, $RMSE_{\mu}$, $MAE_{\sigma}$ and $RMSE_{\sigma}$ (see Table 1). ## General improvements We replaced the standard CG method with a regularized CG variant [60], resulting in more stable final posterior estimation. This allowed us to improve the performance of ST-DGMRF variants with large numbers of temporal layers significantly (see Table 1). Note that in contrast to Table 2 in the submitted paper, which showed results for $L_{\text{temporal}}=1$, we now show results for $L_{\text{temporal}}=2$ and $L_{\text{temporal}}=4$. As expected, the ST-DGMRF with advection-diffusion dynamics and $L_{\text{temporal}}=4$ performs best, as it has exactly the same form as the true transition model. ## References [58] Y. Zheng et al. Forecasting fine-grained air quality based on big data. In Proceedings of the 21th SIGKDD Conference on Knowledge Discovery and Data Mining. 2015. [59] H. Hersbach et al. The ERA5 global reanalysis. Quarterly Journal of the Royal Meteorological Society. 2020. [60] Z.-Z. Bai and S.-L. Zhang. A regularized conjugate gradient method for symmetric positive definite system of linear equations. Journal of Computational Mathematics. 2002. Pdf: /pdf/4289591ff289890f73809bb62f4402b059494e35.pdf
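To make the ST-AR baseline above concrete, a minimal simulation sketch; the closed-form $\alpha$ update shown uses fully observed states for brevity, whereas the actual EM step would use smoothed posterior moments from the Kalman smoother:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, alpha_true = 20, 200, 0.9
q = rng.uniform(3.0, 4.0, size=n)      # Q^{-1} = diag(q), as described above

# Simulate x_k = alpha * x_{k-1} + eps_k with eps_k ~ N(0, diag(q)).
x = np.zeros((T, n))
x[0] = np.sqrt(10.0) * rng.normal(size=n)    # x_0 ~ N(0, 10 I)
for k in range(1, T):
    x[k] = alpha_true * x[k - 1] + np.sqrt(q) * rng.normal(size=n)

# Precision-weighted closed-form alpha update (M-step shape):
#   alpha = sum_k x_{k-1}^T Q x_k / sum_k x_{k-1}^T Q x_{k-1}
w = 1.0 / q                                  # diagonal of the precision Q
num = sum((x[k] * w) @ x[k - 1] for k in range(1, T))
den = sum((x[k - 1] * w) @ x[k - 1] for k in range(1, T))
print(round(num / den, 3))                   # close to alpha_true = 0.9
```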
Finding Order in Chaos: A Novel Data Augmentation Method for Time Series in Contrastive Learning
Accept (poster)
Summary: This paper presents a new time series augmentation method which applies Mixup to the amplitude and phase separately, after transforming the time series into the frequency domain, thus avoiding the destructive extrapolation caused by linear Mixup. Strengths: 1. The authors propose a novel mixup method in the frequency domain for time series, avoiding the destructive extrapolation resulting from linear Mixup. 2. The theoretical proof is complete and has some reference value. Weaknesses: 1. It is unclear whether the method works when considering amplitude or phase alone for Mixup. The relevant ablation experiments must be performed consistently with the parameter settings of the original method. 2. Lack of convincing examples. For example, to illustrate the problem, perform Mixup on two samples whose frequency-domain information strongly correlated with $I(x,y)$ exactly cancels out, and show whether the newly generated samples retain identifiable frequency-domain information. 3. The paper's content does not explain how to "Find Order in Chaos", as mentioned in the title, or how the "chaos" is reflected. I am also confused about how the so-called "control of the degree of chaos" proposed by the authors is achieved. 4. Assumption 2.1 is too idealistic. The label information of a time series is related not only to the time domain but also to the frequency domain, and focusing on frequency-domain information alone does not achieve good classification. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. The ablation experiments considering amplitude or phase alone for mixup need to be performed consistently with the parameter settings of the original method. 2. Please give practical examples of how Mixup may corrupt the frequency-domain information used to discriminate the generated samples. 3. It is not clear how the so-called "control of the degree of chaos" proposed by the authors is achieved. 4. For data augmentation of time series, why does this paper choose to perform augmentation in the frequency domain instead of the time domain? In general, for time series classification tasks it is difficult to use only frequency-domain information rather than the time domain [1]. In contrast, the combination of time- and frequency-domain information can effectively improve the model's classification performance [2]. 5. What are the advantages of performing data augmentation in the frequency domain instead of the time domain of time series data? 6. Why are eight datasets chosen for the experiments in this paper? This is inconsistent with existing benchmark datasets for time series prediction [3,4,5], classification [6,7] and anomaly detection [8] tasks. In addition, the comparison methods in this paper do not include benchmark methods from the time series domain. For example, benchmark methods for time series prediction include Informer [3], CoST [4], and FEDformer [5], etc.; time series classification methods include OS-CNN [6] and DSN [7], etc.; and time series anomaly detection includes Anomaly Transformer [8], etc. [1] Cross reconstruction transformer for self-supervised time series representation learning. arXiv, 2022. [2] Self-Supervised Contrastive Pre-Training for Time Series via Time-Frequency Consistency. NeurIPS, 2022. [3] Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. AAAI, 2021. [4] CoST: Contrastive Learning of Disentangled Seasonal-Trend Representations for Time Series Forecasting.
ICLR, 2022. [5] FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting. ICML, 2022. [6] Omni-Scale CNNs: a simple and effective kernel size configuration for time series classification. ICLR, 2022. [7] Dynamic Sparse Network for Time Series Classification: Learning What to "See". NeurIPS, 2022. [8] Anomaly Transformer: Time Series Anomaly Detection with Association Discrepancy. ICLR, 2022. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors did not mention the limitations. I do not have any comments on this point. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review. We appreciate the feedback and the recognition of the importance of our work. We reply to each concern as follows. 1) Regarding the ablation experiments focusing on amplitude or phase alone for the mixup: we'd like to clarify that we conducted a parameter search to determine the optimal settings for all mixup methods. This approach is common general practice to ensure the best performance of the baselines in different domains. However, it's important to note that the original mixup method was initially designed for images, which have a distinct distribution compared to time series. As such, directly employing it for comparison would be inequitable. Even so, the parameters for the mixup methods are very close to the original ones in the literature, within a range of 0.8 to 0.9. 2) Linear mixup has the potential to directly distort information across both the time and frequency domains, rather than being limited solely to the frequency domain, as demonstrated by the mathematical proof we provided. As a practical example: assume you have two signals with the same frequency (PPG or ECG with the same heart rate); if the phase difference between these two signals is more than $\pi/2$, there will be destructive interference, and the magnitude of the waves will decrease depending on their phase difference and magnitudes. We believe it is important to mention that mixup is nothing more than adding two signals together with different ratios. If you apply this to quasi-periodic signals, the waves can destroy each other. This is quite common in nature as well, from ocean waves to light, where the most famous example is the double-slit experiment [1-2], which is considered evidence for the probabilistic nature of quantum mechanics. [1] Young, Thomas (1804). "The Bakerian lecture. Experiments and calculation relative to physical optics". Philosophical Transactions of the Royal Society of London. [2] Kipnis, Naum S. (1991). History of the Principle of Interference of Light. Springer. The empirical evidence of this destructive mixup can also be seen in the question from reviewer JK7u, "Why does the linear mixup method perform well in the activity recognition task?" The other two tasks involve strong periodicity, as they originate from the human cardiovascular system, causing linear mixup to perform much worse on them than on the less periodic activity recognition data. 3) The recent paper "Chaos is a ladder" [3] ([18] in the manuscript) showed that data augmentations create the chaos, and contrastive learning climbs that ladder. They also showed that a good data augmentation should create samples that are similar to intra-class samples: "For example, two different cars become very similar when they are both cropped to the wheels". In this work, we proposed to control the augmentation degree by controlling the mixup coefficients while looking at the semantic similarity of the samples in the latent space in an unsupervised way. For example, if two samples are close to each other in the latent space, the coefficients are more aggressive, to make them closer. Although we know this approach has some limitations, we discuss its performance improvement and limitations in Section 5.1 (Ablation studies). [3] Yifei Wang et al. Chaos is a ladder: A new theoretical understanding of contrastive learning via augmentation overlap. ICLR 2022. 4-5) We think these two questions align on the same point, and there is a misconception we would like to clarify.
In this paper, we have introduced a novel approach that overcomes the limitations of mixup by shifting the mixing process from the time domain to the frequency domain. We did this to eliminate destructive mixup, which is caused by phase differences between components of the same frequency; a wave's magnitude and phase had to be treated as separate pieces of information, which motivated the shift to the frequency domain. Our augmentation method nevertheless has a direct correspondence in the time domain. We believe neither the time nor the frequency domain has an inherent advantage for data augmentation in contrastive learning: it is simply a matter of keeping the task-related information while increasing the diversity of samples, as shown by InfoMin [53, in the manuscript], a seminal work on augmentations for contrastive learning. 6) The selection of 8 datasets for the experiments was driven by the objective of our study, which is to investigate mixup with quasi-periodic signals and their related applications. While benchmark methods like Informer, CoST, and FEDformer are valuable in their respective domains, they are not used as benchmarks on the datasets we evaluated. If you look at previous works using the same datasets, you can see we used the common benchmarks [4-7]. [4] Hangwei Qian et al. Latent independent excitation for generalizable sensor-based cross-person activity recognition. AAAI 2021. [5] Garrett Wilson et al. Multi-source deep domain adaptation with weak supervision for time-series sensor data. ACM SIGKDD 2020. [6] Dwaipayan Biswas et al. CorNET: Deep learning framework for PPG-based heart rate estimation and biometric identification in the ambulant environment. IEEE Transactions on Biomedical Circuits and Systems, 2019. [7] Francisco Javier Ordóñez et al. Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition. Sensors, 2016. If we used the models you suggested for these tasks, a similar question, "Why did you use different models for these datasets?", could be asked again. Regarding Assumption 2.1, the intention behind it is not to claim that label information is exclusively encoded within the time or frequency domain. Rather, it asserts that the signal-to-noise ratio contains informative characteristics of a wave. Again, we thank the reviewer for this careful review and appreciation of our work. We hope these answers have clarified any questions you may have had. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Most of my questions have been resolved. However, I still have some concerns. The examples provided by the authors mainly focus on evident frequency features, such as electrocardiogram recordings. Nevertheless, real-world time series also encompass situations where frequency features are not as apparent, e.g., AllGestureWiimoteX, MelbournePedestrian, and GestureMidAirD3 in the UCR dataset. --- Reply to Comment 1.1.1: Title: Thanks for your response! Comment: We thank the reviewer for the response, but we want to highlight some points. In this paper, we identified a problem with mixup for quasi-periodic time-series data through both theoretical and empirical means. We then proposed a method that solves this problem and presented results on 8 datasets against 14 baselines (previous data augmentation methods and several mixup techniques), where our method outperforms the baselines on 7 datasets and ranks second on the remaining one. We provided examples where the data is mainly quasi-periodic (activity, cardiovascular, etc.)
as we solve a problem directly related to that. There are hundreds of time-series datasets with different characteristics; we hope the reviewers will recognize this and appreciate the contribution to the field of quasi-periodic signals. We also believe that the time series on which we demonstrate our method, such as electrocardiograms, are of great importance to the fields of ML and health, which makes the contribution of this work valuable. Thank you once again for joining the discussion section.
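The destructive-interference argument and the frequency-domain fix described in the thread above can be illustrated numerically. The following is a minimal NumPy sketch, not the authors' implementation: the signals, the 0.9π phase offset, and the β values are illustrative, and the paper's actual method additionally selects coefficients from latent-space similarity.

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
f0 = 5.0                                        # shared frequency component
x1 = np.sin(2 * np.pi * f0 * t)                 # first signal
x2 = np.sin(2 * np.pi * f0 * t + 0.9 * np.pi)   # same frequency, phase offset > pi/2

linear_mix = 0.5 * x1 + 0.5 * x2                # time-domain (vanilla) mixup

# Frequency-domain mixup: mix magnitudes and phases separately.
X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
beta_mag, beta_phase = 0.8, 0.9                 # illustrative coefficients
mag = beta_mag * np.abs(X1) + (1 - beta_mag) * np.abs(X2)
phase = beta_phase * np.angle(X1) + (1 - beta_phase) * np.angle(X2)
freq_mix = np.fft.irfft(mag * np.exp(1j * phase), n=len(t))

print("peak amplitude, linear mixup:   ", np.abs(linear_mix).max())  # ~0.16, destructive
print("peak amplitude, frequency mixup:", np.abs(freq_mix).max())    # ~1.0, preserved
```

With a 0.9π phase offset, the linearly mixed wave collapses to a peak of about cos(0.45π) ≈ 0.16, while the frequency-domain mix preserves a peak near 1, which is the effect the rebuttal describes.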
Summary: This paper proposes a novel mixup method for non-stationary time-series data to generate positive samples for the contrastive learning formulation. The proposed method mixes the magnitude and phase of each frequency component of two samples that are close in the latent space of a variational autoencoder. The authors prove that the mixup process in the frequency domain does not cause a loss of information while it generates diverse samples. The paper provides an extensive empirical evaluation showing that the proposed method learns better representations than existing contrastive learning baselines on eight time series datasets from three tasks. Strengths: The paper studies an important and under-explored problem. Although contrastive learning is proven to work well on images, its performance on time series is still limited due to the lack of good time series augmentations. While much existing work tries to find good time series augmentations through extensive empirical studies, this paper proposes a theoretically well-grounded mix-up method for time series augmentation. Moreover, the empirical evaluation is also extensive and convincing. Weaknesses: 1. The hyperlinks for the citations and references are missing. 2. I don't find where $x^*$ is defined. 3. It would be great if there were an algorithm box explaining the whole process of drawing samples, computing the degree of the augmentation, performing the mixup, and then training. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. How do you select $\beta$? Would a very large (close to 1) $\beta$ limit the diversity of the augmented samples? 2. What does the instance selection (line 268) mean exactly? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review of our work. We appreciate the feedback and are glad to hear that our work has been received positively. Regarding the weaknesses, thank you for bringing the hyperlinks to our attention; we will fix this issue in the revised version of the manuscript. Your feedback is greatly appreciated. We appreciate your observation regarding the absence of a definition for $x^*$. In prior works, namely DACL and GenRep, this term denoted the optimal generated or augmented sample, and this contextual reference guided our usage of the term before formally defining it. However, we understand the importance of clarity and will provide a precise definition of $x^*$ in the revised manuscript. We thank you for your valuable suggestion regarding an algorithm box. We had the same thought during the submission process but hesitated to add another figure to the manuscript given the limited space. However, we appreciate your input and will incorporate a new figure providing a clear and comprehensive overview of the process in the revised manuscript or the Appendix. Thank you for your thoughtful suggestion. Regarding the questions, we reply to each of them below. Q1) In previous mixup methods, $\beta$ is normally chosen to have high values such as 0.8 or 0.9. The major drawback is that as $\beta$ gets closer to 1, the mixed samples get closer to the anchor sample and their diversity decreases significantly, as you expected. Therefore, we followed a procedure similar to previous works and performed a grid search over values from 0.7 to 0.9, choosing the best values. We also observed that the $\beta$ values are quite flexible for the magnitude mixing, while for the phase they should be closer to 0.8-0.9. We believe this can be attributed to the influence of phase values on the semantic characteristics of the signal: altering the phase exerts a potent impact on the inherent features conveyed by the signals. Q2) We thank the reviewer for this careful reading. It is our mistake: it should read "mixup coefficient selection according to phase and magnitude" rather than "instance selection," as we indeed choose mixup coefficients rather than instances. By addressing your questions, we hope to provide clarity and resolve any uncertainties you may have. We genuinely appreciate your thorough review and the insights you've shared. Thank you once again for your careful attention to detail during the review process and appreciation of our work. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns. I maintain my initial recommendation to support the acceptance of the paper. The paper presents a novel time series augmentation approach with applicability across multiple popular contrastive learning frameworks. While the experiments do not encompass some benchmark datasets, the evaluation results across eight datasets unequivocally demonstrate the effectiveness of the proposed method. --- Reply to Comment 1.1.1: Title: Thanks! Comment: We are glad to hear our contribution to the field is appreciated. Thank you once again for your time and appreciation of our work.
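To make the coefficient-selection idea in Q1/Q2 above concrete, here is a hedged sketch of one way latent similarity could drive the mixup coefficient. The function name, the cosine-similarity measure, and the mapping into the [0.7, 0.9] grid-search range mentioned above are our illustrative assumptions; the paper's exact selection rule may differ.

```python
import numpy as np

def select_mixup_coeff(z_anchor, z_pair, beta_lo=0.7, beta_hi=0.9):
    """Map latent similarity to a mixup coefficient (illustrative heuristic).

    Pairs that are close in the latent space get a more aggressive mix
    (beta further from 1); distant pairs are mixed conservatively.
    """
    cos = np.dot(z_anchor, z_pair) / (
        np.linalg.norm(z_anchor) * np.linalg.norm(z_pair) + 1e-8)
    sim = 0.5 * (cos + 1.0)                 # rescale from [-1, 1] to [0, 1]
    return beta_hi - sim * (beta_hi - beta_lo)

rng = np.random.default_rng(0)
z_a, z_b = rng.normal(size=16), rng.normal(size=16)   # toy VAE latents
print(select_mixup_coeff(z_a, z_b))   # use as the magnitude/phase mixing weight
```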
Summary: This paper is about a novel data augmentation method that can be used in contrastive learning for time-series tasks; it aims to connect intra-class samples and find order in the latent space. The proposed method builds upon the mixup data augmentation technique by controlling the degree of chaos created by data augmentation. More specifically, the augmentation method treats the phase and the amplitude information as two separate features and then generates positive samples by controlling the mixup coefficients for each feature for each randomly chosen pair. This process helps generate features that enhance intra-class similarity and help contrastive learning learn class-separated representations. The proposed method is evaluated on three time-series tasks: heart rate estimation, human activity recognition, and cardiovascular disease detection, against state-of-the-art comparison methods. Strengths: - The paper is about an interesting topic, contrastive learning for time series. - The proposed methodology is based on the idea of controlling the degree of chaos created by data augmentation methods. These ideas can be applied in the augmentation part of contrastive learning for other domains as well. - The proposed methodology shows improvement in performance over ten comparison methods on three tasks and eight datasets. Some time-series contrastive learning methods are missing from the comparison, though. Weaknesses: - The writing in the paper can be improved. The Method and the Results and Discussion sections need some organization; adding subsections would help with the structure. - The proposed method has incremental novelty. It builds on top of known components. The writing needs improvement to make the contributions of this paper clearer. - The experimental setup also seems to be missing some important state of the art. No time-series-specific contrastive learning method is included in the experiments for comparison. - The proposed methodology seems to have similarities with TS-TCC [80] and TFC [22] (the reference numbers align with the ones in the main paper), and since these are state of the art in contrastive learning for time series, they should be part of the comparison methods. - A comparison of the methodology of the proposed method with [22] and [80] is needed, as these papers revolve around the same ideas. What is different in the contributions of the proposed method? - Dataset statistics are missing from the Results and Discussion section. Adding them in a table would help with the understanding of the results. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: My main concerns regard the readability and structure of the paper, and the experimental setup, which is missing time-series contrastive learning models. The results and the discussion would be more convincing, and the performance difference more important, if the authors included the results of some other methods, like [22] and [80]. A comparison of the methodology and the concepts of the proposed method with [22] and [80] is needed, as these papers revolve around the same ideas. What is different in the contributions of the proposed method? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: There are no limitations or negative societal impact from this work. The datasets used in the paper are all publicly available. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your comments and would like to clarify some misunderstandings and points arising from your review of our paper. Regarding your claim of incremental novelty, it is important to note that no prior study has specifically addressed the destructive effects of linear mixup when applied to periodic time series data, nor has any previous work proposed a corrective methodology in this context. While linear mixup has demonstrated efficacy in image-related domains, its applicability to time series analysis remains constrained due to the inherent periodic nature of time series data. Our research also highlights that increased signal periodicity leads to a significant decline in mixup performance. Given this distinct gap in the existing literature, we argue that our work should not be characterized as incremental in nature. We believe that our work contributes several insights, including those of a technical nature. In summary, to the best of our knowledge, our work is the first to * present the destructive behavior of linear mixup for quasi-periodic time-series data both empirically and theoretically, with mathematical proofs. * propose a novel mixup approach for time-series data that prevents the information loss of prior mixup works. * take a novel approach to sampling mixup coefficients for each pair based on their similarity in the latent space, which is constructed without supervision while learning disentangled representations, to prevent aggressive augmentation between inter-class samples. * demonstrate that our approach significantly outperforms well-known mixup methods and state-of-the-art data augmentation techniques on 8 datasets, comparing against 14 baselines. Although we will more strongly emphasize our contributions in the revised version of the paper, we kindly wish to draw the reviewer's focus to the non-incremental nature of the paper's contributions. Regarding your concern about [80] and [22], these papers proposed unsupervised time-series representation learning frameworks. Therefore, they are analogous to SimCLR and BYOL, not to our presented method. In this paper, we propose a data augmentation strategy to improve the performance of unsupervised contrastive learning frameworks by using a mixup method tailored to quasi-periodic signals. Mixup is well appreciated in the vision community as an easy and effective data augmentation strategy, and many works have proposed variations of mixup such as cutmix, binarmix, and geomix. In this paper, we showed that the original mixup and its variations do not perform well on quasi-periodic signals, proposed a tailored, theoretically proven mixup method, and showed the improvement. To explain further, our proposed method can be used as a data augmentation technique with any unsupervised contrastive framework, such as SimCLR, BYOL, NNCLR, MoCo, or [80] and [22]. Therefore, in Appendix E, we showed how our proposed data augmentation increases the performance of BYOL and SimCLR compared to traditional time-series augmentations such as jittering. However, you cannot use [80] and [22] together with SimCLR or BYOL; they constitute replacements for those self-supervised learning methods. Therefore, we think our experimental setup, with the additional experiments, has no missing parts and clearly indicates the contribution of the paper.
Moreover, [22] proposed a pre-training strategy based on time-frequency consistency (TF-C), embedding a time-based neighborhood of an example close to its frequency-based neighborhood, which they mainly used for domain adaptation. In our paper, by contrast, we showed the destructive behavior of mixup for quasi-periodic signals and fixed this drawback to increase performance in unsupervised contrastive learning. Similarly, in [80] the authors propose a framework for learning time-series representations from unlabeled data. Their framework requires weak and strong data augmentations, implemented as jitter-and-scale (weak) and permutation-and-jitter (strong). We compared both of those augmentations with ours in two different contrastive learning frameworks and showed the improvement. Responding to your question, we have also introduced another contrastive framework explicitly tailored to time series data. In the submitted pdf, we demonstrate the performance enhancement achieved by integrating our method with this additional framework. Therefore, we can confidently say that our proposed method exhibits substantial distinctions from [80] and [22] in terms of its contributions. Contrary to your claim, our work is not a time-series contrastive learning method that should be compared with other frameworks. Furthermore, it is worth noting that we have conducted comparisons with all previously published data augmentation methods known to us on 8 datasets. Regarding the dataset statistics, we already mention some statistics about the datasets in Appendix B, and we will certainly address this suggestion by incorporating the dataset statistics in a table. We apologize for any confusion caused by the misinterpretation. We believe that these clarifications, together with the additional results, will help address any misconceptions and make the contribution of our paper clear to the field. Thank you for your time and consideration again. --- Rebuttal Comment 1.1: Comment: Thank you for your response. After reading the other reviews and the rebuttal responses, I understand that the focus of this work is the data augmentation aspect and not a new contrastive learning model (and that was my understanding from the beginning), but I believe that in order to evaluate the proposed data augmentation strategy it is crucial to try it with the other state-of-the-art contrastive learning models. It would be very useful for readers of this paper to have an idea of what works best with this augmentation strategy and what does not. Also, since the focus of this work is the data augmentation strategy, it would be interesting to check how it works across supervised, self-supervised, and unsupervised learning. Finally, one of the main reasons we are discussing the focus of this work (data augmentation strategy vs. contrastive learning model) is the way the authors present their findings. Instead of using the names of the contrastive learning models, they could use the same contrastive learning model and only change the augmentation strategies. Then they could present, for example, the comparison of the methods not by the names of the contrastive learning models but by the names of the data augmentation strategies.
They could also add a table of the state-of-the-art data augmentation strategies and the corresponding models in which they have been used (i.e., a table with two columns: the model's name and a description of the augmentation strategy). Overall, I do believe that this paper would make a stronger impact if comparisons of both the data augmentation strategies and the contrastive learning models were included, as this is one of the main mechanisms by which researchers use data augmentations to learn time-series representations. As a result, I will keep my initial score, as I am not fully convinced about the current state of the experimental setup.
Summary: This paper explores data augmentation for contrastive learning that better aligns with the nature of time series. Specifically, the authors tailor the mix-up method separately to amplitude and phase in the frequency domain. A theoretical analysis illustrates how the method enhances task-specific information, contrasting it with the linear mix-up method, which potentially discards task-specific information. Experiments on time series from three domains also demonstrate the advantages compared with previous mix-up and contrastive learning methods. Strengths: 1. This paper studies an important problem: designing augmentations for contrastive learning that adapt to the non-stationarity of time series. 2. The paper presents a theoretical analysis of the benefits of the proposed mix-up method compared with the vanilla linear mix-up method. 3. The paper shows better performance on time-series datasets from multiple domains. Weaknesses: **Lack of baseline comparisons**: The paper compares with a list of mix-up methods and contrastive learning methods, and the authors also incorporate time-series augmentation methods into the framework in the appendix. However, it would also be interesting to have (1) some direct comparisons with existing time-series augmentation methods, e.g., [1, 2] (both have mix-up augmentation in the time or frequency domain), and (2) contrastive learning methods dedicated to the time-series domain [3, 4, 5, 6] (I am curious why [4] is not one of the baselines in the experiments?). [1] FrAug: Frequency Domain Augmentation for Time Series Forecasting [2] Towards Diverse and Coherent Augmentation for Time-Series Forecasting [3] CoST: Contrastive Learning of Disentangled Seasonal-Trend Representations for Time Series Forecasting [4] Self-supervised contrastive pre-training for time series via time-frequency consistency [5] Unsupervised Representation Learning for Time Series with Temporal Neighborhood Coding [6] TS2Vec: Towards Universal Representation of Time Series **Experimental analysis**: Why does the linear mix-up method perform well in the activity recognition task? Is there any difference in the setting of activity recognition compared with the other two tasks? **Writing**: 1. Consider defining the notations x, \tilde{x}, x^* before or in Proposition 2.3. 2. The title mentions "order" and "chaos"; hence, a detailed explanation of these terms, as well as how the proposed method comprehensively accounts for the "non-stationarity" of time series, would be beneficial. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See weaknesses above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The paper did not explicitly discuss its limitations. Some comments and questions regarding limitations of the work are discussed in the weakness and question parts above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your feedback on our paper on data augmentation for contrastive learning in the context of quasi-periodic time series data. We also thank you for valuing the importance of the problem. We would like to address certain points raised in your review. We are well aware of all the works you cited in your review, except [2]. We apologize for missing reference [2], which appeared close to our submission. This paper indeed proposed a data augmentation method for time series, but its overall focus was forecasting, without considering the quasi-periodic nature of signals. We applied their proposed method, but as it emphasizes different frequency components via random weights, its performance is quite low compared to ours (as can be seen in the submitted single-page pdf). We will include it in our revised version. Secondly, we did not explicitly compare our proposed augmentation with [1] in the original manuscript because [1] targets time-series forecasting and its augmentations are tailored to that purpose (not to quasi-periodic signals, similar to [2]), so the comparison would be unfair. The augmentations themselves assume a look-back window x and a target horizon y; it is unclear how integrating them into our problems would make sense. For example, what would the look-back window and target horizon be for ECG signals in cardiovascular disease detection? If you look at the baselines we used, they all claim their proposed methods can be applied to other domains (e.g., DACL). Nevertheless, in reply to your question, we applied the data augmentations in FrAug (with the original parameters), which are mainly frequency masking and mixing; you can find the results in the pdf. The performance of this method is low since the key frequency components are sometimes masked or mixed, as the characteristics of quasi-periodic signals are different. Thirdly, regarding the other cited works, we have already applied the augmentations mentioned there in our experiments. If you look at the augmentations used in these works, you can see we already used them and pointed this out in the Appendix. [3] CoST: Contrastive Learning of Disentangled Seasonal-Trend Representations for Time Series Forecasting uses scaling, shifting, and jittering, all of which are used and reported in the submitted manuscript. [4] Self-supervised contrastive pre-training for time series via time-frequency consistency: in this work, the authors' main contribution is not a data augmentation method but a framework for domain adaptation. That is the main reason we did not explicitly compare against it as a baseline; it would not be a fair comparison. In other words, this work is not a baseline for us: we propose a data augmentation method for unsupervised learning, not a framework. The authors propose to perturb only the magnitude in the frequency domain without changing the phase, where the perturbation is sampled from a Gaussian. In the paper, the authors showed that perturbing high and low frequencies affects performance differently depending on the application. Therefore, we applied this to the whole band in our submission. As per your request, we also applied it to the low band separately, as in the original paper; in the end, the performance decreased severely. [5] Unsupervised Representation Learning for Time Series with Temporal Neighborhood Coding: there is no data augmentation used or proposed in this paper. Again, we propose a data augmentation method for quasi-periodic non-stationary signals.
This work is not even a baseline for ours. [6] TS2Vec: Towards Universal Representation of Time Series uses timestamp masking and random cropping; we applied both of these augmentation techniques as baselines (random zero-out and permute). Q) Experimental analysis: Why does the linear mix-up method perform well in the activity recognition task? Is there any difference in the setting of activity recognition compared with the other two tasks? The answer to this question outlines the main contribution of this paper. The mixup method has a destructive impact when applied to two sinusoids with the same frequency yet different phases. The PPG (for heart rate) and ECG (for cardiovascular disease) signals exhibit a higher degree of quasi-periodicity, leading to diminished performance with linear mixup; these two signals display a greater level of periodicity as they stem from the human body. In contrast, in activity recognition, the extent of periodicity varies across activities, e.g., sitting (lower periodicity) and walking (higher periodicity). Our proposed method addresses this issue and leads to a substantial enhancement in performance. We hope the answer to this question clarifies the contribution of our presented work. Regarding your claim of a lack of baseline comparisons: in our work, we used 8 datasets from 3 domains with 14 baselines, including automatic data augmentation techniques (DACL, GenRep) and 6 different mixup methods, in 2 contrastive learning frameworks. Our theoretically well-grounded mix-up outperformed all of these on 7 datasets from 3 domains while ranking second on one dataset. We therefore do not find the claim of a lack of baselines realistic. Although we agree that using different contrastive frameworks could be interesting, our experiments, including the ablation studies, are comprehensive and robust, invalidating any notion of inadequate baseline comparisons. Additionally, we include one more contrastive learning framework that is designed for time series, where we observe the same behavior. We hope the reviewer will be convinced by the additional results and recognize the difference between a lack of baselines and additional interesting experiments. We believe our work has enough baselines; evaluating with 2 more contrastive learning frameworks would not add anything significant to the contribution, which is substantiated both theoretically and empirically. --- Rebuttal Comment 1.1: Comment: I thank the authors for the responses and additional experiments. After reading the rebuttal, I still have a few questions. 1. The authors said "We applied their proposed method, but as it emphasizes different frequency components via random weights, its performance is quite low compared to ours (as can be seen in the submitted single-page pdf)", but I don't see the results of reference [2] in the pdf. 2. I understand that this paper emphasizes improved mix-up methods in contrastive learning for quasi-periodic time-series data, but that does not stand as a valid point for excluding comparison with other types of contrastive learning frameworks. Users are in search of better contrastive learning methods, regardless of the underlying frameworks. If the comparison has to be limited to other augmentation methods, it would be better to generalize the scope to broader settings beyond contrastive learning (e.g., supervised classification and forecasting) and show the merits of the proposed augmentation in general.
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for taking the time to read our paper and provide feedback. In response to the reviewers' comments, we included two more data augmentations for comparison. We also applied our proposed data augmentation method to one more contrastive learning framework, one specifically proposed for time-series data, in addition to SimCLR and BYOL. Following the submission of our responses, we would greatly appreciate receiving additional feedback from the reviewers, particularly from Reviewers JK7u and hEg3 (the first two reviewers). Their reviews appear to misinterpret the paper, viewing it more as a novel unsupervised contrastive learning framework rather than recognizing it as a novel data augmentation method for quasi-periodic signals, which is evident from their request for a direct comparison (baseline) with unsupervised learning methods. Even if you maintain your initial assessments, we value the opportunity to gain additional clarity on your feedback. If you believe there are areas where our responses did not fully address your concerns, or if there are aspects that could be improved to merit a higher score, we would greatly appreciate your insights on those specific points. Your feedback is crucial in helping us enhance our work moving forward. Thank you once again for your review! Pdf: /pdf/ea919b839cdb221e2b37d5796a757ac43df78af2.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Epistemic Neural Networks
Accept (spotlight)
Summary: This paper introduces *Epistemic Neural Networks (ENN)* as a general approach for uncertainty estimation for deep learning, and then the *epinet*, which is a particular ENN instantiation. The paper argues that *joint* predictions are essential for sequential decision making problems, and shows that *epinets* offer similar marginal performance but better joint predictive performance than conventional BNNs/DNNs. Strengths: 1. Interesting ideas, particularly around setting aside getting good weight-space posteriors and instead focusing on getting good predictive performance (whether that be marginal or joint predictions). I think this is good, and is of interest to the community. 2. Solid experimental results, especially around joint predictions. Nice job. Weaknesses: Although I think the paper is of interest to the community, I think the paper could be substantially improved. I have some concerns about the presentation and writing of the paper that limit how excited I am about this work being published at NeurIPS. On balance, I would like to see the suggestions I've made below incorporated into the writing of the text, and this would enable me to increase my score. # 1. Poor Technical Presentation, Writing, Technical Soundness I have a number of specific concerns here. Some of these might be seen as nit-picky, I admit, but I remain concerned here. 1. L7-9. "With an epinet, CNNs outperform large ensembles of hundreds or more particles, and use orders of magnitude less compute." This claim should be qualified because it only applies robustly to joint predictions. This applies elsewhere in the paper. 2. L31-32 "These approaches can almost match the posterior distribution." I think citing Welling and Teh for this doesn't make sense, and I don't think this is true. See, for example, the analysis in [1]. 3. L36-37. "Practical large scale implementations are often limited to ten or fewer [particles] due to computational reasons". The way you've defined "ensemble based BNNs", sampling based inference or even VI when sampling from the approximate posterior counts as an ensemble. And some approaches use more than 10 samples, e.g., sampling 30+ particles from an approximate posterior should be fine. 4. In a number of places in the paper, intuition could be provided that would allow the reader to understand the points you are making more clearly. E.g., L50-51. "All BNNs are ENNs, but there are useful ENNs such as the epinet that are not BNNs". Explain why, please. ENNs have the extra index, which means that they have a wider class of functions (as far as I can tell), but I'm not sure this is right. L215-216 "with this stop gradient, training dynamics more reliably produce models that perform out of sample". Please provide more intuition. 5. The writing in Section 3 only addresses classification problems. It should be more general. 6. The notation in 3.2 is confusing. Both $\nu$ and $\theta$ correspond to parameters. 7. It is worth noting that recent work argues that BNNs do not actually need to maintain distributions over all of their parameters for good predictive performance [1]. This is important because it's mentioned in the paper that BNNs learn distributions over all parameters, but this is not completely true; some BNNs do not do this. This characterisation seems wrong. 8. L163: "A BNN is specified by a pair: a base network $f$ and a parameterised sampling distribution $p$." I don't think this is quite right.
I think a BNN is specified by a likelihood and prior, which define a posterior distribution, at least technically speaking. Just because we have a distribution over network parameters does not mean that the network is a BNN. That might be contentious, but the sampling distribution certainly does not need to be parametric! For example, HMC has no parametric sampling distribution! 9. Eq (3): it seems to me the definition should integrate over $z$ and examine the distribution over outputs when doing that. The actual equation currently seems to be a stronger condition: correct joint predictions for every value of $z$. Am I missing something here? 10. Theorem 3. "any BNN defined with respect to $f$ can be expressed as an ENN defined with respect to $f$". I find this a bit confusing. Are there no technical conditions here? If there are technical conditions, please state them. For instance, if there is a BNN with 10 million parameters, how do we represent this with an ENN that has a one-dimensional $Z$? 11. Similarly, the notation in 4.1 is a bit confusing, with different symbols used for different parameters etc. 12. I think it is incredibly confusing that the paper introduces ENNs and epinets, but that these are not the same thing! I would suggest changing the name of one of them, probably the epinet. # 2. Evaluations 1. What does the reduction in joint log-loss actually correspond to here? Does it yield better predictions in sequential decision making problems? Can you test this? For example, on an active learning or bandit problem? I appreciate this might be a tough ask, but given that decision making problems are a central focus for this work, I think this would be a substantial improvement to the submission. I'd also like to see the performance of an ENN on a 1D regression problem to understand whether ENNs offer coherent updates when observing data, since BNNs (ideally) would offer this. This could be done by retraining, or by just updating the posterior over the epistemic index. 2. I'd like to see results on ImageNet-C, i.e., OOD robustness. BNNs and other approaches are often used here. I'm curious to see if the epinet helps here. Maybe this is what you are doing already; it wasn't clear to me reading the main paper alone, and it should be. 3. Are there error bars on Figure 2? It seems not? Please add them, if possible! 4. It is not clear what the reduction in joint log-loss corresponds to; the number is hard to interpret or understand. As a reader, I cannot understand how significant this is. Benchmarking on a sequential decision making problem and demonstrating improved decision making there would be a much more significant result. I appreciate there is theory here, but I'm more interested in the practical implications of the work. 5. Ideally, there would be examples of networks having good marginal predictions but poor joint predictions that lead to problems in sequential decision making. [1] Sharma, Mrinank, et al. "Do Bayesian Neural Networks Need To Be Fully Stochastic?." International Conference on Artificial Intelligence and Statistics. PMLR, 2023. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1. L42-43 "evaluating each model on out-of-sample data": is this OOD data, e.g., from ImageNet-C, or just the normal test set? Please specify. 2. How would one actually use an ENN on sequential decision making problems or active learning problems? Update the epinet parameters only? Retrain the whole thing? Perform inference over Z?
Given that this is the aim of the paper, this is an important topic to discuss in the main text. 3. Is the Glorot initialisation used for the MLP important? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 3 good Limitations: Looks good to me. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for carefully reviewing our paper and writing detailed reviews. Our responses are as follows. Due to the length limit of the rebuttal, we will omit parts of the questions and provide concise answers. **Regarding your questions:** **1. L42-43 "evaluating each model on out-of-sample data", is this OOD data e.g., from ImageNet-C, or just the normal test set? Please specify?** This is a typical test dataset. We will clarify. **2. How would one actually use an ENN on sequential decision making problems or active learning problems? Update the epinet parameters only? Retrain the whole thing? Perform inference over Z? Given that this is the aim of the paper, this is an important topic to discuss in the main text.** Thanks. In the case of sequential decision making, upon obtaining new data, both the base model and the epinet need to be trained (as the data provides information for both). This can be done either concurrently, or by first training the base network and then training the epinet in an iterative manner. We believe that the two should be equivalent. **3. Is the Glorot initialisation used for the MLP important?** We think that Glorot initialisation leads to outputs roughly distributed as N(0,1) when inputs are from N(0, I). This saves us from having to tune the scale for different depths and widths of the neural networks. However, the techniques presented in our paper should also work for other initialization schemes. **Regarding your comments on technical presentation, writing, and technical soundness:** **L7-9. "With an epinet, CNNs ... This applies elsewhere in the paper.** We will polish the wording in the revision. **L31-32 "These approaches can almost ... See, for example, the analysis in [1].** Thanks for pointing us to [1]; we were not aware of this recent paper. We will make modifications accordingly. **L36-37. "Practical large scale ... e.g., sampling 30+ particles from an approximate posterior should be fine.** When we refer to an "ensemble", we just mean an ensemble of particles, not more general sampling-based inference. We will change "ten or fewer particles" to "at most tens of particles". **In a number of places in the paper ... Please provide more intuition.** Thanks for the comments! We will provide more intuition in the revision to further improve the writing. **The writing in Section 3 only addresses classification problems. It should be more general.** We fully agree that it can be more general; in particular, it could also include the formulation for regression problems. We included only the classification formulation to keep the paper short and simple. **The notation in 3.2 is confusing. Both \nu and \theta correspond to parameters.** Note that \theta corresponds to the parameters of the neural network, while \nu corresponds to the parameters of the sampling distribution. **It is worth noting that recent work ... some BNNs do not do this. This characterisation seems wrong.** Thanks for pointing us to this very recent work; we were not aware of it. We would like to clarify that we only intended to give a high-level overview of how classical BNNs model uncertainty. We will change the wording accordingly. **L163: "A BNN is specified by a pair... For example, HMC has no parametric sampling distribution!** We will clarify in the revision that the sampling distribution is meant to be an approximation of the posterior distribution. **Eq (3): it seems to me the definition ...
Am I missing something here?** Note that `z` is a random variable, so Equation (3) means that the distribution generated by a BNN can be matched by the distribution generated by an ENN. **Theorem 3. "any BNN defined with respect to `f` can be expressed ... how do we represent this with an ENN that has one dimensional `Z`.** Theorem 3 does not require any additional technical conditions. This can be observed from the proof of Theorem 3 in Appendix C. **Similarly, the notation in 4.1 is a bit confusing, with different symbols used for different parameters etc.** We used different symbols to distinguish different parameters and avoid confusion. **I think it is incredibly confusing that the paper ..., probably the epinet.** ENNs are a more general framework; the epinet is one specific kind of ENN. For example, ENNs are to NNs as the epinet is to an MLP. **Regarding your comments on evaluations:** **What does the reduction in joint log-loss actually correspond to here? ... or by just updating the posterior over the epistemic index.** Thanks for the great suggestion. We will include these results with 1D regression in the paper. **I'd like to see results on ImageNet-C i.e., OOD robustness. ..., it wasn't clear to me reading the main paper alone, and it should be.** Thanks for this suggestion! We will examine performance on ImageNet-C and other OOD datasets in our future work. **Are there error bars on Figure 2? It seems not? Please add them, if possible!** We were not able to add the error bars because of the compute requirements for some of the agents (like ensembles), but we will try to include them in the final version. **It is not clear what the reduction in joint log-loss corresponds to, ... but I'm more interested in the practical implications of the work.** We would like to point the reviewer to a recent paper [1] that examines the performance of the epinet in both neural bandit problems and reinforcement learning problems. [1] "Approximate Thompson sampling via Epistemic Neural Networks" https://openreview.net/pdf?id=xampQmrqD8U **Ideally, there would be examples of networks having good marginal predictions but poor joint predictions that lead to problems in sequential decision making.** Based on the paper [1] discussed above, it is indeed the case that agents with good marginal predictions but poor joint predictions perform poorly in sequential decision problems. --- Rebuttal Comment 1.1: Comment: Thanks. I raise my score to weak accept. I would like to re-iterate that I think the work is good but it needs some polish and improvement to the writing. I would ask the authors to revise and improve the writing when submitting the camera-ready. --- Reply to Comment 1.1.1: Comment: Dear reviewer, thanks for going through our rebuttal and increasing the score. We would also like to thank you for your constructive feedback and for engaging with us.
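For readers who want a concrete picture of the epinet structure discussed in this thread (base network output plus a trainable epinet and a fixed prior network, both conditioned on features and an epistemic index z), here is a minimal NumPy sketch. The layer sizes, the simple concatenation of features and index, and the MLP shapes are our illustrative assumptions; the paper's exact parameterization (e.g., how the epinet output is contracted with z) may differ, and the stop-gradient on the features is only indicated in a comment since NumPy has no autodiff.

```python
import numpy as np

rng = np.random.default_rng(0)
D_feat, D_index, n_class = 32, 8, 10      # illustrative sizes

def init(d_in, d_hid, d_out):
    return [rng.normal(0, d_in ** -0.5, (d_in, d_hid)), np.zeros(d_hid),
            rng.normal(0, d_hid ** -0.5, (d_hid, d_out)), np.zeros(d_out)]

def mlp(params, x):
    W1, b1, W2, b2 = params
    return np.tanh(x @ W1 + b1) @ W2 + b2

learn_params = init(D_feat + D_index, 64, n_class)   # trained alongside the base net
prior_params = init(D_feat + D_index, 64, n_class)   # fixed at initialization ("prior net")

def enn_logits(base_logits, features, z):
    # In the real training setup, a stop_gradient is applied to `features`
    # before they enter the epinet; that is a no-op in plain NumPy.
    inp = np.concatenate([features, z])
    return base_logits + mlp(learn_params, inp) + mlp(prior_params, inp)

features = rng.normal(size=D_feat)        # last-layer features of the base network
base_logits = rng.normal(size=n_class)    # base network output
z = rng.normal(size=D_index)              # epistemic index z ~ N(0, I)
print(enn_logits(base_logits, features, z))
```

Sampling many z values and reading off the spread of the resulting logits is what gives the epistemic-uncertainty estimates discussed above.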
Summary: In this paper, the authors present a new approach for uncertainty estimation utilizing joint predictions. They introduce the concept of an Epistemic Neural Network (ENN). Within this framework, they propose an innovative architecture called the 'epinet,' which supplements any conventional neural network. This additional architecture helps conventional neural networks outperform large ensembles in terms of the joint log-likelihood at the cost of a minimal computational increase. This approach is shown to be effective and practical through numerical experiments, including large-scale datasets like ImageNet. Strengths: 1. The paper addresses the important problem of uncertainty quantification and disentanglement (aleatoric and epistemic) and introduces a novel class of models, ENNs, to tackle it. 2. The paper is well-written and easy to follow. 3. The code for the reproduction of experiments is provided. Weaknesses: I think there is a missed opportunity to show the performance of the proposed approach on direct uncertainty quantification. The motivating example in the introduction showed that it is important to know where uncertainty comes from: is it due to noise in the data (aleatoric) or to a lack of knowledge (epistemic)? It would be helpful to see the authors run standard experiments, like checking how well the model can tell the difference between one dataset and another. This would give us a clearer picture of how well the model understands what it doesn't know. One such experiment might be to train a model on one dataset (say CIFAR100), compare some epistemic uncertainty measure on another dataset (say LSUN), and then compute the ROC AUC. Right now, the results are just about how well the model can classify data and predict losses on the same training dataset, which I believe doesn't tell us the whole story. Minor concerns: 1) In the related work section, line 74, several papers are cited and their connection to Gaussian processes is mentioned. But two out of the three cited papers, PriorNets and PostNets, actually focus more on Dirichlet parametrization and don't have a clear link to Gaussian processes. I think SNGP [1] should be cited here. [1] Liu, Jeremiah, et al. "Simple and principled uncertainty estimation with deterministic deep learning via distance awareness." Advances in Neural Information Processing Systems 33 (2020): 7498-7512. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please address the weaknesses I mentioned above. Edit: I would like to thank the authors for their answers during the rebuttal period. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations; there is no negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and effort in reviewing this paper. Our responses are: > I think that there is a missed opportunity to show the performance of the proposed approach on direct uncertainty quantification... It would be helpful to see the authors run standard experiments, like checking how well the model can tell the difference between one dataset and another... Thanks for the great suggestion! We plan to include a visualization of the uncertainty estimates produced by our methods on some 2D OOD-like classification problems. This should give more intuition on how epinets perform outside of the training dataset. It will be interesting to investigate further how our methods perform on more general OOD tasks in the future. > In the related work section, line 74, several papers are cited and their connection to Gaussian processes is mentioned. But two out of the three cited papers, PriorNets and PostNets, actually focus more on Dirichlet parametrization and don't have a clear link to Gaussian processes. I think SNGP [1] should be cited here. Thanks for pointing us to the SNGP paper [1]. We will make appropriate modifications and include the paper.
Summary: The paper introduces epinets as part of Epistemic Neural Networks (ENNs), a novel approach to uncertainty estimation in deep learning models. Epinets extend neural networks to create ENNs, which can be used to estimate uncertainty. The paper then presents experiments on image classification tasks, demonstrating that epinets outperform both uncertainty baseline models and ensemble methods in terms of joint log-loss, while maintaining comparable performance in terms of marginal log-loss. At the same time, the computational cost of these networks is orders of magnitude less than that of larger ensembles. Strengths: - The paper presents a clear distinction between epistemic neural networks and Bayesian neural networks (BNNs). This distinction helps readers grasp the unique characteristics and advantages of the proposed framework. - The results showcase the benefits of the epinet compared to both ensemble approaches and uncertainty baselines. More comprehensive results in terms of computational cost in FLOPs are provided in the appendix. - The explanations throughout the paper strike a balance between intuitive, easy-to-follow exposition and rigorous theorems. The figures in the document also effectively illustrate both the core principles of the epinet framework and the key results obtained. These visuals enhance the understanding of the concepts discussed in the paper. Weaknesses: - Even though the framework is rather general, the focus of the experiments seems rather narrow and confined to image classification tasks. It would be useful to see how well the epinet generalizes to other domains and whether there are scenarios or domains where it is more or less effective. - Related to the previous point, the limitations of the work are not discussed in the conclusion. While epistemic neural networks are a general framework, addressing such limitations could provide insights on future directions to take. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Has the epinet framework been tested on real-world datasets outside of the experiments mentioned in the paper? 2. How does the interpretability of the uncertainty estimates by epinets compare to that of BNNs? 3. The ablation study in the appendix shows that larger values of the index dimension improve both the joint and marginal KL estimates. Is there a point where this is no longer the case? What is the tradeoff? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: To my knowledge, the limitations of epistemic neural networks are not addressed in the paper. The limitations and possible directions for future work could be addressed to improve the understanding and practicality of epinets. Edit: I thank the authors for addressing my questions and comments in their rebuttal. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for reviewing our paper. Our point-by-point responses are as follows: **Even though the framework is rather general, the focus of the experiments seems rather narrow and confined to image classification tasks. It would be useful to see how well the epinet generalizes to other domains and whether there are scenarios or domains where it is more or less effective.** We thank the reviewer for this comment. We would like to point the reviewer to a recent paper [Osband et al. 2023] on approximate Thompson sampling via epistemic neural networks. [Osband et al. 2023] provides more experimental results for the epinet, in both neural bandit problems and reinforcement learning problems, and the epinet has been found to be effective in both. Osband et al. "Approximate Thompson sampling via Epistemic Neural Networks" https://arxiv.org/pdf/2302.09205.pdf **Related to the previous point, the limitations of the work are not discussed in the conclusion. While epistemic neural networks are a general framework, addressing such limitations could provide insights on future directions to take.** We thank the reviewer for this comment. One limitation of the current ENN framework is that it does not consider problems with sequential inputs and outputs (e.g., language translation). We will clarify and discuss this limitation in the revision. A possible future direction is to build ENNs for such sequential problems. **Has the epinet framework been tested on real-world datasets outside of the experiments mentioned in the paper?** As mentioned above, the epinet framework has been tested on other problems, including neural bandit problems and reinforcement learning problems. **How does the interpretability of the uncertainty estimates by epinets compare to that of BNNs?** Our understanding is that interpreting uncertainty modeling in complex prediction/decision problems might be too challenging; it is better to compare methods based on the effectiveness of the resulting predictions/decisions. In this paper, we have used the joint log-loss to measure the uncertainty estimates of different agents, including epinets and BNNs. As Theorem 2 of this paper shows, minimizing joint log-loss leads to effective decisions. **In the ablation study in the appendix, it is shown that larger values for the index dimension improve both the joint and marginal KL estimates. Is there a point where this is no longer the case? What is the tradeoff?** Thanks for the comment. Usually, increasing the ENN index dimension will improve both the joint and marginal KL estimates (performance). However, it will also require more computation. Thus, there is a computation-performance tradeoff.
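The marginal-versus-joint distinction that runs through this discussion can be made concrete with a toy example. The following sketch is our illustration, not taken from the paper: two agents have identical marginal predictions over a pair of inputs, but the agent whose uncertainty is correlated across inputs assigns a higher joint probability to the consistent outcome than one that treats the inputs independently.

```python
import numpy as np

# Two inputs with binary labels; a 2-member "posterior" whose members agree
# within themselves but disagree with each other. The marginals are identical
# to an independent predictor with P(y=1) = 0.5 on each input.
p_members = np.array([[0.9, 0.9],   # member 1: P(y1=1 | x1), P(y2=1 | x2)
                      [0.1, 0.1]])  # member 2

marginal = p_members.mean(axis=0)                               # [0.5, 0.5]
joint_correlated = np.mean(p_members[:, 0] * p_members[:, 1])   # (0.81 + 0.01)/2 = 0.41
joint_independent = marginal[0] * marginal[1]                   # 0.25

print("joint P(y1=1, y2=1), correlated agent :", joint_correlated)
print("joint P(y1=1, y2=1), independent agent:", joint_independent)
print("joint log-loss on labels (1, 1):",
      -np.log(joint_correlated), "vs", -np.log(joint_independent))
```

Both agents incur the same marginal log-loss here, so only the joint log-loss distinguishes the agent that correctly models correlated uncertainty, which is the kind of example the reviewers ask about.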
Summary: In this work, the authors introduce a new framework, "Epistemic Neural Networks", to better capture uncertainty by integrating over epistemic indices when computing the joint distribution over multiple inputs. This approach is flexible and can be added to any existing neural network architecture with some differences in the training algorithm. The authors show through experimental evaluations that their approach yields promising results when compared against other baseline methods for uncertainty quantification, including BNNs and deep ensembles, on joint log-loss. Strengths: - This paper presents a new approach that's flexible and doesn't add a lot of parameters to the existing model. - The paper evaluates the joint log-loss performance rather than just marginal log-loss and thus helps us understand how other baseline methods fare when looking at joint prediction performance. Weaknesses: - I think the authors need to do a better job of motivating the case for joint log-loss. Why should one care about joint log-loss performance as opposed to marginal? Can you describe any applications where this might be useful? - The presentation can improve a bit. The authors should improve the description in Section 4 to make sure it's easier to understand. For example, what are all the different terms in Sec. 4.2 (Eqs. 7, 8) and what do they signify? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Can you highlight the parallels between the expected reward under joint prediction and the work in the Bayesian decision theory area? In Bayesian decision theory too, our goal is to get better decision utility for downstream decision-making tasks. [1-3] - Building upon the previous point, why should one focus on the decision-making framework presented in this paper versus other Bayesian decision-theoretic applications that look at marginals? [1-3] - Could you describe how this approach captures aleatoric uncertainty more clearly? References [1] Vadera, Meet, Soumya Ghosh, Kenney Ng, and Benjamin M. Marlin. "Post-hoc loss-calibration for Bayesian neural networks." In UAI (2021). [2] Cobb, Adam D., Stephen J. Roberts, and Yarin Gal. "Loss-calibrated approximate inference in Bayesian neural networks." arXiv preprint arXiv:1805.03901 (2018). [3] Lacoste-Julien, Simon, Ferenc Huszár, and Zoubin Ghahramani. "Approximate inference for the loss-calibrated Bayesian." In AISTATS (2011). Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Highlighted above in the review. The authors do not clearly specify the limitations in their submission. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank Reviewer TShA for reviewing our paper. The following are our point-by-point responses: **I think the authors need to do a better job of motivating the case for joint log-loss. Why should one care about joint log-loss performance as opposed to marginal? Can you describe any applications where this might be useful?** We would like to clarify that, as a recent paper [Wen et al. 2022] has shown, for a broad class of decision problems, such as combinatorial decision problems, sequential prediction problems, and multi-armed bandit problems, accurate joint predictions are required to deliver good performance, while accurate marginal predictions alone are insufficient to guarantee good performance. We will further explain and discuss this in the revision. Wen et al. 2022, "From predictions to decisions: the importance of joint predictive distributions", https://arxiv.org/abs/2107.09224. **The presentation can improve a bit. The authors should improve the description in Section 4 to make sure it's easier to understand. For example, what are all the different terms in Sec. 4.2 (Eqs. 7, 8) and what do they signify?** Thanks for raising this. We will further improve the writing of Section 4.2. The notation in equations 7 and 8 follows the definitions in Sections 3 and 4.1. In particular: \theta denotes the parameters of the ENN, z is the ENN index, and (x_i, y_i) is an input-label pair. f_\theta(x_i, z) is the output of the ENN, which is the logit vector. The softmax transforms the logit vector into a probability vector, and the subscript y_i in eq 7 denotes the (y_i)-th component of this probability vector. We will clarify that in eq 8, \Phi is the CDF of a standard Gaussian random variable. We recognize that Section 4.2 only explicitly states the loss for a single input-label pair and a single epistemic index. We will clarify in the paper that for each stochastic gradient step, the method samples a batch of input-label pairs and a batch of indices and averages over the losses. **Can you highlight the parallels between the expected reward under joint prediction and the work in the Bayesian decision theory area? In Bayesian decision theory too, our goal is to get better decision utility for downstream decision-making tasks. [1-3]** Bayesian decision problems are typically framed as maximizing expected utility. Our approach offers tools for such problems. Theorem 2 ensures that minimizing joint log-loss leads to effective decisions for problems framed in that manner. Please also note that, as Theorem 2 indicates, for the special case where the reward only depends on one label (i.e., tau=1 in the formal version of Theorem 2 in Appendix B), the joint loss reduces to the marginal loss. For that special case, minimizing marginal log-loss is sufficient. **Building upon the previous point, why should one focus on the decision-making framework presented in this paper versus other Bayesian decision-theoretic applications that look at marginals? [1-3]** As we have discussed above, our approach is consistent with Bayesian decision theory. Joint log-loss serves as a unit test to ensure that our tool will serve the needs of Bayesian decision making. **Could you describe how this approach captures aleatoric uncertainty more clearly?** Variation across epistemic indices models epistemic uncertainty. For a fixed epistemic index, the label probabilities produced as model outputs express aleatoric uncertainty. --- Rebuttal Comment 1.1: Title: Response to authors Comment: I thank the authors for their rebuttal.
I've looked at their rebuttal as well as the other reviews on the paper. Furthermore, I've come to realize that my initial assessment might have fallen short - I am reducing my confidence in the review and increasing the score based on the authors' response. Nonetheless, the paper does need significant updates to its presentation, and this has also been pointed out by one other reviewer.
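Following up on the rebuttal's clarification of the equation-7 loss above, here is a small self-contained sketch of the described training objective: for each stochastic gradient step, sample a batch of input-label pairs and a batch of epistemic indices, and average -log softmax(f_theta(x_i, z))_{y_i} over both. The stand-in linear f_theta and all sizes are hypothetical; only the averaging scheme follows the rebuttal.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_softmax(v):
    v = v - v.max()
    return v - np.log(np.exp(v).sum())

# Stand-in ENN: logits linear in [x, z], purely illustrative.
D_FEAT, D_INDEX, N_CLASSES = 4, 3, 5
W = rng.normal(0, 0.1, (D_FEAT + D_INDEX, N_CLASSES))

def f_theta(x, z):
    return np.concatenate([x, z]) @ W

def batch_loss(xs, ys, n_index_samples=16):
    """Monte-Carlo estimate of the training loss: average over a batch of
    (x_i, y_i) pairs and a batch of epistemic indices z of the single-pair
    loss -log softmax(f_theta(x_i, z))_{y_i} (the eq. 7 form)."""
    zs = rng.normal(size=(n_index_samples, D_INDEX))
    losses = [-log_softmax(f_theta(x, z))[y]
              for x, y in zip(xs, ys) for z in zs]
    return np.mean(losses)

xs = rng.normal(size=(8, D_FEAT))
ys = rng.integers(0, N_CLASSES, size=8)
print(batch_loss(xs, ys))
```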
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes epistemic neural networks (ENNs) as a novel framework for uncertainty estimation in neural network predictions. ENNs introduce an epistemic index that expresses uncertainty and correlations across multiple inputs via joint predictive distributions. The paper argues that joint predictions are critical for effectively evaluating uncertainty quality and enabling good decision making. It is argued that minimizing joint log-loss leads to near-optimal actions while minimizing marginal log-loss does not. ENNs generalize Bayesian neural networks, as any BNN can be expressed as an ENN but the reverse does not hold. A novel ENN architecture called the epinet is introduced, which adds a small auxiliary network to any existing neural network to produce uncertainty estimates. Experiments demonstrate that epinets can match the joint prediction performance of large ensembles while adding relatively minor computational overhead. Strengths: Originality: Proposes a new conceptual framework of epistemic neural networks that expands uncertainty estimation options beyond Bayesian neural networks. Introduces a practically effective and scalable approach via epinets that represents a novel architecture and training methodology. Quality: Technically sound way of modeling uncertainty through joint predictive distributions. Strong empirical results outperforming baselines. Clarity: Motivates the limitations of marginal predictions and the need for joint modeling. Explains and visualizes key concepts effectively through examples. Results are presented clearly through tables/plots. Significance: Uncertainty estimation is a fundamental challenge for deploying reliable neural networks. Can enable progress on critical applications like exploration, experiment design, and robust decision making. Weaknesses: Theoretical Analysis: The connections between joint log-loss and decision performance could be bolstered with more rigorous analysis of regret bounds or performance guarantees. This would strengthen claims about utility for decision making. Additional theoretical characterization of the proposed training objectives and epinet architecture properties could provide better insights into why the approach works. Experimental Evaluation: Experiments focus on image classification. Evaluating on decision-making benchmarks more directly relevant to the motivations could better highlight the benefits. Could ablate design decisions like network architectures and priors more thoroughly to understand their impact. This would provide guidance on best practices. Novelty and Impact: The high-level ENN concept builds closely on established perspectives like Bayesian learning. Examining more novel implications would strengthen novelty. While solid incremental gains are shown, the broader advancements enabled by this approach could be better highlighted. Articulating the impact on future work would increase significance. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Theoretical Analysis Could you provide a more thorough theoretical analysis quantifying the advantages of joint modeling for decision making? Any regret bounds or performance guarantees? Is it possible to better characterize the properties of the proposed training objectives? Do they provably optimize calibration of uncertainties? Experimental Evaluation Have you considered evaluating on tasks more directly relevant to decision making such as experiment design, active learning, or contextual bandits?
Impact Could you better highlight the broader potential impact enabled by the approach? What new directions are unlocked? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their time and effort in reviewing the paper. We address the comments below: **> Theoretical analysis regarding regret bounds: Could you provide a more thorough theoretical analysis quantifying the advantages of joint modeling for decision making? Any regret bounds or performance guarantees?** Thanks for raising this. We would like to point the reviewer to Theorem 2, which has a formal version and a proof in Appendix B. The theorem establishes a rigorous regret bound in bandit settings. It is an adaptation of the theory presented by Wen et al. (https://arxiv.org/pdf/2107.09224.pdf), which we plan to cite in the main paper as well. **> Theory regarding the choice of training objective: Is it possible to better characterize the properties of the proposed training objectives? Do they provably optimize calibration of uncertainties?** We would like to point the reviewer to Theorem 4 and the associated lemmas and proof in Appendix D. In Theorem 4, we show that the distribution approximated by an epinet converges to the posterior in the linear regression setting, under appropriate technical conditions. We plan to extend this theory beyond linear regression in future work. **> Have you considered evaluating on tasks more directly relevant to decision making such as experiment design, active learning, or contextual bandits?** We would like to point the reviewer to a relevant paper (https://arxiv.org/pdf/2302.09205.pdf), which presents empirical results showing that improved joint predictions via the epinet lead to better performance in bandit and reinforcement learning tasks. **> Could you better highlight the broader potential impact enabled by the approach? What new directions are unlocked?** Our main motivation for this work is to enable principled and practical methods for scalable uncertainty estimation. Our work, especially the epinet, will enable uncertainty estimation for large-scale models (such as large language models). Furthermore, scalable uncertainty estimation can also unlock practical implementations of efficient exploration algorithms, such as Thompson sampling and information-directed sampling, which may dramatically improve data efficiency in sequential decision-making tasks. Finally, our ENN framework and open-source library will allow researchers to iterate and develop better networks and approaches to uncertainty modeling.
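Since joint log-loss is central to the exchange above, a short sketch may help pin down the distinction being drawn: as we read the framework, the joint prediction over tau inputs is obtained by integrating the product of per-input class probabilities over the epistemic index, whereas the marginal loss factorizes input by input. The toy linear ENN below is an illustrative assumption; only the joint-versus-marginal contrast reflects the paper's framing.

```python
import numpy as np

rng = np.random.default_rng(2)
D_FEAT, D_INDEX, N_CLASSES = 4, 3, 5
W = rng.normal(0, 0.1, (D_FEAT + D_INDEX, N_CLASSES))

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def class_probs(x, z):
    return softmax(np.concatenate([x, z]) @ W)

def joint_log_loss(xs, ys, n_index_samples=1000):
    """-log of the joint predictive probability of labels y_1..y_tau:
    P(y_1..y_tau | x_1..x_tau) ~= mean over z of prod_i p(y_i | x_i, z)."""
    zs = rng.normal(size=(n_index_samples, D_INDEX))
    joint = np.mean([np.prod([class_probs(x, z)[y] for x, y in zip(xs, ys)])
                     for z in zs])
    return -np.log(joint)

def marginal_log_loss(xs, ys, n_index_samples=1000):
    """Sum of -log marginal probabilities; this factorizes across inputs and
    ignores the correlations across inputs that the joint loss captures."""
    zs = rng.normal(size=(n_index_samples, D_INDEX))
    margs = [np.mean([class_probs(x, z)[y] for z in zs]) for x, y in zip(xs, ys)]
    return -np.sum(np.log(margs))

xs = rng.normal(size=(3, D_FEAT))
ys = rng.integers(0, N_CLASSES, size=3)
print(joint_log_loss(xs, ys), marginal_log_loss(xs, ys))
```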
Causal Discovery in Semi-Stationary Time Series
Accept (poster)
Summary: The problem of causal discovery is tackled for time series data in those cases where stationarity of multivariate time series data cannot be assumed. A problem formulation based on structural causal models is given by considering the case of semi-stationary time series. Under this assumption, the paper considers the case where a finite number of different causal mechanisms occur sequentially and periodically across time. A constraint-based, non-parametric structure learning algorithm is designed and developed by building on PCMCI, described in the specialized literature. This algorithm is extended and adapted to cope with the considered semi-stationary setting. The causal discovery problem is tackled by standard conditional independence tests. The paper proves that the proposed algorithm is sound when asked to discover and identify causal relationships between variables of discrete time series. Numerical experiments, on both synthetic and real-world data, are reported and described to comment on the performance of the proposed approach. The main contributions of the paper are in my humble opinion the following: - the design and development of a new causal discovery algorithm for semi-stationary time series where the causal graph changes periodically. - the validation of the proposed approach through synthetic data Strengths: - the tackled problem, which is of both theoretical and practical interest - the theoretical framework of the considered problem Weaknesses: - the proposed algorithm wisely exploits the PCMCI algorithm while not being original; it looks a little like an incremental result - numerical experiments seem to compare the proposed algorithm to algorithms which are not conceived to tackle the considered problem - numerical experiments on real-world data are disappointing. Indeed, the paper itself recognizes that, without knowledge about the considered domain, the results obtained by the proposed algorithm can be neither validated nor commented on. Therefore, I ask myself what the value of presenting such experiments is. - computational complexity is not mentioned at all - the paper's structure and organization make it non-trivial to follow - I found many typos Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - what is the computational complexity of the algorithm you propose? - how does the proposed algorithm scale with respect to the number of variables of the multivariate time series? - could you please explain why the following works have not been taken into account? Maybe they are slightly out of scope but still linked to the problem you tackle: (Learning Continuous Time Bayesian Networks in Non-stationary Domains), (Non-homogeneous dynamic Bayesian networks with Bayesian regularization for inferring gene regulatory networks with gradually time-varying structure), (Learning non-stationary dynamic Bayesian networks) - line 104: could you please explain the meaning of setting P(V) different from 0? - line 105: the notation used for the pair of variables is not entirely coherent; indeed, Xj is a univariate time series contained in V, which has not been defined. Furthermore, what do you mean by variables here? Two time series? - line 115: what do you mean by finite maximum lag? - would you be so kind as to comment explicitly, and not only formally, on the assumptions made about the time when non-stationarity occurs, i.e., when the causal graph changes its structure? Synchronous? Asynchronous for each time series?
could you discuss this aspect more? What about parameter non-stationarity? I guess you consider this to be excluded, or not? - line 156: the definition of n is a little confusing. Indeed, n belongs to a set that is defined as a function of n again; would you be so kind as to help me understand? - formula (14): is the outer p the same probability as the p inside the brackets? - line 243: I read "continuous time series in four steps", but I found only three of them - would you also consider the structural Hamming distance to measure the performance of the proposed algorithm? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: - the computational complexity of the proposed algorithm is not mentioned at all - the comparison has not been performed with algorithms natively designed for semi-stationary time series data. Maybe no such algorithms exist? - results on real-world data are of no help because no ground truth is available and no domain expert comments are given to corroborate them Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We really appreciate the feedback. Regarding your valuable questions, please see below for our responses. ## Q1. Computational Complexity Executing the PCMCI algorithm on the entire time series constitutes the initial phase of the proposed approach (designated as "line 1" in **Algorithm 1**). The algorithm's worst-case overall computational complexity is $O(n^{3}\tau_{ub}^{2})$. Here, $n$ denotes the number of univariate time series in the multivariate data, and $\tau_{ub}$ represents the upper bound on time lags. The subsequent computational load stemming from the remaining components of our algorithm follows a complexity of $O(\omega_{ub}^2 n^{2}\tau_{ub})$. This encompasses the $O(n^{2}\tau_{ub})$ complexity associated with conducting Momentary Conditional Independence (MCI) tests on all $n$ univariate time series. The factor $\omega_{ub}^2$ arises from the search procedure over $\omega_{j}$, iterating through values from $1$ to $\omega_{ub}$ for all $n$ univariate time series. The runtime is further influenced by the scaling behavior of the conditional independence test with respect to the dimensionality of the conditioning set and the time series length $T$. For further details, see Section 5.1 in the work by Runge et al. [2019]. ## Q2. Novelty Thank you for your comment. We agree that our solution leverages PCMCI and builds on it. However, we believe that the novelty of our approach lies in its simplicity, carefully leveraging PCMCI through algorithmic ideas. Please note that prior to our work, it was not clear whether we needed to rethink the causal discovery problem from scratch for non-stationary settings or whether some of the existing work could be leveraged in a clever way. Our algorithmic contribution demonstrates the latter. ## Q3. Lack of appropriate Baselines We agree with the reviewer. To the best of our knowledge, there are no non-parametric baselines that can handle non-stationarity or semi-stationarity in the data with a window causal graph. ## Q4. Experiment in the case study While we understand that our results are based on simulated data, it is fairly common in the area to benchmark against the simulated settings that we considered. As one of the main contributions of our work is the introduction of the problem, we believe that our empirical contributions along with the problem formulation are significant. Moreover, many results in causal discovery and inference rely on synthetic simulations, which is necessary in the absence of ground truth. ## Q5. Related Literature We appreciate the valuable literature you've shared; it bears significant relevance to our ongoing project. Regarding our proposed algorithm, we emphasize a non-parametric approach capable of uncovering causal relationships with time-lag effects, handling abrupt changes in causal mechanisms at each time point while also addressing consistent causal mechanisms as long as periodicity is present. In contrast, the works mentioned are not distribution-free and generate summary causal graphs as outputs. ## Q6. $P(V)\neq 0$ The probability of any given realization involving all variables within $V$ should not be zero. For instance, consider $V=\{X,Y\}$; when computing $P(X|Y)=\frac{P(X,Y)}{P(Y)}$, we aim to avoid scenarios where the denominator or numerator becomes zero. This is needed in the proof, and this prerequisite is already met by **Assumption A7**. $P(V) \neq 0$ is also called the positivity assumption. ## Q7.
Clarification of some notation in Line 105 The notation $X, Y\in V$ signifies that any two random variables, denoted by $X$ and $Y$, belong to the set $V$, with $X, Y\in \mathbb{R}^{1}$. Meanwhile, $S$ represents a subset of $V$. As indicated in line 101, $X_{t}^{j} \in \mathbb{R}^{1}$ designates an individual variable, whereas $\mathbf{X}^{j}_{t\in[T]} \in \mathbb{R}^{T}$ represents a time series. ## Q8. Finite maximum lag This implies that as $t$ approaches infinity, $X^i_1$ will not serve as a cause of $X^j_t$. The maximum time lag $\tau_{\text{max}}$, as defined in Eq. 1, must remain finite. ## Q9. Non-stationarity in the Semi-Stationary setting Each univariate time series $\mathbf{X}^j$ in the $n$-variate time series $V$ can have its own periodicity $\omega_j, j\in[n]$ in the causal mechanisms. Within each cycle of $\omega_j$ time points, the same causal mechanism is reiterated, resulting in $\omega_j$ distinct causal mechanisms for the time series $\mathbf{X}^j$. In accordance with **Assumption A6**, when the causal mechanisms governing two variables, $X^j_{t1}$ and $X^j_{t2}$, have undergone alteration, it follows that their respective parent sets cannot remain identical. In essence, when the parameters within the structural causal model shift, there is a corresponding change in the parent set as well. ## Q10. Line 156, definition of $n$ $n$ belongs to a subset of $\mathbb{N}$ that satisfies $\tau_{max}+q-1+n\delta \leq T$, i.e., $n\leq (T-(\tau_{max}+q-1))/\delta$. ## Q11. Formula 14, the two $p$'s Yes, they both denote probability. The formula states that the chance of the two probabilities enclosed within the brackets being identical is zero. ## Q12. Line 243 typo There should be only three steps. ## Q13. Structural Hamming distance We did not use the structural Hamming distance because of the semi-stationary context, wherein the estimated binary edge array's dimensionality grows with the estimated $\hat\omega_{j}$. As a result, larger $\hat\omega_{j}$ values inherently amplify the dissimilarity in SHD between the true and estimated edge arrays. This complexity makes direct comparisons challenging. Precision, recall, and F1 score, on the other hand, are intrinsically normalized within the range of [0,1], rendering them suitable metrics for the semi-stationary context. --- Rebuttal Comment 1.1: Title: Read your rebuttal Comment: Thank you, I went through your rebuttal question by question, or let's say answer by answer. I found many answers very useful for clarifying aspects of the original submission that were not that clear to me. However, I'm still not satisfied with the answer concerning the lack of baselines; I understand the authors' point, but I would have expected a more involved discussion of this aspect. Furthermore, if you describe experiments on real data where you do not have ground truth and you do not have knowledge about the domain, I still do not see how you can validate your findings. Then, the answer to Q6 is also puzzling: indeed, if you consider continuous variables, you always have that the variable X takes on a specific continuous value x with probability zero, so I miss the point. --- Reply to Comment 1.1.1: Comment: Thank you for your insightful comments and valuable questions. **Baselines**: Sorry we didn't write this before due to the limited space. Here is our detailed discussion. Indeed, several baseline methods are designed to handle non-stationary temporal data, such as CD-NOD [1] and JIT-LiNGAM [2].
However, it's important to note that the outputs of these baselines are **summary graphs**, allowing them to identify the causal relationship between time series $X^j$ and $X^i$ without pinpointing the specific time lag $\tau$. In contrast, our method enables the detection of the precise variable $X^j_{t-\tau} \in X^j$ that causes the behavior in $X^i_t$, a distinguishing feature. These approaches thus address different aspects than ours. While summary graphs offer a degree of insight, they are inherently less informative about the underlying causal structure in our specific case compared to the detailed window graphs produced by our method. To showcase our method's capacity to handle periodicity in causal mechanisms with time-lag effects, a capability lacking in other approaches, we will introduce a related baseline called "Regime-PCMCI" in our new experiment. It's worth noting that this algorithm requires a linear model (while our method is non-parametric) and additionally requires precise foreknowledge of the number of causal mechanisms within the time series. Remarkably, based on several experimental results we have, our algorithm's superior performance is sustained even when the accurate count of causal mechanisms is supplied to the "Regime-PCMCI" method. Given the considerable time requirement of the "Regime-PCMCI" method, we regret to say that the result plot is currently unavailable. However, we intend to incorporate this plot into the paper later. The significance of the comparison depicted in Fig. 3(a) with the baselines we have is apparent when considering the following points: 1. Our algorithm excels in the reduction of False Positive edges, as indicated by the higher Precision, particularly when compared to a stationary algorithm employed in a semi-stationary setting. This outcome effectively validates **Lemma 3.4**, providing a solid foundation for our approach. 2. As the sample size grows, a notable reduction in False Negative edges becomes evident, with the recall rate approaching that of PCMCI, which stands out as the best-performing among the baseline methods. 3. In cases where the ground truth involves periodic causal mechanisms, the misuse of a stationary algorithm leads to a significant amount of error, emphasizing the critical importance of employing an appropriate method, such as ours, to account for such complexities. As far as we know, our method is the first to address the semi-stationary Structural Causal Model (SCM) with the capability to accommodate periodicity. We hope that our approach can establish itself as a reference for those interested in periodic causal mechanisms. **Case Study**: Thank you for the comment. We will add the following discussion in the camera-ready paper to avoid potential misunderstanding regarding our conclusion in the case study. We cannot comment on whether the result of the case study is significant. We leave the door open for the related experts: if assumptions A1-A8 are satisfied, the stationarity assumption may not hold in this real-world data, and such periodicity exists. However, if the finding is not correct from an expert's viewpoint, the following assumptions may be violated: 1. **Hard Mechanism Change** combined with the limited power of CI tests: if there is a soft mechanism change in the dataset, the reliability of the CI test of two variables given their parents will be influenced by the skewed distribution of the parent variables.
This effect will be exacerbated by the fact that the sample size is shrunk by $\omega$. 2. **No Contemporaneous Causal Effects**: There is a possibility of potential causal effects from $X^{ta}_t$ to $X^{cp}_t$ that we are unable to capture in our analysis under this assumption. Our method provides a sound and robust (shown in the figures in the Rebuttal pdf file) algorithm for experts in various fields who are interested in validating the presence of periodicity within the causal mechanisms specific to their domain. **More details about $p(V)\neq 0$**: Suppose Y is a continuous random variable. p(Y) here is the probability density, which is always positive on the domain of Y. For example, this would be the case for any non-degenerate jointly Gaussian distribution. Thank you again, and we wholeheartedly invite further discussions. [1] Huang, Biwei, et al. "Causal discovery from heterogeneous/nonstationary data." The Journal of Machine Learning Research 21.1 (2020): 3482-3534. [2] Fujiwara, Daigo, et al. "Causal Discovery for Non-stationary Non-linear Time Series Data Using Just-In-Time Modeling." 2nd Conference on Causal Learning and Reasoning. 2023.
Summary: The paper describes a constraint-based causal discovery method for semi-stationary SCMs, where the causal structure changes periodically over time. This type of structure is relevant in real-life scenarios where, for example, seasonal or diurnal variation is present. The problem is rigorously formalized and an algorithm is given to reconstruct the SCM. As a first step, the algorithm identifies a superset of parents by using PCMCI naively; it then searches for the correct periodicity for all variables by minimizing parent set size. The authors provide theoretical guarantees that the reconstructed causal graph is indeed the generating graph in the infinite sample size limit, if all assumptions hold. Strengths: The paper describes a novel extension of PCMCI for semi-stationary SCMs and gives theoretical guarantees. The problem formulation is sufficiently rigorous, and the algorithm is described in sufficient detail. This makes the work reproducible. The method relaxes a quite frequent assumption of causal discovery methods in the temporal domain: stationarity. Even if this new relaxed assumption is also likely violated by many real-life datasets, it is clearly an improvement in cases where stationarity is plainly a wrong assumption. Weaknesses: The main limitation I see is the following: for example, if the causal effect changes by the day of the week but the samples are collected every hour, this periodicity is hard to represent in the presented framework. Given that the process's inherent timescale is hourly, we cannot just subsample the data, as this would induce transitive dependencies. However, in the current form, the method will try to fit a new causal mechanism in every hour (until the period is reached, of course). It would be good to know what happens if this type of data were fed to the algorithm, as this is hard to avoid. To avoid identifying transitive dependencies as genuine, we are motivated to work at as fine a timescale as available. A similar question applies to the "sudden" mechanism change. In real life, most likely the sudden change will transform into a smooth change as the sampling frequency increases (for example, a seasonal effect on a daily scale). Most likely, the effect of the Atlantic air temperature on the Central Pacific air temperature does not disappear suddenly. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: l257: **Following the previous work in Huang et al. [2020], F1 score, Adjacency Precision, and Adjacency Recall are used to measure the performance** Please specify in detail how you compute these metrics. Classical methods reconstruct a single causal mechanism per variable, while the ground truth has multiple. l259: **The standard error of the averaged statistics, displayed either by color filling or by error bars is usually too small to be observed.** Do you mean here the standard error of the mean estimate? You can use the sample variance instead. Fig 2a: on the horizontal axis, $\omega_{max}$ is shown. Has the code been executed with the same $\omega_{ub}$ for all cases, or was $\omega_{ub}$ smaller for smaller $\omega_{max}$ trials? In general, what hyperparameters (upper bounds, significance thresholds) were selected? What CI test was used? In the Case Study, if you have monthly data and the Central Pacific has 3 parent sets, it seems to me the causal effect is different every month; say the time partitions are {January, April, July, ..
} {February, May, August, ..} {March, June, September, ...}; how do you come to the conclusion that **"the causal effect from the tropical Atlantic air temperature to the Central Pacific air temperature would disappear every quarter of a year"**? I am not sure I understand this point. Minor comment: I would suggest using the terminology "discrete-valued" and "continuous-valued" instead of "discrete time series" / "continuous time series", as it can be mistaken for discrete and continuous time. How does the sampling frequency influence the reconstruction of the SCM? (See the weaknesses part.) **Rebuttal**: I have read the authors' rebuttal and we exchanged further comments. I still see the paper as having potential interest for the audience of this conference. I had one main open question, the result of the case study. Its result is unfortunately quite implausible, therefore I cannot further increase my score. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors point out some limitations of the method, for example the higher sample complexity of the method relative to vanilla PCMCI. It is not clear how realistic the present semi-stationarity assumption is. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We really appreciate the feedback. Regarding your questions, please see below for our responses. ## Q1. Sampling issue Thank you for this great question. Our proposed method can handle the "unchanged" causal mechanism as long as there exists a periodicity in how the causal mechanisms change over time. For instance, if the causal effect changes at the end of each day but the samples are collected every hour, then, considering each number as representing a distinct causal mechanism, the causal mechanism progression of $\mathbf{X}^{j}$ over time could be [1,1,...,1,2,2,...,2,3,3,...,3,4,4,...,4,...,7,7,...,7,1,1,...,1,2,2,...,2,3,3,...,3,4,4,...,4,...,7,7,...,7,...]. Even though the causal mechanism doesn't change at every time point (every hour, for example), the causal mechanism will eventually repeat itself every week, and the algorithm will treat the sequence [1,1,...,1,2,2,...,2,3,3,...,3,4,4,...,4,...,7,7,...,7] for one week as a single unit, eventually leading to a periodicity of 24*7=168 hours=7 days. In this situation, our proposed algorithm will have 168 correspondingly distinct time point subsets, where the 1st, 169th, and 337th instances of causal mechanism "1" will be included in the same time point subset, instead of having all instances of causal mechanism "1" in the same time point subset, as would happen for [1,2,3,4,5,6,7,1,2,3,4,5,6,7,...,1,2,3,4,5,6,7]. Hence, the drawback is that the effective sample size used in each MCI test in the first case will be 1/24 of the samples used in the second case. That is, the sample size used in MCI tests is shrunk. In our ongoing research, our goal is to overcome this limitation and use the samples more efficiently in the first case. ## Q2. "Sudden" change As for the "sudden" change in the causal mechanism, we can handle a "soft" change as long as the periodicity of the "soft" change mechanism is not a multiple of the periodicity of the "hard" change mechanism. For instance, suppose the causal mechanism progression of $\mathbf{X}^{j}$ over time is [1,2,3,1,2,3,1,2,3,...] and the change between causal mechanisms "1" and "2" is "soft". More specifically, suppose the only incoming edge of $X^j_t$ in causal mechanisms "1" and "2" is from $X^i_{t-1}$, but the causal effect between $X^i_{t-1}$ and $X^j_t$ in causal mechanism "1" is stronger than that in causal mechanism "2". This is a violation of **Assumption A5** (Hard Mechanism Change). Suppose that in causal mechanism "3", **Assumption A5** still holds. Our algorithm will treat this case the same as [1,1,3,1,1,3,1,1,3,...]. Because of causal mechanism "3", we will still have a periodicity of 3, and the time partition will still be correct. Even though there are no sudden changes between causal mechanisms "1" and "2", the corresponding samples will still be partitioned according to their causal mechanism, and then the causal effect will be estimated correctly. However, without causal mechanism "3", the algorithm will fail. On the other hand, if the periodicity of the "soft" change mechanisms is 2, as in [1,2,1,2,...], then the periodicity of the "hard" mechanism change is 1, as in [1,1,1,1,...], and the algorithm will therefore tell us that the periodicity is 1 instead of 2. Hence, the algorithm can handle a "soft" change as long as the periodicity of the "soft" change mechanism is not a multiple of the periodicity of the "hard" change mechanism. Thank you for this great question that inspires us to relax **Assumption A5** to handle "soft" changes in some cases. We will update this in the camera-ready version.
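To make the counting argument in Q1 and Q2 concrete, here is a tiny sketch computing the minimal periodicity of a known mechanism-label sequence. In the actual algorithm the labels are of course unobserved and $\omega$ is found via graph sparsity; this only illustrates why repeated mechanisms within a cycle still give a well-defined periodicity (the function and examples are ours).

```python
def minimal_period(mechanisms):
    """Smallest omega such that the mechanism sequence repeats every omega
    steps: mechanisms[t] == mechanisms[t + omega] for all valid t."""
    T = len(mechanisms)
    for omega in range(1, T + 1):
        if all(mechanisms[t] == mechanisms[t + omega] for t in range(T - omega)):
            return omega
    return T

# Hourly samples, mechanism changing by day of the week: periodicity 24*7 = 168.
weekly = [day for day in range(7) for _ in range(24)] * 4
print(minimal_period(weekly))          # -> 168

# A "soft" change between mechanisms 1 and 2 plus a hard change to 3 (Q2):
# the algorithm effectively sees [1,1,3,...], still with periodicity 3.
print(minimal_period([1, 1, 3] * 10))  # -> 3
```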
## Q3: The metrics. Thank you for the comment; we will include the explanation in the paper later. During our metric evaluation process, we compute the least common multiple of $\Omega$ and $\hat{\Omega}$, where $\Omega$ itself represents the least common multiple of all $\omega_{j}, j\in[n]$, as defined in Eq. 7. Designating this least common multiple as "lcm," we construct two binary arrays of edges with dimensions $[N, lcm, N, \tau_{max}+1]$ for both the true causal graph and the estimated graph. In these arrays, the value 1 signifies an edge connecting one variable to another with a specific time lag, while 0 indicates the absence of an edge. The metrics are subsequently computed based on the difference between these two arrays of edges. ## Q4: Standard error We appreciate your suggestion, and we will make the necessary updates accordingly. To maintain consistency, we will continue to utilize the standard error for the new experiments. These updates will be synchronized with the previous experiments in the camera-ready version. ## Q5: Hyperparameters and CI tests. We thank you for your comment, and we will incorporate these specific details into the main paper. For the continuous-valued time series, $\omega_{ub}=15$ for all cases no matter what $\omega_{max}$ is, and $\tau_{ub}=20$ for all cases. The CI test used for continuous-valued time series is a partial correlation test. For the discrete-valued time series, $\omega_{ub}=7$ and $\tau_{ub}=7$ for all cases. The CI test here is a conditional mutual information test. ## Q6: Conclusion in the case study The causal edge from $X^{ta}_{t-1}$ to $X^{cp}_t$ disappears in January, April, June, and October, while in the remaining months the edge exists. Therefore, we have an initial conclusion that the causal effect of the tropical Atlantic air temperature on the Central Pacific air temperature disappears every quarter of a year. ## Q7: "Discrete-valued" and "continuous-valued" time series We greatly appreciate your supportive comment, and we will revise the terminology accordingly to prevent any potential misunderstandings. --- Rebuttal Comment 1.1: Title: Reply to Rebuttal Comment: I would like to thank the authors for taking the time to answer my (and the other reviewers') questions. I am mostly satisfied with the answers and explanations provided. I only mention here the questions where I have an additional comment or follow-up question. **Q1:** Indeed, my worry was exactly the diminishing sample size per CI test. **Q3:** So I assume that if a method cannot handle non-stationarity ($\hat{\Omega} =1$), the reconstructed mechanism is repeated $\Omega$ times and compared to the ground truth. Am I correct? **Q5:** I misunderstood this in the original paper, but indeed it was a mistake on my side. However, now I see that this finding is indeed highly implausible. Are you aware of any assumption violation that would produce this kind of artifact? The new results provided in the PDF demonstrate that the method also seems to have a level of robustness on datasets where the assumptions are violated. --- Reply to Comment 1.1.1: Comment: We appreciate your insightful comments, which are truly inspiring. **A1**: We recognize that there's room for more effective sample utilization in the real-world application of our algorithm when dealing with periodic causal mechanisms that allow for the repetition of the same causal mechanism.
When there are only a few instances of the same causal mechanism, such as 2, reducing the samples per CI test by half might be reasonable in certain scenarios. However, if there's a substantial number of repeated causal mechanisms, such as 200 instances, it may be more appropriate to treat this issue as a change point detection problem, which is currently a topic of our ongoing research. If the repetition falls somewhere between being small and significantly large, we don't have a definitive solution at this point, but it presents an intriguing open question worth exploring. Your insightful question is greatly appreciated, and we intend to incorporate it into the discussion of limitations within our paper. **A3**: Yes. **A6**: We deeply appreciate your valuable input on the case study. We claimed in the paper that the significance of these results is under-explored, making your comment essential. There are specific assumptions that may be violated, as follows: **Hard Mechanism Change combined with the content in Q1**: If a soft mechanism change occurs in $X_{t-1}^{cp}$, the reliability of the CI test between $X_{t-1}^{ta}$ and $X_t^{cp}$ given $X_{t-1}^{cp}$ will be impacted by the skewed distribution of $X_{t-1}^{cp}$. It is further impacted by the fact that the sample size here is 900/3=300. **No Contemporaneous Causal Effects**: There's a possibility of potential causal effects from $X^{ta}_t$ to $X^{cp}_t$ at the same time point $t$ that we're unable to capture in our analysis. Thank you for raising these crucial points; we will include these comments in our case study section. In conclusion, we wholeheartedly invite further discussions.
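The metric computation described in Q3 of the exchange above can also be made concrete with a short sketch: tile both binary edge arrays along the time axis to their common period (the lcm), then compute precision/recall/F1 from the overlap. The shapes follow the rebuttal's $[N, lcm, N, \tau_{max}+1]$ convention; everything else (function names, the toy example) is an illustrative assumption.

```python
import numpy as np
from math import lcm

def expand_edges(edges, period, target_period):
    """Tile a binary edge array [n, period, n, tau_max+1] along the time axis
    so that true and estimated graphs share a common period (their lcm)."""
    reps = target_period // period
    return np.tile(edges, (1, reps, 1, 1))

def edge_metrics(true_edges, true_period, est_edges, est_period):
    L = lcm(true_period, est_period)
    t = expand_edges(true_edges, true_period, L).astype(bool)
    e = expand_edges(est_edges, est_period, L).astype(bool)
    tp = (t & e).sum()                       # edges present in both arrays
    precision = tp / max(e.sum(), 1)
    recall = tp / max(t.sum(), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f1

n, tau_max = 3, 2
truth = (np.random.default_rng(3).random((n, 3, n, tau_max + 1)) < 0.2).astype(int)
est = expand_edges(truth, 3, 6)              # a perfect estimate with period 6
print(edge_metrics(truth, 3, est, 6))        # -> (1.0, 1.0, 1.0)
```

Because precision, recall, and F1 are normalized to [0, 1], they remain comparable when $\hat\Omega$ and hence the lcm grows, which is the point made against SHD in the earlier Q13 answer.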
Summary: ## Summary The paper addresses the problem of discovering causal relations in semi-stationary time series data, where a finite number of causal mechanisms occur sequentially and periodically across time. The authors propose a constraint-based, non-parametric algorithm called $PCMCI_\Omega$ to handle this type of time series data. The main contributions are 1. The proposed algorithm, PCMCIΩ, is an extension of the PCMCI algorithm designed for stationary time series data. It leverages the PCMCI algorithm to systematically discover the superset of parents and uses CI tests to find the correct periodicity and remove unnecessary parents. 2. The soundness of the PCMCIΩ algorithm is demonstrated through a theoretical analysis, showing that it can recover the true causal graph under specific assumptions. 3. The authors validate the effectiveness of the PCMCIΩ algorithm through experiments on both continuous and discrete synthetic time series data, although I think the baselines could be stronger. Strengths: Strengths of the paper: 1. Originality and significance: The paper proposes a novel algorithm, PCMCIΩ, to address the challenging problem of causal discovery in semi-stationary time series data. Few works have considered such settings, and most previous methods assume stationarity. This is a step towards generalising causal discovery to non-stationary time series. 2. Soundness: This work provides theoretical guarantees on the soundness of the proposed method and validates it with experiments. Weaknesses: The main weaknesses of this paper are 1. Limited scope of experiments: The authors have performed experiments on synthetic continuous and discrete time series data, as well as a real-world climate dataset; however, the baseline selection is too simple. For example, VARLiNGAM and DYNOTEARS are both linear models with stationarity assumptions, and the proposed method is expected to outperform those two methods. I suggest including stronger baselines, such as Granger causality baselines [1] and SCM-based methods [2]. 2. The paper was hard to understand when I first read it. There are many definitions introduced in Section 2 without it being clear why they are important. The graphical illustration helps a little, but it still confuses me as to why we have to define them. For example, I am not sure I understand why illusory parent sets require a subscript: why do we need $Pa_2(X_7^1)$ and $Pa_3(X_7^1)$? Isn't it the same as $Pa_1(X_8^1)$? Since the following theoretical analysis extensively uses these definitions, I recommend the authors be very clear about these definitions and make them easier to understand for the readers. 3. How does your method perform when there are mismatches in the assumptions? For example, when the ground truth is stationary/non-stationary, or when the lag upper bound and period upper bound are shorter than the truth, etc. More ablation studies would be helpful. 4. What are the potential limitations of the proposed method? Some discussion should be added. For example, instantaneous effects can be important when aggregation happens; computational complexity with dimensionality, etc. [1] Khanna, S., & Tan, V. Y. (2019). Economy statistical recurrent units for inferring nonlinear granger causality. arXiv preprint arXiv:1911.09879. [2] Gong, W., Jennings, J., Zhang, C., & Pawlowski, N. (2022). Rhino: Deep causal temporal relationship learning with history-dependent noise. arXiv preprint arXiv:2210.14706. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1.
I am a bit confused about line 123: why does the minimum periodicity have the same symbol as the number of causal mechanisms in $V$? 2. In Definition 2.3, Eq. 8, the subscript $k$ does not appear in the subsequent equations. What does this $k$ mean? 3. Can you make the definition of illusory parent sets easier to understand and explain the intuition behind it? 4. In Figure 1, why is $Z_2^1$ identical to $Z_1^7$? I thought $Z_2$ defines the Markov chain for the second time series. Also, you use a superscript in $X$ to indicate the time series number; I think $Z$ should follow the same pattern to avoid confusion. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors do not explicitly provide a discussion of the limitations. Hence, I suggest the authors include a limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Your insights are greatly appreciated as we work towards enhancing the robustness of our method. Regarding your questions, ## Q1. Scope of experiments To the best of our knowledge, there is no non-parametric algorithm that can handle periodicity in causal mechanisms and discover the time-lag effect with a window causal graph. For the Granger causality baselines [1], the estimated causal graph is a summary causal graph, while the output of our proposed algorithm is an estimated window causal graph. They highlight distinct facets; summary graphs are less informative than window graphs about the underlying causal structure in our case. For the SCM-based method [2], we have contacted the authors and will conduct the experiments as soon as we get the code. Based on your valuable suggestions, we conducted more experiments in the nonlinear setting with the baseline method "tsFCI." In the new experiments, shown as **Fig. 1(b)** in the Rebuttal pdf file, the SCMs are non-linear and the proposed algorithm performs well. As for the "non-stationary" assumption, to showcase our method's capacity to handle periodicity in causal mechanisms with time-lag effects, a capability lacking in other approaches, we will introduce a related baseline called "Regime-PCMCI" in our new experiment. It's worth noting that this algorithm requires a linear model and additionally requires precise foreknowledge of the number of causal mechanisms within the time series. Remarkably, based on several experimental results we have, our algorithm's superior performance is sustained even when the accurate count of causal mechanisms is supplied to the "Regime-PCMCI" method. Given the considerable time requirement associated with the "Regime-PCMCI" method, we regret to say that the result plot is currently unavailable. However, we intend to incorporate this plot into the paper at a later stage. ## Q2. Visualization of definitions We appreciate your suggestion, and we will incorporate additional figures that illustrate each definition separately into the supplementary material. ## Q3. Ablation study We have conducted new experiments for the ablation study. Our proposed algorithm can handle a stationary SCM, as a stationary SCM is a special case of the semi-stationary SCM with $\Omega=1$. In a non-stationary setting without periodicity, shown as **Fig. 1(c)** in the Rebuttal pdf file, the proposed method performs slightly better in terms of F1 score and precision; however, the recall rate is the worst. If $\omega_{ub}<\omega_{max}$, the performance of the proposed algorithm decreases, but it can still detect a sparser graph, shown as **Fig. 1(d)** in the pdf file. ## Q4. Limitations **Assumption A4** (No Contemporaneous Causal Effects) is needed for the underlying Markov chains of the time series; see line 39 in the Appendix. The worst-case complexity of our algorithm is $O(n^{3}\tau_{ub}^{2})+O(\omega_{ub}^2 n^{2}\tau_{ub})$. The runtime of the computation is further influenced by the dimensionality of the conditioning set in CI tests. This set's maximal size is $2+|Pa(X^j_t)|+|Pa(X^i_{t-\tau})|$, coupled with the time series length $T$. Refer to Section 5.1 in the work by Runge et al. [2019] for more details. ## Q5. Minimum Periodicity is the number of causal mechanisms. As defined in Eq. 4 in **Definition 2.2**, the causal mechanisms of $\mathbf{X}^{j}$ follow a sequential and periodic pattern, occurring every $\omega$ time points.
The smallest value of $\omega$ that satisfies this condition is termed the periodicity of $\mathbf{X}^{j}$, which necessitates $\omega$ causal mechanisms. To illustrate, considering each number as representing a distinct causal mechanism, the causal mechanism progression of $\mathbf{X}^{j}$ over time could be [1,2,3,1,2,3,1,2,3,...]. In this instance, $\mathbf{X}^{j}$ comprises 3 distinct causal mechanisms, which cyclically repeat every 3, or 6, or 9, ..., or $3N, N\in \mathbb{N}^{+}$ time points. The smallest value of $3N$ here is 3, which is the number of causal mechanisms. However, it's important to note that these $\omega$ causal mechanisms need not be entirely distinct. For instance, the causal mechanism progression of $\mathbf{X}^{j}$ over time could be [1,1,2,1,1,2,1,1,2,...]. In this scenario, the algorithm treats the sequence [1,1,2] as a single unit, effectively resulting in a periodicity of 3 while containing only 2 distinct causal mechanisms. To maintain simplicity, we still refer to this as having 3 causal mechanisms. ## Q6. Updated Definition 2.3 *Time Partition* The updated definition should be as follows. A time partition $\Pi^{j}(T)$ of a univariate time series $\mathbf{X}^{j}$ in a Semi-Stationary SCM with periodicity $\omega_{j}$ is a way of dividing all time points $t\in[T]$ into a collection of non-overlapping, non-empty subsets $\Pi^{j}_{k}(T), k\in [\omega_j]$ such that $\Pi^j_k(T):=\{t:\tau_{\max}+1 \leq t \leq T, (t \bmod \omega_j)+1=k \}$, where $\bmod$ denotes the modulo operation. For instance, $5 \bmod 3=2$. In this context, by gathering all time points $t$ whose corresponding variables $X^j_t$ share the same causal mechanism, we can form $\omega_{j}$ distinct subsets of time points, denoted as $\Pi^j_k(T)$, where $k$ ranges from 1 to $\omega_{j}$. ## Q7. Markov chains The discrete-time Markov chains $\{Z^q_n\}$, where $q\in [\delta]$ denotes the index of the Markov chain and $n$ denotes the state index within each Markov chain, are defined for the entire multivariate time series $V$ and are not defined for the individual univariate components $\mathbf{X}^{j}\in V$. When a specific value of $q\in [\delta]$ is chosen, the sequence $\{Z^q_n\}_{n\in \mathcal{N}}$ in **Definition 2.5** constitutes a Markov chain, with $n$ serving as the progressing index. In Fig. 1, there is no $Z^7_1$ because there are only six Markov chains associated with $V$, and $Z^1_2$ denotes the second state within the first Markov chain. --- Rebuttal 2: Comment: Thanks for the additional experiments provided by the authors. They have managed to address my concerns, and I hope the revised version will have a clearer presentation. I will keep my current score. --- Rebuttal Comment 2.1: Comment: We appreciate your feedback, and we're pleased that the experimental results, guided by your insightful suggestions, have effectively addressed your concerns. We are going to refine the presentation to enhance the clarity of definitions in the forthcoming version. Thank you once again!
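The updated Definition 2.3 above translates directly into code; the following sketch builds the time-partition subsets $\Pi^j_k(T)$ via the stated modulo rule (function and variable names are ours).

```python
def time_partition(T, tau_max, omega_j):
    """Pi^j_k(T) = { t : tau_max+1 <= t <= T, (t mod omega_j) + 1 == k },
    for k = 1..omega_j (updated Definition 2.3)."""
    return {k: [t for t in range(tau_max + 1, T + 1)
                if (t % omega_j) + 1 == k]
            for k in range(1, omega_j + 1)}

parts = time_partition(T=20, tau_max=2, omega_j=3)
for k, subset in parts.items():
    print(k, subset)
# k=1 collects t with t mod 3 == 0: 3, 6, 9, ...; within each subset, all
# variables X^j_t share the same causal mechanism.
```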
Summary: The authors propose a causal discovery algorithm for a class of periodic random processes called semi-stationary processes. The algorithm is built upon an existing algorithm that assumes stationarity. Consistency results for the algorithm are provided. Strengths: 1. Consistency results for the algorithm are shown, even though the algorithm is relatively complicated. 2. The algorithm shows superior performance against the baselines. Weaknesses: 1. The motivation for generalizing PCMCI to handle semi-stationary processes is not clear, especially from a theoretical perspective. The description of PCMCI is moved entirely to the supplementary material. I think including a brief description and the key procedures (e.g., how the superset is estimated) would be helpful for understanding why PCMCI is a good fit for the considered setting. 2. As mentioned in Section F of the Appendix, the proposed algorithm does not yield good performance in finite samples until the heuristic method *turning points* is introduced. To support the theoretical results, it should be shown that the algorithm starts to work when the sample size is sufficiently large. Typo: Definition 2.3 is not correct: $\Pi_{k}^{j}(T)$ depends on $k$, but its definition in (8) does not depend on $k$. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Since PCMCI assumes stationarity, why, in Algorithm 1, is PCMCI used to estimate the superset $\widehat{SPa}(X_{t}^{j})$ for semi-stationary processes? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We really appreciate the feedback. Regarding your questions, ## Q1. Description of PCMCI Thank you for the suggestion. In the final version of the paper, we will provide a concise overview of how PCMCI works, utilizing the additional page available in the camera-ready version. Here is a brief overview. PCMCI has two stages: the condition-selection stage and the causal discovery stage. In the first stage, starting from an initialized, partially connected graph, unnecessary edges are removed based on conditional independencies. In the second stage, Momentary Conditional Independence (MCI) tests are used to further remove false-positive edges caused by autocorrelation in the data. ## Q2. Relevance of PCMCI After running PCMCI on the whole multivariate time series with a Semi-Stationary SCM, the parent set $\widehat{SPa}(X^{j}_{t}), j\in [n]$, obtained for each variable (line 2 in **Algorithm 1**) should be a superset of its true parent set for any time point $t\in [T]$, as proved by **Lemmas 3.2-3.3**. For a guess $\hat{\omega}_{j}$, we can construct a corresponding series of time partition subsets. A series of MCI tests is conducted on samples whose time points are from the same time partition subset. Based on the results of the MCI tests, variables in $\widehat{SPa}(X^{j}_{t})$ are removed according to the detected conditional independencies. If $\hat\omega_{j}=\omega_{j}$, the samples used in each MCI test come from the same causal mechanism, and $\widehat{SPa}(X^{j}_{t})$ will shrink to the true parent set under assumptions **A1-A7** with an oracle (infinite sample size limit). If $\hat\omega_{j}\neq\omega_{j}$, however, the samples in the MCI tests come from different causal mechanisms, which introduces additional causal relations under assumption **A6**. Hence, according to the MCI test results, $\widehat{SPa}(X^{j}_{t})$ will shrink (or fail to shrink) to a wrong parent set whose size is larger than that of the true parent set, leading to a denser causal graph, as proved by **Lemma 3.4**. Hence, the guess $\hat\omega_{j}$ that results in the sparsest causal graph equals the correct periodicity $\omega_{j}$. Based on the consistency of the PCMCI algorithm and our proofs of **Lemmas 3.2-3.4**, the proposed algorithm PCMCI$_\Omega$ is sound under assumptions **A1-A7** with an oracle (infinite sample size limit). ## Q3. PCMCI$_\Omega$ without *turning point* Thank you for this insightful comment; we will add the related discussion to the camera-ready paper. We have conducted new experiments without using the *turning point*, so that $\hat\omega_{j}$ is chosen according to the original rule stated in line 12 of **Algorithm 1**. The results show that the algorithm performs similarly with and without the *turning point*. As shown in **Fig. 1(a)** of the rebuttal PDF, PCMCI$_\Omega$ without the *turning point* yields a slightly larger standard error at smaller sample sizes. As the time length $T$ increases, the performance of the algorithm without the *turning point* increases consistently and is even slightly better than that of PCMCI$_\Omega$ with the *turning point*. The consistent performance of PCMCI$_\Omega$ under different selection rules for $\omega$ supports our theoretical result, namely that the correct periodicity $\omega$ leads to the sparsest causal graph. ## Q4. Updated Definition 2.3 *Time Partition* Thank you for pointing this out. The definition in the main paper is not precise. The updated definition should be as follows.
A time partition $\Pi^{j}(T)$ of a univariate time series $X^{j}$ in a Semi-Stationary SCM with periodicity $\omega_{j}$ is a way of dividing all time points $t\in[T]$ into a collection of non-overlapping, non-empty subsets $\Pi^{j}_{k}(T), k\in [\omega_j]$, such that $\Pi^j_k(T):=\\{t:\tau_{\max}+1 \leq t \leq T, (t \bmod \omega_j)+1=k \\}$, where $\bmod$ denotes the modulo operation. For instance, $5\bmod 3=2$. (A small code sketch of this construction follows this thread.) --- Rebuttal Comment 1.1: Title: Reply to the rebuttal Comment: Q3: It is good to see that the algorithm does not heavily rely on turning points. This addresses my major concern about the algorithm. Based on this, I would raise my score slightly. Q1: I am still quite skeptical about using PCMCI for estimating $\hat{SPa}(X_{t}^{j})$ in finite samples, since PCMCI is proposed for stationary processes rather than semi-stationary processes. The proofs of Lemmas 3.1 and 3.2 assume consistent CI tests, so I think the theoretical guarantees are quite limited. But I understand that it is quite challenging to obtain stronger results. Overall, I think the studied setting is interesting and the method has some novelty. But since the semi-stationary processes defined in this paper have not been studied in the literature, it is not clear whether the class is general enough to model real-world periodic random processes. Therefore, experiments on real data with ground truth would make the method more convincing. --- Reply to Comment 1.1.1: Comment: **Q3**: Thank you for the reconsideration. **Q1**: Thank you for these valuable comments. The CI tests in both PCMCI and our algorithm are assumed to be consistent given i.i.d. samples. We do not assume the consistency of CI tests with respect to semi-stationary data. Therefore, any CI test that is consistent for i.i.d. samples can be seamlessly integrated into our algorithm. This applies not only during the initial PCMCI phase but also in the subsequent step of our algorithm, where the Momentary Conditional Independence (MCI) tests come into play. For instance, CI tests with proven consistency, such as Chi-square tests or Fisher's exact tests, can be employed in both PCMCI and our algorithm. This is feasible because, in both cases, variables are conditioned on parent sets or supersets of parent sets, ensuring the independence of samples due to the independence of the exogenous noise terms. By choosing a correct $\omega$, conditioning on a superset of the parent set yields identically distributed samples from the same causal mechanism. An incorrect $\omega$ results in samples from a mixture distribution, as outlined in Eq. 50 of the supplementary material; hence it may introduce more dependence relations in the mixture distribution, as supported by **Lemma 3.2**. Once again, we express our heartfelt appreciation for your insightful feedback. We hope that our algorithm serves as an innovative tool, harnessing expertise from various domains to unveil the latent periodic patterns within causal mechanisms across real-world situations.
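To make the updated definition and the sparsest-graph selection rule above concrete, here is a minimal Python sketch. It is illustrative only, not the authors' implementation: `run_mci_tests` is a hypothetical callable standing in for the MCI step (returning the number of surviving parent edges), and the edge-counting rule mirrors the sparsest-graph criterion of **Lemma 3.4**.

```python
def time_partition(T, omega, tau_max):
    """Pi^j_k(T) = {t : tau_max + 1 <= t <= T, (t mod omega) + 1 == k}."""
    return {k: [t for t in range(tau_max + 1, T + 1) if (t % omega) + 1 == k]
            for k in range(1, omega + 1)}

# Example: T=12, omega_j=3, tau_max=2 gives
# {1: [3, 6, 9, 12], 2: [4, 7, 10], 3: [5, 8, 11]}
print(time_partition(T=12, omega=3, tau_max=2))

def estimate_omega(candidate_omegas, T, tau_max, run_mci_tests):
    """Return the candidate omega whose partition yields the sparsest graph,
    i.e., the fewest parent edges surviving the per-subset MCI tests."""
    best_omega, fewest_edges = None, float("inf")
    for omega in candidate_omegas:
        edges = sum(run_mci_tests(subset)
                    for subset in time_partition(T, omega, tau_max).values())
        if edges < fewest_edges:
            best_omega, fewest_edges = omega, edges
    return best_omega
```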
Rebuttal 1: Rebuttal: Dear reviewers, We extend our sincere gratitude for dedicating your time to meticulously assess our submission and offer invaluable insights. As the initial rebuttal phase comes to an end, we believe that our responses effectively address the raised points of concern. We have attached a one-page PDF with new experimental results. We remain fully open to continued discussion throughout the forthcoming discussion phase. In light of our responses and the implemented revisions, we kindly ask you to reconsider your evaluation. Warm regards, Authors Pdf: /pdf/14636cbb277954fc6e9775128aeb3c688f917a2a.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Self-Correcting Bayesian Optimization through Bayesian Active Learning
Accept (poster)
Summary: The paper proposes a novel acquisition function, SAL, for active learning based on distributional disagreement. The authors further incorporate the SAL function into Bayesian optimization (BO) to improve BO performance. Most numerical experiments are conducted on synthetic benchmark functions. Better performance is achieved by the proposed method compared to several baselines. Strengths: Overall, the paper is easy to follow despite several points of confusion. Weaknesses: Lack of real-world experiments. I would suggest the authors be precise about their symbols. I have some questions about the technical details. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In general, I grasp the main idea of this paper. However, I suggest that the authors distinguish among three different things: (i) the true underlying function $f(x)$, (ii) the observed noisy function value $f(x)+\epsilon$, and (iii) the GP model $GP(x)$, which is an approximation to $f(x)$. The authors should carefully examine the usage of each symbol throughout their paper and ensure that they properly distinguish between (i)-(iii). Currently, the authors sometimes misuse these symbols, which hinders the understanding of the paper. Below, I have listed several questions and would appreciate it if the authors could provide answers. This will help me gain a better understanding and assessment of the paper. 1. The authors state that the mean function $m(\cdot)$ is set to a constant. Why is this constant not included in the set of hyperparameters? 2. The numerical experiments primarily focus on benchmark functions, with the exception of the DNA classification task and the cosmological estimation task. It would be beneficial to include more real-world experiments. For example, Bayesian optimization is commonly used for tuning network parameters in AutoML. It would be valuable to add experiments of that kind. 3. What is the specific form of $p(\theta|D)$ in step 4 shown in Algorithm 1? 4. What is $p(f|\theta,D)$ in Step 6? And what does Step 6 do? The symbol $f$ is used to represent the underlying function in equation (1). I am confused about how to sample an $f$ from $p(\cdot|\theta,D)$. Since $f(\cdot)$ represents the underlying function and does not depend on the GP model, I assume that $p(f|\theta,D)$ means sampling a GP instance given $\theta$ and $D$. If this is the case, I suggest the authors use another symbol instead of $f$ here. 5. One advantage of Bayesian optimization is that its acquisition function does not require calling the real function. Instead, it only needs predictions based on the GP (e.g., EI = mean of GP - std of GP). Thus, Bayesian optimization can save on function evaluations. Typically, Bayesian optimization is employed in situations where observing the function value $y_x$ at a specific $x$ is time-consuming or expensive. However, I am confused here. Does $y_x$ represent the observed function value at $x$ according to equation (1)? Does this mean that in order to evaluate equation (9), we actually need to simulate or obtain $y_x$ at a specific $x$? In other words, does the proposed acquisition function (9) require the observation of $y_x$? If so, it seems to me that the method is not valid, as it sacrifices the fundamental benefit of Bayesian optimization. 6. Continuing from the previous question, how do we evaluate the distribution value $p(y_x|\cdots)$ in Step 7? Moreover, how do we evaluate $p(y_x|D)$ in (9)? 7.
What is the difference between $p(y_x|\theta,D)$ used in equation (7) and $p(f|\theta,D)$ used in Step 6? Are they different in terms of considering $\epsilon$ or not (i.e., in equation (1), $y_x=f(x)+\epsilon$)? 8. I am confused about the conditioning on $(x^\star,f^\star)$. What do the authors mean by $(x^\star,f^\star)$? Are they the true optima of the underlying function $f$, or are they the optima of a GP model? 9. The considered numerical experiments all seem rather low-dimensional. 10. I am concerned about the insufficient technical contribution. It appears to me that the main contribution could be summarized in one equation, Eq. (7). Previous similar papers usually also provided theoretical guarantees (e.g., the JES paper). Solely a new function form seems rather insufficient for a standalone paper. I would appreciate it if the authors could justify their contribution. **Writing suggestions** 1. I assume "HP" in Figure 1 means "hyper-parameter." However, at first sight, "HP" is confusing, especially since many terms share the abbreviation "HP." I suggest that the authors write the full name when it first appears. 2. Line 72 contains a typo: "$\epsilon\sim N(0,\sigma^2_\epsilon)$" instead of "$\epsilon^2$." 3. Keep the symbol format consistent: In line 76, $\theta$ is not bold, while in line 80, $\theta$ is bold. 4. The Hellinger distance has the general form shown in equation (4). Why is there no general form provided for the Wasserstein distance? 5. In line 221, "robust. than" needs to be revised for clarity. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors provided limitations in the last section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough work reviewing the paper. We have added additional experiments, and hope to clarify the notation-induced confusion. __0)__ _Distinguish among three different things: $f(x), f(x) + \epsilon, GP(x)$_ Using the notation $f(x)$ to denote both the true black-box function _and_ a random variable modeled by the GP can be confusing. __This is, however, standard practice in the BO literature__ [Hernandez-Lobato et al., 2014; Wang and Jegelka, 2017; Hvarfner et al., 2022; Takeno et al., 2022] and we are hesitant to break this convention. To clarify: - $p(f |D)$ is the distribution over functions that the GP induces, and a sample drawn from this distribution is a function $f_i \sim p(f |D)$ (as in Alg. 1 step 6). - $y_x$ (read: $y$ at $x$) is a random variable representing the noisy output at $x$: $y_x = f(x) + \epsilon$. Its __predictive posterior under the GP__ is $p(y_x|D)$. Hopefully, this clarifies our notation to the reviewer. __1)__ This is well spotted by the reviewer. We do indeed marginalize over the mean (Appendix A), but we should have declared it as a HP in Section 2.1. This is part of the HPs that SAL and SCoreBO learn. We will update the CR. __2)__ We have added __three (4D) HPO tuning tasks__ from the PD1 benchmarking suite — two involving large language models and one from computer vision. We adopt the same setting as in HEBO [Cowen-Rivers et al., 2020], where input and output warpings [Snoek et al., 2014] are used to account for the heteroskedasticity prevalent in real HPO tasks. We report the results in the PDF material submitted with this rebuttal and observe that __SCoreBO outperforms the other methods on 2 out of 3 tasks__, placing second on the third. __3)__ $p(\theta|D)$ is the posterior over the HPs given the data. We can only sample from it through MCMC since it does not have a closed-form expression. __4)__ $p(f |\theta, D)$ is the distribution over functions that the GP with HPs $\theta$ induces. Step 6 draws a sample function $f_i$ from $p(f |\theta, D)$, then maximizes it to obtain a __simulated optimal location__ and value $(x^*, f^*)$. To obtain $f_i \sim p(f |\theta, D)$, we use the decoupled sampling approach of Wilson et al. (2020). When that sample function $f_i$ has been obtained, we use gradient-based optimization to find $({x_i}^*, {f_i}^*)$. The explanation in line 173 will be expanded in the CR. __5)__ Throughout the paper, the observations are denoted by $D = (x_i, y_{x_i})_{i=1}^N$. The random variable $y_x$ denotes a not-yet-observed random variable (modeled by the GP) at the input location $x$. SAL and SCoreBO __do not need to query the black-box function to evaluate the acquisition function__ (that would indeed invalidate all that Bayesian optimization stands for). As mentioned previously, $p(y_x|D)$ is the distribution of the function values at $x$, so (9) evaluates the disagreement between the distributions of the output at $x$. __6)__ As $y_x$ is the random variable representing the output at $x$, $p(y_x|D)$ is the predictive posterior distribution of the GP at the input location $x$. __7)__ The random variable $f$ is noiseless, whereas $y_x$ is noisy. However, more importantly, $p(f|D, \theta)$ is the GP-induced distribution over _functions_, whereas $p(y_x|D, \theta)$ is the predictive distribution of the noisy output _at one specific location_ $x$. __8)__ $(x^*, f^*)$ are random variables denoting the optimal value and its location, using the same notation as Hvarfner et al. (2022).
These are samples, obtained as described in (4). __9)__ The general trend in high-dimensional tasks is that the model is tailored to take into account the high-dimensional setting, such as in the authors' AddGP and SAASBO experiments, but also in methods such as REMBO, ALEBO and BAxUS. In this context, the SCoreBO experiments span a comprehensive range of dimensionalities, including 8D, 11D, 25D and 180D. __10)__ While the extension of B-QBC into SAL, which is the essence of Eq. (7), is in itself a significant contribution, it is only a building block in the overall body of work. The paper further __presents the first strategy for joint optimization and active learning of HPs__. AL and BO have historically been considered two different domains, and there has been little effort to integrate elements of active learning into BO. SCoreBO proposes a novel method to achieve this by incorporating Entropy Search concepts into the SAL objective. The estimation schemes conditioned on the HPs and the optimizers should also be considered a technical contribution. The need for the aforementioned joint objective is made clear by the results in Fig. 7 and Fig. 8, where the active dimensions of high-dimensional problems must be learned for the optimization to be effective. In Fig. 1 of the rebuttal PDF, we show the HP convergence of SCoreBO and JES for 25D Ackley with 4 active dimensions. Clearly, JES fails to identify the active dimensions, whereas SCoreBO rapidly succeeds. The SCoreBO strategy is fundamentally different from that of JES because it __trades off HP learning and optimization in a principled manner__. JES, however, only does the latter. On the lack of theoretical guarantees, the authors would like to point out that __the JES papers do not provide any convergence guarantees__ (Hvarfner et al., 2022; Tu et al., 2022), unfortunately. Furthermore, the MES proof (Wang and Jegelka, 2017) is highly contested (Takeno et al., 2022). ____ Lastly, we thank the reviewer for the suggestions, which will be added to the CR. The Wasserstein distance does have a closed form for Gaussians, which we will also add, as suggested. We hope that the clarifications regarding the contributions and the requested supplementary experiments have enhanced the reviewer's perception of our work. If that is the case, we would appreciate it if the reviewer increased their rating. If any ambiguities persist, we are happy to clarify further. --- Rebuttal Comment 1.1: Title: Thanks for the clarifications Comment: I would like to express my sincere appreciation to the authors for taking the time to address my previous concerns and clarify my misunderstandings regarding this paper. I have several follow-up questions. 1) Could the authors provide clarifications on the highest dimension of the experiments discussed in the paper? 2) With regard to the JES paper authored by Ben Tu et al., I understand that the provided proofs are not intended to establish a definitive convergence rate. But it is good to know some theoretical properties of JES; the same holds for this paper. 3) Could the authors give intuitive reasons why the proposed method works and performs better than the other methods? --- Reply to Comment 1.1.1: Title: Further response Comment: Thanks to the reviewer for their continued engagement. We hope to clarify the unique strengths of our proposed method and address the additional questions below: --- > __1.__ Could the authors provide clarifications on the highest dimension of the experiments discussed in the paper?
--- The highest-dimensional benchmark in the experiments is the Lasso-DNA real-world task with 180 variables. The problem consists of finding the hyperparameters of a weighted Lasso model for a microbiology classification task (Mills, 2020). This benchmark has been considered in several papers on high-dimensional hyperparameter optimization (Šehić et al., 2022; Papenmeier et al., 2022; Ziomek and Bou-Ammar, 2023). As traditional BO methods are inefficient at such high dimensions, we adopt the Sparse Axis-Aligned Subspace hyperparameter prior (SAAS; Eriksson and Jankowiak, 2021), which encourages the model to disregard inactive dimensions. This approach was used for Lasso-DNA in Papenmeier et al. (2022). SCoreBO is able to infer the effective subspace under the SAASBO prior within tens of iterations by actively learning the model while optimizing, whereas conventional acquisition functions fail to infer the effective subspace at all. As such, SCoreBO achieves a 25% performance increase over JES and EI, relative to the initial design configurations in Fig. 7. --- > __2.__ With regard to the JES paper authored by Ben Tu et al., I understand that the provided proofs are not intended to establish a definitive convergence rate. But it is good to know some theoretical properties of JES; the same holds for this paper. --- We agree with the reviewer that a deeper theoretical contribution in this area would be very valuable, as the broader set of BO acquisition functions with fully Bayesian treatment of the hyperparameters lacks such insights. SAL was developed with B-QBC in mind, but generalizes B-QBC to allow a broader class of disagreement metrics than just the difference in means. As such, SAL encompasses a broad class of possible AL methods. A theoretical connection with existing methods came through the discussion with reviewer LR3N: SAL with the KL divergence used as the distance measure is equivalent to the popular BALD active learning method. We prove this connection theoretically in our discussions with reviewer LR3N.
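For readers following this exchange, here is a minimal numpy sketch of the sample-based BALD quantity mentioned above, $I(y;\theta\mid x) = H[p(y\mid x)] - \frac{1}{M}\sum_i H[p(y\mid x,\theta_i)]$, where the marginal $p(y\mid x)$ is the equal-weight Gaussian mixture over hyperparameter samples. The means and standard deviations are made-up illustration values, and the mixture entropy is estimated by plain Monte Carlo; this is not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
mus = np.array([0.0, 0.4, -0.3])    # per-theta predictive means at x (assumed)
sigmas = np.array([0.5, 0.8, 0.6])  # per-theta predictive stds at x (assumed)

def gaussian_entropy(sigma):
    """Differential entropy of N(mu, sigma^2)."""
    return 0.5 * np.log(2 * np.pi * np.e * sigma**2)

def mixture_logpdf(y):
    """log p(y|x) for the equal-weight Gaussian mixture over theta samples."""
    comps = (-0.5 * ((y[:, None] - mus) / sigmas) ** 2
             - np.log(sigmas * np.sqrt(2 * np.pi)))
    return np.logaddexp.reduce(comps, axis=1) - np.log(len(mus))

# Monte Carlo estimate of H[p(y|x)]: sample from the mixture, average -log p.
idx = rng.integers(len(mus), size=20_000)
y = rng.normal(mus[idx], sigmas[idx])
H_marginal = -mixture_logpdf(y).mean()

bald = H_marginal - gaussian_entropy(sigmas).mean()
print(f"BALD estimate at x: {bald:.4f}")
```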
Summary: The paper introduces a technique called statistical distance-based active learning (SAL) for the purpose of learning the hyperparameters of a Gaussian process. Additionally, SAL is integrated with information-theoretic Bayesian optimization (BO) in a framework referred to as self-correcting BO (SCoreBO). This integrated approach enables simultaneous learning of GP hyperparameters and optimization of the black-box function. The experimental results demonstrate the enhanced performance of SCoreBO compared to traditional benchmarks, including unconventional BO tasks. Strengths: The paper introduces new active learning algorithms that leverage the Query-by-Committee strategy and utilize two statistical distances. It also addresses the practical challenge of simultaneously learning hyperparameters while optimizing a black-box function, which holds significant importance in real-world applications. Weaknesses: The main weakness of the paper is an inadequate literature review, resulting in major concerns about the novelty of the proposed problem and solution and about the experimental results. 1. Although BALD is widely recognized as a highly effective active learning strategy (e.g., employed in PES, MES, JES), it is surprising that BALD is not considered as a baseline in the active learning experiments. Conversely, the other baselines employed in the active learning experiments are relatively weak, such as utilizing only the posterior variance (BALM), the posterior mean (BQBC), or simply adding BALM and BQBC. 2. Equation (7) (the proposed SAL) is essentially the Jensen-Shannon divergence, representing the mutual information between $\theta$ and $y_x$—in other words, BALD when $d$ corresponds to the KL divergence. Surprisingly, this crucial aspect is not discussed in the paper, raising the question of why preference is given to the Hellinger distance or the Wasserstein distance instead of the KL divergence. 3. The paper introduces the problem of optimizing a black-box function with unknown Gaussian Process (GP) hyperparameters, asserting that this aspect has received limited attention, without providing references to existing Bayesian Optimization (BO) works that address unknown GP hyperparameters. However, it is worth noting that the early work PES contains a dedicated section that tackles the issue of unknown GP hyperparameters by learning the posterior distribution of the hyperparameters and averaging PES over samples of GP hyperparameters. This crucial detail is missing from the paper, despite frequent mentions of PES throughout. Furthermore, this implies that other information-theoretic acquisition functions such as MES and JES could also be extended to handle unknown GP hyperparameters using a similar approach. Consequently, the novelty of the problem appears to be overstated, and a significant concern arises regarding the absence of several essential baselines in the experiments, including PES, MES, and JES averaged over samples of GP hyperparameters. 4. The motivation behind the proposed SCoreBO in Equation (8) needs further support. Given that the goal of Bayesian Optimization (BO) is to optimize a black-box function, it seems appropriate to exclude $\theta$ from the second argument of $d$ in Equation (8). This is because it is redundant to learn the hyperparameters of the Gaussian Process (GP) if they do not affect the differences in $f^*$ or $x^*$ under different GP hyperparameter samples.
Essentially, the acquisition function should be a JES with the GP hyperparameters marginalized out. To substantiate this approach, it would be valuable to present compelling experimental results comparing it with such a JES that marginalizes out the GP hyperparameters. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please clarify the above weaknesses. ======================== After reading the authors' response The authors have made substantial efforts in providing supplementary experimental outcomes and elaborating on the reasoning behind the integration of hyperparameters into the acquisition function. I believe that these experiments and explanations have the potential to provide stronger motivation for the paper and make the proposed approach's performance more convincing. As a result, I have improved my rating accordingly. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review. There are, however, crucial misunderstandings regarding our work: __all baselines use fully Bayesian hyperparameter treatment__ and the active learning presented in this paper __goes beyond conventional hyperparameter marginalization__. The feedback related to these issues is addressed in __1)__. We will further clarify the interplay between fully Bayesian hyperparameter treatment and active learning and address the reviewer's remaining concerns. We hope that this gives a fresh perspective on the novelty of the work as perceived by the reviewer. --- > __1)__ The paper __(a)__ lacks _“several essential baselines in the experiments, including PES, MES, and JES averaged over samples of GP hyperparameters”_ as well as __(b)__ lacks references to the _“dedicated section [in PES] that tackles the issue of unknown GP hyperparameters by learning the posterior distribution of the hyperparameters and averaging PES over samples of GP hyperparameters.”_ --- __a) All baselines marginalize over the hyperparameters__ in the original manuscript as per Lines 224-225: #### "ScoreBO _and all baselines_ use fully Bayesian hyperparameter treatment." We will ensure that this is clearer in the CR. __b)__ The aforementioned Section 2.3 - _Hyperparameter Learning_ in PES simply describes a fully Bayesian treatment of the hyperparameters, i.e., an extension of the work by Osborne et al. (2010) and Snoek et al. (2012). These seminal works are referenced on line 27. However, the aforementioned __fully Bayesian hyperparameter treatment must be distinguished from actively learning the hyperparameters__, i.e., selecting queries to find __more accurate hyperparameters__. Fully Bayesian hyperparameter treatment is a __prerequisite__ for (Bayesian) active learning to take place. Moreover, the active learning component differentiates SCoreBO from PES/JES/MES — SCoreBO learns __more accurate__ hyperparameters (in the spirit of BALD) while optimizing the objective, whereas ES methods - _even their fully Bayesian variants_ - only do the latter. — > __2)__: __(a)__ _“[I]t is redundant to learn the hyperparameters of the Gaussian Process (GP) if they do not impact the difference between or under different GP hyperparameter samples”_ and __(b)__ _“Given that the goal of Bayesian Optimization (BO) is to optimize a black-box function, it seems appropriate to exclude theta from the second argument in d in Equation (8)”_. Lastly, __(c)__ _“it would be valuable to present compelling experimental results comparing it with such a JES that marginalizes out the GP hyperparameters.”_ __a.__ Having an __accurate__ sense of the HPs is often crucial for the predictive performance of the GP model and, hence, for the efficiency of the BO algorithm. To exemplify this, the reviewer is referred to Fig. 1 in the rebuttal PDF, where the impact of the active learning on hyperparameter convergence is visualized for the 25D Ackley task. __Fully Bayesian MES and JES never find the active dimensions__ 1-4, whereas __SCoreBO rapidly finds them__ and outclasses MES/JES/EI as a result (Fig. 7 in the main paper). SCoreBO’s joint objective of active hyperparameter learning and optimization makes it __generate more accurate hyperparameters__ by reducing hyperparameter- _and_ optima-induced disagreement. Thus, it quickly obtains hyperparameters that suggest the right dimensions 1-4 as active.
This should clarify that active learning is _not_ redundant in a BO context, as the investment in active learning can produce substantially more accurate hyperparameters, which in turn can yield improved optimization efficiency. __b.__ The $\theta$ parameter in the second argument of Eq. (8) __enables the active learning component of the joint BO/AL objective - the paper’s main contribution__. For this reason, it is paramount to keep $\theta$ as described. Without $\theta$, SCoreBO would be similar in spirit to fully Bayesian JES. __c.__ As established previously, the JES in the experiments employs hyperparameter marginalization. --- > __3)__ BALD is not considered as a baseline in the active learning experiments. --- We initially decided to exclude BALD for its lackluster performance in Riis et al. (2022). However, we agree with the reviewer and have added it in the rebuttal PDF in Fig. 2. It performs very well, only marginally worse than SAL-HR on average. We thank the reviewer for pointing this out. --- > __4)__: SAL is essentially the Jensen-Shannon divergence (...) --- Equation (7) proposes a general form in which a specific distance metric can be instantiated. We have added the Jensen-Shannon (JS) divergence to both the AL and BO experiments in Fig. 1 and Fig. 2 of the rebuttal PDF. It performs very well on some tasks, but does not generally achieve the desired consistency. The KL/JS divergence is indeed related, but not equivalent, to BALD. Both SAL-KL and SAL-JS minimize the _relative entropy_ between distributions. BALD, which performs _differential entropy_ minimization, _would_ be equivalent only if the reference distribution were uniform. There is an analogous discussion in Hennig and Schuler (2012, p. 13). ____ We hope that we have clarified the role of the joint optimization and HP learning objective, and the distinct differences between this and previous work. With these misunderstandings out of the way, we would appreciate it if the reviewer reconsidered their score. Moreover, we would be happy to address additional questions. #### References 1. Christoffer Riis, Francisco Antunes, Frederik Boe Hüttel, Carlos Lima Azevedo, Francisco Câmara Pereira. Bayesian Active Learning with Fully Bayesian Gaussian Processes. NeurIPS, 2022. 2. Philipp Hennig and Christian J. Schuler. Entropy Search for Information-Efficient Global Optimization. JMLR, 2012. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the clarifications and additional experimental results. However, my concerns remain: 1. I believe it is important for the paper to discuss existing BO works that handle the issue of unknown hyperparameters, such as PES. Considering that the issue of unknown hyperparameters is not new, the novelty of the problem seems to be overstated in the paper. 2. Section 2.3 in PES is not exactly about a fully Bayesian treatment of the hyperparameters. It proposes to average the PES acquisition function over $M$ samples of the hyperparameters (see Equation 10). This is different from a fully Bayesian hyperparameter treatment which marginalizes $p(y|x) \approx 1/M \sum_{i=1}^M p(y|x,\phi_i)$. While this may not be a "correct" way, the paper has shown superior empirical performance. Hence, I do not understand why the experiments did not incorporate this PES baseline. 3. I am still confused about why the JS divergence is not equivalent to BALD.
Say the posterior distribution of the hyperparameters is approximated with $M$ samples of the hyperparameters; then BALD, i.e., the mutual information between $y$ and the hyperparameters $\phi$, is $I(y;\phi|x) \approx JSD(p(y|x,\phi_1), p(y|x,\phi_2), \dots, p(y|x,\phi_M)) = 1/M \sum_{i=1}^M D(p(y|x, \phi_i) || p(y|x))$ where $D$ is the KL divergence. Equation (7) in the paper is this formulation but with the KL divergence replaced by other divergences. As the authors demonstrate in their response, the competitive performance of BALD reduces the need to shift from the KL divergence to the suggested divergences. 4. I have a different perspective regarding the authors' assertion that having accurate hyperparameters is crucial for the efficiency of BO algorithms. This is due to the fact that obtaining precise hyperparameters requires spending samples. If the core objective of BO is to locate maximizers (not to learn the hyperparameters), then what justifies the explicit incorporation of hyperparameters into the acquisition function? I believe we should only reduce the uncertainty in hyperparameters that specifically contributes to uncertainty in the maximizer (which is achieved by a fully Bayesian treatment of PES, for example). Therefore, incorporating the hyperparameters directly into the acquisition function doesn't appear to be particularly persuasive. I understand that there may be an empirical performance gain as the fully Bayesian treatment of the hyperparameters is not exact (approximated with MCMC). As a result, from my personal view, the inclusion of hyperparameters in the acquisition function doesn't seem to be a substantial contribution. --- Reply to Comment 1.1.1: Title: Further comments on marginalization Comment: We'd like to express our sincere appreciation to the reviewer for their consistent engagement throughout the review process and for taking the time to participate in this rebuttal phase. The authors are committed to addressing the concerns raised in the ensuing discussion, and especially to clarifying the misunderstanding about the marginalization of the hyperparameters. ____ __Point 2:__ We now appreciate the misunderstanding of the terminology, which is one that is deeply seated in the BO community. The reviewer makes a distinction between 1. Marginalizing the __acquisition function__ over the hyperparameter uncertainty, $E_\theta[\alpha(x \mid \theta)]$, and 2. Marginalizing the __model__ over the hyperparameter uncertainty, $\alpha(E_\theta[p(y|x,\theta)])$ (a schematic sketch contrasting the two notions follows at the end of this reply). In the ML literature, it'd be natural to call 2) fully Bayesian treatment, which is what the reviewer does. However, the BO community has been referring to a fully Bayesian treatment as directly marginalizing the acquisition function over the hyperparameters, which is 1). This definition of fully Bayesian treatment in BO was given by Snoek et al. (2012) and used even earlier by Osborne et al. (2010). It has subsequently been used by PES and a number of other BO methods, including MES and JES. From a general ML perspective this may be confusing because the PES authors refer to 1) as a fully Bayesian treatment (see Section 2.3 of their paper). SCoreBO uses 1) as well. To summarize, both PES and Snoek et al. refer to the same definition 1) of fully Bayesian treatment, which marginalizes the acquisition function over the hyperparameters. That is the same definition that we use in all the baselines of the paper. This could be misleading, and we will make this difference explicit in the CR.
As a result, since PES performs the same operations on the hyperparameters as the other baselines, one would expect that PES should not perform better than the other methods. We have run this version of PES, and added a table of the hyperparameter values for all methods (and reference values) for PES, JES, and SCoreBO on the 25D Ackley task. It is attached as the last comment in this post. We will add PES on all tasks for the CR; it has been run and performs marginally worse than MES on average. Its final performance is added for all tasks as a separate post. ____ __Point 1:__ We agree with the reviewer that the topic of handling unknown hyperparameters for BO is not new — all optimizers must handle unknown hyperparameters in some way, whether through MLE, MAP, fully Bayesian treatment, or by adapting the acquisition function to actively manage uncertainty across hyperparameters. Since the reviewer is familiar with BALD and PES, we would like to highlight the differences between them to align with the reviewer: - PES reduces the uncertainty over the optimum, - BALD reduces the uncertainty over the hyperparameters, and - SCoreBO reduces uncertainty over the optimum **and** the hyperparameters. The authors would like to emphasize that a Bayesian treatment of the hyperparameters does not entail reducing hyperparameter uncertainty, i.e., a Bayesian treatment “acknowledges” parameter uncertainty, but doesn’t reduce it. It can still suffer from having large variance in the parameter distribution — that is a crucial difference that makes SCoreBO outperform other acquisition functions on the SAASBO experiments. Furthermore, the Bayesian treatment of the hyperparameters in PES is based on Snoek et al. (2012) and is not a contribution of PES. PES is one of the many approaches that use marginalization of the hyperparameters in this way (MES and JES also do this, following Snoek et al. and PES). We use the same marginalization over the hyperparameters used in PES, MES and JES. Additionally, we cite and compare against all these baselines. We thus disagree with the statement that we do not discuss existing BO works that handle unknown hyperparameters. The main contribution of SCoreBO is to combine the reduction of the uncertainty over the location of the optimum, such as in PES, with the reduction of the uncertainty over the hyperparameters. This __joint__ uncertainty reduction is novel. We are committed to improving the introduction of the paper in the camera-ready, to make sure that our contribution is not confused with general handling of unknown hyperparameters, as the reviewer suggests.
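Here is the schematic Python sketch referenced in the reply above, contrasting the two notions of "fully Bayesian treatment" for $M$ hyperparameter samples. It is an illustration under assumed interfaces only: `alpha`, `predict`, and `alpha_from_moments` are hypothetical callables, not code from the paper or from any BO library.

```python
import numpy as np

def acquisition_marginalized(x, thetas, alpha):
    """(1) BO-style fully Bayesian treatment (Snoek et al., 2012):
    average the acquisition function itself over hyperparameter samples."""
    return np.mean([alpha(x, theta) for theta in thetas])

def model_marginalized(x, thetas, predict, alpha_from_moments):
    """(2) Marginalize the model first: moment-match the Gaussian mixture
    predictive over thetas, then apply the acquisition to that summary."""
    mus, vars_ = zip(*(predict(x, theta) for theta in thetas))
    mu = np.mean(mus)
    var = np.mean(vars_) + np.var(mus)  # law of total variance
    return alpha_from_moments(mu, var)
```

The two quantities generally differ because the acquisition function is nonlinear in the predictive distribution, which is precisely the distinction drawn in the reply.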
Summary: The paper presents two algorithms/methods, SAL and SCoreBO, for active learning and Bayesian optimization using GPs. Both build on the premise that hyperparameters are critical to effective AL or BO; hence it is important to learn both the function and the hyperparameters through an acquisition function that accounts for both needs. Contributions: - The SAL acquisition function for AL considers the statistical distance between a conditional (on the hyperparameters) and the marginal posterior. - The SCoreBO algorithm for BO, which uses SAL in combination with Thompson sampling/posterior sampling - Empirical evaluation on several relevant benchmark problems - Comparison with a set of alternative methods Strengths: - Well-written and clear (with only a few unclear aspects) - Relatively simple but effective approach (based on various other works in fully Bayesian BO including [42]) - Thorough evaluation of a suitable set of both AL and BO benchmarks, including non-standard problems - Comparison with what appears to be a sensible set of baselines, showing favorable performance on the BO tasks Weaknesses: The following are mostly just questions and comments, not all weaknesses per se: - Lack of details regarding a fully Bayesian treatment - The paper mentions that it applies a fully Bayesian treatment, yet I am (as a reader) still a bit unsure about the specific inference being used. I think more details are needed here to make it precise. In particular, discussing $p(\theta | D)$ would be helpful to ensure the paper presents a self-contained view. - Justification of distance metrics and alternatives - I feel the paper lacks justification for the choice of distance measures. I appreciate a choice needs to be made at some point, but alternative statistical distance measures also have closed forms, e.g., (symmetric) KL, and even approximations for mixtures. - I am slightly concerned about the sensitivity to the distance function for the AL (and BO as shown in the appendix). For the AL task, have the authors performed longer experiments than presented in Figure 6 (i.e., why stop at 100/150/200 iterations)? - What's the complexity? It would be helpful to have an indication/summary of the complexity of SCoreBO relative to a standard fully Bayesian BO approach. - (The need for) theoretical bounds: What is the prospect of providing theoretical bounds for the specific algorithm? Minor: - As a sanity check, have the authors compared SCoreBO with a bog-standard BO approach without a fully Bayesian treatment of hyperparameters (i.e., ML-II)? - Figure 1: For clarity, I'd suggest specifying what a "BoTorch prior" is - Figure 7: There are two dashed lines; perhaps clarify the caption. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Included in the above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Included in the above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and are glad to see that the reviewer appreciates both the importance of the problem setting as well as the proposed approach. --- > __1)__ _Lack of details regarding a fully Bayesian treatment_ --- We agree with the reviewer that the paper would benefit from more clearly describing the sampling procedure when marginalizing over the hyperparameters. In the CR, we will extend the description of this in Appendix A1 as well as add a shorter description in the main text. Briefly, SCoreBO uses the No-U-Turn Sampler (NUTS; the Pyro implementation) for MCMC and marginalizes over 16 models. We will allocate a section of the background to fully Bayesian hyperparameter treatment in the CR. --- > __2)__ _Justification of distance metrics and alternatives_ --- The requested Jensen-Shannon (JS, symmetric KL) divergence has been added to the set of AL and BO experiments (Fig. 2, 3 in the attached rebuttal PDF) and __all AL experiments have been doubled in length__. The rank of the different methods is roughly the same as in the original manuscript — we find that __the JS variant is occasionally a powerful alternative but less consistent__ than the Hellinger distance for both AL and BO. All distance measures have their specific strengths and weaknesses. However, the Hellinger distance has been the most empirically consistent choice overall. This may be due to the intuition below, which aligns well with the goal of global optimization under a limited evaluation budget: - The Jensen-Shannon divergence prioritizes __same order-of-magnitude variances__. As such, it is inclined to repeatedly query the same location to correctly estimate noise levels. - The Wasserstein (earth-mover) distance seeks to minimize the difference in __first and second moments__. In practice, this places a premium on matching large-variance regions, leading to higher global exploration, which can be detrimental to global optimization. - The Hellinger distance seeks to minimize the __ratio between the difference in means and the sum of variances__, which punishes outlier predictive distributions of high confidence. This turns out to be the most practical metric for posterior convergence. We will clarify these points, which characterize the different distance metrics, in the CR (closed forms of these distances between univariate Gaussians are sketched after this thread). --- > __3)__ _What’s the complexity? It would be helpful with an indication/summary of the complexity of SCoreBO relative to a standard fully Bayesian BO approach._ --- The conditioning on Thompson samples involves a rank-1 update of $\mathcal{O}(n^2)$ of the GP for each Thompson sample draw. As such, the complexity of constructing the acquisition functions is $\mathcal{O}(MNn^2)$ for $M$ models, $N$ optima per model and $n$ data points. The MCMC involved with the fully Bayesian treatment is $\mathcal{O}(|\theta|n^3)$ per sample. The complexity of the forward pass is the same as for (fully Bayesian) JES, namely $\mathcal{O}(MNn^2)$. As such, these methods are roughly identical in terms of runtime. For reference, EI has a forward-pass complexity of $\mathcal{O}(Mn^2)$ and no setup. For all acquisition functions, the NUTS sampling accounts for a large majority of the total runtime. We will extend the complexity section for the CR.
--- > __4)__ _(The need for) theoretical bounds: What is the prospect of providing theoretical bounds for the specific algorithm?_ --- Theoretical bounds for SCoreBO are likely tied to bounds for BO algorithms with fully Bayesian treatment, of which, to the best of the authors' knowledge, there are none to date. Furthermore, bounds on SCoreBO are likely tied to bounds on information-theoretic acquisition functions. Among these acquisition functions, PES/JES have no convergence bounds, and the bounds proposed in MES are not believed to be correct (Takeno et al., 2022, App. H). Thus, while theoretical bounds for SCoreBO are an interesting direction for future research, they appear challenging at present. --- > __5)__ _The SCoreBO algorithm for BO, which uses SAL in combination with Thompson sampling/posterior sampling._ --- To ensure clarity: SCoreBO employs JES-like conditioning, which uses Thompson sampling (TS) as an intermediate step, to build the acquisition function. SCoreBO does not, though, use TS itself as the acquisition function. ____ We hope that the additional results and answers have addressed the reviewer's concerns and improved the reviewer’s perception of our work. We would be happy to address any additional questions that may arise. #### References Shion Takeno, Tomoyuki Tamura, Kazuki Shitara, Masayuki Karasuyama. Sequential and Parallel Constrained Max-value Entropy Search via Information Lower Bound. Proceedings of the 39th International Conference on Machine Learning, PMLR 162:20960-20986, 2022. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for addressing my questions, in particular on the details of the fully Bayesian treatment and the choice of divergence. I note these aspects have been discussed further and in depth with reviewer LR3N, including explicit links to BALD, etc. I welcome the insights gained from the discussion with LR3N (and other comments). While I still feel the paper addresses a relevant and interesting aspect of BO, the discussion reveals that a more complete and coherent narrative is required to present the method/results than is the case in the submitted paper. The many required/suggested changes lead to some doubt about what the final paper will look like, and I usually lean towards recommending the paper go through a full review cycle in such cases. I’d encourage the authors to summarize their proposed changes in a single place, before the end of the discussion phase on the 21st.
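As a concrete companion to the distance-metric discussion in the rebuttal above, the following numpy sketch gives the standard closed forms of the Hellinger, 2-Wasserstein, and KL quantities between two univariate Gaussians $N(\mu_1, \sigma_1^2)$ and $N(\mu_2, \sigma_2^2)$. These are textbook formulas, not code from the paper.

```python
import numpy as np

def hellinger(mu1, s1, mu2, s2):
    """Hellinger distance; H^2 = 1 - Bhattacharyya coefficient."""
    bc = (np.sqrt(2 * s1 * s2 / (s1**2 + s2**2))
          * np.exp(-0.25 * (mu1 - mu2) ** 2 / (s1**2 + s2**2)))
    return np.sqrt(1.0 - bc)

def wasserstein2(mu1, s1, mu2, s2):
    """2-Wasserstein distance: matches first and second moments."""
    return np.sqrt((mu1 - mu2) ** 2 + (s1 - s2) ** 2)

def kl(mu1, s1, mu2, s2):
    """KL(N1 || N2); the building block of JS/BALD-style variants."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2) ** 2) / (2 * s2**2) - 0.5

print(hellinger(0.0, 1.0, 1.0, 2.0),
      wasserstein2(0.0, 1.0, 1.0, 2.0),
      kl(0.0, 1.0, 1.0, 2.0))
```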
Summary: Standard acquisition functions in Bayesian Optimization (BO) only aim at finding the optimum, but do not directly consider the problem of hyperparameter learning in Gaussian Processes (GPs), which has considerable impact on the optimization performance. This paper introduces a new acquisition function for Active Learning (AL) and Bayesian Optimization (BO), whose goal is to learn both the hyperparameters and the location of the optimum. Specifically, the acquisition function for AL is based on extending similar previous proposals of Active Learning via disagreement, by using different statistical distances. This new acquisition function is then adapted for the BO task by conditioning on sampled locations/values of the optimum. The new acquisition functions are shown to work (slightly) better at AL and BO, especially with unusual BO tasks. Strengths: - **Originality:** While natural, the problem of hyperparameter learning in BO has rarely been addressed, so this paper tackles an open and underestimated problem in the field. The proposed solutions are interesting. - **Significance:** Given the prominence of BO in machine learning and other fields, this work is potentially very significant. - **Quality:** The general quality of the work is good, although there are some open questions (see below). - **Clarity:** The paper is generally clear. As general comments, Statistical distance-based Active Learning (SAL) is well-motivated and explained, a nice generalization of previous proposals. It is also nice that the paper explores the use of two different distances (Wasserstein and Hellinger), and that it gives approximations on how to compute them. The proposed "motivated heuristic" for extending SAL to BO via conditioning over the location-value of the optimum is interesting. ### Post-rebuttal Thanks to the authors for addressing the points I raised. I am glad that they managed to run additional experiments with a more realistic benchmark, showcasing the effectiveness of their method. Overall, I am satisfied with the paper, and the score of 6 reflects my current evaluation of the paper ("Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations."). Weaknesses: - The paper showcases many synthetic functions, but there are only a couple of real-world applications. Real-world applications are particularly important for this work because they are exactly the situations at risk of breaking active learning (e.g., due to model misspecification). Indeed, the Cosmological Constant task in the paper is the one in which ScoreBO does not show any advantage over vanilla Expected Improvement. For this reason, it would have been nice to show other real examples in which ScoreBO does improve performance. - As a minor comment, please double-check the paper and bibliography for typos or errors. At a quick glance I spotted a few mistakes (e.g., the authors of reference [43] "Fast information-theoretic Bayesian optimisation" are in the wrong order; occasionally "Bayesian" appears lowercase, etc.). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - This paper would strongly benefit from showing ScoreBO at work on other real-world tasks, to showcase that this new acquisition function and its focus on active learning do no harm when deployed on real problems (e.g., RL, hyperparameter tuning, and other typical applications of BO, which are conspicuously missing here).
- Minor: Fix the few typos in the paper and bibliography. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The conclusion quickly states a few limitations but it'd be good to have a separate **Limitations** section which very explicitly mentions them. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We are pleased to see that the reviewer recognizes the significance of the joint BO/hyperparameter learning problem setting and our proposed approach. --- > __1.__ _This paper would strongly benefit from showing ScoreBO at work on other real-world tasks, to showcase that indeed this new acquisition function and its focus on active learning does not harm when deployed on real problems (e.g., RL, hyperparameter tuning, and other typical applications of BO which are conspicuously missing here)._ --- Three (4D) hyperparameter tuning tasks from the PD1 benchmarking suite have been added in the rebuttal PDF, Fig. 4 — two involving large language models and one from computer vision. The surrogate model from HEBO [Cowen-Rivers et al., 2020], which employs input and output warpings [Snoek et al., 2014], is used to account for the heteroskedasticity prevalent in HPO tasks. SCoreBO outperforms the other methods on 2 out of 3 tasks, placing second on the third. The three additional real-world benchmarks hopefully paint a clearer picture of the performance of SCoreBO, together with the other two real-world benchmarks included in the original submission, namely Lasso-DNA and Cosmological Constants. --- > 2. _The conclusion quickly states a few limitations but it’d be good to have a separate Limitations section which very explicitly mentions them._ --- The authors would like to thank the reviewer for this suggestion. A dedicated Limitations section will be added in the CR. We intend to discuss the potential pitfalls of utilizing SCoreBO with misspecified models. This is partly addressed in Appendix C in relation to the Rosenbrock functions, where SCoreBO performs worse than on other tasks, relative to other acquisition functions. On Rosenbrock, the hyperparameter values increase over time instead of converging, which suggests that the latent function is not part of the model class. Thus, the self-correction effort of SCoreBO is less rewarding. The authors believe that these limitations, as well as the broader subject of model misspecification in BO, necessitate further research. Encouragingly, the reviewer's perspective appears to align with this viewpoint. We thank the reviewer for pointing out the typos, which will be addressed in the CR. _______ Hopefully, the additional real-world applications introduced in this rebuttal showcase the potential and usefulness of SCoreBO. We would be happy to address any additional questions that the reviewer might have. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thanks to the authors for addressing the points I raised. I am glad that they managed to run additional experiments with a more realistic benchmark, showcasing the effectiveness of their method. Overall, I am satisfied with the paper. In the discussion with the other reviewers, I will argue for acceptance. --- Reply to Comment 1.1.1: Title: Additional response to ta8V Comment: We are pleased that the reviewer valued our additional experiments and are thankful for their willingness to support our paper in discussions with other reviewers. In light of the reviewer’s updated judgement, we would very much appreciate it if they also considered updating their score on OpenReview.
Rebuttal 1: Rebuttal: The authors would like to thank all reviewers for their effort. As per the reviewers' requests, we have added 4 additional plots to the rebuttal PDF. __Fig. 1:__ Hyperparameter convergence of fully Bayesian JES, MES and SCoreBO on the SAASBO task, the noisy 25-D Ackley function. We see that the fully Bayesian information-theoretic acquisition functions fail to find any of the active dimensions of the task, while SCoreBO finds all of them with low uncertainty. Thus, SCoreBO successfully optimizes the task, whereas JES and MES do not. __Fig. 2:__ Prolonged active learning results with double the iteration budget of the initial submission. We include SAL using Jensen-Shannon (JS) divergence as the distance metric, and BALD as an additional baseline. BALD is a highly competitive baseline. SAL-JS performs well on many tasks, but is inconsistent. __Fig. 3:__ BO synthetic experiments with SCoreBO using Jensen-Shannon divergence as well as non-fully Bayesian EI (MAP). SCoreBO-JS performs well on many tasks, but is inconsistent. EI-MAP performs well on some tasks, but lags behind on some tasks, most prominently on Hartmann (6D). __Fig. 4:__ Three (4D) hyperparameter tuning tasks from the PD1 benchmarking suite - Two involving large language models, and one from computer vision. SCoreBO outperforms the other methods on 2 out of 3 tasks, placing second on the third. _________ The authors look forward to a productive discussion with the reviewers! Pdf: /pdf/e6ec6c3b379860d8ee6a4bcc1971787b053932a0.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Train Faster, Perform Better: Modular Adaptive Training in Over-Parameterized Models
Accept (poster)
Summary: The authors introduce a novel modification of the neural tangent kernel (NTK) called the modular NTK (mNTK). It is essentially the decomposition of the NTK as a sum of tangent kernels for the disjoint modules that make up the network. This allows for module-level analysis of training dynamics. They find that the principal eigenvalue of mNTKs tends to be orders of magnitude larger than the others and that the variation of these principal eigenvalues is asynchronous across modules. Since the directions associated with these principal eigenvalues dominate gradient updates during training, they can be utilized to selectively update subsets of the modules to better allocate computational resources during training. The authors further characterize modules with smaller max eigenvalues as being more likely to learn "nuisance" features. They introduce a novel optimization scheme called modular adaptive training (MAT) that stops backpropagation to modules whose principal eigenvalue, or the variation thereof, falls below a threshold. They demonstrate that MAT produces models with performance equivalent to vanilla-trained models while requiring fewer FLOPs during training. Furthermore, they show that MAT-trained models tend to generalize better. Strengths: - Paper is well-written. - Method is novel and theoretically well motivated. - Supported by experiments. - The method is very general and has the potential for high impact. Weaknesses: For the language models, the paper only looks at perplexity scores of the pretrained model for text. I'm not the most up to date on this, but I think typically practitioners are more concerned about the performance of the model when fine-tuned on downstream tasks. I think comparing the performance of MAT vs. vanilla pretrained models on (vanilla) fine-tuned downstream tasks would strengthen the results. Tying a bit into the previous point, papers such as RoBERTa indicate that continuing to train pretrained models after the train perplexity has mostly stopped decreasing can lead to significant boosts in performance when fine-tuned on downstream tasks. If the low $\lambda_\textrm{max}$ heads mostly learn diverse features, then this might be negatively impacted by MAT pretraining. If they mostly learn noise, then there shouldn't be much negative impact on this. Success of MAT in this setting (as indicated by increased performance on downstream tasks at a given computational cost) would further strengthen the method. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Line 221 mentions how the hyperparameter $\alpha$ is relatively stable throughout training. This doesn't really make sense to me, because how could a constant not be stable throughout training? Is this a typo? I think you may have meant to say something like $\lambda_\alpha$ is stable throughout training. - Have you explored using MAT for the process of fine-tuning a pretrained model? Typically, fine-tuning takes far less time than pretraining, but it can still take a while to converge if the target task has many examples. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > `Q1` For the language models, the paper only looks at perplexity scores of the pretrained model for text. I'm not the most up to date on this, but I think typically practitioners are more concerned about the performance of the model when fine-tuned on downstream tasks. I think comparing performance on (vanilla) fine-tuned downstream tasks of MAT vs vanilla pretrained models would strengthen the results. `A1` Thanks for your suggestion. We compare the performance of vanilla and MAT pre-trained models on SST-2 (sentiment recognition) from the GLUE Benchmark in the following table. | Method | MLM (perplexity) $\downarrow$ | SST-2 (accuracy) $\uparrow$ | |---|---:|---:| | vanilla | 4.41 | 84.36 | | MAT | 4.27 | 85.47 | > `Q2` Tying a bit into the previous point, papers such as RoBERTa indicate that continuing to train pretrained models after the train perplexity has mostly stopped decreasing can lead to significant boosts in performance when fine-tuned on downstream tasks. If the low $\lambda_{\max}$ heads mostly learn diverse features, then this might be negatively impacted by MAT pretraining. If they mostly learn noise, then there shouldn't be much negative impact on this. Success of MAT in this setting (as indicated by increased performance on downstream tasks at a given computational cost) would further strengthen the method. `A2` Thanks for your comments. This is in line with our thinking. Many works reach similar conclusions: continuing to pre-train a model whose perplexity has converged boosts downstream task performance. However, a major challenge in this process is how to distinguish learning diverse, useful features from learning noise. The proposed MAT shows potential for indicating which modules are learning the informative features that are common and likely to generalize. We have conducted related experiments. We pretrain BERT from scratch on the MLM task on AGNews until the validation perplexity converges, and save this snapshot as $A$. After that, we obtain $B$ and $C$ by continuing to pretrain $A$ for 10 epochs with the vanilla method and MAT, respectively. We fine-tune the three models on the AGNews classification task, and the experimental results are shown in the table below. | Accuracy | A (Full-Trained) | B (Continue-Trained) | C (Adaptive-Trained) | |---|---:|---:|---:| | Full Data | 74.73 | 75.02 | **75.54** | | Long-tail Data | 73.69 | 74.23 | **75.00** | > `Q3` Line 221 mentions how the hyperparameter $\alpha$ is relatively stable throughout training. This doesn't really make sense to me because how could a constant not be stable throughout training. Is this a typo? I think you may have meant to say something like $\lambda_\alpha$ is stable throughout training. `A3` Thanks for your question. The modular policy is motivated by the analysis of the eigen-spectrum distribution in Section 2.3. We find that it exhibits two distinct regions in Figure 4 separated by a significant threshold, and we try to locate that threshold, which corresponds to $\lambda_\alpha$. In fact, due to the variation in eigenvalue magnitudes during the training process, $\lambda_\alpha$ should also be dynamic. However, we empirically observe that the threshold is relatively stable in the normalized distribution of Equation 4, leading us to set the normalized threshold $\alpha$ as a constant. Thus, the hyperparameter $\alpha$ stays constant, and the threshold $\lambda_\alpha$ is located dynamically according to the magnitude of the overall eigenvalues. 
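For concreteness, a minimal sketch of this dynamic thresholding, assuming the normalization in Equation 4 divides each principal eigenvalue by their sum (the helper name and the example value of `alpha` are ours, not the paper's):

```python
import numpy as np

def locate_threshold(lam_max, alpha=0.01):
    """Locate the dynamic threshold lambda_alpha from a constant normalized
    threshold alpha. `lam_max` holds the principal eigenvalues of all L
    mNTKs at the current step; the normalization is assumed to follow the
    spirit of Equation 4 (an assumption, not the paper's exact formula)."""
    lam_max = np.asarray(lam_max, dtype=float)
    normalized = lam_max / lam_max.sum()   # normalized eigen-spectrum
    lam_alpha = alpha * lam_max.sum()      # dynamic absolute threshold
    active = normalized >= alpha           # modules kept in the information space
    return lam_alpha, active
```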
> `Q4` Have you explored using MAT for the process of fine-tuning a pretrained model? Typically, fine-tuning takes far less time than pretraining, but it can still take a while to converge if the target task has many examples. `A4` Thanks for your question. We do consider MAT a potential method for fine-tuning or transfer learning, because parameter updates tend to be sparse or low-rank in these tasks. Following your suggestion, we conduct experiments that fine-tune a pre-trained BERT (obtained from Huggingface) on the SST-2 data. Experimental results show MAT achieves better performance with less computation. | Method | Accuracy | Computation (PFLOPs) | |---|---:|---:| | vanilla | 92.96 | 5.71 | | MAT | 93.41 | 3.35 | --- Rebuttal Comment 1.1: Comment: Thanks for your response. My comments have been adequately addressed, so I'll bump up the score. --- Reply to Comment 1.1.1: Title: Thanks for reading our rebuttal Comment: Thanks again for your valuable comments, which are very helpful to us.
Summary: This paper introduces modular adaptive training based on the largest eigenvalue of the NTK. In particular, based on the assumption that the principal eigendirection matters the most for generalization, the authors propose to dynamically turn off gradient updating on certain modules if they have small principal eigenvalues. Experiments are conducted on NLP and CV tasks to show the superiority of MAT. Strengths: It is very important to reduce computation resources for large model training. This paper focuses on an important topic, and the approach is principled. The presentation is very organized and the delivery is clear. Weaknesses: See questions. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. In Figure 2(a), can we also see a picture showing the distribution of all eigenvalues? Even the 16th eigenvalue now seems pretty large; are there very small eigenvalues? 2. In Definition 1, can you write down the size of each matrix? Like $J\in\mathbb{R}^{nk\times p}$. 3. In Figure 3, I wonder if there is a phenomenon of benign overfitting in this BERT task. If you keep training beyond 100 epochs to make perplexity even smaller, will you see a double descent on validation? 4. In Figure 2(a), $\lambda_1$ of the 12th layer is still large close to 100 epochs. If you keep training, will that eigenvalue eventually decay to $0$? What is special about this layer? 5. Around line 220, you mentioned the computation cost of backprop is reduced. Say at some point my first layer is in the information modules but the last layer is in the nuisance modules. In this case, the computation cost is as large as if all modules were active, is this right? Because you need to backprop through the later layers. 6. Can you give more intuition on eq(6)? If my $\lambda_1(\Theta_1)$ and $\lambda_1(\Theta_0)$ are small, but $\lambda_1(\Theta_t)$ and $\lambda_1(\Theta_{t-1})$ are both large, then this equation can still be satisfied. Do you want to stop the gradient in this scenario? 7. In Algorithm 1, how large a sample number $S$ did you use? How accurate is this estimation of the eigenvalues? 8. When you compute FLOPs, do you include the computations used for eigendecomposition? 9. Can we see the downstream performance of the MAT-trained BERT on a downstream task? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > `Q1` In Figure 2(a), can we also see a picture showing the distribution of all eigenvalues? Even the 16th eigenvalue now seems pretty large; are there very small eigenvalues? `A1` Thanks for your question. The eigenvalues are shown in *Figure A* of the PDF in the General Response. The computation of the NTK considers each output unit, so the magnitude of the eigenvalues is related to the specific task. The number of output units in BERT's MLM task equals the vocabulary size, so even the smallest eigenvalue in the BERT MLM task can be as large as $10^9$ ~ $10^{10}$. Therefore, we observe their relative or normalized values instead, as in *Figure 4* for all attention heads. > `Q2` In Definition 1, can you write down the size of each matrix? `A2` Sure. Suppose the parameter vector is partitioned into $L$ disjoint modules $\boldsymbol{\theta}=\{\boldsymbol{\theta}^1,\boldsymbol{\theta}^2,...,\boldsymbol{\theta}^L\}$, each of which contains $m^1,m^2,...,m^L$ parameters, respectively. The size of each matrix is annotated below: $J_{\boldsymbol{\theta}^l}(\mathcal{X})\in\mathbb{R}^{nk\times m^l}$, $J_{\boldsymbol{\theta}_p}(\mathcal{X})\in\mathbb{R}^{nk\times 1}$, $\boldsymbol{\Theta}^l(\mathcal{X},\mathcal{X})\in\mathbb{R}^{nk\times nk}$, $\boldsymbol{\Theta}(\mathcal{X},\mathcal{X})\in\mathbb{R}^{nk\times nk}$. > `Q3` In Figure 3, I wonder ... double descent on validation? `A3` Thanks for your question. In our experiment, the validation perplexity keeps increasing when training for an additional 1000 epochs, without a second descent on validation. Thanks for pointing it out; we will study it further. > `Q4` In Figure 2(a), ... about this layer? `A4` Thanks for your question. The 12th layer is the last Transformer layer, closest to the output and classifier. In our experiment, $\lambda_{\max}$ of the last layer is still large even when trained for more (~500) epochs. A possible explanation is that, in the later training process, due to the noisy dataset, the training error is not necessarily reduced to zero, so the residual keeps its $\lambda_{\max}$ stable. In comparison, we conduct an experiment on an MLP with MNIST, whose training loss and last-layer $\lambda_{\max}$ both converge to small values. Thanks for pointing out this interesting phenomenon; we will do more experiments to further explain why it occurs. > `Q5` Around line 220, ... backprop through the later layers. `A5` Thanks for your comments. When using MAT, most gradient computation of fixed modules can be left out even when the earliest layers need to be updated. Take a linear layer $y=Wx$ in an arbitrary model as an example: it goes through two computation steps during the standard backward process, $\frac{\partial L}{\partial W}=\frac{\partial L}{\partial y}x^{\top}$ and $\frac{\partial L}{\partial x}=W^{\top}\frac{\partial L}{\partial y}$. If we split $W$ into equal-sized modules $W=cat(W_1,W_2,...,W_n)$, we can sparsify the two steps into $\frac{\partial L}{\partial W}=cat(0,...,\frac{\partial L}{\partial y_i}x^\top,...,0)$ and $\frac{\partial L}{\partial x}=\sum_{i}W_i^\top\frac{\partial L}{\partial y_i}$, where $W_i$ is the informative module. In other words, if $k$ modules in a layer are in the nuisance space, we reduce the backward computation to $(n-k)/n$ of the original. With multiple layers, as long as the latter layer produces a non-zero $\frac{\partial L}{\partial x}$, i.e., by keeping at least one module in each layer active, the gradient can continue to propagate to the earlier layers. 
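A minimal PyTorch sketch of this module-wise gradient masking (illustrative only; the actual $(n-k)/n$ saving comes from skipping, not zeroing, the masked computation, and the class name is ours):

```python
import torch
import torch.nn as nn

class ModularLinear(nn.Module):
    """Linear layer split into n equal modules along the output dimension,
    with dL/dW of nuisance modules masked out. Sketch only: a real
    implementation would skip the masked computation rather than zero it."""
    def __init__(self, d_in, d_out, n_modules):
        super().__init__()
        assert d_out % n_modules == 0
        self.linear = nn.Linear(d_in, d_out, bias=False)
        self.chunk = d_out // n_modules
        self.active = torch.ones(n_modules, dtype=torch.bool)  # information-space flags
        # The hook masks dL/dW once per backward pass; dL/dx is untouched,
        # so the gradient still reaches earlier layers through every module.
        self.linear.weight.register_hook(self._mask_grad)

    def _mask_grad(self, grad):
        keep = self.active.repeat_interleave(self.chunk).to(grad.dtype)
        return grad * keep.unsqueeze(1)  # zero the rows of nuisance modules

    def forward(self, x):
        return self.linear(x)
```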
In our empirical study, we have not observed instances where later layers end their learning before the earlier ones, and thus the backpropagation is performed uninterruptedly. This example is congruent with the multi-head attention structure, and the associated gradient sparsification is proven to guarantee convergence [A]. > `Q6` Can you give more intuition on eq(6)? ... in this scenario? `A6` Thanks for your question. Our intuition is to stop updating modules whose $\lambda_1(\boldsymbol{\Theta})$ has small variation. Referring to Figure 3, near the time point of overfitting, certain modules begin to satisfy this condition. The situation you mentioned, where $\lambda\_1(\boldsymbol{\Theta}\_{t})$ is larger than the initial $\lambda\_1(\boldsymbol{\Theta}\_{0})$, is avoided by recording the initial value $\lambda\_1(\boldsymbol{\Theta}\_{0})$ after warming up, or at the time $\lambda\_1(\boldsymbol{\Theta})$ reaches its largest value. > `Q7` In Algorithm 1, how large a sample number $S$ did you use? How accurate is this estimation of the eigenvalues? `A7` Thanks for your question. The sample number $S$ is set according to the learning task and GPU memory. In our experiments, $S=64$ for BERT/Switch-Transformer and $S=128$ for VGG-16. We demonstrate the mNTK $\lambda_{\max}$ distributions of the approximate NTK and the true empirical NTK in *Figure B* of the PDF in the General Response; they are nearly identical. (A short sketch of this estimation procedure is appended at the end of this thread.) > `Q8` When you compute FLOPs, do you include the computations used for eigendecomposition? `A8` Thanks for your question. We demonstrate that MAT only yields a small proportion (<1.5%) of extra computation. *Table 7* shows the computational costs of BERT models at varying scales, and we can see that the overheads introduced by MAT are negligible. The complete computational complexity analysis and numerical results can be found in *Appendix B.3*. > `Q9` Can we see the downstream performance of the MAT-trained BERT on a downstream task? `A9` Thanks for your question. We compare the performance of vanilla and MAT pre-trained models on SST-2 (sentiment recognition) from the GLUE Benchmark in the following table. | Method | MLM (perplexity) $\downarrow$ | SST-2 (accuracy) $\uparrow$ | |---|---:|---:| | vanilla | 4.41 | 84.36 | | MAT | 4.27 | 85.47 | --- **References:** [A] Alistarh, Dan, et al. "The convergence of sparsified gradient methods." Advances in Neural Information Processing Systems 31 (2018). --- Rebuttal 2: Comment: Hi Reviewer afqQ, Since the discussion with the authors is closing soon, could you please go over the rebuttal and provide some feedback? Regards, AC --- Rebuttal 3: Title: Further experimental results Comment: As suggested by the reviewers, we have applied MAT to transfer learning problems to further evaluate the generalization ability of our method. We have conducted the experiments on both a typical structured NLP model (BERT-base) and a CV model (ResNet-32). The downstream tasks are text (SST-2, IMDb) and image (CIFAR10/100) classification. The experimental results demonstrate that our method saves 41%~51% of the computation while improving task performance. These results further show the potential of our method for transfer learning (or fine-tuning), as a practical learning method for modern machine learning. 
| Model | Method | Task | Accuracy | Computation (PFLOPs) | |---|---|---|---:|---:| | BERT-base | vanilla | SST-2 | 92.96 | 5.71 | | BERT-base | MAT | SST-2 | 93.41 | 3.35 | | BERT-base | vanilla | IMDb | 93.39 | 2.32 | | BERT-base | MAT | IMDb | 93.47 | 1.14 | | ResNet-32 | vanilla | CIFAR10 | 96.31 | 2.61 | | ResNet-32 | MAT | CIFAR10 | 96.48 | 1.37 | | ResNet-32 | vanilla | CIFAR100 | 83.14 | 2.75 | | ResNet-32 | MAT | CIFAR100 | 83.20 | 1.51 | Please let us know if you still have any concerns about this paper, and we will be more than happy to discuss with you before the discussion period ends. --- Rebuttal Comment 3.1: Comment: Thanks for the answers from the authors. My questions are addressed and the downstream performance looks good, especially given the amount of computation cost saved. I'm happy to raise my score. --- Reply to Comment 3.1.1: Title: Thanks for reading our rebuttal Comment: Thanks again for your valuable comments, which are very helpful to us.
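Supplementing `A7` above, the $\lambda_{\max}$ estimation from $S$ sampled inputs can be sketched as follows (illustrative, not the paper's exact implementation; the sum-of-logits scalarization follows Mohamadi et al., 2023, and the function name is ours):

```python
import torch

def mntk_lambda_max(model, module_params, xs):
    """Estimate lambda_max of one module's mNTK from S sampled inputs.
    Each input is reduced to a scalar via the sum-of-logits approximation,
    so the S x S Gram matrix of per-sample module gradients is the
    empirical mNTK, and its top eigenvalue equals lambda_max(J J^T)."""
    grads = []
    for x in xs:                                  # S sampled data points
        out = model(x.unsqueeze(0)).sum()         # scalar sum-of-logits output
        g = torch.autograd.grad(out, module_params)
        grads.append(torch.cat([t.reshape(-1) for t in g]))
    J = torch.stack(grads)                        # (S, m^l) module Jacobian
    theta = J @ J.T                               # S x S empirical mNTK
    return torch.linalg.eigvalsh(theta)[-1]       # principal eigenvalue
```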
Summary: In this work the authors propose a method for analyzing modules of a neural network, in particular their utility in test-time generalization, via a modular neural tangent kernel (mNTK). The mNTK is the eigenspectrum of the NTK of the network modules, e.g. different attention heads. In this work it was found that the larger the largest eigenvalue of a module's mNTK (I will call this lam-max-mNTK for convenience), the more that module contributes features which are useful for generalizability. For example, during training, a neural network's lam-max-mNTKs plateauing at a low value is indicative that the network has overfit. The authors also demonstrate that the modules' lam-max-mNTKs evolve asynchronously during training, indicating that optimization of certain modules is more important during different epochs during training. Using this intuition the authors propose modular adaptive training (MAT), which weights modules' learning rates during training in proportion to the lam-max-mNTK of that module. The authors find that this improves training, achieving a minimum loss faster (around 1-5% faster). These experiments, including the analysis of generalizability, were done on BERT, although MAT was also performed on VGG, where the computational savings were even greater. Strengths: The paper is clearly written and pleasant to read. The proposed analysis is interesting and would be of general interest to the deep learning community. The computational improvements are nice. Weaknesses: The mNTK isn't super novel, as it is very similar to spectral analysis work which has been performed before, although I do think it is sufficiently novel for ML venues. The performance improvements, while nice, aren't large enough (at least for BERT) that I would expect MAT to see a large deployment in the future, considering implementation seems somewhat involved. Perhaps more work is needed to make the improvement more substantial. The experimental results are somewhat limited. In particular it's hard to conclude that the observations in Section 2.2 generalize to neural networks in general considering this was just applied to one network. Also I think concluding that the two regions in Figure 4 correspond to "info space" and "nuisance space" seems like a bit of a large jump and again, it's just for one network. The split is interesting, but can we really conclude that the small eigenvalues are nuisances? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Do we have some more evidence that your interpretation of mNTK is correct? Is there some reason we are more interested in language models? Do the VGG lam-max-mNTK results align with those you got for BERT? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: none Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > `Q1` The mNTK isn't super novel, as it is very similar to spectral analysis work which has been performed before, although I do think it is sufficiently novel for ML venues. `A1` Thanks for your comments. Indeed, the spectrum of the NTK has been applied to two-layer networks to analyze the optimization of the overall network [A,B]. However, one key contribution of our work is that we observe significant inter-module variance during training; specifically, some modules have already converged and fallen into the nuisance space, while others are still in the information space. To the best of our knowledge, our work, for the first time, indicates that the mNTK is a good metric to measure inter-module training dynamics, and presents an mNTK-based adaptive training method to optimize the network training process. > `Q2` The performance improvements, while nice, aren't large enough (at least for BERT) that I would expect MAT to see a large deployment in the future, considering implementation seems somewhat involved. Perhaps more work is needed to make the improvement more substantial. `A2` Thanks for your comments. To further verify the potential of MAT, we consider the following improvements: - Scalability: In Appendix Table 7, we present the computational costs of BERT models at varying scales. MAT can save 26.4%, 33.4%, 39.4%, 40.6%, and 50.9% of the computation for BERT-Mini, BERT-Small, BERT-Medium, BERT-Base, and BERT-Large (L=24, H=1024), respectively. These findings indicate that applying MAT to larger models improves training efficiency even more. In addition, larger datasets are more likely to contain more superfluous or noisy features, making updates to more modules unnecessary. - Network Pruning: Our approach is not limited to sparsification of the gradient backpropagation computation. Our empirical study reveals some modules that are never or rarely selected by the proposed adaptive training method (MAT), showing potential for being pruned to achieve further computation savings. In other words, the $\lambda_{\max}$ of the mNTK can serve as a criterion for structured pruning. The experimental results can be found in Appendix B.2. - Training Stability: Large model pre-training is often unstable and requires careful tuning of the learning rate to maintain stable convergence. A common practice is to use a learning rate scheduler to gradually reduce the learning rate. MAT prompts the model to update in a consistent direction, which to some extent avoids the instability caused by noise, allowing the model to keep a large learning rate and achieve higher training efficiency. Experimental results are shown in the following table. | Method | Valid PPL @ 10 PFLOPs | Test PPL @ Final | Computation (PFLOPs) | |---|---:|---:|---:| | MAT (w/ lr scheduler) | 4.46 | 4.27 | 16.50 | | MAT (w/o lr scheduler) | 4.37 | 4.30 | 14.75 | > `Q3` Also I think concluding that the two regions in Figure 4 correspond to "info space" and "nuisance space" seems like a bit of a large jump and again, it's just for one network. The split is interesting, but can we really conclude that the small eigenvalues are nuisances? Do we have some more evidence that your interpretation of mNTK is correct? `A3` Thanks for your questions. The two terms, information space and nuisance space, originate from Oymak et al. [C], and the idea of this split has been followed by other works. For example, Li et al. 
split the residual into clean and corrupted parts and prove that the clean residual is aligned with the top singular directions of the Jacobian matrix, whereas label noise is aligned with the small singular directions [D]. Please refer to the **General Response** for a detailed explanation. To further verify the discrepancy between the two spaces, we conduct an ablation study which performs adaptive training on modules lying in the nuisance space. The comparison is listed below, further verifying that modules which have entered the nuisance space learn superfluous or noisy features. | Method | Valid PPL @ 10 PFLOPs | Valid PPL @ 15 PFLOPs | Test PPL @ Final | Computation (PFLOPs) | |---|---:|---:|---:|---:| | vanilla | 5.39 | 4.75 | 4.41 | 28.70 | | MAT (nuisance) | 5.78 | 5.08 | 4.63 | 23.79 | | MAT (information) | 4.46 | 4.41 | 4.27 | 16.50 | > `Q4` The experimental results are somewhat limited. In particular it's hard to conclude that the observations in Section 2.2 generalize to neural networks in general considering this was just applied to one network. Do the VGG lam-max-mNTK results align with those you got for BERT? `A4` Thanks for your questions. Prior works have found that parameter updates in over-parameterized models are sparse and low-rank, which opens up the possibility of neural network pruning and sparse training. Following your suggestion, we conduct the experiments on VGG. The eigen-spectrum distribution is similarly split, as shown in *Figure C* of the PDF in the General Response. --- **References:** [A] Arora, Sanjeev, et al. "Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks." International Conference on Machine Learning. PMLR, 2019. [B] Xiao, Lechao, Jeffrey Pennington, and Samuel Schoenholz. "Disentangling trainability and generalization in deep neural networks." International Conference on Machine Learning. PMLR, 2020. [C] Oymak, Samet, et al. "Generalization guarantees for neural networks via harnessing the low-rank structure of the jacobian." arXiv preprint arXiv:1906.05392 (2019). [D] Li, Mingchen, Mahdi Soltanolkotabi, and Samet Oymak. "Gradient descent with early stopping is provably robust to label noise for overparameterized neural networks." International conference on artificial intelligence and statistics. PMLR, 2020. --- Rebuttal Comment 1.1: Comment: Thanks for the response. These are some fair points; I think the computational speedup is nice. I'll bump a point or two. I am still not very certain about my score, however. --- Reply to Comment 1.1.1: Title: Thanks for reading our rebuttal Comment: We are greatly encouraged by your response. In addition to the scalability, pruning, and stability benefits, we are currently applying MAT to transfer learning problems and further evaluating the generalization ability of the method. We will report back to you with the results as they become available. In the meantime, please let us know if you have any other questions about this work. Thank you so much.
Summary: Inspired by the Neural Tangent Kernel literature, this paper analyzes the training dynamics of over-parameterized neural networks from the perspective of modularity. In particular, the paper proposes to decompose the Jacobians of the NTK into those of modules. Here, the concept of "module" is a bit ill-defined, referring to some coherent computation block. The paper proposes to analyze the NTK within each module and in particular look at the eigenspectrum. It shows that empirically the first eigenvalues dominate within each module and that those eigenvalues are very different across modules. This provides an avenue for focusing mostly on those modules with a high mNTK eigenvalue. Results on different architectures and datasets show noticeable speedups. Finally, a connection between optimizing only high-eigenvalue parts of the NTK and generalization is cited and corresponding empirical results are provided in this direction as well. Strengths: - The paper reads well - It's an interesting topic and potentially impactful in improving our understanding of training dynamics. Weaknesses: - The biggest limitation for me is that the method of computing the NTK and then computing its eigenvalues seems quite expensive. Speedup experiments are performed using FLOPs, which bypasses the actual running time (which would have to include the algorithm itself), defeating the purpose if it takes longer than the speed gain. - As presented, the algorithm mostly claims speedups by not updating some modules, but if we need to update one of the earliest layers we still need to perform most backpropagation computations. Furthermore, at best, this algorithm provides a 2x (or 1.5x depending on how you count) improvement since it basically only affects the backward step. - I found the connection of eigenvalues of mNTKs and generalization a bit weak, both conceptually/theoretically and in terms of empirical results. It's enough to show promise, but the connection with the distance to initialization is only tangential and improvements in generalization are not necessarily directly linked to the NTK eigenspectrum in the experiments. Minor suggestion: I would put % of heads instead of number of heads in figure 5 to make it more immediately parseable. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - I'm a bit surprised at the lack of a role for interactions between Jacobians or similar terms across modules in the entire derivation. Given how few constraints there are on what constitutes a module, I would've expected to see some effect. Could you explain this in more detail? - As mentioned under weaknesses, I would really appreciate a deeper comment on the entire computation time when accounting for the algorithm itself. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: - The added compute should've been mentioned more clearly in my opinion. - No ethics review necessary. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > `Q1` The biggest limitation ... the speed gain. > I would really appreciate a deeper comment on the entire computation time when accounting for the algorithm itself. `A1` Thanks for your comment. The proposed method MAT introduces additional computational overhead, as it involves the calculation of the empirical NTK and an eigen-decomposition. However, the numerical overhead results in *Table 7* demonstrate that MAT only requires a negligible proportion (1%~1.5%) of extra computation. The complete computational complexity analysis and numerical results can be found in *Appendix B.3*. To accelerate the computation, we employed two strategies to approximate the Jacobian matrix $J\in\mathbb{R}^{nk\times m}$ with $\tilde{J}\in\mathbb{R}^{S\times m}$: we sample $S$ data points and use the sum-of-logits approach [A] instead of considering all output units. Furthermore, MAT applies a lightweight NTK estimation using modular NTKs instead of the integral NTK, which reduces the complexity from $O(Sm^2)$ to $O(Sm^2/L)$, assuming we are computing $L$ mNTKs. The FLOPs comparison gives an upper bound on the time savings, and the actual wall-clock time is limited by hardware performance and optimization. Under our experimental conditions, the wall-clock time is reduced by 31.5% compared with the vanilla model. Note that the computational cost of the algorithm itself is also included in the statistics. | Method | Test PPL @ Final | Computation (PFLOPs) | Wall-Clock Time (s) | |---|---:|---:|---:| | vanilla | 4.41 | 28.70 | 8021 | | MAT | 4.27 | 16.50 | 5494 | > `Q2` As presented, the algorithm ... the backward step. `A2` Thanks for your comments. When using MAT, most gradient computation of fixed modules can be left out even when the earliest layers need to be updated. Take a linear layer $y=Wx$ in an arbitrary model as an example: it goes through two computation steps during the standard backward process, the gradient of the weight matrix, $\frac{\partial L}{\partial W}=\frac{\partial L}{\partial y}x^{\top}$, and the gradient of the input vector, $\frac{\partial L}{\partial x}=W^{\top}\frac{\partial L}{\partial y}$, where $\frac{\partial L}{\partial x}$ will be used to compute $\frac{\partial L}{\partial W}$ of the preceding layer. If we split $W$ into equal-sized modules $W=cat(W_1,W_2,...,W_n)$, which satisfies $y=cat(y_1, y_2,...,y_n)=cat(W_1x,W_2x,...,W_nx)$, we can sparsify the two steps into $\frac{\partial L}{\partial W}=cat(0,...,\frac{\partial L}{\partial y_i}x^\top,...,0)$ and $\frac{\partial L}{\partial x}=\sum_{i}W_i^\top\frac{\partial L}{\partial y_i}$, where $W_i$ is the informative module. In other words, if $k$ modules in a layer are in the nuisance space, we reduce the backward computation to $(n-k)/n$ of the original. With multiple layers, as long as the latter layer produces a non-zero $\frac{\partial L}{\partial x}$, i.e., by keeping at least one module in each layer active, the gradient can continue to propagate to the earlier layers. In our empirical study, we have not observed instances where later layers end their learning before the earlier ones, and thus the backpropagation is performed uninterruptedly. In addition, once the shallow layers are all in the nuisance space, we can stop the gradient computation in those layers altogether. This example is congruent with the multi-head attention structure, and the associated gradient sparsification is proven to guarantee convergence [B]. Furthermore, our approach is not limited to sparsification of the gradient backpropagation computation. 
We can further save the forward computation if we consider the mNTK $\lambda_{\max}$ as a network pruning criterion. *Table 5* demonstrates the comparison when applying MAT as a pruning method. The complete experimental results can be found in *Appendix B.2*. > `Q3` I found the connection ... eigenspectrum in the experiments. `A3` Thanks for your comments. Please refer to the **General Response**. > `Q4` Minor suggestion: I would put % of heads instead of number of heads in figure 5 to make it more immediately parseable. `A4` Thanks for your suggestion. We will improve this figure in the revised version. > `Q5` I'm a bit surprised at ... expected to see some effect. `A5` Thanks for your question. While it might seem counterintuitive, the mNTKs are calculated independently across modules, and the sum of the mNTKs equals the integral NTK; the derivation is shown below. $\boldsymbol{\Theta}(\mathcal X, \mathcal X)=\sum\_{l=1}^L \sum\_{\boldsymbol{\theta}\_p \in \boldsymbol{\theta}^l}J\_{\boldsymbol{\theta}\_p}(\mathcal X) J\_{\boldsymbol{\theta}\_p}(\mathcal X)^{\top}=\sum\_{l=1}^L J\_{\boldsymbol{\theta}^l}(\mathcal X) J\_{\boldsymbol{\theta}^l}(\mathcal X)^{\top}=\sum\_{l=1}^L \boldsymbol{\Theta}^l(\mathcal X, \mathcal X)$. Different from MLPs, modern neural networks are inherently structured; nevertheless, modules are correlated during model optimization. As presented in the paper, we observe significant inter-module variation during training; specifically, some modules have already converged and fallen into the nuisance space, while others are still in the information space. Our work indicates that the mNTK is a good metric to measure inter-module training dynamics, and presents an mNTK-based adaptive training method to optimize the network training process. In addition, the granularity of the modular division will have an impact on our adaptive strategy; the following table evaluates different division granularities. As we can see, the attention head is a suitable granularity. | Method | Valid PPL @ 10 PFLOPs | Test PPL @ Final | Computation (PFLOPs) | |---|---:|---:|---:| | MAT (layer) | 4.64 | 4.37 | 19.40 | | MAT (head) | 4.46 | 4.27 | 16.50 | | MAT (half-head) | 4.53 | 4.29 | 17.26 | --- **References:** [A] Mohamadi et al. "A fast, well-founded approximation to the empirical neural tangent kernel." International Conference on Machine Learning. PMLR, 2023. [B] Alistarh, Dan, et al. "The convergence of sparsified gradient methods." Advances in Neural Information Processing Systems 31 (2018). --- Rebuttal 2: Comment: Hi Reviewer TyiH, Since the discussion with the authors is closing soon, could you please go over the rebuttal and provide some feedback? Regards, AC --- Rebuttal 3: Title: Further experimental results Comment: As suggested by the reviewers, we have applied MAT to transfer learning problems to further evaluate the generalization ability of our method. We have conducted the experiments on both a typical structured NLP model (BERT-base) and a CV model (ResNet-32). The downstream tasks are text (SST-2, IMDb) and image (CIFAR10/100) classification. The experimental results demonstrate that our method saves 41%~51% of the computation while improving task performance. These results further show the potential of our method for transfer learning (or fine-tuning), as a practical learning method for modern machine learning. 
| Model | Method | Task | Accuracy | Computation (PFLOPs) | |---|---|---|---:|---:| | BERT-base | vanilla | SST-2 | 92.96 | 5.71 | | BERT-base | MAT | SST-2 | 93.41 | 3.35 | | BERT-base | vanilla | IMDb | 93.39 | 2.32 | | BERT-base | MAT | IMDb | 93.47 | 1.14 | | ResNet-32 | vanilla | CIFAR10 | 96.31 | 2.61 | | ResNet-32 | MAT | CIFAR10 | 96.48 | 1.37 | | ResNet-32 | vanilla | CIFAR100 | 83.14 | 2.75 | | ResNet-32 | MAT | CIFAR100 | 83.20 | 1.51 | Please let us know if you still have any concerns about this paper, and we will be more than happy to discuss with you before the discussion period ends.
Rebuttal 1: Rebuttal: # General Response Thank you very much for reviewing our manuscript and providing detailed and constructive comments, which have been very helpful for us to improve the quality of our work. Please see our answers addressing the comments of individual reviewers. Additionally, the enclosed PDF with figures is used for `Q4` of reviewer `pEnu`, and `Q1` and `Q7` of reviewer `afqQ`. Here, we provide a general response to the common concerns of the reviewers (especially `Q3` of reviewer `TyiH`, and `Q1` and `Q6` of reviewer `pEnu`), in terms of the key contributions of our work, the mNTK and its implications, the split of information and nuisance space, and their relationship to generalization. Firstly, we would like to point out that analyzing the eigenvalues of the NTK is equivalent to analyzing the singular values of the Jacobian matrix, specifically $\lambda_i (JJ^\top) = \sigma_i^2(J)$, where $\lambda$ denotes the eigenvalue and $\sigma$ denotes the singular value. Next, several prior works have analyzed the relationship between generalization and the singular values of the Jacobian matrix. Oymak et al. note the low-rank structure of the Jacobian matrix and show that features falling on the lower part of the Jacobian singular value spectrum are hard to generalize [A]. Li et al. consider clean and noisy labels, split the residual accordingly as presented below, and prove that the clean residual is aligned with the top singular vectors whereas label noise is aligned with the small singular vectors [B]. $$ \underbrace{\boldsymbol y-f(\boldsymbol W_t)}\_{\text {corrupted residual }}=\underbrace{\tilde{\boldsymbol y}-f(\boldsymbol W_t)}\_{\text {clean residual }}+\underbrace{\boldsymbol y-\tilde{\boldsymbol y}}\_{\text {label corruption }} $$ Intuitively speaking, the mNTK measures the correlation of the gradients that different data samples produce on a certain module. The eigenspectrum of the mNTK measures how frequently data features exist in the dataset. Features that occur frequently are related to the large eigenvalues, while data-specific features, generally considered noise, are related to the small eigenvalues. As the dataset becomes larger, the small eigenvalues, corresponding to the data-specific noise, become even smaller. Therefore, the gap between large and small eigenvalues becomes more significant. One key contribution of our work is that we observe significant inter-module variance during training; specifically, some modules have already converged and fallen into the nuisance space, while others are still in the information space. Therefore, we use the mNTK $\lambda_{\max}$ as an indicator for modular training. 
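As a quick numerical sanity check of the identity $\lambda_i(JJ^\top) = \sigma_i^2(J)$ stated above (an illustrative snippet, not part of the original response):

```python
import numpy as np

# lambda_i(J J^T) = sigma_i(J)^2 for any real matrix J.
J = np.random.randn(6, 10)                           # a small random "Jacobian"
eigs = np.sort(np.linalg.eigvalsh(J @ J.T))          # eigenvalues of J J^T
sigmas_sq = np.sort(np.linalg.svd(J, compute_uv=False) ** 2)
assert np.allclose(eigs, sigmas_sq)
```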
We further introduce the Rademacher complexity as an evaluation of generalization ability, from Lemma 2 in Appendix A.2: **Lemma 2** Given $R>0$, with probability at least $1-\delta$ over the random initialization $(\boldsymbol{\theta}(0), \boldsymbol{a})$, simultaneously for every $B>0$, the following function class $$ \mathcal F_{R, B}^{\boldsymbol{\theta}(0), \boldsymbol{a}}=\\{f_{\boldsymbol{\theta}, \boldsymbol{a}}:\|\theta_r-\theta_r(0)\|_2 \leq R~(\forall r \in[m]), \\ \|\boldsymbol{\theta}-\boldsymbol{\theta}(0)\|_F \leq B\\} $$ has empirical Rademacher complexity bounded as: $$ \mathcal R_S( \mathcal F_{R, B}^{\boldsymbol{\theta}(0), \boldsymbol{a}})= \frac{1}{n} \mathbb E_{\boldsymbol{\varepsilon} \in\{ \pm 1\}^n}[\sup_{f \in \mathcal F_{R, B}^{\boldsymbol{\theta}(0), \boldsymbol{a}} } \sum_{i=1}^n \varepsilon_i f(\mathbf{x}_i)] \\ \leq \frac{B}{\sqrt{2 n}}(1+(\frac{2 \log \frac{2}{\delta}}{m})^{1 / 4})+\frac{2 R^2 \sqrt{m}}{\kappa}+R \sqrt{2 \log \frac{2}{\delta}}. $$ Lemma 2 indicates that the Rademacher complexity is proportional to the weights' distance from their initialization, where $\|\boldsymbol{\theta}-\boldsymbol{\theta}(0)\|_F \leq B$. Parameter updates of modules that have fallen into the nuisance space contribute only slightly to the loss reduction but increase the Rademacher complexity. In summary, during model training, modules that have already fallen into the nuisance space (with low mNTK $\lambda_{\max}$) are prone to learning superfluous or even noisy features, which increases the Rademacher complexity despite weak trainability and deteriorates the model's generalization ability. To the best of our knowledge, this work indicates for the first time that the mNTK is a good indicator of inter-module training dynamics, and presents an mNTK-based adaptive training method to optimize the network training process. --- **References:** [A] Oymak, Samet, et al. "Generalization guarantees for neural networks via harnessing the low-rank structure of the jacobian." arXiv preprint arXiv:1906.05392 (2019). [B] Li, Mingchen, Mahdi Soltanolkotabi, and Samet Oymak. "Gradient descent with early stopping is provably robust to label noise for overparameterized neural networks." International conference on artificial intelligence and statistics. PMLR, 2020. Pdf: /pdf/a480cd2f6143b212252880139babb1f13aef3bfc.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Human-like Few-Shot Learning via Bayesian Reasoning over Natural Language
Accept (oral)
Summary: This paper presents a model for how humans learn abstract symbolic concepts from induction. The model uses an off-the-shelf language model as a meta-prior, which is then tuned to form a task-specific prior over hypotheses using a small number of human samples. This prior is then incorporated into a Bayesian inference setup to solve inductive reasoning tasks. The model closely adheres to human judgments and also seems to show that natural language is a better-performing hypothesis space than programs. Strengths: - Clear and concise - Thoughtful discussion - Novel incorporation of LLMs into cognitive modeling Comments: - Line 162: Super cool result. I’d love it if you could stay on this result a little longer and speculate why this might be. - Line 275: You’re basically using LLMs as a meta-prior, and then tuning it to obtain a task-specific prior. This is very interesting. Weaknesses: - The abstract is vague. I’d recommend the authors expand the abstract in length and make reference to their results. - Some more implementation details in Sections 4 and 5 would be helpful for future readers. Please see questions below. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Major: - Do you have thoughts on exactly how "strong" you should make the prior and likelihood functions? For example, I'd imagine you might get even better raw performance by using GPT-3 as the prior model. Is there a sweet spot wrt model capabilities for matching human judgments? If so, what are the implications for your rational process model? Minor: - Line 36: “regularizes the learner toward probable generalizations” Some references here would be nice—you can just duplicate the ones you have later in the text. - Line 48: Why is this well-suited for natural language? - Line 83: typo “has also” - Line 135: What’s the sampling technique—nucleus sampling, greedy, etc.? I’d like some more detail on the implementation of the proposal distribution, because the sampling technique can change the effective distribution a lot. - Line 142: Can you just explain how you fit the parameters right after you introduce them? I was wondering how you trained the parameters for a few paragraphs. Nits: - Figure 3: Do you have a higher resolution screenshot? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Yes, the authors have addressed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review and kind words. Below we respond to some of your main points, but can discuss further during the discussion period: > Do you have thoughts on exactly how "strong" you should make the prior and likelihood functions? Our thoughts are that the proposal distribution $q$ needs to be the "strongest", because it needs to propose plausible hypotheses from scratch. In our paper, the likelihood $p(X|C)$ only needs to translate English to Python, so it requires a decent LLM or a fine-tuned smaller model (we likely could have fine-tuned a T5-like model also). In the global response, we have attached a PDF showing an analysis of using smaller open-source models for the proposal distribution and the likelihood. These new results suggest, as above, that $q$ needs to be the "strongest", but that a weaker $q$ can still work, provided that you take more samples. This introduces an interesting tradeoff between the "strength" of $q$ and the amount of compute (# samples) taken at test time. The prior $p(C)$ is only responsible for giving a soft bias toward simpler, shorter language, and in our experience, does not require a "strong" model: we tried both a 350M open-source model and tuning a ~100M model. (A schematic sketch of how these three components fit together is appended at the end of this thread.) > Is there a sweet spot wrt model capabilities for matching human judgments? If so, what are the implications for your rational process model? That is an interesting scientific question. For example, a stronger proposal distribution might sometimes outperform humans, which could suggest certain limits on bottom-up psychological processes. Although our paper does not specifically explore the questions you raise, they would make for thrilling future work. We will add this to the discussion. > What’s the sampling technique—nucleus sampling, greedy, etc.? The proposal distribution was sampled with temperature $T=1$ and $\text{topP}=1.0$, which effectively disables nucleus sampling. Because we are taking multiple proposals, a higher temperature made more sense to encourage diversity. For the likelihood, which calls out to an LLM to translate English to Python, we sampled with temperature $T=0$, both because Codex was very reliable at this translation, and to avoid needless stochasticity. The revision will mention these issues. Thanks for the catch. > Why is this well-suited for natural language? [prime numbers less than 30] Perhaps a better phrasing would be that these concepts are "suitable to be expressed in natural language". All that is meant is that it is possible and reasonably practical to express these concepts in words. > Can you just explain how you fit the parameters right after you introduce them? I was wondering how you trained the parameters for a few paragraphs. Yes, we can move up the explanation of parameter fitting in Section 4. > Figure 3: Do you have a higher resolution screenshot? No, we don't: this image was provided to us by Piantadosi et al. 2016 (with permission). --- Rebuttal Comment 1.1: Comment: Thanks for answering my questions. The hyperparameter choices for sampling make sense to me, and my concerns have been adequately addressed. I have increased my score to a 7. I still think this paper would be better served by a longer abstract. For example, calibration is at the core of this paper, and this is elided into "can be fit to human data" in the abstract. Unpacking and explaining calibration with an extra sentence would make sense to me. 
However, if the authors feel committed to the short abstract, I will not continue belaboring this point and leave it to their discretion. --- Reply to Comment 1.1.1: Comment: Thanks for the response, and for the increase to your score. > I still think this paper would be better served by a longer abstract Agreed: and if accepted, we'll have an extra page to use for expanding the abstract and providing further details and discussion throughout the paper.
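To make the division of labor discussed in this thread concrete, here is a minimal sketch of the inference loop. All helper names (`llm_propose`, `llm_log_prior`, `to_python`) are hypothetical stand-ins for the three LLM roles, not the paper's actual API, and both the correction for the proposal density $q$ and the paper's actual likelihood are simplified away in favor of a plain consistency check:

```python
import math

def posterior_predictive(examples, x_test, n_samples=100):
    """Reweigh LLM proposals by prior x likelihood, then score x_test.
    `llm_propose` (sampled at T=1), `llm_log_prior`, and `to_python`
    (run at T=0) are hypothetical helpers, not the paper's actual API."""
    proposals = {llm_propose(examples) for _ in range(n_samples)}  # dedupe
    log_w, preds = [], []
    for c in proposals:
        member = to_python(c)                # predicate: is x in concept C?
        if not all(member(x) for x in examples):
            continue                         # hypothesis inconsistent with data
        log_w.append(llm_log_prior(c))       # log p(C) from the prior LM
        preds.append(1.0 if member(x_test) else 0.0)
    if not log_w:
        return 0.5                           # no surviving hypothesis: stay agnostic
    m = max(log_w)
    w = [math.exp(lw - m) for lw in log_w]   # self-normalized weights
    return sum(wi * pi for wi, pi in zip(w, preds)) / sum(w)
```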
Summary: This paper proposes a model for human learning of concepts from few examples (aka the "few shot" setting) that leverages natural language as an internal concept representation. This is a key trick because it means that * an LLM can be used to act as a proposal distribution for (efficiently) generating data-dependent candidate concepts * a prior distribution over concept space can be tuned (estimated) from human judgments using NLP features * a "likelihood function" for determining whether a concept could have generated an observation can be implemented via a "text2code" LLM that derives Python code from the natural language concept representation Experiments on the "Number Game" and logical concept learning show concept learning with "psychologically plausible" sample complexity, remarkable agreement with human judgments, and explainable failure modes. In comparison with state-of-the-art Bayesian Program Learning (BPL), this approach is able to search through a much smaller hypothesis space (thanks to "high quality" natural language candidate proposals) but also generalize to totally novel concepts which are not expressible in the BPL, due to the flexibility of natural language vs the primitives available to the BPL approach. Strengths: The paper is well-contextualized with respect to related work. It identifies a major open question (tractability vs expressivity) and addresses it via a key insight (using natural language is now much more feasible with recent LLM advances). Actually implementing this idea requires multiple non-obvious steps to bring it all together. The experiments systematically illuminate both the power and limitations of the approach in controlled but challenging problem settings. The agreement with human subject results gives the approach credibility. The comparison approach (Bayesian Program Learning) is a strong baseline. Overall I found this work to be a creative and well-executed integration of LLM advances into Bayesian cognitive modeling. Weaknesses: I guess if we were purely looking for some kind of "predictive performance" task there might be other approaches that "outperform" the framework here. However, the grounding in cognitive science for this work means that is probably not the right criterion; rather, this work is pursuing improved understanding of learning itself. I'd be interested to see a discussion or mention of some task or problem setting that successfully "red teams" this approach - i.e., some problem which is adversarially chosen to be challenging or ill-suited for this framework (while still remaining in-scope within the domain of non-embodied abstract concept learning). The approach seems critically dependent on the quality of the Python code generation aspect. From the supplemental materials it looks like a lot of prompt engineering went into getting that to work. To some extent this seems like kind of a "weak link" for the whole enterprise: using natural language as the concept space requires the availability of a (very general) mechanism for computing observation likelihood or concept membership given the concept. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: L92: it's a little unclear what it means to say $P(X_{\text{test}} \in C)$ - it doesn't seem exactly to be $P(X_{\text{test}} \mid C)$, or even $P(X_{\text{test}}, C)$, but rather more like the posterior probability that $X_{\text{test}}$ was generated by *the same latent concept $C$ that generated $X_{1:K}$*. 
I guess the key assumption here is that $\mathbb{1}[X_{\text{test}} \in C]$ is easily computable, but that isn't clear or established by this point in the development (later it is shown how to do this with NL-to-Python). L147: this summary of all the variations is pretty dense and takes a bit of reader effort to unpack. Section 4: Is there a stronger or more recent baseline for Number Game, vs Latent Language from 2018? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: Limitations seem reasonable and likely avenues for future work. The reproducibility issues with using GPT4 seem addressed by including the responses in the software/data release. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review and enthusiastic support. Please find below our clarifications: > …mention of some task or problem setting that successfully "red teams" this approach - i.e., some problem which is adversarially chosen to be challenging or ill-suited for this framework (while still remaining in-scope within the domain of non-embodied abstract concept learning). Because we use a large language model as a proposal distribution, problems which LLMs struggle with would likely foil our approach. However, there is one caveat: because we draw multiple proposals (~100), and because we reweigh each of them using Bayes' rule, the model only needs a few proposals that hit the mark. Therefore, our approach could be “red teamed” by problems that cause LLMs to “fall on their face,” but probably not by problems that cause LLMs to become merely “flakey”. Also, our current approach translates each natural language concept into Python, so it would struggle to learn concepts that are not easily expressible in a precise formal language like Python. This restriction is not strictly necessary: you could instead define the likelihood $p(X|C)$ using another neural model in order to allow learning "fuzzier" concepts, although this would probably come at the expense of precision. The last sentence of the paper alludes to these subtleties by saying "We bypass natural language’s ambiguity by translating it into Python for the likelihood computation, but future work needs to determine if language models can produce language precise enough for induction, or if refining into languages like Python is more practical." If accepted, we'd have an extra page for expanding that point with this discussion. > …critically dependent on the quality of the Python code generation aspect. From the supplemental materials it looks like a lot of prompt engineering went into getting that to work For the initial submission, we tried exactly two prompts for converting logical concepts into Python: a short simple prompt (which was unreliable), and later a very long prompt, which proved reliable (but probably went overboard). In the weeks after the submission, we also tried prompting GPT-4 with instructions but no few-shot examples, which works better than the long Codex prompt used in the initial submission. In our experience, though, converting simple natural language utterances into snippets of Python requires little prompt engineering, provided one is willing to use larger models such as Codex. In the global response, we also describe new results using Llama-2 70B, a very recent open-source LLM, to convert natural language into Python, which required zero new prompt engineering. > I guess if we were purely looking for some kind of "predictive performance" task there might be other approaches that "outperform" the framework here. However, the grounding in cognitive science for this work means that is probably not the right criterion, but rather this work is pursuing improved understanding of learning itself. Indeed, our primary aims are scientific. After the submission deadline, we tried optimizing the logical concept model to maximize average task performance instead of maximizing fit to humans. We found that maximizing average performance makes the model surpass human performance, but also degrades human-model agreement. If accepted, we will include these results in the revision. > Section 4: Is there a stronger or more recent baseline for Number Game, vs Latent Language from 2018? 
Thanks for the suggestion. We have now run a DreamCoder baseline (a recent neurosymbolic Bayesian program learner). It achieves a decent (but not great) fit to the human Number Game data: $R^2=.75$, which should be contrasted with $R^2=.95$ for our full model. > it's a little unclear what it means to say that $P(X_\text{test} \in C)$ ... I guess the key assumption here is that $\mathbb{1}[X_\text{test} \in C]$ is easily computable We like how you put it: The key assumption is that $\mathbb{1}[X_\text{test} \in C]$ is easily computed, which comes from translating $C$ into Python. A footnote will be added clarifying this. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for the response elaborating and clarifying with respect to my questions. The additional results described (maximized predictive perf and DreamCoder baseline) and details about different prompt/model variations tried would strengthen the submission even further, which would be great to see.
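To make the "propose, translate to Python, reweigh by Bayes Rule" pipeline discussed in this exchange concrete, here is a minimal sketch (our illustration, not the authors' code; the size-principle likelihood over a 1-100 domain and all names are assumptions borrowed from classic Number Game models):

```python
import math

def posterior_predictive(test_x, examples, proposals, domain=range(1, 101)):
    """proposals: (predicate, prior_logprob) pairs, where `predicate` is a
    Python function obtained by translating a natural-language concept and
    `prior_logprob` is the LLM's log-probability of that concept."""
    weights, members = [], []
    for predicate, prior_lp in proposals:
        extension = [n for n in domain if predicate(n)]
        # A concept that misses any observed example gets zero likelihood.
        if not extension or not all(predicate(x) for x in examples):
            continue
        # Size-principle likelihood: each example is assumed drawn uniformly
        # from the concept's extension, so tighter concepts are favored.
        log_w = prior_lp - len(examples) * math.log(len(extension))
        weights.append(math.exp(log_w))
        members.append(1.0 if predicate(test_x) else 0.0)
    total = sum(weights)
    return sum(w * m for w, m in zip(weights, members)) / total if total else 0.0

# Two hypothetical proposals for the examples {16, 8, 2, 64}: "even numbers"
# and "powers of two". The tighter concept dominates, so P(32 in C) is near 1.
proposals = [(lambda n: n % 2 == 0, math.log(0.3)),
             (lambda n: (n & (n - 1)) == 0, math.log(0.1))]
print(posterior_predictive(32, [16, 8, 2, 64], proposals))
```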
Summary: The authors introduce a computational approach for explaining how humans perform few-shot concept learning. The proposal is that humans use Bayesian inference over natural-language definitions of concepts. In more detail, a bottom-up proposer generates a set of candidate concept definitions, and Bayesian inference is then performed over these candidates - based on a prior distribution over concept definitions and a likelihood capturing how well the definition covers the observed examples. The paper then presents two main experiments that implement the proposal via the use of recent large language models (LLMs). The prior over natural-language concept definitions is the probability that an LLM assigns to the definition. The proposer is instantiated as the responses of an LLM prompted with the training examples. The likelihood is computed by translating the proposed concept definitions into Python code using a code-based LLM and then running the Python code on the training examples. In both case studies (the Number Game and learning compositional logical concepts), the proposed approach provides a strong fit to human data. Strengths: - S1: The approach provides an interesting combination of Bayesian inference and neural network models that combines complementary strengths of both approaches: the powerful inference abilities of Bayesian models and the tractability and flexibility of neural models. This combination results in a system that is arguably more powerful than either type of approach on its own, representing an important advance in computational cognitive science. - S2: The components of the approach are well-motivated by high-level considerations and are well-operationalized using current AI tools. - S3: The results are compelling: in both case studies, the proposed model shows advantages (in overall fit to human data and/or in computational tractability) over strong baselines on both the Bayesian side and the neural network side. - S4: Working within this proposed paradigm, the authors show how to fit a model to human data in a way that successfully transfers human priors into a neural model. - S5: This work will likely be useful for future researchers working in Bayesian modeling and/or neural networks as a way to take insights from one school of thought and use them to overcome weaknesses of the other school of thought. Weaknesses: - W1: Natural language is ambiguous, so natural language strings do not really provide clear concept definitions - and, by extension, inference over natural language strings cannot strictly be viewed as inference over concept definitions. (I.e., in the general case, natural language cannot be unambiguously translated into Python code). That said, as the experiments show, it is clearly close enough to work very well in at least some settings. In addition, the authors are clear in stating (in lines 286 to 293) that they do not claim that natural language is the language of thought but rather that it is a useful heuristic tool for modeling human thinking. - W2: The specific proposal is well-suited to concepts that can be naturally expressed in language, but is not suited to concepts that do not have a straightforward linguistic definition. The authors acknowledge this point (lines 48 to 51). 
- W3: It was a little unclear to me what the paper’s main contribution is: Is it (i) a new hypothesis about how humans perform few-shot learning, or is it (ii) a way to make tractable some previously-existing hypotheses that were previously intractable to evaluate? Both types of contribution are valuable, but it would be helpful to clarify which is/are being made here. It’s clear that the paper accomplishes (ii), but it is not obvious to me that it does (i), as the basic high-level ideas seem to be present in the cited prior work (e.g., ideas about performing Bayesian inference over a small-ish number of heuristic proposals are present in prior work about bounded rationality). It’s not a problem if (i) is not done here, but if it is done, it would be helpful to clearly state what new hypothesis is being offered, and if it is not done, it would be helpful to state explicitly that (ii) is the main type of contribution being made. One reason I’m confused here is due to differing senses of the word “model”: the paper clearly states that it proposes a new model, but I’m not sure if this is meant in the sense where “model” means “hypothesis” or the sense where it means something more like “implementation”. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: It would be helpful to hear your thoughts on the point discussed in W3 above, under “weaknesses.” Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors do a strong job of discussing and acknowledging limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 10: Award quality: Technically flawless paper with groundbreaking impact, with exceptionally strong evaluation, reproducibility, and resources, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful input and enthusiastic support. We are happy to correspond more during the discussion period, but here we mainly just address the specific question you raised: > …unclear to me what the paper’s main contribution is: Is it (i) a new hypothesis about how humans perform few-shot learning, or is it (ii) a way to make tractable some previously-existing hypotheses that were previously intractable to evaluate? … It’s clear that the paper accomplishes (ii), but it is not obvious to me that it does (i) Thank you for this helpful way of thinking about the different contributions of the work. Our main contribution is (ii), offering a computational model that makes inference tractable for certain very expressive hypothesis spaces that have proved valuable in cognitive modeling (Line 31: "Our goal is to build a model of humanlike concept learning that makes progress toward resolving the tension between intractable inference and expressive hypothesis classes"). There is also a more speculative account of our contributions (i), namely that our model offers a hypothesis for how human brains resolve the “curse of a compositional mind” (Spelke 2022: freeform recombination of concepts yields a combinatorial explosion). In this more speculative hypothesis, culturally-transmitted concepts and linguistic schemas help guide inner thought, making combinatorial thinking more tractable. We've shied away from making that claim, because it is not directly supported by our findings, though it is not contradicted by our work, either. Right now the manuscript tries to strike a balance by simply saying that "Natural language, even if it is not actually the same as our inner mental language, acts as a vast reservoir of human concepts, and provides a flexible algebra for combining them. Thus our best near-term strategy for modeling human thinking may be to use natural language as a *heuristic approximation* to an inner Language of Thought" (emphasis added), later refining the claim from natural language generally to large language models specifically as a "reasonable surrogate for this [human] bottom-up [proposal] process, even if its inner workings might differ greatly from human bottom-up proposal processes". We hope that the paper's current wording manages to strike the right balance, and are happy to revise in order to more clearly signpost the actual contributions, results, and concrete hypotheses, especially given that, if accepted, we have an additional page to include discussion and add clarifications. Last, about the other weaknesses you raise: > [the work is] not suited to concepts that do not have a straightforward linguistic definition... the authors are clear in stating (in lines 286 to 293) that they do not claim that natural language is the language of thought but rather that it is a useful heuristic tool for modeling human thinking. > Natural language is ambiguous, so natural language strings do not really provide clear concept definitions - and, by extension, inference over natural language strings cannot strictly be viewed as inference over concept definitions. The authors acknowledge this point Indeed, we *do not* provide a unified theory of human few-shot concept learning. Thank you for pointing out that the submission is careful about clearly discussing the limits of the work. --- Rebuttal Comment 1.1: Title: Thanks for the reply! Comment: Thank you for the reply, which is very helpful for clarifying the few things I found unclear about the paper.
I continue to view this paper highly and to enthusiastically recommend acceptance.
Summary: The paper proposes an approach to scaling up intractable Bayesian models of few-shot concept learning. The key idea is to (1) train an amortized posterior distribution q(C|X_{1:K}) over concepts (represented as natural-language expressions) and then (2) make predictions about membership by marginalizing over a finite set of latent concepts sampled from q. Several variants of this approach are considered, all of which use a large language model (Codex) for the likelihood function p(X | C) and proposal distribution q. The approach is evaluated on two few-shot learning experiments: a generative number concept task, and a discriminative logical concept task. A version of the model that reweights sampled concepts according to a prior distribution trained on human judgements is found to fit human patterns better than alternative models which either (1) replace the human-tuned prior scores with an off-the-shelf language model (CodeGen), (2) replace the natural-language prior with a Python prior, or (3) entirely omit the separate proposal distribution q and sample proposed concepts directly from the (learned) prior instead. Strengths: The manuscript presents a promising and potentially influential approach to scale up Bayesian models on few-shot learning tasks. A number of timely ideas are explored, and the results compare favorably with other recent approaches that have been well-received at NeurIPS (e.g. Bayesian program learning, and other neurosymbolic approaches). Many of the basic ideas in play are pretty familiar by now (e.g. amortizing a proposal distribution to avoid costly search over hypotheses, using natural language as a more expressive latent hypothesis space, fine-tuning on human priors to impart human-like inductive biases). But this work still 'remixes' and integrates them in an interesting way: for example, other language-guided BPL approaches (e.g. Wong et al, 2021, ICML; Andreas et al., 2017) have required a set of language annotations to fine-tune on, rather than utilizing more generic priors (although Codex would be more impractical for multi-modal tasks, conditioning on images). Evaluating on human behavior (rather than synthetic benchmarks) is another strength. Weaknesses: My primary concerns are around how to do credit assignment to the many potentially brittle component parts of the pipeline, and the applicability of this scaling approach outside of toy domains. One somewhat deflationary critique is that there is no Bayesian *inference* at all in this pipeline, and certainly no inference *over natural language*. The approach depends on the independent existence of a sufficiently powerful amortized posterior distribution from a very expensive inference procedure that has already been performed off-stage (i.e. training Codex). The model comparisons simply show more or less efficient ways to approximate a *predictive* distribution by marginalizing over high-probability regions of that independently pre-existing posterior. This isn't inference! Arguably, it's just wrangling the amortized product of an earlier inference, exploring different ways of leveraging human data to correct distortions in this mammoth amortized posterior and project it down to specific tasks at hand.
I still think this kind of 'wrangling' work is interesting and novel, as there are clearly more or less effective ways to do it, but (1) I believe the framing of actually doing Bayesian inference over natural language is misleading (including statements like 'our work adds Bayesian inference'), and (2) I would suggest much more focus and scrutiny on the black-box, closed-source models that are the 'wizard behind the curtain,' the parts of the pipeline actually responsible for the few-shot concept induction. For example, it is stated that Codex was used 'because we hypothesized that training on source code would transfer to reasoning about numbers,' but no other choice was considered. I would like to see a bigger space of candidate posteriors compared. I would also like to see a stronger 'credit assignment' analysis testing components of the Codex pipeline (which I understand has now been deprecated, making these results difficult to reproduce?) For example, one very strong assumption is that each linguistically expressed concept maps deterministically to a single Python program, which in turn, can be evaluated deterministically on each number. There may be multiple valid programs corresponding to each linguistic utterance. What if the linguistic concept is good but the translation to Python is bad? i.e. what if the best cutting-edge model were used as the proposal distribution, but a weaker code-generation model were used in the likelihood to evaluate the generated concepts on numbers? The current analysis cannot pull these apart. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. It wasn't clear how train-test splits were handled. The Fig. 2 caption alludes to 'held-out' judgements, but what was the split? How many examples were used to tune the prior? Were entire concepts held out, or were some human data seen for each example set? 2. In Fig. 2, it looked like the 'no proposal dist' model might potentially be competitive if it was given more than 100 samples to hit on good descriptions. This is still a relatively small number of samples, and even if it is somewhat expensive, it would aid understanding to extend the x-axis one more degree of magnitude. 3. It looked like data came from the extremely small, cherry-picked set of concepts used in the original Tenenbaum dataset (with N=8). However, much larger and more systematic datasets are now standard, e.g. Bigalow & Piantadosi, https://doi.org/10.5334/jopd.19, releasing 272k judgements using many more concepts. I would strongly suggest reporting generalization performance to these new concepts. 4. It wasn't clear how the Latent Language model actually worked for these tasks; it would help to clarify precisely how it differed from the other models in this case (did it just take the single maximum likelihood concept from the proposal distribution instead of marginalizing proportional to each sample?) 5. Did the likelihood function assume boolean output from the Codex-derived Python code? How was this ensured? What happened if the generated Python code returned an error or non-boolean? 6. Section 5 states: "Except we now have a discriminative learning problem instead of a generative one" -- except wasn't the number game treated as discriminative by the model, using an indicator function (effectively deriving a discriminative classifier for whether each number is in or out of the concept?) 7. I would have liked to see a more reasonable neurosymbolic BPL baseline, which actually does proper Bayesian inference (i.e.
MCMC) using the latent concept space as the proposal distribution over valid programs. A related more recent paper that may be worth including: * Wong, L., Grand, G., Lew, A. K., Goodman, N. D., Mansinghka, V. K., Andreas, J., & Tenenbaum, J. B. (2023). From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought. arXiv preprint arXiv:2306.12672. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: A clear limitation is that a central pillar of the approach, the Codex API, is now deprecated and unavailable to other researchers. It is therefore not clear whether any of the results can be replicated, creating further incentive for the authors to compare performance on a number of other amortized posteriors (proposal functions) and likelihood functions, ideally those which are open and maintained. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
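Question 5 above raises a general implementation detail for any pipeline that executes LLM-generated code: errors and non-boolean returns must be handled somehow. A minimal, hypothetical guard might look like the following (the function name `in_concept` and the treat-failures-as-non-membership policy are our assumptions, not the paper's documented behavior):

```python
def safe_membership(generated_src: str, x) -> bool:
    """Evaluate LLM-generated code expected to define `in_concept(x)`,
    coercing the result to bool and treating any failure as non-membership."""
    namespace = {}
    try:
        exec(generated_src, namespace)            # may raise SyntaxError, etc.
        return bool(namespace["in_concept"](x))   # coerce truthy/falsy returns
    except Exception:
        return False  # errors or a missing definition count as "not in C"
```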
Rebuttal 1: Rebuttal: Thank you for the thoughtful input, kind words, and suggested improvements. We address the main issues, with **new results bolded**, but can answer all of your questions during the discussion period. > My primary concerns are around how to do credit assignment to the many potentially brittle component parts of the pipeline In our view, the key ideas are (1) natural language as a hypothesis space, (2) Bayesian reasoning for forming predictions, and (3) LLMs for tractable inference. Achieving the full suite of results requires all ingredients, established by comparing against Bayesian Program Learning (no natural language), latent language (no Bayes), latent source code (LLMs, but no natural language), and no proposal dist (no bottom-up proposals for tractability). Our pipeline has 3 components: prior; likelihood; and proposal distribution. We ablate the prior by not tuning it and by removing it (in latent language). The proposal distribution was also ablated. The likelihood is so critical for discarding erroneous hypotheses that we did not even consider a model without it, but **we've now run a likelihood ablation (global response PDF), confirming that the likelihood computation is essential.** > what if the best cutting-edge model were used as the proposal distribution, but a weaker code-generation model were used in the likelihood To understand what happens when the individual components are merely made weaker instead of ablated entirely, we've run **new experiments on Llama-2, a recent 70B open-source model that is thought to be weaker than Codex (see attached global response PDF).** The new data show that a weaker LLM, like Llama-2, can implement effective likelihood models (act as code-generators), but stronger LLMs are important for proposal distributions, especially in the low-sample regime. Although these Llama-2 results do not change our scientific conclusions, they help in understanding how to practically engineer systems like ours using off-the-shelf tools. > Codex... has now been deprecated, making these results difficult to reproduce Codex is deprecated but freely available for academic use (with a special application needed), or on Azure (expensive, but for anyone). Ultimately, any closed-source model hurts reproducibility, and will be eventually deprecated. To mitigate this, we've archived our OpenAI queries+responses, as noted in the author checklist, and are doing Llama-2 replications (see above). > data came from the extremely small, cherry-picked set of concepts used in the original Tenenbaum dataset (with N=8)... larger and more systematic datasets are now standard, e.g. Bigalow & Piantadosi Before performing our study we closely examined the Bigalow & Piantadosi dataset and discussed it with one of the authors of that dataset. We concluded it was too noisy, and that many Mechanical Turk workers were not correctly following the instructions. We suggest examining their data for "powers of 2", which shows that participants failed to even label the provided numbers (16, 32, 2, 8) as being 100% in the concept. Although small, the Tenenbaum data exhibits important phenomena such as few-shot learning of both rule-based and similarity-based generalizations. It is also canonical and pedagogical, serving as a main example of Bayesian concept learning in the textbook "Machine Learning: A Probabilistic Perspective" (Murphy 2012). 
Although a bigger dataset would be desirable, we believe the Tenenbaum data effectively shows the basics of the model, before moving on to the bigger logical concepts dataset. > scaling... outside of toy domains From the perspective of cognitive modeling, the logical concept data is quite challenging. To the best of our knowledge, there is no other study of logical concept learning in humans which is nearly as broad, high-quality, and high-resolution as Piantadosi et al. 2016. > how train-test splits were handled For Number Game we split the human judgments in 10-way cross-validation, *mixing across concepts.* Originally we thought that with only 8 concepts, fitting a prior while holding out whole concepts would not work. **Based on your question we reran by testing only on held-out concepts, finding that the results change very little: $R^2$ drops from 0.95 to 0.91.** Therefore, the model learns a prior that works for novel concepts / training data. For logical concepts, we followed Piantadosi 2016, which held out specific learning curves run on independent participants. That split shows generalization to new training data, like the new Number Game result given above. Our replication experiments get their test data from running new participants on novel out-of-distribution concepts (training on Piantadosi 2016), again showing that the learned prior can transfer to never-before-seen concepts. > I would have liked to see a more reasonable neurosymbolic BPL baseline We appreciate your suggestion of a modern neurosymbolic baseline, and **have now run a DreamCoder comparison, which gives a decent (but not great) fit to Number Game concepts (attached pdf).** > 'no proposal dist' model might potentially be competitive if it was given more than 100 samples We performed a new experiment, finding that **with an order of magnitude more samples ($10^3$), 'no proposal dist' agrees with the human data only at $R^2=.41$, i.e., it levels off in fit, although it should eventually trend upward with enough samples.** As the number of samples tends toward infinity, 'no proposal dist' should be just as good as the full model. > there is no Bayesian inference at all in this pipeline, and certainly no inference over natural language We've raised this issue in the global response, and can certainly alter our word choice. > how the Latent Language model actually worked... did it just take the single maximum likelihood concept from the proposal distribution instead of marginalizing proportional to each sample? Exactly, it works as you described. --- Rebuttal Comment 1.1: Title: Thanks! Comment: I very much appreciate the careful and thoughtful response, particularly the new results with Llama-2 pointing out the importance of having a strong pre-trained model for the proposal distribution. I think the paper will be greatly strengthened by these changes. I still think this is a good paper (thus the positive score) but given the opportunity for some back-and-forth, I wanted to clarify two points. 1. About the 'brittleness' of the pipeline: I agree that the 'classes' of ablations appropriately map onto the 'joints' of the approach. I was instead trying to note brittleness in the linking function between the idealized mathematical model worked out in section (3) and the many specific *instantiations* or *choices* used to realize the model (e.g.
`CodeGen` for the pre-trained prior, `all-MiniLM-L6` for the tuned prior, `code-davinci-002` as the proposal distribution translating to Python, the tuneable Platt transform as the linking function to Likert ratings, etc.) Each of these choices represents an important 'ancillary assumption' that constrains the interpretation of the results --- the exact same theoretical model with different choices plugged in could do substantially better or worse in practice. Imagine that someone includes this model as a baseline in a future paper; they plug in a set of 'reasonable' off-the-shelf models as the linking functions, and find that it performs very poorly relative to their approach. Would they then be licensed to reject the whole model? Or would you respond "of course it doesn't work if you plug in those, you should have used these." But there are enough 'experimenter degrees of freedom' for each choice that it's not clear what can ultimately be attributed to the core theoretical approach (vis-a-vis section 3) and what is an artifact of the particular bundle of ancillary assumptions used as linking functions. It's ok if it's intended to be an existence proof that some bundle of ancillary assumptions suffices to achieve a certain level of performance, but it's hard to generalize any core principles. 2. About the terminology: I'm sorry to be grumpy about this, but I have to insist that 'inference' is being used in a deeply misleading way here. There is an active community at the intersection of computational cognitive science and ML specifically working on the problem of 'performing Bayesian inference over natural language' (i.e. inferring a posterior `P(concept | natural language) \propto P(natural language | concept) P(concept)` using, e.g., principles of pragmatics and social cognition in the likelihood function). The title and abstract of the paper strongly suggest that the problem has been solved and we are now able to do Bayesian inference over natural language. However, this is not at all the problem addressed by the paper. I believe a large segment of the target audience for this paper (i.e. computational cognitive scientists working on human few-shot concept learning) will be confused or misled by the non-standard evocation of an 'inference' problem here. I'm not familiar with any adjacent literature that uses 'inference' to describe the problem of marginalizing over a pre-existing posterior distribution. I'd be much happier to recommend this paper given a less contentious (but still very cool!) tweak of the title/abstract like "Modeling Human Few-Shot Learning using Amortized Language Models" or "Modeling Human Few-Shot Learning by Translating Linguistic Proposals" or just "Modeling Human Few-Shot Learning using Natural Language" or something. --- Reply to Comment 1.1.1: Title: Thanks for the engagement! We're revising as follows Comment: Thank you for raising interesting points and helping us refine the paper. We're revising as follows: > the exact same theoretical model with different choices plugged in could do substantially better or worse in practice. This is an important point, and relevant to many structured Bayesian cognitive models: there are typically multiple reasonable choices for the prior, likelihood, inference method, and hypothesis space. In our past experience with Bayesian models, good agreement with humans requires at least *some* “tinkering” of these components, and can require sampling budgets larger than what we think humans plausibly process.
Relative to other Bayesian Program Learners we’ve worked with, this new model required less "tinkering", and vastly smaller sampling budgets. For example, all LLMs reported are the first ones we tried (except for logical concepts: we tried Codex before GPT-4). We needed less “tinkering” as we largely remove a critical degree of freedom: the design of the structured symbolic hypothesis space itself. We also introduce new degrees of freedom, like the choice of LLM, and new continuous parameters for estimating the prior. Some related models, like Rational Rules, have almost no continuous parameters, but require designing a custom discrete hypothesis space. Piantadosi et al. ‘16 shows that such a model's performance depends on that design. We have extra learnable parameters, but remove combinatorial degrees of freedom as a result. We'll add this to the discussion: **Generalizability of the theoretical framework.** The basics of the model make few commitments, yet instantiating it requires selecting specific language models, engineering prompts and likelihoods, etc. More broadly, a high-resolution cognitive model, particularly a structured Bayesian one, requires domain-specific modeling choices. How much credit should we assign to the general theoretical framing, as opposed to particular engineering decisions? Although our paradigm introduces new degrees of freedom (which LLMs/prompts to use), it removes others (the grammatical structure of the symbolic hypothesis space). On balance, we are cautiously optimistic that the framework will generalize with significantly less domain-specific tinkering, at least for abstract symbolic domains. This optimism is because the framework replaces hand-designed structured hypothesis spaces with pretrained neural models, and because reasonable “default” neural networks worked well across our experiments. > someone includes this model as a baseline… plug in a set of 'reasonable' off-the-shelf models… find that it performs very poorly relative to their approach. Would they then be licensed to reject the whole model? Following the above discussion, we’d feel comfortable with other researchers using the theoretical framework as a baseline and rejecting it if it flunks their data. (And we couldn’t say the same for DreamCoder, BPL, SOAR, etc.) One nuance, though, is that the proposal distribution needs a strong LLM, which we now have hard data to support, thanks to your earlier input. > the terminology... 'inference' is being used in a deeply misleading way here... [suggesting] the problem of 'performing Bayesian inference over natural language'... using, e.g., principles of pragmatics and social cognition Thank you for explaining. We’ll change the title to not include the phrase “Bayesian Inference over Natural Language”, and clarify that our work has nothing to do with the Rational Speech Act model, recursive/social reasoning, etc. Thanks for suggesting possible titles. We also use “utterance” in the paper. Do you think that could mislead readers into thinking this is about external communicative language? If so, we’ll globally remove “utterance”. > I'm not familiar with any adjacent literature that uses 'inference' to describe the problem of marginalizing over a pre-existing posterior distribution Respectfully, we’d like to explain why we think the phrase “Bayesian Inference” is consistent with literature on Bayesian Program Learning and Bayesian computational cognitive science. We view $q$ as a data-driven proposal distribution. Given that, Lake et al.
‘15 uses “inference” to refer to generating data-driven proposals, which are then weighed by prior and likelihood, like our model. This approximate posterior—the result of inference—is then marginalized over to form predictions, like our model. DreamCoder [Ellis et al. ‘21] also uses this terminology, as does Latent Language [Andreas et al. ‘18] (“inference”, minus the term Bayesian). Other NeurIPS papers in cognitive modeling have long used similar terminology. From Shi&Griffiths ‘09: “importance sampling provides a simple and efficient way to perform Bayesian inference, approximating the posterior distribution with samples from the prior weighted by the likelihood.” Thinking of the raw LLM as a pre-existing amortized posterior is an interesting perspective, but not the only appropriate vocabulary.
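To make the scheme under debate concrete, the "data-driven proposals, then weighed by prior and likelihood" procedure described above can be written out as follows (a sketch in our own notation, not an equation from the paper):

```latex
% Draw concepts C_1, ..., C_n from the data-driven proposal q(C | X),
% weigh each by prior times likelihood, and marginalize:
w_i = p(C_i)\, p(X \mid C_i), \qquad
\hat{p}(x_{\text{test}} \in C \mid X)
  = \frac{\sum_{i=1}^{n} w_i \,\mathbb{1}[x_{\text{test}} \in C_i]}
         {\sum_{i=1}^{n} w_i}.
% Strict self-normalized importance sampling, as in the Shi & Griffiths
% quote above, would instead use w_i = p(C_i) p(X | C_i) / q(C_i | X).
```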
Rebuttal 1: Rebuttal: Thank you all for your reviews, and especially for the encouragement and constructive criticism. Below we summarize some important strengths and weaknesses identified across the reviews, and overview the new results in the attached PDF that address those weaknesses. **Strengths:** qc38 describes the work as an “important advance in computational cognitive science” which "identifies a major open question (tractability vs expressivity) and addresses it via a key insight (using natural language is now much more feasible with recent LLM advances)". The other reviewers find the work “promising and potentially influential” [y5Ba], "useful for future researchers" across Bayesian AI and deep learning "as a way to take insights from one school of thought and use them to overcome weaknesses of the other school of thought" [ce9a], and contributing a "super cool result" [8GEn]. Experiments “illuminate both the power and limitations” in light of “strong baselines” [qc38], yielding "results [that] compare favorably with other recent approaches that have been well-received at NeurIPS" [y5Ba] and “provides a strong fit to human data” [ce9a]. Reviewers mentioned the ability to explain patterns of human mistakes via "explainable failure modes"; the ability to "generalize to totally novel concepts" [qc38]; and last, the "transfer [of] human priors into a neural model" [ce9a]. From a technical perspective, reviewers describe the computational model as one which takes classic ideas and "'remixes' and integrates them in an interesting way" [y5Ba] via "multiple non-obvious steps" [qc38]. **Weaknesses, and responses (see also attached PDF):** - qc38 asks "Is there a stronger or more recent baseline for Number Game, vs Latent Language from 2018?", echoed by y5Ba, who requests "a more reasonable neurosymbolic BPL baseline". We have now run DreamCoder, a 2021 neurosymbolic Bayesian Program Learner. We find a nontrivial gap between DreamCoder and our model (see attached PDF), even when DreamCoder is granted 2 orders of magnitude more test-time samples. This finding further supports the conclusion that, relative to prior BPL frameworks, our model can produce more human-like predictions with far fewer samples. - y5Ba points out that we use closed-source LLMs, which is a reproducibility hazard. qc38 has a slightly different take, noting "reproducibility issues with using GPT4 seem addressed by including the responses in the software/data release" (indeed: we are including it in the data release, as noted in the author checklist). We are further addressing this reproducibility hazard by replicating the results using an open source model, Llama-2. The provided PDF shows results on a Llama-2 Number Game model, showing that open LLMs can be used to build a decent model, but that they are currently slightly worse than OpenAI's LLMs as a proposal distribution. (Llama-2 on logical concepts would take more than 2 weeks to complete on the hardware available to us.) - y5Ba asks "how to do credit assignment to the many potentially brittle component parts of the pipeline" and requests a broader set of LLMs be tried, including weaker ones for different pipeline stages. As a reminder, the pipeline includes prior, likelihood, and proposal distributions. We ablated the prior by not tuning it, and by disabling it completely (for latent language), and also ablated the proposal distribution, but we never ablated the likelihood.
The attached PDF shows a new likelihood ablation, revealing that the likelihood is very important. To understand what happens when the individual components are merely made weaker instead of ablated entirely, the new Llama-2 results include data for when individual components are replaced with the weaker Llama-2. The new data show that weaker LLMs can implement effective likelihood models, but stronger LLMs are important for proposal distributions, especially in the low-sample regime. Although the Llama-2 results do not change the scientific conclusions, they are helpful in understanding how to practically engineer systems like ours using off-the-shelf tools. **Potentially Revised Terminology:** Reviewer y5Ba suggests that the framing as "Bayesian inference over natural language" is technically incorrect, preferring to think of pretraining the LLM as "inference", and our model as predicting based on that already inferred distribution. We're happy to revise our terminology, and understand that Bayesian vocabulary can be quite nuanced (and occasionally contentious!). Right now, we think our original terminology is consistent with adjacent literature, but we can go with whatever terminology the reviewers collectively feel is best. Pdf: /pdf/ab4120f7a1d93151b79bb48a1867d86817b9c690.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
RECKONING: Reasoning through Dynamic Knowledge Encoding
Accept (poster)
Summary: The paper introduces a two-level learning algorithm called RECKONING, which enhances in-context reasoning performance by addressing the issue of distractor facts. The algorithm consists of two learning steps, an inner and an outer loop, where the inner loop trains the model to encode contextual knowledge into its parameters through backpropagation, and the outer loop teaches the model to use the updated parameters to answer questions. The experimental results on two multi-hop reasoning datasets indicate that RECKONING outperforms the ICR baseline by up to 4.5%. Additionally, RECKONING generalizes better to longer reasoning chains unseen during training and is more robust to distractors in the context. Strengths: Overall, the research topic presented in this work is of interest and importance, as it enhances transformer-based language models by incorporating in-context knowledge into parameters to improve reasoning ability. The proposed RECKONING consistently outperforms the conventional ICR-based methods. The proposed method's superiority is further confirmed through comprehensive experiments and ablation studies, which demonstrate its advantage in generalizing to longer reasoning chains. Weaknesses: While the research topic is interesting and the proposed algorithm outperforms conventional in-context reasoning methods, there are some weaknesses in the paper. The proposed inner loop of encoding knowledge into model parameters to improve reasoning abilities is limited by its incremental nature, which may lead to issues when facts from different contexts contradict each other. For example, the fact that "Peter is a son to Kyle" may be changed to "Kyle is a son to Peter" in another question's context. Additionally, the experiments compare only two baselines, "No-Facts" and "Random-Facts," which are too weak; no other publicly available baselines have been compared. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. In Figure 3, it would be helpful to include more experimental results with training on both 4- and 6-hop questions, or both 2- and 4-hop questions to further validate the proposed algorithm's performance. 2. It would also be beneficial to include other subtasks like proof generation in the ProofWriter dataset to further explore the algorithm's capabilities. 3. Conducting more experiments on public pre-trained language models like LLaMA would provide additional validation for the proposed method's superiority. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: I cannot find any potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Responses to Reviewer 5k3Z (R5) We thank the reviewer for their constructive comments, as well as for describing our idea as interesting and important and for recognizing the benefits of our approach. We address the reviewer’s concerns and questions below: **W1: The reviewer argues that if there are contradictory facts between questions, it may lead to issues with the model performance because of the incremental nature of the inner-loop** We first clarify the inference process of RECKONING. For each new question, we start from the trained meta parameters of the model (which do not contain any background knowledge for the test set questions) and do a few steps of gradient updates to encode the background knowledge. Next, we directly evaluate the updated parameters on the question. After that, we discard this locally updated model. For the next question, we start from the trained meta parameters again, which do not contain any information about the background knowledge from the previous questions. Thus, the inner-loop in our proposed approach is not incremental. The current question’s inner-loop does not depend on the previous question's inner-loop. Each question’s inner-loop is unique to the associated question during inference. Suppose the previous question’s context and the current question’s context contradict each other. In that case, the model will not be affected since the trained meta parameters will not contain information on the previous question’s context when we do inference on the current question. **W2: The reviewer argues that the experiments contain only weak baselines and no public baselines compared** We have added more baselines; see our response to Reviewer 1’s Q3. For a stronger baseline, we have finetuned GPT2-XL with LoRA for in-context reasoning. We show that on ProofWriter-5-hop, RECKONING (70.2) still improves over this baseline (65.0) when there are distractors in the context of a question. For stronger public baselines, we report the performance of GPT-3.5 (text-davinci-003) on ProofWriter and CLUTRR in our supplementary material (Table 9). We include the results below as a reference:

| Method | ProofWriter 2-h | 3-h | 5-h | ProofWriter (distractor) 2-h | 3-h | 5-h | CLUTRR 2-h | 4-h | 6-h |
|---|---|---|---|---|---|---|---|---|---|
| GPT-3.5 (0-shot) | 58.4 | 56.4 | 53.7 | 49.1 | 47.1 | 45.3 | 35.6 | 16.0 | 18.5 |
| GPT-3.5 (8-shot) | 78.0 | 82.4 | 80.1 | 58.7 | 57.2 | 54.5 | 39.0 | 18.5 | 20.8 |
| RECKONING | **99.5** | **99.7** | **99.8** | **79.8** | **83.7** | **84.0** | **98.3** | **97.6** | **94.8** |

The results show that even a best-performing model like text-davinci-003 still fails to perform well on the reasoning tasks and does not generalize well when the context includes distractors. Compared to a public best-performing model, we show that RECKONING significantly benefits reasoning tasks under a systematic complex setting. For other public baselines, we argue that our idea and the proposed algorithm are novel and have not been done before. Thus, we believe that other public baselines on the two datasets are also weak and do not fit in the scope of this study. Specifically, the public baselines do not evaluate models under a systematic generalization setting.
**Q1: The reviewer suggests experiments that mix training data from different hops of CLUTRR for a longer reasoning chain** We trained the model using RECKONING on a mixture of 2-hop, 4-hop, and 6-hop data. We report our results in the attached PDF as Figure 1. We show that the performance of FT-ICR and RECKONING is roughly equivalent at low-hop reasoning. However, RECKONING shows greater improvement when generalizing to OOD hops, similar to Figure 3 in our paper. **Q2: The reviewer suggests expanding our proposed approach to the proof generation task** We thank the reviewer for this interesting idea! We note that our results in Table 4 show which facts the model recalls as relevant for reasoning about the question, which can be viewed as the first step of a proof, since the model identifies the relevant facts for reaching the correct answer. Extending RECKONING to the “full” proof generation task would require larger changes than we could implement in the rebuttal window. We will explore this and present the results in the camera-ready. **Q3: The reviewer suggests conducting more experiments on public pre-trained language models like LLaMA** This is a great suggestion! As in our responses to W2 and Reviewer 2’s W1/Q1, we have demonstrated that large pre-trained language models like GPT-3.5 and ChatGPT still struggle with complex reasoning and do not generalize well with distractors (Table 9 in the supplementary material). We will conduct more experiments on open-source language models like LLaMA in our revision. --- Rebuttal Comment 1.1: Title: Thank you for improving your rating! Comment: Many thanks to the reviewer for their helpful and constructive suggestions! We are grateful for the reviewer raising their score from 4 to 5.
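The per-question inference procedure clarified in W1 of this rebuttal (clone the meta parameters, encode the facts with a few gradient steps, answer, discard) can be sketched as follows (our illustration, assuming a HuggingFace-style causal LM; all names, step counts, and learning rates are hypothetical):

```python
import copy
import torch

def reckoning_inference(meta_model, facts, question, inner_steps=4, lr=3e-5):
    """facts/question: tokenized batches (dicts with input_ids, attention_mask).
    Returns an answer; the locally updated model copy is then discarded."""
    model = copy.deepcopy(meta_model)               # fresh copy of meta params
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(inner_steps):                    # encode facts via CLM loss
        loss = model(**facts, labels=facts["input_ids"]).loss
        opt.zero_grad(); loss.backward(); opt.step()
    model.eval()
    with torch.no_grad():                           # facts no longer in context
        return model.generate(**question)
```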
Summary: This work aims to solve reasoning tasks where models need to rely on knowledge provided as part of the task input. Motivated by the fact that pre-trained models encode a lot of irrelevant facts and they are good at retrieving the relevant ones for solving a downstream task, this work proposes an algorithm, RECKONING, that updates a model’s parameters to memorize the facts on the fly and then answer the question without having those facts as input explicitly. Experiments show that the proposed method can improve the model performance compared with the in-context reasoning baseline. The resulting model also generalizes better to examples requiring longer reasoning chains, is more robust to distractors, and is even more efficient in a certain setting. Strengths: - Originality: Unlike prior works that propose different ways to filter irrelevant facts explicitly, this work makes use of the pre-trained LM to subsume all the facts and filter the irrelevant facts on its own. The idea is interesting and makes sense. - Quality: To validate the effectiveness, the work conducts sufficient experiments. The results demonstrate the superiority of the method in scenarios where a longer reasoning chain is needed or there exist a lot of distractors, which well motivates this study. An analysis is also provided to address the concern about the computation efficiency of the method. - Clarity: The algorithm is well explained despite requiring some effort to understand it. Weaknesses: - In the standard setting where there are no added distractors, the improvement brought by the method is actually minimal (Table 1). - Also, the need to update the parameters on the fly makes the algorithm hard to apply to large language models, which might have a greater capability in filtering irrelevant facts in-context already. I would suggest extending the idea to these large LMs by updating a subset of parameters (like adapters do). - One potential disadvantage of the method could be that the knowledge facts contain a lot of noise or toxic content that may undermine the model’s basic ability for reasoning. In this regard, methods that first filter irrelevant facts and then provide the remaining facts in-context do not have such an issue. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. For the Random-Facts baseline, do you mean the random facts are provided during inference or also during training? Do you train it with the multi-task objective? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: No limitation is discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Responses to Reviewer W7M8 (R4) We thank the reviewer for their constructive comments and for viewing our idea as interesting and sound, our experiments as sufficient and well-motivated toward the study, and our algorithm as well explained. We address the reviewer’s comments and questions below: **W1: The reviewer argues that in the standard setting, where there are no added distractors, the improvement brought by the method is minimal (Table 1 of our paper)** In our paper, we mainly focus on the problem of generalization in more complex settings with distractors and longer reasoning chains. While our results in Table 1 show a small improvement for RECKONING, the baseline (FT-ICR) is also quite strong in this idealized setting, and our results are partly a sanity check that RECKONING works as well (even slightly better!). However, RECKONING exhibits even stronger improvements when idealized conditions are removed and the model has to generalize out of distribution and handle noisy inputs. **W2: The reviewer wonders if the performance gain of RECKONING will generalize to larger language models** This is an interesting question. First, we want to show that more advanced large language models like GPT-3.5 (text-davinci-003) still fail to generalize when there are distractors, i.e., irrelevant information present in the context. Below are our evaluation results on ProofWriter and CLUTRR using GPT-3.5:

| Method | ProofWriter 2-h | 3-h | 5-h | ProofWriter (distractor) 2-h | 3-h | 5-h | CLUTRR 2-h | 4-h | 6-h |
|---|---|---|---|---|---|---|---|---|---|
| GPT-3.5 (0-shot) | 58.4 | 56.4 | 53.7 | 49.1 | 47.1 | 45.3 | 35.6 | 16.0 | 18.5 |
| GPT-3.5 (8-shot) | 78.0 | 82.4 | 80.1 | 58.7 | 57.2 | 54.5 | 39.0 | 18.5 | 20.8 |
| RECKONING | **99.5** | **99.7** | **99.8** | **79.8** | **83.7** | **84.0** | **98.3** | **97.6** | **94.8** |

While the comparison is not perfect since we cannot tune GPT-3.5, we note that the performance drop in the distractor setting on ProofWriter is much greater than for RECKONING. As a closer comparison, when we apply RECKONING to larger language models, we see similar improvements. We applied RECKONING to GPT2-XL (1.5B) with LoRA and evaluated on ProofWriter 5-hop with all distractors. Compared to FT-ICR’s performance (**65.0**), RECKONING’s performance (**70.2**) is **5.2** percentage points higher, demonstrating that RECKONING’s benefits still appear with larger language models. **W3: The reviewer argues that noise or toxic content in knowledge may undermine the model’s basic ability for reasoning. They propose that methods that filter irrelevant facts ahead of time could be more effective than RECKONING** This is an interesting point. However, we note that RECKONING is not in contradiction with a method that first filters irrelevant facts before providing them to the model for reasoning. Facts could be filtered before being provided to RECKONING too. We believe that combining these two approaches might work complementarily since existing approaches to filter irrelevant facts are not perfect. **Q1: The reviewer asks if we provide random facts during inference or during training, and do we train the random facts baseline with the multi-task objective** We provide random facts both during training and during inference. We train the random facts baseline without the multi-task objective.
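The bi-level training procedure discussed across these reviews can be sketched in a simplified, first-order form (our illustration: the actual method back-propagates through the inner loop, and all names, step counts, and the equal outer-loss weighting are assumptions):

```python
import copy
import torch

def outer_step(meta_model, meta_opt, facts, qa, inner_steps=4, inner_lr=3e-5):
    fast = copy.deepcopy(meta_model)                  # task-specific fast copy
    inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
    for _ in range(inner_steps):                      # inner loop: memorize facts
        loss = fast(**facts, labels=facts["input_ids"]).loss
        inner_opt.zero_grad(); loss.backward(); inner_opt.step()
    # Outer objective on the updated parameters: answer the question and also
    # recall the encoded facts (equal weighting here, an assumed 1:1 setting).
    inner_opt.zero_grad()
    outer_loss = (fast(**qa, labels=qa["labels"]).loss
                  + fast(**facts, labels=facts["input_ids"]).loss)
    outer_loss.backward()
    # First-order approximation: transport fast-parameter gradients back to
    # the meta parameters instead of differentiating through the inner loop.
    for p_meta, p_fast in zip(meta_model.parameters(), fast.parameters()):
        if p_fast.grad is not None:
            p_meta.grad = p_fast.grad.detach().clone()
    meta_opt.step(); meta_opt.zero_grad()
```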
Summary: This paper introduces a novel method for addressing logical reasoning in natural language question answering, specifically when the required knowledge is part of the context. One major challenge highlighted in this paper is that the included knowledge often contains irrelevant information, which can mislead the reasoning process and negatively impact question-answering performance. The proposed solution involves a bi-level optimization technique during inference. The technical key is utilizing the knowledge to quickly fine-tune the model at inference time, using a causal language modeling objective. This eliminates the need for the knowledge to be explicitly included in the context, as it is encoded within the model parameters. It also helps the model to focus on the relevant parts of the knowledge. To evaluate the effectiveness of the approach, multiple baselines incorporating variations of in-context learning are compared. The experimental results demonstrate improved performance using the proposed method, particularly in terms of generalization over longer reasoning chains. Strengths: The proposed approach is sound and interesting. The paper is well-written. They show that their inference-time optimization makes the model more robust to redundant information. It becomes more generalizable to longer chains of reasoning. It is more efficient when multiple questions are asked based on the same context. Weaknesses: The run-time analysis is to some extent misleading. For the mentioned problem setting, in general, we assume each example comes with its own context; that is, the context is not often shared. Therefore, I guess we expect this model to be less efficient in general for reasoning as it needs the bi-level optimization at inference time. I think highlighting the efficiency in the multi-question setting looks a bit far-fetched. Technical Quality: 3 good Clarity: 3 good Questions for Authors: —While it is expected that the model learns to focus on the relevant parts of the knowledge, it is not very clear to me why the approach should make the model more generalizable to longer chains of reasoning? Any intuition? —Are there any SOTA results better than what you report in this paper in your baseline variations? —As it is explained in the paper, for the multiple question setting some parts of the knowledge are relevant for one question while irrelevant for others. Since the model is tuned once and the same model is used for answering all questions, then the performance should be lower in this case, right? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitation section is not included in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Responses to Reviewer ZqBF (R3) Dear reviewer, thank you for the constructive comments; we appreciate your time and effort. We thank the reviewer for recognizing our approach as interesting and sound, and our paper as well-written. We are encouraged to see the reviewer acknowledging our contributions. We address the reviewer’s concerns and questions below: **W1: The reviewer argues that the runtime analysis in the multi-question setting does not demonstrate RECKONING’s efficiency because the context is not often shared in general** We argue that the setting where multiple questions share the same context is a common problem setting. For example, reading comprehension (e.g., SQuAD) typically involves reading a passage and answering multiple questions about it. In our own experiments, the ProofWriter dataset typically contains fact sets about which multiple questions can be asked. Certain facts are distractors for particular questions and relevant for others. One of the benefits of RECKONING is that it can answer multiple questions based on the same set of facts, and our evaluation uses this property to improve the evaluation speed in multi-question settings. **Q1: The reviewer asks about the intuition that our method makes the model more generalizable to longer chains of reasoning** This is a good question. Our hypothesis is that RECKONING is more generalizable to longer reasoning chains because it encodes the multiple pieces of knowledge as separate sequences in a batch, which may lead to less of a length distribution shift, since the maximum sequence length is roughly equivalent for most facts. For in-context reasoning, reasoning is performed through the forward pass using the attention mechanism, which may be more likely to overfit to training length as the facts are concatenated into a single sequence. **Q2: The reviewer wonders if there are SOTA results on the baseline variations** We also included an advanced large language model, GPT-3.5 (text-davinci-003), as one of the baselines. We show that RECKONING outperforms GPT-3.5 on both ProofWriter and CLUTRR:

| Method | ProofWriter 2-h | 3-h | 5-h | ProofWriter (distractor) 2-h | 3-h | 5-h | CLUTRR 2-h | 4-h | 6-h |
|---|---|---|---|---|---|---|---|---|---|
| GPT-3.5 (0-shot) | 58.4 | 56.4 | 53.7 | 49.1 | 47.1 | 45.3 | 35.6 | 16.0 | 18.5 |
| GPT-3.5 (8-shot) | 78.0 | 82.4 | 80.1 | 58.7 | 57.2 | 54.5 | 39.0 | 18.5 | 20.8 |
| RECKONING | **99.5** | **99.7** | **99.8** | **79.8** | **83.7** | **84.0** | **98.3** | **97.6** | **94.8** |

We highlight the significant performance gain when there are distractors in the context of a question. While performance drops are observed for both approaches, the drop for GPT-3.5 is greater than for RECKONING. **Q3: Should the performance be lower when multiple questions share the same context and some parts of the knowledge are relevant for one question while irrelevant for others** Compared to the setting where each question is associated with a unique context with no irrelevant information, yes, the performance is lower when the setting changes to multiple questions sharing the same context. However, we show in our experiment (Figure 5, paper) that models trained with our algorithm are much more robust when there are distractors (irrelevant information) in the context, compared to the in-context reasoning baseline.
--- Rebuttal Comment 1.1: Comment: I have read the authors' response and the other reviews and discussions. I thank the authors for the further clarifications on my questions. I was already positive about this paper, and my score remains unchanged. --- Reply to Comment 1.1.1: Title: Thanks for your encouraging comments! Comment: We are encouraged to see the reviewer being positive about our paper. We are grateful for the reviewer's constructive feedback!
Summary: The paper introduces RECKONING, a bi-level learning algorithm designed to improve reasoning in transformer-based language models. RECKONING encodes contextual knowledge into the model's parameters using gradient updates, allowing the model to answer questions based on its updated parameters. The authors demonstrate, through experiments on two multi-hop reasoning datasets, that RECKONING outperforms an in-context reasoning baseline and is more robust to distractors, generalizes better to longer reasoning chains, and is more computationally efficient under certain conditions. Contributions: 1. Propose a bi-level learning algorithm, RECKONING, to teach language models to reason by updating their parametric knowledge through back-propagation. 2. Conduct experiments on two multi-hop reasoning datasets, ProofWriter and CLUTRR-SG, showing that RECKONING outperforms the in-context reasoning baseline and provides several other benefits. 3. Provide analyses of RECKONING's ability to memorize knowledge, measure its performance under distractor conditions, analyze its run-time efficiency, etc. Strengths: 1. The idea of encoding knowledge into the model's parameters through gradient updates is an interesting and novel idea in the field of natural language reasoning. 2. RECKONING demonstrates better performance than the baseline model in several aspects, including better generalization to longer reasoning chains and robustness to distractors. 3. The paper is well-presented. Weaknesses: The main weakness I see for this paper is the scope of its conducted experiments: 1. The experiments are conducted on two synthetic multi-hop reasoning datasets, ProofWriter and CLUTRR-SG. While these analyses provide valuable insights, further evaluation on a broader range of real-world datasets would strengthen the generalizability of RECKONING's performance. 2. The experiments are conducted only using the base GPT-2 model, which is far behind the state of the art. It is hard to tell whether we would still see this improvement on the best-performing models. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How would RECKONING perform with more complex and diverse real-world reasoning tasks beyond the synthetic multi-hop reasoning datasets used in the experiments (ProofWriter and CLUTRR-SG)? E.g., HotpotQA? 2. Is this improvement agnostic to model architectures? How does RECKONING do on seq2seq models? 3. In the multi-task training setting, does the choice of weighting of the different terms affect the performance of RECKONING? (From the paper it appears to be 1:1?) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Responses to Reviewer UhBM (R2) We thank the reviewer for their helpful comments and are encouraged that the reviewer agrees that our algorithm is "an interesting and novel idea in the field of natural language reasoning" and recognizes its benefits. We also thank the reviewer for complimenting our paper as "well-presented." We address the reviewer's concerns and questions below: **W1/Q1: The reviewer asks how RECKONING would perform with more complex and diverse real-world reasoning tasks outside of the scope of synthetic data** Taking the reviewer's suggestion, we evaluated RECKONING on FOLIO, a reasoning benchmark with first-order logical reasoning problems written by expert annotators based on real-world knowledge. We use the validation set as an in-house test set since the true test set has not been publicly released. In the training process, we randomly split the training data into train and validation sets. We show the evaluation results below:

| Model | Acc |
|-----------------------------------|------|
| FT-ICR | 53.3 |
| GPT-3.5 (text-davinci-003) 0-shot | 45.1 |
| GPT-3.5 (text-davinci-003) 8-shot | 52.9 |
| ChatGPT (gpt-3.5-turbo) 0-shot | 40.0 |
| ChatGPT (gpt-3.5-turbo) 8-shot | 42.6 |
| RECKONING | **54.9** |

Our experiments show that RECKONING still outperforms the FT-ICR baseline and improves on the performance of best-performing large language models like GPT-3.5 and ChatGPT, showing that even in less synthetic settings, RECKONING can still outperform other approaches. **Q2: The reviewer asks if the improvements are agnostic to model architectures and wonders how RECKONING would perform with seq2seq models** This is a great suggestion! We extended RECKONING to a T5-small model and evaluated it on ProofWriter-5-hop with all the distractors. The FT-ICR baseline with T5-small achieves 69.2, and RECKONING achieves 70.8. We show that RECKONING still outperforms the baseline. We will conduct more extensive experiments on seq2seq models in our future revision. **Q3: The reviewer wonders if the choice of weighting of the two terms of the multi-task objective affects the performance of RECKONING** This is an interesting point! In principle, the weighting would have some impact on the performance. However, we find that a 1:1 ratio works well in our study, so we stick to this setting given our limited computation budget. --- Rebuttal Comment 1.1: Title: Author response addressed my questions Comment: Thanks for taking the time to run additional experiments, and the experiment on FOLIO looks great. --- Reply to Comment 1.1.1: Title: Thanks for your encouraging comments! Comment: We thank the reviewer for their encouraging response and constructive suggestions! We are grateful to the reviewer for raising their score from 5 to 7.
Rebuttal 1: Rebuttal: ## General responses to all reviewers We would like to thank the reviewers for providing us with thoughtful comments and constructive feedback. We appreciate that the reviewers recognize our proposed **bi-level learning algorithm** for language reasoning as novel/interesting (R2, R3, R4, R5), useful/important (R1, R5), and sound (R3), and that our work conducts sufficient **experiments** (R4). We are encouraged that the reviewers see our **paper** as well-written (R2, R3) and well-explained (R4). We address the general concerns below: **(1) A few reviewers asked us to clarify the inference process of RECKONING and how the knowledge-encoding and question-answering are conducted.** Our proposed bi-level learning algorithm follows the idea of Model-Agnostic Meta-Learning (MAML), where the objective is learning to do few-shot learning for downstream classification tasks. In the case of RECKONING, we are learning to do fast language modeling (e.g., knowledge encoding) for question answering. During inference time, we start from the trained meta-parameters and train the model to do language modeling on the given facts using a few steps of gradient descent. Then we use the updated parameters to answer a question. After this, we discard the updated parameters and recover the trained meta-parameters for the next question. This inference step is done for each new question; i.e., for each new question, we start from the same trained meta-parameters. However, this ability to do inference-time gradient-based learning requires us to use a bi-level optimization algorithm during training to obtain the trained meta-parameters. **(2) Some of the reviewers ask how the performance of RECKONING would generalize to larger language models and real-world, non-synthetic datasets.** In response to this, we applied RECKONING to GPT2-XL (1.5B parameters, 15 times bigger than the GPT2-small model used in our paper) with LoRA. We evaluated on ProofWriter 5-hop with all distractors. Compared to FT-ICR's performance (**65.0**), RECKONING's performance (**70.2**) is **5.2** percentage points higher. We demonstrate that RECKONING still shows significant benefits when scaling up to larger models. To validate RECKONING's performance on real-world reasoning tasks, we also evaluated on FOLIO, a complex logical reasoning benchmark involving real-world examples. We report the performances below:

| Model | Acc |
|-----------------------------------|------|
| FT-ICR | 53.3 |
| GPT-3.5 (text-davinci-003) 0-shot | 45.1 |
| GPT-3.5 (text-davinci-003) 8-shot | 52.9 |
| ChatGPT (gpt-3.5-turbo) 0-shot | 40.0 |
| ChatGPT (gpt-3.5-turbo) 8-shot | 42.6 |
| RECKONING | **54.9** |

We can see that RECKONING still performs better than the in-context reasoning baseline, and it even surpasses best-performing large language models (LLMs) like GPT-3.5 and ChatGPT.
In addition, we show that although the two datasets we used are synthetic, even best-performing models like GPT-3.5 still fail under the systematic generalization tests (Table 9 in the supplementary material):

|                  |      | ProofWriter |      |      | ProofWriter (distractor) |      |      | CLUTRR |      |
|------------------|------|:-----------:|------|------|:------------------------:|------|------|:------:|------|
| Method           | 2-h  | 3-h         | 5-h  | 2-h  | 3-h                      | 5-h  | 2-h  | 4-h    | 6-h  |
| GPT-3.5 (0-shot) | 58.4 | 56.4        | 53.7 | 49.1 | 47.1                     | 45.3 | 35.6 | 16.0   | 18.5 |
| GPT-3.5 (8-shot) | 78.0 | 82.4        | 80.1 | 58.7 | 57.2                     | 54.5 | 39.0 | 18.5   | 20.8 |
| RECKONING        | **99.5** | **99.7** | **99.8** | **79.8** | **83.7**        | **84.0** | **98.3** | **97.6** | **94.8** |

We can see that GPT-3.5 especially struggles when there are distractors in the context of the questions. In general, we use synthetic datasets to allow us to control changes to the fact base (e.g., distractors and longer reasoning chains) such that we can systematically evaluate how sensitive in-context reasoning and RECKONING are to these factors. Our proposed learning algorithm improves model performance in these more complex and challenging settings. Pdf: /pdf/9ca6507d01a346b63a6c399228c5662cfbe221a1.pdf
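For readers who want the inference procedure described in point (1) of the general response in concrete form, the following is a minimal, hypothetical PyTorch-style sketch of the bi-level loop. It assumes a HuggingFace-style causal LM whose forward pass returns a `.loss` when given labels, and it uses a first-order approximation of the meta-gradient for brevity (gradients w.r.t. the adapted weights are copied back onto the meta-parameters). All names and hyperparameters are illustrative, not the authors' released implementation.

```python
# Hypothetical sketch of a RECKONING-style bi-level loop (first-order
# meta-gradient approximation; HuggingFace-style causal LM assumed).
import copy
import torch

def inner_encode(meta_model, fact_batch, n_steps=4, inner_lr=3e-5):
    """Inner loop: clone the meta-parameters and take a few causal-LM
    gradient steps on the facts, encoding them into the weights."""
    adapted = copy.deepcopy(meta_model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(n_steps):
        opt.zero_grad()
        adapted(**fact_batch).loss.backward()   # knowledge-encoding objective
        opt.step()
    return adapted

def outer_step(meta_model, meta_opt, fact_batch, qa_batch):
    """Outer loop: answer the question with the fact-adapted weights; the
    QA and LM terms are weighted 1:1, matching the ratio the authors report
    in their response to R2's Q3."""
    adapted = inner_encode(meta_model, fact_batch)
    adapted.zero_grad()                         # drop stale inner-loop grads
    loss = adapted(**qa_batch).loss + adapted(**fact_batch).loss
    loss.backward()
    with torch.no_grad():                       # first-order meta-update
        for p_meta, p_adap in zip(meta_model.parameters(), adapted.parameters()):
            p_meta.grad = p_adap.grad.clone()
    meta_opt.step()
    meta_opt.zero_grad()

def answer(meta_model, fact_batch, questions):
    """Inference: adapt on the facts once, answer any number of questions,
    then discard the adapted weights (the meta-parameters are untouched)."""
    adapted = inner_encode(meta_model, fact_batch)
    return [adapted.generate(**q) for q in questions]
```

This also makes the efficiency argument in the W1 response visible: `inner_encode` is paid once per shared fact set, after which each additional question costs only a forward pass, while the per-question reset in `answer` is what keeps contradictory knowledge from leaking across examples (cf. the response to Q7 above).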
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper presents a two-phase learning algorithm that 1) encodes background knowledge in the parameters of an LM through fine-tuning (instead of providing it in context), and 2) learns how to use and reason over the encoded knowledge for a given question (?). The paper shows better performance than in-context learning, and better generalization to longer reasoning chains. Strengths: - The paper is easy to read for the most part. - The generalization experiment results support RECKONING as a useful approach. Weaknesses: - Most of my concerns / questions have to do with Phase 2 of the model updates. More details given below in questions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Does the order in which the questions are seen affect the results? - Which task from the ProofWriter dataset was actually used in this paper? - Is the outer loop of updates actually needed? Finetuning on knowledge and then trying to answer the question directly (without any more parameter updates) is a natural baseline to compare against. I believe this is slightly different from the FT-ICR setup. - In any case, the performance difference between FT-ICR and RECKONING is only somewhat apparent on the CLUTRR dataset. Can you conjecture why that might be the case? - I think the bigger performance gaps are noticeable in the OOD generalization setup. Can you provide an intuition / analysis of why generalization gets better? - At multiple places, the paper says phase 1 is to memorize the knowledge and phase 2 is to quickly memorize the given knowledge and perform reasoning, e.g. lines 108 to 111. Can you clarify why one needs two phases to memorize the said knowledge? Is the knowledge being memorized different in these two phases? Moreover, it seems "memorize" and "learn" are being used interchangeably, but I believe they are different. Overall, the application of phase 2 and its impact are confusing. - If the knowledge from a question is being encoded in the model, how does the model handle contradictory information? For example: it is possible that "A is B's son" in instance 1, but "B is A's son" in another instance. Is there an assumption that this is not possible? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Responses to Reviewer ymTg (R1) We thank the reviewer for viewing our proposed bi-level learning algorithm as interesting and novel and for recognizing its benefits and contribution. Below, we address the questions: **Q1: The reviewer asks if the order of the questions affects the results** No, the order of the questions in the dataset should not affect the results. We randomly shuffle the train and test data for each run. For each experiment, we conduct three runs using three different random seeds and report the average. **Q2: The reviewer wonders which task from ProofWriter was used in our paper** We follow the deductive reasoning task originally proposed in RuleTaker, a work that ProofWriter is built on. The task is most similar to Task 1, proof generation, but we omit the proof part and only generate the answer. **Q3: The reviewer asks if the outer loop of updates is needed** We evaluated four additional baselines related to the outer-loop updates to confirm this: - **FT-KG**: Here we check whether, without outer-loop optimization, finetuning on all facts allows the model to memorize them and perform well on the questions when no additional information is given. - **FT-KG-ICR**: Here we check whether, without outer-loop optimization, finetuning on all facts allows the model to memorize them and perform well on the questions when the background knowledge is given at test time too. - **RECKONING-no-outer**: We check if RECKONING with the inner loop only (i.e., a single-level optimization) can perform well when no facts are provided with the question. Note that for each question, we start from the trained model that is not yet finetuned on that question's facts. - **RECKONING-no-outer (zero-shot)**: We do not train the model but directly evaluate it on the questions by dynamically doing a few gradient steps to encode the facts. We check if the model can already do inference-time knowledge encoding through gradient descent dynamically for each question, without bi-level optimization. Note that for each question, we start from the initial model that is not yet finetuned on that question's facts. We report their performances below (also see Table 1, rebuttal pdf):

| | CLUTRR-2-hop | CLUTRR-4-hop | CLUTRR-6-hop |
|--------------------------------|--------------|--------------|--------------|
| FT-KG | 5.1 | 4.6 | 5.8 |
| FT-KG-ICR | 5.1 | 4.6 | 5.8 |
| RECKONING-no-outer (zero-shot) | 7.9 | 8.1 | 9.9 |
| RECKONING-no-outer | 20.7 | 12.9 | 10.3 |
| RECKONING | **98.3** | **97.6** | **94.8** |

| | Proof-2-hop | Proof-3-hop | Proof-5-hop |
|--------------------------------|-------------|-------------|-------------|
| FT-KG | 31.2 | 33.8 | 33.3 |
| FT-KG-ICR | 32.5 | 32.4 | 33.6 |
| RECKONING-no-outer (zero-shot) | 33.3 | 29.7 | 33.0 |
| RECKONING-no-outer | 17.6 | 14.2 | 6.8 |
| RECKONING | **99.5** | **99.7** | **99.8** |

As our evaluation results show, the baselines FT-KG and FT-KG-ICR perform close to or below random (33.3% for ProofWriter and 5% for CLUTRR). The baselines that remove the outer loop also perform poorly, far below RECKONING's performance. These results highlight the importance of outer-loop optimization, indicating that it is necessary for the model to learn to dynamically do few-step knowledge encoding that supports the reasoning performance.
**Q4/Q5: The reviewer asks why the performance difference between FT-ICR and RECKONING is only apparent on CLUTRR, and why bigger performance gaps are noticeable in the OOD generalization setup** Our results in Table 1 show a small improvement for RECKONING, as the baseline (FT-ICR) is also quite strong in this idealized setting. Here, our results are partly a sanity check that RECKONING works just as well under ideal conditions (in fact, even slightly better!). However, RECKONING exhibits stronger improvements when the idealized conditions are removed and the model has to generalize out of distribution and handle noisy inputs. **Q6: The reviewer asks us to clarify knowledge memorization and learning and motivate the importance of outer-loop optimization** In RECKONING, we do not first fine-tune the model on all facts in the dataset and then learn to "quickly memorize the given knowledge and perform reasoning," as the reviewer suggested. Instead, RECKONING does inference-time training on the fly. We define knowledge memorization as doing a few gradient updates on the facts. In the inner loop, the model encodes the knowledge through these gradient updates. In the outer loop, the model uses the encoded knowledge to perform reasoning. Please see more context on the inference process in our general response (point 1). To teach models the ability to do this kind of inference-time learning, i.e., gradient-based knowledge encoding that supports reasoning, bi-level optimization is important. As we have shown in our response to Q3, without the outer loop, models perform poorly. **Q7: The reviewer asks how the model handles contradictory facts across examples** RECKONING would be able to handle this case. As mentioned in the general response, at inference time, RECKONING falls back to the learned "meta-parameters" after every processed example, wiping the slate clean for the next example. As a result, contradictory knowledge across examples does not contaminate other examples. RECKONING would likely not work if there were contradictory facts within the **same** example, but to the best of our knowledge, even the best-performing models cannot reliably handle contradictory information in the context. --- Rebuttal Comment 1.1: Title: Rebuttal Response Comment: Thank you for the response and for clarifying my questions. The new results are also interesting to learn about. I have also read the other reviews and responses. I was already positive about the paper and I am happy to raise my score a bit in light of the strong new results that have been reported during the rebuttal phase. Thanks! --- Reply to Comment 1.1.1: Title: Thanks for your encouraging comments! Comment: We appreciate the reviewer's helpful suggestions and encouraging response to our rebuttal! We are grateful to the reviewer for raising their score from 6 to 7.
null
null
null
null
null
null
Does a sparse ReLU network training problem always admit an optimum ?
Accept (poster)
Summary: The authors are interested in the following question: « Given a deep learning architecture possibly with sparsity constraints, does its corresponding optimisation problem actually admit an optimal $\theta^*$? » While it is mostly taken for granted that the answer is « yes », the authors prove it to be true in some cases, and false in others. They derive an algorithm to verify a sufficient condition for such an optimum to exist. More than one way of ensuring the sparsity of the considered network is considered in the article. Strengths: 1. The relevance of the work is well defended in the introduction. 2. The results concern a large pool of predictors, considering a regression task. 3. The results concern more than one way of ensuring networks to be sparse. 4. The results seem to be a significant add-on to the literature concerning closedness and non-closedness of problems involving the training of (sometimes sparse) neural networks. 5. Example 3.1, though having some flaws (see Weaknesses - Major - 1.2.), is really important for illustrating what might otherwise be a bit hard to grasp for the reader, and for demonstrating the issues that might be encountered when not considering the initial question: « Given a deep learning architecture possibly with sparsity constraints, does its corresponding optimisation problem actually admit an optimal $\theta^*$? ». Weaknesses: _Major_ 1.1. Regularisation is indeed (lines 41-42) a common way to bypass this question, but the notion of regularisation in itself is, especially when it comes to neural networks, mostly important for limiting over-fitting. I didn't find the arguments regarding the discarding of regularisation for neural network training really compelling (lines 44-47), especially since they weren't supported by any work from the literature. 1.2. Concerning Example 3.1 and the discussion that follows (lines 216-221), the discussion is based on what's displayed in Figure 1 only; saying that $L^2$ regularisation « might be detrimental » thus forgets about the validation / test loss. And if the goal is simply to over-fit a problem, then even the smallest regularisation would make the parameters converge to certain values, practically not affecting the obtained training error; this probably wasn't the case here, since a single (and arbitrary) regularisation parameter was applied for each training run. 2. The assumptions required for Proposition 3.1 to hold (continuity, coercivity) are limiting, making the use of the 0-1 loss impossible (and cross-entropy as well, if I'm correct). So is the way the networks are defined, since no soft-max activation function can be used on the output layer. Those loss functions (and the soft-max) being crucial when it comes to classification, the results are mostly relevant only for regression, which is really not the most popular type of task when it comes to neural networks. Though it has been briefly mentioned on lines 176-177, I feel like the authors were not upfront enough regarding how limiting the assumptions and the way the networks are defined are. For example, a sentence such as « Though our study theoretically applies to classification schemes, it more naturally suits regression schemes, considering the assumptions made in [...] » in the abstract, or at least in the Introduction section, would be sufficient. 3.
The motivations of the work are partially theoretical and mostly practical, but the results, though interesting (they prove that not every network has an optimal set of parameters, in different settings), are mostly inapplicable in practice, limiting the impact of the work. 4. The terminology in Table 1 might be misleading, since the term « shallow » is vague, and when it comes to Theorem 3.4 and Corollaries 3.1, 4.1, 4.2 and 4.3, « shallow » stands for « 1-hidden-layer ». What are the limitations of the works from the literature with regard to the depth of the networks? If it is 1-hidden-layer, then the whole table should refer to « 1-hidden-layer architectures », and not « shallow architectures »; if it is deeper than 1-hidden-layer in at least one case, then the underlying maximum depth should be displayed in the table. _Typos / Minor_ 1. Line 126: « agrees with $\mathbf{A}$ on rows in $S_{\underline{r}}$ (resp. columns in in $S_{\underline{c}}$) » 2. I expect Lemma 3.2. of the main paper to be coherent with what is stated on lines 505-506 for a camera-ready version of the article. 3. Lines 259 and 260: The statement « This constraint on the sparsity level of each layer is widely used in many works on sparse neural networks » is not supported by any citation. 4. Line 261: « Consider scalar-val\underline{u}ed ». 5. Line 359: « cf\underline{.} » 6. Using $\Omega$ for both the finite and the non-closed cases is a bit confusing; it might be more convenient to differentiate these two cases with separate notations. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. While the practical interest in studying problems with finite $\Omega$ is clear, I have a hard time understanding the interest of studying non-closed $\Omega$ and how it applies to real-world situations. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: I feel like the authors downplayed the importance of some limitations of their work (see Weaknesses – Major). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
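As background for point 2 of the weaknesses above: the textbook existence fact that continuity and coercivity feed into can be stated in one line (this is standard direct-method background, not the paper's Proposition 3.1), and it makes clear why closedness becomes the crux once those two assumptions are granted.

```latex
% Standard background fact (direct method); not the paper's Proposition 3.1.
If $f:\mathbb{R}^n \to \mathbb{R}$ is continuous and coercive, i.e.\
$f(\theta) \to +\infty$ as $\|\theta\| \to \infty$, and $C \subseteq \mathbb{R}^n$
is nonempty and \emph{closed}, then $f$ attains its infimum on $C$:
there exists $\theta^\ast \in C$ such that
$f(\theta^\ast) = \inf_{\theta \in C} f(\theta)$.
```

In the paper's setting the loss is minimized over the image of the (sparsity-constrained) parameter set in function space, so with continuity and coercivity in hand, existence of an optimum hinges precisely on whether that image is closed; this is the property the reviews and rebuttals below keep returning to.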
Rebuttal 1: Rebuttal: We thank the referee for his/her comments and questions. **Comments on the weakness** 1.1. While regularization with a coercive function indeed ensures the existence of a minimizer, this however implies tradeoffs between the data fitting term and the penalty, usually tuned via a parameter $\lambda$. Unfortunately, there may not be any satisfying tradeoff in non-closed cases, as is illustrated in Figure 1 (in the pdf of the global response) under the setting of Example 3.1 (in our submitted paper). Four curves display the concrete tradeoffs obtained along the training trajectory, when optimizing a regularized loss with four values of $\lambda$. In addition, we display random (yellow) and optimum (black) "oracle" tradeoffs obtained using approximate LU factorizations of the matrix $A$ of Example 3.1. These oracles are built as (exact) LU factorizations of perturbed matrices $A+N$, where $N$ is a noise matrix scaled to achieve prescribed approximation levels (and therefore specified empirical losses). Even though arbitrarily small empirical loss is theoretically possible, reaching an empirical loss below $0.1$ with an oracle approximate LU factorization requires LU factors with a norm of the order of $10^3$. The required norm quickly increases when improving the precision of the oracle LU approximation. 1.2. We thank the referee for this remark. The validation error is indeed important. Choosing to display the Jacobian loss was motivated by the fact that it is indeed essentially proportional to the validation loss, as illustrated in Figure 2 in the pdf of the global response (with more values of $\lambda$ than in Figure 1 in the submitted paper, as a sanity check). This can also be proved theoretically under some assumptions. This was however too implicit and we will replace the Jacobian loss in Figure 1 (of the submitted paper) by the validation loss. 2. The assumptions made in Proposition 3.1 are indeed natural for the regression case but not for the classification case, and we agree with the referee that it is worth making the remark more visible since the beginning of the paper, this will be fixed in the final version. In the classification case, using the soft-max after the last layer together with the cross-entropy loss function indeed leads to an optimization problem with no optimum (regardless of the architecture) when given a *single* training pair $(x,y)$. This is due to the fact that changing either the bias or the scales of the last layer can lead the output of the soft-max arbitrarily close to an ideal Dirac mass. It is an interesting challenge to identify whether sufficiently many and diverse training samples (as in concrete learning scenarios) make the problem better posed, and amenable to a relevant closedness analysis. This point will be highlighted after Proposition 3.1. 3. The main contribution of the paper is to shed light on a phenomenon (absence of an optimum) that appears even in the simplest MLP architecture, which was mostly overlooked before (with the exception of few specialized papers, presenting results based on stronger assumptions). The first concrete consequence of our work is the possibility of detecting if a pruned MLP support poses a potential problem. A more general study of closedness for other architectures is left for future work, as this would require different tools. For MLPs, indeed the analysis is based on known results of matrix factorization. 
It seems possible, for instance, to generalize this analysis to the case of skip connections, because networks with skip connections can be modeled as larger ReLU networks with blocks of identity matrices as weights, so that their presence would add a further linear term in Equation (27) (Lemma C.4, Appendix), which could be treated in the same way as the one that is already there. Convolutional layers are also likely to be amenable to an adapted analysis. 4. We will consistently use "one-hidden-layer". **Answer to the question** We agree that the case of a finite $\Omega$ is the one encountered in practice when learning from actual training sets. Considering a domain $\Omega$ such as the unit cube, with non-empty interior, is indeed mostly of theoretical interest, but seems, for example, crucial when analyzing generalization properties, as the "optimal target function" is traditionally expressed as the minimizer of an expected risk defined as an integral on such domains or even on the whole space. Understanding whether this optimal target function exists is thus of interest. --- Rebuttal Comment 1.1: Comment: I thank the authors for their insightful remarks. I'm glad to see a few changes will be made in the final version of the manuscript, reflecting the comments of the various reviewers. I stand by my score and recommend the acceptance of the paper.
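To make the non-attainment phenomenon discussed in this rebuttal concrete, here is a minimal NumPy illustration (an assumed setup, not the authors' Example 3.1 or their oracle construction): a 2x2 matrix with a vanishing leading principal minor admits no unpivoted LU factorization, yet it is the limit of factorizable matrices whose LU factors blow up, so the infimum of the approximation error is 0 but is never attained.

```python
# Minimal illustration of non-closedness in LU factorization (assumed
# example). A has no unpivoted LU factorization because its leading
# principal minor A[0, 0] is zero, yet A + eps*I does for every eps > 0.
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def doolittle_2x2(M):
    """Unpivoted LU factors of a 2x2 matrix, valid when M[0, 0] != 0."""
    a, b, c, d = M.ravel()
    L = np.array([[1.0, 0.0], [c / a, 1.0]])
    U = np.array([[a, b], [0.0, d - c * b / a]])
    return L, U

for eps in [1e-1, 1e-3, 1e-6]:
    L, U = doolittle_2x2(A + eps * np.eye(2))
    err = np.linalg.norm(A - L @ U)                # -> 0 as eps -> 0
    size = max(np.abs(L).max(), np.abs(U).max())   # -> infinity as eps -> 0
    print(f"eps={eps:.0e}  ||A - LU|| = {err:.1e}  max factor entry = {size:.1e}")
```

The exploding factor entries are exactly why, as argued in the tradeoff discussion above, a regularized loss offers no satisfying compromise in non-closed cases: driving the empirical loss toward its infimum forces the parameter norm to grow without bound.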
Summary: This work studies the existence of a global minimum in a sparse deep learning setting. Strengths: I think this work touches on a very important problem that is understudied in the field: the existence of the global minimum for deep nonlinear networks. It is easy to imagine toy examples for which the global minimum does not exist, but until this work, there did not exist a formal study of this problem. The problem is important and novel. Linking this problem to training a sparse network is novel; I thus support its publication, though with some reservation. Weaknesses: To me, the main problem is that the results are quite weak, and that the main example is not quite convincing or relevant. The example of an LU network is quite artificial, and it is hard to imagine that this situation arises in deep learning. I think this work can benefit greatly if it identifies a more relevant and convincing example (and it should not be difficult to achieve). This is the main reason I give this work a weak accept. There is one minor problem that I do not find serious but is worth some attention. While the paper mostly motivates from the viewpoint of the popular pruning literature, this work can benefit from a deeper discussion of regularization-based methods for compressing neural networks. For example, see the recent work in https://arxiv.org/abs/2210.01212, which also discusses the existence of the global minimum. At face value, the results in this work seem to motivate and advocate the use of regularization-based compression methods in deep learning; is this interpretation correct? If not, why? I think discussing this point can better clarify the implication of this work. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See weakness Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: I think the work discussed the limitations well Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the referee for his/her comments. **Comments on the weakness** For the *first* point, our Example 3.1 about the "LU architecture" is meant to be pedagogical, and its ultimate goal is to show that even in pretty simple cases there may be actual issues related to non-closedness. In our response to Reviewer 5AbX (https://openreview.net/forum?id=dTj5tH94xv&noteId=ozVHRnjQBI), we highlight that our results can also be used more concretely to detect supports leading to a non-closedness problem and that this occurs frequently for randomly pruned MLPs which serve as a baseline for sparse DNNs training [1]. For the *second* point, our main message is not to advocate the use of regularization-based compression, but rather to suggest that it is worth detecting supports leading to non-closedness. While regularization with a coercive function indeed ensures the existence of a minimizer, this however implies tradeoffs between the data fitting term and the penalty, usually tuned via a parameter $\lambda$. Unfortunately, there may not be any satisfying tradeoff in non-closed cases, as is illustrated in Figure 1 (in the pdf of the global response) under the setting of Example 3.1 (in our submitted paper). Four curves display the concrete tradeoffs obtained along the training trajectory, when optimizing a regularized loss with four values of $\lambda$. In addition, we display random (yellow) and optimum (black) "oracle" tradeoffs obtained using approximate LU factorizations of the matrix $A$ of Example 3.1. These oracles are built as (exact) LU factorizations of perturbed matrices $A+N$, where $N$ is a noise matrix scaled to achieve prescribed approximation levels (and therefore specified empirical losses). Even though arbitrarily small empirical loss is theoretically possible, reaching an empirical loss below $0.1$ with an oracle approximate LU factorization requires LU factors with a norm of the order of $10^3$. The required norm quickly increases when improving the precision of the oracle LU approximation. **References** [1] S. Liu, T. Chen, X. Chen, L. Shen, D.-C. Mocanu, Z. Wang, M. Pechenizkiy, The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training, International Conference on Learning Representations, ICLR, 2022. --- Rebuttal Comment 1.1: Title: reply Comment: Thanks for the response. As there is no significant concern, I keep my original score of weak acceptance.
Summary: This paper provides necessary and sufficient conditions for whether a neural network training problem admits an optimal solution, that is, whether weights and biases exist such that the infimum of the loss function is actually attained. This is done for the classical case of empirical risk minimization on a finite training set as well as for function approximation on a continuous input domain. Strengths: - I think the investigated question is highly interesting already for purely theoretical reasons: the existence of an optimal solution is such a fundamental property of an optimization problem. We should be able to answer this question for our "favorite" optimization problem, namely training a neural network. This work seems to play a crucial role in (i) posing this question and (ii) providing results for some cases. - Apparently the non-existence of an optimal solution can also explain divergence in practical settings, which I find compelling. Still, I would like to emphasize that the paper is a theory paper and should be judged as such. - The paper involves a substantial amount of non-trivial mathematics. Although the tight reviewing schedule does not allow me to verify all details, the parts I read seem to be mathematically sound. Weaknesses: - Many cases of the posed question remain open. In particular, most of the results are concerned with 2-layer NNs only. - The presentation of the results in the intro could be improved. Some definitions could be made earlier (or at all). See my detailed comments below. Also, I find it quite difficult to read Tables 1 and 2. One basically has to read the rest of the paper in order to truly understand what these tables are about. Comments for the authors to improve the paper (not meant as true weaknesses): - title: remove the space between "optimum" and the question mark - line 54: "the best approximation property (BAP), which guarantees the existence of an optimal solution" -> Is this the definition of BAP or just a consequence? Please make this clear. You might want to define BAP in a proper definition environment. - lines 57-79: for readers unfamiliar with sparsity in neural networks, it is quite hard to understand your contributions without reading the "notations" part later. You should spend some effort explaining terms like "(structured) sparse networks", "fixed sparsity level" vs. "fixed sparsity pattern", etc. already in the introduction. I think "sparsity level" is only implicitly defined very late in the paper. - line 67: Here (and at some other places in the paper) you use the term "learning problem" as a synonym for "training problem". Some people understand "learning problem" as the problem of minimizing the generalization error, as opposed to the "training problem", which only aims to minimize the training error. At least in the finite domain case you are definitely in the latter regime, so I suggest using "training problem" consistently. - Tables 1 and 2: the relation to the sparsity constraints is not really clear in the tables. In particular, what does the "sparse" adjective in brackets in the architecture column mean? Why is it in brackets? - caption of table 2: there is an extra space after the opening bracket - line 124: it is a bit weird that the "notations" section is part of the "related work" section. - line 195: contains -> contain - line 261: valed -> valued - line 293: a bit redundant "many other domains such as [...] and much more".
- line 301: "(in the whole paper we naturally assume B > 0)" -> state this earlier in the paper when you use B for the first time. - line 303: provide a reference to the proof of this theorem in the appendix. - line 312: what you call "plain" here is what I would call "fully connected". You should maybe use this term here and elsewhere in the paper. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Is there any chance to obtain a hardness result for the computational problem considered in Section 3.3? NP-hardness? Hardness for the existential theory of the reals (which is always a good candidate if the best known solution uses quantifier elimination)? See related results for NN training: https://arxiv.org/abs/2102.09798; https://arxiv.org/abs/2204.01368. - Pushing the relation to the existential theory of the reals even further, I find it very interesting that you obtain such a sharp difference between the scalar-valued output case and the multi-dimensional output case (Section 3.4). The same seems to be true for the training complexity of such shallow networks: while scalar-output networks can be trained with a combinatorial search algorithm, certifying that the problem is in NP, it turns out that training a shallow network with multi-dimensional output is ER-complete and therefore much harder (https://arxiv.org/abs/2204.01368). Again I wonder: is there any relation to your results? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The paper properly states under which circumstances (mathematical assumptions) the results are valid. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the referee for his/her comments and questions. **Comments on weakness** We really appreciate the comments on the overall organization and presentation of the paper and will implement them correspondingly in the final version. Regarding the *limitation to 2 layers only*, all our *negative results* (sufficient conditions for non-closedness, based on Theorem 3.1) are valid for any depth. Several technical lemmas for our *positive results* (e.g. Lemma C1 and Lemma C2 in the Appendix) also hold for MLPs with arbitrary depths. These results could be re-used to analyze the deeper case more completely, which is however left for future work. Moreover, our positive results apply to non-scalar outputs and take into account the effect of sparsity constraints, while (to the best of our knowledge) existing positive closedness results were limited to one-hidden-layer, scalar-valued outputs, with no support constraint. **Answers to questions** 1. The question is indeed interesting. We found several links between the papers mentioned by the referee and our setting in Section 3.3. Indeed, in the proof of the main result of the first paper, the constructed "hard" neural network does not have all the connections (i.e., the support is not full), it is assumed to have the identity activation function (Theorem 2), and all biases can be assumed to be zero (Observation 5). This is very similar to the setting of sparse matrix factorization. Nevertheless, it remains non-trivial to adapt the techniques used in the articles mentioned by the referee to the problem of deciding on the closedness of the semi-algebraic set $\mathcal{L}_\mathbf{I}$. Still, we believe this interesting question is worth investigating further. 2. We think the two results are somewhat related. In fact, our proof of Theorem 3.4 uses the normalization technique from Algorithm 1 [1]. The exact same algorithm is used in the paper mentioned by the referee to argue that the training of scalar-valued-output, one-hidden-layer NNs is in NP (if the input and hidden-layer dimensions are constant). We believe it is interesting to exploit this observation and further study the separation between the cases of scalar-valued and vector-valued outputs. **References** [1] R. Arora, A. Basu, P. Mianjy, A. Mukherjee, Understanding Deep Neural Networks with Rectified Linear Units, International Conference on Learning Representations, ICLR, 2018. --- Rebuttal Comment 1.1: Comment: I thank the authors for sharing their thoughts in the rebuttal and answering my questions. I remain curious about the connections to results in training complexity and hope future work will generate more insights on this. I continue to vote for acceptance of this paper.
Summary: The paper studies the existence of an optimal solution to the objective of training a ReLU neural network under certain sparsity patterns. The authors consider two topological properties of the input function space (the best approximation property and closedness). They provide a series of results on scalar-valued neural networks, shallow neural networks, and neural networks with one hidden layer. Specifically, they provide necessary and sufficient conditions on the sparsity (e.g., fixed sparsity level or pattern) of the neural network to guarantee the existence of an optimal solution to the training loss function. Strengths: The paper studies an interesting problem which relates to the stability of neural network training due to the non-existence of an optimal solution for a given sparsity pattern. The paper covers the literature well and positions itself clearly with respect to prior work. Their contribution is clear. The title of the paper reflects its goal well. The paper is coherent in studying the problem of interest. The paper is written clearly. Each theorem/proposition is followed by a lemma/corollary and an informal proof sketch, which makes it very easy to follow. For example, the paper contains a numerical example very early on to further give intuition on the importance of the problem and a plausible scenario where an optimal solution does not exist. The results provide new insights on whether a neural network will have an optimal solution. Tables 1 and 2 clearly state how their work differs from prior work. Weaknesses: It would be nice for the main paper to provide more tangible examples of which sparsity patterns on which architectures may or may not result in the existence of an optimal solution. NeurIPS has a broad range of audiences from theory and application (adding this could attract more application-oriented deep learning researchers). - minor: corollary 3.1, valed --> valued Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - For corollary 3.1, what is the dimension of the hidden layer? - Could you clarify in simple words term number 2 on line 305? (Adding this would improve clarity.) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Addressed properly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for his/her comments on our work. **Comment on the weakness** Our work allows us to answer the following question: if the supports of the weight matrices are randomly sampled from a distribution, what is the probability that the corresponding training problem potentially admits no optimal solution? While simple, this setting does happen in practice, since random supports/binary masks are considered a strong and common baseline for sparse DNN training [1]. Indeed, thanks to Theorem 3.1, if $\mathcal{L}\_\mathbf{I}$ is not closed then the support is *bad*. Although the algorithm of Lemma 3.3 to *decide* if $\mathcal{L}\_\mathbf{I}$ is closed is not polynomial, for one-hidden-layer NNs there is a polynomial algorithm to *detect* non-closedness: if the support constraint is *locally similar* to the LU structure (precisely, if it satisfies the condition of Theorem 4.20 of [2]), then $\mathcal{L}\_{\mathbf{I}}$ is not closed. The resulting detection algorithm can have false negatives (i.e., it can fail to detect more complex configurations where $\mathcal{L}\_\mathbf{I}$ is not closed) but no false positives. When testing it on a one-hidden-layer ReLU network with two $100 \times 100$ weight matrices, drawing uniformly at random two supports of respective cardinality $|I_1| = 3000$ and $|I_2|=2000$ (i.e. 30\% of nonzero coefficients on the first layer, and 20\% on the second one) and averaging over 100 draws, the algorithm estimates the probability of "bad" supports at 85\%. For sparser random supports (when $|I_1|=|I_2|=2000$), the estimated probability is nearly 100\%. We will add a brief description of these consequences of our work in the final version; thank you for this opportunity. **Answers to questions** 1. The dimension of the hidden layer is arbitrary (there is no assumption on it). This will be clarified in the final version. 2. An equivalent (and probably simpler) way to state the second point is: for each fixed binary diagonal matrix $D$, the set $\\{W_2DW_1 \mid \text{supp}(W_1) \subseteq I_1, \text{supp}(W_2) \subseteq I_2\\}$ is closed. This will be clarified in the final version. **References** [1] S. Liu, T. Chen, X. Chen, L. Shen, D.-C. Mocanu, Z. Wang, M. Pechenizkiy, The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training, International Conference on Learning Representations, ICLR, 2022. [2] Q.-T. Le, E. Riccietti, R. Gribonval. Spurious Valleys, NP-hardness, and Tractability of Sparse Matrix Factorization With Fixed Support. SIAM Journal on Matrix Analysis and Applications. --- Rebuttal Comment 1.1: Title: Reviewer after Rebuttal Comment: I thank the authors for additional clarifications on my questions. I have read their response and recommend acceptance (keep my score).
Rebuttal 1: Rebuttal: Dear reviewers, In this global response, we attach a pdf file containing figures for our rebuttal. Pdf: /pdf/9440f8eade73540011d67f0011e14a52020b0564.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Slow and Weak Attractor Computation Embedded in Fast and Strong E-I Balanced Neural Dynamics
Accept (spotlight)
Summary: The paper combines two established models in theoretical neuroscience, the continuous attractor neural network (CANN) and the excitation-inhibition balanced neural network (E-INN). Though those models have been applied to explain various phenomena in the brain, they possess seemingly contradictory characteristics, raising the question of how they can coexist in the brain, which is what this paper is about. More specifically, the paper first introduces a plausible spiking model with three neuron populations that incorporates both CANN and E-INN. Then, this spiking model is numerically simulated and shown not only to preserve the known properties of both CANN and E-INN, but also to benefit synergistically from one another through sped-up convergence, faster adaptation to change, and smaller tracking lag for varying stimuli. Finally, the paper introduces a firing-rate model to account for the benefits observed in simulations. The main insight is obtained with the derivation of the eigenvalue of the dominant motion mode as a function of the $\beta$ coefficient, which accounts for the presence of E-I balance. Strengths: The strengths of the paper are: - It is well-written and easy to follow despite being theory-heavy. It is also clearly structured. - It tackles the important question of how to reconcile E-I balance with attractor models, and the presented insights might transfer to attractor models other than CANNs. - It is a creative combination of existing ideas. Weaknesses: The current weaknesses of the paper are: - Many equations in the spiking model are not dimensionally homogeneous, so some constant might be missing. E.g., in Eq. 1 a voltage (left) is equal to a current (right). In Eq. 4, $f_{j}^{b}$ should be a voltage but it is equal to the inverse of a time. The same holds for Eq. 7. The paper would gain in clarity if the units were correctly worked out, although this does not change the message of the paper. - The body text does not seem self-contained; I could only find the definitions of some quantities such as $U_E$, $r_E$, $k$ and $[\cdot]_{+}$ (line 202) in the appendix. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Suggestions: - The Figure 1 (F) caption could be more specific about which quantity the eigenvalue is derived from, since the answer comes only in Section 5. - The shades in Figs. 3 (A) Bottom and 5 (A,B,C) are not defined in the captions. The same holds for the error bars in Fig. 4 (D). Questions: - For the spiking model, how is the equilibrium of the CANN part defined theoretically? Is there a way to define an energy function? - The CANN requires symmetric connectivity to function as an attractor network, which is often not considered a plausible connectivity motif. How can this be accounted for in the model? - I don't see from Fig. 4 (D) bottom how one can conclude "Shunting inhibition strength is proportional to the total EPSCs." It feels like another type of plot would better demonstrate proportionality. Could you please explain in more detail? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors do not mention any limitations of their model.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
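For readers less familiar with the attractor side discussed in this review, the following is a minimal sketch of a standard 1D rate-based CANN with divisive normalization used as a stand-in for shunting inhibition. All parameters are illustrative and are not taken from the paper or its supplementary material; the sketch only shows the qualitative bump mechanism.

```python
# Minimal 1D rate-based CANN with divisive normalization (illustrative
# parameters; a rate-model stand-in for the paper's spiking model).
import numpy as np

N, tau, dt = 128, 10.0, 0.1        # neurons, time constant (ms), step (ms)
k, g = 5e-4, 10.0                  # inhibition strength, recurrent gain
a = 0.5                            # tuning width (rad)
x = np.linspace(-np.pi, np.pi, N, endpoint=False)

diff = np.abs(x[:, None] - x[None, :])
dist = np.minimum(diff, 2 * np.pi - diff)             # ring distance
J = np.exp(-dist**2 / (2 * a**2)) * (2 * np.pi / N)   # translation-invariant

def simulate(steps=5000, stim_pos=0.0, stim_amp=1.0, stim_off=2500):
    u = np.zeros(N)
    d0 = np.minimum(np.abs(x - stim_pos), 2 * np.pi - np.abs(x - stim_pos))
    bump_in = stim_amp * np.exp(-d0**2 / (4 * a**2))
    for t in range(steps):
        r = np.maximum(u, 0.0) ** 2
        r = r / (1.0 + k * r.sum())                   # divisive normalization
        I_ext = bump_in if t < stim_off else 0.0      # stimulus then offset
        u = u + dt / tau * (-u + g * (J @ r) + I_ext)
    return u

u_final = simulate()
print("bump peak after stimulus offset:", u_final.max())
```

Whether the bump persists after stimulus offset depends on the recurrent gain `g`, which echoes the authors' remark in the second rebuttal below that the plateau activity decays unless the recurrent strength $w_{max}^{EE}$ is large enough.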
Rebuttal 1: Rebuttal: We thank the reviewer for the encouraging and valuable comments, which are very helpful for improving the paper. Below, we address the reviewer's concerns and questions in detail. **On weaknesses:** We thank the reviewer for pointing out that there exist inconsistencies of units in some equations in our paper. We will correct all these inconsistencies in the revision. Also, we will add the missing definitions for some quantities in the main text as suggested by the reviewer. **On questions:** 1. The eigenvalues in Fig. 1F refer to the eigenvalues of the CANN dynamics projected onto its dominating motion modes. We will modify the caption of Fig. 1F to describe this clearly in the revision. 2. Shades in Fig. 3A are the instantaneous readout results (computed at each dt=0.01ms), and the solid line is the running average calculated over 150 dts (i.e. 1.5ms). Shades in Fig. 5 indicate std. calculated over 20 trials. Error bars in Fig. 4D are std. calculated over 10 trials. We will add these details to the figure captions in the revised manuscript. 3. In the field, the equilibrium of a spiking CANN has never been theoretically defined, and people normally verify an equilibrium state by simulation. For a rate-based CANN, an energy function can be defined if the input-output function is local. But for the rate-based CANN model we consider in Section 5 and the Supplementary Material, since global divisive normalization (corresponding to shunting inhibition) is used, it does not have an energy function, although its equilibrium state can be analytically solved. 4. Indeed, all CANN models consider symmetric connections, which is not fully biologically plausible. In reality, this assumption can be regarded as a good approximation in many cases, since CANNs have successfully modelled many neural system behaviors. 5. Fig. 4D shows that shunting inhibition (orange bar) is the strongest for the center neuron when the stimulus is positioned at the center, and the weakest for peripheral neurons. Indeed, this plot does not directly show *proportionality*. One has to refer to Equation (6) to see this, where we define shunting inhibition as a product of the total EPSCs and the IPSCs from PV-expressing neurons. We will refine this statement in the revised manuscript. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for addressing my remarks. I have no further comments and still recommend acceptance.
Summary: The authors present a model of a continuous attractor network in a balanced-state spiking network. The model combines fast and slow connections. The fast connections are in charge of the balanced state and are thus strong 1/sqrt(K). The slow connections are in charge of the continuous attractor, and are weak 1/K. The model also has shunting inhibition which helps keep the network in the balance regime. The resulting model has irregular activity, along with bump dynamics. The balanced state helps the network respond faster to changing inputs. The authors also provide an approximate firing rate model that gives intuition on the network dynamics. Strengths: The authors combine two categories of models that are often studied separately, and demonstrate their harmonious coexistence. The analytical approximation of the interaction between the two components is very helpful and could be generalized to other models as well. Weaknesses: The stability of the bump in figure 3A bottom – it seems that the bump decays after stimulus offset. I am not an expert on balanced networks, so I hope I didn’t miss something crucial. The authors claim that localized input disrupts balance in unstructured networks (line 137-138). The work of Hansel and van Vreeswijk 2012 shows that tuned input does not disrupt balance. Another claim (line 40) is that E-I balance requires highly unstructured connectivity. This seems at odds with the work of Darshan et al (PRX 2018). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Figure 2 shows some properties of balanced networks, but not two that are often used in the literature: Fano factor and the log-normal distribution of firing rates. Table S4, S5: Why these numbers? Were they tuned somehow? Is the model robust to this choice? Table S6, is I_d zero? Line 98: cluster / clutter Ref 17: missing bibliographic info Supporting, line 11. Scaled by 1/K (not K) ? Supporting, equation above line 4: sqrt(N) or sqrt(K)? Line 101. Should the fast inhibitory be slow inhibitory? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the encouraging and valuable comments, which are very helpful for improving the paper. Below, we address the reviewer's concerns and questions in detail. 1. Localized inputs disrupt the balance condition if the EI balance dynamics is mediated by global unstructured connections. To mitigate this problem, several modelling studies (e.g., Rosenbaum and Doiron, 2014; Rosenbaum et al., 2017, and also the two studies the reviewer mentioned) use local connectivity to construct balanced networks. These works have not explored the co-existence of EI balance and CANN dynamics. Notably, these modelling works do not refute our model assumption. The assumption in our model **explicitly** requires that the observed local connectivity structure come from the set of small synapses, while the EI balance dynamics is still mediated via global unstructured connections. While an EI balance network with local connectivity could work when faced with localized inputs, as in the previous literature, our hypothesis is more aligned with the recent experimental data in (Scholl et al., 2021, Nature), where Fig. 2a shows a null relationship between large connection weights and the orientation preference difference of connected neurons. 2. Thanks for the suggestion. We plot the Fano factor of neuronal responses in the appended PDF; it is around $1$, indicating that the irregular neuronal responses in our model satisfy Poisson statistics. We will replace Fig. 2C with it. 3. We set $I_d$ to zero because, in the brain, inhibitory neurons usually serve as interneurons and do not receive feedforward inputs from other cortical areas. Nonetheless, $I_d$ can be nonzero, and this would not affect our results. 4. The inhibition in our model is fast for both the EI balance and CANN dynamics. The reasons for using fast inhibition in the CANN are two-fold: 1) slow negative feedback is unstable for a dynamical system and would easily lead to oscillations, which could severely limit our choice of parameters; 2) the rate model requires fast inhibition so that we can absorb the effect of $I_p$ into Equation (11), which permits an analytical solution. Notably, this setting is also biologically plausible, as the GABA dynamics is much faster than the NMDA dynamics involved in the CANN. 5. During our investigation, we first determined the parameters for the EI balance dynamics (i.e., Tables S4 and S6). The classical EI balance constraint $\frac{f_E}{f_I}>\frac{w_{E I}}{w_{I I}}>\frac{w_{E E}}{w_{I E}}$ is only a necessary condition, and we found that some parameter settings satisfying this constraint can still lead to oscillatory activity at the population level. We chose the values in Tables S4 and S6 simply because they give nice irregular activity patterns; there is a fairly large parameter space that satisfies this requirement. We then determined the parameters for the CANN dynamics. The most important parameter is $w_{max}^{EE}$, i.e., the recurrent connection strength of the CANN dynamics. It cannot be too large, as that would make the recurrent dynamics stronger than the EI balance dynamics and thus defeat the purpose of this study. Nor can it be too small, as otherwise the plateau activity would decay very fast (but see also Fig. R2 in the appended PDF, where we show that increasing $w_{max}^{EE}$ by 25% gives persistent activity). The other parameters in Table S5 are basically chosen as we saw fit. We are sure there is a large parameter space that achieves the same performance.
6. We will also correct the typos the reviewer mentioned. --- Rebuttal Comment 1.1: Title: update Comment: Thank you for the answers and clarifications. I have increased my score to Accept.
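To make the Fano-factor check in point 2 of the rebuttal above concrete, here is a minimal NumPy sketch of the usual variance-over-mean computation on trial-wise spike counts. The array shapes and the synthetic Poisson example are illustrative assumptions, not the paper's actual analysis code.

```python
import numpy as np

def fano_factor(spike_counts):
    """Fano factor per neuron: variance / mean of spike counts across trials.
    spike_counts has shape (n_trials, n_neurons); values near 1 are
    consistent with Poisson-like irregular firing."""
    mean = spike_counts.mean(axis=0)
    var = spike_counts.var(axis=0, ddof=1)  # unbiased sample variance
    return var / mean

# Synthetic Poisson counts, so the Fano factor should come out near 1
rng = np.random.default_rng(0)
counts = rng.poisson(lam=5.0, size=(20, 100))
print(fano_factor(counts).mean())
```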
Summary: The paper proposes a new spiking neural network in which structured connectivity, consistent with a CANN, is combined with random connectivity, consistent with an excitation-inhibition network with two inhibitory subpopulations. The network depends on two different sets of weights: weak weights for the CANN dynamics and strong weights for the E-INN dynamics. The network has better performance on a signal tracking task compared to a CANN network because of faster convergence of attractor states. Strengths: The proposed network that the paper discusses is the first to combine CANNs with balanced excitatory-inhibitory spiking neurons. The proposed network has some desirable properties in terms of its ability to quickly track an input. Furthermore, CANN dynamics with E-INN dynamics can tolerate a higher maximal tracking speed compared to CANN dynamics without E-INN dynamics. Weaknesses: It seems that the network does not truly have the same continuum of fixed points for $\beta\neq0$ that the CANN network would have. This can also be seen in the diffusion of the "persistent activity" in Figure 3 after the stimulus is off; the CANN would have actual persistent activity. What is the trade-off by which the proposed network is faster but lacks persistent activity (e.g., in terms of $\beta$)? The CANN should also have a high readout after the stimulus is turned off because of the persistent activity. The experiments seem to be in contrast with the observation that the coupled network dynamics is neutrally stable in the second motion mode of the QHO. As mentioned in line 57, a total synaptic current of $\mathcal{O}(1)$ serves as input to the CANN network. This would actually shift the second motion mode of the QHO, making the "persistent" state not persistent. The theory for this network suggests that an infinite number of neurons is needed to build a CANN network. However, this is physically impossible in the brain. It is possible to build a CANN network with a finite number of neurons [1]. A comparison of signal tracking with such a network would benefit the analysis. Overall, the methods used for the analysis are not novel. [1] Noorman, Marcella, et al. "Accurate angular integration with only a handful of neurons." bioRxiv (2022): 2022-05. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Line 131: How do you justify, other than by simulations, that the final conclusions of the model are qualitatively applicable to general cases where $\beta$ is large? The solution of the stationary state $\bar U_E$ in Equation S11, and through that $G^J(x; x_0 \mid z)$, would be different. Show that this change does not affect the relation $\lambda_0<\lambda_*$. Did you perform simulations over a range of all the parameters? Does the result only hold for the particular set of parameters that is shown there? When $\beta$ is sufficiently small, the stable solution is minimally affected by the E-INN. How small does it need to be? Line 254: Does the CANN have a different equilibrium than the actual perfectly tracked signal? Would the equilibrium that is reached quickly for a sufficiently strong input not be the perfectly tracked signal? Why are two different inhibitory groups necessary for the network? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: They are adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the encouraging and valuable comments, which are very helpful for improving the paper. Below, we address the reviewer's concerns and questions in detail. **On weaknesses:** 1. Whether the model can maintain a persistent bump depends mainly on the recurrent connection strength $w_{max}^{EE}$ of the CANN dynamics. In reality, the brain does not need to hold forever-lasting activity bumps. The computationally meaningful parameter region is the one where the network holds not permanent, but slowly decaying bump states, the so-called slow point dynamics [1]. Therefore, the parameters we chose in the main text lie in this slow-point-dynamics region. Nonetheless, in Fig. R2 of the appended PDF, we demonstrate that by increasing $w_{max}^{EE}$ by 25%, the network can maintain a persistent state. [1] Sussillo, D., & Barak, O. (2013). Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks. Neural Computation, 25(3), 626-649. 2. Indeed, the removal of the stimulus results in a sudden drop in the readout, but the bump after the drop can last for a long or even infinite time, depending on the parameter setting. 3. In Fig. R2, we demonstrate that the neutral stability holds for stronger recurrent connections of the CANN dynamics. For the parameters we used in the main text, we can approximate the slowly decaying states as stable states, and hence the neutral stability in the second motion mode roughly holds. 4. We presume the reviewer means that the total synaptic current from the EI balance dynamics shifts the _first_ motion mode (i.e., the bump height mode) and causes instability of the bump in the height direction. We note, however, that for $\beta< 0$ the network can still hold a static bump if the CANN weights are large enough. In any case, we think the computationally meaningful states of the CANN dynamics are the slowly decaying states, as discussed above. 5. In the theoretical analysis, we assume that there is an infinite number of neurons. To confirm the theoretical results, we carry out simulations with a finite number of neurons; e.g., in our simulation, we chose $N=100$. The reviewer suggested an interesting reference. This paper proposes an interesting method to smoothen the energy barrier. As a side note, we also observed a less discrete attractor space for $\beta<0$ in our model when the CANN connection strength is inhomogeneous (not shown in this paper). We will include this paper in the related work section. **On questions:** 6. Thanks for the insightful comments. It is true that to carry out the theoretical analysis, we need small $\beta$ (we use small/large here to refer to the magnitude of $\beta$). For large $\beta$, we can only validate the results by simulation. Fig. S1 shows that if $\beta$ is not too large, $\lambda_0^\beta < \lambda_0^*$ still holds. However, there is no theoretical guarantee that $\lambda_0^\beta < \lambda_0^*$ always holds for arbitrarily large $\beta$. We will modify our statement in the revised manuscript. 7. We performed simulations over a wide range of parameters. There is a very large parameter space in which our conclusion holds. 8. The model holds for a relatively wide range of parameters; for the theoretical analysis, we require the EI balance weights to scale as $\mathcal{O}(1/\sqrt{K})$ and the CANN weights as $\mathcal{O}(1/K)$.
In practice, however, this scaling relationship can be largely relaxed; as long as the EI balance weights are much stronger than the CANN weights, our main results hold. 9. To keep the bump stable, $\beta$ needs to be smaller than a threshold, whose value depends on other parameters such as the recurrent connection strength $J_0$ and the global inhibition strength $\kappa_c$. However, it is difficult to calculate this threshold analytically. 10. Sorry, we do not understand what the reviewer means by “the perfectly tracked signal”. 11. Two different inhibitory groups are necessary, since they are needed to separate the two dynamics running on different time scales. --- Rebuttal Comment 1.1: Comment: I thank the authors for their answers and clarifications. I have no further comments and still recommend acceptance. 10. A perfectly tracked signal would be one without any lag or mismatch.
Summary: This study explores the compatibility of attractor networks and excitation-inhibition balanced networks (E-INNs) in neural circuits. It proposes that a neural circuit can exhibit traits of both by utilizing two sets of synapses: one set for strong and fast irregular firing and another set for weak and slow attractor dynamics. Simulations and analysis show that this approach enhances network performance, including accelerated convergence of attractor states and preserved E-I balanced conditions. Strengths: * The approach addresses the challenge of reconciling the structural demands of attractor networks and E-INNs, which are typically studied independently. By combining two sets of synapses with different properties, the proposed approach allows for the coexistence of both attractor dynamics and irregular firing in a neural circuit. * The simulations and theoretical analysis demonstrate that the approach leads to improved network performance compared to using only one set of synapses. The enhanced performance includes accelerated convergence of attractor states and the retention of E-I balanced conditions for localized input. This suggests that the combined approach can achieve the advantages of both attractor networks and E-INNs simultaneously. * The study provides insight into how structured neural computations can be realized through the integration of irregular firings of neurons. By investigating the coexistence of attractor networks and E-INNs, the approach sheds light on the mechanisms underlying complex neural processing, contributing to a better understanding of neural computation in the brain. Weaknesses: * The authors do not discuss the biological plausibility of the proposed two-set synapse model. It is essential to consider whether such a system could be implemented in actual neural circuits and whether it aligns with known biological mechanisms. * Another drawback of the paper is the lack of proper discussion of how general these findings are when the paper's assumptions are softened or when dealing with sufficiently different learning algorithms. When studying biological features in silico, qualitative and quantitative robustness is extremely important. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the encouraging and valuable comments. Below are our replies to the reviewer's concerns. 1. On the biological plausibility of our model. We recently found an experimental study [1] (which we were not aware of when writing this paper) that already provides strong evidence for the two-set synapse assumption in our model. In the experimental data, the authors measured synapse strengths using combined two-photon and scanning electron microscopy techniques. They found **“no evidence that strong synapses have a predominant role in the selectivity of cortical neuron responses”**, which corresponds to the unstructured and strong synapses for EI balance dynamics in our model. They also found that **“spatial clustering of co-active inputs appears to be reserved for weaker synapses”**, which corresponds to the weak synapses for CANN dynamics in our model. Please see the experimental figure in the appended PDF. 2. On the generality of our model. The results of our model are quite general. There are no specific assumptions in the model, other than a requirement on the relationship between the two sets of synaptic weights: one scales as $\mathcal{O}(1/K)$ and the other as $\mathcal{O}(1/\sqrt{K})$, with K the connectivity of neurons. However, this strict scaling relationship is mainly for the convenience of theoretical analysis, as done in [2]. In practice, this scaling relationship can be largely relaxed: as long as the synaptic weights for EI balance dynamics are much stronger than those for CANN dynamics, our results hold. The other parameters are all consistent with the standard parameter settings of EI balance and CANN dynamics. In the revised manuscript, we will discuss the generality of our model. [1] Scholl, B., Thomas, C. I., Ryan, M. A., Kamasawa, N., & Fitzpatrick, D. (2021). Cortical response selectivity derives from strength in numbers of synapses. Nature, 590(7844), 111-114. [2] van Vreeswijk, C., & Sompolinsky, H. (1998). Chaotic balanced state in a model of cortical circuits. Neural Computation, 10(6), 1321-1371. --- Rebuttal Comment 1.1: Comment: Following the rebuttal I have adjusted my score.
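As a concrete illustration of the two-set synapse assumption discussed above, the sketch below builds a strong, unstructured O(1/sqrt(K)) weight matrix for the EI balance dynamics and a weak, feature-tuned O(1/K) matrix for the CANN dynamics, then sums them. All sizes, base weights, and the Gaussian tuning width are illustrative assumptions, not the paper's actual parameters (Tables S4-S6).

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 1000, 100            # neuron count and mean in-degree (illustrative)
J0, w_max = 1.0, 1.0        # base weight scales (hypothetical values)

# Strong, unstructured synapses for EI balance: O(1/sqrt(K)) on a sparse random graph
mask = rng.random((N, N)) < K / N
W_balance = (J0 / np.sqrt(K)) * mask

# Weak, structured synapses for CANN dynamics: O(1/K), Gaussian in feature space
x = np.linspace(-np.pi, np.pi, N, endpoint=False)     # preferred features
dx = np.abs(x[:, None] - x[None, :])
dx = np.minimum(dx, 2 * np.pi - dx)                   # periodic feature distance
W_cann = (w_max / K) * np.exp(-dx**2 / (2 * 0.5**2))

W_total = W_balance + W_cann   # the two synapse sets coexist on the same neurons
```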
Rebuttal 1: Rebuttal: **On the biological plausibility of our model** We are delighted to find that an experimental study [1], which we were not aware of when writing this NeurIPS paper, has already provided strong evidence for the key assumption of our model, namely, that the neural circuit consists of two sets of synapses: a weak one for attractor computation and a strong one for EI balance dynamics. In this experimental work, the authors measured synapse strengths using combined two-photon and scanning electron microscopy techniques. They found **“no evidence that strong synapses have a predominant role in the selectivity of cortical neuron responses”**, which corresponds to the strong and unstructured synapses for EI balance dynamics in our model. They also found that **“spatial clustering of co-active inputs appears to be reserved for weaker synapses”**, which indicates that the set of weak synapses determines the orientation selectivity of neurons (note that orientation selectivity is often modeled by a CANN [2]). Please refer to the experimental figure in the appended PDF. [1] Scholl, B., Thomas, C. I., Ryan, M. A., Kamasawa, N., & Fitzpatrick, D. (2021). Cortical response selectivity derives from strength in numbers of synapses. Nature, 590(7844), 111-114. [2] Ben-Yishai, R., Bar-Or, R. L., & Sompolinsky, H. (1995). Theory of orientation tuning in visual cortex. Proceedings of the National Academy of Sciences, 92(9), 3844-3848. Pdf: /pdf/b74655ca664a3561335acae41310778b0fe61e09.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
NeuralGF: Unsupervised Point Normal Estimation by Learning Neural Gradient Function
Accept (poster)
Summary: The paper proposes a method to learn normals for point clouds using a neural representation. The idea is to perform multi-step point moving and compute a series of losses that constrain the gradient of the field to describe the local shape consistently. The paper presents an extensive set of comparisons with existing methods, showing promising results, although the training time is particularly slow. == POST-REBUTTAL == After the discussion, the authors addressed my concerns. From the other reviews, I also see a general consensus for acceptance, and the general recommendations are about clarifying some minor aspects. The only negative reviewer seems unwilling to defend the rejection position and does not point to significant weaknesses. Hence, I decided to increase my score and lean toward acceptance. The authors already agreed to incorporate the suggested changes, which are indeed essential; in particular, I would stress that the two figures that show partial sections of complete shapes should be clearly described so as not to be deceptive. I wish the authors the best of luck with their work. Strengths: 1) The paper obtains good reconstruction results; the shapes shown span a variety of geometries, and the method seems to outperform the competitors. 2) While the method part is not straightforward to understand, the attached code and the implementation details should provide enough detail for replicability. Weaknesses: 1) The paper does not convey a clear and precise message about its contribution. I am not an expert in this specific field, and I have quite a hard time understanding what makes this work different from previous ones. According to the conclusion, the main contribution seems to be introducing a new loss/optimization scheme. However, from the ablation study (which is also not easy to inspect, given the number of experiments and the lack of bolding for the best results), the "full" method is not always the clear winner, which makes the contribution of the losses unclear. From my understanding, the main advancement is in providing multi-scale neighbour consistency, which helps orient all the normals in the same direction. I suggest clearly stating the main insights and adding more structure to 3.2 (e.g., with paragraph titles). 2) Some works seem missing: [A] proposes a smoothness regularization for implicit representations; [B] involves a differentiable Poisson shape reconstruction. I think a proper discussion would be useful, especially fostering this kind of discussion in the related works. For example, at the moment, the two paragraphs end by highlighting that the method is unsupervised and achieves better performance than the previous works. But how is this obtained? What is the key aspect that enables such an advancement and is not considered in the previous works (if any)? [A]: Implicit Geometric Regularization for Learning Shapes, Gropp et al., ICML 2020 [B]: Shape As Points: A Differentiable Poisson Solver, Peng et al., NeurIPS 2021 Technical Quality: 3 good Clarity: 2 fair Questions for Authors: A) From 3.2 I cannot completely understand how the method obtains the correct normal orientation between the two possible directions. Is it obtained thanks to the multi-scale neighbourhood size? B) From the supplementary material, the training time seems dramatically slower than that of other methods. What is the main cause of that? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Limitations are sufficiently discussed in the supplementary material Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### 1. Discussion of contributions and improvements over previous works. As we can see from Table 2, existing supervised methods can achieve high-precision unoriented normals. However, from Table 1, we can observe that higher-precision unoriented normals do not result in more accurate oriented normals when using a normal orientation algorithm based on propagation strategies, such as PCA+MST vs. AdaFit+MST and PCA+QPBO vs. HSurf-Net+QPBO. This means that even if we develop better unoriented normal estimation algorithms, utilizing existing normal orientation algorithms will not necessarily lead to better orientation results. In brief, the bottleneck of oriented normal estimation is correctly determining the orientation. The supervised methods require expensive ground truth as supervision and have been extensively studied, while unsupervised learning of normals is still a largely unexplored field. Based on the above observations, we focus on directly learning oriented normals with higher orientation accuracy in an unsupervised manner, instead of learning unoriented normals. Both IGR [ICML 2020] and SAP [NeurIPS 2021] learn a neural shape representation from point clouds without oriented normals. IGR proposes an implicit geometric regularization to favor a smooth zero level set of an implicit function f, i.e., f(x)=0. SAP proposes a differentiable Poisson solver to represent the shape surface as an oriented point cloud at the zero level set of an indicator function f, i.e., f(x)=0. These methods focus on accurately locating the position of the zero iso-surface of f(x) to extract the shape surface. However, they ignore constraints on the gradient of the function while optimizing the neural network. We know that the gradient determines the direction of function convergence, and the gradient at the iso-surface can be used as the normal of the surface. If the gradient is guided reasonably, the convergence process becomes more robust and efficient, avoiding local extrema caused by noise or outliers. Motivated by this, we incorporate neural gradients into implicit function learning to achieve oriented normals. In this work, we introduce a neural gradient function, consisting of the multi-step moving strategy along the gradient in Eq.(2), the gradient uniformity over multi-scale neighborhood sizes in Eq.(7), and the gradient consistency across multi-step moving in Eq.(8). From the ablation studies in Table 3(a), we can see that the performance of the algorithm drops considerably if we do not use these strategies, especially the losses in Eq.(7) and Eq.(8). In Fig.6 of the paper and Fig.1 of the supplementary material, thanks to the losses in Eq.(2) and Eq.(7), our method is more robust against noise compared to surface reconstruction methods. The multi-scale neighbour consistency mentioned by the reviewer, i.e., Eq.(7), is one of the important advancements of this work. ### 2. Clarification of the ablation study. For the ablation studies in Table 3, we report results for both tasks, unoriented and oriented normal estimation. The first two categories (a) and (b) in the table are obtained by removing modules (losses and inputs). It can be seen that the performance of the algorithm drops relative to the complete method 'Full', especially when the losses in Eq.(7) and Eq.(8) are removed. These studies demonstrate the effectiveness of our novel designs and the contribution of the losses to improving the performance of the algorithm.
The latter two categories (c) and (d) are experiments on parameters, and they are based on using all losses. Some parameter settings provide better results on a single task, but not on both tasks. To handle the different cases across more datasets, such as KITTI, we choose the parameters that perform best on average across both tasks. One cannot conclude that the contribution of the losses is unclear simply because 'Full' is not always the winner. ### 3. How to obtain the correct normal orientation. The normal orientation is achieved via the gradient of the learned implicit function, and our proposed neural gradient function learns an implicit global surface representation from data. Implicit representation approaches, such as signed distance fields (SDF), represent the surface as the zero level set of an implicit function f, i.e., f(x)=0. Therefore, we can train a neural network to regress signed distances, where SDF<0 inside, SDF>0 outside, and SDF=0 on the surface, so that the SDF increases from the inside to the outside of the surface. Then, the gradient vector field of the SDF is obtained, and the gradient on the iso-surface should have a uniform orientation. In our method, the optimization is formulated as an iterative moving operation of points toward the target surface. According to Neural-Pull [ICML 2021], the gradient indicates the direction in 3D space in which the signed distance from the surface increases the fastest, so moving a point along or against the gradient (decided by the SDF sign) will find its nearest path to the surface. We can obtain the gradient at each point using the learned SDF, and the gradient is perpendicular to the surface and points inward or outward depending on the initialization of the network. ### 4. The training time is slower than that of other methods. Note that we only reported the running time of the various methods for predicting normals from point clouds (testing), and Table 1 in the supplementary material does not include the training time of the other methods. The other, supervised methods can be trained in advance using ground truth, and their trained models are then used for testing. In contrast, our unsupervised method does not require training data or ground truth to train the model, but needs to be optimized on the test data to obtain its learned function, so we provide our optimization (training) time in the table. As for training time, our method has a similar time cost (about 40 hours) to the SOTA method SHS-Net on the entire PCPNet dataset using an NVIDIA 2080 Ti GPU. --- Rebuttal Comment 1.1: Title: Post-rebuttal Comment: I thank the authors for their answers to my concerns. I understand that the normal orientation can be recovered using the SDF, which separates the inside from the outside. However, in some of the shown examples, the concept of inside/outside is not well defined (e.g., Figure 3, Figure 7). How are these cases solved? Is the output sign ignored, and the normals are considered unoriented (and the visualization is just illustrative)? At the moment, I do not have further questions, and I am considering increasing my score. I am looking forward to reading other reviewers' opinions. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for considering increasing the score. The output signs in Figure 3 and Figure 7 are not ignored, and the estimated normals are oriented and have consistent orientations.
We know that implicit functions can produce artifact surfaces when fitted to point clouds with open surface structures, but we do not care about the entire reconstructed surface; we only focus on the regions of the surface where points exist, where the SDF can be correctly defined and its gradient has a consistent orientation. For a region without points, where the SDF is uncertain and the zero iso-surface indeterminate, we do not use the SDF of that region to solve for the gradient. The example in Figure 3 is part of a full shape with a closed surface, its inside/outside is defined, and we use a section of it for visualization. The example in Figure 7 is a point cloud from the KITTI dataset; the implicit function learns a closed surface from it, and the points on the surface have a consistent gradient. We only use the SDF at the points to solve for the gradients as the normals. --- Reply to Comment 1.1.2: Title: We are glad to take more questions Comment: Dear reviewer 6zu3, We are glad to receive additional comments or take more questions from you. We believe they would help clarify any remaining ambiguities, and we hope you will consider increasing your rating as mentioned in your previous comment. Thanks, Authors
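The core operation described in the rebuttal above (point 3), taking the normalized gradient of the learned signed distance field as the oriented normal, is essentially a one-liner with automatic differentiation. Below is a minimal PyTorch sketch under the assumption that `sdf_net` stands in for the trained network; the toy sphere SDF is only a sanity check, not the paper's model.

```python
import torch

def sdf_normals(sdf_net, points):
    """Oriented normals as normalized gradients of a learned SDF.
    sdf_net maps (N, 3) points to (N, 1) signed distances; the global
    sign (inward vs. outward) follows the network's initialization."""
    p = points.detach().requires_grad_(True)
    f = sdf_net(p)
    # Each f_i depends only on p_i, so the gradient of the sum yields
    # one gradient vector per point.
    g = torch.autograd.grad(f.sum(), p)[0]
    return torch.nn.functional.normalize(g, dim=-1)

# Sanity check with a toy sphere SDF, whose true normal at x is x/|x|
sphere_sdf = lambda x: x.norm(dim=-1, keepdim=True) - 1.0
pts = torch.nn.functional.normalize(torch.randn(8, 3), dim=-1)
print(sdf_normals(sphere_sdf, pts))  # rows approximately equal to pts
```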
Summary: The paper introduces a method for estimating oriented normals from a given point cloud by utilizing a neural network to model a signed distance field (SDF). The proposed approach involves training the SDF representation, which allows for easy querying of gradients at the positions of the point cloud. The method employs a set of loss functions that simulate an iterative process of moving query points to match target points based on the neural gradients. Experiments are conducted on unsupervised oriented normal estimation from input point clouds, which may contain noise, outliers, and density variations. Strengths: - The authors clearly highlight the limitations of previous works, providing a good motivation for their proposed method. The promising results showcased in Figure 1 further support this motivation. - The introduction and related work sections are comprehensive, providing the necessary background for understanding the normal estimation task. This makes the paper self-contained. - The method is presented in a well-structured manner, starting with a high-level overview before delving into the details. This organization helps to keep the reader engaged and informed throughout the paper. Weaknesses: - The mathematical definitions and notations in the method section need to be revisited. Some crucial definitions, such as $f_i$ in Equation 2, are mentioned without being properly defined, making Sections 3.1 and 3.2 harder to understand. Additionally, the notation $ \{ Q, G\} $ is confusing, as the operation involving this set containing two sets of points is not clearly defined or highlighted. The notation $f_i^G$ also requires clarification. Furthermore, the unit gradient $\boldsymbol{n}$ mentioned in Line 164 lacks clear information about its 3D positions. - The authors refer to the post-processing step of computing the gradients of the learned SDF as "inference". However, "inference" typically refers to the process of applying a trained model to unseen inputs for generalization. This usage of terminology can be misleading and confusing. - Figure 2 does not fully capture the presented method, as it introduces notations that are only defined in the text. It is unclear from the figure alone, even with the caption, what the input to the model is. - A comparison to other neural SDF methods that learn an SDF from an input point cloud, such as SAL (https://arxiv.org/abs/1911.10414) or IGR (https://arxiv.org/abs/2002.10099), is missing. These methods could be relevant for extracting normals easily, as in the suggested method. - The paper lacks a reference to SAP (https://pengsongyou.github.io/sap), which provides a scenario of reconstructing normals and a surface from an input point cloud. - It is recommended to include illustrations for the different losses mentioned, similar to the visualization in Figure 2 (i) at the bottom. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see the concerns and questions in the weaknesses. A few further questions: - Are the loss coefficients tuned for different noise levels, or are they fixed throughout the experiments? - Can the presented method also yield the completely negative solution, i.e., $-\boldsymbol{n}$, which is equivalent up to a global sign? Does this depend on the initialization or other factors? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors discussed limitation, however the limitation discussion and figure is shown soley in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### 1. Some definitions, notations, and wordings need to be revised. $f_i$ in Eq.(2) represents the signed distance of each point during the i-th position movement process. $f_i^G$ in Eq.(3) represents the signed distance of each point in point set G during the i-th position movement process. As in Line 117, Q and G are two point sets sampled from the raw point cloud in different ways, and they are used as the input of the network during optimization. The unit gradient n in Line 164 is the normal of each point on the surface. The wording 'inference' will be changed to 'normal estimation'. We will revise the paper to further polish the definitions, notations, and wordings. ### 2. Comments on Fig.2, loss and model input. We will revise Fig.2 to clearly illustrate the method and add an illustration for the designed loss. During the optimization, the model input is two point sets Q and G sampled from the raw point cloud. During the normal estimation, the model input is the entire raw point cloud. ### 3. Comparison with other neural SDF methods. We will add references to the mentioned methods, namely SAL, IGR and SAP, in the revised version. We use some implicit representation methods to estimate oriented normals. The comparison of normal RMSE on point clouds in the datasets PCPNet and FamousShape is reported in the following table and the visual comparison is shown in *Rebuttal PDF Fig.6*. We can see that our method has clear advantages. We also have compared with some other surface reconstruction methods in Fig.6 of the paper and Fig.1 of the supplementary material. Our method can reconstruct better surfaces, especially on noisy point clouds. Moreover, our method can handle point clouds with uneven sampling and open surface structure, such as the KITTI dataset in Fig.7. |Method |SAP [1] |IGR [2] |SAL [3] |Neural-Pull [4] |Ours | |:-: |:-: |:-: |:-: |:-: |:-: | |Noise |57.56 |54.77 |46.69 |48.48 |***36.92***| |Density |41.32 |75.90 |43.78 |26.22 |***26.08***| |Average |49.44 |65.33 |45.24 |37.35 |***31.50***| [1] Peng et al., Shape As Points: A Differentiable Poisson Solver, NeurIPS 2021. [2] Gropp et al., Implicit Geometric Regularization for Learning Shapes, ICML 2020. [3] Atzmon et al., SAL: Sign Agnostic Learning of Shapes from Raw Data, CVPR 2020. [4] Ma et al., Neural-Pull: Learning signed distance functions from point clouds by learning to pull space onto surfaces. ICML 2021. ### 4. More questions. The coefficients for each loss are fixed across all experiments. The oriented normal, i.e., gradient, is derived from the learned neural gradient function, and its sign depends on the initialization of the network. We do not observe that our method yields the complete negative solution of the sign. In the evaluation, if all normals have negative orientations with respect to the ground truth, we can simply reverse their orientation. --- Rebuttal Comment 1.1: Title: post rebuttal Comment: Thank you for addressing my main concerns and issues. I encourage the authors to add the mentioned clarifications regarding the notations and improve Figure 2 and the loss illustrations. The additional results presented in the rebuttal are compelling. They should be incorporated in the final submission, especially the comparison to other neural implicit representation methods that struggle to reconstruct from noisy point clouds. As most reviewers mentioned this requirement, it is clear that the paper would benefit from such comparison and discussion in the main paper. 
Based on these and the other reviews, I raised my rating to weak accept. I believe the paper has a solid contribution, but it requires the mentioned amendments in order to be a valid submission. --- Reply to Comment 1.1.1: Title: Thanks for the final rating of acceptance Comment: Dear reviewer 1E4M, Thanks for the acceptance rating. We will follow your advice to update our revision. Best, Authors --- Rebuttal 2: Comment: Dear 1E4M, we would love to hear your thoughts. Did the rebuttal and the other reviews change your mind?
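The rebuttal to this reviewer notes that the learned gradient field is consistent up to a single global sign set by the network initialization, and that in evaluation the whole field can simply be reversed if it disagrees with the ground truth. A minimal sketch of that global sign alignment follows; the function name and the mean-agreement criterion are assumptions, since the paper's exact evaluation code is not shown.

```python
import torch

def align_global_sign(pred_normals, gt_normals):
    """Resolve the global sign ambiguity of a predicted normal field.
    If the predicted unit normals mostly point against the ground-truth
    ones, flip the entire field at once (one global sign, not per point)."""
    agreement = (pred_normals * gt_normals).sum(dim=-1).mean()
    return pred_normals if agreement >= 0 else -pred_normals
```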
Summary: In this work the authors present an unsupervised framework for predicting globally consistent, accurate normals given a point cloud as input. The crux of the method is to predict an implicit surface representation (a signed distance function, to be exact) from the input point cloud by leveraging the fact that the gradient of the neural SDF at a surface point gives the surface normal at that point. The authors distinguish their method from existing similar approaches by overcoming drawbacks like lack of global consistency, designing a loss function that penalizes incorrectly signed predictions and also considers multi-scale neighborhoods. Furthermore, they consider a multi-step iterative approach for refining their surface normal estimates. They show impressive results in terms of RMSE of unoriented/oriented normals on challenging datasets across different levels of sampling density and noise. Strengths: 1. The proposed method achieves SoTA performance compared to other unsupervised methods on the PCPNet and FamousShape datasets. 2. In the presence of noisy input points, their method is quite competitive even against supervised baselines, which is a big plus. 3. Accurately oriented point clouds are hugely sought-after in downstream applications like surface reconstruction, and their results on single-object surface reconstruction demonstrate the value of their method in this important application. Weaknesses: 1. Comparison with other implicit representations. At its core, the proposed method learns a neural SDF for the shape represented by the input point cloud, and the estimated normals are simply the gradients computed from this neural SDF using automatic differentiation. As such, I believe there should be more comparison drawn to similar methods like Neural-Pull [1], SIREN [2], SAL [3], and Neuralangelo [4]. In particular, [4] uses a finite-difference-based approach to estimate the surface normal during training, and I would be curious to see if a similar technique can get similar results with fewer resources for normal estimation. 2. Hyperparameter tuning. Since the method is optimization based, one potential risk is mishandling the hyper-parameter tuning, which turns out to be very important. For example, how does the method choose the hyper-parameters for each shape? Does a different input point set require different hyper-parameters, or can one set of hyper-parameters be used for a wide range of shapes? I think tuning the hyper-parameters of a neural field fitting procedure can drastically change the performance, and arguably with proper hyper-parameter tuning, one can probably find a way to curate smooth solutions for different input instances. As a result, it is essential for this kind of test-time optimization-based method to report a rigorous hyper-parameter tuning procedure to eliminate the risk of accidentally adding human judgment into producing the results. References: [1] Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces. https://arxiv.org/abs/2011.13495 [2] SIREN. https://arxiv.org/abs/2006.09661 [3] SAL. https://arxiv.org/abs/1911.10414 [4] Neuralangelo: High-Fidelity Neural Surface Reconstruction. https://research.nvidia.com/labs/dir/neuralangelo/ [5] NKSR. https://research.nvidia.com/labs/toronto-ai/NKSR/ Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The density-based experiments are unclear to me. Could the authors please explain what the “gradient” and “stripe” settings exactly imply?
The supplementary material shows a few qualitative examples, but can you explain how they were generated? It also seems that supervised methods perform comparably to the proposed method. Could the authors give an intuition for why their method suffers from density variation but not from noise (see Table 2 and Fig. 5, especially at low threshold angles)? Which hyperparameters can be tuned to address noise vs. density variation, and what is the trade-off between the two? This would help with a better understanding of the authors’ contributions. 2. In the analysis of unoriented point clouds, we can see that LRR is the second-best performing unsupervised normal estimation method. Yet, we do not see an application of LRR with techniques for introducing orientation, like MST, in the study on the estimation of oriented normals. Adding LRR + (some orientation method) type methods to Table 2 and Fig. 5 would make the results a bit stronger. Basically, this would help address the question: why not just pick a SoTA unsupervised unoriented normal estimation method and apply some simple orientation method? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitation section in the supplementary. Other potential limitations include shapes with open surfaces and shapes without volume. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### 1. Comparison with other implicit representations. We use several implicit representation methods to estimate oriented normals. The comparison of normal RMSE on point clouds from the PCPNet and FamousShape datasets is reported in the following table, and the visual comparison is shown in *Rebuttal PDF Fig.6*. We can see that our method has clear advantages. We have also compared with some other surface reconstruction methods in Fig.6 of the paper and Fig.1 of the supplementary material. Our method can reconstruct better surfaces, especially on noisy point clouds. Moreover, our method can handle point clouds with uneven sampling and open surface structures, such as the KITTI dataset in Fig.7. We will add references to the mentioned methods in the revised version. Neuralangelo [5] aims to recover dense 3D surfaces via image-based neural rendering and only provides the source code of a Blender addon. Rewriting the algorithm to estimate normals from point clouds is difficult within the short rebuttal period. |Method |SAP [1] |IGR [2] |SAL [3] |Neural-Pull [4] |Ours | |:-: |:-: |:-: |:-: |:-: |:-: | |Noise |57.56 |54.77 |46.69 |48.48 |***36.92***| |Density |41.32 |75.90 |43.78 |26.22 |***26.08***| |Average |49.44 |65.33 |45.24 |37.35 |***31.50***| [1] Peng et al., Shape As Points: A Differentiable Poisson Solver, NeurIPS 2021. [2] Gropp et al., Implicit Geometric Regularization for Learning Shapes, ICML 2020. [3] Atzmon et al., SAL: Sign Agnostic Learning of Shapes from Raw Data, CVPR 2020. [4] Ma et al., Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces, ICML 2021. [5] Li et al., Neuralangelo: High-Fidelity Neural Surface Reconstruction, CVPR 2023. ### 2. Hyperparameter tuning. In all experiments, we use the same network structure and loss weight factors. We did not perform hyperparameter tuning for each shape. We use the same hyperparameters for all shapes in a benchmark dataset, and only use different parameters to adapt to different shapes in different datasets. So a different input point set does not require different hyperparameters within a dataset, and one hyperparameter setting can be used for a wide range of shapes. Our method has one hyperparameter that needs to be tuned, i.e., the standard deviation of the distribution D for different datasets. This hyperparameter is predefined in the code we provide, and we tend to choose a larger value for datasets with sparse sampling or high curvature. ### 3. Explanation of the density-based experiments. The 'gradient' setting simulates data collected using a 3D scanner, where nearby points are dense while far points are sparse. To achieve this, we give higher weight to points that are closer to the simulated scanner in the probability-based sampling process. The 'stripe' setting simulates occlusion during data collection, making the points in the occluded areas sparse or absent. To achieve that, we divide the shape into multiple areas and sample the points in specific areas with extremely low weights. ### 4. Why does the method suffer from density variation at low angle thresholds (Fig.5), but not from noise? The optimization of our method is formulated as an iterative moving operation on the input point positions. This strategy and the constraints for gradient uniformity over multi-scale neighborhood sizes enable the model to handle noisy data.
During optimization, the input point set Q is generated from the raw point cloud through a probability distribution D, which is built based on the neighborhood of the query points. Therefore, density variation affects the input data, and the generated points in sparse areas may be far away from the surface, increasing the difficulty of optimization in the point moving operation. As stated in our response to Question 2, we use the same parameters for all categories (clean, noise, and density variation) of a dataset for a fair comparison with other methods. A solution may be to choose different distributions for the clean, noisy, and unevenly sampled point clouds of a dataset, respectively. ### 5. Experiment on LRR+(some orientation method). The RMSE results of LRR+MST/QPBO/ODP on the PCPNet and FamousShape datasets are shown in the following table, and the PGP curves are shown in *Rebuttal PDF Fig.4*. For oriented normal estimation methods based on two-stage paradigms, the initial unoriented normals affect the performance of the normal orientation. From this table, we observe that higher-precision unoriented normals do not result in more accurate oriented normals when using a normal orientation algorithm based on a propagation strategy, such as PCA+MST vs. LRR+MST. This means that even if we use better unoriented normal estimation algorithms (unsupervised or supervised), utilizing existing normal orientation algorithms does not necessarily lead to better orientation results. |Method | PCA+MST | PCA+QPBO | PCA+ODP | LRR+MST | LRR+QPBO | LRR+ODP | Ours | |:-: |:-: |:-: |:-: |:-: |:-: |:-: |:-: | |PCPNet | 28.52 | 26.52 | 32.16 | 44.82 | 41.98 | 32.44 | **18.70** | |FamousShape | 40.48 | 41.31 | 42.92 | 57.83 | 59.84 | 51.93 | **26.16** | --- Rebuttal Comment 1.1: Title: More questions on hyper-parameter tuning Comment: It’s still not very clear to me how a hyper-parameter is chosen for each dataset. Which metric do you use to tune the hyper-parameter? How do you select the hold-out set? --- Reply to Comment 1.1.1: Title: Responses to hyperparameter tuning Comment: The hyper-parameter is first set empirically and then tuned according to the experimental results on the validation set. The metric we use to tune the hyper-parameter is the RMSE of the oriented normals. As with existing methods, this metric is also used in the evaluation experiments. Specifically, both the PCPNet dataset and the FamousShape dataset contain six categories, so we simply use the average RMSE over the validation set as the main indicator when tuning the hyper-parameter for each dataset. As with existing methods, we use the standard data splits (training/validation/testing sets) for the datasets used, and the KITTI dataset is only used as a testing set. We will add more details on hyper-parameter tuning in the revised version.
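To illustrate the 'gradient' and 'stripe' density settings the rebuttal describes, here is a small NumPy sketch of probability-based subsampling: points near a simulated scanner are kept with higher probability, and alternate bands receive extremely low weight. The specific weighting functions (inverse distance, band count, keep probability) are assumptions for illustration; the benchmark's exact scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_subsample(points, scanner_pos, n_keep):
    """'Gradient' density: keep points with probability decreasing
    in distance from a simulated scanner position."""
    d = np.linalg.norm(points - scanner_pos, axis=1)
    w = 1.0 / (d + 1e-8)          # hypothetical inverse-distance weighting
    idx = rng.choice(len(points), size=n_keep, replace=False, p=w / w.sum())
    return points[idx]

def stripe_subsample(points, n_keep, axis=0, n_bands=8, low=0.05):
    """'Stripe' density: alternate bands along one axis are sampled
    with extremely low weight, mimicking occlusion."""
    t = points[:, axis]
    band = np.floor((t - t.min()) / (np.ptp(t) / n_bands + 1e-8)).astype(int)
    w = np.where(band % 2 == 0, 1.0, low)
    idx = rng.choice(len(points), size=n_keep, replace=False, p=w / w.sum())
    return points[idx]
```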
Summary: This work proposes to learn neural gradient functions from point clouds to estimate oriented normals in an unsupervised manner. Specifically, the method introduces several loss functions to constrain query points to iteratively fit the underlying surface, which is defined by the sampled points. Meanwhile, the local gradients are incorporated into the surface approximation to measure the minimum signed deviation of queries, resulting in a consistent normal field associated with the surface. Lastly, evaluations demonstrate the superior performance of the proposed method over existing approaches. Strengths: 1. This paper is well organized and nicely written. The presentation of the motivation is clear and smooth. 2. The work is well motivated. The idea of learning neural gradients for normal estimation is inspiring. 3. The visual and quantitative evaluations are promising. Weaknesses: 1. I am not sure why we need a distribution D. 2. I do not see any running efficiency statistics, which would be important for a fair assessment. 3. I think there is a missing comparison with [73]. Also, some challenging cases from [73] should be included. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: 1. I believe that the designed loss functions can facilitate query points to iteratively reach the moving targets and aggregate onto the approximated surface, thereby learning a global surface representation of the data. However, I doubt whether the incorporated gradients can achieve a consistent normal field, especially in some challenging cases. Is it possible to give more details? 2. Both point sets Q and G are sampled from the input point cloud, so in the test stage, how are the normals computed for all points? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 4 excellent Contribution: 2 fair Limitations: 1. I do not see any failure cases in the paper. It is essential to show some failure cases for the reader to investigate the failure modes and benefit future research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### 1. Why is a distribution D needed? The optimization of our method is formulated as an iterative moving operation of points to their target positions. We use this distribution to generate the training data, i.e., the query points, which are pulled onto the surface during optimization. As in Line 117, the input point set Q is sampled from the raw point cloud via the distribution D. Line 206 describes how to construct the distribution. ### 2. Running efficiency. In Section 2 and Table 1 of the **supplementary material**, we have compared the network parameters and running time of the learning-based methods. ### 3. Comparison with GCNO and using the challenging cases of GCNO [73]. We conduct an evaluation on a dataset that has the same shapes as the FamousShape dataset, but each shape in this dataset contains only 5000 points. As shown in the following table, we report quantitative comparison results of oriented normal estimation on this dataset with sparse point clouds. The traditional baseline algorithms, including GCNO and PCA+MST, are implemented in C++ on the Windows platform and run on an Intel i9-11900K CPU. The other, learning-based methods are implemented in PyTorch on the Linux platform and run on an NVIDIA 2080 Ti GPU. We can see that our method has the best RMSE result. Comparisons on the challenging cases of GCNO [73] and other cases are shown in the ***Rebuttal PDF***. These results demonstrate the good performance of our method on sparse point clouds. |Method |HSurf-Net+ODP |PCA+MST |PCPNet |SHS-Net |GCNO |Ours | |:-: |:-: |:-: |:-: |:-: |:-: |:-: | |RMSE |62.51 |45.40 |48.48 |32.64 |45.14 |**24.35**| [73] Xu et al., Globally consistent normal orientation for point clouds by regularizing the winding-number field. ACM TOG 2023. ### 4. Doubt about incorporating gradients to achieve consistent normals. During the learning of the global surface representation, the gradient of the signed distance field determines the convergence direction of the zero iso-surface, and the consistency of the gradient affects the quality of the final result. To ensure a continuous and smooth surface, adding constraints on the gradient during optimization can improve the robustness of the algorithm for surface representation and avoid local extrema caused by noise or outliers. In this work, we introduce the multi-step moving strategy along the gradient in Eq.(2), the gradient uniformity over multi-scale neighborhood sizes in Eq.(7), and the gradient consistency across multi-step moving in Eq.(8). The quantitative results in Table 1 and Figure 5, the noisy data in Figure 6, and the uneven data with open structures from the KITTI dataset in Figure 7 indicate that our method can obtain consistent normals under various geometric structures, noise levels, and non-uniform densities. Overall, the shapes in the FamousShape dataset have more complex geometry and topology than those in the PCPNet dataset, while the point clouds in the KITTI dataset are extremely uneven and sparse and have an open surface structure. The challenging cases provided in Question 3 and more cases in the ***Rebuttal PDF*** show the good performance of our method. At the same time, we have also provided a detailed analysis of the limitations and failure cases of our method in the supplementary material. ### 5. How to compute the normals of all points? During training, the point sets Q and G sampled from the raw point cloud are used as input.
As in Line 122, during testing, the entire point cloud is fed into the trained network to derive the gradient at each point, and the solved gradient is used as the normal. ### 6. Failure cases. Failure cases are already provided in Section 4 of the **supplementary material**. Our method fails on noisy point clouds of a thin sheet with a hollow structure: the mixing of noisy points from the upper and lower planes blurs the internal and external structure, and the algorithm then treats all points as belonging to the same plane. --- Rebuttal Comment 1.1: Title: Post Rebuttal Comment: Dear Reviewer vh4j, We have provided a comparison with GCNO [73], used the challenging cases of GCNO, and further clarified how incorporating gradients achieves consistent normals. In light of this, we would like to know whether you believe we have addressed your concerns; if so, we hope that you would be willing to increase your score. Thank you for your time, The Authors --- Rebuttal 2: Comment: Dear vh4j, we would love to hear your thoughts. Did the rebuttal and the other reviews change your mind?
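The iterative point-moving operation referenced throughout these rebuttals (the multi-step strategy of Eq.(2), in the spirit of Neural-Pull) can be sketched in a few lines of PyTorch. This is a simplified illustration under the assumption that `sdf_net` is the trained field; the paper's actual losses in Eqs.(2), (7), and (8) add constraints this sketch omits.

```python
import torch

def move_to_surface(sdf_net, q, n_steps=3):
    """Multi-step moving of query points toward the zero level set:
    q <- q - f(q) * grad f(q) / |grad f(q)|, repeated n_steps times.
    The sign of the SDF decides whether the step goes along or against
    the gradient, pulling each query onto its nearest surface point."""
    for _ in range(n_steps):
        q = q.detach().requires_grad_(True)
        f = sdf_net(q)                          # (N, 1) signed distances
        g = torch.autograd.grad(f.sum(), q)[0]  # (N, 3) gradients
        n = torch.nn.functional.normalize(g, dim=-1)
        q = q - f * n                           # broadcast step per point
    return q.detach()

# Toy check: random queries land on a unit sphere after a few steps
sphere_sdf = lambda x: x.norm(dim=-1, keepdim=True) - 1.0
q0 = 2.0 * torch.randn(8, 3)
print(move_to_surface(sphere_sdf, q0).norm(dim=-1))  # all approximately 1
```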
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable comments, and our responses to all reviewers are as follows. More visual comparison results are shown in the provided PDF. Pdf: /pdf/3d5b39f9f5d7af2d4f1bb75b4cf71cb3a8d76e29.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes a method that predicts oriented surface normals given a 3D point cloud. The method is based on SDF surface reconstruction. Given a potentially noisy point cloud, an SDF is fitted using an approach similar to Neural-Pull [45]. The surface normal at each point can then be computed from the gradient of the SDF. Since a global SDF of the shape is recovered, the orientation of the predicted surface normals is also globally consistent. Additionally, the paper proposes various improvements on top of Neural-Pull to improve normal prediction quality: a multi-step movement strategy during optimization, and a multi-scale neighborhood size strategy. Strengths: * The proposed method is unsupervised -- it is optimized on a per-shape basis and, unlike [23,8,39,38], it does not need to be trained on a large collection of shapes, which also avoids any training-evaluation domain gap. * Unlike local fitting methods [8,82,37,15], the proposed method produces globally consistent surface orientation thanks to the use of a global SDF. * The proposed method achieved state-of-the-art performance on both oriented and unoriented surface normal estimation. It worked especially well for noisy point clouds. * The paper includes very comprehensive ablations to show the effects of various design decisions as well as the new components such as multi-step supervision and multi-scale neighbor selection. Weaknesses: * The proposed method resembles many existing works on point cloud 3D reconstruction, such as SAL [6], Neural-Pull [45], Shape As Points (Peng et al.), NDF (Chibane et al.). In fact, recovering the normal can be considered a side effect of surface reconstruction -- once the oriented surface is obtained, the oriented normal can be obtained naturally. As there is no comparison of surface normal quality with such methods in the paper, it is not clear if the proposed method has any significant benefit over these methods from the surface reconstruction community. * Compared to feed-forward methods such as PCPNet [23], the proposed method requires per-shape optimization, which can be computationally expensive. * The paper would be easier to follow if it devoted some paragraphs to the connections between state-of-the-art surface reconstruction and normal estimation literature. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * How does the proposed method compare with previous works in terms of speed? * Is it possible to directly repurpose surface reconstruction methods for normal estimation? How would they perform? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations and societal impact of the paper are adequately addressed in the supplemental material. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### 1. Comparing with other implicit representation methods and repurposing surface reconstruction methods for normal estimation. We use several implicit representation methods to estimate oriented normals. The comparison of normal RMSE on point clouds from the PCPNet and FamousShape datasets is reported in the following table, and the visual comparison is shown in *Rebuttal PDF Fig.6*. We can see that our method has clear advantages. We have also compared with some other surface reconstruction methods in Fig.6 of the paper and Fig.1 of the supplementary material. Our method can reconstruct better surfaces, especially on noisy point clouds. Moreover, our method can handle point clouds with uneven sampling and open surface structures, such as those in the KITTI dataset in Fig.7. |Method |SAP [1] |IGR [2] |SAL [3] |Neural-Pull [4] |Ours | |:-: |:-: |:-: |:-: |:-: |:-: | |Noise |57.56 |54.77 |46.69 |48.48 |***36.92***| |Density |41.32 |75.90 |43.78 |26.22 |***26.08***| |Average |49.44 |65.33 |45.24 |37.35 |***31.50***| [1] Peng et al., Shape As Points: A Differentiable Poisson Solver, NeurIPS 2021. [2] Gropp et al., Implicit Geometric Regularization for Learning Shapes, ICML 2020. [3] Atzmon et al., SAL: Sign Agnostic Learning of Shapes from Raw Data, CVPR 2020. [4] Ma et al., Neural-Pull: Learning signed distance functions from point clouds by learning to pull space onto surfaces. ICML 2021. ### 2. The per-shape optimization is computationally expensive compared to PCPNet. Existing learning-based normal estimation methods, such as PCPNet, require ground-truth normals as supervision for training, and there is still room for improvement, especially in some challenging cases. Although PCPNet can predict point normals in a forward pass with pre-trained parameters, which is much faster than our method, it does not generalize well to unseen cases. We resolve this issue using an overfitting strategy, which significantly improves performance on unseen cases. Our method directly learns normals from raw data without using ground truth and achieves better performance for various inputs. In addition, our method can also provide better surface reconstruction results. We do not intend to replace methods such as PCPNet that can directly use pre-trained models on different data, but rather offer a new exploration that provides another option for the 3D computer vision community to easily obtain more accurate normals and surfaces from point clouds. ### 3. Add a paragraph on the connection between SOTA surface reconstruction and normal estimation. In recent years, researchers have paid more attention to the global consistency of normal orientations, such as iterative Poisson Surface Reconstruction (iPSR) [Hou et al. 2022], Parametric Gauss Reconstruction (PGR) [Lin et al. 2022] and Stochastic Poisson Surface Reconstruction (SPSR) [Sellán and Jacobson 2022]. For example, iPSR runs Poisson reconstruction in an iterative manner and updates normals using the surface generated in the last iteration. PGR regards normals and surface elements in the Gauss formula as unknown parameters and optimizes over the parametric function space. In addition to traditional approaches, deep neural networks have been applied to gather information for orientation and reconstruction. Some works learn the implicit function directly from the input point cloud and eliminate the need for training data, such as SAL [Atzmon et al. 2020], Neural-Pull [Ma et al. 2021], IGR [Gropp et al. 2020] and SAP [Peng et al. 2021].
For example, SAP proposes a differentiable Poisson solver to represent shape surfaces as oriented point clouds, where the point positions and normals are updated during the optimization of the surface. Neural-Pull predicts the signed distance field to move a point along or against the gradient to find its nearest path to the surface, and its gradient is equivalent to the normal. Although these surface reconstruction methods do not aim to estimate normals, they add normals to the constraint conditions during surface optimization to assist reconstruction. The experimental results of these methods also validate the benefits of using normals in surface reconstruction. We will add more discussion in the revised version. ### 4. Comparison of running speed. In Section 2 and Table 1 of the **supplementary material**, we have compared the network parameters and running time of the learning-based methods. --- Rebuttal Comment 1.1: Title: Keep my rating Comment: I would like to thank the authors for the response. The rebuttal has resolved all my concerns, especially on its advantages over directly using surface reconstruction methods to recover surface normals -- it seems that the proposed method performed significantly better than generic surface reconstruction methods. I would like to retain my rating of weak accept. What prevents me from giving a higher rating is mostly the exposition. As the proposed method is based on surface reconstruction methods, it would be helpful to compare and contrast the two, instead of trying to describe the proposed method as something new.
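The Neural-Pull-style pulling operation mentioned above (moving a query point along or against the SDF gradient by the predicted distance) can be sketched as follows. A minimal, hypothetical PyTorch illustration, where `sdf_net` is assumed to map (N, 3) points to (N, 1) signed distances; this is our own sketch, not the code of any of the cited papers:

```python
import torch

def pull_to_surface(sdf_net, queries, steps=1):
    """Move query points onto the zero level set of a learned SDF:
    x <- x - f(x) * grad f(x) / ||grad f(x)||, repeated `steps` times
    (a multi-step variant of the single pull used by Neural-Pull)."""
    x = queries
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        f = sdf_net(x)                           # (N, 1) signed distances
        g = torch.autograd.grad(f.sum(), x)[0]   # (N, 3) gradients
        direction = g / (g.norm(dim=-1, keepdim=True) + 1e-8)
        # A positive f moves the point against the gradient (toward the
        # surface from outside); a negative f moves it along the gradient.
        x = x - f * direction
    return x.detach()
```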
null
null
null
null
null
null
Safe Exploration in Reinforcement Learning: A Generalized Formulation and Algorithms
Accept (poster)
Summary: This paper studies the safe RL problem with a generalized stepwise safety chance (probability) constraint, essential for many safety-critical systems built with RL. The authors propose a meta-algorithm to solve this problem (MASE) by combining unconstrained RL with an uncertainty quantifier to guarantee safety with high probability. They study two variants of MASE, one for the linear model and the other for GPs. Experiments on grid-world and Safety Gym show that MASE with the GSE formulation achieves SOTA performance compared to the baselines. Strengths: The paper is well-written and easy to follow. The proposed GSE problem is more general than the CMDP formulation with an additive expectation constraint. In particular, the GSE problem is an important problem to study for many real-world safety-critical systems, such as autonomous cars. The proposed method looks sound and correct to me. The MASE algorithm essentially builds on Assumption 3.4 (uncertainty quantifier), which could also be a potential bottleneck or limitation, but the authors did a good job by proposing a general linear model and GP for it. The experimental results show better results in terms of safety violations compared to other CMDP-based approaches. Weaknesses: 1. The authors may want to at least discuss this ICML paper in their related work. The hard safety chance constraint is highly relevant to the GSE problem in this paper, although it uses a different approach to solve the problem. Wang, Y., Zhan, S. S., Jiao, R., Wang, Z., Jin, W., Yang, Z., ... & Zhu, Q. (2022). Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments. arXiv preprint arXiv:2209.15090. 2. The authors should discuss the limitations of this work. 3. In the Method, MASE needs to compute a safe action set; how complex is this computation? In order to compute it, what kind of assumptions about the underlying environment do you require? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Page 3 Lines 96 to 98: although I can understand what the authors mean, we should always keep in mind that s_h (a_h) are random variables; you cannot let a random variable be less than a value. It has to be in Pr() format, even if the probability is 1. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I would like to see the authors' opinions on the limitations in their responses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer's encouraging comments. Your feedback and suggestions are valuable and help us improve the quality of the manuscript. We will first answer the Question and then address the comments regarding the Weaknesses. **Lines 96 - 98 [Question].** Thank you for the important question! As the reviewer mentions, we should have paid more attention to the fact that $s_h$ and $a_h$ are random variables. We will fix this in the camera-ready version. **Missing reference [Weakness 1].** Thank you for pointing out the closely related work. Indeed, the existing work [Ref-1] the reviewer points out formulates its problem in a manner highly relevant to the GSE problem. We will discuss this existing work as closely related research. **Limitations of the work [Weakness 2].** Due to the severe page limit, we discuss the limitations of our paper in the supplementary material (Appendix I). As the reviewer mentioned, however, the limitations should be discussed in the main paper, and it would be better to discuss other limitations such as computational cost. We will move this part to the main paper in the camera-ready version while adding more descriptions. **Complexity [Weakness 3].** In our proposed method, a safe action set is computed based on the inference of the uncertainty quantifier (e.g., GP). In our practical implementation, since each GP inference is computationally inexpensive and we require the agent to update the GP model only at the end of the episode (see Line 13 in Algorithm 1), the dominant computational cost depends on the number of actions used for the GP inference. In the case of RL problems with discrete actions, the computational cost is proportional to $|A|$. When the action space is continuous, it is more difficult to compute the safe action set, and a sampling technique is a simple yet powerful solution. In fact, when we conducted our experiment, we randomly sampled next actions and checked whether or not there were actions that conservatively guarantee the safety constraint. The computational cost of this process is in practice smaller than that of the main RL process. We should have explained the details of the practical implementation, so we will address the reviewer's comments in the next version. We sincerely thank the reviewer for taking the time to review our paper. --- Reference [Ref-1] Wang, Yixuan, et al. "Enforcing hard constraints with soft barriers: Safe reinforcement learning in unknown stochastic environments." International Conference on Machine Learning (ICML), 2023.
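The sampling-based construction of the safe action set described above could be sketched roughly as follows. This is a minimal illustration under our own assumptions (a hypothetical `gp` object with a scikit-learn-style `predict(X, return_std=True)` posterior and a fixed confidence multiplier `beta`), not the authors' implementation:

```python
import numpy as np

def safe_action_set(gp, state, candidates, threshold, beta=2.0):
    """Keep candidate actions whose pessimistic (upper-confidence) safety-cost
    estimate stays below the stepwise threshold b_h.

    gp         -- fitted GP model of the safety cost (hypothetical)
    state      -- current state, shape (state_dim,)
    candidates -- randomly sampled actions, shape (n, action_dim)
    """
    X = np.hstack([np.tile(state, (len(candidates), 1)), candidates])
    mean, std = gp.predict(X, return_std=True)  # GP posterior over safety cost
    ucb = mean + beta * std                     # conservative estimate
    return candidates[ucb <= threshold]

# If the returned set is empty, the agent would trigger the emergency stop
# action, as in the MASE meta-algorithm.
```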
Summary: The authors present a novel safe RL algorithm for safety constraints with probability one. First, the authors present a problem formulation that can be used to derive different common safe RL formulations (state constraints, accumulated constraints, etc.). Then the authors leverage a sophisticated technique to reshape the reward, accounting for safety information. In particular, they predict whether the following actions will be safe by learning the action safety state, similar to safety shield techniques. Their shields are learned using Gaussian processes and offer safety predictions with uncertainty estimates. Strengths: 1. The algorithm is technically sound, novel and quite interesting as an idea. 1. The authors present an extensive analysis for the case of generalized linear CMDPs. The theoretical results seem to be correct, but I have a few minor concerns regarding the statements. 2. The numerical results are impressive but shown only for one environment. Weaknesses: 1. The statement of Theorem 3.1 reads as if there are instances of the GSE problem that are not equivalent and cannot be transformed into Problem 1, 2 or 3. This is not shown. 1. The fact that Problem 2 can be transformed into the GSE problem does not mean the GSE problem is more general than Problem 2. If the GSE problem can be transformed to Problem 2 as well, then they are equivalent. Furthermore, if the accumulated cost is used as a state as in Lemma A.1, then the problems cannot truly be equivalent. I think the authors should rephrase these results and make more accurate claims. 1. The method does remind me of safety layer techniques [Dalal 18] and [28] with a more sophisticated reward-shaping approach. Can the authors provide a short discussion on the relation to these papers? 1. The numerical results can be improved * I recommend adding boxplots to the simulation results to see the distributions of the traces. For instance, the boxplots of the trajectories for the final epoch as in [28]. * I recommend providing more epochs in the experiments. I suspect that the algorithms generally achieve similar performance, but the algorithms with probability one constraints simply converge slower. * [Dalal 18] Dalal, G., Dvijotham, K., Vecerik, M., Hester, T., Paduraru, C., & Tassa, Y. (2018). Safe exploration in continuous action spaces. arXiv preprint arXiv:1801.08757. * [Yu 22] Yu, Haonan, Wei Xu, and Haichao Zhang. "Towards safe reinforcement learning with a safety editor policy." Advances in Neural Information Processing Systems 35 (2022): 2608-2621. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Please provide details on the Safety Gym environment. This does not seem like a standard one. 1. How does the algorithm scale with the number of epochs/states? My concern is that the GP is not the most scalable model for RL. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: There are two limitations of the approach: 1) the scalability of the algorithm and 2) the restrictiveness of the problem definition. While the authors discuss the latter limitation and show that this problem formulation is important, I didn't find a discussion on scalability.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer's encouraging comments. Your thoughtful comments are valuable and help us to improve the quality of the manuscript. We will answer the Questions first and then address the comments in the Weaknesses. **Safety-Gym [Question 1].** Thank you for the very important question! Our experimental setting is based on the SOTA paper [28]. The Safety-Gym environment is slightly different from the original one in that the obstacles (i.e., the unsafe region) are placed deterministically so that the environment is solvable and there is a viable solution. Because this information is important for reproducing our work, we will add the above explanations in the camera-ready version. **Scalability [Question 2].** Thank you for the valuable question! As the reviewer mentions, in general, GP is a computationally expensive algorithm. However, computationally scalable GP algorithms have been proposed, such as [Ref-1] or [Ref-2]. For example, KISS-GP (w/ LOVE) [Ref-1] requires $\mathcal{O}(k(n + m \log m))$, where $n$ is the number of training points, $m$ the number of inducing points, and $k$ the number of Lanczos/CG iterations. In our experiment, we used a scalable deep GP algorithm [26], and the computational time of the GP part was much smaller than that of the RL part in our experimental settings. This is also because of the algorithmic setup in which the uncertainty quantifier is updated at the end of the episode (see Line 13 in Algorithm 1). As the reviewer points out, since the GP can be a computational bottleneck in some cases, we will discuss this potential issue in the Limitations. **Terminology issue in Theorem 3.1 [Weakness 1 and Weakness 2].** We apologize that Theorem 3.1 is misleading. We provide Theorem 3.1 to state that *"Problems 1, 2, and 3 can be transformed into the GSE problem."* Notice that the MASE algorithm can solve the GSE problem. By Theorem 3.1, we can further claim that the MASE algorithm can also solve Problems 1, 2, and 3 since they can be transformed into the GSE problem. Namely, Theorem 3.1 implies that the MASE algorithm is useful in many safe RL problems. The current statement is confusing and overclaims more than necessary, which hides our real implications; hence, we will rephrase the statement around Theorem 3.1 and make more accurate claims. Thank you for your helpful comments. **Comparison with [Ref-3], [Ref-4], and [28] [Weakness 3].** As the reviewer mentions, [Ref-3] solves a similar problem to ours. The biggest difference between [Ref-3] and our paper is that [Ref-3] basically assumes that there is always at least one safe action at every time step, while our MASE incorporates situations where there is no safe action. Because our MASE algorithm is developed for solving the GSE problem characterized by a time-varying (potentially decreasing) safety threshold $b_h$, we need to carefully consider the case where there is no safe action. The advantage of MASE compared to [Ref-3] is that it can deal with such hopeless cases via the emergency stop action and a reward penalty based on the uncertainty quantifier, while maintaining the theoretical guarantees on safety and optimality (the latter under the generalized linear CMDP assumption). Next, [Ref-4] tries to solve typical safe RL problems with expected cumulative safety constraints; thus, it is essentially difficult for it to guarantee the safety constraint in the GSE problem and Problems 1, 2, and 3.
Finally, while [28] deals with the so-called probability one constraints (i.e., Problem 1 in our paper), their algorithm heavily penalizes the agent after a constraint violation. Their proposed algorithm is nice in that safety after convergence is guaranteed, but safety is not guaranteed during the learning phase by nature. An advantage of our MASE compared to theirs is that safety is guaranteed even during learning, which is evidenced both theoretically and empirically. Note that we thought the reviewer may have miswritten [Ref-4] as [28], and we gave the above response accordingly. **Numerical results [Weakness 4].** Thank you for the useful comments. We conducted additional experiments with boxplots for more epochs (500 $\rightarrow$ 1000). **We would like to ask the reviewer to see the new results in a new PDF file attached in the global response.** Because we consider it more important to show the performance in terms of reward and safety during learning, we keep the current learning curves. As the reviewer mentions, however, boxplots would definitely be useful for seeing the final performance, and we will add such plots as a new figure (i.e., Figure 4 in the PDF attached to the global response). We sincerely thank the reviewer for taking the time to review our paper. --- References [26] Salimbeni, H. and Deisenroth, M. Doubly stochastic variational inference for deep Gaussian processes. In Neural Information Processing Systems, 2017. [28] Sootla, Aivar, et al. "Sauté RL: Almost surely safe reinforcement learning using state augmentation." International Conference on Machine Learning. PMLR, 2022. [Ref-1] Wilson, A. and Nickisch, H. "Kernel interpolation for scalable structured Gaussian processes (KISS-GP)." International Conference on Machine Learning. 2015. [Ref-2] Pleiss, G., et al. "Constant-time predictive distributions for Gaussian processes." International Conference on Machine Learning. 2018. [Ref-3] Dalal, G., Dvijotham, K., Vecerik, M., Hester, T., Paduraru, C., & Tassa, Y. (2018). Safe exploration in continuous action spaces. arXiv preprint arXiv:1801.08757. [Ref-4] Yu, Haonan, Wei Xu, and Haichao Zhang. "Towards safe reinforcement learning with a safety editor policy." Advances in Neural Information Processing Systems 35 (2022): 2608-2621. --- Rebuttal Comment 1.1: Title: Final remarks Comment: I thank the authors for the responses and the new experiments. I have two remarks: **Terminology issue in Theorem 3.1 [Weakness 1 and Weakness 2].** I agree with the formulation that Problems 1, 2, and 3 can be transformed to GSE for Theorem 3.1. I also suggest clarifying that GSE can be used to solve other problems, but that there is no (to the best of our knowledge) direct proof that solving the other problems would not solve GSE. Unless there is a proof, and that could be another interesting addition. **Comparison with [Ref-3], [Ref-4], and [28] [Weakness 3]** Just a minor remark that I think was alluded to in other responses. I believe your algorithm for the probability one constraint can be seen as a nice generalization of [28] with (again) a nice form of shielding. This is naturally a matter of opinion, but I found this could be an interesting connection. If some connections can be made, perhaps the theory of [28] can be extended to MASE. I also suggest adding boxplots to the final version of the paper, not to the appendix. I feel this would be a fair comparison to [28] showing the superior performance of MASE. --- Reply to Comment 1.1.1: Title: Thank you for further comments.
Comment: We would like to express our sincere gratitude to the reviewer, who read through our responses and the other reviews. **Terminology issue in Theorem 3.1.** Thank you for your valuable comments. We will make sure that the terminology issues around Theorem 3.1 are fixed to avoid misleading and over-claiming statements. Also, thank you for the additional suggestions regarding the theoretical analysis of the GSE problem! We agree with the reviewer that the theoretical analyses around the GSE problem (more broadly, connections among various safe RL formulations) are an interesting research direction. We take the reviewer's comments seriously and will consider them in future work. **Comparison with [Ref-3], [Ref-4], and [28].** We agree with the reviewer that our MASE algorithm for the probability one constraint can be regarded as a nice generalization of [28] combined with a nice variant of shielding methods, which we also believe is an interesting and useful connection for the safe RL community. As we responded to Reviewer sDLd, we will surely add such a discussion to the camera-ready paper. **Boxplots.** Thank you for looking at the new experimental results in the new one-page PDF. We agree with the reviewer that the boxplots are important for a fair comparison to [28]; hence, we will add them to the final version of the main paper (not to the appendix). We appreciate your valuable suggestions and feedback for improving the quality of our manuscript.
Summary: The paper considers the generalized safe exploration problem (Problem 4) and compares it to the other safe exploration problems in the literature, leading to Thm 3.1, which concludes that it is more general than the others. The authors then introduce MASE, a meta-algorithm for safe exploration that attempts to solve the GSE problem with the ability to execute an emergency stop "beforehand", as opposed to others that have done it afterwards. Section 6 then presents a more practical algorithm using GP models, which is then used in Section 7 to compare the new approach with several baselines. Strengths: The paper is well presented, with several technical results about the safety analysis, culminating in Thms 5.6 and 5.7, which show that GLM-MASE guarantees safety with high probability for every time step. Several numerical experiments are presented to compare with unconstrained and SOTA constrained RL algorithms. Weaknesses: The development in the paper assumes access to an emergency stop action that enables the agent to avoid violating a safety constraint. This assumption seems difficult to achieve in practice, as for many agents of interest, stopping is not a safe state. Further, the action would typically be state dependent, requiring at a minimum an emergency action policy (rather than action). Finding either the action or the policy seems non-trivial and could possibly be overly simplifying most problems of interest. Given the comment about [33] at the bottom of page 4, I would have expected to see a comparison of these approaches to imposing an action to avoid an unsafe state before/after a training epoch in the numerical results. If that is included in Section 7, I suggest highlighting that discussion point more. The footnote on page 3 makes reference to this being a conservative approximation of that in safe RL problems with chance constraints, and consigns the discussion to Appendix B. It would seem like more clarification than that is needed here. Also, the more recent work by Pavone in this area will be of interest: * Lucas Janson, Edward Schmerling, and Marco Pavone, "Monte Carlo motion planning for robot trajectory optimization under uncertainty." In Robotics Research, pages 343–361. Springer, 2018. * Anirudha Majumdar and Marco Pavone, "How Should a Robot Assess Risk? Towards an Axiomatic Theory of Risk in Robotics," https://doi.org/10.48550/arXiv.1710.11040 Fig 2 shows that MASE satisfies the constraints, but if I understand the plot correctly, this is achieved with very conservative margins (constraint at 20, values are typically ≤ 5). This is similar to Saute TRPO, but perhaps suggest why the performance (episode return) is so weak compared to the unconstrained solutions (that don't violate the constraints by much). Is there a way to trade off this conservatism to achieve better performance? It is very hard to see the frequency with which TRPO-Lagrangian violates the constraints given the lines/colors in Fig 2. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Does the comment about safe RL problems with chance constraints suggest that there is a better answer available already? The text says that Problem 5 in App B is hard to solve, but presumably the authors of [20] and [22] did so? Can their results be compared to the ones here? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See little discussion of this point in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer's valuable comments and feedback. We will answer the Questions first and then address the comments provided in the Weaknesses. **Chance constraint [Question and Weakness 3].** Thank you for the good question! In general, it is known that joint chance constraints such as the one represented in Problem 5 are hard to handle. Most previous work does not directly deal with this type of constraint and uses some approximations or assumptions. For example, Pfrommer et al. [22] assume known linear time-invariant dynamics. Also, Ono et al. [20] approximate the joint chance constraint as follows: \begin{align} \Pr \left[ \bigvee_{h=1}^{H} s_h \in S_\text{unsafe} \mid P, \pi \right] \le \sum_{h=1}^{H} \Pr \left[ s_h \in S_\text{unsafe} \mid P, \pi \right] = \mathbb{E} \left[ \sum_{h=1}^H \mathbb{I}(s_h \in S_\text{unsafe}) \mid P, \pi \right], \end{align} where the inequality is Boole's inequality (the union bound) and the equality follows from the linearity of expectation. This is a conservative approximation with an additive structure, which is easier to solve than the original joint chance constraint. Ono et al. [20] deal with the above constraints with an additive safety structure. Remark B.3 implies that, by additionally transforming the conservatively-approximated problem into the GSE problem, the problem becomes easier to handle because the safety constraint is instantaneous. As for the comparison with [20] and [22], they assume (partially) known system dynamics; thus, it is not possible to directly compare our method and theirs. We appreciate the reviewer's comments and questions and the good pointers. We will add more of the discussion mentioned above in the next version based on the reviewer's comments. **Emergency stop action [Weakness 1].** As discussed in Appendix I, emergency stop actions should be avoided in some cases. However, the biggest objective of this paper is to 1) formulate the GSE problem and 2) propose the MASE algorithm for solving it. The emergency stop action is a variant of resetting actions, which are commonly used in the episodic RL literature. Issues with resetting actions have been discussed in general RL problem settings and addressed by many existing RL studies, as represented by [13]. We consider that it is not very difficult to combine our proposed method with such previous work (although we additionally need to incorporate the safety budget to return to the initial state). To clearly convey the core ideas and contributions of our approach, we consider it necessary and reasonable to introduce the emergency stop action. **Early termination [Weakness 2].** Thank you for the good comments. **We conducted an experiment and present the new results in a PDF provided in the global response.** The early-terminated MDP (ET-MDP, [33]) and our MASE exhibit similar learning curves for the average episode reward and average episode safety. However, while ET-MDP violated the safety constraint in most episodes (i.e., almost all episodes are terminated after an unsafe action is executed), our MASE did *not* violate any safety constraint. **Performance [Weakness 4].** We appreciate the valuable comments. As the reviewer mentions, our proposed algorithm is sometimes very conservative since it seeks to guarantee the safety constraint at every time step and episode while leveraging the uncertainty quantifier. This is the reason why our algorithm performs worse, in terms of reward, than CPO or TRPO-Lagrangian, which encourage safety in a looser manner.
However, we consider that the main objective of the experiment is to show the validity of our GSE problem and MASE algorithm, for which we need to show that the safety constraint is satisfied empirically, consistently with the theory. As the reviewer points out, it would be an important and interesting direction to consider how to balance the trade-off between safety and reward in our GSE problem and MASE algorithm. Although we provide a theoretical result on optimality (Theorem 5.5) under the generalized linear CMDP assumption, we have observed that it is difficult to achieve reward performance comparable to safety-agnostic RL algorithms (e.g., TRPO) or safe RL algorithms with loose constraints (e.g., CPO, TRPO-Lagrangian) in complicated tasks. We would like to leave this issue to future work. We deeply appreciate the reviewer's valuable comments. **TRPO in Figure 2 [Weakness 5].** Thank you for the advice to improve the presentation. We will modify the color and line width so that it is easier to see the result of TRPO-Lagrangian. We thank the reviewer for the time and effort spent reviewing our paper. --- References [13] Eysenbach, B., et al. "Leave no trace: Learning to reset for safe and autonomous reinforcement learning." ICLR (2017). [20] Ono, M., Pavone, M., Kuwata, Y., and Balaram, J. (2015). Chance-constrained dynamic programming with application to risk-aware robotic space exploration. Autonomous Robots, 39(4):555–571. [22] Pfrommer, S., Gautam, T., Zhou, A., and Sojoudi, S. (2022). Safe reinforcement learning with chance-constrained model predictive control. In Learning for Dynamics and Control Conference (L4DC), pages 291–303. [33] Sun, H., Xu, Z., Fang, M., Peng, Z., Guo, J., Dai, B., and Zhou, B. (2021). Safe exploration by solving early terminated MDP. arXiv preprint arXiv:2107.04200. --- Rebuttal Comment 1.1: Title: Reply Comment: The authors have mostly addressed my concerns and I will raise my score accordingly. --- Reply to Comment 1.1.1: Title: Responses to further comments. Comment: We appreciate your valuable suggestions and thank you for increasing the score! We will ensure that all reviewers' valuable feedback is reflected in the camera-ready paper.
Summary: This paper addresses the problem of safe reinforcement learning, in particular safe exploration. In a nutshell, the authors provide an algorithm that is supposed to go beyond standard 'safety' measures in safe RL, such as the constrained MDP setting. There, an agent is bound to satisfy an additional (expected) cost constraint. Here, the authors postulate that an agent must satisfy a constraint almost surely or with high probability. As a key feature of their approach, the authors assume that a so-called 'emergency stop action' is available that allows the agent to always have a fallback. The authors provide a general framework in the form of an algorithm, prove its theoretical guarantees, and evaluate the method on a set of standard benchmarks. Strengths: The authors tackle a very important problem, safe exploration in RL. Moreover, they correctly identify that the standard constrained RL (or constrained MDP) setting is generally insufficient to ensure safety during exploration. In principle, the constrained RL setting provides just an incentive to act safely during training, and even after training, safety depends on an expectation to satisfy a safety constraint. Generally, the paper is well-written and easy to follow. Weaknesses: I like the paper in general, but in its current state it cannot be accepted, in my opinion. The reason is a severe lack of related work. Most importantly, the authors seem unaware of a flavor of safe RL that is often referred to as 'shielding.' In these settings, a so-called shield 'blocks' unsafe actions according to some pre-defined safety measure. [1] was the first paper to introduce this aspect, [2] introduced shields for almost-sure properties in partially observable environments, [3] provides shields that satisfy a property with a certain probability, and [4] provides a shielding mechanism for multi-agent settings. There are many more relevant works. The general shielding framework depends on varying assumptions, but the general procedure looks like the MASE algorithm. Note that safety during exploration/training is the most important motivation for shielding. I encourage the authors to thoroughly compare these (and more) works to explain the contribution and novelty better. [1] Alshiekh et al.: Safe Reinforcement Learning via Shielding. AAAI 2018 [2] Carr et al.: Safe Reinforcement Learning via Shielding for POMDPs. AAAI 2023 [3] Jansen et al.: Safe Reinforcement Learning Using Probabilistic Shields. CONCUR 2020 [4] Melcer et al.: Shield Decentralization for Safe Multi-Agent Reinforcement Learning. NeurIPS 2022 Moreover, I am not convinced by the experimental evaluation. The emergency action seems central to the approach, but I fail to see how it has been integrated into the standard benchmarks. Then, how often is it called during training by the agent? How does it impede the agent's exploration rate? Minor comments: l92: In the definition of a policy, it seems to be non-stochastic. In such a multi-objective setting, it might be beneficial to use stochastic policies. Have you considered this? l125/126: in a probabilistic setting, there might be states that have a high probability of violating a safety constraint but do not yet violate it. It could be interesting to consider such information in the value function. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Compare your approach to the shielding framework. 2. How does the emergency action affect the agent? Please discuss in line with the experiments. 3.
How is the 'blocking' of actions realized for an RL agent on a technical level? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 1 poor Limitations: The limitations are not properly addressed, see my earlier comments on the evaluation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer's helpful and thoughtful comments and questions. First of all, please let us emphasize that **our main contributions include the formulation of the GSE problem and its theoretical result (i.e., Theorem 3.1)** as well as the proposal of the MASE algorithm. Our contribution regarding the MASE algorithm lies not only in the soundness or novelty of the algorithm itself but also in its applicability to a wide range of safe RL problems such as Problems 1, 2, and 3, which is supported by the good theoretical property of the GSE problem backed by Theorem 3.1. We will first answer the Questions and then address the comments in the Weaknesses. We provide joint responses for similar Questions and Weakness comments. **Shielding method [Question 1 and Weakness 1].** Thank you for pointing out related work. We have read all the papers the reviewer raised as examples, along with other notable works on shielding. As the reviewer mentions, the notion of emergency stop actions is akin to shielding. We now consider that it would be reasonable to regard our MASE algorithm as a variant of shielding methods (especially preemptive shielding in [Ref-1]) that is specialized for the GSE problem. On the other hand, the MASE algorithm not only blocks unsafe actions but also provides a proper penalty for executing the emergency stop action based on the uncertainty quantifier. Thus, this algorithm provides rigorous theoretical guarantees on optimality (e.g., Theorems 4.2 and 5.7) as well as safety (e.g., Theorems 4.1, 5.6, and 6.1). In particular, under the generalized linear CMDP assumption, our proposed MASE algorithm provides theoretical guarantees on both safety and optimality, which is a unique property from the perspective of shielding-based safe RL research. We can enjoy this theoretical advantage in many safe RL problems because of the wide applicability of the GSE problem. We will discuss the relationship between shielding methods and our MASE algorithm while properly citing the papers mentioned by the reviewer (i.e., [Ref-2], [Ref-3], [Ref-4]). In addition, thanks to the reviewer's comments, we found that our GSE problem may contribute to bridging the gap between shielding methods and other safe RL methods. We appreciate the constructive feedback. **Experiment [Questions 2-3 and Weakness 2]**. In our experiment, when the agent identified that there was no safe action based on the GP-based uncertainty quantifier, we simply terminated the current episode (i.e., resetting) immediately after the emergency stop action and started a new episode. Also, the frequency of the emergency stop actions is: | Task | Total | Last 100 epochs | | ---- | ---- | ---- | | PointGoal1 | 154/500 | 24/100 | | CarGoal1 | 397/500 | 46/100 | The emergency stop is a variant of the so-called resetting actions that are quite common in episodic RL settings; it can indeed impede the agent's exploration of the state-action space since the uncertainty quantifier is sometimes quite conservative. We consider that this is the reason why the reward performance of our MASE is worse than that of other methods (e.g., TRPO-Lagrangian, CPO) in Figures 2a and 2d. However, because we require the agent to solve more difficult problems where safety is guaranteed at every time step and episode, we consider this result reasonable to some extent. Though it would be better for an algorithm under such a severe safety constraint to have performance comparable to CPO, we will leave this for future work.
The aforementioned discussion will be useful for readers, so we will add it to the experiment section in the camera-ready version. **Stochastic policy [Minor comment 1]**. To clarify the contributions of our paper, we focus on deterministic policies. As the reviewer mentions, however, stochastic policy settings would be beneficial in many cases. We consider that our two core ideas (the GSE problem and the MASE algorithm) are quite simple and intuitive and can be extended to stochastic policy settings, although the mathematics would become much more complicated. **Value function incorporating safety [Minor comment 2]**. Thank you for the great advice! It seems a promising idea to pessimistically estimate the value function for states that are likely to violate the safety constraint. We will improve our paper in the final version based on the reviewer's comments, especially on the connections with shielding methods. --- References [Ref-1] Alshiekh et al.: Safe Reinforcement Learning via Shielding. AAAI 2018 [Ref-2] Carr et al.: Safe Reinforcement Learning via Shielding for POMDPs. AAAI 2023 [Ref-3] Jansen et al.: Safe Reinforcement Learning Using Probabilistic Shields. CONCUR 2020 [Ref-4] Melcer et al.: Shield Decentralization for Safe Multi-Agent Reinforcement Learning. NeurIPS 2022 --- Rebuttal Comment 1.1: Title: Thanks Comment: I thank the authors for their careful reply. With the new experimental results and the proper placement in the literature, I will increase my score. --- Reply to Comment 1.1.1: Title: Thank you for additional reply Comment: We would like to express our sincere gratitude to the reviewer, who read through our responses and will raise their score. We believe that the valuable comments from Reviewer sDLd are very helpful for us to improve our paper. Thank you very much for your thoughtful comments.
Rebuttal 1: Rebuttal: Dear reviewers and AC, We deeply thank all the reviewers for their insightful comments and constructive suggestions. - We have conducted new experiments based on the reviewers' comments. Additional experimental results are provided in a one-page PDF containing new figures attached to this "global" response. - We have provided a detailed response to each reviewer separately. We hope our replies have addressed all the questions and concerns of the reviewers. We are willing to answer any remaining concerns of the reviewers about our work and sincerely hope the reviewers will value the technical innovation and overall contributions of our paper. Best regards, Authors. Pdf: /pdf/abb48d40b198166f3e82dd04a9f975864f09f545.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Locally Invariant Explanations: Towards Stable and Unidirectional Explanations through Local Invariant Learning
Accept (poster)
Summary: This work extends [LIME](https://arxiv.org/abs/1602.04938) to generate stable and unidirectional feature attributions through the [invariant risk minimization framework](https://arxiv.org/abs/1907.02893). Their method, LINEX, ensures that it follows the desiderata of faithfulness, robustness to neighborhood sampling, stability (similarity to neighbors), and the proposed unidirectionality (nearby examples have the same sign for their attributions). They design LINEX as a concave game (a pure Nash equilibrium always exists), thereby ensuring stability and unidirectionality. In the experiments, they show that they can find feature attributions that stay consistent with their neighbors across various modalities (tabular, image, text) and mostly outperform baselines. Strengths: - The (concave) game-theoretic design of the feature attribution method is novel and sound. - The unidirectionality desideratum is original and very sensible. - The theoretical results provide sound support for their design of the proposed feature attribution method. - The proposed method empirically improves upon previous query-based methods. - Stable & robust feature attributions, such as those generated by the proposed method, are important for researchers as well as practitioners to gain better insights into black-box models. - The paper is clearly written and easy to follow. - Code is provided. Weaknesses: - The largest weakness of this work is the absence of any (novel) qualitative insights into how models work (e.g., similar to Sec. 6.3 and/or 6.4 of [LIME](https://arxiv.org/abs/1602.04938)). The present submission only contains comparisons to previous works and does not show any interesting application of the approach to understand models. - While unidirectionality may be a valid property for most cases, there will be edge cases for which minor differences between nearby examples may be discriminative (e.g., a person that has paid back previous loans and another that has not, while all other features remain the same). - The trustworthiness of the proposed feature attribution method is not shown. It would be helpful to conduct a user study similar to Sec. 5.4 of [LIME](https://arxiv.org/abs/1602.04938). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - How is t set during the experiments? Does it violate Assumption 2? - Are the MeLIME results for CIFAR correct, since their attributions often just seem to be flipped relative to LINEX? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for highlighting the strengths of our approach. We now address your concerns. > qualitative insights into how models work We provide several qualitative insights in the last paragraph of Section 5 and in Appendices I and J. Please also look at Figure 25. 1. Figures 2 and 3 show that LINEX explanations are more coherent and highlight more salient features compared to MeLIME, as does Table 1 (text data), where LINEX highlights features that are reasonable for the sentiment. 2. Supp. I (Figure 23) - Using FMNIST data, we qualitatively show that LINEX/real highlights prominent features (such as sleeves and collars of shirts, handles of bags, outlines of boots/shoes) better than MeLIME, even when the infidelity values are high. 3. Supp. J (Figure 25) - Using an ablation with FMNIST data, we show that the features deemed important by LINEX/real (those with the highest coefficients) are indeed important for the model prediction compared to MeLIME. We show that setting the important features to a baseline value produces a larger change in predicted classes for LINEX/real compared to MeLIME. > edge case for unidirectionality: minor differences for nearby examples may be discriminative Our method has better unidirectionality overall, and we agree that there may be corner cases like the one the reviewer mentions where higher unidirectionality need not necessarily be desirable. > trustworthiness using a (simulated) user study The experiment in Supp. J (Figure 25) demonstrates the trustworthiness of the explanation. Since we claim LINEX to be a better post-hoc explanation compared to the baselines, one of the characteristics we expect is that setting the features deemed important by LINEX to some baseline value must substantially reduce the performance of the model. This is exactly what we show in this experiment using an ablation with FMNIST data, comparing LINEX/real to MeLIME. > How is t set during the experiments? Does it violate Assumption 2? As mentioned in the supplement, for the tabular datasets (IRIS and MEPS) we ask for 5-sparse explanations, while for the others, where MeLIME is relevant, we adopt its setup. Thus, $t$ is set accordingly. This leads to Assumption 2 being violated for the tabular datasets, but not for the others. Nonetheless, we mostly perform better than or similarly to the competitors on the different metrics, even on tabular data. > Are the results for MeLIME correct for CIFAR, since their attributions often just seem to be flipped from LINEX? We checked again and the attributions are correct. They are not necessarily flipped but just not that correlated. For instance in Figure 3, i) for the dog image, in most cases when LINEX is blue (high significance), MeLIME is yellowish-green (moderate significance), and ii) for the bird image, LINEX correctly highlights most of the bird's wings in blue, while MeLIME highlights the wings in blue but also much of the background. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for their detailed response. In particular, I appreciate the discussions on unidirectionality, trustworthiness, and setting t. I think that mentioning the first and last points could be a good and interesting addition to the current exposition. Unfortunately, the rebuttal still does not contain any *new* insights beyond the "better than x" style that I specifically asked for in my review. Thus, I'd like to kindly ask the authors again to consider adding such an analysis.
This would strengthen the present work considerably in my opinion. --- Reply to Comment 1.1.1: Title: Example use cases Comment: We truly appreciate the time you are spending engaging with us. We are glad that our responses on unidirectionality, LINEX trustworthiness, and the setting of $t$ were satisfactory to you. Regarding new insights, we apologize for somewhat misinterpreting your request. We would like to point out that we have used LINEX in multiple applications where the end users have preferred it over LIME by a significant margin. We describe two such applications below. If you find these examples to be beneficial, we will add (a summarized version of) them to the main paper/supplement. 1. *Explaining misclassified incident tickets:* We worked with experts in the information technology division of a large corporation. This division has an automated incident ticket system that processes customer complaints in text format -- termed tickets -- for hardware and devices, and tries to identify the problem category. For example, a customer may have memory issues with their devices or network connectivity issues or one of hundreds of other issues. In a sense, the incident ticket system takes as input a stream of text and tries to predict one of the 300-odd problem classes (e.g., memory error, motherboard issue, display issue, etc.). In some cases, the system may also run an automated script to mitigate the issue once the problem is identified. Although this is a mature system that serves thousands of clients and has a high accuracy (upwards of 95\%), misclassified tickets are the ones that stand out to the clients. The experts told us that not providing good justification for these misclassifications to the clients severely hampers their trust in the system. We worked with them to test our LINEX algorithm in their framework. The resulting algorithm computes local explanations for classifying IT service ticket texts into the problem classes. What the experts particularly liked about the algorithm was that *it identified mostly the same set of words for similar misclassifications* (for example, memory issues misclassified as motherboard issues). This allowed them in some cases to write custom rules to correct for such misclassifications and further improve system performance. This commonality of words, we conjecture, is due to the unidirectionality and stability properties of our algorithm. LIME did not exhibit similar behavior, as it highlighted very different words for similar levels of INFD. Evaluating the algorithmic performance with the experts, we found that the explanations we provided were reasonable in ~79\% of the cases, compared with around 40\% for LIME. This evaluation was done on about 100 misclassified tickets handled by the system after deployment. We were thus able to provide better explanations in (roughly) twice the number of cases. The experts said that our algorithm provided the end users much better intuition about why the system made a mistake (because of the commonality of words), showing in most cases that the mistake was acceptable. They felt that this was useful in developing trust in the system. 2. *Financial Fraud Explanation:* We worked with a large financial institution to explain the fraud detection model they had built. The Association of Certified Fraud Examiners (ACFE) claims that roughly 5\% of a company's revenue is lost to fraud every year. Thus, catching fraud or even non-compliance is extremely important for any organization.
Their model (fraud $= 1$ else $0$) had $\approx$ 91\% accuracy. The inputs to the model were (transactional) invoices and details corresponding to those invoices, such as vendor name, invoice amount, purchase order (PO) or not, vendor address, commodity code, country perception indices (CPI), etc. Since one of the focuses is to reduce false positives, accurate explanations are important. We applied LINEX to this setting to explain why certain invoices were classified as fraudulent. The experts found that in the majority of cases (913 out of 1000) the attributions of LINEX, especially in terms of sign, made sense. For instance, a low CPI implies high risk, and so LINEX gave a negative coefficient for this feature for most examples, while LIME gave a positive coefficient for many instances. Going forward, their plan is to incorporate such capabilities into their workflow to further improve fraud detection precision.
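The ablation-style trustworthiness check described earlier in this thread (Supp. J: set the top-attributed features to a baseline value and see whether the prediction changes) could be sketched as follows. A minimal illustration with hypothetical names (`model`, `attribution`, and `baseline` are our own assumptions, not the authors' code):

```python
import numpy as np

def ablation_flips_prediction(model, x, attribution, baseline, k=5):
    """Set the k features with the largest |attribution| to their baseline
    values and report whether the model's predicted class changes. A larger
    flip rate over a dataset suggests the attributions point at features
    the model actually relies on."""
    top_k = np.argsort(-np.abs(attribution))[:k]  # indices of top-k features
    x_ablated = x.copy()
    x_ablated[top_k] = baseline[top_k]
    before = model.predict(x[None, :])[0]         # sklearn-style predict
    after = model.predict(x_ablated[None, :])[0]
    return before != after
```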
Summary: The paper proposes Locally Invariant Explanations (LINEX) as a variation of the well-known LIME explanation method. While LIME is widely used in interpretability, it has been demonstrated to be dependent on the sampling procedure used to create neighbouring points as well as on the choice of the explanation model. The manuscript aims to mitigate the dependence on the sampling procedure by combining LIME with invariant risk minimisation. The method is evaluated on various datasets, such as IRIS, Fashion MNIST, CIFAR10, Rotten Tomatoes, as well as the Medical Expenditure Survey, and compared to a number of previously proposed neighbourhood selection procedures. Both qualitative and quantitative comparisons using a number of metrics are discussed, and it is demonstrated that the proposed method leads to a statistically significant albeit mild improvement. Strengths: - The proposed neighbourhood selection mechanism seems reasonable and theoretically grounded. - The quantitative experimental analysis is thorough. I particularly like that t-tests are used to evaluate statistical significance. - Experiments cover various datasets and modalities. Weaknesses: - The proposed method modifies the neighbourhood sampling procedure of LIME and is thus rather limited in scope. - The paper is a very hard read. This is not because the covered material is particularly inaccessible but rather a result of a suboptimally structured and at times rather confusing presentation. I will list a few concrete suggestions of how to improve the structure. I want to encourage the authors to thoroughly revise the presentation. This would make the ms a much more accessible and valuable contribution. Concrete suggestions for improving structure: - Explain IRM in a more accessible manner (either in the main text or appendix). For example, an environment is currently not properly defined. Furthermore, avoid imprecise mathematical definitions such as "the expectation \mathbb{E}_e is defined w.r.t. the distribution over points in the environment" where it is not obvious what is meant by "distribution over points in the environment". - I find the brief introduction to the Nash equilibrium very hard to parse (L 134-145). For example, the utility is defined over joint sets of actions but in L141 it takes a player k as an input. - Metrics for evaluation, as outlined in Section 4.1, should be moved to the experimental section. Separate paragraphs with corresponding equations and a brief explanation for each of the metrics used in Table 2 should be added. - Assumptions 1 and 2 are hard to parse. Assumption 1 is rather convoluted and should be split up into at least two sentences. Assumption 2 should clearly state that t and \gamma are thresholds and refer to their definitions. - Similarly, refer to the definitions of the environment generation distribution in Definition 2 and explain what is meant by |.| in the indicator functions of equation 2. - The proof sketch L264-277 is very hard to follow and adds little to the overall presentation. Consider not including it. Minor suggestions for improving writing: - L28 reads as if the doctor is given a cancer diagnosis. I think what is meant is that a patient receives a cancer diagnosis and the doctor has to validate it? - L57 - L63 is rather convoluted. Splitting this up into separate sentences would help. - There are some ill-placed whitespaces such as in L302. - Be consistent with abbreviations, e.g. Supplement vs Suppl. - Avoid a.k.a. in favour of i.e.
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - In L 302, it is stated that SHAP is not a "natural fit". Why is this so? - What was your logic for selecting the particular kernel sizes in L317? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Limitations are addressed briefly but adequately in the conclusions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your suggestions for improving the structure of our paper. We are also glad that you liked the approach and our experimental evaluations. > Scope: modifying the sampling scheme of LIME LIME is one of the most widely used explanation approaches and, being model agnostic, is applicable across many settings and domains. Given that our method is also model agnostic and provides simple and practical alternatives to create environments, we believe our approach will also be widely adopted, especially given the stability and unidirectionality benefits. For instance, it has recently been shown that explanation stability is an important requirement for different stakeholders that perform tasks such as model improvement, domain learning, adapting control and capability assessment (see Figure 2 in Vera et al., Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI, AAAI HCOMP 2022). > Describing IRM in a more accessible manner We will discuss IRM and environments with an example in the final version. We define "distribution of points in the environment" in lines 121-122, so the statement we make regarding the expectation with respect to this distribution in lines 126-127 is, we believe, grounded. > Nash equilibrium definition in Lines 134-139 The utility of player $i$ depends on their strategy $\bf{s_i}$ as well as the strategies of the rest of the players $\bf{s}_{-i}$. In Definition 1, we define a pure strategy Nash Equilibrium, which "identifies a state where each player is using the best possible strategy in response to the rest of the players leaving no incentive for any player to alter their strategy." (quoted from Lines 142-143 of the paper). The $k$ in line 141 is "any other strategy" that would lead to suboptimal utility for player $i$ if it were used in place of strategy $\bf{s}_i^{\dagger}$. We are happy to discuss this more using an example in the Supplement. > Metrics for evaluation should be moved to experiment section We will do this in the final version. > Other presentation/stylistic/minor suggestions Assumption 1: We will rephrase this as "The feature values for each of the dimensions in the samples created *while* forming the local environments are independent." Assumption 2: We will refer back to the definitions of $t$ and $\gamma$ in Algorithm 1. > Minor suggestions We will address the other stylistic/minor suggestions in the final version. > SHAP not a natural fit (line 302) SHAP is not a natural fit here, since all the other explanation methods we list in Table 2 use some perturbation neighborhood to compute explanations, whereas SHAP does not. > Choice of kernel sizes in line 317 The size ($\tau$) of 0.75 was used in the LIME code base as the default value. We chose fractions of it to make the kernel even more local. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal. Comment: I want to thank the authors for the detailed rebuttal. I agree that LIME is a widely used explanation method. I do however think that your work has the downside that it only applies to a specific (admittedly important) method, in contrast to other methods that aim to robustify a wide range of methods. On the other hand, I think that there is value in proposing a method-specific robustness strategy. Thank you for pointing out that the distribution of environment points is defined in lines 121-122. I indeed had missed this. The whole paragraph is quite dense and ideally should be split up.
I realize this is challenging due to space constraints. I encourage the authors to change the ms as outlined in the rebuttal as it would significantly improve its readability. I will raise my score to 'weak accept'. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for revising your assessment and engaging with us. We will update the ms as outlined in the rebuttal. Thanks for suggesting the same.
Summary: Explanation methods are known to be unstable, with flickering attributions on inputs with mild perturbations. This work addresses the explanation instability problem by re-posing it as computing attributions that are invariant across different environments (that are obtained through input perturbations). Their proposed scheme exploits Nash Equilibrium guarantees for convex objectives (of LIME) to obtain an explanation scheme with some desirable properties, which they also demonstrate empirically through multi-modal evaluation. Although I found the idea interesting, I did not like the story and have some critical concerns regarding their evaluation. I elaborate on them below. Strengths: - Datasets used for experimental evaluation span various modalities: Image/Text/Tabular. - The connection with NE (Nash Equilibrium) is refreshing and the algorithm is simple. - Theoretical insight: their theorem is intuitive and I like that their estimator regresses all disagreeing attributions to 0, which is clean. Weaknesses: 1. Connection to IRM. I do not see any connection to the IRM (Inv. Risk Minimization) setting or method except for the fact that they both use "environments" of some kind. Neither does their estimator borrow techniques from the IRMv1 algorithm nor does it impose "invariance" across environments. In that regard, I found their story muddled. 2. Game theoretic perspective. The paper also did not justify well why we should view the estimation problem from a game-theoretic standpoint. More specifically, why does the utility function as defined in L230 lead to an explanation method with the desired properties? Why should the environments be viewed as agents in a game with competing objectives (and limited resources) at all? 3. I have many evaluation concerns that are described in "questions". 4. The paper is not easy to read because the story is cluttered (first two points) and the contributions and motivation are not straightforward. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Baselines and ablation. The problem resembles robust mean estimation. I would like to see comparisons with the following baselines: 1. S-LIME but with attributions clipped at $[-\gamma, \gamma]$. 2. S-LIME but with attributions aggregated using the median instead of the mean. 3. (1) + (2) 4. Median-of-means estimation instead of mean estimation with S-LIME, without increasing the number of function evaluations. 5. (1) + (4) 2. Metrics. 1. Has CAC been used before? I cannot understand well why/how the mean vector matching indicates recourse utility. 2. What happens if we slightly change the definition of GI to aggregate $|y_b(x') - y_e^{x'}(x')|$? Although similar to INFD, we would then be able to check if the estimator is simply smoothing the explanations while the underlying model is not that smooth. 3. Theoretical analysis and explanation. LINEX performed well on the CAC and CI metrics, but I do not understand what is contributing to the improvement. We need an ablation study to understand the relative contribution of different aspects of LINEX: (a) $\gamma$ clipping, (b) setting a non-zero attribution ($w_i$) only when $\sum_e \frac{|w_{ji}|}{w_{ji}}>0$ etc., (c) setting the attribution value to the smallest among environments. In other words, a crisp intuitive explanation for why NE leads to empirical gains would help. 4. Figure 3 is hard to follow. LINEX too is highlighting features in the background. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: I do not see them mentioned anywhere. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are glad that you found our idea to be interesting. Below are our responses to your concerns. > Connection to IRM Besides the aspect of environments, which you mentioned being similar to IRM, citation [4] mentioned in the paper is a game-theoretic algorithm to solve IRM. IRMv1, which was proposed in the paper that introduced IRM, is actually an approximation to the original problem and is also much slower than the game-theoretic version in [4]. As mentioned in the related works section, this work and [3], which also address the OOD problem, are inspirations for our current approach and hence quite related to IRM. Our approach, which automatically cancels features that have different-sign attributions in the two environments, making the explanation invariant to those features, is analogous to canceling out spurious features in IRM. > Why game-theoretic perspective The main motivation for this is possibly answered in our response above. Framing the explanation problem in this novel way naturally leads to the properties that we have mentioned in the paper (viz. unidirectionality, stability), which is not the case (provably) with other methods. Please also see lines 227-231 in the paper. > Readability: Contributions, motivations We address the IRM connection and game-theoretic perspective in the previous responses. Our differentiation is discussed in the second paragraph of related work. We will add the above clarifications to the final version. > The problem resembles robust mean estimation. Baselines and ablations Our approach is *not* estimating the mean. Even with two environments, if a feature has same-sign attributions, the aggregate goes toward the lower (absolute value) attribution, while for opposite-sign attributions it tends toward 0. For more environments, as discussed in Supplement C, the behavior depends on whether there is an odd or even number of environments. This type of behavior is very different from robust mean estimation. Nonetheless, we have experimented with the variations of S-LIME you suggested on the IRIS dataset, and the **results are in a pdf document uploaded as part of the global response above**. In particular: 1. *S-LIME but with attributions clipped at $[-\gamma,\gamma]$:* We do not report additional results, since the behavior is the same as reported in the paper. This is because, as mentioned in Supplement D, $\gamma$ is set based on the maximum absolute coefficient found for LIME, so the clipping is trivial for S-LIME (i.e. coefficients do not change). 2. *S-LIME but with attributions aggregated using the median instead of the mean:* These are reported in the uploaded pdf. In summary, this leads to worse results on INFD and qualitatively similar behavior for the other metrics (when compared with the mean). 3. *(1) + (2):* Since (1) doesn't affect the coefficients, this is the same as (2). 4. *Median-of-means estimation:* These are again reported in the uploaded pdf. In summary, here too results are worse for INFD and qualitatively similar for the other metrics (when compared with the mean). 5. *(1) + (4):* Here again, since (1) doesn't affect the coefficients, this is the same as (4). > Metrics: Significance of CAC CAC is a stability metric, and unidirectionality is the one meant for recourse. CAC checks whether certain important features are highlighted across most of the explanations of a class, making it a reasonable stability metric when there is little rotation or shifting of the input.
> Changing the definition of GI to aggregate $|y_b(x')-y_e^{x'}(x')|$ to check if explanations are being incorrectly smoothed? The GI definition we have in the paper is a stronger metric for checking incorrect smoothing than the suggested change. If the explanations are incorrectly smooth, then explanations of neighboring examples will incur a higher penalty (i.e. high GI) when applied to the input. This will be more pronounced than applying an explanation to the example itself, on which the explanation methods optimize to find an explanation. In any case, INFD checks the quality of explanations applied to themselves at a global scale, which we have reported. We found your suggested metric to give the same values (up to the third decimal) as INFD. > Theoretical analysis and explanation for good performance on the CAC and CI metrics The reason why LINEX performs better on the CAC and CI metrics is mainly that LINEX can recover stable explanations. Stability falls out of our algorithm, which weeds out features that change across the environments. It is reasonable to believe that within a class (CAC) and in the neighborhood of an example (CI), there will be common features that are representative of the model predictions. > Relative contribution of different aspects of LINEX: (a) $\gamma$ clipping, (b) setting a non-zero attribution only when the sign is $>0$, (c) setting the attribution value to the smallest among environments The aspects you mention above are *not* independently applied in LINEX. The $\ell_{\infty}$ norm constraint ($\|\tilde{w}\|_{\infty}\le \gamma$) in Algorithm 1 implicitly leads to the (b) and (c) behaviors you mention. This is the power of LINEX, where this simple penalty we add leads to such desired behaviors. This is what we prove in Theorem 1. > More clarity on Figure 3 The attributions of LINEX for background pixels are small (yellow/red) as opposed to MeLIME. > Specification of Limitations This is provided as the last section of the supplementary material. --- Rebuttal Comment 1.1: Comment: Thanks a lot for the detailed response. I now see the connection to IRM through citation [1] better. Although I appreciate the algorithmic similarities between LINEX and the iterative algorithm of [1], the conceptual connection is still not clear. Invariance is a desired aspect under distribution shifts because of the assumption: _common features have consistent correlation while spurious or specific features have varying correlation across domains_. This is why invariance can recover common/core features [2, 3]. In order to comment on the applicability of the invariance principle to explanation stability, one may need to first understand the cause of variation in explanations across different environments. In my understanding, varied explanations arise because slight variations in inputs may activate very different neuron activation pathways (at least for gradient- or perturbation-based explanations). With this mental model, it is unclear why an invariance or game-theoretic perspective is appropriate for explanation stability. The paper is hard to follow (as also noted by Reviewer bruB), partly because the motivation is unclear. > Even with two environments, if a feature has same-sign attributions, the aggregate goes toward the lower (absolute value) attribution, while for opposite-sign attributions it tends toward 0. An invariant explanation would set every pixel with varying explanation importance, irrespective of sign, to zero. Please correct me if I am wrong. I thank the authors for the additional results with S-LIME as requested.
It is good to see that none of the simple robust methods for aggregation of explanations are better than theirs. I also went through the other reviews and agree with the evaluation concerns shared by Reviewers uF3q and aw9L. Instead of simply evaluating the stability or unidirectionality of the explanations across environments, we should also see how LINEX could lead to more faithful/trustworthy explanations. References. 1. Ahuja, Kartik, et al. "Invariant risk minimization games." International Conference on Machine Learning. PMLR, 2020. 2. Piratla, Vihari, Praneeth Netrapalli, and Sunita Sarawagi. "Efficient domain generalization via common-specific low-rank decomposition." International Conference on Machine Learning. PMLR, 2020. 3. Arjovsky, Martin, et al. "Invariant risk minimization." arXiv preprint arXiv:1907.02893 (2019). --- Reply to Comment 1.1.1: Title: Significant conceptual overlap with IRM Comment: Thank you so much for engaging with us. Below is our response to your concerns. > Concerns shared by Reviewers uF3q and aw9L on how LINEX could lead to more faithful/trustworthy explanations Please note that reviewer aw9L seems to be satisfied with our response related to trustworthiness (uF3q hasn't responded yet). They say, and we quote: "In particular, I appreciate the discussions on unidirectionality, trustworthiness, and setting t. I think that the mention of the first and last points could be good and interesting additions to the current exposition." In particular, for trustworthiness we mentioned the following to them: The experiment in Supp. J (Figure 25) demonstrates the trustworthiness of the explanation. Since we claim LINEX to be a better post-hoc explanation compared to the baselines, one of the characteristics we expect is that setting the features deemed important by LINEX to some baseline value must substantially reduce the performance of the model. This is exactly what we show in this experiment using ablation with FMNIST data, comparing LINEX/real to MeLIME. We have now also described to aw9L two use cases in which LINEX was used. > Conceptual connection to IRM There is significant conceptual overlap with IRM. Besides the points we have mentioned before (viz. environments, canceling or suppressing varying features, game-theoretic method to solve IRM), we now detail deeper similarities. 1. In IRM, a Structural Causal Model (SCM) is assumed to create the data. For explanations, the SCM is just the black box model, which creates data (i.e., outputs) for the explanation procedures. In fact, since the black box takes an input and outputs some result based on it, the SCM can be written as $y=f(x)$ with no latent confounders. The passing of an input through a black box model is analogous to simulating an SCM. Also note that we want to explain just one input at a time, which (in all likelihood) has a single ground truth explanation and hence a true explanation model. This is analogous to IRM, where in principle there should be a single causal graph (in particular, unique causal parents). Moreover, we are trying to explain inputs in a model agnostic manner, so we do not know if inputs took the same, similar or different paths in the model; in fact, the goal, in a certain sense, of model agnostic local explainability is to identify that but only through the lens of output behavior, which is why we believe our approach has merit. 2. The environments in IRM have different joint distributions.
In our case, we create environments through bootstrapping (other ways are also possible), and so the distributions are different in each environment because of covariate shift (as each environment in practice has different $x$'s repeated different numbers of times). 3. Because of (2), the models trained independently on each environment are thus different in both IRM and our case. > An invariant explanation would set every pixel with varying explanation importance irrespective of their sign to zero. This depends on which aspects one wants an explanation to be invariant to. Since one of the main goals of our paper is to provide unidirectional explanations, we want the behavior to differ depending on whether the environments produce explanations of opposite or the same sign. Opposite signs more strongly indicate that the feature is risky to act on, as the output could change in an unpredictable direction. The same sign at least provides some assurance that changing the feature in a certain direction will have the intended directional change in the output. As such, LINEX chooses the lower (absolute) value (amongst the values in the two environments) to be as careful and invariant to it as possible (a small illustrative sketch of this per-feature behavior follows below). Moreover, from an IRM perspective, exact invariance is difficult to obtain in regression, and such approximations are used [3]. This is relevant to us since we are regressing on the class probabilities, like LIME.
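To make the two-environment behavior described above concrete, here is a minimal illustrative sketch in Python. It mimics the per-feature rule the authors describe (same sign: keep the smaller magnitude; opposite sign: suppress toward zero); it is not the authors' Algorithm 1, whose $\ell_\infty$-constrained game induces this behavior implicitly, and the function and variable names are ours.

```python
import numpy as np

def aggregate_two_envs(w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Combine per-feature attributions from two environments.

    Features whose attributions agree in sign keep that sign and take the
    smaller magnitude; features whose attributions disagree in sign (or are
    zero) are suppressed to 0. This reproduces the behavior described in the
    rebuttal, not the game-theoretic solver that induces it.
    """
    agree = (np.sign(w1) == np.sign(w2)) & (np.sign(w1) != 0)
    smaller = np.where(np.abs(w1) <= np.abs(w2), w1, w2)
    return np.where(agree, smaller, 0.0)

# Feature 0 agrees in sign (kept at the lower magnitude, 0.5);
# feature 1 disagrees in sign (zeroed out).
print(aggregate_two_envs(np.array([0.8, -0.3]), np.array([0.5, 0.4])))  # [0.5 0. ]
```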
Summary: In this paper, the authors put forward a local, attribution-based explanation method that fits a local, explainable surrogate (a linear model in this case) to perturbation outputs of the original model. In essence, this is the exact scenario of the immensely popular LIME explanation method; however, this work puts forward LINEX, which seeks to improve on the robustness of LIME (where robustness is measured as Fidelity, Stability, Invariance, and a new property put forward as Unidirectionality). LINEX proceeds by learning a collection of linear classifiers, each fitting a different perturbation "environment", and the authors provide an interpretation of their least-squares fitting approach as a multiplayer game. In addition, the authors provide some minor theoretical analysis (Theorem 1) which justifies the modeling choices, allowing them to give an intuition for why their method works well. Strengths: The paper provides clear motivation for the need for its method, provides a straightforward development of the method, and provides salient theoretical and experimental arguments for the method. The experimental evidence in favor of the method is strong, and the method is simple enough that I expect it could be widely adopted into post-hoc explanation tools without too much experimental effort. The paragraph "Implications of Theorem 1" is nice, as it allows the reader to very easily digest an interpretation of why Algorithm 1 achieves the goal of the method. Weaknesses: The method seems to rely on the diversity of the "environments" in a way that is quite unclear to me. Simply picking Gaussians with different covariances seems like it will not give rise to the desired benefits of the method (i.e., it is unclear that simply selecting an ensemble of random environments should be enough to improve over using a single superset random environment). I understand that in general the method picks the lowest-magnitude attribution, which can in turn provide greater stability. We observe, indeed, in Table 1 that the method provides little benefit in the cases of FMNIST and CIFAR10, where perhaps these random environments are not enough. However, on IRIS and MEPS, the random environments have significantly better performance than S-LIME, and I am having trouble fully understanding why this is the case. (See questions section). While I understand the authors' general point about the desirability of unidirectionality, it seems like it is already generally satisfied in LIME, S-LIME, and MeLIME. The only place LINEX has a considerable advantage in unidirectionality is on IRIS (and to some extent CIFAR). Perhaps to underscore this point, the authors could run experiments on something like Adult or any other financial dataset where their motivating example for using recourse is relevant and see if there is a considerable difference. To be clear, I am not suggesting the authors carry out this experiment during the rebuttal phase, but I am simply stating that having a clearer experimental example demonstrating the importance of unidirectionality would be good in future versions of the paper. There are several works on robustness of explanations (albeit gradient-based and not LIME-based) that are very related to this work but are not included as prior works: [1] Dombrowski et. al. - https://arxiv.org/abs/2012.10425 [2] Wicker et. al. - https://arxiv.org/abs/2212.08507 These are just two of the more recent works that cover robustness of gradient explanations.
I think the more seminal work might serve as a single, sufficient citation to provide readers with more context: [3] Dombrowski et. al. - https://proceedings.neurips.cc/paper/2019/file/bb836c01cdc9120a9c984c525e4b1a4a-Paper.pdf Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Can the authors further elaborate precisely what they mean by "perform bootstrap sampling to create k different environments"? I would like to understand exactly how this is done step-by-step such that you get such an incredibly meaningful difference between the output of your method and that of S-LIME. Why is there a trailing 1 in Equation (2)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing the potential of our approach. We now address your concerns. > Reliance of LINEX on the "diversity" of environments Typically, when using IRM, environments are given. One of our contributions here is proposing simple alternatives to create them for our problem. In our case, we just want sufficient diversity between environments so that the unimportant features for explanations vary significantly across environments. This lets LINEX weed them out and preserve only the stable/persistent features. We found empirically that just setting different environments to random bootstraps from the base environment produced good results. It seems that this simple procedure is sufficient to induce the necessary diversity across insignificant features. Note that our method is agnostic to the way environments are created, and users can create them in other ways as well if it suits their problem better. > Performance with IRIS, MEPS, CIFAR10 and FMNIST We show benefits in the stability measures - CI, $\Upsilon$, CAC - which are either the same or better for all the datasets. This is true even for CIFAR10 and FMNIST, which the reviewer mentions, where LINEX/real is better than MeLIME in 4 out of 5 cases and similar in one case. > Advantage of LINEX in unidirectionality Taking your suggestion, we ran experiments on a finance dataset. In particular, we trained a rule learning model (Dash et al., Boolean Decision Rules via Column Generation, NeurIPS 2018) on the HELOC dataset provided by FICO. This model won the FICO explainability challenge and had 74\% accuracy. Out of the 24 features, the model highlighted three features, namely ExternalRiskEstimate, NetFractionRevolvingBurden and MSinceMostRecentDelq. Using the same setup as for IRIS, we ran experiments for the rand case (results averaged over a random 20\% test set) and found the following: For LIME, S-LIME and LINEX respectively, INFD was 0.017, 0.016, **0.012**; GI was 0.087, 0.069, **0.049**; CI was 0.097, 0.086, **0.041**; $\Upsilon$ was 0.671, 0.719, **0.911**; and CAC was 0.632, 0.793, **0.902**. As can be seen, we not only outperformed the competitors on unidirectionality but also on the other metrics in this case. > Inclusion of prior works Thanks. We will include the two Dombrowski et al. papers and Wicker et al. in the final version. > Detailed method for bootstrap sampling to create k different environments For both S-LIME and LINEX, we create multiple ($k$) sets of perturbation neighborhoods (environments) using bootstrap sampling of the base perturbation neighborhood (base environment). For S-LIME, we compute multiple LIME explanations (one for each environment) and average them; for LINEX, we use Algorithm 1 to obtain a single explanation from the $k$ sets. Thus, both methods have the same starting point. Creating k environments: 1. First generate a single base environment of size (number of examples) $= n$. 2. Repeat the following procedure $k$ times to create $k$ bootstrap samples: a. Create an index set of size $n$ which is randomly sampled with replacement from the base index set $\{1, \ldots, n\}$. b. The examples in the base environment corresponding to the randomly drawn index set constitute a bootstrap sample. (A minimal code sketch of this procedure follows this thread.) > Trailing 1 in Equation (2)? We will remove this to improve clarity. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I would like to thank the authors for their detailed response and for the extra effort they put in to run additional experiments.
They have addressed a few of my concerns in the rebuttal. My overall opinion is that the paper tackles an interesting problem with solid-to-strong experimental results. The key drawbacks I see are the lack of clarity in the unidirectionality constraint and, through no fault of the authors, the difficulty of validating/verifying the improved utility of explanations. In light of the authors' rebuttal I have increased my score. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you so much for your understanding. If there are any specific concerns you still have about the unidirectionality constraint, we would be happy to engage further. Thanks again.
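For reference, here is a minimal sketch of the bootstrap procedure described in the rebuttal above, assuming the base perturbation neighborhood is stored as a NumPy array; the names are illustrative and not taken from the authors' code.

```python
import numpy as np

def make_environments(base_env: np.ndarray, k: int, seed: int = 0) -> list:
    """Create k environments by bootstrap-resampling the base neighborhood.

    base_env has shape (n, d): the n perturbed samples forming the base
    environment. Each environment draws n row indices with replacement, so
    environments differ in which samples are repeated (covariate shift).
    """
    rng = np.random.default_rng(seed)
    n = base_env.shape[0]
    return [base_env[rng.integers(0, n, size=n)] for _ in range(k)]

# Example: two environments from a base neighborhood of 5000 perturbations.
base = np.random.randn(5000, 10)
env_a, env_b = make_environments(base, k=2)
```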
Rebuttal 1: Rebuttal: We thank all the reviewers for their constructive comments. We are glad that you found our work to be an **interesting idea** that could be **widely adopted**, with **clear motivation**, **straightforward development**, **strong experimental evidence**, that is **easily adaptable into post-hoc explanation tools**, **reasonable and theoretically grounded**, and **clearly written**. We now individually address your concerns. Pdf: /pdf/821d557061065ca740b5945f51317b4483c4fef7.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
From Discrete Tokens to High-Fidelity Audio Using Multi-Band Diffusion
Accept (poster)
Summary: The paper proposes to use a diffusion model to recover high-fidelity audio from compressed audio tokens. To improve the performance of the diffusion model, the paper proposes multi-band diffusion with 1) a Frequency Eq. Processor; 2) Scheduler Tuning; and 3) Band-Specific Training. The experiments show that the proposed method surpasses the baseline method (EnCodec) by a margin. Strengths: 1. The paper proposes a diffusion-based model as the decoder for an audio compression model (EnCodec), which surpasses the default decoder jointly trained with the encoder. 2. The paper proposes a multi-band diffusion model to improve generation quality. Weaknesses: 1. The motivation for the multi-band design is not clear. According to the description in the paper, the multi-band processing is conducted on the hidden representation of EnCodec, which does not seem to have frequency information. If there is no intuition or explanation for the multi-band design, it seems that using multiple bands is a way of increasing model parameters (using multiple models, one per band). 2. The experiments are only on EnCodec tokens, which is insufficient. The design in the paper should be verified on vocoder tasks (recovering audio from the mel-spectrum) and compared with more baselines (e.g., BigVGAN, HifiGan, WaveGrad and multi-band MelGAN). Since the mel-spectrum has frequency information, using multi-band on the mel-spectrum is more convincing. 3. In the ablation study, the comparison with the single band seems to be unfair in terms of total model parameters. Technical Quality: 3 good Clarity: 3 good Questions for Authors: My main concern is that the proposed method should be verified on the vocoder task and compared with GAN-based and diffusion-based vocoder baselines. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and suggestions. **Clarification about the setting:** We will clarify the model's input-output relationship in the paper, as it currently appears to be unclear. All models take the full EnCodec compressed representation as input. The diffusion process occurs in the waveform domain, specifically on waveforms that have undergone filtering using a passband filter. **Multi-band motivation:** We use a multi-band approach to address the issue of entangled errors that arise when conducting diffusion on full-band audio. In our experiments, this phenomenon manifested regardless of model size. Our intuition suggests that the model overly relies on lower frequencies already present in the noisy waveform during training steps with small t (close to the clean x_0). Training independently on different frequency bands resolves this problem, preventing the model from inferring high-frequency content from the low frequencies of the noisy waveform and requiring it to extract such information from the conditioning (a small illustrative sketch of such a band split follows this thread). **Discrete units focus:** Our paper is focused on generating from low-bitrate discrete representations. Everything has been designed with this task in mind, to seamlessly plug into prior text-to-audio work. Performing mel-spectrogram-to-audio is not a task we have considered yet. However, we'll add discrete versions of HifiGan and PriorGrad as additional baselines to our work. We'll also add DAC (https://arxiv.org/abs/2306.06546), a new SOTA compression model from July 2023. At 6 kbps, using their public implementation, we obtain comparable performance. However, we want to stress that this is not a decoder based on EnCodec but a completely different compression model. It is possible that replacing their decoder with diffusion would improve performance, which is something we might investigate in the future.

| | Mean | CI95 |
|--------------------------|--------------------|-------------|
| Ground Truth | 90.32 | 1.39 |
| MBD | *85.16* | 0.93 |
| Encodec | 82.73 | 1.11 |
| PriorGrad | 65.16 | 2.2 |
| HifiGan | 82.5 | 1.25 |
| DAC | 84.44 | 1.14 |
| OPUS | 65 | 2.43 |

**Comparison to single band:** We trained multi-band models with 4 times fewer parameters to compare with the single-band model from the ablation table (Table 3). We will add the line "Multi Band small" to Table 3 in the paper:

| Setup | ViSQOL | Mel SNR-L | Mel SNR-M | Mel SNR-H | Mel SNR-A |
|--------|-----------|--------------|---------------|--------------|---------------|
| Multi Band | 3.67±0.02 | 13.33 | 9.85 | 9.26 | 10.81 |
| Multi Band small | 3.56±0.03 | 12.93 | 9.81 | 9.11 | 10.61 |
| Single Band | 3.32±0.02 | 12.76 | 9.82 | 8.58 | 10.39 |

--- Rebuttal Comment 1.1: Title: Thanks for your response Comment: The further results address most of my concerns. I have increased my score to 5.
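To illustrate the band-specific setup described in the rebuttal above, here is a minimal sketch of splitting a waveform into complementary frequency bands with Butterworth low-pass filters; each band would then be modeled by its own diffusion model, and the per-band generations are summed at inference. The cutoff frequencies and filter order are illustrative assumptions, not the paper's exact values.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def split_bands(wav: np.ndarray, sr: int, cutoffs=(1500, 4000, 8000)) -> list:
    """Split a waveform into complementary frequency bands.

    Returns len(cutoffs) + 1 band signals that sum back to the input:
    band i is lowpass(c_i) minus lowpass(c_{i-1}), plus a residual high
    band. Cutoffs here are illustrative, not the paper's values.
    """
    lowpassed = []
    for c in cutoffs:
        sos = butter(8, c, btype="low", fs=sr, output="sos")
        lowpassed.append(sosfiltfilt(sos, wav))
    bands = [lowpassed[0]]
    for lo, hi in zip(lowpassed[:-1], lowpassed[1:]):
        bands.append(hi - lo)
    bands.append(wav - lowpassed[-1])  # residual band above the last cutoff
    return bands

# Each band gets its own denoiser; summing the bands recovers the signal.
wav = np.random.randn(24000 * 2)  # dummy 2 s signal at 24 kHz
bands = split_bands(wav, sr=24000)
assert np.allclose(sum(bands), wav, atol=1e-6)
```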
Summary: This paper presents Multi-Band Diffusion, a diffusion-based decoder for neural audio codecs. The proposed method demonstrates superior generation quality across various audio domains compared to a publicly available state-of-the-art neural audio codec method. Based on an analysis of the time-frequency representation of audio, the paper provides a novel diffusion-based model for audio synthesis. Strengths: * The authors provide extensive experimental results, along with a detailed explanation of their methodology. * They present novel methods such as the band-specific diffusion model, frequency equalizer processor, and power noise scheduler, demonstrating their necessity and assessing their effectiveness through an ablation study. * The model not only outperforms benchmarks in terms of the generation quality of reconstructed samples across different audio domains, but also proves its applicability in tasks such as text-to-audio and text-to-speech through empirical evidence. Weaknesses: * There is no comparison of generation quality when producing from high bit-rate latent representations. While the generation quality remains inferior compared to the ground truth audio, the authors only showcase generation quality from low bit-rate representations. * There is a lack of a comparison of computational cost or model size. Although the authors mention that the proposed method requires more computation compared to standard decoders, a comparison of generation speed or parameter size would have provided a better understanding of the trade-off between generation quality and synthesis speed of this model. Furthermore, given that the model was trained using 1000 diffusion steps and only 20 steps were employed during inference, an evaluation of quality at different diffusion steps during inference could also be provided. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: From the selection of low bit-rate latent representations as the model's conditional input and the use of a small number of diffusion steps during inference, it seems that the authors have considered efficiency alongside the model's performance while empirically validating the proposed method. However, it would be beneficial to extend it and demonstrate the peak performance of the proposed model with larger bit-rate representations and more diffusion steps, as diffusion-based generative models have shown remarkable performance in various fields. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors have adequately addressed both the limitations of their research and its possible societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **High bit rate:** The neural audio codec literature has mainly focused on low bit rates. Moreover, these low-bitrate setups align with the configurations employed by the language modelling approaches that our method seeks to enhance, such as MusicGen or VALL-E. While our method is applicable to higher bit rates, the potential improvements are more marginal and might not justify the computational trade-off. Finally, there are fewer use cases of high-bit-rate representations in the realm of text-to-audio (to the best of our knowledge, none). **Number of denoising steps:** We provide in the appendix of the paper a comparison of different numbers of denoising steps, showing that there is very little improvement beyond 20 steps (see Table A5); a generic sketch of such step subsampling follows this thread. **Computational time & model size:** We will add this table to the appendix of the paper, showing the generation time and model size of EnCodec vs. MBD; we also include the cost of a full LM + decoder pipeline to put it in perspective.

| | Compute time (30s) | #parameters |
|----------------------------------------|------------------------------|-------------------|
| Encodec | 0.1s | 56M |
| MBD | 21.2s | 411M |
| MusicGen-large + Encodec | 102s | 3.3B |
| MusicGen-large + MBD | 123s | 3.7B |

--- Rebuttal Comment 1.1: Title: Rebuttal Response Comment: I thank the authors for addressing the previous concerns, emphasizing the importance of the model's performance at low bitrates, and presenting new evaluation results coupled with computational trade-offs. However, considering the results indicating this model as a high-performing, yet slower and larger, alternative to EnCodec, I have chosen to maintain my initial review.
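As a side note on the 1000-step training / 20-step inference gap discussed above, a common generic recipe for few-step inference is to run the reverse process only on an evenly spaced subset of the training timesteps. The sketch below shows the step selection only; it is our assumption of a standard approach, not necessarily the paper's exact procedure.

```python
import numpy as np

def subsample_timesteps(train_steps: int = 1000, infer_steps: int = 20) -> np.ndarray:
    """Evenly spaced subset of training timesteps, from most to least noisy."""
    return np.linspace(train_steps - 1, 0, infer_steps).round().astype(int)

print(subsample_timesteps())  # 20 timesteps: 999, 946, ..., 53, 0
```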
Summary: This paper proposes a novel multi-band diffusion (MBD) model that generates high-fidelity audio of multiple modalities, e.g., speech, music, and environmental sounds, from low-bitrate discrete representations. The authors show that MBD outperforms EnCodec and Opus in terms of perceptual quality. Strengths: The band-specific diffusion model is novel and appears to be a general technique for universal audio synthesis. The authors design a frequency equalizer to reduce the discrepancy between the prior Gaussian distribution and the data distribution in different frequency bands, which is sensible for improving the consistency and stability of audio generation, especially for general audio. The authors also propose a novel power noise scheduler, which empirically surpasses other commonly used schedulers, e.g., linear and cosine schedulers. I appreciate the high variety of audio generation experiments conducted for evaluating MBD. The generated samples of MBD on the demo page seem promising and are apparently better than EnCodec and Opus at the same bit rates. Weaknesses: 1. The baselines are not strong enough. The authors only compare MBD to EnCodec and Opus, where Opus is an old method proposed in 2012. I am therefore skeptical of the choice of baselines in this paper. Are these baselines strong enough? The authors mention SoundStream (Zeghidour et al., 2021) but do not take it as a baseline. Considering the wide and successful application of SoundStream in many recent audio generation works, e.g., in AudioLM, VALL-E, and NaturalSpeech 2, it would be more convincing if MBD could be compared to SoundStream and evidently demonstrate superiority. I will consider increasing my rating if such an experiment can be supplemented. 2. MBD depends on the frozen latent representations of a pre-trained EnCodec. The dependency of MBD on EnCodec complicates the analysis of the significance of this work. It remains unclear whether the high quality comes from a well-trained EnCodec or the proposed MBD model. The training of MBD also becomes more unstable. Please explain why the encoder of EnCodec cannot be jointly trained with MBD. 3. It is unfair that the generation speed and model size of MBD are not compared to those of EnCodec. While the authors only present the strength of their proposed MBD model in terms of different quality measures, they do not mention the drawback of the generative diffusion model. As an iterative sampling method, it is foreseeable that MBD could be slower than EnCodec by orders of magnitude. It is then unfair to focus only on quality when it is more reasonable to evaluate audio codecs based on their practical value. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Some questions have been stated above. In summary: (i) What were the considerations when choosing the baselines? Are these baselines strong enough? (ii) Are the frozen latent representations of a pre-trained EnCodec necessary for training MBD? Can MBD be trained end-to-end from scratch? (iii) What are the speed and model sizes of MBD in comparison to EnCodec or other baselines (if any)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations of this work have been stated in the weaknesses.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and comments. **Comparison to SoundStream:** Despite our willingness to perform a direct comparison with SoundStream, it's important to note that the authors have not released a public repository, and there is no public reimplementation on GitHub that reproduces SoundStream's performance. **Baselines:** EnCodec is a very strong neural audio codec that builds on SoundStream. In their paper, the authors provide a comparison showing that it outperforms SoundStream (cf. https://arxiv.org/pdf/2210.13438.pdf, Appendix Table A2). Many models such as VALL-E or MusicGen are based on EnCodec. NaturalSpeech 2 uses its own compression model that also builds on SoundStream. To add stronger baselines, we add the recent SOTA compression model DAC (https://arxiv.org/abs/2306.06546) and two other decoders (HifiGan and PriorGrad). We are comparable to DAC while building on different encoders. It is likely that using MultiBand Diffusion as a decoder replacement for DAC would improve performance over its GAN-based decoder.

| | Mean | CI95 |
|--------------------------|--------------------|-------------|
| Ground Truth | 90.32 | 1.39 |
| MBD | **85.16** | 0.93 |
| Encodec | 82.73 | 1.11 |
| PriorGrad | 65.16 | 2.2 |
| HifiGan | 82.5 | 1.25 |
| DAC | 84.44 | 1.14 |
| OPUS | 65 | 2.43 |

**Aim of the paper:** Our approach serves as a replacement for EnCodec's decoder. We will highlight this aspect more prominently in the paper, as it has proven unclear to several reviewers. This approach offers the advantage of flexibility and compatibility across various applications. In the context of text-to-audio generation, it provides a means to swiftly preview audio using the fast and lightweight default decoder, with the option to switch to MultiBand Diffusion when a desirable sample is identified and higher quality is needed. In all tables, the encoder and RVQ are shared between EnCodec and our method, thereby rendering the performance disparity solely attributable to our proposed diffusion decoder. Everything remains identical except for this aspect. **Training end-to-end compression with MBD:** This is an excellent question. We conducted preliminary experiments in this direction. However, we observed that utilising solely the L2 loss on the waveform does not yield satisfactory latent representations when compared to losses specifically designed for compression, such as feature and spectrogram matching, as is the case in a standard neural audio codec. How to combine the diffusion objective with perceptual losses is a complex and open problem that is beyond the scope of this paper. **Compute time:** We will include this table in the appendix of the paper, including compute time and number of parameters for the different methods. We also add the comparison of complete pipelines using MusicGen to put the increase in compute time and model size in perspective.

| | Compute time (30s) | #parameters |
|----------------------------------------|------------------------------|-------------------|
| Encodec | 0.1s | 56M |
| MBD | 21.2s | 411M |
| MusicGen-large + Encodec | 102s | 3.3B |
| MusicGen-large + MBD | 123s | 3.7B |

--- Rebuttal Comment 1.1: Title: Response to the authors Comment: I appreciate the authors' efforts made for the detailed response. The new strong baselines appear to be more convincing than the previous ones.
Besides, following the clarification from the authors, I am convinced that MBD is practically valuable for improving EnCodec's generation from codes. Yet, it seems unfair to me to compare a 411M MBD to an 8x smaller EnCodec. The immediate question raised here would be: if we trained an 8x larger EnCodec, could it perform even better (I guess a ~400M EnCodec could still run much faster than MBD)? Overall, considering the practical value of MBD, I have decided to increase my previous rating to 5. --- Reply to Comment 1.1.1: Title: About a larger EnCodec Comment: We thank the reviewer for considering the supplementary results we provided and increasing their rating. **About the limitation of comparing to a smaller EnCodec model:** While we do not immediately have a 411M EnCodec model to compare to, we would like to highlight that in the original EnCodec paper, the authors tested a larger EnCodec model, in Table A.3 of the supplementary material of [Defossez et al. 2022]. In particular, the authors compared an EnCodec model with 48 initial hidden channels (the default, with 56M parameters) to one with 64 initial hidden channels (which would have roughly 100M parameters). This study shows very limited changes in SI-SNR (going from 6.67 to 6.70 dB) and ViSQOL (4.35 to 4.38) when doubling the model size. This hints at the fact that the limitation of the EnCodec approach does not come from the model size, but instead from the adversarial training procedure.
Summary: This is an interesting submission proposing the use of a diffusion model in a band-by-band manner to generate high-fidelity audio from (potentially) a variety of low-bit-rate inputs. The literature so far has only applied diffusion models for audio generation given spectrogram inputs. A specific diffusion noise schedule is proposed, and a diffusion-oriented frequency equalization method is proposed. Significant improvements in audio quality are described from a number of experimental evaluations. Strengths: Strengths: - Clear presentation (in spite of persistent minor problems with grammar & spelling); - Original multi-band diffusion model; - Good literature review; - Original frequency equalization model; - Good empirical results, i.e. nice improvements in generated audio quality. Weaknesses: The main weakness is a persistent pattern of small mistakes in grammar and spelling. I mention a number below but this is just a small sample. Overall the work is very readable, but the problems with the writing should be corrected. I note that there is mention of the proposed method being significantly slower than the alternatives examined, but I don't see a clear description of the compute cost of the proposed method compared to existing methods. Specific comments: > 27 artefacts [here and below] --> artifacts > have led to rich contextual representations that contains more contains --> contain > 32 They are optimized using complex combination combination --> combinations > 47 Results suggest the proposed method achieves significantly > 48 superior performance than the evaluated baselines. "significantly superior" is an odd phrasing, rewrite? > 67 Clustering using a few centroids leads to the speech content representation being > 68 mostly disentangled from the speaker and the f0 and thus controllable speech generation. End of sentence is not grammatical; missing words before/around "thus controllable speech generation"? "Eq. Processor" in Figure 1 caption vs. "EQ Processor" in the figure itself: choose one or the other, and keep consistent. > 165 As a result, training a diffusion model on full-band audio data would always > 166 provide the ground truth low frequencies when generating high frequencies I don't quite understand this statement, rephrase? > 170 Interestingly, dividing the frequency band > 171 along model channels I don't understand this statement. What are "model channels"? > 238 5.1 Multi modalities model Reading this section, I'm not sure what the "modalities" referred to are. Table 1: "MBD", though it is clear that this is the proposed method, I don't see the acronym introduced. Introduce it early on, e.g. state "... Multi Band Diffusion (MBD) ..."? Technical Quality: 3 good Clarity: 3 good Questions for Authors: What is the computational cost of the proposed MBD compared to Opus and EnCodec? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors cite reasonable limitations to their work, including the possibility of their method being used to generate deep fakes, as well as the usual potential issue with training set bias, in spite of their efforts to avoid that.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First, thank you very much for the review and for taking the time to point out typos and grammar issues. We have addressed all of your comments and will conduct a thorough proofreading of the paper to ensure that no errors appear in the camera-ready version. **Regarding the computational cost:** We will include a table with the number of parameters and the compute time of our method vs. EnCodec; we also include the total cost of a complete LM + decoder pipeline such as the one described in the paper. This table will be included in the appendix of the paper.

| | Compute time (30s) | #parameters |
|----------------------------------------|------------------------------|-------------------|
| Encodec | 0.1s | 56M |
| MBD | 21.2s | 411M |
| MusicGen-large + Encodec | 102s | 3.3B |
| MusicGen-large + MBD | 123s | 3.7B |
Rebuttal 1: Rebuttal: We want to make an ethical statement: the AudioGen and MusicGen team recently released new public implementations with pre-trained models. We trained MultiBand Diffusion with those new versions of the compression models. We found an improvement for MusicGen when using MultiBand Diffusion as a decoder. However, in the first experiment that we did with pre-trained AudioGen, MBD did not improve over the baseline decoder. We want to come clean and say that we will remove the AudioGen results as they currently appear in the paper. However, this doesn't change any other experiment conducted in the rest of the paper. Results (MUSHRA) on MusicGen:

| | Mean | CI95 |
|--------------------|-------|------|
| MusicGen + Encodec | 70.99 | 1.19 |
| MusicGen + MBD | **74.97** | 1.94 |
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents a novel approach to processing audio data by developing and implementing a band-specific diffusion model, a frequency equalizer processor, and a power noise scheduler. The band-specific diffusion model processes various frequency bands independently, thereby reducing the accumulation of entangled errors. The frequency equalizer (Eq.) processor helps to lessen the discrepancy between the Gaussian prior distribution and the actual data distribution across different frequency bands by balancing the energy levels of the Gaussian noise and each frequency band. The power noise scheduler, specifically designed for audio data with high sampling rates, is another contribution. The authors conduct extensive evaluations to gauge the efficiency of their approach, using both objective metrics and human studies. The results show that the proposed approach surpasses current state-of-the-art methods, encompassing both GAN- and diffusion-based methods. Strengths: Generating high-frequency bands using diffusion models is something that has not been tackled very well; I have personally also struggled with this in various experiments. All three contributions (training of band-specific models, the frequency energy balancer, and the power noise scheduler) are valid approaches and were validated by the metrics in the experiments. Weaknesses: It would have been nicer if there were more GAN-based / diffusion-based vocoder baseline models other than just EnCodec. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Regarding the power noise schedule, what were the rules of thumb for selecting hyperparameters? 2. Did the authors set the power noise schedule differently for each band? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have addressed the limitations of this work in the conclusions, such as the slow speed of the current model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
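To make the band-wise energy-balancing idea described in this review concrete, here is a minimal sketch of what such an EQ step could look like; the function name, the RMS-based normalization formula, and the `target_rho` parameter are illustrative assumptions, not the paper's exact processor.

```python
import numpy as np

def eq_process(band: np.ndarray, target_rho: float = 1.0):
    """Rescale one frequency band so its RMS energy sits at target_rho
    relative to the unit-variance Gaussian diffusion prior, reducing the
    per-band mismatch between prior and data energy. The gain is returned
    so the scaling can be inverted after generation."""
    rms = np.sqrt(np.mean(band ** 2) + 1e-12)
    gain = target_rho / rms
    return band * gain, gain

# Example: a quiet high band gets boosted toward unit energy.
band = 0.01 * np.random.randn(16000)
scaled, gain = eq_process(band)
print(gain, np.sqrt(np.mean(scaled ** 2)))  # gain ~100, RMS ~1.0
```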
Rebuttal 1: Rebuttal: Thank you very much for your review. **Baselines:** We conducted a new subjective study (MUSHRA) including new baselines such as a discrete version of HifiGan used in https://arxiv.org/abs/2104.00355 and PriorGrad (https://arxiv.org/abs/2106.06406), both conditioned on EnCodec units at 6 kbps. We also include DAC@6kbps (https://arxiv.org/abs/2306.06546), a concurrent SOTA neural compression model from July 2023. We underline that DAC is not a 1-to-1 comparison since it isn't based on EnCodec codebooks but on its own learned latent space; it is likely that using MultiBand Diffusion on top of those codebooks would improve quality even more.

| | Mean | CI95 |
|--------------------------|--------------------|-------------|
| Ground Truth | 90.32 | 1.39 |
| MBD | *85.16* | 0.93 |
| Encodec | 82.73 | 1.11 |
| PriorGrad | 65.16 | 2.2 |
| HifiGan | 82.5 | 1.25 |
| DAC | 84.44 | 1.14 |
| OPUS | 65 | 2.43 |

**Choice of Noise Schedule:** To determine our noise schedule function, we tested various functions, listened to intermediate states of the diffusion process, and empirically validated our choice and parameters through a grid search. Throughout this process, our primary objective was to have more steps in the less noisy region, as described in the paper. We also adopted a schedule that avoids states where the model cannot predict the noise more accurately than the identity function. **Details about hyperparameters:** In order not to add unnecessary complexity, we used the identical noise schedule for every band; however, we found that tuning the EQ processor differently for every band was beneficial. Comprehensive details will be added in the Appendix of the final version of the paper, with all hyperparameters.
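Since the exact schedule function is not given in the thread above, here is a minimal sketch of one way a power-law noise schedule can concentrate steps in the less noisy region, as the authors describe; the exponent `p` and the `beta_min`/`beta_max` values are illustrative, not the paper's settings.

```python
import numpy as np

def power_noise_schedule(num_steps=1000, beta_min=1e-4, beta_max=0.02, p=3.0):
    """Illustrative power-law beta schedule for a DDPM-style model.
    With p > 1, beta grows slowly at first, so more of the trajectory
    stays in the low-noise region; p = 1 recovers a linear schedule."""
    t = np.linspace(0.0, 1.0, num_steps)
    return beta_min + (beta_max - beta_min) * t ** p

betas = power_noise_schedule()
alpha_bar = np.cumprod(1.0 - betas)  # fraction of signal variance retained
print(alpha_bar[250], alpha_bar[-1])  # early steps stay close to the data
```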
Constrained Policy Optimization with Explicit Behavior Density For Offline Reinforcement Learning
Accept (poster)
Summary: The paper proposes a novel algorithm for offline model-free reinforcement learning by constraining the actor to lie within a safe area defined by an explicit behavioral density. This behavioral density is obtained by thresholding a FlowGAN model. The resulting algorithm is competitive across various D4RL datasets. Strengths: - Clearly presented algorithm and well-motivated method - Strong empirical evaluation and competitive results on D4RL MuJoCo - Solid theoretical justification of the convergence of the FlowGAN and RL training Weaknesses: - Model-based offline methods typically use a state-action-based uncertainty penalty which could also be repurposed for the definition of the safe area, e.g. thresholding the model uncertainty as in MOReL [1]. This would be a useful baseline to show the benefit of the Flow-GAN method - The hand-crafted per-environment piecewise linear scheme for alpha limits the generality of the method - Algorithm 1 seems to be incorrect: the preceding discussion has the Flow-GAN optimization happening before RL training, whereas Algorithm 1 has the Flow-GAN optimization interleaved with RL training. - Line 150: the 'offline MDP' as defined is not a real MDP. For example, the state and action spaces should both be sets, and the definition is a tuple. It is unclear what this definition adds beyond describing the marginal state distribution and behavioral policy. - In Theorem 4.2, the optimal Q-function on the "restricted product space" needs to be defined and explained. Minor: - Line 145: 'triples' is likely a mistake; the transition is a 4-tuple - Tables 1 and 2 are missing EDAC [2], the current SOTA model-free offline RL method [1] MOReL: Model-Based Offline Reinforcement Learning. Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, Thorsten Joachims. [2] Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble. Gaon An, Seungyong Moon, Jang-Hyun Kim, Hyun Oh Song. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Could a dynamics model-based uncertainty (e.g. MOReL) provide a simple alternative to the Flow-GAN threshold? - Could the alpha tuning scheme be replaced by something not hand-crafted to increase the applicability of the method? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Not discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We appreciate your kind suggestions for improving our manuscript. **Response to Weakness 1**: We agree that MOReL is a useful baseline for CPED, and we will certainly include MOReL [1] in our manuscript as a baseline method. From our understanding, MOReL provides good estimates of the offline RL dynamics, enabling a better understanding of the safety of the state space as well. In this sense, both CPED and MOReL aim to identify safe areas, but they implement this in different ways. **Response to Weakness 2**: Among policy control methods for solving offline RL problems, some works (e.g., BEAR, UWAC) determine the value of alpha through automated learning (Lagrangian duality optimization). However, BRAC (ref[1]) has demonstrated through experimental studies that the performance of automatically optimized alpha is not as effective as manually determined alpha values. Therefore, CPED selects a hand-crafted threshold alpha. To better control the training process, CPED further proposes the piecewise linear scheme for adjusting alpha (to the best of our knowledge, CPED is the first to use a piecewise linear scheme for determining alpha in offline RL solutions). In Section 5.2 and Appendix E.5, we present the details of setting the piecewise linear alpha, and the experiments show that this scheme delivers better returns in most scenarios. **Response to Weakness 3:** We appreciate you pointing out the issue of training order when training the Flow-GAN and RL models. When implementing CPED, the Flow-GAN is jointly trained with the RL model. Specifically, the Flow-GAN is first trained for several epochs until it becomes stable, and then the RL model is trained in an interleaved manner with the Flow-GAN. Considering that the Flow-GAN starts training first, in the manuscript the description of the Flow-GAN appears before the RL model. We will further revise Algorithm 1 and clearly state that the Flow-GAN model is first trained for a few epochs (M epochs) and then trained interleaved with the RL model. Please find the attached **pdf file** (in the “**Author Rebuttal by Authors**” part at the top of this page) for the revised figure of Algorithm 1. **Response to Weakness 4**: Thank you for pointing out the unclear statement in the manuscript. The offline MDP is not a real MDP in the online sense; the actions/states under the offline MDP are bounded in the probability space. To better illustrate the concept of "offline MDP", we have revised the definition in Line 150. Please find the attached **pdf file** (in the “**Author Rebuttal by Authors**” part at the top of this page) for the revised definition of "Offline MDP". We want to further explain that the main purpose of proposing the definition of the offline MDP is to highlight the distinction between offline RL and online RL. In offline RL, the state and action spaces are bounded, and this boundary is constrained by the probability measure defined over these spaces. Therefore, in the subsequent theoretical proofs and algorithm designs, we work within this bounded state and action space. **Response to Weakness 5**: Actually, we have explained that the "restricted product space" in Theorem 4.2 refers to $\mathscr{S} \times \mathscr{A}$, where $\mathscr{S}$ and $\mathscr{A}$ are the measured state space and measured action space defined in "Offline MDP" in Line 150, respectively. **Response to Minor Issue 1**: We really appreciate you pointing out the grammar error.
We will further revise the sentence as “Dataset $\mathcal{D}$ contains many 4-tuples $(s,a,s',r)$ that can be viewed as independent.” **Response to Minor Issue 2**: Thank you for pointing out the related method and references. We will certainly include the above references and the EDAC method in the revised manuscript (we have cited [2] already). From Table 1 in [2], the performance of EDAC is competitive with the proposed CPED, and both methods contribute to the SOTA methods for the offline RL task. It is also noted that multiple Q-functions are utilized in the EDAC method, while there are only 2 Q-networks in CPED. Thus, motivated by [2], we believe the proposed CPED has room for further improvement when multiple Q-functions are applied. **Response to Question 1**: We appreciate this interesting question. As far as we know, model-based uncertainty methods (e.g., MOReL) focus more on the state space when defining safe regions, while CPED pays more attention to the safety of the action space. In fact, during the training phase of offline RL methods, OOD actions are the main source of model failure, whereas OOD states are more critical during the testing phase. Though MOReL also considers the uncertainty of the action space in its dynamics-model-based uncertainty, our view on whether MOReL can provide an alternative to Flow-GAN in this regard is relatively conservative. Furthermore, we believe that the key to defining a safe area lies in the ability to estimate the distribution of the safe area well. MOReL uses Gaussian dynamics models, and whether it can be used as an alternative to the Flow-GAN threshold depends on whether the Gaussian dynamics models have enough representational power to learn complex distributions. **Response to Question 2**: As mentioned in our response to Weakness 2, we could apply Lagrangian duality optimization for updating alpha to increase the applicability of CPED (e.g. BEAR and UWAC). Nevertheless, according to BRAC (ref[1]), this updating strategy may not lead to good model performance. Thus, we select the hand-crafted alpha in CPED when implementing the method. In particular, we further propose a piecewise linear scheme to determine alpha dynamically, and the effectiveness of the piecewise linear scheme is also validated by experiments. ref[1]: https://arxiv.org/abs/1911.11361

---

Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thank you for the responses. W2. I don't think a handcrafted schedule should be presented as an advantage. Any other algorithm could have done this manually, but it would have reduced the applicability of their method. W3. Could the authors clarify why it is interleaved? Why would it not be better to use a GAN trained to convergence? Q1: The dynamics model in MOReL provides an uncertainty bonus over both state and action, so it may already incorporate action uncertainty.

---

Reply to Comment 1.1.1: Title: Thanks for your comments Comment: **W2. I don't think a handcrafted schedule should be presented as an advantage. Any other algorithm could have done this manually, but it would have reduced the applicability of their method.** **Response:** Thank you for your comment. We agree with you that a handcrafted alpha may increase the difficulties in hyperparameter tuning, thus limiting the method's applicability. However, our piecewise alpha scheme is straightforward to implement. Specifically, we would like to further emphasize the following: 1.
While the idea of automatically learning alpha (optimizing the Lagrangian multiplier) appears in some early works (BEAR/UWAC), more recent papers (BRAC) tend to prefer a hand-crafted alpha based on experimental considerations. 2. We agree that the handcrafted alpha scheme is not the specific advantage of our method (the advantage of our manuscript comes from the Flow-GAN model). As a matter of fact, the hand-crafted scheme is a useful trick in model training for better convergence and results. 3. In our manuscript, the piecewise linear (constant) scheme is suggested, and this scheme is easy to implement. We only need to set consecutive intervals (4 intervals in the MuJoCo and Antmaze tasks; the results are actually not very sensitive to this choice in our experiments, see Figure 2(a,b) in the manuscript), and the alpha for each interval decreases exponentially during the training process (a code sketch of such a schedule is given after this reply). **W3. Could the authors clarify why it is interleaved? Why would it not be better to use a GAN trained to convergence?** **Response:** Thank you for your comment. We are sorry for our unclear statement in our last response. First, we would like to clarify that before conducting the offline RL task, the GAN has undergone training to achieve convergence (when the Flow-GAN becomes stable). Subsequently, the GAN is trained in an interleaved manner with the RL model. The reason for interleaving the training of the Flow-GAN and the RL model is to enhance the Flow-GAN's adaptation to downstream offline RL tasks. Actually, we attempted to train only the RL model with a converged and fixed Flow-GAN, and we observed that the convergence of the RL model was not sufficiently stable. Therefore, we recommend trying the interleaved training approach. **Q1: The dynamics model in MOReL provides an uncertainty bonus over both state and action, so it may already incorporate action uncertainty.** **Response:** Thank you for pointing out the merits of MOReL. We acknowledge that MOReL captures the uncertainties in both action and state, which we also mentioned in our previous response (*Though MOReL also considers the uncertainty of the action space...*). As for whether MOReL can be a substitute for the Flow-GAN threshold, its feasibility relies on its capacity to learn complex distributions. We find this to be an interesting topic (along with the utilization of model-based mechanisms to assist model-free methods), and it would be meaningful to delve deeper into this direction in the future. We again appreciate your helpful suggestions.
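As referenced in point 3 of the reply above, here is a minimal sketch of a piecewise-constant alpha schedule with exponentially decreasing values; the interval boundaries and alpha values are illustrative assumptions, not the paper's settings.

```python
def piecewise_alpha(step,
                    boundaries=(100_000, 300_000, 600_000),
                    alphas=(10.0, 1.0, 0.1, 0.01)):
    """Illustrative 4-interval schedule for the Lagrangian multiplier:
    alpha is held constant within each training interval and decreases
    exponentially (here by 10x) from one interval to the next."""
    for boundary, alpha in zip(boundaries, alphas):
        if step < boundary:
            return alpha
    return alphas[-1]

assert piecewise_alpha(0) == 10.0 and piecewise_alpha(10**6) == 0.01
```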
Summary: This paper proposes a novel approach named Constrained Policy optimization with Explicit Behavior density (CPED) to mitigate the problems of existing policy control methods, e.g., being overly conservative or failing to identify OOD areas accurately. CPED uses a Flow-GAN model to explicitly estimate the behavior policy density. The strength of CPED lies in its ability to accurately identify safe regions for exploration, which, in turn, leads to less conservative learned policies. Strengths: 1. Well-motivated example in Fig. 1 and overall good writing. 2. Sufficient theoretical evidence to show that CPED has a fast convergence rate and is able to find the optimal Q-function value. Weaknesses: 1. The contribution is limited, as it seems to directly borrow the idea of Flow-GAN into offline RL domains. 2. Besides the motivating example in Fig. 1, there is no faithful evidence, in either the theoretical analysis or the experiments, to show that the method can "accurately identify the feasible region, which includes both observed and unobserved but safe points". 3. Lack of survey and comparison with the most similar work [1], which also learns an expressive generative behavior model, but via the diffusion technique. 4. Though there is an obvious edge of CPED over SPOT in the experiments, more explanation is lacking to support it. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could the authors provide more comparisons between CPED and SPOT and explain why CPED is more feasible? 2. Regarding policy learning (Eq. 7), why not transform it into a Lagrange function like SPOT? To the best of my knowledge, explicitly excluding bad transitions has little edge over other weighting or penalty methods in terms of generalization, which is a key contribution mentioned by the authors. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Lack of related work: the in-sample learning paradigm [2-8] is now one of the trends in offline reinforcement learning; those new methods are worth mentioning in the related work. --- I decided to change my score from 4 to 5 after the first round of discussion with the authors. --- [1] Chen H, Lu C, Ying C, et al. Offline reinforcement learning via high-fidelity generative behavior modeling[J]. arXiv preprint arXiv:2209.14548, 2022. [2] Kostrikov I, Nair A, Levine S. Offline reinforcement learning with implicit q-learning[J]. arXiv preprint arXiv:2110.06169, 2021. [3] Xu H, Jiang L, Jianxiong L, et al. A policy-guided imitation approach for offline reinforcement learning[J]. Advances in Neural Information Processing Systems, 2022, 35: 4085-4098. [4] Garg D, Hejna J, Geist M, et al. Extreme q-learning: Maxent RL without entropy[J]. arXiv preprint arXiv:2301.02328, 2023. [5] Xiao C, Wang H, Pan Y, et al. The in-sample softmax for offline reinforcement learning[J]. arXiv preprint arXiv:2302.14372, 2023. [6] Zhang H, Mao Y, Wang B, et al. In-sample Actor Critic for Offline Reinforcement Learning[C]//The Eleventh International Conference on Learning Representations. 2022. [7] Xu H, Jiang L, Li J, et al. Offline rl with no ood actions: In-sample learning via implicit value regularization[J]. arXiv preprint arXiv:2303.15810, 2023. [8] Hansen-Estruch P, Kostrikov I, Janner M, et al.
Idql: Implicit q-learning as an actor-critic method with diffusion policies[J]. arXiv preprint arXiv:2304.10573, 2023. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their efforts and time reviewing our paper. **Response to Weakness 1**: Thank you for your comment. **We want to emphasize that the original intention of CPED is not to directly apply the Flow-GAN idea to the offline RL problem** for the following reasons. 1. **Advantage of Flow-GAN**: One key reason for selecting Flow-GAN is its advantages as a density estimator. A clear advantage of Flow-GAN is its capability to provide an explicit and direct density estimator for the behavior policy. In contrast, other generative models (VAEs) can only approximate a lower bound on the behavior density through the ELBO. 2. **Further motivation for Flow-GAN**: We choose to use Flow-GAN as a density estimator because, first, we theoretically analyze in Section 3.2 that estimating the density of the behavior policy in RL is equivalent to training a GAN. However, traditional GANs (and VAEs) are essentially sample generators. To realize the idea of estimating a density function, we combine the flow model with a GAN to estimate the behavior density. 3. **Modifications to traditional Flow-GAN**: Flow-GAN was originally designed for image generation problems; in our specific implementation, we make numerous adjustments to the structure of Flow-GAN (ref[1]), including converting many CNNs into fully connected networks and simplifying some residual networks, so that the overall network structure can adapt to the tasks of offline RL. 4. **Other Generative Models in offline RL**: In recent offline RL studies, generative models have become popular for behavior modeling owing to their strong model capacity, such as BCQ (using a VAE), SPOT (using a VAE), and ref[3] (using a diffusion model). However, it is inappropriate to claim these studies simply borrow the VAE/diffusion model into offline RL domains. With respect to the contribution of our manuscript, limited by the length of the rebuttal, please find bullet points **1.1**, **1.2** & **1.3** in the “***Author Rebuttal by Authors***” part at the top of this page. Thus, we do not agree with the claim "*CPED borrows the idea of flow-gan into offline RL domains*", and we believe the contribution of our CPED is underestimated. **Response to Weakness 2**: In theory, we have demonstrated that the estimated density is accurate. Thus, as long as the density satisfies certain smoothness properties, we can prove that the estimated region is accurate. However, in practice, we are unable to observe all policies, making it challenging to quantify how good our estimation is. Nevertheless, the effectiveness of the experiments leads us to believe that the estimated policy is accurate, as the performance of offline RL heavily relies on the accuracy of the estimated region. **Response to Weakness 3**: Thank you for providing related references using the diffusion technique. Actually, we have already cited [1] in the manuscript. We believe [1] is an important baseline for our CPED. Comparing [1] and CPED, we see that CPED outperforms [1] on most D4RL and Antmaze tasks, which further shows the effectiveness of our CPED. **Response to Weakness 4**: We have provided further explanations about our contribution, as well as the comparison between Flow-GAN and VAE (thus CPED and SPOT), in both the response to Weakness 1 and the “***Author Rebuttal by Authors***” part at the top of this page. Here we summarize the key points (limited by the rebuttal size, we do not list details for each point): 1.
**Similarities**: Please refer to the "Similarities" part in Section 2 of the “***Author Rebuttal by Authors***” part at the top of this webpage. 2. **Advantage of CPED and Flow-GAN**: Please refer to the "Advantage of Flow-GAN" part in our **Response to Weakness 1**. 3. **Deficiency of SPOT**: Please refer to the "Deficiency of VAE" part in Section 2 of the “***Author Rebuttal by Authors***” part at the top of this page. 4. **Adaptation of Flow-GAN and CPED for Estimating the Behavior Policy**: Please refer to "Further motivation for Flow-GAN" in our **Response to Weakness 1** above. 5. **Modifications to traditional Flow-GAN**: Please refer to "Modifications to traditional Flow-GAN" in our **Response to Weakness 1** above. Therefore, it is convincing that our proposed CPED has a clear edge over SPOT in both theory and experiments. **Response to Question 1**: Please refer to our responses to Weakness 1 and Weakness 4 for our motivation for using CPED (Flow-GAN) and more comparisons between CPED and SPOT (Flow-GAN vs. VAE). **Response to Question 2**: We are afraid you may have misunderstood our idea for optimizing Equation 7 in the manuscript. The actual objective function of Equation 7 is:

$$\begin{aligned} & \max_{\psi} \; \mathbb{E}_{s \sim \mathcal{P}_\mathcal{D},\, a \sim \pi_{\psi}(\cdot|s),\, (s,a) \in \tilde{\mathcal{S}} \times \tilde{\mathcal{A}}}\left[Q_\eta(s,a)\right] \\ & \text{s.t. } \tilde{\mathcal{S}} \times \tilde{\mathcal{A}} = \{(s,a) \in \mathcal{S} \times \mathcal{A} : -\log L_{\theta}^{\pi_\beta}(s,a) < \epsilon\} \end{aligned}\tag{A}$$

which is a constrained optimization problem, and we are indeed using Lagrangian techniques to solve the constrained problem (A). The constraint in problem (A) is not a hard constraint that excludes bad transitions, as you suggest; instead, it acts as a penalty term in the Lagrange function (with Lagrangian multiplier $\alpha$). In case other readers misunderstand our idea, in the revised manuscript we will rewrite Equation 7 as Equation (A) in the form of a constrained optimization problem. **Response to Lack of related work**: Thank you for your comment. We have already cited [1] and [2] in the manuscript. We appreciate you pointing out the research direction of the in-sample learning paradigm, and we will further cite [3-8] in the revised manuscript. ref[1]: https://arxiv.org/abs/1705.08868

---

Rebuttal Comment 1.1: Title: Official Response by Reviewer 1U1v Comment: Thank you for the clarifications provided, as they have addressed most of my concerns. I now understand the rationale behind using the Flow-GAN to estimate the density of the behavior policy. Specifically, the **further motivation for Flow-GAN** and Chapter 3.2 in the manuscript have become more convincing to me than simply finding a higher-fidelity density estimator. I also appreciate the authors' efforts to present more persuasive experiments demonstrating the efficiency of the Flow-GAN model by including additional comparisons with 'SfBC' in response to reviewer 6qDS. As a result, I have decided to raise my score. Furthermore, if the authors are willing to provide more compelling evidence as to why the Flow-GAN model outperforms the VAE model for estimating the behavior policy density, by conducting didactic toy examples, I would be inclined to further increase my rating. Other Questions: Is the code available?

---

Reply to Comment 1.1.1: Title: (1/2) Response to Reviewer 1U1v Comment: Thank you for your reply, and we appreciate your kind response.
**Furthermore, if the authors are willing to provide more compelling evidence as to why the Flow-GAN model outperforms the VAE model for estimating the behavior policy density, by conducting didactic toy examples, I would be inclined to further increase my rating.** **Response:** Thank you for your comment. Of course we are glad to conduct additional studies comparing VAE and Flow-GAN, as well as the advantages of CPED. Here, we first want to cite Figure 1 in [1], which is a typical motivating example showing that a GAN-based model is much better than a VAE-based model at approximating the behavior policy. We are also conducting an additional toy example comparing VAE and Flow-GAN in generating behavior samples. Since we cannot attach further figures on the OpenReview website at this stage, we will compare the likelihood of the generated samples (by VAE and Flow-GAN). We are still running the program and will provide the results within a day. In addition, in our response to 6qDS, we further compare BEAR, SPOT and CPED on the Halfcheetah-medium task, and the result figure is on our GitHub code page <https://github.com/rl-study-group/rl_study_cped/tree/main> (GitHub: Readme -> Note; not allowed to attach further figures now). We can see that SPOT could reach a score of 6500 very fast, while CPED could achieve higher scores. Both SPOT and CPED significantly outperform BEAR. **Other Questions: Is the code available?** **Response:** Yes, we have uploaded the code at <https://github.com/rl-study-group/rl_study_cped/tree/main>. Note:
* Please kindly search rl-study-group or rl\_study\_cped on <https://github.com/> if the above link is invalid due to a markdown grammar issue.
* Due to the tight timeline for organizing the CPED code and uploading it to GitHub, some test code hasn't been completely removed. We will make an effort to tidy up all the code as soon as possible.

[1] A Behavior Regularized Implicit Policy for Offline Reinforcement Learning: https://arxiv.org/pdf/2202.09673.pdf

---

Reply to Comment 1.1.2: Title: (2/2) Follow up to Reviewer 1U1v Comment: We have completed the toy-example experiments comparing VAE and Flow-GAN. Considering that no further figures are allowed to be attached on the OpenReview website at this stage, we compare the **mean log-likelihood** of the generated samples in the following table. (We will attach the figure of generated samples in the revised manuscript.)

| Setting | Ground Truth | VAE Model | Flow-GAN |
| ---- | ---- | ---- | ---- |
| Setting 1 | -2.89 | -140.83 | -24.18 |
| Setting 2 | -2.78 | -50.05 | -6.12 |

Referring to Figure 1 in [1], it seems the VAE-based model is not able to learn multi-modal data well. Thus, we select two simpler settings in our toy example.
* **Setting 1**: We gather approximately $12,800,000$ data points from a multivariate normal distribution with a mean of $[1,9]$ and covariance matrix $\Sigma = [[1,0],[0,1]]$.
* **Setting 2**: We gather a similar amount of data points (as in Setting 1) from a Gaussian mixture distribution, in which data are randomly drawn from two independent Gaussian distributions ($\mu_1 = [1,1]$ and $\mu_2 = [9,9]$, $\Sigma_1 = \Sigma_2 = [[1,0],[0,1]]$) with equal probability.

We then employ both VAE and Flow-GAN to generate $1000$ samples and calculate the mean log-likelihood (a code sketch of this setup is given after this thread). The table demonstrates that Flow-GAN outperforms VAE significantly in approximating the original data distribution.
[1] A Behavior Regularized Implicit Policy for Offline Reinforcement Learning: https://arxiv.org/pdf/2202.09673.pdf --- Rebuttal 2: Title: Look forward to your response Comment: Thank you for taking the time to review our paper. We trust that our responses have effectively addressed the concerns you expressed in your review. Nevertheless, should you still have any lingering questions or unresolved matters, please feel free to inform us. We are committed to offering additional clarification and resolving any remaining issues to the best of our ability.
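A minimal sketch of the Setting 2 toy comparison described above, computing the "Ground Truth" mean log-likelihood under the true mixture density; the sample count is reduced for speed, and the exact evaluation protocol used by the authors is our assumption.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
mu1, mu2, cov = np.array([1.0, 1.0]), np.array([9.0, 9.0]), np.eye(2)

# Draw samples from the equal-weight two-component Gaussian mixture.
n = 1000
comp = rng.integers(0, 2, size=n)
samples = np.where(comp[:, None] == 0,
                   rng.multivariate_normal(mu1, cov, n),
                   rng.multivariate_normal(mu2, cov, n))

# Mean log-likelihood of the samples under the true mixture density;
# samples generated by a VAE or Flow-GAN would be scored the same way.
density = 0.5 * multivariate_normal(mu1, cov).pdf(samples) \
        + 0.5 * multivariate_normal(mu2, cov).pdf(samples)
print(np.log(density).mean())
```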
Summary: This paper proposes a new offline RL algorithm called Constrained Policy optimization with Explicit Behavior density (CPED). The main idea is utilizing a Flow-GAN model to estimate the probability density explicitly, which limits exploration to the safe region. To demonstrate the effectiveness, the authors provide some theoretical proofs for the convergence of CPED and evaluate CPED on various standard offline RL tasks. Strengths: This paper is overall well-written, and it performs a comprehensive survey of offline RL and related fields such as inverse RL and generative models. The proposed method is well-motivated, introducing Flow-GAN to offline RL appropriately. This paper provides detailed theoretical analyses of the convergence to confirm the effectiveness of CPED. Weaknesses: 1. The novelty of this paper is somewhat limited, as it primarily replaces the VAE model used in SPOT [1] with Flow-GAN. 2. With the exception of the ho-med dataset, CPED does not demonstrate clear improvements and performs even worse than some previous methods on the experimental datasets. 3. The implementation of CPED is not clearly stated, and this aspect will require further discussion in the questions. 4. The usage of certain notations is confusing, and there are some grammar errors. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. In Equation 7, the Lagrangian multiplier $\alpha$ is mentioned in the algorithm but cannot be found. Could you clarify the actual objective function? Additionally, the hyperparameter $\epsilon$ is not mentioned in your algorithm. How does $\epsilon$ affect the performance of your algorithm? 2. In Flow-GAN, specific neural networks are utilized for the generator and discriminator. Does this limitation affect their representational ability? Furthermore, does it result in a significant gap between the estimated policy and the behavioral policy? 3. Previous work often considers online fine-tuning tasks after offline RL. Can CPED be applied to such tasks? If so, how does CPED compare to other offline RL methods in these scenarios? 4. Figure 2 shows the learning curve comparison between BEAR and CPED. However, BEAR is considered somewhat outdated. As SPOT is a more recent offline RL method and seems more relevant to CPED, I'm particularly interested in the comparison between SPOT and CPED. 5. In Section 5.2, you mention an additional trick using time-varying $\alpha$. It appears that this trick could be applicable to SPOT as well. Since the empirical improvement is limited, I'm curious whether the improvement primarily results from this trick rather than from Flow-GAN. Could you provide additional results for SPOT with the time-varying $\alpha$? 6. The choice of $\epsilon$ is not clearly explained. What is the specific quantile of the behavior policy density that you refer to? Does it correspond to the median or other quantiles? It would be helpful if you could provide examples of this "quantile" for several datasets. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Please refer to the weaknesses part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation.
Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. Your comments are very useful for refining our manuscript. **Response to Weakness 1**: Thank you very much for this important question. **We want to emphasize that the original intention of CPED is not to directly apply the Flow-GAN idea to the offline RL problem or to simply replace the method for estimating the behavior policy in SPOT with a GAN**. We have clearly explained the original contribution of our CPED method, as well as the motivation for using Flow-GAN, in the “***Author Rebuttal by Authors***” part at the top of this page. For the contribution of our paper, please refer to bullet points **1.1 & 1.2 & 1.3 in Section 1** in the “***Author Rebuttal by Authors***” part at the top of this page. For the issue of why we choose Flow-GAN instead of other methods: please refer to **Similarities/Advantage of Flow-GAN/Deficiency of VAE model/Further motivation for Flow-GAN in Section 2** in the “***Author Rebuttal by Authors***” part at the top of this page. **Response to Weakness 2**: In the manuscript, the selected competitors (DT/SPOT/IQL) are among the SOTA methods for D4RL tasks, resulting in high baselines. From Tables 1 & 2 in our manuscript, our CPED achieves the best performance on the hop-med task and provides performance competitive with DT and SPOT on the hop-med-exp and hop-med-rep tasks. For other tasks such as half-cheetah and walker2d, CPED delivers the best performance in 4 out of 6 tasks. In addition, the total score of CPED is significantly higher than that of the other competitors; thus it is clear that the proposed CPED performs better than the competitors, and it also contributes to the SOTA methods for offline D4RL tasks. **Response to Weakness 3 & 4**: We will further refine the manuscript, especially revising notations and the implementation part (Algorithm 1, Section 5 and Appendix E), so that the manuscript is more readable. For further discussion, please refer to the responses to the questions below. **Response to Question 1**: We did not specifically mention $\alpha$ in Equation 7 because of limited space. In fact, Equation 7 is a constrained optimization problem, and the actual objective function is:

$$\begin{aligned} & \max_{\psi} \; \mathbb{E}_{s \sim \mathcal{P}_\mathcal{D},\, a \sim \pi_{\psi}(\cdot|s),\, (s,a) \in \tilde{\mathcal{S}} \times \tilde{\mathcal{A}}}\left[Q_\eta(s,a)\right] \\ & \text{s.t. } \tilde{\mathcal{S}} \times \tilde{\mathcal{A}} = \{(s,a) \in \mathcal{S} \times \mathcal{A} : -\log L_{\theta}^{\pi_\beta}(s,a) < \epsilon\} \end{aligned}\tag{A}$$

We believe a natural solution for the constrained problem (A) is to apply the Lagrangian method (a code sketch of this relaxation is given after this thread). Thus, we did not specifically mention the Lagrangian multiplier $\alpha$ in Equation 7 for simplicity (it is only shown in Algorithm 1). In case other readers misunderstand our idea, we will rewrite Equation 7 as Problem (A) in the manuscript and emphasize that $\alpha$ is the Lagrangian multiplier. With respect to the effect of $\epsilon$: limited by space, the ablation study of $\epsilon$ is in Appendix E, in which we consider several choices of $\epsilon$ on various D4RL tasks. **Response to Question 2**: The specific neural network in Flow-GAN comes from the flow model. In the original Flow-GAN paper [1], the power and representational ability of Flow-GAN are fully shown in Tables 1 & 2 of [1] on image applications. In other references [2,3], even though the generator is assumed to be invertible, this has not significantly impacted the performance of Flow-GAN [2,3].
The representation power of Flow-GAN remains highly competitive and effective despite this assumption. **Response to Question 3**: We really appreciate your useful comment. Since the focus of our manuscript is offline reinforcement learning, online fine-tuning is not a primary topic for us. Considering that some offline RL methods (SPOT/IQL) treat online fine-tuning as an extension of their papers, the proposed CPED is also capable of downstream online training, and we would like to leave this topic as future work. Following Section 5.3 of the SPOT paper, the performance of each method (SPOT/IQL) is further improved after online fine-tuning, and SPOT performs better than IQL. Since CPED is less conservative than both IQL and SPOT, and the offline performance of CPED is significantly better than SPOT's, it is expected that the performance of CPED after online fine-tuning would be better than both IQL and SPOT. **Response to Question 4**: The main purpose of Figure 2 is to demonstrate that our CPED offers more flexibility compared to traditional distance-based control methods, and that it overcomes the issue of being too conservative. Consequently, in the learning curve, CPED provides a broader "safe area," leading to an upward trend in average return, while BEAR, due to its conservatism, does not show significant performance improvement. We believe that SPOT would likely demonstrate similar outcomes, as it also tends to be less conservative. **Response to Question 5**: We performed extra experiments for SPOT with time-varying $\alpha$ on Antmaze and two D4RL tasks, and the resulting figure is shown in the **pdf file** in the ***Author Rebuttal by Authors*** part. We can see that the performances with time-varying $\alpha$ are close or even inferior to those with constant $\alpha$, indicating that the trick of varying $\alpha$ does not bring much benefit. Therefore, we can conclude that the improvement indeed comes from the Flow-GAN model, rather than from tricks in model training. **Response to Question 6**: We believe you may have misunderstood our idea for setting $\epsilon$. We are not using a quantile of the behavior policy density as $\epsilon$ (as used in other methods such as SPOT); instead, we use the mean likelihood of the training batch in the experiment. The choice of $\epsilon$ is illustrated in Section 5.2, Lines 324-327: "A commonly used setting is ..." [1] https://arxiv.org/abs/1705.08868 [2] https://arxiv.org/abs/1410.8516 [3] https://arxiv.org/abs/1605.08803

---

Rebuttal Comment 1.1: Comment: Thank you for the clarification and additional experimental results. The rebuttal has addressed most of my concerns. However, the reviewer still feels it necessary to perform more detailed experiments comparing CPED with SPOT to confirm the superiority of CPED, since SPOT is **fairly relevant** to CPED. Besides, the performance of SPOT is much better than BEAR, as shown in Table 2. The authors should not claim "SPOT would likely demonstrate similar outcomes" without further experiments. Besides, the reviewer is curious about the meaning of the d4rl score in the pdf file and suggests that the same metric (average return) should be used here to make it convenient to compare the results.

---

Reply to Comment 1.1.1: Title: (1/2) Response to Reviewer 6qDS Comment: Thank you for your response. We are sorry for some unclear statements and definitions in our last response. We have added extra experiments and corrected the unclear statements and definitions in the following.
**However, the reviewer still feels it necessary to perform more detailed experiments to compare CPED with SPOT to confirm the superiority of CPED since SPOT is fairly relevant to CPED.** **Response:** Thank you for your comment. In the last 2 days, we added extra experiments further comparing SPOT and CPED. Per the request of reviewer 1U1v, we have released our code on GitHub <https://github.com/rl-study-group/rl_study_cped/tree/main>, where we also added the additional comparison figure (GitHub: Readme -> Note; we are currently not allowed to attach further figures on the OpenReview website at this stage. Limited by time, we only finished the half-cheetah task for demonstration, and we will add more task results in the revised manuscript). Please note that, as we haven't disclosed any personal information in this GitHub link, this **does not** violate the double-blind reviewing policy. From the figure in the link, we can see that the average return of SPOT quickly reaches a score of 6500 and then fluctuates around this level. Nevertheless, although CPED is not as fast as SPOT, it finally reaches a higher score, around 7000. **Besides, the performance of SPOT is much better than BEAR, as shown in Table 2. The authors should not claim ''SPOT would likely demonstrate similar outcomes'' without further experiments.** **Response:** We apologize for the unclear statements. In our last response, we wanted to claim **''SPOT would likely demonstrate similar outcomes''** ***with CPED, not with BEAR***, since both SPOT and CPED are less conservative. As shown in the link above, we have compared BEAR, SPOT and CPED, and we can see that both SPOT and CPED significantly outperform BEAR, and CPED works better than SPOT. **Besides, the reviewer is curious about the meaning of the d4rl score in the pdf file and suggests that the same metric (average return) should be used here to make it convenient to compare the results.** **Response:** Thank you for pointing out our unclear definition. Actually, the definition of *d4RL Score* in our manuscript (taken from the SPOT source code) is **d4rl\_score = normalized_score/100**, where *normalized_score* is **the standard definition for d4rl tasks** from the original d4rl paper [1]. In the SPOT paper [2], the authors also call *normalized_score* the *normalized return*, and the name **d4rl\_score** comes from the results of SPOT after running the source code. Since we cannot attach further figures here, we will revise the metric to **''normalized score''** for consistency (the same metric as in the original d4rl paper, SPOT and other references) in the revised manuscript. Actually, the shape of the figure remains unchanged; only the vertical axis will be scaled up by a factor of 100. [1] d4rl paper: https://arxiv.org/pdf/2004.07219.pdf. [2] SPOT: https://openreview.net/pdf?id=KCXQ5HoM-fy

---

Reply to Comment 1.1.2: Title: (2/2) Follow up to Reviewer 6qDS Comment: In this response, we provide additional evidence by presenting the outcomes of new experiments comparing the VAE and Flow-GAN models, as per the comment from Reviewer 1U1v. Firstly, we intend to cite Figure 1 in [1], a nice illustrative example, to show that GAN-based models outperform VAE-based models in approximating behavior policies. Furthermore, we conduct additional toy example experiments comparing the performance of VAE and Flow-GAN in generating behavior samples.
Due to constraints on attaching additional figures to the OpenReview website at this stage, we present a comparison of the **mean log-likelihood** of generated samples in the table below. (We will attach the figure of generated samples in the revised manuscript.)

| Setting | Ground Truth | VAE Model | Flow-GAN |
| ---- | ---- | ---- | ---- |
| Setting 1 | -2.89 | -140.83 | -24.18 |
| Setting 2 | -2.78 | -50.05 | -6.12 |

Referring to Figure 1 in [1], it seems the VAE-based model is not able to learn multi-modal data well. Thus, we select two simpler settings in our toy example.
* **Setting 1**: We gather approximately $12,800,000$ data points from a multivariate normal distribution with a mean of $[1,9]$ and covariance matrix $\Sigma = [[1,0],[0,1]]$.
* **Setting 2**: We gather a similar amount of data points (as in Setting 1) from a Gaussian mixture distribution, in which data are randomly drawn from two independent Gaussian distributions ($\mu_1 = [1,1]$ and $\mu_2 = [9,9]$, $\Sigma_1 = \Sigma_2 = [[1,0],[0,1]]$) with equal probability.

We then employ both VAE and Flow-GAN to generate $1000$ samples and calculate the mean log-likelihood. The table demonstrates that Flow-GAN outperforms VAE significantly in approximating the original data distribution. [1] A Behavior Regularized Implicit Policy for Offline Reinforcement Learning: https://arxiv.org/pdf/2202.09673.pdf

---

Rebuttal 2: Title: Look forward to your response Comment: Thank you so much for reviewing our paper, and we appreciate your helpful suggestions. We sincerely hope that our responses have adequately addressed the concerns you raised in your review. However, if you still have any unresolved concerns or additional questions, please do not hesitate to let us know. We would be more than happy to provide further clarification and address any remaining issues.
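A minimal PyTorch-style sketch of how the constrained objective (A) above could be relaxed with a Lagrangian penalty; the function name, the hinge form of the penalty, and setting `eps` from the batch-mean negative log-likelihood are illustrative assumptions, not the authors' implementation.

```python
import torch

def cped_actor_loss(q_value, log_density, alpha, eps):
    """Lagrangian relaxation of objective (A): maximize Q(s, a) for
    a ~ pi(.|s) while penalizing actions whose negative log-density
    under the Flow-GAN estimator exceeds eps, i.e. actions outside
    the estimated safe region."""
    violation = torch.relu(-log_density - eps)  # > 0 only outside the region
    return (-q_value + alpha * violation).mean()

# eps can be set from the data, e.g. the mean negative log-likelihood
# of the training batch, as the authors describe in Section 5.2.
q = torch.randn(256)
log_l = torch.randn(256)
eps = (-log_l).mean().item()
loss = cped_actor_loss(q, log_l, alpha=1.0, eps=eps)
```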
Summary: This work proposes to use a Flow-GAN model to explicitly model the distribution of the behaviour policy and use that to alleviate the OOD-actions problem in offline RL. A theoretical analysis of convergence is presented. Empirical evaluation on D4RL gym locomotion and antmaze shows that the proposed CPED algorithm is very competitive. Strengths: * The method is novel and very intuitive. Instead of using all kinds of different tricks to avoid OOD, explicitly modelling the behaviour policy is much more straightforward. * The writing is clear. * Empirical evaluation shows the performance of CPED is comparable to SOTAs. Weaknesses: * Compared to prior methods like IQL, DT, or even CQL, the proposed method has an additional generative modelling module, which makes the overall architecture more complex. * Again, the empirical results are good, but they technically do not surpass SOTA. For gym locomotion, there are algorithms like ATAC [1] and SAC-N [2] that have a similar level of performance. As for antmaze, the performance of CPED is inferior to IQL, which is itself not the SOTA for offline RL in general. For model-based methods, there are methods like the trajectory transformer [3] that perform better than IQL. [1] https://proceedings.mlr.press/v162/cheng22b/cheng22b.pdf [2] https://arxiv.org/pdf/2110.01548.pdf [3] https://arxiv.org/abs/2106.02039 Minor issues: > CPED can accurately identify the safe region and enable exploration within the region I would avoid using the term "exploration" here because it can hint to readers that the proposed method can be helpful for online learning, but I believe this is not what the authors mean here. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * What's the computational overhead of training this Flow-GAN? * Why do you pick Flow-GAN? Why not use simpler ones like VAE, or even just discretization? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your review. We appreciate your valuable questions and suggestions. We have replied to your comments in detail; please see the following responses. **Response to Weakness 1**: We agree that the proposed CPED introduces an additional generative modeling module into the offline RL solution. Nevertheless, this does not lead to an overly complex architecture. In recent studies of offline RL methods, many generative modeling modules (VAE/GAN/diffusion models) have been suggested for modeling behavior policies (BCQ/SPOT), and the sizes of such modules do not pose significant issues. For CPED, the training cost of the Flow-GAN is close to that of a GAN model, and both models are much more efficient than diffusion models. Thus, the complexity of CPED is acceptable. In addition, it is noted that the size of DT will be significantly large when the input sequence/trajectory is long. **Response to Weakness 2**: Thank you for pointing out the related references and methods. We will certainly include the above references and related methods in the revised manuscript (we have cited [2] already). According to references [1] and [2], both ATAC and SAC-N provide performance competitive with CPED on the hopper and halfcheetah tasks, while CPED performs slightly better on the walker2d tasks. We would like to clarify that the SAC-N method utilizes multiple Q-functions, which may contribute to better performance but can also lead to computational issues. In contrast, CPED employs only 2 Q-networks. In addition, the trajectory transformer methods lack strong theoretical interpretability and require extra inputs such as the "return-to-go". Therefore, we believe that the proposed CPED has potential for further improvement when multiple Q-functions or other modeling tricks are applied. As we mentioned in the "author rebuttal" at the top, our CPED does not solely aim to be SOTA in experiments. Moreover, we want to show that CPED has great potential for further improvement, and we will continue to delve into this aspect in the future. **Response to Minor issues**: In response to your feedback, we will further revise the term "exploration" to ensure that readers are not misled. The sentence will be revised as "CPED can accurately identify the safe region and allow policy learning within the region". **Response to Question 1**: From our experiments, training the Flow-GAN does indeed incur additional computational overhead. Nevertheless, this overhead is not a significant concern for the overall training process. In our study, the training time of CPED is only approximately 10% longer compared with traditional offline RL methods such as BEAR and BRAC. Therefore, the training cost of CPED is quite acceptable. **Response to Question 2**: Thank you very much for this important question. We have clearly explained the advantage of, as well as the motivation for, using Flow-GAN in the "author rebuttal" part. We are glad to re-illustrate the points here so that you can understand our motivation better. For the issue of why we choose Flow-GAN instead of other generative methods (e.g. VAE): 1. **Similarities**: In the field of policy control methods for the offline RL problem, the idea of ensuring the consistency of the support of the learned policy and that of the behavior policy is considered the most desirable approach to tackle distribution shift. Both CPED and SPOT aim to achieve this idea, but they do it differently. 2.
**Advantage of Flow-GAN**: A clear advantage of Flow-GAN is its capability to provide an explicit and direct density estimator for the behavior policy. In contrast, the VAE can only approximate a lower bound on the behavior density through the ELBO. 3. **Deficiency of the VAE model**: Further experimental evidence comes from ref[1] and ref[2]. In ref[1], the author claims that typically-used VAEs do not align well with the behavior dataset, making it challenging to effectively cover the behavior policy distribution. The author further presents an experimental result showing that the model performance is significantly reduced when the VAE architecture is used (compare Fig. 2 to Fig. 6). In addition, this claim is also validated in ref[2] (in Appendix D), where the performance of the method with the VAE structure is inferior to that of the original method. 4. **Further motivation for Flow-GAN**: We chose to use Flow-GAN as a density estimator because, first, we theoretically analyze in Section 3.2 that estimating the density of the behavior policy in RL is equivalent to training a GAN. However, traditional GANs are essentially sample generators. To realize the idea of estimating a density function, we combine the flow model with a GAN to estimate the behavior density (a code sketch of this hybrid objective is given after this thread). Flow-GAN was originally designed for image generation problems; in our specific implementation, we made numerous adjustments to the structure of Flow-GAN (ref[3]), including converting many CNNs into fully connected networks and simplifying some residual networks, so that the overall network structure can adapt to the tasks of offline RL. We believe the Flow-GAN model is promising, and we hope that future work will further optimize the design of Flow-GAN's structure, allowing it to unleash its true potential on more challenging behavior policy datasets. 5. **Discretization**: As for discretization, in the currently prevalent continuous control tasks, this method faces difficulties in accurately estimating the behavior policy density. ref[1]: https://arxiv.org/pdf/2007.11091.pdf ref[2]: https://arxiv.org/pdf/2209.14548.pdf ref[3]: https://arxiv.org/abs/1705.08868

---

Rebuttal Comment 1.1: Title: Response to the Rebuttal Comment: I would like to thank the authors for their clarification. I like the fact that the proposed method has only 10% computational overhead. As for the defense of Flow-GAN, the arguments are not strong enough to convince me, especially without further empirical results. I'll keep the score for now.

---

Reply to Comment 1.1.1: Title: Response to the Reviewer & Thank you for your comments Comment: **I would like to thank the authors for their clarification. I like the fact that the proposed method has only 10% computational overhead.** **Response:** Thank you so much for your nice comment. **As for the defense of Flow-GAN, the arguments are not strong enough to convince me, especially without further empirical results.** **Response:** Thank you for your comment. To provide further empirical results and demonstrate the advantages of the Flow-GAN, we conduct comparisons between CPED, SPOT [1], and SfBC [2]. In SPOT, the VAE model is employed to approximate the lower bound of the behavior likelihood (also compared in our manuscript). SfBC, on the other hand, employs the diffusion technique, another type of generative model, to estimate the behavior model. The results of each method on the D4RL tasks are presented in the table below.
| Task | CPED | SPOT | SfBC |
| ---- | ---- | ---- | ---- |
| HalfCheetah-Medium | **61.8** | 58.4 | 45.9 |
| Hopper-Medium | **100.1** | 86.0 | 57.1 |
| Walker-Medium | **90.2** | 86.4 | 77.9 |
| HalfCheetah-Medium-replay | **55.8** | 52.2 | 37.1 |
| Hopper-Medium-replay | 98.1 | **100.2** | 86.2 |
| Walker-Medium-replay | **91.9** | 91.6 | 65.1 |
| HalfCheetah-Medium-expert | 85.4 | 86.9 | **92.6** |
| Hopper-Medium-expert | 95.3 | 99.3 | **108.6** |
| Walker-Medium-expert | **113.04** | 112.0 | 109.8 |

From the table, it is evident that CPED outperforms its competitors on most tasks. Coupled with the reasoning provided in our previous response, we are confident that Flow-GAN is a superior choice compared to alternative tools for behavior-model estimation. [1] Supported policy optimization for offline reinforcement learning. NeurIPS 2022. [2] Offline reinforcement learning via high-fidelity generative behavior modeling. ICLR 2023.
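To make the support-consistency idea from this thread concrete, the snippet below is a minimal, schematic sketch of a support-constraint penalty: policy actions are penalized only where the estimated behavior density falls below a threshold. The function name `log_behavior_density`, the threshold `log_eps`, and the hinge form are illustrative assumptions standing in for a trained Flow-GAN density and the exact objective defined in the paper.

```python
import torch

def support_penalty(log_behavior_density, states, actions, log_eps=-5.0):
    """Hinge penalty that is zero inside the estimated "safe region"
    (log p_beta(a|s) >= log_eps) and grows linearly outside it."""
    log_p = log_behavior_density(states, actions)   # shape [batch]
    return torch.relu(log_eps - log_p).mean()

# Toy check with a stand-in Gaussian density in place of a trained Flow-GAN.
toy_log_density = lambda s, a: -0.5 * (a ** 2).sum(-1)
s, a = torch.zeros(4, 3), torch.randn(4, 2)
print(support_penalty(toy_log_density, s, a))
# In an actor update, this penalty would be added to the usual
# Q-maximization term with some weight lambda.
```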
Rebuttal 1: Rebuttal: We are grateful for the valuable questions and suggestions given by all four reviewers, which have helped us revise our manuscript. After reading all the reviews, we have answered each reviewer's questions in detail and added relevant experiments based on each reviewer's suggestions to support our work. As some reviewers still have concerns about the contribution of our manuscript, as well as about selecting Flow-GAN instead of other methods (e.g. the VAE), **we want to emphasize that the original intention of CPED is not to directly apply the Flow-GAN idea to the offline RL problem, nor to simply replace the method for estimating the behavior policy in SPOT with a GAN**. Furthermore, we want to clarify the following points: 1. Regarding the contribution of our manuscript, through our exploration of the CPED method, we aim to provide the following conclusions from both theoretical and experimental perspectives: - **1.1** The combination of a GAN with MLE-based density estimation (e.g. Flow-GAN) is more suitable for estimating behavior policies in offline RL, both in theory and in experiments. We provide a theoretical guarantee indicating that CPED can access the optimal Q-function, and extensive experiments show that CPED substantially outperforms state-of-the-art methods. - **1.2** Keeping the support of the learned policy close to that of the behavior policy is the key idea of policy-control methods. In our proposed CPED, by introducing an explicit density function, we can effectively achieve this objective. - **1.3** CPED is not only intended to demonstrate its superiority through experimental results but also aims to advance the completeness of policy-control methods. In fact, we believe that the Flow-GAN density estimator has significant potential, considering its theoretical representation of the policy-density learning process and its promising performance in experiments. We also hope that our work can serve as a catalyst, driving the development of the idea of estimating behavior policies to address offline RL problems. These three points are our fundamental contributions to the offline RL community. 2. Regarding why we chose Flow-GAN instead of other generative methods (e.g. the VAE): - **Similarities**: Among policy-control methods for the offline RL problem, ensuring that the support of the learned policy is consistent with that of the behavior policy is considered the most desirable way to tackle distribution shift. Both CPED and SPOT pursue this idea, but they do it differently: CPED utilizes Flow-GAN to estimate the density of the behavior policy, while SPOT selects the VAE as its tool. - **Advantage of Flow-GAN**: A clear advantage of Flow-GAN is its capability to provide an explicit and direct density estimator for the behavior policy. In contrast, a VAE can only approximate the behavior density from below, through the ELBO. - **Deficiency of the VAE model**: Further experimental evidence comes from ref[1] and ref[2]. In ref[1], the authors claim that typically used VAEs do not align well with the behavior dataset, making it challenging to effectively cover the behavior policy distribution; they further present an experimental result showing that model performance is significantly reduced when the VAE architecture is used (compare Fig. 2 to Fig. 6). This claim is also validated in ref[2] (Appendix D), where the performance of the method with a VAE structure is inferior to that of the original method.
- **Further motivation for Flow-GAN**: We chose Flow-GAN as a density estimator because, first, we show theoretically in Section 3.2 that estimating the density of the behavior policy in RL is equivalent to training a GAN. However, traditional GANs (and likewise VAEs) are essentially sample generators. To realize the idea of estimating a density function, we combine a flow model with a GAN to estimate the density of the behavior policy (a schematic sketch of this hybrid objective is given below). Flow-GAN was originally designed for image-generation problems, so in our implementation we made numerous adjustments to the structure of Flow-GAN (ref[3]) to adapt it to offline RL tasks. We hope that future work will further optimize the design of Flow-GAN's structure, allowing it to unleash its true potential on more challenging behavior-policy datasets. * ref[1]: https://arxiv.org/pdf/2007.11091.pdf * ref[2]: https://arxiv.org/pdf/2209.14548.pdf * ref[3]: https://arxiv.org/abs/1705.08868 Pdf: /pdf/92e8b311a58cfc3082a4a8ccd4f12e503d2d12bf.pdf
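For readers unfamiliar with the hybrid objective of ref[3], the following is a minimal, self-contained sketch of the idea: a normalizing-flow generator supplies an exact log-likelihood (the MLE term), while a discriminator supplies an adversarial term. The one-layer coupling flow, the network sizes, and the weighting of the two terms are illustrative assumptions, not the architecture used in CPED (which, as noted above, replaces the original CNNs with fully connected networks).

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One RealNVP-style coupling layer: invertible, with an exact log-det."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Sequential(nn.Linear(self.d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * (dim - self.d)))

    def forward(self, x):                       # data -> latent, plus log|det|
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                       # keep scales bounded
        return torch.cat([x1, x2 * torch.exp(s) + t], -1), s.sum(-1)

    def inverse(self, z):                       # latent -> data (sampling)
        z1, z2 = z[:, :self.d], z[:, self.d:]
        s, t = self.net(z1).chunk(2, dim=-1)
        s = torch.tanh(s)
        return torch.cat([z1, (z2 - t) * torch.exp(-s)], -1)

def log_prob(flow, x):
    """Exact log-density under a standard-normal base distribution."""
    z, logdet = flow(x)
    base = -0.5 * (z ** 2 + torch.log(torch.tensor(2 * torch.pi))).sum(-1)
    return base + logdet

dim = 4
flow = AffineCoupling(dim)
disc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
real = torch.randn(32, dim) * 2 + 1             # stand-in "behavior" samples
fake = flow.inverse(torch.randn(32, dim))       # flow used as a generator

adv = nn.functional.binary_cross_entropy_with_logits(
    disc(fake), torch.ones(32, 1))              # generator "fool the critic" term
nll = -log_prob(flow, real).mean()              # exact MLE term from the flow
loss = adv + 1.0 * nll                          # hybrid Flow-GAN-style objective
# In full training, this generator step alternates with a discriminator step
# that maximizes real-vs-fake classification accuracy.
print(float(loss))
```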
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Double and Single Descent in Causal Inference with an Application to High-Dimensional Synthetic Control
Accept (poster)
Summary: This paper investigates over-parametrized models in causal inference, specifically focusing on high-dimensional linear regression models and high-dimensional synthetic control estimators with a large number of control units. The authors examine the prediction risk behaviors associated with these two estimators, and present a unified theoretical perspective on the descent phenomena in the interpolating (=high-dimensional) regime, which they refer to as the “model-averaging property” as demonstrated in Proposition 1 and Proposition 4; also see Eq. (MA). By introducing the high-level assumption of the model-averaging property (MA), which posits that the full model can be expressed as a convex combination of simpler “leave-one-out” counterparts, and another technical assumption (P) concerning the optimality of the aggregated model against a random permutation of the weights, the authors assert that the complex model cannot perform worse than the average performance of the simpler models (Proposition 5). In summary, this paper provides a concise exposition of the “benign overfitting” phenomena observed in two estimators widely utilized in causal inference. The authors rely on two high-level geometric assumptions, the model-averaging property and Assumption (P), which provide insights into the descent behavior of these estimators and warrant further investigations in follow-up studies. Strengths: This paper offers a simple yet intriguing perspective on understanding the double descent phenomenon in the interpolating regime by leveraging the mechanical properties of predictive estimators that are largely agnostic about the underlying data generating process. The authors adeptly combine knowledge from linear algebra and convex analysis to provide concise explanations for the returns of complex models in high-dimensional causal estimators. The practical implications of their findings, particularly in the context of synthetic control with many control units, are noteworthy. The paper alleviates concerns about overfitting when using a large number of control units, eliminating the need for pre-selecting an appropriate subset, which could be challenging in practice. Furthermore, the authors' focused approach on linear regression models and synthetic control estimators enhances the clarity and concreteness of their arguments. They effectively develop their findings in these two settings first, and thereafter, hinting at the potential extension of the abstract results based on the model-averaging property (MA) and the permutation property (P) to broader settings beyond the two examined scenarios. While it is yet challenging to assess the significance of this work within the broader NeurIPS readership, I believe the authors make a meaningful contribution to the sub-community of econometrics/causal inference. The paper offers a fresh and clear perspective on understanding the phenomenon of benign overfitting, delivering an important message that reassures the practical utilization of high-dimensional synthetic control, specifically. Weaknesses: Although this paper has several strengths, there are three concerns/suggestions that could be addressed to further improve its quality. Firstly, there is a concern about the possibly limited scope of applicability for the presented results. 
Although the paper establishes theoretical support for the concept of "benign overfitting" in synthetic control estimators and acknowledges the possibility of extending the analysis to a broader class of estimators through the abstract conditions (MA) and (P), it remains unclear how far-reaching this perspective can be and to what extent it can be extended. It would be valuable if the authors could provide a discussion on the expected scope of extensions, limitations, and potential challenges that may arise. Additionally, in order to strengthen the positioning of the paper, it would be beneficial to provide additional comparisons and contrasts to previous approaches. Given the focus on synthetic control estimators, the authors could comment on the limitations of applying existing results on double descent and benign overfitting in over-parametrized models to the specific context of synthetic control estimators. This would effectively highlight the authors' contributions in this work and emphasize the distinctive features of the techniques employed. Lastly, while the paper primarily presents a novel theoretical perspective, it would be valuable to complement the theory with a more extensive set of numerical experiments. For instance, conducting an ablation study with synthetic datasets to verify the assumptions and theory, as well as performing experiments with real-world datasets at scale to confirm the descent behavior of the synthetic control method, could strengthen the paper's contributions. By incorporating such experiments, the authors can provide empirical evidence to support their theoretical findings and enhance the practical relevance of the paper. By addressing these concerns, the authors would be able to further strengthen their contributions in this paper, clarify the scope of their results, highlight its distinctive features, and provide a more comprehensive explanation for the implications of their findings. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I would like to request the authors to address the three concerns raised in the “Weaknesses” section. 2. Miscellaneous suggestions: (a) It would be clearer if the authors replace the term "full rank" with "full row rank" for clarity, e.g., in lines 127, 171, 174, 191, although it is obvious in the interpolating regime. (b) In lines 153-155, the authors state, "In this case, loss continues to decrease throughout, ultimately reaching a minimum that is below the lowest error achieved left of the interpolation threshold." However, this is not easily discernible in Figure 1-(b). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper suggests several potential avenues for future research, recognizing some limitations of their findings, such as exploring more fundamental conditions beyond (P). However, it could benefit from a more explicit discussion on the limitations and potential negative societal impact of the proposed approaches. It would be valuable for the authors to provide a more detailed acknowledgment of the technical limitations of their methods and offer insights into the potential adverse consequences that may arise when applying these approaches in real-world settings, specifically in the context of synthetic control methods. 
Nonetheless, it is important to note that given the paper's primary focus on theoretical aspects, a comprehensive and extensive examination of these limitations may not be deemed critical. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and suggestions. 1. For the **concerns** you lay out: - We believe that **important next steps on this agenda** include (i) developing low-level sufficient conditions under which more complex synthetic control performs better than less complex synthetic control, (ii) providing tools for valid inference following interpolating linear regression and high-dimensional synthetic control, and (iii) understanding implications for other causal-inference tools that rely on bias–variance trade-offs. Our current analysis is limited in that the assumptions are stated at a very high level and only two specific methods are covered. We believe that deriving inference results in particular may be methodologically challenging, but we hope that our analysis can motivate further research in this area. - We now provide **additional comparisons** and results, following the suggestions from other reviewers (see **attached PDF** and our response to reviewer xBLi). In particular, we also consider low-dimensional linear regression, a comparison to the LASSO, and additional evidence for synthetic control with more pre-treatment periods. Regarding our contribution, we believe that the results on synthetic control are new and not previously available in the literature. - Some additional **numerical experiments** are provided in the **attached PDF** and in the response to reviewer xBLi. 2. Thank you for your **suggestions**, which we will address in our manuscript. - We will replace "full rank" by "full row rank". - Thank you for your suggestion about Figure 1. We agree that the error in the right tail is comparable to that on the left for the CPS (Figure 1(b)), while it shows a clear improvement for the NSW controls (Figure 1(a)); we will clarify this in the text. In addition, we added a zoomed-in panel to the graphs in the **attached PDF** to facilitate the comparison of performances. --- Rebuttal Comment 1.1: Comment: Thank you very much for the rebuttal and your clarifications of the points raised by myself and the other reviewers. I maintain my initial positive evaluation as it stands, although I somewhat agree with the concerns expressed by Reviewer TPer.
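As an aside for readers who want to reproduce the basic shape of the descent curves discussed here, the following is a minimal sketch under an assumed Gaussian design with a dense signal (not the paper's NSW/CPS data). `np.linalg.pinv` yields ordinary least squares when p ≤ n and the minimum-norm interpolator when p > n; the test error typically spikes near the interpolation threshold p = n and then descends again.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_max = 50, 200
beta = rng.normal(size=p_max)                   # dense "true" signal
X = rng.normal(size=(2000, p_max))
y = X @ beta + rng.normal(size=2000)
train, test = slice(0, n), slice(n, None)

for p in [10, 25, 45, 50, 55, 75, 100, 200]:
    # Min-norm least squares: OLS for p <= n, min-norm interpolator for p > n.
    w = np.linalg.pinv(X[train, :p]) @ y[train]
    rmse = np.sqrt(np.mean((X[test, :p] @ w - y[test]) ** 2))
    print(f"p = {p:3d}   test RMSE = {rmse:7.2f}")
```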
Summary: The paper studies the issue of double and single descent for linear regression as well as synthetic control. For the former, empirical motivation is given in the context of predicting wages and theoretical results are given from the perspective of model averaging. For the latter, empirical motivation is given in the context of imputing counterfactual California smoking rates and again theoretical results are given from the model averaging perspective. Strengths: - To my reading, the most impressive result is Proposition 4, which shows that synthetic control has the model-averaging property. I presume that this is new in the literature but it is not explicitly mentioned in the paper. It would be good to clarify the novelty of Proposition 4 and emphasize its implications. - The two numerical examples are illustrative and provide very good empirical motivation. Weaknesses: - To my reading of the literature, one of the most under-explored but central issues in double descent or benign overfitting is the analysis of bias. For example, Tsigler and Bartlett (2023, JMLR) entitled "Benign overfitting in ridge regression" provides sharp bounds for the bias term. The current paper does not provide any result regarding the bias term, which will not be zero with over-parametrization. - It seems that it is not totally new to combine model averaging with double descent in the literature. For example, see the following quote from Wilson and Izmailov (2020, NeurIPS, page 8) entitled "Bayesian deep learning and a probabilistic perspective of generalization": _"Double descent [e.g., 3] describes generalization error that decreases, increases, and then again decreases, with increases in model flexibility. ... However, our perspective of generalization suggests that performance should monotonically improve as we increase model flexibility when we use Bayesian model averaging with a reasonable prior."_ It would be helpful to provide a more thorough discussion of the literature. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Tsigler and Bartlett (2023, JMLR) gives general sufficient conditions under which the optimal regularization parameter is negative. In view of this, I am wondering what would happen if $\eta$ in equation (2) is negative. In other words, would it be possible to study the penalized synthetic control estimator with a negative regularization parameter? - Model averaging is popular in both statistics and econometrics: e.g., Claeskens and Hjort (2008), Model selection and model averaging, Cambridge University Press; Hansen (2007), Least Squares Model Averaging. Econometrica. Some discussion of related papers on model averaging would be helpful. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: - Lines 221-223: it is stated that "We believe that these variance and geometric properties of linear regression are well understood in the literature and likely not new, although we are not aware of an explicit statement of the model-averaging connection between more and less complex interpolating linear-regression models." The statement is a bit unclear. It would be better if the literature review is more thoroughly done in the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and suggestions. 1. For the point about **bias**, we agree that our analysis does not separately provide results for the bias of the estimator, which will generally depend on the relationship of the estimator to the data-generating process. We note, however, that our results on mean-squared error implicitly capture both the variance and bias components of the loss. For example, assumption (P) in _Proposition 5_ also constrains the bias of the estimator, although only implicitly. We agree that a study of bias is a relevant extension to our work, especially for the case of synthetic control. 2. For **negative regularization**, our results do not directly extend to this case, but it may be an interesting one for further analysis. For the specific case of synthetic control, the optimization problem with a negative cost of complexity is no longer generally convex, solutions are not guaranteed to be unique, and the model-averaging property does not have to hold (although we suspect that this is due to non-uniqueness if ties are broken inconsistently). 3. On the **relationship to the model-averaging literature**, we agree that it would be worthwhile to add additional references and connections. However, many approaches to model averaging rely on using outcome data to decide on the weights assigned to different models, effectively choosing the weights that maximize prediction fit in the training data. This stands in contrast to some of the weights we consider. For example, in the case of interpolating linear regression, the weights do not depend on the outcomes, since all component models fit the data perfectly, and we do not use their empirical performance to combine simpler models into a more complex one. 4. Thank you for pointing out the **relationship to prior work on Bayesian model averaging**, which we will reference. In this context, our approach could be seen as the uninformative limit of a Bayesian regularization approach. --- Rebuttal Comment 1.1: Title: Thanks Comment: I very much appreciate the rebuttal by the authors. I just wanted to re-iterate the importance of a more explicit study of the bias, because Assumption (P) is a high-level sufficient condition. It would be good to mention this limitation in the camera-ready version. I changed my rating from 5 to 6 because the authors answered most of my comments well.
Summary: This paper examines single and double descent phenomena in two causal inference estimators: high-dimensional linear regression and synthetic control estimators with many controls. The paper begins with a high-dimensional linear regression problem and illustrates the double descent phenomenon using the famous LaLonde dataset. Then, the authors show that complex models can be seen as the model averaging of simple models when using interpolating linear least squares estimators. Their main contribution is to explore the single and double descent phenomena in synthetic control methods for the first time in the literature. While it is commonly recommended not to include too many control units, the authors found a single descent phenomenon: performance monotonically improved as the number of control units increased. They derive model-averaging-based risk bounds to explain the single descent phenomenon they observed in the real-world example. Strengths: **Originality** To the best of my knowledge, this is the first paper to explore the single and double descent phenomena in causal inference settings. In particular, for synthetic control problems, there are a large number of control units in many important settings. For example, when a company tries to estimate the effect of a certain internal policy on its performance, there are potentially a large number of firms they can use as the donor pool. The conventional recommendation is to focus only on a small number of selected control units, but this paper opens up a new opportunity for researchers to incorporate a large number of control units beyond the number of pre-treatment periods allowed. **Quality** This paper provides a very useful insight into the single and double descent phenomena from a model averaging perspective. Providing both intuitive and geometric interpretations of results was helpful. **Clarity** This paper is clearly written, and starting with a simpler linear regression as a basis for the synthetic control method was also a very effective presentation. **Significance** As mentioned above, it will be significant as this paper will open up a new opportunity for researchers to incorporate a large number of control units beyond the number of pre-treatment periods allowed. Weaknesses: I do not think there is any obvious weakness in the paper with respect to the goal of the paper. I have some suggestions about making the paper more relevant to realistic causal inference settings, and I list them as clarifying questions below. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: **(1) Comparison to a "reasonable" simple model in a bit more realistic setting** In many deep-learning settings, it is probably difficult to think about a "reasonable" set of covariates as the variables might have very little substantive meaning in some applications like image detection. But, for many causal inference problems like those examined in this paper, there are many "simple" models that any social scientist would start with. While this paper provides very interesting insight about the single and double descent phenomenon, I think the paper would be even stronger if the authors could demonstrate that the idea of double descent and over-parametrization can actually beat simple yet reasonable models. As far as I understand, in both empirical examples (Figure 1 and Figure 4), the baseline model is to include variables "randomly" (random simple models) and then average the performance across such random simple models.
And the average of such random simple models can have low performance as it averages over many "random" non-reasonable models. I want to emphasize that the original authors already acknowledged this point in the paper, and I am here to encourage the authors to explicitly include some empirical evidence about this point. (1.1) LaLonde data In the LaLonde data example, because many variables are constructed from just eight variables, it is expected that most of the 8000 variables have very low signals and a random subset of such variables can be far from informative variables. **Can the authors include the RMSE based on a simple linear regression that includes the original eight variables additively?** Can overparameterization outperform this baseline model? (1.2) Synthetic Control Problem It is interesting to see that it only shows the single descent phenomenon. But I was wondering whether this is due to the very small number of pre-treatment periods (3). Essentially, the regime before the interpolation threshold could be too small to show any bias-variance tradeoff in a meaningful sense. **Can the authors include at least ten pre-treatment periods, which is often recommended in practice?** Do we still see the single descent phenomenon with a larger number of pre-treatment periods? **(2) Overparametrization vs Regularization** Another simple benchmark is a classical regularized model. For example, for the Figure 1 problem, **can the over-parameterized model beat a simple Lasso?** For the linear regression problem, the minimal-norm interpolation solution picks a solution that has the minimum norm rather than solving the objective function with a penalty term. **What is the theoretical connection between a classical penalized regression (adding a penalty to the objective function to make a solution unique) and the over-parametrized model (finding all the solutions that make the objective function equal to zero and picking the one that has the minimum norm)?** **(3) Inference** This paper considers the RMSE of causal estimation. But, in many causal inference settings, researchers are also interested in estimating confidence intervals. I am curious to know whether **over-parametrized models** will make inference intractable or whether it can be addressed similarly to the double machine learning or semiparametric inference literature (for the linear regression case; as far as I know, there is no unified inference framework for the synthetic control method yet). **(4) Connection to Super Learners** Super Learners by van der Laan and his colleagues also use a convex combination of individual prediction methods to improve the performance of ML prediction. **Is there any connection to Super Learners?** Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: They clarify the limitations of their approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed and thoughtful comments and questions. 1. In response to your question about **reasonable comparison models**, we are reporting additional comparisons: - **For linear regression**, we report a comparison to low-dimensional linear regression using the original covariates for the task of estimating average treatment effects, with a varying size of the target control group (see **attached PDF**). Specifically, _Figure R1_ reports the average RMSE for estimating the average treatment effect (ATE) of subsets of varying size $m$ in the experimental NSW sample. (The results reported in _Figure 1 (a)_ correspond to $m=1$, for which the RMSE of imputing control outcomes is the same as the RMSE of estimating the corresponding treatment effect.) In the figure, we now add horizontal lines for the performance of a simple linear regression on the original eight covariates, which shows that the performance of the simple regression is better than the best interpolating regression when $m=1$, but that for $m=50$ and $m=100$ very high-dimensional over-parametrized solutions perform comparably to (and ultimately slightly better than) the simple model. We see this as an encouraging finding for interpolating solutions in linear regression, since the complex models do not explicitly use the original regressors (only their binned and interacted versions) but are still able to recover comparable predictive performance. A potential improvement in performance could be to always include the original eight variables in the complex models, without any explicit or implicit regularization, in which case we would expect the comparison to be more advantageous for our complex models. - **For synthetic control**, we also report results for more pre-treatment periods (see **attached PDF**). Specifically, _Figure R2_ reports results analogous to _Figure 4_, but for ten pre-treatment periods. The results are qualitatively similar. 2. For the **comparison to regularized models**, we report below the performance of LASSO solutions on all covariates in recovering the ATE on the evaluation sample. The relative performance depends on the size of the evaluation sample and the regularization parameter. In this table, every row corresponds to a different sample size of the subset on which the ATE is evaluated (see our response above), and each column corresponds to a LASSO penalty parameter or the interpolating solution with all covariates included.

| $m$ | $\alpha=0.01$ | $\alpha=0.005$ | $\alpha=0.001$ | $\alpha=0.0005$ | Interpolating |
|-----|---------------|----------------|----------------|-----------------|---------------|
| 1   | 6.403 | 6.564 | 8.149 | 8.765 | 8.168 |
| 5   | 3.009 | 3.005 | 3.666 | 3.956 | 3.692 |
| 10  | 2.293 | 2.226 | 2.588 | 2.752 | 2.512 |
| 20  | 1.779 | 1.604 | 1.773 | 1.890 | 1.757 |
| 50  | 1.540 | 1.238 | 1.112 | 1.098 | 1.154 |
| 100 | 1.319 | 0.985 | 0.730 | 0.699 | 0.771 |

- The LASSO uses explicit regularization, while our main specification only regularizes among perfectly fitting solutions (for which it chooses the norm-minimal model). The latter approach has the advantage that it does not require us to choose a regularization parameter, but we do not claim that it is optimal. The explicit regularization approach of the LASSO has the advantage that it may further improve performance.
Indeed, the finding that very complex, interpolating solutions can perform well and that adding complexity (in the form of additional covariates or donor units) can improve out-of-sample performance still leaves room for improvement by additional, explicit regularization. - The LASSO uses the Taxicab (L1) norm for regularization, while our approach uses the Euclidean (L2) norm. In principle, we could also use the L1 norm for choosing among interpolating solutions, but in that case we may obtain non-unique solutions (at least in some edge cases) since we lose strict convexity. Furthermore, we know from theory that LASSO regression does well in cases with approximately sparse parameters, but may do poorly if most regressors are relevant for prediction. 3. We agree that **inference** on treatment effects when nuisance components are estimated by very high-dimensional and interpolating models is an important open question. To the degree that inference results for e.g. double machine learning rely on predictive fit or variance properties, our analysis suggests that these results may be feasible even with interpolating solutions, since good out-of-sample performance and low prediction variance do not rely on low dimensionality. Establishing sufficient conditions for valid inference on causal parameters in the presence of interpolating estimation of nuisance components could be a promising direction for future research. 4. **Super learners** start with a set of candidate learners. The super learner then combines these candidate learners, for example by picking one out of the set, or by picking some linear combination, or a convex combination. Our method is in the spirit of the third flavor of the super learner, if one views the original regressors all as candidate learners. Picking a convex combination as in the super learner then has the averaging property that we exploit. We appreciate you pointing out the connection. --- Rebuttal Comment 1.1: Comment: Thank you so much for the rebuttal and your detailed clarification. These answered my questions well!
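The distinction drawn above between explicit L1 regularization and implicit L2 regularization among interpolating solutions can be illustrated with a tiny simulation. The sparse signal below is an assumption chosen to favor the LASSO, in line with the comment that LASSO does well under approximate sparsity; with a dense signal the ranking can flip.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 50, 200
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 3.0                                  # approximately sparse truth
y = X @ beta + rng.normal(size=n)
X_test = rng.normal(size=(1000, p))

w_mn = np.linalg.pinv(X) @ y                    # min-norm interpolator (implicit L2)
w_l1 = Lasso(alpha=0.1, max_iter=50_000).fit(X, y).coef_   # explicit L1

for name, w in [("min-norm", w_mn), ("LASSO", w_l1)]:
    rmse = np.sqrt(np.mean((X_test @ w - X_test @ beta) ** 2))
    print(f"{name:8s} test RMSE = {rmse:.3f}")
```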
Summary: The paper studies overparameterized linear regression in the context of imputing data for estimation of causal effects. This involves both unconstrained regression for imputing wages in the CPS dataset and regression under a simplex constraint on the weights for synthetic controls in the California smoking rates dataset. Favorable performance is observed for the overparameterized estimators, where the number of donors is larger than the number of samples. The main contribution of the paper is an explanation of this favorable performance. It is shown that minimum-norm solutions in overparameterized linear regression can be expressed as weighted averages of least squares solutions under subsets of the donors. Then it is claimed that such averaging results in better generalization in imputing the missing values. Strengths: I think that problems regarding the use of overparameterized models in causal inference are very important and interesting. The paper makes simple yet non-trivial observations about the model-averaging property of min-norm linear regression models when used in causal inference tasks. It offers some observations that might prove useful in practical considerations on whether one should include as many donors as possible versus selecting donors carefully from a large pool. Finally, I found the presentation quite simple and fluent. Overall, reading the paper was an enjoyable experience. Weaknesses: The main weaknesses of the paper in my humble opinion are as follows: 1) I think that the explanation for why model averaging results in better generalization is somewhat lacking. The closest result to a generalization bound is Proposition 5, though this only proves the resulting classifier is better than the worst-case classifier. It is quite far from results on benign overfitting such as Liang and Rakhlin, Bartlett et al. and others, which bound the excess risk with respect to the optimal hypothesis and also derive conditions on the data distribution for these results to hold. The results in this paper are not dependent on properties of the data, and the generalization guarantees are quite weak, hence I am not entirely convinced that model averaging is the reason behind the improved generalization, and it might be a red herring. Hence, for the unconstrained case, I am not sure whether the model averaging interpretation offers the same understanding of generalization of overparameterized models as previous works on benign overfitting. 2) Unless there is something I've missed in the paper, the model averaging properties seem to be properties of linear regression (and linear regression under simplex constraints) in general, and they are not specific to causal inference problems. Hence it might be useful to write the paper without the focus on causal inference, to make it appeal to a broader audience which might not be fluent in causal inference techniques. Instead, it would be nice to give the causal inference problems as examples/applications of the more generic result on the types of linear regression. 3) The paper does not discuss the interpretation of regression weights in the overparameterized case. Giving causal interpretations (under certain conditions) to regression weights is one of the major differences in using such models for causal inference, instead of for standard prediction tasks. Hence I'd expect some kind of discussion on such interpretations in this paper, as it focuses on causal estimation.
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1) Are the results data-dependent in some way? Under what formal conditions can we give meaningful bounds in the residual error between the overparameterized solution and the optimal subset of donors (or w.r.t to some baseline donor selection method)? 2) Is there any point in trying to give a causal interpretation to regression coefficients in the method under some standard assumptions? While my intuition is that we should not do that, it is interesting to discuss this explicitly. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: The authors have properly discussed limitations, there does not seem to be a concern for negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments. 1. The question **under which conditions we obtain better generalization** goes to the heart of the issue. While the model-averaging property holds mechanically for these estimators, the resulting generalization error bounds (like the one in _Proposition 5_) require restrictions on the data-generating process. We agree that understanding lower-level conditions under which model averaging translates into better performance is an important next step. At the same time, we want to clarify that the high-level condition in _Proposition 5_ applies beyond the worst case; it provides a condition under which a classifier of higher complexity outperforms an average classifier of lower complexity, not just a worst-case classifier. 2. We agree that the question of **how to interpret models** is an interesting one, and we also agree with your assessment that these coefficients should not be given a causal interpretation. This does not preclude a causal use of the resulting predictions, which can be helpful for imputing unrealized potential outcomes. In addition, we agree that **our results hold beyond the causal-inference context**. At the same time, we believe that our unique contribution is to show that they can be particularly relevant there, since estimation challenges in causal inference have frequently been framed around bias–variance trade-offs. Developments in the literature on double descent and benign overfitting, as well as our own results on synthetic control, have the potential to amend this view in causal inference in particular. In order to further clarify the role in causal inference, we have added some results on using linear regression specifically for the estimation of average treatment effects (see **attached PDF** and our response to reviewer xBLi). --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you very much for the rebuttal and your clarifications. To be precise about my current point of view on the paper, I think that the problem is interesting and I appreciate the treatment of overparameterization in the context of synthetic controls. I also think that the averaging property is something I did not know about and seems like a nice insight. However, without a strong generalization bound, it is unclear why model averaging as implied by the min-norm interpolator is beneficial for generalization. Concretely, we know there are cases where overfitting is not benign, hence model averaging can also be quite bad. I think that without a characterization of some intuitive data distributions where averaging helps us prove better generalization, the result is nice but lacks meaningful consequences. From reading proposition 5 I was not able to parse such a bound, since the average risk over all possible choices of donors seems like a pretty weak baseline, and I don't have any intuition about types of distributions where condition (P) holds (or even a toy example from which we can gain intuition). So while I am generally positive about this direction of work, the writing, and even like the current results, I think there are missing components that are important to the theory and its connection to practice. 
The additional experiments conducted with LASSO in response to reviewer xBLI are an interesting start in resolving such issues, but their connection to the theory is still weak and in my view (which may oppose the view of other people involved in the decision here), some more steps are required before publication. --- Reply to Comment 1.1.1: Comment: Thank you for following up on our comments, and for providing further clarifications. We agree that additional theoretical results will be valuable to understand which lower-level conditions are sufficient to guarantee that convex model-averaging translates into improved performance from higher complexity. We hope that our manuscript already provides valuable contributions by (1) providing a tuning-free synthetic-control method that applies in the case of many control units, (2) documenting its properties on a real-world dataset, (3) deriving theoretical properties of the estimator, and (4) discussing how these results relate to interpolating regression. We believe that these contributions may motivate and feed into theoretical follow-up work along the lines you suggest. We believe that our results on synthetic control may also be of valuable practical relevance to applied researchers, when there is no strong prior on the importance of individual control units.
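For completeness, the simplex-constrained regression underlying the synthetic control discussion can be written down directly. The sketch below solves the unpenalized problem (corresponding to setting the penalty $\eta$ in the paper's equation (2) to zero) with an off-the-shelf solver; note that with more donors than pre-treatment periods many weight vectors may fit the data exactly, and a generic solver need not break ties the same way as the estimator analyzed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def synthetic_control_weights(X, y):
    """min_w ||y - X w||^2  s.t.  w >= 0 and sum(w) = 1,
    where the columns of X are donor units' pre-treatment outcomes."""
    J = X.shape[1]
    res = minimize(lambda w: np.sum((y - X @ w) ** 2),
                   x0=np.full(J, 1.0 / J),
                   bounds=[(0.0, 1.0)] * J,
                   constraints=({"type": "eq",
                                 "fun": lambda w: np.sum(w) - 1.0},),
                   method="SLSQP")
    return res.x

rng = np.random.default_rng(0)
T0, J = 10, 40                                  # fewer pre-periods than donors
X = rng.normal(size=(T0, J))
y = X[:, :3] @ np.array([0.5, 0.3, 0.2])        # treated unit: a true convex mix
w = synthetic_control_weights(X, y)
print(w[:6].round(3), "sum of weights =", round(w.sum(), 3))
```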
Rebuttal 1: Rebuttal: We are attaching a one-page PDF with two additional figures: _Figure R1_ presents the performance of linear regression in estimating average treatment effects in samples of varying size, and _Figure R2_ presents the performance of high-dimensional synthetic control with ten pre-treatment periods. Both figures are referenced in our responses to the reviewers, and described in more detail in our response to reviewer xBLi. Pdf: /pdf/6b184eee5378433388e145dd45270b8c5fb027da.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
SwiFT: Swin 4D fMRI Transformer
Accept (poster)
Summary: This paper extends Swin Transformer to 4D brain functional MRI data. Unlike existing deep learning methods in the neuroscience field that either use an ROI-based method (taking as inputs functional connectivity) or a two-step method (3D spatial encoding followed by temporal encoding), the proposed model SwiFT takes as inputs 4D fMRI data and can be trained in an end-to-end manner. This also allows pre-training using contrastive losses. Experiments on multiple large human functional brain imaging datasets indicate the effectiveness of SwiFT compared to prior ROI-based and two-stage methods. Strengths: 1. Originality: this paper is the first attempt at extending Swin Transformer to 4D fMRI data. 2. Methods are technically sound. Experiments are carefully designed to support the authors' claims. 3. While the building blocks of SwiFT exist in the literature, I consider this paper a significant contribution because 1) it extends Swin Transformer to 4D fMRI data and 2) it addresses a technically challenging task of modeling fMRI data in an end-to-end manner. Weaknesses: Due to memory constraints, the fMRI sequence needs to be divided into shorter sub-sequences and the model predictions for sub-sequences are aggregated through averaging. In my opinion, this is a limitation of the method and should be discussed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Section 3.2 "Instance contrastive loss", what's a clip? Please define it. 2. On page 6, lines 230-231, the authors mention that the average performance across 3 splits is reported. But in Appendix page 2, lines 26-27, the authors state that they report the performance on the test set. Please clarify. Also, is the train-validation-test data split patient-wise (i.e., different splits have different patients)? If not, the tasks are much easier and the results may be over-saturated. 3. Ablation studies: What's the effect of the contrastive losses? Also, the ablation of absolute vs relative position biases is hidden in the Appendix, which should be mentioned in section 3.1 "4D absolute positional embedding". Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limited performance gain from pre-training is discussed in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review and for giving us some helpful pointers. Here is our response to the questions and concerns: >*Due to memory constraints, the fMRI sequence needs to be divided into shorter sub-sequences and the model predictions for sub-sequences are aggregated through averaging. In my opinion, this is a limitation of the method and should be discussed.* - We understand your concerns regarding this issue; however, we believe there is a trade-off in using sub-sequences. Please refer to the general rebuttal regarding this matter. > *In Section 3.2 "Instance contrastive loss", what's a clip? Please define it.* - We apologize for not stating it clearly. A *clip* refers to an fMRI sub-sequence that acts as an input to our model. Our goal in the contrastive pre-training process is to train the model to differentiate clips (inputs) that come from different subjects or different time frames. We will be happy to clarify the term in the final version of our paper. > *On page 6, lines 230-231, the authors mention that the average performance across 3 splits is reported. But in Appendix page 2, lines 26-27, the authors state that they report the performance on the test set. Please clarify. Also, is the train-validation-test data split patient-wise (i.e., different splits have different patients)? If not, the tasks are much easier and the results may be over-saturated.* - Thank you for the suggestion. To clarify our setup, we created 3 randomly generated splits that separate our subjects into training, validation, and test sets with a 70:15:15 ratio, so the splits are indeed subject-wise. For example, if we had 1000 subjects in a dataset, for each split 700 of them would be used for training, 150 for validation, and 150 for testing. For each split separately, the model was trained on subjects from the training set and evaluated on the validation set after each epoch; the best model iteration was chosen based on the validation performance, and the chosen iteration was evaluated on the test set. The evaluation results (on the test set) from the 3 splits were averaged and reported in our paper. >*Ablation studies: What's the effect of the contrastive losses? Also, the ablation of absolute vs relative position biases is hidden in the Appendix, which should be mentioned in section 3.1 "4D absolute positional embedding".* - Regarding the effect of the contrastive losses, we have conducted experiments comparing a SwiFT model trained from scratch and a contrastively pre-trained SwiFT model in section 4.3 and appendix C.3. - Thank you for the suggestion; we will mention the positional embedding experiment in section 3.1 of the final version of our paper. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed responses!
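To make the notion of a clip and the instance contrastive objective concrete, here is a minimal sketch of one plausible form of such a loss, in which clips from the same subject are treated as positives and all other clips as negatives. The embedding size, temperature, and positive/negative definition are illustrative assumptions; the paper's exact formulation (including the local-local temporal contrastive loss) is given in its Section 3.2.

```python
import torch
import torch.nn.functional as F

def instance_contrastive_loss(emb, subject_ids, tau=0.1):
    """InfoNCE-style loss over clip embeddings: clips from the same
    subject act as positives, clips from other subjects as negatives.
    emb: [N, D] clip embeddings; subject_ids: [N]. Illustrative only."""
    z = F.normalize(emb, dim=1)
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool)
    sim = (z @ z.t() / tau).masked_fill(eye, float("-inf"))
    pos = (subject_ids.unsqueeze(0) == subject_ids.unsqueeze(1)) & ~eye
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -log_prob[pos].mean()

# Toy usage: 8 clips from 4 subjects, 2 clips per subject.
emb = torch.randn(8, 16)
ids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(instance_contrastive_loss(emb, ids))
```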
Summary: This paper presents SwiFT (Swin 4D fMRI Transformer), a novel Swin Transformer architecture designed for modeling spatiotemporal brain dynamics from high-dimensional 4D functional MRI data. The architecture incorporates a 4D window multi-head self-attention mechanism and absolute positional embeddings, making memory usage and computation efficient. SwiFT outperforms recent state-of-the-art models in several tasks, including predicting sex, age, and cognitive intelligence, based on evaluations on multiple large-scale human functional brain imaging datasets. The paper further demonstrates the feasibility of self-supervised pre-training of SwiFT using contrastive loss for improved performance on downstream tasks, marking the first end-to-end application of a Swin Transformer architecture to high-dimensional spatiotemporal brain functional data. Strengths: 1. The ability to utilize pretraining technology and leverage large datasets to aid small datasets is a considerable strength of this paper. This is particularly beneficial for fMRI analysis, where large public datasets and smaller private datasets are common. The paper's approach could help mitigate the challenges of small sample sizes. 2. The paper marks an important advancement by directly applying deep learning models to fMRI data. This approach could unify various preprocessing pipelines and simplify the analysis process, which is a significant step forward in the field. 3. The experiments were carried out on three large fMRI datasets, which adds credibility and robustness to the results. By testing their approach on different datasets, the authors ensured that their findings were not limited to a specific dataset, thereby improving the generalizability of the results. Weaknesses: 1. The paper lacks a proper discussion of its limitations. Understanding the constraints of the presented approach is important for future research and application of the study's findings. The authors should consider addressing potential limitations, caveats, and assumptions made in their methodology to provide a more comprehensive view of the work. 2. In Section 4.6, the authors should thoroughly explain why different patterns emerge when the Input Time Sequence Length varies among different tasks. This lack of in-depth discussion and analysis might hinder the reader's understanding of the method's behavior under different conditions. Therefore, it would be beneficial to elucidate these differences. 3. The discussion in Section 4.4 seems insufficient in terms of relating the study's findings to previous literature. A more detailed comparison with past studies would provide readers with a better understanding of the novelty and contribution of this work. This could include a more explicit discussion of how their findings support previous studies. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Which ROI system is used to preprocess data for these ROI-based methods, like BNT? 2. How do you choose the number of layers? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review and for raising some important questions. Here is our response: >*The paper lacks a proper discussion of its limitations...* - Although our model has demonstrated high performance and efficiency compared to existing models, it still presents certain limitations for neuroscientists aiming to apply it to their specific subjects. The fMRI data utilized in this experiment range from 383 volumes (586 megabytes) to 1200 volumes (1.3 gigabytes) per subject. While SwiFT significantly reduces the number of parameters and enhances computational speed compared to the existing fMRI Transformer, training a model on such data requires more than 24 gigabytes of GPU memory as well as storage space for thousands of fMRI images. This can pose a significant challenge for researchers with limited computing resources. - Our study is based on a sliding-window approach, and learning is performed on sub-sequences, which offers only a limited account of the model's capability to handle long-term dynamics. Processing entire fMRI volumes, which can amount to several gigabytes for multiple subjects, is infeasible given limitations in GPU resources. When using a sliding window, the model primarily focuses on the local temporal dynamics of the fMRI, which restricts its ability to learn long-term temporal patterns. Hence, when reporting performance, it is essential to additionally validate the uniformity of the distribution of logit values across sub-sequences for each subject. > *In Section 4.6, the authors should thoroughly explain why different patterns emerge ...* - We apologize for not elaborating on why this behavior might have emerged. Please refer to the general response for our analysis of the matter. > *The discussion in Section 4.4 seems insufficient...* - Across all age groups, we observed brain regions associated with the default mode network, such as the medial prefrontal gyrus (mPFC), posterior cingulate cortex (PCC), precuneus (PCu), and parietal gyrus. Sex differences in these regions have been implicated in the literature ([1], [2], [3]). These regions are known to exhibit concurrent activation when not engaged in specific tasks, with strong functional connections among them. - Notably, we found different brain contributors to sex classification across different age groups. Unique regions were observed in young adults (HCP) and older adults (UKB). In youth (HCP), the observation centered on the thalamus and insular cortex. Among the older adults (UKB), on the other hand, the focus shifted to the inferior temporal gyrus (ITG) and medial orbitofrontal cortex (mOFC). These regions serve as hubs for various cognitive functions, encompassing multisensory integration (thalamus), emotional processing (insular cortex), higher-level visual processing (ITG), and decision-making (mOFC), where diverse sensory information is synthesized. This result is in line with previous findings reporting sex differences in those regions ([4], [5], [6]). - These findings suggest that sex differences are prominently evident in regions characterized by robust connectivity with other areas and involved in substantial information exchange. Moreover, we found unique brain contributors to sex classification across different age groups. >*Which ROI system is used to preprocess data for these ROI-based methods, like BNT?* - As mentioned in **Datasets** in section 4.1 on page 6, we utilized the HCP MMP1 atlas to preprocess our data for the ROI-based methods.
>*How do you choose the number of layers?* - During our earlier phase of testing, starting from the number of layers suggested by the original Swin Transformer paper ([19] of our paper), we experimented with different configurations by adding or removing some layers, or even omitting the last stage. However, we found that the original number of layers performed best in general, so we kept it and shifted our focus to other hyperparameters. As for the effect of other hyperparameters, please refer to our response to the question from reviewer pTkQ. [1] Ficek-Tani, B., Horien, C., Ju, S., Xu, W., Li, N., Lacadie, C., ... & Fredericks, C. (2023). Sex differences in default mode network connectivity in healthy aging adults. Cerebral Cortex, 33(10), 6139-6151. [2] Ernst, M., Benson, B., Artiges, E., Gorka, A. X., Lemaitre, H., Lago, T., ... & Martinot, J. L. (2019). Pubertal maturation and sex effects on the default-mode network connectivity implicated in mood dysregulation. Translational Psychiatry, 9(1), 103. [3] Weis, S., Patil, K. R., Hoffstaedter, F., Nostro, A., Yeo, B. T., & Eickhoff, S. B. (2020). Sex classification by resting state brain connectivity. Cerebral Cortex, 30(2), 824-835. [4] Wu, X., Lu, X., Zhang, H., Bi, Y., Gu, R., Kong, Y., & Hu, L. (2023). Sex difference in trait empathy is encoded in the human anterior insula. Cerebral Cortex, 33(9), 5055-5065. [5] Leming, M., & Suckling, J. (2021). Deep learning for sex classification in resting-state and task functional brain networks from the UK Biobank. NeuroImage, 241, 118409. [6] Weis, S., Patil, K. R., Hoffstaedter, F., Nostro, A., Yeo, B. T., & Eickhoff, S. B. (2020). Sex classification by resting state brain connectivity. Cerebral Cortex, 30(2), 824-835. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. I think providing an open-source implementation could improve this work's impact. --- Rebuttal Comment 1.2: Comment: After reading the reply, I have decided to raise my rating.
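The sub-sequence (sliding-window) aggregation discussed in these responses amounts to the simple procedure sketched below: split the sequence into fixed-length clips, score each clip, and average the logits per subject. All names, the window length, and the stride are illustrative assumptions; the stand-in model takes the place of the trained SwiFT network.

```python
import torch

def clipwise_predict(model, fmri, clip_len=20, stride=20):
    """Split a 4D fMRI sequence [T, D, H, W] into fixed-length clips,
    run the model on each clip, and average the resulting logits."""
    logits = []
    for start in range(0, fmri.size(0) - clip_len + 1, stride):
        clip = fmri[start:start + clip_len]           # [clip_len, D, H, W]
        logits.append(model(clip.unsqueeze(0)))       # [1, num_classes]
    return torch.cat(logits).mean(dim=0)              # subject-level logits

# Toy usage with a stand-in model that maps a clip to 2 logits.
toy_model = lambda x: torch.stack([x.mean(), -x.mean()]).view(1, 2)
vol = torch.randn(120, 8, 8, 8)                       # toy fMRI sequence
print(clipwise_predict(toy_model, vol))
```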
Summary: The paper focuses on the expansion of the Swin Transformer to a 4D version, enabling it to be trained on 4D fMRI data in an end-to-end manner. Specifically, the proposed method commences by constructing an absolute positional embedding layer across spatially neighboring patches, prior to implementing deep 4D Swin Transformer blocks. To facilitate interaction between the windows, a shifted window MSA technique is employed. The paper also explores the self-supervised pre-training of the method, leveraging instance contrastive loss and local-local temporal contrastive loss. Ultimately, the manuscript presents an evaluation of the method on classification and regression tasks, and conducts an ablation study on the model's efficiency in comparison to the TFF method. Strengths: The manuscript is coherently presented, and the concept of adapting the Swin Transformer for representation learning using 4D fMRI data is interesting. The paper's advancement into self-supervised learning, employing both local-local and instance contrastive learning, effectively demonstrates that the proposed SwiFT model exhibits robust generalization capabilities in the context of downstream tasks. Weaknesses: Swin Transformers primarily restrict self-attention computations to specific sub-windows, which potentially curtails the model's ability to capture information from brain regions that are spatially distant from one another. The computational cost associated with the proposed method is substantial. Instead of merely contrasting it with TFF, wouldn't a more comprehensive ablation study on the computational cost, compared to other state-of-the-art methods, offer a broader perspective? The method's experimental evaluation appears restricted (please refer to the questions section for further details). Technical Quality: 3 good Clarity: 3 good Questions for Authors: The method's evaluation is currently confined to classification and regression. What other real-world applications might be feasible with the proposed method? For instance, could there be potential for extending the method to tasks like brain segmentation or image reconstruction? Could you possibly expand on the concept of integrating 216 spatially neighboring voxels into a token? Might it not be more efficient to consider alternative techniques such as tokenization based on brain anatomical regions? As for the issue of sex classification, the method either failed to surpass baseline methodologies or the difference in accuracy was negligible. Could you delve deeper into potential reasons for this constraint within the study? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes, the paper covers some limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review and for providing us with helpful suggestions. Here is our response to some of the questions and concerns raised: > *Swin Transformers primarily restrict self-attention computations to specific sub-windows...* - We agree that it is important to address the issue. Please refer to the general rebuttal for our analysis on the matter. > *...wouldn't a more comprehensive ablation study on the computational cost ... offer a broader perspective?* - We could compare the computational cost of our model against other baseline ROI-based models, although the comparison would be heavily favored towards the ROI-based models. If we compare the computation cost of the model given the input from a subject, the ROI-based models would only have to process a $180 \times 180$ input, while SwiFT and TFF have to directly process a $96 \times 96 \times 96 \times 1200$ input. We could take into account the extra one-time preprocessing step that ROI-based models require, although even then it would be unfair to directly compare the computation cost of these two types of models. > *... What about what other real-world applications...* - Thank you for the good suggestion. SwiFT can be extended to solve many important psychological and neuroscientific problems. Let us introduce three feasible applications for SwiFT: - Predicting various task-related brain activity from resting-state functional connectivity is a well-established task that has received much attention [1]. The task holds substantial potential value, particularly for patients or children who encounter challenges performing complicated tasks within fMRI. With SwiFT demonstrating exceptional predictive performance, we anticipate an improved ability to forecast task-related brain activity using the raw resting-state BOLD signal in comparison to functional connectivity methods. - SwiFT can also be utilized to perform functional segmentation. Previous researchers have extracted spatially independent components from fMRI based on multivariate decomposition methods such as independent component analysis (ICA). Coherent functional brain networks can be discovered based on the relationships between these spatial components. By analyzing the brain dynamics present in resting-state fMRI in a non-linear manner, SwiFT is expected to generate higher quality component maps than existing methods. - SwiFT can also be extended to brain decoding tasks, where information about what a subject sees or hears is reconstructed from the fMRI. Recent decoding studies have shown that it is possible to predict fMRI activity levels in specific brain regions, such as the visual cortex or inferior temporal gyrus, from the features of words learned by a large language model. By utilizing task fMRIs, we can extend the scope of brain decoding to the entire brain and understand how whole brain regions relate to the visual cortex and external stimuli. > *... consider alternative techniques such as tokenization based on brain anatomical regions?* - Thank you for the intriguing suggestion. To clarify, during the initial patch embedding step, 6×6×6=216 neighboring voxels within a patch are embedded into a 36-dimensional token, which is then used as the input for the Transformer (a minimal sketch of this embedding step follows this rebuttal thread). Incorporating information from the brain anatomical regions could enhance the model's ability to effectively learn the brain's structure. The ROI-based tokenization scheme could also help us reduce the number of tokens and make our model more efficient.
- However, the reason for our current approach is that ROI-based tokenization based on a specific population-based template image, such as an atlas, may not reflect the individual specificity of the subject, such as age or race. If we ignore these characteristics and define ROIs based on a population-based atlas, we may introduce bias into model training. However, if we differentiate atlases based on the demographic attributes of the data, we become constrained to training distinct models for varying atlases. This may limit the ability to generalize data from multiple ages and races in a single model. Therefore, we adopted a simple yet consistent method for SwiFT, which may also be one argument for using an end-to-end model that does not rely on hand-crafted features. > *As for the issue of sex classification, the method either failed to surpass baseline methodologies...* - While for each dataset there are baseline models that show comparable results against our model, on the whole there is no baseline model that performs consistently across all three datasets. The ROI-based models perform well on the ABCD dataset but underperform on the HCP and UKB datasets. The TFF model performs well on the HCP and UKB datasets but underperforms on the ABCD dataset. - For the HCP and UKB datasets, the best performing baseline model (TFF) already performs extremely well, with 0.980 and 0.998 AUC for the HCP and UKB datasets, respectively. This does not leave much room for improvement, and may be the reason why our model was only able to have minor gains compared to TFF. - As for the ABCD dataset, the ROI-based baseline models (BrainNetCNN, VanillaTF, BNT) have shown comparable or slightly better performance compared to SwiFT, although they have underperformed on the other datasets. This may be because our choice of brain atlas for these models was appropriate for the ABCD dataset but not for the others. We argue that this is one inherent problem of relying on such hand-crafted features, since the suitability of such features must be verified on a case-by-case basis. [1] Tavor, I. et al. (2016). Task-free MRI predicts individual differences in brain activity during task performance. Science, 352(6282), 216-220. [2] Hatamizadeh, A. et al. (2021). Swin unetr: Swin transformers for semantic segmentation of brain tumors in mri images. In International MICCAI Brainlesion Workshop (pp. 272-284). --- Rebuttal Comment 1.1: Comment: Thank you for the responses. After reviewing the author's answers, I've increased my score to "weak accept." However, I still believe that some aspects of the model's evaluation are incremental when compared to other methods. Additionally, if certain datasets are easy to address, the evaluation could be reframed to facilitate a more robust comparison.
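As a concrete illustration of the patch-embedding step clarified above ($6\times6\times6=216$ neighboring voxels per frame mapped to a 36-dimensional token), one possible sketch applies a strided 3D convolution to every time frame. This is an assumption about how such a module could be written, not the authors' implementation:

```python
import torch
import torch.nn as nn

class PatchEmbed4D(nn.Module):
    """Embed each 6x6x6 spatial patch of every frame into a C-dim token.

    For a 96^3 volume this yields a 16x16x16 grid of 36-dim tokens per
    frame, matching the numbers quoted in the rebuttal. Hypothetical
    sketch; SwiFT's actual module may differ.
    """
    def __init__(self, patch=6, dim=36):
        super().__init__()
        self.proj = nn.Conv3d(1, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                        # x: (B, T, H, W, D)
        b, t = x.shape[:2]
        x = x.reshape(b * t, 1, *x.shape[2:])    # fold time into the batch
        x = self.proj(x)                         # (B*T, dim, 16, 16, 16)
        x = x.reshape(b, t, *x.shape[1:])        # (B, T, dim, 16, 16, 16)
        return x.movedim(2, -1)                  # (B, T, 16, 16, 16, dim)
```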
Summary: * The authors seek to develop an approach for modeling spatiotemporal brain dynamics, as measured with resting state functional MRI. For this, they have extended a transformer-based architecture to handle time-varying 3D scan data. * They seek to predict subject traits and attributes like age, sex, and cognitive scores from the learned representations using three publicly available datasets. * They have succeeded in outperforming state-of-the-art models in these tasks while reducing the computational complexity and increasing throughput in model evaluations. * Additionally, they have demonstrated the usefulness of pre-training such models on downstream tasks. * Lastly, the relationships captured by the model between the subject attributes and the brain regions are consistent with prior work. Strengths: * The problem statement is well-motivated, and the limitations of prior works (i.e., ROI-based and two-step approaches) are detailed. * The results were replicated across three datasets and various tasks, showcasing this architecture's robustness. * The authors show the learned features are general enough to help downstream tasks through pre-training. * The authors have performed computational complexity analysis. Because their model has almost double the throughput compared to the second-best-performing model, it paves the way to use these models in real-time settings. * Regions of the brain used by the model (as in Section 4.4) to predict age align with previous work, which increases confidence that the model is learning meaningful features. * The authors have submitted their full code, which assists replicability and helps the scientific community extend their work. Weaknesses: * The novelty of the architecture vis-a-vis latest advances in transformer architectures ([23], [34], [35]) seems limited. * The authors aimed to learn representations for brain dynamics but focused only on predicting traits (age, sex, etc.), because they used only resting state scans. Dynamic attributes, such as task-level performance measures (e.g., reaction times and accuracies in HCP tasks, cognitive load in a working memory task, etc.), are relevant to establish the generality of the findings. The model should be tested on these tasks also. * While the authors have detailed the approach for the two contrastive losses, justifying the choice of positive and negative samples through domain knowledge or established methods would increase confidence in the proposed approach. * The authors have mentioned performing ablation studies to "substantiate their modeling choices." I expect that this includes studying the effect of hyperparameters like the number of layers, channel size, window size, etc., given that this is a novel architecture. Details of such studies should have been mentioned. * Additionally, it would be helpful to see the loss curves of models for different datasets and targets. It will help understand the choice of a low number of epochs for training various models. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Could the authors clarify if the parameters for z-scoring, i.e., mean and variance, were learned over the whole data or only on the training data (& applied to validation and test data)? * I would expect that using the whole fMRI time series for any task prediction would provide better results than using sub-sequences.
Could the authors provide more intuition about the seemingly counterintuitive results obtained in Section 4.6, especially in UKB (e.g., age prediction)? * While we commend that the authors have shared their code, it would be helpful if they could also share their trained models (especially due to the high compute power required to generate these results). It would help the scientific community at large and also be in line with the spirit of Section 4.3. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: * Explanation methods like IG are generally brittle. It would be helpful if one more explanation method could confirm the results in Section 4.4 * While a shifting window enables efficient feature extraction, it limits model expressivity. It would be useful to know how to extract features that span large spatial/temporal ranges. This could be particularly relevant for task-based fMRI data since signals of varying timescales could affect behavior in such paradigms. * Authors have mentioned that they divided the fMRI data into sub-sequences due to memory constraints. Moreover, as the authors have used four Nvidia A100s (a total of 160GB of GPU memory with NvLink support), they are already on the higher end of compute power. It would be helpful to understand what those memory constraints were and suggestions, if any, on tackling them, as this will have implications in replicating and/or extending their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for raising some important questions and providing us with helpful feedback. Here is our response to the questions and concerns raised: >*The novelty of the architecture ...* - The novelty of our architecture may seem limited; however, there are several points we claim as novel. - While previous works (e.g., [23] in the manuscript) have extended the Swin Transformer to accept 3D inputs, we extend it one step further to directly process 4D inputs. This comes with its own set of challenges, such as handling the patch merging step, which combines a larger number of tokens, and the memory and computation constraints introduced by the added dimension. - VAT ([35] of our paper) technically utilizes the Swin Transformer to process 4D inputs, although the Swin Transformer module was used to process pairs of intermediate 2D feature maps rather than truly 4D data. Their Swin Transformer also lacks key features such as the patch merging step and the hierarchical structure. Due to these distinctions, we believe that our introduction of a fully-fledged 4D Swin Transformer architecture can still be considered a novel contribution. >*...The model should be tested on these tasks also.* - Thank you for the suggestion. As briefly mentioned in the general rebuttal regarding long-range interactions, we also believe this to be an exciting future work. We expect SwiFT to perform well on these datasets compared to ROI-based models, since SwiFT does not "delete" temporal information during an ROI preprocessing step. >*...approach for the two contrastive losses...* - We will add the following to the final version of the paper. - Our choices are based on an established method called TCLR ([38] of our paper). We adapt two loss functions from it to help SwiFT achieve a better understanding of temporal dynamics by a) **distinguishing fMRI scans from different subjects** and b) **distinguishing fMRI scans from different timestamps of the same subject**. - The instance contrastive loss accomplishes a) by considering fMRI scans from the same subject as positive pairs and fMRI scans from different subjects as negative pairs. - The local-local contrastive loss accomplishes b) by considering different augmentations of the same scan as positive pairs and fMRI scans from different timestamps as negative pairs. - We hope this clarifies the use of our contrastive losses (a minimal sketch of the instance contrastive objective follows this rebuttal thread). > *... performing ablation studies to "substantiate their modeling choices." ...* - Although we had originally intended the phrase "substantiate our modeling choices" to refer to our qualitative design choices, such as the switch to the absolute position embedding scheme, we understand the reviewer's concerns. - Here are some of the test results on the HCP sex classification task. Tweaking the hyperparameters had surprisingly little effect, and after further tuning on the larger ABCD dataset, we settled on the architecture described in the paper, balancing performance, efficiency, and memory usage.

| Temporal Window Size $P$ | 4 | 2 | 6 | 4 | 4 |
|-|-|-|-|-|-|
| **Channel Number $C$** | **36** | **36** | **36** | **24** | **48** |
| Accuracy (%) | $92.9\pm1.51$ | $93.2\pm2.18$ | $93.2\pm3.22$ | $94.1\pm2.41$ | $92.7\pm1.76$ |
| AUC (%) | $98.0\pm1.79$ | $97.9\pm1.40$ | $98.0\pm1.65$ | $98.6\pm0.6$ | $97.2\pm1.1$ |

>*Additionally, it would be helpful to see the loss curves of models...* - Thank you for the suggestion, and we have included the loss curves in the general rebuttal PDF (figure 1).
We would be happy to share the details in our final version of the paper. >*Could the authors clarify if the parameters for z-scoring...* - We originally utilized the mean and variance from the whole data to normalize the regression target labels, but we acknowledge the error in our methodology and have modified our normalization to use only the mean and variance from the training data for each split. - In practice this should not impact the test results, since the same normalization method was used for all baseline models and the change in mean and variance due to this switch is minimal (Table 1 of the PDF); nevertheless, we retrained our model using the correct normalization and posted the training curve to verify that the training process was not affected by this change (Figure 2 of the PDF). We will make sure to update the regression results using the correct normalization method in the final version of our paper. > *I would expect that using the whole fMRI time series ...* - We agree that in an ideal scenario, where we have access to a very large dataset, the reviewer's expectation should hold; however, we believe that the input time sequence length should be treated as a hyperparameter in more limited settings. Please refer to the general rebuttal response for our opinion regarding the matter. > *...it would be helpful if they could also share their trained models...* - We agree with the sentiment and would be happy to release our trained models and open-source code alongside the final draft of our paper. >*Explanation methods like IG are generally brittle...* - The interpretation method we used for our analysis, Integrated Gradients with SmoothGrad-Squared (IG-SQ), is known to be more robust [1] thanks to the added noise smoothing and the squaring of estimates. We believed this interpretation method to be the most appropriate for our model, and we would be happy to try other methods if there are any suggestions. > *... a shifting window ... limits model expressivity...* - We understand the concerns; please refer to the general rebuttal regarding our analysis of the matter. > *Authors have mentioned that they divided the fMRI data into sub-sequences...* - We apologize for the confusion. Please refer to our general rebuttal discussing our computing setup. [1] Hooker, Sara, et al. "A benchmark for interpretability methods in deep neural networks." Advances in neural information processing systems 32 (2019). --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarifications and the detailed responses. Validation loss trends are somewhat irregular for the Intelligence score prediction (e.g. UKB, ABCD). Although the method is cutting-edge, demonstrating more fine-grained applications, such as predicting within-scanner scores, is necessary to take it to the next level of impact. --- Reply to Comment 1.1.1: Comment: Thanks for your great feedback. Regarding the irregular loss trends in ABCD intelligence prediction, it's noteworthy that these fluctuations in loss occur during epochs in which the learning rate increases within the learning rate scheduler (cosine annealing with warm restarts). In UKB intelligence prediction, we observed that the model's performance tended to converge at an early stage of the training; to avoid overfitting, we stopped the training early. The fine-grained applications for SwiFT you mentioned will be thoroughly addressed in our future work.
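To make the TCLR-adapted objectives described in the preceding rebuttal concrete, here is a minimal sketch of an InfoNCE-style instance contrastive loss over per-subject embeddings; the local-local temporal variant follows the same pattern, with clips from other timestamps of the same subject serving as negatives. The function signature and temperature value are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn.functional as F

def instance_contrastive_loss(z1, z2, temperature=0.1):
    """InfoNCE-style instance contrastive loss.

    z1, z2: (N, D) projection-head outputs, where z1[i] and z2[i] come
    from the same subject (positive pair) and all other subjects in the
    batch act as negatives.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                     # (N, N) similarities
    targets = torch.arange(z1.size(0), device=z1.device)   # diagonal = positives
    return F.cross_entropy(logits, targets)
```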
Rebuttal 1: Rebuttal: We thank the reviewers for taking the time to review our work and provide constructive feedback that will help us improve it. Here are our responses to common questions and concerns raised by multiple reviewers. ### Our computational resources (for reviewer bLJJ, pTkQ) - We have used four NVIDIA A100 GPUs with a distributed data-parallel (DDP) strategy. Using multiple GPUs speeds up the training process by processing subjects in parallel. However, the number of samples each GPU can process (and the amount of memory each GPU can use) does not change as the number of GPUs increases. - For training, on a single NVIDIA A100 GPU, it takes about 40 minutes per epoch to train on the HCP dataset, composed of 1,084 subjects each with \~1,200 volumes of ($96\times96\times96$) fMRI scans, totaling \~6 hours to complete training (e.g., sex classification). The GPU profiling tool showed that the GPU was using 17.6 GB of memory during our typical training session with a mini-batch size of 8. Although we have mainly used a GPU with 40GB of memory, we have confirmed that an NVIDIA A5000 GPU with 24GB of memory is enough to train our model. Using a DDP strategy with 4 GPUs speeds up the training by a factor of 3\~4, provided that the input data can be loaded from storage into the GPUs fast enough. - For evaluation, using the throughput measured on a single NVIDIA A100 GPU (Table 2), the model can process the 1,200 frames of a single subject in 0.57 seconds. With a lower-powered A5000 GPU we could still process a subject in under a second. - We would be happy to add these additional details to the final version of our paper. ### Swin Transformer's ability to capture long-range interactions (for reviewer pTkQ, JwtU) - As reviewers pTkQ and JwtU have pointed out, owing to the local nature of the attention mechanism, our model might not capture long-range interactions between spatially/temporally separated tokens. We understand the reviewers' concerns, but we believe our model has built-in mechanisms to learn long-range communication between tokens. Here we would like to explain how our model may learn long-range relationships (a minimal sketch of the 4D window partitioning and shifting appears after this rebuttal). - On the spatial dimensions, the Swin Transformer model can cover large ranges in the later parts of its layers through the patch merging step. In our configuration, after the initial patch embedding step, where $6 \times 6 \times 6 = 216$ neighboring voxels are embedded into a patch, a $4 \times 4 \times 4$ window can cover a range equivalent to $24 \times 24 \times 24$ voxels of our $96 \times 96 \times 96$ input. However, after the patch merging step, where 8 neighboring tokens are merged, the same window can cover a range equivalent to $48 \times 48 \times 48$ voxels. After another patch merging step, at stage 3, this grows to $96 \times 96 \times 96$, which is already the same size as our entire input. From this point on, a single window covers the entire span of our input in the spatial dimensions, thanks to the patch merging step. - On the temporal dimension, where patch merging does not occur in the current version of the model, we needed another method to ensure long-range interaction between tokens. The shifting window allows a token to interact with tokens that are ~10 time frames away in our setup, but to allow longer-range interactions we made the model perform full global attention in the final stage, as mentioned in Section 3.1.
Thus, the model processes the entire temporal range in the final stage, ultimately allowing it to introduce long-range temporal interactions. We acknowledge that there could be more effective methods to allow long-range interactions, and it would be an exciting problem to tackle when dealing with more temporally dynamic datasets, as reviewer pTkQ has mentioned. ### Limitations regarding the utilization of sub-sequences (section 4.6) (for all reviewers) - In our setup, instead of inputting the entire fMRI sequence of a subject all at once, we have opted to divide it into 20-frame sub-sequences and use the divided sub-sequences as inputs to our model. This was mainly due to memory constraints and to regularize the number of input tokens to our model. We have investigated the effect of changing the length of this input sub-sequence in section 4.6. Here, we would like to elaborate on why the optimal length seems to change for each dataset and task. - Inputting a longer time sequence would allow the model to attend to longer sequences at once, at the expense of adding parameters and increasing the risk of overfitting. Therefore, in a setting where we are dealing with a limited amount of training data, using a shorter sub-sequence may benefit the model's generalization capabilities. - We also note that the total amount of information used for our predictions remains the same regardless of the input time sequence length, as the averaged logits from all sub-sequences of the subject are used for the prediction. - The datasets used for this work are resting-state fMRI scans, where the image does not change drastically over time. In the future, it would be interesting to investigate the effect of input time sequence length on more temporally dynamic datasets, such as task-based fMRI scans. Pdf: /pdf/98630ba987cf5413015bea7122a07b409f3e4a93.pdf
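For illustration, the 4D window partitioning and cyclic shifting discussed in this rebuttal could be sketched as follows, in the spirit of the original Swin Transformer. The attention masking needed at shifted borders is omitted, and all names and window sizes are assumptions rather than the authors' code:

```python
import torch

def shifted_window_partition(x, win=(4, 4, 4, 4), shift=True):
    """Partition a 4D token grid into windows for local self-attention.

    x: (B, T, H, W, D, C) token grid; T, H, W, D must be divisible by `win`.
    Cyclically shifting by half a window in alternating layers lets tokens
    attend across window borders (the mask for wrapped positions is
    omitted here for brevity).
    """
    if shift:
        x = torch.roll(x, shifts=[-w // 2 for w in win], dims=(1, 2, 3, 4))
    b, t, h, w, d, c = x.shape
    wt, wh, ww, wz = win
    x = x.view(b, t // wt, wt, h // wh, wh, w // ww, ww, d // wz, wz, c)
    x = x.permute(0, 1, 3, 5, 7, 2, 4, 6, 8, 9)
    # (num_windows * B, tokens_per_window, C) -- ready for self-attention.
    return x.reshape(-1, wt * wh * ww * wz, c)
```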
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces SwiFT, a novel Swin Transformer-based architecture for analyzing high-dimensional functional brain MRI data. SwiFT learns end-to-end to process spatiotemporal brain functional data, and it achieves state-of-the-art performance on several tasks, including sex classification, age prediction, and cognitive intelligence prediction. The authors also demonstrate the feasibility of applying the pre-train and fine-tune framework to SwiFT, empowering researchers to construct large-scale foundation models for fMRI. Strengths: - The paper proposes a Swin Transformer-based architecture that can learn brain dynamics directly from 4D functional brain MRI data in an end-to-end fashion, encouraging researchers to construct foundation models for fMRI. - The authors provide a clear and concise introduction to the problem of analyzing high-dimensional functional brain MRI data and explain the motivation for developing SwiFT. - Results reveal that SwiFT consistently outperforms recent state-of-the-art models. The authors also conduct ablation studies to substantiate the modeling choices and present interpretation results using IG-SQ for SwiFT's predictions. - SwiFT has the potential to facilitate scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI. - The authors also provide interpretability of SwiFT's predictions, which can help researchers better understand the brain's spatiotemporal dynamics. Weaknesses: - The paper compares SwiFT with only a limited set of recent state-of-the-art models. It would be interesting to see how SwiFT compares with more types of models, such as CNNs or GNNs, which are commonly used in fMRI analysis. - While the paper presents interpretation results using IG-SQ for SwiFT's predictions, it would be helpful to have a more in-depth discussion of these results and of how they can be used to better understand the brain's spatiotemporal dynamics. - It would be helpful to have a more in-depth discussion of the limitations of SwiFT, such as its sensitivity to noise in the data or its ability to generalize to new datasets. - The paper does not provide open-source code for SwiFT, which could limit its adoption and reproducibility. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Could the authors provide more details on the computational resources required to train and evaluate SwiFT? This would help to better understand the practical feasibility of using SwiFT in real-world applications. - Can the authors provide more details on the implementation of the 4D window multi-head self-attention mechanism and absolute positional embeddings used in SwiFT? This would help to better understand the specifics of SwiFT's architecture and how it differs from other Transformer-based models. - Could the authors provide a more in-depth discussion of the limitations of SwiFT, such as its sensitivity to noise in the data or its ability to generalize to new datasets? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Consider the potential impact of SwiFT on the broader field of neuroscience and medicine, such as its potential to facilitate scalable learning of functional brain imaging.
This would help to better understand the potential benefits and drawbacks of using SwiFT in research and clinical settings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed analysis of our work and for the insightful suggestions. Here we address some of the questions and concerns raised: > *...It would be interesting to see how SwiFT compares with more types of models...* - Thank you for providing us with some pointers. To faithfully evaluate the effectiveness of our model, we have already reproduced multiple models as baselines. One such model is BNT ([12] of our paper), the current state-of-the-art model to the best of our knowledge. BNT improves upon the traditional GNN-based approaches for fMRI analysis and represents the cutting edge of similar GNN-based approaches. > *While the paper presents interpretation results ...* - Thank you for your inquiry. Currently, we employ a method of calculating IG-SQ for each sub-sequence and subsequently averaging along the time axis. Through this approach, we have identified which regions, on average, exhibit the highest explanatory power for sex differences. - For these IG-SQ values to hold further significance, conducting additional analyses on their relationship with brain connectivity is necessary. The brain regions showing high IG-SQ values in our findings, such as the thalamus, insular cortex, and inferior temporal gyrus, are primarily hubs for integrating various sensory information and exhibit connectivity with several other brain regions. A crucial research question is investigating whether this high connectivity contributes to the elevated IG-SQ values. - Furthermore, exploring how well IG-SQ values can account for individual differences is worthwhile. IG-SQ values serve as a more direct measure of sex differences. Each individual possesses unique IG-SQ values, which might also represent individual differences. Consequently, investigating the predictive capacity of interpretation maps for mental disorders characterized by significant sex differences, such as depression or ASD, is a crucial future research avenue. > *The paper does not provide open-source code ...* - We apologize that we had to provide our code only as a file in the supplementary material due to the anonymous review process. We would be happy to release our open-source code alongside the final draft of our paper. > *Could the authors provide more details on the computational resources ...* - Please refer to the general rebuttal for the additional details. > *Can the authors provide more details on the implementation...* - Thank you for the comment. We supplement our implementation details with the following: - The 4D window multi-head self-attention (4DW-MSA) mechanism is an extension of its 2D variant from the Swin Transformer. The core idea of the windowed attention mechanism is that tokens only attend to tokens that are spatially or temporally adjacent to them. This allows the Transformer to achieve lower computation and memory complexity. - To be specific, we divide the entire 4D space into smaller sub-spaces (windows), and tokens within a sub-space only attend to tokens within the same sub-space. Additionally, after each layer the windows are shifted by half of their width to allow the tokens to exchange information with new adjacent tokens. - The absolute positional embeddings, although not commonly used, were introduced in our work to improve the model's efficiency. The absolute positional embeddings have a simple premise: we add a (learnable) embedding vector to each of the tokens based on their position.
A naive implementation would require a unique vector for each of the $20 \times 4{,}096 = 81{,}920$ possible positions in our configuration. As a design choice, we separate the time dimension and space dimensions and employ 20 time-embedding vectors and $16 \times 16 \times 16 = 4{,}096$ space-embedding vectors, adding them separately (a minimal sketch of this factorized embedding follows this rebuttal). > *a more in-depth discussion of the limitations of SwiFT...* - A limitation of SwiFT is its inability to process the entire sequence as a whole, necessitating division into sub-sequences. We theorize this may not be an entirely negative aspect; however, it would be difficult to test processing entire sequences due to memory constraints. For a more comprehensive explanation, please refer to the discussion in the general rebuttal. - For limitations associated with generalizing to new datasets, please refer to the response provided below. > *Consider the potential impact of SwiFT on the broader field ...* - Thank you for your interest in the potential impact of SwiFT. We expect SwiFT to become a foundation model for functional brain imaging that can solve various problems in neuroscience and medicine. In recent years, it has been a trend to train large-scale models and then perform new tasks through few-shot learning. However, functional imaging has not had such an approach due to the lack of models and computational resources for large-scale training, and we believe that SwiFT can be an effective solution. - In research and clinical settings, SwiFT can be used to solve specific, well-defined problems. For instance, SwiFT could be directly employed for the diagnosis and prognosis of diseases, and the features learned by SwiFT could be harnessed for subtyping heterogeneous psychiatric disorders. However, owing to the intrinsic nature of transformers, the efficacy of SwiFT might not be fully realized with smaller datasets, and it becomes necessary to fine-tune the model using weights pre-trained on larger datasets. - In our work, we observed that transfer learning yields limited improvements on specific tasks. However, this limitation may be due to conducting pre-training solely on a single type of data. We anticipate that broadening the scope of pre-training to encompass diverse datasets, such as task fMRI, will improve the effectiveness of transfer learning; this stands as a crucial avenue for future inquiry. --- Rebuttal 2: Title: Response to Rebuttal Comment: Thanks to the authors for the explanation!
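A minimal sketch of the factorized absolute positional embedding described above (20 time-embedding vectors plus $16\times16\times16 = 4{,}096$ space-embedding vectors, added separately by broadcasting) might look like the following; the class name and zero initialization are illustrative assumptions:

```python
import torch
import torch.nn as nn

class FactorizedPosEmbed(nn.Module):
    """Add separate learnable time and space embeddings to 4D tokens.

    Factorizing 20 x 4,096 positions into 20 + 4,096 vectors keeps the
    parameter count small. Sketch only; SwiFT's module may differ.
    """
    def __init__(self, n_time=20, grid=(16, 16, 16), dim=36):
        super().__init__()
        self.time = nn.Parameter(torch.zeros(1, n_time, 1, 1, 1, dim))
        self.space = nn.Parameter(torch.zeros(1, 1, *grid, dim))

    def forward(self, x):  # x: (B, T, H, W, D, C)
        # Each addend broadcasts over the dimensions it does not index.
        return x + self.time + self.space
```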
Keypoint-Augmented Self-Supervised Learning for Medical Image Segmentation with Limited Annotation
Accept (poster)
Summary: This paper tackles self-supervised learning for medical image segmentation. The main contribution is the keypoint identification in input images in self-supervised pre-training to consider long-range spatial dependencies. Each UNet encoder layer is followed by a keypoint-augmented fusion (KAF) layer, where keypoints are identified by SIFT on the input image and rescaled with respect to the resolution of each layer. Convolutional features at each keypoint are processed by a Vision Transformer to incorporate self-attention, provided to additional convolutional layers for extracting dense features from transformer outputs and finally concatenated with the input feature map. Resulting KAF layer features are used for contrastive training of UNet for both global representation learning similar to PCL [54], as well as local similarity learning via the SuperGlue graph neural network [42]. Strengths: Incorporating similarity learning from keypoints is interesting and novel in self-supervised medical image segmentation literature. Paper is well-written and the experiment setup is detailed. Cross-validation is used to compute error bounds in addition to performance metrics. Downstream segmentation performance with limited annotations outperforms several pre-training approaches across two benchmark datasets. Weaknesses: While contributions are novel, literature review is missing a lot of works on self-supervised medical image segmentation via global/local similarity/contrastive learning: Taher et al. "CAiD: Context-Aware Instance Discrimination for Self-supervised Learning in Medical Imaging" 2022 Yan et al. "SAM: Self-supervised Learning of Pixel-wise Anatomical Embeddings in Radiological Images" 2021 Zheng et al. "MsVRL: Self-Supervised Multiscale Visual Representation Learning via Cross-Level Consistency for Medical Image Segmentation" 2023 Ouyang et al. "Self-supervised Learning for Few-shot Medical Image Segmentation" 2022 Fischer et al. "Self-supervised contrastive learning with random walks for medical image segmentation with limited annotations" 2023 Xie et al. "PGL: Prior-Guided Local Self-supervised Learning for 3D Medical Image Segmentation" 2020 Wang et al. "Self-supervised learning based transformer and convolution hybrid network for one-shot organ segmentation" 2022: also incorporates Vision Transformer for global feature learning. Experimental comparisons against Wang et al. 2022 would further strengthen the novelty. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How are the thresholds selected for identifying positive/negative keypoints? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: limitations and future work are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
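To make the keypoint pipeline in this summary concrete, a minimal sketch of detecting SIFT keypoints on an input slice and rescaling their coordinates to a given encoder layer's resolution (using OpenCV; the function and rounding scheme are illustrative assumptions, not the paper's code) could be:

```python
import cv2
import numpy as np

def keypoints_for_layer(image, layer_hw):
    """Detect SIFT keypoints on a 2D slice and rescale their coordinates
    to the spatial resolution of an encoder feature map.

    image: (H, W) uint8 slice; layer_hw: (h, w) of the feature map.
    """
    sift = cv2.SIFT_create()
    kps = sift.detect(image, None)
    coords = np.array([kp.pt for kp in kps])  # (N, 2) in (x, y) order
    if coords.size == 0:
        return coords.reshape(0, 2)
    scale = np.array([layer_hw[1] / image.shape[1],   # x scales with width
                      layer_hw[0] / image.shape[0]])  # y scales with height
    return np.round(coords * scale).astype(int)       # indices into the map
```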
Rebuttal 1: Rebuttal: > While contributions are novel, literature review is missing a lot of works on self-supervised medical image segmentation via global/local similarity/contrastive learning. > Experimental comparisons against Wang et al. 2022 would further strengthen the novelty. Thank you for offering additional literature on self-supervised medical image segmentation! We will include them in our related works. Unfortunately, we were not able to find publicly available source code for `Wang et al.`, but we are happy to include them as one of the comparisons when the authors release the code in the near future. Among the list of papers provided, we were only able to find one publicly available codebase (`Taher et al.`), and we include it as an additional comparison. Results on the CHD and ACDC datasets are reported below. Our method outperforms CAiD on both datasets under different numbers of subjects used for finetuning (M). A full comparison will be included in the revised paper.

| | CHD | | ACDC | |
|------------|:---:|------------|:----:|------------|
| SSL Method | M=2 | M=15 | M=2 | M=6 |
| *CAiD* | 0.265(.08) | 0.684(.04) | 0.483(.11) | 0.822(.02) |
| Ours | **0.392(.06)** | **0.712(.03)** | **0.741(.03)** | **0.873(.01)** |

> How are the thresholds selected for identifying positive/negative keypoints? As there are two thresholds involved in our models, we clarify each of them below. - The first threshold is applied across different slices to determine the positive/negative slices, and we follow the optimal values found by the PCL paper [1]: 0.1 and 0.35 for CHD and ACDC, respectively. - The second threshold is applied among keypoint features between two positionally positive slices to determine the correspondences among these points (a minimal sketch of this correspondence filtering follows this thread). In this step, we do not explicitly recognize any "negative" keypoints, as we only use the matched correspondences ("positives") in our local loss. By default, we set the threshold to 20. Our ablation on the threshold and the attached visualization of the correspondences show that our method is not sensitive to the threshold value or the number of matched correspondences. [1] Zeng et al., Positional contrastive learning for volumetric medical image segmentation. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal and keep my original score. Thank you, --- Reply to Comment 1.1.1: Comment: Dear Reviewer PaeV, Thanks a lot for the positive feedback!
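As a concrete illustration of the second threshold described above (a default of 20 on the distance between matched keypoints; the authors mention a Manhattan-distance criterion elsewhere in this discussion), a hedged sketch of filtering matcher-proposed correspondences might be:

```python
import numpy as np

def filter_matches(kps_a, kps_b, matches, max_dist=20):
    """Keep matched keypoint pairs whose coordinates lie within a
    Manhattan-distance threshold (default 20, as in the rebuttal).

    kps_a, kps_b: (N, 2) and (M, 2) keypoint coordinates on two
    positionally positive slices; matches: list of (i, j) index pairs
    proposed by the matcher. Names and the exact rule are assumptions.
    """
    return [(i, j) for i, j in matches
            if np.abs(kps_a[i] - kps_b[j]).sum() <= max_dist]
```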
Summary: This paper focuses on improving medical image segmentation performance with limited labeled data. They (1) propose a modification to the standard UNet convolutional architecture by incorporating local features (at locations identified by SIFT) into the backbone, (2) show how their modification can be used to control self-supervised contrastive learning positive and negative pair selection, and (3) evaluate their proposed architectural change and self-supervised training strategy against relevant baselines. They find both the architectural change and self-supervised strategy improve segmentation performance. Strengths: - Novel architectural change that impacts network performance both with random initializations and with self-supervised contrastive learning. - Improves performance against relevant and recent self-supervised baselines. Weaknesses: - The GLCL baseline reported in Table 1 is much lower than the numbers reported in the GLCL original paper; the original GLCL paper actually reports performance better than the proposed method on the ACDC dataset. Please comment on why the reported GLCL performance is 14-18 points lower than the GLCL paper. - The proposed architectural change may increase compute requirements, but this is not discussed nor are any timing/computational load experiments reported. - There is limited dataset variety: both evaluation datasets are 3D cardiac anatomical segmentation tasks. Additional experiments, particularly including pathological segmentation, would strengthen the results and show the proposed method works for different segmentation targets. - Scaling the proposed solution to non-CNN architectures or 3d data may be difficult (as mentioned by the authors). - Minor: Table 1 caption needs to specify what metric you are reporting; Line 221 has a period where there should be a comma+"and." Technical Quality: 3 good Clarity: 3 good Questions for Authors: My questions and suggestions are covered in the Weaknesses section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations and impact adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The GLCL baseline reported in Table 1 is much lower than the numbers reported in the GLCL original paper; the original GLCL paper actually reports performance better than the proposed method on the ACDC dataset. Please comment on why the reported GLCL performance is 14-18 points lower than the GLCL paper. Thank you for pointing this out! We believe this is a miscommunication on our part, and we will clarify it in our paper. In the original GLCL paper, experiments are conducted on a single train/test split, and results are reported on a single held-out test set. In our experiment, we followed the PCL paper and conducted 5-fold cross validation. Results in Table 1 indicate the average Dice score across the 5 folds. Note that our reported result for GLCL-global is consistent with the GCL results reported in the PCL paper (where they adopted the global loss of GLCL). We additionally performed 5-fold evaluation with the full losses, and results are shown in Table 1. > The proposed architectural change may increase compute requirements, but this is not discussed nor are any timing/computational load experiments reported. Thank you for your suggestion! Due to the page limit, we discussed the computational load in the supplementary materials. Please kindly refer to appendix A "Computational analysis" for more information. In summary, our method demonstrates a slower growth rate in both memory usage and GFLOPs compared with a transformer-based UNet (i.e., SwinUNETR). > Limited dataset variety. We agree. We now include results on an additional CT dataset for multi-organ segmentation. Results are reported in the table below. With 2 training subjects, we tested the performance of the model under both random weight initialization and SSL pretrained initialization. In both scenarios, our method consistently outperformed the existing works. Due to the time constraint, we will supplement the remaining methods and different numbers of training subjects in the revised manuscript.

| Init. | Method | Dice (M=2) |
|--------------|----------------------|------------|
| Random Init. | Unet | 0.253(.06) |
| | Swin-Unet | 0.198(.04) |
| | SwinUNETR | 0.279(.06) |
| | Ours | 0.289(.06) |
| SSL pretrain | PCL | 0.306(.05) |
| | Swin-Unet (with PCL) | 0.210(.07) |
| | Ours | 0.322(.06) |

> Scaling the proposed solution to non-CNN architectures or 3d data may be difficult (as mentioned by the authors). Thank you for the comment. It is worth mentioning that the current scope of our work is to bring long-range dependencies into CNN-based segmentation architectures, and our results on 2D medical image segmentation indicate the effectiveness of the proposed keypoint-based formulation. Our proposed KAF layer is generic and can also be incorporated into recent transformer-based UNet architectures that integrate both transformer blocks and convolutional blocks. We fully agree that such architectural modification or scaling up to 3D may require different considerations and further investigation, but these directions are interesting future works. > Table 1 and Line 221 Thanks for pointing these out. We will mention that we use Dice as the metric and report its computed mean and standard deviation in Table 1 (a minimal sketch of the Dice coefficient follows this thread). For Line 221, we will replace the period with a comma and "and," as suggested. Thanks! --- Rebuttal Comment 1.1: Title: Rebuttal response Comment: I have read the other reviews and the authors' rebuttal.
The authors have presented a thorough rebuttal responding to the raised questions/weaknesses and have presented new data showing useful extensions and ablations of their method. I think this is an interesting addition to the self-supervised body of work and am changing my score from 4 to 5. --- Reply to Comment 1.1.1: Comment: Dear Reviewer 8g4p, Thank you for your time and we appreciate your encouragement! Please let us know if you have other comments or questions.
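For reference, the Dice scores reported as mean (std) across folds in this exchange are the standard overlap metric between a predicted and a ground-truth mask; a minimal sketch, with an epsilon added for empty masks:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```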
Summary: SIFT keypoints are used in conjunction with a UNet to construct a transformer-like model for medical image segmentation. Global and local self-supervised learning (SSL) losses are proposed. Correspondences are identified using the SuperGlue method. Various experiments on two cardiac image datasets, including ablation studies, show improvements over other work in terms of segmentation Dice score. Strengths: The use of traditional keypoints as an attention mechanism for UNet segmentation makes sense and is novel, to my knowledge. Improved segmentation results convince the reader that the approach has practical value. Weaknesses: The method presented is based on 2D keypoint matching across multiple 2D slices, whereas anatomy is 3D. It would be interesting to know if 3D keypoints would improve results. Table 1 should mention that the numbers shown are Dice scores. Minor spelling: Line 182 "based on the cloest L2 distance" -> "closest"; Line 213 "patent ages ranging from 1 month to 21 year" -> "21 years". It would be interesting to know how much the learning method here improves upon simple keypoint transfer segmentation as in [46]. Objective functions in equations (1) and (2) appear very similar to other recent keypoint matching work: Chauvin, Laurent, et al. "Efficient pairwise neuroimage analysis using the soft jaccard index and 3d keypoint sets." IEEE transactions on medical imaging 41.4 (2021): 836-845. The description of the datasets used does not mention the size and resolution of the images. Computational complexity of the proposed method is not mentioned. The paper lacks reproducibility. For instance, line 156 and line 181 allude to thresholds whose numerical values are not explicitly mentioned. Similarly, it is not clear what setup was used to train the Transformer block, including details such as learning rate, batch size, attention head size, and input embedding size. Additionally, it would be valuable to include statistical tests to demonstrate the superiority of the proposed method compared to existing approaches. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - It would be good to hear a discussion regarding 3D keypoint methods, and how much this method would improve upon simple keypoint transfer segmentation as in [46]. - In lines 208 and 209 it is stated that images are pre-processed using the methodology described in [54]. Could the preprocessing stage be briefly described in the paper? Are there aspects without which this approach would not work, e.g. registration of training data? - Could the authors explain why the Manhattan distance is preferred over other distance measures for keypoint filtering (as mentioned in line 181)? - Would it be possible for the authors to provide the values of the similarity index in Figure 2? This would allow us to compare those values with the ones presented in the previous Table 1. - Can the authors provide more insight into the justification for the following claim? Clearly there are various random transforms which would scramble image information, to which SIFT is not invariant. "(2) our proposed self-supervision helps maintain better local equivariance of the self-attention, i.e., with the same query point location, its interaction with other points remains identical no matter how the image is transformed." Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Several limitations exist: missing important details, computational complexity, description of data, transformer parameters (learning rate, batch size, attention head size, and input embedding size), statistical significance, and comparison to keypoint-transfer-based segmentation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > It would be interesting to know if 3D keypoints would improve results. Thank you for the insightful point! As discussed in our limitations section, a promising future avenue is expanding our model to 3D. This direction, however, poses unique challenges. Firstly, 3D keypoint detection lacks well-established algorithms. Moreover, the inherent sparsity of 3D data adds complexity, making the transfer of features from sparse keypoint maps to dense feature maps a formidable task. Modifying this module presents an unexplored area requiring careful investigation. Furthermore, the increase in keypoints from $O(x^2)$ to $O(x^3)$ brings about the necessity for a novel sampling strategy to address memory constraints. While deploying our method on 3D data holds potential, we acknowledge the substantial learning curve and the need for extensive experimentation in this domain. > It would be interesting to know how much the learning method here improves upon simple keypoint transfer segmentation as in [46]. We fully agree that computing a registration based on keypoints and then transferring the segmentation to the target image is an interesting idea when annotations are limited. However, we note that extending [46] to our task would require extensive research effort amounting to a different project. Crucially, [46] registers keypoints across different subjects in order to transfer training segmentations at test time, whereas in our method, keypoint correspondences are detected across neighboring slices within the same subject, which is much simpler. Moreover, [46] uses keypoints for segmentation transfer, while our method utilizes keypoints for more general-purpose representation learning. Again, we acknowledge that comparing against or even extending [46] for our task will be a promising future research direction. > Objective functions in equations (1) and (2) appear very similar to other recent keypoint matching work. Yes, we agree. In our approach, eq (1) is an established global contrastive loss, and the distinction of our method from others lies in the features used to compute similarity. Specifically, the global feature is now computed from our KAF layer in eq (1). Similarly, we follow SuperGlue in eq (2) for aligning the feature space between the matched correspondence features. [1] Sarlin et al., SuperGlue: Learning Feature Matching With Graph Neural Networks > The size and resolution of the images. We mentioned the image size and resolution in the caption of Figure 1 in our appendix, but we will make it more explicit in the main text. > Computational complexity. Please kindly refer to our appendix A "Computational analysis". > Reproducibility: Threshold values in L156 and L181, learning rate, batch size, etc. Thank you for raising these issues. We included the learning rate, batch size, and optimizers in L226–L240, but we are happy to add more relevant details in the manuscript. Our code will be released for better reproducibility. > Statistical tests. Thanks for the valuable suggestion. We include a statistical test and report p-values for our method versus each of the other methods below (each scalar indicates the p-value of our final method against the listed method). We use the average Dice score per slice to estimate the p-values (an illustrative sketch of such a paired per-slice test appears at the end of this exchange). All values indicate that there is a significant difference between our method and the existing methods.

| | CHD (M=15) | ACDC (M=6) |
|----------------------|------------|------------|
| Unet w/ random init. | < 0.001 | << 0.001 |
| Ours w/ random init. | << 0.001 | << 0.001 |
| GLCL-full pretrain | << 0.001 | << 0.001 |
| PCL pretrain | << 0.001 | << 0.001 |

> In lines 208 and 209 it is stated that images are pre-processed using the methodology described in [54]. Could the preprocessing stage be briefly described in the paper? Are there aspects without which this approach would not work, e.g. registration of training data? We will add a description of the preprocessing. It mainly included intensity normalization and resampling, which are commonly adopted preprocessing steps for medical images. Further, our model does not assume registered training images and thus is not affected by such pre-processing. > Why is the Manhattan distance preferred over other measures? We empirically used the Manhattan distance as the criterion, inspired by the PCL paper, but it is likely that other measures can also work, as long as the correspondence can be reasonably determined. This assumption was also supported by our ablation study for `9qHo` using different thresholds, where the number of matched correspondences and different threshold values had minimal effects on the final results. > Adding values of the similarity index in Figure 2. We have now added the corresponding Dice scores (scaled by 100) to the revised figure attached. We observed that these individual values are consistent with the average scores in Table 1. > Insights on (1) SIFT is not invariant. (2) "our proposed self-supervision helps maintain better local equivariance of the self-attention." We fully agree that SIFT may not be robust when strong image augmentations are applied, and this is precisely the reason why we first detect keypoints on the **original** images before any augmentations are applied, which ensures that the keypoints are consistent under random augmentations. Further, the keypoint correspondences are also matched on the original images, and our local loss further encourages the features between corresponding points to remain similar even under strong augmentations. This formulation makes our model more robust at maintaining identical interactions between keypoints under different image transformations, as shown in main text Figure 3 and appendix Figure 2. --- Rebuttal Comment 1.1: Comment: I was initially very enthusiastic about this work, and I am generally satisfied with the authors' responses. Unfortunately, the authors miss directly related literature and avoid broadening their paper's literature review as suggested by several reviewers, which ultimately reduces the potential impact of this work. For example, traditional 3D keypoint segmentation methods, published in a major medical imaging journal: Wachinger, C. et al. (2018). Keypoint transfer for fast whole-body segmentation. IEEE transactions on medical imaging, 39(2), 273-282. I thus reduce my recommendation from 7 to 6, more in line with other reviewers. --- Reply to Comment 1.1.1: Comment: Dear Reviewer m84B, Thank you for your time, and we appreciate your overall positive and constructive feedback after reading our rebuttal! We believe there might be some miscommunication on our part, and we would like to take this opportunity to provide further clarity regarding the related literature and the potential impact of our research.
> “The authors miss directly related literature and avoid broadening their paper literature review suggested by several reviewers that ultimately reduced the impact of this work.”

We greatly respect and value the important related literature provided by all reviewers w.r.t. Transformer-based UNet (Reviewer `9qHo`), local SSL (Reviewer `b18J,PaeV`), and keypoint-based segmentation (Reviewer `m84B`). Our intention was never to overlook the importance of broadening our literature review, and we genuinely value your suggestions. We will incorporate the missing literature into our manuscript. Additionally, we have carried out extra experiments with the suggested papers whose code was accessible, to further validate our methods. To be specific, beyond our initial experimental setup, we have added more experiments and comparisons with this literature as follows: (1) Apart from the original computational comparison with transformers, we have further compared segmentation performance with two transformer-based methods. Our architecture achieved better performance under both randomly initialized weights and self-supervised pretraining, while maintaining lower computational costs (see response to `9qHo`); (2) We have added more comparisons with two recent local self/semi-supervised learning methods. Our method outperformed the existing works, which validates the benefits of our proposed pretraining strategies (see response to `b18J,PaeV`); (3) We have included a new CT dataset and verified that our method also generalizes to multi-organ non-MRI datasets (see response to `9qHo,8g4p,b18J`). We respectfully believe that these new experiments and comparisons could potentially contribute to a broader impact of our work.

> "Miss directly related work on traditional 3D keypoint segmentation methods, published in a major medical imaging journal. Wachinger, C. et al. (2018). Keypoint transfer for fast whole-body segmentation. IEEE transactions on medical imaging, 39(2), 273-282"

We genuinely acknowledge the significance of the work that employs keypoints for 3D segmentation, as highlighted in your comment. We did not intend to exclude a direct comparison with it; however, a proper re-implementation without their official codebase may require extensive effort. We are more than willing to reach out to the authors to explore the possibility of re-implementing their method for a fair comparison with ours. Moreover, we believe the 3D keypoints used in this paper will serve as a promising starting point for extending our model from 2D to 3D. Once again, we thank you for your time and effort in providing feedback. We hope that our clarification effectively addresses your concerns related to the missing literature and the potential impact of our work.
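The per-slice significance test reported in the rebuttal above can be illustrated with a short sketch. This is a hypothetical Python illustration assuming paired per-slice Dice arrays; the values, array sizes, and choice of test are stand-ins, not the authors' exact procedure.

```python
# Minimal sketch of a paired significance test over per-slice Dice scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dice_ours = rng.normal(0.71, 0.05, size=200)      # stand-in per-slice scores
dice_baseline = rng.normal(0.64, 0.05, size=200)  # stand-in per-slice scores

# Each slice is scored by both methods, so a paired test is appropriate.
t_stat, p_value = stats.ttest_rel(dice_ours, dice_baseline)
print(f"paired t-test p-value: {p_value:.2e}")

# Non-parametric alternative that avoids normality assumptions.
w_stat, p_value_w = stats.wilcoxon(dice_ours, dice_baseline)
print(f"Wilcoxon signed-rank p-value: {p_value_w:.2e}")
```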
Summary: The authors introduce an approach to improve the performance of the UNet model, specifically in the task of segmentation. They propose a keypoint-augmented fusion layer to enhance the CNN-based encoder of UNet. This layer leverages self-attention to capture long-range spatial dependencies among keypoints extracted from multi-scale feature maps. In addition, the paper suggests the use of global and local contrastive losses to pretrain the model in a self-supervised manner. These losses help the model learn meaningful representations from unlabeled data, improving its performance. The proposed framework was evaluated on cardiac CT and MRI datasets for segmentation tasks. The results demonstrate reasonable improvements over state-of-the-art methods, particularly in scenarios with limited labeled data (few-shot settings). Strengths: 1) This work addresses limitations of traditional UNet models, which rely on convolutional neural networks (CNNs) and may struggle to capture long-range dependencies among spatial positions in an image. In this study, the authors propose a more effective approach to handle this issue. At each scale of the encoder, the authors employ the Scale-Invariant Feature Transform (SIFT) to detect keypoints within the CNN feature maps. By applying SIFT, the authors aim to identify salient locations within the feature maps that can capture important spatial information. To facilitate correlation and interaction among these detected keypoints, a transformer model is employed. 2) The global contrastive loss is applied in a unique way on two types of representations - concatenated keypoint-enhanced feature maps, as well as features from the last layer of the UNet. 3) Instead of being used as a traditional image-wise similarity loss, here the local contrastive loss is proposed to exploit pixel-wise similarity (by measuring the correlation between keypoints within a spatial distance among adjacent slices). Weaknesses: 1) It is unclear why this work ignores Transformer-based UNet architectures (TransBTS [1], UNETR [2], Swin-Unet [3], etc.) which employ self-attention to better capture the correlations. Though the authors propose to use it on keypoints, I find it difficult to comprehend the advantages here, except maybe computational efficiency. There are many works which do it on entire feature maps - why are those suboptimal? 2) Handcrafted features like SIFT/SURF to detect keypoints, though powerful, aren't the current SOTA. The performance can definitely be tested using finetuned object detection models (such as Faster R-CNN [4]). In short - there should be ablations showing (1) the advantage of using keypoints, (2) different keypoint detection strategies, and most importantly (3) a comparison with more recent Transformer-based UNet architectures. [1] TransBTS: Multimodal Brain Tumor Segmentation Using Transformer [2] UNETR: Transformers for 3D Medical Image Segmentation [3] Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation [4] Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1) Keypoint Robustness - The manner of application of the global contrastive loss depends on the effectiveness of the keypoints, since the other global loss is used off-the-shelf from [54] as the PCL loss. 2) As for the local loss, the contribution appears to be more engineering (lines 175-200), i.e., on how to use it among keypoints in adjacent slices. Here also, there is no ablation on the multiple thresholds selected, in the form of the spatial distance between keypoints or the number of adjacent slices. 3) Standard non-cardiac benchmark datasets like BTCV, SegTHOR have not been used. To establish it as a generic method, the authors should definitely explore non-cardiac pan-organ datasets. 4) I also fail to understand what customization the authors have proposed in their architecture to make it work in few-shot settings with limited annotations. 5) Can the authors please mention the unlabeled datasets used for SSL pretraining? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Not adequately addressed in the 'Discussion' section Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
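The keypoint-augmented fusion mechanism this review describes (self-attention over features gathered at detected keypoints, fused back into the dense CNN feature map) can be sketched roughly as follows. This is an illustrative PyTorch sketch under assumed shapes and module names, not the authors' implementation.

```python
import torch
import torch.nn as nn

class KeypointAugmentedFusion(nn.Module):
    """Illustrative keypoint-augmented fusion layer (hypothetical names/shapes)."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads, batch_first=True)
        # 1x1 conv fuses the sparse keypoint map with the original feature map.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, fmap: torch.Tensor, kp_xy: torch.Tensor) -> torch.Tensor:
        # fmap: (B, C, H, W) CNN feature map; kp_xy: (B, N, 2) integer (x, y)
        # coordinates of detected keypoints (e.g., SIFT on the feature map).
        b = torch.arange(fmap.shape[0])[:, None]              # (B, 1)
        kp_feats = fmap[b, :, kp_xy[..., 1], kp_xy[..., 0]]   # (B, N, C)
        kp_feats = self.attn(kp_feats)   # self-attention among keypoints
        sparse = torch.zeros_like(fmap)  # scatter refined features back
        sparse[b, :, kp_xy[..., 1], kp_xy[..., 0]] = kp_feats
        return self.fuse(torch.cat([fmap, sparse], dim=1))

fmap = torch.randn(2, 32, 64, 64)
kp_xy = torch.randint(0, 64, (2, 8, 2))
out = KeypointAugmentedFusion(32)(fmap, kp_xy)  # (2, 32, 64, 64)
```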
Rebuttal 1: Rebuttal:

> Comparison with transformer-based UNets.

Thank you for your suggestions! We have now added two recent SoTA transformer-based UNets, Swin-Unet [1] and Swin-UNETR [2]. To illustrate the effects of architectural changes, we train all models with random weight initialization, under limited annotation (number of training subjects $M=2$). Five-fold cross validation is performed on the different datasets, and the average dice score with standard deviation is reported below. When trained with limited labels, UNet still outperformed the transformer-based architectures (except for Swin-Unet on CHD), and our method achieved the best results across the board. We note that our observation is consistent with the conclusion in [1], where transformers may be more severely affected by the initialization, and may benefit from large-scale pretraining. E.g., on Synapse, we observed an increased dice of Swin-Unet from 0.198(.04) to 0.210(.07) when further pretraining it with PCL.

| | CHD | ACDC | Synapse |
|--------------|------------|------------|------------|
| Swin-Unet [1] | 0.236(.09) | 0.327(.10) | 0.198(.04) |
| SwinUNETR [2] | 0.137(.09) | 0.501(.03) | 0.279(.06) |
| Unet | 0.184(.06) | 0.588(.07) | 0.253(.06) |
| Ours | 0.344(.05) | 0.655(.05) | 0.289(.06) |

[1] Cao et al., "Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation". [2] Hatamizadeh et al., "Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images".

> The advantage of using keypoints? The keypoint robustness? Other keypoint detection methods?

Thank you for the questions. We will try to clarify our intuition with additional experiments, to hopefully address the concerns.

**Why use keypoints?** Our rationale for using keypoints is rooted in enhancing the conventional CNN layers with long-range dependencies, a characteristic often encountered in medical images. While the attention in transformers facilitates the learning of such long-range dependencies, it operates among all pairs of feature points, potentially resulting in both computational expense and a deficiency in targeting crucial regions. Empirically, our formulation leads to much better segmentation performance compared with the CNN-only UNet and transformer-based UNets, while being less computationally intensive.

**Other keypoint detection methods?** We agree. The keypoint detection can be replaced with other learning-based SoTA methods as well. Given that finetuning a Faster R-CNN model for keypoint detection requires ground-truth labels (which we do not have access to), we used a pretrained keypoint detection model, SuperPoint [1], as an alternative to SIFT. Segmentation results on the CHD and ACDC datasets are reported below comparing the two methods, and the results indicate that our method is not sensitive to different keypoint detection algorithms. Overall, SuperPoint leads to slightly lower results, and we speculate this is because its pretraining was done on natural images (the COCO dataset).

| Init. | Detector | CHD | ACDC |
|----------------|---------------------|--------------|-------------|
| Random Init. | SuperPoint | 0.643(.04) | 0.810(.02) |
| | SIFT | 0.686(.03) | 0.827(.05) |
| SSL Pretrain | SuperPoint | 0.703(.03) | 0.865(.02) |
| | SIFT | 0.712(.03) | 0.873(.01) |

[1] DeTone et al., "SuperPoint: Self-Supervised Interest Point Detection and Description".

**Keypoint Robustness - No ablation on the thresholds.** To verify how sensitive the method is to the number of matching keypoints, we perform an ablation on different threshold values, pretraining our model on the CHD dataset and finetuning it on $M=15$ labeled data. Global losses ($w_1=w_2=0$) are turned off to isolate the effects of the threshold for the local SSL loss. The dice scores are reported below, indicating that our model is not sensitive to the threshold setting and remains robust across different values. We also include a visualization in the attached pdf with the number of matching keypoints annotated under different threshold values.

| Threshold | Dice |
|:---------:|:-------------:|
| 5 | 0.689 (0.043) |
| 10 | 0.690 (0.031) |
| 15 | 0.684 (0.035) |
| 20 | 0.689 (0.036) |
| 25 | 0.684 (0.032) |
| 30 | 0.685 (0.033) |
| 35 | 0.690 (0.037) |
| 40 | 0.687 (0.037) |

> Non-cardiac pan-organ datasets

We now add a multi-organ CT segmentation dataset to evaluate the generalization of our method. Results are reported below. With 2 training subjects, we tested the performance of the model under both random and SSL-pretrained initialization. In both scenarios, our method outperformed the existing works. We will supplement the remaining methods and different numbers of training subjects in the revised manuscript.

| Init. | Method | Dice (M=2) |
|--------------|----------------------|------------|
| Random Init. | Unet | 0.253(.06) |
| | Swin-Unet | 0.198(.04) |
| | SwinUNETR | 0.279(.06) |
| | Ours | 0.289(.06) |
| SSL pretrain | PCL | 0.306(.05) |
| | Swin-Unet (with PCL) | 0.210(.07) |
| | Ours | 0.322(.06) |

> What customization in the architecture makes it work in few-shot settings?

Our architectural customization involves integrating the KAF layer into every CNN block of the UNet. Leveraging the interplay among the features extracted from identified keypoints, this modification empowers the model to more effectively gather local information. Consequently, it facilitates enhanced representation learning even when working with limited data. These advantages have been substantiated through comprehensive empirical experiments conducted across diverse datasets.

> The unlabeled datasets used for pretraining?

We follow the common practice of medical SSL and use all unlabeled volumes in each dataset for our pretraining. On ACDC, we use all unlabeled volumes (100 patients, each with ~15 volumes). On CHD, we use a total of 68 cardiac images.

--- Rebuttal Comment 1.1: Comment: Dear Reviewer 9qHo, We would like to thank you again for your constructive comments and suggestions. We've expanded upon the last table above by including a more thorough comparison between our approach and the two transformer UNet models under both random and self-supervised initialization. We hope that the updated results will provide more insight w.r.t. the comparisons with transformers. In particular, we further pretrain the SwinUNETR with two SSL strategies: PCL [1] and the self-supervised loss proposed in [2]. Our findings are in alignment with the trends we observed earlier, wherein the performance of transformer models is notably influenced by their initialization. Both of these pretraining methods led to observable improvements, and our proposed method achieved the best performance.

| Init. | Method | Dice (M=2) |
|--------------|----------------------|------------|
| Random Init. | Unet | 0.253(.06) |
| | Swin-Unet | 0.198(.04) |
| | SwinUNETR | 0.279(.06) |
| | Ours | 0.289(.06) |
| SSL pretrain | PCL | 0.306(.05) |
| | Swin-Unet (with PCL) | 0.210(.07) |
| | SwinUNETR (pretraining with [2]) | 0.284(.07) |
| | SwinUNETR (PCL) | 0.304(.06) |
| | Ours | 0.322(.06) |

[1] Zeng et al., “Positional Contrastive Learning for Volumetric Medical Image Segmentation” [2] Tang et al., “Self-Supervised Pre-Training of Swin Transformers for 3D Medical Image Analysis”
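The threshold ablated above governs which keypoints in adjacent slices are treated as corresponding. A minimal sketch of such a matching rule, assuming Manhattan distance and nearest-neighbour assignment (the exact rule in the paper may differ):

```python
import numpy as np

def match_keypoints(kps_a: np.ndarray, kps_b: np.ndarray, threshold: float = 20.0):
    """Match keypoints across adjacent slices by Manhattan distance.

    kps_a: (N, 2) coordinates in slice a; kps_b: (M, 2) in slice b.
    Each keypoint in slice a is paired with its nearest neighbour in slice b,
    and the pair is kept only if the L1 distance is below `threshold`.
    """
    dists = np.abs(kps_a[:, None, :] - kps_b[None, :, :]).sum(axis=-1)  # (N, M)
    nearest = dists.argmin(axis=1)
    keep = dists[np.arange(len(kps_a)), nearest] < threshold
    return [(int(i), int(nearest[i])) for i in np.flatnonzero(keep)]

pairs = match_keypoints(np.random.rand(30, 2) * 256, np.random.rand(40, 2) * 256)
```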
Rebuttal 1: Rebuttal: We thank the reviewers for their time and expert feedback. As a summary of the reviews, we were happy to see that the submission was found to be well-written [`b18J,PaeV`], and that it presents a sound and novel [`b18J,9qHo,m84B,8g4p,PaeV`] method for local self-supervised learning, with well-formulated and detailed experiments [`b18J,PaeV`], which shows a significant advance [`b18J, m84B, 8g4p`]. Concerns raised included adding more related work [`b18J,PaeV`], comparisons with more local SSL methods [`b18J`] and transformer-based UNets [`9qHo`], experiments on non-cardiac datasets [`b18J, 8g4p`], ablations on keypoint detection [`9qHo`], inconsistent GLCL results with the original paper [`b18J, 8g4p`], and scalability [`m84B,8g4p`]. To these ends, we address individual concerns below and will revise the paper accordingly. A few major responses are summarized as follows:

- As suggested by `b18J, PaeV`, we include one additional local SSL method (CAiD) to benchmark our method.
- As raised by `9qHo`, we now add comparisons with SoTA transformer-based UNet architectures, including Swin-Unet and Swin-UNETR, as our benchmarks.
- As mentioned by `b18J, 8g4p`, we now add results on an additional non-cardiac CT dataset (Synapse: the multi-organ segmentation benchmark).
- We clarified the difference between the GLCL results reported in the original paper (single-fold train/validation/test) and ours (5-fold cross validation).
- As requested by `9qHo` and `b18J`, we provided more ablation analysis on keypoint detection methods, as well as on various thresholds for determining the correspondence, to evaluate the sensitivity to keypoint detection and matching. We also attached an additional figure to visualize the detected correspondences.
- We revised Figure 1 and attached the Dice scores suggested by `m84B`.
- Typos, missing technical details, and related works will be revised in the paper.

Again, we deeply appreciate the feedback and are happy to receive any further questions, comments, and/or suggestions for improvements. Pdf: /pdf/6525447618c953f7fc24de7d979318496d7b06d7.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper presents a self-supervised learning approach for medical image segmentation that uses matching keypoints to contrast local features that may be spatially distant from one another. The proposed model pretrains a segmentation network on unlabelled images, using three contrastive losses: a global loss on the keypoint-enhanced features, a global loss on pooled features in the last layer (as in PCL) and a local loss on features of matching keypoints (which are spatially close and have a high matching probability). This model exploits Keypoint-Augmented Fusion (KAF) layers, which transform the features of detected keypoints using self-attention transformer layers and then concatenate the transformed features to the original feature map (after constructing a sparse map that is diffused to a dense map via CNN layers). The SuperGlue network of Sarlin et al. is employed to compute the keypoints and their matching scores. The proposed approach is evaluated on two cardiac MRI segmentation datasets (ACDC and CHD), on which it shows superior performance compared to recent self-supervised approaches. Strengths: * The idea of using keypoints in a local contrastive loss for self-supervised pre-training is novel (to my knowledge) and interesting. Although it relies on the accurate detection and matching of keypoints, it has the potential to overcome the limitations of current strategies, for instance, those based on spatial proximity and transformation consistency (which cannot be used across different images). * The method is sound and well presented. * Experiments are detailed and results show clear improvements over recent self-supervised approaches for segmentation in few-shot settings. Likewise, ablation studies and visualization experiments are nicely done and showcase the proposed method. * The paper is quite well written. Weaknesses: * While the proposed method is compared against recent self-supervised approaches, only one of these approaches (GLCL) tries to learn a local representation, which makes the comparison a bit unfair. Yet, several approaches have been proposed for this purpose; for instance, see the references below. As I understand, these approaches can also contrast local features that are semantically related (not necessarily spatially related), for example, based on clustering as in Peng et al., pseudo-labels as in Zhong et al., or super-pixels as in Chaitanya et al. Wang, Zhaoqing, Qiang Li, Guoxin Zhang, Pengfei Wan, Wen Zheng, Nannan Wang, Mingming Gong, and Tongliang Liu. "Exploring set similarity for dense self-supervised representation learning." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16590-16599. 2022. Peng, Jizong, Marco Pedersoli, and Christian Desrosiers. "Boosting Semi-supervised Image Segmentation with Global and Local Mutual Information Regularization." Machine Learning for Biomedical Imaging 1, no. MIDL 2020 special issue (2021): 1-29. Zhong, Yuanyi, Bodi Yuan, Hong Wu, Zhiqiang Yuan, Jian Peng, and Yu-Xiong Wang. "Pixel contrastive-consistent semi-supervised semantic segmentation." In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 7273-7282. 2021. Chaitanya, Krishna, Ertunc Erdil, Neerav Karani, and Ender Konukoglu. "Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation." Medical Image Analysis 87 (2023): 102792. * Some of the reported results are not consistent with the literature. Namely, Table 1 reports a Dice of 0.642 for GLCL-full with M=2 in ACDC whereas the original paper reports a Dice of 0.789, which is higher than the results of the proposed method. * A potential weakness of the proposed method is that it requires finding matching local features across different images. Typically, descriptors like SIFT focus on points corresponding to sharp edges or blobs, which are present in cardiac structures. I suspect such points would be harder to find in structures with lower contrast, such as the prostate in MRI. Thus, it would be necessary to test the method on a broader range of organs and even other modalities like CT. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * What is the threshold used to match keypoints? How sensitive is the method to the number of matching keypoints? * Why are the results reported for GLCL-full lower than those of the original paper? * Why not test the method on other segmentation tasks? For example, the GLCL paper by Chaitanya et al. also tests on prostate segmentation. * In the ablation study (Table 2), why not test the case where w1=w2=0? This would be useful since the main novelty of the method lies in L_local. * In eq (2), does the index N+1 correspond to a no-match class? Other comments: * p5: "based on the cloest L2 distance" --> closest Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Some limitations are mentioned in the discussion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
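For readers unfamiliar with the kind of local loss this review refers to, here is a generic InfoNCE-style sketch over matched keypoint features. It is only a schematic stand-in: the paper's eq. (2) follows SuperGlue and may differ in detail.

```python
import torch
import torch.nn.functional as F

def local_contrastive_loss(feats_a: torch.Tensor, feats_b: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    # feats_a, feats_b: (N, D) features of N matched keypoint pairs;
    # row i of feats_a corresponds to row i of feats_b.
    a = F.normalize(feats_a, dim=-1)
    b = F.normalize(feats_b, dim=-1)
    logits = a @ b.t() / temperature    # (N, N) scaled cosine similarities
    targets = torch.arange(len(a))      # positives sit on the diagonal
    # Symmetrized cross-entropy: matched pairs attract, all others repel.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = local_contrastive_loss(torch.randn(64, 128), torch.randn(64, 128))
```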
Rebuttal 1: Rebuttal:

> Comparison with other local SSL methods.

Thank you for offering additional literature. We will add them to our paper. To clarify the difference between our method and `Peng et al., Zhong et al.`, `Chaitanya et al.`: while they incorporate local losses, their frameworks are designed for **semi**-supervised segmentation purposes, which requires a set of labeled data for joint training. Our method is designed for more general-purpose **self**-supervised representation learning, which can be further finetuned on a small set of annotations. We agree that the local losses in these frameworks may be applicable to self-supervised training, but properly repurposing them may require further investigation and remains an interesting future direction. `Wang et al.` is relevant to our approach, and they establish local set correspondence between two different views of an input image. Unfortunately, we were unable to find its public source code, and reproducing the method (originally designed for natural images) on medical images requires additional consideration. We will include the comparison with `Wang et al.` if the authors open-source their code in the future. Yet, for more comprehensive benchmarking purposes, we now add an additional *self-supervised* local SSL method as suggested by Reviewer `PaeV`: "CAiD: Context-Aware Instance Discrimination for Self-supervised Learning in Medical Imaging". Results are reported below. Our method outperforms CAiD on both datasets under different numbers of finetuning subjects (M). A full comparison will be added to the paper.

| SSL Method | CHD (M=2) | CHD (M=15) | ACDC (M=2) | ACDC (M=6) |
|------------|:---------:|:----------:|:----------:|:----------:|
| *CAiD* | 0.265(.08) | 0.684(.04) | 0.483(.11) | 0.822(.02) |
| Ours | **0.392(.06)** | **0.712(.03)** | **0.741(.03)** | **0.873(.01)** |

> Inconsistent Dice between GLCL-full and original paper

Thank you for pointing this out! We believe this is a miscommunication on our part. In the original GLCL [1] paper, experiments are conducted based on a single-fold train/test split, and results are reported on a held-out test set. In our experiment, we followed the more recent PCL [2] paper and conducted 5-fold cross validation. Table 1 indicates the average dice across the 5 folds. Note that our reported result for GLCL-global is consistent with the GCL results reported in the PCL paper's Table 1 (where they adopted the global loss of the GLCL). We will clarify the evaluation differences in our paper. [1] Chaitanya et al., "Contrastive learning of global and local features for medical image segmentation with limited annotations" [2] Zeng et al., "Positional Contrastive Learning for Volumetric Medical Image Segmentation"

> Test the method on other organs and other modalities

Thanks for your suggestion! We now add an additional multi-organ CT dataset, Synapse [1], and the comparison with SoTA methods is below. Consistent with ACDC and CHD, our framework trained from randomly initialized weights or pretrained with the SSL loss outperformed existing methods. We will supplement the full comparison in the paper.

| Init. | Method | Dice (M=2) |
|--------------|----------------------|------------|
| Random Init. | Unet | 0.253(.06) |
| | Swin-Unet | 0.198(.04) |
| | SwinUNETR | 0.279(.06) |
| | Ours | 0.289(.06) |
| SSL pretrain | PCL | 0.306(.05) |
| | Swin-Unet (with PCL) | 0.210(.07) |
| | Ours | 0.322(.06) |

[1] Synapse dataset: https://www.synapse.org/#!Synapse:syn3193805/wiki/217785

> Threshold used to match keypoints and sensitivity

Thank you. We will specify the threshold. We used 20 in the paper to find the correspondence. To verify how sensitive the method is to the number of matching keypoints, we perform an ablation on different threshold values, pretraining our model on CHD before finetuning it on $M=15$ labeled data. We turned off the global losses ($w_1=w_2=0$) to isolate the effects of the threshold for the local SSL loss. Average dice scores are reported below. As indicated by the results, our model remains robust across different threshold values and numbers of matching keypoints. We also include a visualization in the attached pdf with the number of matching keypoints annotated under different threshold values.

| Threshold | Dice |
|:---------:|:-------------:|
| 5 | 0.689 (0.043) |
| 10 | 0.690 (0.031) |
| 15 | 0.684 (0.035) |
| 20 | 0.689 (0.036) |
| 25 | 0.684 (0.032) |
| 30 | 0.685 (0.033) |
| 35 | 0.690 (0.037) |
| 40 | 0.687 (0.037) |

> w1=w2=0 ablation

Thank you. We have reported the results of $w_1=w_2=0$ and more ablations on the weight terms in our *appendix A, "Ablation on Correspondence Weights"*. Empirically, we find that only using the local loss for pretraining outperforms a randomly initialized UNet, which indicates that enforcing the localized representation across different slices offers a better initialization. However, a combined global and local loss during pretraining further increased the performance. This is also consistent with the observations in [1] and [2], where it is necessary that the local loss is combined with a global loss and/or other regularizations to avoid degenerate representations. [1] Ren et al., "Local Spatiotemporal Representation Learning for Longitudinally-consistent Neuroimage Analysis", [2] Chaitanya et al., "Contrastive learning of global and local features for medical image segmentation with limited annotations".

> In eq (2), index N+1

Yes. We follow [1] for the notation. We will clarify this in the paper. [1] Sarlin et al., "SuperGlue: Learning Feature Matching with Graph Neural Networks"

--- Rebuttal Comment 1.1: Title: Thanks for the detailed answers Comment: I have read the authors' answers carefully and I am generally satisfied with them. Regarding the first point (Comparison with other local SSL methods), I think the paper by Chaitanya et al. also evaluates their method in a self-supervised pre-training setup. Moreover, while it is true that the method of Peng et al. is tested in a semi-supervised setup, it was extended to self-supervised learning in a follow-up work: Peng, Jizong, Ping Wang, Marco Pedersoli, and Christian Desrosiers. "Boundary-aware information maximization for self-supervised medical image segmentation." arXiv preprint arXiv:2202.02371 (2022).

--- Reply to Comment 1.1.1: Comment: Dear Reviewer b18J, Thank you very much for your positive comments, and for continuously pointing us toward the vital follow-up work of GLCL and Peng et al. For Peng et al., we have not yet been able to find the open-source code, but we will add the reference to our manuscript. For Chaitanya et al., we conducted their semi-supervised training on the ACDC dataset with M=2, and a comparison is reported as follows:

| Dataset | Sample M | Method | Mean/Std |
|---------|----------|---------------|-------------|
| ACDC | 2 | Local Semi by Chaitanya et al. [1] | 0.724(.071) |
| | | Ours | 0.741(.034) |

We further acknowledge that self-supervised pretraining can be integrated with a semi-supervised framework, as pointed out by the reviewer. In Chaitanya et al., they first performed GLCL pretraining, which then serves as an initialization for the downstream joint semi-supervised finetuning. Similarly, our pretraining can serve as an alternative for initializing the network before performing downstream joint semi-supervised training. Please let us know if you have any further comments. [1] Chaitanya, Krishna, Ertunc Erdil, Neerav Karani, and Ender Konukoglu. "Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation." Medical Image Analysis 87 (2023): 102792.
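The protocol difference invoked above (a single train/test split versus 5-fold cross-validation) is easy to make concrete. Below is a minimal sketch of the 5-fold protocol over subjects; the subject IDs and the commented-out training/evaluation calls are placeholders, not the authors' pipeline.

```python
import numpy as np
from sklearn.model_selection import KFold

subjects = np.arange(100)  # hypothetical subject IDs
fold_scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True,
                                 random_state=0).split(subjects):
    # model = train_segmenter(subjects[train_idx])                 # placeholder
    # fold_scores.append(evaluate_dice(model, subjects[test_idx])) # placeholder
    fold_scores.append(np.random.rand())  # stand-in for a real Dice score

print(f"mean Dice: {np.mean(fold_scores):.3f} ({np.std(fold_scores):.3f})")
```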
null
null
null
null
null
null
Data Augmentations for Improved (Large) Language Model Generalization
Accept (poster)
Summary: This paper focuses on counterfactual data augmentation for training robust machine learning models on text data. The applications concerned, such as healthcare, are safety-critical, highlighting the importance of this work. To generate data, the authors propose to employ a Large Language Model (LLM) to rewrite the documents. The authors also propose utilizing auxiliary data in the generation process to encourage robustness. In the experiments, the proposed method is shown to be both robust and effective. Strengths: 1. The research topic, training robust classifiers for safety-critical applications, is clearly an important task. Therefore, the positive experimental results as reported are expected to bring practical value to the real world. 2. The model performance reported in the experiment section is good. Normally an out-of-distribution generalizable model sacrifices performance in the in-distribution setting. It seems that the proposed method does not suffer from this issue. 3. The paper is overall well-written and easy to follow. The experiment settings in the main paper and Appendix are detailed. Weaknesses: 1. The novelty of the proposed method may be limited. It seems that the main difference between the existing and the proposed method is the introduction of auxiliary data, which may not be considered a significant technical advance. 2. The competitors in the experiments are rather basic. In particular, it may be worth considering including IRM (Arjovsky et al., 2019) and GroupDRO (Sagawa et al., 2019) (or Just-Train-Twice (Liu et al., 2021) if the authors are interested in re-weighting) for comparison. 3. As this research targets safety-critical applications, there are some details regarding model bias and robustness that may require more clarification. The questions are listed under "Questions for Authors". Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. The data augmentation process relies on an off-the-shelf LLM. Is it possible that the LLM can bring additional and uncontrollable biases to the proposed method? 2. Following the first question, how critical is the selection of the LLM in this work? 3. How critical is the quality of the auxiliary data? Also, would it be difficult to identify informative auxiliary data in practice? 4. Regarding the writing, - line 134: Should the term under argmax be c \in [K] instead of y \in [K]? - Would it be clearer if the authors specified that the employed LLM is GPT4 in the main text instead of the Appendix? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors adequately address the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
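For reference, the GroupDRO baseline this review asks about maintains a weight per group and upweights the worst-performing group during training. A rough sketch of the objective follows; it is illustrative, not the exact formulation of Sagawa et al., 2019.

```python
import torch

def group_dro_loss(per_example_loss: torch.Tensor, group_ids: torch.Tensor,
                   group_weights: torch.Tensor, eta: float = 0.01):
    n_groups = group_weights.numel()
    group_losses = torch.stack([
        per_example_loss[group_ids == g].mean() if (group_ids == g).any()
        else torch.tensor(0.0) for g in range(n_groups)])
    with torch.no_grad():  # exponentiated-gradient update of group weights
        group_weights *= torch.exp(eta * group_losses)
        group_weights /= group_weights.sum()
    return group_weights @ group_losses, group_weights

weights = torch.ones(4) / 4
loss, weights = group_dro_loss(torch.rand(32), torch.randint(0, 4, (32,)), weights)
```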
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and for the positive points recognizing the importance of the topic we study, the achieved empirical performance, and the writing quality. In what follows we aim to address each question and concern in your review.

**Limited novelty**: The novelty of our work, besides the favorable performance (w.r.t. all baselines) on a large-scale real-world task, is:

1. Introducing causal methods for producing counterfactually augmented data that substantially improves OOD performance. We note that merely “introducing auxiliary data” is not enough to achieve the performance we observe, and the auxiliary data must be used in a disciplined manner (in our method, we are guided by causal inference methodology, i.e. matching and diff-in-diff).
2. Our theoretical analysis departs significantly from previous work on counterfactual data augmentation by considering sample complexity aspects instead of analyzing performance given infinite data.
3. The proposed method and analysis open up the possibility of using any applied causal method for generating training data. We highlight two different approaches in this paper: matching and diff-in-diff. However, in principle, other methods from the econometrics literature, such as instrumental variables and synthetic controls, can be leveraged for better OOD performance using our approach, and we expect future work to significantly extend ours.

**Baselines**: Following our model for the problem in Fig. 1, we choose baselines that are appropriate for this task since they (a) are proposed in a paper that studies this anti-causal prediction problem, and (b) achieve state-of-the-art performance on it. We have experimented with other baselines on the medical notes task, and following your comment, we will include results on GroupDRO and IRMv1 in the paper. The results can be found in the PDF attached to the rebuttal. IRMv1 underperforms the baseline from our original submission and GroupDRO is comparable, while CATO (A) has the best performance. We refrained from including these baselines in the original submission, since previous work has already documented their limited success on most large-scale problems (e.g. [Rosenfeld et al. 20, Kamath et al. 21, Gulrajani and Lopez-Paz 20]) and we wanted to avoid making this the focus of our work. The medical notes task is far larger in scale than the datasets where these methods have shown significant gains, and devising methods that show gains in large-scale problems was among our main motivations for this work. Following your comment, we see why it is important to include these results, and it led us to revise our decision. Thank you for helping us improve this aspect of the paper.

**LLM-induced biases**: This is a great point! We have also included a discussion on this in our general response. Your comment about LLMs varying in their generative abilities is on point. We’ve experimented with several models in the medical notes task and will include our results on Bio-BERT, Sentence-BERT and GPT4 in our revised paper. We find that models trained on synthetically augmented data using each of these LLMs result in improved OOD performance w.r.t. the baselines. Connecting this to your second question about the biases of off-the-shelf LLMs, it is possible that LLMs introduce biases into our problem, inherited from their own training data, and these biases may be specific to each LLM. This requires further study; however, from our manual examination we found their quality satisfactory (see Appendix C for generation examples) and found that all of them helped in OOD generalization, which is our main evaluation metric (note that inaccurate counterfactual estimations in our case are only harmful if they adversely affect downstream predictive performance). Nonetheless, we believe that further studies are required to characterize the possible biases of LLMs in the context of our work, for instance by comparing the counterfactual estimates made by LLMs with rewrites of the text made by the human whose style we are trying to emulate (over a small curated dataset).

**Quality of auxiliary data**: This is also an important point. As in any causal matching problem, having reliable auxiliary data is crucial for selecting control examples. If the auxiliary data does not capture any information or contains wrong information, we won’t be able to correctly match examples. Looking into our experimental analysis, we show that in two problems with radically different auxiliary data (complete patient history in the clinical narratives, and three review indicators in CeBAB) we get better performance than very strong baselines. We will add additional analysis where we corrupt the auxiliary data in both real-world experiments (we have already done that for the synthetic data).

**Comments on writing**: Thanks for turning our attention to the typo in line 134; the term refers to the Bayes-optimal classifier, so it should be $y\in{[L]}$ (L being the number of classes) instead of $y\in{[K]}$. We will also move details on the LLMs we use into the main paper.

**References**: 1. Rosenfeld et al. 20, The Risks of Invariant Risk Minimization 2. Kamath et al. 21, Does Invariant Risk Minimization Capture Invariance? 3. Gulrajani & Lopez-Paz 20, In Search of Lost Domain Generalization

--- Rebuttal Comment 1.1: Comment: I appreciate the authors’ hard work in the rebuttal period. I am still concerned about the heavy reliance on LLMs in this particular application. However, I agree that the authors have provided sufficient studies addressing or acknowledging the potential limitations. As some of my concerns are resolved, I would like to increase the score by 1.

--- Reply to Comment 1.1.1: Comment: Thank you very much for your engagement in the discussion and for updating your score to reflect your perception of the paper. We greatly appreciate the effort you put into the process, and hope that our added experiments, discussion on the limitations of using LLMs, and other edits motivated by your review will address the remaining concerns.
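To make the matching-guided generation discussed above concrete, here is a heavily simplified sketch of one plausible pipeline: match each note to a control note with similar auxiliary data but a different writer, then prompt an LLM to rewrite it in the control writer's style. The function names, the prompt, and `rewrite_with_llm` are all hypothetical placeholders, not the authors' code or a real API.

```python
import numpy as np

def rewrite_with_llm(prompt: str) -> str:
    """Stand-in for an actual LLM call; swap in a real client here."""
    return "<rewritten note>"  # placeholder output

def nearest_control(aux_i: np.ndarray, aux_pool: np.ndarray,
                    writers: np.ndarray, writer_i: int) -> int:
    # Match on auxiliary data (e.g., vitals, medications) among notes
    # written by a *different* caregiver, in the spirit of causal matching.
    dists = np.linalg.norm(aux_pool - aux_i, axis=1)
    dists[writers == writer_i] = np.inf  # exclude same-writer notes
    return int(dists.argmin())

def augment_note(note, aux_i, notes, aux_pool, writers, writer_i):
    j = nearest_control(aux_i, aux_pool, writers, writer_i)
    prompt = ("Rewrite the clinical note below in the style of the example, "
              "preserving all medical facts.\n"
              f"Example (target style):\n{notes[j]}\n\nNote to rewrite:\n{note}")
    return rewrite_with_llm(prompt)
```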
Summary: The paper addresses the issue of text classifiers relying on spurious correlations, leading to poor out-of-domain generalization, particularly in critical areas like healthcare. The authors propose using counterfactual data augmentation, guided by knowledge of the causal data structure, to create text classifiers more robust to distribution shifts. Using an LLM to implement this approach, they demonstrate effectiveness in tasks such as predicting clinical diagnoses from medical narratives and observe improved out-of-distribution accuracy compared to baseline algorithms. They suggest that their method could be beneficial in dealing with various distribution-shift problems in machine learning applications. They demonstrate the utility of using language models to create counterfactuals that could improve model robustness. The paper also goes a step further in formalizing counterfactual data augmentation. Strengths: - The authors experiment with both real-world clinical diagnosis scenarios and semi-synthetic data, demonstrating the broad applicability and practicality of their proposed methodology. Working with healthcare data isn't always easy either, so credit for that choice as well. - By using the capabilities of large language models for counterfactual generation, this work expands the potential applications of counterfactually augmented data while reducing the costs associated with manually constructing counterfactuals. - The paper offers a formalization of counterfactually augmented data, which, paired with prior work, allows easier understanding and replication of the process for future researchers. Weaknesses: - The method pre-supposes full knowledge of the causal structure of the data and overlooks the complexity involved in real-world datasets where such information might not be readily available or accurately defined. Furthermore, assuming "no unmeasured confounding" isn't always realistic in real-world data sets, especially those from healthcare, possibly limiting the model's robustness within certain contexts. Granted, it is a common assumption in causal inference, but causal inference methods are also usually restricted to simple worlds that can be represented in a few variables. - The authors acknowledge that generating versions of clinical narratives as if they had been written by different caregivers is difficult to achieve in practice. However, there is not enough discussion of the substantive solutions or workarounds they employed (or considered) to meet these challenges. - The paper needs greater discussion of prior work that has attempted to address these issues, including attempts at formalizing counterfactually augmented data and counterfactual invariance (see [1] and [2]). [1] Divyansh Kaushik, Amrith Setlur, Eduard Hovy, and Zachary C. Lipton. "Explaining the efficacy of counterfactually augmented data." ICLR 2021. [2] Victor Veitch, Alexander D'Amour, Steve Yadlowsky, and Jacob Eisenstein. "Counterfactual invariance to spurious correlations: Why and how to pass stress tests." NeurIPS 2021. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - I think the paper could benefit from a discussion of how it differs from [1] and [2]. Could you share how you view the difference (the bounds are clearly new)? - In Section 4 you share using auxiliary data to help with counterfactual generations so the models don't hallucinate. Could you share how often the models hallucinated (or not) when this auxiliary data was presented vs. when it wasn't? - Could you share results for a baseline with 2X datapoints from the original distribution (observational data alone), compared against the counterfactually augmented data (X original + X' counterfactuals)? - In line 308, you say "across 5 runs". What is the source of randomness in these 5 runs? - Am I correct in interpreting Figure 4's findings that seemingly more corruption leads to better performance? Could you elaborate? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the in-depth review of the paper and for raising important questions that help us improve it. Please find our responses to the concerns and questions below.

**Knowledge of causal structure**: While our method assumes some knowledge of the causal structure (i.e. Figure 1, an anti-causal prediction problem), we clarify that it does not require a fully specified structure that includes each and every feature in the problem (e.g. every component of the auxiliary data $M$). There are multiple graphs that comply with a “no unobserved confounding” assumption given $M$, and also with the constant effect assumption (which is a functional assumption, rather than a graphical one). Nonetheless, we agree that these are strong assumptions that require domain knowledge. In this context, it is important to note that unlike standard causal inference problems, we are interested in downstream OOD prediction abilities instead of the estimation of a specific counterfactual text (i.e. the estimation accuracy). Our generalization bound suggests that the distribution of augmented data should resemble the distribution of true counterfactuals; this can still hold when specific estimations suffer from some errors. Eventually, the evaluation that matters to us most is prediction accuracy compared to the baselines.

**Difference from Kaushik et al. and Veitch et al.**: Thank you for raising this point, which deserved more attention in the paper. Since reviewer J1UG also asked about comparison with papers similar to the ones you mentioned, we included this in our general response and elaborate here. The focus of our work is on studying estimation methods for Counterfactual Data Augmentation (CDA) and explaining their improved performance from the view of sample complexity. Compared to previous work on CDA, Kaushik et al. 20 (and subsequent work in Kaushik et al. 21) do not propose scalable estimation methods and rely on manual rewriting of text by humans. Veitch et al. 21 propose a regularizer that promotes conditional independence, instead of CDA. We claim, and show empirically, that in large-scale problems such methods fail due to challenging optimization and statistical complexity. We are not aware of other works that adapt causal estimation methods for the purpose of data augmentation. As for formal results, Kaushik et al. 21 treat a rather specific data-generating process, and the most general treatment we are aware of is by Wang & Veitch 23 (a follow-up to Veitch et al. 21). These works discuss properties of CDA under infinite data, as opposed to our finite-sample point of view. Specifically, our formal setting is an instance of the “purely spurious” problems defined by Wang & Veitch, where they show CDA achieves a min-max optimal model provided infinite data (as a side note, the assumptions for our estimation methods are not specific to cases where the graph in Fig. 1 holds, and hence they can be utilized in other such “purely spurious” settings). In contrast, we make an argument for the *sample complexity* of *approximate* CDA (i.e. augmentation might not be perfect). To this end, we identify a baseline that in our problem setting is also min-max optimal provided infinite data, and draw a comparison between the sample complexities of the two methods to explain the performance we observe in practice. Finally, our main empirical contribution is favorable out-of-distribution (OOD) generalization in a complex, large-scale, real-world problem of medical note classification, which further distinguishes our work.

**How often models “hallucinate”**: Estimating this reliably warrants manual inspection of many pairs of original and generated text to determine, in each instance, whether the LLM filled in data that did not exist in the original text. Since our final goal is not a perfectly accurate counterfactual estimation but rather favorable downstream classification accuracy, we did not perform an extensive evaluation to determine how frequently this occurs. Instead we found examples of such incidents during our manual experimentation with the models, and verified that it is indeed a possible failure (this also makes sense considering we provide example texts with certain properties), which can be amended by matching on auxiliary data. Furthermore, note that because of the focus on downstream prediction accuracy, our bound implies that inaccurate estimations of the counterfactual text can damage performance, regardless of whether these inaccuracies are “hallucinations” or other types of mistakes (e.g. omitting data from the note). Therefore our example in section 4 is merely one instance of possible mistakes that can be prevented by additional context.

**Details on experiments**: Following your comments, we added an experiment with equal-sized observational samples and augmented samples. Preliminary results on this new experiment, which sheds light on the effect of augmentation while controlling for sample sizes, are in the attached PDF. We should obtain the full results (across all runs and tasks) during the discussion period and will update them. Regarding the source of randomness across 5 runs, these correspond to different random seeds. Finally, thank you for the careful reading of the synthetic experiments. For Figure 4, lower values of $\lambda$ result in larger corruptions. Hence the 3 top curves, which correspond to CDA, show the best performance for $\lambda=0.4$ (the lowest corruption in this simulation) and the worst performance for $\lambda=0.2$. So, as intuition suggests, and opposite to your interpretation in the review, larger corruption results in inferior performance. We think the confusing part is that a *lower* value of $\lambda$ corresponds to a *higher* corruption. We will better emphasize this in the description of the figure; thank you for turning our attention to this.

**Additional references**: Wang & Veitch 23, The Causal Structure of Domain Invariant Supervised Representation Learning

--- Rebuttal Comment 1.1: Title: Thanks for the clarification Comment: Thanks for the response. It clarifies many things for me. I am happy to update my score to recommend acceptance, assuming the authors will be updating the camera-ready to reflect these clarifications as well.

--- Reply to Comment 1.1.1: Comment: Thank you very much for your engagement in the discussion, and for updating your score to reflect your perception of the paper. We greatly appreciate the effort you put into the process, and have already incorporated all the clarifications in our current revision of the paper. We also completed the experiments for which the initial results can be found in the PDF, as promised.
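The reweighting baseline referred to throughout this rebuttal can be illustrated generically: weight examples so that, after weighting, the label looks independent of the spurious attribute. Below is a sketch with $w(y, c) \propto p(y) / p(y \mid c)$; it illustrates the general idea rather than the exact estimator of Makar et al.

```python
import numpy as np

def balancing_weights(y: np.ndarray, c: np.ndarray) -> np.ndarray:
    # w(y, c) = p(y) / p(y | c); after weighting, p~(y, c) = p(y) p(c).
    weights = np.empty(len(y), dtype=float)
    for yv in np.unique(y):
        p_y = (y == yv).mean()
        for cv in np.unique(c):
            p_y_given_c = (y[c == cv] == yv).mean()
            weights[(y == yv) & (c == cv)] = p_y / p_y_given_c
    return weights

y = np.random.randint(0, 2, 1000)
c = (np.random.rand(1000) < 0.3 + 0.4 * y).astype(int)  # spurious correlation
w = balancing_weights(y, c)  # use as per-example weights in the training loss
```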
Summary: This paper develops and analyzes counterfactual data augmentation strategies for the setting where the data-generating process is anti-causal. First, they show the Bayes-optimal predictor in the setting they consider and then how that predictor can be obtained using counterfactual data augmentation. They give two different strategies to generate the counterfactual data: (1) prompting an LLM, and (2) using the difference-in-differences method. Then they analyze the sample complexity of counterfactual data augmentation and show that it is better than that of a baseline reweighting method. Finally, they empirically show the effectiveness of their method on synthetic and real-world datasets. Strengths: **Clarity**: The paper is well-written and easy to follow. **Sample Complexity Analysis**: The authors show that the sample complexity of counterfactual data augmentation is better than that of the reweighting baseline introduced in Makar et al. Weaknesses: 1. Contribution 2 - Counterfactual Data Augmentation (CDA) as a method to deconfound the target and spurious attribute: It is not clear how this observation/finding is different from the previous literature on CDA, e.g. Kaushik et al. (Explaining The Efficacy of Counterfactually Augmented Data) and Joshi et al. (An Investigation of the (In)effectiveness of Counterfactually Augmented Data). 2. Lemma 1 is similar to the claim shown in Makar et al. (Causally motivated shortcut removal using auxiliary labels). 3. Assumption 1 (constant effect): How justified is the assumption that the spurious attribute $c$ changes the previous state in an additive manner? 4. The augmentation strategy in this paper is limited to the anti-causal setting, which further limits its applicability. I understand it is standard practice in the OOD generalization literature to develop a method under a specific data-generating process (causal and anti-causal), but then this paper doesn’t introduce anything new relative to previous works. Overall this paper seems to re-instantiate the point mentioned in previous work that argues for the effectiveness of counterfactual data augmentation. Though this paper introduces two new ways to perform data augmentation in the context of medical datasets, the overall novelty seems low. Technical Quality: 3 good Clarity: 3 good Questions for Authors: **Minor Typos/Corrections**: 1. Line 117 (Definition 1): State that $\Delta^{K-1}$ is a simplex Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We greatly value your feedback and would like to address each comment you’ve provided while highlighting the essential aspects crucial for evaluating our work. **Distinction from previous work**: Our contribution stands out first and foremost by the study of **counterfactual estimation methods**, and also the treatment of **finite sample effects**. These distinguish our work from those of Kaushik et al. 21 and Joshi & He 22. In reference to these papers, we would like to also add Wang & Veitch 23 to our discussion. It is important to emphasize that *all the aforementioned papers do not discuss estimation methods from observed data* nor address sample complexity. The first point is extremely crucial since the type of Counterfactual Data Augmentation (CDA) proposed by Kaushik et al. 20 is infeasible in our task. To put this in perspective, consider letting 3000 caretakers rewrite each other’s notes enough times to obtain an adequately large training set. To perform CDA at this scale, estimation methods are necessary, and we are unaware of papers that propose causal estimation methods for purposes of data augmentation. As for the finite sample analysis, your review points to this as a strength of our paper. We emphasize that this departs from previous explanations of the effectiveness of CDA. Kaushik et al. and Joshi & He discuss the effect of CDA with an infinite sample and a very specific data-generating process. Thm. 9 in Wang & Veitch gives a more general structure where CDA provably helps generalization in what they call “purely spurious” problems. In the language of Wang & Veitch, the problem we treat is an instance of these “purely spurious” problems. In our case, there are other methods besides CDA that learn the optimal model (all of these methods require infinite data). Hence a finite sample analysis illuminates CDA's favorable performance w.r.t. baselines (provided our CDA is of sufficient quality) instead of simply reiterating known observations. Thank you for this comment; we see that contribution 2 should emphasize the point on finite samples rather than the other points it makes. We will rephrase it accordingly. **Lemma 1 and Makar et al.**: Footnote 2 of our paper explicitly states that this claim was proved in Makar et al. We included it in the paper using our notation since the lemma is important for motivating our method. We do not see how this is a weakness of the paper. **Constant effect assumption**: Before discussing the validity of the assumption, we clarify that its focus is on the effect being constant, rather than being additive. By definition, the quantity $x_i(c) - x_{i, pre}$ exists and can be defined as an additive effect. The strong assumption is that once we fix $c$ and the auxiliary data $m_i$, this effect is constant (i.e. does not depend on $i$). The assumption becomes more reasonable the more auxiliary features we control for. In experiments, we control for many factors such as prescribed medications, vitals, etc. While assuming a purely constant effect is unrealistic, our generalization bound includes a distributional distance between counterfactuals and their estimations. Hence even if the assumption is imprecise, the method may still suffice to improve generalization. Weaker versions of the assumption can thus be phrased and reasoned about, yet in this work, we opt for a simple version that is easy to communicate and fits the scope of the paper.
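To make the constant-effect logic above concrete, here is a toy numpy sketch of the difference-in-differences style estimate it licenses. All variable names and sizes are illustrative assumptions of ours; the actual method operates on text with matched auxiliary data $m$ rather than raw vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: x_pre[i] is a pre-style note representation, x_post[i]
# the same note after caretaker c's style is applied. Under the
# constant effect assumption, the effect is identical for all i
# (here we pretend all rows are already matched on auxiliary data m).
x_pre = rng.normal(size=(100, 16))
true_effect = rng.normal(size=16)  # the (unknown) constant style effect
x_post = x_pre + true_effect + 0.1 * rng.normal(size=(100, 16))

# Difference-in-differences style estimate of the constant effect:
effect_hat = (x_post - x_pre).mean(axis=0)

# Counterfactual "what if caretaker c had written this other note":
x_other = rng.normal(size=16)
x_counterfactual = x_other + effect_hat
```

Averaging over matched pairs is exactly where the assumption bites: if the effect varied with $i$, `effect_hat` would blur distinct styles together.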
**Augmentation limited to anti-causal case**: There is no inherent limitation to applying the estimation methods in causal prediction problems. We use the specific anti-causal problem to motivate CDA in a principled manner, as it describes the main problem we study and it is possible to analytically compare sample complexity with an alternative solution (i.e. reweighting). CDA is also appropriate in the “purely spurious” problems mentioned in response to point 1 in your review. These are more general and include some causal prediction problems. As for obtaining the counterfactual estimates, our techniques remain viable when the required assumptions hold (constant effect, or auxiliary data accounts for all unobserved factors). Nonetheless, we agree that like all methods for OOD generalization, ours are not effective for all possible problems and must be used with caution. Thank you for turning our attention to this; our revised version will mention the usefulness of CDA in the more general setting of Wang & Veitch 23. **The paper re-instantiates points on CDA**: We believe that dismissing our work as “re-instantiating the point made in previous work” on CDA is not justified, for several reasons. As explained in our rebuttal, estimation methods are crucial in practice and they are applicable beyond the medical datasets we work with. Our main interest is OOD generalization, and given that there are hundreds of papers on methods for this problem, we believe that a method that significantly outperforms the baselines on a large-scale real-world task is of interest to the community. At least from our familiarity with the literature, the large majority of proposed methods do not demonstrate results of this type. We are also unaware of other works that utilize causal estimation methods for data augmentation, and our finite sample bounds are different from analyses of previous work on CDA. Overall, we think the focus of our work has a real but small overlap with those mentioned in the review. **References**: 1. Wang & Veitch 23 The Causal Structure of Domain Invariant Supervised Representation Learning 2. Makar et al. 22 Causally motivated Shortcut Removal Using Auxiliary Labels 3. Kaushik et al. 20 Learning the Difference that Makes a Difference with Counterfactually-Augmented Data 4. Kaushik et al. 21 Explaining the Efficacy of Counterfactually Augmented Data 5. Joshi & He 22 An Investigation of the (In)effectiveness of Counterfactually Augmented Data --- Rebuttal Comment 1.1: Title: Response to Authors Comment: I thank the authors for taking the time to address my questions. Also, apologies for sounding dismissive of your work in the context of the overall contribution, and I thank you for pointing out the main contributions of the paper, i.e. the finite-sample complexity analysis and making CAD work in practice using causal estimation techniques. Given this, I resonate with one of the concerns raised by Reviewer 8nR3 around the reliance of this work on LLMs and the potential bias introduced by them. I would like the authors to include the discussion they had regarding this issue in the final version of the paper. Overall, I am raising my score by 1 and would like the authors to incorporate the changes in the final version. --- Reply to Comment 1.1.1: Comment: Thank you very much for your engagement in the discussion and for updating your score to reflect your perception of the paper.
We greatly appreciate the effort you put into the process, and believe that our added discussion on limitations of using LLMs will contribute to the paper and address the remaining concerns.
Summary: The authors propose to use knowledge from the causal structure of the data to counterfactually simulate interventions on spurious features and to learn more robust classifiers. They focus on text classification tasks and argue that this approach is appropriate in prediction problems for which the label is spuriously correlated with an attribute. Since the authors argue that their approach emulates interventions, one assumes that they are referring to the second layer of Pearl's Causal Hierarchy. Here, we assume the data and the Causal Graph (CG). However, the authors left the specification of the causal graph to be addressed as future work. Additionally, following Pearl's Causal Hierarchy, when dealing with counterfactual questions, one must assume the data and the Structural Causal Model (SCM). Given that the authors provide neither the CG nor the SCM, I have concerns about how their approach generates counterfactually augmented data. Strengths: This paper presents an interesting approach to counterfactual-based augmentations focusing on textual data. Weaknesses: There are some important questions to better understand the proposed approach and the validity of results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The authors assume that the writing style of caretaker C affects the vector representation of the clinical note (Fig. 1). Further, they also assume that the label Y causes both X and X^. As far as I understood, Clinical Condition Prediction consists of, given a patient note (X), predicting the concept(s) associated with it (Y). I want to understand if this is a causal or anti-causal problem. According to Fig. 1, we pick a concept and then generate a clinical note. In Sec. 2, the authors mentioned several approaches to invariant representation learning and counterfactual augmented data generation; however, they did not select any of them as baseline(s) in the experiments section. Is it possible to provide a better understanding of both (i) why the selected baselines are the most appropriate? and (ii) why the related works are not representative baselines? Does one need to know whether the label is spuriously correlated with an attribute in advance? Is it possible to provide details about the private held-out data used in this paper? All information necessary and sufficient for others to reproduce the results using other datasets would be helpful. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I believe that the authors only mention limitations of their work in the last sentence of the Discussion section. I believe that it would be interesting to improve this discussion. For instance, it is very hard to have the correct Causal Graph for most important applications; hence, how could this be an issue, and, in the absence of the causal graph, what is the best approach? Another issue is how we can guarantee that the counterfactual instances generated during augmentation are realistic. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your review and questions, and are eager to clarify all points regarding our data, baselines, and assumptions. In response to questions: **Causal vs. anti-causal prediction problem**: As you correctly suggest in the review, we solve an anti-causal prediction problem. A true condition in the world, $Y$, along with the identity of the caretaker, $C$, determines the text we observe, $X$. Recovering $Y$ is thus an anti-causal prediction problem. Some additional points regarding this setting: The causal model we use has several roles. * To provide a coarse description of the real-world problem motivating this work, namely anti-causal prediction on clinical notes. We say the description is coarse since we omit (on purpose) some variables from the graph, like the auxiliary data $M$, since one can consider multiple possible graphical structures that include it where our method is adequate (we will elaborate on this in the appendix of our revision). * Another role of this setting is to motivate Counterfactual Data Augmentation (CDA), showing that given infinite training data it retrieves the min-max optimal solution for the problem. Note that Wang & Veitch 23 formalize a family of graphical structures they call “purely spurious”, and show CDA retrieves a min-max optimal predictor in such problems. Hence our methods are well motivated under any of these structures (which also include some causal prediction problems), and we choose the one most adequate for our problem of interest. * Finally, the setting allows for a formal comparison, in terms of sample complexity, between approximate data augmentation and a reweighting baseline that also provably solves the problem with enough data. **Baselines**: 1. Following our model for the problem in Fig. 1, we choose baselines that are appropriate for this task since they (a) are proposed in a paper that studies this anti-causal prediction problem, and (b) achieve state-of-the-art performance on it. 2. We have experimented with other baselines on the medical notes tasks, and following your comment we will include results on GroupDRO and IRMv1 in the paper. These are arguably the two most popular approaches in the literature, and the results can be found in the PDF attached to the rebuttal. IRMv1 underperforms the baseline from our original submission and GroupDRO is comparable, while CATO (A) is still the best performing method. We refrained from including these baselines in the original submission, since previous work already documented their limited success on most large-scale problems (e.g. [Rosenfeld et al. 20, Kamath et al. 21, Gulrajani and Lopez-Paz 20]) and we did not want to make that the focus of our work. The medical notes task is far larger in scale than the datasets where these methods have shown significant gains, and devising methods that show gains in large-scale problems was among our main motivations for this work. Following your comment, we see why it is important to include these results, and it led us to revise our decision. Thank you for helping us improve this aspect of the paper. **Knowing whether an attribute is spurious**: Our setup is suitable for problems where we have some domain knowledge, and we know that the correlation between the attribute and label is spurious (whether or not a large correlation exists in training data is something we can estimate numerically). This is a standard assumption in many works on spurious correlations (Makar et al. 22, Puli et al. 22, to name a couple).
Nonetheless, we think that discovery of spurious correlations is an important direction for future work. **Private held-out data**: In all clinical data experiments, we use notes from different hospitals as the held-out data. Whenever possible, we use publicly available datasets (i2b2 competitions) that have the same structure as MIMIC-III. Whenever such data is unavailable we use data from real hospitals. While we can’t publish this data for privacy reasons, it also has the same structure as MIMIC-III and i2b2. In response to limitations: **Knowledge of causal graphs**: We agree that to justify our methods some strong assumptions are required, as is often the case when a method involves causal estimation. However, we note that the causal graph assumed in Figure 1 is rather simple to reason about, and as explained earlier CDA is also suitable for several other graphs (see Wang & Veitch 23). The strongest assumptions we make, in our view, are the constant effect assumption for CATO (A), and that auxiliary data M accounts for all unobserved factors for CATO (B). Following this comment, we will add a section to the appendix that describes these assumptions much more extensively and discusses the possible limitations they pose. **Guaranteeing realistic counterfactuals**: While we cannot guarantee realistic counterfactuals with total confidence, we can take some measures such as letting caretakers rewrite a handful of medical notes taken by other caretakers (in the spirit of what’s proposed in Kaushik et al. 2020) and compare these to the ones generated by the LLM. The scale of this validation may be too small to certify realistic counterfactuals, but it could be valuable. In addition, it would be interesting to complement this in future work with sensitivity analysis and uncertainty estimation to gain a better sense of the possible inaccuracies of the counterfactuals. **References**: 1. Makar et al. 22, Causally motivated Shortcut Removal Using Auxiliary Labels 2. Puli et al. 22, Out-of-distribution Generalization in the Presence of Nuisance-Induced Spurious Correlations 3. Rosenfeld et al. 20, The Risks of Invariant Risk Minimization 4. Kamath et al. 21, Does Invariant Risk Minimization Capture Invariance? 5. Gulrajani & Lopez-Paz 20, In Search of Lost Domain Generalization 6. Wang & Veitch 23, The Causal Structure of Domain Invariant Supervised Representation Learning --- Rebuttal Comment 1.1: Comment: Since the discussion period ends in a few hours and no response has been posted, we would like to summarize and slightly extend our answer. The main topics raised in the review are baselines, knowledge of the causal graph, and verification of the counterfactuals’ quality. We have incorporated results for the required baselines in our response, showing our proposed method outperforms them and that they are on par with the previously considered baselines. For the causal graph, we emphasized that Counterfactual Data Augmentation (CDA) is well-motivated for a variety of structures (e.g. “purely spurious” cases as in Wang & Veitch 23), therefore the knowledge of the causal graph that we require is not as exact and fine-grained as the reviewer may have initially perceived. We focus on the anti-causal question as it describes our main problem of interest well, and allows a formal comparison to other consistent baselines in terms of sample complexity.
As for verifying the quality of the counterfactuals, we mentioned the option of comparing hand-written rewrites, and would further like to emphasize that in principle, under no unobserved confounding, another option is to use validation data. That is, we can take validation data written by caretaker $c$, with auxiliary data $m$, and generate counterfactuals in the style of $c$ from notes written by other caretakers on cases that match auxiliary data $m$. Then the validation sample can be compared to the generated counterfactuals via any two-sample test. Further concerns included knowledge of whether a spurious correlation exists, discussion of limitations, and information about the private data. We confirm that knowledge of whether an existing correlation between the attribute and a label is spurious is required (its existence can be verified statistically), yet this is a relevant assumption in practice and it is common in the vast majority of work on the topic. Discussion of limitations will be expanded as described in other responses, and the discussion above regarding the causal graph and the knowledge it requires will also be included. Finally, we noted that our results can be reproduced from public data whenever such validation data is available. As your review raised several important questions, while you maintain a high confidence score, we would be very grateful for any response regarding whether our rebuttal managed to answer any of the concerns mentioned in the review. Thank you again for reviewing our paper.
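As a concrete illustration of the validation idea above (comparing held-out notes by caretaker $c$ against generated counterfactuals via a two-sample test), here is a minimal Python sketch using a Kolmogorov-Smirnov test on one-dimensional summaries. The data and the choice of summary statistic are purely illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Stand-ins for 1-D summaries (e.g. one embedding coordinate) of
# validation notes truly written by caretaker c, and of generated
# counterfactuals "in the style of c" on cases matched on m.
real_notes = rng.normal(loc=0.0, scale=1.0, size=200)
counterfactuals = rng.normal(loc=0.1, scale=1.0, size=200)

# A small KS statistic / large p-value means the test fails to
# distinguish the two distributions -- weak evidence of fidelity.
result = ks_2samp(real_notes, counterfactuals)
print(f"KS statistic = {result.statistic:.3f}, p-value = {result.pvalue:.3f}")
```

For high-dimensional text embeddings one would substitute a multivariate two-sample test (e.g. an MMD-style test), but the workflow is the same.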
Rebuttal 1: Rebuttal: We thank the reviewers for their efforts and valuable comments, which led us to make important revisions to the paper, particularly by adding two additional baselines to our experiments (IRMv1 and GroupDRO), clarifying the connections of our work to previous papers on invariant learning and counterfactual augmentations, and adding a discussion on limitations of LLMs and possible biases they can introduce. We are grateful for the appreciation of the novelty of our formal treatment of finite sample effects (reviewers J1UG, Erys), the strong performance of our method in real-world experiments (reviewers Erys, 8nR3), the novelty of using LLMs for counterfactual augmentation (reviewers MXTx, Erys), and for finding our paper very interesting (reviewers MXTx, YqAy, 8nR3). Moreover, we are happy to hear that all reviewers found our work to be clear and well-written. Below we summarize our responses to the reviewers’ major concerns and review the major changes we introduced following the reviewers' comments. 1. **Additional baselines**: Reviewers YqAy, 8nR3 were curious about the performance of popular methods in OOD generalization that were excluded from our baselines. Following our model for the problem, we chose baselines that are appropriate for this task since they (a) are proposed in a paper that studies this anti-causal prediction problem, and (b) achieve state-of-the-art performance on it. Following the reviewers' comments, we have added experiments with two other baselines, GroupDRO and IRMv1, on the medical notes tasks. These are arguably the two most popular approaches in the literature, and the results can be found in the PDF attached to the rebuttal. As can be seen in our newly-added experiments, our method consistently outperforms these methods. 2. **Previous work on Counterfactual Data Augmentation (CDA)**: Reviewers J1UG, Erys wanted clarification about the connection of our work to previous prominent papers (e.g. Kaushik et al. 2021, Joshi & He 2022, Veitch et al. 2021). We emphasize that the focus of our work is on studying estimation methods for CDA and explaining their improved performance from the point of view of sample complexity. In comparison to previous work on CDA, these works do not propose scalable estimation methods, and we are not aware of other works that adapt causal estimation methods for the purpose of data augmentation. They also discuss properties of CDA algorithms under infinite data, as opposed to our finite sample point of view. Finally, our main empirical contribution is favorable out-of-distribution (OOD) generalization in a complex, large-scale, real-world problem of medical note classification, which further distinguishes our work. 3. **Limitations of LLMs**: Reviewers MXTx, 8nR3 thought that the paper could benefit from additional discussion on the potential limitations and biases of LLMs in the context of our problem. The following points summarize a discussion that will be added to our revised paper. * *LLM generation quality*: We acknowledge that LLMs vary in their ability to generate realistic text. It is possible that LLMs introduce biases into our problem, inherited from their own training data. This requires further study; however, from our manual examination we found their quality satisfactory (see Appendix C for generation examples) and that OOD generalization also improved for models trained on the augmented data they generate.
We'll include this analysis and results with different LLMs, namely Bio-BERT, Sentence-BERT and GPT4, in the revised paper. * *Counterfactual approximation*: Other than generation quality, the additional challenge in using LLMs for CDA is our ability to elicit a good approximation to the counterfactual text. Our methods rely on principles from causal inference to advance disciplined approaches for this task. While further studies are required (e.g. systematically comparing small sets of manual re-writes of texts to the elicited LLM output), we view our work as a promising first step in this direction, which we expect to be significantly extended and improved in future work. * *Effect of biases on OOD generalization*: Since we focus on OOD generalization, the limitations and possible biases mentioned above must be weighed within this context. Namely, we should bear in mind that even though generation may be biased, this bias is only harmful when it affects the generalization of a downstream classifier, and this is what we evaluate. Further, in OOD generalization we consider cases where the training data is biased in the first place, and training a standard predictive model also results in a biased solution. Hence we must weigh risks and limitations of alternative solutions vs. those of LLMs. **Overview of major changes**: 1. We added two widely-used baselines (IRMv1 and GroupDRO), and an experiment with modified sample sizes (suggested by reviewer Erys). Results are in the attached PDF. 2. We made precise our connection to relevant previous works, highlighting our novelty in discussing finite sample properties of our approach and the usefulness of our method for approximating counterfactuals. 3. We added a section to discuss the possible biases that LLMs may introduce into our problem, emphasizing that further studies are required. We also note that when our goal is OOD generalization, and training data may already be biased, we need to weigh the biases of LLMs vs. alternatives. **References**: 1. Kaushik et al. 2021 Explaining the efficacy of counterfactually augmented data. 2. Veitch et al. 2021 Counterfactual invariance to spurious correlations: Why and how to pass stress tests. 3. Joshi & He 2022 An Investigation of the (In)effectiveness of Counterfactually Augmented Data Pdf: /pdf/b35a840f28b436722fbc253723b730f8a78b469b.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: In this work, the authors develop causally-driven data augmentation methods to improve model robustness. Strengths: In general, I find the paper to be very, very well written and clear. The authors do a really great job of explicitly stating their assumptions, and acknowledging when certain assumptions are strong. I think this paper makes a very interesting first step in extending OOD generalization to the recent advances in LLMs! Weaknesses: I would have liked a section that describes some of the limits of using LLMs, and whether certain LLMs would be more appropriate than others. It feels like incorporating LLMs is a big part of this work, so I would have liked more context here. I wasn't completely convinced that the extra information M would be enough to assist the model in achieving more accurate estimates, but the authors do acknowledge that this is a strong assumption. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: see above Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors do acknowledge limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the insightful and positive review. We are happy to see that you think the paper is a very interesting first step in utilizing recent advances in LLMs for OOD generalization, which is exactly our goal in this work. The points for improvement that you lay out are very helpful, and we comment on them below. **Limitations of LLMs**: following this comment, we will add to the appendix a discussion on the possible limitations of LLMs for our approach. Since reviewer 8nR3 also asked about possible limitations of LLMs, we included a summary of this discussion in the general rebuttal and we repeat it below for convenience. * *LLM generation quality*: We acknowledge that LLMs vary in their ability to generate realistic text. It is possible that LLMs introduce biases into our problem, inherited from their own training data. This requires further study, however from our manual examination we found their quality satisfactory (see Appendix C for generation examples) and that OOD generalization also improved for models trained on the augmented data they generate. We'll include this analysis and results with different LLMs, namely Bio-BERT, Sentence-BERT and GPT4, in the revised paper. * *Counterfactual approximation*: Other than generation quality, the additional challenge in using LLMs for CDA is our ability to elicit a good approximation to the counterfactual text. Our methods rely on principles from causal inference to advance disciplined approaches for this task. While further studies are required (e.g. systematically comparing small sets of manual re-writes of texts to the elicited LLM output), we view our work as a promising first step in this direction, which we expect to be significantly extended and improved in future work. * *Effect of biases on OOD generalization*: Since we focus on OOD generalization, the limitations and possible biases mentioned above must be weighed within this context. Namely, we should bear in mind that even though generation may be biased, this bias is only harmful when it affects the generalization of a downstream classifier, and this is what we evaluate. Further, in OOD generalization we consider cases where the training data is biased in the first place, and training a standard predictive model also results in a biased solution. Hence we must weigh risks and limitations of alternative solutions vs. those of LLMs. **Auxiliary information may not be enough to assist a model**: As you correctly point out, we cannot be sure that the auxiliary information is enough to account for all the unobserved features relating to the writer and the text. One step we can take to further validate our method in real world settings is to obtain a small dataset where caretakers rewrite other caretakers’ existing notes in their own style (similarly to the counterfactual data augmentation suggested in Kaushik et al. 2020) and compare these against the synthetic estimates generated by the LLM. While such an evaluation would typically be of small scale, it offers another check for the validity of the approach. Another point worth mentioning on this topic is that our end goal is to obtain a classifier that generalizes well in an OOD setting. As suggested by our generalization bound in lemma 2 of the paper, this requires distributional similarity between the distribution of counterfactuals and the distribution of our estimates. So even if our estimation methods are imperfect, they may still suffice to achieve better generalization w.r.t the baselines. 
This is also why we view the accuracy on both ID and OOD test data as the main evaluation for our method and the baselines. Thank you again for your effort and help in improving our paper. **References**: 1. Kaushik et al. 2020 Learning the Difference that Makes a Difference with Counterfactually-Augmented Data --- Rebuttal Comment 1.1: Comment: Thank you for your response. I've read the authors' rebuttal, and stand by my original review. --- Reply to Comment 1.1.1: Comment: Thank you very much for your engagement in the discussion and for the positive review, we greatly appreciate the effort you put into the process.
On the Convergence of Black-Box Variational Inference
Accept (poster)
Summary: The paper proves convergence results for black-box variational inference (BBVI) with ordinary stochastic gradient descent (SGD) and proximal SGD under several assumptions: it considers only the reparameterization gradient setting with the location-scale variational family (in particular, mean-field and Cholesky parameterization) and a symmetric base distribution (with other mild requirements); it also requires an assumption involving the reparameterized gradient and the diagonal conditioner of the scale matrix and another one about the growth of the bijector. As an improvement over many previous works, it does not need to assume bounded domain, bounded support, or regularity conditions about the evidence lower bound (ELBO) directly. The paper notes that the theoretical results show that for the location-scale family, nonlinear scale parameterizations are suboptimal (but widely used in practice). This claim is tested experimentally on synthetic and realistic problems. The empirical results confirm this claim and suggest that proximal methods converge faster. Strengths: The paper improves on previous convergence results on BBVI by lifting assumptions needed in previous work. It even goes further by empirically testing the theoretical prediction of linear scale parameterization being superior. Therefore it is interesting and relevant to the variational inference community. The presentation of the paper is mostly clear and clean despite the technical content. In particular, highlighting the assumptions in the text is very helpful for the reader. There is a lot of information about the experimental setup in the appendix and the code is available, which is good for reproducibility. Weaknesses: The main weakness of the paper is how it deals with the limitations: while it is good that the assumptions are highlighted in the text, there are more limitations that should be pointed out. For example, the paper only deals with reparameterization gradient BBVI, which is not mentioned in the abstract. In a similar vein, the title oversells the paper by a lot, making it seem as though it solved a much more general problem. (If the next paper generalizes these results or lifts some of the assumptions, should they give it the same title?) The paper also lacks a limitations section. Another weakness is that the experimental evaluation (for the most part) uses the Adam optimizer (and a proximal version) even though the theoretical results concern standard and proximal SGD. Furthermore, it explores the combinations Adam&linear, Adam&nonlinear, ProxAdam&linear, but not ProxAdam&nonlinear. I think it would be good to report the results of this combination too, in order to get a more complete picture. Finally, the results are quite technical, with a lot of variables involved in the bounds, and there seem to be some errors in Theorem 3 (see my question to the authors below). It would be good to collect all the variable names and their meaning (or the position of their first occurrence in the text) in one table, so the reader can look up the meaning in one place. As a minor point, it would also help if expected values could include brackets around their arguments, to clarify their scope. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - What is meant by the "infinite sum structure" in line 115? - Theorem 3 claims that the iterates generated by BBVI include an $\epsilon$-stationary point.
I would have expected a statement of the form "the iterates will be $\epsilon$-stationary points from some point onwards". Does that mean that BBVI will usually move away from the stationary point again? - In Theorem 3, $\kappa$ is defined but not used, $\gamma$ and $M$ are used but not defined (at least I failed to find their definition) and $C$ takes sometimes one and sometimes two arguments. Could you clarify the statement of the theorem? - Could you define the "proximal operator" (line 209)? It is defined for the specific setting, but not what it means in general. Some intuition would also be helpful. Typos: - line 3: comma should go after the parentheses - line 14: "box" missing - line 60: did you mean "sans-serif"? - line 212: remove "this" - line 230: Theorem 12 is not in the main part of the paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Limitations (in the form of assumptions) are mentioned in the paper, but the list is incomplete (see "Weaknesses"). It would be good if all limitations were collected in one place in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. > The main weakness of the paper is how it deals with the limitations: while it is good that the assumptions are highlighted in the text, there are more limitations that should be pointed out. For example, the paper only deals with reparameterization gradient BBVI, which is not mentioned in the abstract. In a similar vein, the title oversells the paper by a lot, making it seem as though it solved a much more general problem. (If the next paper generalizes these results or lifts some of the assumptions, should they give it the same title?) Thank you for the suggestion. We will add this point in the abstract and introduction. We have used BBVI to mean “BBVI with reparameterization gradients” primarily because we are following recent work on the topic, e.g., Domke (2019, 2020), Kim et al. (2023), Hoffman and Ma (2020), Regier et al. (2017), none of which use score gradients. Nevertheless, we agree that reducing the scope of the title would be more appropriate. We thus propose to change the title to: “On the Convergence and Scale Parameterization of Black-Box Variational Inference.” Please let us know if the reviewer has better suggestions. > The paper also lacks a limitations section. We originally had a separate limitations section, but had to remove it due to space issues. We will add it back in the next version. > Another weakness is that the experimental evaluation (for the most part) uses the Adam optimizer (and a proximal version) even though the theoretical results concern standard and proximal SGD. Furthermore, it explores the combinations Adam&linear, Adam&nonlinear, ProxAdam&linear, but not ProxAdam&nonlinear. I think it would be good to report the results of this combination too, in order to get a more complete picture. We didn’t include ProxAdam&nonlinear because proximal SGD removes the need to use nonlinear conditioners. We could have included it, but didn’t see the point. > It would be good to collect all the variable names and their meaning (or the position of their first occurrence in the text) in one table, so the reader can look up the meaning in one place. As a minor point, it would also help if expected values could include brackets around their arguments, to clarify their scope. Thanks for the suggestion. We will add a separate nomenclature section in the future version. > What is meant by the "infinite sum structure" in line 115? We agree that we should have explained this. In the SGD literature, “finite sum” describes an objective function that can be represented as a finite sum of functions, e.g. $\frac{1}{N} \sum_{n=1}^N f_n(x)$. In contrast, the VI objective cannot be represented as such a sum, as it is an expectation over an infinite number of points. To contrast with the finite sum setting, this setup has been called the “infinite sum.” We will add an explanation and citations for this in the next version. > Theorem 3 claims that the iterates generated by BBVI include an $\epsilon$-stationary point. I would have expected a statement of the form "the iterates will be $\epsilon$-stationary points from some point onwards". Does that mean that BBVI will usually move away from the stationary point again? This is a limitation of our current understanding of SGD in the nonconvex smooth setting in general, not a problem specific to BBVI. Most of the known analyses prove that the average gradient norm over the iterates is bounded. This in turn implies that $\min_t \| \nabla F(x_t) \| \leq \mathbb{E}_t \| \nabla F(x_t) \|$.
Thus one can only say that the trajectory includes a stationary point (the point achieving the minimum norm). > In Theorem 3, $\kappa$ is defined but not used, $\gamma$ and $M$ are used but not defined (at least I failed to find their definition) and $C$ takes sometimes one and sometimes two arguments. Could you clarify the statement of the theorem? Thanks for pointing out that $\kappa$ is not used. Here $\gamma$ is the step size, $M$ is the number of samples, and $C$ is used as a general placeholder for arbitrary constants. As suggested before, we will add a separate nomenclature table to resolve the confusion. > Could you define the "proximal operator" (line 209)? It is defined for the specific setting, but not what it means in general. Some intuition would also be helpful. Thank you for the suggestion. Yes, we should have included an explanation. In general, a proximal operator (or proximity operator) is a tool commonly used in optimization. For a convex function $h$ it is generally described as $\mathrm{prox}_{\gamma, h}(x) = \arg\min_z \frac12 \| x - z \|^2 + \gamma h(z)$, where $\gamma$ is the stepsize. We want to find a point close to $x$ and simultaneously minimize the function $h$. While it might not be obvious, this operator is a generalization of the gradient descent update $x_t - \gamma g_t$, which can be obtained by setting $h$ to be the first-order linearization of the objective function. A typical use-case of proximal operators is when $h$ is non-smooth or even non-differentiable. In those cases, the proximal operator may have a closed-form expression and more favorable convergence properties than, say, subgradient descent. In our case, the proximal operator is used to circumvent the fact that the entropy term is non-smooth. We will add this explanation in the next version. --- Rebuttal Comment 1.1: Comment: Thank you for the response. It clears up a lot of things. I'm glad that you've decided to pick a more specific title and to re-add a limitations section. I noticed that you did not address the point regarding Adam vs standard SGD in the experiments. Would you like to comment on that? --- Reply to Comment 1.1.1: Title: Response Comment: Thank you very much for the engagement. We are glad that our response has cleared up the discussion. Sorry that we missed answering the point on theory with SGD vs. experiments with Adam. In our original submission, we conduct two types of experiments: - (a) A "controlled" synthetic experiment where the assumptions of the paper can be met perfectly (Section 4.1, Figure 3), and - (b) "Realistic" experiments where the settings are closer to how BBVI is done in practice (Section 4.2, Figure 4). In (a), we used SGD, while in (b), we used Adam. Thus, we provide experimental results in both the theoretical and practical extremes. We observe that, in (b), due to the fact that Adam handles non-smoothness quite well (although this is not well understood), the differences between parameterizations become narrower than in (a). But we still observe an effect that is in line with our theory and the controlled experiments in (a). We hope this answers the original comment, and we will state our experimental intent more clearly in the future version. Please let us know anytime if there are further questions about our work.
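To supplement the proximal-operator discussion in this thread, here is a minimal Python sketch of one proximal SGD step, assuming (as one common instance) that the non-smooth part is the entropy-like term $h(s) = -\sum_i \log s_i$ of a mean-field location-scale family, for which the prox has a closed form. This is an illustration of the general idea, not necessarily the paper's exact operator; all names are ours.

```python
import numpy as np

def prox_neg_log(s, gamma):
    """prox_{gamma, h}(s) for h(z) = -sum(log z_i).
    Per coordinate, minimizing 0.5*(z - s_i)**2 - gamma*log(z) gives
    z - s_i - gamma/z = 0, whose positive root is taken below."""
    return (s + np.sqrt(s**2 + 4.0 * gamma)) / 2.0

def proximal_sgd_step(m, s, grad_m, grad_s, gamma):
    """One step: SGD on the smooth (energy) part in location m and
    scale s, then the proximal map absorbs the non-smooth entropy."""
    m_new = m - gamma * grad_m
    s_new = prox_neg_log(s - gamma * grad_s, gamma)
    return m_new, s_new

# Tiny usage example with made-up stochastic gradients:
m, s = np.zeros(3), np.ones(3)
m, s = proximal_sgd_step(m, s, grad_m=np.ones(3), grad_s=np.ones(3), gamma=0.1)
```

Note that this prox keeps the scale strictly positive for any step size, which is exactly the kind of stability near zero scale that, per the discussion above, plain SGD on a linear parameterization lacks.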
Summary: The authors analyze the smoothness and convexity of the ELBO under different parameterizations (linear vs nonlinear) of the scale for location-scale families, building on the work of Domke (2020). This enables convergence analysis for BBVI with standard and proximal stochastic gradient descent. Their main findings are that 1) the energy term of the ELBO is smooth under certain conditions (Theorem 1, ii) on the diagonal conditioner $\phi$ and reparameterization gradient, 2) nonlinear diagonal conditioners break (strong) convexity of the energy, affecting SGD convergence rates, and 3) rates can be established for obtaining $\epsilon$-optimal solutions in BBVI for both convex and nonconvex energy terms. Strengths: * The work extends previous results meaningfully: Domke (2020) considered the case of a linear diagonal conditioner, while this work focuses on nonlinear ones while including the linear case for completeness. * Beyond showing that convergence merely occurs (per the title), rates are established both for standard SGD iterates (Theorem 3) and proximal SGD in the case of a convex $f$. * The variational family considered (location-scale) encompasses a wide range of distributions usually used for VI; one additional step of generalizability is the use of the bijector $\psi$ to make the work relevant to ADVI in general. * The work has theoretical and practical merit, advising users on the tradeoffs between linear and nonlinear diagonal conditioners along with providing clearly established lemmas, assumptions and proofs that further work might build upon. Weaknesses: * The story is a bit disjointed. Assumption 4 and Theorem 3 might be better placed directly after Example 4 to complete a story about convergence of BBVI for nonlinear $\phi$ and general nonconvex $f$ that satisfies the smoothness conditions (among others). This might have even been a logical stopping point for a paper with the given title. Theorem 2 seems better placed nearby the section on Proximal SGD, as it is precisely the convexity of $f$ that Section 3.4 requires. * The exponential diagonal conditioner (line 108-109), commonly used in practice, does not satisfy the 1-Lipschitz assumption (Theorem 1, (ii)). I put this as a weakness, but maybe this point should be emphasized instead and used to evangelize the merits of softplus, which evidently does satisfy this assumption. * The case where $f$, which is usually $-\log p(x,z)$ in this work when $\psi$ is the identity, is convex in $z$ is uncommon in many practical problems of significance for VI when the density $p(x,z)$ is multimodal. This diminishes the significance of Theorem 2 (breaking convexity) and Section 3.4. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * The summary of contributions (lines 51-58) seems overly informal to me. What is a “full” guarantee? What converges? “Precisely as used in practice” may mean different things to different people. “Suboptimal” in what sense (item #2)? Maybe more precision would be better here. * The formatting of lines 67-70 is odd, and perhaps these quantities should just be defined in-line. * In line 102, I believe an additional “diag” is missing: should it be $diag(\phi(s)) = diag(\phi(s_1),\dots, \phi(s_d))$? * Line 115, “inifinite” is a typo. This taxonomy discussion I find detrimental to the paper, as it creates more confusion than clarity; Figure 1 and its caption could be removed altogether.
* Is the distinction between the LHS and RHS of the equation after line 127 that of a “total derivative” vs a partial derivative with respect to $\lambda$? * Lines 130-132 have typos in the phrasing: “have been”, “alienating” * The proof of Theorem 1 seems to imply that $f$, in addition to being smooth, is twice-differentiable to compute the Hessian. If this isn’t implied by some other conditions, it should perhaps be stated explicitly for completeness. * The statement of Theorem 2 differs between the main paper and the supplement; this should be corrected. * Line 174, “becomes” is a typo Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The theoretical contributions make clear the assumptions that must be met for the results to hold. As stated, the 1-Lipschitz condition for $\phi$ could be discussed more. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. > The story is a bit disjointed. Assumption 4 and Theorem 3 might be better placed directly after Example 4 to complete a story about convergence of BBVI for nonlinear $\phi$ and general nonconvex $f$ that satisfies the smoothness conditions (among others). This might have even been a logical stopping point for a paper with the given title. Theorem 2 seems better placed nearby the section on Proximal SGD, as it is precisely the convexity of $f$ that Section 3.4 requires. Thank you for the suggestions. We will reorganize our paper in the next version accordingly. > The exponential diagonal conditioner (line 108-109), commonly used in practice, does not satisfy the 1-Lipschitz assumption (Theorem 1, (ii)). I put this as a weakness, but maybe this point should be emphasized instead and used to evangelize the merits of softplus, which evidently does satisfy this assumption. We point out that Theorem 2 does say something about non-Lipschitz conditioners; the exp conditioner will also result in a slower convergence rate since it breaks strong convexity. Given the breadth of this result, we thought it didn’t matter much which conditioner is used; any non-linear conditioner is bad. Nevertheless, we will add the suggested comment in the limitations. > The case where $f$, which is usually $- \log p(x, z)$ in this work when $\psi$ is the identity, is convex in $z$ is uncommon in many practical problems of significance for VI when the density $p(x, z)$ is multimodal. This diminishes the significance of Theorem 2 (breaking convexity) and Section 3.4. We believe that the significance of our theory should be taken qualitatively rather than quantitatively. Our results say that when the posterior is “easy” such that the landscape is Gaussian, nonlinear conditioners will fail to take advantage of this, unlike linear conditioners. If we extrapolate this intuition to posteriors that are non-Gaussian but not too far from Gaussian, it is still conceivable that linear conditioners will result in faster convergence. It is also known that, for large datasets, many Bayesian posteriors become Gaussian as a consequence of the Bernstein-von Mises theorem. Thus we believe our results do explain the performance difference in our experiments, where none of the problems are provably Gaussian (strongly log-concave). > Is the distinction between the LHS and RHS of the equation after line 127 that of a “total derivative” vs a partial derivative with respect to $\lambda$? Sorry for the confusion. We should have added a written description. The RHS is the derivative with respect to $\lambda$, while the LHS is the composition of $t_{\lambda}(u)$ and the gradient of $f$, denoted as $\nabla f$. That is, $\nabla f \circ t_{\lambda}(u)$. > The summary of contributions (lines 51-58) seems overly informal to me. What is a “full” guarantee? What converges? “Precisely as used in practice” may mean different things to different people. “Suboptimal” in what sense (item #2)? Maybe more precision would be better here. Thank you for the comments. We will be more precise in the contribution summary and state that we do not assume any unrealistic modifications to the algorithm, such as a bounded domain or bounded gradients. For the “suboptimal,” we will state that the nonlinear conditioners break strong convexity. > The proof of Theorem 1 seems to imply that $f$, in addition to being smooth, is twice-differentiable to compute the Hessian.
If this isn’t implied by some other conditions, it should perhaps be stated explicitly for completeness. Thank you very much for catching this! Indeed we missed that assumption. We will add it in the next version. > The statement of Theorem 2 differs between the main paper and the supplement; this should be corrected. Sorry for the confusion. We will fix this in the next version. --- Rebuttal Comment 1.1: Title: Rebuttal response Comment: I thank the authors for the detailed response. Most of my points were minor and I am glad to see some will be addressed by the authors for clarity. The title change and the limitations section will help practitioners find this work and understand how the results affect implementations of BBVI.
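As a concrete companion to the conditioner discussion in this thread, here is a small numpy sketch of the location-scale reparameterization $z = m + \mathrm{diag}(\phi(s))\,u$ with the linear, softplus, and exp choices of $\phi$ mentioned above; names and values are illustrative assumptions.

```python
import numpy as np

def softplus(t):
    return np.log1p(np.exp(t))  # 1-Lipschitz nonlinear conditioner

def reparameterize(m, s, u, phi):
    """Mean-field location-scale reparameterization z = m + phi(s) * u."""
    return m + phi(s) * u

rng = np.random.default_rng(2)
m = np.zeros(4)
s = np.full(4, 0.5)
u = rng.normal(size=4)  # draw from the symmetric base distribution

z_linear = reparameterize(m, s, u, lambda t: t)  # linear conditioner
z_soft   = reparameterize(m, s, u, softplus)     # 1-Lipschitz, but nonlinear
z_exp    = reparameterize(m, s, u, np.exp)       # nonlinear, not 1-Lipschitz
```

Per the Theorem 2 discussion above, only the linear $\phi$ preserves strong convexity of the energy term when the target is strongly log-concave, which is the source of the predicted difference in convergence speed between these three choices.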
Summary: This paper tries to prove convergence of the BBVI algorithm in a more general setting compared to previous work: the main relaxation being that the target needs only to be log-smooth and not log-concave, which means that the objective can be non-convex. The paper claims that non-linear parameterizations of the variational parameters (the mean, the Cholesky factor L for the full-rank matrix, and the scale for mean-field), often used in practice, can break strong convexity even if the target is log-concave; thus plain SGD with non-linear conditioning is sub-optimal, while proximal SGD with linear parameterization can give the fastest convergence for BBVI. Strengths: 1. The paper is solid and covers a lot of theory behind VI; Figure 1 is quite informative. 2. The figures are great and descriptive and support the theory and text. 3. I am not good with theory, but I found the theorems and inequalities fine 4. Covers relevant and contemporary literature quite well, although a paper with similar ideas and claims recently appeared on arxiv. Weaknesses: 1. I think the paper is trying to unify many different parameterizations, and so at times it becomes a bit harder to read; in some places the authors could better motivate why and how certain theorems have practical impact, and distinguish where the discussion is only theoretical from where the theorems have practical consequences. 2. Then my main concern with this work is that its claims can be easily misinterpreted when readers see 'BBVI converges', especially when there is no general consensus on what black-box means. I think BBVI was first used as an acronym by the Ranganath 2013 paper, which actually used score gradients. This nomenclature is unfortunate because many new readers think of neural-network-powered inference algorithms when they hear black-box VI, and assume normalizing flows and neural-network-based variational inference also count as BBVI. With the recent explosion of methods, where a user can choose practically any divergence measure (any member of the f-divergence family) and any approximating family, and use MC samples and autodiff to estimate gradients for inference, this paper is narrow in scope: it only considers the exclusive KL, a well-behaved log-concave Gaussian (loc-scale) variational family, and RP gradients. It would therefore be unfair to say that 'BBVI converges', and I strongly suggest the authors change the title of the paper. 3. In the results section, it seems a bit odd and unsettling to me that the linear parameterization is claimed to be better than the non-linear one in all cases. Another submission to this conference claims that the non-linear (log) parameterization is theoretically more stable (also in Domke 2019), and shows some practical experiments confirming the same. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. If I understood correctly, the object of treatment in this paper is the objective function and not the gradient; both Theorems 1 and 2 are for the objective. I wanted to know why the authors concentrated on the objective and not the gradient itself. 2. Is it expected that the gradients will follow the same behaviour as the objective, which is the main object of treatment in this work, especially when there is a non-linear transformation involved in the parameterization, i.e. $ \nabla_{\lambda}f(\mathbf{t}_{\lambda}(\mathbf{u})) $ is different from $\nabla f$? 3.
Does the work cover the case when gradients are computed with mini-batching, which brings another source of stochasticity in addition to the randomness of the base distribution? 4. Maybe the authors can also address how their work is different from/similar to the one in 'Provable convergence guarantees for black-box variational inference' on arxiv, and how their claims match up? 5. Why was the exp conditioner/transformation not used in the experiments for Figure 4, when it is the most popular one in my opinion? 6. I did not like this statement so much: 'This would put mean-field BBVI in a regime comparable to the best-known convergence rates in approximate MCMC' since MCMC is asymptotically exact while MFVI is not and can be arbitrarily different from the target. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Minor 1. Typo in line 115: 'inifinite' 2. For the uninitiated, maybe introduce $L_{h}$ as Lipschitz smooth, if that is what it denotes? 3. Maybe explain what all the constants $L_{f}, L_{h}, L_{s}$ mean at the beginning of Sec 3. I guess $L_{f}, L_{h}$ mean smoothness constants for the likelihood/energy part and the entropy part of the objective, respectively. I am overall inclined in favour of accepting the paper given some of the questions and concerns are addressed by the authors. I also need a bit more time to analyse this work and recent similar work, where some of the claims are similar but some are different. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review.
> I think the paper is trying to unify many different parameterizations, and so at times the paper has become a bit harder to read

We agree that the paper is quite dry in its submitted form. Due to the page limit, we had to remove much of the commentary on our results. We will add more practical discussion of our results in the next version.
> Then my main concern with this work is that its claims can easily be misinterpreted when readers see 'BBVI converges' ... actually used score gradients.

In our defense, much recent work on BBVI with Monte Carlo gradients does seem to use “BBVI” synonymously. Recent papers such as the work of Domke (2019, 2020), Hoffman and Ma (2020), Regier et al. (2017), and Manushi et al. (2022) all use the term BBVI despite none of these works considering the score gradient. We are happy to use more general terms such as Monte Carlo VI and clarify this in the abstract. However, we'd also like to find some middle ground here and not deviate substantially from the conventional technical vocabulary norms set in the recent literature.
> I strongly suggest authors to change the title of the paper.

We agree that the title might be perceived overly broadly. We propose to change the title to: “On the Convergence and Scale Parameterization of Black-Box Variational Inference.” Please let us know if the reviewer has better suggestions.
> In the results section, it seems a bit odd and unsettling to me that the linear parameterization is claimed to be better than the non-linear one in all cases. Another submission to this conference claims that the non-linear (log) parameterization is theoretically more stable (also in Domke 2019)

Some of the authors have also seen that paper during bidding, although we couldn't read it (as we didn't bid for it due to the apparent conflict of interest). Thus, unfortunately, it would be impossible and inappropriate for us to comment on their results. However, Thm 2 is quite clear: one loses strong convexity with non-linear parameterizations, and thus the guarantee of fast convergence. To us, it was thus unsurprising that the linear parameterization did well in the experiments. Furthermore, we are unaware that Domke (2019) discusses non-linear parameterizations. His recent paper ($\S$ 5; Domke et al., 2023) agrees with our theoretical analysis: “*An alternate parameterization would also have implications for the (strong) convexity of the ELBO.*” In practice, the performance strongly depends on the choice of initialization, which might explain the different conclusions. For instance, when initialized in the non-smooth regime (small initial scale), the linear parameterization starts by diverging (visualized in Domke 2020 Fig 1), resulting in slow convergence. However, as shown in Fig 3 of our paper and in Domke (2020), proximal SGD fixes this sensitivity.
> If I understood correctly, the object of treatment in this paper is the objective function and not the gradient ... I wanted to know why the authors concentrated on the objective and not the gradient itself.

For studying the properties of SGD, one needs to analyze both the objective and the gradient estimator. In particular, the landscape of the objective (smoothness, convexity) often determines the convergence rate. We thus focus on the properties of the ELBO.
> Is it expected that the gradients will follow the same behaviour as the objective, which is the main object of treatment in this work?
We kindly request clarification of this question.
> Does the work cover the case when gradients are computed with mini-batching ... ?

Yes, this should be fairly trivial to do whenever the linearity of the variance is used. We will add a discussion on how to do this in a future version. We also found that more insight can be drawn in the doubly stochastic setting, which we are attempting to study in a follow-up paper.
> Maybe the authors can also address how their work differs from or is similar to the one in 'Provable convergence guarantees for black-box variational inference' on arXiv, and how their claims compare to it?

We indeed saw that paper. Before commenting on it at all, note that it is concurrent work (potentially a concurrent NeurIPS submission), which means we need to tread carefully when discussing it here, as the NeurIPS FAQ suggests we should generally not do (see the “policy on comparisons to recent work”). While we emphasize the policy above, we are willing to contrast the two. Our paper was more interested in the effect of parameterizations, while that paper was more focused on proving the convergence of SGD with the quadratic variance estimators that were studied in Domke (2019). The only overlap is the convergence of proximal SGD with a decreasing stepsize for strongly convex ELBOs, for which the complexity matches perfectly: $\mathcal{O}(\kappa^2 \frac{1}{\epsilon})$.
> Why was the exp conditioner/transformation not used ... ?

To our knowledge, the softplus conditioner is also popularly used (see Tab 1 of Kim et al. 2023). Furthermore, from the conclusions of Thm 2, we expected the performance of the softplus to be representative of all nonlinear conditioners.
> I did not like this statement so much: 'This would put mean-field BBVI in a regime comparable to ... approximate MCMC', since MCMC is asymptotically exact while MFVI is not and can be arbitrarily different from the target.

Here, we'd like to emphasize the use of the word approximate, and we will make this clearer in the text. The fastest convergence rates in MCMC are achieved with “approximate” MCMC methods, which are asymptotically biased, such as unadjusted Langevin. The cited paper indeed discusses the unadjusted Langevin algorithm, which is an approximate MCMC method.
> For the uninitiated, maybe introduce Lipschitz smooth ...

Thanks for the suggestion. We will add more discussion of these assumptions in the next version.
---
Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: Thanks to the authors for the point-by-point response to my questions. I only wanted to emphasize that the treatment of both the objective and the gradients is important in a convergence analysis, which they agree with. I like the newly proposed title much more than the previous one, which was my main concern, and I must point out that other reviewers felt the same. I also felt that the paper oversells itself a bit in places where it didn't need to, which other reviewers noticed as well. I am otherwise happy to revise my rating and recommend acceptance. I am also satisfied with their response on the comparison to Domke's old and recent work. Maybe they could have a joint session with those authors if both papers are accepted. I remember a similar thing happening at last year's NeurIPS, when two accepted papers had conflicting results and conclusions, which is not necessarily the case here.
In practice, I know that BBVI depends so much on initialization that it is hard to make a general statement about parameterisations, and this is just for the exclusive KL; the other commonly used divergence objectives are even wilder.
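To make the parameterization discussion in this thread concrete, here is a minimal numerical sketch (illustrative only, not taken from the paper): for a one-dimensional standard-normal target with the variational mean fixed at zero, the negative ELBO reduces to $\mathrm{KL}(\mathcal{N}(0,\sigma^2)\,\|\,\mathcal{N}(0,1)) = \sigma^2/2 - \log\sigma - 1/2$, whose curvature in $\sigma$ is bounded below by 1; composing with a softplus conditioner lets the curvature collapse toward 0, which is the loss of strong convexity the rebuttal refers to.

```python
import numpy as np

def softplus(lam):
    return np.log1p(np.exp(lam))

def neg_elbo(sigma):
    # KL( N(0, sigma^2) || N(0, 1) ) = sigma^2/2 - log(sigma) - 1/2
    return 0.5 * sigma**2 - np.log(sigma) - 0.5

def curvature(f, x, h=1e-4):
    # second derivative via central finite differences
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

for lam in [2.0, 0.0, -2.0, -5.0]:
    sigma = softplus(lam)
    lin = curvature(neg_elbo, sigma)                       # sigma parameterized linearly
    non = curvature(lambda t: neg_elbo(softplus(t)), lam)  # sigma = softplus(lambda)
    print(f"sigma={sigma:7.4f}  linear f''={lin:10.3f}  softplus f''={non:7.4f}")
```

Running this, the linear column stays at or above 1 for every scale, while the softplus column shrinks toward zero as the scale gets small, matching the claim of Thm 2 as described in the rebuttal.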
Summary: The paper establishes the first complete convergence result for BBVI as it is used in practice. It is shown that a linear parametrization of the covariance leads to better convergence rates, which is confirmed in some numerical experiments. Finally, a proximal version of BBVI due to Domke (2020) is analyzed and shown to perform favorably against Adam in experiments.
Strengths: - The paper is well-written, and clearly a lot of time was spent by the authors to make the results look clean and polished. - The proposed proximal scheme for BBVI is implemented in the probabilistic programming language Turing, which may be of interest to the community once the code is released.
Weaknesses: 1) The results and experiments are incremental (but nontrivial) extensions of what is presented in (Domke 2020). 2) The paper fails to cite the recent line of works * https://arxiv.org/abs/2205.15902 * https://proceedings.mlr.press/v202/diao23a.html which give very strong convergence results for a BBVI-like algorithm by interpreting it as a Wasserstein gradient flow. A theoretical (and maybe even practical) comparison to these recent works would make the paper stronger. 3) It is known that BBVI is inferior to natural-gradient algorithms which exploit the KL / Fisher geometry (https://arxiv.org/abs/2107.04562). This line of work could be mentioned in the introduction.
Technical Quality: 4 excellent Clarity: 4 excellent
Questions for Authors: 1) When using the linear parametrization, could there be an issue that the scale parameter becomes negative? Or is this somehow handled by the proximal operator of the entropy? 2) Do the proposed convergence results also carry over to ProxGen? Perhaps it is out of scope for this paper, but maybe a comment in the experiments section could be helpful. 3) How well does the method work for modern neural networks (ResNets, transformers)? There are many claims in the community that variational inference doesn't work well in these settings (e.g., worse performance than MAP inference). It would be highly interesting to see whether the proximal scheme solves these problems.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: All limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review.
> The paper fails to cite the recent line of works

Thank you for the suggestions. We will add them to the next version.
> It is known that BBVI is inferior to natural-gradient algorithms which exploit the KL / Fisher geometry (https://arxiv.org/abs/2107.04562). This line of work could be mentioned in the introduction.

Given the importance of NGVI, we agree to include it in the introduction and discussion sections. However, we would note that BBVI is a broader algorithm in the sense that one can use variational distributions outside of the exponential family, which our theory does include, and amortized variational families, which we plan on working on next.
> When using the linear parametrization, could there be an issue that the scale parameter becomes negative? Or is this somehow handled by the proximal operator of the entropy?

Yes, the proximal operator ensures that the scale is never negative. Furthermore, using projection operators as suggested by Domke (2020) is also an effective way of ensuring this. In practice, however, choosing a stable initial point (large scale) with a small enough step size appears to be enough to ensure convergence, though there is still room for investigation into whether it is a robust enough choice.
> Do the proposed convergence results also carry over to ProxGen? Perhaps it is out of scope for this paper, but maybe a comment in the experiments section could be helpful.

Unfortunately, ProxGen assumes the problematic “bounded gradient variance” assumption, so our results cannot be immediately applied. It might be possible to extend their proof to include our setting. As suggested, we will add a discussion about this.
> How well does the method work for modern neural networks (ResNets, transformers)? There are many claims in the community that variational inference doesn't work well in these settings (e.g., worse performance than MAP inference). It would be highly interesting to see whether the proximal scheme solves these problems.

Given the theoretical and empirical results on “pruning” (Trippe et al., 2018; Coker et al., 2022; Huix et al., 2022), we conjecture that the proximal scheme will not fix the problem. Exclusive KL may simply be inappropriate for deep models, although more investigation is certainly needed.
### References
Trippe, Brian, & Turner, Richard. 2018. Overpruning in Variational Bayesian Neural Networks. Tech. rept. arXiv:1801.06230. ArXiv.
Coker, Beau, Bruinsma, Wessel P., Burt, David R., Pan, Weiwei, & Doshi-Velez, Finale. 2022. Wide Mean-Field Bayesian Neural Networks Ignore the Data. Pages 5276–5333 of: Proceedings of the International Conference on Artificial Intelligence and Statistics. PMLR.
Huix, Tom, Majewski, Szymon, Durmus, Alain, Moulines, Eric, & Korba, Anna. 2022 (July). Variational Inference of Overparameterized Bayesian Neural Networks: A Theoretical and Empirical Study. Tech. rept. arXiv:2207.03859. ArXiv.
---
Rebuttal Comment 1.1: Comment: Thanks for the clarifications, I am satisfied with the rebuttal. Just a small remark: natural-gradient methods do not necessarily require the distribution to be an exponential family [1,2]. [1] https://arxiv.org/abs/1906.02914 [2] https://arxiv.org/abs/2303.04397
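As a concrete footnote to the positivity question above, the proximal step in question admits a closed form. This is a minimal sketch under the simplifying assumption that the entropy penalty separates per scale coordinate as $h(\sigma) = -\log\sigma$; solving the resulting quadratic shows why the output is always strictly positive:

```python
import numpy as np

def prox_neg_log(x, gamma):
    """Closed-form prox of h(s) = -log(s), applied elementwise:
    argmin_z  (1/(2*gamma)) * (z - x)**2 - log(z).
    Setting (z - x)/gamma = 1/z gives z**2 - x*z - gamma = 0,
    whose positive root is strictly > 0 for any real x."""
    return 0.5 * (x + np.sqrt(x**2 + 4.0 * gamma))

def proximal_sgd_step(scale, grad, stepsize):
    # gradient step on the smooth part, then prox on the entropy part
    return prox_neg_log(scale - stepsize * grad, stepsize)

# even a gradient step that overshoots into negative territory is mapped back
print(prox_neg_log(np.array([-1.0, 0.0, 2.0]), gamma=0.1))
```

The printed values are all positive, which illustrates the mechanism by which proximal SGD keeps the scale in the feasible region without any explicit projection.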
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
LMC: Large Model Collaboration with Cross-assessment for Training-Free Open-Set Object Recognition
Accept (poster)
Summary: In this paper, the authors tackle the problem of open-set object recognition. The key challenge in open-set recognition that they tackle is the model's reliance on spurious-discriminative features. To tackle this challenge, they propose an open-set object recognition system consisting of multiple pre-trained foundation models. They employ ChatGPT to generate virtual open-set categories that share spurious-discriminative features with closed-set categories. They then generate images of both the closed-set and open-set categories using DALL-E. During testing, they employ DINO and CLIP to get more accurate predictions by matching a test image to the generated images and to category embeddings. They validate the proposed method on public benchmarks.
Strengths: This paper has the following strengths. S1: The problem tackled, open-set object recognition, is interesting and challenging. S2: The proposed approach is reasonable. Using generative models to leverage a knowledge base and image generation capability could improve open-set recognition performance by mitigating spurious correlations between the category and features. S3: The proposed method shows favorable performance on public benchmarks, without any fine-tuning. S4: The ablation experiments show the effectiveness of the proposed design choices. Chain-of-thought reasoning, self-checking, and CLIP-based feedback all contribute to the final performance. Furthermore, the performance improvement is not merely from the category overlap between the closed set and the virtual open set, as shown in Table 6. The improvement might come from mitigated spurious correlation.
Weaknesses: This paper has the following weaknesses. W1: Although the proposed method is effective in open-set recognition, the technical contribution is somewhat limited. This would be okay if the performance improvement over the baseline were very significant. However, the improvement over the baseline is not surprising given that the proposed method consists of four foundation models: GPT, DALL-E, CLIP, and DINO. For example, the proposed method shows a 3.2-point AUROC improvement and a 5.5-point OSCR improvement over the softmax baseline on TinyImageNet. W2: The authors claim the improved performance comes from mitigating spurious correlation. However, there is no quantitative evaluation of the amount of spurious correlation mitigated. It would be better to show some quantitative evidence of how much the proposed method mitigates spurious correlation.
Technical Quality: 3 good Clarity: 3 good
Questions for Authors: Please address the weaknesses part. I have a few additional questions below. Q1: What is the performance if we learn a few parameters on top of the foundation models used? For example, what is the performance if we add some adapter layers on top of CLIP and/or DINO to fine-tune the model on the target datasets? Can we get even stronger performance? Q2: Does the proposed method work well on more fine-grained recognition tasks, e.g., the CUB or Oxford Flowers datasets? It seems that the proposed method heavily relies on ChatGPT's common-sense reasoning ability. What happens if ChatGPT cannot generate reasonable descriptions of certain fine-grained classes? Since the proposed method does not learn anything, it might fail if ChatGPT fails. I would like to hear the authors' opinions and/or empirical validation on this issue.
Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: I could not find the limitations and broader impact section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
>*Q1: Technical contribution.*

**A1:** In this work, we propose a novel framework that collaborates large models and performs training-free open-set object recognition. Moreover, to effectively extract implicit knowledge from large models, we also incorporate several novel designs, such as iterative self-checking and cyclic self-assessing. To the best of our knowledge, we are the first to collaborate different large models to perform open-set object recognition.
>*Q2: The proposed method consisting of four foundation models only outperforms the baseline by 3.2 and 5.5 points.*

**A2:** We would like to clarify that, similar to our method, the softmax baseline we set up also collaborates foundation models (as described in Lines 320-324 of the paper). This softmax baseline can actually be regarded as a variant of our method without the designs to extract implicit knowledge, and the performance improvement (3.2 and 5.5) shows the benefit of the designs to extract implicit knowledge, rather than fully showing the superiority of our method. Note that the baseline (with large models) outperforms the previous SOTA method [6] by 9.2 on OSCR, and our designs further improve over the baseline by 5.5 on OSCR, showing that, overall, our collaboration of large models and our designs bring a large improvement (9.2 + 5.5 = 14.7). Sorry for the confusion caused. We will make this clearer in the paper.
>*Q3: Show some quantitative evaluation of the amount of spurious correlation mitigated.*

**A3:** While it can be difficult to directly and precisely evaluate spurious correlation, below, we still do some analysis and show some quantitative evaluation. Take the open-set class Crane as an example: without our method's list of virtual open-set classes, 40% of Crane are misrecognized as the closed-set class Albatross (both with white feathers) and 26% of Crane are misrecognized as the closed-set class Black Stork (both with long legs). After adding virtual open-set classes such as Swan (also with white feathers like Albatross) and Avocet (also with long legs like Black Stork), only 4% of Crane are misrecognized as Albatross and 2% of Crane are misrecognized as Black Stork. Such reduced amounts (40% to 4% and 26% to 2%) can imply the amounts of spurious correlation mitigated. Similarly, we measure the reduced amount for all classes on the TinyImageNet protocol. We find that the amount reduced (mitigated) is around 24% on average after applying our method, implying the efficacy of our method in mitigating spurious correlation.
>*Q4: Add adapter layers on top of CLIP and/or DINO.*

**A4:** We add two adapter (linear) layers on top of both CLIP and DINO, and fine-tune these layers on the training set of the target dataset while keeping the other parts frozen. We fine-tune CLIP (or not) and fine-tune DINO (or not) to get the four variants below. We report AUROC.

| Methods | CIFAR10 | CIFAR+10 | CIFAR+50 | TinyImageNet |
|-|-|-|-|-|
| **CLIP & DINO** | 96.6 | 98.9 | 98.5 | 86.7 |
| **Fine-tuned CLIP & DINO** | 96.3 | 98.5 | 98.0 | 86.8 |
| **CLIP & fine-tuned DINO** | 96.4 | 98.4 | 98.2 | 86.8 |
| **Fine-tuned CLIP & fine-tuned DINO** | 96.2 | 98.3 | 97.9 | 86.9 |

As shown, on smaller-scale datasets (CIFAR10, CIFAR+10, and CIFAR+50), fine-tuning leads to a slight performance drop, while on the relatively large TinyImageNet, fine-tuning leads to a slight performance enhancement. Nevertheless, without any fine-tuning, our training-free method has already outperformed previous works by a large margin.
>*Q5: Fine-grained recognition tasks (CUB or Oxford Flowers datasets)*

**A5:** Below, we evaluate on the more fine-grained datasets CUB and Oxford Flowers, and compare our method with previous SOTA methods. We report AUROC.

| Methods | CUB-Easy | CUB-Hard |
|-|-|-|
| **MLS [48]** | 88.3 | 79.3 |
| **Ours** | **90.0** | **81.4** |

| Methods | Oxford Flowers |
|-|-|
| **DML [a]** | 90.8 |
| **Ours** | **91.7** |

As shown, on these more fine-grained datasets, our method can also outperform previous SOTA methods. This further shows the efficacy of our method. [a] The Importance of Metric Learning for Robotic Vision: Open Set Recognition and Active Learning. ICRA, 2019.
>*Q6: What happens if ChatGPT cannot generate reasonable descriptions of certain fine-grained classes?*

**A6:** Among ChatGPT's generated descriptions, we observe that reasonable descriptions are consistently generated for all the evaluated classes (including fine-grained classes), while for certain (fine-grained) classes, in rare cases, unreasonable descriptions are also generated. But note that in our work, we designed a cyclic self-assessing module to replace unreasonable descriptions with reasonable ones. We find that, before passing into the self-assessing module, 3% of descriptions are checked to be unreasonable, but none of the descriptions output by the module is checked to be unreasonable. This shows that ChatGPT has a small probability of generating unreasonable descriptions, while our self-assessing module can further mitigate this problem. The above check is done by inviting 3 volunteers and giving the same 1000 descriptions generated from fine-grained classes in CUB to each of them. The 3 volunteers first make decisions independently and then discuss the disagreed decisions. We will also add the above analysis to the Supp. Though not observed, if ChatGPT cannot generate any reasonable description of a certain fine-grained class, all generated (unreasonable) descriptions may fail to be replaced in the self-assessing process, and images can fail to be generated for this class. This can weaken the mitigation of spurious-discriminative features w.r.t. this class. Nevertheless, in our experiments, ChatGPT is observed to be able to generate reasonable descriptions for all the evaluated classes, and our method achieves SOTA performance on all splits and all datasets, showing the efficacy of our method.
---
Rebuttal Comment 1.1: Title: Acknowledging the rebuttal and other reviews Comment: I have read the rebuttal and other reviews. The rebuttal from the authors resolved most of my concerns. I appreciate the clarification on the softmax baseline and the additional analysis on the spurious correlation. The proposed method seems to have merit. I am leaning toward accepting this paper. I am increasing my rating to 6.
---
Reply to Comment 1.1.1: Comment: Respectful Reviewer zWHw, We are glad that we have resolved most of your concerns. Thanks for your time and effort, and thank you for recommending acceptance of our paper. Best regards, Authors
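For concreteness, the misrecognition-rate analysis in A3 above can be computed as follows. This is a toy sketch with synthetic predictions whose rates roughly mirror the Crane/Albatross/Black Stork numbers quoted in the rebuttal; all data here is made up for illustration and is not the paper's actual output:

```python
import numpy as np

rng = np.random.default_rng(0)

def misrecognition_rate(true_labels, pred_labels, open_cls, closed_cls):
    """Fraction of `open_cls` test images predicted as `closed_cls`."""
    mask = true_labels == open_cls
    return (pred_labels[mask] == closed_cls).mean()

# toy stand-in for the Crane example: 100 open-set test images
y_true = np.array(["Crane"] * 100)
pred_before = rng.choice(["Albatross", "Black Stork", "unknown"], 100, p=[0.40, 0.26, 0.34])
pred_after = rng.choice(["Albatross", "Black Stork", "unknown"], 100, p=[0.04, 0.02, 0.94])

for closed_cls in ["Albatross", "Black Stork"]:
    b = misrecognition_rate(y_true, pred_before, "Crane", closed_cls)
    a = misrecognition_rate(y_true, pred_after, "Crane", closed_cls)
    print(f"Crane misrecognized as {closed_cls}: {b:.0%} before, {a:.0%} after")
```

Averaging these per-class reductions over all classes is one plausible reading of the "around 24% on average" figure the authors report.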
Summary: This work demonstrates a very interesting and sophisticated usage of various large models, including ChatGPT, CLIP, DINO, and DALL-E, to tackle the open-set recognition problem. The main challenge in open-set recognition is that the chosen classifier may get confused by certain spurious-discriminative features that are shared between the closed- and open-set classes. The main idea of this work is to leverage the text-to-image alignment capability of CLIP and the image-to-image alignment capability of DINO with the aid of virtual open-set classes. The list of virtual open-set classes is elicited from an LLM, i.e. ChatGPT, through carefully designed prompts that introduce intermediate reasoning and self-checking. Additionally, the descriptive text can then serve as prompts for DALL-E to synthesize representative images of the corresponding open-set classes. During inference, both the pre-trained CLIP and DINO models can then utilize the text class names and synthetic representative images to perform zero-shot (training-free) classification, meanwhile reducing the influence of spurious-discriminative features. Evaluation on several benchmarks demonstrates new state-of-the-art performance.
Strengths: 1. The paper is well-written and very easy to follow. The additional information provided in the supplemental material is also very helpful in resolving most of the doubts I had when reading the main manuscript. In the beginning, I particularly felt that how the large models collaborate and self-improve to accomplish their jobs was a little magical, and I wanted some more transparency after reading the main manuscript. I appreciate that the authors have put sufficient information in the supplemental material to make the whole work more technically sound and convincing. 2. This work tackles open-set recognition from the system level and demonstrates a very clever usage of several off-the-shelf large models of different modalities. Although it didn't introduce new techniques to improve any of the models involved, it successfully combined them to achieve superior performance. From a practical point of view, this method possesses many advantages, such as being training-free and offering fast inference, that are important in real-world applications.
Weaknesses: 1. Although large model collaboration is an interesting way of solving the open-set recognition problem, it treats the large models as black boxes as-is, and it is perhaps not easy to come up with an even more sophisticated method to build an LMC system like this work. In some sense, I feel that it does not pave a new way for follow-up research, and yet I do not believe the open-set recognition problem is completely solved by the LMC approach. 2. Although I don't think the evaluation of this work is insufficient, I think it is still worth validating the effectiveness of the LMC framework on larger benchmarks such as the Semantic Shift Benchmark proposed in [48]. I suspect the construction of virtual open-set classes by the LLM will fail at a certain point as the number of fine-grained, visually similar closed-set classes grows. Another drawback of using LLMs is that it is very difficult for humans to "fact check" the answers and detect any content that is made up by LLMs.
Technical Quality: 3 good Clarity: 3 good
Questions for Authors: From the exemplar dialogs with ChatGPT, the conversation still seems a bit open-ended and unstructured.
I am curious about how the authors completed the preparation of the virtual open-set list, the synthesis of diverse representative images, and the pre-computation of the CLIP and DINO features in just about 38.9 minutes (Appendix, Table 9). I assume that this is done programmatically. Are there any tricks involved here to simplify the post-processing of ChatGPT answers? As far as I know, the most important tunable parameter when calling the OpenAI API is the temperature, which controls the diversity of the results that the LLM generates. How did you set this parameter? Does it affect the quality of ChatGPT's self-checking process?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The performance of the LMC framework is apparently affected by the large models, and there is no discussion about it. I am particularly curious about the potential failure modes of ChatGPT. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
>*Q1: Pave a new way for follow-up research.*

**A1:** Thank you for pointing out that our method *is an interesting way*. Below, we also discuss potential **follow-up research** in solving open-set recognition (OSR) that could be inspired by our work. In our method, we identify suitable large models, design schemes to extract implicit knowledge, and design the collaboration of large models to solve the OSR problem. This implies that follow-up research can further investigate (1) how to identify more suitable large models, (2) how to better extract knowledge from large models, and (3) how to better collaborate large models for OSR. Besides, how to collaborate large models with conventional OSR methods to further improve performance through their complementary abilities can also be interesting. Moreover, despite the convenience of being training-free, adding trainable modules to further improve the performance can be explored. This implies that our method could also inspire new ways for follow-up research. We will add more detailed discussions to the paper.
>*Q2: (1) Validate on the larger Semantic Shift Benchmark. (2) Construction will fail at a certain point as the number of classes grows. (3) Difficult for humans to "fact check".*

**A2:** (1) **Larger benchmarks.** We evaluate on the larger Semantic Shift Benchmark [48] and report AUROC.

|Methods|CUB-Easy|CUB-Hard|SCars-Easy|SCars-Hard|FGVC-Aircraft-Easy|FGVC-Aircraft-Hard|ImageNet-Easy|ImageNet-Hard|
|-|-|-|-|-|-|-|-|-|
|**MLS [48] (ICLR 2022)**|88.3|79.3|94.0|82.2|90.7|82.3|78.7|72.8|
|**Ours**|**90.0**|**81.4**|**96.1**|**84.3**|**92.1**|**84.1**|**80.0**|**74.8**|

As shown, on a larger benchmark that has many fine-grained classes, our method can also outperform the previous SOTA method on all splits.
(2) **Construction of classes.** We admit that if there are a huge number of very similar fine-grained classes, the construction of classes may yield weaker performance. However, even on large fine-grained datasets (Semantic Shift) with 1000 closed-set classes, we observe that our method can still construct classes reliably and achieve SOTA performance, showing the robustness of our method. But we also need to mention that, if there are too many and too similar fine-grained classes, it will become very difficult for existing open-set recognition methods in general.
(3) **Fact check.** To further show the efficacy of our designs, below, we also introduce a **human fact check**. We invite 3 volunteers, each conducting fact checks of the construction of classes independently, with access to the internet and online knowledge bases (e.g., Wiki) to help their assessment. Then, the volunteers further discuss the disagreed class names. We conduct the above fact check on the constructed classes for CIFAR10. We observe that, without the proposed designs (intermediate reasoning and self-checking), around 5.5% of constructed classes are judged as low quality, while with the proposed designs, only around 0.9% of constructed classes are judged as low quality. This further shows (1) the efficacy of the proposed designs, and (2) that though it is difficult to do a very accurate fact check, we can still perform fact checks to some extent.
>*Q3: ChatGPT's conversation seems unstructured. How to complete preparation in just about 38.9 mins. Any tricks to simplify the post-processing of ChatGPT answers?*

**A3:** As mentioned in the Supp, we complete the preparation automatically with a single script.
Using this script (Python), we can (1) structure ChatGPT's answers and (2) complete the preparation very efficiently, via the following processes:
(1) Like the online tutorial [a], we simply use ChatGPT to **structure the answers** by instructing ChatGPT: "Please respond in a list and start each class with '--'". Thus, the answers are structured.
(2) As the script runs automatically, we can complete the preparation conveniently. To further enhance efficiency, our script opens 20 ChatGPT instances concurrently when simulating class names, and uses 20 DALL-E instances concurrently to synthesize images. In this way, on a consumer-grade workstation with a single 3090 GPU, we can **finish the preparation in just about 38.9 mins**. We will release the code and script. [a] Zdnet. 7 advanced ChatGPT prompt-writing tips you need to know.
>*Q4: How did you set the temperature? Does it affect self-checking?*

**A4:** We set the temperature to its default value 1.0. Below, we test setting it to different values in self-checking.

|Temperature|0.1|0.4|0.7|1.0|1.3|1.6|1.9|
|-|-|-|-|-|-|-|-|
|AUROC|86.1|86.4|86.6|86.7|86.7|86.3|86.0|

We observe that our method gets optimal results at 1.0 and 1.3 (1.0 used in the paper). We also find that all temperature settings outperform the variant w/o self-checking, showing the efficacy of self-checking.
>*Q5: The performance is affected by the large models and there is no discussion about it. Curious about potential failure modes of ChatGPT.*

**A5:** (1) **The performance is affected by the capability of each large model**. For example, GPT has different versions (e.g., GPT 3.0 and 3.5) with different capabilities. With GPT 3.0, AUROC on TinyImageNet is 85.2, while with GPT 3.5, AUROC is 86.7. This shows that the performance is affected by the large models. We will add more discussion to the paper.
(2) **Potential failure mode** of ChatGPT: We observe that when we ask ChatGPT a complex question (e.g., "Given a list of classes [classes], please list some classes that share spurious-discriminative features with class [class]?"), on some classes, ChatGPT fails to understand the question well and provides undesired answers such as: "Sorry, as an AI language model, I cannot answer this question." This can be seen as a failure mode. Nevertheless, as discussed in Sec. 3 of the paper, we design to guide ChatGPT step-by-step with intermediate reasoning. With this design added, we handle this failure mode well and observe that ChatGPT can stably generate good-quality answers.
---
Rebuttal Comment 1.1: Title: Response to authors' rebuttal Comment: I appreciate the great efforts made by the authors to prepare the rebuttal. I was impressed by the superior performance shown on the Semantic Shift Benchmark and in other experiments in the responses to other reviews. The rebuttal addressed most of my concerns and questions. I still hold my opinion that the LMC framework leaves less room for future exploration because the large models remain black boxes, and attempts in (1), (3) of A1 could easily be regarded as *tricks* or as lacking technical contribution. Nevertheless, I am in favor of accepting this paper and will keep my rating as Weak Accept.
---
Reply to Comment 1.1.1: Comment: Respectful Reviewer BEEJ, Thank you for recommending acceptance of our paper. Besides the above-mentioned follow-up research in A1, one more thing that can be explored in the future is to incorporate black-box optimization algorithms (e.g., CMA Evolution Strategy) into our LMC framework.
Recall that the LMC framework we propose in our paper is a training-free framework. Despite the convenience of being training-free, the usage of black-box large models in our framework for open-set recognition can be further improved through black-box optimization. Some attempts to enhance the usage of a large model for a specific task through black-box optimization have been made in the NLP area (e.g., "Black-Box Tuning for Language-Model-as-a-Service", published at ICML 2022). In our framework, for example, one potential way to enhance the usage of large models can be to utilize black-box optimization to craft prompts for ChatGPT in the virtual open-set class simulation process. By doing so, the implicit knowledge of ChatGPT can be better extracted. We will discuss this as a future direction in our paper as well. Thank you again for your time and effort. Best regards, Authors
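A minimal sketch of the "--" list-structuring trick described in A3 of the rebuttal above, assuming the 2023-era (pre-1.0) openai-python interface with an API key already configured in the environment; the prompt text and model name are illustrative, and the parsing simply keeps lines that follow the requested "--" format:

```python
import openai  # pre-1.0 openai-python interface (an assumption about the setup)

def ask_for_class_list(prompt, temperature=1.0):
    """Query ChatGPT and parse a '--'-structured list of class names.
    temperature=1.0 matches the default the rebuttal says was used."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": prompt + " Please respond in a list and start each class with '--'.",
        }],
        temperature=temperature,
    )
    text = response["choices"][0]["message"]["content"]
    # keep only lines that follow the requested '--' structure
    return [line.lstrip("- ").strip()
            for line in text.splitlines()
            if line.lstrip().startswith("--")]
```

Because the answer format is requested up front, the post-processing reduces to a one-line filter, which is consistent with the authors' claim that a single script can run the whole preparation unattended.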
Summary: This paper proposes to incorporate several large models to solve the spurious-discriminative-feature problem in open-set recognition. It first prompts ChatGPT to describe the known classes and generate a list of new classes that share spurious-discriminative features with the known classes. Then it prompts DALL-E to generate images of the known and unknown classes. It also uses CLIP to detect the less accurate generated images, asks ChatGPT to refine the descriptions, and uses DALL-E to generate again. During inference, it uses both the CLIP model and the DINO model to calculate uncertainty scores and fuses their results. The experiments and ablation study are comprehensive. Overall, I think this paper provides a new alternative solution for open-set recognition, and the data generation process could be very useful for industry.
Strengths: 1. The overall idea of using several large models for training-free open-set recognition is new and interesting. The pipeline of using ChatGPT to generate the new classes that share spurious-discriminative features with the known classes is reasonable. The refinement procedure using CLIP and ChatGPT is very rigorous and effective. 2. The inference pipeline that fuses the results of CLIP and DINO is also reasonable. It is somewhat similar to the method in few-shot open-set recognition [1, 2], which also fuses the uncertainty score from two aspects. The authors may consider citing the related papers. 3. The experiments and ablation studies are very comprehensive. Reference: [1] SSD: A Unified Framework for Self-Supervised Outlier Detection. In ICLR, 2022. [2] The Devil Is in the Wrongly-Classified Samples: Towards Unified Open-Set Recognition. In ICLR, 2023.
Weaknesses: The main pipeline is a kind of data generation process, which cannot be done online. So although it is a good try to use several large models and coordinate them to generate the desired data, it feels more like an engineering project than a research topic.
Technical Quality: 3 good Clarity: 3 good
Questions for Authors: 1. How much time does the data generation cost for different datasets? 2. What if we use other pre-trained image models instead of DINO, like ImageNet-pretrained models or MoCo?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
>*Q1: May consider citing the related papers [1,2].*

**A1:** We will cite [1,2] in the paper. Note that, different from [1,2], which fuse the uncertainty score from two aspects (e.g., from FS-KNN and Softmax), we are the first to collaborate large models and leverage their rich and distinct knowledge to perform open-set object recognition. During inference, as mentioned in our paper, we collaborate two large models (CLIP and DINO) complementarily to match the testing image to both the overall concept and the detailed local features of each class.
>*Q2: The main pipeline is a kind of data generation process, which could not be done online. [...] it is more like an engineering project rather than a research topic.*

**A2:** (1) **Online**: In open-set recognition, most existing works rely on a training process. Besides training, many works [8,13,21,30,32] also perform data generation (i.e., they need both training and data generation). The data generation process in these works (and also in our method) is not done **online**. In contrast to previous works, our framework performs **data generation** but does not require any training process. In other words, in this work, we, for the first time, propose a **training-free** framework to handle the open-set recognition task. This shows the advantage of our research work. As also mentioned by Reviewer BEEJ, "this method possesses many advantages, such as being training-free and fast inference, that are important in real-world applications".
(2) **Research topic**: Open-set object recognition is a challenging research problem and has attracted lots of research attention. As mentioned by Reviewer zWHw, "open-set object recognition is interesting and challenging". To perform open-set object recognition well, as also pointed out by Reviewer BEEJ, "the main challenge in open-set recognition is that the chosen classifier may get confused by certain spurious-discriminative features". To tackle this challenge, in our paper, inspired by the fact that different large models often contain rich and distinct knowledge, we, for the first time, propose a framework to collaborate different large models to handle the open-set object recognition task. Reviewer BEEJ also mentions that our work "demonstrates a very clever usage of several off-the-shelf large models of different modalities". However, utilizing and collaborating large models to perform open-set object recognition is challenging and is a research problem by itself that is worth exploring. To handle this problem and extract the implicit knowledge in large models effectively, in our paper, we also incorporate our framework with several novel designs. During the simulation of names of virtual open-set classes, to simulate a more comprehensive list of virtual open-set classes that better covers the spurious-discriminative features, we propose to enable the large model to perform self-checking. Moreover, when generating images via DALL-E, to ensure that the generated images can well represent their desired classes, we also design an effective cyclic self-assessing module to perform cross-checking across different large models and refine the generated images iteratively. In summary, open-set object recognition is challenging, and one of its main research challenges is to handle spurious-discriminative features (as mentioned by Reviewer BEEJ). In this work, to handle this research challenge, we, for the first time, propose a framework to collaborate large models to perform open-set object recognition.
We also incorporate our framework with several new and effective designs to utilize and collaborate large models better. Our framework achieves SOTA performance on all the evaluated benchmarks. **These show that our work brings some new research contributions**.
>*Q3: Data generation time cost.*

**A3:** In our framework, data generation consists of two steps: (1) generating the names of virtual open-set classes and (2) generating diverse images w.r.t. each class. Below, we report the time cost of each of the above two steps across different datasets.

| Methods | CIFAR+10 | CIFAR+50 | CIFAR10 | TinyImageNet |
|---|---|---|---|---|
| **Name generation** | around 1.3 mins | around 1.3 mins | around 2.0 mins | around 5.2 mins |
| **Image generation** | around 9.4 mins | around 9.4 mins | around 9.5 mins | around 32.7 mins |

As shown, our framework does not need much time for either of the above two steps. Taking TinyImageNet as an example, on our workstation with a single RTX 3090 GPU, the above two steps take 37.9 minutes in total. Note that after the above two steps, our framework is free of training, while the training stage of previous SOTA methods often requires a long training time (e.g., ZOC [12] takes around 40 hours during its training stage when run on our workstation).
>*Q4: Other pre-trained image models instead of DINO, like ImageNet pre-trained or MoCo.*

**A4:** Below, we compare "our framework with DINO" with two other variants. In these two variants, we replace DINO with an ImageNet-pre-trained transformer and MoCo, respectively, while keeping the rest of our framework the same. We report the AUROC score.

| Methods | CIFAR10 | CIFAR+10 | CIFAR+50 | TinyImageNet |
|---|---|---|---|---|
| **Previous SOTA** | 95.1 | 97.9 | 97.6 | 84.6 |
| **ImageNet pre-trained transformer** | 95.7 | 98.5 | 98.1 | 86.4 |
| **MoCo** | 95.2 | **98.9** | 98.0 | 85.8 |
| **DINO** | **96.6** | **98.9** | **98.5** | **86.7** |

As shown, no matter whether using DINO, the ImageNet-pre-trained transformer, or MoCo, our framework can consistently outperform previous SOTAs across different datasets. Moreover, our framework with DINO outperforms the other two variants. This can be because, as discussed in our paper, the collaboration of DINO, with its good capability on detailed local features, and CLIP, with its good capability on overall concepts, is complementary and can achieve good performance.
---
Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: I carefully read the rebuttal and the reviews of the other reviewers, and I believe that my concerns are resolved. This work is valuable for the research community and industry, so I raise my score to 6.
---
Reply to Comment 1.1.1: Comment: Respectful Reviewer Qabu, Thank you for your time and effort, and thank you again for pointing out that this work is valuable for both the research community and industry. Best regards, Authors
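To illustrate the inference-time collaboration described in A1 above (CLIP matching the overall concept, DINO matching detailed local features), here is a toy sketch of weighted score fusion. The feature dimensions, the mixing weight `w`, and the use of one averaged reference embedding per class are all simplifying assumptions, not the paper's exact scheme:

```python
import numpy as np

def fused_scores(clip_img, clip_text_per_class, dino_img, dino_ref_per_class, w=0.5):
    """Weighted fusion of CLIP image-text and DINO image-image cosine similarities.
    Features are assumed L2-normalized, with one (averaged) reference per class;
    w is a hypothetical mixing weight."""
    clip_s = clip_text_per_class @ clip_img   # (num_classes,) image-text alignment
    dino_s = dino_ref_per_class @ dino_img    # (num_classes,) image-image alignment
    return w * clip_s + (1.0 - w) * dino_s

rng = np.random.default_rng(0)
C, d = 5, 8  # toy: 5 classes (closed-set + virtual open-set), 8-dim features
norm = lambda x: x / np.linalg.norm(x, axis=-1, keepdims=True)
scores = fused_scores(norm(rng.normal(size=d)), norm(rng.normal(size=(C, d))),
                      norm(rng.normal(size=d)), norm(rng.normal(size=(C, d))))
print("predicted class index:", scores.argmax(),
      "(flag as open-set if it falls among the virtual classes)")
```

The open-set decision then reduces to checking whether the argmax lands on a virtual class, which is the role the simulated class list plays at inference time.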
Summary: This paper proposes an open-set recognition framework, Large Model Collaboration (LMC), which collaborates several large models (ChatGPT, DALL-E, CLIP, and DINO) to make use of their rich implicit knowledge to reduce the reliance on spurious-discriminative features. The proposed framework consists of the following two stages: (i) The first stage simulates virtual open-set classes, including simulating names and generating images. In order to improve the effectiveness of the simulated names for open-set classes, ChatGPT is asked three designed questions with corresponding intermediate rationales to obtain names of new classes that share spurious-discriminative features with each closed-set class, and is guided to perform iterative self-checking (where the newly simulated class names are used to expand the name list, and the expanded name list is used to obtain new simulated class names) to cover as many spurious-discriminative features as possible. Then, ChatGPT is asked to generate diverse descriptions for each class in the expanded name list, and DALL-E is used to generate images based on these descriptions. In order to improve the accuracy of the generated images, a cyclic self-assessing module is proposed, where ChatGPT is provided with feedback from CLIP about the less accurate images to refine the descriptions generated by ChatGPT, and DALL-E is used to re-generate images based on the refined descriptions. At this stage, an expanded list of both closed-set and simulated open-set classes, along with diverse generated images for each class in the list, is obtained. (ii) The second stage is for inference based on the expanded list and the generated images. At this stage, two alignments (image-text alignment by CLIP and image-image alignment by DINO) are performed to obtain the corresponding scores for each testing image, and the weighted result of the two scores is used for open-set recognition.
Strengths: (1) The idea, which makes use of several large models for open-set recognition, is somewhat novel and interesting. (2) The writing is clear and easy to follow. Some examples are well visualized. (3) Experimental results on four small-scale datasets (CIFAR10, CIFAR+10, CIFAR+50, and TinyImageNet) demonstrate that the proposed method performs better than the comparative methods.
Weaknesses: (1) Lacking comparisons under the cross-dataset setup: The proposed framework is only evaluated under the standard setup, where the open-set-class images and closed-set-class images are from the same dataset. Such experiments are insufficient; in fact, many open-set recognition methods [15,21,36,45,55,63] have also been evaluated under the cross-dataset setup. (2) Lacking comparisons on larger-scale datasets: in this paper, only four small-scale datasets (i.e., CIFAR10, CIFAR+10, CIFAR+50, and TinyImageNet) are used for evaluation. It is necessary to make comparisons on some larger-scale datasets, e.g., CUB and FGVC-Aircraft. (3) Comparison with other image generative methods: One main contribution of the proposed framework is generating virtual open-set-class images. Many other works (e.g., [8,21,30]) also generate virtual images, and it would be better to replace the image generation method used in the proposed framework with some of them and make a further comparison. (4) Lacking details of efficiency: One advantage of training-free methods is their high efficiency.
In the proposed framework, although there is no training stage, the pre-processing stage and the inference stage, which is based on two alignments, may also bring additional computational costs. Hence, it is suggested to provide details about the efficiency of these two stages. (5) Lacking discussion of the ablation results: In Sec 4.4, several ablation studies are conducted. However, the authors only stated that the final results were improved, but didn't give any discussion of the effect of each module.
Technical Quality: 2 fair Clarity: 3 good
Questions for Authors: Please see the weaknesses above.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 3 good
Limitations: The authors did not adequately address the limitations. Two additional suggestions are listed here: (1) It's better to add the full name of LMC, i.e., Large Model Collaboration, to the abstract in line 6. (2) In Line 219, it is suggested to add a brief description of Fig. 5, although Fig. 5 is also described in Line 415.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
>*Q1: Comparisons under the cross-dataset setup.*

**A1:** Below, we evaluate our method under a common cross-dataset setup following [15,36,45,55,63]. In this setup, closed-set-class images are from CIFAR-10, and open-set-class images are from ImageNet-crop, ImageNet-resize, LSUN-crop, and LSUN-resize. We use the same evaluation metric, F1-score, as [15,36,45,55,63] under this setup.

| Methods | ImageNet-crop | ImageNet-resize | LSUN-crop | LSUN-resize |
|---|---|---|---|---|
| **CVAECapOSR [15]** | 85.7 | 83.4 | 86.8 | 88.2 |
| **GFROSR [36]** | 82.1 | 79.2 | 84.3 | 80.5 |
| **CGDL [45]** | 84.0 | 83.2 | 80.6 | 81.2 |
| **CROSR [55]** | 73.3 | 76.3 | 72.0 | 74.9 |
| **PROSER [63]** | 84.9 | 82.4 | 86.7 | 85.6 |
| **Ours** | **88.0** | **86.0** | **91.5** | **93.5** |

As shown, under the cross-dataset setup, our method also achieves significant performance improvement over previous methods, further demonstrating the effectiveness of our method.
>*Q2: Comparisons on larger-scale datasets, e.g., CUB and FGVC-Aircraft.*

**A2:** Following your suggestion, below, we also evaluate our method on larger-scale datasets including CUB and FGVC-Aircraft, and compare our method with previous methods which report results on these datasets. We report the AUROC score and follow the Easy/Hard data split of [48] for these two datasets.

| Methods | CUB-Easy | CUB-Hard | FGVC-Aircraft-Easy | FGVC-Aircraft-Hard |
|---|---|---|---|---|
| **ARPL+CS [6] (TPAMI 2021)** | 83.5 | 75.5 | 87.0 | 77.7 |
| **MLS [48] (ICLR 2022)** | 88.3 | 79.3 | 90.7 | 82.3 |
| **Ours** | **90.0** | **81.4** | **92.1** | **84.1** |

As shown, on these larger-scale datasets, our method can also outperform previous methods and achieve SOTA performance on all splits.
>*Q3: Replace the image generation method: Many other works (e.g., [8,21,30]) also generate virtual images.*

**A3:** Following your suggestion, below, we compare our framework with three variants of our framework (**Our other parts + Image generation with [8]**, **Our other parts + Image generation with [21]**, **Our other parts + Image generation with [30]**). In these three variants, we replace the image generation method used in our framework with the image generation method in [8], [21], and [30], respectively, while keeping the rest of our framework the same (e.g., in these three variants, we still use ChatGPT to simulate a list of virtual open-set classes, and these classes are still aligned with the testing image through CLIP during inference).

| Methods | CIFAR10 | CIFAR+10 | CIFAR+50 | TinyImageNet |
|---|---|---|---|---|
| **Our other parts + Image generation with [8]** | 95.0 | 97.1 | 96.1 | 84.5 |
| **Our other parts + Image generation with [21]** | 94.3 | 96.7 | 95.3 | 83.7 |
| **Our other parts + Image generation with [30]** | 94.5 | 97.0 | 95.4 | 83.8 |
| **Ours** | **96.6** | **98.9** | **98.5** | **86.7** |

As shown, our framework with its original image generation method outperforms all three variants, showing the effectiveness of the image generation method proposed in our framework.
>*Q4: Details of efficiency: it is suggested to provide details about the efficiency of the two stages.*

**A4:** Below, we show the time cost of the two stages, i.e., (1) the pre-processing stage and (2) the inference stage. We evaluate the time cost of our framework on a consumer-grade workstation with an RTX 3090 GPU.
With respect to the pre-processing stage, the time cost of our framework ranges from around 10.9 minutes to 38.9 minutes across the evaluated datasets. Taking TinyImageNet as an example, on our workstation, the pre-processing stage of our framework takes 38.9 minutes. Note that our framework is free of training, while the training stage of previous SOTA methods often requires a long training time (e.g., ZOC [12] takes around 40 hours during its training stage when run on our workstation). This demonstrates the efficiency of our training-free framework. Moreover, with respect to the inference stage, we find that our method achieves real-time performance (around 0.02 seconds per image across the evaluated datasets).
>*Q5: Discussion of the ablation results in Sec 4.4 on the effect of each module.*

**A5:** Following your suggestion, we will add discussions to Sec 4.4 on the effect of each module. Below, we take Tab. 4 and Tab. 5 in Sec 4.4 as examples.
(1) In Tab. 4, our framework, which has a design to guide ChatGPT with intermediate reasoning, outperforms the variant "w/o intermediate reasoning". This is because the intermediate reasoning design can lead ChatGPT to better understand the asked question, and thus leads to performance improvement. Moreover, our framework with the self-checking design also outperforms the variant "w/o self-checking". The performance improvement is due to the effect of the self-checking module, which can lead ChatGPT to simulate a more comprehensive list of virtual open-set classes.
(2) In Tab. 5, our framework that uses the cyclic self-assessing module outperforms all three variants, i.e., the "w/o self-assessing" variant, the "Check and discard" variant, and the "Check and naively refine" variant. Such improvement can be attributed to the effect of the cyclic self-assessing module, which can leverage CLIP to provide specific feedback for ChatGPT, so that ChatGPT can refine the descriptions more toward the direction we desire. Note that all three variants generate images in some alternative way but do not use the cyclic self-assessing module. Due to space limitations here, we will add more detailed discussions w.r.t. all ablations to Sec 4.4 of the paper.
>*Q6: Two additional suggestions.*

**A6:** Thanks for your suggestions. We will (1) add the full name of LMC, i.e., Large Model Collaboration, to the abstract, and (2) describe Fig. 5 at Line 219 in the revised version.
---
Rebuttal Comment 1.1: Comment: I appreciate the authors' responses, which have addressed most of my concerns. To be honest, it would be better to make a cross-dataset comparison with some methods (e.g., [27, 12]) that appeared and performed relatively well in the original Tables 1 and 2 of the submitted text, rather than with some newly added methods. Just as stated in my first-round comment, I still feel that the idea of this paper is somewhat interesting, so I will keep my rating as Borderline accept.
---
Reply to Comment 1.1.1: Comment: Respectful Reviewer Kpbj, We are glad that we have addressed most of your concerns, and thank you for leaning towards accepting this paper. With respect to the cross-dataset comparison, as pointed out by you in the first-round comment, "many open-set recognition methods [15,21,36,45,55,63] have also been evaluated under the cross-dataset setup."
Among these methods, a common cross-dataset setup is utilized by most of them ([15,36,45,55,63]), in which closed-set-class images are from CIFAR-10, and open-set-class images are from ImageNet-crop, ImageNet-resize, LSUN-crop, and LSUN-resize. Following [15,36,45,55,63], we also evaluate our method under this common cross-dataset setup, and we compare our method with [15,36,45,55,63] during our rebuttal. Thanks for your suggestion that it is better to also make a cross-dataset comparison with methods in [12] and [27]. While [12] and [27] do not make cross-dataset comparisons in their own paper, below, we re-evaluate [12] and [27] under the above common cross-dataset setup and compare our method with them. | Methods | ImageNet-crop | ImageNet-resize | LSUN-crop | LSUN-resize | |---|---|---|---|---| | **ZOC [12]** | 84.6 | 81.8 | 87.4 | 86.8 | | **PMAL [27]** | 85.8 | 83.2 | 86.5 | 87.6 | | **Ours** | **88.0** | **86.0** | **91.5** | **93.5** | As shown, under the cross-dataset setup, our method also outperforms [12] and [27] by a large margin. This further shows the effectiveness of our method. We will add the above discussion and results to our paper as well. Once again, we would like to express our sincere thanks for your time and effort. Best regards, Authors
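As additional context for A3 above, the CLIP alignment step can be illustrated with a minimal, hypothetical sketch (this is not the authors' released code; the checkpoint name, image path, and class lists are illustrative assumptions, using the Hugging Face `transformers` CLIP API):

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

closed_set = ["airplane", "automobile", "bird", "cat"]       # known classes (illustrative)
virtual_open_set = ["helicopter", "scooter", "bat", "lion"]  # classes simulated by an LLM (illustrative)
prompts = [f"a photo of a {c}" for c in closed_set + virtual_open_set]

image = Image.open("test.png")  # illustrative path
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]

# If the highest-scoring prompt belongs to a virtual open-set class,
# the image is flagged as open-set; otherwise it keeps the closed-set label.
best = int(probs.argmax())
is_open_set = best >= len(closed_set)
```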
Rebuttal 1: Rebuttal: We thank all reviewers for their recognition of our contributions (Reviewer Kpbj: "novel and interesting"; Reviewer Qabu: "the overall idea is new and interesting"; Reviewer BEEJ: "demonstrates a very clever usage of several off-the-shelf large models of different modalities", "possesses many advantages, such as being training free and fast inference, that are important in real-world applications"; Reviewer zWHw: "the problem tackled is interesting and challenging", "favorable performance").
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Cookie Consent Has Disparate Impact on Estimation Accuracy
Accept (poster)
Summary: Cookies enable accurate identification and tracking of user behavior, leading to personalized ads and better ad campaign performance. However, this raises privacy and fairness concerns. This study investigates the impact of user consent on a recommender system's ability to learn about users' demographics and preferences. It reveals that when consent rates are demographically dependent, a user's choice to not share their cookie can paradoxically lead to the recommender system knowing more about them. Moreover, the gap in consent rates between demographics amplifies estimation errors. As the system receives more user responses, the effects of consent decisions diminish. The findings highlight the need for new fairness notions that promote consistency between users' privacy choices and the accuracy of the recommender system's estimations. Strengths: The paper studies a practical topic which is valuable to the community. The paper is also clearly written and easy to follow. Weaknesses: 1. The work seems incomplete. With a simulation experiment and an analysis, there is no solution to the observed issues. 2. The simulation setting is a bit simplified, introducing inductive bias into the results. Concretely, matrix factorization only mimics user behaviors (e.g., clicks), ignoring user/item features. To be honest, matrix factorization is not the mainstream technique in industry; tree-based and deep models are. These models will be affected by users' consent decisions since user features will be unavailable, which will pose challenges to model training and serving, and in turn to model performance and the subsequent fairness and privacy issues. It would be more attractive if the authors considered this setting. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. User grouping/clustering is a common practice in industry. Estimating the effects of cookie consent on a real-world dataset would be more meaningful. I would suggest analyzing performance and consent rates using the external grouping information. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A since the authors didn't propose new methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the review. - Regarding your comment that our paper does not contain a solution, our view is that there is value in drawing attention to an issue without necessarily having a fix. We’ve proposed changes to industry practice that should be made, along the lines of ensuring that how much a system knows about a user is consistent with their privacy decision, but we feel that by no means is the solution trivial. It will likely require deliberate loss of user information, which in turn will require the design of incentives to do so. The goal of our paper is to present an interesting observation related to the fairness of consent with the hope of seeding further conversation on the topic. - In response to your comment on the simplicity of the simulation setting, we agree that it would be interesting to investigate the impact of user consent in the context of more realistic recommender system algorithms; however, we would like to note that (while a simple algorithm) matrix factorization does form the basis for many more modern recommender algorithms (hybrid and embedding-based methods). The goal of our paper was to illustrate the disparate impact of consent for a simple foundational algorithm such that these insights would encourage broader discussion in the community (and, to your point, investigation into whether these effects also exist in more complex systems). Additional discussion on the simulation setting can be found in the global rebuttal section under “Real-world relevance of the studied model.” - Lastly, grouping or clustering of users is done in practice to aid with scalability: clustering reduces the effective dimension of the recommendation task by making group-level recommendations (generally at the cost of recommendation accuracy). While this is an important algorithmic step for practical recommender systems, the focus of our paper is on the impact of user-level consent decisions. Our understanding of your comment is to extend this study to include alternative grouping structures (not necessarily dictated by the user’s cohort), which is certainly an interesting direction, but we feel it falls outside the main message of our paper. --- Rebuttal Comment 1.1: Comment: I'd like to thank the authors for their clarification and response to some of the reviewers' major concerns.
Summary: * This paper aims to investigate the effect of cookie consent (as mandated by GDPR, ePD) on the accuracy of recommendation algorithms using a simulation study. * Model: * Each advertisement $a$ is characterized by a topic $\tau \in \mathcal{T} \cong [m]$. * Each user $i\in[n]$ is described by a cookie $\phi_i \in \Phi \cong [c]$, a demographic $\theta_i \in \Theta \cong [d]$, and a vector of topic affinities $\alpha_i \in \mathbb{R}_+^m$. * A prior $\mu\in\Delta(\Phi\times\Theta)$ is assumed over cohort-cookie pairs. Users from cohort $\theta$ reveal their cookie with independent probability $q_\theta$. User preferences $\alpha_i$ are multivariate log-normal. The probability of user $i$ clicking ad $a$ (denoted $c_{i,a}=1$) is $p_{i,\tau}$, depending on the utility and on a “no-click-mass” parameter $p_0$. * Upon observing the user's cookie (or lack of consent), the system constructs a posterior belief on their cohort $\bar{\mu}_{i,0}\in\Delta(\Theta)$. Items are recommended by an $\varepsilon$-greedy policy, and the model is retrained at each time step $t$ using regularized least squares (a toy sketch of this loop is given after this review). * In the main experiment, the authors simulate the environment using the Recsim framework, for a population with two cohorts, two types of cookies, $n=200$ users, and $m=200$ topics. * Section 5.1 demonstrates the disparate impact of consent: errors are symmetric in the case of homogeneous consent, and disparate in the case of heterogeneous consent. * Section 5.2 aims to demonstrate amplification effects, showing the gap in consent rates serves as an amplifier for an individual’s consent decision. * Finally, the authors discuss in detail the possible implications of such effects on popular notions of fairness, and the effect of cookie consent on the industry. Strengths: * Problem is novel and well-motivated. * Model is outlined clearly. * Documented code is provided, and sensitivity analysis was conducted on the provided model. Weaknesses: * Paper only analyzes the behavior of a specific, non-standard recommender in a fully-synthetic environment. It is not clear from the analysis whether such effects are significant compared to other factors which are currently ignored in practice. * Disparity between the consent groups vanishes as $t$ grows, raising concerns about the significance of the claimed effects in practice * Experiments compare relatively simple populations: only two populations, with relatively low dataset sizes (fixed pool of $n=200$ users). Such experiments can be complemented either by theoretical analysis or experiments on real-world data, but neither is provided in the paper. * Minor technical remark: The letter $a$ is used to denote both ads (e.g. in Section 4) and “agreement to collect cookies” (e.g. in Section 5.2), which caused confusion. The notation $\bar{\mu}$ is not referred to in the algorithm. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: * What is the effect of the recommendation algorithm on such accuracy disparity effects? Do recommendation algorithms that are indifferent to cookie consent amplify or diminish disparity? (e.g., matrix factorization with uniform weights instead of confidence weights, KNN, etc.) * What is the motivation behind the modeling of cookies as a low-dimensional categorical variable ($\phi \in \Phi \cong [c]$)? How does it reflect the common structure of tracking cookies in practice? As far as I understand, cookies contain unique identifiers that track users across visits, and are therefore unique to every user.
Moreover, users can store multiple types of cookies, and not just one. * How does the model behave when there are more than 2 cohorts? How does it behave when there are more than 2 categorical cookie types? * In which ways does the synthetic simulation differ significantly from practical settings? Do we expect the magnitude of disparity effects to increase or diminish when the discussion moves to practical settings? Why, and is it possible to support the claim with experiments? * Can similar disparity effects be derived from simpler principles? (i.e., the “Missing at Random” model from survey statistics causing bias when unaccounted for?) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: I feel that the authors did not discuss limitations to a sufficient extent. Claimed effects are only demonstrated in a synthetic simulation with a non-standard recommendation algorithm. Even though I strongly agree that discrepancy effects may occur in practice to some extent, the analysis does not indicate whether their magnitude is significant in practical settings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
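To make the recommendation loop in the model summary above concrete, here is a toy sketch (not the authors' Recsim-based implementation; the number of topics, the exploration rate, and the per-topic ridge-style update are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
m, eps = 8, 0.1                 # topics and exploration rate (the paper uses m = 200)

alpha_hat = np.ones(m)          # current estimate of one user's topic affinities
counts = np.zeros(m)            # responses observed per topic
clicks = np.zeros(m)

def recommend() -> int:
    # epsilon-greedy: explore a random topic w.p. eps, otherwise exploit
    if rng.random() < eps:
        return int(rng.integers(m))
    return int(np.argmax(alpha_hat))

def update(topic: int, clicked: bool, lam: float = 1.0) -> None:
    # toy regularized least-squares update per topic (ridge toward the prior 1.0)
    counts[topic] += 1
    clicks[topic] += clicked
    alpha_hat[topic] = (clicks[topic] + lam) / (counts[topic] + lam)

topic = recommend()
update(topic, clicked=rng.random() < 0.2)  # simulated user response
```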
Rebuttal 1: Rebuttal: We appreciate your careful review. We hope that our responses below resolve some of your concerns. Impact of recommendation algorithm on accuracy disparity: - The impact of different recommendation algorithms on the observed disparity differs depending on how the algorithm deals with missing information. Generally, algorithms that are indifferent to consent would treat all users identically (ignoring the cookie information of consenting users). Matrix factorization with uniform weights is one such algorithm: all user-item interactions would be weighted the same, likely leading to lower disparity. For KNN, the situation is more complex, where, depending on how many “neighbors” a user has (dictated by the consent decision), it could be possible that disparity would be amplified as the measure of similarity becomes less reliable. In general, whether an algorithm diminishes or amplifies disparities depends on how it deals with the user’s missing cookie information (in the case of non-consent). Modeling of cookies: - We agree that our cookie model is simplified. In practice, cookies generally contain both static information (e.g., user’s region) that is informative for their demographic/cohort as well as dynamic/behavioral information (e.g., click and browsing behavior). As part of our initial investigation, we’ve decided to focus on just the static component. - While we haven’t included behavioral information in a user’s cookie for the purposes of this paper, this is certainly an interesting direction. As an initial thought, given the essentially unique cookie value for each user, revealed cookies would provide a significant amount of information on the identity of the user, potentially increasing the gap in treatment between consenting and non-consenting users. This is certainly a topic that we would like to investigate in follow-up work. Model behavior for more than 2 cohorts/cookies: - We’ve run additional simulations in response to your question (see the attached pdf). An explanation of the results for an increased number of cookies and cohorts can be found in the global rebuttal (under “Cookie and cohort model”), as it was raised by another reviewer. Differences between simulation and practical settings: - Our model differs from practical settings in multiple ways. As discussed in the global rebuttal section, our model is intentionally simplified in order to isolate the effects of cohort-dependent consent rates without the noise or confounding factors present in practical settings (factors that would mask our ability to detect such disparities but may still exist). We do not know with certainty how the degree of disparity would change in practice (it’s possible that disparities may increase with dynamic information, as discussed above). We argue that the findings from our simplified setting should at least provide some motivation for investigating whether these issues also exist in practice. Disparity effects from simpler principles: - Your question regarding the “Missing at Random” model is interesting; however, we feel that this is a different setting from our cohort-dependent consent setting. The MAR model assumes that the probability that a value is missing is related to the observed data but is independent of the unobserved data. Mapping this to our consent setting, this would imply that a user’s consent decision is based solely on observed quantities, without any influence from the underlying unobserved quantities (e.g., cohort).
Our model specifies that consent is dictated by the unobserved quantity (cohort), which differs from the MAR model. Other comments: - Regarding your comment on the disparity vanishing as the number of interactions grows, this occurs after a relatively high number of interactions, with the disparity present for a significant number of interactions (upwards of 20 interactions). We lack precise data on the average number of sequential interactions that a user makes with a given website (or an ad serving platform); however, there are many reasonable scenarios where a user will only interact with a website a handful of times before navigating elsewhere (not to mention users periodically clearing their cookies, restarting the interaction counter as a result). - While we did not run simulations on a dynamic user pool due to the time constraints for the rebuttal, we would imagine that the impact of such a modification may further amplify the observed disparities. The reason for this is that the gap in estimation accuracy between consenting and non-consenting users is greatest when the interaction count is low; when the user pool is changing, there will be more fresh users in the user pool who, by definition, have zero interactions with the recommender system. - We've included a summary of the limitations of our paper in the global rebuttal section. --- Rebuttal Comment 1.1: Comment: Thank you for the thorough response! It helped clarify, and the additional results you describe sound very interesting. In my perspective, the main claims of the paper seem to rely on two non-trivial elements: (1) a novel recommendation algorithm that differs significantly from algorithms commonly used in the literature and in practice, and (2) an interesting, but highly non-trivial model for internet cookies. As these two elements are evaluated together, the contribution of each element can't be isolated, making it more difficult to apply the results and build upon them. The extended simulations and results in the reply to reviewer cPvy suggest that effects persist even under more common recommendation algorithms, thereby isolating the effect of element (2) to some extent. However, despite the additional reasoning, the current cookie model still does not seem intuitive to me, and I believe it needs to be further justified. Additionally, while I regard the new results as essential to the key claims of the paper, they cannot be thoroughly validated. I also agree with the points raised by reviewer iC41. I therefore update my rating to 4 (Borderline Reject).
Summary: This paper simulates a recommendation system that is aware of and responds to users' decisions to share cookies, along with data from users' clicks. In this simulation, the paper finds that the recommendation accuracy could be higher for users who do not consent to provide cookies under certain conditions. The paper empirically analyzes these conditions along various dimensions. Strengths: The paper is very well written, and the presented result is novel to the best of my knowledge. The paper has a high potential to spur future work on the interplay between cookie consent decisions and recommendation accuracy. Weaknesses: **Strong claim in the title** The key finding is via simulations. The title makes it seem like the result of a randomized field experiment. I think it is possible to temper the title to reflect this. **Complex simulation setup, results possibly driven by unrealistic recommendation model** The simulation setup is not parsimonious, which makes it hard to see the key drivers of the result. a. Why is the ad pool sampled in each round and not deterministic? b. Why does the cookie contain demographic information, and not contain user click/navigation history information? c. The recommendation model is designed to be click-maximizing for the given user model. In practice, the recommendation system has no idea about the user model and likely just performs matrix factorization (with or without exploration) to generate recommendations. This modeling choice for the recommendation model is a large deviation from practice; what happens to the results if the recommendation model is just user-model-agnostic matrix factorization? **Cookie-cohort dependency and cookie model assumptions** The cookie in the user model is designed to have the sole function of being a proxy for the user's cohort. Specifically, it is not "contaminated" by what the user actually clicks on, which deviates from reality (cookies track clicks and navigation history to some extent). Combined with the demographic disparity in consent rates, this turns the presence or absence of cookies (and the cookies themselves) into unrealistically strong correlates of the cohort of an individual. How do the results change if the cookies also stored some sort of "average empirical topic preferences" over the users' last N clicks? This reduces their strength as a prior and makes them less useful, since they reflect data that the recommendation model has already seen. However, I think this is more realistic. **Theoretical explanations for the empirical observations** How do the observations deviate from theoretical predictions? Specifically, Bayes' rule can be used to show that a demographic disparity in consent rates will increase the accuracy of predicting a user's cohort from their consent decision. Further, the cohort with a higher consent rate will provide more data to estimate that cohort's topic preferences accurately in each round. These two factors combined will influence the overall recommendation accuracy until the recommendation system has collected enough clicks from both cohorts. As such, Observations 1 and 2 seem to say that the outlier groups in each cohort (with respect to the consent decision) have a higher estimation error. Does Figure 3 contradict this explanation? My low overall rating is primarily a function of the lack of theoretical explanations.
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See weaknesses above Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The paper does not discuss any limitations. I think discussing the limitations is important, given the assumptions made in the simulation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your detailed review of our paper. We’ve responded to each of your queries (either in this response or in the global rebuttal). Response to: "Complex simulation setup, results possibly driven by unrealistic recommendation model": - Practically, the set of recommendable ads may be fixed, (re)sampled, or generated by some other process (e.g., via some auction process); see a discussion in the simulation framework upon which our code is based [Ie et al., 2019]. We chose a resampled ad pool because we felt that it was more realistic than a fixed ad pool (real-world ads are continually coming and going). Additionally, from an algorithmic perspective, a resampled ad pool helps with inference/learning due to the increased diversity of ads across a given episode. We’ve included a plot in the one-page attachment to illustrate the effect of a deterministic/fixed ad pool. - Your comment regarding the nature of information contained in the cookie was raised by another reviewer as well. Please see the global rebuttal (under "Cookie and cohort model") for our response. - We would like to address your comment that “in practice, the recommender system has no idea about the user model.” While we agree that most recommender systems may not have access to an explicit user model, the system is not blind to user behavior/preferences. Real-world recommender systems base recommendations on learned user preferences (from vast amounts of user-item interaction data) in order to maximize some measure of engagement (e.g., clicks, shares, watch time, etc.). These data-driven models serve as models of user behavior, even if implicit. Consequently, we argue that no recommendation model is completely model-agnostic, but we would be happy to discuss more on some model variants we can investigate. Response to: "Cookie-cohort dependency and cookie model assumptions": - We appreciate your comment regarding incorporating click behavior into the description of the user’s cookie. Under your proposed modification, a cookie would be represented as a pair (cookie value, empirical topic preferences). Given that both the consent rate and a user’s tastes are cohort-dependent, we feel that incorporation of average empirical topic preferences into a dynamic component of the cookie would still exhibit strong cohort-dependence (rather than be contaminated by user behavior). We’d like to incorporate this into the paper but, given the time constraints of the rebuttal period, we were unable to include this modification. Response to: "Theoretical explanations for the empirical observations": - Regarding your comment on theoretical explanations for the empirical observations, Observation 1 states that, in the case of cohort-dependent consent rates, users who disagree (to sharing their cookie) experience a lower estimation error only if they belong to the lower consent-rate population (the intuitive non-consent leading to higher errors only holds for the higher consent-rate population). Note that this effect holds purely due to the difference in the consent rates between the two cohorts, and is not dictated by the relative size of the agree vs. disagree groups in each cohort in general (in response to your statement on “outlier groups”). In other words, this holds even in the case where both cohorts have more users who disagreed than agreed (see the case where $q_0=0.1$ and $q_1=0.3$). Observation 2 explains how two users from different cohorts fare when making a given consent decision.
These observations appear consistent with the figure. Other comments: - We’re currently discussing some modifications to the title without deviating too much from the main finding of our experiments (disparate impact of consent). - We've also included a summary of the limitations of our paper in the global rebuttal section. - Lastly, while we agree that our paper is empirical in nature, we feel that this brings sufficient value due to the potential to encourage additional conversations (and experimentation) in this space, even in the absence of theoretical explanations. --- Rebuttal Comment 1.1: Comment: Thanks for the response! It mostly addresses my concerns, but let me start with one remaining concern. My earlier comment said that “in practice, the recommender system has no idea about the user model.” Your response says "the system is not blind to user behavior/preferences." However, this is not what I claimed. I claimed that *your specific user model* is inaccessible to the recommender system (i.e. the form of the model). I agree that revealed preferences are always available for recommender systems to use. More concretely, I would model a recommender system parsimoniously as a matrix factorization of the user-item interaction matrix. More complex recommenders are possible, but this one is a canonical textbook example. Given a recommender system that simply factorizes the user-item interaction matrix, do the results hold? --- Reply to Comment 1.1.1: Title: Impact of a simpler recommendation system Comment: Thank you for clarifying. In response to your comment, we’ve just finished running some additional simulations with a recommendation agent that computes factor estimates via a standard matrix factorization algorithm (without heterogeneous confidence weights, i.e., all responses are treated with equal importance). To ensure that the factorization is meaningful (i.e., there is something to factor), a set of 10 recommendation-response pairs (obtained uniformly and offline) is used for initial estimates. Updated factor estimates are obtained every 10 interactions. Unfortunately, we’re unable to attach plots to this response, but we’ll do our best to explain the main insights. The new simulations indicate that the disparity effects still persist even in the absence of confidence weights (namely, Observations 1 and 2 still hold). The reason is that, even under the standard MF procedure, where the algorithm does not maintain confidence weights on user responses, the user responses still carry the same information on user cohorts as they do under the more complex (confidence-weighted) model. This is simply because, for a given set of factor estimates, the (Bayesian) inference process is only a function of the cohort beliefs, the recommendations, and the associated user responses. Your comment is an important one, since it highlights that the disparity issue is not simply an artifact of the specific confidence-weighted matrix factorization procedure used in the paper, but arises from the more fundamental interaction between cohort-dependent consent rates and the system’s ability to accurately infer user cohorts based on their responses. We intend to include this insight as part of a section in the supplemental material on the impact of various recommendation algorithms.
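For reference, the uniform-weight matrix factorization baseline discussed in this reply can be sketched as plain alternating least squares (this is not the authors' code; the rank, regularizer, and iteration count are illustrative assumptions, and a confidence-weighted variant would simply multiply each observed squared error by a per-response weight):

```python
import numpy as np

def als(R, M, k=8, lam=0.1, iters=20, seed=0):
    """Alternating least squares on a user-item matrix R with an observation
    mask M (1 = observed); every observed response is weighted equally."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_items, k))
    for _ in range(iters):
        for i in range(n_users):        # user factors given item factors
            obs = M[i] == 1
            A = V[obs].T @ V[obs] + lam * np.eye(k)
            U[i] = np.linalg.solve(A, V[obs].T @ R[i, obs])
        for j in range(n_items):        # item factors given user factors
            obs = M[:, j] == 1
            A = U[obs].T @ U[obs] + lam * np.eye(k)
            V[j] = np.linalg.solve(A, U[obs].T @ R[obs, j])
    return U, V

# toy usage: 5 users x 6 items with roughly half the responses observed
rng = np.random.default_rng(1)
M = (rng.random((5, 6)) < 0.5).astype(int)
R = rng.random((5, 6)) * M
U, V = als(R, M, k=2)
```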
Summary: This paper investigates the influence of varying consent rates on the accuracy of estimation. The paper utilizes a simulation-based approach, in which the decision to share cookies plays a significant role. The findings reveal interesting patterns that can contribute to the development of more refined fairness metrics, taking into account the individual's choice to share information. Strengths: 1. The paper presents an intriguing explanation of the observed phenomena in the simulation experiment, particularly the behavior of low-agree individuals (such as older people) choosing not to share information, which instead results in a lower estimation error for the agent in a cold-start setting. 2. The authors discuss how these results can guide future directions to enhance the consistency of fairness evaluations. Weaknesses: 1. The study's conclusions are based solely on a simple simulation experiment. There is a lack of discussion or testing on whether these observations would hold or approximately hold on real-world datasets. This limits the generalizability of the findings and their applicability to practical scenarios. Please correct me if I missed the details. 2. The observations (1 and 2) seem like they could be derived from trivial calculations using the prior $\mu_{i,0}^{-}$ without the need for simulation. (See Questions) Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1: Can the observations in Fig. 2 at t=0 be simply demonstrated by your calculated prior $\mu_{i,0}^{-}$? Is there a significant difference between the priors $\mu_{i,0}^{-}$ without the recommendation simulation and $\mu_{i,0}$ with the offline responses? Q2: The demographics and cookies are set to a specific number of 2. If the attribute number is changed (e.g., to 3, 4, etc.), do the observations still hold? Q3: What is the trend of cohort-dependent consent rates on cohort beliefs $\mu$ and weights W? Could you provide a similar illustration as in Fig. 2 for estimation errors? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See weaknesses and questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We've provided some answers to your questions below. The need for simulations: - While the priors do indeed dictate the relative accuracy disparities across cohorts and user consent decisions, the evolution of these errors as a consequence of the recommendation process is not straightforward. We’ve run some additional simulations to emphasize this point. In particular, see the plots in the 1-page attachment studying the resulting CTRs (under the caption “Impact on recommendation quality”) as well as how the cohort belief errors behave for an increased number of cookies/cohorts ("Effect of model size parameters"). - Generally, there does seem to be an impact of consent on recommendation quality; however, the specific nature of the impact is not clear. We were originally planning on including these results in the initial submission; however, given that we lack a clear “observation” of the impact of consent on recommendation quality, we decided to instead present our findings on the impact of consent on estimation accuracy (we can, however, include these in the supplemental material for completeness). We feel that the impact on accuracy is a sufficiently interesting observation to encourage the community to investigate this issue further (in both simulated and real-world settings). Impact of number of cookies/cohorts: - Our additional simulations in the attachment illustrate the behavior of the model under both $c=d=3$ and $c=d=4$. - The primary observations still hold as the number of cookies and cohorts increases. See the section "Cookie and cohort model" in the global response section for a more detailed description (as this query was raised by another reviewer as well). Relationship between consent rates and cohort beliefs and weights: - The impact of consent rates on cohort beliefs can be derived directly from the Bayes update equations. Generally, higher consent rates for a particular cohort will increase the belief (higher confidence) that users are from that cohort if they opt in, with lower consent rates having the opposite effect (a small worked example of this update is sketched after this exchange). Similarly, given that the weights are the expected binomial probability of the current response counts, a higher consent probability will lead to a higher weight (reflective of a higher confidence for the given counts). --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: Thank you for the additional information. I agree with the authors' explanation that, although the simulated scenario may differ from the real-world scenario, it can eliminate the interference of noise and allow us to observe some potentially useful conclusions. On this point, I hold a reserved opinion. However, in other aspects, I still believe that the reasons to reject this paper outweigh the reasons to accept it, and I will keep my score unchanged. Below are the reasons I have considered: Reason to accept: If this work is intended to encourage the community to further investigate these new questions, perhaps accepting this paper is justifiable. Reason to reject: However, as a work that proactively proposes new questions and observes phenomena, in my opinion, the observations made in this paper do not provide sufficient inspiration, which diminishes the paper’s appeal: Although Observations 1 and 2 seem interesting because they show differences, as the authors also acknowledge, they can actually be simply predicted based on the setting of priors without simulation.
Observation 3 is a conclusion truly obtained through simulation. The results related to Observation 3 in the paper (where the differences in Observations 1 and 2 vanish), as well as the impact on recommendation quality outlined in the appendix newly provided by the authors, seem to indicate that whether cookies are considered or not does not have a significant effect on subsequent recommendation success or on the cohort belief. This could undermine the motivation to further investigate this question.
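To make the Bayes update referenced in the rebuttal above concrete, here is a small worked example (the prior and the consent rates $q_\theta$ are illustrative numbers, not values from the paper):

```python
# Two cohorts with a uniform prior and cohort-dependent consent rates.
prior = {"cohort0": 0.5, "cohort1": 0.5}
q = {"cohort0": 0.1, "cohort1": 0.3}      # P(consent | cohort)

def posterior(consented: bool) -> dict:
    like = {c: (q[c] if consented else 1.0 - q[c]) for c in prior}
    z = sum(prior[c] * like[c] for c in prior)
    return {c: prior[c] * like[c] / z for c in prior}

# Opting in shifts belief toward the higher-consent cohort, and vice versa:
print(posterior(True))    # {'cohort0': 0.25, 'cohort1': 0.75}
print(posterior(False))   # {'cohort0': 0.5625, 'cohort1': 0.4375}
```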
Rebuttal 1: Rebuttal: Real-world relevance of the studied model: - We appreciate the reviewers’ concerns regarding the relevance of the model to real-world settings. We’d like to emphasize that the goal of this investigation is to present an intentionally simplified model in order to isolate and understand the effects of specific model aspects (i.e., the impact of different user consent rates) without the additional noise and confounding factors present in real-world settings. - That being said, we were careful when designing our model to ensure that it captured the core aspects of real-world recommender systems, namely confidence weights [Hu et al., 2008; Koren et al., 2009] and continual/online update of factor estimates via online methods [Wang et al., 2017]. These seminal papers form the foundation of many (more complex) models deployed in practice. As such, a primary takeaway of our paper is that there is at least a cause for concern that our findings may also exist in production-level recommender systems that are based on these fundamental algorithms. We will include a segment on this discussion in the revised paper. - We feel that this philosophy (of gaining insights from a simplified version of a model) is a powerful approach to research in complex AI systems, as many negative effects can be hidden in the complexity of real-world models, but still present under the surface. Cookie and cohort model: - A couple of reviewers have raised questions on the choice of representing cookies as static quantities. We agree that in reality cookies contain both static information (e.g., location information) and dynamic information (e.g., user click and browsing behavior). Our decision to model user cookies as only containing static information (as a proxy for user cohort) was an intentional simplification to isolate the effects of cohort-dependent consent rates (i.e., which users are more or less likely to provide their consent) on the recommendation process and investigate the potential fairness issues for specific user groups. In practice, the user profile (as opposed to the actual cookie value) is what drives recommendations; we abstract this by representing cookies as proxies for the user’s core identity (cohorts/demographics). - Regarding the impact of a larger number of cookies/cohorts, we’ve run some additional simulations under two cases: i) $c=d=3$, and ii) $c=d=4$, each under a variety of cohort-dependent consent rates. Please see the attached pdf under the caption “Effect of model size parameters.” As seen from the plots, the primary observation still holds, with a necessary modification: non-consenting users in any cohort that is not the maximal consent-rate cohort experience lower estimation errors for disagreeing (non-consent) than agreeing (consent). We will augment the observations in the paper to include this generalization. In response to the reviewers’ comments, we’ve consolidated the limitations of our approach (to be included as a limitations section of the revised paper): - Our findings on the disparate impact of consent should be interpreted with the understanding that our model necessarily has some limitations. Broadly, our model uses a simplified definition of cookies in which cookies serve as a proxy for the users’ cohorts.
While the reason for this simplicity is to extract insights that depend directly on the user’s cookie consent decisions, extending the definition of a cookie to more realistic settings by including user click/behavioral information would likely generate additional insights. Secondly, to capture the core aspects of recommender systems, our recommendation model is based on foundational recommendation algorithms (namely online + confidence-weighted matrix factorization). While these algorithms form the basis for modern recommender systems, it would be worthwhile to see how the insights extend to more general algorithms. Lastly, we consider a fixed user pool; consideration of a more realistic dynamic user pool could influence the findings. References (for both global rebuttal and individual rebuttals): - [Hu et al., 2008] Yifan Hu, Yehuda Koren, and Chris Volinsky. "Collaborative filtering for implicit feedback datasets." In 2008 Eighth IEEE international conference on data mining, pp. 263-272. IEEE, 2008. - [Ie et al., 2019] Eugene Ie, Chih-wei Hsu, Martin Mladenov, Vihan Jain, Sanmit Narvekar, Jing Wang, Rui Wu, and Craig Boutilier. "Recsim: A configurable simulation platform for recommender systems." arXiv preprint arXiv:1909.04847 (2019). - [Koren et al., 2009] Yehuda Koren, Robert Bell, and Chris Volinsky. "Matrix factorization techniques for recommender systems." Computer 42, no. 8 (2009): 30-37. - [Wang et al., 2017] Huazheng Wang, Qingyun Wu, and Hongning Wang. "Factorization bandits for interactive recommendation." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31, no. 1. 2017. Pdf: /pdf/b0fe46931568d0dd23ef634defe0b31bda951642.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
CP-SLAM: Collaborative Neural Point-based SLAM System
Accept (poster)
Summary: The paper presents a collaborative implicit neural simultaneous localization and mapping (SLAM) system for RGB-D image sequences, which is able to merge sub-maps into a global map and also handle loop closure for a neural mapper for the first time. To facilitate the capability of doing loop closure and sub-map merging, the authors introduce a neural point-based 3D scene representation, where each point maintains a learnable neural feature for scene encoding and is associated with a keyframe. The optimization follows the traditional SLAM graph-optimization fashion. The experimental results on diverse datasets demonstrate the superior performance of the proposed method in both camera tracking and mapping. Strengths: 1. Novel Neural Point-based 3D Scene Representation: A neural point cloud representation for scene encoding in the proposed framework is a significant contribution. This approach allows each point to carry encoded information, improving the system's ability to represent the scene and establish cross-view associations for large-scale tracking and pose refinement. 2. Comprehensive Collaborative Framework: The authors provide a collaborative SLAM system with front-end and back-end modules, addressing essential aspects such as odometry, loop detection, sub-map fusion, and global refinement. This comprehensive approach ensures that the proposed system can handle various challenges typically encountered in SLAM tasks. 3. Distributed-to-Centralized Learning Strategy: The incorporation of a distributed-to-centralized learning strategy is an effective solution for improving consistency and cooperation in collaborative implicit SLAM. 4. Improved Performance: The experimental results on multiple datasets demonstrate the superiority of the proposed method in both camera tracking and mapping. Weaknesses: 1. In the Introduction: This paper emphasizes the significance of targeting collaborative SLAM, considering the substantial value already provided by loop-closure capability in effectively reducing pose drift. However, the evaluation of the paper becomes more challenging due to the necessity for additional comparative studies that can convincingly demonstrate the advantages offered by both approaches. 2. In Section 2: For collaborative mapping, this paper should cite more recent works, including: [1*] Campos, Carlos, et al. "Orb-slam3: An accurate open-source library for visual, visual–inertial, and multimap slam." IEEE Transactions on Robotics 37.6 (2021): 1874-1890. [2*] Tian, Yulun, et al. "Kimera-multi: Robust, distributed, dense metric-semantic slam for multi-robot systems." IEEE Transactions on Robotics 38.4 (2022). Comparative studies against the above works are also expected. 3. In Section 3: (1). Line 101: This paper introduced 4x4 patches to compute the point features. However, it omits details on whether there is a limitation on the number of patches. What if we increase the size of the patches? Will it impact later performance? (2) This paper introduces many symbols without explaining them, which makes the paper really hard to follow. Please revise the whole paper and make the symbols clear to readers. (3) Equation 10: The explanation “Considering the strong non-convexity of the color map, only the geometry part is included in tracking loss” for not using color in the loss is not convincing. The authors are expected to provide ablation studies to prove the claim. 4. In Section 4: More experiments are expected: (1).
More comparative studies on collaborative SLAM are expected. (2). For single-agent cases: The local covisibility map can be formed as well; it would be interesting to only look at cases without a loop but with a local covisibility map. (3). An ablation study on “Distributed-to-Centralized Learning” is needed to prove why it is needed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In the context of this paper, which relies on point features for establishing frame-to-frame correspondence, an important concern arises regarding the maintenance of continuous tracking. Unlike the traditional SLAM pipeline, where feature locations typically reside around corners, this paper selects points without necessarily adhering to this characteristic. Thus, it becomes crucial to address how this approach ensures the continuity of tracking throughout the SLAM process. 2. Collaborative mapping can be done by recent work on pose-free NeRF: [] Bian, Wenjing, et al. "Nope-NeRF: Optimising neural radiance field with no pose prior." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. This paper should provide a comparison to show the advantage of the proposed method. 3. This paper has many contributions, which makes it read more like an engineering paper than a scientific paper. I am not sure whether this paper falls within the NeurIPS scope. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. The writing should be improved; in particular, the equations should be better explained. 2. More experiments and ablation studies should be provided to demonstrate the advantage of the proposed algorithms. 3. The literature review needs to refer to more recent research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and valuable suggestions. Here are our responses to your comments. **1. To demonstrate advantages from both collaboration and loop-closure capability, evaluation in this paper becomes more challenging. (W1)** In CP-SLAM, loop detection and pose graph optimization (PGO) work together to mitigate pose drift, while collaboration mainly helps to improve exploration efficiency but has little effect on pose drift. In Tab.2 and Tab.1, we perform comparative studies in the **'w/ and w/o PGO'** setting and observe consistent improvements across all datasets. Overall, these two experiments consistently demonstrate the effectiveness of our proposed pipeline. **2. This paper should cite and compare with more recent works, such as ORB-SLAM3 and Kimera-multi. (W2)** As far as we know, ORB-SLAM3 is not a collaborative SLAM system. It lacks the capability to process multiple sequences at the same time and is thus unable to overcome the efficiency bottleneck in scene exploration. As suggested, we tested ORB-SLAM3 for a comprehensive comparison. We sequentially connected the multiple sequences from a single multi-agent dataset. We report multi-agent and single-agent results in **Tab.G1** and **Tab.G2** in the global response. In addition, as a visual-inertial method, Kimera-multi takes IMU data as input; thus, we did not consider it a comparable method. **3. In Section 3: (1). Line 101: How is the performance affected by the number of patches? (2) Some symbols are not clear. (3) Equation 10: Authors should perform an ablation study to confirm their point on non-convexity from color loss. (W3)** **(1)** In principle, the number of patches will not affect CP-SLAM in terms of memory usage and accuracy. Regarding memory usage, more patches bring more 3D neural points. However, we apply a filtering strategy to maintain point cloud sparsity, as indicated in **Supp.D.** From the perspective of accuracy, we empirically found that an increased quantity of neural points slightly decreases the tracking and mapping performance due to the increased complexity and redundancy of the neural point field. **(2)** Thank you for your comment; we will check and clarify all symbols to make our paper more readable. **(3)** We conducted an additional ablation study to confirm our viewpoint, as shown in **Fig.3** in the attached **'pdf'** in the global response. **4. (1). More comparisons on collaborative SLAM are expected. (2). For single-agent cases: it would be interesting to take a local covisibility map into account. (3). An ablation study on “Distributed-to-Centralized Learning” is needed to prove its necessity. (W4)** **(1)** We added ORB-SLAM3 and Swarm-SLAM as new baselines for fully comparative studies. The results are shown in **Tab.G1** in the global response; we can observe that our method still performs the best. **(2)** In CP-SLAM, we have used covisible correlations between keyframes. Specifically, we utilize multiple frames, including the current frame, the nearest keyframe, and three other covisible keyframes, for joint mapping, which ensures the consistency of the neural field. However, CP-SLAM, which relies on neural features, is completely different from traditional visual SLAM, which relies on point clouds. In volume rendering-based SLAM, the local covisibility map cannot replace the loop detection module. We present results for single-agent cases with joint mapping but without PGO in Tab.2.
**(3)** In **Supp.C**, we discussed the necessity of the 'Distributed-to-Centralized' learning strategy. We attempted to perform centralized learning from scratch, but its results were remarkably poor. **5. It is crucial to address how CP-SLAM ensures the continuity of tracking throughout the SLAM process. (Q1)** CP-SLAM is a collaborative NeRF-based SLAM system, similar to NICE-SLAM, relying on volume rendering to optimize poses and the neural field. Neural points serve solely as spatial carriers of neural features in CP-SLAM; they are different from traditional keypoints and descriptors and are instead similar to feature grid nodes in NICE-SLAM. In CP-SLAM, we do not perform inter-frame feature matching, so there is no discontinuity in tracking and mapping due to feature locations. We provide a detailed video in our supplementary material, which dynamically shows NeRF-based tracking and mapping. **6. This paper should consider taking Nope-NeRF as a comparison. (Q2)** Nope-NeRF was not public before our paper was submitted. Furthermore, it is worth noting that Nope-NeRF is oriented more towards Structure-from-Motion (SfM) than SLAM. It has a different background setting from our CP-SLAM; thus, we did not consider it a comparable method. **7. Contributions of this paper are more biased towards engineering. (Q3)** We aim to build a complete collaborative NeRF-based SLAM system, focusing more on system-level development. Such work is missing in the field of NeRF-based SLAM. CP-SLAM is the first collaborative neural implicit SLAM system, consisting of neural point-based odometry, loop detection, sub-map fusion, and global pose graph optimization. In collaborative NeRF-based SLAM, in addition to the lack of an appropriate neural field representation, it is not feasible to simply stack and cascade traditional modules in a plug-and-play manner. Consequently, we have introduced technical enhancements to both the neural field representation and the pipeline of CP-SLAM, such as innovatively proposing a keyframe-centric neural point field and a novel learning framework to maintain global consistency. Overall, we have made significant scientific contributions to the NeRF-based SLAM field, as Reviewer Zpdr pointed out. Similar to Droid-SLAM [Teed et al., NeurIPS 2021], we are committed to system-level development and to introducing new learning-based neural implicit techniques into SLAM. --- Rebuttal Comment 1.1: Title: 2nd round comments Comment: Thank you to the authors for their responses. The authors largely addressed my last comments. But there are still several more questions: 1. After reading the responses, the reviewer still believes that the authors should separate it into two papers: 'Point-SLAM' and 'Collaborative Mapping.' Alternatively, the authors have to highlight the motivation in the Abstract and Introduction. Otherwise, it is really hard to understand why the authors want to include Collaborative Mapping in this paper. 2. The authors answered, 'From the perspective of accuracy, we empirically found that an increased quantity of neural points slightly decreases the tracking and mapping performance due to the increased complexity and redundancy of the neural point field.' This contradicts the conventional feature-based tracking SLAM approach. This will make the proposed method difficult to generalize and use. I hope the authors can address my concerns, and I would like to increase my rating.
--- Reply to Comment 1.1.1: Title: Response to Reviewer jr2H - 2nd round comments Comment: Dear Reviewer jr2H: Thank you for reviewing our last responses. The following are our responses to the further comments. 1. **Motivation of Our Paper and Clarification on Collaborative Mapping and Point-based NeRF SLAM** We apologize for the misunderstanding regarding the initial question. Here, we want to explain our motivation and clarify our collaborative mapping and point-based NeRF SLAM. In this paper, we aim to build a collaborative NeRF-based SLAM system instead of a new 'Point-SLAM' system. We find that the neural field representation used in existing works, like the feature grid in NICE-SLAM, cannot support the fusion of multiple neural fields that is essential for multi-agent collaboration, since it is difficult to fuse the 'collision areas' among different feature grids, as shown in Fig. 2 in our main paper. Thus, motivated by Point-NeRF [Xu et al. 2022], we designed a novel keyframe-centric neural point-based representation for our collaborative NeRF-based SLAM, in which each point, associated with a certain keyframe, maintains a learnable neural feature for scene encoding. This new representation enables the seamless fusion of different neural sub-maps for the multi-agent mode, and it also naturally facilitates loop closure for the single-agent mode. Consequently, we conducted experiments under both multi-agent and single-agent scenarios. The former demonstrates the effectiveness of our full collaborative SLAM system, while the latter shows the advantage of our new representation in monocular NeRF SLAM. We will make this clearer in our revised version. 2. **The impact of the number of neural points** As mentioned in our response to **'Q1'**, in CP-SLAM, neural points are utilized as spatial carriers for neural features, which are subsequently decoded by MLPs to generate color and depth through volume rendering. Consequently, when the number of neural points is small, they are not enough to encode detailed scene geometry and color due to inadequate scene representation. Conversely, an excessive number of neural points significantly extends the learning time to converge (i.e., increases the number of optimization steps) for both neural point features and MLPs. This means that, within limited optimization steps, it will decrease the tracking and mapping performance due to the increased complexity and redundancy of the neural point field. Bearing these insights in mind, we opted for 4\*4 patches to produce initial dense neural points, which yields a sufficient number of neural points to represent the complex scene. Furthermore, together with our proposed point filtering strategy (as discussed in **Sec.D** of our supplementary material), it achieves a good trade-off between efficiency and accuracy. It's worth noting that we consistently employed 4\*4 patches for all experiments presented in the paper as well as for the new TUM RGB-D experiment, which demonstrates the generalization of our system.
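To illustrate the initialization described above, here is a minimal sketch (not the released CP-SLAM code; sampling one point per 4x4 patch, the feature dimension, and the initialization scale are our assumptions for exposition) of back-projecting a depth map into neural points that carry learnable features:

```python
import numpy as np

def init_neural_points(depth, K, c2w, patch=4, feat_dim=32, seed=0):
    """Back-project one pixel per patch x patch block of a depth map into
    world space and attach a learnable feature vector to each point."""
    rng = np.random.default_rng(seed)
    H, W = depth.shape
    us, vs = np.meshgrid(np.arange(patch // 2, W, patch),
                         np.arange(patch // 2, H, patch))
    us, vs = us.ravel(), vs.ravel()
    z = depth[vs, us]
    valid = z > 0                              # drop invalid depth readings
    us, vs, z = us[valid], vs[valid], z[valid]
    x = (us - K[0, 2]) * z / K[0, 0]           # pixel -> camera coordinates
    y = (vs - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1)
    pts_world = (c2w @ pts_cam.T).T[:, :3]     # camera -> world via 4x4 pose
    feats = rng.normal(scale=0.01, size=(len(pts_world), feat_dim))
    return pts_world, feats                    # feats are optimized by the rendering loss
```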
Summary: This paper proposed a collaborative RGB-D neural implicit SLAM method named CP-SLAM. CP-SLAM is based on a neural point representation similar to Point-NeRF, which is naturally suited to collaborative SLAM and loop closing. Based on this neural point representation, CP-SLAM is able to perform full SLAM with a single or multiple agents: tracking, mapping, loop detection, loop closing, sub-map merging, etc. Experiments show that CP-SLAM achieves SOTA results compared with previous collaborative SLAM and neural implicit SLAM methods. Strengths: 1. Using a neural point representation in the context of neural implicit SLAM is novel, and the motivation for using it is also clear, i.e., it is more suitable for loop closing and sub-map merging. 2. A full SLAM system that is capable of loop closing, pose graph optimisation, and sub-map merging. This is the first time I have seen a full SLAM system in the context of neural implicit SLAM. 3. The system seems well engineered, putting many components together so that they work effectively. 4. Competitive tracking and mapping results compared with previous collaborative and neural SLAM methods. Weaknesses: The motivation is clear and the method is very well engineered, but from a novelty perspective, all the components in CP-SLAM are already there with some incremental modification: e.g., the scene representation is from Point-NeRF, loop detection is from an off-the-shelf NetVLAD model, etc. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. From Sec. 3.1 it seems that at each frame a new set of neural points is created densely via back-projection, and there doesn't seem to be any point merging scheme. Then how is CP-SLAM able to achieve a 0.62MB parameter size in Tab. 4? 2. Also from Sec. 3.1 it seems that the rendering loss will only optimize the neural features stored in the points but will never affect the locations of the points, which means that once created, the location of a point will never change unless a loop is detected and a pose graph optimisation is performed. If this is the case, then the entire mapping process would be equivalent to simply back-projecting the depth images? 3. How did you convert your point-based map to a mesh for evaluation in Tab. 3? 4. Following my question in 2: if CP-SLAM's map points are obtained explicitly from depth measurements, then it is supposed to have better accuracy than NICE-SLAM and Vox-Fusion (the same reason that TSDF-Fusion has better accuracy than neural SLAM methods). Thus, better accuracy numbers in Tab. 3 are not enough to prove that CP-SLAM has "more powerful geometric reconstruction capability". Also it's not clear why the authors didn't report completion. After rebuttal: the authors have addressed all my questions and concerns. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive comments and recognition of our strengths. Our responses to your concerns are as follows: **1. Some components in CP-SLAM are incrementally modified based on existing works. (W1)** We aim to build a complete collaborative NeRF-based SLAM system, focusing more on system-level development. Such work is missing in the field of NeRF-based SLAM. We have introduced technical enhancements to both the neural field representation and the system-level pipeline, such as innovatively proposing a keyframe-centric neural point field and a novel learning framework to maintain global consistency. Although the technical novelties are relatively limited, we have made significant contributions to the pipeline, as **Reviewer Zpdr** pointed out. **2. Is there a point cloud fusion strategy in CP-SLAM to reduce memory usage? (Q1)** In Line 163, we briefly mentioned our point cloud filtering strategy, namely the sparsification strategy, and elaborated on it in **Supp.D**. Specifically, we adopt a grid-based filtering strategy; that is, we perform non-maximum suppression based on the distance from a neural point to the center of a $\rho^3$ cube. **3. Is the entire mapping process in CP-SLAM equivalent to simply back-projecting the depth images? (Q2)** CP-SLAM is completely different from traditional visual SLAM methods. As a purely NeRF-based SLAM system, its 3D points serve solely as spatial carriers of neural feature embeddings for both photometric and geometric information. In CP-SLAM, the mapping process refers to the process of training neural features to better represent the appearance and geometry of the scene and improve the rendered RGB and depth images, rather than back-projecting the depth image. In other words, although the points' locations do not change, the encoded scene geometry is changed and improved with the optimized neural features. We dynamically illustrate the above mapping process in our attached supplementary video. **4. How did you convert your point-based map to mesh for evaluation in Tab.3? (Q3)** In our implementation, we invoked the built-in **'TSDF-Fusion'** function in the Open3D library, supplying it with estimated poses and depth maps from neural volume rendering to generate the final mesh. **5. If 3D points in CP-SLAM are obtained from depth measurements, the accuracy improvement in Tab. 3 is not enough. Why is the completion not reported for the reconstruction? (Q4)** Although we utilize ground truth depth maps to obtain neural point locations, as mentioned above, 3D points serve solely as spatial carriers of neural feature embeddings in CP-SLAM. What we input into **'TSDF-Fusion'** is the rendered depth maps from the optimized neural field, not the measurements. In this setting, we achieved better mapping performance than NICE-SLAM and Vox-Fusion. In Lines 274 and 275, we mentioned the reason why we did not evaluate the completion metric. In our loop datasets, the scenes are not completely scanned along our trajectories, which leads to holes in the mesh reconstruction and makes this metric less objective. Therefore, we use 'Depth L1' and 'Acc.' as evaluation metrics for the reconstruction. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response, which has answered most of my questions. I still think this paper proposed a novel and interesting method, and I will keep my initial positive rating. However, regarding Q2, 3 and 4, I still have some reservations.
Through reading the authors' answers, I now understand that the neural point-based representation used in this paper is more like a keyframe-centric representation which relies on fusing rendered depth to obtain the final map. While this representation has advantages over feature grids and MLPs, it by definition loses the hole-filling ability in unobserved regions that are outside of the cameras' field-of-view. But one big advantage of previous neural-SLAM methods like iMAP and NICE-SLAM over traditional methods is that they can achieve a certain level of hole-filling ability even in the regions outside of the cameras' field-of-view, like the hole in ScanNet scene scene0000_00 and several Replica scenes. If this is the case, then the authors need to acknowledge this as a limitation compared to previous methods. Moreover, this also explains why completion is not a good metric here. But it still cannot justify removing that metric completely from the table. The authors could simply follow what has been done in Neural-RGBD [1] and GO-Surf [2] by culling out regions that are outside of any camera frustums before evaluation. In more recent works, ESLAM [3] applied the same culling strategy and Co-SLAM [4] performed a more detailed discussion of the different strategies used in all previous methods. As such, I think including a detailed discussion and comparison of the neural point-based representation vs. previous neural SLAM methods should definitely be helpful. [1] Neural RGB-D Surface Reconstruction, CVPR 2022 [2] GO-Surf: Neural Feature Grid Optimization for Fast, High-Fidelity RGB-D Surface Reconstruction, 3DV 2022 [3] ESLAM: Efficient Dense SLAM System Based on Hybrid Representation of Signed Distance Fields, CVPR 2023 [4] Co-SLAM: Joint Coordinate and Sparse Parametric Encodings for Neural Real-Time SLAM, CVPR 2023 --- Reply to Comment 1.1.1: Title: Response to Reviewer 46C6 - Hole-filling Ability and Completion Metrics Comment: Dear Reviewer 46C6: Thank you for your constructive review and positive comments on our work. Here are our responses to your latest comments. **1. The hole-filling ability of the neural point-based representation.** We agree that our neural point-based representation has a weaker hole-filling ability in unobserved regions than existing methods like NICE-SLAM (hierarchical regular feature grids) and iMAP (coordinate encoding-based). This arises from the fact that neural points are distributed around the surfaces of observed objects, encoding surrounding scene information within a fixed-radius sphere, which covers smaller regions than iMAP or the feature grid used in NICE-SLAM. We will discuss this limitation in the revised version. **2. Completion metric evaluation with the culling strategy.** Thank you for your suggestion. As suggested, we have introduced the **'culling strategy (ESLAM)'** into the completion metric evaluation, and the results are listed in the following tables. As shown, our method achieves state-of-the-art performance in terms of completion, benefiting from high accuracy, while performing on par with the SOTA method (Vox-Fusion) in terms of completion ratio (refer to point 1 above), which validates the effectiveness of our method for single-agent SLAM. | Method |Office0-loop | Office3-loop | Room0-loop | Room1-loop | | :--------: | :----------: | :----------: | :--------: | :--------: | | NICE-SLAM |1.69 | 2.22 | 1.74 | 1.73 | | Vox-Fusion | 1.11 | 1.51 | 1.32 | 1.06 | | Ours |**1.04** | **1.47** | **1.21** | **1.01** | **Table 
A: Completion [cm] ($\downarrow$) Metric.** The culling strategy is adopted in the completion evaluation. We can observe that our method achieves SOTA performance. | Method |Office0-loop | Office3-loop | Room0-loop | Room1-loop | | :--------: |:----------: | :----------: | :--------: | :--------: | | NICE-SLAM | 97.22 | 94.82 | 98.14 | 97.98 | | Vox-Fusion |**99.69** | **98.87** | **99.35** | **99.84** | | Ours |99.45 | 98.34 | 99.185 | 99.70 | **Table B: Completion Ratio [<5cm, %] ($\uparrow$) Metric.** The culling strategy is adopted in the completion ratio evaluation. We can observe that our method performs better than NICE-SLAM and on par with Vox-Fusion.
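For readers unfamiliar with these reconstruction metrics, here is a minimal sketch of how accuracy, completion, and completion ratio are typically computed from point samples of the reconstructed and ground-truth meshes (after the frustum culling discussed above has been applied to the reconstruction). This follows the standard definitions used in the neural-SLAM literature; the helper name and SciPy-based implementation are our illustration, not the authors' evaluation code.

```python
import numpy as np
from scipy.spatial import cKDTree

def reconstruction_metrics(rec_pts, gt_pts, thresh=0.05):
    """rec_pts, gt_pts: (N, 3) points sampled from the reconstructed and
    ground-truth meshes, in meters; thresh=0.05 is the 5 cm threshold
    used for the completion ratio in the tables above."""
    d_rec_to_gt = cKDTree(gt_pts).query(rec_pts)[0]   # accuracy direction
    d_gt_to_rec = cKDTree(rec_pts).query(gt_pts)[0]   # completion direction
    return {
        "acc_cm": 100 * d_rec_to_gt.mean(),
        "comp_cm": 100 * d_gt_to_rec.mean(),
        "comp_ratio_pct": 100 * (d_gt_to_rec < thresh).mean(),
    }
```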
Summary: The authors propose a novel collaborative RGB-D SLAM pipeline, where the map is represented by an implicit field based on Point-NeRF whose points are transformed together with their keyframes, allowing pose graph optimization. They use loop closure detection based on learned features (NetVLAD), also for sub-map alignment. They propose a central, federated-learning-based averaging and refinement of the distributed MLPs after submap alignment. Strengths: The use of point based features nicely connects the generalization and compression of implicit mapping with the ability to do efficient map refinement. The results on tracking and reconstruction show a large improvement compared to existing works. Weaknesses: The explanations are sometimes a bit vague. E.g., in line 101, what does it mean to incrementally project a pixel to a 3D location? The initialization of the system is not described well. The authors describe the averaging and finetuning of the MLP weights but do not discuss if and how the point features are shared between agents. Evaluation was only done on synthetic data. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Why is there another local finetuning and weight averaging done after the central averaging and finetuning? How is the finetuning done after pose graph optimization? Is it the same as for mapping, e.g. Eq. 9? Would it not be possible to run ORB-SLAM3 (https://arxiv.org/pdf/2007.11898.pdf) in a multi-map setting using depth from RGB-D as a baseline? Why not evaluate on real data? How is the TSDF fusion done to generate the final mesh? What is the average baseline between frames in the dataset? How long are the trajectories? After rebuttal: The authors addressed all of my questions. Thank you. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: One of the main limitations, framerate/viewpoint change, is addressed by the authors; an initial experiment to showcase the effect would have been nice though. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments. Here are our responses to your concerns. **1. What does it mean to incrementally project a pixel to a 3D location in line 101? (W1)** In the SLAM task, both the map and the trajectory are incrementally constructed over time. In Line 101, when a new mapping frame arrives, we back-project a 2D pixel location $\textbf{p}$ to a 3D point $\textbf{P}$ based on the pose, intrinsics, and the depth map, as shown in the following formula: $$P = D \cdot K^{-1} \cdot p$$ Here, $D, K$ represent the ground truth depth and intrinsics, and $p$ is the homogeneous pixel coordinate. **2. The initialization of the system is not described well. (W2)** Initialization in NeRF-based SLAM refers to utilizing the first frame to construct an initial neural field, similar to the mapping process but with more optimization steps. Specifically, in CP-SLAM, we perform the initialization with 3000$\sim$5000 steps. This practice stabilizes subsequent tracking and mapping with the help of a high-quality initial neural field. **3. The authors did not discuss if and how the point features are shared between agents. (W3)** In our implementation, we maintain a feature list that is updated in real time and corresponds one-to-one to the 3D points, and we synchronize it to each agent during data exchange. **4. Can CP-SLAM be generalized to real-world scenes? (W4+Q4)** We have experimented on the TUM-RGBD real-world dataset, and the results are shown in **Tab.G3** in the global response. Following recently proposed NeRF-based SLAM works, we chose three scenes: fr1-desk, fr2-xyz, and fr3-office (with loop closure) in TUM-RGBD. The results in **Tab.G3** illustrate that CP-SLAM has also achieved state-of-the-art performance in the real-world setting, and the loop detection and pose graph optimization are equally effective for the real-world scene. It is worth noting that we compared against the most recently proposed SOTA methods Co-SLAM [Wang et al. 2023] and ESLAM [Johari et al. 2023] in the TUM-RGBD experiment, and ESLAM encountered an out-of-memory (OOM) issue on fr2-xyz. **5. Why are local finetuning and weight averaging done after the central averaging and finetuning? (Q1)** Once loop closure is detected, we conduct central averaging and finetuning to unify the different models into a common domain. Then the finetuned model is sent to each agent for continuous local tracking and mapping. Once a new keyframe arrives, we conduct a local update/finetuning given the new observation, and the finetuned local model is sent to the center for central averaging as a new global model for subsequent tracking. This approach only requires lightweight data exchange, which further improves real-time performance. **6. How is the finetuning done after pose graph optimization? Is it the same as Eq. 9? (Q2)** Finetuning after pose graph optimization is treated as a larger-scale joint mapping. Specifically, we select a certain number of keyframes from the keyframe pool, which contains keyframes from all agents, for a few-step mapping until the entire pool is traversed. It uses the same principle as **Eq.9**. **7. Would it not be possible to run ORB-SLAM3 as a baseline? (Q3)** As far as we know, ORB-SLAM3 [Campos et al. 2021] is not a collaborative SLAM system. It lacks the capability to process multiple sequences at the same time and is thus unable to overcome the efficiency bottleneck in scene exploration. 
As suggested, we tested ORB-SLAM3 in **'Atlas'** mode, and we find that it only comes into play when ORB-SLAM3 fails: the disconnected map constructed prior to the failure is saved in the atlas as a candidate sub-map, and a new sub-map is then created for subsequent mapping and tracking. If a loop is detected between this new sub-map and one in the atlas, ORB-SLAM3 performs sub-map fusion. For a comprehensive comparison, we sequentially connected the multiple parts of each multi-agent dataset. We report multi-agent and single-agent tracking results in **Tab.G1** and **Tab.G2** in the global response. It is clear that CP-SLAM still achieves the best performance in most scenes. **8. How is the TSDF fusion done to generate the final mesh? (Q5)** In our implementation, we invoked the built-in **'TSDF-Fusion'** function in the Open3D library, supplying it with estimated poses and rendered depth to generate the final mesh. **9. Average baselines and the lengths of trajectories in all datasets. (Q6+Q7)** Average baselines and the lengths of trajectories in all datasets are shown in the tables below: | | Room-0-loop | Room-1-loop | Office-0-loop | Office-3-loop | | :-----------------: | :---------: | :---------: | :-----------: | :-----------: | | Length[$m$] | 7.69 | 7.82 | 7.69 | 12.95 | | Baseline(mean)[$m$] | 0.0039 | 0.004 | 0.0039 | 0.0066 | **Table A. Average baselines and the lengths of trajectories in single-agent datasets.** | | Part | Apartment-1 | Apartment-2 | Apartment-0 | Office-0-C | | :-----------------: | :----: | :---------: | :---------: | :---------: | :--------: | | Length[$m$] | Part 1 | 18.04 | 12.12 | 8.71 | 5.66 | | | Part 2 | 18.61 | 11.24 | 13.34 | 5.66 | | Baseline(mean)[$m$] | Part 1 | 0.0046 | 0.0048 | 0.0053 | 0.0037 | | | Part 2 | 0.0063 | 0.0040 | 0.0035 | 0.0037 | **Table B. Average baselines and the lengths of trajectories in two-agent datasets.**
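To make the back-projection formula from point 1 of this rebuttal and the Open3D TSDF meshing from point 8 concrete, here is a minimal sketch. The voxel size, truncation distance, and function names are placeholder assumptions of ours; only the Open3D entry points (`ScalableTSDFVolume`, `RGBDImage.create_from_color_and_depth`) are the library's actual API.

```python
import numpy as np
import open3d as o3d

def backproject(depth, K, c2w):
    """Lift every pixel to a world-space 3D point, P = D * K^{-1} * p,
    followed by a rigid transform with the camera-to-world pose c2w."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)  # homogeneous pixels
    cam = (np.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)        # camera frame
    return (c2w[:3, :3] @ cam.T).T + c2w[:3, 3]                      # world frame

def fuse_tsdf(frames, intrinsic, voxel=0.01):
    """frames: iterable of (color_u8, depth_f32, c2w_4x4) per keyframe, using
    rendered depth and estimated poses; intrinsic: o3d.camera.PinholeCameraIntrinsic."""
    vol = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=voxel, sdf_trunc=4 * voxel,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)
    for color, depth, c2w in frames:
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            o3d.geometry.Image(color), o3d.geometry.Image(depth),
            depth_scale=1.0, convert_rgb_to_intensity=False)
        vol.integrate(rgbd, intrinsic, np.linalg.inv(c2w))  # integrate expects world-to-camera
    return vol.extract_triangle_mesh()
```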
Summary: The paper introduces a neural-implicit-representation-based collaborative SLAM technique. The authors introduce a learnable feature encoding for each point in the 3D scene and, through neural rendering, subsequently use it for depth and color map estimation. They introduce a centralized learning strategy to share MLPs and learn better weights. They also introduce an LM-algorithm-based global pose optimization and a keyframe-centric point cloud adjustment. Strengths: The federated learning strategy improves the neural rendering performance. Map refinement is based on the keyframe-centric neural point field. By encoding scene geometry and appearance with 3D points, when camera poses are optimized, these features are easily adjusted as well. Weaknesses: 1. The paper only evaluates on one synthetic dataset. It would be nice to see evaluations on more datasets. 2. Better and more robust neural rendering techniques could have been used to improve the radiance, depth and color rendering. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The paper is not clear on whether it includes the neural field generation time in the tracking/mapping time while comparing with NICE-SLAM. 2. The paper lacks citations. For example, in the mapping and tracking section, a similar technique is used in NICE-SLAM and I see no mention of it. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. The paper does produce SOTA results on some of the scenes of the chosen dataset. No explanation is given as to why it doesn't produce better results on the rest. 2. The paper is not novel enough. It rather combines the best of many other works and produces good results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments! Here are our responses to your concerns. **1. It would be nice to see evaluations on more datasets. (W1)** We extended our experiments to the TUM-RGBD real-world dataset, and the results are shown in **Tab.G3** in the global response. Following recently proposed NeRF-based SLAM works, we chose three scenes: fr1-desk, fr2-xyz, and fr3-office (with loop closure) in TUM-RGBD. The results in **Tab.G3** illustrate that CP-SLAM has also achieved state-of-the-art performance in the real-world setting, and the loop detection and pose graph optimization are equally effective for the real-world scene. It is worth noting that we compared against the most recently proposed SOTA methods Co-SLAM [Wang et al. 2023] and ESLAM [Johari et al. 2023] in the TUM-RGBD experiment, and ESLAM encountered an out-of-memory (OOM) issue on fr2-xyz. **2. More advanced neural rendering techniques can be used to improve the rendering quality. (W2)** Several innovative and robust neural rendering techniques have been proposed subsequent to our submission. Nevertheless, given its nature as a collaborative NeRF-based SLAM system, we focus more on the overall performance of tracking, mapping, and collaborative cooperation than on rendering quality. However, it would be interesting to explore more robust rendering techniques in the future. **3. It is not clear whether the time evaluation includes the neural field generation time. (Q1)** Following previous works, such as NICE-SLAM and Vox-Fusion, we evaluated the efficiency of mapping and tracking in Tab.4. All results in Tab.4 solely measure single-frame optimization time in tracking and mapping, excluding the neural field generation time (neural point back-projection and initial neural feature inference from the CNN). **4. Some citations are missing in the paper. (Q2)** In Section 3.1, we described the volume rendering, depth loss, and color loss used in tracking and mapping. We presumed these concepts to be common knowledge in the NeRF-based SLAM field, so we omitted the relevant citations. Thanks to your comment, we have checked and added them in Section 3.1. **5. Novelty is limited. (L1)** We aim to build a complete collaborative NeRF-based SLAM system, focusing more on system-level development. Such work is missing in the field of NeRF-based SLAM. We have introduced technical enhancements to both the neural field representation and the system-level pipeline, such as innovatively proposing a keyframe-centric neural point field and a novel learning framework to maintain global consistency. Although the technical novelties are relatively limited, we have made significant contributions to the pipeline, as **Reviewer Zpdr** pointed out. --- Rebuttal Comment 1.1: Title: More evaluations and limited novelty Comment: The authors have included more datasets and experiments which achieve SOTA performance. The authors have also corrected the issues with the paper writing. Though the authors have developed and achieved a good pipeline for the whole collaborative NeRF-based SLAM system, I think the novelty is still a bit limited. --- Reply to Comment 1.1.1: Title: Further Response to Reviewer q5H1 Comment: Dear Reviewer q5H1: Thank you for your further feedback and for acknowledging our contribution to developing a good pipeline for the collaborative NeRF-based SLAM system. 
Our work may lean more towards system-level technical innovation than theoretical novelty, but we believe it offers a valuable contribution to the community by extending NeRF-SLAM into collaborative scenarios with novel techniques, including the keyframe-centric neural point representation, the distributed-to-centralized learning framework, and the flexible fusion strategy among neural sub-maps.
Rebuttal 1: Rebuttal: We thank all the reviewers for their valuable comments and appreciate the positive comments like "the contributions to the pipeline are significant" **(Reviewer Zpdr)**, "CP-SLAM introduces a novel neural point-based representation, enabling dense collaborative neural SLAM." **(Reviewer 1EKn)**, "The federated learning strategy improves the neural rendering performance." **(Reviewer q5H1)**, "The use of point based features nicely connects implicit mapping with the ability to do efficient map refinement" **(Reviewer zwcP)**, "This is the first time I saw a full SLAM system in the context of neural implicit SLAM." **(Reviewer 46C6)**, and "Improved Performance in both camera tracking and mapping." **(Reviewer jr2H)**. In this global response, we address the common concerns regarding our experiments and will separately address the specific comments raised. - **Comparison with additional traditional methods:** In the multi-agent experiment, we added two traditional baselines, ORB-SLAM3 [Campos et al. 2021] and Swarm-SLAM [Lajoie et al. 2023], as suggested, for a more comprehensive comparison. For the single-agent experiment, we compared with ORB-SLAM3. The results are listed in **Table G1** and **G2**. We can observe that our method still performs the best compared to these methods. It is worth noting that ORB-SLAM3 is not a collaborative SLAM system. It lacks the capability to process multiple sequences at the same time and is thus unable to overcome the efficiency bottleneck in scene exploration. - **Real-world experiment:** We have experimented on the TUM-RGBD real-world dataset. From **Table G3**, we can see that our CP-SLAM has also achieved state-of-the-art performance in the real-world setting, and the loop detection and pose graph optimization are effective for the real-world scene. The most recently proposed SOTA methods, i.e., Co-SLAM [Wang et al. 2023] and ESLAM [Johari et al. 2023], are evaluated for comparison in the TUM-RGBD experiment. 
| Method | Part | Apartment-1 | Apartment-2 | Apartment-0 | Office-0-C(SingleRoom) | | :--------: | :---------: | :----------------: | :----------------: | :----------------: | :--------------------: | | CCM-SLAM | **Part1** | 2.12/1.94/1.74 | **0.51/0.45/0.40** | -/-/- | 9.84/8.23/6.41 | | ORB-SLAM3 | | 4.93/4.65/5.01 | 1.35/1.05/0.65 | 0.67/0.58/0.47 | 0.66/0.62/0.62 | | Swarm-SLAM | | 4.62/4.17/3.90 | 2.69/2.48/2.34 | 1.61/1.33/1.09 | 1.07/0.96/0.98 | | Ours(w/o) | | 1.15/0.99/0.88 | 1.45/1.34/1.36 | 0.70/0.48/**0.27** | 0.71/0.62/0.67 | | Ours(w/) | | **1.11/0.95/0.81** | 1.41/1.30/1.36 | **0.62/0.47**/0.30 | **0.50/0.46/0.55** | | CCM-SLAM | **Part2** | 9.31/6.36/5.57 | **0.48/0.43/0.38** | -/-/- | 0.76/**0.36/0.16** | | ORB-SLAM3 | | 4.93/4.04/3.80 | 1.36/1.24/1.11 | 1.46/1.11/0.79 | **0.54**/0.49/0.47 | | Swarm-SLAM | | 6.50/5.27/4.39 | 8.53/7.59/7.10 | 1.98/1.48/0.94 | 1.76/1.55/1.83 | | Ours(w/o) | | 2.12/2.05/2.23 | 2.54/2.45/2.60 | 1.61/1.55/1.70 | 1.02/1.03/0.99 | | Ours(w/) | | **1.72/1.61/1.46** | 2.41/2.33/2.44 | **1.28/1.17/1.37** | 0.79/0.74/0.70 | | CCM-SLAM | **Average** | 5.71/4.15/3.66 | **0.49/0.44/0.39** | -/-/- | 5.30/4.29/3.29 | | ORB-SLAM3 | | 4.93/4.35/4.41 | 1.36/1.15/0.88 | 1.07/0.85/**0.63** | **0.60/0.56/0.55** | | Swarm-SLAM | | 5.56/4.72/4.15 | 5.61/5.04/4.72 | 1.80/1.41/1.02 | 1.42/1.26/1.41 | | Ours(w/o) | | 1.64/1.52/1.56 | 2.00/1.90/1.98 | 1.16/1.02/0.99 | 0.86/0.81/0.83 | | Ours(w/) | | **1.42/1.28/1.14** | 1.91/1.82/1.90 | **0.95/0.82**/0.84 | 0.65/0.60/0.63 | **Table G1: Two-agent Tracking Performance. ATE RMSE/Mean/Median [cm] are reported.** | Method | Room-0-loop | Room-1-loop | Office-0-loop | Office-3-loop | Average | | :---------: | :----------------: | :----------------: | :----------------: | :----------------: | :----------------: | | NICE-SLAM | 1.27/1.15/1.09 | 1.74/1.61/1.66 | 2.27/1.91/1.82 | 3.19/2.77/2.28 | 2.12/1.86/1.71 | | Vox-Fusion | 0.82/0.77/0.78 | 1.35/1.30/1.25 | 0.99/0.94/0.95 | 0.82/0.74/0.73 | 0.99/0.94/0.93 | | ORB-SLAM3 | 0.54/0.52/0.53 | **0.21/0.19/0.19** | 0.58/0.51/0.52 | 0.89/0.80/0.84 | 0.56/0.51/0.52 | | Ours(w$/$o) | 0.61/0.56/0.54 | 0.51/0.48/0.52 | 0.67/0.63/0.67 | 0.38/0.32/**0.27** | 0.54/0.50/0.50 | | Ours(w$/$) | **0.48/0.44/0.43** | 0.44/0.40/0.46 | **0.56/0.53/0.56** | **0.37/0.31/0.27** | **0.46/0.42/0.43** | **Table G2: Single-agent Tracking Performance. ATE RMSE/Mean/Median [cm] are reported.** | Method | fr1-desk (w/o loop) | fr2-xyz (w/o loop) | fr3-office (w/ loop) | Average | | :-------: | :-----------------: | :----------------: | :------------------: | :----------------: | | Co-SLAM | 7.10/6.83/6.79 | 4.05/3.76/3.47 | 5.58/5.06/4.57 | 5.58/5.22/4.95 | | ESLAM | **6.81/6.56/6.88** | Fail/Fail/Fail | 4.23/3.91/3.73 | - | | Ours | 7.84/7.34/7.12 | **3.93/3.50/3.29** | **3.84/3.47/3.39** | **5.20/4.77/4.60** | **Table G3: Single-agent Tracking Performance on the TUM-RGBD dataset.** In the real-world experiment, we can find that CP-SLAM still performs the best. Besides, ESLAM fails in the fr2-xyz because of OOM (out of memory). '-' indicates that metrics cannot be evaluated due to ESLAM failures. Pdf: /pdf/87d0d821993322aba3a8cb9aab8472a531dd72fe.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper presents CP-SLAM, a collaborative neural implicit SLAM system developed by the authors. CP-SLAM incorporates neural point-based odometry, loop detection, sub-map fusion, and global refinement. Additionally, the authors propose a novel neural point 3D scene representation with keyframes to enhance map fusion and adjustment. They introduce innovative learning and optimization frameworks to ensure accurate and consistent 3D mapping for cooperative localization and mapping. The performance of CP-SLAM is evaluated on various indoor RGB-D sequences, showcasing its superior mapping and camera tracking capabilities. The contributions made by the authors significantly advance the field of SLAM and offer state-of-the-art solutions. Strengths: 1. CP-SLAM introduces a novel neural point-based representation, enabling dense collaborative neural SLAM. 2. CP-SLAM retains the front-end and back-end modules like traditional SLAM systems, providing a comprehensive pipeline. 3. CP-SLAM outperforms state-of-the-art methods in both localization and reconstruction. The CP-SLAM system supports both single-agent and multi-agent modes, allowing for versatile evaluation. 4. The system is evaluated using datasets based on the Replica scenes, providing a realistic and representative testing environment. CP-SLAM is compared against recent neural implicit RGB-D SLAM methods in single-agent experiments, showcasing its performance in loop closure. 5. In the two-agent experiments, CP-SLAM is compared with a traditional method, highlighting its collaborative capabilities. 6. The ablation studies are conducted to demonstrate the importance of different modules in the proposed system. 7. The implementation details, such as GPU specifications and parameter settings, are provided, ensuring reproducibility. Weaknesses: 1. CP-SLAM requires significant GPU resources to handle multiple image sequences. The relative pose computation in loop closure relies on existing rendering-based optimization, which may be inaccurate for large viewpoint changes and lead to map fusion drifting. 2. The comparison with traditional methods in the two-agent experiments may not fully represent the performance of other collaborative neural SLAM approaches. 3. CP-SLAM's performance is limited by its GPU resource demand, making it less suitable for resource-constrained devices or environments. 4. Currently, a large-scale experiment is not conducted, but for multi-agents, this is an important point to consider. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How does CP-SLAM handle scalability in terms of the size and complexity of the datasets? 2. Can CP-SLAM be applied to real-world datasets to evaluate its performance in practical scenarios? 3. Are there any specific limitations or challenges in the implementation of CP-SLAM that need to be addressed? 4. Can Nicer-SLAM and DROID-SLAM be compared as a contrast? 5. The article mentioned "Naturally, it is expected that the neural point cloud layout should be rearranged following global pose graph optimization. However, a world-centric point cloud map obviously cannot allow such adjustment. To tackle this limitation, we propose a keyframe-centric neural point field, where each 3D point is associated with a keyframe". Can the authors better explain why a world-centric point cloud map obviously cannot allow such adjustment? Sincerely, Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations have been discussed in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful reading and constructive comments on our paper, and for noting the novelty of our CP-SLAM system and its superiority. Here are our responses to your comments. **1. Large viewpoint change is challenging for NeRF-based SLAM methods. (W1)** As pointed out in Section 5 of our paper, rendering-based optimization is limited in the face of large viewpoint changes, which remains a bottleneck for other existing works, such as NICE-SLAM and Vox-Fusion. We conducted an ablation study on viewpoint difference in **Fig.4** of the attached **'pdf'** in the global response. It may be partially addressed by the particle filtering algorithm proposed in the latest Loc-NeRF [Maggio et al. ICRA 2023]. **2. More comparisons with traditional methods should be added in the two-agent experiments. (W2)** Due to the word limit, please refer to **'Comparison with additional traditional methods'** and **Tab.G1** in our global response. **3. GPU resources are a limitation for CP-SLAM. (W3)** It is indeed a common weakness of existing neural implicit SLAM works and our CP-SLAM, and we expect further improvements to be made in the future. **4. A large-scale experiment is not conducted in this paper. (W4)** In our research, we indeed struggled with limited data resources, and we have not been able to find a public large-scale dataset suitable for collaborative NeRF-based SLAM. Multi-agent datasets were mainly captured in large-scale outdoor scenes, such as urban streets. Affected by noisy depth, these available datasets are normally RGB-based, which is not compatible with our RGBD-based CP-SLAM. Moreover, when applied to outdoor scenarios, almost all existing NeRF-based SLAM systems encounter pressing issues such as dynamic interference, unbounded scenes, and fast motion. These challenges deserve immediate and critical attention. It would be interesting to work on these limitations and generalize CP-SLAM to outdoor environments, which is out of our current scope. **5. How does CP-SLAM handle various scene sizes and complexity? (Q1)** In terms of scene size, in principle, CP-SLAM only needs to incrementally add local neural points to support follow-up tracking. Facing large-scale scenes, our CP-SLAM is more scalable, particularly when compared to feature grid-based methods such as NICE-SLAM. In terms of scene complexity, including extreme lighting and weak texture, NeRF-based SLAM exhibits even better robustness than traditional methods, which is evidenced by the failure of traditional methods on Apartment-0 in Tab.1 of our paper. **6. Can CP-SLAM be generalized to real-world scenes? (Q2)** Due to the word limit, please refer to **'Real-world experiment'** and **Tab.G3** in our global response. **7. Are there any specific limitations or challenges in the implementation of CP-SLAM that need to be addressed? (Q3)** In collaborative NeRF-based SLAM, in addition to the lack of an appropriate neural field representation, it is not feasible to simply stack and cascade traditional modules in a plug-and-play manner. Consequently, we have innovatively proposed a keyframe-centric neural point field to address the limitation of sub-map fusion, a novel learning framework to maintain global consistency, etc. **8. Can Nicer-SLAM and DROID-SLAM be compared as a contrast? (Q4)** Nicer-SLAM is a non-public, monocular RGB-based neural SLAM system, while our CP-SLAM is a collaborative RGBD-based system. 
Therefore, we only added DROID-SLAM as a new baseline, and we report experimental results from the official DROID-SLAM in the following tables: | Method | Room-0-loop | Room-1-loop | Office-0-loop | Office-3-loop | Average | | :---------: | :----------------: | :----------------: | :----------------: | :----------------: | :----------------: | | DROID-SLAM | **0.28/0.26/0.23** | **0.26/0.22/0.19** | **0.45/0.41/0.37** | **0.33/0.29**/0.30 | **0.33/0.30/0.27** | | Ours(w$/$o) | 0.61/0.56/0.54 | 0.51/0.48/0.52 | 0.67/0.63/0.67 | 0.38/0.32/0.27 | 0.54/0.50/0.50 | | Ours(w$/$) | 0.48/0.44/0.43 | 0.44/0.40/0.46 | 0.56/0.53/0.56 | 0.37/0.31/**0.27** | 0.46/0.42/0.43 | **Table A: Single-agent Tracking Performance compared with DROID-SLAM** | Method | Room-0-loop | Room-1-loop | Office-0-loop | Office-3-loop | Average | | :--------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | | DROID-SLAM | 58/6.47 | 3/4.17 | 34/8.12 | 61/7.99 | 46/6.69 | | Ours | **0.32/1.53** | **0.23/1.20** | **0.22/1.21** | **0.76/1.7** | **0.38/1.41** | **Table B: Mapping Performance compared with DROID-SLAM** We observe that CP-SLAM performs slightly worse than DROID-SLAM in terms of tracking, but much better than DROID-SLAM in reconstruction. In addition, DROID-SLAM cannot support multi-agent collaboration. **9. Why can the world-centric model not allow neural point cloud adjustment after pose graph optimization? (Q5)** The keyframe-centric model is a crucial module in the proposed CP-SLAM system. Specifically, the neural point field is composed of two parts: the 3D point cloud and the corresponding feature embeddings which encode the surrounding regions. Following pose graph optimization (PGO), it is necessary to refine previously selected point positions based on the more accurate poses, so as to reuse the previously optimized feature embeddings. This requires a clear correspondence between 3D points and mapping frames. If a world-centric model is adopted, neural points cannot be refined along with the camera poses due to the absence of such correspondences, so all 3D points would have to be recomputed and all neural features learned from scratch for stable tracking and mapping after PGO. --- Rebuttal Comment 1.1: Title: Comment Comment: Dear authors, thank you for your responses and added analyses. We appreciate that the authors add the comparisons against DROID-SLAM and the comparisons in the global response. These analyses should be added to the final version. We would like to maintain the positive rating and suggest that the authors make the code publicly available to foster future research in collaborative SLAM. Sincerely, --- Reply to Comment 1.1.1: Title: Thank you! Comment: Dear Reviewer 1EKn, Thank you for your constructive suggestions and positive evaluation of our work again. As suggested, we will release our code upon acceptance and integrate these new comparisons into the revised version.
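The re-anchoring argument in point 9 is easy to state in code. Below is a minimal sketch, under our own naming, of why a keyframe-centric field can be adjusted cheaply after PGO: each point stores coordinates in its keyframe's frame, so its world position is simply re-evaluated under the refined keyframe pose while the learned feature embedding is reused untouched.

```python
import numpy as np

def reanchor_points(local_pts, kf_ids, refined_poses):
    """local_pts: (N, 3) point coordinates in their keyframes' frames;
    kf_ids: (N,) index of the keyframe each point is anchored to;
    refined_poses: dict of keyframe id -> refined cam-to-world 4x4 pose.
    Returns updated world coordinates; per-point features need no change."""
    world = np.empty_like(local_pts)
    for i, (p, k) in enumerate(zip(local_pts, kf_ids)):
        T = refined_poses[k]
        world[i] = T[:3, :3] @ p + T[:3, 3]
    return world
```

A world-centric map stores only `world`, so once the poses change there is no `kf_ids` link left to re-evaluate; this is the missing correspondence the rebuttal refers to.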
Summary: There have been some papers on getting dense Visual SLAM with neural fields published in the last couple of years. In this work, the authors proposed a set of additions to prior pipelines that bring NN SLAM closer to the traditional approaches in terms of the overall framework. Specifically, the authors addressed loop detection, pose graph optimization, bundle adjustment, and collaborative SLAM. Strengths: 1) Visual-SLAM with neural fields has the huge benefit of providing a dense map of the environment. The topic is relevant to computer vision and robotics, and I believe it fits the NeurIPS scope. 2) The paper is well-written, easy to follow, and well-organized. Related work is OK. 3) Although the technical novelties are relatively limited, in my opinion, the contributions to the pipeline are significant. Weaknesses: 1) As written above, the technical contributions are limited. The paper focuses more on adding new steps to prior Visual SLAM pipelines. 2) The inclusion of NetVLAD (not new) as a tool for loop detection looks like a trick. It is not elegant, in the sense that it is a parallel feature extractor that (maybe) could be done using the neural field encoders. 3) Only one dataset and 4 sequences are used, which is small concerning quantity and variety. 4) I would prefer to see more comparisons with previous traditional methods (without neural fields). 5) Figures 5 and 6 are not easily readable. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) I would like the authors to explain the motivation for the inclusion of NetVLAD. Can't those descriptors be obtained from the neural field? 2) It is written that a pre-trained NetVLAD was used. It would be important to give more details about this training and how it generalizes to different scenes. 3) Usually, with classical V-SLAM, the bundle adjustment estimates both the pose of the cameras and the 3D map simultaneously. It seems that the authors are doing this alternately. I would like to see some comments on this. 4) Why are the results for Apartment 2 so bad for the proposed approach? I would like the authors to comment on this. 5) It seems that traditional V-SLAM methods using RGB-D are missing (such as ORB-SLAM). I would like to see some comments on this. 6) Maybe I missed something. But it is not clear to me how the error in Tab. 2 is computed. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and for appreciating our contributions and writing. Here are our responses to your comments. **1. The paper focuses more on adding new steps to prior Visual SLAM pipelines. (W1)** We aim to build a complete collaborative NeRF-based SLAM system, focusing more on system-level development. Such work is missing in the field of NeRF-based SLAM. CP-SLAM is the first collaborative neural implicit SLAM system, consisting of neural point-based odometry, loop detection, sub-map fusion, and global pose graph optimization. In collaborative NeRF-based SLAM, in addition to the lack of an appropriate neural field representation, it is not feasible to simply stack and cascade traditional modules in a plug-and-play manner. Consequently, we have introduced technical enhancements to both the neural field representation and the pipeline of CP-SLAM, such as a novel keyframe-centric neural point field and a novel learning framework to maintain global consistency. **2. Why not use the neural field encoders, instead of NetVLAD, for loop detection? (W2+Q1)** We did try taking neural point features as descriptors for loop detection. However, neural features only encode local surrounding information, rather than forming a global descriptor like NetVLAD. This leads to poor distinctiveness of these neural features and further results in false-positive matches, which make the loop detection unstable. **3. Dataset is limited. (W3)** We have experimented on the TUM-RGBD real-world dataset, and the results are shown in **Tab.G3** in the global response. Following recently proposed NeRF-based SLAM works, we chose three scenes: fr1-desk, fr2-xyz, and fr3-office (with loop closure) in TUM-RGBD. The results in **Tab.G3** illustrate that CP-SLAM has also achieved state-of-the-art performance in the real-world setting, and the loop detection and pose graph optimization are equally effective for the real-world scene. It is worth noting that we compared against the most recently proposed SOTA methods Co-SLAM [Wang et al. 2023] and ESLAM [Johari et al. 2023] in the TUM-RGBD experiment, and ESLAM encountered an out-of-memory (OOM) issue on fr2-xyz. **4. More comparisons with traditional methods should be added. (W4+Q5)** We compared with two more baselines, ORB-SLAM3 [Campos et al. 2021] and Swarm-SLAM [Lajoie et al. 2023], in the two-agent experiment to demonstrate our performance, and the corresponding results are shown in **Tab.G1** in the global response. It is clear that our method still performs the best. To be noted, ORB-SLAM3 is not a collaborative SLAM system. It lacks the capability to process multiple sequences at the same time and is thus unable to overcome the efficiency bottleneck in scene exploration. We evaluate ORB-SLAM3 just for a more comprehensive comparison. **5. Figures 5 and 6 are not easily readable. (W5)** Thank you for your comment; we have improved these two figures for a better visual experience. Please refer to **Fig.1** and **Fig.2** in the attached **'pdf'** in our global response. **6. Training details and generalization of NetVLAD. (Q2)** The pre-trained NetVLAD model used in our paper comes from the open-source project proposed by Paul et al. In our tests, we observed its robustness across various scenes and its insensitivity to hyperparameters. In fact, a pre-trained NetVLAD has also commonly been adopted for loop detection in other SLAM systems like DOOR-SLAM [Lajoie et al. 2020] and DSLAM [Cieslewski et al. 2018]. **7. 
Separate pose and map optimization. (Q3)** During mapping, we select multiple frames for joint optimization based on the covisibility correlations. Considering the limitation of GPU memory, the total number of rays is fixed; that is, only a few rays can be allocated to each frame. In this case, considering the irregular distribution of neural points compared to a regular feature grid, the pose adjustment is prone to sub-optimal solutions if it is done together with mapping, so, for more stable pose optimization, we choose to perform tracking separately with more rays after mapping. **8. Pose drift in Apartment-2. (Q4)** Existing NeRF-based SLAM methods often construct an L1 loss between the rendered depth and the ground truth depth, relying on local depth variance to guide the pose estimation towards a correct solution. However, in the **'Apartment2-part2'** sequence, the camera frequently moves parallel to walls with uniform depth, aggravating the non-convexity of the depth loss and resulting in ambiguous solutions. This is a common problem in recently proposed NeRF-based SLAM approaches, and we hope to overcome this bottleneck in future work. **9. Error calculation method in Tab.2. (Q6)** The results in Tab.2 were obtained in an origin-aligned manner with the **EVO** toolbox for proper drift/loop-closure evaluation. Specifically, after aligning the origin poses, we calculate the absolute trajectory error (ATE) for each pose and compute the RMSE/Mean/Median values. --- Rebuttal 2: Comment: Dear Reviewer Zpdr, Many thanks for your review of the paper. Your rating is of the borderline kind. Could you please read the authors' rebuttal and give your opinion and final rating? Thank you very much, AC
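The origin-aligned ATE statistics described in point 9 can be reproduced with the EVO toolbox; as an aside for readers, the sketch below shows the underlying computation, reduced to translations only for brevity. The function name and the translation-only origin alignment are our simplification, not the authors' exact evaluation script.

```python
import numpy as np

def ate_stats(est_xyz, gt_xyz):
    """est_xyz, gt_xyz: (N, 3) translations of time-associated poses.
    Origin alignment: shift the estimate so its first pose coincides with
    the ground truth's, then report ATE RMSE / mean / median."""
    est = est_xyz - est_xyz[0] + gt_xyz[0]
    err = np.linalg.norm(est - gt_xyz, axis=1)
    return np.sqrt(np.mean(err ** 2)), err.mean(), np.median(err)
```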
null
null
null
null
Separable Physics-Informed Neural Networks
Accept (spotlight)
Summary: This paper introduces a novel approach to using separable neural networks to represent solutions of partial differential equations (PDEs). In contrast to the conventional form used in physics-informed neural network (PINN) methods, the specialized form of neural networks proposed in this work takes advantage of forward-mode auto-differentiation, leading to faster training times in PINN frameworks. The authors provide a mathematical demonstration of the proposed form's approximation ability and conduct experiments on several PDEs. The results reveal that the proposed method achieves comparable accuracy to previous work, while significantly reducing computational and memory costs. Strengths: 1. The speed-up achieved by the proposed method is highly promising. The time required for training a single network to convergence has been a limitation in the application of PINN methods. This work addresses this issue by introducing a specialized neural network form, and in combination with the forward-mode auto-differentiation method, the training speed is significantly improved. 2. In addition to the improved auto-differentiation computation, this work employs fixed collocation points, enabling further acceleration of the total training time by reusing calculations on each point. By leveraging this approach, the computational efficiency of the method is significantly enhanced. Weaknesses: Although the proposed method in this paper is novel and interesting, there are several concerns that the reviewer would like to address: 1. The choice of using a manufactured solution that aligns with the variable separation form for evaluating the results on the Helmholtz Equation, Klein-Gordon Equation, and (3+1)-d Navier-Stokes Equation raises some questions. While this choice is suitable for evaluating original PINN methods that do not utilize the variable separation property of the ground truth, it may be unfair to compare SPINN with the original method on these tasks. 2. One of the key advantages of PINN-based methods is their flexibility to be applied to any geometric surface without modifying the grids. However, the variable separation form of the neural network in SPINN may limit this flexibility. It would be challenging for SPINN to fit the boundary conditions of complex geometries due to the constraints of the variable separation form. 3. Throughout the paper, the evaluation metric predominantly used is the $L_2$ evaluation metric. However, the reliability of the $L_2$ metric, especially for non-linear PDEs, can sometimes be questionable. It would be valuable to also report metrics such as the Sobolev norm or the PINN loss to provide a more comprehensive evaluation. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. To assess the performance of SPINN on more complex solutions that do not directly align with its form, the authors should conduct more experiments. The experiments could involve using a Multilayer Perceptron (MLP) in the classical PINN as the manufactured solution and training SPINN under the associated PDE. 2. The reviewer is curious about the performance of SPINN on PDEs with sophisticated boundaries. For instance, it would be interesting to explore how SPINN performs on a sphere boundary condition, where the variables are still separated into x, y, and z components. 3. Including the Sobolev norm or PINN loss as additional evaluation metrics would provide the reviewer with more information to assess the performance of SPINN comprehensively. 4. 
The reviewer wonders if it is possible to reduce the dimension of the problem through variable separation. Most PINN methods work on high-dimensional manifolds, incurring significant computational costs. While SPINN employs low-dimensional functions to construct the high-dimensional function, its domain is still a high-dimensional manifold. The reviewer believes that there might be a way to directly optimize these low-dimensional functions within the low-dimensional manifold, potentially offering benefits in solving high-dimensional PDEs. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed the limitations in the paper Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **R: It is unfair to test SPINN on solutions that align with the variable separation form.** A: We tested on manufactured solutions because this is the most conventional way of making an exact reference solution. As you can see in the diffusion and (2+1)-d Navier-Stokes equations, SPINN is still effective in solving equations whose solutions do not align with the variable separation. During the rebuttal period, we conducted additional experiments on the flow mixing problem. This is a (2+1)-d PDE of two fluids being mixed at the interface. We followed the problem settings used in CAN-PINNs [1]. It has an analytical solution that does not explicitly decompose into each input variable: $u(t, x, y)=-\tanh(\frac{y}{2}\cos(\omega t)-\frac{x}{2}\sin(\omega t))$, where $\omega=\frac{1}{r}\frac{\tanh(r)}{0.385\cosh^2(r)}, r=\sqrt{x^2+y^2}.$ Since CAN-PINN reported mean-squared errors on a box plot without specifying the exact numerical values, we compared against their rough result. You can see the visualization in Figure 3 of the attached PDF. Standard PINNs: 1.08e-2; CAN-PINN: 2e-5~3e-5; SPINN: 2.95e-5. We also added a result on the (1+1)-d chaotic Kuramoto-Sivashinsky equation, following the causal PINNs' experiment [2]. Within the temporal domain [0, 0.4], our model achieved a relative L2 error of 3.81e-2 without using a causal loss, and the training speed was 15 times faster than the causal PINNs. You can see the visualized result in our attached PDF, Figure 2. We then have a total of 8 PDEs: *separable form*: Helmholtz, (2+1)-d Klein-Gordon, (3+1)-d Klein-Gordon, (3+1)-d Navier-Stokes; *non-separable form*: diffusion, (2+1)-d Navier-Stokes, flow mixing, Kuramoto-Sivashinsky. Moreover, we will add a paragraph in the experiment section to indicate to the readers whether or not each example aligns with the variable separation form. *** **R: SPINN on complex boundary conditions** A: Although our current work showed examples on a rectangular domain, it is premature to conclude that SPINN *cannot* be applied to arbitrary geometries. Inspired by PhyGeoNet [3] and Geo-FNO [4], we can apply an additional operation after SPINN's feature merging to map a rectangular mesh to an arbitrary physical mesh. We are eager to explore combining these works with SPINN as a future study. We also believe that the current SPINN can be applied to circular or spherical boundaries. The exploration of SPINN across a range of coordinate systems greatly intrigues us; however, during the rebuttal phase we could not find a suitable test example involving a PDE defined in spherical coordinates whose solution is non-decomposable along each axis. We will conduct experiments whenever we find such examples. Thank you for your suggestion. *** **R: Other metrics** A: Based on your and reviewer KaJa's suggestions, we have introduced two additional metrics – RMSE and the PINN loss value – as detailed in the attached PDF file. We will provide the complete results in the revised version of the paper if permitted. Thank you for your valuable comment. *** **R: Dimensionality reduction through variable separation** A: This is a very good idea. We also believe that optimizing each separated function would greatly reduce the computational cost, since it does not require any operations in a high-dimensional space. At first thought, if we were able to decompose the original high-dimensional PDE into a set of ODEs, we could directly optimize each function using the ODE of the corresponding dimension. 
There still exist some challenges that need to be solved, such as 'how can we actually decompose a PDE into ODEs?' or 'can such a decomposition be applied universally?'. There may be a completely different approach we have not thought of. We find this avenue of research very interesting and are enthusiastic about exchanging further insights with you. [1] Chiu et al., "CAN-PINN: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method", Computer Methods in Applied Mechanics and Engineering 395 (2022) [2] Wang et al., "Respecting causality is all you need for training physics-informed neural networks", arXiv (2022) [3] Gao et al., "PhyGeoNet: Physics-Informed Geometry-Adaptive Convolutional Neural Networks for Solving Parameterized Steady-State PDEs on Irregular Domain", Journal of Computational Physics (2021) [4] Li et al., "Fourier Neural Operator with Learned Deformations for PDEs on General Geometries", arXiv (2022) --- Rebuttal Comment 1.1: Comment: Thank you for your reply! The reviewer believes that the additional evaluation properly addresses weaknesses 1 and 3, but the argument for weakness 2 is not very convincing for the reviewer, since those methods are still mesh-based methods. After consideration, the reviewer will increase the score to 5. --- Reply to Comment 1.1.1: Title: Thank you for your reply Comment: Thank you for your reply! We agree that applying the domain transformation to SPINN still has limitations for arbitrary geometries. As you pointed out, the current version of SPINN requires training on a mesh-based grid. We will clarify this in the limitations section. We believe we can address this drawback in future work. Thank you.
Summary: This paper presents a novel variant of PINN for learning multi-dimensional PDEs. Unlike traditional approaches considering point-wise inputs, this method incorporates each axis's information separately during the learning process and leverages tensor decomposition to interpret the predicted solution variables. Additionally, the authors consider a forward-mode auto-differentiation technique that facilitates training with a more significant number of collocation points. The extensive experiments proved the efficiency, scalability, and effectiveness of the proposed SPINN. Strengths: - This paper contributes to alleviating the scalability and efficiency issues of PINNs. Also, the proposed forward-mode auto diff method overcomes collocation point constraints, leading to more accurate solutions. I think it is of interest to the PINN community and the experimental results are convincing. - The authors provide the theoretical foundation of SPINN. Also, the authors have done extensive experiments on high-dimensional PDEs to prove the scalability of SPINNs. Although the tested PDEs are not that complicated, I don't think that is a big issue in this paper since it is more related to the optimization issue of PINNs. - This paper is well-written and well-organized. Weaknesses: - This paper only alleviates the scalability issue, but the primary concern of PINNs lies in the challenges of optimization. Therefore, its contribution to the scientific machine-learning community is moderate. - It would be good to have a paragraph discussing the related work on scalability for PINNs. There are many existing techniques to handle the scalability issue, such as domain decomposition [1], seq2seq [2], and adaptive sampling methods [3]. Using adaptive sampling can also help reduce the collocation points. Moreover, in the baseline comparison, it may strengthen the paper to compare SPINN with one of these techniques, though my impression is that SPINN will be better based on the results of this paper. [1] Jagtap, A. D., & Karniadakis, G. E. (2021, March). Extended Physics-informed Neural Networks (XPINNs): A Generalized Space-Time Domain Decomposition based Deep Learning Framework for Nonlinear Partial Differential Equations. In AAAI spring symposium: MLPS (Vol. 10). [2] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., & Mahoney, M. W. (2021). Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems, 34, 26548-26560. [3] Subramanian, S., Kirby, R. M., Mahoney, M. W., & Gholami, A. (2022). Adaptive self-supervision algorithms for physics-informed neural networks. arXiv preprint arXiv:2207.04084. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What about the performance of forward-mode auto diff for higher-order derivatives (e.g., fourth-order in the Kuramoto-Sivashinsky equation)? - How do the authors consider the rank *r* (i.e., the *r*-dimensional feature representation in Eq. (5))? In Theorem 1, the authors claim *r* should be sufficiently large. What about in practice? - In lines 193-195, for each axis, this paper considers random sampling for 1D input points. Random sampling may not be an optimal choice. Is there any way to improve it in the context of SPINN? Can the authors add some discussions on this part? - How do the authors consider the optimization in PINNs? In the section on Limitations (lines 335-342), the authors claim that learning-rate annealing will be considered in the future. 
I assume the authors were simply tuning the hyper-parameters via grid search. Is that correct? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The discussion of the limitations in this paper is comprehensive. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **R: The primary concern of PINNs is the optimization issue. Therefore, the contribution is moderate.** A: We believe that scalability is also an important issue to tackle in the PINNs literature. In our Navier-Stokes experiment, where the solution shows complex and turbulent behavior, previous studies [1, 2] have demonstrated the necessity for a large number of collocation points. In particular, the causal PINNs [1] had to adopt parallelized training on multiple GPUs and sophisticated software optimization to employ large-scale collocation points. However, without bells and whistles, our proposed method effectively mitigates the scalability issue. Moreover, our experimental results, when compared with prior methods that focused on PINNs' optimization (such as the modified MLP and causal PINNs), suggest that the adoption of large-scale collocation points can serve as an effective means of optimizing PINNs. Employing collocation points at this scale is computationally infeasible for standard PINNs. Also, since the seq2seq (time-marching) and the modified MLP are orthogonal to our proposed method, we additionally applied these techniques to SPINN and observed their positive impact. Other optimization methods like adaptive activation functions [3] and learning-rate annealing [4] are also applicable to SPINN, and we believe that investigating this would be a valuable direction for future study. *** **R: Related works on scalability for PINNs** A: Thank you for the references. We will add an additional paragraph in the related works section. We will also try to run the provided baselines on our PDE settings and compare them with SPINN’s results in future work. *** **R: Higher-order PDEs** A: During the rebuttal period, we tested our model on the chaotic (1+1)-d Kuramoto-Sivashinsky equation following the causal PINNs’ experiment [1]. Within the temporal domain [0, 0.4], our model achieved a relative L2 error of 3.81e-2 without using a causal loss, and the training speed was 15 times faster than the causal PINNs. You can see the visualized result in our attached PDF, Figure 2. We also want to note that the PDE we used for the (2+1)-d Navier-Stokes experiment is third-order: the vorticity form of the equation is second-order, and our model predicts the velocity, to which an additional curl operation must be applied to obtain the vorticity. *** **R: In the theorem, rank should be sufficiently large. What about in practice?** A: Empirically, we showed that SPINN with 128 output units (i.e., the rank, which is not too large) approximates the (2+1)-d Navier-Stokes equations more accurately than any other existing PINN. As you can see in Figure 7, using a larger rank does not show a meaningful performance increase. Knowing the sufficient number of output units (the rank in SPINN) beforehand is also challenging since we do not know the ranks of the solution functions in general. It is similar to the question 'how many layers and units are sufficient to solve a certain PDE with a PINN?'. We believe it is an important and very challenging question, and we plan to study it further in future work. *** **R: Random sampling may not be an optimal choice. Any other ways to improve in the context of SPINN?** A: One straightforward way is to sample more points where the loss value is large.
When solving a time-dependent PDE, we can sample more temporal coordinates near the initial time (t=0) at the beginning of training and progressively extend the sampling to later times as training proceeds. This would guide the model to learn the causal nature of the solution. *** **R: Hyper-parameter tuning** A: Our hyper-parameters are tuned by grid search. We mentioned the learning-rate annealing because it helps the model find the best balance of weights among the loss terms. [1] Wang et al., "Respecting causality is all you need for training physics-informed neural networks", arXiv (2022) [2] Sankaran et al., "On the impact of larger batch size in the training of Physics Informed Neural Networks", The Symbiosis of Deep Learning and Differential Equations II (2022) [3] Jagtap et al., "Adaptive activation functions accelerate convergence in deep and physics-informed neural networks", Journal of Computational Physics (2020) [4] Wang et al., "Understanding and mitigating gradient pathologies in physics-informed neural networks", SIAM Journal on Scientific Computing (2021) --- Rebuttal Comment 1.1: Comment: Thanks for the clarification and additional experiments. I think the authors have done good work tackling the scalability issue of PINNs. I partially agree with the authors that larger numbers of collocation points facilitate optimization. I will keep my score.
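To make the progressive temporal sampling idea in the reply above concrete, here is a minimal sketch (NumPy); the linear schedule, constants, and function names are hypothetical illustrations, not code from the paper:

```python
import numpy as np

def sample_time_coords(n_points, t_max, step, total_steps, rng):
    # Widen the sampled time window linearly as training proceeds, so the
    # model first fits the dynamics near t=0 and only later sees the full
    # temporal domain [0, t_max]; the window reaches t_max halfway through.
    frac = min(1.0, (step + 1) / (0.5 * total_steps))
    return rng.uniform(0.0, frac * t_max, size=n_points)

rng = np.random.default_rng(0)
t_early = sample_time_coords(128, 1.0, step=0, total_steps=1000, rng=rng)
t_late = sample_time_coords(128, 1.0, step=900, total_steps=1000, rng=rng)
assert t_early.max() < t_late.max()  # early samples cluster near t=0
```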
Summary: The paper proposes a methodology for significant savings in computation and memory in learning physics-informed networks for approximately solving PDEs. This enables learning with a much larger number of points, resulting in better accuracy. The idea is to use a specific function class, where features are constructed from each input dimension, and the output is simply the product of these features. The function class is shown to be universal. Since the same feature encoding can be used for all points that share the same value in each dimension, feature embeddings are calculated only once, in linear time, for an exponential number of points, resulting in substantial savings. Using forward-mode automatic differentiation, derivatives also benefit from the same savings. Experimental results demonstrate the effectiveness of this approach in approximating the solution of diffusion, Helmholtz, Klein-Gordon, and Navier-Stokes PDEs. Strengths: The paper is nicely written, the background material is quite useful, and many figures help demonstrate the main idea of the paper. In particular, the paper briefly discusses connections to similar ideas in implicit neural representations and tensor factorization methods, which I found quite relevant. The main idea is simple, and a priori it is not obvious that the chosen function class can perform so well in practice. Therefore, the primary strength of the paper is its rather surprising experimental results, where for some important PDEs orders-of-magnitude speed-ups are achieved with accurate estimates compared to vanilla PINN and some other variants. For the same reason, the main theorem of the paper that states the universality of the proposed architecture is also quite useful. Overall, I enjoyed reading the paper, and I believe the proposed methodology can have a high practical impact. Weaknesses: While I did not identify a major weakness, I believe the paper can benefit from the following changes: - While experimental results suggest that the proposed architecture can deliver more improvements in some settings than others (e.g., results for diffusion), there is no discussion of this. In general, while the universality result is reassuring, it is unclear in what kind of PDEs it is a useful inductive bias. - I found Section 4.3 on gradient computation for SPINN useful; the paper can benefit from doing the same exercise for higher-order derivatives, since the complexity of calculating these derivatives in terms of the dimensionality of the embedding space will dominate the complexity of SPINN. - While I’m aware that reporting relative error is a common exercise, it is quite sensitive and unstable (due to division by the true solution values) and, therefore, could become misleading. I suggest also reporting the RMSE for all experiments. - For clarity, the paper should state that the overall complexity still remains exponential in resolution (N), despite reducing it to linear in “network propagations”. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
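To make the separable function class described in the review above concrete, the sketch below (JAX; layer sizes, initialization, and all names are illustrative assumptions, not the authors' code) builds one small MLP per input axis and assembles the solution on an N^3 grid from only d·N network propagations, followed by an einsum merge over the rank dimension:

```python
import jax
import jax.numpy as jnp

def init_mlp(key, widths):
    keys = jax.random.split(key, len(widths) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, widths[:-1], widths[1:])]

def mlp(params, x):  # per-axis feature map: (N, 1) -> (N, r)
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

d, N, r = 3, 64, 16                      # dimensions, points per axis, rank
keys = jax.random.split(jax.random.PRNGKey(0), d)
nets = [init_mlp(k, [1, 32, 32, r]) for k in keys]
axes = [jnp.linspace(0.0, 1.0, N)[:, None] for _ in range(d)]

# d*N network propagations instead of N**d, then a cheap tensor merge:
F = [mlp(p, x) for p, x in zip(nets, axes)]   # d arrays of shape (N, r)
U = jnp.einsum('ir,jr,kr->ijk', *F)           # sum of r rank-1 outer products
print(U.shape)                                # (64, 64, 64)
```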
Rebuttal 1: Rebuttal: **R: Discussion of why SPINN delivers more improvements** A: Our performance enhancement mainly stems from the adoption of large-scale collocation points. This is particularly effective when dealing with PDEs where the solution is complex, such as a solution that contains high-frequency components (as demonstrated in our Helmholtz equation) or a solution that shows turbulent and chaotic behaviors (Navier-Stokes equation). Moreover, as the input dimension increases, our model can derive substantial benefits from the adoption of large-scale collocation points, effectively spanning the high-dimensional input space. *** **R: Higher-order derivatives for section 4.3** A: Thank you for your suggestion. We could easily derive the formula for higher-order derivatives of Equation 7 and found that as the order increases, the overall pattern of the equation does not change much: one can simply replace the derivative $df/dx$ with its higher-order counterpart $d^n f/dx^n$. As you mentioned, since computing the derivatives dominates PINN training, we will provide a detailed explanation of the higher-order case in this section. *** **R: Other metrics** A: Based on your and reviewer dosJ’s suggestion, we have introduced two additional metrics – RMSE and PINN loss value – as detailed in the attached PDF file. We will provide the complete results in the revised version of the paper if permitted. Thank you for your valuable comment. *** **R: Clarification in the expressions: “complexity is linear in network propagations”** A: Thank you for pointing this out. We will carefully examine the entire text for issues and make the necessary corrections.
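The point about higher-order derivatives can be illustrated with nested forward-mode calls in JAX; the toy feature map below is purely illustrative, and `fwd_deriv` is a hypothetical helper name:

```python
import jax
import jax.numpy as jnp

def fwd_deriv(f):
    # One jvp call with a unit tangent yields df/dx without a backward pass;
    # nesting the transformation gives d^2 f/dx^2, d^3 f/dx^3, and so on.
    return lambda x: jax.jvp(f, (x,), (jnp.ones_like(x),))[1]

f = lambda x: jnp.stack([jnp.sin(x), x ** 3])  # toy per-axis feature map

d1 = fwd_deriv(f)        # df/dx
d2 = fwd_deriv(d1)       # d^2 f/dx^2 via forward-over-forward AD
x = jnp.float32(0.5)
print(d2(x))             # [-sin(0.5), 6*0.5] = [-0.479..., 3.0]
```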
Summary: The paper proposes a new class of architecture called Separable PINN (SPINN) that operates on a per-dimension basis instead of the standard coordinate-based MLP architecture. This architecture significantly reduces the computational complexity and allows the use of a large number of collocation points to enforce the PDE residuals. The paper also provides a universal approximation theorem for the proposed architecture. The paper empirically demonstrates that the proposed SPINN is significantly faster than the original PINN architecture during training while maintaining accuracy. Strengths: 1. The idea of using a different sub-network for every dimension combined with an outer-product operation to evaluate on a grid is both technically sound and interesting. 2. The paper provides theoretical foundations of the SPINN architecture in the form of the universal approximation theorem. 3. The gradient computation using forward-mode AD and the network architecture together lead to significant speed-ups in the training time of PINNs (which is quite impressive). Weaknesses: 1. Although the SPINN architecture is quite efficient, I am concerned about the scalability of the architecture for high-dimensional PDEs (>4). (Please see question 1) 2. The size of the plots in Figure 6 is excessively small, making it difficult to compare the performance between the baselines. 3. The PDEs chosen for the baselines are relatively simple (other than the Navier-Stokes). It would be interesting to compare the architectures for equations where standard PINNs fail, such as the convection equation (with larger values of $\beta$) or the Kuramoto-Sivashinsky equation (in chaotic regimes). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. One of the main claims of the paper is that the computational complexity of the SPINN architecture scales linearly with the number of collocation points $N$ and the dimensions $d$, in contrast to the standard PINN which scales as $N^d$. However, for the SPINN architecture, if we have a different branch for each of the $d$ dimensions, the outer-product operation multiplying $d$ different quantities can be numerically unstable. For example, can it lead to gradient vanishing or gradient exploding problems? Is some form of normalization required to ensure that the network is stable? 2. The universal approximation theorem guarantees that the approximation error can be as low as possible. However, does the SPINN architecture have any optimization problems? 3. In Figure 6, for a small number of collocation points (16, 32, 64) for the diffusion equation, the PINNs perform much better than SPINN. Is there a specific reason why SPINNs did not perform well? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The limitations have been adequately addressed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **R: Computational stability and gradient vanishing/exploding problems** A: We have tested on the (5+1)-d system to check whether our model can handle higher-dimensional PDEs (please see section G.1 in the supplementary material). Additionally, we acknowledge the potential numerical instability introduced by the outer product operation as the dimensionality ($d$) increases. To address this, a promising approach for solving higher-dimensional PDEs involves integrating multiple body networks into one, thereby reducing the number of multiplications during feature merging: $U(x_1, x_2, \dots, x_d) = \sum f(x_1, x_2)\, f(x_3, x_4) \cdots f(x_{d-1}, x_d)$, where the sum runs over the feature (rank) dimension. We view the exploration of SPINN's performance on high-dimensional PDEs as an intriguing avenue for future research. Thank you for your suggestion. *** **R: Size of Figure 6** A: You can see the magnified version of Figure 6 in our attached PDF file (Figure 1). We will also update the figure in our revised version of the paper. Thank you for your advice. *** **R: Choice of the PDEs** A: The Helmholtz and Klein-Gordon equations pose notable challenges for conventional PINNs, even in 2-dimensional cases [1]. Moreover, we intentionally introduced high-frequency components into the reference solution of the Helmholtz equation to examine the strength of SPINN’s capability to approximate highly complex solutions. Due to neural networks' inherent spectral bias [2], both the standard PINN and the PINN with a modified MLP failed in this case, regardless of the number of collocation points used. During the rebuttal period, we tested our model on the chaotic (1+1)-d Kuramoto-Sivashinsky equation following the causal PINNs’ experiment [3]. Within the temporal domain [0, 0.4], our model achieved a relative L2 error of 3.81e-2 without using a causal loss, and the training speed was 15 times faster than the causal PINNs. You can see the visualized result in our attached PDF, Figure 2. *** **R: Does the SPINN architecture have any optimization problems?** A: Since our work primarily addresses the scalability issues of PINNs, SPINN inherits the optimization difficulties of the PINN framework. However, our experimental results, when compared with prior methods that focused on PINN optimization (such as the modified MLP and causal PINNs), suggest that the adoption of large-scale collocation points can serve as an effective means of optimizing PINNs. Employing collocation points at this scale is computationally infeasible for standard PINNs. Regarding optimization, we have conducted some experiments to explore the use of L-BFGS when training SPINN. Please see section G.2 in the supplementary materials. Understanding the effect of the optimization algorithm is still an open question in PINNs; we believe that investigating this issue in the context of SPINN would be a valuable direction for future study. *** **R: Why do PINNs perform better than SPINN for a small number of collocation points for the diffusion equation?** A: Among the PDEs in our experiments, the diffusion equation was the simplest case, where the solution is a superposition of three Gaussians. It seems that even standard PINNs trained with a small number of collocation points can accurately predict the relatively simple solution function. Because SPINN uses structured (factorized) collocation points, it seems to suffer in some cases when the number of training inputs is small. However, we want to note that SPINN eventually finds a more accurate solution with more collocation points ($>10^6$).
Training standard PINNs at this scale would require much more training time and memory. [1] Wang et al., "Understanding and mitigating gradient pathologies in physics-informed neural networks", SIAM Journal on Scientific Computing (2021) [2] Rahaman et al., "On the spectral bias of neural networks", ICML (2019) [3] Wang et al., "Respecting causality is all you need for training physics-informed neural networks", arXiv (2022) --- Rebuttal Comment 1.1: Comment: I thank the authors for providing clarifications to my questions/comments. I have read the other reviews and the authors' comments, and I have increased my score accordingly.
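The pairwise-merging remedy suggested in the rebuttal above can be sketched as follows (JAX; a hypothetical d = 6 setting with illustrative shapes and names): three body networks each take a 2-D input, so the feature merge multiplies three factors instead of six, mitigating under- and overflow:

```python
import jax
import jax.numpy as jnp

r, N = 8, 16  # rank and number of sampled 2-D points per pair of axes

def body(key):
    # A tiny two-input body network f(x_a, x_b): (M, 2) -> (M, r).
    W1 = jax.random.normal(key, (2, 32)) * 0.1
    W2 = jax.random.normal(jax.random.fold_in(key, 1), (32, r)) * 0.1
    return lambda xy: jnp.tanh(xy @ W1) @ W2

keys = jax.random.split(jax.random.PRNGKey(0), 3)
fs = [body(k) for k in keys]
pairs = [jax.random.uniform(k, (N, 2)) for k in keys]   # points per axis pair
F = [f(p) for f, p in zip(fs, pairs)]                   # three (N, r) factors
U = jnp.einsum('ar,br,cr->abc', *F)                     # 3 factors, not 6
print(U.shape)                                          # (16, 16, 16)
```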
Rebuttal 1: Rebuttal: ## For all reviewers We thank the reviewers for taking the time to provide invaluable comments and constructive feedback. **We attached a PDF file** to show tables and visualizations of additional experiments conducted during the rebuttal period. Please see our responses below. Pdf: /pdf/7cd56a8b37a01847c3de37b577b96417463895ee.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: In this paper, the authors propose two main contributions: (i) forward-mode AD for PDEs; (ii) separating the contributions from each dimension (just like in separable convolutions) and performing computation using tensor multiplication. The results naturally show significant improvement over PINNs. Strengths: + Significant amount of speedup in performance. + To be honest, this is probably the only PINN variant I have seen showing results for 3D with as many collocation points as 64^3 etc. + They can even solve (3+1)-d problems. Weaknesses: - The language needs to be worked upon. It's not that forward AD is a very new area. In the broad community of NeurIPS, there are a lot of works that use similar ideas. Statements like "To our knowledge, it is the first attempt to exploit the power of forward-mode AD" need to be made with caution. Perhaps make it clear that this claim is within the purview of PINNs. In fact, even in PINN-type papers, there are works that use finite-difference and finite-element based approaches (which don't need a backward pass), e.g., (i) https://arxiv.org/pdf/2005.08357.pdf (ii) https://arxiv.org/abs/2112.04960 (iii) https://arxiv.org/abs/2211.03241 (iv) https://arxiv.org/abs/1901.06314 (v) https://arxiv.org/pdf/2109.07143.pdf - lack of rigorous comparisons - lack of ablation studies Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What is the rank in Figure 7? - It seems that your accuracy with an even higher number of collocation points is higher than PINNs'. Yes, you are solving faster, but is it better? I don't see it clearly... - Where are the comparisons with other methods like DeepONets, FNOs, etc.? - Is 1.9e-3 a sufficient loss for Navier-Stokes? What is the typical acceptable error in real applications, and can SPINN achieve it? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **R: Expressions when stating our contribution** A: In the related work section, we mentioned that we utilized forward-mode AD *in training PINNs* (lines 108-109), but regrettably, this detail was inadvertently omitted in the conclusion section (line 348). Thank you for pointing this out. We will carefully examine the entire text for any additional inaccuracies and make the necessary corrections. *** **R: Forward-mode AD is not very new** A: We want to clarify that our contribution lies in the demonstration of a novel PINN architecture that *utilizes* forward-mode AD *for fast and accurate PINN training*. Although forward-mode AD itself is not a new concept, we have rediscovered its potential within the context of PINNs research. *** **R: PINN-type papers that use numerical derivatives** A: We appreciate the provided references. We will cite them in the related works section. It is true that PINN models that use numerical differentiation do not require any backward pass for obtaining the input derivatives. However, these models are still burdened by a computational complexity of $O(N^d)$ during the network propagation, thereby limiting their ability to handle large-scale collocation points. Furthermore, numerical differentiation has truncation errors depending on the step size. *** **R: Lack of rigorous comparisons** A: We are afraid we do not know exactly which comparison you would like to see, so we have tried our best to give more information below; please let us know the details of the rigorous comparisons you have in mind. In the experiment section, we focused on demonstrating 1) training speed and 2) the effectiveness of using large-scale collocation points. * Training speed: For a fair comparison, we matched all factors that can affect the training runtime between our model and the baselines. Each experiment was run on a single RTX 3090 GPU, and the training source code for both our model and the baselines was implemented with JAX. We matched the total number of learnable parameters, and the training speed of each model is compared under the same number of collocation points. * The effectiveness of using large-scale collocation points: We wanted to show that using large-scale collocation points during training can enhance the model’s ability to predict more accurate solutions. We numerically compared the accuracy using the relative L2 error metric. However, as the reviewers ‘KaJa’ and ‘dosJ’ pointed out, we acknowledge the potential limitations of the relative error in universally explaining the results. To address this, we have introduced two additional metrics – RMSE and PINN loss value – as detailed in the attached PDF file. We will provide the complete results in the revised version of the paper if permitted. *** **R: Lack of rigorous ablation studies** A: Again, please let us know which particular ablation study you would like to see. The most essential factor that determines the performance of SPINN is the rank of the reconstructed tensor. We carried out experiments with varying ranks on the (2+1)-dimensional Navier-Stokes equation to showcase the rank's impact on both the prediction accuracy and training speed (Figure 7). *** **R: Rank in Figure 7** A: This is the rank of the reconstructed solution tensor. SPINN constructs multiple rank-1 tensors via the outer product, and the rank is the number of these rank-1 tensors to be added to represent the final solution tensor. Please see Figure 4 in the main paper for details.
Increasing the rank gives the model more expressive power, as supported by Theorem 1 in the main paper. Figure 7 is intended to demonstrate whether increasing the rank actually helps the model find more accurate solutions. *** **R: No comparisons against operator methods** A: We did not take the neural operator methods into account because they require a pre-training stage to train a neural network over a large set of ground-truth data. Thus, the resulting neural network is not capable of solving other PDEs (not shown in the pre-training stage). For example, an FNO trained on Burgers’ equations cannot be used to solve the Helmholtz equations. On the other hand, SPINN (or PINN in general) can be applied to solve any PDE without any data. To summarize, PINNs and operator-learning methods (DeepONets, FNO) differ in many aspects, making an apples-to-apples comparison difficult. We will add more discussion about the differences in the final version of the paper. *** **R: Typical error in Navier-Stokes equation** A: For the (2+1)-d case, we’ve shown that SPINN achieved higher accuracy compared to the causal PINNs, which is the best-performing prior method. We have found a PINNs paper [1] that also tried to solve a (3+1)-d Navier-Stokes equation. Although there is a discrepancy in the equation settings, we leave their result below for a rough reference. Equation: (3+1)-dimensional Beltrami flow. Manufactured solution (velocity vectors):
$u_x = -a[e^{ax}\sin(ay+dz)+e^{az}\cos(ax+dy)]e^{-d^2t}$
$u_y = -a[e^{ay}\sin(az+dx)+e^{ax}\cos(ay+dz)]e^{-d^2t}$
$u_z = -a[e^{az}\sin(ax+dy)+e^{ay}\cos(az+dx)]e^{-d^2t}$
Relative L2 error: 0.023886. We want to note that the error encountered when solving the (3+1)-d Navier-Stokes equation using a numerical solver can vary significantly based on the specific solver, the chosen discretization methods, the grid resolution, and the nature of the problem being solved. It is hard to pin down a single “typical” error value, as it depends on the various factors mentioned above. Moreover, since there’s no analytical solution available for such a general PDE setting, engineers and researchers often assess the accuracy of their numerical solutions by studying the convergence behavior of the numerical methods, akin to our proposed theorem. [1] Jin et al., NSFnets (Navier-Stokes flow nets): Physics-informed neural networks for the incompressible Navier-Stokes equations, Journal of Computational Physics (2021) --- Rebuttal Comment 1.1: Comment: With due consideration of the author's response and other reviewers' responses, I would like to increase my score.
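As a quick sanity check of the rank explanation in the rebuttal above, the snippet below (NumPy, purely illustrative) builds a 2-D "solution" as a sum of r rank-1 outer products of per-axis features and confirms the resulting matrix has rank r:

```python
import numpy as np

N, r = 64, 5
rng = np.random.default_rng(0)
fx = rng.normal(size=(N, r))     # per-axis features along x
fy = rng.normal(size=(N, r))     # per-axis features along y
U = fx @ fy.T                    # == sum_j outer(fx[:, j], fy[:, j])
print(np.linalg.matrix_rank(U))  # 5, i.e. at most r (= r almost surely here)
```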
null
null
null
null
null
null
Fed-CO$_{2}$: Cooperation of Online and Offline Models for Severe Data Heterogeneity in Federated Learning
Accept (poster)
Summary: In this paper, the authors focus on tackling the feature skew issue and introduce a new approach that utilizes knowledge distillation. Specifically, the proposed method transfers knowledge between the FL online model and the local offline model to enable them to have both domain-invariant and domain-specific knowledge. Strengths: 1. The proposed approach of mutual learning between the offline and online models is novel. 2. Theoretical analysis of the proposed method is provided (though it is all in the appendix). 3. Experimental results on multiple datasets with both label distribution skew and feature skew demonstrate promising outcomes. Weaknesses: 1. Lack of important baselines: a. Ditto: Since Ditto is a very classic pFL method, it should be included as a baseline. b. FedMask [1]: There is no hypernetwork-based method that directly personalizes parts of the model parameters. FedMask is one of the SOTA methods; it learns distinct binary masks for the last several layers of the local models and aggregates the masks with an intersection operation. 2. Lack of some experiments: The proposed method actually contains three components: intra-client knowledge distillation, inter-client knowledge distillation, and the fusion of the online and offline models. In the ablation study, the authors focus on experiments for the first two parts, and the fusion part is only mentioned in the label distribution skew setting (Figure 4 in the appendix). How about conducting experiments to evaluate the performance of the online and offline models in the feature distribution skew setting, i.e., the DomainNet dataset? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Refer to weaknesses 1 and 2. 2. Regarding the order of the two knowledge transfers: Why is Intra-Client Knowledge Transfer executed before Inter-Client Knowledge Transfer? 3. Why is the theoretical analysis part only included in the appendix? In my opinion, at least the main theorem should be presented in the main body of the paper, while the assumptions, lemmas, and proofs could be included in the appendix. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**Q1**: Lack of important baselines: a. Ditto: Since Ditto is a very classic pFL method, it should be included as a baseline. b. FedMask [1]: There is no hypernetwork-based method that directly personalizes parts of the model parameters. FedMask is one of the SOTA methods, which learns distinct binary masks for the last several layers of the local models and aggregates the masks with an intersection operation. **R1**: Thank you for providing us with two important baselines. We will cite them in the Related Work of our final version but will not compare our method with them. Generally, it is hard to compare all the existing methods in federated learning. For Ditto, one of our baseline methods, FedRoD [12], has shown higher performance than Ditto in the case of label distribution skew, while our Fed-CO$_2$ outperforms FedRoD [12] on FL with label distribution skew, feature skew, and both. This clearly indicates that our method is better than Ditto on PFL problems. For FedMask, this algorithm is designed to address label distribution skew issues with no special design to overcome domain gaps. Moreover, we have not found published code online, which makes the algorithm hard to reimplement. >**Q2**: Lack of some experiments: The proposed method actually contains three components, intra-client, inter-client knowledge distillation, and the fuse of online, offline models. In the ablation study, the authors focus on the experiment of the previous two parts, and the fuse part is only mentioned in the label distribution skew setting (Figure 4 in the appendix). How about conducting experiments to evaluate the performance of the online and offline models in the feature distribution skew setting, i.e., the DomainNet dataset? **R2**: Here we supplement experiments that evaluate the performance of the online and offline models in FL with feature skew on DomainNet. The experimental results and relevant analyses will be added to the Appendix of our final version. Based on the experimental results shown in the table below, we have the following observations: First, the online model outperforms the offline model on most clients, except for the Quickdraw client. This phenomenon validates our hypothesis that when the feature shift is severe, the model aggregation will lead to the loss of important local offline information, resulting in the model's failure to adapt to such significant data heterogeneity. Second, prediction fusion is even inferior to the online and offline models for some clients. This result shows that prediction fusion alone is not sufficient to fuse online general knowledge and offline specialized knowledge. Therefore, for FL with feature skew, intra-client and inter-client knowledge transfer mechanisms are required to boost model and client cooperation under our novel framework.
| Method | Clipart | Infograph | Painting | Quickdraw | Real | Sketch | Avg |
| -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| Online Model | 51.2±1.4 | 26.8±0.5 | 41.5±1.4 | 71.3±0.7 | 54.8±0.8 | 42.1±1.3 | 48.0±1.0 |
| Offline Model | 41.0±0.9 | 23.8±1.2 | 36.2±2.7 | 73.1±0.9 | 48.5±1.9 | 34.0±1.1 | 42.8±1.5 |
| Fed-CO2 (no intra- and inter-client knowledge transfer) | 48.7±0.9 | 26.5±2.0 | 42.1±1.0 | 72.9±0.8 | 57.1±1.1 | 40.0±0.8 | 47.9±0.7 |
| Fed-CO2 | 55.0±1.1 | 28.6±1.1 | 44.3±0.6 | 75.1±0.6 | 62.4±0.8 | 45.7±1.9 | 51.8±0.2 |
>**Q3**: Regarding the order of the two knowledge transfers: Why is Intra-Client Knowledge Transfer executed before Inter-Client Knowledge Transfer?
**R3**: If the Inter-Client Knowledge Transfer were executed before the Intra-Client Knowledge Transfer, the global general information in the online model would be damaged and the initial model for the local training process would be weaker. The online model gets global general information after parameter aggregation on the server. However, parameter aggregation also causes the online model to forget some local specialized information. Meanwhile, the offline model keeps all learned local specialized information but cannot access any global information. Intra-Client Knowledge Transfer not only aids the online model in compensating for the lost local information but also equips the offline model with some global information. This provides a better initialization of the offline and online models for the next round of local training. Inter-Client Knowledge Transfer utilizes local data not only to adapt to the local data distribution but also to concurrently enhance the domain generalization ability of the online and offline models during local training. If the Inter-Client Knowledge Transfer were executed ahead of the Intra-Client Knowledge Transfer, the global general knowledge in the online model would unavoidably be overwritten by the learned local knowledge. In addition, the initial online model would lose local knowledge and the initial offline model would lose global knowledge in this case. >**Q4**: Why is the theoretical analysis part only included in the appendix? In my opinion, at least the main theorem should be presented in the main body of the paper, while the assumptions, lemmas, and proofs could be included in the appendix. **R4**: We appreciate your valuable advice and will present the Main Theorem in the main body of the paper in our final version. --- Rebuttal Comment 1.1: Comment: Thanks for your response! All my concerns have been addressed, and I will keep my score.
Summary: The paper proposes a personalized federated learning (PFL) method to handle both label and feature distribution skew. Specifically, each client holds a partially personalized model (personalization happens at the level of batch-normalization layers) and a fully personalized local model. The paper conducts a series of numerical simulations to evaluate the performance of the proposed method and to compare it with state-of-the-art PFL methods. Strengths: - The paper is overall well-written and easy to follow. - The proposed method shows promising performance and outperforms competitors in most cases. - The proposed method does not induce any communication overhead compared to standard federated aggregation operations since the fully personalized offline models are not uploaded to the server. Weaknesses: - The paper does not provide a strong explanation to motivate the proposed method. In particular, it is unclear where (6) and (7) come from, and why they take this specific form. - The proposed method induces a memory overhead in comparison to standard federated averaging. - Apart from proposing a PFL method with good performance, the paper does not provide any insights helping to understand personalization. - The paper does not provide any theoretical guarantees. - I think that (3) is not accurate. Maybe a division by 2 in the RHS of (3) is missing. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Could you please provide more intuition on the particular choice of the loss function in (6) and (7)? - Could you please double-check (3)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: - The proposed method induces a memory overhead in comparison to standard federated averaging. - The paper does not provide any theoretical guarantees. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**Q1**: The paper does not provide a strong explanation to motivate the proposed method. In particular, it is unclear where (6) and (7) come from, and why they take this specific form. **R1**: The motivation of our proposed method is addressed together with Q3. Formulas (6) and (7) describe the generalization loss function $L_{gen}$ for the online and offline models separately, which forces the feature extractor $f_{i}$ to produce robust, well-generalized features that can be recognized by the frozen classifiers $\overline{C}_{j}^{\rm off}$ ($j\neq i$) introduced from other clients' offline models. This function facilitates the acquisition of domain-invariant knowledge for the online and offline models from other clients. Our design is inspired by the belief that if the feature extractor possesses general knowledge from other clients, then the image features extracted by it should be well recognized by classifiers trained on other clients. Take formula (6) as an example, which transfers general knowledge from other clients to the online model. $f_{i}^{\rm on}(\eta_{i}^{\rm on}; x_{k})$ represents the image features extracted by the feature extractor of the online model. We then feed these features to the frozen classifiers from other clients: $\overline{C}_{j}^{\rm off}(\overline{\phi}_{j}^{\rm off}; f_{i}^{\rm on}(\eta_{i}^{\rm on}; x_{k}))$ represents the prediction logits provided by the classifier from client $j$. >**Q2**: The proposed method induces a memory overhead in comparison to standard federated averaging. **R2**: Yes, our method does incur more memory overhead, but it achieves SOTA performance in cases with severe data heterogeneity, where standard federated averaging performs quite poorly. In addition, some published methods induce a memory overhead (even more than ours) as well. For example, KNN-Per [51] employed a local memory bank to save local image features; PerFCL [52] kept an extra local feature extractor to obtain local information; [12] utilized an extra personalized classifier or hypernetwork to deal with heterogeneity issues. [51] Marfoq, Othmane, et al. "Personalized federated learning through local memorization." International Conference on Machine Learning. PMLR, 2022. [52] Zhang, Yupei, et al. "Doubly contrastive representation learning for federated image recognition." Pattern Recognition 139 (2023): 109507. >**Q3**: Apart from proposing a PFL method with good performance, the paper does not provide any insights helping to understand personalization. **R3**: The key to Fed-CO$_2$ is the fusion of offline specialized knowledge and online general knowledge, which is crucial for federated personalization. Our insight comes from the observation (as mentioned in line 40) that in certain instances of extreme data heterogeneity, models trained by existing personalized algorithms may exhibit inferior performance compared to the locally-trained model. Conversely, in FL scenarios with milder heterogeneity, partially personalized models trained by PFL algorithms perform better. Based on this observation, we raised the question (as mentioned in line 46): Is there a more effective approach to fuse the online general knowledge and the offline specialized knowledge to achieve better performance? To answer this question, we conducted a series of experiments and designed our effective universal framework Fed-CO$_2$. >**Q4**: The paper does not provide any theoretical guarantees.
**R4**: Actually, we have provided a detailed theoretical analysis from the perspective of the Neural Tangent Kernel [47] in Appendix C. As reviewer NQAC suggests in Q4, we will present the main theorem in the main body of the paper in our final version. >**Q5**: I think that (3) is not accurate. Maybe a division by 2 in the RHS of (3) is missing. **R5**: In fact, dividing the RHS of (3) by 2 or not does not change the final prediction result. Formula (3) is only utilized during the testing phase for class prediction, and rescaling the fused score by a positive constant leaves the predicted class unchanged. In our framework, the online model $F^{\rm on}$ and the offline model $F^{\rm off}$ are trained separately. We will add this clarification in the final version.
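To make the generalization loss sketched in R1 above concrete, here is a minimal runnable JAX illustration: features from the local extractor are scored by frozen classifier heads received from other clients, and the averaged cross-entropy trains only the extractor. The linear models, shapes, and all names are illustrative assumptions, not the paper's architecture:

```python
import jax
import jax.numpy as jnp

def cross_entropy(logits, labels):
    logp = jax.nn.log_softmax(logits)
    return -jnp.mean(jnp.take_along_axis(logp, labels[:, None], axis=1))

def gen_loss(extractor_W, frozen_heads, x, y):
    feats = jnp.tanh(x @ extractor_W)                    # local feature extractor
    losses = [cross_entropy(feats @ Wj, y) for Wj in frozen_heads]
    return jnp.mean(jnp.stack(losses))                   # average over clients j != i

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (8, 16))                      # a local mini-batch
y = jnp.zeros(8, dtype=jnp.int32)
W = jax.random.normal(key, (16, 32)) * 0.1
heads = [jax.random.normal(jax.random.fold_in(key, j), (32, 10)) * 0.1
         for j in range(3)]                              # frozen heads from peers
grads = jax.grad(gen_loss)(W, heads, x, y)               # only the extractor moves
print(grads.shape)                                       # (16, 32)
```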
Summary: The paper presents Fed-CO2, a novel framework for Federated Learning (FL), a distributed learning paradigm that allows multiple clients to collectively learn a global model without sharing their private data. FL's effectiveness depends heavily on the quality of the data used for training. Data heterogeneity issues, like label distribution skew and feature skew, can significantly impact FL's performance. Traditionally, most studies have focused on dealing with label distribution skew, while a few recent ones have started addressing feature skew. These forms of data heterogeneity have been studied separately and have not been integrated within a unified FL framework. Fed-CO2 aims to address this gap by developing a framework that handles both label distribution skew and feature skew. The framework utilizes a cooperation mechanism between the Online and Offline models: the online model learns general knowledge shared among all clients, and the offline model is trained locally to learn each individual client's specialized knowledge. To further enhance model cooperation in the presence of feature shifts, the authors design an intra-client knowledge transfer mechanism to reinforce mutual learning between the online and offline models. Additionally, they introduce an inter-client knowledge transfer mechanism to enhance the models' domain generalization ability. Through extensive experiments, Fed-CO2 outperforms a wide range of existing personalized federated learning algorithms, handling label distribution skew and feature skew, both individually and collectively. Strengths: - The cooperation mechanism between the online and offline models is a novel and effective way to deal with data heterogeneity issues. This mechanism allows for general learning shared among all clients, as well as specialized learning unique to each client. - The Fed-CO2 framework represents a comprehensive approach to addressing data heterogeneity in Federated Learning. It doesn't just focus on label distribution skew or feature skew, but rather addresses both these challenges within a unified framework. - The extensive experiments show that the Fed-CO2 model performs better than a wide range of current personalized federated learning algorithms. This indicates the effectiveness and potential of the Fed-CO2 model. Weaknesses: - The model may be complex to implement, especially in environments where the number of clients is very large, due to the dual model structure (online and offline) and the knowledge transfer mechanisms. - Despite being designed to handle heterogeneity, the performance of the framework is still reliant on the quality of the data used for training, which may not always be guaranteed, especially in a federated setting. - The paper does not clearly articulate how the proposed method scales with increased data, feature dimensions, or numbers of clients. Understanding the scalability of the method is crucial in real-world applications. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What are the computational costs associated with the proposed intra-client and inter-client knowledge transfer mechanisms? - How does Fed-CO2 handle potential security and privacy concerns associated with inter-client knowledge transfer? - How does Fed-CO2 handle scenarios where the label distribution skew and feature skew are extreme? - Can the framework handle other types of data heterogeneity beyond label distribution skew and feature skew? 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weaknesses and questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**Q1**: The model may be complex to implement, especially in environments where the number of clients is very large, due to the dual model structure (online and offline) and the knowledge transfer mechanisms. **R1**: We partially disagree with this opinion. Our method is easy to implement if the inter-client knowledge transfer mechanism is removed. As mentioned in our response to Reviewer xxVq Q7, Fed-CO$_2$ is able to address numerous non-I.I.D. challenges without the need for the inter-client knowledge transfer mechanism, and in such cases, no additional communication overhead arises as the offline model remains consistently local. >**Q2**: Despite being designed to handle heterogeneity, the performance of the framework is still reliant on the quality of the data used for training, which may not always be guaranteed, especially in a federated setting. **R2**: We respectfully disagree with this comment. Every deep learning method (not limited to federated learning) relies on the quality of the training data. Moreover, our method does not rely heavily on training data quality, as we study very challenging data heterogeneity cases. >**Q3**: The paper does not clearly articulate how the proposed method scales with increased data, feature dimensions, or numbers of clients. Understanding the scalability of the method is crucial in real-world applications. **R3**: We appreciate your advice to supplement experiments discussing the influence of increased data, feature dimensions, and number of clients. Since the influence of feature dimensions is rarely discussed in prior FL algorithms, we have only supplemented experiments concerning increased data and client numbers. As for the experiments on increased data, please refer to the response to Reviewer xxVq Q5. Here, we provide the experimental results of our method with increased client numbers. To be specific, we employ the CIFAR-10 dataset with the Dirichlet setting ($\alpha=0.3$) to conduct a series of experiments on benchmark methods with client number $N \in \{50, 100, 150, 300, 500\}$. As Table 1 (in our new supplementary) demonstrates, our Fed-CO$_2$ consistently outperforms a range of SOTA methods in every case. The results in this table and Fig. 1 (in our new supplementary) show that our Fed-CO$_2$ has higher scalability than the benchmark methods. These supplemented experimental results will be included in our final version. >**Q4**: What are the computational costs associated with the proposed intra-client and inter-client knowledge transfer mechanisms? **R4**: The intra-client knowledge transfer mechanism adds one epoch to the local training phase of each client. In this epoch, the online and offline models learn from each other via knowledge distillation. The inter-client knowledge transfer mechanism does not introduce extra data epochs; the extra computation cost is the calculation of the cross-entropy loss between the predictions of the frozen classifiers from other clients and the labels. >**Q5**: How does Fed-CO2 handle potential security and privacy concerns associated with inter-client knowledge transfer? **R5**: Our inter-client knowledge transfer only communicates partial model parameters without exposing clients' data. The traditional FL algorithm FedAvg [2] also communicates clients' models to the server for model parameter aggregation. Therefore, model communication will not cause serious security and privacy issues.
>**Q6**: How does Fed-CO2 handle scenarios where the label distribution skew and feature skew are extreme? **R6**: We enhance model cooperation via intra-client and inter-client knowledge transfer mechanisms based on the prediction fusion framework. The intra-client knowledge transfer mechanism is applied to facilitate mutual learning between the online and offline models, and the inter-client knowledge transfer mechanism is utilized to leverage additional knowledge from other clients. >**Q7**: Can the framework handle other types of data heterogeneity beyond label distribution skew and feature skew? **R7**: Yes, our framework can handle other types of data heterogeneity. Apart from label distribution skew and feature skew, we also studied the heterogeneity case with mixed label distribution skew and feature skew. The experimental results are shown in Tables 4, 8, and 9 in the main paper. Moreover, we have added experiments on a new non-I.I.D. scenario, where each client has the same label distribution but is subject to varying degrees of noise. In detail, we split the CIFAR-10 dataset across 10 clients so that the client datasets are independently and identically distributed. Then we apply increasing levels of random Gaussian noise with mean $\mu_{i}=0$ and standard deviation $\sigma_{i}=\frac{\sigma_M}{N-1}\cdot i$, where $i\in \{0, 1, \cdots, N-1\}$. In this series of initial experiments, we let the client number $N=10$ and the sample rate be 0.5, with $\sigma_{M}\in \{0, 5, 10, 15, 20\}$. The backbone of each method is the same as that in the other experiments. As presented in Fig. 2 (in our new supplementary), the experimental results reveal that our Fed-CO$_2$ consistently exhibits better and more robust performance across nearly all experiments, particularly in scenarios with severe noise. These experiments will be incorporated into the Appendix of our final version and will also serve as a source of inspiration for our future work. --- Rebuttal Comment 1.1: Comment: Thank you for your response. It answers my questions and concerns, so I will be keeping my positive score.
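The per-client noise schedule described above is straightforward to reproduce; the snippet below (NumPy, with illustrative dummy data) assigns client $i$ the standard deviation $\sigma_i = \sigma_M \cdot i / (N-1)$:

```python
import numpy as np

N, sigma_M = 10, 20
sigmas = sigma_M / (N - 1) * np.arange(N)  # client 0 is clean, client 9 noisiest
print(sigmas)                              # [ 0.  2.22 ... 20. ]

rng = np.random.default_rng(0)
x = rng.uniform(0, 255, size=(N, 32, 32, 3))   # one dummy image per client
x_noisy = x + rng.normal(size=x.shape) * sigmas[:, None, None, None]
```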
Summary: This paper introduces an algorithm to address both label and feature heterogeneity in federated learning. The algorithm entails that each client learns two models: one that has personalized batch normalization parameters and otherwise global parameters, and another that is fully personalized. To learn the parameters, a two-stage knowledge distillation procedure is proposed, where in the first stage knowledge is distilled between the two models on each client, then in the second stage knowledge is distilled across clients. Experiments study the performance of the proposed model against a variety of baselines on image classification in the presence of label heterogeneity, feature heterogeneity, and both label and feature heterogeneity. Strengths: 1. The experimental results show convincing improvement over a variety of relevant baselines in multiple settings with different types of heterogeneity. The procedure is rigorous. 2. The related works are well-discussed. 3. The paper addresses a relevant topic. Weaknesses: 1. The writing can be improved. There are numerous sentences with improper grammar/semantics, e.g. “Federated learning for Data Heterogeneity of Feature Skew”. The model parameter notation is not consistent as both \theta= \{\phi, \eta\} and \theta = \{\phi, \xi\} are used. The motivation for personalizing specifically the batch normalization parameters is not clear whatsoever, and seems to be simply because it worked well in [10]. 2. Label and feature heterogeneity/skew are never defined. 3. The proposed method entails learning many parameters per client. This is likely to degrade performance in settings with few samples per client, but the effect of the number of samples on algorithm performance is not investigated. 4. The experiments are limited to image classification on relatively simple datasets. 5. The idea to encourage retaining global information locally by adding a regularizer that penalizes loss on local samples of local models with concatenated heads of all other clients is interesting, but introduces additional communication and privacy concerns (due to all clients sharing each of their heads with each other) that are not addressed. ----------- Post-rebuttal: I have raised my overall score from 4 to 5, and Contribution score from 2 to 3, please see comment below. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: n/a Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: See Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**Q1**: The writing can be improved. There are numerous sentences with improper grammar/semantics, e.g. “Federated learning for Data Heterogeneity of Feature Skew”. **R1**: We will fix improper grammar/semantics in the final version. >**Q2**: The model parameter notation is not consistent as both $\theta = \{\phi, \eta\}$ and $\theta = \{\phi, \xi\}$ are used. **R2**: We will rectify the confusing parameter notation in our final version. The notation you found confusing actually reflects two different parameter partitions. The former, $\theta = \{\phi, \eta\}$, represents the parameters of the feature extractor and the classifier (defined in section 3.1). The latter, $\theta = \{\psi_i, \xi\}$, represents the personalized parameters and the global parameters of the model (defined in section 3.2). In the final version, we will rectify this latter notation into $\theta = \{\theta_{p,i}, \theta_g\}$ to avoid misunderstanding. >**Q3**: Label and feature heterogeneity/skew are never defined. **R3**: Actually, we have briefly explained label distribution skew and feature skew in the Related Work (the former in line 77 and the latter in line 112). Label distribution skew is "label distribution imbalance among clients" and feature skew is "feature shift among clients (domains)". Specifically, for FL with label distribution skew, the data in each client have different label distributions. For FL with feature skew, the data in each client come from different domains. In the final version, we will add more illustrations to explain label distribution skew and feature skew in the Introduction. >**Q4**: The motivation for personalizing specifically the batch normalization parameters is not clear whatsoever, and seems to be simply because it worked well in [10]. **R4**: We respectfully disagree with this comment. [10] is a classic method for addressing feature skew issues and provides the significant insight that BN helps harmonize local feature distributions. As we mentioned in line 143, we also need to capture the feature distributions of local clients. Therefore, we adopted BN personalization in our online model. Moreover, we did an ablation study in Appendix F.3, where we evaluated three personalization strategies: personalizing BNs, personalizing the classification head, and personalizing both BNs and the classification head. The experimental results persuaded us to personalize BNs. >**Q5**: The effect of the number of samples on algorithm performance is not investigated. **R5**: Here, we have supplemented experiments exploring the effect of the number of samples. For convenience, we utilize the Digits dataset and set 10\% of its training data as the original training dataset. Then, we select a certain ratio $\gamma$ of the original training dataset as a new training dataset for each client, where $\gamma \in \{0.2, 0.4, 0.6, 1.0, 2.0\}$. The results are shown in Fig. 1 (in our new supplementary) and demonstrate that our Fed-CO$_2$ has an edge over various benchmark methods in almost every case, especially when the amount of training data is very limited. We will add this experiment in our final version. >**Q6**: The experiments are limited to image classification on relatively simple datasets. **R6**: We respectfully hold a different view on this comment. Actually, we utilized five distinct datasets: CIFAR-10, CIFAR-100, Digits, Office-Caltech10, and DomainNet under several non-I.I.D.
settings, including FL with label distribution skew, FL with feature skew, and FL with both label distribution skew and feature skew. Notably, most existing methods utilized even fewer datasets under more limited non-I.I.D. settings. As revealed in our paper (line 32), prior research either concentrated on addressing label distribution skew [2, 4, 7, 11, 12, 13] with the datasets CIFAR-10 and CIFAR-100, or, alternatively, attempted to mitigate feature skew [8, 10, 32] with the datasets Digits, Office-Caltech10, and DomainNet. Some other algorithms researched the effect of noise on FL, including pFedGP [50] and [13], with the datasets CIFAR-10 and CIFAR-100. Here are more details: 1. Experiment results for FL with label distribution skew on the datasets CIFAR-10 and CIFAR-100 are shown in Table 3 (Section 4.2) and Figure 4 (Appendix F.1); 2. Experiment results for FL with feature skew on the datasets Digits, Office-Caltech10, and DomainNet are shown in Tables 1, 2 (Section 4.2) and Table 7 (Appendix E); 3. Experiment results for FL with both label distribution skew and feature skew on the datasets Digits, Office-Caltech10, and DomainNet are shown in Table 4 (Section 4.2) and Tables 8, 9 (Appendix E); 4. We supplemented experiment results of our Fed-CO$_2$ for FL with noise on CIFAR-10 in response to Reviewer 5E9k Q7. [50] Achituve, Idan, et al. "Personalized federated learning with Gaussian processes." Advances in Neural Information Processing Systems 34 (2021): 8392-8406. >**Q7**: The idea ... introduces additional communication and privacy concerns (due to all clients sharing each of their heads with each other) that are not addressed. **R7**: We partially disagree with this opinion. (1) Privacy concerns: in the context of traditional FedAvg [2], communicating clients' models to the server is already necessary for model parameter aggregation, meaning that this model communication will not cause severe additional privacy concerns. (2) Additional communication: our Fed-CO$_2$ merely communicates part (the classifier) of the offline model. Thus, the additional communication overhead is relatively modest. Moreover, the inter-client knowledge transfer, which causes these two concerns, is only employed for extreme data heterogeneity issues. As our ablation study in Table 5 shows, our method still achieves SOTA performance without Inter-Client Knowledge Transfer. In most non-I.I.D. cases, our framework already works without it. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you to the authors for your detailed response. The clarification of the experiments and the new results have convinced me to raise my score. I still have concerns about the writing, and strongly encourage the authors to discuss the motivation for batch normalization in greater detail in the main body, add clearer definitions of label and feature skew -- ideally formal definitions, if space permits -- and clean up the grammatical mistakes.
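As an aside on the BN-personalization strategy debated in Q4/R4 above, the following is a minimal sketch of the general idea (not the authors' Fed-CO$_2$ implementation): during FedAvg-style aggregation, parameters identified as batch normalization entries are excluded from averaging and kept client-local. The `is_bn_param` predicate and the equal client weighting are illustrative assumptions.

```python
import copy
import torch

def fedavg_keep_local_bn(client_states, is_bn_param=lambda name: "bn" in name.lower()):
    """FedAvg-style aggregation that averages all non-BN parameters across
    clients, while each client retains its own BN parameters and statistics.

    client_states: list of model state_dicts, one per client (equal weights).
    Returns a list of new state_dicts, one per client.
    """
    # Uniform average of the shared (non-BN) parameters.
    shared = {
        name: torch.stack([sd[name].float() for sd in client_states]).mean(dim=0)
        for name in client_states[0]
        if not is_bn_param(name)
    }
    new_states = []
    for sd in client_states:
        new_sd = copy.deepcopy(sd)  # BN entries (affine params, running stats) stay personalized
        for name, value in shared.items():
            new_sd[name] = value.clone()
        new_states.append(new_sd)
    return new_states
```

Whether the BN running statistics are also kept local, in addition to the affine parameters, is a design choice in such setups; the sketch keeps both local.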
Rebuttal 1: Rebuttal: We thank the reviewers for their careful reading of our paper and their help with improving our manuscript. We sincerely appreciate that you find our work proposes 'a novel and effective approach' (Reviewer 5E9k, Reviewer NQAC), addresses 'both label distribution skew and feature skew within a unified framework' (Reviewer 5E9k), shows "promising performance" (Reviewer y4gh and Reviewer NQAC) and "convincing improvement over a variety of relevant baselines in multiple settings with different types of heterogeneity" (Reviewer xxVq), and "provided theoretical analysis" (Reviewer NQAC). In what follows, we try to address your concerns/questions and provide a detailed item-by-item response to your comments. Following the reviewers' advice, we have supplemented four extra experiments to make our work more comprehensive. Specifically, we study the effect of the training data ratio and the client number on our Fed-CO$_2$ and some benchmark methods. **The results are provided in Fig. 1 and Table 1 in our newly submitted attachment**. We also provide an additional ablation study about prediction fusion on FL with feature skew in our response to Reviewer NQAC Q2. Finally, we further explore the performance of our Fed-CO$_2$ and some baseline methods on FL with a new type of data heterogeneity issue caused by noise, and demonstrate **the results in Fig. 2 in our newly submitted attachment**. Pdf: /pdf/59bf7e88edc97289ae529962b50c3499dc952540.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Online Inventory Problems: Beyond the i.i.d. Setting with Online Convex Optimization
Accept (poster)
Summary: In this paper, the authors consider the classic online inventory control problem. While previous works mainly consider the case where the demand is stochastically drawn from a stationary distribution, this paper considers the case where the demand can be adversarial. In addition, the authors consider a more general framework, Online Inventory Optimization (OIO), which includes many different setups from the inventory literature, e.g. censored demand / backlogged demand / perishable / non-perishable. The authors then propose the algorithm MaxCOSD, which achieves $O(\sqrt{T})$ regret under an assumption that the demand is lower bounded (with constant probability). The algorithm shares similar ideas with the cyclical updates proposed in CUP [Zhang et al., 2018], but with a different definition of the cycle and an adaptive choice of learning rate. The algorithm keeps using the same base-stock level to collect enough gradient information until the time when the outcome of the (cumulative) gradient descent step gives a feasible strategy. The authors also discuss their demand assumptions and show that without the lower-bounded demand assumption, no deterministic algorithm can achieve sublinear regret. Experiments are also conducted to show the effectiveness of the algorithm MaxCOSD. Strengths: - The problem considered in this paper is interesting and has wide applications in real life. - The proposed algorithm is clean and easy to follow. The proposed framework OIO is general and includes many different setups that are considered in the inventory literature. - Experiments on both synthetic data and real-world data are also conducted for the proposed algorithm. Weaknesses: - One major concern is the novelty of the proposed algorithm. From the technical perspective, I think the analysis generally follows the classic analysis of online gradient descent with a self-confident tuning of the learning rate in order to handle the issue raised by the scale of the gradient, which may accumulate over rounds. Based on this, I think the $\sqrt{T}$ regret is not very surprising. - Several claims made in this paper may need more explanations or clarifications. - On Assumption 10, it is unclear how to transform the assumption $\mathbb{E}[d]\geq \rho$ made in AIM into this assumption with the same theoretical guarantees. Specifically, with the current Assumption 10, when $\mu$ and $\rho$ are $\Theta(T^{-\alpha})$ and $\Theta(T^{-\beta})$, the regret bound becomes $T^{\frac{1}{2}+\alpha+\beta}$. It would be better if the authors could compare the guarantees of both algorithms under the same assumptions. - The lower bound analysis is problematic or not rigorous. For Proposition 13, in line 601, the authors argue that $(T-T_0+1)hy_{T_0}=\Omega(T)$, which does not hold when $y_{T_0}=o(T)$. Similarly, in Proposition 14, $y_1$ can also be $o(T)$, making the regret sublinear. A refined analysis of both propositions should be included. - The experiment results do not show a better performance of MaxCOSD compared to other algorithms in Settings 1-4, even though some algorithms are not designed to handle adversarial/non-stationary demand. This weakens the motivation of providing an algorithm that performs better in the adversarial environment. In addition, for Settings 1-3, showing the error bars or the std of the algorithms would be better, since the algorithm actually guarantees a high-probability regret bound.
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - In line 244, I do not understand why Assumption 10 with $\rho=D$ matches Assumption 1 made by CUP in [Zhang et al., 2018]. In CUP, they assume knowledge of an upper bound on the optimal inventory level. - On the optimality of the parameters $\rho$ and $\mu$ in Theorem 12: I wonder whether there is a lower bound on $\mu$ and $\rho$ for this problem? This may show further tightness of the obtained bound. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: See the Weaknesses and Questions sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
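To make the cycle mechanism summarized in this review concrete, here is a minimal single-product sketch (not the authors' code) of a subgradient-descent base-stock update that keeps the level fixed until the accumulated step yields a feasible target. The newsvendor-style loss, the AdaGrad-style stepsize, and all variable names are illustrative assumptions.

```python
import numpy as np

def maxcosd_like(demands, D=10.0, h=1.0, b=2.0, gamma=0.1):
    """Single-product sketch: online subgradient descent on base-stock levels,
    moving only when the proposed level is feasible (>= leftover inventory).

    demands: sequence of nonnegative demands (possibly adversarial).
    h, b:    holding / backorder unit costs of a newsvendor-style loss.
    Returns the sequence of order-up-to levels actually played.
    """
    y = D / 2.0        # current base-stock level, held fixed within a cycle
    g_cycle = 0.0      # subgradient accumulated over the current cycle
    g_sq = 0.0         # running sum of squared subgradients (adaptive stepsize)
    played = []
    for d in demands:
        played.append(y)
        # Subgradient of the loss l(y) = h*max(y-d, 0) + b*max(d-y, 0).
        g = h if y >= d else -b
        g_cycle += g
        g_sq += g * g
        eta = gamma / np.sqrt(g_sq)           # AdaGrad-style learning rate
        proposal = float(np.clip(y - eta * g_cycle, 0.0, D))
        leftover = max(y - d, 0.0)            # lost-sales carryover into next period
        if proposal >= leftover:              # feasible target: close the cycle
            y = proposal
            g_cycle = 0.0
        # otherwise keep y unchanged (always feasible, since leftover <= y)
    return played
```

The key invariant is that keeping the current level is always feasible, because the leftover stock never exceeds the level just played; the cycle only closes when the descent proposal itself is feasible.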
Rebuttal 1: Rebuttal: 1. *One major concern is the novelty of the proposed algorithm. From the technical perspective, I think the analysis generally follows the classic analysis of online gradient descent with a self-confident tuning of the learning rate in order to handle the issue raised by the scale of the gradient, which may accumulate over rounds.* See our global response. 2. *Based on this, I think the regret $O(\sqrt{T})$ is not very surprising.* We would like to highlight here that applying a classic analysis of OSD to inventory cannot lead to $O(\sqrt{T})$ rates. This is what we prove in Proposition 13. To get this rate we need to make assumptions which are quite unusual for online learning. 3. *Several claims made in this paper may need more explanations or clarifications [...] The lower bound analysis is problematic or not rigorous [...] A refined analysis of both propositions should be included.* We firmly disagree with the reviewer. Each of the four criticisms raised by the reviewer relies on the assumption that some constants ($\mu$, $\rho$, $y_{T_0}$, or $y_1$) could depend on $T$. It is hard for us to make sense of this, as it is stated in our assumptions and statements that those quantities are *constant* and independent of $T$. This is, for instance, very clear in Assumption 10. We conjecture that there is a confusion here, and that the reviewer has in mind an online learning setting where the horizon is *finite*. In such a setting there is a fixed finite horizon $T \in \mathbb{N}$, and the learning dynamic unfolds over time periods $t=1,...,T$, at the end of which we compute a regret $R_T$. There we could indeed have parameters depending on the horizon $T$ (this happens in the corresponding literature), in which case the raised criticisms would be relevant. We stress that this is not our setting (we work with an *infinite horizon*). There is also the possibility that the above is incorrect, and that we are completely missing something. In this case we would need further explanations from the reviewer. 4. *The experiment results do not show a better performance of MaxCOSD compared to other algorithms.* This is correct, and stated in the paper ("*we see that MaxCOSD is less efficient when the number of products becomes large*"). Please note that the goal of this paper is not to improve on state-of-the-art performance in a specific setting. Instead, we provide a versatile algorithm able to tackle various (and new) settings while being theoretically grounded. We believe that expecting an algorithm which applies to a broader setting to simultaneously beat the algorithms designed for a specific setting is unrealistic. This being said, we should not compromise on performance either, and we show that, except on hard problems, our method performs comparably to its competitors. 5. *This weakens the motivation of providing an algorithm that performs better in the adversarial environment.* We never make the claim that our algorithm performs better in an adversarial environment. 6. *Showing the error bars or the std of the algorithms would be better.* We considered doing this but chose not to, for two reasons. The first reason is that we use both synthetic and real data, for which it is not clear how to compute the std. The second reason is that it did not provide meaningful information for the synthetic data.
Indeed, in Settings 1-2 the variance is very low overall; and in Setting 3 the variance becomes significant only in the regime where $\gamma$ is large (for both algorithms), which makes the plots look messy while being of little interest, since it is the regime where the algorithms perform less well. For the sake of the conversation, we share in the global response our plots with the std displayed. 7. *In line 244, I do not understand why Assumption 10 with $\rho=D$ matches Assumption 1 made by CUP in [Zhang et al., 2018]. In CUP, they assume knowledge of an upper bound on the optimal inventory level.* First of all, we emphasize that we do not claim that our Assumption 10 implies Assumption 1 of CUP. Instead we say that we recover "*an*" assumption. Allow us to explain this in detail. Our point of view on the CUP algorithm is that Assumption 1 can be decomposed into two assumptions which have very different purposes. - First, the authors assume that the (unconstrained) problem they want to solve is equivalent to a problem with the constraint $\mathcal{Y} = [0, \bar S]$. To do so, they assume that the optimal policy $S^*$ is lower than $\bar S$. This assumption allows them to retrieve some compactness (which our paper has, since we assume $\mathcal{Y}$ to be compact). - Second, they make a non-degeneracy assumption on the demand to obtain $\sqrt{T}$ rates: $\mathbb{P}[d_t \geq \bar S] > 0$. It is this assumption which is equivalent to our Assumption 10 when taking $\mathcal{Y}=[0,D]$ and $\rho = D$. This is the reason why we did not refer to Assumption 1 as a whole in this sentence, but to *an* assumption. We recognize that this is imprecise and could be ambiguous. To remove any ambiguity and make all this more understandable to the reader, we propose to add a sentence in Remark 6 (about the feasible set $\mathcal{Y}$) explaining that in various papers (the CUP one, but also the AIM one) the authors first assume to know an upper bound $\bar S$ on the optimal policy $S^*$, and then work with $\mathcal{Y} = [0,\bar S]$. This way our sentence in line 244 would make more sense, possibly after adding a pointer back to Remark 6. 8. *On the optimality of the parameters $\rho$ and $\mu$ [...]*: We address this in our global response (in short: we do not know, and as far as we know other competitors obtain the same kind of constants). --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. However, I am still not convinced of the significance of the obtained results. - 1. *We would like to highlight here that applying a classic analysis of OSD to inventory cannot lead to $O(\sqrt{T})$ rates. This is what we prove in Proposition 13. To get this rate we need to make assumptions which are quite unusual for online learning.* Yes, but I was referring to the $O(\sqrt{T})$ results with additional assumptions made (e.g. a lower-bounded density for the per-round demand distribution), which can be obtained by a similar online gradient descent analysis as in [9]. Also, the adversarial demand lower bound is not hard, since you may accumulate too much inventory in one round while not getting this inventory sold in later rounds because of the adversarial demand. - 2. *Several claims made in this paper may need more explanations or clarifications [...] The lower bound analysis is problematic or not rigorous [...] A refined analysis of both propositions should be included.*
I think the authors misunderstood my point, in the sense that $y_1$ is a decision variable decided by **the algorithm**, not by the problem instance. If the algorithm decides $y_1$ to be $o(T)$, then your lower bound does not hold. - 3. *This weakens the motivation of providing an algorithm that performs better in the adversarial environment.* I was thinking the paper tries to consider the non-i.i.d. demand case and design an algorithm which performs well in this environment (not necessarily adversarial). However, the experiments show that the algorithm designed for the i.i.d. case (DDM) performs even better than the one proposed in this paper in Setting 4, which is definitely not an i.i.d. environment. This weakens the motivation for using the algorithm proposed in this paper; in other words, it raises the question of whether existing algorithms can already achieve this in these environments. --- Reply to Comment 1.1.1: Title: Answer about the misunderstanding regarding constants possibly depending on time Comment: We again insist that $y_1$ is a constant real number chosen in the interval $(0,D]$, as stated in our Proposition 14. It is not a function, and can by no means depend on $T$ (which is a dummy variable). As explained in our rebuttal, $T$ is not a quantity fixed beforehand, because we are not in a fixed-horizon setting. To illustrate this and further convince the reviewer, we provide below a simple technical explanation. Assume we fix some $T\in\mathbb N$ (say $T=10$) and set $y_1=T^{-1/2}$; then we could obtain, in the proof of this proposition, $C=T^{-1/2}/2$ by choosing an appropriate sequence of demands $(d_t)$, and this would indeed lead to $R_T\geq \sqrt{T}/2$. This does *not* contradict our claim, since this inequality is only valid for this **single value** of $T$. Consider now any $T'\in\mathbb N$ (a dummy variable); then we obtain $R_{T'} \geq CT' = T^{-1/2}T'/2$, which means that the regret grows linearly with time, conforming to our claim. The same argument holds for all the other constant parameters. --- Reply to Comment 1.1.2: Title: Answer about the significance of the results Comment: *"the results [...] can be obtained by similar online gradient descent analysis in [9]"* We disagree with the claim reducing the results in [9] to a classic analysis of OGD. For instance, the key part of the proof in [9] relies on additional results from queuing theory to prove these rates [9, Paragraph 3.2.3]; see also the proof of [9, Proposition 4] in [9, Appendix B]. The final $O(\sqrt T)$ regret bound is indeed not very surprising, but it is not so easy to obtain either. We recall that our work provides such rates for a significantly larger range of settings, some of which have never been considered in the literature, such as stateful non-iid settings or perishable multi-product settings. *"Also, the adversarial demand lower bound is not hard, since you may accumulate too much inventory in one round while not getting this inventory sold in later rounds because of the adversarial demand"* Please note that we never claim that this lower bound is hard to obtain. We do claim that this bound is new. *"I was thinking the paper tries to consider the non-i.i.d. demand case and design an algorithm which performs well in this environment"* As explained in the abstract, in the introduction of the paper, as well as in our rebuttal: the goal of this paper is to provide an algorithm with *theoretical guarantees* in a specific setting (non-iid demands) where no such algorithm exists so far.
Please acknowledge that this is a theoretical paper. *"DDM performs even better than the one proposed in this paper in Setting 4"* Indeed, and our performance with respect to DDM is discussed in Section 4. It is indeed a non-iid setting, but this environment also features lost sales, non-perishability, capacity constraints, and a large number of products. DDM has been specifically designed to handle these features, while providing theoretical guarantees only in the iid setting. So, in Setting 4, MaxCOSD has proven guarantees whereas DDM has not (see the discussion with Reviewer 5kk9), but DDM performs better because of its design, which is specific to the aforementioned features. Imagine, for instance, that we change Setting 4 by considering instead another feasible set $\mathcal Y$ or perishable products; then we would end up with a setting to which MaxCOSD can be applied (and still has provable guarantees) whereas, to the best of our knowledge, no previously existing algorithm in the literature would be applicable. This fact alone should highlight the value of MaxCOSD and the significance of our results.
Summary: The paper studies a general online decision-making problem for inventory management. The problem setting extends generic online convex optimization by introducing structures such as states and demands. Compared to the settings studied by related works in inventory management, the setting of this paper relaxes several restrictive assumptions on the loss functions, dynamics, and demands. The problem is then solved by a new batch-update online gradient descent algorithm with an optimal regret upper bound, whose analysis relies on a geometric cycle property induced by the problem structure. The strengths and limitations of this algorithm are further demonstrated through experiments. Strengths: Overall, this is a strong submission. - It is very well-written, with a smooth and clear coverage of the motivation, how the problem connects to classical online learning models, and the novelty of the proposed algorithm. - Related works on the inventory problem are discussed in detail. This is especially valuable since the readers are most likely from the machine learning community, who are not familiar with the necessary background. - Although I didn't check everything in the proof, the analysis is solid to my knowledge. The intuitions behind the algorithm, including the non-degeneracy assumption, are clearly presented and natural. - Experiments are included, with both positive and negative results. The experimental methodology makes sense. Weaknesses: I don't have much to complain about in general. One aspect I'm not sure about is the novelty. The backbone of the proposed algorithm is a batch-update version of online gradient descent, which is standard from the perspective of online learning. Hence, the novelty of the paper lies mainly in the combination of OGD with inventory structures, which requires sufficient domain knowledge to appreciate. Unfortunately, due to my unfamiliarity with inventory problems, I cannot make a very informed evaluation. This is more of a limitation of mine than of this paper. Has the geometric cycle appeared in existing analyses? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - It is assumed that the adversary is oblivious, as the loss function and the demand sequence are determined at the beginning. Is there any particular reason for making this assumption? In online learning, people mainly need oblivious adversaries when the algorithm is randomized, but the algorithm in this paper is deterministic. - The paper leaves discrete feasible sets to future work (L135). I guess this might be handled by an expert algorithm, following a similar batch-update idea. This is probably out of the scope of this paper, but it seems a reasonable addition. - The paper discusses the many ways in which the setting of this paper generalizes existing works. Does the regret bound also recover existing bounds for more restrictive settings? - Appendix D compares the algorithm to a more direct approach based on OCO. The limitation of the latter is that the competitor is subject to the feasibility constraint. This leaves me wondering: if the competitor in this paper is not feasible, then technically we cannot implement it. Then, the meaning of the competitor term $\sum l_t(y)$ becomes more obscure. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. *One aspect I'm not sure about is the novelty.*; *The backbone of the proposed algorithm is a batch-update version of online gradient descent*; *the novelty of the paper lies mainly in the combination of OGD with inventory structures* We address this in our global response. 2. *Has the geometric cycle appeared in existing analyses?* Yes. It appears in the proof of the CUP paper: under their assumptions, the authors observe that their cycle lengths are geometrically distributed with a parameter lower bounded by $\mu$, which is a particular case of what we call “geometric cycles”. We did not see this concept as very important, but it is true that it would be fair to explain to the reader that this concept appeared before (even if unnamed). We will fix that. 3. *It is assumed that the adversary is oblivious [...] Is there any particular reason for making this assumption? [...]* This is a good question, which we debated while writing the paper. We eventually decided to set our paper with an oblivious adversary, as we explain below. The first reason is that an oblivious adversary is the standard setting for online inventory problems. Indeed, fully adversarial environments make little sense from a modelling perspective. We would like to quote here the introduction of Lugosi et al. in [15], because we believe that it summarizes the situation perfectly: "*[Oblivious adversarial] is also referred to in the literature as the “adversarial” approach, although this term is somewhat unfortunate: in our setting, the market chooses a sequence of demands for the different time periods arbitrarily, as far as the inventory manager is concerned. Importantly though, the market does not adapt its strategy according to the actions of the inventory manager. Indeed, it seems far-fetched to assume that an entire market adapts, and acts adversarially, to the inventory decisions of a firm. So, while our modeling framework can capture demand correlations and nonstationarities to a significant extent, it is certainly not a game theoretic model.*" The second reason is related to the notion of regret we would want to use if the adversary were not oblivious. Let us consider for instance an "adaptive" adversary, in the sense precisely defined in *Online bandit learning against an adaptive adversary: from regret to policy regret* by Arora et al. More precisely, it is a setting where the functions are chosen in advance but depend on all the past decisions. In such an adaptive setting, we could use the classical notion of regret, but it would be more difficult to interpret. Indeed, the comparator (the constant strategy) would be evaluated against the losses chosen in reaction to the previous choices made by our algorithm. Another option would be to consider a policy regret, as suggested by Reviewer 5kk9 (note that in our oblivious setting the policy regret is equal to the regret). Policy regret makes more sense from a modelling perspective, but the counterpart is that it makes the analysis more difficult, and somehow requires some extra assumptions (see for instance Theorem 1 in Arora et al.). We do not want to make superfluous assumptions on our problem, so we would need to make an assumption limiting how adaptive the adversary can be. We found one way of doing this: consider that the losses depend not only on $y_t$ but also on the state $x_t$, in a very specific way. Precisely, assume that the losses have the form $f_t(y,x) = L_t(y) + \delta(y,x)$, where $\delta(y,x)$ is equal to $0$ if $x \preceq y$, and $+\infty$ otherwise.
Forcing the losses to have this form can be seen as a way to restrict the adaptive adversary. We observe that this is exactly equivalent to our current setting: a sequence $y_t$ is feasible and has a certain classical regret bound if and only if the policy regret with those losses has the same bound. This is due to the fact that those new losses $f_t$ are finite and equal to $L_t$ if and only if the feasibility constraint is satisfied. We considered that it was not worth doing, as it would introduce more technicalities into the paper for no apparent gain. 4. *The paper leaves discrete feasible sets to future work (L135). I guess this might be handled by an expert algorithm, following a similar batch-update idea. This is probably out of the scope of this paper, but it seems a reasonable addition.* See our global response, in the section about discrete sets (in short: it is a good idea, but not trivial to apply because of the feasibility constraints). 5. *The paper discusses the many ways in which the setting of this paper generalizes existing works. Does the regret bound also recover existing bounds for more restrictive settings?* See our global response (in short: yes for the dependency on $T$, more or less for $\mu$). 6. *Appendix D compares the algorithm to a more direct approach based on OCO. The limitation of the latter is that the competitor is subject to the feasibility constraint. This leaves me wondering: if the competitor in this paper is not feasible, then technically we cannot implement it. Then, the meaning of the competitor term [...] becomes more obscure.* In our setting the competitor is always feasible (see our explanations in the global response). This is why our approach makes sense, while the one described in Appendix D.1 is problematic. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed and particularly straight-to-the-point response. It makes good sense to me, and I'll continue voting for acceptance.
Summary: The paper introduces the online inventory optimisation problem. In the problem, we have $n$ products to stock, and every day we choose the levels to stock the products up to. We then suffer a loss according to a convex loss function, which may vary from day to day (e.g., it may depend on the demand). We also get to know a subgradient of the loss. After the demand has wiped out some of the stock, we need to choose new levels to top up the remainders to, etc. We want an online algorithm with a guarantee on the regret w.r.t. all possible constant levels. The paper proposes an algorithm, which is an extension of the online subgradient descent (OSD). OSD would have been applicable to the problem directly if not for the presence of remainders (we cannot top up to a level below the current remainder). The version of the protocol without remainders is called stateless in the paper, so the proposed algorithm is a generalisation of OSD to the "stateful" setup. The paper proceeds to obtain an $O(\sqrt{T})$ regret bound for the algorithm. Strengths: I think the paper formulates an important problem and obtains a useful result. Weaknesses: The algorithm and the bound in the paper are an extension of the online subgradient descent (OSD). One can argue that the result is of an incremental nature. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: A constant strategy will only work if the constant is above the remainders at all times; otherwise it makes no sense. Am I correct that the regret is w.r.t. the constant strategies that work in this sense? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. *The algorithm and the bound in the paper are an extension of the online subgradient descent (OSD). One can argue that the result is of an incremental nature.* Our answer: we address this in the global response. 2. *A constant strategy will only work if the constant is above the remainders at all times; otherwise it makes no sense. Am I correct that the regret is w.r.t. the constant strategies that work in this sense?* Our answer: we address this in the global response (in short: our regret is always well defined in our context).
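For concreteness, the regret notion at issue in this exchange (Eq. (3) of the paper, with notation paraphrased from the surrounding discussion) compares the learner's cumulative loss to that of the best constant order-up-to level:

$$R_T \;=\; \sum_{t=1}^{T} \ell_t(y_t) \;-\; \min_{y \in \mathcal{Y}} \sum_{t=1}^{T} \ell_t(y).$$

As the global response below shows, any constant strategy $y_t \equiv y$ is feasible (because $x_1 = 0$ and $y \succeq [y-d_t]^+ \succeq x_{t+1}$), so the comparator term is always well defined.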
Summary: The paper considers a general family of online inventory management problems under convex losses and stateful inventory dynamics. The authors propose a general algorithm based on an adaptation of ideas from online convex optimization. The method is essentially AdaGrad enhanced with an additional delayed-update mechanism that ensures that the inventory constraints are always satisfied. The main result is a sublinear regret guarantee against the best fixed base-stock policy chosen in hindsight, given full knowledge of the realization of the demand sequence. Strengths: The paper is very well-written when it comes to both its informal and technical content. In particular, the authors provide an excellent and succinct introduction to the relevant part of the management / operations literature, and discuss their results in this context extensively. The algorithm itself is natural and effective. Weaknesses: While I generally like the paper, there are a couple of things that I would like to have some more clarity on before I can fully support its acceptance: - Regret is defined against the "best feasible constant strategy", as defined in Eq. (3). I worry that this comparator strategy is problematic in the stateful setting. First off, some fixed constant strategies may be infeasible when some of the carryover inventory levels $x_t$ are large. Thus, the best feasible constant strategy may be a very poor one, ordering very high levels to make sure that the constraint is never violated. Second, I am concerned that evaluating a fixed policy on the loss sequence generated in response to the online agent's actions may be meaningless in the first place, and it might be more sensible to consider "policy regret" by taking into account the fact that the environment would respond differently to the comparator policy than to the agent. Treating this case, however, seems much harder than what is being handled in the present paper. Can the authors let me know if I'm missing something here? - The assumption that the order levels are continuous seems like a very important one. The authors themselves admit that this is a limitation of their method. However, I tend to believe that it is a rather major limitation, as just about every practically relevant problem instance in this space features discrete actions. In the absence of continuity, one has to deal with the challenge of partial information due to lost sales, which makes it impossible to estimate the gradients even after convexifying the action space via randomization (as noted by Huh and Rusmevichientong [9]). It would be useful to clarify this to the reader so they don't end up believing that this assumption is minor. - I find it a bit disappointing that the regret bounds depend so heavily on the demand parameters $\mu$ and $\rho$. I appreciate the negative results showing that certain degenerate demand sequences can lead to linear regret, but it's not clear to me if this justifies the poor linear dependence on $1/\mu$ and $1/\rho$ demonstrated by the bound. How do we know that this dependence cannot be improved to $\log(1/\mu)$ and $\log(1/\rho)$ or something even better? I really appreciate the experiments that plot performance as a function of the learning rate $\gamma$, but I would also like to understand which values of $\gamma$ actually satisfy the constraint required by the theorem --- I suspect only the very, very small ones, for which the algorithm works rather poorly.
In this sense, it may be unfair to criticize DDM for not having performance guarantees, since MaxCOSD doesn't have any either for the majority of the studied stepsizes. This is of course always a problem for theoretically motivated stepsizes, but it would be important to mention it at least once in passing. (A final note on the experiments: my impression was that AIM is also a very similar policy to what is being proposed here, but I see no discussion of this.) Overall, I think this could be a good addition to the program, but I cannot confidently recommend acceptance until the authors respond to the above concerns. Technical Quality: 3 good Clarity: 3 good Questions for Authors: see above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. *I worry that this comparator strategy is problematic in the stateful setting. First off, some fixed constant strategies may be infeasible when some of the carryover inventory levels $x_t$ are large. Thus, the best feasible constant strategy may be a very poor one, ordering very high levels to make sure that the constraint is never violated.* We address this in our global response (in short: constant strategies are always feasible). 2. *I am concerned that evaluating a fixed policy on the loss sequence generated in response to the online agent's actions may be meaningless in the first place, and it might be more sensible to consider "policy regret" by taking into account the fact that the environment would respond differently to the comparator policy than to the agent. Treating this case, however, seems much harder than what is being handled in the present paper. Can the authors let me know if I'm missing something here?* With our assumptions on the model (see Remark 4), the adversary/environment is oblivious. Therefore, the regret and policy regret coincide, since the losses are oblivious and consist of functions of the current order-up-to level only. The loss functions do not change according to our decisions (obliviousness); only the states, and thus the feasibility constraint, vary. We will update Remark 4 to make this clear to the reader. For more discussion on this topic, see our answer 3 to Reviewer ZTux, who asked why we chose the environment to be oblivious. 3. *The assumption that the order levels are continuous seems like a very important one [...] It would be useful to clarify this to the reader so they don't end up believing that this assumption is minor.* See our global response (in short: we agree). 4. On the rates: *I find it a bit disappointing that the regret bounds depend so heavily on the demand parameters $\mu$ and $\rho$ [...] How do we know that this dependence cannot be improved to $\log(1/\mu)$ and $\log(1/\rho)$ or something even better?* See our global response (in short: we do not know, and other papers obtain similar constants). 5. On the parameter $\gamma$: *I really appreciate the experiments that plot performance as a function of the learning rate $\gamma$, but I would also like to understand which values of $\gamma$ actually satisfy the constraint required by the theorem --- I suspect only the very, very small ones, for which the algorithm works rather poorly. In this sense, it may be unfair to criticize DDM for not having performance guarantees, since MaxCOSD doesn't have any either for the majority of the studied stepsizes. This is of course always a problem for theoretically motivated stepsizes, but it would be important to mention it at least once in passing.* We agree with the reviewer. It is true that we run MaxCOSD for a wide range of parameters which might violate our condition $\gamma \leq \rho/D$ in Theorem 12. For the record, in our experiments we always have $\rho = 1$ and $D \simeq 10, 10, 10^3, 10^4, 10^4$ for Settings 1-5 (in this order), while we explore the interval $[10^{-5}, 10^1]$. From this perspective, MaxCOSD lacks theoretical guarantees when using the parameters achieving the best performance (around $10^{-1}$) for Settings 3-4-5. This being said, we observe that we can force Theorem 12 to accept large stepsizes by taking $\rho$ as large as needed. This would impact our Assumption 10 on the demand, and for things to work we would need to assume that there is a nonzero probability for all products to have a very high demand.
This is either unrealistic in practice, or would hold with such a small $\mu$ that the rates would be meaningless. To address this, we propose to temper our sentence "*Remember that in this setting the baseline DDM has no theoretical guarantees whatsoever*" with the information that MaxCOSD also falls outside our theory for large stepsizes. 6. *A final note on the experiments: my impression was that AIM is also a very similar policy to what is being proposed here, but I see no discussion of this.* We respectfully disagree. Our policy is quite similar to CUP's, which we discuss in Section 4. Instead, AIM performs projections onto the feasibility constraint, and does not involve cycles as we do. We agree that they share the same skeleton based on OSD, but from our perspective AIM is a rather different algorithm. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed and honest response! I really appreciate the effort put into the rebuttal and in particular the detailed answers given to all my questions. Let me respond in detail: 1. Re feasibility of constant comparators: got it, thanks! 2. Re policy regret: thanks for the clarification! I guess I should have been more explicit about my concern here. What I worried about is that the losses may implicitly depend on the carryover state $x_t$, and it may be meaningless to compare the total loss of the learner with that of a comparator strategy that would have generated a different sequence of states. I now see that this is explicitly disallowed by the model. I am not super familiar with the inventory management literature, so I am not sure if it makes sense to consider losses that depend on $x_t$ --- in my mind, it may make sense to model the cost of storing leftovers between order periods with such a dependence. 3. Re continuous order levels: thanks for acknowledging the importance of this assumption! 4. Re lower-bounded demands: OK, fair enough. I am still unsure about the importance of lower-bounded demands. E.g., Lugosi et al. [15] do not require such assumptions and still get good rates in a related setting. 5. Re parameter settings for the experiments: OK, that makes sense, please do add this clarification to the final version. 6. Re the relationship with AIM: sorry for phrasing this comment like that. I did not want to suggest that you failed to compare to AIM, but rather to ask for a comment on the relationship (in particular any dis-/similarities) between the proposed method and AIM. Overall, I am happy to recommend acceptance and have raised my score accordingly. --- Reply to Comment 1.1.1: Title: Answer to the comments by Reviewer 5kk9 Comment: We thank the reviewer for their feedback and for taking the time to participate in the discussion. Here are our answers to some of the comments: 2: Re losses depending on $x_t$: The reviewer's intuition is correct; it makes sense to consider losses depending on the inventory state $x_t$. Such losses appear in the literature, and fall outside of our model. We explain this limitation at the end of Remark 7 (on the losses). To make clear what exactly we are losing, we can say that 1) for lost-sales dynamics, losses in $x_t$ are often equivalent (up to a reparametrization) to losses in $y_t$. This is the case for the newsvendor loss, which is the one most often considered in practice. This is due to the simple relation between $x_t$ and $y_t$. 2) for dynamics involving perishability, the relation between $x_t$ and $y_t$ is more complicated, and there it can be natural to have losses depending on $x_t$ (see e.g.
[32], which considers an outdating cost). 4: Re the importance of lower-bounded demands: allow us to point out that [15] obtains optimal rates without this assumption because they are in a *stateless* setting. As we explain in Sections 3.1 and 5, a stateless setting does not require any assumption on the demand (which is usual for OCO). Our consideration of a more general dynamic forces us to make an assumption on the demand (as established in Proposition 13). That said, one could argue that our assumption could be replaced by a weaker one. But because our competitors need the same or similar assumptions in their more specific settings (see the discussion below Assumption 10), we feel that replacing it will not be an easy task. 6: Re the relationship with AIM: thanks for further clarifying your comment. What we can do is complete the paragraph following Algorithm 2, where we compare our method to CUP. We could add a sentence explaining that our method (and CUP) handles the feasibility constraint thanks to cycles, which differs from the approach in AIM/DDM, where projections onto the constraint are performed (we already explain in Section 3.2 that AIM is based on a subgradient descent and dynamic projection onto the feasibility constraint).
Rebuttal 1: Rebuttal: We will use this general response to address issues which were raised simultaneously by several reviewers. We responded to all the other issues in the individual responses to reviewers. 1) *Reviewers 5kk9, UtoR, ZTux raised concerns about the fact that the constant competitor in the regret definition might not satisfy the feasibility constraint.* Our answer: there is no issue, because constant strategies are always feasible (we give a short proof below), so our regret is well-defined. This should absolutely appear in the paper, and we propose to state it clearly after Definition 3 (defining the regret), possibly pointing to a formal proof in Appendix D. The proof: given any fixed order-up-to level $y\in\mathcal Y$, consider the constant strategy $y_t=y$ for all $t\in \mathbb{N}$, and denote by $x_t$ the states associated with this strategy. First of all, $y_1$ is feasible because we impose $x_1=0$. Then, for every $t\in\mathbb N$ we can write $y_{t+1} = y \succeq [y-d_t]^+ = [y_t-d_t]^+ \succeq x_{t+1}$. The first inequality comes from the monotonicity of the positive part, while the last inequality is the inventory dynamical constraint. 2) *Reviewers 5kk9, ZTux point out that our action space is continuous instead of discrete.* Our answer: indeed, in our work it is a crucial assumption that the feasible set is continuous. To handle the discrete case, two main ideas appear in the literature on online inventory problems. First, probabilistic rounding (see e.g. Huh and Rusmevichientong [9, Section 3.4] or Shi et al. [25, Section EC.4.2]), which, as mentioned by Reviewer 5kk9, requires additional knowledge even in the simple newsvendor case, such as lost-sales indicators, since a single subgradient may not provide enough information. Second, the use of expert algorithms, as suggested by Reviewer ZTux, which has already been applied in stateless settings (see e.g. Lugosi et al. [15]). To adapt the cycle-update technique to expert algorithms in stateful settings, we face the following challenge: how to efficiently bound the distance between consecutive (randomized) iterates? Answering this question is the key to designing a sufficient condition for feasibility (as in our Lemma 15). We propose to rewrite the end of Remark 6 (on the feasible set) to highlight the fact that not being able to handle discrete sets is a major restriction of our model. We will further complete the conclusion by developing the above-mentioned ideas to tackle this issue. 3) *Reviewers 5kk9, ZTux, 8ZLu made comments about the optimality of our regret rates and their dependency on $\mu$ and $\rho$.* Our answer: in Section 3 we explain that all previous works obtain an $O(\sqrt{T})$ regret, and that it is somehow optimal for this class of problems. In Theorem 12 we obtain the same sublinear regret, but it is true that we do not make the connection with this literature review, and that we never discuss the constants appearing in the bound (which is roughly $D^2 G/(\mu\rho)$). To the question of whether this constant is optimal, we have absolutely no answer. We were not able to find a worst-case example where the regret has a constant lower bounded by $1/\mu$. We are also not aware of any result in the literature with a better dependency on $\mu$ and $\rho$. For instance, the CUP paper has more or less the same rates as ours. If we combine [32, Theorem 2 & Remark 3 & Assumption 1], we see that their multiplicative constant is also $GD^2/(\mu\rho)$.
There are also the rates for AIM, where the multiplicative constant is made explicit in [9, end of the proof of Theorem 2: equation 9]. It is a bit hard/unfair to compare it to ours, because they directly make the assumption that $\mathbb{E}[d]$ is bounded below, and they obtain a constant which scales like $\mathbb{E}[d]^{-6}$. If we take the liberty of saying that $\mathbb{E}[d]$ scales like $\rho\mu$ (vaguely justified by the fact that $\mathbb{E}[d] \geq \rho \mu$), then we could say that their constant scales like $(\rho\mu)^{-6}$. This last discussion is imprecise and hides a lot of other constants under the carpet, but we are confident in saying that AIM does not depend logarithmically on $1/\mathbb{E}[d]$. To make all this clear to the reader, we propose to add after Theorem 12 a few sentences recalling 1) that our rate recovers the same $O(\sqrt{T})$ scaling as the other methods presented in Section 3, and 2) that we do not know if the scaling in $\mu, \rho$ is optimal or if it can be improved, also pointing to the above-mentioned references. 4) *Reviewers UtoR, ZTux, 8ZLu raise concerns about the novelty of the paper.* Our answer: this topic is partially subjective, so we will try to keep our answer factual. We believe that we are transparent about the nature of MaxCOSD in the paper (see the introduction, where we present "*MaxCOSD, which can be seen as a generalization of the Online Subgradient Descent method*"). We also make clear that OSD naturally solves online inventory problems in the stateless setting (Section 3.1 or Corollary 20). On the other hand, we would like to emphasize that naively applying OSD to stateful inventory problems does not work (see Section D.1 in the appendix). Even under strong assumptions on the demand, one needs to understand the link between stepsize and feasibility, which is what we explain around Theorem 17 and Lemma 15. We would also like to stress that our contribution is not limited to Theorem 12 (which can be seen, technically, as a fine control of the size of an adaptively chosen batchsize for OSD), but also includes the contents of Section 5. Our lower bound (Proposition 13) is also new, and formalizes an idea which was in the air (but not proven). Aside from this, we humbly hope that this paper will serve as a friendly introduction to inventory problems for the online learning community, motivating them to apply their techniques to a field where a lot of progress can still be made. Pdf: /pdf/fd9e48f7f1f997ffcc9d730800819490fa700891.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Supervised Pretraining Can Learn In-Context Reinforcement Learning
Accept (spotlight)
Summary: This paper introduces a new pretraining objective for decision-making tasks. Leveraging the in-context learning capabilities of transformers, the authors propose to pretrain a model that predicts the optimal action given the current state and a set of history interactions as context. Empirically, the pretrained model is capable of learning various algorithms implicitly and can transfer to unseen tasks. Theoretically, the pretraining objective can be viewed as training the model to perform posterior sampling. **After rebuttal**: The rebuttal addressed my concerns sufficiently. The new experimental results highlight the importance of both in-context datasets and optimal actions. I have increased my score to 6. Strengths: 1. This work proposes a new pretraining framework for decision-making tasks. It is one of the pioneering efforts in the problem of in-context decision-making. 2. The paper is generally sound and presents adequate empirical results. The connection to posterior sampling is also interesting. 3. The paper is well-written. Weaknesses: 1. My main concern is about the requirement of optimal actions at pretraining. This assumption appears to be overly strong and I doubt its scalability to large-scale pretraining and more complex tasks. 2. I find it unclear whether the pretrained model benefits more from in-context datasets or optimal actions. It is suggested to conduct an ablation study on the context size, which appears to be missing from the appendix. I am curious about the performance when the context size is set to 0. 3. The modification of history-dependent pretraining in Section 6 needs further clarification. The importance of this modification is unclear to me. 4. The authors should provide a more comprehensive comparison between their proposed pretraining objective and previous research, such as retrieval-augmented RL [1]. [1] Goyal A, Friesen A, Banino A, et al. Retrieval-augmented reinforcement learning[C]//International Conference on Machine Learning. PMLR, 2022: 7740-7765. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Appendix A.5, you mention that the in-context datasets for Miniworld do not include images as observations. Does this imply that in-context datasets are not important as expected for the final performance? 2. Can you please explain the difference between DPT(PPO, PPO) and AD, and why DPT(PPO, PPO) is superior to AD? 3. How are the datasets constructed for Dark Room (Three Tasks)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors acknowledge certain limitations of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the detailed feedback and questions! **Optimal actions at pretraining.**: Indeed, the theoretical formulation of DPT requires that task-specific optimal actions can be obtained for pretraining. While restrictive in some cases, we actually view this as being potentially quite flexible. This is because these labels can be obtained *offline* through many different means: - If task-specific expert labelers or demonstrations are available, those can be used directly - If the data is collected via an (inefficient or efficient) single-task RL algorithm, we can retroactively use labels from the learned policy (Fig 5.bc, also Fig 3.bc, Cor 6.2) - If privileged information is only available during pretraining but not testing (e.g. synthetic tasks, sim-to-real, hindsight [1]), we can obtain action labels that would otherwise be too hard to obtain - If the data covers the optimal policy distribution, offline RL can compute a near-optimal policy (same setting as [2]). In all cases, one can “relabel” the data with the optimal action in a query state. A key feature of DPT is that it is agnostic to how the training tasks are solved, as long as the labels are found. The theory of DPT does not require demonstrations from an optimal RL algorithm, only task-specific optimal actions. Many meta-RL strategies also implicitly assume the training tasks can be “solved,” but they often require that they are solved in a specific way (e.g. online like PEARL [3] or by using a realizable source algorithm like AD). Perhaps more importantly, DPT can work with approximately optimal actions. The procedure is agnostic to whether the provided labels are actually optimal or not. We showed empirically that if we only know “good” actions, such as those from a final PPO policy, DPT can still work well (see Figs 5.bc). [1] Sinclair et al. Hindsight learning for MDPs with exogenous inputs. 2023. [2] Mendonca et al. Guided meta-policy search. 2019. [3] Rakelly et al. Efficient off-policy meta-RL via probabilistic context variables. 2019. **Does pretraining benefit more from in-context datasets or optimal actions? What if the context size is set to 0?**: Both are critical. The in-context dataset provides information about the task through rewards and transitions. It serves the same purpose as an offline dataset does for offline RL, or a history for online RL. With no data in a new task, DPT makes a guess from its learned prior. Information-theoretically, no algorithm can do better than this. The prediction becomes more refined with more data or online interactions. See Fig 2.a at $n = 0$ for offline bandits and Fig 2.b for online. For MDPs, see Figs 4.bd. We also included a new figure for offline Dark Room with DPT in the response PDF (Fig 2.c). Optimal actions are important for connecting with posterior sampling (PS) on new RL problems. Again, approximate ones suffice in practice. Pretraining to predict actions only from the in-context datasets would be equivalent to AD. **The modification of history-dependent pretraining in Section 6 needs further clarification.**: Thanks for pointing this out. During pretraining, the context is augmented with a history of optimal actions and states up to the current timestep. During deployment, $D$ behaves the same, and the learner's in-episode history is recorded in $\xi$. This modification is important to establish the precise connection between DPT and PS.
In particular, conditioning on the history allows us to ‘collapse’ the posterior distribution over actions onto those consistent with policies that could have been generated by PS. **Comparison to retrieval-augmented RL [1]**: Thank you for pointing out this relevant work. [1] augments a standard RL algorithm with a dataset retrieval process to supply additional immediate information for the agent to use. The authors demonstrate this can also be used for multitask settings. DPT learns directly to map datasets and states to actions via offline supervised pretraining, and we also provide a theoretical analysis. **The in-context datasets for Miniworld do not include images as observations. Does this imply that in-context datasets are not as important as expected for the final performance?**: We apologize for the confusion. To make this experiment clearer, we have included a new version in the response PDF (Fig 2.ab) which uses images and completely removes the xy-position information. To answer your question, in the original version, the in-context datasets provide contextual information about the task through the xy-positions and rewards. Observations are indeed necessary to identify and solve the task. **Difference between DPT(PPO, PPO) and AD, and why DPT(PPO, PPO) is superior to AD?**: Both use data from PPO. A key difference is that AD is directly trained on the trajectories generated by PPO (i.e. to predict the next action from the learning algorithm), but DPT(PPO, PPO) is trained to predict the action from the *final PPO policy*, given an offline history and a state. This action acts as an approximation of the optimal policy. AD will only be as good as the algorithm it imitates, but DPT may be better because it can act as a good approximation of posterior sampling with a data-driven prior. [Note that the AD authors consider skipping trajectories to predict slightly further ahead, which we implemented.] **How are the datasets constructed for Dark Room (Three Tasks)?**: The three tasks in Dark Room (Three Tasks) differ in the start and goal locations of the agent: (1) start at (0, 5) and goal at (9, 5); (2) start at (5, 0) and goal at (5, 9); (3) start at (0, 5) and goal at (5, 9). The pretraining in-context dataset here is collected by a random policy for the two goals. At evaluation time, the in-context dataset consists of demonstrations of the first two tasks, where the reward is 1 only for transitions to (5, 9), and we evaluate on the third task. This demonstrates the ability to do offline stitching of suboptimal demonstrations. --- Rebuttal Comment 1.1: Comment: Thank you for your comprehensive response to my concerns. Given the convincing experimental results highlighting the importance of both in-context datasets and optimal actions, I am inclined to increase my score to 6.
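To make the pretraining objective discussed in this exchange concrete, here is a minimal sketch of one supervised pretraining step, assuming a discrete action space; `model`, the batch field names, and the tensor shapes are illustrative assumptions, not the authors' actual code.

```python
import torch
import torch.nn.functional as F

def dpt_pretrain_step(model, optimizer, batch):
    """One supervised pretraining step: predict the relabeled
    (approximately) optimal action for a query state, conditioned on an
    in-context dataset of transitions from the same task."""
    context = batch["context"]          # (B, n, transition_dim): in-context dataset D
    query_state = batch["query_state"]  # (B, state_dim): query state s
    action_label = batch["action"]      # (B,): index of the relabeled optimal action

    logits = model(context, query_state)          # (B, num_actions)
    loss = F.cross_entropy(logits, action_label)  # log-likelihood of the label a*(s)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key point the rebuttal makes is that `action_label` can come from any offline relabeling process (expert labels, a final single-task policy, offline RL), so the loss itself is agnostic to how the training tasks were solved.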
Summary: This paper proposes an in-context learning based algorithm for meta-learning decision-making algorithms, combined with a novel analysis which shows that, unlike prior works, the proposed algorithm is guaranteed to work. Moreover, the authors also show empirically and theoretically that their algorithm is able to improve upon its training demonstrations. Strengths: 1. The paper conducts a novel theoretical analysis on the ability to generalize well when meta-learning with the in-context learning paradigm applied to decision-making problems. 2. The latent representation learning results in linear bandits, which enable the improvement over the training demonstrations, are a novel motivation for in-context learning based decision-making algorithms. 3. The paper is clearly written and accessible to readers with varying backgrounds. Weaknesses: The formulation of a consistent learned model assumes the meta-learning algorithm is perfect. Can you relax this assumption to provide a fine-grained pre-training sample complexity? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Could you please explain why, in Figure 2, UCB is outperformed by Thompson Sampling even though you stated that UCB is an optimal online multi-armed bandit algorithm? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Since the analysis for MDPs assumes that the training demonstrations are generated with an optimal policy, the claim that the proposed method can theoretically improve upon the training demonstrations should explicitly specify that the improvement is only for the bandit and contextual bandit cases, not the more general MDP case. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive comments and detailed feedback! **Can you relax the consistency assumption on the learned model for the theory?**: Thank you for the great question. In Appendix C, we provide some justification for this via a generic MLE concentration inequality (cf. Zhang, 2006). This will be true for realizable function classes of bounded statistical complexity. Asymptotically (with N, the number of pretraining samples), the distribution can be learned through this MLE procedure. While finite-sample versions can be derived [2], the calculations are long and can obfuscate the key theoretical takeaways of the theorem and proof, which are primarily about how the pretraining can lead to in-context posterior sampling behavior. While indeed extremely important, the study of the generalizability and complexity of transformers is in its early stages and beyond the scope of this paper. This assumption also allows the theory to be somewhat agnostic to the underlying model used to learn the distribution. [1] Zhang. From ɛ-entropy to KL-entropy: Analysis of minimum information complexity density estimation. 2006. [2] Du et al. Few-shot learning via learning the representation, provably. 2020. **Why is UCB outperformed by Thompson Sampling when it is said to be optimal?**: We apologize for being unclear and will update the draft to clarify this statement. UCB is theoretically *minimax* optimal for regret in the sense that its proven regret *rates* match an information-theoretic lower bound, up to log factors and constants. TS is also minimax optimal. On specific instances, either TS or UCB may outperform the other. **Since the analysis for MDPs assumes that the training demonstrations are generated with an optimal policy, the claim that the proposed method can theoretically improve upon the training demonstrations should explicitly specify that the improvement is only for the bandit and contextual bandit cases, not the more general MDP case**: To clarify, we do not assume the in-context datasets are necessarily generated with an optimal policy or optimal RL algorithm. For example, a random algorithm could be executed in a well-connected tabular MDP and, if there is sufficient data, one can extract the task-specific optimal policy from the collected data. In such settings, DPT can learn an RL strategy that is better than the random strategy used to generate data for the pretraining process, so that it can solve new RL problems more efficiently than the random algorithm. When we refer to it improving over pretraining data, we mean it can learn an RL strategy that may be better than the process used to generate the pretraining data (which could be an inefficient RL algorithm). We do not require demonstrations of an optimal RL algorithm. For MDPs, this is evidenced by Corollary 6.1 (pretraining in-context datasets generated by random interactions) and Figs 5.bc (pretraining data generated by PPO). We will be sure to articulate this point better in the revision. --- Rebuttal Comment 1.1: Comment: Thanks for the thorough rebuttal. The authors have addressed all my concerns and clarified my misunderstandings. I am thus willing to increase my rating.
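The minimax-optimality point above (either UCB or TS can win on a specific instance while both have optimal worst-case rates) is easy to verify with a toy simulation; this is an illustrative sketch, not the paper's experimental setup, and changing the gap structure of `means` can flip which algorithm incurs less regret.

```python
import numpy as np

def bandit_regret(means, T, algo, sigma=1.0, seed=0):
    """Cumulative regret of UCB1 vs. Gaussian Thompson sampling on one
    fixed Gaussian bandit instance with arm means `means`."""
    rng = np.random.default_rng(seed)
    K = len(means)
    counts, sums, regret = np.zeros(K), np.zeros(K), 0.0
    for t in range(T):
        if algo == "ucb":
            if t < K:                      # pull each arm once first
                a = t
            else:                          # empirical mean + confidence bonus
                a = int(np.argmax(sums / counts + np.sqrt(2 * np.log(t) / counts)))
        else:                              # sample a mean from each arm's posterior
            post_mean = sums / np.maximum(counts, 1)
            post_std = sigma / np.sqrt(counts + 1)
            a = int(np.argmax(rng.normal(post_mean, post_std)))
        sums[a] += rng.normal(means[a], sigma)
        counts[a] += 1
        regret += max(means) - means[a]
    return regret

for algo in ("ucb", "ts"):
    print(algo, bandit_regret([0.5, 0.4, 0.3, 0.2, 0.1], T=2000, algo=algo))
```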
Summary: This paper studies in-context learning for sequential decision-making problems. It proposes an approach, Decision Pretrained Transformer (DPT), which takes as input a (state, context) pair and produces an action. It is pretrained on a large number of tasks drawn from some task distribution and evaluated on held-out tasks. This approach can be applied in both offline settings (in which case the context is a given set of trajectories from the task at hand) and online settings (in which case the context is the agent’s experience on the current task so far). The approach is evaluated in two settings: toy bandits (including linear contextual bandits) and simple MDPs. In the online bandit setting, it is shown to learn behaviors which efficiently balance exploration and exploitation, similarly to hand-coded bandit algorithms like UCB/LinUCB. This holds both in the stateless bandit setting and the linear contextual bandit setting. In the offline setting, it also matches Thompson sampling. The second setting the algorithm is evaluated on is simple MDPs, namely a small partially observed grid world and MiniWorld (a pixel-based version of Minigrid). Here again, DPT is able to perform well both online and offline, even if the dataset quality is poor. Finally, the paper shows that under certain assumptions, DPT can be shown to perform posterior sampling, which is consistent with the empirical results in the bandit settings. Overall, this is a pretty good paper. The idea, though simple, has not to my knowledge been explored before. The experiments are well-designed, illustrate the claims well, and the writing is clear. The main downside is that the tasks feel pretty toy - the bandit examples are just Gaussians and the MDPs are grid worlds (possibly dressed up with pixels, as in MiniWorld). However, as a first proof of concept, I think this still meets the bar for acceptance at NeurIPS. Strengths: - The problem is definitely interesting, since large-scale, diverse datasets are often available for pretraining and zero-shot adaptation to new tasks is often desirable - The writing is very clear, and the experiments are well presented - The experiments are well-designed to show the properties of the proposed approach Weaknesses: - The experiments are on toy environments - There is no mention of open-sourcing the code. I think this is important since the setting is new and future work should be able to compare to this approach on the same tasks. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - How does the computational cost of the proposed method compare to that of the baselines? Due to the need to perform attention over the context at each step, this might be expensive. Please add a short discussion of this. - Will the code to reproduce experiments be open-sourced? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive comments and detailed feedback! **How does the computational cost of the proposed method compare to that of the baselines? Due to the need to perform attention… this might be expensive**: AD also uses a GPT-like architecture, so its computational cost is approximately the same as DPT's. RL^2 uses an RNN, which performs inference faster (although rapid progress on transformer inference is closing this gap). PPO is perhaps the fastest with a simple MLP, but of course it does not benefit from pretraining. Fortunately, driven by the surge of interest in language models, there is considerable effort in the community to develop fast computational methods for inference with either transformers or other architectures. Known examples include FlashAttention and S4. There is already work beginning to address this for in-context RL [Lu et al., 2023]. We expect the general framework of DPT to immediately inherit benefits from these advances. **Will the code to reproduce experiments be open-sourced?**: Yes, we intend to open-source the code to reproduce the experiments in a convenient and easily extensible manner. In the meantime, original code may be found in the supplementary material for the reviewers’ convenience. **Experiments on toy environments**: The environments are indeed simple, but we believe thoroughly studying the problem empirically and theoretically with simple models is key to developing generalizable and still non-trivial insights about it. We look forward to scaling these insights in future work!
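For readers weighing the cost question above, a rough back-of-the-envelope sketch (ignoring MLP blocks and projection matrices, which add a constant per-token cost) shows why transformer decision-making gets slower as the context grows while an RNN's per-step cost stays constant; the model dimensions below are arbitrary illustrative values.

```python
def per_step_attention_flops(context_len, d_model, n_layers):
    """Approximate multiply-adds for one new query token attending over
    `context_len` cached keys/values in every layer: ~2*n*d per layer
    (n*d for the attention scores, n*d for the weighted value sum).
    With KV caching this grows linearly in n; without it, quadratically."""
    return n_layers * 2 * context_len * d_model

d_model, n_layers = 256, 4
for n in (100, 500, 2000):
    print(f"context={n}: ~{per_step_attention_flops(n, d_model, n_layers):,} FLOPs/step")
```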
Summary: This work leverages transformers to solve decision-making problems in a few-shot manner. In particular, the authors train transformers to predict optimal actions for a task given a query state and a few-shot dataset of (state, action, reward) transitions. The approach is tested in both multi-armed-bandit problems and POMDPs, showing performance superior to other RL and meta-RL approaches. It also provides a theoretical analysis of the proposed policy, providing regret bounds and guarantees of performance when different algorithms are used to generate the pre-training data. Strengths: Method: The work conducts a thorough study of the effectiveness of transformers for in-context learning in decision-making problems. While the domains chosen are quite simple, there are some valuable learnings (especially if they transfer to more complex domains), such as: - The value of sampling actions from the transformer's likelihood in multi-armed bandits - The effect of using expert datasets for in-context learning - Better generalization than competing baselines, when tested on new tasks The work also includes examples of POMDPs with higher-dimensional observations (25x25 pixels) showing the proposed pretraining paradigm achieves better performance than competing baselines. Valuable analysis of the performance of in-context learning when having access to PPO-trained policies, bringing the setting closer to AD. I particularly value how the authors evaluate the effect of different sources for the context states and actions. I am wondering whether DPT(PPO, PPO) is directly comparable to AD, though, since in AD's case in-context actions and trajectories are provided for different stages of training. By choosing simple domains, the work can thoroughly explore the behavior of in-context learning for different properties of the training data, particularly in the multi-armed bandit section. To my knowledge, this is the first work to build connections between in-context learning in decision-making problems and posterior sampling, which allows the authors to: 1. Provide an upper bound on the regret of the proposed transformer-based policy. 2. Show that if the pretraining in-context dataset comes from policies that were only trained on the data and task present in the dataset, the final policy will be the same. Clarity: Generally clear writing and background. The experiments are simple but informative, and the work is easily reproducible. More notation and clarity should be given in the theory section, such as stating what $H$ is, and providing more intuition in the propositions. Weaknesses: Novelty: while this paper conducts a reasonable study on the advantages of in-context learning in decision-making problems, this is a feature that has been recently studied in more complex domains and tasks (see related works next). My main concern with this work is that there is not much novelty or new insight compared to those works, and that despite making a reasonable study of the capabilities of transformers for few-shot decision-making problems, they are studied in very simple scenarios. Related work: Several works have studied in-context learning for decision-making problems, in more complex scenarios [1, 2]. These works should be cited. I am also not clear on what novelty this work provides with respect to the aforementioned papers. I would like a clarification on this as well.
[1] https://arxiv.org/pdf/2206.13499.pdf [2] https://arxiv.org/abs/2301.07608 Baselines: While RL2 is a fair baseline, I think the authors should look at other meta-RL algorithms, or tune the existing baselines to be more comparable. I understand that PPO is used as a reference, but it should at least be fine-tuned with the few-shot examples given - it is hard to compare methods when some are actually seeing less data. AD is designed to learn to do RL and therefore works best when given a curriculum of trajectories, which is not the case here. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: It is not intuitive to me that a model trained with expert data (DPT-Exp) behaves poorly when the % of expert data is high (Figure 3.a), since that is what is seen during training; I would expect it to behave the best. Could the authors elaborate on why this is the case? The setting in lines 312-317 makes a lot of sense. Why not use that in all the experiments, if as claimed "the original pretraining method can be seen as a simpler approximation of this modified method"? I don't understand Assumption 1.1; if we are following that assumption, why not directly use $P_{pre}$ as our policy? Some intuition would be helpful. Proposition 6.3 is very unintuitive to me, if I understand it correctly. Is it stating that a pretraining dataset coming from PPO and one coming from a random policy would perform the same? I don't see anything in the proposition that implies otherwise, but this doesn't match the empirical experiments. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 1 poor Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed feedback! **Novelty**: We apologize that this was not clear. DPT indeed offers significant novel insights, distinct from prior work. 1. It is not necessary to start with a good RL algorithm to learn a good RL algorithm. Supervised pretraining can produce one. Prior work has assumed access to good data through either a good online RL algorithm or expert demos at test-time. Instead, we make the much more flexible assumption that one can compute task-specific (approximately) optimal actions for the histories *offline during pretraining*. This can be done under different assumptions, subsuming prior work, including: - data collected from an (inefficient or efficient) single-task RL algorithm (Figs 5.bc, 3.bc, Cor 6.2). The final learned policy can be used to relabel the training trajectories (Figs 5.bc) - data gathered in a way that includes the task-specific optimal policy distribution, which can then be used to compute the optimal policy via offline RL - privileged information (e.g. sim-to-real, hindsight info) or task-specific expert labelers 2. DPT shows that provably sample-efficient posterior sampling (an algorithm that has long been studied primarily theoretically) can be scaled up via transformers and *supervised* pretraining (Thm 1, Cor 6.1, Figs 2.ab) 3. DPT offers a new way of training a transformer, showing that learning distributions over task-specific optimal actions (rather than imitating existing algorithms) better leverages data-driven priors (Figs 3.abc, 4.b, 5.bc) We do not believe these important and sometimes surprising implications have been empirically or theoretically elucidated before, in either simple or complex settings. **Comparison to [1] and [2]**: Thanks for recommending these papers. In addition to the aforementioned new insights, there are technical differences in operation. - Prompt-DT [1] is designed for offline RL and conditions on expert demos and return-to-go in the context. This necessitates expert demos as input at test-time. DPT is meant for *both* online and offline in-context RL and does not require test-time demos (but it can benefit from them, e.g. Figs 4.ac). DPT does not use return-to-go, which saves a test-time hyperparameter. Pretraining is also different, as DPT predicts optimal actions from arbitrary datasets. - AdA [2] meta-learns online (like RL^2). Major differences are (1) a base online RL algorithm to optimize and online access to the simulator, and (2) a curriculum learning method. DPT is learned from offline supervised pretraining, and it is agnostic to how the pretraining data is acquired. The curriculum is important but orthogonal to our contributions. We believe many of the ideas in AdA are complementary and can be ported to DPT. **Baselines**: Thanks for your suggestions. We are not aware of an immediate extension of PPO to few-shot fine-tuning. Training on different tasks does not work because the policy cannot know its task from the state alone. Using history effectively would make it RL^2. We presume the reviewer is suggesting to do something like MAML. We have included MAML in the response PDF (Fig 1.a). MAML is notably worse, and this has been observed before in exploration problems [1]. We inspected the pre-adaptation episodes (Fig 1.b) and found that they fail to explore the task space. We also included a new comparison with Prompt-DT (Fig 1.c). Prompt-DT requires expert demos and only works offline, whereas DPT works with expert or random data and works online. [1] Gupta et al. Meta-RL of Structured Exploration Strategies. 2018.
**AD implementation**: We apologize for any confusion. We believe our implementation of AD is already consistent with both your suggestion and the recipe in the original paper. We distill PPO histories by predicting the next actions. As the authors suggest, we apply skipping to improve adaptation. **DPT-Exp performance in Fig 3.a**: We believe there may be a misunderstanding. Fig 3.a shows that the suboptimality of DPT-Exp goes to zero as the expert data % is increased. Lower is better. This appears to be consistent with what the reviewer expected. **Why not do modified DPT (presented in the theory) in the experiments?**: While theoretically elegant, the modified version is not the most practical for a transformer. It is possible to implement, but leaves open some ambiguous design choices. E.g., one would need to differentiate the in-context dataset $D$ from the history $\xi$, and $\xi$ would need a positional encoding. In practice, it is easier to implement the original, which still has strong empirical performance (and is still equivalent to PS for bandits). **If we are following Asmp 1, why not directly use $P_{pre}$?**: Your understanding is correct. In Section 6, after Asmp 1, you may assume that any reference to $M_{\theta}$ can be replaced with $P_{pre}$. Asmp 1 helps us answer: “if we had perfect pretraining conditions and exactly learned the target distribution, what are the theoretical characteristics of DPT?” We will clarify this in the text. However, the proof of Thm 1 is not simply a proof by definition! PS and DPT are fundamentally different in their operation. The key technical challenge of the proof is showing that the posterior distribution over actions collapses in just the right way to make the two trajectory distributions equivalent. **Prop 6.3: Is it stating that a pretraining dataset from PPO and a random policy would perform the same?**: Thanks for highlighting this. Prop 6.3 helps explain the following observation. Whether we pretrain with datasets sampled randomly or from an algorithm, DPT has similar behavior. However, some pretraining datasets produce different results (DPT-Exp in Fig 3.a). Prop 6.3 shows that pretraining with adaptively collected datasets will *distributionally* lead to the same model. Statistical differences may arise due to finite samples and coverage in practice. This is qualitatively consistent with what we observe in Figs 3.bc and 5.bc. --- Rebuttal Comment 1.1: Comment: Thanks for the thorough rebuttal. The authors have addressed all my concerns and clarified my misunderstandings. I am thus changing my rating.
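The AD-vs-DPT(PPO, PPO) distinction drawn in this thread comes down to how training targets are built from the same PPO run. Below is an illustrative sketch of the two constructions; `ppo_history` (a list of transitions with an `.action` field), `final_policy` (the policy at the end of PPO training), and `query_states` are hypothetical names, not the authors' code.

```python
def ad_targets(ppo_history):
    """AD-style targets: imitate the source algorithm by predicting the
    next action taken along the PPO learning history (optionally with
    trajectory skipping, as in the original AD recipe)."""
    return [(ppo_history[:t], ppo_history[t].action)
            for t in range(1, len(ppo_history))]

def dpt_targets(ppo_history, final_policy, query_states):
    """DPT(PPO, PPO)-style targets: keep the history as in-context data,
    but label each query state with the final PPO policy's action,
    used as a stand-in for the task-optimal action."""
    return [(ppo_history, s, final_policy(s)) for s in query_states]
```

Under the first construction the learner can only be as good as the PPO learning curve it imitates; under the second, the labels approximate the optimal policy regardless of how inefficiently the data was gathered.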
Rebuttal 1: Rebuttal: Dear Reviewers and AC, Thank you for your detailed and thoughtful reviews. We are delighted to hear that the reviewers found the problem and results interesting and novel (R3a7, pHmJ), and the analyses well-designed and thorough (pHmJ, CX5S). We share excitement about the insights, such as the connection to posterior sampling (qnXx, CX5S) and the ability to exploit latent structure from pretraining (R3a7). We also greatly appreciate the feedback and will incorporate it in the revision. Please see the individual responses for answers to specific questions and clarifications. We have also attached a PDF response with new figures addressing certain questions that warranted new experiments. Here are a few key points: - Reviewers CX5S and qnXx: We have included discussion of how (approximately) optimal actions during offline pretraining can be acquired in many settings that are a superset of related meta-RL problem settings. - Reviewer CX5S: We have highlighted major contributions/implications of the work that we believe have not been elucidated in prior work, including how - it is not necessary to start with a good RL algorithm to learn one, via supervised pretraining. - provably sample-efficient posterior sampling (studied primarily theoretically) can be scaled up via transformers and *supervised* pretraining. - learning distributions over task-specific optimal actions (rather than imitating existing algorithms) better leverages data-driven priors. - Reviewer CX5S: In the attached PDF, we have included additional MAML and Prompt-DT baselines in their respective settings, as well as a MAML visualization. - Reviewer qnXx: In the attached PDF, we have included a new version of the MiniWorld results with the position information completely removed and images used instead in the in-context dataset. The pretraining was also scaled up for all methods. We have also included an ablation varying the context size for offline Dark Room (in addition to existing ones in the original submission for offline/online bandits and online Dark Room). We are happy to answer any additional questions. Pdf: /pdf/6dd7a8fca61c43613c341c3605daf0eca653c132.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
This Looks Like Those: Illuminating Prototypical Concepts Using Multiple Visualizations
Accept (poster)
Summary: This paper proposes a new interpretable prototype-based classifier, called ProtoConcepts. Unlike existing prototype-based classifiers that use one-to-one comparisons, ProtoConcepts learns prototypical concepts using multiple image patches. This approach aims to make it easier to identify the underlying concept being compared, allowing for richer and more interpretable visual explanations. Experiments show that this modified "this looks like those" reasoning process can be applied to various prototypical image classification networks without affecting accuracy on benchmark datasets. Strengths: 1. The authors make a valid point that utilizing a single training image patch as a prototype can be insufficient for users to comprehend the concept the prototype represents. For example, when presented with a "blue circle" prototype, it is unclear whether the concept is related to the color "blue" or the shape "circle." 2. The paper (Table 1) demonstrates that ProtoConcepts can be integrated with various prototype-based classifiers, such as ProtoPNet, TesNet, and ProtoPool, showcasing its adaptability across different methods. Weaknesses: 1. It is definitely not a new idea in the XAI community to use multiple images/patches to visualize a concept. This is a common practice in works on concept-level explanations, like [TCAV](http://proceedings.mlr.press/v80/kim18d/kim18d.pdf), [ACE](https://arxiv.org/pdf/1902.03129.pdf), and so on. However, none of them are discussed in the paper; 2. The main issue of the paper lies in the evaluation of interpretability. It is unclear whether these examples are cherry-picked. Also, given that the source code of the method is unavailable at this moment, it is unknown how good/bad the explanation results would be for many other cases. If the examples are not cherry-picked, it would be better for the authors to clearly state this in the paper or use examples in a fixed order (e.g., always use the first example of each class). It is hard to be convinced that the interpretability of ProtoConcepts is good from several separate examples. I would suggest that the authors conduct a carefully designed human user study. The user study results would better convince readers that the method brings about a better understanding of the model's behavior. User studies are often conducted in work on concept-level explanations. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: 1. The novelty of the paper appears to primarily lie in the combination of two distinct lines of research within XAI - concept-level explanations and prototype-based classifiers. 2. The absence of user study results and the unavailability of source code make it difficult to be convinced about the interpretability of the explanations generated by the proposed method. See more details in the "Weaknesses" section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review comments. We are happy to make some clarifications as follows: >It is definitely not a new idea in the XAI community to use multiple images/patches to visualize a concept. This is a common practice in works on concept-level explanations, like TCAV, ACE, and so on. However, none of them are discussed in the paper **A**: Thank you for bringing these relevant works to our attention; we agree that concept-level explanations are not a new idea in the field of XAI, and will add discussion of these works to our final manuscript. However, TCAV and ACE are from an entirely different genre than our approach. Those methods are **posthoc** and **supervised**. In other words, TCAV needs to be fed images of a **known, predefined** concept and uses those to analyze an existing model. Unlike our method, **posthoc** methods are not involved in the model computation process. Results produced **posthoc** are not guaranteed to be faithful to the classification decision. Instead, our method is **inherently interpretable**, and the algorithm **discovers the concepts** by itself. It is similar to ProtoPNet and its variants, but none of those learn concepts; they use comparisons between **pairs** of images. The concepts that ProtoConcepts discovers could be totally new, whereas TCAV/ACE/etc. require **known** concepts. >I would suggest that the authors conduct a carefully designed human user study. **A**: As per your suggestion, we evaluated the added interpretability of our ProtoConcepts module using the **distinction** survey task in HIVE [1]. In this task, we asked participants to select the correct prediction out of two options based on provided visual explanations, and found that our ProtoConcepts module allowed for a statistically significant increase in user interpretability. Please see the global rebuttal for exact study details and results. We thank you for suggesting a human study and believe that our survey results objectively show the interpretability benefit of our method. >If the examples are not cherry-picked, it would be better for the authors to clearly state this in the paper or use examples in a fixed order (e.g., always use the first example of each class). **A**: Thank you for this comment. We believe the results are representative. Luckily, you can judge yourself whether you believe the results are cherry-picked, because we included many additional examples of the reasoning process in the supplement. In addition, our user study shows that our method results in an objective increase in user interpretability of our visual explanations. [1] Kim et al., HIVE: evaluating the human interpretability of visual explanations, ECCV 2022 --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: I appreciate the authors' response. I take the point that concept-level explanation methods like TCAV and ACE are posthoc methods while the proposed method in the paper is inherently interpretable. However, the line of work on concept-level explanation methods should definitely be discussed in the final version of the paper. Also, note that while TCAV requires known concepts, ACE does not. The human study results supplied are vital in persuading readers about the interpretability of ProtoConcepts. The results should also be included in the final version. I raised my rating of the paper to 6. --- Reply to Comment 1.1.1: Comment: Thank you for the correction on ACE. However, it is still a post-hoc method.
It would require an outlier removal step to "make every cluster of segments clean of meaningless or dissimilar segments" [1]. It also relies on segmentation methods “at the cost of suffering from lower segmentation quality” [1]. We believe this is a valuable discussion, and we will make sure to include it in the final manuscript, together with the user study results. Thank you again for pointing it out. [1] Ghorbani, Amirata, et al. "Towards automatic concept-based explanations." Advances in Neural Information Processing Systems 32 (2019).
Summary: The paper proposes a modification of the prototype layer of the existing prototype-based networks. Existing methods rely on a single training image patch as a prototype. The main drawback of this is that it is difficult for the user to understand the meaning of the prototype from a single visualization. The paper proposes to learn prototypical concepts which are visualized using multiple training image patches. In this way, the semantics of each prototype are less ambiguous. Strengths: _Clarity_: the paper is generally well-written and structured clearly. The figures showing image examples are very helpful, especially for readers who are not experts on prototypical networks. _Significance_: the paper addresses a relevant problem which is well known to practitioners using prototypical networks. The problem is filling the gap between what the prototypes represent and the human understanding of prototype semantics. Previous works tried to tackle it. The proposed changes do not badly impact the overall performance of the prototypical networks; indeed, the accuracy is comparable to previous methods. _Quality and originality_: the two novel losses are technically sound. The idea of representing the prototypes as a ball in the latent space is interesting. The solution can be applied to several types of prototypical networks. _Reproducibility_: the code will be made available upon acceptance (it was unavailable to the reviewers). The supplementary material reports the training parameters. Weaknesses: As reported in the introduction, the proposed solution should allow the user to determine the semantic meaning of each prototype with less ambiguity. However, the experiments do not report one or more metrics that show this reduction of ambiguity with respect to the plain prototypical networks. The paper and the supplementary material report some examples of concrete cases but not an overall evaluation. The optimal solution would be to run an evaluation with real users, but I understand that it is expensive. The paper [1] proposes activation precision as an interpretability metric (section 4.2) (the segmentation masks are provided with CUB-200). Given that the CUB-200 dataset provides part-location annotations for the images, another metric could measure the degree to which the multiple visualizations of the prototype refer to the same bird part. [1] Barnett 2021, "IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography" Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - What is the main advantage of your solution with respect to learning standard prototypes (e.g., as done in ProtoPNets) and visualizing them on the multiple most-activated training images? In other words, the original work on ProtoPNets visualizes the prototype on the most activated training image; what if you just take more images? - How is k set in the experiments? Why does the value of k change depending on the prototype-based model? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors address the limitations. I would like to stress a point that is also discussed in the limitations.
The user expertise influences the impact of the method on identifying the concepts: given the same set of prototype visualizations, an expert user interprets them differently than a non-expert does. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review comments. We are happy to make some clarifications as follows: >However, the experiments do not report one or more metrics that show this reduction of ambiguity with respect to the plain prototypical networks. **A**: As per your suggestion, we evaluated the added interpretability of our ProtoConcepts module using the **distinction** survey task in HIVE [1]. In this task, we asked participants to select the correct prediction out of two options based on provided visual explanations, and found that our ProtoConcepts module allowed for a statistically significant increase in user interpretability. Please see the global rebuttal for exact study details and results. We thank you for suggesting a human study and believe that our survey results objectively show the interpretability benefit of our method. >What is the main advantage of your solution with respect to learning standard prototypes (e.g., as done in ProtoPNets) and visualizing them on the multiple most-activated training images? In other words, the original work on ProtoPNets visualizes the prototype on the most activated training image; what if you just take more images? **A**: Unfortunately, the type of calculation you're thinking about doesn't work for models such as ProtoPNet, and this is part of the motivation for this work. The reason why models such as ProtoPNet are interpretable-by-design is that the prototypes learned by the model correspond **directly** to a single training patch in latent space. When models such as ProtoPNet perform inference on a test image, they calculate distances to these exact training patches in latent space. Therefore, the patches visualized by ProtoPNet are not just the **closest** training patches to their prototypes; each is **exactly** the single training patch corresponding to that prototypical vector in latent space. If we tried to create a way to highlight other nearby training patches, it would yield unfaithful explanations, as the distances calculated in the model's reasoning process do not directly correspond to those other nearby patches. To overcome this problem, we instead represent prototypes as sets for both the reasoning and explanation process, which allows us to create multiple visualizations for each prototype in a way that is still faithful to the model's exact reasoning process. >How is k set in the experiments? Why does the value of k change depending on the prototype-based model? **A**: In the top-k cluster loss for our optimization, the exact value of k is treated as a hyperparameter and is selected by cross-validation. Because the prototype-based models we analyze in our experiments have widely varying latent space geometries due to different distance metrics (e.g. L2 distance vs. cosine similarity), we tune the exact value of k separately for these different models along with other parameters such as the initial concept ball radius. [1] Kim et al., HIVE: evaluating the human interpretability of visual explanations, ECCV 2022 --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for your answer and for taking the time to run a user study. The user study improves the evaluation part. I agree with Reviewer x5Kg that the user study should be added to the final version of the main text. I agree with your explanation about visualizing the prototypes on the multiple most-activated training images.
Would it make sense to have an additional experiment to demonstrate this also visually by showing the user the same number of visual explanations for both ProtoPNets and ProtoConcept? I expect that ProtoConcept wins the comparison because its multiple visualizations are faithful and of higher quality. It is not only a matter of having more multiple visualizations but also of their quality and interpretability. This can also be shown by computing the activation precision metric (see the IAIA-BL paper), which should be higher for ProtoConcept. This may partially address the cherry-picking issue raised by Reviewer p3JV and Reviewer x5Kg. I raised the score to 6. --- Reply to Comment 1.1.1: Comment: Thanks for your response! We will definitely add the user study results to the final manuscript, as you and other reviewers have suggested. >Would it make sense to have an additional experiment to demonstrate this also visually by showing the user the same number of visual explanations for both ProtoPNets and ProtoConcept? **A**: Regarding the comparison of multiple visualizations between ProtoPNet and ProtoConcepts, we can add a comparison of the multiple visualizations of ProtoConcepts to the **closest patch** visualizations of ProtoPNet to the supplement, noting that the **multiple visualizations** generated for ProtoPNet are not the actual patches used in inference, but rather generated as a visual comparison of quality and interpretability as you suggest, similar to Fig. 5 of the ProtoPNet paper [1]. > This can also be shown by computing the activation precision metric (see the IAIA-BL paper), which should be higher for ProtoConcept. **A**: Unfortunately, since the activation precision metric requires bounding-box part annotations, and all of the prototypes learned by ProtoPNet and ProtoConcepts for the CUB dataset are learned from the data without explicit part supervision, there isn't a straightforward way for us to compute the activation precision metric for either the CUB or Stanford Cars datasets used in our experiments. Although CUB contains part annotations, we do not use them during training, so neither ProtoPNet nor ProtoConcepts is trained to learn the parts corresponding to these annotations explicitly. Specifically, our visualizations are not always restricted to a specific part of the bird, e.g., only the belly or only the neck. A visualization could include parts of both the neck and the belly, as shown in Supplementary Figure 1. Thus, it is hard to compute the activation precision metric based on that. IAIA-BL was trained using expert annotations on exactly where the network is allowed to activate [2]. Such annotations don’t really make sense for the bird dataset, since the concepts could be (as mentioned) at the boundary between the belly and the neck, which doesn’t have a name and wouldn’t be annotated by a human (but is still a good concept for us to have discovered and used). [1] Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan K Su. This looks like that: deep learning for interpretable image recognition. Advances in Neural Information Processing Systems, 32, 2019 [2] Alina Jade Barnett, Fides Regina Schwartz, Chaofan Tao, Chaofan Chen, Yinhao Ren, Joseph Y. Lo, and Cynthia Rudin. IAIA-BL: A case-based interpretable deep learning model for classification of mass lesions in digital mammography. Nature Machine Intelligence, 3:1061–1070, 2021.
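For readers trying to picture the top-k cluster loss and radius regularizer discussed in this thread, below is one plausible PyTorch reading; the distance-matrix layout, the hinge-style penalty, and the shared learnable radius are assumptions for illustration, not the authors' released implementation.

```python
import torch

def topk_cluster_loss(dists, k, radius):
    """Pull each concept's k nearest training patches inside its ball by
    penalizing how far the k smallest patch-to-center distances exceed
    the ball radius. dists: (num_patches, num_concepts) latent distances."""
    topk = torch.topk(dists, k, dim=0, largest=False).values  # (k, num_concepts)
    return torch.clamp(topk - radius, min=0).mean()

def radius_loss(radius):
    """Discourage trivially large balls that would contain everything."""
    return radius.pow(2).mean()

# Toy usage: 200 training patches, 10 concepts, one shared learnable radius.
dists = torch.rand(200, 10)
radius = torch.tensor(0.3, requires_grad=True)
loss = topk_cluster_loss(dists, k=10, radius=radius) + 0.1 * radius_loss(radius)
```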
Summary: This paper extends the patch comparison of prototypical part-based classifiers from one-to-one patch comparisons to one-to-many (e.g. from "this looks like that" to "this looks like those"). They compare their proposed network with ProtoPNet variants and show no classification accuracy improvements and little evidence of better interpretability. The proposed method was tested on CUB and Cars, as previously done in ProtoPNet variants. I have read the authors’ rebuttal and adjusted the rating accordingly! Strengths: 1. The research is well positioned in the literature of prototypical part-based classifiers. 2. The extension from 1vs1 to 1vsmany is interesting and could help improve the human understanding tasks that all prototypical part-based classifiers fall short on [1] [1] HIVE: evaluating the human interpretability of visual explanations Weaknesses: There are multiple weaknesses of this paper, mostly about the novelty. 1. I see the extension as incremental, though well motivated. There is no surprise in the classifier accuracy compared to other ProtoPNet variants. In my experience, all of them perform roughly the same. I believe the "big fish" is to really make these prototypical part-based explanations usable for humans. 2. Although it was motivated by the lack of interpretability of prototypical part-based classifiers, the analysis of model interpretability is superficial and not spot-on. I believe maybe this work also fails to help humans [1], like its siblings [2,3]. The authors could improve the interpretability analysis by running a more extensive study (both automatic and human studies) rather than dissecting sample-wise visualizations that could be easily cherry-picked and misleading. 3. I think the paper lacks an analysis of computation cost (i.e. a 1vs1 vs. 1vsmany comparison). [2] ProtoPNet [3] ProtoTree Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The 1vsmany patch-wise comparison has been studied before, and the authors of [4] even conducted human studies on interpretability. How does this work differ from theirs [4]? [4] Visual correspondence-based explanations improve AI robustness and human-AI team accuracy Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review. However, we would like to make some clarifications as follows: >I believe maybe this work also fails to help humans [1], like its siblings [2,3]. The authors could improve the interpretability analysis by running a more extensive study (both automatic and human studies) rather than dissecting sample-wise visualizations that could be easily cherry-picked and misleading. **A**: As per your suggestion, we evaluated the added interpretability of our ProtoConcepts module using the **distinction** survey task in HIVE [1]. In this task, we asked participants to select the correct prediction out of two options based on provided visual explanations, and found that our ProtoConcepts module allowed for a statistically significant increase in user interpretability. Please see the global rebuttal for exact study details and results. We thank you for suggesting a human study and believe that our survey results objectively show the interpretability benefit of our method. In terms of cherry-picking, we included a lot of examples in the appendix so the user can see whether we are cherry-picking (we are not). >I think the paper lacks an analysis of computation cost **A**: Thank you for this input; we find that our method is actually more computationally efficient than previous prototype-based methods. Although our method allows for multiple visualizations, the only real computational overhead that is added comes when generating visual explanations after inference, by calculating which training patches lie within each concept ball. However, this operation only needs to be performed once and is still relatively fast. During training and inference, the addition of our ProtoConcepts module amounts to a single thresholding layer after the distance calculation in latent space, which we have found to have negligible computational cost. In addition, our set representation for prototypical concepts allows us to skip the projection step of other prototype-based networks, which allows for faster training of a ProtoConcepts network. We thank the reviewer again for this suggestion and will add further discussion of computational efficiency to the supplement. >The 1vsmany patch-wise comparison has been studied before, and the authors of [4] even conducted human studies on interpretability. How does this work differ from theirs [4]? **A**: There are some major differences between our work and [4]. The authors of [4] do not learn any concepts or prototypes. Instead, their study uses kNN-based modeling and has to compare a given test image with all training images to find the top 20 training images that are most similar to the test image. This is computationally expensive. On the other hand, our work does not use kNN-based modeling and instead learns a fixed set of relevant prototypes and concepts from the training set. Unlike [4], our model works by comparing a given test image with the learned prototypes/concepts, thereby avoiding having to compare each test image with the entire training set. We also benefit from learning concepts that are meaningful in the domain. Although the study by [4] also considers a one-vs-many patch algorithm, the multiple visualizations are ranked by their similarity scores to the given test image and thus contribute unevenly to the decision process. However, our approach finds multiple visualizations for each learned concept, which contribute **equally** to the decision process.
[1] Kim et al., HIVE: evaluating the human interpretability of visual explanations, ECCV 2022 [2] Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan K Su. This looks like that: deep learning for interpretable image recognition. Advances in Neural Information Processing Systems, 32, 2019 [3] Meike Nauta, Ron Van Bree, and Christin Seifert. Neural prototype trees for interpretable fine-grained image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14933–14943, 2021. [4] Nguyen, Giang, Mohammad Reza Taesiri, and Anh Nguyen. "Visual correspondence-based explanations improve AI robustness and human-AI team accuracy." Neural Information Processing Systems (NeurIPS) (2022). --- Rebuttal Comment 1.1: Title: Concerns remain about the evaluation and novelty Comment: I would like to thank the authors for the efforts in the rebuttal. After reading your rebuttal and reviews from other reviewers, my replies are: 1. Not only me: the other reviewers also question the novelty of this work (wBiV in Q1, x5Kg in W1). 2. I genuinely value the authors' efforts in conducting the human study, but I remain unconvinced by the results. * The report lacks specifics regarding the human study—like the number of participants and their expertise (whether they were experts or laypeople), etc… * Your comparison with the random choice (50%) and ProtoPNet—a model with nearly random choice accuracy—doesn’t adequately capture human utility due to the weak baselines employed. Given that, my primary concerns are still about the novelty and evaluation methodology of this paper. I raise the score (to 4) and am open to more discussion. --- Reply to Comment 1.1.1: Comment: Thank you for your comments, and no problem, let's try again. >Not only me, the other reviewers also question the novelty of this work (wBiV in Q1, x5Kg in W1). I genuinely value the authors' efforts in conducting the human study, but I remain unconvinced by the results. **A**: Our paper's novelty is to provide a **non-posthoc** method that discovers interpretable concepts and uses them for case-based reasoning classification. No other methods do this. ProtoPNet does not reveal concepts, only images. It has 800+ citations, and none of them do this. The reason this is important is that people don't understand a concept from a single image. **Multiple examples** make it clear (see Figure 1, where we can tell whether it's the shape of the beak, color, image saturation, etc., that is being used because there are multiple images). The discussions with other reviewers are not relevant here. Those have been a clarification of posthoc and heuristic methods vs. non-posthoc methods, and our method is not in the same genre as the posthoc methods they mentioned, which we clarified (and one of their suggestions is not possible at all). Feel free to read our responses to those reviewers. >The report lacks specifics regarding the human study—like the number of participants and their expertise (whether they were experts or laypeople), etc… Your comparison with the random choice (50$\%$) and ProtoPNet—a model with nearly random choice accuracy—doesn’t adequately capture human utility due to the weak baselines employed. **A**: Please see the global rebuttal that we had submitted. There were 50 participants; one was thrown away due to sanity checks. The participants rated their expertise with AI. That 50\% value reflects that this was an extremely difficult experiment!
The HIVE paper had 4 classes, so random guessing would be 25\%, but ProtoPNet got $>50\%$. Thus, the ProtoPNet baseline is *not* weak, contrary to what you state. In our human studies experiment, we took only the *top two* classes, which makes the problem extremely difficult for humans, so it is not a surprise to get 50\% accuracy. The fact that our method got higher than that is thus very meaningful. To add a bit more information on this human studies experiment, the ProtoPNet results were generally lower than our ProtoConcept results -- the median for ProtoPNet on the distinction task was 55\% whereas ProtoConcept's median was 66\%. (There were a few people who got low scores, which reduced the mean.) We hope this clarification is helpful. We went to a lot of trouble to conduct this experiment quickly to satisfy your request, and the results were quite good. We're not sure what else we could have done... >Given that, my primary concerns are still about the novelty and evaluation methodology of this paper. I raise the score (to 4) and am open to more discussion. **A**: We hope our clarifications are helpful: regarding the novelty and usefulness of the method, the fact that ProtoPNet is not a weak competitor, that we did provide experimental details, and that our results were good. Thank you for engaging with us!
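The "single thresholding layer" and the explanation-generation step described earlier in this thread admit a compact sketch; this is a minimal illustration under assumptions (a precomputed latent distance matrix, a per-concept or shared radius), and the exact mapping from thresholded distances to similarity scores varies across the base models (ProtoPNet, TesNet, ProtoPool).

```python
import torch

def concept_ball_members(train_patch_dists, radius):
    """Visualization step: a training patch visualizes a concept iff it
    lies inside that concept's ball in latent space.
    train_patch_dists: (num_patches, num_concepts) latent distances."""
    return train_patch_dists <= radius        # boolean membership mask

def thresholded_similarity(test_dists, radius):
    """Inference-side thresholding applied after the usual distance
    computation; test patches outside a ball contribute zero to that
    concept's similarity score."""
    return torch.clamp(radius - test_dists, min=0)
```

Because membership in each ball is computed once over the training set, this supports multiple visualizations per concept without the per-query nearest-neighbor search over all training images that the rebuttal attributes to [4].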
Summary: This paper improves interpretability for prototypical learning. Most previous methods use a single prototype as an interpretation -- and this is not as informative, as we don't know what portion of the prototype corresponds to our target image. The authors introduce a *prototypical ball* to interpret what prototypes are used for inference. Instead of selecting a single prototype, training samples are drawn from this prototypical ball. Strengths: *Originality*: I am not completely informed on all prototypical learning methods, but it appears that this method is original. In addition to the novel use of a ball for multiple visualizations, they describe a training scheme specifically for their method. This contributes to the originality of their work. *Quality*/*Clarity*: The paper is generally presented well, with descriptions of previous work and their drawbacks, as well as what contributions improve the current work. I think some of the descriptions of previous work are not very clear, but I understand space constraints preclude a better description. I believe that the main contribution of this paper can be more clearly elucidated in the figures, which I describe in the ``Weaknesses'' section. *Significance*: I think this work is important to the community. While it matches the performance of previous methods, it presents a new way of interpreting the results of algorithms. Integrating this novelty into previous works adds to its significance. Weaknesses: * A better explanation of interpretability would strengthen the paper. The results do improve interpretability, but the fundamental question (that is introduced in the paper) still remains: why are these selected as prototypes? There are more images and we humans can make connections and inferences, but it's still unclear as to why these are the prototypes within the ball. I do see this explicitly addressed in Section 5, but I still think it is a weakness of the paper. I think an analysis like your reference [16] would strengthen your paper significantly. I don't know if it is critical to this paper, but it would be a good follow-up paper. * Figure 2 is fairly unclear. It's difficult to understand what your method is actually doing. It's not clear how the logits are obtained either. Overall, if I were to look at that diagram without the help of the text, it would be quite unclear as to what your novelty is and how it is implemented. The ball that you introduce, which is your main contribution, is missing from the figure. A couple small things: * Please add a reference for ProtoPool in Figure 1 and line 209. * References [16] and [18] look very similar -- are these the same? * The text "ProtoConcepts Layer $g_p$" in Figure 2 looks compressed compared to the rest of the text in that figure. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Do you have an ablation on the loss scaling parameters in the supplementary (Tables 2,4)? How were these $\lambda$ chosen? How does performance vary with this? * Why was $k=10$ chosen for $\mathcal{L}_{\text{clstk}}$? Do you have an ablation on this? * Continuing from the section above, I think a figure showing the ball, its center, and radius with its prototypes would help readers understand this work more clearly. For example, in Figure 1, the multiple visualizations of prototypical concepts don't clearly identify how those multiple visualizations are obtained (i.e.
you could use the same figure if the results were the same using a k-NN algorithm) (same for Figure 2) * In Equation 3, I think it's important that you show the full loss equation, i.e., $\mathcal{L} = \mathcal{L}_{\text{other terms}} + \lambda_1 \mathcal{L}_{\text{clstk}} + \lambda_2 \mathcal{L}_{\text{rad}}$. This would make clear what the loss weights in Section 4.1.1 mean. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review comments. We are happy to make some clarifications as follows: > Why are these selected as prototypes? **A**: These prototypes are learned. The **data** chooses these prototypes. The interpretation of the prototypes as a concept is done by the human. The algorithm doesn't know what (e.g.) "long beak" is -- it's supposed to discover a concept as a bunch of images of beaks. If you had only one example of a beak, it would just be a picture of a bird's head, and you wouldn't necessarily know what aspect of the beak was important (color? length? shape? combination of those?), but if you had several examples, you could detect the learned concept. Moreover, we performed a post-hoc analysis similar to [1] on the ProtoPool-Concepts examples, and the analysis can be found in the global rebuttal Figure 1. It is worth noting that the texture and shape, as shown in the result, are the two most important factors in learning the "long beak" concept for the given test image of Brown Thrasher. (Note that the ability to learn the prototypes from data is not a weakness; it is a strength, since it can find concepts that have not yet been defined by a human. It also makes it much easier to assemble the data, since the concepts do not need to be defined in advance.) >Figure 2 is fairly unclear. It's difficult to understand what your method is actually doing. It's not clear how the logits are obtained either. Overall, if I were to look at that diagram without the help of the text, it would be quite unclear as to what your novelty is and how it is implemented. The ball that you introduce, which is your main contribution, is missing from the figure. **A**: Thank you for your comment. Because we present a general method which can be added to a wide range of prototype-based models, each of which has a different architecture for calculating logits, we tried to create a figure which describes the prototype-based reasoning process in generality. However, we agree that it can be difficult to understand how the architecture works for a specific model given only the information in Figure 2. Therefore we will create separate architecture figures for each prototype-based model in our experiments and add them to the supplement, so readers can understand the prototype-based reasoning process for each model. >Do you have an ablation on the loss scaling parameters in the supplementary (Tables 2,4)? How were these chosen? How does performance vary with this? **A**: The parameters in Tables 2 and 4 are directly borrowed from the corresponding previous models, ProtoPNet and ProtoPool. They were obtained through fine-tuning by the previous works. For the TesNet parameters and the weight of the radius loss, we tuned the weights by grid search to adapt our method to the model. An ablation study on different choices of radius can be found in Table 2 of the main paper. >Why was K=10 chosen for clstk? Do you have an ablation on this? **A**: K is a hyperparameter of our model and is selected through parameter tuning by grid search. An ablation over different choices of K on the ProtoPool-based model can be found in Table 1 of the supplementary material. >A couple small things: Please add reference for ProtoPool in Figure 1 and line 209. References [16] and [18] look very similar -- are these the same? The text "ProtoConcepts Layer " in Figure 2 looks compressed compared to the rest of the text in that figure. **A**: Thank you for catching these small errors. 
We will fix these issues in our final manuscript. [1] Meike Nauta, Annemarie Jutte, Jesper Provoost, and Christin Seifert. This looks like that, because ... explaining prototypes for interpretable image recognition. In Machine Learning and Principles and Practice of Knowledge Discovery in Databases, pages 441–456. Springer International Publishing, 2021. ISBN 978-3-030-93736-2. --- Rebuttal Comment 1.1: Comment: Thank you for your response. *A: Thank you for your comment. Because we present a general method which can be added to a wide range of prototype-based models, each of which has a different architecture for calculating logits, we tried to create a figure which describes the prototype-based reasoning process in generality. However, we agree that it can be difficult to understand how the architecture works for a specific model given only the information in Figure 2. Therefore we will create separate architecture figures for each prototype-based model in our experiments and add them to the supplement, so readers can understand the prototype-based reasoning process for each model.* I think what I mean is: the core algorithmic idea (as I understand it) is generating the prototypical ball from which prototypes are generated. It would be good to see this in the figure. I'm not sure if an architecture-specific figure is needed, but I would like to see your core contribution. You have sufficiently addressed my concerns. I think the global rebuttal and results presented there improve the strength of the paper. If the authors commit to including the improvements listed in their individual and global rebuttals, I will increase my score to "Accept". --- Reply to Comment 1.1.1: Comment: Thank you for your comments and clarifications! We will add our core algorithmic idea (the prototypical ball) to Figure 2 in the final manuscript. We will also add the survey and its results to the final manuscript.
Rebuttal 1: Rebuttal: **User Study** We thank all the reviewers for their comments. To show the reduction of ambiguity (and resulting improvement in user interpretability), we created a **distinction** user study similar to HIVE[1] to compare our ProtoConcepts method with ProtoPNet. We randomly picked ten samples from the test set and calculated the top two predicted classes (i.e., the classes with the highest predicted probabilities according to the model) for each test sample. We then provided visual explanations from the most activated prototypes for these classes by ProtoPNet and ProtoConcepts. A test-taker was then asked to choose which class the model is actually predicting, looking only at the visual explanations without the class probabilities. Examples of our user study are shown in the global rebuttal pdf Figure 2. We released our user study on Amazon Mechanical Turk and collected 50 responses from test takers with a 98% survey approval rate to ensure the quality of responses, and removed 1 response from both surveys after screening for nonsensical free response answers. We first ran a two-sided t-test on self-rated ML experience for the test takers from the ProtoPNet and ProtoConcept surveys. The p-value is 1, and we are assured that there is no statistically significant difference in machine learning experience between the two groups on average. We further performed a one-sided, two-sample Welch t-test with the alternative hypothesis that the ProtoConcept user study results in higher accuracy than the ProtoPNet one on average. The p-value is 0.003, which means that **ProtoConcept exhibited a statistically significant improvement in model interpretability over ProtoPNet**. Moreover, users given visual explanations from ProtoPNet could not statistically beat a random guessing accuracy of 50 percent ($p = 0.289$), which is consistent with the findings in the HIVE paper[1]. With our model, users' mean accuracy was able to beat random guessing by a statistically significant margin ($p = 2.85 \times 10^{-5}$). Our survey results show not only that **our model provides a notable improvement in user interpretability**, but also that it is able to **improve non-expert user performance** in a difficult fine-grained classification task, whereas the previous ProtoPNet model cannot. The detailed test statistics can be found in global rebuttal pdf Table 1. [1] Kim et al., HIVE: evaluating the human interpretability of visual explanations, ECCV 2022 Pdf: /pdf/af55b2092d2c5e5ab09466bd34ccc5f8496d9e4d.pdf
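As an editorial aside: the significance tests described in this rebuttal map directly onto `scipy.stats`. A minimal sketch, assuming `acc_concept` and `acc_protopnet` hold per-respondent accuracy arrays (the function and variable names are illustrative, not taken from the paper):

```python
from scipy import stats

def user_study_tests(acc_concept, acc_protopnet, chance=0.5):
    # One-sided, two-sample Welch t-test: does ProtoConcept beat ProtoPNet on average?
    welch = stats.ttest_ind(acc_concept, acc_protopnet,
                            equal_var=False, alternative="greater")
    # One-sample t-tests against random guessing on the two-class distinction task.
    concept_vs_chance = stats.ttest_1samp(acc_concept, chance, alternative="greater")
    protopnet_vs_chance = stats.ttest_1samp(acc_protopnet, chance, alternative="greater")
    return welch.pvalue, concept_vs_chance.pvalue, protopnet_vs_chance.pvalue
```

With `equal_var=False`, `ttest_ind` performs Welch's t-test, which does not assume equal variances between the two respondent groups.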
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
SmooSeg: Smoothness Prior for Unsupervised Semantic Segmentation
Accept (poster)
Summary: The paper tackles unsupervised semantic segmentation, which aims to group pixels into semantic clusters without manual annotation. This paper proposes a smoothness prior, enforcing that adjacent features in a metric space share the same semantics. The approach relies on features from a self-supervised representation learning method (i.e., DINO). Additionally, the architecture relies on an asymmetric teacher-student style predictor that updates pseudo labels, leading to smoother segmentation predictions. This approach outperforms STEGO (prior s-o-t-a) in terms of pixel accuracy on three datasets: COCOStuff, Cityscapes, and Potsdam with a ViT-S backbone. Strengths: ### Method: - Leverages advances in self-supervised learning (SSL): The proposed method leverages the advancements in self-supervised representation learning in generating dense representations. Notably, it proposes to model the relationships between image patches by relying on high-level features from a pre-trained DINO model. - Kmeans is not required: The introduced smoothness prior can be used to directly optimize the generated segmentation map. The paper claims this strategy leads to more coherent and accurate segmentation results than prior work STEGO. In particular, the latter relies on minibatch Kmeans to obtain the final segmentation map. - Learnable prior: The approach does not rely on grouping priors as explored in prior works (see below for references). Instead, the approach implicitly uses a learnable grouping prior by relying on the self-supervised representation learning method (i.e., DINO). ### Experiments: - Dataset variety: The approach is applicable to different datasets. In particular, the experimental validation considers 3 different datasets: COCOstuff, Cityscapes, and Potsdam. - Quantitative results: the approach outperforms prior art (STEGO) on pixel accuracy when using DINO pre-trained weights on the Cityscapes and COCOstuff datasets. ### Misc. - Clarity: The method is well presented by relying on clear figures (e.g., Figure 2). The paper is also easy to follow (for someone who's familiar with the literature at least). Weaknesses: ### 1. Originality Method - Difference with STEGO / SlotCon: - The paper can be seen as a combination of STEGO and SlotCon. The method largely follows the methodology of STEGO. The main SmooSeg loss is similar to the correlation loss in STEGO, which is also mentioned in the paper (see L158). Overall, I don't think this is a major issue as I haven't seen this combination being applied to the task of unsupervised semantic segmentation directly. - (Minor) I also don't completely follow the reasoning that the proposed method is superior to Kmeans (L165). While this is indeed an advantage, it's not clear why Kmeans would necessarily perform worse, as used in STEGO (both STEGO and SmooSeg rely on the features from DINO anyway). Kmeans also has to rely on a distance metric (e.g., cosine distance between features), resulting in similar features/patches being assigned to the same cluster. ### 2. Robustness - setting hyperparameters: There are many parameters that require supervision during training: - CRF weights: The weights of the CRF are determined using the annotated validation set. How can these be determined without annotations? It's not clear from the supplementary if the weights are dataset-specific. (I assume that https://github.com/lucasb-eyer/pydensecrf was used.) - Number of classes: The number of classes is not always known a priori. 
In particular, it's not clear how much the approach relies on this information. As a result, the introduced setup (section 3) is relatively artificial. In most practical applications, this information won't be available. As a result, there are important experiments missing, where the sensitivity to this parameter is ablated (ideally comparing the mean and variance over multiple runs with other methods, e.g., STEGO and PiCIE). - Dataset-specific parameters: Overall, there are many training parameters that differ across datasets and require finetuning (i.e., loss parameters). Ideally, the same set of parameters is applicable to new/unseen images, especially for an 'unsupervised' method. Now, it's hard to judge the robustness of the approach as it relies on finetuned parameters for each dataset. To my knowledge, this is not the case for prior works and baselines (IIC, PiCIE, DeepCluster). Also, TransFGU keeps most of its parameters constant. While this issue is somewhat tackled in the supplementary (see Section B), it's still not clear if this strategy was actually used to set these parameters during training. ### 3. Scope of the experiments: - The experiments are limited to results on 27 classes (COCOStuff and Cityscapes). TransFGU also includes COCO-80 and COCOStuff-171. It would be interesting to see the performance for these setups as well. - A more practical setting is the semi-supervised setup, especially as the 'unsupervised' results are relatively poor and not immediately useful for practical applications. Does the method improve the representations by finetuning the learned representation compared to DINO? Finetuning the complete model or simply training a linear probe on top of the model are interesting experiments [f]. I would also expect that the learned representation can be efficiently fine-tuned with only a few samples [b]. ### 4. Quantitative results - different backbones: I noticed that only a ViT-S is used for COCO-stuff and Cityscapes, while a ViT-B is used for the Potsdam dataset. However, STEGO reports its final results with a ViT-B and outperforms the numbers in this paper (i.e., STEGO obtains 28 mIoU on COCOstuff with a ViT-B). So, why is a ViT-B not used for COCO-stuff and Cityscapes as well, to make sure that the observations transfer to stronger backbones? This would furthermore make the main claim of the paper stronger. ### 5. Related work There are also a few related works missing. Interestingly, some of these rely on priors, such as superpixels, edge, or saliency estimators [d, e, f], which the paper under review does not require. More suggestions can be found below: [a] Wang et al., Dense Contrastive Learning for Self-Supervised Visual Pre-Training, CVPR. [b] Wang et al., FreeSOLO: Learning to Segment Objects without Annotations, CVPR. [c] Ziegler et al., Self-Supervised Learning of Object Parts for Semantic Segmentation, CVPR. [d] Hwang et al., SegSort: Segmentation by Discriminative Sorting of Segments, CVPR. [e] Van Gansbeke et al., Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals, ICCV. [f] Zhang et al., Self-Supervised Visual Representation Learning from Hierarchical Grouping, NeurIPS. Technical Quality: 3 good Clarity: 3 good Questions for Authors: My most important questions are listed above. I also have a few additional questions: - L265 claims that the proposed method outperforms STEGO at the boundaries. However, are the boundaries not primarily dependent on the CRF? How do these methods compare when we don't use a CRF? 
It would also be useful to see a few examples without a CRF in the supplementary (optional). - How is the best model selected in the absence of a validation set? Can the loss function be used to select the best model during training? - The method is dependent on the pretrained DINO weights. We see that MoCov2 performs worse in Table 1. What happens if we use other weights (MoCov3, Pixpro, DenseCL etc.)? - Are the visualized predictions in the supplementary randomly selected? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: While some of the limitations are mentioned in the paper (see Section 4), I suggest including other limitations (e.g., known CRF weights, and known number of classes). I also couldn't find a paragraph on the societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
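As an editorial aside on the CRF questions in this review: the post-processing step under discussion is typically a few lines with the pydensecrf package the reviewer links to. A minimal sketch, using parameter values that follow the library's README defaults (not necessarily the weights used in the paper):

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image, probs, n_iters=10):
    # image: (H, W, 3) uint8 RGB; probs: (C, H, W) softmax output of the model.
    c, h, w = probs.shape
    d = dcrf.DenseCRF2D(w, h, c)
    d.setUnaryEnergy(unary_from_softmax(probs))  # negative log-probabilities
    d.addPairwiseGaussian(sxy=3, compat=3)       # spatial smoothness kernel
    d.addPairwiseBilateral(sxy=80, srgb=13,      # appearance (color) kernel
                           rgbim=np.ascontiguousarray(image), compat=10)
    q = d.inference(n_iters)
    return np.argmax(np.array(q).reshape(c, h, w), axis=0)  # refined label map
```

The `sxy`, `srgb`, and `compat` values are exactly the "CRF weights" whose tuning the review questions.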
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work. ### Why SmooSeg is superior to Kmeans used in STEGO The performance of Kmeans heavily relies on the quality and discriminative power of the features it operates on. In the case of DINO features, as shown in the 2nd column of Figure 5, the Kmeans centers do not possess much semantic meaning, as they nearly coincide at a single point. This phenomenon indicates that the DINO features lack sufficiently discriminative information to learn distinct semantic centers through Kmeans. Additionally, STEGO aims to distill DINO features into a more compact and structured representation using a distillation loss, making the class centers of Kmeans in this learned embedding space more semantically meaningful (3rd column of Figure 5). However, it is worth noting that although the DINO features and STEGO embeddings may exhibit many compact sub-clusters, they are not continuous or coherent in the semantic sense. In contrast, our SmooSeg, empowered by the smoothness prior, can produce highly compact and coherent semantic clusters, as evidenced by the last column of Figure 5. ### Setting hyperparameters - **CRF weights:** Please see the general response. - **Number of classes:** Predefining the number of classes appears to be a common practice in existing studies that address the problem of unsupervised semantic segmentation. PiCIE adopts the number of classes as the number of cluster centroids in its algorithm; TransFGU adopts this number as the number of class embeddings in its Segmenter module; and STEGO requires a predefined number of classes when performing Kmeans. We follow this problem setting in our work for a fair comparison but agree that this deserves further study. - **Dataset-specific parameters:** We agree that tuning parameters without a validation set can be challenging for unsupervised methods. While some prior works (e.g., IIC and PiCIE) do not explicitly have dataset-specific parameters, they generally perform significantly worse than those with (e.g., TransFGU and STEGO). In view of the parameter setting challenge, we strive to find a practical solution, which leads to the analysis in Appendix B. Through experimentation, we found that stable results can be achieved when b1 is set at 0.5 and b2 is near 0.0, which may to some extent enhance the practicality and robustness of SmooSeg in real-world applications. ### Scope of the experiments - **Additional experiment** Thanks for your advice. COCOStuff-171 and COCOStuff-80 require much more computation, and we were not able to obtain their results before the rebuttal deadline. We will add their results in the revision. - **Semi-supervised setup and few-shot setup** We thank the reviewer for providing these constructive suggestions. As can be observed in Figure 5, the embedding of SmooSeg exhibits significantly higher semantic consistency compared to the features of DINO, which confirms that SmooSeg is able to enhance representations over DINO. We are also curious whether the representations learnt by SmooSeg can be further fine-tuned with a few samples. As the current work focuses on the unsupervised setting, we would like to reserve this exploration for future studies. ### The choice of ViT-S The choice of using ViT-S as the backbone on the COCO-stuff and Cityscapes datasets was made to ensure a fair comparison with both STEGO and TransFGU, as TransFGU only adopts ViT-S as its backbone. 
Accordingly, the results of TransFGU are directly cited from their original paper. ### Missing references We appreciate the reviewer for providing these valuable references. In our revision, we will include a thorough discussion of these works to highlight the distinctions in approaches. ### The performance and examples without CRF Please see the general response. ### Select the best model without a validation set In practice, the loss from the pretext task may not always be a reliable indicator of the model's performance on actual downstream tasks. Strictly speaking, it is impossible to select the "best" model, but there are some practical approaches for choosing a good model without a validation set: - Setting a global training step and selecting the last saved model. As the training progresses, the model tends to become more stable, and the performance may also stabilize. In our experiments, we have found that the checkpoint at the end of training often shows good results. - Making visualizations of segmentation maps for some training samples during training. By comparing these results, one can qualitatively assess the performance of the model and select a model that shows promising segmentation results. ### The performance with different backbones Following your suggestion, we further conduct experiments on COCOStuff by using Pixpro and DenseCL weights for SmooSeg. | Methods | Acc | mIoU | | -------- | ---- | ---- | | SmooSeg + DINO | 63.2 | 26.7 | | SmooSeg + MoCoV2 | 52.4 | 18.8 | | SmooSeg + pixpro | 48.3 | 18.1 | | SmooSeg + DenseCL | 54.6 | 19.2 | DINO features indeed yield the best performance for SmooSeg compared to other SSL representations. Their superiority on dense prediction tasks is also evident from their widespread adoption across the literature (in STEGO, TransFGU, and Deep spectral methods [12]). On the other hand, the ResNet architectures employed in models like Pixpro, DenseCL, and MoCov2 generate low-resolution, coarse features, making them challenging to use for fine-grained semantic map prediction, especially in an unsupervised setup. ### How to select the visualized predictions To ensure a comprehensive representation of the model's performance, we manually selected a diverse and representative set of images for visualization. To provide further evaluation, we have included some difficult samples predicted by our SmooSeg in the attached PDF file. --- Rebuttal Comment 1.1: Title: Response to authors after the rebuttal Comment: I thank the authors for providing the rebuttal. There are several points I want to emphasize after reading the rebuttal and the other reviews: 1. The motivation for why the approach outperforms STEGO could be further corroborated by also including "linear probing" results. Why is this information not provided in the paper? STEGO also includes this. 2. One of the main weaknesses is that the approach requires dataset-specific hyperparameters. This limits the scalability of the approach and furthermore contradicts the claim that the approach is unsupervised. While STEGO has shown better results by changing the hyperparameters across datasets, it's reasonable to argue that this is far from realistic. Additional experiments are necessary to quantify the robustness of the approach as the adopted setup can be considered artificial: the exact number of clusters, supervised CRF weights, and ideal loss weights are currently being used. 
As a result, including linear probing and over-clustering experiments would certainly make the paper stronger. 3. In addition, semi-supervised results (e.g., with a linear probe or a shallow head) would also make the approach more useful in practice as the current mIoU scores are relatively low (e.g., 18 mIoU on Cityscapes). 4. I also still don't understand why different architectures are being used. As the approach heavily relies on STEGO, it makes more sense to include results with a ViT-B for Cityscapes. This is especially important since STEGO reports higher numbers than currently presented in the paper. This would confirm the claims in the paper. --- Reply to Comment 1.1.1: Comment: Thank you for the prompt response. **1) Regarding the linear probing:** We would like to clarify that linear probing is a **supervised** approach for assessing the quality of the representations generated by self-supervised representation learning methods. Therefore, we think that linear probing is not suitable for evaluating the performance of SmooSeg as labels are not accessible in unsupervised semantic segmentation. We would like to further elaborate on the rationale as below. Existing unsupervised semantic segmentation methods can be roughly categorized into two types: - **One stage:** IIC[3], PICIE[4], HSG[10], TransFGU[6], and the proposed SmooSeg, which directly learn semantic labels, with losses typically defined on labels. - **Two stages:** STEGO[8], representation learning + Kmeans for unsupervised semantic segmentation. In the representation learning stage, losses are defined on feature maps. For one-stage methods, as they directly output the predicted semantic maps instead of intermediate features, their performance is evaluated by Acc and mIoU, which are task-specific metrics for unsupervised semantic segmentation. Linear probing is thus not directly applicable and not reported in one-stage methods [3, 4, 6, 10], as the quality of intermediate features is not the objective of these studies. For the two-stage method STEGO, which outputs representations to be grouped by Kmeans, it is reasonable to assess the quality of the learnt representations by linear probing. Therefore, it is neither sensible nor fair to compare the results of linear probing between a one-stage and a two-stage method due to their distinct objectives. In addition, linear probing may not be a reliable assessment even for two-stage unsupervised segmentation methods because of its sensitivity to feature dimensionality. To illustrate this point, we cite the **linear probing results** from Table 2, page 5, in [a]: | Method | COCOStuff | Cityscapes | | -------- | -------- | -------- | | | Acc / mIoU | Acc / mIoU | | DINO (ViT-B)| 75.8 / **44.4** | **91.3** / **34.9** | | STEGO | **76.1** / 41.0 | 89.6 / 28.0 | It is shown that the linear probing results of DINO (dimensionality of features: 768) are better than those of STEGO (dimensionality of embedding: 90 or 100). These linear probing results contrast with the superiority of STEGO over DINO for the unsupervised semantic segmentation task. *[a] Uncovering the Inner Workings of STEGO for Safe Unsupervised Semantic Segmentation. CVPR, 2023.* **2) Semi-supervised setup.** Our method aims to address the problem of unsupervised semantic segmentation rather than semi-supervised semantic segmentation [b,c]. 
*[b] Semi-supervised semantic segmentation with prototype-based consistency regularization, NeurIPS 2022.* *[c] Semi-supervised semantic segmentation via gentle teaching assistant, NeurIPS 2022.* **3) Problem setup and hyperparameter issue.** We strictly follow the established unsupervised semantic segmentation setup as in [3,4,6,8], where **the number of clusters is predefined**. For example, this number is used as the number of cluster centroids in PiCIE [4], the number of class embeddings in TransFGU [6], and the "K" in Kmeans in STEGO [8]. In addition, it's important to highlight that **CRF with default settings is employed only during the testing phase**. Therefore, its parameters are not relevant to the hyperparameters of our method. Finally, most existing SOTA methods, including STEGO and TransFGU, also contain dataset-specific hyperparameters. To further demonstrate the effect of the number of classes, we tune the number of prototypes in SmooSeg and show the results below. The best result of SmooSeg is obtained when the number of prototypes equals the ground-truth number of classes. Interestingly, even with more prototypes, SmooSeg consistently outperforms STEGO (Acc: 48.3, mIoU: 24.5). | Number of prototypes | 27 | 30 | 33 | 37 | |:--------------------:|:-------------------:|:-----------:|:-----------:|:-----------:| | | Acc / mIoU | Acc / mIoU | Acc / mIoU | Acc / mIoU | | SmooSeg | **63.2** / **26.7** | 58.5 / 25.6 | 61.3 / 25.3 | 59.1 / 24.2 | **4) Additional results with DINO ViT-B/8 on the Cityscapes dataset.** The results of SmooSeg with DINO ViT-B/8 as the backbone are provided below. It's evident that SmooSeg is also superior to STEGO under this backbone. | Method | Acc | mIoU | | -------- | -------- | -------- | | STEGO (ViT-B) | 73.2 | 21.0 | | SmooSeg (ViT-B) | **84.5** | **21.5** |
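As context for the linear-probing debate in this thread: the protocol amounts to training a supervised 1x1-convolution classifier on top of frozen features. A minimal sketch, with illustrative dimensions (e.g., DINO ViT-B features and 27 COCOStuff classes; the names are not from the paper):

```python
import torch.nn as nn
import torch.nn.functional as F

feat_dim, n_classes = 768, 27  # illustrative: DINO ViT-B features, COCOStuff-27
probe = nn.Conv2d(feat_dim, n_classes, kernel_size=1)  # per-pixel linear probe

def probe_loss(frozen_feats, labels):
    # frozen_feats: (B, feat_dim, H, W) from the frozen backbone; labels: (B, H, W).
    logits = probe(frozen_feats.detach())  # the backbone receives no gradients
    return F.cross_entropy(logits, labels)
```

Because the probe is trained with ground-truth labels, it measures feature quality rather than unsupervised segmentation performance, which is the distinction the reply above draws.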
Summary: The paper presents an unsupervised training method for semantic segmentation models. It trains the network on a loss with two terms: one encouraging smoothness in the segmentation labels, and a "data" term based on self-training. It assumes a neural net architecture with a) an extractor that produces per-pixel features $X$, b) a projector that produces lower-dimensional features, to be compared against "prototypes" and used in self-training, and c) the "predictor" that outputs the per-pixel segmentation labels from the prototype/feature comparisons. The backbone feature extractors are primarily based on DINO [11], though some experiments appear to use MoCo [24] in its place. This is evaluated in terms of its accuracy in semantic segmentation tasks including COCOStuff, Cityscapes, and Potsdam-3. Strengths: i) Experiments consider a variety of semantic segmentation tasks: street, general, and aerial image datasets. This is a good demonstration that the method can be applied generally, even if it is sensitive to some choice of hyperparameters. ii) Uses a backbone (DINO) also trained in an unsupervised (self-supervised) way. Uses DINO ViT-S/8 [11] or MoCo [20] as backbones in different experiments on the segmentation head & training. These were also trained without hand labelling, with a self- or unsupervised method; as opposed to using, for example, a standard ImageNet backbone from supervised training. This is a correct and principled way to make sure the proposed segmentation models are truly and completely unsupervised. iii) Precise presentation Uses an algorithm written in plain code, along with diagrams and the math, to describe the method quite completely. Those parts of the method that are unclear from the definition of the loss can be deduced quite unambiguously from Algorithm 1, and vice versa. Weaknesses: iv) Not a lot of experiments or analyses explaining *why* the method works. The experiments mostly show end results (mAP, qualitative segmentation quality). Hyperparameter studies are useful, though: providing good insight into the role each hyperparameter plays. And there is a minimal ablation study in Table 4. It still seems unclear what exactly the network is learning from. For example, to what extent is the smoothness a constraint on the extractor vs. the projector? It seems possible that $E_\mathrm{smooth}$ is also penalizing the cosine distances, as defined in Equation (2), and pushing feature vectors $X$ together, and not just using them as weights for spatial smoothness in the labels $Y$, as in the intuitive explanation of Eq (1). I don't see `torch.no_grad` or any equivalent around the weight computations in Algorithm 1. Fig 5 visualizes feature embeddings on Potsdam-3, and this shows that they group similar classes together. Though, if the mapping between feature vectors $X$ and the labels $Y$, in the projector and predictor, is a "smooth" function (in the sense of Lipschitz smoothness) so that similar features get similar labels, might we already expect that the smoothness penalty is reduced, even if there is no meaningful spatial smoothness of the labels? The relative role/contribution of the within-image and between-image comparisons also seems to complicate the intuitive explanation of the method. There is a large accuracy drop if $E_\mathrm{smooth}$ only penalizes adjacent pixels within the image, and not across the images. 
The motivation for $E_\mathrm{smooth}$ that adjacent pixels are likely to have the same label, the "natural tendency towards piecewise coherence regarding semantics, texture, or color," does not apply to pixels within different images. The smoothness between pixels that are adjacent in a metric space is a property of the construction of that metric space, not a natural property of images, so this part's role in the smoothness term perhaps needs more explanation and experiments. Why do we expect, with this unsupervised loss, that the metric space will be semantically meaningful? v) Novelty is arguably more limited than is claimed. The use, in unsupervised training of semantic segmentation models, of spatial smoothness of the labels does seem to have been explored before. Though many of the particulars and how it's used within the overall framework are new as of this submission. Smoothness priors themselves are, of course, quite common in semantic segmentation. Spectral methods can also be seen as a form of smoothness prior, but using squared differences in place of the $\delta$ in Eq (1) of the submission. This kind of prior is used in unsupervised training of semantic segmentation models in: [a] Xia & Kulis "W-Net: A Deep Model for Fully Unsupervised Image Segmentation" [b] Melas-Kyriazi et al. "Deep Spectral Methods: A Surprisingly Strong Baseline for Unsupervised Semantic Segmentation and Localization" W-Net also incorporates a fully-connected CRF, in section 4.1, though this is only for preprocessing and not for unsupervised training. I however don't see prior work that uses a fully-connected CRF within the training objective in the way that this submission does. The backbone is also updated relative to previous work: this submission is perhaps in the genre of papers that applies a given vision idea/concept to ViTs instead of CNNs. vi) There may also be some gaps in the citation of prior work similar to the submission's teacher/student/self-training approach described in section 3.2. Citations on self-supervised learning in the "Related Work" in section 2 focus on contrastive learning approaches. More similar approaches might include: [c] Scudder "Probability of error of some adaptive pattern-recognition machines" [d] Xie et al. "Self-training with Noisy Student improves ImageNet classification" Based on my read, the submission does *not* modify the inputs or consider different views between teacher and student, as in [19, 20, 21, etc.]. (If my read is incorrect, what exactly is the "pretext task" that the authors use?) Though [c,d] and similar papers differ enough from the submission's training scheme, including the use of a projector module to get the prototypes that are compared between teacher & student, that this part is not relevant to judging novelty. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Why the focus on semantic segmentation? If the labels aren't used, then I'd assume the method should equally apply to instance, panoptic, or salient-foreground segmentation. Especially given it does seem to generalize across different kinds of semantic segmentation. Is it because the prototypes $P$ are expected to somehow relate to semantic classes of "stuff?" The stated goal of the smoothness prior is that "the segmentation model is encouraged to assign similar labels to adjacent patches, thereby promoting spatial coherence within objects." What determines which pixels are "adjacent" when constructing $E_\mathrm{smooth}$? I.e. is it 4-connected, 8-connected, or fully connected? 
The pairwise weights don't seem to include spatial distance. Toward the end of Section 3.1 it's stated that the authors "also apply the smoothness prior across images." Which pairs of pixels across images are included as terms in the sum in Eq (1)? Minor comments: * Typos in Algorithm 1 comments: "updata" and "prototyeps" * Citation to [20] is repeated on line 94 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The sensitivity of the results to hyperparameter choices is mentioned as a limitation. This does seem likely to be the major drawback, with bad results like mode collapse being a concern. In Appendix B of the supplement, it is described in qualitative terms that one can monitor the distribution of the differences in soft label assignments (the "smoothness degree") to adjust the parameters. They don't give a fully reproducible method or algorithm for determining b1 and b2 to achieve the desired $\delta$ distribution, but it does seem clear what one would be looking for to tune this manually. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work. ### Why the method works **What the network is learning from.** Please refer to the general response. ### The relative role/contribution of the within-image and between-image comparisons. We apologize for any confusion caused. We clarify that our "adjacent" patches are defined in the high-level feature space generated by the frozen pre-trained model, rather than in the spatial coordinate space of images. That is, adjacent patches in our model refer to patches with similar extracted features: they may or may not lie closely in a single image and may belong to different images. Your insight regarding the performance drop when penalizing only adjacent pixels within an image aligns with our observations. The rationale behind this phenomenon is twofold: 1) the smoothness term across images provides the negative force that prevents model collapse; 2) the smoothness term across images helps construct a global and semantically meaningful distribution across the entire dataset. While it is intuitive that similar patches within the same image should share the same label, the smoothness term across images extends this concept to all patches with the same semantics across all images. This promotes the creation of coherent and meaningful semantic clusters that transcend individual images, as shown in Figure 5 in the main submission. We argue that the concept of smoothness between image patches in a metric space is reflective of the natural properties of images. This assertion is grounded in the fact that a pre-trained model can extract semantically consistent features from images. For instance, patches belonging to a common object like a dog would naturally cluster together both in the pixel space and in the metric space when provided with semantically consistent features. This highlights the reason behind our decision to construct the closeness matrices in the metric space, as opposed to the spatial coordinate space. ### Difference with W-Net and Deep Spectral Methods We deeply appreciate the reviewer's kind provision of references and recognition of the originality inherent in our overall framework. While acknowledging the utilization of a fully-connected CRF in W-Net for preprocessing, it's crucial to emphasize that our submission distinguishes itself by incorporating a smoothness loss within the context of unsupervised training, rather than being limited to preprocessing or postprocessing stages. Furthermore, we'd like to clarify that Deep Spectral Methods do not encompass model training; they utilize spectral decomposition for object or foreground segmentation. It's worth noting that Deep Spectral Methods lack the capability of distinguishing the "stuff" category. Moreover, we concur with your observation regarding the updated backbone: our method aligns particularly well with the ViT structure. ### Missing citation of prior work We sincerely appreciate the reviewer for pointing out the missing citations. In our revision, we will include these relevant works and provide a more comprehensive discussion of similar approaches. We also appreciate the reviewer's accurate understanding of our work: SmooSeg does not modify the inputs or consider different views between teacher and student. ### Why focusing on semantic segmentation The focus on semantic segmentation in SmooSeg is mainly due to the way prototypes are learned. 
In SmooSeg, the prototypes are automatically learned from image patches without any direct constraints, making it challenging to differentiate between foreground and background prototypes or to distinguish different instances. Indeed, smoothness has the potential to be incorporated into various types of semantic segmentation as a smoothness regularization term. However, for tasks requiring finer distinctions, such as instance segmentation or salient foreground segmentation, additional modifications and research may be needed to address the specific challenges associated with those tasks. Its extension to other types of segmentation tasks is an interesting direction for future research. ### The construction of closeness matrices $W^{i,i}$ and $W^{i,i^{\prime}}$ While early smoothness-prior-based methods often define adjacent pixels for a given pixel based on the 4-connected or 8-connected grid in the coordinate space, we argue that this approach neglects the high-level semantic information of image patches. In contrast, SmooSeg defines the closeness relationship among image patches in a metric space (the feature space of self-supervised representation learning methods) rather than in the coordinate space, by calculating the cosine similarity of high-level features. The resulting closeness matrix is fully connected, representing the adjacency between all image patch pairs. Theoretically, a large element value in the closeness matrix, indicating a high cosine similarity, suggests a high possibility of an "adjacent" patch pair, and vice versa. From this perspective, when minimizing the smoothness loss, SmooSeg encourages two patches to have similar labels if their relationship in the closeness matrix is positive, and vice versa. SmooSeg thus utilizes high-level semantic information to guide the smoothness regularization, leading to improved segmentation results that account for semantic coherence and consistency between image patches. We apologize for the confusion about the smoothness prior across images. In our smoothness term given by Eq (4), there are two closeness matrices involved: one within an image and another across images. Specifically, the closeness matrix across two different images $I_i$ and $I_{i^{\prime}}$ is computed as the cosine similarities of all across-image patch pairs: $W_{pq}^{ii^{\prime}}=\frac{X_{i,p} \cdot X_{i^{\prime}, q}}{||X_{i, p}|| \ ||X_{i^{\prime}, q}||}.$ ### Typos and repeated citation We will proofread the paper again and correct all the typos. --- Rebuttal Comment 1.1: Comment: > We clarify that our "adjacent" patches are defined in the high-level feature space generated by the frozen pre-trained model, rather than in the spatial coordinate space of images This is a good clarification, thanks! Definitely a few passages I'd suggest editing in the camera-ready to keep this clear. For instance, the intro seems to be setting up to motivate smoothness in the image space, or to conflate the two: > However, despite their effectiveness, these methods often overlook the property of spatial coherence of image segments > Observations close to each other, either in the form of neighboring pixels or adjacent features in a metric space, are expected to share similar semantic labels or, in Section 3.1: > In other words, the segmentation model is encouraged to assign similar labels to adjacent patches, thereby promoting spatial coherence within objects. 
--- Reply to Comment 1.1.1: Title: Thanks for your valuable suggestions Comment: Thanks a lot for your valuable suggestions that help improve our paper. We will certainly address this confusion in our revision. Please let us know if there are any additional concerns or suggestions that could further enhance the quality of this work.
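To make the adjacency-in-feature-space idea from this thread concrete, here is a hedged sketch of building a cosine-similarity closeness matrix and using it in a smoothness-style penalty. The zero-mean step follows the rebuttals' description, and the particular label penalty $\delta(p, q) = 1 - \langle y_p, y_q \rangle$ is an illustrative stand-in for the paper's Eq. (1), not the verbatim implementation:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()  # features come from a frozen pre-trained model
def closeness_matrix(Xi, Xj):
    # Xi: (P, D) patch features of image i; Xj: (Q, D) of image i' (possibly the same image).
    # W_pq = <x_p, x_q> / (||x_p|| ||x_q||), then zero-mean normalized for stable optimization.
    W = F.normalize(Xi, dim=-1) @ F.normalize(Xj, dim=-1).t()
    return W - W.mean()

def smoothness_energy(W, Yi, Yj):
    # Yi: (P, C), Yj: (Q, C) soft label assignments from the predictor.
    # Positive entries of W pull label pairs together; negative entries push them apart.
    delta = 1.0 - Yi @ Yj.t()  # illustrative label penalty in [0, 1] for soft one-hot rows
    return (W * delta).mean()
```

The `@torch.no_grad()` decorator also answers, in spirit, the reviewer's question about `torch.no_grad` around the weight computation: since the closeness weights are built from frozen features, no gradient flows through them.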
Summary: This work addresses the problem of unsupervised semantic segmentation. In contrast to STEGO (baseline), the approach learns semantic embeddings directly in a student-teacher regime without the need for K-means. The objective is entirely unsupervised and is reminiscent of a CRF energy formulation with data and a smoothness term. The pseudo labels from the teacher embeddings provide the signal for the data term. The work achieves improved segmentation accuracy on standard benchmarks over the state of the art and offers an interesting analysis. Overall, the work is compelling in quality and contribution. Strengths: - The application of the smoothness constraint is natural and simple. - The learning problem is well-designed and executed. The text is generally well-written (albeit not without minor typos). - The experiments have sufficient scope and provide valuable insights. The empirical results are compelling. Weaknesses: There are technical similarities to STEGO, such as in the loss formulation in Eq. (4). There are differences, though, as discussed in lines 158-174. However, it does take away a bit from the novelty. Empirically, the approach appears at its best with DINO features using the ViT architecture; the improvement over prior art may not translate to other SSL representations and model architectures (MoCo, ResNet, see Tab. 1). Nevertheless, the approach remains competitive. The method introduces a number of hyperparameters, which can be challenging to fine-tune in an unsupervised setup. I like that this is acknowledged in the work, however, and Appendix B provides some interesting analysis. Typos: e.g. l. 178, 307 Technical Quality: 3 good Clarity: 3 good Questions for Authors: - l. 140-141: Why is optimisation only stable with such normalisation? - Eq. 5: Why is there a need to stop the gradient flow in this fashion? - l. 212: I would be curious to understand how the predictions and the ground truth are aligned in a bit more detail. - How were the other parameters (e.g. the momentum, the temperature) chosen/fine-tuned? - I do not quite follow the sentence in l. 172: "which represents discontinuities between image patches that should be preserved." Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are discussed in the main text. The supp. material provides further analysis. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Technical similarities to STEGO We acknowledge some technical similarities between our smoothness loss and the correlation loss in STEGO. These loss formulations are not entirely novel and have been seen in various dimensionality reduction methods (e.g., [a]). Essentially, STEGO boils down to an unsupervised dimensionality reduction method [b], followed by a Kmeans grouping of the learned patch embeddings. In contrast, our method goes beyond dimensionality reduction and utilizes the smoothness prior to facilitate the learning of the labeling function, which encourages piecewise smoothness and leads to more coherent and semantically meaningful segmentation maps. Additionally, our smoothness loss directly constrains the label assignment, which brings a desirable property when compared to the correlation loss that operates on the reduced feature correspondence. *[a] He, Xiaofei, and Partha Niyogi. "Locality preserving projections." Advances in neural information processing systems 16 (2003).* *[b] Koenig, Alexander, Maximilian Schambach, and Johannes Otterbach. "Uncovering the Inner Workings of STEGO for Safe Unsupervised Semantic Segmentation." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3788-3797. 2023.* ### Backbone choice DINO features using the ViT architecture indeed yield significant benefits for downstream dense prediction tasks when compared to other SSL representations. This is evident from their wide adoption in the literature, including STEGO, TransFGU, and Deep spectral methods [12]. ### Hyper-parameter choice Fine-tuning hyper-parameters is a common requirement in state-of-the-art unsupervised methods, including STEGO and TransFGU. Therefore, we strive to find a practical solution to address this challenge, which leads to the analysis in Appendix B. ### Why is optimisation only stable with such normalisation By treating the closeness matrix as a weight matrix between nodes in a graph, the zero-mean normalization balances the negative and positive forces during optimization. This balance ensures that the optimization process is more stable, preventing excessive influence from either the negative or positive components of the closeness matrix. Consequently, this normalization contributes to a smoother and more consistent learning process. We will clarify this in the revision. ### Why is there a need to stop the gradient flow in this fashion The stop-gradient operation in Eq. 5 is an essential step in our asymmetric student-teacher style predictor. In our approach, there is no observed semantic map available for the data term, so we adopt a self-training strategy to minimize the data loss. The self-training relies on the teacher branch to generate enhanced pseudo labels, which are then used to supervise the learning of the student prototypes. To ensure stability and prevent rapid updates during each training batch, the stop-gradient operation is necessary. On the other hand, we stop the gradient flow from the data loss to the projector's parameters. This decoupling mechanism for the projector and the predictor makes the learning easier and more stable, as we can avoid the need to trade off between these two losses during model training. ### How the predictions and the ground truth are aligned The Hungarian matching algorithm is widely used to align the predictions and the ground truth in the unsupervised setting. 
The objective is to find an assignment function that maps each predicted class to a ground truth class with the minimum total cost. One first computes a cost matrix, with each entry indicating the cost of an assignment, and then uses the Hungarian matching algorithm to find the best assignment function. In practice, we utilize the `linear_sum_assignment` function from the scipy package to calculate the assignment. This function takes the cost matrix as input and outputs a list of pairs, such as $[(m_0, n_0), (m_1, n_1), ...]$, where each predicted class $m_i$ is aligned with a ground truth class $n_i$. ### How were the other parameters chosen/fine-tuned We have conducted experiments to evaluate the sensitivity of the momentum parameter and the temperature parameter. Empirically, we found that using a large momentum parameter and a small temperature parameter consistently leads to good results on all datasets. Throughout the experiments, we set these two parameters as follows: temperature parameter $\tau = 0.1$ and momentum parameter $\alpha = 0.998$, unless stated otherwise. ### Explanation of "which represents discontinuities between image patches that should be preserved." We apologize for any confusion caused. In STEGO, a correlation tensor $S$ (with each entry within $[-1, 1]$ when using cosine similarity) is used to measure the similarities between two image patches. A large value of $S_{ij}$ indicates that patches $i$ and $j$ have a high similarity and should belong to the same class. Conversely, a small value of $S_{ij}$, especially when $S_{ij} < 0$, indicates a low similarity between patches $i$ and $j$, suggesting they may be located at the boundary of segments and tend to belong to different classes. We refer to these differences between patches as "discontinuities". Under the smoothness assumption, an effective model should not only encourage piecewise smoothness but also maintain the discontinuities between image segments. Preserving these discontinuities is crucial to avoid trivial solutions. However, in STEGO, the negative part of the correlation tensor $S$ is discarded using a 0-clamp, potentially overlooking the significance of these discontinuities. In contrast, our label penalty function $\delta(\cdot, \cdot)$, which satisfies $0 \leq \delta(\cdot, \cdot) \leq 1$, possesses a desirable property compared to $S$. It allows us to properly handle and preserve these crucial discontinuities in the smoothness term, contributing to improved performance and meaningful segmentation maps in our approach. --- Rebuttal Comment 1.1: Comment: Thank you, I'm happy with the response. This work leaves an overall positive impression and I intend to keep my score. I also read the other reviews. I agree with some points, but most of them seem to be addressed well and can be included in the camera-ready. In other cases, I tend to agree with the authors that they do not seem essential (e.g. linear probing) or even feasible (e.g. adding the smoothness loss to STEGO). --- Reply to Comment 1.1.1: Title: Thanks for your constructive feedback Comment: We would like to thank the reviewer for the constructive feedback, which helps shape our revision. Please let us know if there are any additional concerns or suggestions that could further enhance the quality of this work.
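The alignment procedure named in the rebuttal above maps directly onto `scipy.optimize.linear_sum_assignment`. A minimal sketch; the overlap-count cost construction is a common convention assumed here, not taken verbatim from the paper:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_predictions(pred, gt, n_classes):
    # pred, gt: integer label arrays of the same shape (predicted vs. ground-truth classes).
    # overlap[m, n] counts pixels predicted as m whose ground truth is n.
    overlap = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(overlap, (pred.ravel(), gt.ravel()), 1)
    # Maximizing total overlap equals minimizing its negation as the cost matrix.
    rows, cols = linear_sum_assignment(-overlap)
    return dict(zip(rows.tolist(), cols.tolist()))  # predicted class -> ground-truth class
```

The returned mapping is then used to relabel predictions before computing Acc and mIoU against the ground truth.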
Summary: The paper introduces a new approach called SmooSeg for unsupervised semantic segmentation, which aims to segment images into semantic groups without manual annotation. SmooSeg is based on the idea of smoothness: adjacent features in a metric space should share the same semantics. Specifically, it formulates unsupervised semantic segmentation as an energy minimization problem (Eq. 1,4). Training is performed using teacher-student style self-training. Strengths: ### The paper is well-structured. * The introduction is well-written and motivates the problem well. The methods section provides helpful preliminaries (S3.1) in a good amount of detail. It also describes the contribution (S3.2-3.4) precisely without unnecessary complications. ### The visualizations provided are nicely done * The t-SNE plot in Figure 5 is quite illustrative, and the qualitative examples in Figure 3 are also helpful. ### Algorithm 1 does a good job of communicating the proposed method. * It is very helpful to be able to refer to the PyTorch pseudocode alongside the equations in the paper. ### The "Discussion with CRF and STEGO" section is very helpful * My natural first question upon reading the introduction and the beginning of the methods section was about the relationship between SmooSeg and CRFs. This section answered some of my questions, as it gave an analysis of their relationship. Weaknesses: ### It would help to have further analysis of why the method works. * As discussed in the paper's introduction, the idea of smoothness in image segmentation has a long history, having been explored since well before deep learning. This is especially true of energy minimization approaches, for which many different types of smoothness losses have been proposed over the years. Empirically, it seems that the performance of the smoothness loss proposed in this paper is good. However, I do not really understand _why_ it should be better than any other approach to enforcing image smoothness, such as the approach applied in CRFs. What would make this paper really useful to the community would be if it tried to understand why this particular formulation works well, so that the community can learn some generalizable lessons/insights (which could then be applied to other problem domains as well). * Is it something particular about the combination of your smoothness term and the student-teacher approach that works well, or is it each of them individually? * What would happen if you took STEGO (exactly as it is) and added your smoothness loss to their method? Would it be good, or does your smoothness loss work particularly well when combined with your student-teacher method? ### Why do you also need to apply a CRF? * The supplement states: "We also use a CRF as the post-processing to refine the predicted semantic maps." This confuses me. If you already have a smoothness loss (which is your contribution), why do you need a CRF? * How does your performance compare without a CRF? _Meta note on the score (because there is nowhere else to put this in the review)_: I am between 4 and 5. I gave a preliminary score of 4, but I am certainly willing to raise my score when the above questions/points are answered. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: ### Removing $E_{data}$ in Table 4 * For Table 4, I'm surprised that the method still works when the data term is removed. In that experiment, are there any losses apart from the smoothness term? 
Without the data term, should there not be degenerate solutions? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: Limitations are discussed adequately in the paper. One main limitation (how to set hyperparameters) is mentioned clearly at the end of the paper and addressed in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work. Some of the concerns are addressed in the "General Response" thread. Please kindly refer to the responses regarding "**why this method works**" and "**CRF**". ### Roles of the smoothness term and the student-teacher predictor We apologize for any confusion caused by the lack of detail about our ablation study in the main submission. We will make it clear in our revision. The student-teacher predictor is specifically designed for the data term. The smoothness term requires a predictor, but not necessarily a student-teacher one, to generate the label map for the label penalty. In order to justify our choice of the student-teacher predictor and the effectiveness of the smoothness term, we remove the student branch in the variant “w/o $E_{data}$” (Table 4) and the data term in the loss function. That is, we only keep the prototypes in the teacher branch to generate the label map for the smoothness term. The prototypes $P^t$ and the projector are then updated simultaneously by SGD from the smoothness term. Our ablation study shows that the smoothness term with a normal predictor incurs some performance degradation (see w/o $E_{data}$ in Table 4). On the other hand, the student-teacher predictor alone, without considering any smoothness prior, could not obtain reasonable results (see w/o $E_{smooth}$ in Table 4). Combining the smoothness term and the student-teacher predictor leads to the best performance. ### Add smoothness loss to STEGO The smoothness loss in our work is similar in form to the correlation loss in STEGO. However, the main difference is that the correlation loss aims to perform feature correspondence, while our smoothness loss works on the label penalty and requires a predictor to generate the label map. For this reason, there is no straightforward way to add our smoothness loss to STEGO. ### (Question) Removing $E_{data}$ in Table 4 In Table 4, we only used the smoothness term as the sole loss when we removed the data term. The smoothness term utilizes a closeness matrix constructed from high-level features of a pre-trained model, acting as a strong supervision signal to guide label learning for all image patches. Therefore, it is reasonable to see that the smoothness term contributes significantly to the overall performance. On the contrary, the data term operates in a self-training fashion with pseudo labels derived from the teacher branch, which alone cannot generate accurate segmentation maps, as evidenced by the extremely poor results of "w/o $E_{smooth}$" in Table 4. When combined with $E_{smooth}$, which provides enhanced pseudo labels, the data term, which aims to minimize the entropy of the predicted segmentation maps, yields performance improvements. We will update the descriptions in the ablation study to provide better clarity. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for your response and clarifications. After reading your response and the other reviews, I am satisfied that the proposed method is sufficiently different from STEGO. I agree there is no straightforward way to add the smoothness loss to STEGO; apologies for the misunderstanding. I also now get that $E_{data}$ is essentially a self-training loss; it is good to see that the method works even without $E_{data}$. I will be updating my score from 4 to 5. --- Reply to Comment 1.1.1: Title: Thanks for your constructive feedback Comment: We are glad that our response addressed all your concerns.
We also greatly appreciate that you awarded us a higher score. Please let us know if there are any additional concerns or suggestions that could further enhance the quality of this work.
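As a side note to the student-teacher discussion in this thread, a typical realization of such a teacher branch is an exponential-moving-average (EMA) update. The sketch below is a generic EMA teacher under our own assumptions (hypothetical module sizes; the momentum value echoes the parameter discussion above), not necessarily SmooSeg's exact implementation:

```python
import copy
import torch

@torch.no_grad()
def update_teacher(student: torch.nn.Module, teacher: torch.nn.Module, momentum: float = 0.998):
    # Teacher parameters track an exponential moving average of the student's.
    for p_s, p_t in zip(student.parameters(), teacher.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

student = torch.nn.Linear(64, 27)   # e.g., maps projected features to 27 class logits
teacher = copy.deepcopy(student)    # teacher starts as a copy and is updated only via EMA
for p in teacher.parameters():
    p.requires_grad_(False)

# ... after each optimizer step on the student:
update_teacher(student, teacher, momentum=0.998)
```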
Rebuttal 1: Rebuttal: # General Response We appreciate all reviewers for the highly constructive comments that help improve the paper's quality. We also thank the reviewers for the recognition of our work being well structured (**1NSo**, **gyDm**) and well presented (**iJZU**, **yftt**), and of the proposed solution being simple (**gyDm**), novel (**HQSk**), experimentally convincing (**HQSk**, **R6c1n**, **RceoA**), insightful (**HQSk**), of sufficient scope (**gyDm**), and covering various datasets (**yftt**). *We respond to some general comments as follows. We hope our rebuttal addresses the reviewers' questions and concerns. We would be more than happy to discuss with all reviewers if they still have any unresolved concerns or additional questions about the paper or our rebuttal.* ## Why our method works (@1NSo, iJZU) **@1NSo** **The difference between our method and previous smoothness-based methods.** Thank you for the thoughtful feedback. As mentioned in the introduction, one of the key challenges in applying smoothness priors to unsupervised semantic segmentation is to define a good closeness relationship among image patches. The main difference between our proposed smoothness prior and those defined in other methods lies in the definition of “adjacent” patches/pixels. Early image segmentation methods, as well as CRF models, define adjacency in the coordinate space, e.g., using a 4-connected or 8-connected grid to define the adjacent pixels of a given pixel. These methods primarily relied on low-level appearance information and fell short in capturing high-level semantic information in images. Consequently, these approaches alone may not yield good results for high-level semantic segmentation tasks, as they may struggle to effectively capture the complex relationships among image patches necessary for semantic understanding. Our smoothness prior, on the contrary, imposes closeness in the high-level feature space generated by self-supervised representation learning methods. This definition suits the task of semantic segmentation well, as it captures high-level semantic similarity. Such a closeness design provides strong supervision signals to guide the label learning effectively. Through our ablation study, we demonstrate that the proposed smoothness prior alone (w/o $E_{data}$ in Table 4) can still achieve promising results. We remark that the “across images” smoothness term ($E_{smooth}^{across}$) further extends the definition of "smoothness" from local patches to the cross-image level and is also novel and effective (see the ablation in Table 4). We will add these discussions in the revision. **@iJZU** **What the network is learning from.** (1) We apologize for not explicitly mentioning the use of `torch.no_grad` in Algorithm 1. We will include this missing detail in the revision. We did, however, state that we utilize a frozen pre-trained model as an extractor. (2) Our core learning mechanism revolves around employing the closeness matrices (Eq (2)) as smoothness signals to facilitate the learning of the projector and predictor. These closeness matrices act as strong supervision cues to guide the label learning (projector + predictor). (3) In relation to Fig 5, your understanding is accurate. After learning the smooth labeling function, similar features are already assigned similar labels, and the smoothness penalty is reduced.
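To make the mechanism above concrete, here is a rough sketch of a closeness-matrix-driven smoothness penalty. This is our own simplified reconstruction for illustration, not the paper's exact Eq. (2) or label penalty $\delta$:

```python
import torch
import torch.nn.functional as F

def smoothness_penalty(feats: torch.Tensor, logits: torch.Tensor) -> torch.Tensor:
    """feats: (N, D) frozen high-level patch features; logits: (N, K) predicted class scores."""
    f = F.normalize(feats, dim=-1)
    closeness = f @ f.t()                 # (N, N) cosine similarities in [-1, 1]
    probs = logits.softmax(dim=-1)
    # Label penalty in [0, 1]: probability that two patches receive different labels.
    delta = 1.0 - probs @ probs.t()
    # Minimizing this pushes close patches (closeness > 0) toward the same label,
    # while dissimilar patches (closeness < 0) are pushed toward different labels,
    # preserving discontinuities instead of clamping them away.
    return (closeness * delta).mean()
```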
## CRF
- **Why need to apply a CRF (@1NSo):** We would like to emphasize that CRF postprocessing is a common practice in both supervised and unsupervised semantic segmentation (line 23, p6 in [8]), and that the use of a CRF does not overshadow the contribution of the smoothness term in our work. The smoothness prior in this work operates on the high-level feature maps and mainly contributes to semantic smoothness, while the CRF operates on pixels to refine fine details and remedy the resolution loss caused by the final upsampling operation that exists in most semantic segmentation models (a typical upsampling rate is 8x8). Therefore, the application of a CRF serves as a supplement to our smoothness prior, further refining low-level smoothness.
- **The ablation of CRF (@1NSo, @yftt).** The following table demonstrates that SmooSeg still achieves state-of-the-art results without the CRF, as the CRF only accounts for $1.3$ mIoU out of the $2.2$ mIoU performance gain over STEGO. Moreover, we also present qualitative visualizations with and without the CRF in the appended PDF file. We find that the CRF is able to refine the quality of fine details for both STEGO and SmooSeg. However, SmooSeg is consistently more semantically coherent than STEGO, either with or without the CRF.

| | COCOStuff27 | Cityscapes | Potsdam-3 | Avg. |
| --------------- |:-----------:|:-----------:|:-----------:|:-----------:|
| | Acc / mIoU | Acc / mIoU | Acc / mIoU | Acc / mIoU |
| STEGO w/o CRF | 46.5 / 22.4 | 63.5 / 16.8 | 74.1 / 58.9 | 61.4 / 32.7 |
| STEGO | 48.3 / 24.5 | 69.8 / 17.6 | 77.0 / 62.6 | 65.0 / 34.9 |
| SmooSeg w/o CRF | 60.6 / 25.2 | 79.8 / 18.0 | 81.4 / 68.4 | 73.9 / 37.2 |
| SmooSeg | 63.2 / 26.7 | 82.8 / 18.4 | 82.7 / 70.3 | 76.2 / 38.5 |

- **The use of CRF not mentioned in the main submission (@HQSk):** As previous studies (e.g., STEGO) reported their performance with the CRF by default, we likewise compare performance with the CRF applied at test time by default. Thanks for pointing it out; we will move the postprocessing details from the appendix to the main manuscript in the revised version.
- **CRF weight (@yftt).** We utilized *pydensecrf* for our CRF refinement, with all parameters set to default values to make a fair comparison with previous studies. Pdf: /pdf/c2a8835e87981e6fed52702255c5ec542a64163b.pdf
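For readers unfamiliar with this postprocessing step, the following is a minimal sketch of dense CRF refinement with *pydensecrf*. The kernel parameter values here are illustrative, commonly seen defaults from the library's examples, assumed for this sketch rather than taken from the paper's configuration.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image: np.ndarray, probs: np.ndarray, n_iters: int = 10) -> np.ndarray:
    """image: (H, W, 3) uint8 RGB; probs: (C, H, W) float32 softmax class probabilities."""
    c, h, w = probs.shape
    d = dcrf.DenseCRF2D(w, h, c)
    d.setUnaryEnergy(unary_from_softmax(probs))   # -log(p) unary potentials
    d.addPairwiseGaussian(sxy=3, compat=3)        # location-only smoothness kernel
    d.addPairwiseBilateral(sxy=80, srgb=13,       # appearance (color + location) kernel
                           rgbim=np.ascontiguousarray(image), compat=10)
    q = d.inference(n_iters)                      # mean-field inference
    return np.argmax(np.array(q).reshape(c, h, w), axis=0)  # refined label map
```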
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper tackles the difficult task of unsupervised semantic segmentation at the level of the scene (dense prediction). The authors exploit the piecewise coherence regarding the semantics, texture, or color that similar objects naturally have (termed the smoothness prior). The problem formulation raises some challenges, which the authors solve by using a pre-trained (frozen) feature extractor to model the closeness relationships among observations, by introducing a novel pairwise smoothness loss, and by using a teacher-student style predictor. Results are showcased on popular benchmarks: COCOStuff, Cityscapes, and Potsdam, on which the authors report state-of-the-art performance. Strengths: 1. Originality - the problem formulation (energy minimization objective function) and the addition of the smoothness prior count as novel work. 2. Quality - the paper is decently structured; the method and experimental analysis are sound and convincing. 3. Clarity - some aspects of the paper could be improved. Some details are left out, making it challenging to reproduce the results based on the information provided in the paper. 4. Significance - the topic is indeed relevant to the research community. Weaknesses: * L112 - The architecture description paragraph is tough to follow and the dedicated figure (Figure 1 in the main submission) is overcomplicated. Also, the use of the term "prototypes" for the teacher-student paradigm is confusing. Please consider rephrasing this part. * L62 vs. L312 - minor contradiction. * L307 - Typo: "We" - no capital letter * The main submission has an appendix document, but there is no reference in the main submission regarding the contents of the supplementary material. * Important details are in the supplementary material and not mentioned in the current submission - such as the use of CRF for further refining the final segmentation maps. * In the experiments section there is no mention of how the authors produced the segmentation maps from SSL feature extractors such as ResNet50, MoCoV2, DINO (Table 1), DINO (Table 2), or DINO, DINOV2 (Table 3). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see weaknesses above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have not properly addressed the limitations of their proposed method. I highly encourage them to do so in a dedicated section in the main submission. Ideas for future work are a plus. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work. ### Architecture description is tough to follow, the dedicated figure is overcomplicated, and the term "prototypes" is confusing. 1\) We apologize for the confusion regarding the architecture description and the figure (we guess you are referring to Fig.2 instead of Fig.1 here?). To improve clarity, we will itemize the main modules/steps in the architecture description. - Step 1: feature extraction: $X = f_{\theta}(I)$, where $f_{\theta}$ denotes the extractor, a pre-trained model. - Step 2: feature projection: $Z = h_{\theta}(X)$, where $h_{\theta}$ denotes the projector, a non-linear projection head. - Step 3: label prediction: $A^{s,t} = Z^TP^{s,t}$ in both the teacher and student branches of the predictor. We will also simplify Fig.2 by, e.g., removing unnecessary annotations. 2\) We thank the reviewer for pointing out the confusion regarding "prototypes". We will clarify the term "prototypes" as "class centers" in the paper for better clarity. ### Minor contradiction and typos We greatly appreciate your careful reading and pointing out these errors. We will update the description in L312 to eliminate the inconsistency and correct any typos. ### No reference regarding the supplementary material Thank you for bringing this to our attention. We will add a reference to each appendix in the corresponding section of the revised main paper, as follows: - *L 215: “Implementation details can be found in Appendix A.”* - *L 249-251: “Additional qualitative results, along with color maps, can be found in Appendix C.”* - *L 304: “However, we present a feasible strategy in Appendix B to alleviate this issue.”* ### The use of CRF not mentioned in the submission Please see the general response. ### Implementation detail of baselines We will add the following details in the experiments section. *Results of ResNet50, MoCoV2, and DINO (Tables 1, 2, and 3) are directly cited from the paper [8], while the results of DINOV2 (Table 3) are obtained by our implementation. For these baselines, we first extracted dense features for all images. We then utilized a minibatch k-means algorithm to perform patch grouping, which resulted in the final segmentation maps.* ### Limitations: Thanks for your suggestions. We will reorganize the material and discuss the setting of the hyper-parameters in the smoothness term in more detail in the main manuscript. We observe that pre-calculated statistics for estimating the feature similarities could provide a good prior for setting these hyper-parameters; this is still under investigation in our further study.
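The three itemized steps in the rebuttal above can be expressed in a few lines of code. This is a minimal sketch under hypothetical dimensions and module choices, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

D_feat, D_proj, K = 768, 64, 27               # hypothetical feature dims and class count

extractor = lambda images: torch.randn(images.shape[0], 196, D_feat)  # stand-in for a frozen DINO
projector = nn.Sequential(nn.Linear(D_feat, D_proj), nn.ReLU(), nn.Linear(D_proj, D_proj))
prototypes = nn.Parameter(torch.randn(D_proj, K))  # the "class centers" P

images = torch.randn(8, 3, 224, 224)
with torch.no_grad():                          # Step 1: feature extraction (frozen extractor)
    X = extractor(images)                      # (B, N_patches, D_feat)
Z = projector(X)                               # Step 2: feature projection, (B, N, D_proj)
A = Z @ prototypes                             # Step 3: label prediction A = Z^T P, (B, N, K)
labels = A.argmax(dim=-1)                      # per-patch class assignment
```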
null
null
null
null
null
null
Contrastive Sampling Chains in Diffusion Models
Accept (poster)
Summary: This paper analyzes why diffusion models need an additional contrastive loss. The main target is to reduce $D_{KL}(p_t || p_t^{SDE})$. This paper provides a detailed theoretical analysis and solid examples to compare its method with existing ones. Strengths: 1. This paper analyzes the gap $D_{KL}(p_0 || p_0^{SDE})$ to design better diffusion models. While this is not the first analysis of this quantity, its solution is still interesting. 2. This paper clearly analyzes and explains why a contrastive loss is useful for diffusion models, as opposed to simply combining multiple losses without clear motivation. These analyses are beneficial for future research in this area. 3. The main paper's experiments incorporate the contrastive loss function along with several state-of-the-art acceleration methods, demonstrating robust performance. Weaknesses: 1. In the loss function $I_{InfoNCE}(x_t^{SDE}, x_j^{SDE})$, the first term is represented by $\exp(E(x_t^{SDE}), E(x_j^{SDE}))$. This term is similar to the consistent loss, which has been proven useful for diffusion models [1]. Therefore, it would be helpful if the authors could provide further analysis of the role of the second term, $\exp(E(x_j^{SDE}), E(x^-))$, and clarify how the consistent loss and the contrastive loss have different influences. 2. In Section 3, this paper discusses the discretization error. However, it is not the only error present in the sampling process. Another significant factor to consider is the estimation error arising from $s_\theta$. Although this may not influence the design of $I_{InfoNCE}(x_t^{SDE}, x_j^{SDE})$, I believe that a more detailed analysis should be provided by the authors. 3. The OOD detection results presented in Table 2 are not satisfactory. Additionally, the connection between this part and the main contribution is unclear. Are the good detection results attributed to classical diffusion models or to the new loss function proposed in this paper? [1] Daras G, Dagan Y, Dimakis A G, et al. Consistent diffusion models: Mitigating sampling drift by learning to be consistent[J]. arXiv preprint arXiv:2302.09057, 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: From both a theoretical and an empirical perspective, what are the distinct effects of the consistent loss versus the contrastive loss? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Weaknesses ### 1. Provide further analysis on the role of the second term and clarify how the consistent loss and the contrastive loss have different influences. This question is quite valuable and may provide us with a further research direction! The contrastive loss is designed to reduce the distance between similar features and push apart dissimilar ones. Hence, the contrastive loss reduces the distance between $x_{j}^{\mathrm{SDE}}$ and $x_{t}^{\mathrm{SDE}}$, while the second term $\exp(E(x_{j}^{\mathrm{SDE}}), E(x^{-}))$ of the contrastive loss pushes apart the dissimilar images $x_{j}^{\mathrm{SDE}}$ and $x^{-}$. In this manner, the sampling chain becomes tighter and the discretization error decreases accordingly, because the distance between sampling steps decreases. By comparison, the consistent loss is designed to reduce the score mismatching error. In this manner, the consistent loss helps DMs generate images $x_{0}^{\mathrm{SDE}}$ similar to $x_{0}$, which narrows the gap between $x_{0}^{\mathrm{SDE}}$ and $x_{0}$. In short, the consistent loss is utilized during the training process to decrease the score mismatching error, while our method reduces the discretization error by fine-tuning pre-trained DMs. ### 2. However, it is not the only error present in the sampling process. Another significant factor to consider is the estimation error. The errors present in the sampling process consist of the discretization error and the score mismatching error. Though both errors influence the final quality of the generated images, our work focuses on the discretization error caused by numerical solvers. The score mismatching error arises during the training process and shows little connection with numerical solvers. ### 3. The OOD detection results presented in Table 2 are not satisfactory. Additionally, the connection between this part and the main contribution is unclear. Are the good detection results attributed to classical diffusion models or to the new loss function proposed in this paper? Our work aims to improve the generative performance of diffusion models, and OOD detection is not one of our contributions. We apologize for not comprehending this question. Could you please explain it in detail? ## Questions ### 1. From both a theoretical and an empirical perspective, what are the distinct effects of the consistent loss versus the contrastive loss? From a theoretical perspective, our contrastive loss mainly focuses on the discretization error caused by numerical solvers during the sampling process. By contrast, the consistent loss concentrates on reducing the score mismatching error during the training process. On the other hand, our contrastive loss can be combined with both deterministic and stochastic sampling, while the consistent loss works seamlessly only with deterministic sampling. From an empirical perspective, our contrastive loss decreases the FID on CIFAR-10 from 2.04 (random seed) to 1.88, which demonstrates a significant improvement on the pre-trained EDM. The consistent loss also helps the pre-trained EDM reduce the FID from 1.97 (manual seed) to 1.95. Based on the above analysis, we conjecture that the consistent loss and the contrastive loss can be combined to further improve DMs. --- Rebuttal Comment 1.1: Title: Rebuttal acknowledgement Comment: I have read all the reviews and author responses, and I thank the authors for their efforts. Therefore, I agree to accept this paper and keep my score.
If OOD detection is not pertinent to your primary contribution, it may be best to remove it. --- Reply to Comment 1.1.1: Comment: We express our gratitude for your participation in the discussion and for supporting our paper! We did not include OOD detection in our paper, as our sole focus was on improving pre-trained diffusion models. To be honest, we are unsure whether this issue stems from our paper or from another paper that you have reviewed. Once again, we express our gratitude for your thoughtful discussions, which have greatly elevated the quality of our paper!
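For reference, here is a generic InfoNCE loss of the kind discussed in this thread, sketched in PyTorch under our own assumptions (cosine similarity between encoded features with a temperature $\tau$; this is the standard formulation, not necessarily the paper's exact variant):

```python
import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, positive: torch.Tensor,
             negatives: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """anchor, positive: (B, D) encoded features, e.g. E(x_t^SDE), E(x_j^SDE);
    negatives: (B, M, D) encoded features of dissimilar images E(x^-)."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negatives, dim=-1)
    pos_logit = (a * p).sum(-1, keepdim=True) / tau        # (B, 1) pulls the pair together
    neg_logits = torch.einsum('bd,bmd->bm', a, n) / tau    # (B, M) pushes negatives apart
    logits = torch.cat([pos_logit, neg_logits], dim=1)     # the positive is class 0
    return F.cross_entropy(logits, torch.zeros(len(a), dtype=torch.long))
```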
Summary: This paper proposes to fine-tune pre-trained diffusion models using contrastive losses to reduce discretization errors. The positive pair is formed by the same image at different steps, while the negative pair is formed by different images. To better optimize the contrastive loss, dynamic weighting schedules and back-propagation through time techniques are used. Strengths: 1. The paper is well-motivated to handle an inherent problem of diffusion models, i.e., discretization errors. The theoretical conclusion is meaningful and well-aligned with the motivation. 2. The proposed method of contrastive fine-tuning with weighting schedules and BPTT is interesting and well-aligned with the motivation. One advantage of this method is that it can flexibly fine-tune various off-the-shelf pre-trained diffusion models, which makes it a good contribution to the field. And it is also compatible with fast-sampling methods. 3. The experimental results are good in general, showing the effectiveness of the proposed method when combined with different baselines. Weaknesses: 1. The term “Contrastive Sampling Chain” is a bit confusing to me, since the proposed method only fine-tunes the diffusion model without changing the sampling process. I would recommend using something like “Contrastive Diffusion Chain”. Likewise, the authors should avoid phrasing such as “refinement of the sampling chain”, e.g., in L258-259. 2. No generated examples are given for qualitative comparisons. 3. It is unclear whether the code will be released or not. 4. In L134, the analysis is based on ODE (lambda = 0). Can this analysis be applied similarly to cases where lambda is not zero? 5. Minor issues: - In L27, “learn” should be “learns” - In L55, “aims” should be “aim” - In L91, “slightly equivalent” - In L104, “There” should be “Three” - In L106, “diffusion” should be “diffuse” - In L112, “modeling” should be “model” - You should have punctuation at the end of each equation - I would recommend adding short conclusions of the theory at the start of section 3 and section 4, for TL;DR purposes. - The math symbols are inconsistently in bold, such as x, s, and theta. - In L144-145, grammar errors. - In L211, citations are in the wrong position. - In L263, should “j” be “t”? - In the references, wrong capitalization such as “gans” and “Dpm”. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please address the issues in the Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are discussed. I would recommend adding a discussion about the fine-tuning cost of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Weaknesses ### 1. The term "Contrastive Sampling Chain" is a bit confusing to me, since the proposed method only fine-tunes the diffusion model without changing the sampling process. I would recommend using something like "Contrastive Diffusion Chain". Likewise, the authors should avoid phrasing such as "refinement of the sampling chain", e.g., in L258-259. Thanks for your sincere review and valuable comments! We will change the term "Contrastive Sampling Chain" to a clearer one, and your recommendation "Contrastive Diffusion Chain" is one of our considerations. Moreover, we will fix the wording in L258-259. ### 2. No generated examples are given for qualitative comparisons. To qualitatively evaluate the generated images, we visualize some of them in the one-page PDF. These generated images contain rich and coherent semantic information, which we believe demonstrates the effectiveness of our method. ### 3. It is unclear whether the code will be released or not. We have sent our code to the Area Chair, per the rebuttal policy. Moreover, we will also release our code on GitHub after the review process. ### 4. In L134, the analysis is based on ODE (lambda = 0). Can this analysis be applied similarly to cases where lambda is not zero? Our analysis of the ODE case ($\lambda = 0$) is a special case of the SDE ($\lambda \ne 0$). The discretization error is caused by the numerical solver and has no connection with the stochastic term $\lambda \boldsymbol{G}_{t} d\omega$ that is dropped when $\lambda = 0$. In other words, the gap between the approximate solution and the exact solution does not come from the term $\lambda \boldsymbol{G}_{t} d\omega$, as shown in equation (6). Concretely, the integral term of equation (6) is the factor that causes the discretization error. By comparison, $\lambda \boldsymbol{G}_{t} d\omega$ is not an integral term and can be handled without numerical solvers. Hence, our analysis applies similarly to cases where lambda is not zero; we presented the $\lambda = 0$ case for convenience. ### 5. Minor issues. We will fix all these typos in the revised paper. ## Limitations ### 1. The limitations are discussed. I would recommend adding a discussion about the fine-tuning cost of the proposed method. For fine-tuning the pre-trained EDM, our method achieves remarkable performance with only 20 epochs or even fewer, where each epoch costs about 130 seconds. For sampling images, the improved pre-trained EDM incurs overhead similar to that of the pre-trained EDM. --- Rebuttal Comment 1.1: Comment: Thanks for your feedback. I will keep my score.
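To make the discretization-error discussion above concrete, the sketch below shows a plain Euler discretization of a deterministic probability-flow ODE $dx = f_\theta(x, t)\,dt$. Each step replaces the exact integral of the drift over the interval by a single evaluation, which is the sole source of discretization error in this $\lambda = 0$ case; the drift here is a hypothetical stand-in for the learned score-based drift, not the paper's model.

```python
import torch

def euler_sample(f, x_T: torch.Tensor, T: float = 1.0, steps: int = 20) -> torch.Tensor:
    """Integrate dx = f(x, t) dt from t = T down to t = 0 with Euler steps.
    Each step approximates the integral of f over [t - dt, t] by f(x_t, t) * dt;
    fewer steps means a coarser approximation and larger discretization error."""
    dt = T / steps
    x = x_T
    for i in range(steps):
        t = T - i * dt
        x = x - f(x, t) * dt
    return x

f = lambda x, t: -x                     # toy linear drift standing in for the model
x0_coarse = euler_sample(f, torch.randn(4, 2), steps=5)
x0_fine = euler_sample(f, torch.randn(4, 2), steps=500)
```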
Summary: This paper focuses on diffusion models (DMs). Current DMs suffer from discretization error since they leverage numerical solvers to solve SDEs. To overcome this issue, the authors propose a contrastive loss when optimizing DMs. The authors present a theoretical analysis to demonstrate that combining the generative loss and the contrastive loss is reasonable. Instead of training DMs from scratch, this work optimizes the model starting from a pre-trained one. Strengths: 1) This paper first analyzes how DMs suffer from discretization error, which sounds interesting. Then, to address it, a contrastive loss is introduced. 2) The authors present a detailed theoretical analysis of the stated problem and method. 3) This paper is well-organized. Weaknesses: 1) Why does it use the pre-trained model? How about the performance when optimizing it from scratch? I am not convinced about the generalization capability. 2) DMs are attractive due to large models trained on huge datasets. However, this paper only verifies the proposed method on small datasets, which is not convincing. I am not sure whether it still keeps the claimed advantage. 3) Is it expensive to visualize the generated images, even in the supplementary material? 4) Also, there is no corresponding code to reproduce the reported results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The method is demonstrated with both small data and a pre-trained model, which makes it hard to support the generalization claim. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: 1) I recommend training the proposed method from scratch on a large dataset. 2) Showing a few images is not expensive, right? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Weaknesses ### 1. Why does it use the pre-trained model? How about the performance when optimizing it from scratch? I am not convinced about the generalization capability. It is common practice [1-6] to improve generative performance via fine-tuning pre-trained diffusion models (DMs). For instance, [1,4] optimize pre-trained DMs with the help of knowledge distillation techniques. Moreover, [2] utilizes discriminator guidance to refine the pre-trained EDM and achieves state-of-the-art generative performance on some datasets, while [6] proposes a consistency loss to improve the pre-trained EDM and greatly improves the sampling speed of DMs. Motivated by this, we propose a plug-and-play approach to enhance different DMs. Compared to training from scratch, our method achieves remarkable performance with only 20 epochs of training or fewer when fine-tuning pre-trained DMs. On the other hand, our method remarkably improves the generative performance of various pre-trained DMs, such as EDM, LSGM, STDDPM, and IDDPM. Concretely, we reduce the FID of EDM and LSGM on CIFAR-10, and decrease the FID of STDDPM and EDM on CelebA and FFHQ, respectively. Moreover, we also conduct experiments on several training-free fast samplers and achieve clear improvements on the ImageNet dataset, as shown in Table 2. This flexibility enables us to enhance various off-the-shelf pre-trained DMs, effectively demonstrating the generalization capability of our method. [1] T. Salimans and J. Ho. Progressive Distillation for Fast Sampling of Diffusion Models. In International Conference on Learning Representations, 2022. [2] D. Kim, Y. Kim, W. Kang, and I.-C. Moon. Refining generative process with discriminator guidance in score-based diffusion models. arXiv preprint arXiv:2211.17091, 2022. [3] Z. Zhang, Z. Zhao, and Z. Lin. Unsupervised representation learning from pre-trained diffusion probabilistic models. Advances in Neural Information Processing Systems, 35:22117-22130, 2022. [4] C. Meng, R. Rombach, R. Gao, D. Kingma, S. Ermon, J. Ho, and T. Salimans. On distillation of guided diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14297-14306, 2023. [5] M. Careil, J. Verbeek, and S. Lathuilière. Few-shot Semantic Image Synthesis with Class Affinity Transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 23611-23620, 2023. [6] Y. Song, P. Dhariwal, M. Chen, and I. Sutskever. Consistency Models. arXiv e-prints, arXiv-2303, 2023. ### 2. DMs are attractive due to large models trained on huge datasets. However, this paper only verifies the proposed method on small datasets, which is not convincing. I am not sure whether it still keeps the advantage of the proposed method. The largest dataset used in recent DM works, such as **ADM [1]**, **EDM [2]**, **Dpm-solver [3]**, and **FDM [6]**, as well as other DMs **[4-5]**, is the ImageNet dataset. The ImageNet dataset contains more than one million images and cannot be regarded as a "small dataset". Beyond ImageNet, these works also tend to evaluate their methods on other moderate-sized datasets, e.g., the **FFHQ**, **CelebA**, and **CIFAR-10** datasets. We follow this well-established experimental setup to evaluate our algorithm. According to Table 2 in the paper, our algorithm achieves improved performance on the ImageNet dataset. For instance, we reduce the FID from 3.67 to 3.60 with 14 steps on ImageNet when combined with the DEIS-tAB3 training-free sampler.
Moreover, we also decrease the FID (lower is better) from 24.62 to 22.65 with 12 steps on ImageNet when combined with the DPM-Solver-3 training-free sampler. Based on this evidence, our method is able to keep its advantage on large datasets. [1] P. Dhariwal and A. Nichol. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 34:8780-8794, 2021. [2] T. Karras, M. Aittala, T. Aila, and S. Laine. Elucidating the design space of diffusion-based generative models. Advances in Neural Information Processing Systems, 35:26565-26577, 2022. [3] C. Lu, Y. Zhou, F. Bao, J. Chen, C. Li, and J. Zhu. Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. arXiv preprint arXiv:2206.00927, 2022. [4] C. Lu, K. Zheng, F. Bao, J. Chen, C. Li, and J. Zhu. Maximum likelihood training for score-based diffusion odes by high order denoising score matching. In International Conference on Machine Learning, pages 14429-14460. PMLR, 2022. [5] S. Wizadwongsa and S. Suwajanakorn. Accelerating Guided Diffusion Sampling with Splitting Numerical Methods. In International Conference on Learning Representations, 2023. [6] W. Du, H. Zhang, T. Yang, and Y. Du. A Flexible Diffusion Model. In International Conference on Machine Learning, pages 8678-8696. PMLR, 2023. ### 3. Visualize the generated images To qualitatively evaluate the generated images, we randomly visualize some of them in the one-page PDF. These generated images contain rich and coherent semantic information, which we believe demonstrates the effectiveness of our method. ### 4. Also, there is no corresponding code to reproduce the reported results. We have sent our code to the Area Chair, per the rebuttal policy. --- Rebuttal Comment 1.1: Title: Convincing rebuttal Comment: Thank you, authors, for the rebuttal. Your responses are quite convincing. I can see that you have made great efforts to address my concerns within the limited time frame, such as training on the ImageNet dataset. Additionally, the code has been provided. I have looked at the other reviewers' comments and have come to the conclusion that this paper makes a significant contribution to the DM community. As a result, I have decided to revise my score. --- Rebuttal 2: Comment: Dear Anonymous Reviewer e493, We sincerely thank you for kindly reviewing our paper and providing valuable comments! We believe that we have fully addressed the concerns you raised. If you have any additional comments about our paper, we are more than willing to discuss them in detail and make the necessary improvements accordingly. Best, Paper2687 Authors --- Rebuttal 3: Title: Conf Comment: Dear Reviewer e493, The authors of this submission have just prepared a reply to your concerns. Would you please check the authors' rebuttal and see whether your concerns have been addressed? Best regards, Your AC --- Rebuttal 4: Comment: Dear Reviewer e493, Thank you very much for your great efforts in reviewing the referred submission. I notice that you have not yet responded to the authors' rebuttal. As the discussion period is about to close, would you please double-check the authors' response and make a final decision? Your final judgment is very important for the PC to make final decisions. Best regards, Your AC
Summary: This paper employs a contrastive loss to construct a contrastive sampling chain, which optimizes the KL divergence between the true sampling chain and the simulated chain at each time step to reduce the discretization error associated with the numerical solvers used for solving SDEs. Experimental results demonstrate that this method improves sample quality and log-likelihood, while slightly accelerating pre-trained DMs. Strengths: ++ The authors provide a comprehensive error analysis, theoretical analysis, and theoretical proof regarding the causes and upper bounds of the discretization error between the true distribution and its corresponding model distribution. ++ This paper conducts extensive experiments to demonstrate the performance on pre-trained diffusion models and fast samplers, and provides ablation studies to assess the impact of different techniques. ++ Reducing the discretization error is an important direction for diffusion model optimization. This paper offers a novel solution by minimizing the discretization error through optimizing the upper bound of the Kullback-Leibler (KL) divergence between the true sampling chain and a simulated chain at each time step. Weaknesses: -- It is preferable to provide evaluation metrics and visual results for the outputs of the generative models, rather than solely focusing on the performance of enhancing pre-trained diffusion models. -- The formatting of Tables 1, 2, and 4 could be improved, for instance, by subdividing them into several sub-tables to enhance readability, rather than separating different settings with a grey background. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. How should individual tasks choose between the linear weighting schedule and the nonlinear weighting schedule? 2. What is the expression for beta(t)? 3. Figure 1 provides limited information. It might be beneficial to incorporate additional details. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors refine the diffusion models by optimizing the upper bound of the KL divergence between the true sampling chain and a simulated chain. The remaining limitations and broader impact are discussed towards the conclusion of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Weaknesses ### 1. It is preferable to provide evaluation metrics and visual results for the outputs of the generative models, rather than solely focusing on the performance of enhancing pre-trained diffusion models. To qualitatively evaluate the generated images, we visualize some of them in the one-page PDF. These generated images contain rich and coherent semantic information, which we believe demonstrates the effectiveness of our method. ### 2. The formatting of Tables 1, 2, and 4 could be improved, for instance, by subdividing them into several sub-tables to enhance readability, rather than separating different settings with a grey background. Thanks for pointing this out. We will reformat those tables in a clearer manner to help readers understand them quickly. ## Questions ### 1. How should individual tasks choose between the linear weighting schedule and the nonlinear weighting schedule? There is no restriction on choosing the weighting schedule for improving individual tasks. Empirically, we achieved similar results on various datasets with the two weighting schedules. For instance, when fine-tuning the pre-trained EDM on CIFAR-10, both schedules obtain an FID of 1.88. ### 2. What is the expression for beta(t)? Thank you for pointing out our typos! This is actually due to our carelessness in not expressing $\beta(t)$ correctly. We apologize for this misrepresentation. The correct form of $\beta(t)$ is $\beta(t)=\alpha * (T-t)$ in equation (13) and $\beta(t)=\alpha * \operatorname{PSNR}\left(x_{j}^{\mathrm{SDE}}, x_{t}^{\mathrm{SDE}}\right)$ in equation (14). ### 3. Figure 1 provides limited information. It might be beneficial to incorporate additional details. We have incorporated additional details into Figure 1, which can be seen in the one-page PDF. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I have no questions and keep my original rating.
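For concreteness, the two weighting schedules given in the reply above can be written out as below. This is a minimal sketch; the PSNR computation assumes pixel values scaled to [0, 1], which is our own assumption for illustration.

```python
import torch

def beta_linear(t: torch.Tensor, T: float, alpha: float) -> torch.Tensor:
    # Linear schedule from equation (13): the weight grows as t approaches 0.
    return alpha * (T - t)

def beta_psnr(x_j: torch.Tensor, x_t: torch.Tensor, alpha: float) -> torch.Tensor:
    # Nonlinear schedule from equation (14): weights similar adjacent states more.
    mse = ((x_j - x_t) ** 2).mean()
    psnr = 10.0 * torch.log10(1.0 / (mse + 1e-8))  # assumes pixel values in [0, 1]
    return alpha * psnr
```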
Rebuttal 1: Rebuttal: We have incorporated additional details into Figure 1 of our paper and visualize some generated images for qualitative analysis. Pdf: /pdf/e50dbab67e3f0e0554e40b5d5efbc06f5f057d24.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper targets the discretization error of diffusion SDE sampling, especially when the number of sampling steps is small. The authors propose a contrastive loss for diffusion sampling where instances on the same sampling chain are deemed positive pairs. The InfoNCE loss provides an upper bound on the KL divergence at time t. It is shown that an appropriate combination of the contrastive loss and score matching serves as an upper bound for the KL divergence between the data distribution and the model distribution. The proposed training objective (equation 10) is a combination of the score matching loss and an InfoNCE loss and can be used to fine-tune any diffusion model's sampler. Having introduced two hyperparameters (beta(t) for balancing the two terms and the temperature $\tau$), the authors experimented with different training strategies and ablation studies. Experiments on CIFAR10 and ImageNet64 generation show that the sampler tuned with their proposed method outperforms the baseline diffusion model sampler. Strengths: The writing of the paper is clear. The motivation and presentation of the results are easy to follow. The idea of introducing contrastive learning to DMs is interesting and novel to the best of my knowledge. The proposed methodology can be widely applied to any pre-trained diffusion model as a post-hoc fix, which can be valuable to the community. Weaknesses: ### 1. Pre-trained encoder might be unfair For all datasets, the authors use the pre-trained MoCo V2 encoder (I assume it was trained on ImageNet? Please correct me if I am wrong) to extract features in order to compute the InfoNCE loss. This raises some concerns: (1) MoCo is pre-trained on datasets larger than CIFAR10 (and other small datasets), so the encoder has seen more data than the diffusion models. Given that, it becomes less clear whether the improvement is coming from the contrastive loss itself or from the encoder having seen more data. What makes it more concerning for me is that the gain from the contrastive loss seems less significant on the more complicated ImageNet64 dataset. Intuitively, as the dataset gets more complicated, the baseline performance is weaker and may leave more room for improvement (correct me if I am wrong). I suggest the authors try training with encoders trained on CIFAR10 when evaluating diffusion models on CIFAR10, to make a more convincing case. (2) For some forms of data, there is no pre-trained encoder, so the authors' proposed method may not be easily adapted. Of course we can train from scratch. But that introduces more computational overhead, which makes the method less appealing. ### 2. Marginal performance gap on more complicated data On the ImageNet64 conditional experiment, the finetuned sampler shows only a marginal performance improvement over the default sampler. No standard deviation of the method is reported, so I am not sure whether the improvement is statistically significant. ### 3. Concerns about training: The InfoNCE loss requires differentiable samples that are generated through multiple evaluations of the diffusion model's network; this may lead to heavy computational overhead when differentiating through the entire computational graph. The extra computation cost is not very clear. The proposed method seems brittle, with many moving parts, and hard to tune from task to task (different datasets, models, steps). Even changing only the number of sampling steps requires extra training.
In comparison, the diffusion model's sampler can change flexibly with arbitrary sampling steps. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is the MoCo V2 encoder pretrained on ImageNet? Is the encoder the same in all the experiments (except for changing resolutions)? 2. How stable is the proposed method, regarding the randomness during the training process? How sensitive are the results to different hyperparameter choices? 3. For the weighting function $\beta(t)$, are there any optimal/analytical solutions? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations and broader impact have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
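The BPTT concern raised in this review can be made concrete: fine-tuning with a chain-level loss requires keeping the whole sampling chain in the autograd graph. The following is a generic sketch of differentiating through a few deterministic sampler steps; the drift model and step count are hypothetical, not the paper's actual setup.

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 2)                  # hypothetical stand-in for the score network
x = torch.randn(16, 2)                   # x_T, the start of the sampling chain
steps, dt = 8, 1.0 / 8

states = [x]
for i in range(steps):                   # every step stays in the autograd graph
    x = x - model(x) * dt
    states.append(x)

# A chain-level loss over intermediate states back-propagates through all
# `steps` network evaluations (back-propagation through time), which is the
# memory/compute overhead the reviewer refers to.
loss = sum((s ** 2).mean() for s in states[1:])
loss.backward()
```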
Rebuttal 1: Rebuttal: ## Weaknesses ### 1. MoCo is pre-trained on ImageNet, which is larger than CIFAR10. The MoCo V2 encoder utilized in this paper is pre-trained on ImageNet, and we use this encoder to fine-tune DMs in all our experiments. To further analyze the performance of our method, we conduct an ablation experiment on the pre-trained EDM with a MoCo V2 encoder pre-trained on CIFAR-10. Concretely, we keep the same training settings as in previous experiments and fine-tune the pre-trained EDM with only 10 epochs. Empirically, we decrease the FID (lower is better) from 2.04 to 1.95, as shown in the table below. The 0.09 reduction in FID proves that our method effectively improves the generative performance of the pre-trained EDM.

| Models | FID$\downarrow$ | Encoder |
| :---: | :---: | :------: |
| **CIFAR-10** | | |
| EDM | 2.04 | - |
| EDM-C++ (Ours) | 1.95 | MoCo V2 (CIFAR-10) |
| EDM-C++ (Ours) | 1.88 | MoCo V2 (ImageNet) |
| **IDDPM (ImageNet, 14 sampling steps)** | | |
| DPM-Solver-2 | 4.46 | - |
| DPM-Solver-2-C++ (Ours) | 4.38 | MoCo V2 (ImageNet) |
| **IDDPM (ImageNet, 20 sampling steps)** | | |
| DPM-Solver-2 | 3.42 | - |
| DPM-Solver-2-C++ (Ours) | 3.36 | MoCo V2 (ImageNet) |

In this paper, we utilize an encoder pre-trained on ImageNet to fine-tune pre-trained DMs. In this manner, our method helps the pre-trained EDM reduce the FID on CIFAR-10 from 1.95 to 1.88, as seen in the table above. The 0.07 reduction in FID demonstrates that our method further improves generative performance. Hence, our contrastive sampling method contributes almost half of the FID reduction. On the other hand, our method decreases the FID from 3.42 to 3.36 on ImageNet with 20 steps when the DPM-Solver-2 training-free sampler is used to sample images for evaluation, as shown in Table 2. By contrast, the encoder pre-trained on CIFAR-10 decreases the FID from 2.04 to 1.95 on the CIFAR-10 dataset. The 0.06 reduction in FID on ImageNet is of the same order of magnitude as the 0.09 reduction on CIFAR-10. Hence, our method presents a consistent improvement when fine-tuning pre-trained DMs with the corresponding encoder. ### 2. For some forms of data, there is no pre-trained encoder. As shown in the table above, we achieve a 0.09 reduction in FID with the encoder pre-trained on CIFAR-10, and a further 0.07 reduction with the encoder pre-trained on ImageNet. In this paper, we use the encoder pre-trained on ImageNet throughout and improve the performance on four different datasets, including CIFAR-10, FFHQ, CelebA, and ImageNet. Hence, an arbitrary pre-trained encoder can be utilized for improving other forms of data, which demonstrates the flexibility and scalability of our method. ### 3.1. Only a marginal performance improvement on ImageNet64 (conditional). Our method remarkably improves various DMs on CIFAR-10, FFHQ, and CelebA, as shown in Tables 1 and 4. We also improve the generative performance on ImageNet with very few sampling steps compared to prior DMs, as seen in Table 2. Moreover, our method demonstrates a consistent improvement on class-conditional 64 $\times$ 64 ImageNet. For instance, compared to the DPM-Solver3 sampler, the DEIS-tAB3 sampler reduces the FID from 2.72 to 2.69 with 50 sampling steps, and decreases the FID from 2.84 to 2.81 with 30 sampling steps. ### 3.2. No standard deviation of the method is reported. Prior DM works have not reported standard deviations [1-3]. To demonstrate the stability of our approach, we ran this experiment five times under the same settings.
Concretely, the FID results are 1.8820, 1.8836, 1.8922, 1.8903, and 1.8801, respectively, fluctuating between 1.88 and 1.90. [1] T. Karras, M. Aittala, T. Aila, and S. Laine. Elucidating the design space of diffusion-based generative models. Advances in Neural Information Processing Systems, 35:26565-26577, 2022. [2] T. Salimans and J. Ho. Progressive distillation for fast sampling of diffusion models. In International Conference on Learning Representations, 2022. [3] C. Lu, K. Zheng, F. Bao, J. Chen, C. Li, and J. Zhu. Maximum likelihood training for score-based diffusion odes by high order denoising score matching. In International Conference on Machine Learning, pages 14429-14460. PMLR, 2022. ### 4. The extra computation cost. For fine-tuning the pre-trained EDM, our method achieves remarkable performance with only 20 epochs or even fewer, where each epoch costs about 130 seconds. For sampling images, the improved pre-trained EDM incurs overhead similar to that of the pre-trained EDM. ### 5. Changing the sampling steps needs extra training. Our method can improve pre-trained DMs with arbitrary training-free samplers. For instance, we utilize the DEIS-tAB3 sampler to construct a sampling chain and fine-tune the pre-trained IDDPM by propagating gradients along the opposite direction of this chain. Hence, we only need to fine-tune IDDPM once and can subsequently report experimental results for different sampling steps. ## Questions ### 1. How stable is the proposed method, regarding the randomness during the training process? How sensitive are the results to different hyperparameter choices? To demonstrate the stability of our approach, we ran experiments five times under the same settings. Concretely, the FID results are 1.8820, 1.8836, 1.8922, 1.8903, and 1.8801, fluctuating between 1.88 and 1.90. In fact, we have only one hyperparameter, $\alpha$, which controls the weighting schedule $\beta(t)$. Our method achieves similar results when $\alpha$ is within a certain range, i.e., it is insensitive to small fluctuations. ### 2. For the weighting function beta(t), are there any optimal/analytical solutions? This is a valuable question! An analytical solution may exist, since we have derived an upper bound in equation (10). An appropriate weighting function $\beta(t)$ might turn the inequality in equation (10) into an equality. We leave this for future work. --- Rebuttal 2: Comment: Dear Anonymous Reviewer mcPT, We sincerely thank you for kindly reviewing our paper and providing professional comments! We believe that we have fully addressed the concerns you raised. We once again express our gratitude for your valuable comments, which have significantly enhanced the quality of our paper. If you have any additional comments about our paper, we are more than willing to discuss them in detail and make the necessary improvements accordingly. Best, Paper2687 Authors --- Rebuttal Comment 2.1: Comment: Thanks for the rebuttal! Most of my raised questions have been addressed. Two of my concerns still stand, and they are closely connected. 1. Pre-trained encoder might be unfair I really appreciate the authors' added experiments with MoCo pre-trained on CIFAR-10. An improvement from 2.04 to 1.95 looks sound, although less significant than the 1.88 of the ImageNet-pretrained case. I think the performance of the added contrastive part depends significantly on the pre-trained encoder.
My guess is that if encoders from CLIP models were used, the performance gain would be even higher. * It is difficult to distinguish which part, the extra information from pre-trained encoders or the added contrastive sampling training, is contributing more to the performance. * If we were to incorporate extra knowledge contained in pre-trained encoders, I am not sure whether contrastive sampling training is the best way to go. 2. Marginal performance gap on more complicated data In my humble opinion, an improvement from 2.84 to 2.81, or from 2.72 to 2.69, is marginal. In the ImageNet64 case, with no extra information from the pre-trained encoder (also trained on ImageNet), the performance gain may be a more realistic reflection of the added contrastive sampling training. The performance gain seems to vanish as the data gets more complicated. I am concerned about whether this method can improve bigger models, say text-to-image diffusion models. --- Reply to Comment 2.1.1: Comment: ### 1. Pre-trained encoder might be unfair Taking the InfoNCE objective function, Eq. (11) in the main paper, as an example, we need to calculate the similarity between images, e.g., the positive pair **($x_{t}^{SDE}$, $x_{j}^{SDE}$)** and the negative pairs **($x_{j}^{SDE}$, $x^{-}$)**. As far as we know, in the contrastive learning literature, such as **CLIP**, **MoCo**, **SimCLR**, and **DINO**, along with **MAE**, this similarity is often calculated in the feature space rather than the pixel space. This implies that the encoder is an essential module when implementing the contrastive loss; otherwise, directly calculating the similarity between image pixels is meaningless. Hence, the encoder is a key component of our method, since we are motivated by enhancing pre-trained DMs with contrastive learning. To further evaluate the performance of our method, we conduct several experiments on the CIFAR-10 dataset employing the pre-trained EDM. For instance, we train MoCo V2 from scratch and simultaneously fine-tune the pre-trained EDM. In this manner, we obtain an FID of 1.9391, which is better than the FID of 1.9507 obtained with the pre-trained MoCo V2 encoder, as shown in the table below. It is essential to highlight that both of them are trained exclusively on the CIFAR-10 dataset, without any supplementary information. The substantial enhancements we observe can be attributed solely to the efficacy of our method. Hence, our method contributes more to the performance than extra information does. Moreover, our method is not limited to use with a pre-trained encoder; it can achieve better results when the encoder is trained from scratch. Nevertheless, we advise opting for a pre-trained encoder, as it demands far fewer computational resources. Fundamentally, the encoder is an essential module in our method, regardless of whether it has been pre-trained.

| Models | FID $\downarrow$ | Encoders |
| :---: | :---: | :------: |
| EDM | 2.04 | - |
| Ours | 1.9507 | MoCo V2 (CIFAR-10, pre-trained) |
| Ours | 1.9391 | MoCo V2 (CIFAR-10, training from scratch) |
| Ours | 1.8856 | MoCo V2 (ImageNet, pre-trained) |
| Ours | 1.8831 | CLIP (LAION-400M, pre-trained) |
| Ours | 1.8797 | CLIP (LAION-2B, pre-trained) |

We also utilize pre-trained CLIP encoders to improve the pre-trained EDM. To provide specific instances, we have achieved FID values of 1.8831 and 1.8797 utilizing two distinct CLIP encoders trained on the LAION-400M and LAION-2B datasets, respectively.
Though a better encoder will further improve the performance, those FIDs are almost identical to the one obtained with the encoder trained on ImageNet. There is an upper limit to this improvement, and a better encoder will not improve DMs indefinitely. Thus, while the inclusion of a more advanced encoder might have its merits, it's essential to recognize that the crux of our method hinges on the potency of the contrastive loss rather than solely relying on an encoder enriched with extra information. ### 2. Marginal performance gap on more complicated data (Results on ImageNet 64.) | Models | FID $\downarrow$ | Sampling Steps | | :---: | :---: | :------: | | DPM-Solver3 | 2.72 | 50 | | DEIS-tAB3 | 2.69 | 50 | | Ours | **2.67** | 50 | | DPM-Solver3 | 2.84 | 30 | | DEIS-tAB3 | 2.81 | 30 | | Ours | **2.75** | 30 | | DPM-Solver2 | 5.36 | 12 | | Ours | **5.22** | 12 | | DPM-Solver2 | 7.93 | 10 | | Ours | **7.78** | 10 | To evaluate our method on the ImageNet dataset, we use training-free fast samplers rather than the default samplers to construct the contrastive sampling chain used to fine-tune IDDPM. In this manner, we obtain better results with the same sampling steps when using those fast samplers. Previously, the DEIS-tAB3 sampler reduced the FID from 2.72 to 2.69 at 50 sampling steps compared to the DPM-Solver3 sampler. After using the contrastive sampling chain constructed by DEIS-tAB3, we further reduce the FID from 2.69 to 2.67 with 50 sampling steps, as seen in the above table. Moreover, DEIS-tAB3 previously reduced the FID from 2.84 to 2.81 with 30 sampling steps compared to DPM-Solver3. By contrast, we further reduce the FID from 2.81 to 2.75 with 30 sampling steps when using the previously fine-tuned IDDPM. Hence, our method achieves a comparable level of improvement to those samplers. Specifically, with 50 sampling steps, we attain a reduction of 0.02 in FID, whereas DEIS-tAB3 achieves a reduction of 0.03. When employing 30 sampling steps, we realize an FID reduction of 0.06, in contrast to the 0.03 reduction achieved by DEIS-tAB3. Importantly, our method has demonstrated significant improvements when employing fewer sampling steps. It is worth noting that achieving good results with fewer steps is a focus of recent DM research. After using the contrastive sampling chain constructed by the DPM-Solver2 sampler to fine-tune IDDPM, we reduce the FID from 7.93 to 7.78 with only 10 sampling steps, as well as reduce the FID from 5.36 to 5.22 with 12 sampling steps. This shows that our method delivers meaningful improvements with few sampling steps.
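To make the feature-space similarity computation described in this thread concrete, below is a minimal sketch of an InfoNCE-style loss evaluated on frozen-encoder features rather than raw pixels. This is an illustration under assumptions, not the authors' implementation: `encoder`, the tensor shapes, and the temperature value are all hypothetical.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(encoder, x_sde, x_pos, x_neg, temperature=0.1):
    """Contrastive loss computed in encoder feature space.

    x_sde, x_pos: (B, C, H, W) positive pair of images; x_neg: (B, K, C, H, W)
    negatives. The encoder's parameters are assumed frozen via
    requires_grad_(False), so gradients flow only through the inputs, i.e.,
    back along the sampling chain into the diffusion model being fine-tuned.
    """
    z = F.normalize(encoder(x_sde), dim=-1)                   # (B, D) features
    z_pos = F.normalize(encoder(x_pos), dim=-1)               # (B, D)
    B, K = x_neg.shape[:2]
    z_neg = F.normalize(encoder(x_neg.flatten(0, 1)), dim=-1).view(B, K, -1)
    pos = (z * z_pos).sum(-1, keepdim=True) / temperature     # (B, 1) similarities
    neg = torch.einsum('bd,bkd->bk', z, z_neg) / temperature  # (B, K)
    logits = torch.cat([pos, neg], dim=1)                     # positive is class 0
    labels = torch.zeros(B, dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```

Any feature extractor can be plugged in as `encoder`, which matches the observation above that the method works with MoCo, CLIP, or an encoder trained from scratch.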
A*Net: A Scalable Path-based Reasoning Approach for Knowledge Graphs
Accept (poster)
Summary: The paper introduces A\*Net, a scalable path-based approach for reasoning on extensive knowledge graphs (KGs). In contrast to embedding techniques, path-based methods exhibit inductive capabilities but encounter challenges in terms of scalability due to the exponential growth of paths. A\*Net addresses this issue by incorporating a priority function, inspired by the A\* algorithm for shortest-path problems, which enables the selection of crucial nodes and edges during each iteration. This novel approach effectively reduces the time and memory requirements for both training and inference processes. Strengths: S1: This paper proposes an efficient GNN called A\*Net for link prediction with good scalability. S2: A\*Net shows impressive results on various KGs. Weaknesses: W1: Although the method proposed in this article has better scalability, the contributions from theoretical perspectives are incremental compared to NBFNet. W2: The introduction of the parameter sharing between the priority function and predictor is somewhat unclear, and the reason why the reasoning task can be regarded as weak supervision for the priority function is not well explained. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Q1: The priority function in A\*Net is similar to the attention used in RED-GNN, except that A\*Net selects the nodes and edges according to the attention score. In the case where memory allows, how does the performance of A\*Net change when the Top operation is disabled in Algorithm 1 (line 5 & line 7)? Q2: If some nodes and edges are discarded in the early phase of model training, it may introduce incorrect inductive biases and prevent the model from training effectively. How do you address this issue to avoid such problems, or why is this not an issue in A\*Net? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: See **Weaknesses** and **Questions**. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments. Here is our response to your concerns. **W1: Although the method proposed in this article has better scalability, the contributions from theoretical perspectives are incremental compared to NBFNet.** A1: The major contribution of this paper is scalability, not the theoretical insights. Previous path-based methods like NBFNet can only deal with graphs of tens of thousands of nodes. Our A\*Net scales path-based methods to ogbl-wikikg2, a dataset containing 2.5 million entities and 16 million triplets, which is **two orders of magnitude larger than what NBFNet can solve**. Note that ogbl-wikikg2 has previously been dominated by embedding methods, and none of the embedding methods are inductive or interpretable like A\*Net. Therefore, we believe this contribution is very important to the knowledge graph community and may potentially change future research directions, as new abilities may emerge with scalability. The theoretical design insights from the A\* algorithm are a minor contribution compared to our empirical achievements. However, we should emphasize that the design insights led us to develop A\*Net without significant performance loss. As shown in Tab. 6(a), naive solutions like personalized PageRank or node degree sacrifice performance when they prune the paths. **W2: The introduction of the parameter sharing between the priority function and predictor is somewhat unclear, and the reason why the reasoning task can be regarded as weak supervision for the priority function is not well explained.** A2: The priority function is $s_{uq}^{(t)}(x) = \sigma(f(\mathbf{s}_{uq}^{(t)}(x)))$, where $\mathbf{s}_{uq}^{(t)}(x)$ is the representation computed by Eqn. 10 and $f(\cdot)$ is a feed-forward network. The predictor function is $p(v|u,q) = \sigma(f'(\mathbf{s}_{uq}^{(T)}(v)))$, where $\mathbf{s}_{uq}^{(T)}(v)$ is the representation from the last layer and $f'(\cdot)$ is a feed-forward network. Since the two functions have similar formulas, we share the parameters between $f(\cdot)$ and $f'(\cdot)$. For the priority function, an ideal supervision should assign high scores to nodes on the important paths, and low scores to nodes that are not on the important paths. We notice that the reasoning task assigns labels of 1 to true answer nodes, and labels of 0 to false answer nodes. Since true answer nodes are always on the important paths, and false answer nodes are less likely to be on the important paths, the supervision from the reasoning task is correlated with the supervision we need for the priority function. Therefore, we share the weights between these two functions, and hope that a well-trained predictor can help the priority function to converge when the whole model is trained end-to-end (a minimal sketch of this weight sharing is given at the end of this thread). We illustrated this idea in Line 188-191 in the paper. **Q1: In the case where memory allows, how does the performance of A\*Net change when the topk operation is disabled in Algorithm 1?** A3: Our ablation study (Fig. 6) suggests that the performance of A\*Net hits a plateau when we select more nodes or edges than a certain threshold. We run experiments with the topk operations disabled, which makes A\*Net almost identical to NBFNet. Here are the results. Generally, the performance doesn't increase and is upper bounded by NBFNet.
|FB15k-237|MRR|H@1|H@3|H@10| |---|---|---|---|---| |A\*Net (10% nodes, 10% edges)|0.411|0.321|0.453|0.586| |A\*Net (no pruning)|0.407|0.314|0.445|0.590| |NBFNet|0.415|0.321|0.454|0.599| **Q2: If some nodes and edges are discarded in the early phase of model training, it may introduce incorrect inductive biases and prevent the model from training effectively. How do you address this issue to avoid such problems or why is this not an issue in A\*Net?** A4: That's a valid concern. Empirically, this is not an issue for A\*Net, as A\*Net achieves competitive performance compared to its unpruned version, NBFNet. We conjecture the reason is that A\*Net converges in a curriculum learning fashion with the help of the weight sharing trick. Since every edge in the training graph is a training sample, we automatically have a curriculum for training path-based methods. That is, the training samples cover consecutive distances from 1 to some finite value. As long as we start from some reasonable ratios of nodes and edges, the predictor function can converge on answers at distance 1. Through the weight sharing trick, this helps the priority function to find important nodes at distance 1 and reach answers at distance 2. Then the predictor function converges on answers at distance 2. Eventually, this procedure enables A\*Net to converge on answers at all distances. --- Rebuttal Comment 1.1: Comment: Most of my concerns are addressed by the rebuttal, so I raise my score to 6. Your reply to Q2 is interesting, and I am curious whether there are any experimental results supporting your conjecture. --- Reply to Comment 1.1.1: Title: Discussion Comment: Thank you for the recognition of our work! Regarding the curriculum learning conjecture, we are running ablation studies to verify this claim. Due to the congestion in our cluster, we are not sure if we can get the results before the end of the discussion period, but we'll try our best to give you an answer.
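As a concrete reading of the weight-sharing trick discussed in this thread, here is a minimal sketch: one query-independent feed-forward head $f(\cdot)$ is applied both to intermediate-layer node representations (yielding priorities) and to last-layer representations (yielding answer probabilities). The module, dimensions, and pruning ratio below are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SharedHead(nn.Module):
    """One feed-forward head f(.) shared by the priority function and the predictor."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def priority(self, s_t):
        # s_t: (num_nodes, dim) query-dependent representations at layer t
        return torch.sigmoid(self.f(s_t)).squeeze(-1)

    def predict(self, s_T):
        # s_T: (num_nodes, dim) representations from the last layer T
        return torch.sigmoid(self.f(s_T)).squeeze(-1)   # p(v | u, q)

# Top-k node selection as in Algorithm 1 (the 10% ratio is illustrative):
head = SharedHead(dim=32)
s_t = torch.randn(1000, 32)                 # hypothetical node representations
scores = head.priority(s_t)
kept = scores.topk(int(0.1 * len(scores))).indices
```

Because both calls go through the same `self.f`, the supervision on answer nodes also shapes the priority scores, which is the weak-supervision argument made above.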
Summary: This paper presents a scalable path-based method for knowledge graph reasoning, which is inspired by the A* algorithm for shortest path problems. Strengths: 1. The intriguing approach of applying the A$^*$ algorithm's principle to path reasoning in KGs is proposed in this paper, along with the introduction of novel methods for crafting the priority function. 2. The paper achieves state-of-the-art results on the large-scale KG reasoning dataset, ogbl-wikikg2. 3. There's a substantial enhancement in efficiency, considering both time and memory usage, compared to the top-performing baseline, NBFNet. Weaknesses: The proposed method performs slightly worse than NBFNet as shown in Table 1, and no results of NBFNet are reported on tail prediction in Table 2. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In the context of KG reasoning, a crucial question is, how many steps are typically required for a query? According to the vanilla path reasoning in Equation 1, the number of paths increases exponentially with respect to path length. However, if the path length is typically small, this might not pose a significant problem. Moreover, when dealing with a large-scale KG, the BF algorithm would need to visit $|\mathcal{V}|$ nodes and $|\mathcal{E}|$ edges for each step, which can be quite computationally intensive. Given these considerations, it leads to the question: if the path length is usually small, could vanilla path reasoning be a more efficient choice compared to BF? 2. Another question is, can we simply leverage the idea of beam search in vanilla path reasoning? For example, we keep the top-K ranked paths at each step, which may also avoid the exponential growth of the number of paths. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your recognition and constructive comments. Here is our response to your concerns. **W1: The proposed method performs slightly worse than NBFNet as shown in Table 1, and no results of NBFNet are reported on tail prediction in Table 2.** A1: By design, A\*Net cannot be better than NBFNet in performance, and our goal is to show that the gap between A\*Net and NBFNet is very small (e.g. only 1% absolute difference in H@10 and less than 1% absolute difference in other metrics in Tab. 1). The reason is that A\*Net explores strictly fewer paths (e.g. 10% nodes and 10% edges on FB15k-237) than NBFNet in both training and inference. In a fair comparison, the performance of A\*Net should be roughly upper bounded by the performance of NBFNet. While it is possible to improve A\*Net by tweaking the neural parameterization and hyperparameters, it would result in an unfair comparison and blur the contribution of this paper. Here are the results including NBFNet on tail prediction. We can see that A\*Net achieves competitive performance compared to NBFNet on both datasets. We will update these results in the paper. |FB15k-237|MRR|H@1|H@3|H@10| |---|---|---|---|---| |MINERVA|0.293|0.217|0.329|0.456| |Multi-Hop|0.393|0.329|-|0.544| |CURL|0.306|0.224|0.341|0.470| |NBFNet|**0.509**|**0.411**|**0.562**|**0.697**| |A\*Net|**0.505**|**0.410**|**0.556**|0.687| |WN18RR|MRR|H@1|H@3|H@10| |---|---|---|---|---| |MINERVA|0.448|0.413|0.456|0.513| |Multi-Hop|0.472|0.437|-|0.542| |CURL|0.460|0.429|0.471|0.523| |NBFNet|**0.557**|**0.503**|**0.579**|**0.669**| |A\*Net|**0.557**|**0.504**|**0.580**|**0.666**| **Q1: In KG reasoning, how many steps are typically required for a query? If the path length is usually small, could vanilla path reasoning be a more efficient choice compared to Bellman-Ford?** A2: Path-based methods that use exhaustive search[1, 2, 3] typically compute up to 3 steps due to poor scalability. For path-based methods that use the Bellman-Ford algorithm[4, 5], their ablation studies suggest that the performance keeps increasing with more steps until 6 steps. To give an intuition of how infeasible exhaustive search is, we compute the number of paths for different lengths, averaged over all positive triplets in each dataset. Here are the statistics. The number of paths grows exponentially w.r.t. the length of the paths (a small sketch illustrating this growth follows this thread). We note that exhaustive search is more efficient than the Bellman-Ford algorithm only when the number of paths is less than the number of nodes $|\mathcal{V}|$. This only holds for paths with length ≤ 1 on FB15k-237, length ≤ 4 on WN18RR and length ≤ 2 on ogbl-wikikg2 respectively. Therefore, vanilla path reasoning is usually not a good choice. ||\|V\||Length=1|Length=2|Length=3|Length=4|Length=5|Length=6| |---|---|---|---|---|---|---|---| |FB15k-237|14,541|367.0|31943|9.014e6|1.023e9|2.411e11|3.199e13| |WN18RR|40,943|19.38|138.9|4705.5|35504|1.624e6|1.214e7| |ogbl-wikikg2|2,500,604|135698|2.337e6|3.792e11|7.027e12|1.077e18|2.131e19| **Q2: Can we simply leverage the idea of beam search in vanilla path reasoning? For example, we keep the top-K ranked paths at each step, which may also avoid the exponential growth of the number of paths.** A3: Beam search can be applied to path-finding methods[6, 7, 8] that use reinforcement learning to find paths. However, these methods assume a sparse set of answers (typically ≤100 answers) and can only be evaluated on tail prediction.
Their performance is much worse than path-based methods that operate on a dense set of paths (e.g. NBFNet, A\*Net), as shown in Tab. 2. It is not trivial to apply beam search to path-based methods like NBFNet. First, beam search requires a score function to prune intermediate steps, which isn't present in the original NBFNet. Second, even if we had such a score function, beam search is not differentiable w.r.t. the scores at the intermediate steps. This means we can't learn the score function but can only use a handcrafted one, which is likely to be suboptimal. [1] Lao and Cohen. Relational Retrieval Using a Combination of Path-Constrained Random Walks. ML 2010. [2] Neelakantan et al. Compositional Vector Space Models for Knowledge Base Completion. IJCNLP 2015. [3] Wang et al. Relational Message Passing for Knowledge Graph Completion. KDD 2021. [4] Zhu et al. Neural Bellman-Ford Networks: A General Graph Neural Network Framework for Link Prediction. NeurIPS 2021. [5] Zhang and Yao. Knowledge Graph Reasoning with Relational Digraph. WWW 2022. [6] Xiong et al. DeepPath: A Reinforcement Learning Method for Knowledge Graph Reasoning. ACL 2017. [7] Das et al. Go for a Walk and Arrive at the Answer: Reasoning Over Paths in Knowledge Bases using Reinforcement Learning. EACL 2017. [8] Lin et al. Multi-Hop Knowledge Graph Reasoning with Reward Shaping. EMNLP 2018. --- Rebuttal Comment 1.1: Title: Thanks for your rebuttal Comment: The authors addressed my concerns. Since my score is already positive, I'm maintaining my score.
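The exponential growth reported in the statistics above is easy to reproduce. Below is an illustrative sketch on a toy graph (not the benchmark KGs): counting paths of length $L$ by repeated vector-matrix products is exactly the Bellman-Ford-style dynamic program, which touches each edge once per step, whereas enumerating the paths themselves costs time proportional to their exponential count.

```python
import numpy as np

def num_paths(A, u, L):
    """Number of paths of length exactly L starting from node u."""
    counts = np.zeros(A.shape[0])
    counts[u] = 1.0
    for _ in range(L):
        counts = counts @ A   # one Bellman-Ford-style propagation step
    return counts.sum()

# Toy triangle graph: every node connects to the other two.
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
print([num_paths(A, 0, L) for L in range(1, 7)])
# [2.0, 4.0, 8.0, 16.0, 32.0, 64.0] -- doubles with every extra hop
```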
Summary: The main contribution of this paper is a scalable path-based method, A*Net, for link prediction on large-scale knowledge graphs. A*Net is inspired by the A* algorithm for solving shortest path problems, where it learns a priority function to select important nodes and edges at each iteration. This reduces time and memory for both training and inference. From an efficiency perspective, this could be considered a path-pruning method that progressively reduces the subgraph based on the learned priority function. The empirical results also demonstrate efficiency improvements. Strengths: 1. The efficiency problem caused by the explosively increasing entities in deeper propagation layers is indeed serious in the recent GNN-based inductive methods. The proposed method makes sense and is technically sound. 2. The experimental results are impressive. The paper demonstrates the practical applications of A*Net in various settings and datasets, with efficiency improvements compared with several recent baselines. Furthermore, the paper sets a new state-of-the-art on the million-scale dataset ogbl-wikikg2 and converges faster than embedding methods. 3. The paper's organization is well-executed and the content is easily comprehensible. Weaknesses: 1. The paper's comparison to the A* algorithm seems somewhat overstated. As a derivative work of NBFNet, this paper draws an analogy to another shortest path algorithm, A*. Contrary to the Bellman-Ford algorithm that resolves the shortest path problem from the source to all other points, the A* algorithm typically addresses the shortest path problem from the source to a specific target point. However, in the context of knowledge graph (KG) reasoning, the target point is unknown, rendering the core principle of A*, assessing the estimated remaining cost to the target point, infeasible. In fact, the A* algorithm's priority rule, involving the distance to the target node, is not pertinent to the priority function in the proposed model. The A* algorithm appears to function primarily as a promotional point, rather than as a guiding principle. 2. Perhaps due to the overemphasis on the A* analogy, the paper's true technical contributions remain unclear. Comparing the core function of NBFNet in Eq. 3 and that of A*Net in Eq. 12, the only discernible difference lies in introducing the priority score, calculated based on the embeddings of the query and the current node. Stripping away the A* algorithm framework, it essentially seems to be a path-pruning technique reliant on an attention mechanism to select the top K nodes and top L edges in each layer for efficiency's sake. 3. The paper lacks insightful contributions regarding important paths beyond a weighted version of the NBFNet method. The theoretical appendix focuses solely on integrating path selection into the NBFNet framework, premised on the assumption that a certain function can distinguish important nodes. However, how to ensure that important paths are chosen is not clear. In response to this, the authors propose weight sharing between the priority function and the predictor, asserting that the reasoning task can be seen as a weak supervision for the priority function. However, this appears counterintuitive, given that the priority score is dependent on a specific query. A high predictor score, indicating that the node x answers the query (u, r_1), should not contribute to the priority score of x for a different query (u, r_2).
Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. As raised in Weakness 3, could you elaborate on how weight sharing aids in the selection of important paths? 2. I observe that two handcrafted priority functions, PPR and Degree, are employed in the ablation studies. Given that high connectivity doesn't necessarily denote the importance of paths, what about the effectiveness and efficiency of a random pruning strategy, particularly with respect to the ogbl-wikikg2 dataset? 3. In the Visualization section, only the results of the proposed method are displayed without any comparison. Could you clarify what distinct paths the Neural function selects compared to the two handcrafted ones? Furthermore, does the Neural-based path selection align more closely with knowledge semantics? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. The authors stated the limitation, future work and social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive comments. Here is our response to your concerns. **W1: In KG reasoning, the target point is unknown, rendering the core principle of A\* infeasible. In fact, the A\* algorithm's priority rule, involving the distance to the target node, is not pertinent to the priority function in the proposed model.** A1: We agree that estimating the remaining distance is the core principle of the A\* algorithm. Our A\*Net follows this principle. We design Eqn. 10 in A\*Net to match Eqn. 4 in the A\* algorithm. $\mathbf{h}_q^{(t)}(u, x)$ corresponds to the current length $d(u, x)$, while $\mathbf{g}([\mathbf{h}_q^{(t)}(u,x), \mathbf{q}])$ corresponds to the remaining distance $g(x, v)$ (a minimal sketch of this priority computation is given at the end of this discussion thread). A figure illustration is in the attached PDF file. Your misunderstanding might come from the fact that we don't have $v$ in Eqn. 10. In fact, the learned representation $\mathbf{q}$ encodes the relative position from $u$ to $v$. E.g., if $q$ is the mother relation, the aggregation of paths between $u$ and $v$ in a positive sample should roughly match the representation of mother $\mathbf{q}$, i.e. $\mathbf{q} \approx \mathbf{h}^{(T)}_q(u, v)$ for any $(u,q,v)\in\mathcal{E}$. The reason why $\mathbf{q}$ can be independent of $u$ and $v$ is that the definition of a relation is independent of its triplet instances. The representation of the remaining cost is $\mathbf{g}^{(t)}(x,v)\approx \mathbf{g}([\mathbf{h}_{q}^{(t)}(u,x),\mathbf{h}_{q}^{(T)}(u,v)])\approx \mathbf{g}([\mathbf{h}_{q}^{(t)}(u,x),\mathbf{q}])$, where the first approximation says that the remaining cost between $x$ and $v$ can be estimated by the current aggregation of paths $\mathbf{h}_{q}^{(t)}(u,x)$ between $u$ and $x$ and the final path representations $\mathbf{h}_{q}^{(T)}(u,v)$ between $u$ and $v$. The second approximation replaces $\mathbf{h}_{q}^{(T)}(u,v)$ with $\mathbf{q}$. So $\mathbf{g}([\mathbf{h}_q^{(t)}(u,x), \mathbf{q}])$ matches the remaining distance $g(x, v)$ in the A\* algorithm. **W2: Due to the overemphasis on the A\* analogy, the paper's true technical contributions remain unclear.** A2: We clarified the analogy between A\*Net and the A\* algorithm in A1, and they align at each term in the priority function. Our major contribution is the first path-based method that scales to ogbl-wikikg2. Note that ogbl-wikikg2 has previously been dominated by embedding methods, and none of them are inductive or interpretable like A\*Net. Hence we think this contribution is very important to the community and may potentially change future research directions. The theoretical insights from the A\* algorithm are a minor contribution compared to our empirical achievements. **W3 & Q1: How to ensure that important paths are chosen is not clear. The weight sharing between the priority function and the predictor appears counterintuitive, given that the priority score is dependent on a specific query.** A3: Since there isn't any annotation of the important paths, we can't verify whether important paths are chosen or not. However, the observation that A\*Net matches the performance of NBFNet suggests that A\*Net captures most of the important paths. Note that the performance of A\*Net is upper bounded by NBFNet, as A\*Net visits far fewer paths. We agree that the priority score should be dependent on a specific query, and that's what we designed A\*Net to be.
Both the priority function and the predictor take a **query-dependent** representation $\mathbf{s}_{uq}^{(t)}(x)$ as input, but the parameters in both functions (i.e. $f(\cdot)$ in Eqn. 11) are **query independent**. This is consistent with previous works[1, 2, 3]. We share the **query-independent** parameters in these functions. For two queries $(u, q_1, ?)$ and $(u, q_2, ?)$, they will have different representations $\mathbf{s}_{uq_1}^{(t)}(x)$ and $\mathbf{s}_{uq_2}^{(t)}(x)$, and hence different priority scores. **Q2: What about the effectiveness and efficiency of a random pruning strategy?** A4: We run experiments for the random pruning strategy on FB15k-237 and ogbl-wikikg2. We set the random pruning strategy to have the same node and edge ratios as A\*Net. A\*Net outperforms the random pruning strategy on both datasets, and is also slightly better in time and memory. We conjecture the reason is that A\*Net tends to revisit nodes more often than the random pruning strategy, resulting in more cache-friendly behavior. |FB15k-237|MRR|H@1|H@3|H@10|#message|time|memory| |---|---|---|---|---|---|---|---| |Random|0.378|0.288|0.413|0.556|39,017|9.20min|16.9GiB| |A\*Net|**0.411**|**0.321**|**0.453**|**0.586**|38,610|8.07min|11.1GiB| |ogbl-wikikg2|TestMRR|ValidMRR|#Params|#message|time|memory| |---|---|---|---|---|---|---| |Random|0.5815|0.5827|**6.83M**|51,458|1.74hr|26.5GiB| |A\*Net|**0.6767**|**0.6851**|**6.83M**|52,371|1.30hr|24.1GiB| **Q3: Only the results of the proposed method are visualized without any comparison. What distinct paths does the neural function select compared to the two handcrafted ones? Does the neural-based path selection align more closely with knowledge semantics?** A5: Due to space limits, we put the visualization in the global response. We observe that the paths captured by A\*Net are concise. By contrast, NBFNet and PPR find longer and more redundant paths. The degree priority function results in the behavior closest to A\*Net, but the entities it visits are less relevant than those in A\*Net. It is hard to confirm that A\*Net aligns better with knowledge semantics, as we don't have any ground truth for important paths. We don't intend to make claims about the quality of A\*Net's visualization; we only show its interpretability. [1] Zhu et al. A General Graph Neural Network Framework for Link Prediction. NeurIPS 2021. [2] Yang et al. Differentiable Learning of Logical Rules for Knowledge Base Reasoning. NIPS 2017. [3] Teru et al. Inductive Relation Prediction by Subgraph Reasoning. ICML 2020. --- Rebuttal Comment 1.1: Comment: Thanks so much for the detailed responses! While the rebuttal does provide some clarification, I still have some concerns. 1. The authors explained that the query vector $\mathbf{q}$ in the second term of Eq. 10 can represent the relative distance from $u$ to $v$. However, this interpretation seems to be more of an intuitive justification rather than an inherent aspect of the model design. Meanwhile, since $\mathbf{q}$ is also a factor in the computation of each $h^{(t)}_q(u,x)$, it remains unclear if $\mathbf{q}$ maintains its independence and intuitive significance throughout the training process. 2. The authors emphasized that the major contribution is the first path-based method that scales to ogbl-wikikg2. Nevertheless, I question the true significance of this experimental work for inductive models that compute at the subgraph level.
Because the model does not compute over the entire large-scale KG, excluding part of the nodes/edges is an obvious strategy to improve efficiency and scalability. Compared with embedding-based methods, the natural drawback of inference complexity still exists. 3. I also have one additional concern about the model's performance. Table 1 illustrates that the proposed method performs slightly worse than NBFNet. I am skeptical of the statement that the performance of NBFNet represents a definitive upper bound. Notably, a recent study titled "AdaProp: Learning Adaptive Propagation for Graph Neural Network based Knowledge Graph Reasoning" has outperformed both RED-GNN and NBFNet by leveraging an edge sampling strategy. Given these considerations, I would like to keep my initial score. --- Reply to Comment 1.1.1: Title: Discussion (1/2) Comment: Thank you for your feedback. Regarding your further concerns, here is our response. It would be nice if you could give us some feedback before the end of the discussion period. **C1: The query vector $\mathbf{q}$ in the second term of Eq. 10 can represent the relative distance from $u$ to $v$. However, this interpretation seems to be more of an intuitive justification rather than an inherent aspect of the model design. Meanwhile, since $\mathbf{q}$ is also a factor in the computation of each $h_q^{(t)}(u, v)$, it remains unclear if $\mathbf{q}$ maintains its independence and intuitive significance throughout the training process.** A1: We should first correct a detail: the query vector $\mathbf{q}$ represents the **relative position**, not the **relative distance**, from $u$ to $v$. The relative position contains not only the relative distance, but also the relative direction. For both A\*Net and the A\* algorithm, knowing the relative distance can't fully recover the position of $v$ from the position of $u$, but knowing the relative position can. We agree that this is an intuitive justification, but it is also an inherent aspect of A\*Net's inductive ability. We conjecture that you expected A\*Net to use the position of $v$ as the absolute goal, faithful to the A\* algorithm. First, as we discussed in Line 171-173, unlike the shortest path problem, we don't know the answer entity $v$ beforehand in knowledge graph reasoning. So we can only reparameterize $v$ by $u$ and $q$ from the query. Second, if we use the absolute goal, whether from an oracle or predicted by the representations $\mathbf{u}$ and $\mathbf{q}$, the model will fit to some absolute positional information, and thereby can't generalize to unseen entities in the inductive setting. Hence a relative goal $\mathbf{q}$ is better for preserving the inductive advantage of path-based methods. Yes, it is not that clear whether the representation $\mathbf{q}$ effectively captures the relative goal, when it is used as both the relative goal and the condition in Eqn. 2 & 3. We conduct an ablation study to verify whether $\mathbf{q}$ contributes more as the relative goal or as the condition. We consider two variants of A\*Net: 1) A variant without conditioning on $q$, where the indicator function (Eqn. 2) is replaced with $\mathbb{1}(u=v) = \overrightarrow{1}\text{ if }u=v\text{ else } \overrightarrow{0}$, and the edge representation (Eqn. 3) is replaced with $\mathbf{w}(x, r, v) = \mathbf{r}$. 2) A variant without the representation $\mathbf{q}$ in the neural priority function (Eqn. 10).
Here are the results. |FB15k-237|MRR|Hits@1|Hits@3|Hits@10| |---|---|---|---|---| |NBFNet (w/o conditioning)|0.400|0.306|0.439|0.585| |NBFNet|**0.415**|**0.321**|**0.454**|**0.599**| |A\*Net (w/o conditioning)|0.401|0.311|0.439|0.580| |A\*Net (w/o relative goal)|0.185|0.105|0.203|0.350| |A\*Net|**0.411**|**0.321**|**0.453**|0.586| We observe that conditioning on $q$ has a small gain on the performance (1.5% absolute difference in MRR) and is general to both NBFNet and A\*Net, suggesting that conditioning is not the key to the success of A\*Net. Note the conditioning design is a common practice to improve performance in path-based methods[1, 2, 3, 4, 5]. However, we observed a significant performance drop for A\*Net without $\mathbf{q}$ in the priority function (22.6% absolute difference in MRR). Hence we think $\mathbf{q}$ captures the relative goal in the neural priority function and aligns with the intuition of the A\* algorithm. [1] Yang et al. Differentiable Learning of Logical Rules for Knowledge Base Reasoning. NIPS 2017. [2] Sadeghian et al. DRUM: End-To-End Differentiable Rule Mining On Knowledge Graphs. NeurIPS 2019. [3] Zhu et al. Neural Bellman-Ford Networks: A General Graph Neural Network Framework for Link Prediction. NeurIPS 2021. [4] Zhang and Yao. Knowledge Graph Reasoning with Relational Digraph. WWW 2022. [5] Zhang and Zhou et al. AdaProp: Learning Adaptive Propagation for Graph Neural Network based Knowledge Graph Reasoning. KDD 2023. --- Reply to Comment 1.1.2: Title: Discussion (2/2) Comment: **C2: Because the model would not calculate in the entire large-scale KG, excluding part of nodes/edges is an obvious strategy to improve efficiency and scalability. Compared with embedding-based methods, the natural drawback of inference complexity still exists.** A2: It is always trivial to scale up models by random sampling and sacrificing performance, but **it is not trivial to scale up models without a performance drop**. This is also the main point of previous papers that studied sampling methods for GNNs on homogeneous graphs[6, 7]. As for A\*Net, we have shown in the answer to your Q2 that the random pruning strategy is significantly worse than A\*Net on FB15k-237 (3.3% absolute difference in MRR) and ogbl-wikikg2 (9.5% absolute difference in MRR) under the same node and edge ratios. In other words, A\*Net is strictly better than random pruning in terms of efficiency and effectiveness. A\*Net has an inference complexity of $O(T(\alpha|\mathcal{V}|d^2+\alpha\beta|\mathcal{E}|d))$ for answering a single query $(u, q,?)$. By comparison, its full counterpart, NBFNet, has a complexity of $O(T(|\mathcal{V}|d^2+|\mathcal{E}|d))$. Embedding methods such as TransE or RotatE have a complexity of $O(|\mathcal{V}|d)$ for enumerating all entities to answer the query $(u, q,?)$. If we take the hyperparameters from ogbl-wikikg2, A\*Net is about $6\times(0.002\times2.5\text{M}\times32^2+0.002\times1\times16.1\text{M}\times32)=37\text{MFlops}$. Embedding methods are about $2.5\text{M}\times500=1.25\text{GFlops}$. Empirically, A\*Net is even faster than embedding methods due to its pruning ratios $\alpha$, $\beta$ and small dimension $d$. We would advocate comparing different knowledge graph reasoning methods through Pareto frontiers, e.g. what is the best model at the complexity of $O(|\mathcal{V}|d)$, and what is the best model at the complexity of $O(|\mathcal{V}|^2d)$.
A\*Net is a new Pareto frontier here, since it achieves competitive performance with NBFNet while using significantly less time. To the best of our knowledge, none of the embedding methods (e.g. TransE, RotatE) at the complexity of $O(|\mathcal{V}|d)$ is inductive, and none of the inductive methods (e.g. GraIL, NBFNet, A\*Net) can reach the complexity of $O(|\mathcal{V}|d)$. Also, the inference complexity of embedding methods does not reflect their actual time cost in applications. When the graph changes over time (e.g. Wikidata), embedding methods need to be frequently re-trained on the whole graph to accommodate any new entity, which is very costly. By contrast, inductive methods like A\*Net can directly perform inference over such new entities. **C3: I am skeptical of the statement that the performance of NBFNet represents a definitive upper bound. Notably, a recent study titled "AdaProp: Learning Adaptive Propagation for Graph Neural Network based Knowledge Graph Reasoning" has outperformed both RED-GNN and NBFNet by leveraging an edge sampling strategy.** A3: We feel it is a standard practice to assume that a sampling method is upper bounded by a full inference method in performance[6, 7] when they are compared apples to apples. A\*Net exactly follows the neural parameterization and hyperparameters of NBFNet, so it is natural to think that A\*Net is upper bounded by NBFNet. AdaProp is compared to its full variant RED-GNN with different sets of hyperparameters, rather than in a fair comparison like ours. For example, AdaProp uses 8 layers and RED-GNN uses 5 layers, which explains the performance gain of AdaProp. [6] Chen et al. FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling. ICLR 2018. [7] Chen et al. Stochastic Training of Graph Convolutional Networks with Variance Reduction. ICML 2018.
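To summarize the discussion of Eqn. 10 in code form, here is a minimal sketch of the neural priority computation as described in this thread: the current path aggregation $\mathbf{h}$ plays the role of $d(u,x)$, and $\mathbf{g}([\mathbf{h}, \mathbf{q}])$ estimates the remaining cost toward the relative goal $\mathbf{q}$. Element-wise multiplication stands in for the aggregation operator, and all module names and dimensions are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class NeuralPriority(nn.Module):
    """Sketch of s_uq(x) = h ⊗ g([h, q]) followed by a scalar head."""
    def __init__(self, dim):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.f = nn.Linear(dim, 1)   # shared with the predictor head

    def forward(self, h, q):
        # h: (num_nodes, dim) aggregation of paths from u to each node x
        # q: (dim,) learned embedding of the query relation (the relative goal)
        q = q.expand_as(h)
        remaining = self.g(torch.cat([h, q], dim=-1))  # analogue of g(x, v)
        s = h * remaining                              # analogue of combining d(u,x) with g(x,v)
        return torch.sigmoid(self.f(s)).squeeze(-1)    # node priorities
```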
Summary: This paper proposes a scalable path-based knowledge graph reasoning approach. The idea is to extend only important paths from the exponentially growing set of all possible paths. A heuristic priority function is parametrized by a feed-forward network and is trained to predict the priority of nodes to expand. Experiments show that the proposed approach can significantly improve time and memory efficiency and also achieve good results. Strengths: - Scalability is an important issue for path-based reasoning approaches. The idea of selecting only important paths is interesting and sounds reasonable - The proposed approach is effective and supported by extensive experiments. Time and memory efficiency has been significantly improved. Benchmark results are also good. Weaknesses: My concern is mainly about the design of the priority function Eq (10) In Eq (10), the first part $h_q^{(t)}(u, x)$ is already conditioned on q, u, and x, so in principle the second part $g([h_q^{(t)}(u, x), q])$ doesn't provide any additional information. Therefore, the priority function is purely based on the current path from the start and contains no information about the goal. In other words, the prediction of the priority function would be the same even if the goal changes. This is different from the design of the A* algorithm and may lose theoretical guarantees. It is not appropriate to present the approach in the manner of the A* algorithm. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please see Weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: properly addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments. We notice that you raised only one concern for the rating of 3. Feel free to bring up other concerns for discussion. Here is our response to your concern. **W1: The first part $h^{(t)}_q(u, x)$ is already conditioned on q, u, and x, so in principle the second part $g([h^{(t)}_q(u, x), q])$ doesn't provide any additional information. The priority function is purely based on the current path from the start and contains no information about the goal. The prediction of the priority function would be the same even if the goal changes. This is different from the design of the A\* algorithm and may lose theoretical guarantees.** A1: It is a misunderstanding that the priority function contains no information about the goal. For a given query $(u,q,?)$, the goal is the set of answer nodes $\mathcal{V}_{ans} = \\{ v | (u,q,v) \in \mathcal{E} \\}$. Since the set $\mathcal{V}_{ans}$ is a function of $u$, $q$ and $\mathcal{E}$, we can represent the goal with the source node $u$ and the query relation $q$. Note that we exclude the ground truth triplets $\mathcal{E}$ because they are unknown during inference. A more intuitive figure can be found in the PDF file attached to the global rebuttal response. The vector representation $\mathbf{s}_{uq}^{(t)}(x)$ for the priority function in A\*Net (Eqn. 10) is designed to match the priority function $s(x)$ in the A\* algorithm (Eqn. 4). For a given $u$, $x$ and $t$, $\mathbf{s}_{uq}^{(t)}(x)$ will be different if we change the goal $q$. We will make the correspondence between A\*Net and the A\* algorithm more explicit in the paper. Here we illustrate the correspondence between the terms in A\*Net and the A\* algorithm. The priority function in the A\* algorithm (Eqn. 4) is $$s(x) = d(u,x) \otimes g(x,v)$$ where $d(u, x)$ is the length of the current shortest path from $u$ to $x$. $g(x, v)$ is a heuristic function that estimates the remaining length from $x$ to the target node $v$ (i.e. the goal). Typically, $g(x,v)$ is defined as the $L_1$ distance from $x$ to $v$. The vector representation $\mathbf{s}_{uq}^{(t)}(x)$ in A\*Net (Eqn. 10) is $$\mathbf{s}_{uq}^{(t)}(x) = \mathbf{h}_q^{(t)}(u,x) \otimes \mathbf{g}([\mathbf{h}_q^{(t)}(u,x), \mathbf{q}])$$ where $\mathbf{h}_q^{(t)}(u, x)$ corresponds to $d(u, x)$ and represents the aggregation of paths from $u$ to $x$ within $t$ hops. The optional subscript $q$ means that the representation $\mathbf{h}_q^{(t)}(u, x)$ is conditioned on the goal $q$. In other words, $\mathbf{h}_q^{(t)}(u, x)$ mostly aggregates paths that are relevant to the goal $q$. One may also opt for an unconditioned version $\mathbf{h}^{(t)}(u, x)$ which aggregates all the paths uniformly. Here we follow NBFNet[1] and use the conditioned parameterization. We agree with you that the second part $\mathbf{g}([\mathbf{h}_q^{(t)}(u,x), \mathbf{q}])$ is conditioned on the same set of variables $u$, $q$ and $x$ as the first part. However, the fact that two architectures model the same information does not imply that they have the same inductive bias and generalization ability. For example, CNNs are better than MLPs on images because they have the inductive bias of translation equivariance, even though they take the same information as input[2]. In graph machine learning, several papers[3, 4, 5] also suggest algorithmic alignment is the key to the success of GNNs on graphs. Here we show why $\mathbf{g}([\mathbf{h}_q^{(t)}(u,x), \mathbf{q}])$ is aligned with $g(x,v)$ in the A\* algorithm.
The learned representation $\mathbf{q}$ captures the relative position from the source node $u$ to the target node $v$. For example, if $q$ is the mother relation, then the aggregation of paths between the source node $u$ and an answer node $v$ should roughly match the representation of mother $\mathbf{q}$, i.e. $\mathbf{q} \approx \mathbf{h}^{(T)}_q(u, v)$ for any $(u,q,v)\in\mathcal{E}$. The reason why $\mathbf{q}$ can be independent of $u$ and $v$ is that the definition of a relation is independent of its triplet instances. $\mathbf{h}_q^{(t)}(u,x)$ is the current aggregation of paths from $u$ to $x$ and represents the relative position from $u$ to $x$, conditioned on the query relation $q$. By transforming $\mathbf{h}_q^{(t)}(u,x)$ and $\mathbf{q}$ with a function $g(\cdot)$ (e.g. a vector subtraction function), we can obtain an estimate of the relative position from $x$ to $v$, which corresponds to $g(x,v)$. [1] Zhu et al. Neural Bellman-Ford Networks: A General Graph Neural Network Framework for Link Prediction. NeurIPS 2021. [2] Bronstein et al. Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges. arXiv 2021. [3] Xu et al. What Can Neural Networks Reason About? ICLR 2020. [4] Xu et al. How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks. ICLR 2021. [5] Dudzik and Velickovic. Graph Neural Networks are Dynamic Programmers. NeurIPS 2022. --- Rebuttal 2: Title: Comment by Reviewer rgXX Comment: I thank the authors for their detailed response. I agree that $u$ and $q$ contain some information about the goal. Now the explanation is a bit clearer by regarding $q$ as some kind of "relative goal". But I still don't think the alignment between A\*Net and the A\* algorithm is appropriate. **In the A\* algorithm, $d(u, x)$ and $g(x, v)$ model complementary information based on different inputs: the current shortest path from $u$ to $x$, and the estimation of the cost from $x$ to $v$, respectively. However, in A\*Net, $h_q(\cdot)$ and $g(\cdot)$ in Eq (10) are conditioned on the same variables $u$, $q$, $x$.** The authors' response argued that $\textbf{h}_q(\cdot)$ and $\textbf{g}(\cdot)$ can have different inductive biases, but this argument is too vague... It is still not clear enough what different information is modeled by $\textbf{h}_q(\cdot)$ and $\textbf{g}(\cdot)$, since $\textbf{h}_q(\cdot)$ also takes into account the "relative goal" $q$. Therefore I don't think there is a clear correspondence between A\*Net and the A\* algorithm. I will raise my score to 4 to reflect the authors' further explanation. --- Rebuttal Comment 2.1: Title: Discussion Comment: Thank you for your feedback. We understand your concern that it looks like we fabricated the story to make A\*Net look like the A\* algorithm, but they are actually well aligned. The basic logic behind the alignment is: 1. The A\* algorithm has two equivalent formulations, one based on the absolute goal $v$, and one based on the relative goal from $u$ to $v$, as illustrated in our attached PDF file. 2. A\*Net follows the relative goal formulation of the A\* algorithm. There are two reasons that we must use this design: 1) We don't have access to the absolute goal $v$ in knowledge graph reasoning, so we reparameterize with the source node $u$ and the relative goal $q$. 2) An absolute goal can't transfer to unseen entities in the inductive setting, while a relative goal can. 3.
The reason that $\mathbf{h}_q(\cdot)$ and $\mathbf{g}(\cdot)$ are conditioned on the same variables is that we additionally condition the representation $\mathbf{h}_q(\cdot)$ on the query relation $q$, following the common practice in path-based methods[1, 2, 3, 4, 5]. This step is optional. In other words, we can think of $\mathbf{h}(\cdot)$ as parameterized by $u$ and $x$, and $\mathbf{g}(\cdot)$ as parameterized by $u$, $x$ and $q$. 4. We verify this with ablation studies that either remove $q$ from the condition of $\mathbf{h}_q(\cdot)$ or from the priority function $\mathbf{g}(\cdot)$. We observe that conditioning on $q$ has a small gain on the performance (1.0% absolute difference in MRR) and is general to both NBFNet and A\*Net, suggesting that conditioning is not the key to the success of A\*Net. However, there is a significant performance drop for A\*Net without $\mathbf{q}$ in the priority function (22.6% absolute difference in MRR). This means that the relative goal $\mathbf{q}$ plays an important role in the priority function, which aligns with the intuition of the A\* algorithm (for reference, a minimal classical A\* sketch follows this thread). |FB15k-237|MRR|Hits@1|Hits@3|Hits@10| |---|---|---|---|---| |NBFNet (w/o conditioning)|0.400|0.306|0.439|0.585| |NBFNet|**0.415**|**0.321**|**0.454**|**0.599**| |A\*Net (w/o conditioning)|0.401|0.311|0.439|0.580| |A\*Net (w/o relative goal)|0.185|0.105|0.203|0.350| |A\*Net|**0.411**|**0.321**|**0.453**|0.586| We are aware that the alignment between A\*Net and the A\* algorithm is important for understanding the methodology of this paper. Hence we will clarify this point and add the above ablation studies in the revised version. [1] Yang et al. Differentiable Learning of Logical Rules for Knowledge Base Reasoning. NIPS 2017. [2] Sadeghian et al. DRUM: End-To-End Differentiable Rule Mining On Knowledge Graphs. NeurIPS 2019. [3] Zhu et al. Neural Bellman-Ford Networks: A General Graph Neural Network Framework for Link Prediction. NeurIPS 2021. [4] Zhang and Yao. Knowledge Graph Reasoning with Relational Digraph. WWW 2022. [5] Zhang and Zhou et al. AdaProp: Learning Adaptive Propagation for Graph Neural Network based Knowledge Graph Reasoning. KDD 2023.
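For reference alongside Eqn. 4, the following is a minimal classical A\* search on a grid with the $L_1$ heuristic mentioned in the discussion, i.e., priority $s(x) = d(u,x) + g(x,v)$ with the absolute goal $v$. This is textbook A\* with a hypothetical `neighbors` function, included only to ground the absolute-goal formulation that A\*Net re-parameterizes through $(u, q)$.

```python
import heapq

def a_star(neighbors, u, v):
    """neighbors(x) yields (y, cost) pairs; nodes are (row, col) coordinates."""
    def g(x):
        # Heuristic: L1 distance from x to the goal v.
        return abs(x[0] - v[0]) + abs(x[1] - v[1])

    frontier = [(g(u), 0, u)]        # (priority s(x), current length d(u,x), node)
    best = {u: 0}
    while frontier:
        _, d, x = heapq.heappop(frontier)
        if x == v:
            return d                 # shortest path length found
        for y, cost in neighbors(x):
            if d + cost < best.get(y, float('inf')):
                best[y] = d + cost
                heapq.heappush(frontier, (d + cost + g(y), d + cost, y))
    return None                      # goal unreachable
```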
Rebuttal 1: Rebuttal: # Summary of Responses We would like to thank all reviewers for your time and patience on our submission. Here is a summary of the reviewers' points and our responses. **We attached a PDF file to illustrate the correspondence between A\*Net and the A\* algorithm.** **Contributions** - **Scalability is an important problem for path-based methods (all reviewers).** - **A\*Net achieves impressive results in performance, time and memory, especially on a million-scale dataset ogbl-wikikg2 (all reviewers).** - **The technical contribution of A\*Net is incremental (Reviewer 7LW7, ALvG):** The technical contribution involves the mathematical derivation of A\*Net and the design of the neural priority function inspired by the A\* algorithm. However, we emphasize that the main contribution of this paper is to scale path-based methods to ogbl-wikikg2, which is **two orders of magnitude larger** than datasets solved by previous path-based methods. This contribution is recognized by all reviewers. Considering that **ogbl-wikikg2 has been previously dominated by embedding methods**, this is an important breakthrough and may potentially change future research directions. **Writing** - **The neural priority function contains no information about the goal and is different from the A\* algorithm (Reviewer rgXX, 7LW7):** We design the representation for the neural priority function $\mathbf{s}_{uq}^{(t)}(x)$ in A\*Net (Eqn. 10) to match the priority function in the A\* algorithm (Eqn. 4). The current aggregation of paths $\mathbf{h}_q^{(t)}(u, x)$ corresponds to the current length $d(u, x)$ in the A\* algorithm, while the term $\mathbf{g}([\mathbf{h}_q^{(t)}(u,x), \mathbf{q}])$ corresponds to the remaining distance $g(x, v)$. While the A\* algorithm uses the target node $v$ as **the absolute goal**, A\*Net uses the learned vector $\mathbf{q}$ to represent **the relative goal** from the source node $u$ to the target node $v$. A more intuitive figure can be found in the attached PDF file. - **The weight sharing between the priority function and the predictor is not clear (Reviewer 7LW7, ALvG):** The priority function is $s_{uq}^{(t)}(x) = \sigma(f(\mathbf{s}_{uq}^{(t)}(x)))$, where $\mathbf{s}_{uq}^{(t)}(x)$ is the representation computed by Eqn. 10 and $f(\cdot)$ is a feed-forward network. The predictor function is $p(v|u,q) = \sigma(f'(\mathbf{s}_{uq}^{(T)}(v)))$, where $\mathbf{s}_{uq}^{(T)}(v)$ is the representation from the last layer and $f'(\cdot)$ is a feed-forward network. We share the parameters between $f(\cdot)$ and $f'(\cdot)$. Note that $f(\cdot)$ is a query-independent function, but the node priority $s_{uq}^{(t)}(x)$ can be query dependent since the input representation $\mathbf{s}_{uq}^{(t)}(x)$ depends on the query relation $q$. - We will carefully improve our writing to address these concerns in the revised version. Since these concerns are related to our technical contribution, please let us know if you have further questions. **Experiments** - **Comparison with a random pruning strategy (Reviewer 7LW7).** - **Comparison on path visualization of A\*Net, NBFNet, and handcrafted priority functions (Reviewer 7LW7).** - **Performance of NBFNet on tail prediction (Reviewer 2kB4).** - **Performance of A\*Net when pruning is disabled (Reviewer ALvG).** - We provide results for all the experiments requested by the reviewers. All these experiments are consistent with the observations and claims in the paper. We will include them in the revised version to provide better context to the readers.
**Questions** - **If the path length is small, can vanilla path-based methods be more efficient than the Bellman-Ford algorithm (Reviewer 2kB4):** We compute the number of paths for different lengths, averaged over all positive triplets in each dataset. We observe exponential growth in the number of paths w.r.t. the length of the paths. Vanilla path-based methods are only efficient for length ≤ 1 on FB15k-237, length ≤ 4 on WN18RR and length ≤ 2 on ogbl-wikikg2 respectively. This is shorter than the optimal length of 6 reported by NBFNet and RED-GNN. --- # Path Visualization of Different Methods 1. $(\text{Bandai}, \text{industry}, \text{?})$ - NBFNet - $\text{Bandai} \xleftarrow{\text{subsidiary}} \text{Bandai Namco Holdings} \xrightarrow{\text{webpage}} \text{official website}\xleftarrow{\text{webpage}}\text{Bandai Namco Entertainment}\xrightarrow{\text{industry}}\text{video game}$ - $\text{Bandai}\xrightarrow{\text{webpage}}\text{official website}\xleftarrow{\text{webpage}} \text{Santa Clara}\xleftarrow{\text{location}} \text{Bandai Namco Entertainment}\xrightarrow{\text{industry}} \text{video game}$ - A\*Net (neural) - $\text{Bandai} \xleftarrow{\text{subsidiary}}\text{Bandai Namco Entertainment} \xrightarrow{\text{industry}}\text{video game}$ - $\text{Bandai} \xrightarrow{\text{industry}}\text{media}\xleftarrow{\text{industry}}\text{Pony Canyon}\xrightarrow{\text{industry}}\text{video game}$ - A\*Net (PPR) - $\text{Bandai}\xrightarrow{\text{webpage}}\text{official website}\xleftarrow{\text{webpage}}\text{Bandai}\xrightarrow{\text{webpage}}\text{official website}\xleftarrow{\text{webpage}}\text{Bandai}\xleftarrow{\text{subsidiary}}\text{Bandai Namco Entertainment}\xrightarrow{\text{industry}}\text{video game}$ - $\text{Bandai}\xrightarrow{\text{webpage}}\text{official website}\xleftarrow{\text{webpage}}\text{Bandai}\xleftarrow{\text{subsidiary}}\text{Bandai Namco Entertainment}\xrightarrow{\text{industry}}\text{video game}$ - A\*Net (degree) - $\text{Bandai}\xrightarrow{\text{webpage}}\text{official website}\xleftarrow{\text{webpage}}\text{Def Jam Recordings}\xrightarrow{\text{industry}}\text{video game}$ - $\text{Bandai}\xleftarrow{\text{lead}}\text{CEO}\xrightarrow{\text{company}}\text{Microsoft}\xrightarrow{\text{industry}}\text{video game}$ Pdf: /pdf/71144a4300e491ca0d48dd472f6bfab1425590a7.pdf
NeurIPS_2023_submissions_huggingface
2023
ZipLM: Inference-Aware Structured Pruning of Language Models
Accept (poster)
Summary: This paper proposes a novel structured compression approach for the inference of LLMs, which reduces the model size by removing entire sub-components, like rows or columns, from the model's weight matrices. Based on their algorithm, they are able to drop attention heads and shrink the size of FFN layers. Besides, the proposed method considers the runtime speedup while deciding the sparsity of the pruning, which leads to a better trade-off between loss and runtime performance. Strengths: - This work comes up with a new structured pruning method for transformer models and achieves reasonable speedups in the experiments with multiple models. - The theoretical analysis is solid. Weaknesses: - The performance of the proposed method seems strongly dependent on the data distribution of inference, which may make it not that practical in real-world inference settings where the incoming inputs can vary a lot. - In the latency-constrained scenario, the authors said "However, for the latter, the inputs are much smaller, and the size of weight matrices is no longer the primary bottleneck.", however, in my opinion, when the batch size is very small (like 1), loading weights from GPU HBM is exactly the bottleneck. Therefore, I guess sparsifying model parameters should lead to reasonable speedups? - In the throughput-constrained scenario (i.e., large batch size), it will be interesting to see how ZipLM could support an even larger batch size, since we can take advantage of the memory saved by pruning to hold more inputs. In this way, we may further increase the throughput? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the Weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The potential social impact is not discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Question 1: The performance of the proposed method seems strongly dependent on the data distribution of inference, which may make it not that practical in real-world inference settings where the incoming inputs can vary a lot.** We would like to note that encoder-only models, such as BERT, are designed for per-task finetuning. Similarly, ZipLM enables per-task pruning, up to very large speedup ratios. This per-task setting is completely standard in the pruning and distillation literature on encoder-only models. Additionally, we also apply ZipLM to decoder-only models, which are typically used for diverse tasks without additional finetuning. Consequently, we perform pruning on a very general dataset (OpenWebText) and evaluate the model in zero-shot fashion on the unseen WikiText dataset. In terms of sensitivity to different amounts of calibration data, we have performed additional experiments and found that ZipLM is very robust: even with only 32 samples, we outperform the prior state-of-the-art of Kwon et al., which uses 2048 samples, in the one-shot setting. We present results in the table below: | Speedup = 1.5x | | $\|$ | Speedup = 2.0x | | | :----------------: | :-------- | :--------------: | :--------------: | :-------------: | | Num samples | F1 score | $\|$ | Num samples | F1 score | | ZipLM, 4 | 82.33 | $\|$ | ZipLM, 4 | 48.41 | | ZipLM, 32 | 86.76 | $\|$ | ZipLM, 32 | 82.63 | | ZipLM, 128 | 86.79 | $\|$ | ZipLM, 128 | 83.56 | | ZipLM, 512 | 86.79 | $\|$ | ZipLM, 512 | 84.06 | | ZipLM, 2048 | 87.08 | $\|$ | ZipLM, 2048 | 84.14 | | ZipLM, 4096 | 87.62 | $\|$ | ZipLM, 4096 | 84.68 | | Kwon et al., 2048 | 86.20 | $\|$ | Kwon et al., 2048 | 76.50 | Finally, to ensure that there is no overfitting of hyper-parameters to the validation set, we have also performed test set evaluations on the official GLUE servers in Appendix C: Additional Validation. **Question 2: In the latency-constrained scenario, the authors said "However, for the latter, the inputs are much smaller, and the size of weight matrices is no longer the primary bottleneck." However, in my opinion, when the batch size is very small (like 1), loading weights from GPU HBM is exactly the bottleneck. Therefore, I guess sparsifying model parameters should lead to reasonable speedups?** Yes, structured pruning does, in general, bring speedups for memory-bound applications such as small batch-size generative inference as well. What we are referring to in the paper is that, for our particular model and inference environment, generative runtime is significantly impacted by overheads (e.g., kernel launches, layer-norms, under-utilization), as low batch-size matmuls are so fast (on larger models or on low-power devices, this should be much less significant). This means that size reduction does not necessarily lead to proportional speedups. ZipLM, however, accounts for this automatically during pruning. This is what we wanted to illustrate in this part. Nevertheless, we agree that this can be made clearer and will make improvements in the next revision. **Question 3: In the throughput-constrained scenario (i.e., large batch size), it would be interesting to see how ZipLM could support an even larger batch size, since we can take advantage of the memory saved by pruning to hold more inputs. In this way, we may further increase the throughput?** Yes, this is an excellent suggestion, and we therefore ran some benchmarking tests on an 11GB RTX 2080Ti.
For both the dense model (1.0x) and the ZipLM-compressed models (4.9x, 9.8x, 14.5x), we increase the batch size to the maximum value that fits in memory and evaluate the throughput (number of samples processed per second). The table below presents the throughput improvements from the increased batch size for the corresponding speedup targets. | Speedup | Throughput gain | | :-------: | :---------------: | | 1.0x | 1.0x | | 4.9x | 6.5x | | 9.8x | 12.0x | | 14.5x | 17.6x | **Question 4: The potential social impact is not discussed in the paper.** Please note that, following the NeurIPS23 submission guidelines, we have provided a discussion of limitations and impact in 'Appendix H: Broader Impact and Limitations'.
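For reference, a minimal sketch of how such throughput numbers can be measured (hypothetical model handle and input shapes; not the exact benchmarking harness used above):

```python
import time
import torch

@torch.no_grad()
def throughput(model, batch_size, seq_len=384, vocab=30522, iters=20):
    """Samples per second at a given batch size (CUDA model assumed)."""
    ids = torch.randint(vocab, (batch_size, seq_len), device="cuda")
    for _ in range(3):          # warm-up iterations
        model(ids)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        model(ids)
    torch.cuda.synchronize()
    return batch_size * iters / (time.time() - start)

# Hypothetical usage: compare the dense and pruned models at their largest
# feasible batch sizes to obtain throughput-gain ratios like those above.
# gain = throughput(pruned_model, 256) / throughput(dense_model, 64)
```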
Summary: This paper looks into structured pruning (e.g., of neurons or attention heads) of Transformer-based models. The authors propose to formulate the pruning objective by requiring the output of each pruned layer to be as close as possible to that of the unpruned layer. They then adopt the Optimal Brain Surgeon algorithm to come up with the weights to keep (mask) and the update to such weights. The proposed algorithm prunes each group (e.g., a neuron in a layer) one by one, ensuring that correlated groups are properly accounted for (which is hard to do when choosing a number of neurons to prune in one go). Additional extensions to the method include inference-aware criteria in the algorithm for choosing the groups to keep (using a pre-computed latency table). The suggested approach can be used for one-shot pruning and gradual pruning. Experimental results indicate good performance on attention head pruning, fully connected layer pruning, and removing full attention modules. Strengths: - Really good experimental results - Well written and easy enough to follow (modulo my comments below) Weaknesses: - Not very clear novelty - Seems computationally expensive - Ablations are missing (Is the improvement coming from the fact that you use formulation (1) for each layer? Is it from line 155 of the algorithm? Is it the inclusion of inference awareness?) Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1) How does pruning two consecutive layers happen? First pruning the first layer (Algorithm 1) and then pruning the next layer? If yes, you are essentially assuming that layers are independent (i.e., assuming a block-diagonal structure of the Hessian). 2) In your formulation (1) you seem to suggest that X is the input to the layer (whose weights W you are considering to prune). In the original Optimal Brain Surgeon, X is actually based on the gradients (and the global objective is considered, instead of the local l2 objective of matching each layer's output). I think this all needs to be made clearer. 3) For the experiments, it would be nice to report FLOPs or any other time measurement for your method (not the inference) and competing methods. 4) Potentially you can apply your idea of pruning only one structure at a time (line 155 in Algorithm 1) to any of the existing methods, so an ablation is needed (does your improvement come from the fact that you essentially do more gradual pruning?). 5) The method you propose does not seem to be LLM-specific - why not compare it with structured pruning methods for non-LLM models? - I am confused about the contribution on line 56 (produces a family of compressed models). How is it any different from, say, using gradual pruning with various levels of sparsity (and at each step updating the copy for that sparsity level)? You are still saving/updating multiple copies of the models, right? Minor: - Algorithm 1: Mask $M_R$ is not defined and is not updated in the body of the algorithm. I think you want to compile it based on R at the end of k iterations (the same also applies to $M_S$ - please define how this mask looks and its shape). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Question 1: Computational Cost** Please note that in the section “5 Discussion and Extensions”, paragraph “Computational efficiency”, we report two additional metrics, end-to-end runtime and efficiency with respect to the number of epochs, comparing also against the strongest competitor, CoFi: - In terms of the total number of epochs, ZipLM is more efficient than CoFi by a factor of _4.87x_ on larger tasks and _14.5x_ on smaller tasks. - In terms of end-to-end runtime, ZipLM is very efficient as well: for large GLUE tasks, it produces all 14 compressed models in only 35 hours on a single GPU, while for the small tasks it does so in only 10 hours. **Question 2: Ablations** In our submission, “Appendix B: Ablation Studies” presents results of an ablation study on the gains from our distillation technique and demonstrates improvements of up to _2 points_ in accuracy. Additionally, in Figure 2 of the response PDF, we focus on ablations specifically for the pruning metric. We compare results when pruning for sparsity (as prior approaches did) and when pruning for speedup (the ZipLM approach). The results suggest that the choice of pruning for speedup brings significant improvements, up to _10 points_, especially at higher speedups where inference-awareness is very important. **Question 3: How does pruning two consecutive layers happen?** In step 1 of our framework (Section 3.1), we indeed consider individual layers in isolation, but account for all correlations between different channels within each layer to perform accurate local pruning. However, in step 2 (Section 3.2) we also handle global interactions between layers by constructing (in an inference-aware manner) and evaluating many global pruning candidates based on the layer-wise solutions produced in step 1. Thus, overall, ZipLM captures both local and global correlations, which is a key feature of our algorithm. **Question 4: Formulation (1) vs. original OBS** Yes, this is correct. In the local pruning step (Section 3.1), we apply techniques similar to OBS, but in the context of the layer-wise pruning problem defined in equation (1). For this particular formulation, the Hessian, which OBS approximates via Fisher gradients, can be calculated directly as XX^T (and our OBS step is thus exact in this case). We will improve the clarity of this aspect in the next revision. **Question 5: Potentially you can apply your idea of pruning only one structure at a time to any of the existing methods.** The idea of pruning only one structure at a time cannot be applied to most other structured pruning approaches, because they do not apply weight updates to the remaining weights during the pruning step (e.g., they simply remove the structures with the lowest saliency); hence there would be no benefit to such a strategy. In contrast, in ZipLM, after removing the first structure, we can very efficiently update the remaining weights to compensate for this removal and re-evaluate their scores before the next selection (a simplified sketch of this update is given at the end of this thread). We would further like to emphasize that this one-at-a-time removal is only enabled by the high efficiency of our algorithm; applying this idea to other approaches, which compute updates in less efficient ways, would be far too slow (e.g., the MLP layer of BERT-base has > 3k structures to remove one at a time).
**Question 6: The method you propose does not seem to be LLM-specific - why not compare it with structured pruning methods for non-LLM models?** It is true that aspects of our method are general and could be extended to other model types (CNNs or ViTs). We have chosen to focus on LLMs in our practical comparison since they are a clear focus for efficiency research these days, and, consequently, we have several high-quality baselines to compete against. **Question 7: I am confused about the contribution on line 56 (produces a family of compressed models).** This contribution warrants special attention because all other structured pruning methods necessitate _a complete gradual pruning process for each sparsity target independently_, since intermediate checkpoints are not accurate enough. In contrast, ZipLM generates all models within a single run, producing remarkably accurate intermediate models featuring varying levels of sparsity. To elaborate, if one were to obtain models with compression ratios of 2x, 3x, ..., up to 15x using the CoFi method, it would entail distinct runs for each ratio: one for 2x, another for 3x, and so forth, up to 15x. **Question 8: Please define how the masks look and their shapes** We use $M_A$ to denote a pruning mask where all entries corresponding to the set of indices in $A$ are 1 and all other elements are 0; e.g., $M_R$ is the mask of all remaining indices after pruning. We will make this clearer in the next revision. **Question 9: Not very clear novelty** Regarding novelty, we would like to note a few additional points, relating to the answers to your questions above: - From the practical perspective, ZipLM is the first _gradual structured pruning approach_ able to produce the entire Pareto frontier of models far more efficiently than the previous state-of-the-art method, CoFi. As we detailed in Question 1 and Question 7, ZipLM is _4.87x_ more efficient on larger datasets and _14.5x_ more efficient on smaller datasets. - This advancement over prior work is enabled by a number of conceptual innovations: a new highly accurate pruner (shown to be state-of-the-art in the one-shot scenario), implemented by a highly efficient algorithm (as demonstrated by the ability to do one removal at a time with re-evaluation of the pruning scores), and a new distillation technique. All of this is complemented by inference-awareness (as demonstrated in the ablation study in Figure 2 of the response PDF). --- Rebuttal Comment 1.1: Comment: Dear Reviewer, Given that we did not get the opportunity to interact during the discussion period, we briefly summarize our response: 1. Our rebuttal provides data and experimental results which address your concerns. Specifically, our algorithm is anywhere from ~5x to ~14x more efficient than previous state-of-the-art approaches, while at the same time providing superior results across the board. 2. This ability is unlocked by a number of novel aspects relative to prior work: specifically, a highly accurate and highly efficient structured pruner, complemented by an inference-awareness step for the best accuracy-speedup tradeoff. We sincerely hope that you will acknowledge our response and additional results. Thank you for your time and effort, \ The ZipLM authors. --- Rebuttal Comment 1.2: Comment: Thank you for your response.
Do you have any references to back up the statement "necessitate a complete gradual pruning process for each sparsity target independently, since intermediate checkpoints are not accurate enough", or maybe experimental results? E.g., any results for the intermediate checkpoints from other methods vs. your intermediate checkpoints, which you claim are of much better quality? --- Reply to Comment 1.2.1: Comment: Dear Reviewer, Yes, we can certainly support this statement. Please examine the GitHub repository of CoFi, the prior state-of-the-art approach: https://github.com/princeton-nlp/CoFiPruning#:~:text=An%20example%20for,script%20for%20evaluation . It is clear that, to produce one sparse model, one has to run the entire pruning+finetuning pipeline. Moreover, this is directly confirmed by the main author of the paper, in the following comment: https://github.com/princeton-nlp/CoFiPruning/issues/2#:~:text=Hi%2C,a%20specific%20sparsity . Specifically, the text: > "CoFi requires training a single model every time for a specific sparsity" By contrast, with ZipLM we produce all models in a single run. Specifically, the ZipLM results illustrated across all figures are results of intermediate checkpoints, whereas the results with other approaches (like CoFi or any other distillation-based method) are obtained via one full run for each sparsity target. To our knowledge, ZipLM is the only method which produces all checkpoints in a single run.
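Addendum to Questions 4 and 5 above: to make the layer-wise OBS step concrete, here is a simplified, illustrative sketch (hypothetical variable names; not our released implementation) of removing one group of input columns of a layer and applying the closed-form compensation update, using the exact Hessian H = XX^T of the layer-wise l2 objective:

```python
# Illustrative one-step structured OBS on the layer-wise objective
# min ||W X - W_hat X||_F^2, with H = X X^T its exact Hessian.
import numpy as np

def prune_one_structure(W, Hinv, groups):
    """Pick the column group with minimal reconstruction error, zero it,
    and update the remaining weights to compensate (one OBS step)."""
    best, best_err, best_inv = None, np.inf, None
    for Q in groups:
        inv_QQ = np.linalg.inv(Hinv[np.ix_(Q, Q)])   # [(H^-1)_QQ]^-1
        err = np.trace(W[:, Q] @ inv_QQ @ W[:, Q].T)  # exact l2 error of removing Q
        if err < best_err:
            best, best_err, best_inv = Q, err, inv_QQ
    # Optimal update of all weights compensating for the removed group.
    W = W - W[:, best] @ best_inv @ Hinv[best, :]
    W[:, best] = 0.0  # columns in `best` are driven to zero by the update
    return W, best, best_err

# Toy usage: 4x6 layer, 128 calibration inputs, structures = 3 column pairs.
# A full loop would also shrink Hinv after each removal before the next step.
rng = np.random.default_rng(0)
W, X = rng.normal(size=(4, 6)), rng.normal(size=(6, 128))
H = X @ X.T + 1e-4 * np.eye(6)  # small damping for numerical stability
Hinv = np.linalg.inv(H)
W, removed, err = prune_one_structure(W, Hinv, [[0, 1], [2, 3], [4, 5]])
```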
Summary: The paper proposes ZipLM, a structured pruning method that can achieve desired target runtime speedups. The idea of ZipLM is to solve a layer reconstruction problem with a structural constraint, minimizing the change in the layer's output on a set of calibration examples. The proposed ZipLM is shown to be effective on both decoder-only and encoder-only models, outperforming prior distillation approaches and structured pruning approaches. Strengths: * The paper focuses on structured pruning problems, where the pruned model can easily achieve real speedups. Moreover, the proposed approach takes as input a target speedup as well as the hardware environment and optimizes the model for this specific setup to ensure the model achieves the desired speedup. I admire this realistic setting and believe that it can be practically impactful. * The paper has conducted extensive experiments studying the effectiveness of the proposed ZipLM approach, including on both encoder and decoder models, in both the one-shot pruning setting and the gradual compression setting, and with different levels of sparsity. * The experimental results show the proposed approach is more effective than the existing baseline methods. Weaknesses: * I believe the paper (especially the methodology part) can be presented better, and a lot more details should be added (even in the appendix). For example, how do you solve the reconstruction problem and obtain the optimal mask and weight update (equations (2) & (3))? How do you account for the constraint C of equation (1) in the solutions (2) & (3)? What exactly does a latency table look like, and how do you obtain the table given a hardware environment? How many calibration inputs did you use in your experiments, and how sensitive are the final results to the number/quality of the calibration inputs? * The paper is motivated by structured compression of large language models (LLMs). However, all experiments are conducted on models with hundreds of millions of parameters. Given that current state-of-the-art language models (e.g., LLAMA-7B/13B/65B) contain a couple of orders of magnitude more parameters than BERT, it is not clear how the proposed method works on larger models. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: * Do you think the proposed approach can apply to larger language models such as LLAMA-7B? If not, what is the barrier? * How sensitive is the proposed approach to the calibration examples? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: I didn’t see a clear potential negative societal impact of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Question 1: I believe the paper (especially the methodology part) can be presented better, and a lot more details should be added (even in the appendix). For example, how do you solve the reconstruction problem and obtain the optimal mask and weight update (equations (2) & (3))? How do you account for the constraint C of equation (1) in the solutions (2) & (3)?** Thank you for the comments on the presentation, which we will follow in the next revision. The main idea for deriving (2) and (3) is that (1) consists of a sum of row-wise linear regression problems. For linear regression, the exact error incurred by removing a set of weights, as well as the optimal adjustment to the remaining weights, can be determined in closed form, e.g., via solving a Lagrangian. Then, to couple structures across rows, we utilize the fact that there is no interaction between rows in the error of (1). Consequently, we can sum up the per-row formulas in appropriate ways to introduce our required coupling. (2) and (3) provide optimal formulas for removing a *single* structure; we then efficiently iterate those formulas until we have reached the desired sparsity target dictated by $C$ (we note that this iteration is not necessarily globally optimal but appears to be a good approximation in practice). In the next revision, we will improve the presentation of this part and include more details in the Appendix. **Question 2: What exactly does a latency table look like, and how do you obtain the table given a hardware environment?** For a given model, batch size, sequence length, and device, we measure the time of the forward pass through the main components of the Transformer layer, running in isolation: the attention and the FFN module. For the attention module we measure timings for a varying number of heads, and for the FFN module for varying intermediate sizes. The supplementary material of our submission contains an example of such a latency table at: code/bertbase_squad_V100.txt. Since the table is formatted such that it can be directly consumed by the ZipLM algorithm, we provide a human-readable example here (a toy usage example follows at the end of this response): | Intermediate size | Latency (ms) | $\|$ | Num of heads | Latency (ms) | | :-----------------: | :------------: | :--------------: | :------------: | :------------: | | 3072 | 11.9 | $\|$ | 12 | 7.9 | | 1814 | 7.4 | $\|$ | 10 | 6.7 | | 1322 | 5.8 | $\|$ | 8 | 5.8 | | 302 | 1.6 | $\|$ | 6 | 4.4 | | 130 | 1.0 | $\|$ | 4 | 3.2 | | 76 | 0.9 | $\|$ | 2 | 1.9 | | 33 | 0.7 | $\|$ | 0 | 0 | **Question 3: Do you think the proposed approach can apply to larger language models such as LLAMA-7B? If not, what is the barrier?** In principle, there is no major obstacle to applying ZipLM to massive models such as LLAMA-7B, and in fact we believe this is a very interesting question for future work. Our results show that the method translates to good speedups both for generative inference (ZipGPT2) and for large-batch scenarios (ZipBERT). The main challenge in porting ZipLM to billion-scale models would be computational: since structured compression methods require significant fine-tuning for best results, one would need the computational budget (as well as sufficient GPU memory to train with optimal settings) to reproduce a fraction (e.g., 5%) of the original training in order to allow for good recovery. **Question 4: How sensitive is the proposed approach to the calibration examples?** We have found the method to be very robust to the amount of calibration data.
To illustrate this, in the table below we present a sensitivity analysis with respect to the number of calibration samples. We one-shot prune the fine-tuned BERT-base model on the SQuADv1 task for two speedup targets: 1.5x and 2.0x. In this setup, we compare results with the current state-of-the-art one-shot pruning approach of Kwon et al., which uses 2048 samples by default. As can be seen from the table, ZipLM outperforms the prior state of the art starting at only 32 samples. As we increase the number of samples, the results improve further, by up to 2 points in F1 score. | Speedup = 1.5x | | $\|$ | Speedup = 2.0x | | | :----------------: | :-------- | :--------------: | :--------------: | :-------------: | | Num samples | F1 score | $\|$ | Num samples | F1 score | | ZipLM, 4 | 82.33 | $\|$ | ZipLM, 4 | 48.41 | | ZipLM, 32 | 86.76 | $\|$ | ZipLM, 32 | 82.63 | | ZipLM, 128 | 86.79 | $\|$ | ZipLM, 128 | 83.56 | | ZipLM, 512 | 86.79 | $\|$ | ZipLM, 512 | 84.06 | | ZipLM, 2048 | 87.08 | $\|$ | ZipLM, 2048 | 84.14 | | ZipLM, 4096 | 87.62 | $\|$ | ZipLM, 4096 | 84.68 | | Kwon et al., 2048 | 86.20 | $\|$ | Kwon et al., 2048 | 76.50 |
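Addendum to Question 2 above: a toy illustration of how such a latency table is consumed, so that pruning candidates are scored by measured latency rather than parameter count. The numbers are taken from the table in Question 2; the lookup helper itself is hypothetical:

```python
# Latency table entries (ms) for one Transformer layer, measured in isolation.
ATTN_MS = {12: 7.9, 10: 6.7, 8: 5.8, 6: 4.4, 4: 3.2, 2: 1.9, 0: 0.0}
FFN_MS = {3072: 11.9, 1814: 7.4, 1322: 5.8, 302: 1.6, 130: 1.0, 76: 0.9, 33: 0.7}

def layer_latency(num_heads, ffn_size):
    """Measured forward latency (ms) of one layer configuration."""
    return ATTN_MS[num_heads] + FFN_MS[ffn_size]

dense = layer_latency(12, 3072)      # 19.8 ms
candidate = layer_latency(6, 1322)   # 10.2 ms
print(f"speedup: {dense / candidate:.2f}x")  # ~1.94x for this configuration
```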
Summary: This paper proposes ZipLM, a structured pruning and reconstruction + layer-wise distillation + inference-aware pruning algorithm. The authors first extend the pruning formula of OBS to structured pruning and utilize per-structure inference latency estimates together with a structured SPDY search to achieve more accurate inference-aware pruning. Then, the authors distill the model in a token-wise manner. Experimental results demonstrate that ZipLM achieves state-of-the-art (SOTA) performance in both one-shot and pruning + knowledge distillation settings. Strengths: 1. ZipLM demonstrates exceptional results among existing compression algorithms. It not only outperforms retraining-constrained approaches but also surpasses the performance of some pruning + knowledge distillation algorithms. 2. The authors propose a systematic framework for structured pruning, encompassing aspects such as pruning and reconstruction, inference-aware structure search, and knowledge distillation for performance recovery. This comprehensive framework exhibits a well-designed structure that, in my opinion, contributes to the entire community. Weaknesses: 1. The methodology is somewhat incremental. The core contributions of the authors revolve around extending previous methods (OBS, SPDY) to structured pruning, since those methods couldn't be directly applied. While the authors have made these extensions, the core essence of the method remains largely based on the previous framework. 2. The experimental section of this paper has some shortcomings. For instance, although the authors conducted a simple ablation experiment, they did not analyze which specific parts of the framework played a crucial role in improving performance. Additionally, in the majority of compression papers, the GLUE benchmark's eight datasets are evaluated as a whole, since they represent the fundamental measure for assessing the performance of compressed models. However, the authors only evaluated four of these datasets. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could you provide a more comprehensive ablation experiment, specifically to identify which module within the entire framework contributes significantly to the performance improvement? 2. Since Kwon [1] also employed a pruning metric and layer-wise reconstruction via LLS for structured pruning, have you directly compared your pruning metric and the performance after OBS reconstruction with their method to determine which one performs better, apart from the effect of distillation? 3. Lines 192-193: "For example, a 95% sparse BERT produced by CoFi has 12x speedup on a V100 GPU, but only 5x on an A100 GPU." Since this is an interesting observation that FLOPs-based calculations != inference speed, can you explain the potential reason for this? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Question 1: Ablations** In our submission, “Appendix B: Ablation Studies” presents results of an ablation study on the impact of distillation during the fine-tuning stage. As can be seen, our distillation technique can improve results by up to _2 points_ in accuracy. Additionally, in Figure 2 of the response PDF, we focus on ablations specifically for the pruning metric. We compare results with ZipLM pruning when the target for pruning is sparsity (as prior approaches did) and when the target for pruning is speedup (the ZipLM approach). As can be seen from the figure, the choice of pruning for speedup brings significant improvements, up to _10 points_, especially at higher speedups where inference-awareness is very important. For additional ablations on inference-awareness, please see our answer below on the difference between V100 and A100 speedups. In summary, these ablation studies decouple the improvements from the three major components of our framework: token distillation, the pruning metric, and inference-awareness. Please let us know if you would like to see any additional ablation studies, and we will be more than happy to provide them during the discussion period. **Question 2: Evaluation on the remaining tasks in the GLUE suite** We ran ZipLM on the remaining four tasks from the GLUE benchmark suite (CoLA, MRPC, STS-B, and RTE) and present results in Figure 1 of the response PDF. As can be seen, ZipLM provides state-of-the-art results across these datasets as well, especially at higher speedup targets, relative to prior structured pruning and distillation-based approaches. **Question 3: Comparison of pruning metrics with Kwon et al. without distillation** We would like to highlight that we already show in “Table 2: One-shot structured pruning of BERT-base” that ZipLM outperforms the prior state-of-the-art one-shot approach of Kwon et al. in a setup without fine-tuning and thus also without distillation. In this setup, only the pruning metric and reconstruction are compared. **Question 4: Difference in speedups on V100 vs. A100** This discrepancy for CoFi models arises because the A100 GPU is significantly more powerful and thus faster on the dense model; at the same time, it is highly underutilized for small matrices, which significantly limits the speedups at very high sparsity. To illustrate this, we have measured the speedup from reducing the MLP size for both GPU types (see the table below). As can be seen, pruning to ~90% sparsity (3072 -> 302) gives ~7x speedup on a V100 but only ~3x speedup on an A100. *Table 1: Speedup improvements from shrinking the intermediate size of MLPs in the FFN section of a Transformer layer.* | MLP size | V100 | A100 | | :--------: | :----: | :----: | | 3072 | 1.0 | 1.0 | | 1814 | 1.6 | 1.1 | | 1322 | 2.0 | 1.4 | | 302 | 6.9 | 3.1 | | 130 | 11.8 | 4.4 | | 76 | 13.1 | 4.4 | | 33 | 14.8 | 4.4 | Such discrepancies are captured by the inference-awareness of the ZipLM approach, where pruning for sparsity is replaced by pruning for speedup, which can utilize this information to guide pruning decisions. **Question 5: Novelty** Regarding novelty, we would like to note a few additional points, relating to the answers to your questions above: - Conceptually, ZipLM introduces two innovations relative to prior instances of the Optimal Brain Surgeon (OBS) framework: it focuses on structured pruning (so it can produce speedups on any hardware), and it is inference-aware (so it directly relates accuracy loss to real-world speedup gains).
This is enabled by new technical derivations for the layer-wise structured compression problem, and by algorithmic insights which serve to speed up the combinatorial search (see the discussion in lines 172-182). - In addition, we also propose a very effective form of distillation for structured pruning. Practically, this leads to a very powerful framework, as we can produce the entire Pareto frontier corresponding to accuracy-vs-compression a lot more efficiently than prior methods: as shown, our method is on average _10 times more computationally efficient_ for this task than CoFi, the prior SOTA method. - As illustrated by the V100/A100 runtime discrepancy, a key differentiating feature of ZipLM is taking inference characteristics directly into account during compression. As such, ZipLM is the first method that yields state-of-the-art results for all 8 GLUE tasks, question answering (SQuAD), and text generation (WikiText), across BERT-base, BERT-large and GPT2 models, for both GPU and CPU deployment targets.
Rebuttal 1: Rebuttal: We wish to sincerely thank the reviewers and the AC for their work and for the valuable feedback. We have provided individual responses for each reviewer question. We outline answers to some common topics below: - Regarding the discussion of limitations and potential negative social impact in our work (Reviewers **27Y8** and **5zkW**), we would like to clarify that these aspects were taken into account in our original submission. Following the NeurIPS 2023 Call for Papers, we have addressed them in a dedicated section called '*Appendix H: Broader Impact and Limitations*,' where we delve specifically into the limitations and potential negative social impact of our research. - Similarly, ablation studies on various components of our method were already present in ‘*Appendix B: Ablation Studies*’, which isolates the impact of, e.g., distillation on the final accuracy of the model. In addition, we have performed the following additional experiments, which further address the reviewer concerns: - To address the question of Reviewer **mFy7**, we have run ZipLM on the remaining 4 GLUE tasks. The results are illustrated in Figure 1 of the PDF response and show the same trends as the original tasks. Specifically, ZipLM provides significant gains over prior methods, especially at higher compression rates. The results also showcase the fact that our method is significantly more computationally efficient than prior work: we were able to obtain the full trade-off for each task in a single execution, whereas prior work would need a separate execution for each target, requiring on average _10x more computation_. - To address the questions regarding differences in speedup on different GPUs and using different metrics (Reviewers **mFy7** and **5zkW**), we have provided a detailed analysis of how these differences arise, and how pruning for size compares with pruning directly for speedup using our method (see also Figure 2 in the PDF response). The results show that pruning directly for speed is critical to obtaining good practical performance at the same accuracy level. The full answers are provided in the individual responses. - We also provide an ablation with respect to the number of samples, showing that our method is extremely stable with respect to sample complexity. Specifically, we can outperform the prior SOTA method of Kwon et al. (which uses 2048 samples by default) starting at just 32 samples. We thank the reviewers again for their feedback, and look forward to the discussion. Pdf: /pdf/33a50c655c04142173d852e54ce58ad547069f52.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper proposes a structured compression method to optimize inference efficiency for language models. The proposed method uses the accuracy-efficiency trade-off under specific inference objectives as the importance measure of model components (attention heads, MLP neurons, entire FC/attention blocks). Extensive evaluation results are provided to demonstrate the effectiveness of the proposed method. Strengths: I appreciate the thoroughness of the evaluation, in which the authors consider encoder/decoder and decoder-only models, retraining compression and zero-shot compression, throughput and latency objectives, etc. The evaluation provides a better understanding of the proposed method. Weaknesses: This paper focuses on improving the language model's inference efficiency. Other than distillation and pruning, quantization is also a popular direction [1], which is not discussed/compared. Quantization generally reduces the model size. At the same time, a reduction in model size would require fewer GPUs for large-model inference, leading to latency speedups. Further, there is a line of work on dynamic sparsity [2] that improves inference latency, which is also not discussed/compared. [1] Frantar, Elias, et al. "GPTQ: Accurate post-training quantization for generative pre-trained transformers." arXiv preprint arXiv:2210.17323 (2022). [2] Liu, Zichang, et al. "Deja vu: Contextual sparsity for efficient LLMs at inference time." (2023). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Table 1 shows a significant drop in PPL compared to the original GPT2, even at the smallest speed-up. To better understand the trade-off, can the authors comment on at what speedup we can expect no loss in performance? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors didn't discuss limitations or negative social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Question 1: Discussion of alternative approaches (quantization, Deja Vu)** Please note that quantization is complementary to both distillation and pruning; in practice, it is applied in conjunction with these two. Thus, rather than comparing pruned and quantized models, one can _combine_ these compression techniques to obtain compound improvements. This was our approach in this submission, where _we did apply quantization_ to ZipLM models. Specifically, in the “CPU as an LLM-inference environment” paragraph in section “5 Discussion and Extensions”, we describe how we combine ZipLM pruning with quantization. We used quantization-aware training (QAT) which, instead of quantizing only weights (like GPTQ), quantizes both weights and activations, thus enabling direct deployment of LLMs on edge devices as well as computational speedups for arbitrary batch sizes and numbers of input tokens (GPTQ only leads to speedups for generative inference at very low batch size); a generic illustration of this mechanism is sketched at the end of this response. Deja Vu leverages dynamic forms of sparsity, which appear at runtime, such as activation sparsity. Thus, Deja Vu is also complementary to ZipLM, which induces and leverages static structured sparsity. Further, while promising, Deja Vu comes with a few key limitations: (1) As of yet, Deja Vu has only been shown to work on a few very specific models (OPT-30B/66B/175B and BLOOM-175B), which are extremely large and not very efficient to start with. (2) Deja Vu requires complex custom CUDA kernels to produce runtime speedups, which are optimized for one-token-at-a-time inference, while the speedups quickly diminish for larger batch sizes or non-generative applications. The structured pruning approach we adopt is much more general; we have shown it to be applicable to both batch-prediction (throughput-constrained) and text-generation (latency-constrained) use cases. In addition to text-generation (GPT) use cases, our work also shows viability on various (non-generative) downstream tasks such as text classification, question answering, and spam detection, for which Deja Vu currently _does not yield speedups_. In summary, both quantization and Deja Vu (contextual sparsity) are complementary to ZipLM. We have already shown compatibility with quantization in our submission (Section 5), and plan to investigate compatibility with contextual sparsity in future work: in particular, we plan to investigate whether ZipLM structured sparsity can induce significant additional contextual sparsity. We will add a clarifying discussion of both approaches in the next version. On a technical note: the Deja Vu paper was not publicly available at the time of the NeurIPS submission deadline, and therefore we would not have been able to cite or discuss it. **Question 2: PPL difference relative to the original GPT2** GPT2 was trained by OpenAI on a much larger (> 10x) closed-source dataset, for significantly longer than both DistilGPT2 and ZipGPT2. In contrast, DistilGPT2 and our ZipGPT2 models are trained on the same open-source dataset, for a much shorter period of time. Therefore, the only possible direct comparison in this setup is between DistilGPT2 and ZipGPT2, which is clearly in favor of our approach: relative to DistilGPT2, we provide higher accuracy at the same speedup, and higher speedup at the same accuracy. In fact, to support these claims we pretrain a GPT2 model from scratch on the open-source dataset used by DistilGPT2 and ZipLM.
After evaluating the resulting model in the same setup as the other two models, we observe a perplexity of 38.5 (the closed-source-trained GPT2 achieves 28.5). This means that when trained on the same data and under a similar computational budget, DistilGPT2 and ZipLM models do exhibit performance competitive with the uncompressed GPT2 model. **Question 3: Limitations and social impact** Please note that, following the NeurIPS23 submission guidelines, we have provided a discussion of limitations and impact in 'Appendix H: Broader Impact and Limitations'. --- Rebuttal Comment 1.1: Title: Rebuttal Response Comment: Thanks for the pointer to the quantization experiment and the additional perplexity of GPT2 trained under the same setup. A discussion of LLM quantization would be nice to include in Related Work, aside from distillation and pruning. --- Reply to Comment 1.1.1: Comment: Thank you for the useful suggestion; we will include a discussion of quantization as well as contextual sparsity (Deja Vu) in our Related Work section in the next version of the paper. Please let us know whether you have any additional concerns or questions that we can address during the discussion period.
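Addendum to Question 1 above: as generic background on the QAT mechanism referred to, the following minimal sketch (standard symmetric fake quantization with a straight-through estimator; not our exact training setup) quantizes both weights and activations:

```python
# Generic QAT sketch: 8-bit symmetric "fake" quantization of both weights
# and activations, with a straight-through estimator so rounding does not
# block gradients during fine-tuning.
import torch

def fake_quant(x, num_bits=8):
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax, qmax) * scale
    return x + (q - x).detach()  # forward: quantized; backward: identity

class QuantLinear(torch.nn.Linear):
    def forward(self, x):
        # Quantize the activations and the weights, then apply the layer.
        return torch.nn.functional.linear(
            fake_quant(x), fake_quant(self.weight), self.bias)
```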
null
null
null
null
null
null
3D Copy-Paste: Physically Plausible Object Insertion for Monocular 3D Detection
Accept (poster)
Summary: This paper proposes a data augmentation approach to assist the training of monocular 3D detectors by inserting 3D objects into indoor scenes in a physically plausible manner. Specifically, it addresses two main challenges in the pipeline: 1) where and how to insert the objects; 2) which illumination on the object makes the rendering photorealistic. Experiments validate that the data generated by the overall pipeline can enhance state-of-the-art monocular 3D detectors. Detailed ablation studies further provide insights regarding which aspects of the proposed method are important. Strengths: - The basic idea is easy to follow and the illustration figures are clear. - The overall pipeline is proven effective for enhancing downstream 3D perception systems. - The pipeline is systematic and comprehensive, covering most aspects of inserting 3D objects into scenes and generating image data for training monocular detectors. - The key insights regarding the three challenges in the introduction and the two critical considerations (where and how & illumination) in the methodology are accurate. - The proposed method achieves a new state of the art on the SUN RGB-D benchmark and has detailed ablation studies revealing which aspects are most essential in the overall pipeline. (For example, the analysis in Table 4 is interesting.) Weaknesses: - (Related work) The related work section has several inaccurate statements: for example, MV3D is a multi-view method incorporating both LiDAR-based point clouds and images, and VoxelNet is a LiDAR-only method, so neither should be discussed in the monocular 3D detection section. In contrast, many works on monocular 3D detection in driving scenarios are missing, such as 3DOP[1], MLFusion[2], M3D-RPN[3], MonoDIS[4], Pseudo-LiDAR[5], FCOS3D[6], SMOKE[7], RTM3D[8], PGD[9], CaDDN[10], etc. There is also some missing literature on 3D data augmentation, such as MoCa[11], GeoAug[12], etc. - (Methodology) The overall pipeline is the main contribution of this paper. From another perspective, the most important part is combining different "existing" techniques and making them finally work to produce photorealistic images after inserting 3D objects. The key flaw here is that most of the techniques used in each stage are not newly proposed in this work, making this work more of an engineering one. (Although I admit that the overall pipeline is still a good contribution to the community.) - (Experiments) The proposed method is not limited to any method or dataset, but is only tested on ImVoxelNet and SUN RGB-D. Although this demonstrates the basic effectiveness, it would be much more convincing if the authors could provide more ablation results on other baselines and datasets.
[1] 3D Object Proposals for Accurate Object Class Detection [2] Multi-Level Fusion Based 3D Object Detection from Monocular Images [3] M3D-RPN: Monocular 3D Region Proposal Network for Object Detection [4] Disentangling Monocular 3D Object Detection [5] Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving [6] FCOS3D: Fully Convolutional One-Stage Monocular 3D Object Detection [7] SMOKE: Single-Stage Monocular 3D Object Detection via Keypoint Estimation [8] RTM3D: Real-time Monocular 3D Detection from Object Keypoints for Autonomous Driving [9] Probabilistic and Geometric Depth: Detecting Objects in Perspective [10] Categorical Depth Distribution Network for Monocular 3D Object Detection [11] Exploring Data Augmentation for Multi-Modality 3D Object Detection [12] Exploring Geometric Consistency for Monocular 3D Object Detection Technical Quality: 3 good Clarity: 2 fair Questions for Authors: None. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The author discusses the limitations and social impacts in the supplementary material. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and comments! Please see our response below. - **Related work**: Thanks for your suggestions and comments! We will modify the statements and add all the provided related papers. - **Methodology**: To conduct 3D object insertion for data augmentation, traditional methods [1] require manually locating support planes, designing poses, estimating lighting, etc., which is hard to scale up. Our method is the first automated 3D insertion pipeline for complex indoor scenes, enabling large-scale 3D insertion for data augmentation. To allow a fully automated pipeline, we make the following technical contributions: (1) To avoid collisions, after plane detection, we propose a constrained insertion parameter search method (Algorithm 1) to guarantee a physically plausible inserted position, pose, and size; we also include an efficient collision check method to address the time-consuming search (an illustrative sketch is given at the end of this thread). (2) We conduct environment map transformation and refinement to add more accurate illumination on inserted objects. To the best of our knowledge, our work is the first to showcase that physically plausible 3D object insertion can serve as an effective generative data augmentation technique for indoor scenes, leading to state-of-the-art performance in discriminative downstream tasks such as monocular 3D object detection, opening up new avenues for research and practical applications. - **More experiments on other models and datasets**: **For more monocular 3D object detection methods**, we conducted experiments on Implicit3DUnderstanding (Im3D [2]) in the supplementary materials; the results are in supplementary Table S1. Using our method (Im3D + 3D Copy-Paste) improves the mAP_0.25 from 42.13 (Im3D) to 43.34. **For more datasets**, we extend our method to the ScanNet [3] dataset. Here are the detailed settings and results on ScanNet: [*Adapting to the monocular setting*] ScanNet contains 1,201 videos (scenes) in the training set and 312 videos (scenes) in the validation set. For monocular 3D object detection, we use one RGB-D image per video, i.e., 1,201 RGB-D images for training and 312 for validation (test). We calculate the ground truth 3D bounding box label for each of the views we use from the provided video (scene) level labels (because some objects in the scene may not be visible in our monocular view). [*Training and test*] For the baseline, we train an ImVoxelNet monocular 3D object detection model on the training set and test on the validation set. For our method, there are 8 categories (sofa, bookshelf, chair, table, bed, desk, toilet, bathtub) among the 18 classes of ScanNet that overlap with our collected Objaverse data (main paper Table 1). We use our 3D Copy-Paste to augment the training set and train an ImVoxelNet. All the training parameters are the same as for training on the SUN RGB-D dataset. We will release all the code. We show the results on the average accuracy of the 8 overlapping classes (AP_0.25) in the table below. Our 3D Copy-Paste improves ImVoxelNet by 2.8% mAP. |ScanNet AP_0.25 | Average (mAP)|bed |chair|sofa|table|bookshelf|desk|bathtub|toilet| |-----------------|------|------|----|----|----|----|----|----|----| | ImVoxelNet |14.1 |25.7|7.9 |**13.2**|7.8 |4.2 |20.5|22.1 |**11.5** | | ImVoxelNet+3D Copy-Paste|**16.9** |**27.7**|**12.7** |10.0|**10.8** |**9.2** |**26.2**|**29.2** |9.0 | **References** [1] Debevec, P., 2008. Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography.
In ACM SIGGRAPH 2008 classes (pp. 1-10). [2] Zhang, C., Cui, Z., Zhang, Y., Zeng, B., Pollefeys, M. and Liu, S., 2021. Holistic 3D scene understanding from a single image with implicit representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8833-8842). [3] Dai, A., Chang, A.X., Savva, M., Halber, M., Funkhouser, T. and Nießner, M., 2017. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 5828-5839). --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: I acknowledge that I have read the authors' rebuttal and the other reviews. Thanks for your rebuttal; I feel most of my concerns are addressed. Given my impression that this paper presents solid work but may have relatively small technical contributions, I will keep my score in the final decision but support its acceptance considering its value to this community. --- Reply to Comment 1.1.1: Title: Thank you for updating your review! Comment: Thank you for your feedback! We will incorporate the suggested changes in the revised paper.
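Addendum to the Methodology response above: a deliberately simplified sketch of the constrained insertion parameter search with collision checking (hypothetical helper names; axis-aligned bounding boxes only; not the paper's full Algorithm 1):

```python
# Toy constrained insertion search: sample size, position, and pose on a
# detected support plane; accept the first candidate whose bounding box
# does not collide with any existing scene object.
import random

def boxes_overlap(a, b):
    """Axis-aligned 3D boxes as (xmin, ymin, zmin, xmax, ymax, zmax)."""
    return all(a[i] < b[i + 3] and b[i] < a[i + 3] for i in range(3))

def search_insertion(plane, scene_boxes, size_range, max_trials=100):
    for _ in range(max_trials):
        s = random.uniform(*size_range)                  # plausible half-extent
        x = random.uniform(plane["xmin"] + s, plane["xmax"] - s)
        z = random.uniform(plane["zmin"] + s, plane["zmax"] - s)
        yaw = random.uniform(0.0, 360.0)                 # pose around up-axis
        # Toy bounding box sitting on the plane (height set to 2*s here).
        box = (x - s, plane["y"], z - s, x + s, plane["y"] + 2 * s, z + s)
        if not any(boxes_overlap(box, b) for b in scene_boxes):
            return {"position": (x, plane["y"], z), "yaw": yaw, "size": s}
    return None  # no free spot: shrink the size range or try another plane
```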
Summary: This work introduces 3D Copy-Paste, a physically plausible indoor object insertion technique for automatically generating large-scale annotated 3D objects. The approach ensures the plausibility of the objects’ physical location, size, pose, and illumination within the scene. Using 3D Copy-Paste as augmentation, a better monocular 3D detector can be trained. Experiments are demonstrated on the SUN RGB-D dataset, showing the effectiveness of the proposed object insertion method in improving 3D detection. Strengths: 1. The method is clearly explained. 2. The proposed object insertion method can help create annotated data for free. 3. The inserted objects are physically plausible in location, size, and illumination. Moreover, the pipeline is fully automated. 4. Solid experiments are conducted on SUN RGB-D based on a SOTA detector (ImVoxelNet), and the detection AP is improved using the proposed method as augmentation (40.96 --> 43.79). Weaknesses: This work may somewhat lack novelty. I'm not familiar with the indoor-scene object insertion task, but for outdoor scenes there are quite a lot of similar works [1-4]. It would be better to include these works in the literature review. These works share similar ideas with the authors: they insert objects with physically plausible locations and illumination, and some also test on downstream tasks and demonstrate the effectiveness of using actor insertion as data augmentation. Besides, I'm not fully convinced by the claim that indoor scenes are "more challenging" in L51. The layout, illumination, and complexity of outdoor scenes are apparently greater. [1] Augmented Reality Meets Computer Vision: Efficient Data Generation for Urban Driving Scenes [2] GeoSim: Realistic Video Simulation via Geometry-Aware Composition for Self-Driving [3] Neural Light Field Estimation for Street Scenes with Differentiable Virtual Object Insertion [4] 3D Data Augmentation for Driving Scenes on Camera Technical Quality: 3 good Clarity: 3 good Questions for Authors: I'm not familiar with ImVoxelNet; I just quickly went through it to review this paper. ImVoxelNet reported AP@0.15. Why is AP@0.25 reported in this paper? I will raise my rating if the authors can apply this approach to outdoor scenes like KITTI and nuScenes and improve the ImVoxelNet results (ImVoxelNet has already been tested on these datasets). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Apparently, the authors did not go over https://neurips.cc/public/guides/PaperChecklist. Not many experiments and implementation details are provided, and Licenses and Assets are not clearly described either. But I don't penalize this in the rating. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and comments! Please see our response below. - **Indoor/outdoor difference and novelty**: Different from outdoor scenes, the challenges in indoor scenes include (1) complex spatial layouts (in particular, cluttered backgrounds and limited object-placeable space) that necessitate a carefully designed method for automated object placement (physically plausible position, size, and pose), and (2) complex lighting effects such as soft shadows, inter-reflections, and long-range light source dependency that demand careful handling of lighting for harmonious object insertion. To deal with the above challenges, we make the following key technical contributions: (1) To avoid collisions, after plane detection, we propose a constrained insertion parameter search (Algorithm 1) to guarantee a physically plausible inserted position, pose, and size; we also include an efficient collision check method to address the time-consuming search. (2) We conduct environment map transformation and refinement to add more accurate illumination on inserted objects (one simple component of this transformation is sketched at the end of this thread). To the best of our knowledge, our work is the first automated 3D insertion pipeline for complex indoor scenes, enabling large-scale 3D insertion for data augmentation. It also showcases that physically plausible 3D object insertion can serve as an effective generative data augmentation technique for indoor scenes, leading to state-of-the-art performance in discriminative downstream tasks such as indoor monocular 3D object detection, opening up new avenues for research and practical applications. Thank you for your suggestion; we will modify the manuscript as follows: - we already cite your ref [3] Neural Light Field Estimation but will add the other outdoor citations; - we will tone down the claim that indoor scenes are more challenging, since it may indeed be hard to defend; - we will extend our discussion of how our work compares to previous art on outdoor scenes. - **AP_0.25 and AP_0.15**: This has to do with the fact that official results are only available for a few combinations of dataset setting, object classes, and IOU threshold: the official ImVoxelNet GitHub code, when using SUN RGB-D with the “10 classes from VoteNet” setting (same as ours), also uses the 0.25 IOU threshold, with performance 40.7 (even though they use 0.15 with other datasets/classes). The authors provided an implementation in MMDetection3D [1] on GitHub, which also uses mAP 0.25 for the 10-classes-from-VoteNet setting (performance 40.96). We used the official MMDetection3D code, so we used the 0.25 IOU threshold. Per your suggestion, we also show our results with mAP 0.15 on the SUN RGB-D dataset (10 classes from VoteNet) below; our method shows consistent improvements. |SUN RGB-D | mAP_0.15| |-----------------|------| | ImVoxelNet |48.45 | | ImVoxelNet + 3D Copy-Paste|**51.16**| - **More experiments on outdoor datasets**: Thanks! In this paper, we focus on indoor scene insertion; we will treat extension to outdoor scenes as future work to explore. In the meantime, we note that our title and overall positioning of the paper make it very clear that it is currently focused on indoor scenes (i.e., we are not overselling the work). However, we added experiments on the new dataset ScanNet [2] and new models (Implicit3DUnderstanding [3], in the supplementary) as a way to show the generalization of our method.
Here are the detailed settings and results on ScanNet: [*Adapt to monocular setting*] ScanNet contains 1,201 videos (scenes) in the training set and 312 videos (scenes) in the validation set. For monocular 3D object detection, we use one RGB-D image per video, so 1,201 RGB-D images for training and 312 for validation (test). We calculate the ground-truth 3D bounding box label for each of our used views from the provided video (scene)-level labels (because some objects in the scene may not be visible in our monocular view). [*Training and test*] For the baseline, we train an ImVoxelNet monocular 3D object detection model on the training set and test on the validation set. For our method, there are 8 overlapping categories (sofa, bookshelf, chair, table, bed, desk, toilet, bathtub) between the 18 classes of ScanNet and our collected Objaverse data (main paper, Table 1). We use our 3D Copy-Paste to augment the training set and train an ImVoxelNet. All the training parameters are the same as for training on the SUN RGB-D dataset. We will release all the code and training data. We show the results on the average accuracy of the 8 overlapping classes (AP_0.25) in the table below. Our 3D Copy-Paste improves ImVoxelNet by 2.8% mAP.

|ScanNet AP_0.25 | Average (mAP)|bed |chair|sofa|table|bookshelf|desk|bathtub|toilet|
|---|--|--|--|----|----|----|----|----|----|
| ImVoxelNet |14.1 |25.7|7.9 |**13.2**|7.8 |4.2 |20.5|22.1 |**11.5** |
| ImVoxelNet+3D Copy-Paste|**16.9** |**27.7**|**12.7** |10.0|**10.8** |**9.2** |**26.2**|**29.2** |9.0 |

- **Paper checklist**: Thanks for your reminder! The limitations and broader impact discussion were in the supplementary; we also added more experiment details there. We will add more experiment and implementation details and the corresponding licenses and assets, and double-check the paper checklist.

**References**
[1] Contributors, M. (2020). MMDetection3D: OpenMMLab next-generation platform for general 3D object detection.
[2] Dai, A., Chang, A.X., Savva, M., Halber, M., Funkhouser, T. and Nießner, M., 2017. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 5828-5839).
[3] Zhang, C., Cui, Z., Zhang, Y., Zeng, B., Pollefeys, M. and Liu, S., 2021. Holistic 3D scene understanding from a single image with implicit representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8833-8842).

---
Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thank you for the response, especially on the new metric setting and results on ScanNet. After reading the other reviews and the responses from the authors, I plan to maintain my weak accept decision. Overall this work is decent without significant flaws. The reasons preventing me from raising the rating are: 1. No experiments on outdoor scenes, which was the biggest consideration for raising my score before the rebuttal. 2. I hold a similar view to 7hAU in that it may not be necessary to build such a complicated system just to improve the mAP a bit, though I understand that it's hard to achieve such improvements. One suggestion for the work is to also take realism into consideration, perhaps adding some non-paired metrics like FID and human evaluation. Best

---
Reply to Comment 1.1.1: Title: Thank you for your feedback! Comment: Thank you for your feedback! We agree with the importance of experimenting with outdoor scenes and will prioritize it in our future work. We agree with the study on realism.
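[Editor's note] The monocular adaptation above derives per-view ground truth from ScanNet's scene-level labels. As a hedged illustration of one plausible way to do this, the sketch below keeps only boxes whose centers project inside the chosen view; it ignores occlusion, assumes a 3x3 intrinsics matrix `K` and a 4x4 `world_to_cam` extrinsics matrix, and is not the authors' code:

```python
import numpy as np

def visible_boxes(box_centers_world, K, world_to_cam, img_w, img_h):
    """Boolean mask over scene-level boxes: center is in front of the
    camera and projects inside the image (a crude visibility proxy)."""
    centers = np.asarray(box_centers_world)                 # (N, 3)
    homo = np.concatenate([centers, np.ones((len(centers), 1))], axis=1)
    cam = (world_to_cam @ homo.T).T[:, :3]                  # camera coordinates
    in_front = cam[:, 2] > 0                                # positive depth
    pix = (K @ cam.T).T
    pix = pix[:, :2] / np.clip(pix[:, 2:3], 1e-6, None)     # perspective divide
    in_img = (pix[:, 0] >= 0) & (pix[:, 0] < img_w) & \
             (pix[:, 1] >= 0) & (pix[:, 1] < img_h)
    return in_front & in_img
```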
In fact, our main paper's Table 3 presents preliminary results on this matter. As lighting and shadow play pivotal roles in photorealism, we investigated the effects of various lighting scenarios and the presence or absence of shadows on monocular 3D object detection outcomes. Our findings indicate that a more photorealistic insertion, characterized by the use of physically plausible lighting and the inclusion of shadows, tends to enhance downstream detection performance.
Summary: This paper presents a novel 3D augmentation method to increase the variety of 3D scenes together with the corresponding 2D images. The augmentation method focuses on addressing the data-hungry nature of monocular 3D detection. The strategy is intuitive: insert 3D synthetic object models into real 3D scenes to augment the scene data, while ensuring consistent illumination and shading and a reasonable layout without any object collisions. The pipeline therefore consists of three parts that answer three questions: 1. where and how to place the object in a 3D scene; 2. what the illumination is and how to add it onto the object; 3. how to use the augmented dataset for network training. In my view, the major contribution of this paper is the pipeline or concept: leveraging inverse rendering and re-rendering to augment 3D data and its corresponding 2D image for monocular 3D detection. Each module used in the pipeline already exists. The experiments in this paper are pretty extensive, and the paper is well written.

Strengths: As discussed above, the major strength of this paper is its concept and the tailored pipeline.
1. This paper proposes a new strategy for 2D-3D data augmentation using neural inverse rendering and re-rendering.
2. The tailored pipeline successfully verifies the idea that such a data augmentation strategy can improve monocular 3D detection by a large margin.

Weaknesses: In my view, the weakness is on the method contribution side:
1. Neural inverse rendering to support object editing and augmentation is not novel. There are many works in image-based rendering and decomposition that can support inserting new objects into a 3D scene. I understand that the contribution here is to use it for data augmentation to support monocular 3D detection, but the methodology itself is not novel.
2. Many modules are from existing works (e.g., the inverse rendering framework, plane extraction), which makes this paper read more like a novel combination that improves an existing task.
I like the insight of leveraging inverse rendering to do data augmentation for monocular 3D detection, and the performance is quite good. It would be more convincing if it also worked for other datasets, e.g., ARKitScenes, because in my view the experimental performance is the other major contribution.

Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. I wonder how the object class category for insertion is chosen. Is it randomly sampled from an object class set, or chosen manually? Indoor scenes have strong scene context, e.g., it is not very likely to insert a "bed" in a "toilet". How do you account for such consistency?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors discussed the limitations and societal impact in the supplemental.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and comments! Please see our response below.

- **Methodology contribution**: To conduct 3D object insertion for data augmentation, traditional methods [1] require manually locating support planes, designing poses, estimating lighting, etc., which is hard to scale up. Our method is the first automated 3D insertion pipeline for complex indoor scenes, enabling large-scale 3D insertion for data augmentation. To allow a fully automated pipeline, we make the following technical contributions: (1) to avoid collisions, after plane detection we propose a constrained insertion parameter search method (Algorithm 1) to guarantee a physically plausible inserted position, pose, and size, together with an efficient collision check method to address the time cost; (2) we conduct environment map transformation and refinement to add more accurate illumination to inserted objects. To the best of our knowledge, our work is the first to showcase that physically plausible 3D object insertion can serve as an effective generative data augmentation technique for indoor scenes, leading to state-of-the-art performance in discriminative downstream tasks such as monocular 3D object detection, opening up new avenues for research and practical applications.
- **Experiments on other datasets**: Thank you for your suggestion. We conducted new experiments on the ScanNet [2] dataset. Here are the detailed settings and results: [*Adapt to monocular setting*] ScanNet contains 1,201 videos (scenes) in the training set and 312 videos (scenes) in the validation set. For monocular 3D object detection, we use one RGB-D image per video, so 1,201 RGB-D images for training and 312 for validation (test). We calculate the ground-truth 3D bounding box label for each of our used views from the provided video (scene)-level labels (because some objects in the scene may not be visible in our monocular view). [*Training and test*] For the baseline, we train an ImVoxelNet monocular 3D object detection model on the training set and test on the validation set. For our method, there are 8 overlapping categories (sofa, bookshelf, chair, table, bed, desk, toilet, bathtub) between the 18 classes of ScanNet and our collected Objaverse data (main paper, Table 1). We use our 3D Copy-Paste to augment the training set and train an ImVoxelNet. All the training parameters are the same as for training on the SUN RGB-D dataset. We will release all the code. We show the results on the average accuracy of the 8 overlapping classes (AP_0.25) in the table below. Our 3D Copy-Paste improves ImVoxelNet by 2.8% mAP.

|ScanNet AP_0.25 | Average (mAP)|bed |chair|sofa|table|bookshelf|desk|bathtub|toilet|
|-----------------|------|------|----|----|----|----|----|----|----|
| ImVoxelNet |14.1 |25.7|7.9 |**13.2**|7.8 |4.2 |20.5|22.1 |**11.5** |
| ImVoxelNet+3D Copy-Paste|**16.9** |**27.7**|**12.7** |10.0|**10.8** |**9.2** |**26.2**|**29.2** |9.0 |

- **Inserted object class selection**: Good point! The main results in Tables 2 and 3 use a random sample from the object class set, taken uniformly and without consideration of context. However, we also explore the influence of global context on detection performance in the main paper, Table 4. For this experiment, we only insert object categories already existing in the current room to make the insertion more globally semantically plausible (e.g., to avoid inserting toilets into rooms other than bathrooms).
For instance, if the room contains a table, chair, and sofa, we only consider inserting new objects that belong to these 3 categories. The results (Table 4) show that considering the global semantic meaning (43.75) is on par with the random category selection setting (43.79). One potential reason is that the downstream detectors (CNN-based models) may rely more on local information, so they are not sensitive to global semantics. Different from point cloud-based 3D detection, where context information is important as RGB information is often discarded, in monocular 3D object detection the input is an RGB image, so appearance per se may be the most important cue.

**References**
[1] Debevec, P., 2008. Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography. In ACM SIGGRAPH 2008 classes (pp. 1-10).
[2] Dai, A., Chang, A.X., Savva, M., Halber, M., Funkhouser, T. and Nießner, M., 2017. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 5828-5839).

---
Rebuttal Comment 1.1: Comment: Thanks to the authors for their detailed response. After thoroughly reading it, I would like to raise my score to "weak accept", and I would strongly suggest the authors include the new experiments in the main paper. Regarding your second response on "Inserted object class selection", I agree that the 2D detection method relies on CNNs that prioritize local information and is view-dependent, but it is hard to judge whether the point-based method relies on global clues more than local ones; it depends on what backbone you use. I highlight the class selection here since I would like to know whether the augmented dataset shares the same class distribution as the original, or whether it improves/balances the original class distribution to make your method work for long-tail classes.

---
Reply to Comment 1.1.1: Title: Thank you for your response! Comment: Thank you for your response! Yes, we will include the new experiments in the revised paper.
- **Backbone influence**: Yes, we agree that the backbone is important for the point-based method.
- **Class selection**: For the insertion, yes, we tried both in the main paper's Table 4, where "Follow global context" roughly preserves the original distribution and "Not follow global context" may balance the original class distribution. We did not observe significant differences.
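[Editor's note] A hedged sketch of the two category-selection settings compared in Table 4, i.e., sampling only from categories already present in the room ("follow global context") versus sampling uniformly over all available asset categories; all names are illustrative, not the authors' code:

```python
import random

def sample_insert_category(scene_categories, asset_categories,
                           follow_context=True, seed=0):
    """Pick one category to insert. With follow_context=True, the pool is
    restricted to categories already present in the room."""
    rng = random.Random(seed)
    pool = (sorted(set(scene_categories) & set(asset_categories))
            if follow_context else sorted(set(asset_categories)))
    return rng.choice(pool) if pool else None
```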
Summary: This paper addresses the scarcity of large-scale annotated datasets, which hinders rapid deployment of monocular 3D object detection models. A physically plausible indoor 3D object insertion approach is proposed to automatically copy and paste virtual objects into real scenes. The resulting objects have 3D bounding boxes with plausible physical locations and illumination, augmenting existing indoor scene datasets. The proposed data augmentation method achieves state-of-the-art performance in monocular 3D object detection. Most importantly, a candidate selection process is applied along with a spatially varying illumination procedure from an existing method to ensure the plausibility of the objects' physical location, size, pose, and illumination within the scene. From the results, the location and illumination of the inserted objects appear to have a significant impact on the performance of the downstream model.

Strengths: This paper's proposed approach is impressive. It automatically inserts virtual objects from Objaverse into real scenes, addressing the issue of limited annotated datasets in computer vision. The method ensures the objects' physical location, size, pose, and illumination are plausible, resulting in augmented indoor scene datasets. The use of plane selection, discarding objects with collisions using a simplified assumption, constrained parameter search for insertion, and spatially varying lighting estimation constitutes a well-thought-out and carefully designed process. The improved performance of the resulting augmentations for monocular 3D object detection demonstrates the effectiveness of the method. Furthermore, the paper is very well written and easy to follow.

Weaknesses: The paper lacks a comparison or discussion in relation to Common 3D Corruptions (CVPR 2022; not cited). Even though Common 3D Corruptions only evaluates 2D downstream tasks, it could still be utilized and compared for 3D object detection to demonstrate how important 3D- and illumination-aware, physically grounded insertions are. Another weakness is that the paper only evaluates one task. It would be beneficial to assess the impact of 3D-grounded and illumination-aware object insertion on other 3D or 2D tasks. Could it also enhance 2D recognition tasks such as segmentation or detection? Including these aspects would reinforce the paper's findings. The paper also appears to lack a basic 2D copy-paste baseline. It would be interesting to see how much this technique could potentially improve 3D tasks (Q: is the random insert in the comparisons a 2D insertion or a 3D insertion?). Additionally, the paper is missing ablation results for changes in detection performance for each individual object category.

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have a few questions that I hope the authors can help with.
- First, I'm curious about what happens with out-of-distribution insertions, such as from the NYU dataset. Would it be helpful to include more datasets?
- Second, I'm wondering how long it takes to render one object, taking into account the time needed to search for the insertion point, correct the appearance, and other factors.
- Lastly, I've noticed that most of the inserted objects are generally diffuse. I'm curious how the object insertion process is affected when inserting shiny or specular objects.
Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Missing comparison with Common 3D Corruptions as data augmentation, and missing evaluation on 2D recognition tasks and other 3D downstream tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and comments! Please see our response below.

- **Comparison to Common 3D Corruptions**: 3D Common Corruptions (3DCC) uses 3D information to generate real-world corruptions, which can evaluate model robustness and be used as data augmentation for model training. Our method's contribution may be orthogonal to 3DCC: 3DCC conducts scene-level global augmentation and does not introduce new object content. Combining our method with 3DCC may achieve better performance. We will cite this paper and add comparisons in the related work section.
- **Evaluation on other tasks**: That is a very good point. We focus on monocular 3D object detection because it is a challenging, fundamentally representative, and data-intensive task that involves both 3D geometry and semantic understanding, and our method can help. We also conducted experiments to show that the inserted position, size, pose, and lighting do influence downstream model performance. We extend our method to other 3D detection datasets (ScanNet [1] below) and other models (Implicit3DUnderstanding [2] in the supplementary), and we will treat extending to other tasks as future work. While we agree with the reviewer that extending to other downstream tasks is desirable, this would take more time than available during the rebuttal period; in the meantime, we note that our title and overall positioning of the paper do not oversell our results, i.e., it is clearly stated that our method is useful for monocular 3D object detection. Likewise, we will clearly note in the manuscript that this is important but considered future work beyond this paper. Here are the detailed experiments and results on ScanNet: [*Adapt to monocular setting*] ScanNet contains 1,201 videos (scenes) in the training set and 312 videos (scenes) in the validation set. Adapting it for monocular 3D object detection, we utilized one RGB-D image per video, amounting to 1,201 RGB-D images for training and 312 for validation. We calculate the ground-truth 3D bounding box label for each of our used views from the provided video (scene)-level labels (because some objects in the scene may not be visible in our monocular view). [*Training and test*] For the baseline, we train an ImVoxelNet monocular 3D object detection model on the training set and test on the validation set. For our method, there are 8 overlapping categories (sofa, bookshelf, chair, table, bed, desk, toilet, bathtub) between the 18 classes of ScanNet and our collected Objaverse data (main paper, Table 1). We use our 3D Copy-Paste to augment the training set and train an ImVoxelNet. All the training parameters are the same as for training on the SUN RGB-D dataset. We will release all the code.

Table: ScanNet experiments.
|ScanNet AP_0.25| Average|bed |chair|sofa|table|bookshelf|desk|bathtub|toilet|
|-|-|-|-|-|-|-|-|-|-|
|ImVoxelNet|14.1|25.7|7.9|**13.2**|7.8|4.2|20.5|22.1|**11.5**|
|ImVoxelNet+3D Copy-Paste|**16.9**|**27.7**|**12.7**|10.0|**10.8**|**9.2**|**26.2**|**29.2**|9.0|

For the 2D recognition task, some related works [3,4] showed that simple 2D copy-paste is already good enough to improve performance. Conducting copy-paste in 3D, however, is hard; that is one of the motivations of our work. We believe our work should also help on 2D tasks and will treat this as future work.
- **2D copy-paste baseline and individual object performance**: For 2D insertion, it is hard to obtain the 3D bounding box, which is required for downstream monocular 3D object detection.
The random insertion in the main paper's comparison is a 3D insertion in which the inserted position, size, pose, and illumination are not physically plausible, which causes a significant performance drop. Here are the detailed SUN RGB-D monocular 3D object detection results from the main paper's Table 2 for each individual object category:

|SUN RGB-D AP_0.25 | Average (mAP)|bed |chair|sofa|table|book shelf|desk|bathtub|toilet|dresser|night stand|
|--|--|---|---|--|--|--|--|--|--|--|--|
| ImVoxelNet|40.96|72.0|55.6 |53.0|41.1|**7.6**|21.5|29.6|76.7|19.0|33.4|
| ImVoxelNet+3D Copy-Paste|**43.79** |**72.6**|**57.1** |**55.1**|**41.8** |7.1|**24.1**|**40.2** |**80.7** |**22.3**|**36.9**|

- **Include more datasets**: Yes, we posit that a richer collection of objects to insert would be beneficial. However, we need full 3D models that we can insert in any pose (thus, the NYU dataset may not work, as it does not provide full 3D object models). Other 3D object datasets (e.g., OmniObject3D) could be included in future work.
- **Time cost to render one object**: Overall it takes around 5~10 seconds. Specifically, searching for the insertion position, pose, and size takes less than 0.5 seconds with 1,000 iterations. Plane detection, lighting estimation, and rendering take most of the time.
- **Inserting shiny or specular objects**: Changing the reflection property only influences the final rendering process; our physically plausible position, pose, size, and illumination are agnostic to the object's surface properties. Specular objects will show reflections of other objects or lights during the rendering process.

**References**
[1] Dai, Angela, et al. "ScanNet: Richly-annotated 3D reconstructions of indoor scenes." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.
[2] Zhang, Cheng, et al. "Holistic 3D scene understanding from a single image with implicit representation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
[3] Dwibedi, Debidatta, Ishan Misra, and Martial Hebert. "Cut, paste and learn: Surprisingly easy synthesis for instance detection." Proceedings of the IEEE International Conference on Computer Vision. 2017.
[4] Ghiasi, Golnaz, et al. "Simple copy-paste is a strong data augmentation method for instance segmentation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.

---
Rebuttal Comment 1.1: Title: Response to authors' rebuttal Comment: Thank you for your comprehensive response addressing the concerns raised in my review. I appreciate the clarifications provided and the commitment to making the suggested changes in the revised version of the paper. I look forward to seeing the updated manuscript and the enhancements you plan to incorporate based on the feedback and other reviews.

---
Reply to Comment 1.1.1: Title: Thank you for your response! Comment: Thank you for your response! We will incorporate the suggested changes in the revised paper.
Rebuttal 1: Rebuttal: ## Global response (common questions) to all reviewers

We would like to thank our reviewers, who put considerable time and thought into helping improve our paper. We are pleased that the reviewers find our paper "very well-written and easy to follow" (R-sM4b, R-mHBD, R-DWmW); our method is described as "novel" (R-DWmW), "fully automated" (R-B4nT), "impressive, well-thought-of and designed" (R-sM4b), and "systematic and comprehensive" (R-mHBD), with a "solid implementation" (R-7hAU); our experiments were commended as "extensive" (R-DWmW), "solid" (R-B4nT), backed by "supportive ablations" (R-7hAU, R-mHBD), demonstrating "effectiveness" (R-sM4b), achieving "state-of-the-art performance" (R-sM4b, R-mHBD), and notably improving "monocular 3D detection by a large margin" (R-DWmW). Please find below our summary of major changes and responses to some common questions. We will incorporate these major changes into our revised paper.

**Summary of major changes**:
1. [7hAU, DWmW, mHBD] We add new experiments on the ScanNet [1] dataset to show that 3D Copy-Paste also improves monocular 3D object detection performance on other datasets.
2. [7hAU, B4nT] We evaluate our monocular 3D object detection performance on SUN RGB-D with mAP_0.15 and show consistent improvements.
3. [sM4b] We add detailed results for each individual object category on the SUN RGB-D dataset.

**Experiments on the ScanNet dataset**: To show the generalization to other datasets, we apply our 3D Copy-Paste to ScanNet [1] and conduct monocular 3D object detection with ImVoxelNet. ScanNet is a large-scale RGB-D video dataset that isn't specifically tailored for monocular 3D object detection. Here are the detailed experiments and results (we use ScanNet v2): [*Adapt to monocular setting*] ScanNet contains 1,201 videos (scenes) in the training set and 312 videos (scenes) in the validation set. Adapting it for monocular 3D object detection, we utilized one RGB-D image per video, amounting to 1,201 RGB-D images for training and 312 for validation. We calculate the ground-truth 3D bounding box label for each of our used views from the provided video (scene)-level labels (because some objects in the scene may not be visible in our monocular view). [*Training and test*] For the baseline, we train an ImVoxelNet monocular 3D object detection model on the training set and test on the validation set. For our method, there are 8 overlapping categories (sofa, bookshelf, chair, table, bed, desk, toilet, bathtub) between the 18 classes of ScanNet and our collected Objaverse data (main paper, Table 1). We use our 3D Copy-Paste to augment the training set and train an ImVoxelNet. All the training parameters are the same as for training on the SUN RGB-D dataset. We will release all the code. We show the results on the average accuracy of the 8 overlapping classes (AP_0.25) in the table below. Our 3D Copy-Paste improves ImVoxelNet by 2.8% mAP.
|ScanNet AP_0.25 | Average (mAP)|bed |chair|sofa|table|bookshelf|desk|bathtub|toilet|
|-----------------|------|------|----|----|----|----|----|----|----|
| ImVoxelNet |14.1 |25.7|7.9 |**13.2**|7.8 |4.2 |20.5|22.1 |**11.5** |
| ImVoxelNet+3D Copy-Paste|**16.9** |**27.7**|**12.7** |10.0|**10.8** |**9.2** |**26.2**|**29.2** |9.0 |

**Experiments on other 3D detection methods**: To show the generalization of our method to other downstream monocular 3D object detection methods, in our supplementary material (Table S1) we conducted additional experiments with another monocular 3D object detection model: Implicit3DUnderstanding (Im3D [2]). Using our method (Im3D + 3D Copy-Paste) improves the mAP_0.25 from 42.13 (Im3D) to 43.34.

**Performance details of each category in the SUN RGB-D dataset**: In the table below, we show the detailed SUN RGB-D monocular 3D object detection results with ImVoxelNet from the main paper's Table 2 for each individual object category:

|SUN RGB-D AP_0.25 | Average (mAP) |bed |chair|sofa|table|book shelf|desk|bathtub|toilet|dresser|night stand|
|-----------------|------|------|----|----|----|----|----|----|----|----|----|
| ImVoxelNet |40.96 |72.0|55.6 |53.0|41.1 |**7.6** |21.5|29.6 |76.7 |19.0|33.4|
| ImVoxelNet+3D Copy-Paste|**43.79** |**72.6**|**57.1** |**55.1**|**41.8** |7.1 |**24.1**|**40.2** |**80.7** |**22.3**|**36.9**|

**References**
[1] Dai, A., Chang, A.X., Savva, M., Halber, M., Funkhouser, T. and Nießner, M., 2017. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 5828-5839).
[2] Zhang, C., Cui, Z., Zhang, Y., Zeng, B., Pollefeys, M. and Liu, S., 2021. Holistic 3D scene understanding from a single image with implicit representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8833-8842).
NeurIPS_2023_submissions_huggingface
2023
Summary: The manuscript introduces a system to create RGB-D datasets augmented with additional 3D objects (one per RGB frame). The inserted objects are placed such that they don't intersect other objects and stand on the floor, and they are rendered into the original RGB frame such that the lighting of the scene is respected. The augmented dataset can then be used to train slightly better 3D detectors, since more 3D object examples can be supervised.

Strengths: The manuscript tackles an important problem for 3D object detection research: the scarcity of supervision data and the difficulty inherent in augmenting in 3D. The three factors for physical plausibility make sense to me, and their implementation in the paper seems pretty solid judging by the example images from the generated augmented dataset. The claim that the full system of physically plausible augmentation is needed is well supported by ablations against other plausible choices (random placement, simpler light sources). The system seems sound, although I am not an expert on the most recent ways of estimating illumination.

Weaknesses: The primary weakness to my mind is the use of only a single (small!) 3D object detection dataset, SUN RGB-D; ScanNet is substantially bigger and commonly used. Who knows, maybe this method would be even more powerful on a harder dataset? The delta in mAP with the addition of the presented augmentation method is not very big (from 40.96 to 43.79). I question whether people will want to implement the full presented system for such modest gains. This limits the potential impact in my eyes and means claim (2), "significant" improvements, is not fully supported. ImVoxelNet is not a single-view detector (it is multi-view but can be used for single-view detection); in the original paper, the mAP for SUN RGB-D is actually 43.66 (I suppose that's because of the different IoU threshold of 0.15, but it raises the question of why the evaluation in this manuscript doesn't use the same one). I am not 100% clear on the way the illumination is estimated and used, as somebody who has never worked with such methods; this would make it hard for me to replicate the presented work. There are a few sentences on semantic plausibility and the fact that it didn't help, but it is not well explained how semantic plausibility was achieved. This is an important negative result that I am not sure I can trust at the current level of explanation.

Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: see weaknesses
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations such as the need for metrically scaled objects, the need for depth images, and the knowledge of the gravity direction are not explicitly discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and comments! Please see our response below.

- **Experiments on ScanNet**: We use the SUN RGB-D dataset because it offers 10,000+ monocular RGB-D images (scenes) and is densely annotated with 146,617 2D polygons and 58,657 3D bounding boxes. Many 3D object detection papers [1,2] use SUN RGB-D performance as the main result. ScanNet is a large-scale RGB-D video dataset that isn't specifically tailored for monocular 3D object detection. We appreciate the reviewer's suggestion and have conducted new experiments on ScanNet. Experimental settings are described in detail in the global response to all reviewers (above). We summarize them here: [*Adapt to monocular setting*] We utilized one RGB-D image per video: 1,201 for training and 312 for validation. We calculate the ground-truth 3D bounding box label for each of our used views from the provided video (scene)-level labels. [*Training and test*] There are 8 overlapping categories between ScanNet and our collected Objaverse data. We use our 3D Copy-Paste to augment the training set and train an ImVoxelNet. We show the average accuracy of the 8 overlapping classes below. Our method improves ImVoxelNet by 2.8% mAP.

|ScanNet AP_0.25| Average (mAP)|bed|chair|sofa|table|bookshelf|desk|bathtub|toilet|
|-|-|-|-|-|-|-|-|-|-|
|ImVoxelNet|14.1|25.7|7.9|**13.2**|7.8|4.2|20.5|22.1|**11.5**|
|ImVoxelNet+3D Copy-Paste|**16.9**|**27.7**|**12.7**|10.0|**10.8**|**9.2**|**26.2**|**29.2**|9.0|

- **Improvement significance and implementation ease**: Monocular 3D object detection is a challenging task that requires inferring 3D information from only a single RGB image, and improving from 40.96 to 43.79 can be considered significant, as Reviewer DWmW also pointed out. Checking the performance leaderboard, it has been hard to push mAP beyond 40 (e.g., ImVoxelNet remains the current SOTA on Papers with Code for SUN RGB-D even though it is now 2 years old). Given our 2.83% improvement over that, our method sets a new state of the art for indoor monocular 3D object detection. Beyond the numerical improvement in mAP, our work provides a number of conceptual advances in the field, which we believe are important as well. Firstly, the task of monocular 3D object detection is notably data-intensive, and labeling 3D boxes can be both time-consuming and costly. Our approach addresses this challenge through data augmentation, introducing an automatic pipeline that remains model-agnostic. Since data augmentation is a one-off effort, it can potentially enhance various models. To allow easy usage, we will release our code, model, and generated data. Secondly, through comprehensive experiments, we demonstrate the efficacy of our method across diverse models, such as ImVoxelNet and Implicit3D (detailed in the supplementary), and on different datasets, including SUN RGB-D and ScanNet. Importantly, our findings highlight the potential of 3D data augmentation to improve the performance of 3D perception tasks, opening up new avenues for research and practical applications.
- **mAP 0.25 evaluation**: This has to do with the fact that official results are only available for a few combinations of dataset/object classes/IoU threshold: the official ImVoxelNet GitHub code, when using SUN RGB-D in the "10 classes from VoteNet" setting (same as ours), also uses the 0.25 IoU threshold with performance 40.7 (even though it uses 0.15 with other datasets/classes).
The authors provided an implementation in the MMDetection3D GitHub [3], which also uses mAP 0.25 for the 10-classes-from-VoteNet setting (performance 40.96). We used the official code of MMDetection3D, so we used mAP 0.25. As you suggested, we also show our results with mAP 0.15 on the SUN RGB-D dataset below; we will incorporate this in our revised paper:

|SUN RGB-D|mAP_0.15|
|-|-|
|ImVoxelNet|48.45|
|ImVoxelNet + 3D Copy-Paste|**51.16**|

- **Illumination estimation and reproducibility**: Illumination estimation is an important and challenging task; our main paper's Sec. 2.3 (L108~L119) lists some representative works in this domain. We will release all our code and data upon acceptance for reproducibility.
- **Semantic plausibility influence**: For this experiment, to make the insertion more globally semantically plausible (e.g., to avoid inserting a toilet into rooms other than bathrooms), we only insert object categories that already exist in the current room. For instance, if the room contains a table and a chair, we only consider inserting new objects that belong to these 2 categories. The results (Table 4) show that considering the global semantic meaning (43.75) is on par with the random category selection setting (43.79). One potential reason is that the downstream detectors (CNN-based models) may rely more on local information, so they are not sensitive to global semantics. Different from point cloud-based 3D detection, where context information is important as RGB information is often discarded, in monocular 3D object detection the input is an RGB image, so appearance per se may be the most important cue.
- **Requirement of depth, scale, and gravity**: The metrically scaled objects, depth, and gravity direction can either be provided by the dataset or estimated by off-the-shelf methods, e.g., metric depth estimation from ZoeDepth [4] and gravity direction estimation from ground segmentation with plane fitting. We will add this discussion to the limitations section (currently, limitations are in the supplementary).

**References**
[1] Huang et al. "PerspectiveNet: 3D object detection from a single RGB image via perspective points." NeurIPS 2019.
[2] Zhang et al. "Holistic 3D scene understanding from a single image with implicit representation." CVPR 2021.
[3] MMDetection3D: OpenMMLab next-generation platform for general 3D object detection.
[4] "ZoeDepth: Zero-shot transfer by combining relative and metric depth." arXiv 2023.

---
Rebuttal Comment 1.1: Title: response Comment: Thank you for addressing my questions and concerns. "To allow easy usage, we will release our code, model, and generated data." will be critical to enable others to benefit from this paper. Based on this promise, the clarification around the mAP improvement, and the other reviewers' responses, I am happy to increase my rating to weak accept.

---
Reply to Comment 1.1.1: Title: Thank you for your response! Comment: Thank you for updating your review!
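[Editor's note] To make the recurring AP_0.25 vs. AP_0.15 discussion concrete: a detection counts as a true positive when its 3D IoU with a ground-truth box meets the threshold, so AP_0.15 is strictly more permissive than AP_0.25. Below is a hedged sketch of 3D IoU for axis-aligned boxes; the benchmark's actual metric uses oriented boxes, so this is only illustrative:

```python
import numpy as np

def iou_3d_axis_aligned(a, b):
    """IoU of two axis-aligned 3D boxes given as (min_xyz, max_xyz) pairs."""
    lo = np.maximum(a[0], b[0])
    hi = np.minimum(a[1], b[1])
    inter = float(np.prod(np.clip(hi - lo, 0.0, None)))  # overlap volume
    vol = lambda box: float(np.prod(box[1] - box[0]))
    union = vol(a) + vol(b) - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred_box, gt_box, threshold=0.25):
    """AP_0.25 uses threshold=0.25; AP_0.15 uses the looser 0.15."""
    return iou_3d_axis_aligned(pred_box, gt_box) >= threshold
```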
Meta-AdaM: A Meta-Learned Adaptive Optimizer with Momentum for Few-Shot Learning
Accept (poster)
Summary: The paper presents Meta-AdaM, a meta-learned adaptive optimizer that also includes momentum. The parameters of the optimizer in the inner loop, i.e., the inner learning rate and momentum, are predicted by an LSTM network that takes into account the previous gradient and momentum steps. The optimization process also includes a double look-ahead strategy for predicting the coefficients of the gradient and the momentum. Lastly, the optimization includes a dynamic class weighting scheme, using a softmax to assign weights to each class according to the loss changes for each class. The optimizer is then combined with MAML and evaluated on three benchmark datasets (mini-ImageNet, tiered-ImageNet, and CIFAR100) with two different backbones (Conv4 and ResNet12).

Strengths:
### Originality
The paper builds upon ALFA [1] by replacing the learned network that predicts the *inner learning rate* and *weight decay* parameters with an LSTM network that predicts *inner learning rate* and *momentum* parameters. The LSTM allows taking into account previous update steps. The dynamic class weighting scheme is also an interesting regularization approach to reduce overfitting to easy classes in the inner loop.
### Significance
The proposed method shows strong results on the different benchmarks considered.
[1]: Baik, S., Choi, M., Choi, J., Kim, H., & Lee, K. M. (2023). Learning to learn task-adaptive hyperparameters for few-shot learning. IEEE Transactions on Pattern Analysis and Machine Intelligence.

Weaknesses:
### Clarity
- The paper is difficult to follow, as there is a lot of text describing the method but no formal definition of the process. Thankfully, the two algorithms help clarify the text, but the equations in the algorithms could be directly introduced and described in the text. There are also several concepts vaguely defined, such as the loss function $f_2$ in Equation 5 and the temperature $T$ in Equation 6, which is never mentioned among the hyperparameters.
- The discussion around the dynamic class weighting scheme seems unrelated to the original idea described throughout the paper. The way it is introduced into the optimization is also not really explained, neither in the text nor in the algorithms.
- The ablation study mentions four different components, but it is not clear what the *momentum* component refers to.
### Quality
- The gradient descent formula described in Equations 1 and 2 is not the commonly used one, described in [2] for instance. Here, the learning rate affects the update coming from the momentum.
- In line 175, it is said that the coefficients $\alpha$ and $\beta$ should sum to 1, hence the use of a softmax function, but it is not clear to me why this should be the case, and it is never explained.
### Significance
The introduction of the dynamic class weighting scheme, while an interesting addition, makes the work specific to the problem of few-shot classification, even though this component does not seem necessary in the approach. From Table 4 in the ablation study, we can see that it has a strong impact on performance, mainly in the 5-shot setting, and makes the method more effective than others on the mini-ImageNet benchmark. Results without this weighting and on other tasks, such as few-shot regression, would make the paper and the evaluation of the approach more solid.
[1]: Baik, S., Choi, M., Choi, J., Kim, H., & Lee, K. M. (2023). Learning to learn task-adaptive hyperparameters for few-shot learning.
IEEE Transactions on Pattern Analysis and Machine Intelligence.
[2]: Sutskever, I., Martens, J., Dahl, G., & Hinton, G. (2013, May). On the importance of initialization and momentum in deep learning. In International Conference on Machine Learning (pp. 1139-1147). PMLR.

Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
- From the paper and the equations, it seems that the optimization process does not include weight decay. This difference from the ALFA [1] paper is not really explained in the related work section; what is the motivation behind this change?
- Why do $\alpha$ and $\beta$ need to sum to 1? Why is the learning rate included in the update coming from the momentum?
- How is the dynamic class weighting scheme included in the optimization process? Does it impact both the loss of the LSTM and the base learner?
- What are the results without the class weighting? And on tasks other than classification?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors did not include any discussion of the limitations of their work, nor of potential negative impacts. An important limitation is the restriction to few-shot classification tasks, as mentioned above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Table 1:** Ablation study of dynamic class weighting using Convnet4 on the Mini-ImageNet, TieredImageNet, and Cifar100 datasets.

| Dataset|Method|5-way-1-shot| 5-way-5-shot |
| :----: | :-----:|:--------------:|:-------------:|
|Mini-ImageNet|ALFA |50.58 ± 0.51|69.12 ± 0.47|
|Mini-ImageNet|ALFA+DW| 50.65 ± 0.49|70.02 ± 0.45|
|Mini-ImageNet|Meta-AdaM w/o DW| 51.64 ± 0.49| 68.80 ± 0.46|
|Mini-ImageNet|Meta-AdaM| 52.00 ± 0.49| 70.70 ± 0.49|

| Dataset|Method|5-way-1-shot| 5-way-5-shot |
| :----: | :-----:|:--------------:|:-------------:|
|TieredImageNet|ALFA |53.16 ± 0.51| 70.54 ± 0.46|
|TieredImageNet|ALFA+DW| 53.54 ± 0.9|72.36 ± 0.19|
|TieredImageNet|Meta-AdaM w/o DW| 53.62 ± 0.50|71.57 ± 0.49|
|TieredImageNet|Meta-AdaM| 53.93 ± 0.49|72.66 ± 0.49|

| Dataset|Method|5-way-1-shot| 5-way-5-shot |
| :----: | :-----:|:--------------:|:-------------:|
|Cifar100|ALFA|39.77 ± 0.48|53.39 ± 0.49|
|Cifar100|ALFA+DW|40.79 ± 0.19|54.34 ± 0.48|
|Cifar100|Meta-AdaM w/o DW|40.14 ± 0.49|54.36 ± 0.50|
|Cifar100|Meta-AdaM (ours)|41.11 ± 0.49|56.32 ± 0.49|

____
**Concern 1 (Clarity 1):** *The paper is difficult to follow as there is a lot of text to describe the method, but it is missing a formal definition of the process.*
**Answer:** Thank you for the suggestions. We will improve the clarity of our work and provide comprehensive formal definitions for the variables used in the work.
___
**Concern 2 (Clarity 2):** *The discussion around the dynamic class weighting scheme seems unrelated to the original idea described throughout the paper. The way it is introduced in the optimization is also not really explained, neither in the text nor in the algorithms.*
**Answer:** The dynamic class weighting aims to provide a more balanced training signal. The loss computed with it is used to train both the LSTM and the base learner.
___
**Concern 3 (Clarity 3):** *The ablation study mentions four different components, but it is not clear what the momentum component refers to.*
**Answer:** Sorry for the confusion. The momentum component indicates whether we keep an accumulated momentum in the inner loop for weight updates and adaptive learning rate prediction.
___
**Concern 4 (Quality 1):** *The gradient descent formula described in Equations 1 and 2 is not the commonly used one, described in [2] for instance. Here, the learning rate is impacting the update coming from the momentum.*
**Answer:** Thanks for pointing this out. We note that Equation 1 represents the gradient descent step: we substitute the gradient $g_{i}^{t}$ in SGD with the accumulated first-order momentum. Equation 2 represents the accumulated first-order momentum update, $m_{i}^{t} = \alpha_i^t g_{i}^{t} + \beta_i^t m_{i}^{t-1}$. This form is widely used in popular optimizers for deep learning models, such as AdaM [3], AdaMax [3], and RMSProp [2].
___
**Concern 5 (Question 2):** *Why do α and β need to sum to 1? Why is the learning rate included in the update coming from the momentum?*
**Answer:** The primary rationale behind setting the sum of α and β to 1 is to ensure that the resulting update aligns in scale with both the momentum and the original gradient. This approach follows the strategies employed by popular optimizers frequently used with deep learning models, such as AdaM [3], AdaMax [3], and RMSProp [2]. Incorporating the learning rate with the momentum arises because the momentum isn't applied directly to the weight updates; as in algorithms like Adam, the learning rate and momentum are combined.
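[Editor's note] To summarize Concerns 4-5 in code, here is a hedged sketch of one inner-loop update as the rebuttal describes Eqs. (1)-(2): a softmax ties the mixing coefficients so that alpha + beta = 1, and the meta-learned learning rate scales the accumulated momentum, Adam/RMSProp-style. `mix_logits` and `lr` stand in for the LSTM meta-learner's per-parameter outputs; this is a sketch, not the authors' code:

```python
import torch

def inner_step(w, grad, m_prev, mix_logits, lr):
    """One Meta-AdaM-style inner-loop update (schematic)."""
    alpha, beta = torch.softmax(mix_logits, dim=0)  # alpha + beta = 1
    m = alpha * grad + beta * m_prev                # Eq. (2): momentum update
    w_next = w - lr * m                             # Eq. (1): descent step
    return w_next, m
```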
____
**Concern 6 (Significance & Question 4):** *Contribution of the dynamic class weighting scheme*
**Answer:** Thank you for your insightful recommendation. We conduct experiments and provide more comparative results with ALFA [1] in Table 1. Our comparison spans four configurations: ALFA, ALFA with Dynamic Class Weighting, Meta-AdaM excluding class weighting, and the full Meta-AdaM. The results show that our Meta-AdaM without dynamic weighting surpasses ALFA in 5 out of the 6 evaluated settings. Moreover, Dynamic Class Weighting offers a tangible enhancement when integrated with ALFA. For few-shot regression tasks, the dynamic class weighting scheme cannot be used, since it balances the performance of different classes, whereas regression tasks have no explicit class information.
___
**Question 1:** *From the paper and the equations, it seems that the optimization process does not include weight decay. This difference from the ALFA [1] paper is not really explained in the related work section; what is the motivation behind this change?*
**Answer:** Thank you for drawing attention to this distinction. Notably, our method can predict weight decay, as illustrated in Eq. (3). This potential enhancement could positively influence the inner optimization process owing to its inherent regularization effects. In the current context, we emphasize the influences of adaptive learning rates and momentum; by sidelining the impact of weight decay, we can observe their contributions more clearly. Nonetheless, we are considering the inclusion of weight decay prediction from Eq. (3) in our finalized version.
____
**Question 3:** *How is the dynamic class weighting scheme included in the optimization process? Does it impact both the loss of the LSTM and the base learner?*
**Answer:** Thank you for highlighting that. Indeed, you're right: the LSTM and the base learner are both trained using the dynamic loss.
___
**References:**
[1] S. Baik, M. Choi, J. Choi, H. Kim, and K. M. Lee. Meta-learning with adaptive hyperparameters, 2020.
[2] I. Goodfellow, Y. Bengio, and A. Courville. Deep learning. MIT Press, 2016.
[3] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization, 2017.

---
Rebuttal Comment 1.1: Comment: Thank you for the detailed answer and the additional results.
- I had a misunderstanding of the gradient descent formula, and the answer provided helped me clarify it. From the answer, I understand that the authors consider an *adaptive learning rate strategy with momentum*, such as the Adam or RMSProp optimizers. It also explains why $\alpha$ and $\beta$ sum to 1.
- I appreciate the results without the dynamic class weighting (DCW) and the comparison with ALFA + DCW. We can now see that the method has strong performance even without it. I highly encourage the authors to include these results in the revised version.
**Remaining concerns**
- I still think that additional results on regression tasks would strengthen the contribution. I understand, though, that running these experiments might be difficult given the time. I'm fully aware that DCW cannot be used in this setting, but I don't think that DCW should be seen as an *intrinsic component* of the proposed method. As shown in the provided table, the method has good results even without DCW, so it should also translate to the regression case, similarly to ALFA.
- I'm not sure I understand how weight decay can be included from Eq. (3).
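[Editor's note] As a hedged sketch of the dynamic class weighting discussed in Concern 6 and Question 3 (a temperature-T softmax over per-class losses, producing a loss that trains both the LSTM and the base learner): the paper's Eq. (6) reportedly uses loss *changes*, while raw per-class losses are used below only to keep the sketch self-contained.

```python
import torch

def dcw_loss(per_sample_loss, labels, n_classes, T=1.0):
    """Reweight per-sample losses so currently harder classes count more."""
    class_loss = torch.zeros(n_classes)
    for c in range(n_classes):
        mask = labels == c
        if mask.any():
            # Detach so the weights act as constants in the backward pass.
            class_loss[c] = per_sample_loss[mask].mean().detach()
    weights = torch.softmax(class_loss / T, dim=0) * n_classes  # mean weight 1
    return (weights[labels] * per_sample_loss).mean()
```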
I think that it could be an interesting addition and discussion for the contribution. Given that my misunderstanding has been clarified, I'm increasing my rating to "borderline accept". I would be happy to increase my rating again if the authors can provide results on regression tasks.

---
Reply to Comment 1.1.1: Title: Few-shot regression results Comment: **Table 1:** *Comparison results on few-shot regression tasks. The target function for regression is a sine curve $y(x) = A \sin(\omega x + b)$. We report MSE errors with a 95% confidence interval.*

|Method |Training |5-shot Testing| 10-shot Testing| 20-shot Testing|
|:-------|:---------|:---------------|:----------------|:------------------|
|MAML| 5-shot |1.13 ± 0.08|0.85 ± 0.14| 0.71 ± 0.12|
|Meta-SGD| 5-shot |0.90 ± 0.16 |0.63 ± 0.12| 0.50 ± 0.10|
|Meta-SGD+ALFA (reproduced)| 5-shot| 0.60 ± 0.04| 0.41 ± 0.02| 0.25 ± 0.22|
|Meta-SGD+Meta-AdaM| 5-shot| **0.52** ± 0.04 | **0.35** ± 0.04| **0.23** ± 0.01|

|Method |Training |5-shot Testing| 10-shot Testing| 20-shot Testing|
|:-------|:---------|:---------------|:----------------|:------------------|
|MAML| 10-shot| 1.17 ± 0.16| 0.77 ± 0.11| 0.56 ± 0.08|
|Meta-SGD| 10-shot| 0.88 ± 0.14| 0.53 ± 0.09| 0.35 ± 0.06|
|Meta-SGD+ALFA (reproduced)| 10-shot| 0.72 ± 0.16| 0.45 ± 0.03| 0.26 ± 0.02|
|Meta-SGD+Meta-AdaM| 10-shot| **0.66** ± 0.22| **0.34** ± 0.05| **0.23** ± 0.01|

|Method |Training |5-shot Testing| 10-shot Testing| 20-shot Testing|
|:-------|:---------|:---------------|:----------------|:------------------|
|MAML| 20-shot |1.29 ± 0.20| 0.76 ± 0.12| 0.48 ± 0.08|
|Meta-SGD| 20-shot| 1.01 ± 0.17| 0.54 ± 0.08| 0.31 ± 0.05|
|Meta-SGD+ALFA (reproduced)| 20-shot| 1.01 ± 0.18 |0.48 ± 0.09| 0.25 ± 0.03|
|Meta-SGD+Meta-AdaM| 20-shot| **0.95** ± 0.11| **0.43** ± 0.06| **0.23** ± 0.03|

___
**Concern 1**: *Provide results on regression tasks.*
___
**Answer:** Thanks for your quick response. To address your concern about few-shot regression tasks, we conducted additional experiments and show the results in Table 1. We follow the experimental settings of Meta-SGD [2]. The regression tasks are to fit the underlying function from a few input-output samples. The target function is $y(x) = A \sin(\omega x + b)$, where the amplitude $A$, frequency $\omega$, and phase $b$ follow uniform distributions over the intervals [0.1, 5.0], [0.8, 1.2], and [0, $\pi$], respectively. We train three few-shot regression learners on 5-shot, 10-shot, and 20-shot tasks separately. Each learner is tested on 100 5-shot, 10-shot, and 20-shot testing tasks. We report MSE errors with a 95% confidence interval. We compare MAML, Meta-SGD, Meta-SGD+ALFA [1], and Meta-SGD+Meta-AdaM. The results show that our method outperforms the two major baselines, Meta-SGD and Meta-SGD+ALFA, in all settings. These promising results indicate that our proposed method is also effective on few-shot regression tasks.
___
**References**:
[1] S. Baik, M. Choi, J. Choi, H. Kim, and K. M. Lee. Meta-learning with adaptive hyperparameters, 2020.
[2] Z. Li, F. Zhou, F. Chen, and H. Li. Meta-SGD: Learning to learn quickly for few-shot learning, 2017.
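[Editor's note] A hedged sketch of the sine-wave task sampler described above, $y(x) = A \sin(\omega x + b)$ with $A \sim U[0.1, 5.0]$, $\omega \sim U[0.8, 1.2]$, $b \sim U[0, \pi]$. The input range $x \in [-5, 5]$ is an assumption (the usual MAML convention), as the rebuttal does not state it:

```python
import numpy as np

def sample_sine_task(k_shot, seed=None):
    """Sample one k-shot sine regression task: (support inputs, targets)."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(0.1, 5.0)        # amplitude
    w = rng.uniform(0.8, 1.2)        # frequency
    b = rng.uniform(0.0, np.pi)      # phase
    x = rng.uniform(-5.0, 5.0, size=(k_shot, 1))  # assumed input range
    y = A * np.sin(w * x + b)
    return x, y
```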
Summary: The authors introduce an adaptive optimizer for meta-learning that uses a double look-ahead to incorporate momentum in scenarios where 'conventional' uses of momentum usually fall short (few-shot inner loops, due to high momentum/gradient fluctuations during the initial steps). The use of an LSTM to predict the adapted learning rates allows consideration of the weight-update history. To further boost the obtained results, the authors additionally introduce dynamic weighting of the class-specific losses to aid a more balanced optimization / loss minimization.

Strengths:
* Well-written paper, easy to read and follow, with a good introduction of the method and required background. Placement within related work is well accomplished, and the approach is well motivated.
* Generally interesting idea and approach: incorporating momentum with the double look-ahead and predicting the learning rate based on the history are, to me, the main contributions.
* Results of the overall approach seem solid.
* For the most part, interesting evaluations, including run-time.

Weaknesses:
Major:
1. **Entanglement of 'dynamic class weighting scheme' and 'optimizer approach'**: While an interesting contribution in itself, the "dynamic class weighting scheme" is highly entangled with the actual Meta-AdaM approach throughout the reported results. While I do understand the pressure to 'outperform' other works, the reported results in their current state do not, in my opinion, allow for an insightful comparison of approaches and limit the reader's insights in an unnecessary manner. I would highly encourage the authors to:
- Add Meta-AdaM (w/o dyn-cl-w) to the tables
- Evaluate MAML or others together with their proposed dyn-cl-w.
This would allow the reader to better judge the influence of the individual contributions and their influence beyond this particular work! Note: I am aware that Table 4 shows the result w/o the dynamic class loss, but since the *paper's main selling point and TITLE focus on the optimizer*, this should be individually demonstrated and be part of an honest discussion.
2. **Usefulness of the LSTM / history**: How essential is it that the adaptive learning rate is predicted by an LSTM? How important is the history, really, which is one of the main claims of the paper! -> How does it compare to e.g. using an MLP with similar complexity and the same input (prev. momentum and current gradient)?
Minor:
3. **Dataset inconsistency** (Description & References): The authors state 'Cifar100' as a few-shot learning benchmark; however, there are two inconsistencies that should be improved:
* The FSL community uses two different versions, Cifar-FS and FC100, which yield very different results due to their different complexities. The provided reference in the paper links neither of them but the original non-FSL Cifar -> clarification here is required.
* The description of the dataset (Sec. 4.1) states "*with 100 different labels of birds*", which seems more like a description of the popular CUB dataset than Cifar -- I assume some mixup has happened here?

Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: The three main points & questions are listed in the weaknesses section. However, I have several additional suggestions/questions I'd like comments from the authors on. Several experiments / ablation studies would imo strengthen this work:
* What are the actual learning rates predicted within the inner loop? (Is there some 'commonality' within the same step across different test episodes?)
* What is the ‘average’ mix of gradient and momentum contributions across inner-loop update steps (i.e. Algorithm 1, l.10)? Does this reflect ‘intuitions’ of growing momentum along the trajectory, or maybe defy these? (A visualization/comment would be interesting.)
* How big is the influence of the base learning rate in Algorithm 1? You are using it in lines 6-9 to predict the effect of taking a step of this size in the gradient or momentum direction – the step-size (and direction) of the actual step then taken, however, differs and is, according to your comments, ‘not limited’ (not even sign-wise) -> Is the lookahead still a ‘good and robust’ measure, or did you encounter difficulties? Does it change when choosing larger / smaller base-lrs, or is it rather robust (and why)?
Additional questions:
* What is the size of the LSTM, and how much complexity does it add?
* How is the temperature chosen for the dynamic class weighting, and how much does it matter & potentially differ across datasets?
* I’m slightly confused about the message of Figure 2: While Meta-AdaM does converge to a better minimum/lower loss at the final step, ALFA seems to converge faster – it would be helpful for the authors to comment on this. (In its current state, I am not convinced about this part of the paper in general, since we already see in the tables that Meta-AdaM seems to converge to a better minimum, and the only thing the figure shows me is that it does so more slowly than ALFA?)
* Algorithm 2 lines 12/13 indicate SGD for the outer loop – is this a simplification for conciseness, or did you choose SGD for the outer loop?
Some remarks to further improve presentation:
* Stating where the results for MAML are taken from would be helpful, since some of the datasets and architectures were not evaluated in the original paper.
* Line 36/37: “[…] has been shown that weight-update history is more important than the weights themselves […]” -> A reference would be important! I am aware that some are given in line 131, but two entire books without any sub-indication are a rather large search space for the reader! -> A more recent publication or a chapter/sub-chapter of the books would be appreciated.
* Sorting the references in ascending order (easier for the reader to cross-check).
* Typos/Layout: Fig. 1: space after “A”; the mix of capitalized and non-capitalized words within sub-captions is inconsistent.
**Note**: I do think the idea presented in this paper is interesting! Depending on the response from the authors, I am happy to reconsider my current rating.
> UPDATE: raised rating to weak accept after rebuttal.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors provide the runtime in comparison to related approaches, which adequately addresses the main limitation I see for this approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
**Table 1:** Ablation study for dynamic class weighting using Convnet4.

| Dataset|Method|5-way-1-shot| 5-way-5-shot |
| :----: | :-----:|:--------------:|:------------:|
|Mini-ImageNet|ALFA |50.58 ± 0.51|69.12 ± 0.47|
|Mini-ImageNet|ALFA+DW| 50.65 ± 0.49|70.02 ± 0.45|
|Mini-ImageNet|Meta-AdaM w/o DW| 51.64 ± 0.49| 68.80 ± 0.46|
|Mini-ImageNet|Meta-AdaM| 52.00 ± 0.49| 70.70 ± 0.49|
|TieredImageNet|ALFA |53.16 ± 0.51| 70.54 ± 0.46|
|TieredImageNet|ALFA+DW| 53.54 ± 0.9|72.36 ± 0.19|
|TieredImageNet|Meta-AdaM w/o DW| 53.62 ± 0.50|71.57 ± 0.49|
|TieredImageNet|Meta-AdaM| 53.93 ± 0.49|72.66 ± 0.49|
|Cifar100|ALFA|39.77 ± 0.48|53.39 ± 0.49|
|Cifar100|ALFA+DW|40.79 ± 0.19|54.34 ± 0.48|
|Cifar100|Meta-AdaM w/o DW|40.14 ± 0.49|54.36 ± 0.50|
|Cifar100|Meta-AdaM (ours)|41.11 ± 0.49|56.32 ± 0.49|

**Table 2**: Comparison results between the LSTM and an MLP using Convnet4.

| Dataset|Method|5-way-1-shot| 5-way-5-shot |
| :----: | :-----:|:--------------:|:------------:|
|Mini-ImageNet|Meta-AdaM with MLP| 52.27 ± 0.49| 70.48 ± 0.49|
|Mini-ImageNet|Meta-AdaM with LSTM|52.00 ± 0.49| 70.70 ± 0.49|
|TieredImageNet| Meta-AdaM with MLP|53.48 ± 0.49|72.60 ± 0.49|
|TieredImageNet|Meta-AdaM with LSTM|53.93 ± 0.49|72.66 ± 0.49|
|Cifar100|Meta-AdaM with MLP |40.28 ± 0.49 |54.96 ± 0.49|
|Cifar100| Meta-AdaM with LSTM| 41.11 ± 0.49 |56.32 ± 0.49|

**Weakness 1:** *Add Meta-AdaM (w/o dyn-cl-w) to the tables. Evaluate MAML or others together with the proposed dyn-cl-w.*
**Answer** We conduct additional experiments on four configurations in Table 1: ALFA [1], ALFA with dynamic class weighting, Meta-AdaM without class weighting, and Meta-AdaM. The results show that Meta-AdaM without dynamic class weighting outperforms ALFA in 5 out of 6 evaluated settings. Notably, dynamic class weighting can also enhance other meta-learners like ALFA.
___
**Weakness 2** *How essential is it that the adaptive learning rate is predicted by an LSTM? How important is the history? How does it compare to e.g. using an MLP with similar complexity and the same input (prev. momentum and current gradient)?*
**Answer** We conduct additional experiments comparing the LSTM against an MLP with the same inputs: accumulated momentum and current gradients. The results are presented in Table 2. We can observe that the LSTM outperforms the MLP in most settings, which indicates the importance of the weight-update history.
___
**Weakness 3** *Dataset inconsistency*
**Answer** Thanks for pointing that out. We will correct the description in our final paper.
___
**Question 1:** *What are the actual learning rates predicted within the inner loop? Is there some commonality within the same step across different test episodes?*
**Answer** We conduct experiments on test tasks on Cifar100 with Convnet4 as the backbone model and present the results in Figure 2 in the global rebuttal. From the results, we can observe that the predicted learning rate is small at the beginning, since the conflict between momentum and gradient is large in the first few steps. The learning rate becomes larger as the update steps proceed. This behavior indicates that the conflict between momentum and gradient becomes smaller, and we obtain a smoother update direction.
___
**Question 2** *What is the average mix of gradient and momentum across inner-loop update steps? Does this reflect intuitions of growing momentum?*
**Answer** We conduct experiments on test tasks on Cifar100 with Convnet4 as the backbone and present the results in Figure 3 in the global rebuttal. The results show that the gradients give lower losses during the first three steps.
So the coefficient α for the gradient is larger than the coefficient β for the momentum. However, after accumulating for a few steps, the accumulated momentum can generate a lower loss and thus a larger coefficient. Such results support the intuition that the momentum quality improves over the trajectory.
___
**Question 3** *How big is the influence of the base learning rate in Algorithm 1?*
**Answer** In our experiments, we set the base learning rate to 0.01. We also evaluate base learning rates of 0.001 and 0.0001. Under the 5-way-1-shot setting on Mini-ImageNet using Convnet4 as the backbone, the accuracies for learning rates of 0.001 and 0.0001 are similar, which indicates that the LSTM can adjust the predicted learning rate accordingly. The base learning rate has a small impact on the overall performance.
___
**Question 4** *What is the size of the LSTM, and how much complexity does it add?*
**Answer** The LSTM used in our paper contains two layers, with hidden sizes of 30 and 50 for 5-way-1-shot and 5-way-5-shot tasks, respectively. The complexity does increase: Table 5 in our paper shows that our method takes an extra 35 seconds of running time per epoch compared to ALFA.
___
**Question 5** *How is the temperature chosen?*
**Answer** In our experiments, we set the temperature to 15 for 5-way-1-shot tasks and 25 for 5-way-5-shot tasks in the dynamic class weighting, across all settings and datasets.
___
**Question 6** *Why does ALFA seem to converge faster?*
**Answer** Regarding the convergence rate over the first several steps, we empirically show the predicted learning rates across test tasks in Figure 2 in the global rebuttal. The results show that our optimizer tends to predict a smaller learning rate in the first few steps, since the conflict between momentum and gradient is large. This is reflected in Figure 3 in the global rebuttal, where the momentum coefficients are smaller than the gradient coefficients. A smaller learning rate leads to a slower convergence rate in the first few steps, providing space for accumulating high-quality momentum. Once the momentum quality is good after several steps, the predicted learning rates become larger, leading to a better optimum than ALFA.
___
**Question 7** *Did you choose SGD for the outer loop?*
**Answer** In our experiments, we use Adam as our outer-loop optimizer. In Algorithm 2, lines 12/13 use a general update form.
___
**References**
[1] S. Baik, M. Choi, J. Choi, H. Kim, and K. M. Lee. Meta-learning with adaptive hyperparameters, 2020.
---
Rebuttal Comment 1.1:
Comment: I'd like to thank the authors for their thorough response! The provided answers & results have addressed most of my questions -- I appreciate the honesty regarding the provided results, especially regarding the clarification of the dynamic class reweighting scheme. I think the insights provided during the rebuttal are important and the authors should make a deliberate effort to incorporate them into the paper where possible (and add references to the supplementary material where space is too limited).
However, I do keep my concern that while the presented momentum-incorporation and weight-update-history methods are interesting aspects, the dynamic class weighting has a major influence on the presented results -- and the paper in its current form more-or-less hides this fact (as pointed out by other reviewers as well). Additionally, to claim to present an optimizer for "few shot learning" as stated in the title, I have to agree with other reviews that 'regression' should at least be demonstrated as well -> Note that ALFA [1] does present regression results (Sec. 4.5). All in all, I do see the novelty but equally the remaining limitations of the work in its current form. -> I update my rating to 'weak accept' (I would support acceptance due to novelty, but not champion this paper).
---
Reply to Comment 1.1.1: Title: Few-shot regression results
Comment: **Table 1:** *Comparison results on few-shot regression tasks. The target function for regression is a sine curve $y(x) = A \sin(\omega x + b)$. We report MSE errors with a 95% confidence interval.*

|Method |Training |5-shot Testing| 10-shot Testing| 20-shot Testing|
|:-------|:---------|:---------------|:----------------|:------------------|
|MAML| 5-shot |1.13 ± 0.08|0.85 ± 0.14| 0.71 ± 0.12|
|Meta-SGD| 5-shot |0.90 ± 0.16 |0.63 ± 0.12| 0.50 ± 0.10|
|Meta-SGD+ALFA (reproduced)| 5-shot| 0.60 ± 0.04| 0.41 ± 0.02| 0.25 ± 0.22|
|Meta-SGD+Meta-AdaM| 5-shot| **0.52** ± 0.04 | **0.35** ± 0.04| **0.23** ± 0.01|

|Method |Training |5-shot Testing| 10-shot Testing| 20-shot Testing|
|:-------|:---------|:---------------|:----------------|:------------------|
|MAML| 10-shot| 1.17 ± 0.16| 0.77 ± 0.11| 0.56 ± 0.08|
|Meta-SGD| 10-shot| 0.88 ± 0.14| 0.53 ± 0.09| 0.35 ± 0.06|
|Meta-SGD+ALFA (reproduced)| 10-shot| 0.72 ± 0.16| 0.45 ± 0.03| 0.26 ± 0.02|
|Meta-SGD+Meta-AdaM| 10-shot| **0.66** ± 0.22| **0.34** ± 0.05| **0.23** ± 0.01|

|Method |Training |5-shot Testing| 10-shot Testing| 20-shot Testing|
|:-------|:---------|:---------------|:----------------|:------------------|
|MAML| 20-shot |1.29 ± 0.20| 0.76 ± 0.12| 0.48 ± 0.08|
|Meta-SGD| 20-shot| 1.01 ± 0.17| 0.54 ± 0.08| 0.31 ± 0.05|
|Meta-SGD+ALFA (reproduced)| 20-shot| 1.01 ± 0.18| 0.48 ± 0.09| 0.25 ± 0.03|
|Meta-SGD+Meta-AdaM| 20-shot| **0.95** ± 0.11| **0.43** ± 0.06| **0.23** ± 0.03|

___
**Concern 1**: *Provide results on regression tasks.*
___
**Answer:** Thanks for your valuable suggestions. To further enhance our work, we conducted additional experiments on few-shot regression tasks. We follow the experimental setting of Meta-SGD [2]. Each regression task is to fit the underlying function from the given inputs. The target function is $y(x) = A \sin(\omega x + b)$, where the amplitude $A$, frequency $\omega$, and phase $b$ follow uniform distributions over the intervals [0.1, 5.0], [0.8, 1.2], and [0, $\pi$], respectively. We train three few-shot regression learners on 5-shot, 10-shot, and 20-shot tasks. Each learner is then tested on 100 5-shot, 10-shot, and 20-shot testing tasks. We report MSE errors with a 95% confidence interval. We compare MAML, Meta-SGD, Meta-SGD+ALFA [1], and Meta-SGD+Meta-AdaM. The results show that our method outperforms the two major baselines, Meta-SGD and Meta-SGD+ALFA, in all settings, indicating that our proposed method is also effective on few-shot regression tasks.
___
**Reference**:
[1] S. Baik, M. Choi, J. Choi, H. Kim, and K. M. Lee. Meta-learning with adaptive hyperparameters, 2020.
[2] Z. Li, F. Zhou, F. Chen, and H. Li. Meta-SGD: Learning to learn quickly for few-shot learning, 2017.
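To make the look-ahead mechanism discussed in Questions 2, 3, and 6 of the rebuttal above easier to follow, here is a minimal PyTorch sketch of the double look-ahead idea: probe one base-lr step along the gradient and one along the momentum, then weight the two directions by their look-ahead losses. The softmax weighting and the flattened-parameter treatment are illustrative assumptions; the paper's Algorithm 1 defines the exact form.

```python
import torch

def double_lookahead_coeffs(loss_fn, params, grad, momentum, base_lr=0.01):
    """Compare one base-lr look-ahead step along the gradient with one along
    the momentum, and convert the two look-ahead losses into mixing
    coefficients (alpha, beta): the lower loss earns the larger coefficient."""
    with torch.no_grad():
        loss_g = loss_fn(params - base_lr * grad)      # gradient look-ahead
        loss_m = loss_fn(params - base_lr * momentum)  # momentum look-ahead
        alpha, beta = torch.softmax(-torch.stack([loss_g, loss_m]), dim=0)
    return alpha, beta

# Inner-loop update sketch: an LSTM-predicted learning rate then scales the
# mixed direction, e.g. params <- params - lr * (alpha * grad + beta * momentum),
# matching the early-step behavior discussed above (small beta while momentum
# is still noisy, growing as its quality improves).
```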
Summary: The paper proposes Meta-Adam, a meta-learned adaptive optimizer that employs momentum to rapidly adapt a meta-learned initial model to a few-shot task. The proposed framework consists of four components. First, an LSTM that predicts a learning rate for each weight inside the model using the weight update history in previous iterations. Furthermore, the optimizer employs momentum and additionally develops a look-ahead procedure for identifying environments where momentum hurts optimization and adjusting the parameters accordingly. Lastly, the loss function is weighted class-wise based on the class-specific loss during each gradient step to ensure that all class losses are optimized collectively. Experiments on mini-, tiered-ImageNet and Cifar100 benchmarks demonstrate the efficacy of the proposed method. Strengths: - The method proposed explores interesting ideas around better-optimizing learning rates and momentum in few-shot adaptive meta-learned models. The components of the method are well-motivated, derived to address specific problems, and empirically effective. - Experiments demonstrate improvements in performance across various benchmarks compared to MAML-based benchmarks. - Ablation studies provided demonstrate the empirical efficacy of each component of the proposed method and how much they contribute to the overall improvements in few-shot image classification accuracy. - The paper is written clearly with reasonable notation and is easy to follow. Weaknesses: - All baselines that are compared are MAML-based. Although it's understandable given the focus of the paper on gradient adaptive few-shot image classification models, the method would still be compared against other groups (for instance, metric-based [1, 2, 3, 4]) when deciding to use a few-shot learner in an applied setting. It's useful to provide a discussion of this and potentially include a comparison to other non-MAML-based baselines. - Figure 2 / Section 4.5 - it would be interesting to see what the variance bounds are for each optimization method as averages can be uninformative as to how much loss reduction can vary per task. Furthermore, ALFA seems to achieve much faster convergence although a worse optimum compared to Meta-AdaM. Why is that the case? - 4.6 Dynamic class loss discussion lacks details as to the task/benchmark used in Figure 3. Without said details, it's difficult to evaluate whether the trends observed are statistically significant to support claims on reducing loss across all classes. [1] Prototypical Networks for Few-shot Learning [2] Improved Few-Shot Visual Classification [3] Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes [4] Enhancing Few-Shot Image Classification with Unlabelled Examples Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See weaknesses for questions. Happy to improve my rating once the authors have addressed the concerns above. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: There is no explicit discussion of limitations or potential negative societal impact of the work, although a brief discussion of some directions for future work, in particular with respect to negative learning rates, is provided. I strongly encourage the authors to provide a discussion of other potential limitations of the work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
**Table 1:** Additional comparison results using Convnet4 on Mini-ImageNet. We report performance in terms of accuracy (%) with standard deviation.

|Dataset|Method| 5-way-1-shot|5-way-5-shot |
|:-----------------:|:-------------------------:|:------------------------:|:------------------------:|
|Mini-ImageNet|ProtoNet[1]|49.42 ± 0.78|68.20 ± 0.66|
|Mini-ImageNet|Meta-AdaM (ours)|52.00 ± 0.49|70.70 ± 0.49|

___
**Question 1** *All baselines that are compared are MAML-based. Although it's understandable given the focus of the paper on gradient adaptive few-shot image classification models, the method would still be compared against other groups (for instance, metric-based [1, 2, 3, 4]) when deciding to use a few-shot learner in an applied setting. It's useful to provide a discussion of this and potentially include a comparison to other non-MAML-based baselines.*
**Answer** Thank you for highlighting these studies. In Table 1 above, we compare our result with ProtoNet [1] on Mini-ImageNet using Convnet4 as the backbone. The results show that our method outperforms ProtoNet [1] in both settings. For the other three methods, we want to note that the methods presented in [2,3,4] use pre-trained models. In contrast, our primary focus centers around the proposed Meta-AdaM optimizer. Consequently, the performance results we have reported do not stem from pre-trained models, ensuring minimal interference from other factors. While time constraints have prevented us from providing performance data using pre-trained models, it's crucial to understand that our techniques complement them. As such, our methods can be integrated with gradient-based meta-learning strategies that employ pre-trained models, as long as they employ an inner fine-tuning process.
___
**Question 2:** *Figure 2 / Section 4.5 - it would be interesting to see what the variance bounds are for each optimization method, as averages can be uninformative as to how much loss reduction can vary per task. Furthermore, ALFA seems to achieve much faster convergence, although a worse optimum compared to Meta-AdaM. Why is that the case?*
**Answer:** Thank you for your recommendation. In Figure 1 of the global rebuttal, we present the variance bounds of both our method and ALFA across 500 test tasks. From the figure, we can see that after the third step, the losses associated with the test tasks closely align with the mean loss. The variances after step 3 are around $1 \times 10^{-4}$. Therefore, the loss reduction works for the majority of tasks. Regarding the convergence rate over the first several steps, we empirically show the predicted learning rates across test tasks in Figure 2 in the global rebuttal. The results show that our optimizer tends to predict a lower learning rate in the first few steps, since the conflict between momentum and gradient is large. This is reflected in Figure 3 in the global rebuttal, where the momentum coefficients are smaller than the gradient coefficients. Using a smaller learning rate leads to a slower convergence rate in the first few steps, which provides space for accumulating high-quality momentum. Once the momentum quality is good after several steps, the predicted learning rates become larger and lead to a better optimum than ALFA. This also highlights our contribution of using the double look-ahead to handle highly variable momentum.
___
**Question 3:** The Section 4.6 dynamic class loss discussion lacks details regarding the task/benchmark used in Figure 3.
Without said details, it’s difficult to evaluate whether the trends observed are statistically significant to support claims on reducing loss across all classes.
**Answer:** Thank you for drawing attention to this. In Section 4.6, we used a 5-way-1-shot task on the Mini-ImageNet dataset. For the finalized version, we intend to broaden our evaluation to include additional tasks and benchmark datasets.
___
**References:**
[1] Snell, J., Swersky, K., & Zemel, R. (2017). Prototypical networks for few-shot learning. Advances in Neural Information Processing Systems.
[2] Bateni, P., Goyal, R., Masrani, V., Wood, F., & Sigal, L. (2020). Improved few-shot visual classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 14493-14502).
[3] Requeima, J., Gordon, J., Bronskill, J., Nowozin, S., & Turner, R. E. (2019). Fast and flexible multi-task classification using conditional neural adaptive processes. Advances in Neural Information Processing Systems.
[4] Bateni, P., Barber, J., Van de Meent, J. W., & Wood, F. (2022). Enhancing few-shot image classification with unlabelled examples. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 2796-2805).
---
Rebuttal Comment 1.1:
Comment: The authors have adequately addressed the limitations noted in my original review. I strongly recommend that the authors include the discussion above within their camera-ready version and expand the number of non-MAML baselines they compare to (including some that may not have been referenced explicitly). That being said, after reviewing the rebuttal, the reviews by other reviewers, and the authors' responses, I am happy to recommend the paper for acceptance and will be improving my rating to full acceptance.
---
Reply to Comment 1.1.1:
Comment: Thanks for your kind advice in your review. We will take it to further improve our final paper.
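As a rough illustration of the dynamic class weighting discussed in Question 3 above, one plausible realization is a temperature-scaled softmax over the per-class losses (temperatures of 15 and 25 are quoted in the authors' rebuttals). The exact functional form, including how the temperature enters, is an assumption here, not taken from the paper.

```python
import torch

def dynamic_class_loss(class_losses, temperature=15.0):
    """Weight per-class losses so that currently worse-off classes receive
    more weight, pushing all class losses down together. `class_losses` is a
    1-D tensor with one mean loss per class; the softmax form and the way the
    temperature scales it are assumptions, not the paper's exact scheme."""
    weights = torch.softmax(class_losses / temperature, dim=0)
    return (weights * class_losses).sum()
```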
Summary: This paper introduces a novel meta-learned optimizer with momentum for few-shot learning. The proposed approach incorporates the parameter-update history by leveraging LSTMs to adaptively adjust the learning rate. To enhance convergence speed, momentum is integrated into the meta-optimizer by using a double look-ahead mechanism. Furthermore, to address imbalanced learning, a dynamic class weighting scheme is introduced, encouraging optimal model optimization by assigning more weight to influential classes. The effectiveness of the proposed method is evaluated on multiple benchmarks, showcasing its performance in few-shot learning scenarios.
Strengths:
* The proposed meta-optimizer is novel, as it considers the history of weight updates and momenta when adapting the learning rate, which distinguishes it from previous works in the field. However, as my research expertise lies outside of the meta-optimizer domain, I cannot ascertain the paper's coverage of previous literature. Thus, I eagerly anticipate feedback from other reviewers with expertise in this topic.
* The paper is written in a clear and concise manner, facilitating readers' comprehension of the ideas and technical contributions. Each sub-problem is precisely defined, including its limitations, followed by the proposal of techniques to address those issues. This clarity enhances the paper's accessibility and aids in conveying the research concepts effectively.
Weaknesses:
* The related work section may have missed some recent works or baseline methods on meta-optimizers for few-shot learning (FSL). All the existing works mentioned in Section 2.2.1 were published before 2020, which raises the question of whether there have been significant developments in this topic between 2021 and 2023. This could lead to confusion and a potential gap in the literature review.
* The writing in the paper lacks conciseness. The authors repeat high-level ideas and the limitations of previous works using very similar expressions multiple times throughout the entire paper. For instance, the descriptions provided in Line 118, Line 129, and Line 139 are identical, which can be redundant and could benefit from consolidation.
* The method section contains some content that should be included in the related works section. For example, in Line 145, the comparison between the proposed idea and a previous work should be presented in the related works section.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
* I would appreciate it if the authors and other reviewers could carefully verify the inclusion of all relevant previous works in this area within the paper. This thorough examination is crucial for evaluating the novelty of the paper, and it will significantly influence my final rating.
* I would expect the authors to improve the conciseness of the writing.
* The dynamic loss contributes to the accuracy of the proposed method, but the paper does not mention such a technique in the introduction. I expect the authors to provide an explanation.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper includes the limitations in Section 4.8 but fails to mention the negative social impact.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
**Table 1:** Additional comparison results using Convnet4 on the Mini-ImageNet, TieredImageNet, and Cifar100 datasets. We report performance in terms of accuracy (%) with standard deviation.

| Dataset | Method | 5-way-1-shot | 5-way-5-shot |
|:-----------------:|:-------------------------:|:------------------------:|:------------------------:|
| Mini-ImageNet | e3bm (2020) [1] | 53.2 | 65.1 |
| Mini-ImageNet | Sparse-MAML (2021) [4] | 51.04 ± 0.59 | 68.05 ± 0.84 |
| Mini-ImageNet | MAML+SiMT (2022) [3] | 51.49 ± 0.18 | 68.74 ± 0.12 |
| Mini-ImageNet | MeTAL (2021) [2] | 52.62 ± 0.37 | 70.52 ± 0.29 |
| Mini-ImageNet | Meta-AdaM (ours) | 52.00 ± 0.49 | **70.70 ± 0.49** |

| Dataset | Method | 5-way-1-shot | 5-way-5-shot |
|:-----------------:|:-------------------------:|:------------------------:|:------------------------:|
| TieredImageNet | e3bm (2020) [1] | 52.1 | 70.2 |
| TieredImageNet | Sparse-MAML (2021) [4] | 56.39 ± 0.38 | 73.01 ± 0.24 |
| TieredImageNet | MAML+SiMT (2022) [3] | 52.51 ± 0.21 | 69.58 ± 0.11 |
| TieredImageNet | MeTAL (2021) [2] | 54.34 ± 0.31 | 70.40 ± 0.21 |
| TieredImageNet | Meta-AdaM (ours) | 53.93 ± 0.49 | **72.66** ± 0.49 |

| Dataset | Method | 5-way-1-shot | 5-way-5-shot |
|:-----------------:|:-------------------------:|:------------------------:|:------------------------:|
| Cifar100 | e3bm (2020) [1] | 39.9 | 52.6 |
| Cifar100 | Meta-AdaM (ours) | **41.11** ± 0.49 | **56.32** ± 0.49 |

___
**Table 2:** Additional comparison results using Resnet12 on Mini-ImageNet and TieredImageNet. We report performance in terms of accuracy (%) with standard deviation.

| Dataset | Method | 5-way-1-shot | 5-way-5-shot |
| :-----------------: | :-------------------------:| :------------------------:|:------------------------:|
| Mini-ImageNet | MAML+SiMT (2022) [3] | 56.28 ± 0.63 | 72.01 ± 0.21 |
| Mini-ImageNet | MeTAL (2021) [2] | 59.64 ± 0.38 | 76.20 ± 0.19 |
| Mini-ImageNet | Meta-AdaM (ours) | **59.89** ± 0.49 | **77.92** ± 0.43 |

| Dataset | Method | 5-way-1-shot | 5-way-5-shot |
| :-----------------: | :-------------------------:| :------------------------:|:------------------------:|
| TieredImageNet | MAML+SiMT (2022) [3] | 59.72 ± 0.22 | 74.40 ± 0.90 |
| TieredImageNet | MeTAL (2021) [2] | 63.89 ± 0.43 | 80.14 ± 0.40 |
| TieredImageNet | Meta-AdaM (ours) | **65.31** ± 0.48 | **85.24** ± 0.35 |

___
**Weakness 1 & Question 1**: *I would appreciate it if the authors and other reviewers could carefully verify the inclusion of all relevant previous works in this area within the paper. This thorough examination is crucial for evaluating the paper's novelty and will significantly influence my final rating.*
**Answer**: Thanks for your suggestion. We have carefully examined the related works, and the novelty of our work lies in integrating momentum into a meta-learned adaptive optimizer. Existing methods cannot utilize momentum for weight updates due to the high variance of momentum at the initial update steps. We address this challenge with a double look-ahead. We also propose to utilize momentum for adaptive learning rate prediction. To address your concern, we present the results of recent works in Table 1 and Table 2. The experiment section in our paper already presents Sparse-MAML (2021) [4]. In addition, we include e3bm (2020) [1], MAML+SiMT (2022) [3], and MeTAL (2021) [2], which also work on improving the fine-tuning process. We can observe that our method outperforms the other methods in most of the settings.
___
**Weakness 2 & Question 2**: *The writing in the paper lacks conciseness. The author repeats high-level ideas and the limitations of previous works using similar expressions multiple times throughout the paper. For instance, the descriptions provided in Line 118, Line 129, and Line 139 are identical, which can be redundant and could benefit from consolidation.*
**Answer**: Thanks for your suggestion. We will revise and reduce the redundancy in the final version.
___
**Weakness 3**: *The method section contains some content that should be included in the related works section. For example, in Line 145, the comparison between the proposed idea and previous work should be presented in the related works section.*
**Answer**: Thanks for your suggestion. We will add the missing baseline to the related work section of our final paper.
___
**Question 3**: *The dynamic loss contributes to the accuracy of the proposed method, but the paper does not mention such a technique in the introduction. I expect the author could provide an explanation.*
**Answer**: Thanks for pointing this out. We will add a discussion of the dynamic loss to the introduction.
___
**References**
[1] Y. Liu, B. Schiele, and Q. Sun. An ensemble of epoch-wise empirical bayes for few-shot learning. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVI 16, pages 404–421. Springer, 2020.
[2] S. Baik, J. Choi, H. Kim, D. Cho, J. Min, and K. M. Lee. Meta-learning with task-adaptive loss function for few-shot learning. In International Conference on Computer Vision (ICCV), 2021.
[3] J. Tack, J. Park, H. Lee, J. Lee, and J. Shin. Meta-learning with self-improving momentum target. In Advances in Neural Information Processing Systems, 2022.
[4] J. von Oswald, D. Zhao, S. Kobayashi, S. Schug, M. Caccia, N. Zucchet, and J. Sacramento. Learning where to learn: Gradient sparsity in meta and continual learning, 2021.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. Since most of my concerns have been addressed, I recommend acceptance of the paper.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer cqvx, we extend our sincere gratitude for the invaluable feedback you've provided. Your insights have played a pivotal role in enhancing the quality of our work. Upon reviewing your updated comments, we observed a notable shift towards a more positive perspective. In light of this, we wanted to kindly bring to your attention the possibility that the associated score might have been inadvertently overlooked for adjustment. Please know that we completely respect your decision. Once again, thank you for your invaluable feedback.
Rebuttal 1: Rebuttal: Dear Reviewers, Firstly, we'd like to express our deep gratitude for the comprehensive review and insightful feedback on our paper. We have carefully addressed each of your comments, and our individual responses to each query can be found in the subsequent sections. Incorporating your feedback, we've endeavored to enhance the overall clarity and substance of our paper. To this end, we've conducted supplementary experiments for dynamic class weighting, the predicted learning rates, and computed coefficients for momentum. Some figures are now integrated into the attached PDF. Your diligent review and valuable suggestions have been important in improving the quality of our manuscript. We deeply value your expertise and the time you invested in assessing our work. Best regards, The Authors Pdf: /pdf/d6eef178cc7e3a7c2ec6126cab680402ac147870.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: In this paper, the authors aim to solve the few-shot learning problem by proposing a meta-learned learning-rate learner. This learner utilizes the weight-update history as input to predict more appropriate learning rates for rapid convergence. Furthermore, they incorporate momentum into the optimization process of few-shot learning via a double look-ahead mechanism, enabling rapid convergence similar to many-shot settings. Extensive experimental results are provided to show the effectiveness of the proposed method.
Strengths:
- This paper is well-written and easy to read. A lot of figures are included to explain the proposed method.
- The proposed method is technically sound. It is reasonable to use meta-learning to adjust the learning rates.
- Extensive experimental results are provided.
Weaknesses:
- The proposed idea of using a model to output the learning rates is not novel. A similar thing has been done in [a], which uses a meta-learned model to output a series of hyperparameters, including learning rates.
- The performance of the proposed method is much lower compared to recent few-shot learning methods. For example, the 1-shot accuracy on miniImageNet is only 59.89% in Table 3. Existing methods, e.g., [b] and [c], achieve more than 68% accuracy in the same setting. It is not necessary that the proposed method beats all existing methods. However, it should not be much lower than other methods; at least, the performance should be comparable. Besides, it is necessary to indicate that the proposed method doesn't achieve SOTA performance and to compare against the SOTA in the tables.
- Several important related works are missing, e.g., [a-d]. They should be compared and discussed.
Overall, I think the quality of this paper is not satisfactory. The idea is a little bit incremental. The overall performance is not significant. Many important related works are missing. Therefore, I recommend rejection. The authors need to show the proposed method can be applied to recent popular baselines, e.g., [b, c, d]. I don't think the current version is ready to be presented at NeurIPS.
[a] An Ensemble of Epoch-wise Empirical Bayes for Few-shot Learning, ECCV 2020.
[b] Partner-Assisted Learning for Few-Shot Image Classification, ICCV 2021.
[c] DeepEMD: Differentiable Earth Mover's Distance for Few-Shot Learning, TPAMI.
[d] Rectifying the Shortcut Learning of Background for Few-Shot Learning, NeurIPS 2021.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please address my concerns in the "weaknesses" section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: See my concerns in the "weaknesses" section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
**Table 1:** Additional comparison results using Convnet4 on the Mini-ImageNet, TieredImageNet, and Cifar100 datasets. We report performance in terms of accuracy (%) with standard deviation.

|Dataset|Method| 5-way-1-shot|5-way-5-shot |
|:-----------------:|:-------------------------:|:------------------------:|:------------------------:|
| Mini-ImageNet | e3bm [3] | 53.2 | 65.1 |
| Mini-ImageNet | Meta-AdaM (ours) | **52.00** ± 0.49 | **70.70** ± 0.49 |
| TieredImageNet | e3bm [3] | 52.1 | 70.2 |
| TieredImageNet | Meta-AdaM (ours) | **53.93** ± 0.49 | **72.66** ± 0.49 |
| Cifar100 | e3bm [3] | 39.9 | 52.6 |
| Cifar100 | Meta-AdaM (ours) | **41.11** ± 0.49 | **56.32** ± 0.49 |

___
**Question 1**: *The proposed idea of using a model to output the learning rates is not novel. A similar thing has been done in [a], which uses a meta-learned model to output a series of hyper-parameters, including learning rates.*
**Answer**: It seems there's been a misunderstanding regarding the originality of our methods. We wish to emphasize our pioneering contributions in two aspects:
* Integrating momentum into a meta-learned adaptive optimizer.
* Utilizing the history of weight changes for adaptive learning rate prediction.
Existing methods have been hindered from employing momentum for weight updates due to pronounced variances in momentum during the initial optimization steps. To overcome this limitation, we introduce a double look-ahead strategy. Furthermore, our approach involves leveraging momentum for adaptive learning rate prediction, which more accurately reflects weight changes compared to other adaptive learning rate prediction methods. The key difference between our work and [3], as well as other related works, is that our learning rate prediction is based on the update history. Our LSTM network takes accumulated momentum and current gradients to grasp the weight-update history, which is more critical than gradients and weights [1, 2]. In contrast, [3] and other related works mostly use the average loss, input features or weights, and the current gradient as their inputs to determine the learning rate. They focus more on the current update rather than the update history. The experimental studies in Sections 4.2 and 4.3 and the additional results presented in Table 1 show that our method is more effective than other related works.
___
**Question 2**: *The performance of the proposed method is much lower compared to recent few-shot learning methods. For example, the 1-shot accuracy on miniImageNet is only 59.89% in Table 3. Existing methods, e.g., [b] and [c], achieve more than 68% accuracy in the same setting. It is not necessary that the proposed method beats all existing methods. However, it should not be much lower than other methods. At least, the performance should be comparable. Besides, it is necessary to indicate the proposed method doesn't achieve the SOTA performance and compare the SOTA in the tables.*
**Answer**: Thank you for highlighting these studies. We want to note that the methods presented in [4,5,6] (b, c, d in your review) use pre-trained models. In contrast, our primary focus centers around the proposed Meta-AdaM optimizer. Consequently, the reported performance results do not stem from pre-trained models, ensuring minimal interference from other factors.
While time constraints have prevented us from providing performance data using pre-trained models, it's crucial to understand that our techniques complement them. As such, our methods can seamlessly be integrated with gradient-based meta-learning strategies that employ pre-trained models.
___
References
[1] I. Goodfellow, Y. Bengio, and A. Courville. Deep learning. MIT Press, 2016.
[2] M. J. Kochenderfer and T. A. Wheeler. Algorithms for optimization. MIT Press, 2019.
[3] Y. Liu, B. Schiele, and Q. Sun. An ensemble of epoch-wise empirical bayes for few-shot learning. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVI 16, pages 404–421. Springer, 2020.
[4] X. Luo, L. Wei, L. Wen, J. Yang, L. Xie, Z. Xu, and Q. Tian. Rectifying the shortcut learning of background for few-shot learning. Advances in Neural Information Processing Systems, 34:13073–13085, 2021.
[5] J. Ma, H. Xie, G. Han, S.-F. Chang, A. Galstyan, and W. Abd-Almageed. Partner-assisted learning for few-shot image classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10573–10582, 2021.
[6] C. Zhang, Y. Cai, G. Lin, and C. Shen. Deepemd: Differentiable earth mover's distance for few-shot learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(5):5632–5648, 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response from the authors.
**For Q1**: The authors didn't really answer my question. I agree that the authors propose a better way to meta-learn the learning rates for few-shot learning. However, the basic idea of "meta-learning the learning rates" is not novel. Therefore, the overall contribution of this paper is somewhat incremental and limited. In their response, the authors only emphasize the technical detail differences between the proposed method and existing works.
**For Q2**: Again, the authors only explained their experimental settings and didn't provide any solid results.
I will consider upgrading my rating if the authors can:
- Change the claims in the paper and highlight that "meta-learning the learning rates" has been explored a lot in existing works, e.g., [a] and [e].
- Provide the results using pre-trained models, and show their method still works better.
[a] An Ensemble of Epoch-wise Empirical Bayes for Few-shot Learning, ECCV 2020.
[e] Meta-SGD: Learning to Learn Quickly for Few-Shot Learning. https://arxiv.org/abs/1707.09835
---
Reply to Comment 1.1.1: Title: Result on pretrained network
Comment:

|Dataset| Method| 5-way-5-shot|
|:---------|:-----------|:-----------------|
|Mini-ImageNet|AM3+TRAML[4]|79.54 ± 0.60|
|Mini-ImageNet|baseline[1]| 81.38 ± 0.41|
|Mini-ImageNet|Net-Cosine[5]|81.57 ± 0.56|
|Mini-ImageNet|MABAS[3]|82.70 ± 0.54|
|Mini-ImageNet|IEPT[8]| 82.90 ± 0.30|
|Mini-ImageNet|MELR [2]| 83.40 ± 0.28|
|Mini-ImageNet|DeepEMD [7]| 83.47 ± 0.61|
|Mini-ImageNet|COSCO [6] |85.16 ± 0.42|
|Mini-ImageNet|Meta-AdaM + pretrained (ours)| 84.46 ± 0.36|

___
Sorry for the late response. We have been working on getting results for combining our Meta-AdaM with the pre-trained model ResNet12.
___
**Concern 1:** *Change the claims in the paper and highlight that "meta-learning the learning rates" has been explored a lot in existing works, e.g., [a] and [e].*
**Answer:** Thank you for your great suggestion.
We will revise the paper's claims and place more emphasis on prior research regarding learning-rate meta-learning in both the Introduction and Related Work sections of the final version.
___
**Concern 2:** *Provide the results using pre-trained models, and show their method still works better.*
**Answer:** To address this concern, we conducted further experiments utilizing pre-trained networks for the 5-way-5-shot task on the Mini-ImageNet dataset. The table above summarizes our findings in comparison to current state-of-the-art models. We employed a pre-trained ResNet12 as the backbone network, as provided by [6]. Our approach yields results in line with top-performing methods, and notably, our result is close to that of COSCO [6], which leverages additional data, such as object positioning.
___
**References:**
[1] W.-Y. Chen, Y.-C. Liu, Z. Kira, Y.-C. F. Wang, and J.-B. Huang. A closer look at few-shot classification. arXiv preprint arXiv:1904.04232, 2019.
[2] N. Fei, Z. Lu, T. Xiang, and S. Huang. Melr: Meta-learning via modeling episode-level relationships for few-shot learning. In International Conference on Learning Representations, 2020.
[3] J. Kim, H. Kim, and G. Kim. Model-agnostic boundary-adversarial sampling for test-time generalization in few-shot learning. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16, pages 599–617. Springer, 2020.
[4] A. Li, W. Huang, X. Lan, J. Feng, Z. Li, and L. Wang. Boosting few-shot learning with adaptive margin loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12576–12584, 2020.
[5] B. Liu, Y. Cao, Y. Lin, Q. Li, Z. Zhang, M. Long, and H. Hu. Negative margin matters: Understanding margin in few-shot classification. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IV 16, pages 438–455. Springer, 2020.
[6] X. Luo, L. Wei, L. Wen, J. Yang, L. Xie, Z. Xu, and Q. Tian. Rectifying the shortcut learning of background for few-shot learning. Advances in Neural Information Processing Systems, 34:13073–13085, 2021.
[7] C. Zhang, Y. Cai, G. Lin, and C. Shen. Deepemd: Differentiable earth mover's distance for few-shot learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(5):5632–5648, 2022.
[8] M. Zhang, J. Zhang, Z. Lu, T. Xiang, M. Ding, and S. Huang. Iept: Instance-level and episode-level pretext tasks for few-shot learning. In International Conference on Learning Representations, 2020.
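The rebuttals above repeatedly describe the interface of the learning-rate module: an LSTM that consumes the accumulated momentum and the current gradient, so that its hidden state carries the weight-update history. A minimal PyTorch sketch of such a module follows; the per-weight batching, layer sizes, and unconstrained (possibly negative) output are assumptions consistent with the discussion, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LRPredictor(nn.Module):
    """LSTM-based per-weight learning-rate predictor (illustrative only)."""

    def __init__(self, hidden_size=30):
        super().__init__()
        # 2 input features per weight: accumulated momentum, current gradient
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, num_layers=2)
        self.head = nn.Linear(hidden_size, 1)  # unconstrained lr (sign-free)

    def forward(self, momentum, grad, state=None):
        # momentum, grad: flattened tensors of shape (n_weights,); each weight
        # is treated as a batch element, and `state` carries the update
        # history across inner-loop steps.
        x = torch.stack([momentum, grad], dim=-1).unsqueeze(0)  # (1, n, 2)
        out, state = self.lstm(x, state)
        lr = self.head(out).squeeze(0).squeeze(-1)              # (n_weights,)
        return lr, state
```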
How to Fine-tune the Model: Unified Model Shift and Model Bias Policy Optimization
Accept (poster)
Summary: This paper investigates the problem of model shift in model-based reinforcement learning (RL). The authors propose a two-stage model learning method called USB-PO (Unified model Shift and model Bias Policy Optimization). The first stage is the same as in previous work, updating the model from the previous one using the model replay buffer. The second stage, based on the previous model and the replay buffer, aims to simultaneously reduce model shift and model bias after the first-stage model update. The paper also provides a theoretical analysis of the proposed method, and the effectiveness of the method is demonstrated in experiments conducted on MuJoCo.
Strengths:
1. As far as I know, the method proposed in this paper is novel.
2. The authors provide theoretical analysis and motivation for their method.
3. The ablation study in the paper is rather comprehensive.
Weaknesses:
1. What is the basis for lines 133-135? I don't think such a conclusion can be derived from Definition 1 and Theorem 1.
2. What is the relationship between Theorems 2 and 3? Both seem to prove different bounds for the same term; what is the significance of this? Judging from the appendix, there appears to be a typo in Theorem 3.
3. In the experiment section, the performance curve of the MBRL method CMLO drops much more than what is shown in the original paper. I would like to know the reason for this.
4. The baselines STEVE and SLBO are too outdated, and they can no longer be considered state-of-the-art (SOTA) baselines for MBRL.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
1. The two-stage model learning will inevitably lead to more time consumption. Can you specify how much the time consumption has increased?
2. Please compare with some recent MBRL methods instead of STEVE and SLBO, such as PDML [1] and ALM [2].
Reference:
[1] Wang et al. "Live in the Moment: Learning Dynamics Model Adapted to Evolving Policy," in ICML 2023, https://arxiv.org/abs/2207.12141.
[2] Ghugare et al. "Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective," in ICLR 2023, https://arxiv.org/abs/2209.08466.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations:
1. The time efficiency needs to be reported.
2. More baseline comparisons are required.
3. It would be desirable to compare with more benchmarks, such as DMC and Metaworld, not just MuJoCo.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the recognition and valuable comments! Our specific responses to the questions raised by the reviewer are as follows:
___
## 1. The basis for lines 133-135
According to the previous work MBPO, the return discrepancy scheme refers to the difference between the return under the model and that in the real environment. Under our notation, the return discrepancy is denoted as $V^{\pi_i|M_i}-V^{\pi_i}_{M_i}$. Following Theorem 1, we make some revisions to lines 133-135 to make this clear. **Please refer to point 2 in the global response.**
___
## 2. The relationship between Theorems 2 and 3
Sorry, we do have a typo in Theorem 3. We have corrected the formula in Theorem 3 (Eq. (7)). **Please refer to point 3 in the global response.**
___
## 3. The performance curve of the MBRL method CMLO drops much more than what is shown in the original paper.
We ran many random seeds for CMLO when verifying its performance, and found that in very occasional cases CMLO would reach extremely high performance (e.g., 8088 on Ant, 7553 on Walker2d). We do not doubt the performance of CMLO; we simply believe that the authors selected such special cases in their plots under limited runs. **For a fair comparison, we exclude these special cases from our plots.** Moreover, the HalfCheetah and Ant environments are only plotted up to 250K steps in our experiment, which may seem to show lower performance but is actually comparable to CMLO, **as we apply a stronger smoothing operation**.
___
## 4. More baseline comparisons
We are sorry for being unable to run PDML, because the repo released by the authors is missing a key source-code file, **so we contacted the authors to obtain the results reported in their original paper**. **Please refer to Figure 2 in the global response pdf.** USB-PO still exhibits SOTA asymptotic performance and sample efficiency, although not as good as PDML in the Humanoid environment. We are not claiming to beat all SOTA algorithms with sophisticated designs, as PDML needs extra computational resources to compute the policy shift ($\xi_{\pi_i}$) and to memorize the policy sequence. Although the growth in time reported in PDML is small compared to MBPO, **our method USB-PO has the potential to shorten the time consumption; please refer to the time cost in Appendix C.4**. As for ALM, the training of its classifier may introduce instability, which impairs the performance.
___
## 5. Can you specify how much the time consumption has increased? / Time efficiency needs to be reported.
According to Section 5.2, although USB-PO is a two-stage model training process, **continuing to use the fine-tuned model for the next iteration has the potential to accelerate model convergence** (the fine-tuning process reduces model bias) and thus possibly reduce the training time. **Please refer to the time cost we report compared to MBPO in Appendix C.4**. Training time is reduced on Humanoid, HalfCheetah, and Walker2d. In addition, we make some revisions to describe the computational cost. **Please refer to point 7 in the global response.**
___
## 6. It would be desirable to compare with more benchmarks, such as DMC and Metaworld, not just MuJoCo.
Sorry, we cannot provide experimental results on DMC or Metaworld within the limited time, for the following three reasons:
· Previous methods including MBPO, M2AC, P2P, BMPO, CMLO, and so on have not been evaluated on DMC or Metaworld, so DMC and Metaworld are unfamiliar to us.
· After investigating DMC and Metaworld, we made an attempt and found that they are quite different from MuJoCo, so both the baselines and USB-PO would need careful parameter tuning.
· MBRL algorithms generally consume a long time to train once, e.g., almost 3 days for MBPO and 6 days for P2P on HalfCheetah.
Though we don't give specific results here, we set this as future work.
___
We hope the reviewer will consider increasing the score if we have addressed the raised issues. Of course, if there are more questions, we are willing to discuss them further with the reviewer.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. I still believe that the reasons mentioned by the authors do not justify skipping experiments in other environments, so I will maintain my score.
---
Reply to Comment 1.1.1: Title: Discussion reply to the reviewer eAwQ
Comment: Dear reviewer,
We sincerely thank you for your reply! We will take experiments on other environments (DMC/Metaworld) as future work. However, we would like to emphasize that the aim of this paper is to lay down the theoretical foundations of the extra design of phase 2. **Our experiments are designed to validate the superiority of unifying model shift and model bias and to show its working mechanism. The results shown in the experiments are sufficient to support our theory.** In addition, during the rebuttal period, we added ALM and PDML and analyzed the time cost to further strengthen our method. While we intend to evaluate the proposed algorithm on more extensive deep RL benchmarks (DMC/Metaworld) in the future, these experiments are beyond the scope of the current work. We sincerely hope you will value our work and reconsider the score.
Best wishes!
The authors.
Summary: This paper proposes a new MBRL algorithm that adaptively adjusts the model update by simultaneously considering the model shift (distance to the previous update) and the model bias (distance to the true environment model), which avoids model overfitting and shows better empirical performance compared to considering only either of them.
Strengths:
- This paper clearly discusses the drawbacks of previous MBRL approaches, which either consider model bias only, allowing possibly excessive model updates, or consider model shift only, which might be subject to overfitting to the previous model. A natural and neat idea is proposed by unifying the two aspects to further fine-tune the model, supported by strong empirical results.
- The experiments section is thorough. The ablation study is interesting, showing that the performance of USB-PO is better than removing either the model bias or the model shift term. Section 5.2 is also nice, which empirically verifies the assumptions in the method development by removing the consideration of $\Delta$, as well as the visualization of the optimization objective value to see how the update happens. These experiments facilitate understanding of the method.
Weaknesses:
- The biggest weakness of the paper is that it is very poorly written; the idea is simple, but the way the paper is organized and written makes it very difficult to convey the main message. For example, Section 4.1 is too general and it is hard to get what it is trying to say. Algorithm 1 does not convey any important message; for example, Line 115 says the optimization objective is the MLE loss and that phase 2 is for fine-tuning, which is too high-level and leaves readers confused.
- This might be correlated with point 1: the algorithm, as presented in the Algorithm boxes, is not super clear to me. It seems Equation 3 is used to learn $M_2$, but Algorithm 2 Line 2 says we get $M_2$ from the $M_1$ update, which is confusing. Is Line 3 only for fine-tuning $M_2$ given the previously learned $M_1$, $M_2$? These are the main points of the paper and it is important to make things clear and precise.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors:
- For Line 195, could the $M_1$ ensembles also be used in the bias term estimate?
- For the experiment in Figure 6, is it possible to try more learning rates for MBPO to showcase the advantage?
- Another ablation study would be to vary the number of ensembles for the models, and to comment on the computational complexity.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for identifying our technical contributions. The valuable comments help us improve our submission! The reviewer's primary concern seems to be how the paper is written and organized. Our specific responses to the concerns raised by the reviewer are as follows: ___ ## 1. Section 4.1 is too general and it is hard to get what it is trying to say. Algorithm 1 does not convey any important message. / The algorithm in Algorithm 2 is not super clear. We notice a strong correlation between the two points mentioned by the reviewer in the Weaknesses, so **we provide a unified answer here**. After carefully checking the reviewer's comments and rereading the statements in our paper, we find that Section 4.1 is indeed not very clear in its description and fails to highlight the main points. Also, the notation $p_M$ in Algorithm 1 and the notations $p_{M_1}$ and $p_{M_2}$ in Section 4.1 do cause some confusion. Therefore, we make revisions to clarify Section 4.1. Furthermore, we integrate Algorithm 2 into Algorithm 1 to unify the notation, removing the confusing notation $p_M$ and removing the original Algorithm 2 from the main paper. **We invite the reviewer to refer to points 4 and 5 in the global response. Moreover, we add a schematic diagram in the global response pdf to further describe our method. Please refer to point 8 in the global response.** Here we provide additional clarifications to address the reviewer's confusion. Algorithm 1 is meant to emphasize the difference from traditional MBRL algorithms, namely the extra fine-tuning process (phase 2). $M_1$ denotes the model backed up before training by MLE, while $M_2$ denotes the learned model after training by MLE. **Due to the possible performance drop, we devise phase 2 to further fine-tune $M_2$. As for $M_1$, it is used for computing the model-shift term in Eq.(3).** ___ ## 2. Could the $M_1$ ensembles also be used in the bias term estimate? Model bias refers to the error between the learned model and the real environment. Following previous work, the real environment is usually estimated using all the models not selected from the ensemble. Since **the model bias of $p_{M_2}$ is to be estimated in Eq.(3)**, the calculation only involves the $M_2$ ensemble rather than the $M_1$ ensemble. However, **it may be possible that using the** $M_1$ **ensemble could lead to more accurate estimates, but this is not our key point and requires further research to validate.** ___ ## 3. Figure 6: more learning rates for MBPO to showcase the advantage We conduct the ablation experiment on more learning rates, covering 7e-4, 5e-4, 3e-4, 1e-4, and 1e-3 (the original MBPO), to compare with USB-PO, further strengthening the advantage of USB-PO. **Please refer to Figure 4 in the global response pdf.** ___ ## 4. Another ablation study: varying the number of ensembles for the models and commenting on the computational complexity We conduct the ablation experiment on more ensemble numbers, covering 3, 5, and 7. We find that **as the number of ensemble models goes up, the performance becomes higher and more stable, but at a greater time cost**. To maintain the balance between performance and time, we finally set the value of this parameter to 7, which is recommended by the original MBPO repo. **We invite the reviewer to refer to Figure 5 and Table 1 in the global response pdf.** ___ If we have addressed the reviewer's concerns, we hope the reviewer will consider raising the score.
If the reviewer has any additional questions or comments, we would be happy to discuss them further. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I appreciate the authors' additional experiments on the learning rates and the number of ensemble models; they are indeed helpful. However, I am still concerned about the clarity, presentation, and writing of the paper. I personally do think the current version is not ready to be published at a conference like NeurIPS. I recommend that the authors invest some effort into revising the paper. This could also potentially enhance the paper's impact. --- Reply to Comment 1.1.1: Title: Reply to the reviewer VRuX Comment: Dear reviewer, We sincerely appreciate your valuable feedback and have taken it into careful consideration. We first would like to recap our writing and presentation as follows. · In Section 4.1, we provide a comprehensive overview. We highlight that the novel aspect of our methodology, as discussed in this paper, centers around the design of phase 2. This step aims to unify model shift and model bias, leading to a performance improvement guarantee. · Following this, in Section 4.2, we provide the theoretical proof of why we can obtain a performance improvement guarantee, together with the derivation of the optimization objective of phase 2 (Eq. (3)). · Lastly, Section 4.3 is dedicated to the practical implementation of the algorithm. We would like to emphasize that we have diligently acted upon your suggestions for Section 4.1. We have integrated Algorithm 2 into Algorithm 1, removed Algorithm 2 from the main paper, and updated the notation for $p_M, p_{M_1}, p_{M_2}$ to prevent misunderstanding, as noted in points 4 and 5 of the global response. This version differs substantially from our initial submission. Regarding any additional concerns, we kindly ask you to specify them so that we may continue our revision process. Your continued guidance is immensely appreciated. Best wishes! The authors.
Summary: The paper considers model-based RL (MBRL), focusing on the question of fine-tuning the model while learning the policy (the policy optimization). While prior works treat these two aspects somewhat separately with tuning thresholds, the authors derive a single cost function that incorporates both. The paper presents a lengthy derivation of the cost function, shows through inequalities and assumptions that it is a well-grounded approach, and then validates the approach with numerical experiments comparing to MBRL methods as well as model-free (MFRL) methods. Strengths: The paper is very clearly written, with good motivation and placement of the results in the state of the art. The idea of combining the model refinement and policy learning is perhaps obvious; what is not obvious is how this can be carried out. The paper does this theoretically, and then carries this through to a useful algorithm. The derivations are clear and the assumptions needed to bring this to a numerical method are laid out. Theorem 2 is nice because it bounds the expected return difference between two models and their respective policies. This should be useful in other learning contexts. The meta-algorithm (Algorithm 1) puts the overall ideas across in a way that doesn’t require the reader to understand the subtleties of the proofs. Weaknesses: The reviewer doesn't see any significant weaknesses in the paper. Some clarifying questions are listed below. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Perhaps it is worth saying more about the assumption after eqn (9), regarding the policy shift being small wrt the model bias? When would this not be true? Deriving eqn (10) relies on a Gaussian assumption. Could you say why this is generally valid, or when it might not be so valid? Similarly, in eqn (11) the Wasserstein distance is easily calculated for the Gaussian case. So here the Gaussian assumption is carrying through? After the Algorithm 2 statement, “following [18]”: What is the elite set of B models? And you choose one of these with uniform probability? How large is B typically? The “pull-back” interpretation is interesting. Is that what you call the no-update case? In the appendix, Table 2, top 3 cases, the model-free approach works well, albeit with a lot of steps. Can you compare the overall complexity of MFRL and USB-PO for these cases, accounting for the cost per step of each? Also, is there a hybrid MFRL-MBRL approach? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: There are no negative societal issues. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the high recognition of our work! Our specific responses to the questions raised by the reviewer are as follows: ___ ## 1. Policy shift wrt the model bias · According to the experiment in MBPO, when the amount of data provided by the policy for training the model is very small, the policy shift cannot be ignored. · In our view, as the model gradually converges, the model bias drops to a very small amount, which may make the policy shift non-negligible. Further research on this topic may improve performance and save computational resources. ___ ## 2. The details of deriving Eq.(10) Deriving Eq.(10) does not require the Gaussian distribution assumption. Please refer to **lines 167-170**: the derivation of Eq.(10) **only needs the assumption that $V^\pi_M$ is** $L_v$**-Lipschitz**; please refer to **Appendix A Lemma 3** for the corresponding details. Though Lipschitz continuity is difficult to satisfy in practice, **this assumption occurs frequently in previous theoretical work to obtain better convergence properties**. Based on the above statement, deriving Eq.(3) also does not require the Gaussian distribution assumption, except that a closed-form expression can be further obtained with the help of the Gaussian distribution (a small numerical sketch of this closed form is given after this thread). **To prevent misunderstanding, we make revisions on lines 171-173. Please refer to point 6 in the global response.** ___ ## 3. Validity of the Gaussian distribution assumption In MBRL, it is standard to assume that the model outputs a Gaussian distribution. Since PETS [6] demonstrated that **Gaussian ensemble models can prevent model overfitting and can better capture model uncertainty** (please refer to the details in **lines 180-186**), subsequent methods (MBPO, P2P, CMLO, M2AC, ...) have all used this approach. However, if the samples generated by the policy for training the model are internally unbalanced, the Gaussian assumption has obvious weaknesses. [6] Kurtland Chua et al. “Deep reinforcement learning in a handful of trials using probabilistic dynamics models”. In: Advances in neural information processing systems 31 (2018). ___ ## 4. The details of the elite set The elite set first appeared as a trick in the source code published by the authors of MBPO. Each time the model is trained, a portion of the samples is set aside for holdout validation. The models are then ranked by validation error, and the $K$ models with the lowest error form the elite set ($K<B$). **When generating rollouts, randomly selecting a model from the elite set rather than from the original ensemble gives better performance**. We set the same parameters as MBPO, namely an ensemble size of 7 and an elite-set size of 5. ___ ## 5. The pull-back interpretation Thanks again for your confirmation of the “pull-back” explanation! However, this is not the no-update case. The no-update case means the fine-tuning optimization objective (Eq.(3)) basically does not change the updates of $M_2$, so the updates generated by MLE can be considered appropriate for Eq.(3). When MLE generates updates with a large model shift that may impair the performance, the algorithm can fine-tune (**pull**) the model updates to regain the performance improvement (**back**). We call this process of securing the performance improvement guarantee “pull-back”.
## 6. Overall complexity Sorry, we cannot analyze the overall complexity theoretically within the limited time; we therefore report the time cost below and leave the theoretical analysis as future work.

| Env | USB-PO | SAC |
| :-----:|:----: | :----: |
| HalfCheetah | 2.29 days (250K) | 12.53 h (3M) |
| Humanoid | 1.51 days (300K) | 12.83 h (3M) |
| Walker2d | 1.65 days (300K) | 12.55 h (3M) |

As the above table shows, though SAC needs many more steps to reach its asymptotic performance, it costs much less wall-clock time than USB-PO, because USB-PO spends much more time training the model at each step. **However, in scenarios where environment samples are costly or risky, we have to sacrifice time complexity (MFRL) to pursue sample complexity (MBRL)**. As for hybrid algorithms, as far as we know, when the model is used for generating rollouts, the approach is generally recognized as a model-based algorithm. ___ If the reviewer has more questions, please let us know and we'll be happy to answer them! --- Rebuttal Comment 1.1: Title: Reply Comment: The reviewer appreciates the authors' replies to the questions posed. Also, the reviewer appreciates the specific changes put forward, including combining the Algorithm 1 and Algorithm 2 statements, and incorporating the additional related work. I am satisfied with my review rating. --- Reply to Comment 1.1.1: Title: Thank you for your affirmative reply! Comment: Dear reviewer, We sincerely thank you for your high recognition of our work! Best wishes! The authors.
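For reference, the closed-form second-order Wasserstein distance mentioned in points 2–3 of the rebuttal above (available under the diagonal-Gaussian model assumption) can be sketched numerically as follows. This is a minimal illustration with toy inputs, not the authors' implementation:

```python
import numpy as np

def w2_squared_diag_gaussians(mu1, std1, mu2, std2):
    """Squared 2-Wasserstein distance between N(mu1, diag(std1^2)) and
    N(mu2, diag(std2^2)). For diagonal (hence commuting) covariances the
    Bures term reduces to the squared difference of standard deviations:
    W2^2 = ||mu1 - mu2||^2 + ||std1 - std2||^2.
    """
    mu1, std1, mu2, std2 = map(np.asarray, (mu1, std1, mu2, std2))
    return np.sum((mu1 - mu2) ** 2) + np.sum((std1 - std2) ** 2)

# Toy usage: predictive distributions of two dynamics models at one (s, a).
print(w2_squared_diag_gaussians([0.1, -0.3], [0.2, 0.5],
                                [0.0, -0.2], [0.25, 0.4]))
```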
Summary: This paper studies the learning of the dynamics model in model-based reinforcement learning. The authors propose a novel method, USB-PO, that has provable properties. Strengths: 1. The paper studies an important problem in model-based RL, namely balancing model shift and model bias during model updates without heavy dependence on a threshold or a loss of adaptability. 2. The algorithm is straightforward and the theoretical results also match expectations. 3. The paper is clearly written and easy to follow. Weaknesses: 1. In the pseudocode of Algorithm 1, what is the difference between $M$ and $M_2$? Are they supposed to be the same? 2. Can the authors elaborate on the MBRL algorithms that have a heavy dependence on the threshold? 3. Can phase 1 and phase 2 be integrated into a single joint optimization objective? It would be easier to implement if the algorithm were a trust-region-style algorithm that maintains a single model and constrains the updated model against the last-iteration model. 4. There are also previous MBRL works [1, 2, 3] that have a dual update procedure, in a very similar way to how the proposed method regulates the model updates. I recommend the authors also discuss these related works in a later version of the manuscript. [1] Zhang et al. Conservative Dual Policy Optimization for Efficient Model-Based Reinforcement Learning.\ [2] Sun et al. Dual Policy Iteration.\ [3] Levine et al. Learning neural network policies with guided policy search under unknown dynamics. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weakness section above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See the weakness section above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's insightful comments on our paper. The relevant literature mentioned in your comments helps us further polish the paper! Our specific responses to the questions raised by the reviewer are as follows. ___ ## 1. The difference between $M$ and $M_2$ $M$ and $M_2$ are different. The purpose of Algorithm 1 is to convey the overall ideas without requiring the reader to understand the subsequent details. Thus, in Algorithm 1 we do not distinguish the model backed up before training with MLE ($M_1$) from the model after training with MLE ($M_2$), but uniformly use $M$ to denote both. Later, to present our method theoretically, we denote $M_1$ and $M_2$ separately. Precisely, $M_2$ **denotes the model that needs to be fine-tuned after the training process by MLE in each iteration**. **To prevent misunderstanding, we revise the Algorithm pseudo-code and Section 4.1. Please refer to points 4 and 5 in the global response. Also, we add a schematic diagram to better illustrate USB-PO. Please refer to point 8 in the global response.** ___ ## 2. Elaborating on the algorithms that have a heavy dependence on the threshold CMLO sets a threshold for the model shift to satisfy the monotonic performance improvement guarantee. We argue that this threshold plays a decisive role in the whole algorithm. “Theorem 4.6 (Refined Bound with Constraint)” in CMLO states
$$ V^{\pi_2|M_2}-V^{\pi_1|M_1} \ge \kappa \left(\mathbb{E}_{s,a\sim d^{\pi_1}}D_{TV}[P(\cdot|s,a)\,\|\,P_{M_1}(\cdot|s,a)] - \mathbb{E}_{s,a\sim d^{\pi_2}}D_{TV}[P(\cdot|s,a)\,\|\,P_{M_2}(\cdot|s,a)]\right) - \frac{\gamma}{1-\gamma}L(2\sigma_{M_1,M_2}) - \epsilon_{opt} $$
where $\sigma_{M_1,M_2}$ denotes the threshold constraining the model shift. Obviously, **to meet the monotonic performance improvement guarantee, the right-hand side of the above inequality must be greater than 0**. Thus, this threshold needs to be carefully designed, as we stated in **lines 33-37**. To further verify our description, we **devise an experiment** that sets three different thresholds for CMLO on the Walker2d environment (1.0 as a lower threshold, 3.0 as recommended by the paper, and 5.0 as a higher threshold). As Figure 3 in the global response pdf shows, the performance corresponding to the other two thresholds (1.0 and 5.0) is severely affected. ___ ## 3. Trust-region-like updates Nice point! We have thought about this idea. **However, this would introduce a new problem of determining the weights to balance the MLE updates (model bias) and the trust-region constraint (model shift), falling into the same situation as CMLO**. Though our method seems a little more complicated, it is **straightforward and effective** under our theoretical framework. Please see **Appendix C.4**: compared to MBPO, USB-PO does not introduce excessive computational cost. However, we believe that this idea may work to further optimize USB-PO with some technical support. ___ ## 4. Related works Thank you again for pointing out our omission regarding the relevant literature! Following your advice, we read the relevant papers covering **CPI [21], DPI [41], GPS with unknown dynamics [26] and CDPO [51]**, and we update our related work as follows (changes are bolded). Performance improvement guarantees are a core concern in both the MFRL and MBRL theoretical avenues.
In MFRL, methods such as TRPO [39] **and CPI [21]** choose to optimize the performance difference bound, whilst most previous work in MBRL [29, 19, 50, 36, 23] chooses to optimize the difference between the expected return under the model and that under the real environment, which is termed the return discrepancy. However, compared to the performance difference bound, the return discrepancy ignores the model shift between two consecutive iterations under the MBRL setting, which can lead to performance deterioration due to excessive model updates. Although some recent methods have also employed the performance difference bound to construct theoretical proofs, they still suffer from certain limitations. OPC [11] designs an algorithm to optimize the on-policy model error, but it is similar to the return discrepancy in nature. **DPI [41] uses dual updates to improve sample efficiency but restricts policy updates within a trust region, thus inhibiting exploration.** CMLO [20] relies on a fixed threshold to constrain the impact of the model shift, resulting in a heavy dependence on the threshold and a lack of adaptability during the training process. Hence, we try to unify model shift and model bias into a novel optimization problem, adaptively fine-tuning the model updates to obtain a performance improvement guarantee. **Still, some prior works [7, 34, 51] choose to consider regret bounds, among which [51] also reduces the impact of the model changing dramatically between successive iterations. Instead of unifying model shift and model bias, they realize dual optimization by maximizing the expectation of the model value, rather than that of a single model, as a sub-process. Different from [51, 41, 26], which use dual optimization to train the policy, we devise an extra phase to fine-tune the model.** [21] Sham Kakade et al. “Approximately optimal approximate reinforcement learning”. In: Proceedings of the Nineteenth International Conference on Machine Learning. 2002, pp. 267–274. [41] Wen Sun et al. “Dual policy iteration”. In: Advances in Neural Information Processing Systems 31 (2018). [26] Sergey Levine et al. “Learning neural network policies with guided policy search under unknown dynamics”. In: Advances in Neural Information Processing Systems 27 (2014). [51] Shenao Zhang. “Conservative Dual Policy Optimization for Efficient Model-Based Reinforcement Learning”. In: Advances in Neural Information Processing Systems 35 (2022), pp. 25450–25463. ___ We hope the reviewer can consider raising the score if we have resolved the reviewer's concerns, and we would be happy to have further discussion. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' effort during the rebuttal. Most of my concerns regarding the notations and novelty, especially the connections with previous works, are addressed. I have therefore raised my score. --- Reply to Comment 1.1.1: Title: Thank you for the inspiring reply! Comment: Dear reviewer, Thank you for helping us improve the paper and for updating the score! We really appreciate your valuable comments! Best wishes! The authors.
Rebuttal 1: Rebuttal: We are very grateful to all the reviewers for the valuable feedback, which helps us improve the paper further! We revise the paper and add the suggested experiments according to the reviewers' comments. The detailed revisions are as follows. The additional figures and a table are in the pdf file. ___ ## 1. Revisions of related work We revise lines 75-87: Due to the character limit, we put the revision of the related work in the reply to reviewer C4Te. ___ ## 2. Revisions of the statement of Theorem 1 We revise lines 133-135 as: Obviously, compared to directly optimizing the return discrepancy of each iteration **[19]**, the performance difference bound chooses to optimize the return discrepancy of two adjacent iterations, **namely** $V^{\pi_2|M_2}-V^{\pi_2}_{M_2}$ **and** $V^{\pi_1|M_1}-V^{\pi_1}_{M_1}$ **respectively**, and the expected return variation between these two iterations, **namely** $V^{\pi_2}_{M_2} - V^{\pi_1}_{M_1}$, demonstrating better rigorousness. [19] Michael Janner et al. “When to trust your model: Model-based policy optimization”. In: Advances in Neural Information Processing Systems 32 (2019). ___ ## 3. Revisions of Eq.(7) in Theorem 3 We fix the typo as below: $V^{\pi_2|M_2} - V^{\pi_1|M_1} \ge \frac{2R_{max}\gamma}{(1-\gamma)^2}(\epsilon^{\pi_1}_{M_1} - \epsilon^{\pi_2}_{M_2} - \epsilon^{M_2}_{M_1}) - \frac{2R_{max}\epsilon_\pi}{(1-\gamma)^2}$ ___ ## 4. Revisions of Section 4.1 We revise lines 114-122 as: The general algorithmic framework of USB-PO is depicted in Algorithm 1, **where the main difference compared to existing MBRL algorithms is the two-phase model learning process, namely phase 1 and phase 2. Phase 1 uses the traditional MLE loss to train the model, which may impair the performance through excessive model updates since it only considers the impact of model bias. To mitigate this problem, we introduce phase 2 to further fine-tune the model updates, whose optimization objective is defined in Eq.(3)**. Eq.(3) unifies the model shift term and the model bias term in second-order Wasserstein distance form, namely $W_2(p_{M_1},p_{M_2})$ and $W_2(p_{M_2},p_{M^*})$, thus achieving adaptive adjustment of their impacts during the fine-tuning process. As demonstrated in Section 4.2 and Section 5.3, this is not equivalent to the traditional methods of limiting the magnitude of model updates, but is rather beneficial for obtaining a performance improvement guarantee. ___
## 5. Revisions of the Algorithm pseudo-code To avoid confusion between the notation $p_M$ and the notations $p_{M_1},p_{M_2}$, we integrate Algorithm 2 into Algorithm 1 and remove Algorithm 2 from the main paper (a minimal Python sketch of this loop is given after this response):

| *Algorithm 1* Meta-Algorithm of the USB-PO Framework |
| :----|
| 1: Initialize the policy $\pi$ and the learned model |
| 2: Initialize the environment replay buffer $\mathcal{D}$ and the model replay buffer $\mathcal{D}_M$ |
| 3: **for** each epoch **do** |
| 4: $\qquad$ Use $\pi$ to interact with the real environment: $\mathcal{D}\leftarrow \mathcal{D}\cup\{(s,a,r,s')\}$ |
| 5: $\qquad$ Back up the current learned model for future use and denote this backed-up model as $p_{M_1}$ |
| 6: $\qquad$ Phase 1: use $\mathcal{D}$ to train the learned model with the supervision of MLE and denote this updated model as $p_{M_2}$ |
| 7: $\qquad$ Phase 2: use Eq.(3) as the optimization objective to further fine-tune $p_{M_2}$ |
| 8: $\qquad$ Use $p_{M_2}$ to generate the imaginary rollouts: $\mathcal{D}_M\leftarrow \mathcal{D}_M\cup\{(s_M,a_M,r_M,s_M')\}$ |
| 9: $\qquad$ Use $\mathcal{D}\cup\mathcal{D}_M$ to train the policy $\pi$ |
| 10: **end for** |

___ ## 6. Revisions of the statement of Eq.(3) To prevent misunderstanding, we revise lines 171-173 as: **Hence, Eq.(3) can be applied as the optimization objective to obtain a performance improvement guarantee.** ___ ## 7. Revisions of the time cost We add the statement of the time cost after line 225: **Computational Cost. We report our computational cost compared to MBPO in Appendix C.4. Although USB-PO is a two-phase model training process, continuing to use the fine-tuned model in the next iteration has the potential to accelerate model convergence and thus possibly reduce training time.** ___ ## 8. An additional schematic diagram To better illustrate USB-PO, we add a schematic diagram. Please see Figure 1 in the global response pdf. We will put this into the appendix later. ___ The above is a summary of the revisions. For questions raised by each reviewer, please refer to the rebuttal to the specific reviewer. Pdf: /pdf/409a80886d1a02f55fb736c6f83787229f9d65df.pdf
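As a reading aid for the meta-algorithm above, the following minimal Python sketch mirrors the ten lines of the revised Algorithm 1. All helper callables (`collect`, `train_mle`, `finetune_eq3`, `rollout`, `update_policy`) are hypothetical placeholders to be supplied by the user; this is a structural sketch, not the authors' implementation.

```python
import copy

def usb_po_loop(collect, train_mle, finetune_eq3, rollout, update_policy,
                model, policy, num_epochs):
    """Meta-loop mirroring Algorithm 1; comments reference its line numbers."""
    env_buf, model_buf = [], []                       # lines 1-2: buffers
    for _ in range(num_epochs):                       # line 3
        env_buf += collect(policy)                    # line 4: real transitions
        m1 = copy.deepcopy(model)                     # line 5: backup -> p_{M_1}
        train_mle(model, env_buf)                     # line 6: phase 1 -> p_{M_2}
        finetune_eq3(model, m1, env_buf)              # line 7: phase 2, Eq.(3):
        # jointly weigh W2(p_{M_1}, p_{M_2}) (shift) and W2(p_{M_2}, p_{M*}) (bias)
        model_buf += rollout(model, policy, env_buf)  # line 8: imaginary rollouts
        update_policy(policy, env_buf + model_buf)    # line 9: train pi
    return policy
```

The only structural difference from a standard MBPO-style loop is the backup at line 5 and the extra fine-tuning pass at line 7.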
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Unsupervised Image Denoising with Score Function
Accept (poster)
Summary: The paper introduces an extension of Noise2Score for unsupervised image denoising and demonstrates its efficiency on several noise models, including non-exponential-family distributions. Strengths: The extension of Noise2Score makes it possible to deal with different kinds of noise, as opposed to prior works that only focus on exponential-family distributions. The paper is clearly written, with a fair amount of examples and experiments. Weaknesses: The contribution is relatively incremental compared to Noise2Score, and is based on an approximation (equation (7) replaces equation (6) in the resolution) that is not really discussed. Even though Theorem 3.1 gives some upper bound on the distance between the estimate and the conditional expectation, it would have been interesting to get insights on the constant $C$ and on the covariance of $x$ given $y$ in practice, in particular with the complex noise models the paper aims to deal with. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * If possible, could the authors give numerical details on the approximation error between equations (6) and (7) in practical cases that differ from exponential-family distributions, and elaborate on the hypotheses (f invertible, inverse function Lipschitz continuous, and bounded Hessian)? * Could the authors exhibit ill-posed cases for which this approximation wouldn't be accurate, and show the implications for the denoising algorithm? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: See remarks above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and constructive comments. We answer the raised questions below. We hope that our answers clarify the doubts and address the concerns of the reviewer. Q1: The contribution is relatively incremental compared to Noise2Score, and is based on an approximation (equation (7) replaces equation (6) in the resolution) that is not really discussed. A1: Our approach serves as an important extension of Noise2Score: it enables image denoising for multiple noise models (please refer to Table 1). Equation 7 is not an approximation of Equation 6; rather, it is derived based on the structure of Equation 6, akin to how Equation 5 is derived from Equation 4. We apologize if certain notations in the paper are causing confusion. To clarify, the "x" in Equations 5 and 7 should be replaced with $\hat{x}$. Equations 5 and 7 do not pertain to the original clean x and noisy y; rather, they concern an unknown denoised result, $\hat{x}$, with y representing the given and known noisy image. Consequently, both Noise2Score and our method aim to solve equations related to $\hat{x}$. However, Noise2Score arrives at the solving equation via Tweedie's formula (the case of exponential-family distributions), while our method devises the solving equation based on Proposition 3.1 (more general distributions). In the majority of cases, including most exponential-family distributions, the solution of Equation 7 does not yield E[x|y] (in other words, the approximation error between E[x|y] and the denoised result also exists in Noise2Score). As such, we employ Theorem 3.1 to gauge the closeness between the solution and E[x|y]. Q2: Even though Theorem 3.1 gives some upper bound on the distance between the estimate and the conditional expectation, it would have been interesting to get insights on the constant and on the covariance of x given y in practice. A2: $C$ is related to the smoothness of the score function, while the covariance of x given y is related to the noise level. A smoother score function and lower noise intensity result in a smaller upper bound; conversely, a less smooth score function and higher noise intensity lead to a larger upper bound. Q3: If possible, could the authors give numerical details on the approximation error between equations (6) and (7) in practical cases that differ from exponential-family distributions, and elaborate on the hypotheses (f invertible, inverse function Lipschitz continuous, and bounded Hessian)? A3: For an explanation regarding the approximation error, please refer to the response to the first question. Concerning the hypotheses, please consult Proof 1.3 provided in the supplementary material. We list the explanations for them as follows: * f invertible: Given $\boldsymbol{y}$, $\boldsymbol{f}(\boldsymbol{x}, \boldsymbol{y})$ is invertible with respect to $\boldsymbol{x}$. * inverse function Lipschitz continuous: Denote the inverse function of $\boldsymbol{f}(\boldsymbol{x}, \boldsymbol{y})$ as $\boldsymbol{f}_{\boldsymbol{y}}^{-1}$, where $\boldsymbol{y}$ is given, so the variable of the inverse function is $\boldsymbol{x}$. Lipschitz continuity is an assumption on the property of $\boldsymbol{f}(\boldsymbol{x}, \boldsymbol{y})$. * bounded Hessian: Denote each component of $\boldsymbol{f}(\boldsymbol{x}, \boldsymbol{y})$ as $\boldsymbol{f}_i$; for any $i$, the Hessian matrix of $\boldsymbol{f}_i$ is bounded.
--- Rebuttal Comment 1.1: Title: Thanks for the answer Comment: Thanks to the authors for their answer. After reading the other reviews and the rebuttal, I will keep my rating as is. --- Reply to Comment 1.1.1: Comment: Thank you for the comment!
Summary: This paper presents a self-supervised learning algorithm for image denoising which only requires the score of the noisy image distribution to perform denoising. The algorithm builds on ideas of a recent paper Noise2Score, extending the family of noise distributions that can be handled via an application of Fisher's identity. Strengths: - The paper presents a principled way of performing self-supervised denoising with a large family of noise distributions, which go beyond the exponential family. The previous Noise2Score only handles distributions belonging to the exponential family. - The paper demonstrates the good performance of the proposed approach on a large number of denoising experiments. Weaknesses: - The authors do not compare with SURE-based methods, which can handle mixture distributions, such as mixed Poisson Gaussian noise, e.g., see "An unbiased risk estimator for image denoising in the presence of mixed Poisson–Gaussian noise" by LeMontagner et al. - While the idea is theoretically stimulating, I don't see a lot of practical applications for the family of distributions where SURE or Noise2Score would fail. It would be good if the authors could specify some. - Some statements such as "Our approach is so powerful that" should be toned down. In particular, note that the approach cannot handle *any* noise distribution. See for example the discussion on Binomial noise in "Least squares estimation without priors or supervision" by Raphan and Simoncelli. Technical Quality: 3 good Clarity: 3 good Questions for Authors: If I'm not wrong, the main formula in Proposition 3.1 is just Fisher's identity? It would be good to name this accordingly and cite some references. In Theorem 3.1, what does "f(x,y) is invertible" means? Invertible with respect to all variables? There is also a typo in Lipschitz continuous. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations are not discussed in detail. It would be good to specify that the method strongly relies on the knowledge of the noise distribution, whereas other methods such as Neighbor2Neighbor do not. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and constructive comments. We answer the raised questions below. We hope that our answers clarify the doubts and address the concerns of the reviewer. Q1: It would be good if the authors could specify some distributions where SURE or Noise2Score would fail. A1: The noise models of many real-world data are not exponential-family distributions or mixtures of exponential-family distributions; for example, all ultrasound images in both medical and non-medical applications have speckle noise (often approximated as Rayleigh noise), and thus our method would be more effective for these applications. More importantly, a lot of real-world data has correlated noise (although the structure of the correlation can be simple); for example, the noise in medical images is often locally correlated because of the partial-volume effect due to limited resolution. When SURE or Noise2Score is applied to this type of application, it usually has to approximate the noise as uncorrelated noise. Q2: Is the main formula in Proposition 3.1 just Fisher's identity? A2: No; they are quite similar, but there are some differences. Fisher's identity: $$ \nabla_\theta \log p_\theta\left(\boldsymbol{y}\right)=\int \nabla_\theta \log p_\theta\left(\boldsymbol{x}, \boldsymbol{y}\right) p_\theta\left(\boldsymbol{x} \mid \boldsymbol{y}\right) \mathrm{d} \boldsymbol{x} $$ Prop 3.1: $$ \nabla_{\boldsymbol{y}} \log p(\boldsymbol{y})=\int p(\boldsymbol{x} \mid \boldsymbol{y}) \nabla_{\boldsymbol{y}} \log p(\boldsymbol{y} \mid \boldsymbol{x}) \mathrm{d} \boldsymbol{x} $$ Q3: In Theorem 3.1, what does "f(x,y) is invertible" mean? Invertible with respect to all variables? A3: "f(x,y) is invertible" means that, with y given, it is invertible with respect to x. This property makes it possible to find a unique solution for x in Eq. 7. --- Rebuttal Comment 1.1: Comment: Many thanks for answering my questions. After reading the rebuttal, I want to keep my original score. --- Reply to Comment 1.1.1: Comment: Thank you for the comment!
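For orientation on the relation discussed in Q1–Q3: in the special case of additive Gaussian noise, solving for $\hat{x}$ reduces to Tweedie's formula, while for a more general (invertible-in-$x$) noise model the corresponding equation can be solved numerically. The sketch below is only an illustration under that reading of the method; `score_fn` and `grad_log_lik` are hypothetical user-supplied callables, not the paper's API.

```python
from scipy.optimize import brentq

def tweedie_gaussian(y, score_fn, sigma):
    """Gaussian case y = x + n, n ~ N(0, sigma^2 I):
    posterior mean E[x|y] = y + sigma^2 * score(y)."""
    return y + sigma ** 2 * score_fn(y)

def solve_general_pixel(y, s, grad_log_lik, lo, hi):
    """General case, per pixel: find x_hat such that
    grad_log_lik(y, x_hat) == s, where s is the estimated score at y.
    Assumes the residual changes sign on [lo, hi]."""
    return brentq(lambda x: grad_log_lik(y, x) - s, lo, hi)
```

As a sanity check, plugging the Gaussian likelihood gradient $\nabla_y \log \mathcal{N}(y; x, \sigma^2) = (x-y)/\sigma^2$ into the general solver yields $x = y + \sigma^2 s$, recovering the Tweedie estimate.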
Summary: The following work attempts to solve the single-image denoising task in an unsupervised fashion. The authors propose to predict the score function and then denoise the input noisy image by solving a system of equations. Moreover, the proposed approach is a generalization of the Noise2Score method, which can work for complex non-exponential-family distributions. Extensive experimental results demonstrate comparable results, and superior results in complex cases, over existing unsupervised denoising methods. Strengths: + The proposed extension of Noise2Score to images with complex noise distributions is very appealing and substantially increases the chances of applying the method to real-world scenarios. + Extensive experimental results on additive, multiplicative, and mixed noise. Weaknesses: - The proposed method is highly sensitive to noise parameter estimation according to Figure 2. - Moreover, one needs to know the noise distribution type (Gamma noise, Poisson noise, etc.) of an arbitrary input image to estimate the particular parameters of the distribution. Those estimates are then further used to solve Eq. 7. - It would be better to apply the proposed method to real image denoising given estimated noise parameters, for example, raw image denoising (e.g., SIDD, DnD). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Suggestions: • It would be better to move some of the details into the implementation part, for example, AR-DAE, which is used for score function estimation. • Typos (e.g., line 114) • I would suggest adding some visual examples in the Supplementary (if the main paper does not have enough space). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and constructive comments. We answer the raised questions below. We hope that our answers clarify the doubts and address the concerns of the reviewer. Q1: The proposed method is highly sensitive to noise parameter estimation according to Figure 2. A1: Inaccurate estimation of the noise parameters could lead to a decline in the performance of the proposed method. Nevertheless, a strategy akin to the one employed by Noise2Score for handling unknown noise parameters can be applied. This entails selecting an auxiliary indicator that operates independently of ground truth, such as Total Variation (TV). By solving Equation 7 with different noise parameters, a range of denoised outcomes is obtained. The denoised result exhibiting the most favorable performance according to the indicator is subsequently chosen. Q2: It would be better to apply the proposed method to real image denoising given estimated noise parameters, for example, raw image denoising (e.g., SIDD, DnD). A2: We tested our approach on the SIDD dataset using the estimated parameters provided by the dataset, and produced a PSNR of 32.3. This result is better than the 32.2 achieved by the unsupervised method Nr2N and comparable to the 33.9 achieved by supervised learning, demonstrating that our method is practical for real-world denoising problems. --- Rebuttal Comment 1.1: Comment: Dear authors, thank you for your detailed response. I hope that the provided results on raw image denoising will be added to the manuscript, which would definitely make the work stronger. As for the final assessment, I'm willing to keep my original score as is. --- Reply to Comment 1.1.1: Comment: Thank you for the comment! We tested our approach on the SIDD dataset using the estimated parameters provided by the dataset, and produced a PSNR of 32.3. This result is better than the 32.2 achieved by the unsupervised method Nr2N and comparable to the 33.9 achieved by supervised learning, demonstrating that our method is practical for real-world denoising problems. We will provide more results on raw image denoising in the future.
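The indicator-based parameter selection described in A1 can be sketched as follows. This is a minimal illustration: `denoise_fn` is a hypothetical stand-in for solving Equation 7 at a given noise parameter, and naively minimizing total variation favors over-smoothed outputs, so a more careful no-reference criterion may be needed in practice.

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation of a 2-D image (sum of absolute
    horizontal and vertical finite differences)."""
    return (np.abs(np.diff(img, axis=0)).sum()
            + np.abs(np.diff(img, axis=1)).sum())

def select_noise_param(noisy, denoise_fn, candidates):
    """Denoise with each candidate noise parameter and keep the result
    whose TV is lowest, as a ground-truth-free quality proxy."""
    results = [(p, denoise_fn(noisy, p)) for p in candidates]
    return min(results, key=lambda pr: total_variation(pr[1]))
```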
Summary: This paper proposes a general unsupervised image denoising approach that solves an equation system with an estimated score function. The proposed approach can be applied to multiple noise models by changing the equation system rather than retraining the model. Strengths: 1) Compared to Noise2Score, which is limited to noise models from exponential-family distributions, the proposed method removes this limitation and generalizes to non-exponential-family distributions. 2) For different noise models, the proposed method only requires modifying the equation system to be solved, keeping the trained score-function estimator unchanged. 3) Applications to additive noise, multiplicative noise, and mixture noise are demonstrated in the manuscript and the supplementary materials. Weaknesses: 1) A qualitative comparison should be provided in the supplementary materials. 2) Noise2Score can be applied with unknown noise parameters; can the proposed method achieve this as well? Please provide discussion and experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The overall idea presented in this paper is interesting and the results are promising. Since I am not very sure about the application with unknown noise parameters, I may defer my recommendation until after the authors' response. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and constructive comments. We answer the raised questions below. We hope that our answers clarify the doubts and address the concern of the reviewer. Q1: Qualitative comparison should be provided in the supplementary materials. A1: We will offer a one-page PDF comprising a qualitative comparison. Afterward, we will incorporate it into the supplementary materials. Q2: The Noise2Score can be applied to the unknown noise parameters, and whether the proposed methods can achieve it? A2: Noise2Score tackles unknown noise parameters by adopting a strategy of selecting an additional indicator that operates without relying on ground truth, such as Total Variation (TV). This method involves solving Equation 7 for various noise parameters, resulting in diverse denoised outcomes. The denoised outcome displaying the most favorable indicator performance is subsequently selected. This approach is equally suitable for our method. --- Rebuttal 2: Comment: Reviewer PKR9: Please respond to the rebuttal and give more details and context about your decision ASAP
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and constructive comments. The qualitative comparison is in the attached PDF. Pdf: /pdf/b41145f4e7840d27aa86b17e6f61582b42fbb129.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes an unsupervised image denoising method based on the score function. Compared with Noise2Score, it drops the restriction of the noise distribution to the exponential family, and derives the solutions for additive Gaussian noise, multiplicative noise, and mixture noise. The experiments verify the effectiveness of the proposed method. Strengths: 1. The proposed method is theoretically sound. 2. It is a general unsupervised method, able to deal with many kinds of noise types. 3. The synthetic experimental results show that it is superior or comparable to recent SoTA methods. Weaknesses: 1. The proposed method is built on the core equation, Eq. (7). The detailed mathematical derivation of this equation is not clearly presented. One of the main contributions of this work is that it does not rely on the exponential-family distribution. However, according to my understanding, Eq. (7) still follows the exponential-family distribution. The writing of this part should be further improved. 2. The introduction section states that "Another advantage of our approach is that regardless of noise models, the training process of the score function neural network is identical." In my opinion, it still requires individually training the models on various noise types. 3. The main limitation of this work is its practicality. It depends on the specific parameters of the noise distribution, and thus cannot deal with real-world denoising tasks. The experiments all focus on synthetic datasets. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: As explained in the weaknesses, this work cannot handle blind or real-world denoising tasks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and constructive comments. We answer the raised questions below. We hope that our answers clarify the doubts and address the concerns of the reviewer. Q1: The detailed mathematical derivation of this equation (Eq. 7) is not clearly presented. A1: Equation 7 is derived based on the form of Equation 6, similar to how Equation 5 is derived based on the form of Equation 4. We apologize for any confusion stemming from certain notations in the paper. To clarify, the "x" in Equations 5 and 7 should be replaced with $\hat{x}$. Equations 5 and 7 are not equations about the clean x and noisy y, but rather about an unknown denoised result $\hat{x}$, where y is the given and known noisy image. Consequently, both Noise2Score and our approach aim to solve equations related to $\hat{x}$. However, Noise2Score derives its solving equation using Tweedie's formula, while our method constructs the solving equation based on Proposition 3.1. As Equation 6 remains valid for any distribution, it naturally holds for exponential-family distributions. If the distribution belongs to the exponential family, Equation 6 can be reformulated as Equation 4. Hence, under an exponential-family distribution, Equation 7 can be expressed in the structure of Equation 5. In essence, our method (solving Equation 7) serves as an important extension of Noise2Score that enables image denoising for multiple noise models. Q2: The introduction section states that "Another advantage of our approach is that regardless of noise models, the training process of the score function neural network is identical." In my opinion, it still requires individually training the models on various noise types. A2: The underlying idea here is that when the noise model is provided, the training approach remains unchanged irrespective of its type and parameters. This approach focuses on estimating the score function, and the estimation method is independent of the noise model itself. In real-world scenarios, it is necessary to estimate the noise model's type and parameters in advance. If any changes occur in these estimates, the original score function estimate remains applicable, as the noisy image remains unchanged. Consequently, retraining the model is not needed; modifying Equation 7 and solving it anew is sufficient. Q3: The main limitation of this work is its practicality. It depends on the specific parameters of the noise distribution, and thus cannot deal with real-world denoising tasks. A3: This is a limitation of our method, but it can be addressed by two approaches for real-world denoising tasks: 1) employ domain knowledge to estimate the noise parameters; 2) follow a method mentioned in Noise2Score, selecting an additional indicator that does not rely on ground truth, such as Total Variation (TV). By solving Equation 7 for various noise parameters, distinct denoised outcomes are achieved. The best denoised outcome is then selected based on the indicator. In fact, Noise2Score has this limitation as well and addresses it with the second approach. --- Rebuttal Comment 1.1: Title: Response to authors Comment: 1. I suggest the authors correct the notation mistakes of Eq. (5) and Eq. (7) in the revised version. 2. Reviewer PKR9 and Reviewer V5Ro are also concerned about the application of the proposed method on a real-world denoising dataset. If some quantitative comparison results can be provided, I tend to increase my rating.
--- Reply to Comment 1.1.1: Comment: Thank you for the comment! We tested our approach on the SIDD dataset using the estimated parameters provided by the dataset, and produced a PSNR of 32.3. This result is better than the 32.2 achieved by unsupervised method Nr2N and comparable to the 33.9 achieved by supervised learning, demonstrating that our method is practical for real-world denoising problems.
null
null
null
null
null
null
Structural Pruning for Diffusion Models
Accept (poster)
Summary: This paper presents Diff-Pruning, an efficient compression method for learning lightweight diffusion models from pre-existing ones. DPMs have shown impressive capabilities in generative modeling, but they often come with significant computational overhead during training and inference. Diff-Pruning addresses this challenge by introducing a Taylor expansion over pruned timesteps, which eliminates non-contributory diffusion steps and combines informative gradients to identify important weights. The authors empirically evaluate the proposed method on four diverse datasets, highlighting two primary benefits: 1) Efficiency, with a 50% reduction in FLOPs at a fraction of the original training expenditure, and 2) Consistency, as the pruned diffusion models retain generative behavior congruent with their pre-trained counterparts. Strengths: - The paper introduces a novel method, Diff-Pruning, **specifically designed** for compressing diffusion models. - The authors conduct empirical assessments on four diverse datasets, providing a comprehensive analysis of the proposed method's performance. The evaluation demonstrates the effectiveness of Diff-Pruning in terms of efficiency and consistency. - The paper appears to be well-written overall. Weaknesses: - Provide additional details about the pruning process: Although this paper describes the essence of Diff-Pruning as a Taylor expansion over pruned timesteps, providing more specific details about the pruning process itself would enhance the clarity of the proposed method. Providing step-by-step explanations or pseudocode algorithms would help readers understand and replicate this approach. - In Eq. (4), (5), (6), (10), (11), the symbols $|\cdot|$, $||\cdot||$, and $||\cdot||_0$ appear. Are they representing the same thing? The authors need to carefully check whether the symbols in the formulas are correct. Additionally, the notation $\nabla L_t(\mathbf{\theta})(\mathbf{\theta}^\prime - \theta)$ should be reviewed for its validity since $\theta$ is a vector. Furthermore, what does the "$\cdot$" symbol represent in Equation (6)? Is it the dot product? If so, why is "$|\cdot|$" included? - I am unsure how Eq. (6) is derived from Eq. (5). Why can it serve as an importance criterion? Does it have a direct relationship with the model's performance? If so, the authors should demonstrate the relationship between this criterion and the performance of the diffusion model, such as through correlation analysis. - Why does the pruned model exhibit better performance than the pretrained model? Can the authors explain this phenomenon? - I would like to know if the FID and SSIM metrics are sensitive to the pruned model. - Can the MACs metric demonstrate the superiority of the proposed method in terms of performance acceleration? I am interested in understanding the effectiveness of the proposed method on training or inference speed under different GPUs, e.g., frames per second (FPS). Overall, this article is well-written and easy to follow, but there are some questions that need to be addressed. If the authors can address these questions, I would consider increasing the score. Technical Quality: 3 good Clarity: 3 good Questions for Authors: see Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper briefly mentions limitations but does not provide detailed explanations. Expanding on the limitations section will help readers understand the potential constraints or assumptions of the proposed method and its applicability in different scenarios. Additionally, it is necessary to discuss specific technical shortcomings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1:** Provide additional details about the pruning process like step-by-step explanations or pseudocode algorithms. **A1:** Thanks for the advice. Section 4 of the appendix provides a brief discussion of the pruning pipeline and training configurations. We will make it more detailed following your suggestions and provide pseudocode in the appendix or the main paper. --- > **Q2:** (a) The meaning of the symbols $|\cdot|$, $||\cdot||$ and $||\cdot||_0$. > (b) Additionally, the notation $\nabla L_t(\theta) (\theta^\prime - \theta)$ should be reviewed for its validity since $\theta$ is a vector. > (c) Furthermore, what does the "$\cdot$" symbol represent in Equation (6)? Is it the dot product? If so, why is "$|\cdot|$" included? **A2:** Apologies for any confusion. We address the concerns as follows: * (a) In our formulation, $|\cdot|$ indicates the absolute value. The symbol $||\cdot||$, unless specified, corresponds to the L-2 norm. Also, $||\cdot||_0$ denotes the L-0 norm, counting nonzero elements. To prevent confusion, we'll clarify this at the start of the method section. A typo in Line 118, $|\theta^\prime|_0$, will be corrected to $||\theta^\prime||_0$. * (b) In Eq. 5, $\nabla L_t(\theta) (\theta^\prime - \theta)$ is the dot product of vectors, yielding a scalar. For structural pruning, rows or columns of weight matrices are removed, warranting vector input for the Taylor expansion. * (c) "$\cdot$" implies scalar multiplication. Eq. 6 gauges a weight's importance in the matrix. To estimate importance, we use the absolute value, as the loss change can be negative (<0) or positive (>0), both impacting the model negatively. --- > **Q3:** (a) How Eq. (6) is derived from Eq. (5). Why can it serve as an importance criterion? > (b) Does it have a direct relationship with the model's performance? If so, the authors should demonstrate the relationship. **A3:** The Taylor expansion in Eq. 2 measures the loss damage caused by pruning. By setting a single parameter scalar $ \theta_{ik} $ to 0 and keeping the others unchanged, we obtain a pruned weight vector $ (\theta^\prime_i - \theta_i) = [0, 0, ..., -\theta_{ik}, 0, 0, ..., 0] $, where all elements except the $k$-th one are zero. If we apply the Taylor expansion in Eq. 2 and take the absolute value, we obtain the importance of a parameter scalar: $\mathcal{I}(\theta_{ik}, x)= | \mathcal{L}_t(\theta^\prime) - \mathcal{L}_t(\theta) |$ $ = | (\theta_{i0}-\theta_{i0}) \cdot \nabla_{\theta_{i0}} + \dots + (0 - \theta_{ik}) \cdot \nabla_{\theta_{ik}} + \dots + (\theta_{iK}-\theta_{iK}) \cdot \nabla_{\theta_{iK}} | $ $= |\theta_{ik} \cdot \nabla_{\theta_{ik}} |$ For simplicity, we drop the higher-order terms and use $\nabla_{\theta_{ik}}$ for $\nabla_{\theta_{ik}} L(\theta, x)$. If we set the whole vector $\theta_i = [0, 0, ...]$, we get the "vector" criterion in Line 133: $| \sum_k \theta_{ik} \cdot \nabla_{\theta_{ik}} \mathcal{L}_t(\theta, x) |$. Thus, this criterion directly estimates the performance damage $| \mathcal{L}_t(\theta^\prime) - \mathcal{L}_t(\theta) |$. We show this relation in the PDF figure depicting the generation quality of various pruning algorithms without tuning: our algorithm outperforms the other methods in generation quality. --- > **Q4:** Why does the pruned model exhibit better performance than the pretrained model? Can the authors explain this phenomenon? **A4**: This is akin to double descent [1], where pre-trained networks are oversized for certain datasets like CelebA, and pruning mitigates overfitting.
CelebA images share similar features, so fewer parameters suffice for the network. This phenomenon was not observed in datasets like CIFAR-10, Church, and Bedrooms due to their varied content and complexity. [1] Sparse double descent: Where network pruning aggravates overfitting. ICML, 2022. --- > **Q5:** I would like to know if the FID and SSIM metrics are sensitive to the pruned model. **A5:** We provide an empirical study in Table 3 of the main paper. FID and SSIM (Structural Similarity) are both sensitive to the pruning ratio. With pruning increasing from 0% to 70%, FID climbs from 4.19 to 9.33, while SSIM drops from 1.00 to 0.909. So, FID and SSIM can indeed reveal the effectiveness of pruning. --- > **Q6:** Can the MACs metric demonstrate the superiority of the proposed method in terms of performance acceleration? I am interested in understanding the effectiveness of the proposed method on training or inference speed. **A6:** Thanks for the comments. Following the reviewer's advice, we provide more results in the table below. Models are trained on a 4090 but tested on an A6000. The MACs metric indeed reflects the actual speed-up. | Method | \#Params | MACs | Inference Mem. | Training Mem. | Inference FPS | Training FPS | |--|--|--|--|--|--|--| | Pretrained LDM | 400.92M | 99.80G | 11.03GB | 14.64GB | 12.87 | 4.26 | | Pruned LDM | 189.43M | 52.71G | 9.58GB | 11.95GB | 19.83 | 6.37 | | Pretrained DDPM | 113.7M | 248.7G | 3.35GB | 5.59GB | 28.66 | 11.02 | | Pruned DDPM | 46.5M | 100.7G | 2.43GB | 4.13GB | 32.92 | 9.07 | --- > **Q7:** The paper briefly mentions limitations but does not provide detailed explanations. Expanding on the limitations section will help readers understand the potential constraints or assumptions. **A7:** Thanks for the suggestion. We discussed some failure cases in Fig. 4 of the appendix and in the experiments part of the main paper, such as Line 246 (about performance) and Line 253 (about distortion). We will provide a limitations section in the revised version covering the following limitations and technical shortcomings: 1. Performance: it is still very difficult to preserve the original performance after pruning, especially on large-scale datasets. 2. Distortion: the algorithm only tries to minimize the global distortion, but is unaware of semantically important content in the generated images, such as watermarks. --- Rebuttal Comment 1.1: Comment: We sincerely apologize for reversing the order of FPS for the pre-trained and pruned models in the table. The corrected version is as follows: | Method | #Params | MACs | Inference Mem. | Training Mem. | Inference FPS | Training FPS | | -- | -- | -- | -- | -- | -- | -- | | Pretrained LDM | 400.92M | 99.80G | 11.03GB | 14.64GB | 12.87 | 4.26| | Pruned LDM | 189.43M | 52.71G | 9.58GB | 11.95GB | 19.83 | 6.37| | Pretrained DDPM | 113.7M | 248.7G | 3.35GB | 5.59GB | 28.66 | **9.07*** | | Pruned DDPM | 46.5M | 100.7G | 2.43GB | 4.13GB | 32.92 | **11.02*** | Best Regards, Authors of Paper 2537 --- Rebuttal Comment 1.2: Comment: Thank you for your detailed reply; I tend to raise my score. --- Reply to Comment 1.2.1: Comment: We extend our sincere gratitude to Reviewer oE14 for the valuable comments and suggestions. The points raised regarding symbol definitions, derivation details, limitations, FPS, and technical shortcomings are unquestionably important. In line with the reviewer's insightful feedback, we will incorporate the above results and analyses into the revised version. Best Regards, Authors of Submission 2537
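To make the criterion in A3 concrete, here is a minimal PyTorch sketch of the per-scalar Taylor importance; the function and variable names are illustrative assumptions, not the paper's released code.

```python
import torch

def taylor_importance(model, loss):
    """Per-scalar first-order Taylor importance |theta_ik * grad_ik|,
    a proxy for |L_t(theta') - L_t(theta)| as derived in A3 above."""
    named = [(n, p) for n, p in model.named_parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, [p for _, p in named])
    scores = {}
    for (name, p), g in zip(named, grads):
        # Summing a row before the absolute value would instead give the
        # "vector" criterion used for structural (row/column) pruning.
        scores[name] = (p.detach() * g).abs()
    return scores
```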
Summary: This work introduces a structural pruning method for diffusion models, called Diff-Pruning. The authors leverage a Taylor expansion on each term of the ELBO loss as the criterion for deciding which weights to prune. By calculating the error in the final image induced by pruning at timestep t, and by discussing the effect of the converged loss on the higher-order terms of the Taylor expansion, this work manually drops the pruning criteria of timesteps near the noise by setting a threshold $\mathcal{T}$ related to the relative loss. Experiments compare the performance of the proposed pruning method with other common pruning methods, as well as with training from scratch. Strengths: 1. This paper proposes a novel pruning method that utilizes the multi-step property of the diffusion model. 2. The experimental results demonstrate the effectiveness of the proposed method compared with baseline methods. Weaknesses: 1. My major concern is that the motivation of the method is not clearly explained: - The authors draw an important conclusion from Eq. 9. However, the derivation of Eq. 9 has mistakes and typos. There is a neural inference term with $x_t$ as input in Eq. 3, so the final error $\delta_0$ cannot be derived in such a simple form. Moreover, the meaning of the symbol "$\delta$" is ambiguous. It seems to represent the noise prediction error in Line 146, but $\delta_{t-1}$ and $\delta_0$ seem to represent the errors of $x_t$ and $x_0$ caused by pruning. And the subscript $s$ of $\alpha_s$ is wrongly written in Eq. 8. - How can Eq. 9 support the conclusion that "prediction errors occurring at larger t primarily impact the high-level content of generated images, while smaller t values concentrate on refining the images with relatively small modifications"? A related paper argues that the main difference between high-level content and details is that they come from different frequency components of the whole image [1], while Eq. 9 focuses on the amplitude of the error $\delta_0$. This requires further clarification. - The authors claim that the final distortion is progressively magnified according to Eq. 9, which indicates that the error at timesteps near the noise has a greater impact on the final image. However, the proposed method truncates the pruning criteria at large t, which does not match this observation. 2. Lack of a detailed description of the baseline method ToMe. [1] Diffusion probabilistic model made slim. arXiv preprint arXiv:2211.17106, 2022. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. How is Eq. 9 derived, and why is the choice of $\alpha_t$ according to this equation opposite to the proposed method (as described in Weaknesses)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors do not include the limitations and potential negative societal impact in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1:** The motivation of the method is not clearly explained. **A1:** In essence, we harness the informative gradients stemming from early timesteps ($t\rightarrow 0$) to identify unimportant parameters for the pruning process. As demonstrated in our experiments, we disregard larger timesteps ($t\rightarrow T$) due to the presence of vanished gradients, rendering them incapable of furnishing valuable insights for importance assessment. --- > **Q2:** The derivation of Eq. 9 has mistakes and typos. (a) There is a neural inference term with $x_t$ as input in Eq. 3, so the final error $\delta_0$ cannot be derived in such a simple form. (b) Moreover, the meaning of the symbol "$\delta$" is ambiguous. It seems to represent the noise prediction error in Line 146, but $\delta_{t-1}$ and $\delta_0$ seem to represent the errors of $x_t$ and $x_0$ caused by pruning. (c) The subscript $s$ of $\alpha_s$ is wrongly written in Eq. 8. **A2:** We apologize for any confusion that may have arisen. In Equation 9, $\delta_t=\epsilon_{\theta^\prime}(x, t) - \epsilon_{\theta}(x, t)$ (as defined in Line 146) denotes the error that emerges under the same input. On the other hand, $\delta_0$ refers to the distortion resulting from the pruning error $\delta_t$, which **does not include the error associated with input shift**, due to the difficulty of analyzing the non-linearity of network forwarding. To enhance clarity, we will improve the notation by substituting $\delta_0$ with $\delta_{0\leftarrow t}$. The subsequent discussion employs this refined notation. * (a) Indeed, there is a network inference term $\epsilon_\theta(x_{t-1} + \delta_{t-1}, t-1)$ at each timestep. At Line 147, we assume "no additional prediction error emerges in other steps," indicating consistent inference across timesteps without introducing new pruning errors. Besides, we do not concentrate on changes caused by shifted inputs, as they cannot reflect the **functional shifts** due to pruning. Instead, we are interested in the pruning error as defined above, that is, $\delta_{t-1}=\epsilon_{\theta'}(x_{t-1} + \delta_{t-1}, t-1) - \epsilon_{\theta}(x_{t-1} + \delta_{t-1}, t-1)$. In this case, a simple functional error can be linearly derived as $\delta_{0\leftarrow t}$ in Eq. 9. However, there are still error terms caused by the input shift during inference, i.e., $\epsilon_{\theta}(x_{t-1} + \delta_{t-1}, t-1) - \epsilon_{\theta}(x_{t-1}, t-1)$. Analyzing this is challenging due to non-linearity. We'll revise Eq. 9 per your suggestion to address these concerns. * (b) Thanks for the comments. As defined in Line 146, $\delta_t=\epsilon_{\theta^\prime}(x, t) - \epsilon_\theta(x, t)$. It refers to the functional error under the same inputs. * (c) Thanks. The $\alpha_s$ should be $\alpha_t$. We will make the necessary correction. --- > **Q3:** How can Eq. 9 support the conclusion "prediction errors occurring at larger t primarily impact the high-level content of generated images, while smaller t values concentrate on refining the images with relatively small modifications"? **A3:** Thanks for the comment. Due to the scaling factor mentioned in Line 149, "the final distortion induced by $\delta_t$ is progressively magnified by a factor of $\frac{1}{\sqrt{\alpha_s}}>1$ along the sampling path". For a larger $t$, more scaling factors will be applied.
Therefore, the distortion at a large $t$ has a larger impact ($\delta_{0\leftarrow t}$) on the image space than that at a small $t$. --- > **Q4:** The authors claim that the final distortion is progressively magnified according to Eq. 9, which indicates that the error at timesteps near the noise has a greater impact on the final image. However, the proposed method truncates the pruning criteria at large t, which does not match this observation. **A4:** Thanks for the comment. The impact has two main components: the scaling factor $\frac{\beta_t}{\sqrt{\bar{\alpha}_t \cdot (1-\bar{\alpha}_t)}}$ and the initial error $\delta_t$ at step $t$. While it might seem reasonable to prioritize focusing on larger steps $t$ as a potential solution, there are certain challenges associated with this approach. As detailed in Lines 172-177 and depicted in Figure 4, the gradients tend to vanish rapidly as $t$ approaches $T$, leading to inaccuracies in the applied Taylor expansion for $\delta_t$. Consequently, it becomes essential to truncate certain steps to prevent the accumulation of unreliable gradients. --- > **Q5:** Lack of a detailed description of the baseline method ToMe. **A5:** Thank you for your valuable feedback. The specific details of the baseline methods can be found in Lines 218-228 of the main paper. We will follow your suggestion to make it an independent paragraph in the revised version. --- Rebuttal Comment 1.1: Title: Thanks for the authors' response Comment: The authors' response addresses most of my concerns. Therefore, I'm increasing my score. BTW, I still have several questions. * A2 (a): According to my understanding, Eq. 9 assumes that the shift of $x_{t-1}$ (caused by pruning) does not affect the functional shift at timesteps smaller than $t$, so that this error can be passed to $x_0$ by way of sequenced multiplication. However, the neural network is a very complicated non-linear term that may have a significant effect on the final error. So, will this assumption be too strong to be true? What are the authors' opinions? * A3: My major concern is that the "high-level" contents and "low-level" details are directly related to the **frequency** of the image, not the amplitude of the error. So maybe it's inappropriate to derive the conclusion at Lines 150-152 from Eq. 9. --- Reply to Comment 1.1.1: Comment: We sincerely thank Reviewer 5jdL for the valuable comments and suggestions to improve our submission. We will follow the reviewer's advice to polish our submission. > However, the neural network is a very complicated non-linear term that may have a significant effect on the final error. So, will this assumption be too strong to be true? What are the authors' opinions? We agree with the reviewer that the neural network is complicated due to its nonlinearity, which indeed has a significant effect on the final error. Equation 9 only provides a coarse estimate of the partial effect of pruning, under an ideal and strong assumption. So, we only use it as an interpretation of why we need to pay more attention to large timesteps. This is validated in Fig. 4, where incorporating some large timesteps can be beneficial to the image quality (SSIM, 0.78 $\rightarrow$ 0.82). We will polish Lines 141-149 following the reviewer's comment to emphasize that Eq. 9 is a coarse estimate under strong assumptions and to make the presentation more rigorous.
> My major concern is that the "high-level" contents and "low-level" details are directly related to the frequency of the image, not the amplitude of the error. So maybe it's inappropriate to derive the conclusion at Lines 150-152 from Eq. 9. We appreciate the comments. This conclusion is inspired by Figure 6 in DDPM, where they find that "Large scale image features appear first and details appear last" [1]. However, we agree with the reviewer that frequency is a more essential factor for this phenomenon. But to some extent, the magnitude of the error can also reveal the type of distortion, as content changes inherently lead to substantial errors in the pixel space. Following the reviewer's comment, we will provide a discussion of frequency and error in the revised version. [1] Ho, Jonathan, Ajay Jain, and Pieter Abbeel. "Denoising diffusion probabilistic models." Advances in Neural Information Processing Systems 33 (2020): 6840-6851. Best Regards, Authors of Submission 2537
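To illustrate the magnification argument numerically, the following sketch assumes a standard linear DDPM beta schedule (not necessarily the paper's exact configuration) and tracks how a unit functional error injected at step $t$ is scaled on its way to $x_0$ under the no-new-error assumption of A2:

```python
import torch

# Assumed linear DDPM schedule; the paper's exact settings may differ.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas

# Under the assumption that no new pruning error emerges at later steps,
# an error injected at step t is scaled by prod_{s<=t} 1/sqrt(alpha_s) > 1.
amplification = torch.cumprod(1.0 / alphas.sqrt(), dim=0)

for t in (10, 100, 500, 999):
    print(f"t = {t:4d}  ->  amplification = {amplification[t].item():7.2f}")
# Larger t accumulates more 1/sqrt(alpha_s) factors, so the same functional
# error has a larger footprint in image space -- the point made in A3.
```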
Summary: The paper introduces a pruning-based method aimed at addressing the efficiency challenges of diffusion models. Unlike existing approaches that focus on accelerating sampling or enhancing architectures, it specifically focuses on the time cost of compression and the consistency of generated images. The authors propose a novel variant of Taylor approximation as the importance function for pruning, which effectively preserves both high-level content and low-level details of generated images, while minimizing the impact of noisy steps. Experimental results demonstrate that the compressed diffusion model successfully generates images similar to those of the pre-trained model. Strengths: I agree that efficiency and consistency are indeed crucial considerations in compressing diffusion models, which have received limited attention in prior works. This work contributes to this field by building several initial experiments on datasets such as CIFAR, CelebA, and LSUN, effectively showcasing the advantages of the method. This work establishes a solid baseline for future explorations into efficient diffusion models. Moreover, the impressive ability of the pruned model to generate consistent results is particularly noteworthy, which is user-friendly for deployment. Weaknesses: It is observed that in certain cases, the generated images may still exhibit some distortions or changes, such as the watermarks shown in Figure 2. It is worth noting that some of these elements may contain important information that should ideally be preserved. To address this, it would be beneficial to explore methods that allow for controllable preservation of such information while pruning the models. This ability would provide the flexibility to focus on specific parts of the generated images based on different scenarios. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: It would be valuable for the authors to include discussions regarding the controllability of the observed distortions or changes, as mentioned in the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1:** The generated images may still exhibit some distortions or changes, such as the watermarks shown in Figure 2. It would be beneficial to explore methods that allow for controllable preservation of such information while pruning the models. This ability would provide the flexibility to focus on specific parts of the generated images based on different scenarios. **A1:** Thanks for your valuable comments. This work focuses on minimizing distortion at the pixel level and does not inherently offer direct controllability at a high level. Nevertheless, the idea of incorporating such functionality is highly valuable. To achieve this, one solution is to extend our method by utilizing a mask that prioritizes important regions, employing the following equation adapted from Equation 2 in the main paper: $$ \mathcal{L}(\theta) := \mathbb{E}_{t, x_0\sim q(x), \epsilon\sim \mathcal{N}(0,1)} \left[ \| m \odot ( \epsilon - \epsilon^{\prime}(\sqrt{\bar{\alpha}_t}x_0 + \sqrt{1-\bar{\alpha}_t}\epsilon, t) ) \|^2 \right] $$ where $m$ is a binary mask in the image space and $\odot$ is element-wise multiplication. Here we use $\epsilon^{\prime}$ instead of "\epsilon_{\theta}" because there is a display bug in OpenReview. This method allows the pruner to preserve those highlighted regions, like the watermarks. However, it is important to note that this improvement relies on the availability of additional annotations. To address this limitation, we can explore the possibility of decoupling objects in images and achieving a balanced quality for each object. A naive way is to segment the images with general models like SAM [1]. We intend to include a discussion section in the appendix that delves into the controllability of image quality and its implications. [1] Kirillov, Alexander, et al. "Segment anything." arXiv preprint arXiv:2304.02643 (2023).
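As a concrete reading of the masked objective above, here is a hedged PyTorch sketch; `eps_model`, `alpha_bar`, and `mask` are assumed placeholders rather than the paper's actual interfaces.

```python
import torch

def masked_ddpm_loss(eps_model, x0, t, alpha_bar, mask):
    """Masked denoising loss from A1: ||m * (eps - eps'(x_t, t))||^2.
    `mask` up-weights regions (e.g., watermarks) that should be preserved;
    all names here are illustrative assumptions, not the paper's API."""
    eps = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, 1, 1, 1)            # cumulative alpha at step t
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps  # forward diffusion sample
    pred = eps_model(x_t, t)
    return ((mask * (eps - pred)) ** 2).mean()
```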
Summary: This paper presents Diff-Pruning, a novel structural compression technique for learning efficient diffusion models from pre-trained ones. The fundamental idea behind Diff-Pruning is the utilization of a Taylor expansion over pruned timesteps, which effectively combines informative and clean gradients to estimate the importance of weights. The authors demonstrate that Diff-Pruning not only achieves compression of pre-trained models within a few training iterations but also keeps the original generation capabilities intact. Strengths: - The experimental results demonstrate that structural pruning can serve as a powerful and efficient compressor for diffusion models. An important advantage is that the pruned model inherently retains the generative behavior of the pre-trained model. Considering the time-consuming nature of training new diffusion models, this work proves valuable for various downstream applications. - The proposed method, which employs the Taylor expansion over pruned timesteps, is both well-motivated and practical. The idea of balancing the contribution of different stages with binary weighting is interesting and easy to put into practice. - The authors provide extensive experimental evidence that substantiates their claims regarding the efficiency and consistency of Diff-Pruning. Weaknesses: - An extra comparison of the memory requirements of different pruning methods might provide a more complete assessment of the proposed method. - The experiments reveal that structural pruning can potentially have a negative impact on the performance of pre-trained models. This observation deviates slightly from the results seen in discriminative tasks such as classification, where lossless compression can be achieved for certain networks like ResNets. It would be beneficial for the authors to provide further clarification on this phenomenon. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please refer to the weaknesses above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: This work has no negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1:** Extra comparison of the memory requirement. **A1:** Thanks for the suggestion. Methods like Random Pruning and Magnitude Pruning do not require additional memory during pruning. Taylor pruning and the proposed Diff-Pruning require $O(N)$ space to store the gradients, where $N$ is the number of parameters. In addition, the memory consumption of pruned models, as well as their training and inference FPS (Frames Per Second), can be found in the table below. We tested a pruned DDPM and a conditional LDM (please refer to Q1 of Reviewer S8k8) on 256$\times$256 images, i.e., LSUN-Church and ImageNet-1K, with a single NVIDIA RTX A5000. All experiments were repeated 30 times and the average results are reported. | Method | \#Params | MACs | Inference Mem. | Training Mem. | Inference FPS | Training FPS | |--|--|--|--|--|--|--| | Pretrained LDM | 400.92M | 99.80G | 11.03GB | 14.64GB | 12.87 | 4.26 | | Pruned LDM | 189.43M | 52.71G | 9.58GB | 11.95GB | 19.83 | 6.37 | | Pretrained DDPM | 113.7M | 248.7G | 3.35GB | 5.59GB | 28.66 | 11.02 | | Pruned DDPM | 46.5M | 100.7G | 2.43GB | 4.13GB | 32.92 | 9.07 | --- > **Q2:** The negative impact on the performance of pre-trained models. Why lossless compression cannot be achieved. **A2:** Thank you for your insightful comments. We propose two factors that may cause this phenomenon: * Model Capacity: The model capacity required for generative models is typically larger compared to discriminative models [1,2,3]. In discriminative tasks, images are often downsampled and filtered hierarchically, resulting in less information encoded within the network. Conversely, generative tasks face challenges in training small networks while achieving high generation quality, which causes performance loss during pruning. * Sensitivity of Metrics: Classification metrics like accuracy are generally not very sensitive to slight distortions in model predictions, as long as the samples maintain their correct positioning relative to the decision boundary. In other words, accuracy is discrete. However, for diffusion models, we use FID, a continuous value, to evaluate the performance of models. Pruning will introduce distortions in the generated images, which are immediately reflected in the FID score. [1] Kang, Minguk, et al. "Scaling up GANs for text-to-image synthesis." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [2] Brock, Andrew, Jeff Donahue, and Karen Simonyan. "Large scale GAN training for high fidelity natural image synthesis." arXiv preprint arXiv:1809.11096 (2018). [3] Rombach, Robin, et al. "High-resolution image synthesis with latent diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. --- Rebuttal Comment 1.1: Comment: We sincerely apologize for reversing the order of FPS for the pre-trained and pruned models in the table. The corrected version is as follows: | Method | #Params | MACs | Inference Mem. | Training Mem. | Inference FPS | Training FPS | | -- | -- | -- | -- | -- | -- | -- | | Pretrained LDM | 400.92M | 99.80G | 11.03GB | 14.64GB | 12.87 | 4.26| | Pruned LDM | 189.43M | 52.71G | 9.58GB | 11.95GB | 19.83 | 6.37| | Pretrained DDPM | 113.7M | 248.7G | 3.35GB | 5.59GB | 28.66 | **9.07*** | | Pruned DDPM | 46.5M | 100.7G | 2.43GB | 4.13GB | 32.92 | **11.02*** | Best Regards, Authors of Paper 2537
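For readers who want to reproduce numbers like the ones in these tables, here is a rough sketch of an FPS/peak-memory probe. It assumes a CUDA device and a callable `model`; the 30-repetition protocol follows the rebuttal, the rest is assumption.

```python
import time
import torch

@torch.no_grad()
def benchmark(model, batch, warmup=5, iters=30):
    """Rough inference FPS and peak-memory probe in the spirit of the
    tables above; `model` and `batch` are illustrative placeholders."""
    torch.cuda.reset_peak_memory_stats()
    for _ in range(warmup):          # warm up kernels and the allocator
        model(batch)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        model(batch)
    torch.cuda.synchronize()
    fps = iters * batch.shape[0] / (time.time() - start)
    mem_gb = torch.cuda.max_memory_allocated() / 1024**3
    return fps, mem_gb
```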
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive comments. Following the advice of Reviewer S8k8, we report our results on conditional LDMs on ImageNet-1K 256. Generated images are available in the attached PDF file. | Method | #Params $\downarrow$ | MACs $\downarrow$ | FID $\downarrow$ | IS $\uparrow$ | Train Steps $\downarrow$ | |--------------------------|---------------------------|-------------------------|----------------------|-------------------|------------------------------| | Pretrained LDM | 400.92M | 99.80G | 3.60 | 247.67 | 2000K | | Scratch Training | 189.43M | 52.71G | 51.45 | 25.69 | 100K | | Taylor Pruning | 189.43M | 52.71G | 11.18 | 138.97 | 100K | | Ours ($\mathcal{T}=0.1$) | 189.43M | 52.71G | 9.16 | 201.81 | 100K | | Dataset | Img Size | Pruning Ratio (MACs) | Lr | Batch | $\text{Step}_f/\text{Step}_p$ | weight decay | |-------------|----------|--------------------------------------|---------|-----------|-----------------------------------|--------| | ImageNet-1K | 256 | 47.2\% (99.80G $\rightarrow$ 52.71G) | 1.28e-5 | 64 | 5\% (100K/2000K) | 0 | Pdf: /pdf/7898c5ce6699ebb532a48ff7728fed4e42983d8f.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes a method to prune diffusion models to achieve a 50% FLOPs reduction at 10% to 20% of the original training budget. The authors show that prune-and-fine-tune strategies based on random selection, magnitude-based selection, and Taylor-expansion-based selection all lead to suboptimal performance. They develop a method based on a modification of Taylor-expansion-based selection and show it achieves a good performance-quality tradeoff. They quantitatively evaluate on standard datasets used for diffusion models at small and medium resolutions (from 32^2 to 256^2). Strengths: The paper works on a popular task in the recent literature on generative models and presents a practice for pruning diffusion models that preserves their generative capability. They demonstrate empirical results on 256^2-resolution unconditional image generation, which seem promising. Weaknesses: The lack of objective quantitative metrics in the image generation literature makes the evaluation of pruned models very difficult. In addition to the FID score, the authors also use the SSIM score to compare the images generated by the full model and the pruned model under the same random seed. However, it is still hard for me to evaluate the quality of the empirical results given the limited examples presented in the paper. It would otherwise be more informative if the authors could demonstrate their method being valid for conditional diffusion models, or even latent diffusion models. The technical contribution is also limited, as the methods the authors use already exist in the literature but have not been adopted for diffusion models. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: I don't have other questions Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: I have a feeling that the authors tried very hard to prove their technique works to some extent. It would be helpful as a research publication if the authors could present more results telling when and where their method fails, or explore more of the boundary of the quality-performance trade-off. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1:** The lack of objective quantitative metrics, and the effectiveness on conditional diffusion models like LDMs. **A1:** Thanks for your valuable comments. In the table below, we provide the results of pruning conditional Latent Diffusion Models on ImageNet-1K (256$\times$256). We follow the same training protocol as provided in Rombach et al. (2022) to prune and fine-tune an off-the-shelf LDM. Due to the time limits of this rebuttal period, we only fine-tuned the pruned models for 4 epochs. The generated images can be found in the attached PDF file. | Method | #Params $\downarrow$ | MACs $\downarrow$ | FID $\downarrow$ | IS $\uparrow$ | Train Steps $\downarrow$ | |--|--|--|--|--|--| | Pretrained LDM | 400.92M | 99.80G | 3.60 | 247.67 | 2000K | | Scratch Training | 189.43M | 52.71G | 51.45 | 25.69 | 100K | | Taylor Pruning | 189.43M | 52.71G | 11.18 | 138.97 | 100K | | Ours ($\mathcal{T}=0.1$) | 189.43M | 52.71G | 9.16 | 201.81 | 100K | **Details of LDM Pruning:** LDM consists of an encoder, a decoder, and a UNet diffusion model. Around 400M parameters come from the UNet architecture and only 55M from the autoencoder, so we only prune the UNet model for acceleration. Moreover, we are dealing with a **conditional** model. During importance estimation, we randomly sample classes and images to accumulate gradients for the Taylor expansion. We used the threshold $\mathcal{T}=0.1$ to ignore the vanishing gradients from large timesteps, which also makes the pruning process more efficient. With $\mathcal{T}=0.1$, only 534 steps participate in the pruning process. After importance estimation, we apply a pre-defined channel sparsity of 30\% to all layers, leading to a lightweight UNet with 189.43M parameters. Subsequent fine-tuning follows the official training scripts, using a scaled learning rate of $0.1\times lr_{base}$. After fine-tuning, we sample 50 images per class and report the FID and IS scores of the pruned models. Besides, we only report the #Params and MACs of the UNet. Similar to the hyper-parameter tables in the appendix, we outline our training configurations as follows: | Dataset | Img Size | Pruning Ratio (MACs) | Lr | Batch | $\text{Step}_f/\text{Step}_p$ | weight decay | |--|--|--|--|--|--|--| | ImageNet-1K | 256 | 47.2\% (99.80G $\rightarrow$ 52.71G) | 1.28e-5 | 64 | 5\% (100K/2000K) | 0 | --- > **Q2:** The technical contribution is also limited, as the methods the authors use already exist in the literature but have not been adopted for diffusion models. **A2:** Thanks for the comments. Pruning diffusion models is quite a new topic and there is a large design space to explore, such as importance estimation, fine-tuning, and pruning strategies. This work indeed adapts some popular methods like Magnitude Pruning and Taylor Pruning to diffusion models. However, as illustrated in Table 1, they do not show significant improvements even compared to random pruning. As an initial exploration, we aim to make the method as simple and practical as possible. This also brings the benefit that many existing techniques, like second-order approximation, can be further leveraged to boost the performance of pruning. We will follow your suggestion to make it a more "diffusion-style" work.
--- > **Q3:** It would be helpful as a research publication if the authors could present more results telling when and where their method fails, or explore more of the boundary of the quality-performance trade-off. **A3:** Thanks for the comment. We provided some failure cases in Fig. 4 of the appendix and also discussed some limitations regarding controllability, such as the missing watermark in Line 252. In other words, Diff-Pruning only aims to minimize image distortion but does not know what kind of information is important in the images. We will provide a limitations section in the revised version. Regarding the boundary of the quality-performance trade-off, Table 3 provides some insights into the trade-off between performance and efficiency, where we observe prominent performance degradation at high pruning ratios. However, the pruning of diffusion models, as well as of generative models in general, is still a new topic. We agree that more exploration of the quality-performance boundary will be valuable.
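One plausible implementation of the timestep-selection rule mentioned in the rebuttals (keep only steps whose relative loss exceeds the threshold $\mathcal{T}$, dropping the large-$t$ steps with vanishing gradients) is sketched below; the normalization used here is an assumption, not the paper's exact definition.

```python
import torch

def select_timesteps(loss_per_t, thresh=0.1):
    """Keep timesteps whose relative loss is at least `thresh`, as one
    reading of the threshold T = 0.1 described above. `loss_per_t` is a
    (T,) tensor of per-timestep training losses; normalizing by the max
    is an assumption here, not the paper's exact definition."""
    rel = loss_per_t / loss_per_t.max()
    return torch.nonzero(rel >= thresh).flatten()  # steps used for gradients
```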
null
null
null
null
null
null
Towards Test-Time Refusals via Concept Negation
Accept (poster)
Summary: The paper proposes a method to remove a negative concept from a text-to-image diffusion model at inference time; the approach is different from prior work, which assumed that the benign concepts and the removed concepts are independent and added a penalty score; instead, this paper proposes to remove the concepts by 1) extracting features from prompts that are known to be negative, and 2) using the extracted features to refine the attention maps. Results on object removal benchmarks look promising. Strengths: The results seem strong, i.e., the method outperforms previous methods according to Table 1. However, I'm not an expert in this field, so I am not confident whether the evaluation is reliable. Weaknesses: - The paper reads as confusing to me (probably because I did not work on text-to-image diffusion methods). This might not be a weakness depending on the audience, but its contribution might not come across to the broader audience as it is currently written. - I am unsure about the motivation of the method: if we want to remove certain concepts, can we either 1) prompt ChatGPT to rewrite the prompt to remove the concept, or 2) change the prompt to explicitly mention it, i.e., [original prompt] + "please do not include XXX"? This seems to me the most natural/straightforward approach, so perhaps it is useful to include this as a baseline. - I am slightly confused about the desired behavior. What should "Mickey Mouse eating ice-cream" - "Disney Character" look like? Would it be just the ice cream? Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: 078: unclear why "adding adversarial perturbations" would be able to protect images from being generated (speaking as a reader not familiar with adversarial training). It would be helpful to add a sentence about the underlying intuition. Not a critical concern. Equation 2: This might be a stupid question, but why does $p(x, c, \neg\tilde{c}) \propto p(x)\,p(c|x) / p(\tilde{c}|x)$ hold? Does this follow from a probabilistic derivation, or is it a definition proposed by the paper? If it's the former, can you derive in the appendix why it is true (without using the $\propto$ symbol, but stating the equation fully by including the normalization factors excluded here); if it's the latter, can you briefly justify in the paper why it is a useful definition? 151: rather than creating a list of prohibited words, can we simply ask ChatGPT to detect and remove similar concepts and still make it a coherent prompt? 190 & Figure 3: sorry, I'm confused about the figure, which seems important in conveying the core idea of the paper. For example, in a), if the red dashed line represents the diffusion process, does it mean that the image changes from an unsafe image to a safe image? Why would the model generate an unsafe image in the first place under a safe user prompt? If an image is considered safe, then it should be in the orange circle, but there is nothing in c) and d) in the orange circle. I think the authors probably do have a good understanding of what they are doing, but I cannot really tell what it is based on the figure. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: The authors discussed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! Below we address your questions and concerns. Please feel free to post additional comments if you have further questions. 1. Deep learning architectures can be susceptible to adversarial perturbations. The introduction of deliberate perturbations can disrupt the internal processes of the targeted diffusion model, leading the model to misinterpret the perturbed image as unrelated content. This vulnerability becomes particularly significant when adversaries attempt to exploit the diffusion model for editing images to generate illegal content. By introducing perturbations to such images, it becomes possible to prevent the images from being altered to achieve the adversary's intended result. For a concrete visualization example demonstrating these concepts, please refer to Figure 1 in the paper titled "Raising the Cost of Malicious AI-Powered Image Editing." https://arxiv.org/pdf/2302.06588.pdf. 2. Thank you for bringing this to our attention. Equation (2), as you mentioned, was initially introduced in [1, 2]. We will provide its derivation and explanation in the appendix. Diffusion models demonstrate the ability to compute both unconditional generation ($p(x)$) and conditional generation given a prompt ($p(x|c)$). Notably, when Equation (2) is satisfied, the intricate multi-conditional generation process, involving conditions such as $c$ and $\tilde{c}$, can be decomposed into a combination of distinct single-conditional generations, simplifying the overall generation process. [1] Du, Yilun, Shuang Li, and Igor Mordatch. "Compositional visual generation with energy based models." Advances in Neural Information Processing Systems 33 (2020): 6637-6647. [2] Liu, Nan, et al. "Compositional visual generation with composable diffusion models." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. 3. The strategy of eliminating negative concepts by rephrasing user prompts with a large language model like ChatGPT holds promise as a viable approach. However, the key hurdle lies in devising suitable rephrasing prompts; it is essential to carefully design these prompts to achieve the desired outcome effectively. Nevertheless, we must acknowledge a notable limitation of this approach: even when conditioning images on normal prompts, negative concepts may inadvertently appear (as demonstrated in Figure 4, middle). Regrettably, relying solely on ChatGPT is insufficient to address this problem effectively. We also evaluate the pattern [original prompt] + "please do not include XXX". Comprehensive results will be provided in the appendix, offering a more complete understanding of its potential efficacy. 4. Figure 3 illustrates two generation scenarios: benign generation (user prompts without negative concepts) and unsafe generation (user prompts containing negative concepts). The red dashed line represents the refinement direction of our approach. We use this direction to show that our method has no impact on benign generation while effectively removing negative concepts during unsafe generation. In the case of benign generation, the refinement direction points to the normal space. Even when there are no negative concepts to remove, our method does not interfere with the regular image generation process. On the other hand, for unsafe generation, the refinement direction guides the diffusion process away from the negative space and towards the normal space.
This indicates that our approach can successfully eliminate negative concepts after several iterations. For weakness 3: Consider the text prompt "mickey mouse eating an ice cream" and a pre-defined negative concept "N = {mickey mouse}". The remaining part of the prompt becomes the positive concept "P = {eating an ice cream}". A good generative result in this case would include the ice cream but not Mickey Mouse. As we do not primarily focus on personalized generation of specific content, a non-Mickey-Mouse outcome or even a missing object is acceptable. Our primary objective is to prevent the generation of negative content and ensure that the remaining positive concepts given by the prompt are generated accurately. However, discussing what content should replace the removed negative concepts is beyond the scope of this paper. --- Rebuttal Comment 1.1: Title: The response clarifies all my confusions. Comment: Thanks for your response. It clarifies my confusions. (I would love to increase the score to 7 though I do not know how to do that) --> nvm just increased the score.
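As an aside for readers, Equation (2) has a standard score-space reading in the style of composable diffusion models [2]; the sketch below is that generic composition under assumed signatures, not the paper's ProtoRe refinement itself.

```python
import torch

def composed_eps(eps_model, x_t, t, c_pos, c_neg, c_null, w_pos=7.5, w_neg=7.5):
    """Score-space reading of p(x | c, not ~c) \propto p(x) p(c|x) / p(~c|x):
    taking log-gradients turns the product into an added guidance term and
    the division into a subtracted one (composable-diffusion style [2]).
    All signatures and guidance scales here are assumptions."""
    e_null = eps_model(x_t, t, c_null)  # unconditional score
    e_pos = eps_model(x_t, t, c_pos)    # positive-concept score
    e_neg = eps_model(x_t, t, c_neg)    # negative-concept score
    return e_null + w_pos * (e_pos - e_null) - w_neg * (e_neg - e_null)
```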
Summary: Generative models require refusal techniques to limit their output and uphold ethical and copyright standards. 'Concept negation' is a promising solution, yet current methods have limitations, such as only accommodating independent concepts without considering their interconnected nature. This paper proposes PROTORE, a novel framework that enhances concept negation by identifying and purifying negative concepts at test time. PROTORE leverages the language-contrastive knowledge of CLIP to extract negative features and refine the attention maps. Evaluations show PROTORE outperforms existing methods in purification efficiency and image quality. Strengths: - The proposed PROTORE is a plug-and-play method that can be easily adopted in practice. It is able to remove the negated concept from the user prompt at test time without additional training. - The proposed method shows strong results compared with other methods on the Imagenette and I2P benchmarks. It also decently maintains performance on the other classes in the controllable comparison in Table 1, indicating its superiority over ESD. - PROTORE also maintains a good FID compared to other methods. Weaknesses: - The proposed method might be limited, or upper-bounded, by the general language ability of CLIP. It is known that CLIP has limited compositional or grammatical understanding of text input. This means that the proposed method might not be able to correctly negate a concept like "human riding a horse" when the user prompt is "a human and a horse". It would be good to briefly include such studies or discussion in the paper, for example in the limitations section. - The paper does not release the crawled negation-prompt dataset. The method clusters the prompts into k concepts. It is unknown how the clustering method or the crawling method affects the effectiveness of the PROTORE method. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: How effectively do you think the current method could handle complex negation with compositional or relational information? How will the crawling or clustering method affect PROTORE? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: The paper explained and included the limitation section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! Below we address your questions and concerns. Please feel free to post additional comments if you have further questions. 1. Thank you for bringing this to our attention. It is imperative to acknowledge that our current approach may exhibit diminished effectiveness when encountering user prompts that involve intricate negation with compositional or relational information. The generated images may therefore exhibit instances of attribute leakage, wherein characteristics of one entity, such as a horse, mistakenly manifest in another, like a person, as well as occurrences of missing objects, erroneously omitting human or equine elements, and so forth. We recognize that the capabilities of CLIP (Contrastive Language-Image Pretraining) in processing textual prompts do have an impact on the overall performance of the methods presented in our paper. We will discuss these limitations in our paper and foster further investigation in future work. 2. We employ three distinct clustering methods to derive the prototype prompts. In the case of single-label to single-class refusals (e.g., ImageNet), we utilize the embedding of the corresponding label as the prototype prompt. For multiple-label to multiple-class refusals (e.g., the I2P dataset), the clustering center is computed using K-means, which then serves as the prototype prompt. As for refusals of multiple dependent concepts (as depicted in Figure 4), we combine all the concepts using commas or the word "and" to form the prototype prompt. It is crucial to acknowledge that discrepancies or variations in semantic descriptions within the same class can significantly impact the computation of cluster centers. For instance, when considering the topic of "violence," there exist numerous descriptions that encompass different facets, such as the intentional use of physical force or power, potential harm, and psychological consequences, among others. The presence of such diverse descriptions poses challenges in deriving an appropriate prototype prompt using the K-means algorithm. In the context of this paper, our focus lies on investigating relatively uncomplicated scenarios that involve the rejection of well-defined objects or styles. Nonetheless, we are aware of the necessity to address more intricate situations, where concepts with complex abstract semantics are involved. As part of our future research efforts, we will explore methodologies to effectively eliminate or handle these intricate concepts, thereby enhancing the robustness and accuracy of our approach. --- Rebuttal Comment 1.1: Comment: Thank you for the response. The response addressed my concerns.
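A minimal sketch of the K-means prototype construction described above, assuming pre-computed, L2-normalized CLIP text embeddings of the crawled negative prompts (the crawling pipeline itself is not released, so the input here is an assumption):

```python
import numpy as np
from sklearn.cluster import KMeans

def prototype_prompts(text_embeddings, k):
    """Cluster CLIP text embeddings of negative prompts into k groups and
    use the centers as prototype prompts (the multiple-label case above).
    `text_embeddings` is an assumed (N, d) array of unit-norm features."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(text_embeddings)
    centers = km.cluster_centers_
    # Re-normalize so prototypes live on the same unit sphere as CLIP features.
    return centers / np.linalg.norm(centers, axis=1, keepdims=True)
```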
Summary: This paper tackles the problem of concept negation in the context of image editing (i.e., excluding user specified concepts in the generated image). To this end, this paper proposes ProtoRe, which consists of three steps: 1) first a collection of negative prompts are encoded using CLIP and eventually aggregated, 2) retrieve the model’s output features based on the negative concepts, and 3) refine the attention map using the retrieved negative features. In the experiments, this approach is evaluated in terms of accuracy on erased classes (lower is better) and other classes (higher is better). Overall, this approach outperforms the baselines such as Stable Diffusion, composable diffusion model, and safe latent diffusion (with a few exceptions). Strengths: - This work tackles an important problem of controlling contents generated by text-to-image generation models. The approach is kept simple yet effective on the target task. - The experimental results (both quantitative and qualitative results) suggested the effectiveness of this approach. Weaknesses: - Evaluating generated outputs can be challenging, so it would be beneficial to include more detailed discussions and analyses of both successful and unsuccessful cases. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - This paper might be related: Imagen editor and editbench: Advancing and evaluating text-guided image inpainting (https://openaccess.thecvf.com/content/CVPR2023/papers/Wang_Imagen_Editor_and_EditBench_Advancing_and_Evaluating_Text-Guided_Image_Inpainting_CVPR_2023_paper.pdf) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! The paper "Imagen Editor and EditBench: Advancing and Evaluating Text-Guided Image Inpainting" centers around a distinct task known as text-guided image editing (TGIE). Unlike complete image generation, TGIE involves the editing of pre-existing or captured visuals. In this process, the user provides three inputs: 1) the image to be edited, 2) a binary mask specifying the edit region, and 3) a text prompt—all of which guide the output samples. Due to this fundamental difference, we recognize that the relationship between TGIE and our approach is limited. Nevertheless, both TGIE and our method share the common objective of achieving efficient, automated, and controllable visual generation. Please feel free to post additional comments if you have further questions.
Summary: The paper is concerned with stable diffusion of images guided by a prompt but with stipulated concepts not present. In particular, it is concerned with situations where it is unclear how the prompt may be visualized without visualizing the banned concepts. The method works by incrementally pushing the generated image's latent feature away from a feature vector representing the negative concepts, where the negative feature vector has been obtained by expanding the negative concept prompts using a relevant corpus. Strengths: A very important live topic urgently requiring effective solutions. Attempts to deal with the realistic scenario where there is considerable overlap between the positive and negative prompts. Some results are impressive. Weaknesses: The paper is not easy to follow. In particular, Figures 1 and 3 are not clarifying. Insufficient clarifying discussion on what the system should output when positive and negative prompts overlap. Results are insufficiently discussed - for example, readers need guidance to appreciate Fig. 5. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: On the lack of a clear aim for the system: what would a good result be for the exemplar 'mickey mouse eating an ice cream' - 'mickey mouse'? Would a good result be a non-Mickey mouse eating an ice cream, for example 'minnie mouse', or a non-cartoon mouse, or a person rather than a mouse? Visualizing P alone is straightforward (maximize semantic relatedness to P); avoiding N is similarly straightforward (minimize semantic relatedness to N); but together it is not clear, and surely some balancing is needed - can this be safely done? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Any system such as this has the potential for mis-use. Social manipulation could be achieved by covert banning of certain concepts, e.g. 'police violence'. The fact of mis-use and the need to balance against potential benefits should be noted. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! Below we address your questions and concerns. Please feel free to post additional comments if you have further questions. 1. Thank you for your attentive reading. Positive and negative concepts can only overlap at the level of the textual prompt, occurring simultaneously in a sentence; a concept cannot be both positive and negative at the same time. Building upon this premise, the objective of refusal is to retain positive concepts while removing negative ones. One plausible approach to achieve this is to design a distribution that is inversely proportional to the likelihood of the negative concept (as shown in Equation (3)). This results in placing high likelihood on positive concepts and low likelihood on negative concepts, aligning with the goal of concept negation. To illustrate, consider a text prompt like "mickey mouse eating an ice cream" and a pre-defined negative concept "N = {mickey mouse}". The remaining part of the prompt becomes the positive concept "P = {eating an ice cream}". In this scenario, a successful generative result would include the ice cream but exclude Mickey Mouse. While personalized generation that aims at producing specific content is not our focus, a non-Mickey-Mouse outcome or a missing object is acceptable. Our primary goal is to prevent the generation of negative content and ensure the proper generation of the positive concepts from the given prompt. However, the discussion of what content should replace the removed negative concepts is beyond the scope of this paper. For the weaknesses: We will make revisions to the descriptions of Figures 1, 3, and 5 to enhance clarity. Additionally, Figure 5 will include more illustrative notes for better understanding. --- Rebuttal Comment 1.1: Title: rebuttal response Comment: The subtlety of X but not Y has not been addressed, even by discussion without resolution, leaving the reader with the impression that this is a non-issue when imho it is. The authors seem only to consider the scenario where Y is actually named in X, but in this case why not just remove the words from X? As I said before, this type of approach will need to balance realizing X against not realizing Y. --- Reply to Comment 1.1.1: Title: Further Clarification Comment: Thanks for the question. To clarify: In this work, we examine two scenarios involving the explicit mention of Y within X and its absence from X (as depicted in Figure 4, middle and right). Concerning the former scenario, rephrasing X to remove Y presents itself as an intuitive solution. Nonetheless, this approach encounters two challenges: 1. Creating a comprehensive list of prohibited terms encompassing abstract concepts like violent gore, specific art styles, etc., may pose difficulties (as illustrated in Figure 4, right); 2. It inadequately addresses the latter situation, in which Y is not overtly present in X but emerges serendipitously in the generated output due to the pre-training knowledge of the diffusion model (as shown in Figure 4, middle). Our proposed method demonstrates its efficacy in effectively navigating these intricate circumstances. In essence, our method aims to achieve a harmonious state of "X without Y" within the attention map. This involves extracting Y through the cross-attention module and subsequently excluding it from the feature space, while preserving the residual features in an unaltered state.
While this achieved equilibrium proves effective in the majority of scenarios, more intricate situations (such as Y representing a substantial object or serving as the image background) call for a more refined equilibrium (as discussed in the Limitations section). Hopefully that addresses your question. Please feel free to ask if anything remains unclear.
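A loose sketch of the "X without Y" refinement described in this thread: the negative prototype is used as a cross-attention query to locate Y in the intermediate features, and the attended component is pushed out while the residual is preserved. The attention form and step size below are assumptions, not the paper's Equation (7).

```python
import torch
import torch.nn.functional as F

def suppress_negative(features, proto, step=0.1):
    """features: (B, N, d) intermediate tokens; proto: (d,) negative
    prototype. Attend with the prototype to locate the negative concept,
    then subtract a small step of the attended component; this form is an
    illustrative assumption, not the paper's exact refinement rule."""
    attn = F.softmax(features @ proto / proto.norm(), dim=1)  # (B, N): where Y lives
    neg_component = attn.unsqueeze(-1) * proto                # (B, N, d)
    return features - step * neg_component
```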
null
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes ProtoRe as a method for getting generative image models to refuse to show a certain concept. It incorporates the CLIP model’s knowledge to develop a prototype, and then refuses to generate (in effect) that prototype. The paper then presents a number of benchmarks showing ProtoRe’s performance both in generating the intended output and in avoiding concepts that are supposed to be avoided. Strengths: The literature review is thorough, with other techniques described in detail. The topic is important and timely and would be of general interest to the NeurIPS community. The paper is clearly written overall. The approach is interesting. Weaknesses: The paper focuses on easily recognizable concepts. It would be good to know whether it generalizes to less easily recognizable concepts. Quantitative performance seems strong, but it would be helpful to know how the qualitative evaluations (e.g., Figure 4) were selected. - l260: Repetition of saying Fig. 4 shows qualitative results Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. The ImageNet subset chosen is 10 classes that are easily recognizable. Are the methods robust to using less easily recognizable concepts? 2. Figure 4 shows a number of qualitative refusals. How were these chosen and to what extent were they cherrypicked? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: 1. The ImageNet subset chosen is 10 classes that are easily recognizable. Are the methods robust to using less easily recognizable concepts? 2. Figure 4 shows a number of qualitative refusals. How were these chosen and to what extent were they cherrypicked? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! Below we address your questions and concerns. Please feel free to post additional comments if you have further questions. 1. Regarding less easily recognizable concepts, we divide them into three groups: small objects, large objects (including backgrounds), and abstract concepts. Our method demonstrates strong performance in handling small objects and abstract concepts, as illustrated in Figure 4 (b) and (c), respectively. However, due to the absence of pre-trained classifiers, quantitative results are not available. On the other hand, our method is comparatively less effective in dealing with large objects, such as a church. This limitation can be attributed to the relatively small refinement step size employed in Equation (7). 2. Figure 4 showcases six refusal examples in columns. The text prompts used are randomly chosen from the internet. Columns 1, 2, 3, 5, and 6 did not require deliberate selection. Additional results under various random seeds will be provided in the Appendix. In column 4, the text prompt is "A dog is playing in a park", and the refusal concept is "ball". Since the term "ball" does not explicitly appear in the text prompt, most of the generated images conditioned on this prompt do not contain the refusal concept. We carefully selected the images that do include the concept of "ball" and examined their refusal results. This evaluation of implicit concept refusals helps prevent their accidental appearance in images when using regular prompts.
Summary: This paper introduces an innovative concept negation approach designed to effectively eliminate violent, unethical, or copyright-infringing content from generated images in diffusion-based models. The method operates during inference and stands apart from existing techniques by grounding concept negation in natural language. This unique characteristic enables defenders to easily specify which concepts should be considered harmless or offensive. To achieve this, the proposed method utilizes CLIP to generate the prototype of negative concepts and identifies the corresponding features associated with these concepts. These identified features are then utilized to refine the attention maps, purifying the content of negative concepts in the feature space. Extensive evaluation of the proposed method has been conducted across multiple benchmarks, demonstrating its superior performance compared to state-of-the-art methods. Notably, the proposed approach exhibits enhanced purification effectiveness and significantly improves the fidelity of generated images. Strengths: (1) In this paper, the authors introduce a novel "Prototype, Retrieve, and Refine" refusal method, designed to effectively eliminate violent, unethical, or copyright-infringing content from generated images during inference. (2) Notably, the proposed method enables the definition of negative concepts using natural language, making it easy for humans to specify which concepts should be considered objectionable. (3) Extensive experimentation on multiple benchmarks showcases the superiority of the proposed PROTORE over existing approaches in terms of purification effectiveness and the fidelity of generated images across various settings. Weaknesses: (1) The paper introduces a method with promising potential; however, it would greatly benefit from stronger theoretical underpinnings. While the authors offer some intuitive explanations, they fall short of providing fully convincing arguments to support their approach. (2) The architectural design presented in the paper appears solid, but it lacks sufficient detail, especially concerning the operational aspects of the "Prototype, Retrieve, and Refine" process. More comprehensive elucidation of these three steps would enhance the paper's clarity. (3) An observed limitation of the proposed method lies in its performance with regard to removing large objects, such as backgrounds, from the generated images. Further investigation and improvement in this aspect would be valuable for its practical applicability. (4) The creation of prototypes for negative concepts might be restricted in capturing all possible textual prompts related to these concepts. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: (1) In Equation (1), the mean of the Gaussian distribution is $x_{t} + \sum_{i} \epsilon_{\theta}^{i}(x_{t}, t)$. However, the authors mention that they followed the work of “Compositional Visual Generation with Composable Diffusion Models”, where the mean is defined as $x_{t} - \sum_{i} \epsilon_{\theta}^{i}(x_{t}, t)$. The $\epsilon_{\theta}^{i}$ could be either the score function or the denoising direction in the context of DDPM. These two values differ by a negative sign. Could you explain this? (2) Can you explain the clustering process in detail, as in Equation (4)? (3) I don't quite understand the example of “a Mickey Mouse is eating ice cream”.
If “Mickey Mouse” is considered a negative concept, filtering it out completely would result in undesired images. Otherwise, is it possible to generate another cartoon character eating ice cream? If yes, it still raises copyright issues. How did you deal with this issue? (4) At each time step $t$ of the proposed algorithm, the value $A_t$ is computed using a pre-trained diffusion model as $A_{t}=\psi (z_{t}, \textbf{\textit{c}}, t)$. Is $A_t$ used to denote the output features produced by cross-attention operations, as in the U-net architecture used by Stable Diffusion? After that, the algorithm feeds the modified attention map $A_{t}^*$ to the diffusion model, which generates the final denoised latent representation $z_{t-1}^*$. It seems that the above steps differ from the sampling process used in traditional diffusion models. Could you please provide more explanation about this? (5) Does the proposed method slow down or speed up the sampling of the diffusion model? Could you report the computational complexity of the sampling process? (6) Can you provide some examples produced by the proposed method on the I2P dataset? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: (1) The creation of prototypes for negative concepts might be restricted in capturing all possible textual prompts related to these concepts. Addressing this limitation and devising a more comprehensive approach to prototype generation would bolster the method's efficacy and accuracy. (2) It seems that the proposed method performs worse at removing large objects (such as backgrounds) in the generated images. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! Below we address your questions and concerns. Please feel free to post additional comments if you have further questions. (1) In this context, the $\epsilon_\theta^i$ signifies the output of a noise predictor implemented by a neural network. The plus or minus signs, regardless of their usage, only serve to signify changes in the noise distribution. For consistency with previous work, we revise the plus sign to a minus sign. (2) We employ three distinct clustering methods to derive the prototype prompts. In the case of single-label to single-class refusals (e.g., ImageNet), we utilize the embedding of the corresponding label as the prototype prompt. For multiple-labels to multiple-class refusals (e.g., I2P datasets), the clustering center is computed using K-means, which then serves as the prototype prompt. As for multiple dependent concepts refusals (as depicted in Figure 4), we combine all the concepts using commas or the word "and" to form the prototype prompt. (3) Given a text prompt such as "mickey mouse eating an ice cream" and a pre-defined negative concept represented by "N = {mickey mouse}", the remaining part of the prompt becomes the positive concept denoted by "P = {eating an ice cream}". In a desirable generative outcome, the result would contain the ice cream but exclude mickey mouse. While personalized generation targeting specific content is not our focus, the generation of a non-mickey mouse result or even an incomplete object is acceptable. Our primary objective is to prevent the generation of negative content and ensure the proper generation of positive concepts from the given prompt. However, the discussion of what content should be used to replace the removed negative concepts falls beyond the scope of this paper. Generating another cartoon character may still raise copyright issues. To tackle this problem, our approach can efficiently remove multiple concepts simultaneously (as demonstrated in Figure 4, left). Service deployers have the option to list multiple copyrighted characters, ensuring they do not appear in the generated image. An additional noteworthy benefit of this approach is its adaptability to the expiration dates of copyright on cartoon characters. Deployers can flexibly adjust the removal of characters (adding or deleting) in accordance with policies and regulations at any given time. (4) $A_t$ represents the output features generated through cross-attention operations, serving as an intermediate result of submodules within the U-net used by Stable Diffusion. Our approach does not alter the sampling rule for the diffusion model (e.g., the denoising process). Instead, we introduce modifications to $A_t$ specifically intended to remove certain concepts present within it. As these modifications are solely applied in the cross-attention module, our method can be viewed as a "restricted" conditional generation approach. (5) Our method has little effect on the sampling of diffusion models. Specifically, we introduced an extra ATTENTION operation with dimensions ranging from (2, 64, 320) to (2, 4096, 320) in the 16 CROSS ATTENTION modules of the diffusion model. With GPU acceleration, the computational overhead of constant-order attention is negligible. 
We compared the time overhead of generating 100 images before and after applying our approach on a single RTX 3090:

| user prompt | negative concept | whole inference time | excluding model loading |
| :----: | :----: | :----: | :----: |
| a basket of fruit | (none) | 551.3771 (s) | 535.2297 (s) |
| a basket of fruit | grape | 566.4718 (s) | 554.9790 (s) |

(6) As per your suggestions, we will include additional experimental results on the I2P dataset in the appendix. --- Rebuttal 2: Title: Thank you for your responses Comment: I would like to express my gratitude for your responses addressing my inquiries. I will make the necessary adjustments to my evaluations based on the responses you've provided.
null
null
null
null
On the Gini-impurity Preservation For Privacy Random Forests
Accept (spotlight)
Summary: This paper presents a novel encryption mechanism for random forest algorithms that preserves the Gini impurity metric. The authors provide theoretical evidence demonstrating that the proposed mechanism effectively preserves the Gini impurity and withstands attacks. Additionally, the authors have conducted comprehensive experiments to evaluate the accuracy and security of the proposed mechanism. Strengths: This paper presents novel and intriguing findings. The proposed encryption mechanism is innovative, and the experiments conducted seem to be extensive and thorough. Additionally, the experimental results demonstrate the promising performance of the proposed mechanism in terms of both accuracy and security. Weaknesses: 1. It appears that there are concerns regarding the solidity of the proof for the theoretical guarantee in the security analysis (Theorem 3). After reviewing the proof provided in the appendix, it is evident that the proof relies on experimental analysis utilizing different random seeds rather than a traditional mathematical proof. This raises questions about the rigorousness of the proof and its validity. 2. The paper requires improvement in terms of writing quality. In addition to the typos and grammar mistakes listed below, it is advisable for the authors to provide more background knowledge to enhance the reader's understanding. Furthermore, I suggest that the authors explain the motivation behind the proposed mechanism (Equations (4) and (5)) using more informal language to aid comprehension. I have noticed some typos and grammar mistakes in the abstract and introduction sections. However, I also observed grammar errors throughout the entire paper. 1. Line 1, "algorithms" should be "algorithm". 2. Line 3, "from anonymization" might be "such as anonymization". 3. Line 4, "it rarely takes into account" should be changed to "they rarely take into account". 4. Line 8, "encrypt data features" should be changed to "encrypt the data features". 5. Line 14, "effectiveness, efficiency and security" should be changed to "effectiveness, efficiency, and security". 6. In the first sentence (Line 16), "one successful ensemble algorithms" should be "one successful ensemble algorithm". 7. Line 30, "cryptosystems" should be "cryptosystem". 8. Line 22, "ingredients" should be "ingredient". 9. Line 61, "Given encryption function" should be "Given an encryption function". "and decryption function" should be "and a decryption function". "HE scheme" should be "the HE scheme" Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: Based on the concerns regarding the unsound theoretical guarantee and unclear writing, it seems that the current version of the paper may not be suitable for publication in NeurIPS. While the paper presents interesting results, addressing these issues is crucial for ensuring the quality and suitability of the paper for a prestigious conference like NeurIPS. It is recommended that the authors revise and improve the theoretical guarantee and the clarity of the writing before considering submission to NeurIPS or any other reputable publication venue. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 1 poor Presentation: 2 fair Contribution: 3 good Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: [Q1] … regarding the solidity of the proof for the theoretical guarantee in the security analysis (Theorem 3) … the proof relies on experimental analysis utilizing different random seeds rather than a traditional mathematical proof. This raises questions about the rigorousness of the proof and its validity. [A1] We will clarify that Theorem 3's proof follows the cryptographic framework of (Kerschbaum et al., 2015), which shows that Algorithm 1 yields the same ciphertext for any random seed and for any plaintext sequences $\{a_i^0\}$ and $\{a_i^1\}$; it does not rely on experimental analysis. We can also present a rigorous, traditional mathematical proof, inspired by Popa et al. (2013). The basic idea is to prove, by induction on $n$, that $P(\{[a_0^0],\ldots,[a_i^0]\} \mid \{a_0^0,\ldots,a_i^0\}) = P(\{[a_0^1],\ldots,[a_i^1]\} \mid \{a_0^1,\ldots,a_i^1\})$ for $i=0,\ldots,n$, and then to restrict an attacker's chance of a correct guess to no more than 1/2. Here, $[a]$ denotes the ciphertext of $a$. [Q2] I suggest that the authors explain the motivation behind the proposed mechanism (Equations (4) and (5)) … [A2] We will clarify that our motivation is that the Gini impurity is not changed by encrypting adjacent plaintexts with the same label into an identical value, as shown in Theorem 1. Hence, Eqn. (4) sorts and partitions plaintexts into different groups based on label information, and Eqn. (5) encodes different values within each group into the same ciphertext. We will improve the writing quality and provide more background knowledge according to your suggestions. Thank you. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed comments. I would like to improve my rating to a 5, as I still have some concerns regarding your proof of Theorem 3. --- Reply to Comment 1.1.1: Comment: Dear Reviewer PTBU, We have presented a traditional mathematical proof for Theorem 3, partially motivated by (Popa et al., 2013). We will add the detailed proof in the Appendix, and we give some proof sketches as follows: Let $\{a_1^0,\ldots,a_n^0\}$ and $\{a_1^1,\ldots,a_n^1\}$ be two sequences of plaintexts sorted in ascending order, with corresponding labels $\{y_1,\ldots,y_n\}$ drawn i.i.d. from a uniform distribution. Theorem 3 follows if we can prove, by induction on $n$, that $$P([a_1^0],\ldots,[a_i^0] \mid (a_1^0,y_1),\ldots,(a_i^0,y_i)) = P([a_1^1],\ldots,[a_i^1] \mid (a_1^1,y_1),\ldots,(a_i^1,y_i)) \quad \text{for } i=1,\ldots,n, \qquad (\text{I})$$ where $[a]$ denotes the corresponding ciphertext of plaintext $a$. For $n=1$, it is easy to prove that $P([a_1^0]=c_{\max}/2 \mid (a_1^0,y_1)) = P([a_1^1]=c_{\max}/2 \mid (a_1^1,y_1)) = 1$, and Eqn. (I) holds trivially, since $[a_1^0] = [a_1^1] = c_{\max}/2$ from Algorithm 1, where $c_{\max}$ is a given large number as in (Kerschbaum et al., 2015). We assume that Eqn. (I) holds for $n=i$ $(i \geq 1)$, and we prove the case $n=i+1$. It suffices to consider two cases: i) If we do not need to split a node for the data points $(a_{i+1}^b, y_{i+1})$ $(b=0,1)$, then the constructed binary search trees have the same structure for $b=0$ and $b=1$ by the induction assumption. We have $[a_{i+1}^0]=[a_{i+1}^1]$ from the identical binary search trees and the consistent order of the plaintexts, which proves Eqn. (I) for $n=i+1$. ii) If we need to split a node for the data points $(a_{i+1}^b, y_{i+1})$ $(b=0,1)$, then the data points $(a_{i+1}^0,y_{i+1})$ and $(a_{i+1}^1,y_{i+1})$ fall into the corresponding node of the binary search trees from Algorithm 1 (Section 3.2).
According to Algorithm 2 (Section 3.2), we obtain new nodes $l^0$, $r^0$ and $l^1$, $r^1$. By the induction assumption, the adversary obtains the same information for $b=0$ and $b=1$ when $n\leq i$, and hence the ciphertexts of nodes $l^0$ and $l^1$ follow the same distribution. We have $$P(l^0.\mathrm{cipher}_1 \mid (a_1^0,y_1),\ldots,(a_i^0,y_i),(a_{i+1}^0,y_{i+1})) = P(l^1.\mathrm{cipher}_1 \mid (a_1^1,y_1),\ldots,(a_i^1,y_i),(a_{i+1}^1,y_{i+1})).$$ We obtain a similar result for $r^0$ and $r^1$. For $n=i+1$ and $b=0,1$, we finally have $$P([a_1^b],\ldots,[a_i^b],[a_{i+1}^b] \mid (a_1^b,y_1),\ldots,(a_i^b,y_i),(a_{i+1}^b,y_{i+1})) = P([a_1^b],\ldots,[a_i^b] \mid (a_1^b,y_1),\ldots,(a_i^b,y_i)) \cdot P(l^b.\mathrm{cipher}_1 \mid (a_1^b,y_1),\ldots,(a_{i+1}^b,y_{i+1})) \cdot P(r^b.\mathrm{cipher}_1 \mid (a_1^b,y_1),\ldots,(a_{i+1}^b,y_{i+1})),$$ which proves Eqn. (I) after some substitutions.
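To make the scheme behind this proof concrete, here is a minimal, hypothetical Python sketch of the Kerschbaum-style (2015) order-preserving encoding that Algorithm 1 builds on: each plaintext's ciphertext is the midpoint of the interval bounded by the ciphertexts of its predecessor and successor, starting from $[0, c_{\max}]$. The function name and the linear predecessor/successor scan are illustrative only; the actual scheme maintains a binary search tree for $O(\log n)$ insertion, and the paper's variant additionally groups adjacent same-label plaintexts before encoding.

```python
def encrypt_sequence(plaintexts, c_max):
    """Order-preserving midpoint encoding (illustrative sketch, not the paper's exact Algorithm 1)."""
    ciphers = {}  # plaintext -> ciphertext
    for a in plaintexts:
        if a in ciphers:
            continue
        low, high = 0, c_max
        for b, c in ciphers.items():
            if b < a:
                low = max(low, c)    # tightest lower bound: predecessor's cipher
            else:
                high = min(high, c)  # tightest upper bound: successor's cipher
        ciphers[a] = (low + high) // 2  # deterministic midpoint, as used in the proof above
    return [ciphers[a] for a in plaintexts]

# Example: the first inserted value always encrypts to c_max // 2,
# matching the base case P([a_1] = c_max / 2) = 1 in the induction.
print(encrypt_sequence([5, 2, 9], c_max=2**20))  # [524288, 262144, 786432]
```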
Summary: This work develops a new encryption scheme to maintain the data's Gini impurity and combines it with CKKS to perform random forest training and prediction, outperforming other SOTAs in terms of the trade-off between latency and security level. Strengths: 1. This paper provides solid technical details and corresponding theoretical proofs. 2. The authors give extensive experimental results on a wide range of datasets, which shows the scalability of the proposed methods. Weaknesses: 1. One weakness would be the limited application scenario. It would be better to have some variants for other applications. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Same as the weakness. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Same as the weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: [Q1] One weakness would be the limited application scenario. It would be better to have some variants for other applications. [A1] We will clarify that our work takes the first step in designing encryption around a learning ingredient, i.e., the Gini impurity, and we can present variants for other statistics such as the Gini index and information gain, where the key idea is to show the piecewise monotonicity of the information statistic w.r.t. the splitting point. Those schemes can be applied to more learning scenarios such as Boosting, GBDT, LightGBM, etc. We will improve this work according to your suggestions. Thank you.
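For readers less familiar with the statistic being preserved, here is a minimal reference implementation (function names are our own, for illustration) of the Gini impurity and the weighted impurity of a threshold split — the quantity the encryption must leave computable on ciphertexts:

```python
import numpy as np

def gini_impurity(labels):
    """Gini impurity 1 - sum_k p_k^2 of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_impurity(feature, labels, threshold):
    """Weighted Gini impurity of splitting one feature at `threshold`."""
    mask = feature <= threshold
    n, n_left = len(labels), mask.sum()
    left = gini_impurity(labels[mask]) if n_left else 0.0
    right = gini_impurity(labels[~mask]) if n - n_left else 0.0
    return (n_left / n) * left + ((n - n_left) / n) * right
```

Because adjacent same-label plaintexts contribute identically to every candidate split, encoding them into one ciphertext (the idea behind Eqns. (4)–(5), per the rebuttal above) leaves the minimum of `split_impurity` over thresholds unchanged — the preservation property Theorem 1 formalizes.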
Summary: This work investigates the privacy-preserving random forest, and the main contributions include an interesting property-preserving encryption method that can preserve the data's Gini impurity. Based on the proposed encryption method, the authors present an innovative privacy-preserving training and prediction algorithm for random forests in a client-server protocol, which achieves the smallest communication and computational complexities in comparison with prior works. The method is supported by theoretical results, which provide justifications for preserving the data's Gini impurity and show that the proposed algorithm is secure against the Gini-impurity-preserving chosen plaintext attack. Finally, extensive experiments on various datasets are conducted to demonstrate the efficacy, efficiency, and security of the proposed method. Strengths: This paper is well written and easy to follow. It provides novel algorithms and solid theory with encouraging empirical results. The main idea of designing a new encryption algorithm for decision trees that preserves the minimum Gini impurity is interesting and convincing to me. - Novel perspective on privacy-preserving machine learning. This paper introduces a novel perspective on privacy-preserving random forests by focusing on preserving the minimum Gini impurity in encryption, since the Gini impurity is exactly the information that random forests require during training and inference. This sheds new light on the design of privacy-preserving machine learning algorithms with corresponding new property-preserving encryption methods, which is worthy of systematic further study for the potential enhancement of the efficacy and accuracy of privacy computing. - Innovative solution with solid analysis. The idea of the proposed minimum-Gini-impurity-preserving encryption is innovative and interesting. It uses a new binary search tree to encrypt data dynamically by incorporating label information, which is a novel way to encrypt data for decision trees, and the proposed encryption method incurs little additional computational cost. The communication complexity, security, and correctness of the proposed method are also well analysed. - Well-founded experimental results. The empirical study provides accuracy and running time comparisons with various relevant methods, as well as a visual analysis of security. The encouraging results show that the proposed method can effectively achieve a better balance among accuracy, communication, running time, and data security. Weaknesses: Despite many strengths, there are several points where certain details could be further improved. - The paragraph about the "Gini-impurity-preserving chosen plaintext attack" feels somewhat abstract. As it relates to the security analysis of the proposed method, additional explanation could help reduce the reading difficulty for readers. Including a more detailed explanation in the appendix might make this critical point more accessible. - The interpretation of Figure 3 and the bitwise leakage matrices is a little confusing to me. It is not very clear to me what the intuition behind such matrices is. It may be beneficial to provide a brief introduction to them to help readers comprehend Figure 3 more easily. - The introduction to homomorphic encryption and CKKS seems brief. More details may be provided in the appendix for readers unfamiliar with homomorphic encryption. - In Figure 6 from the appendix, the "Differential Privacy" label at the top is blocked by the figure of bitwise leakage matrices.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Homomorphic encryption is used to encrypt labels, and it is known that homomorphic encryption has a high computation overhead; does this impact the efficacy of the proposed method? - As shown in Table 3, the proposed encrypted random forests perform better than the original random forests on two datasets. Can you provide an explanation for the improved efficacy on these datasets? - Can the proposed Gini-impurity-preserving encryption method be extended to the Gini index or information gain? - In Algo. 1, will the selection of $c_{\max}$ impact the security of the proposed method? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have mentioned the limitations in the security of the proposed method, such as that the encryption method is secure against the Gini-impurity-preserving chosen plaintext attack, and that the server is honest-but-curious. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: [Q1] Homomorphic encryption is used to encrypt labels … high computation overhead … does this impact the efficacy of the proposed method? [A1] We will clarify that the encrypted labels (under homomorphic encryption) are only used for majority voting in random forests, and this does not require bootstrapping in HE or heavy computation, since only 3-depth homomorphic multiplication is needed. In contrast, previous HE work (Wu et al., 2020; Akavia et al., 2022) incurs expensive computation costs for bootstrapping during training and prediction. [Q2] … Table 3, the proposed encrypted random forests perform better than the original random forests on two datasets … explanation … [A2] We will clarify that our method can achieve better performance on some noisy datasets, since we encrypt different plaintext features into one ciphertext feature, which can improve robustness to noise. [Q3] Can the proposed Gini-impurity-preserving encryption method be extended to the Gini index or information gain? [A3] We will clarify that our Gini-impurity-preserving encryption can be generalized to other information-theoretic statistics such as the Gini index or information gain, and the key problem is to show the piecewise monotonicity of the information statistic w.r.t. the splitting point. [Q4] In Algo. 1, will the selection of $c_{\max}$ impact the security of the proposed method? [A4] We will clarify that we select $c_{\max} = 2^{\lambda \log_2 n}$ (with $n$ the number of plaintexts) and $\lambda > 6.4$, as done in (Kerschbaum et al., 2015); a smaller $c_{\max}$ does not guarantee the ciphertext's accuracy within the expressible range of the computer. We will provide more details on CKKS, the Gini-impurity-preserving chosen plaintext attack, and the bitwise leakage matrices, and improve Figure 6 according to your suggestions. Thank you. --- Rebuttal Comment 1.1: Comment: The rebuttal has solved my concerns.
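As a concrete illustration of the label-encryption step described in [A1], here is a minimal, hypothetical sketch using the TenSEAL library (our choice for illustration; the paper does not specify an implementation). Each tree's vote is an encrypted one-hot vector, the server aggregates votes with ciphertext additions only, and the client decrypts and takes the argmax. Note that the paper's scheme uses up to 3-depth homomorphic multiplications; this sketch shows only the additive tally for brevity.

```python
import tenseal as ts

# Client-side CKKS context (standard TenSEAL parameters)
ctx = ts.context(ts.SCHEME_TYPE.CKKS,
                 poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()

n_classes = 3
votes = [0, 2, 2, 1, 2]  # class predicted by each tree (plaintext here, for illustration)

# Client encrypts each vote as a one-hot vector
enc_votes = [ts.ckks_vector(ctx, [1.0 if k == v else 0.0 for k in range(n_classes)])
             for v in votes]

# Server: homomorphic additions only -- no bootstrapping needed
tally = enc_votes[0]
for ev in enc_votes[1:]:
    tally = tally + ev

# Client decrypts the tally and takes the majority class
counts = tally.decrypt()  # approximately [1.0, 1.0, 3.0] (CKKS is approximate)
print(max(range(n_classes), key=lambda k: counts[k]))  # -> 2
```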
Summary: This work focuses on security and privacy for machine learning, especially random forests and their splitting criterion, the Gini impurity. To this end, this work proposes a new encryption scheme for the features of the data, which is secure against chosen-plaintext attacks. Moreover, this work utilizes the homomorphic encryption scheme CKKS to encrypt the labels of the data for a secure random forest model. Strengths: 1. In Table 1, this work shows comparisons of the asymptotic analysis of training/predictive communication/computation costs and demonstrates that this work has advantages over other existing approaches. 2. As shown in Table 3, this work uses many datasets to compare with existing approaches, which convinces the audience of this approach's practicality. 3. BST encryption avoids computationally expensive sorting. Weaknesses: 1. Instead of putting the proofs in the appendix, it would be better to use a few sentences to sketch how the theorems are proved. 2. This work lacks a related work section walking through existing work. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Section 4, the authors mention [50, 72, 76, 77]. Since they relate to this work, do the authors consider updating the asymptotic analysis in Table 1? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: 1. A chosen-plaintext attack guarantee alone is somewhat insufficient for real-world applications. If the work could guarantee security against chosen-ciphertext attacks, it would be a stronger scheme. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: [Q1] In Section 4, the authors mention [50, 72, 76, 77]. Since they relate to this work, do the authors consider updating the asymptotic analysis in Table 1? [A1] We will clarify that our asymptotic analysis already includes [50] in Table 1, but does not include [72, 76, 77], since [76, 77] only focus on prediction without training, while [72] only studies an order-preserving scheme without tree training and prediction. We will update the relevant asymptotic analysis in Table 1 as follows:

| Scheme | Train. comm. rounds | Train. comm. bandwidth | Train. comp. (client) | Train. comp. (server) | Pred. comm. rounds | Pred. comm. bandwidth | Pred. comp. (client) | Pred. comp. (server) | Model privacy |
| :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| [72] | × | × | × | × | × | × | × | × | × |
| [76] | × | × | × | × | $O(1)$ | $O(1)$ | $O(1)$ | $O(\kappa)$ | √ |
| [77] | × | × | × | × | $O(1)$ | $O(1)$ | $O(1)$ | $O(\kappa)$ | √ |
| ours | $O(h)$ | $O(\kappa\bar{\jmath})$ | $O(\kappa)$ | $O(\kappa\bar{\jmath}\tau n)$ | $O(1)$ | $O(1)$ | $O(1)$ | $O(h)$ | √ |

Our method has a smaller predictive computational complexity ($O(h)$) on the server, since $h < \kappa$. [Q2] A chosen-plaintext attack guarantee alone is somewhat insufficient for real-world applications. If the work could guarantee security against chosen-ciphertext attacks, it would be a stronger scheme. [A2] We will clarify that targeting the chosen-plaintext attack is a trade-off between security and computation cost, and we can achieve security against chosen-ciphertext attacks by modifying Alg. 2, motivated by Boldyreva et al. (2012). The basic idea is to introduce additional random perturbations and split each $I_i$ in Alg. 2, although this incurs expensive computational and space costs for encrypting and training. We will add a section introducing relevant work and present proof sketches for the main theorems according to your suggestions. Thank you. --- Rebuttal Comment 1.1: Comment: Thanks for your table! It looks nice! I think this rebuttal helps!
null
NeurIPS_2023_submissions_huggingface
2023
Summary: This work focuses on the privacy of random forests. The authors develop a new encryption to preserve the Gini impurity of data, which plays an important role in the construction of random forests. Based on this new encryption, the authors introduce an effective algorithm for privacy random forests under the client-server protocol. Theoretical analysis is presented to guarantee the correctness and security of the proposed algorithm, and extensive experiments validate the effectiveness, efficiency, and security of the proposed algorithm. In particular, the privacy random forests achieve significantly better performance than previous privacy random forests via encryption, anonymization, and differential privacy, and are even comparable to the original plaintext random forests without encryption. Strengths: Random forests are a successful algorithm with diverse applications, and this work focuses on the privacy of random forests, which is an important problem and direction in machine learning. Previous methods on this issue use anonymization, differential privacy, and homomorphic encryption to preserve the privacy of random forests, whereas this work explores data encryption based on a crucial ingredient of the learning algorithm, i.e., the Gini impurity for random forests. 1. An interesting encryption with some potential directions: This work proposes an interesting encryption to preserve the minimum Gini impurity, which is a crucial ingredient in the construction of random forests. The authors also propose an interesting binary search tree for data encryption with $O(\log n)$ computational complexity, which is suitable for real-time or online implementation. This work may motivate new directions in data encryption derived from crucial ingredients of learning algorithms, and such research could build a bridge between machine learning and encryption. 2. An effective and efficient privacy-preserving random forest: Based on the Gini-impurity-preserving encryption, the authors present an effective and efficient privacy-preserving random forest under the client-server protocol. As can be seen in Table 1, the proposed random decision tree has the smallest communication and computational complexities during the training and prediction of random forests in comparison with previous privacy-preserving decision trees. 3. Some theoretical support for the proposed algorithm: The authors prove the preservation of the minimum Gini impurity in ciphertexts without decryption, which plays an important role in the construction of random forests. The proposed scheme also satisfies security against the Gini-impurity-preserving chosen plaintext attack. 4. Good empirical results: Extensive experiments show that the proposed encrypted random forests achieve significantly better performance than previous privacy random forests via encryption, anonymization, and differential privacy, and are comparable to the original (plaintext) random forests without encryption. The authors also show a good balance between computational cost and data security for the proposed algorithm. Weaknesses: 1. The English presentation should be improved over the whole submission. 2. The authors could present a more detailed analysis of the Gini-impurity-preserving chosen plaintext attack, with the necessary background knowledge on similar security threats. 3.
The authors could present the space cost of the proposed encryption algorithm, and it remains unclear whether the algorithm's memory overhead during encryption is significant, especially when dealing with large-scale datasets. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is the proposed method able to handle multiclass data? Does the number of classes impact the efficiency of the encryption method? If the number of classes is large, could this potentially lead to a sparsity problem? 2. In Algorithm 1, could the selection of the parameter c_max be inappropriate and subsequently affect the ciphertext? 3. I notice that a few methods presented in Table 1 were not included in the experimental comparisons. Could the authors elaborate on this? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors recognize certain constraints of their proposed method, including the security aspects highlighted in Theorem 3, and the general applicability across varying datasets as evidenced through experiments. However, I encourage the authors to review the proposed method to identify if any limitations persist, specifically those related to computational demands and sensitivity to hyperparameters. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: [Q1] Is the proposed method able to handle multiclass data? Does the number of classes impact the efficiency of the encryption method? If the number of classes is large, could this potentially lead to a sparsity problem? [A1] We will clarify that our method can deal with multiclass data efficiently when the number of classes is smaller than 1000, and we could use permutation and compression techniques for more classes to reduce the sparsity problem. [Q2] In Algorithm 1, could the selection of the parameter c_max be inappropriate and subsequently affect the ciphertext? [A2] We will clarify that the parameter $c_{\max} = 2^{\lambda \log_2 n}$ (with $n$ the number of plaintexts), where $\lambda > 6.4$ is the security parameter for ciphertexts, as in (Kerschbaum et al., 2015). The larger the $c_{\max}$, the better the security. It is not appropriate to select a smaller $c_{\max}$, since the ciphertext's accuracy may then exceed the expressible range of the computer. [Q3] … a few methods presented in Table 1 were not included in the experimental comparisons. [A3] We will clarify that Table 3 summarizes the experimental comparisons with the most representative privacy-preserving random forests of recent years; we omit comparisons with some methods in Table 1 due to page limitations, as well as their relatively weaker performance, which has been shown by previous work (Aminifar et al., 2021; Akavia et al., 2022). We will introduce the Gini-impurity-preserving chosen plaintext attack in more detail, present the space cost, and improve this work according to your suggestions. Thank you. --- Rebuttal Comment 1.1: Title: Awaiting Your Feedback on Authors' Rebuttal Comment: Dear Reviewer 2RPe, Thank you for your hard work. The Author-Reviewer discussion ends on August 21. The authors and I are eager to learn whether their responses have adequately addressed your concerns. You are encouraged to directly reply to the authors' rebuttal. Please note that this is a public thread. If you prefer to reply to me individually, please use the internal discussion thread. Kind Regards, AC
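For concreteness, a tiny sketch (the function name is our own) of the $c_{\max}$ choice described in [A2] of this rebuttal:

```python
import math

def c_max(n_plaintexts: int, lam: float = 6.5) -> int:
    # c_max = 2^(lambda * log2(n)); security parameter lambda > 6.4
    # per Kerschbaum et al. (2015), as cited in the rebuttal.
    return 2 ** math.ceil(lam * math.log2(n_plaintexts))

print(c_max(1024))  # 2^65 for n = 2^10 plaintexts
```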
null
null
null
null
null
null
Unbalanced Low-rank Optimal Transport Solvers
Accept (poster)
Summary: The authors focus on the problem of efficiently computing discrete Optimal Transport (OT) problems, which are known to have cubic complexity w.r.t. the number of input samples. They propose to approximate it by assuming that the transport plan is a low-rank matrix and to propagate this property through the computations, such that the complexity of matrix products is reduced. This idea has already been proposed to approximate Balanced OT and Gromov-Wasserstein (GW) problems. The authors extend this to variants called Unbalanced OT, as well as Unbalanced GW and Fused-Unbalanced GW. The authors introduce these unbalanced OT variants and derive, in each setting, the associated low-rank optimization algorithms using mirror descent. Then they perform experiments on brain and cell biology data, where unbalanced OT was extremely powerful, to assess the performance of these algorithms in an applied setting. Strengths: - The contributions might look incremental at first (combining unbalanced OT with low-rank OT), but are actually very thorough. They do not restrict themselves to this one combination, but consider all recent variants of unbalanced OT (Unbalanced OT, GW, and Fused GW) which have been developed in the literature in recent years. They also leverage a recent optimization idea, 'translation-invariance', which improves the computational efficiency of computing Unbalanced OT problems. All in all, the authors aggregate several works to propose a set of interesting algorithms and tools for practitioners, especially for the biology field. Weaknesses: - The low-rank approximation is not sufficiently discussed in this paper. I mentioned that it is considered to accelerate the computations. However, one might use this variant because it makes sense to have a prior of a low-rank transport plan in their applications, because the data has some structural property which could be leveraged. To my mind, using a low-rank plan seems to contradict the property that the optimal plan is a (full-rank) permutation in some settings. Could the authors discuss this in detail? Why does it make sense to have a low-rank plan? What kind of data fits such an assumption? This is unclear to me. - Some experimental illustrations could be provided on the methodological side of their algorithms, for completeness. The authors propose some optimization problems and algorithms to compute them. To my mind, the authors do provide the complexity per iteration of their algorithms, but the experimental aspect of measuring the time performance of their algorithms is omitted. In particular, have you checked experimentally that the 'translation-invariance' variant in Section 3.2 does accelerate the computations in your low-rank setting? Could you provide a plot of the time to compute OT problems (standard or Sinkhorn vs. low-rank) as a function of the number of samples? Also, the use of unbalanced OT involves two extra parameters $(\tau_1, \tau_2)$ which require cross-validation. Could the authors give plots of the performance on their biology task as a function of $(\tau_1, \tau_2)$? This would provide interesting insights on the interpretation of these parameters, and their impact on learning tasks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See my questions above. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: They addressed the societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
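Since the role of the marginal-relaxation parameters $(\tau_1, \tau_2)$ comes up repeatedly below, here is a minimal, hypothetical numpy sketch of entropic *unbalanced* Sinkhorn scaling iterations (in the style of Chizat et al., 2018; this is not the authors' low-rank solver). The exponents $\tau_i/(\tau_i+\varepsilon)$ interpolate between balanced OT ($\tau_i \to \infty$) and no marginal constraint ($\tau_i \to 0$):

```python
import numpy as np

def unbalanced_sinkhorn(C, a, b, eps=0.05, tau1=1.0, tau2=1.0, n_iter=500):
    """Entropic unbalanced OT with KL marginal penalties tau1, tau2 (sketch)."""
    K = np.exp(-C / eps)                   # Gibbs kernel
    u, v = np.ones(len(a)), np.ones(len(b))
    f1, f2 = tau1 / (tau1 + eps), tau2 / (tau2 + eps)
    for _ in range(n_iter):
        u = (a / (K @ v)) ** f1            # relaxed row-marginal update
        v = (b / (K.T @ u)) ** f2          # relaxed column-marginal update
    return u[:, None] * K * v[None, :]     # transport plan P = diag(u) K diag(v)

rng = np.random.default_rng(0)
x, y = rng.normal(size=(20, 2)), rng.normal(size=(30, 2))
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
P = unbalanced_sinkhorn(C, np.full(20, 1 / 20), np.full(30, 1 / 30))
print(P.sum())  # total transported mass need not equal 1 in the unbalanced setting
```

With small $\tau$, mass on outliers is discarded rather than forcibly matched, which is what makes the unbalanced variants attractive in the biology experiments discussed in the review.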
Rebuttal 1: Rebuttal: We thank you for your encouraging review. > **The low-rank approximation is not sufficiently discussed in this paper. I mentioned that it is considered to accelerate the computations. However, one might use this variant because it makes sense to have a prior of a low-rank transport plan in their applications, because the data has some structural property which could be leveraged. To my mind, using a low-rank plan seems to contradict the property that the optimal plan is a (full-rank) permutation in some settings. Could the authors discuss this in detail? Why does it make sense to have a low-rank plan? What kind of data fits such an assumption? This is unclear to me.** ⧐ Your intuition is indeed correct: in the general case, a coupling matrix $P^\star$ minimizing the linear objective in Eq. 1 (e.g. that returned by an LP solver) will be (i) sparse (up to $n+m-1$ non-zero values for an $n\times m$ matrix) and (ii) full rank. The idea of low-rank constraints and/or entropic regularization is therefore to move *away* from these properties. - For the low-rank approach, this is akin to requiring a "cluster" structure when computing transport (i.e. all of the mass must go through $r$ virtual points) - For entropic regularization, this forces all sources and targets to transfer mass between each other. Both can be seen as **inductive biases** when trying to match points in large dimensions, and result in substantial computational and statistical gains. Both result in a maximally diffuse / rank-1 transport coupling $ab^T$ in the limit, when $r=1$ or when the entropic regularization is infinite. These aspects are already well covered in the references provided in lines 45-50, which is why we did not use too much space in the introduction for these reminders. Following your comment (and with some space gained by moving some of the more technical aspects to the appendix), we will add the sentences above (abridged). > **To my mind, the authors do provide the complexity per iteration of their algorithms,** ⧐ We have improved this aspect, and have added compute complexities alongside iteration costs to be more explicit, as was done e.g. in [Scetbon'22]. > **but the experimental aspect of measuring the time performance of their algorithms is omitted. In particular, have you checked experimentally that the 'translation-invariance' variant in Section 3.2 does accelerate the computations in your low-rank setting?** ⧐ This is an excellent suggestion. We have now added various experiments that illustrate such speed-ups (see Figures 1 and 2 in the attached PDF). We agree this strengthens our submission and will continue exploring to what extent this TI modification is always a "win" (we have seen it become slightly less effective for very "loosened", i.e. low-$\tau$, values). For now, however, these figures reflect what we saw in practice in our experiments. > **Could you provide a plot of the time to compute OT problems (standard or Sinkhorn vs. low-rank) as a function of the number of samples?** ⧐ Because Sinkhorn and low-rank approaches solve slightly different problems, it is not easy to compare them in terms of timing. Note, however, that for the **balanced** case such comparisons were proposed for the linear OT problem in [Scetbon et al'20, Scetbon/Cuturi 22], whereas [Scetbon et al'22] compares them on the quadratic (GW) problem. In the unbalanced case, this is even harder, because there is not a 1-to-1 correspondence between their regularizers, convergence criteria, etc.
In other words, below 50k points, it's very easy to find a setup where unbalanced Sinkhorn and unbalanced LR Sinkhorn run in roughly the same time, by tweaking thresholds, regularizers, etc. **What we can confidently report is that unbalanced Sinkhorn basically blows up above 50k points, whereas unbalanced LR Sinkhorn can easily be run "off the shelf" with 500k points and ranks of $r=10$ and above, as shown in Fig. 1.** > **Also, the use of unbalanced OT involves two extra parameters which require cross-validation. Could the authors give plots of the performance on their biology task as a function of… This would provide interesting insights on the interpretation of these parameters, and their impact on learning tasks.** ⧐ All of the results provided in the paper do indeed deal with these two parameters using cross-validation. We did not have the time (nor space left in the PDF) to provide accuracy (surface) plots as a function of these two parameters, but we will prepare them for the camera-ready. Since our grid for $\tau$ was quite small, we will expand it to provide a more detailed picture as well. This might indeed be useful for practitioners, to see if there are "jumps" (we did not notice any so far, rather smaller variations). --- Rebuttal Comment 1.1: Title: Response Comment: Dear Authors, I thank you for your rebuttal, and for your additional experiment measuring the gain from integrating translation-invariant Sinkhorn into your algorithm. While I understand the concerns of other reviewers, I tend to disagree with their focus on the 'incrementality' of your work. To my mind, this is compensated by the thoroughness of the UOT variants you compute, which are as many tools available for future works of the ML community and beyond. You could have tackled each formulation in a different paper to trade quality of work for quantity of publications, which you did not. I could slightly increase my score to support your work, but I'm not sure it would change the global consensus among all reviewers. Side remark: In your answer to fjNH, you say that unbalanced OT solvers are non-convex. It is a convex problem (like balanced OT and unlike GW), thus all unbalanced OT solvers are also convex. The non-convexity should only arise from your low-rank assumption. --- Reply to Comment 1.1.1: Title: We are grateful for your reaction to our rebuttal Comment: Many thanks for your supportive comments and for taking the time to read our rebuttal. > **I thank you for your rebuttal, and for your additional experiment measuring the gain from integrating translation-invariant Sinkhorn into your algorithm.** We are happy that you looked at these new results, and that you found them convincing. We heard your concerns on this point, and we will incorporate them into the draft. > **While I understand the concerns of other reviewers, I tend to disagree with their focus on the 'incrementality' of your work. To my mind, this is compensated by the thoroughness of the UOT variants you compute, which are as many tools available for future works of the ML community and beyond. You could have tackled each formulation in a different paper to trade quality of work for quantity of publications, which you did not.** We believe this is a natural and important debate, and we thank you for your encouraging comments. We also hear some of the reviewers' concerns on incrementality. However, at the end of the day, we think these extensions bring much-needed flexibility, and are always easier to derive in hindsight. They also present implementation challenges.
For these reasons we believe there is genuine value and novelty in presenting these results to the community. > **I could slightly increase my score to support your work, but I'm not sure it would change the global consensus among all reviewers.** If, after reading our rebuttal and the other reviewers' comments, you believe that our submission is worth appearing at NeurIPS and could receive a higher grade, then we believe the right course of action would be to update your score. Even if the paper is rejected, this would be an encouragement to us. Moreover, we think it might still play a role because: - anecdotal evidence points to the fact that grades at NeurIPS are fairly low this year, and the acceptance bar might be accordingly low, - some of the reviewers (e.g. **fjNH**) have not yet reacted to our rebuttal and/or we are still waiting for their final words (e.g. **1XVN**); they might change their score, and your update could be very important for the AC/SAC/PC to make a final decision. > **Side remark: In your answer to fjNH, you say that unbalanced OT solvers are non-convex. It is a convex problem (like balanced OT and unlike GW), thus all unbalanced OT solvers are also convex. The non-convexity should only arise from your low-rank assumption.** Are you referring to this exchange? (copied/pasted for clarity) >> **Although the paper heavily sells low-rank optimal transport, isn't the formulation in Eqs. 5/6 non-convex? If so, doesn't it make the exact solution intractable? How to ensure convergence towards a good critical point?** > Yes, the formulation is indeed non-convex, as is also the case for all original unbalanced solvers we build upon. [...] If this is the case, then indeed, because Eqs. 5/6 in our draft already include the low-rank constraint, the unbalanced LR-OT formulation is non-convex. Did we write elsewhere that unbalanced linear OT (e.g. Sinkhorn or others) was non-convex? If we did, this is indeed a mistake. Thanks again for taking the time to read our rebuttal.
Summary: In this paper, the authors combine two variants of optimal transport (OT) known as unbalanced OT and low-rank OT. While the former relaxes the marginal constraints to ease modelling rigidities and discard possible outliers from the input measures, the latter helps reduce the computational cost of $\mathcal{O}(n^3)$ and therefore makes OT scalable. In addition to standard OT, which quantifies the discrepancy between two measures in the same dimensional space, they also apply this combination to scenarios where the two input measures belong to spaces of distinct dimensions, namely Gromov-Wasserstein and Fused-Gromov-Wasserstein. Finally, the authors choose spatial transcriptomic matching problems to justify the practical usage of their proposed methods. Strengths: - Originality: This work is a novel combination of two versatile and scalable variants of optimal transport (OT), namely unbalanced OT and low-rank OT. - Quality: The derivations of the algorithms proposed in this work are accompanied by theoretical proofs. The empirical performance of the proposed method is demonstrated via spatial transcriptomic matching problems. - Significance: The results in this paper are important to some extent, as they allow practitioners to use low-rank solvers for the problem of quantifying the discrepancy between two arbitrary (not necessarily probability) measures. Weaknesses: - Originality: Although the combination of unbalanced OT and low-rank OT is novel, key tools and techniques (e.g. reparametrization of low-rank couplings, the Dykstra algorithm) used in this paper have already been introduced in [1] and [2]. Thus, I think this work is incremental to some extent. To address this concern, I suggest the authors highlight the main challenges of solving unbalanced low-rank OT compared to its balanced counterpart more clearly. - Clarity: The presentation of this paper is not good, as there are a lot of notations which are either not carefully defined or defined inaccurately (see Requested Changes). Additionally, there are some important parts, namely the initialization of Algorithm 1, which should be presented in this paper rather than merely referring to another paper. **References** [1] Meyer Scetbon, Marco Cuturi, and Gabriel Peyré. Low-rank sinkhorn factorization. In International Conference on Machine Learning, pages 9344–9354. PMLR, 2021. [2] Meyer Scetbon, Gabriel Peyré, and Marco Cuturi. Linear-time Gromov-Wasserstein distances using low rank couplings and costs. ICML, 2022. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. In line 95, what is the definition of the nonnegative rank of $P$? Please add this definition to the revision of this paper. 2. In equation (2), the term $Q\text{diag}(g)R$ seems to be incorrect. Should it be $Q\text{diag}(1/g)R^{\top}$? 3. In equation (7), are there any differences between $\mathrm{KL}(\cdot,\cdot)$ and $\mathrm{KL}(\cdot | \cdot)$? If any, they should be defined explicitly in the paper. 4. Are there any theoretical guarantees that the tuples $(Q_{k+1},R_{k+1},g_{k+1})$ approximate the solution of the optimization problem in equation (6)? 5. In the proposed algorithms for solving ULOT, ULGW and ULFGW, which values of the rank hyperparameter $r$ should we choose? Should it be as low as possible? **Requested Changes**: 1. References: In lines 38 and 39, when mentioning the usage of the Sinkhorn algorithm in solving OT, the authors should cite more relevant papers, namely [1].
Similarly, in line 78, regarding solving unbalanced OT using entropic regularization, the authors should cite the paper [2]. 2. In the definition of the cost matrix $C$ in Section 2, the index notation $1\leq i,j\leq n,m$ is inaccurate. It should be changed to $1\leq i\leq n$ and $1\leq j\leq m$. 3. Notations: When introducing the unbalanced OT in equation (3), the authors should explain the notations $\mathrm{KL}$, $\tau_1$ and $\tau_2$ rather than assuming that readers implicitly understand them. Analogously, the notation $A^{\odot2}$ in equation (4) should be defined explicitly. 4. Typo: in line 96, repamatrization --> reparametrization. **References** [1] Khang Le, Huy Nguyen, Quang M Nguyen, Tung Pham, Hung Bui, and Nhat Ho. On robust optimal transport: Computational complexity and barycenter computation. Advances in Neural Information Processing Systems, 2021. [2] K. Pham, K. Le, N. Ho, T. Pham, and H. Bui. On unbalanced optimal transport: An analysis of sinkhorn algorithm. In ICML, 2020. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The limitations are not discussed in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for providing so much feedback, and for giving 3 **good** grades to the *soundness* / *presentation* / *contribution* of our paper. We hope we can convince you to raise your final score with the answers below. > **Originality: Although the combination of unbalanced OT and low-rank OT is novel, key tools and techniques (e.g. reparametrization of low-rank couplings, Dykstra algorithm) used in this paper have already been introduced in [1] and [2]. Thus, I think this work is incremental to some extent. To address this concern, I suggest the authors highlight the main challenges of solving unbalanced low-rank OT compared to its balanced counterpart more clearly.** ⧐ This is an excellent suggestion, and we will emphasise this more clearly around line 91. > **Clarity: The presentation of this paper is not good as there are a lot of notations which are either not carefully defined or defined inaccurately (see Requested Changes).** ⧐ We apologize for this lack of clarity. We have incorporated all the changes you have requested. > **Additionally, there are some important parts, namely the initialization of Algorithm 1, which should be presented in this paper rather than merely referring to another paper.** ⧐ Finding an efficient initialization for Alg. 1 is something of a research topic in itself. For instance, the ott-jax toolbox (https://github.com/ott-jax/ott/blob/main/src/ott/initializers/linear/initializers_lr.py) implements 4 different ways to do this. We will mention this line of work. > **In line 95, what is the definition of the nonnegative rank of $P$? Please add this definition to the revision of this paper.** ⧐ Yes, the definition is that in [Scetbon21, 22]; we will provide it again. > **In equation (2), the term [...]** ⧐ Yes, we apologize for these two bad typos. > **In equation (7), are there any differences between $\mathrm{KL}(\cdot,\cdot)$ and $\mathrm{KL}(\cdot|\cdot)$? If any, they should be defined explicitly in the paper.** ⧐ We're sorry about this typo. These terms all refer to the generalised KL computed either for vectors or matrices with positive entries, $\textrm{KL}(p|q) = \sum_i p_i \log \frac{p_i}{q_i} - p_i + q_i$ (a short numerical sketch follows below). We will switch to $\textrm{KL}(\cdot|\cdot)$ for all. > **Are there any theoretical guarantees that the tuples approximate the solution of the optimization problem in equation (6)?** ⧐ Such guarantees cannot be achieved, to our knowledge, because the problem is at least as non-convex as the balanced case. > **In the proposed algorithms for solving ULOT, ULGW and ULFGW, which values of the rank hyperparameter should we choose? Would it be as low as possible?** ⧐ In all experiments, we have used cross-validation on train sets to set the rank $r$. Lower $r$ means faster computations, but it might result in factorized transports that are too coarse to perform well. We believe $r$ should be set depending on the problem size vs. compute ability, much like k-means (or, to some extent, entropic regularization). > **References: In lines 38 and 39, when mentioning the usage of Sinkhorn algorithm in solving OT, the authors should cite more relevant papers, namely [1]. Similarly, in line 78, regarding solving unbalanced OT using entropic regularization, the authors should cite the paper [2].** ⧐ Thanks for these two very relevant references, and we apologize for having missed them; we have added them to our draft. > **In the definition of cost matrix in Section 2, the index notation [...]** ⧐ Sure, we are happy to change this.
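For readers skimming this thread, here is a minimal NumPy sketch of the generalized KL divergence defined in the answer above (a hypothetical helper, not code from the paper); for probability vectors the $-p_i+q_i$ terms cancel and it reduces to the standard KL divergence:

```python
import numpy as np

def generalized_kl(p, q):
    # Generalized KL divergence for vectors (or flattened matrices) with
    # positive entries: sum_i p_i * log(p_i / q_i) - p_i + q_i.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q) - p + q))

# Unnormalized inputs are allowed, which is exactly what unbalanced OT needs:
print(generalized_kl([0.5, 1.5], [1.0, 1.0]))  # positive; zero iff p == q
```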
> **Notations: When introducing the unbalanced OT in equation (3), the authors should explain the notations $\mathrm{KL}$, $\tau_1$ and $\tau_2$ [...]. Analogously, the notation $A^{\odot 2}$ in equation (4) should be defined explicitly.** ⧐ We updated our paper, and followed your advice to clarify these notations from the start (KL and Hadamard exponent). > **Typo: in line 96, repamatrization --> reparametrization.** ⧐ We had a typo indeed, thanks! It seems reparamet**e**rization is more common, so we will use it. --- Rebuttal Comment 1.1: Comment: Dear authors, Thanks for making a great effort in the rebuttal, I really appreciate that. However, the following two concerns have not been addressed yet. In particular, 1) Although the combination of unbalanced OT and low-rank OT is novel, key tools and techniques (e.g. reparametrization of low-rank couplings, Dykstra algorithm) used in this paper have already been introduced in [1] and [2]. Thus, I think this work is incremental to some extent. 2) What are the main challenges of solving unbalanced low-rank OT compared to its balanced counterpart? Additionally, I would like to apologize for making a mistake in grading the presentation score. As I said in the Weaknesses section, the presentation is not good as there are a lot of notations which are either not carefully defined or defined inaccurately. Thus, I reduce the presentation score to 2, but still keep the current rating of 4 given your response in the rebuttal. --- Reply to Comment 1.1.1: Title: Thanks for your prompt reaction Comment: Thanks for your reaction. Let us clarify your final concerns. 1. We hear you when you say that you feel our work is `incremental to some extent`. We do note, however, that you have also graded the "contribution" part of our paper as **"good" (3)**, which indicates that, in the bigger picture, you are largely satisfied with the contributions themselves. Reviewer `GuCe`'s statement summarizes our position very well: `The contributions might look incremental at first (combining unbalanced OT with low-rank OT), but are actually very thorough. They do not restrict to this one combination, but consider all recent variants of unbalanced OT (Unbalanced OT, GW and Fused GW) which have been developed in the literature in the last years.` As a community, a question worth asking is whether (1) the OT toolkit would benefit from an unbalanced formulation for low-rank solvers, which are taking an increasingly important role, and (2) the community would benefit from a solid algorithmic reference with exhaustive experiments in large-scale settings that presents this. We carried out all of these generalizations in response to several requests from practitioners to push in this direction, as summarized in Lines 51-54. **Therefore we believe your impression that the paper is "incremental to some extent" should be balanced against the timeliness, technicality and exhaustiveness of our proposal, as well as the wealth of experiments we have proposed.** 2. We understand your concern. There are always computational challenges when computing regularized OT, and quite often "hidden" problems. However, let us clarify in very clear terms that **there were no unexpected challenges, computationally and practically speaking, that arose *because* of our unbalanced generalizations**, compared to using LR OT or LR GW in general. While these tools can be challenging to use, our paper does not introduce "new challenges".
The only obvious practical challenge is of course that of choosing the regularizer strength, but this is the case for *all* unbalanced formulations, low-rank or not. We handled it as rigorously as we could, using cross-validation in all our experiments, in a way that is, in our opinion, more transparent than in most previous papers. Results have been positive on that front. The only computational issue that *might* have appeared would have arisen from the translation invariance problem, but we took care of it. This leads to clear improvements, as shown in our rebuttal pdf. **We hope this alleviates your second concern.** As usual, the ultimate answer on this will be to hear from practitioners who will be able to test our tool, once we open-source it. In the meantime, we believe NeurIPS would be the perfect venue to properly advertise these generalizations. We are happy to add anything to our paper that you feel we might have missed on that front, but we believe our current writing accurately reflects the message above. As for presentation, we do acknowledge a few minor typos that have peppered the paper (notably inconsistency in $KL(\cdot | \cdot)$ vs. $KL(\cdot, \cdot)$, or $g\rightarrow 1/g$ and $R\rightarrow R^T$ in the first equation), but this was fixed in a few minutes. We believe this is where the value of reviewing comes from, and we are grateful for your help/time in this regard. We sincerely appreciate the time you have put into reading our rebuttal.
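As a side note for readers following the $g\rightarrow 1/g$ and $R\rightarrow R^T$ discussion above, here is a minimal NumPy sketch of the corrected low-rank factorization $P = Q\,\mathrm{diag}(1/g)\,R^\top$, using the trivial rank-1 product coupling as an illustrative initialization (all variable names are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 6, 5, 3

g = rng.dirichlet(np.ones(r))   # common inner marginal, g in the simplex
a = rng.dirichlet(np.ones(n))   # left marginal
b = rng.dirichlet(np.ones(m))   # right marginal

# Trivial product coupling written in low-rank form (balanced case):
Q = np.outer(a, g)              # satisfies Q 1_r = a and Q^T 1_n = g
R = np.outer(b, g)              # satisfies R 1_r = b and R^T 1_m = g

P = Q @ np.diag(1.0 / g) @ R.T  # the coupling P = Q diag(1/g) R^T
print(np.allclose(P.sum(axis=1), a), np.allclose(P.sum(axis=0), b))  # True True
```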
Summary: The authors propose an unbalanced low-rank OT (ULOT) solver, which is an extension of its balanced counterpart. They show how to adapt this solver to the other unbalanced low-rank settings, namely translation-invariant and GW. The experiments compare the performances of multiple low-rank solvers, and of the Unbalanced Fused GW. Strengths: The extensions from balanced to unbalanced LOT, as well as from ULOT to ULFGW, are not trivial and require some calculation effort. I also find that the writing is very instructive and all technical details are clearly presented and on point. Weaknesses: It seems to me that - The content in the main paper and appendix is not adequately partitioned; namely, Section 3 is somewhat too long, so that little space is left for the experiment section, and some experiment details in the Appendix should be moved to the main paper. - The main contribution of the paper is quite incremental. More precisely, + While adding translation invariance improves the convergence of the usual unbalanced OT, the real improvement in the experiments is unclear. IMHO, it is more or less an add-on of the ULOT solver and should be moved to the Appendix because it would dilute the main message of the paper (which is about ULOT). It is enough that Algos 4 and 5 use the ULR-Dykstra solver (Algo 6), instead of Algo 3. + Algo 4 is merely a special case of Algo 5, where alpha = 0, and thus should be removed. While it is more instructive to start with the GW setting before moving to the fused GW one, all the details can be moved to the Appendix, and Sections 3.3 and 3.4 should be restructured and/or merged. + It seems that the experiment section is not sufficiently diverse because the experiments and competing methods are restricted to just a family of low-rank based methods (except FUGW). IMHO, since the contribution on the methodology (i.e. Section 3) is not very significant, I would love to see more comparison with other methods, like (unbalanced) mini-batch OT, or quantized GW, and maybe on more experiments (e.g. on graphs). + I find it weird that Section 4.3 is unreasonably brief and not sufficiently discussed, while most of its details are moved to the Appendix. This makes Section 4.3 look incomplete in the main paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In Algorithm 2: typo in 1 / gamma / tau1 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors do not discuss the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for the wealth of comments you have provided. > **The content in the main paper and appendix is not adequately partitioned; namely, Section 3 is somewhat too long, so that little space is left for the experiment section, and some experiment details in the Appendix should be moved to the main paper.** ⧐ Thanks. We agree with your suggestion and have moved many technical details from the main section to the appendix to free up space for experiments. > **While adding translation invariance improves the convergence of the usual unbalanced OT, the real improvement in the experiments is unclear.** ⧐ We agree that this was missing. We have now added such comparisons, showing a clear-cut benefit when using our TI variant. Please look at our pdf, Figure X. > **Algo 4 is merely a special case of Algo 5, where alpha = 0, and thus should be removed. While it is more instructive to start with the GW setting before moving to the fused GW one, all the details can be moved to the Appendix, and Sections 3.3 and 3.4 should be restructured and/or merged.** ⧐ As you mention, we felt it was instructive to start with GW. While Alg. 4 is a particular case of Alg. 5, we believe the most important conceptual gap lies in going from 3 to 4. We prefer following your recommendation to free up space by moving Alg. 5 (ULFGW) to the appendix. > **It seems that the experiment section is not sufficiently diverse because the experiments and competing methods are restricted to just a family of low-rank based methods (except FUGW). IMHO, since the contribution on the methodology (i.e. Section 3) is not very significant, I would love to see more comparison with other methods, like (unbalanced) mini-batch OT, or quantized GW, and maybe on more experiments (e.g. on graphs).** ⧐ Thanks for your suggestions, here are our answers: - **It is not clear to us why and how mini-batch OT could be a competitor to handle the large-scale unbalanced GW problems we solve**. Mini-batch OT was introduced to *minimize* a _linear_ OT _loss_ (e.g. [FZFGC] Fatras, K., Zine, Y., Flamary, R., Gribonval, R., & Courty, N. (2019). *Learning with minibatch Wasserstein: asymptotic and gradient properties*), **typically to train a generative model**. There, the goal is to minimise a fitting term $\mathcal{L}(\theta)=W(p_\theta, q)$, where both (or either) $p_\theta$ and $q$ are continuous densities. The argument in favor of mini-batches is that sampling many times from $p_\theta$ and/or $q$ to average multiple gradients for $\theta$ is better than using the gradient computed from a single bigger batch. All our experiments consider instead two **fixed**, large datasets of point clouds, in a _quadratic_ GW setting. Here a mini-batch approach would result in an arbitrary partitioning of each measure into sub-measures, followed by an arbitrary matching of these smaller sub-measures. This would be very naive and suboptimal, and we are not aware of papers advocating this except when doing more elaborate hierarchical approaches (e.g. MREC, https://arxiv.org/abs/2001.01666). MREC was compared to LRGW (see [Scetbon+22]), but does not have an unbalanced formulation. - **Quantized GW is not an unbalanced method**, so we feel it would be out of place in this study. Yet, we have considered one of their experiments (the scene dataset) in our new experiments (see pdf in global response).
The authors of Quantized GW provide very few details on their experimental setup (hyperparameter tuning in particular), hence we did not re-run their solver. We only used the accuracy and timing reported in their text (which we assume is the best they could get). Within the limited time of this rebuttal, we get roughly the same accuracy, as shown in our pdf, using ~3 min of compute. - Note that all experiments in 4.3 compare two brain meshes, i.e. two graphs endowed with shortest-path metrics. To summarize our thoughts, we believe that the natural competitor for our setting remains entropic unbalanced solvers, because: - they are by far the most studied alternative, having been used in dozens of papers; - they can be used in an unbalanced setting in a fairly comparable way to ours; > **I find it weird that Section 4.3 is unreasonably brief and not sufficiently discussed, while most of its details are moved to the Appendix. This makes Section 4.3 look incomplete in the main paper.** You are right. Because we were short of space at the time of submission, we mostly reported results and not experimental details (the benchmark is not ours). However, this can be easily corrected, as we can expand on the material in our supplementary and bring it back to the main body. Thanks to the space freed by Algorithm 5, we can now rebalance this section and move relevant background back into 4.3. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: I thank the authors for their response. It helps clarify most of my concerns. I do agree that the two suggested competing methods (minibatch OT and quantized GW) don't really fit in the context of the paper. However, compared to the prior work on LOT and low-rank GW, I would still consider the contribution of ULOT (and its extensions) to be incremental. I see it as a mere stack of many prior works (low rank, unbalanced, GW), and there has not yet been any theoretical analysis of the proposed method in the paper. Even if it is possible that some results in the prior works can be applied, I'm not convinced that this is enough for a real contribution, unless something really stands out. For this reason, I expect to see more contribution on the application side to compensate for the lack of theoretical understanding. In conclusion, I decide to increase the score to 4, while still leaning toward rejection. --- Reply to Comment 1.1.1: Title: Thanks for reading our rebuttal Comment: Dear Reviewer, Many thanks for reacting to our rebuttal. We are grateful that you have raised your score despite your remaining concerns. We understand your impression that our work may seem incremental, but we hope that you can agree that: - there are many technical difficulties in our work. Even if you were to pick the most informed people in the OT community, we do not think that they would be able to carry out the computations we did without effort. - our goal was to answer various requests from users who see in LR approaches the key to scaling up OT to very large datasets, but cannot yet, as we explained in Lines 51-54, use an unbalanced setting. This is in itself a real problem for them. For these reasons, we believe there is merit in publishing these works. We also understand that you felt the experimental section was a bit rushed. However, we believe that there is already a lot of content in our experiments, notably if we add the few remaining bits provided in the attached pdf.
Notice that: - Our spatial transcriptomics datasets are among the largest we have seen tested in an FGW setting. - **You are perfectly right** when you claim that Section 4.3 is too short and does not contain more details. Your concern was heard, and we are expanding that part. The take-home message, however, is that our method slightly outperforms the method proposed in [https://arxiv.org/pdf/2206.09398.pdf], presented at NeurIPS last year, on the same dataset. Thanks again for reading our rebuttal, and for your detailed review. The authors
Summary: In its original exact formulation, optimal transport (OT) suffers from an $\mathcal{O}(n^3)$ cost when applied to clouds of $n$ points. In addition, the exact constraint on the marginals makes it sensitive to outliers. One solution for each of these problems exists: - unbalanced OT [Schiebinger et al., 2019] is less sensitive to outliers than regular OT, as it relaxes the marginal constraints into a KL-penalized form - entropic OT allows for the use of the celebrated Sinkhorn algorithm but still requires large matrix-vector multiplications. Low-rank OT [Forrow et al., 2018] is a promising direction to further improve upon entropic OT, by using a low-rank cost matrix and low-rank constraints on the plan. Building on these two seminal papers and a line of work by Scetbon, Cuturi and coauthors, the paper combines low-rank OT with unbalanced OT (formulation in Eq. 5). A dedicated algorithm is proposed, interpreted as proximal gradient descent in the KL geometry. The latter is improved based on the works of Séjourné et al. [2022]. The approach is extended to the Gromov-Wasserstein and Fused Gromov-Wasserstein OT problems. Strengths: OT has found many applications in practice in the last decade; alleviating its cost extends its range of applications. The proposed formulation is sound, the treatment is relatively easy to follow and the derivations are made clear. Weaknesses: - Although the paper heavily sells low-rank optimal transport, isn't the formulation in Eqs. 5/6 non-convex? If so, doesn't it make the exact solution intractable? How to ensure convergence towards a good critical point? - Convergence of the algorithm - Known results about Dykstra's algorithm are used to compute the solution of Equation 7 via its dual. Can the authors provide a reference for the convergence of the whole algorithm, namely the iterations defined by 7? I may have missed it, but I could not find a reference for convergence towards a critical point; - convergence seems to be measured in terms of the difference between two successive iterates going to zero. It is well-known that this does not guarantee convergence of the iterates (e.g. take the harmonic series). To clarify, do the authors have a classical convergence result? Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In the last cost of Eq. (2) I believe it should be diag(1/g) not diag(g), and $R^T$ not $R$ Typos: - reparamaterized, repamatrization - mass conversation for mass conservation - misuse of \cite or \citep vs \citet, e.g. "of [Frogner et al., 2015, Chizat et al., 2018]", and several other places, e.g. "notations from [Scetbon et al., 2021]" - towards a stationary points - such that with probability at least 0.99 that: extra "that" - Algo 1: (check me) - paper uses \star and * for $\lambda^\star$ Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your time and for your comments. > **Although the paper heavily sells low rank optimal transport, isn't the formulation in eqs 5/6 non convex? If so, doesn't it make the exact solution intractable? How to ensure convergence towards a good critical point?** Yes, the formulation is indeed non-convex, as is also the case for all original unbalanced solvers we build upon. While this removes several theoretical guarantees, practice suggests a "mild" form of non-convexity, comparable to e.g. k-means or non-negative matrix factorisation (NMF). Convergence to a critical point is ensured through Dykstra, but the solution depends on the initialization. We use the standard approaches considered for the balanced case (see e.g. https://github.com/ott-jax/ott/blob/main/src/ott/initializers/linear/initializers_lr.py ) > **Can the authors provide a reference for the convergence of the whole algorithm, namely the iterations defined by 7? I may have missed it, but I could not find a reference for convergence towards a critical point;** Sure, in [Scetbon et al., 2021] "Low-rank sinkhorn factorization", the authors obtain in Appendix A a generic approach to show the stationary convergence of the mirror-descent scheme as soon as the objective function to minimize is relatively smooth w.r.t. the negative entropy. Also, the authors show that the constrained version of the objective defined in (5) is relatively smooth w.r.t. the negative entropy. Therefore the stationary convergence of our algorithm is a corollary of the result obtained in [Scetbon et al., 2021]. We will clarify this point in the paragraph on "Convergence and Complexity" on page 4. > **convergence seems to be measured in terms of the difference between two successive iterates going to zero. It is well-known that this does not guarantee convergence of the iterates (e.g. take the harmonic series). To clarify, do the authors have a classical convergence result?** For convex OT problems (e.g. Sinkhorn), the usual approach is to compute the gradient norm (marginal violations in the balanced case) to monitor convergence. For nonconvex OT problems (unbalanced solvers, but also Gromov-Wasserstein and Wasserstein barycenter problems with free support), it is more usual to monitor changes in the solution (as in, e.g., k-means); a short sketch of this stopping criterion is given after this exchange. Note also that our iterates are all bounded (they are transport plans $Q,R$ and a simplex vector $g$), so there is no "divergence" we might miss (as in your harmonic series example), and we have never seen such behaviour in practice. > **In the last cost of eq(2) I believe it should be diag(1/g) not diag(g) [...]** Indeed, we apologize for this unfortunate typo early in the paper. Thanks for pointing it out. > **reparamaterized, repamatrization** Fixed! We will use reparam**e**terized and reparameterization (https://en.wiktionary.org/wiki/reparameterize#English ). > **mass conversation for mass conservation** We apologize for this typo, likely due to an unfortunate auto-correct. > **misuse of \cite or \citep vs \citet, e.g. "of [Frogner et al., 2015, Chizat et al., 2018]", and several other places e.g. "notations from [Scetbon et al., 2021]"** To clarify: we use `\citep` to refer to the *paper*, and `\citet` to refer to the authors of the work. We meant therefore: "the framework of [these papers]" and "notations from [this paper]". When crediting a group of authors we use `\citet`, as in line 132 (`Séjourné et al. [2022] have proposed`).
We have found a few mistakes disagreeing with this convention, and we have fixed them. > **towards a stationary points, such that with probability at least 0.99 that: extra "that", Algo 1: (check me), paper uses \star and * [...]** Many thanks for pointing out these typos and unfortunate leftover comments. --- Rebuttal Comment 1.1: Comment: The authors' rebuttal and comments from reviewer `GuCe` on the non-triviality of the combination of techniques, together with the extent of the addressed problem, make me increase my score. I thank the authors for their precise answers. --- Reply to Comment 1.1.1: Title: Many thanks for reading our rebuttal Comment: We are grateful for your comments, for catching these typos, and for your questions. We will use them to improve our draft. We are very thankful for your score increase.
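For completeness, here is an illustrative sketch of the stopping criterion described in the convergence answer above (monitoring the change between successive bounded iterates, as in k-means; the function and variable names are ours, not the paper's):

```python
import numpy as np

def iterate_change(Q_new, R_new, g_new, Q, R, g):
    # Total change between two successive iterates of the bounded variables:
    # Q, R are the low-rank sub-couplings and g lives in the simplex.
    return (np.linalg.norm(Q_new - Q) + np.linalg.norm(R_new - R)
            + np.linalg.norm(g_new - g))

# Schematic use inside a solver loop:
#     if iterate_change(Q_new, R_new, g_new, Q, R, g) < tol:
#         break  # iterates have stagnated; stop
```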
Rebuttal 1: Rebuttal: **We thank the reviewers, AC and SAC assigned to this paper for their time and work looking into our submission**. We thank them in advance for reading our rebuttal and interacting with us for a few more days during the discussion period. We were happy to see that the paper was overall well received by all 4 reviewers: **fjNH**: *The proposed formulation is sound, the treatment is relatively easy to follow and the derivations are made clear.* **ckny**: *The extensions [...] are not trivial and require some calculation effort [...] the writing is very instructive and all technical details are clearly presented and on point.* **1XVN**: *the results in this paper are important to some extent [...]*, *This work is a novel combination* **GuCe**: *The contributions might look incremental at first (combining unbalanced OT with low-rank OT), but are actually very thorough.* The most important weaknesses highlighted by reviewers point to: - some sloppiness in our presentation: we proposed several tightly related, yet distinct algorithms. This led to some redundancy, as discussed below in our answers. Because of a lack of space, some of the experimental sections were kept to a minimum, with only (good!) results presented. As mentioned by reviewer **ckny**, this is true in particular for Section 4.3. ➡ *We have incorporated the reviewers' comments in our draft, moved an algorithm from the main body to the appendix, and added back experimental details.* - some clarifications in experiments, e.g. the importance of using the translation-invariant adjustment, and a comparison to quantized GW (which is not, however, an unbalanced method). ➡ *We have run novel experiments following these remarks. They all reinforce the message of our submission, and will be added either to the appendix or the main body, depending on space.* - a few "bad" typos (e.g. $D(g)$ instead of $D(1/g)$ and $R$ instead of $R^T$). ➡ *We apologize for these unfortunate typos; we have corrected them.* We believe we have addressed all of the points raised by reviewers. In addition, we will release code for *all* of the tools presented in the paper in one of the major OT python toolboxes. Our implementation can run on CPU but also natively on GPU. We will share the code before the end of August. At the moment, all reviewers have scored our paper as **good (3)** in *both* **soundness** and **presentation**. We have received an average 2.75 grade (2,2,3,4) in **contribution**, and we hope our rebuttal alleviates these concerns. We believe that the very supportive words found in all reviews are not reflected in the current distribution of (fairly low) scores of 7,4,4,3. If reviewers agree with our assessment, we humbly ask them to reconsider their score. Pdf: /pdf/8aea9468cfcb543a44709a6ff96f8990996ad766.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Zero-Shot Anomaly Detection via Batch Normalization
Accept (poster)
Summary: The paper presents an approach to address the challenging task of zero-shot anomaly detection. The authors base their methodology on three crucial assumptions: the presence of a training set that shares relevant structures with the target task, performing batch-level analysis during inference, and having the majority of the batch comprised of normal-class data. Leveraging these assumptions, the authors propose a method that effectively utilizes batch statistics to assign anomaly scores. The proposed method, named Adaptive Centered Representations (ACR), combines batch normalization with meta-training. Through re-centering and scaling the normal samples within a batch, ACR successfully brings them closer to the center while pushing anomalies further away. Experiments were conducted in the study across different types of data, including both image and tabular data. Strengths: The authors tackle the task of zero-shot anomaly detection, which holds significant practical importance in various domains. The clear and well-written notation and assumption section greatly contributes to understanding the authors' approach and provides a solid foundation for the rest of the paper. Experimental evaluations conducted demonstrate the effectiveness of the proposed approach. Weaknesses: The paper has some weaknesses that should be addressed. Mainly, the reliance on some critical assumptions, such as the availability of a meta-training set and batch-level anomaly detection, may limit the practical applicability of the proposed method. 1. The assumption of a meta-training set presumes strong prior knowledge about the types of anomalies, which may not be feasible or practical in many real-world scenarios. 2. Table 7 shows that strong performance required over 100 different classes during training. This raises concerns about the scalability and feasibility of the method when faced with a smaller number of training classes. This is indicated by the lower performance observed when the classes were reduced to 20. Further exploration of the method's performance with a smaller number of training classes would provide valuable insights into its limitations and generalization capabilities. 3. The experimental section of the paper could benefit from improved clarity and organization. It was challenging to follow the exact details of the conducted experiments, requiring multiple passes through the main paper and the Appendix. Enhancing the clarity and structure of the experimental section would greatly improve reproducibility and understanding of the reported results. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. The paper primarily addresses the one-to-many anomaly detection scenario, but it would be interesting to evaluate its performance in the more realistic many-vs-one scenario [1], in which the data is multi-modal and achieving compactness is more challenging. How does the proposed method work in this setting? 2. The assumption of batch-level inference may not be applicable in many practical scenarios, such as the mentioned hospitals and clinics where doctors often assess patients individually rather than in large batches. The performance drop observed in Table 6 with a batch size of 20 raises questions about the method's effectiveness with even smaller batch sizes, such as 5 or 10. What is the performance of the proposed method with lower batch sizes? References: [1] Ahmed, F.; and Courville, A. Detecting semantic anomalies. AAAI 2020.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The assumption of a meta-training set is strong This assumption is common in the meta-training literature, both for general methods such as MAML, as well as for meta-training AD work [8, 21, 31, 39]. There are multiple ways to mitigate this assumption. If one does not have a meta-training set at hand, one can train the model on a different but related dataset, e.g., train on Omniglot but test on MNIST (see the results below; we still get decent AUROC results on MNIST under this setting). |Anomaly ratio|1%|5%|10%|20%| |:---|---|---|---|---| |AUROC|84.4$\pm$2.4|85.2$\pm$2.5|84.3$\pm$2.5|82.2$\pm$2.4| Rebuttal Table 4: MNIST In our main paper, we provide several examples of how to construct a meta-training set for real applications: for our tabular data benchmark, we use the timestamps to construct a meta-training set; in our MVTec-AD experiment, we simply use multiple industrial object classes to construct the meta-training set; in medical domains, we have data from different hospitals or patients or organs to construct the meta-training set. > Experiments with an extreme number of classes during training (e.g., 1) Thank you for mentioning this extreme setting. We will add the ablation study to Table 7. The results show that even though we have only one class in the meta-training set, thanks to the batch norm adaptation, we can still get better-than-random zero-shot AD performance. |#Training classes|1|2|5|10|15| |:---|---|---|---|---|---| |AUROC|59.0$\pm$0.6|71.8$\pm$0.6|72.5$\pm$0.3|72.2$\pm$1.0|75.3$\pm$0.4| Rebuttal Table 5 > Experimental section lacks clarity and organization. We did multiple experiments on various data types and anomaly detection tasks. Our experimental section is structured by data types. For each data type, we first provide implementation details (train/val/test split and meta-training/testing construction) and then the results. In addition, pseudo-code and our codebase are provided in the supplement for reproducibility. We would be grateful for more specific advice that we could incorporate. > How does the proposed method work in other settings? Many-vs-one is the reversed version of the one-vs-rest setting. We study a more realistic setup: the N-vs-rest setting on Omniglot. We train the model under the one-vs-rest setup but test under multiple normal modes. The table below shows that as the number of normal classes (N) increases, the performance decreases because the sample mean and variance are not representative of all normal classes. We experiment with the anomaly ratio being 0.1 at test time. |N=1|N=2|N=5|N=10|N=20| |---|---|---|---|---| |99.0|95.6|81.5|65.1|28.9| Rebuttal Table 6 > Validity of batch-level anomaly detection. Batch-level predictions are widely used in real life. For example, people examine COVID-19 test samples at a batch level out of economic and time-efficiency considerations (https://www.fda.gov/medical-devices/coronavirus-covid-19-and-medical-devices/pooled-sample-testing-and-screening-testing-covid-19). Our work makes similar assumptions as batch-level prediction work in the literature [10, 47, 51, 64, 73]. Our method can be easily extended to score individual data by presetting the sample mean and variance in the BatchNorm layers with a collection of data. These moments are then fixed when predicting new individual data (a short sketch of this appears at the end of this thread). > Performance with lower batch sizes. The table below shows the performance in the lower batch size regime. We will add the results below to Table 6. Each batch contains 1 anomaly.
|Batch size|3|6|11|16| |:---|---|---|---|---| |AUROC|66.4 $\pm$ 2.3|77.9 $\pm$ 2.8|82.3 $\pm$ 2.7|84.8 $\pm$ 2.0| Rebuttal Table 7 --- Rebuttal Comment 1.1: Comment: Thanks for providing detailed results and for responding to my concerns. I have read the authors' responses and all other reviews, but I am keeping my rating the same. --- Reply to Comment 1.1.1: Title: Thank you for your time in reviewing our paper Comment: Thank you for your time in reviewing our paper! We appreciate your feedback.
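To make the batch-level vs. individual scoring distinction discussed in this thread concrete, here is a minimal PyTorch sketch; the toy network, shapes, and random data are illustrative assumptions, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

# Toy detector containing a BatchNorm layer (illustrative only).
net = nn.Sequential(nn.Linear(16, 32), nn.BatchNorm1d(32), nn.ReLU(),
                    nn.Linear(32, 1))

# Batch-level zero-shot scoring: train mode makes BatchNorm re-center and
# re-scale using the statistics of the test batch itself. This forward pass
# also updates the layer's running_mean / running_var buffers.
net.train()
test_batch = torch.randn(64, 16)   # a mostly-normal batch from the new task
with torch.no_grad():
    batch_scores = net(test_batch)

# Individual scoring: with the running moments now populated by the batch
# above, freeze them (eval mode) and score new samples one at a time.
net.eval()
with torch.no_grad():
    single_score = net(torch.randn(1, 16))
```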
Summary: This paper proposes a method called Adaptive Centered Representations (ACR) for zero-shot batch-level anomaly detection. The method utilizes off-the-shelf deep anomaly detectors trained on a meta-training set, combined with batch normalization layers, to achieve automatic zero-shot generalization for unseen anomaly detection tasks. The main contributions are introducing an effective method for zero-shot anomaly detection, demonstrating its applicability to tabular data, and achieving competitive results in anomaly segmentation on specialized image datasets. Strengths: In this paper, the effectiveness of the proposed method is verified by a large number of experiments. The paper includes rigorous mathematical proofs, and the effectiveness of the method is verified by experiments on multiple data sets. The results are powerful. In most cases, the proposed methods perform very well. Weaknesses: 1. The article claims that batch normalization layers can improve the model's adaptive ability on new tasks. In Section 4.1.1, anomaly detection experiments were conducted on CIFAR100-C with Gaussian noise and on the medical image dataset OrganA. Why ACR-DSVDD, with the addition of batch normalization layers, performs better overall than ACR-BCE is not further explained here. 2. As for the anomaly ratio π in this paper, I think it is also a major hyperparameter. The paper only claims that the model is robust when it is equal to 0.01, 0.05, 0.1, 0.2, and 0.4. Ablation experiments with this parameter are expected to prove its robustness. In the Implementation Details of Section 4.1.1 it is claimed that π=0.8, but the supplementary material says that π=0.2. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. In order to improve the clarity and understandability of this paper, it is suggested to add a schematic diagram outlining the overall method. The paper contains too few visualizations, which makes it difficult to grasp the overall principle of the method. 2. When discussing meta-learning, the authors should explain some basic concepts rather than only explaining batch normalization layers, and should explain the pseudo-code from the supplementary material in the article, ensuring that the reader has a full understanding of the background and significance of these models. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Limitations are clearly described and I don't expect negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Why ACR-DSVDD with the addition of Batch Normalization Layers performs better overall than ACR-BCE is not further explained here. DSVDD has a specific inductive bias for anomaly detection, namely learning a compact normal-data boundary, while a binary classifier doesn't have such an inductive bias (a minimal sketch of the DSVDD anomaly score follows after this exchange). Li et al. 2023 visualize this inductive bias through the normal data boundary in Fig 1, showing that a binary classifier using the binary cross-entropy loss (BCE) has a more conservative normal data boundary than DSVDD. [Li et al. Deep Anomaly Detection under Labeling Budget Constraints. ICML 2023.] > Robustness of the mixing hyperparameter $\pi$ in Eq. 6. Thanks for mentioning this ablation study. We conducted the following experiments with varying $\pi$. The experiment has the same setup as Table 1 on CIFAR100-C with a testing anomaly ratio of 0.1. The results show that all tested values of $\pi$ yield over 84% AUC. |0.99|0.95|0.9|0.8|0.6| |---|---|---|---|---| |85.8$\pm$0.5|85.4$\pm$0.5|84.1$\pm$0.4|85.9$\pm$0.4|84.4$\pm$0.6| Rebuttal Table 3 > Inconsistent values for the mixing hyperparameter $\pi$ in the main paper and the supplement. Thank you for noticing this inconsistency; we will correct the typos in the supplement. We report 1-$\pi$ in the supplement but report $\pi$ in the main paper. This will be fixed. --- Rebuttal Comment 1.1: Comment: Thank you for responding to my comments. I am happy to stand by my original decision to recommend acceptance of the paper. I would like to draw your attention to adding a diagram outlining the overall approach, which will help readers understand the paper and ensure its quality. --- Reply to Comment 1.1.1: Title: Thank you for recommending acceptance of our paper Comment: Thank you for recommending acceptance of our paper! We will consider adding a diagram to explain our overall method in our revised version.
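For readers unfamiliar with DSVDD, here is a minimal sketch of its anomaly score, which is what induces the compact normal-data boundary discussed above (names and shapes are illustrative; real features come from a trained backbone):

```python
import torch

def dsvdd_score(features: torch.Tensor, center: torch.Tensor) -> torch.Tensor:
    # Deep SVDD anomaly score: squared Euclidean distance to a fixed center.
    # Training minimizes this on normal data, pulling it toward the center.
    return ((features - center) ** 2).sum(dim=1)

feats = torch.randn(8, 32)        # toy embeddings
c = feats.mean(dim=0)             # center, e.g., the mean of initial embeddings
scores = dsvdd_score(feats, c)    # higher score = more anomalous
```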
Summary: This paper proposes Adaptive Centered Representations (ACR) for zero-shot batch-level AD. The simple recipe, batch normalization plus meta-training, seems highly effective and versatile. The experiments are well conducted on multiple datasets. Strengths: 1. This paper is well-written, and very easy to follow. 2. The contributions are very clear. 3. The experiments are well conducted on multiple datasets. Weaknesses: The main issue with this paper is the confusion between the problems of anomaly detection and one-class classification, which leads to several problems, including uncertainty and irrationality in experimental settings, unfair comparisons with competing methods, and misleading statements in the related works section. 1. The key problem: The key problem lies in the authors' confusion between one-class classification and anomaly detection. In the context of industrial anomaly detection using datasets like MVTec, the training set consists of normal objects without any anomalies, while the testing phase targets specific fine-grained anomalies. This discrepancy makes the task extremely challenging. However, in one-class classification, particularly during meta-training, anomalies can be defined as any coarse-grained objects, since they share the same definition of anomalies as at test time, and the training set can include multiple objects, providing some form of supervision during meta-training. 2. Misleading in the related works: There is confusion in the related works section, specifically in lines 194-206, where the authors misinterpret the settings of RegAD [31] compared to other papers [21, 8, 39]. While all these papers involve few-shot learning during meta-testing (although [31] does not explicitly position itself as meta-learning in its original paper), the crucial difference lies in the meta-training phase. In [21, 8, 39], data from multiple classes are used for supervised learning during the meta-training process for the one-class classification meta-task. However, in [31], the data from multiple objects cannot be used for supervised learning since the real anomalies are fine-grained defects, which do not exist in the entire meta-training dataset. As a result, the difficulty of the two tasks in the meta-training phase is fundamentally different, and the authors should clearly state these distinctions to avoid misleading readers. 3. Uncertainty and irrationality of experimental settings: There are uncertainties and irrationalities in the experimental settings, particularly in the training process for defect detection on the MVTec dataset. It is unclear how anomalous data is included in the training set of the 14 classes, as the authors suggest that most of the data in one batch are normal. It would be helpful to clarify whether some anomalous data is included in the test set of these 14 classes for training purposes. Additionally, during testing, it is unclear how the test data batches are organized. If some classes in MVTec have more anomalous data than normal data, it becomes crucial to address how this imbalance is handled, as it deviates from the assumed scenario. 4. Unfair comparisons with competing methods: Unfair comparisons are made with competing methods. The traditional definition of zero-shot learning implies that the results can be determined for each sample independently. In Table 2, methods such as WinCLIP [35] can classify individual images as normal or anomalous independently.
However, in this paper, decisions can only be made after processing a large batch of data, which deviates from the true zero-shot setting as demonstrated in [35]. Overall, this is a good paper with clear contributions; however, it can mislead readers and create confusion regarding some important experimental settings. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please refer to the weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > relation between one-class classification and anomaly detection. We wonder whether our understanding of the question aligns with your intention, so let us give our understanding of the anomaly detection (AD) problem. AD is a broad field that can be formulated in many ways (Table 3 in Ruff et al. 2021). We consider two setups: image-level anomaly detection (the object-level one-vs-rest setup) and anomaly segmentation (featuring the MVTec-AD dataset). In both cases, we aim to distinguish observed data (considered normal) from distributionally-different data (anomalies) encountered in the future. One-class classification (OCC) is a widely-used method to solve this problem. In contrast to binary classification, OCC tries to learn a tight (e.g., circular) decision boundary around the observed normal class (see Fig. 1 in Li et al., 2023), thus distinguishing it from everything else encountered in the future. This is different from binary classification, which usually involves an unbounded (e.g., linear) decision boundary. We don't expect the potential problems laid out in this question since our approach promotes robustness to future anomalies, as we ensure that anomalies and normal data encountered during testing (e.g., MNIST digits 5-9) are held out during meta-training (which in the MNIST example only uses digits 0-4). [Ruff, Lukas, et al. "A unifying review of deep and shallow anomaly detection." Proceedings of the IEEE 109.5 (2021): 756-795.] [Li et al., Deep Anomaly Detection under Labeling Budget Constraints, ICML 2023] > anomalies can be defined as any coarse-grained objects as they are with the same definition of the anomalies for the test, and the training set can include multiple objects, providing some form of supervision during meta-training. We are not sure we understand the question: are you suggesting that our approach may not be robust to novel types of anomalies encountered during testing? Even though we use a meta-training set, our train-test split of the classes ensures that the objects or classes in the meta-training set are not present at test time, so there is no "information leakage" between the data distributions encountered between meta-training and testing. The availability of a meta-training set provides the possibility of supervising the anomaly detector with synthetic abnormal data from other classes. Second, our ablation study showed that using the supervision signal from synthetic abnormal data at training time leads to better zero-shot AD performance than not using it (compare "one-class loss" in Table 4 against the results in Table 1). > Additional information on RegAD [31] in the related works Thanks for your information on the additional differences between RegAD [31] and other related works. We will clarify by adding the following sentence in the related work: "RegAD [31] does not exploit the presence of a meta-set to learn a stronger anomaly detector through synthetic outlier exposure. We found that using images from different distributions as example anomalies during training is helpful for anomaly segmentation on MVTec." > Meta-training and testing details on MVTec The implementation details can be found in Section 4.2, E.2, and our codebase. In summary, * When we aim to detect anomalies in one targeted class, we construct the meta-training set by removing the target class from the original MVTec training set. * We use class-level images to form supervision signals during meta-training.
We don't use any realistic defects from the official test set at training time. * We ensure that we have a clean split between the distributions (object classes) available for meta-training and meta-testing. This includes training and test sets. * Since our goal is anomaly segmentation, we assign an anomaly score to each data patch, using a sliding-window approach. * Note that the assumption that normal data are more present than anomalies still holds for anomaly segmentation since anomalous segments are usually localized in tiny regions of the images. (We also noticed that at the image level, there could be more anomalous images than normal ones, but this isn't the case at the patch level.) > Unfair comparisons with competing methods We disagree that our usage of batch-level prediction is unfair to other baselines that score data individually. Zero-shot learning relies on auxiliary information to inform a model of what the current task is [Xian et al., 2018]. Zero-shot AD approaches operate differently when it comes to identifying the "new normal" distribution. WinCLIP carefully selects prompts handmade by human experts as auxiliary information for normality. In contrast, we use batch-level information as auxiliary information to achieve zero-shot anomaly detection, for which we don't need any human experts at all. The benefits of our method are three-fold. 1) Batches are cheaper than human expert labeling and prompt engineering, so there is no additional cost. 2) Our method also works for small batches in our ablation studies. 3) We can extend our method to score individual data by presetting the sample mean and variance in the BatchNorm layers with a collection of data. These moments are then fixed when predicting anomaly scores for new individual data. [Xian, Yongqin, et al. "Zero-shot learning—a comprehensive evaluation of the good, the bad and the ugly." TPAMI 2018] --- Rebuttal Comment 1.1: Comment: I greatly appreciate the authors' thoughtful response. To Q1 and Q2: Thank you for your comprehensive explanation. Please rest assured, as I have a clear understanding of the concept of anomaly detection. To emphasize the differences, in the context of One-Class Classification (OCC) using the MNIST dataset, models trained on digits 0-4, with 0 as the normal class, can identify digits 1-4 as anomalies. The meta-set teaches the model to recognize distinct digits based on their identity labels. When applied during testing, if digit 5 is designated as normal, the model can accurately flag digits 6-9 as anomalies, without being troubled by potential variations such as "an imperfectly written digit 5" or "a blurred image of digit 5". This understanding is achieved through meta-training, allowing the model to grasp the fundamental anomaly types associated with different digit identities, even in cases where classes have no overlap. In contrast, in industrial anomaly detection on the MVTec dataset, anomalies are defects within the same object category. However, as the MVTec meta-training set lacks defective instances for any category, the model lacks prior knowledge of potential anomaly types. Intuitively, using various defect-free images (like tiles or transistors) as training data doesn't adequately prepare the model for detecting nuanced defects within one category (e.g., identifying subtle defects in wood textures). Therefore, the latter MVTec task is inherently more challenging than the OCC task.
My intention isn't to raise doubts, but rather to highlight that the authors might not have explicitly differentiated between these tasks, potentially leading to a lessened emphasis on their distinct complexities. This could inadvertently cause readers to equate their difficulties, despite the inherent challenges in the MVTec task. I am happy to see that the authors are already preparing to revise the related works to give a clear explanation. Could you briefly summarize the differences between the above tasks and present them in the paper? To Q3: Could you clarify whether, during MVTec testing, a batch refers to patches within an image or the entire test dataset? Despite my careful review of the rebuttal and paper, this specific query remains unanswered. To Q4: I agree that using an entire batch for collective decision-making is reasonable. However, the authors should openly acknowledge their informational advantage over WinCLIP in decision-making. While it's acceptable, clarity is vital. Additionally, I disagree with the authors' portrayal of WinCLIP's prompt engineering. Notably, WinCLIP uses the same prompt template across categories, rather than tailoring prompts individually. Also, shouldn't the image-level results for WinCLIP in Table 2 be 85.1%, not 91.8%? --- Reply to Comment 1.1.1: Title: Thank you for your suggestions. We will incorporate them in the revised version. Comment: Thank you for your timely reply and clarifications. We agree to incorporate your suggestions into the revised version of the paper. See details for each point. > To Q1 and Q2 We agree with your perspective on the difference between the object-level anomaly detection task and the pixel-level anomaly segmentation task, which helps make our paper and empirical study more precise. In our related work, especially in our few-shot and zero-shot AD paragraphs, we will highlight the difference between the two tasks and separate the discussion of related works per task. Specifically, we will summarize as follows. "Tasks in visual anomaly detection often involve object-level anomaly detection and pixel-level anomaly segmentation. The goal of the former is to differentiate image-level objects, and that of the latter is to localize defects within an image. While meta-training for object-level anomaly detection is generally simpler (it is easy to find anomaly examples, i.e., other objects different from the normal one), meta-training for anomaly segmentation poses a harder task since image defects may differ from object to object (e.g., defects in transistors may not easily generalize to subtle defects in wood textures)." Thanks for the suggestions. We appreciate it! > To Q3 A batch of patches refers to a collection of patches taken from a set of images that all share the same spatial position in the image. For example, we may stack the *top-left patch* of all testing `wood` images into a batch (a small sketch of this construction follows at the end of this thread). Thank you for your careful review. We will add this information to the main paper and make it clear. > To Q4 Thank you for your suggestions. We will discuss the different strategies for identifying "new normal distributions" across methods in more detail. Specifically, we will add the following sentences to our revised paper under the Zero-shot AD paragraph in the related work section: "While previous pre-trained CLIP-based zero-shot AD methods adapt to new tasks through informative prompts given by human experts, our method enriches the zero-shot AD toolbox with a new adaptation strategy without human involvement.
Our approach allows the anomaly detector to infer the new task/distribution based on a mini-batch of samples.” To your second question, we double-checked the WinCLIP results in our paper and believe the reported results are correct. We borrowed WinCLIP's image-level results from Table 1 in the WinCLIP paper, which they refer to as “anomaly classification”. In Table 4 they report WinCLIP’s pixel-level anomaly segmentation results. We thank you again for the careful and helpful reviews.
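To make the patch-batching protocol above concrete, here is a minimal sketch of how position-aligned batches could be assembled (illustrative Python; the array shapes and the function name are our own and not taken from the released code):

```python
import numpy as np

def position_aligned_batches(images, patch_size, stride):
    """Group sliding-window patches by spatial position: each batch
    stacks the patch at one grid position across *all* images
    (e.g., the top-left patch of every test `wood` image)."""
    num_images, H, W, C = images.shape
    batches = []
    for top in range(0, H - patch_size + 1, stride):
        for left in range(0, W - patch_size + 1, stride):
            # One batch = the same spatial window cut from every image.
            window = images[:, top:top + patch_size, left:left + patch_size, :]
            batches.append(window)  # shape: (num_images, patch_size, patch_size, C)
    return batches

# Example: 50 test images of size 224x224x3, 32x32 patches, stride 16.
images = np.random.rand(50, 224, 224, 3)
batches = position_aligned_batches(images, patch_size=32, stride=16)
# Each batch is then scored jointly by the batch-level detector, and an
# anomaly map is re-assembled from the per-position scores.
```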
Summary: I think the paper has an interesting motivation but fails to deliver a convincing contribution. The method is incremental and lacks novelty and analysis. Strengths: The paper proposes a method for zero-shot batch-level anomaly detection using batch normalization and meta-training. The main idea is to train a deep anomaly detector on a set of interrelated data distributions and use batch normalization to adapt to unseen distributions at test time. The paper claims to achieve state-of-the-art results on image and tabular data. The paper is well-written and easy to follow. Weaknesses: - The paper relies on three assumptions that may not hold in some real-world scenarios, namely the availability of a meta-training set of interrelated distributions, the batch-level anomaly detection setting, and the majority of normal data in each test batch. The paper does not justify the assumptions and design choices. The paper does not explain how realistic or generalizable these assumptions are, or how to relax or mitigate them in practice. - The paper lacks novelty and theoretical analysis. The method is essentially a combination of existing techniques: batch normalization, meta-learning, and deep anomaly detection. The paper does not provide new insights or theoretical guarantees on why or how the proposed method works. - The paper does not provide qualitative results to illustrate how its method works or what kind of anomalies it can detect, such as examples of test images or tabular data with their corresponding anomaly scores or segmentation masks generated by its method, compared with those of the baselines. This would help to better understand the strengths and limitations of the method and provide some insights into its behavior and adaptability. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see weakness 3. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The paper lacks novelty and theoretical analysis. The method is essentially a combination of existing techniques: batch normalization, meta-learning, and deep anomaly detection. The paper does not provide new insights or theoretical guarantees on why or how the proposed method works. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The method is incremental and lacks novelty. We respectfully disagree. Before our work, zero-shot anomaly detection relied on large foundation models pre-trained *specifically on image or language data*. We enrich the toolbox and provide a surprisingly simple solution. That is, one can use any existing deep anomaly detector (as long as its network structure contains batch normalization layers) plus proper training and evaluation schemes to get the zero-shot detection ability. In contrast to using foundation models, our approach may result in small, lightweight models that apply to data beyond text and natural images. We discover the effect of batch normalization in zero-shot AD and propose to use meta-training to learn to exploit batch normalization for zero-shot AD adaptation. These points are also highlighted in our contribution list in Section 1 and the method explanation in Section 2.3. > lacks theoretical analysis Our paper does contain several theoretical results: we 1) derive guarantees for generalization to unseen data distributions in Supplement A, 2) analyze why DSVDD fails when batch normalization is not used (Eq. 17), and 3) provide theoretical justifications for Assumption 3. These are mentioned in the main paper, and we will refer to them more clearly. > Justification of the three assumptions. Thanks for pointing this out; we'll revisit our paper to improve clarity in this regard. Our Assumptions 2 and 3 are already justified in the paragraphs directly following their definitions; Assumption 2 is the same assumption used in batch-level prediction work [10, 47, 51, 64, 73]. Assumption 1 is the assumption typically made in the meta-learning literature [Finn et al., ICML 2017; Nichol et al., 2018]. In practice, the meta-training set can be generated using available covariates (e.g., for our tabular data experiment we used the timestamps; in medical data, one could use data collected from different hospitals or different patients to obtain separate sets for meta-training; and in MVTec-AD, we used the other training classes except the target class to form the training set). We justified Assumption 3 both empirically and theoretically. > The paper does not provide qualitative results to illustrate how its method works or what kind of anomalies it can detect. Thanks for pointing this out; in fact, some of these qualitative results are already shown in the supplement. They are mentioned in the main paper, and we will point this out more clearly. Figure 2 (supplement) shows why batch normalization is an adaptive zero-shot AD module, and Figure 3 (supplement) visualizes how our method captures normality (i.e., projects the normal samples towards the neighborhood of the “center” and pushes abnormal samples away) in practice on the Omniglot dataset. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the rebuttal. Regarding the response on novelty: the proposed method is a batch-level detector, while the pre-trained foundation models are more general. I think using meta-training to adapt BN for batch-level AD is incremental and wonder whether such a contribution reaches the bar of NeurIPS. However, considering the other reviewers' decisions, I will raise the score. --- Reply to Comment 1.1.1: Title: Could you update the score as promised in your last response Comment: Thank you for reading our responses and agreeing to raise the score. We further address your new concerns as follows and hope to convince you of the novelty of our work.
It will be a great help if you indeed increase the score. > The proposed method is a batch-level detector, while the pre-trained foundation models are more general We agree that batch-level prediction is one assumption our work makes but pre-trained foundation models don't. However, as we explained in our contribution list, our method enriches the zero-shot AD toolbox and has many benefits that pre-trained foundation models don't have. First, our method is applicable to many data types. For example, zero-shot AD in tabular data is one of our applications to which foundation models cannot be applied. Second, our method doesn't involve humans at all, whereas foundation models rely on carefully selected prompts provided by human experts to inform the model what the current task is. Third, the resulting model of our method is lightweight and requires far less memory than large foundation models. In addition, experiments in our paper show that our lightweight method performs on par with or even better than large pre-trained foundation models in specialized visual domains. In applications involving these settings, our method is indeed more general than pre-trained foundation models. > I think using meta-training to adapt BN for batch-level AD is incremental and wonder whether such kind of contribution reaches the bar of NeurIPS Applying meta-training to learn to use batch normalization layers for zero-shot AD is non-trivial. Prior to our work, meta-training and batch normalization were two separate concepts. Although one can directly apply batch normalization as a naive way to achieve zero-shot AD (illustrated by Eq. 4 in our paper), the performance is limited. Meta-training realizes the power of batch normalization layers in deep models. We not only theoretically proved the zero-shot AD generalization obtained by applying meta-training with batch normalization (Eq. 15 in Supplement A), but also empirically demonstrated that meta-training improves the zero-shot AD performance by a large margin over only using batch normalization on pre-trained features (see the baseline ResNet152). In addition, our combination of a meta-training dataset and batch normalization is independent of pre-trained foundation models, and is thus applicable to various data types and not restricted to images and text as with foundation models.
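To illustrate the mechanism discussed in this thread, here is a minimal sketch (a hypothetical toy model, not the released implementation) of how batch normalization statistics computed from the test batch itself let a meta-trained, DSVDD-style detector re-center on an unseen "new normal" distribution, and how presetting the moments enables individual scoring:

```python
import torch
import torch.nn as nn

# Toy DSVDD-style detector. In train() mode, BatchNorm normalizes with
# the statistics of the *current* batch, so the embedding is
# re-calibrated to whatever distribution the incoming batch follows.
detector = nn.Sequential(
    nn.Linear(32, 64), nn.BatchNorm1d(64), nn.ReLU(),
    nn.Linear(64, 16), nn.BatchNorm1d(16),
)
center = torch.zeros(16)  # fixed hypersphere center, as in DSVDD

def batch_anomaly_scores(batch):   # batch: (batch_size, 32), batch_size > 1
    detector.train()               # use the batch's own BN statistics
    with torch.no_grad():
        z = detector(batch)
    return ((z - center) ** 2).sum(dim=1)  # distance-to-center scores

def preset_moments_then_score(collection, x):
    # Individual scoring: a forward pass over a data collection updates
    # the BN running moments; eval() then freezes and applies them.
    detector.train()
    with torch.no_grad():
        detector(collection)
    detector.eval()
    with torch.no_grad():
        z = detector(x.unsqueeze(0))
    return ((z - center) ** 2).sum(dim=1)
```

This mirrors the two prediction modes described above: batch-level scoring via per-batch statistics, and individual scoring via preset, fixed BatchNorm moments.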
Rebuttal 1: Rebuttal: We thank the reviewers for all the valuable suggestions on additional ablations. Based on their input, we have run additional experiments studying different normalization layers (Rebuttal Tables 1 and 2), different values for the hyperparameter $\pi$ (Rebuttal Table 3), meta-OOD performance (Rebuttal Table 4), a varying number of training classes (Rebuttal Table 5), the N-vs-rest setting (Rebuttal Table 6), and tiny batch sizes (Rebuttal Table 7). The most interesting findings will be included in the main paper to complement the extensive empirical study: we have studied three different tasks (image-level anomaly detection, anomaly segmentation, and anomaly detection in tabular data), where each task covers multiple datasets, and we compared at least six different methods for each task.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces Adaptive Centered Representations (ACR) to tackle the zero-shot anomaly detection task. To this end, ACR enables off-the-shelf deep anomaly detectors to generalize to unseen anomaly detection tasks via training-distribution adaptation based on batch normalization and meta-learning. Numerical experiments on image and tabular data, conducted on CIFAR100-C, OrganA, MNIST, Omniglot, MVTec AD, Anoshift, and Malware, show the effectiveness of the proposed method. Strengths: • This work introduces a simple yet effective training scheme, which integrates batch normalization and meta-learning, to enable zero-shot anomaly detection. Code is included for reimplementation. • Experiments on several datasets show the superiority of the proposed method. Weaknesses: - The proposed method relies on batch-level normalization. How other normalizations, such as LayerNorm, InstanceNorm, and GroupNorm, affect the proposed training scheme should be discussed. - The authors propose to apply batch normalization in multiple layers for different anomaly scorers. Yet, a discussion and analysis of which layers to use and how they affect performance is not included. - Assumption 2 (A2) is about batch-level prediction, i.e., the anomaly scores are estimated based on batches. When carrying out prediction, can the method automatically select the threshold for recognizing abnormal data, or does a threshold need to be defined manually per data class? Technical Quality: 3 good Clarity: 3 good Questions for Authors: The main concern with this paper is threshold selection when implementing anomaly detection. Besides this, some discussion of normalizations and implementation details is suggested. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discussed the limitations in the manuscript. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Effects of other normalizations, such as LayerNorm, InstanceNorm, and GroupNorm We thank the reviewer for mentioning other normalization techniques. Below, we report new experiments involving LayerNorm, InstanceNorm, and GroupNorm for zero-shot AD. We will add these discussions to the revised paper. We stress that, while these methods may have overall benefits in terms of enhancing performance, they do not work in isolation in our zero-shot AD setup. A crucial difference between these methods and batch normalization is that they treat each observation *individually*, rather than computing normalization statistics across a batch of observations. However, sharing information across the batch (and in this way implicitly learning about distribution-level information) was crucial for our method to work. Our experiments (AUROC results in the table below) with DSVDD on the Omniglot dataset support this reasoning. Using these normalization layers in isolation yields random outcomes (AUROC of about 50):

|LayerNorm|InstanceNorm|GroupNorm|
|---|---|---|
|50.0$\pm$0.9|50.6$\pm$0.7|50.2$\pm$0.5|

Rebuttal Table 1

We also ran a version of the experiment where we combined these methods with batch normalization in the final layer. The results improve dramatically in this case:

|BatchNorm (BN)|LayerNorm + BN|InstanceNorm + BN|GroupNorm + BN|
|---|---|---|---|
|99.1$\pm$0.2|98.8$\pm$0.1|98.8$\pm$0.2|98.2$\pm$0.2|

Rebuttal Table 2

Experimental details: We use DSVDD as the anomaly detector and experiment on the Omniglot dataset. Each nonlinear layer of the feature extractor for DSVDD is followed by the respective normalization layer. We apply the same training protocol as in Table 9 in the paper. For GroupNorm, we separate the channels into 2 groups wherever we apply group normalization. > The authors propose to apply batch normalization in multiple layers for different anomaly scorers. Yet, a discussion and analysis of which layers to use and how they affect performance is not included. We indeed work with different AD backbone models and generally apply batch normalization in all layers, which is standard and does not add much to the computational complexity. We agree that it is interesting to analyze whether batchnorm helps more in earlier or later layers. Our new experiments above suggest that batchnorm in the final layer works fairly well. We leave more systematic studies for the final version of the paper. > While carrying out prediction (for different data batches), can the method automatically select the threshold for recognizing abnormal data, or does a threshold need to be defined manually per data class? As commonly done in the anomaly detection literature, we use AUROC as our evaluation metric [62, 60, 68, 49, 35] to assess the ranking between normal and abnormal data. That is, we use the anomaly score to rank the data and leave thresholding to the user. Threshold values depend on user requirements such as minimizing false positives, minimizing false negatives, adding sample-rejection options, or setting the thresholds as quantiles. That said, an interesting avenue for future research is whether our meta-learning framework allows us to learn to predict suitable thresholds based on the meta-training set. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer Jqv8 Comment: Thanks to the authors for providing additional results.
Since the additional results are for the anomaly detection task on the Omniglot dataset, it seems that applying batch-norm in the final layer works there. Is the final layer also preferred when tackling the anomaly segmentation task on other datasets? It would be great if the relation between the layers using batch-norm and the various AD tasks could be further explored. --- Reply to Comment 1.1.1: Title: We conducted experiments to analyze the effects of the BatchNorm layer's position Comment: We thank the reviewer for suggesting an analysis of the effects of the BatchNorm (BN) layer’s position in various AD tasks. We conducted additional experiments on two visual anomaly detection tasks: anomaly segmentation on the MVTec-AD dataset and object-level AD on CIFAR100C. We used the same DSVDD model architectures as in Tables 1 and 2 as the backbone models, except that we switched off BN in all but one layer. For anomaly segmentation, there are five possible BN layer positions; there are four positions for the object-level AD model. We switched off the BN layers in all but one position and then re-trained and tested the model with the same protocol used in our main paper (for CIFAR100C, we tested the model with a test-data anomaly ratio of 10%). We iterated this procedure across all available BN layer positions. We repeated every experiment with five different random seeds and report the mean AUROC and standard deviation. The results are summarized in the tables below, where a smaller value of the BN position corresponds to earlier layers (closer to the input), and a larger value corresponds to later layers (closer to the output). The final column is copied from our results in Tables 1 and 2, where BN layers are at all available positions. For both MVTec-AD and CIFAR100C, we average the performance across all test classes. Results on the two tasks show opposite trends regarding the effects of BN layer positions. Specifically, for anomaly segmentation on MVTec-AD, earlier BN layers are more effective, while for AD on CIFAR100C, later BN layers are more effective. This observation can be explained by the fact that anomaly segmentation is more sensitive to low-level features, while object-level AD is more sensitive to global feature representations. In addition, compared to the results in Tables 1 and 2 (copied to the last column in the tables below), our results suggest that using BN layers at multiple positions does help re-calibrate the data batches of different distributions from low-level features (early layers) to high-level features (late layers) and shows a performance improvement over a single BN layer.

*MVTec-AD*:

|BN position|1|2|3|4|5|(1,2,3,4,5)|
|---|---|---|---|---|---|---|
|Pixel-level|80.8$\pm$1.9|69.6$\pm$1.4|73.9$\pm$0.9|63.6$\pm$1.6|60.9$\pm$0.8|92.5$\pm$0.2|
|Image-level|74.7$\pm$0.9|59.2$\pm$1.6|63.6$\pm$1.3|65.5$\pm$1.2|65.4$\pm$1.3|85.8$\pm$0.6|

*CIFAR100C*:

|BN position|1|2|3|4|(1,2,3,4)|
|---|---|---|---|---|---|
|AUROC|61.4$\pm$0.5|61.0$\pm$0.9|68.2$\pm$0.9|68.9$\pm$1.1|85.9$\pm$0.4|
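For concreteness, the position ablation above can be implemented along the following lines (an illustrative sketch, not the exact experimental script): every BatchNorm layer except the one at the kept position is replaced by an identity mapping before the model is re-trained.

```python
import torch.nn as nn

def keep_single_batchnorm(model, keep_position):
    """Replace every BatchNorm layer except the `keep_position`-th one
    (1-indexed, in input-to-output order) with an identity mapping."""
    position = 0
    for name, module in list(model.named_modules()):
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d)):
            position += 1
            if position != keep_position:
                # Walk down to the parent module and swap in Identity.
                parent = model
                *path, leaf = name.split(".")
                for part in path:
                    parent = getattr(parent, part)
                setattr(parent, leaf, nn.Identity())
    return model

# Usage: keep only the earliest BN layer, then re-train as usual.
# model = keep_single_batchnorm(model, keep_position=1)
```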
Optimality of Message-Passing Architectures for Sparse Graphs
Accept (poster)
Summary: This paper studies a specific graph convolutional architecture (and the resultant estimators from an architecture with appropriate learned values) in the setting of a contextual stochastic block model: one where a graph with latent clustering also has node features available that are informative for discovering the clustering. The paper studies this in the "constant average degree and constant feature dimension" regime and makes the following contributions: 1. Identifies the Bayes-optimal classifier for node clustering restricted to a constant-size neighborhood in the graph (using local weak convergence theory) and shows this matches the GCN architecture proposed. 2. Specializes the classifier to Gaussian features and 2 latent clusters, proving a formula for the Bayes-optimal classifier. 3. Shows that the local classifier is nearly optimal even for GCN architectures of depth $c \log n$ (for a small enough constant $c$) using the locally tree-like nature of the graph. Strengths: The main strengths are: 1. Identifies a setting in which specific GCN architectures are provably optimal. 2. Uses local weak convergence theory to show some simple but illustrative examples of how (local) Bayes-optimal classifiers aggregate information across features/graph structure. Weaknesses: It is hard to get excited about this paper. The main weakness to me is that the Bayes-optimal classifier for the tree model is essentially obvious, given Bayes rule. It is far from clear that GCN architectures uninformatively initialized would converge to this, and the paper's experiments do not show this (or in fact consider it, from what I could tell). They instead rely on the fact that the architecture they propose can approximate (with some setting of weights) the Bayes-optimal classifier. At this point, it is well known that the success of NN architectures cannot be explained by such existence/approximation theorems. If the experiments carried out show that naive initializations of the architectures converge to the optimal ones, then that is a priori surprising, and should be discussed more. I might even pivot the paper to start with that as an experimental finding, since the theoretical results are relatively unsurprising. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Some questions, roughly in order of the paper reading: 1. Why is the notion of SNR $\frac{|a-b|}{a+b}$ when in prior work on the SBM in the constant-degree regime it is usually $\frac{(a-b)^2}{2(a+b)}$, which makes sense in terms of the difference of mean / sd of Poisson degrees (which is correct in the limit)? How does this reconcile with prior work when features are not meaningful? 2. What happens to Corollary 1.1 when $\Gamma > 1$ (particularly, the formula for $c_k$ should be modified somewhat)? Naively (using the same formula but with $\Gamma^k -1$), this means that $c_k$ converges to a constant as $k$ diverges. Intuitively, it should be the case that the features matter less and less as the neighborhood diverges in this regime, irrespective of the SNR, which is the behavior for $\Gamma <1$. This suggests that the formula needs more than a naive change for $\Gamma >1$. 3. Same question as 1 on formula (3) and Theorem 3 (the specialization to the Gaussian case). 4. Theorem 3: the statement $\xi_\ell \ge 1$ looks like it needs to be a probabilistic statement (almost surely? with high probability?) or perhaps needs vanishing slack. E.g.
take $a$ large, $\ell = 1$ and $b=1$; then there's a constant (but small as $a$ diverges) probability that $\alpha_1$ is 0 and $\beta_1 \leq 2$ is not large, so that $\xi_1 <1$. 5. Does the Bayes optimal classifier know the probabilities $Q$ and the densities? Presumably this is necessary. 6. What about the GCN in Fig 1? Does it learn from scratch, or is it initialized at the true values, etc.? 7. In figure 1, presumably the (roughly) 0.75 accuracy is basically $\Phi(-1)$ at $\Gamma = 0$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The most important weakness is highlighted in the weaknesses. Second, it is also not obvious how one would change this for dense graphs. Analogous results (though with *very* different proof techniques) might be expected to hold when graphs have degrees of order $n$ but with within-/between-cluster probabilities differing by order $1/\sqrt{n}$. How would one identify GCN architectures that might work there? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their concise and thought-provoking questions! They really help us develop a better perspective of our contributions and improve our presentation. We address the comments below. > It is far from clear that GCN architectures uninformatively initialized would converge to this, and the paper's experiments do not show this. They instead rely on the fact that the architecture they propose can approximate (with some setting of weights) the Bayes-optimal classifier. We would like to clarify that our experiments indeed show that in the binary case, Architecture 1 uninformatively initialized converges to the ansatz, i.e., it is able to learn the right values of $Q$ and $\rho$. Both Figures 1 and 2 depict the test accuracy of trained models that were initialized uninformatively (uniformly at random). We have added this information in the supplementary material, and also **attached a pdf of the plots showing the convergence in the general rebuttal response** above. > If the experiments carried out show that naive initializations of the architectures converge to the optimal ones, then that is a priori surprising. I might even pivot the paper to start with that as an experimental finding, since the theoretical results are relatively unsurprising. We do indeed observe that naive initializations of the architecture converge to the optimal one in the binary Gaussian case (see the attached figures). We agree that this is a priori surprising, and we will add a discussion about this in the revision. However, we humbly request the reviewer to consider that pivoting the paper to start with this experimental finding would change the primary focus of our work, which is to show that the optimal estimator in the regime we study is realizable by a message-passing GNN architecture. > Why is the notion of SNR $\frac{|a-b|}{a+b}$ when in prior work on the SBM in the constant-degree regime it is usually $\frac{(a-b)^2}{2(a+b)}$, which makes sense in terms of the difference of mean / sd of Poisson degrees? How does this reconcile with prior work when features are not meaningful? When computing the optimal classifier, $\Gamma=\frac{|a-b|}{a+b}$ is what naturally shows up in the estimator's expression. We agree that in previous works where features are not present, the SNR is defined as $\frac{(a-b)^2}{2(a+b)}$. We believe that this is because of the nature of the previous work on detectability thresholds for SBMs. It is known (Mossel et al., Massoulié et al.) that weak recovery is possible if and only if this SNR > 1; however, we do not study detectability thresholds. Instead, we compute the optimal architecture in terms of the parameters of the data model, and the quantities that naturally pop up in the expression of the estimator are what we interpret as signals in the data. > What happens to Corollary 1.1 when $\Gamma>1$. Naively (using the same formula but with $\Gamma^k - 1$), this means that $c_k$ converges to a constant as $k$ diverges. Intuitively, it should be the case that the features matter less and less as the neighborhood diverges in this regime, irrespective of the SNR, which is the behavior for $\Gamma < 1$. This suggests that the formula needs more than a naive change for $\Gamma>1$. We would like to note that $\Gamma=\frac{|a-b|}{a+b}$ is always $\le1$ for $a,b\ge 0$. Hence, we are not concerned with the case $\Gamma>1$. However, the reviewer's intuition overall is exactly what we intended to communicate.
> Theorem 3: the statement $\xi_\ell\ge 1$ looks like it needs to be a probabilistic statement (almost surely? with high probability?) or perhaps needs vanishing slack? Thank you for pointing this out. The statement should say $\xi_\ell>1$ a.s. We will fix this in the revision. > Does the Bayes optimal classifier know the probabilities $Q$ and the densities? Presumably this is necessary. We believe that this relates to the first comment about the architecture's parameters converging to the optimal parameters with gradient descent. As we mentioned above, the architecture, when trained on CSBM data, indeed learns the right values of $Q$ and $\rho$. It does not know the right values in advance for the experiments we show in the paper. > What about the GCN in Fig 1? Does it learn from scratch, or is it initialized at the true values, etc.? All the architectures in fig 1 (MLP, GCN, Architecture 1) are trained from scratch with uniform random initializations. We have added this information in the revision. Thank you for pointing this out; it helps us clarify that our results are indeed as strong as one would ideally expect. > In figure 1, presumably the (roughly) 0.75 accuracy is basically $\Phi(-1)$ at $\Gamma=0$? We believe the reviewer wanted to say that the accuracy is roughly $0.85$, which is basically $1 - \Phi(-1) = \Phi(1)$ at $\Gamma=0$. Yes, this is the correct interpretation. > it is also not obvious how one would change this for dense graphs. Analogous results (though with very different proof techniques) might be expected to hold when graphs have degrees of order $n$, but with within-/between-cluster probabilities differing by order $1/\sqrt{n}$. How would one identify GCN architectures that might work there? This is a very interesting question, and it is true that our current technique is limited to sparse graphs. We rely on the fact that for the sparse regime that we study, a substantial fraction of nodes in the graph have locally tree-like neighbourhoods up to depth $c\log n$ for a suitably bounded $c$. This fails for dense graphs and makes it difficult to compute an informative expression for the optimal architecture. By 'informative', we mean an expression that is well-defined in terms of recognizable signals in the data. We leave the question of extending our results to dense graphs as an interesting open problem and a challenging direction for future work. --- Rebuttal Comment 1.1: Title: Thanks for the detailed response Comment: First, thank you to the authors for the detailed response. ``` We would like to clarify that our experiments indeed show that in the binary case ... i.e., it is able to learn the right values of $Q$ and $\rho$. Both Figures 1 and 2 depict the test accuracy of trained models that were initialized uninformatively (uniformly at random). We have added this information in the supplementary material, and also attached a pdf of the plots showing the convergence in the general rebuttal response ``` Thanks, this is quite interesting behavior and non-obvious. Please make sure to mention this in the final version! ``` We would like to note that Gamma is always < 1 ... ``` Whoops, my mistake. Thanks for correcting. ``` When computing the optimal classifier... ``` This is interesting, because one could always reduce your model to the standard SBM by taking uninformative features (i.e. $\rho_+ = \rho_-$ in the example). This yields an uninformative guess from the max-belief propagation, which is expected because there is nothing to break the symmetry.
This implies that you need $\rho_+ \neq \rho_-$ for a non-trivial result. It might suffice even that they be $\varepsilon_n$ different in some appropriate metric, provided $\varepsilon_n \to 0$ slowly enough. This is a standard weakness of belief propagation (it is also present in ref 1, though spectral methods get rid of this, and one can imagine a non-asymptotic analysis to be possible). Raising my score to 6.
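As a concrete footnote to the Bayes-rule discussion in this thread (schematic notation; uniform label priors assumed, and corrections from unobserved non-edges ignored): on a tree, the subtrees hanging off a node are conditionally independent given its label, so with feature densities $f_i$ and class-connectivity matrix $Q$ the posterior factorizes recursively as

$$\mathbb{P}(\sigma_u = i \mid \text{data}) \;\propto\; f_i(X_u) \prod_{v \in \partial u} \sum_{j} Q_{ij}\, m_{v \to u}(j), \qquad m_{v \to u}(j) \;\propto\; f_j(X_v) \prod_{w \in \partial v \setminus \{u\}} \sum_{k} Q_{jk}\, m_{w \to v}(k).$$

Taking logarithms turns the outer products into sums, which is the form a message-passing architecture can realize layer by layer. The two SNR notions discussed above are also related by the identity $\frac{(a-b)^2}{2(a+b)} = \Gamma^2 \cdot \frac{a+b}{2}$ with $\Gamma = \frac{|a-b|}{a+b}$, i.e., the classical detectability SNR is $\Gamma^2$ times the average degree $\frac{a+b}{2}$.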
Summary: The paper considers the inference problem of node classification in feature-decorated locally tree-like sparse graphs. The authors introduce and motivate a notion of asymptotic local Bayes optimality for local estimators. The first main result, Theorem 1, concerns the characterisation of the Bayes optimal (in the former sense) classifier in the setting considered. This theorem implies that this estimator is implementable in a message-passing graph neural network architecture. The authors then consider the case in which the features are Gaussian. In this context, Theorem 2 establishes the asymptotic local Bayes optimal error, while Theorem 3 compares this optimal classifier with other standard classifiers in some extreme signal-to-noise settings. These results are contrasted with numerical experiments. Finally, Theorem 4 gives a finite-size bound for the difference between the misclassification error of the estimator proposed and the minimum misclassification error of local classifiers. Strengths: - The questions addressed in the paper are relevant, interesting, and part of an active research field. - The results are interesting, rigorous, and thorough, and the proofs are clear. - The inclusion of finite-size bounds for the difference between the performance of the classifier proposed and the optimal performance is of particular interest and renders the analysis presented very complete. Weaknesses: - Maybe the most relevant weakness of the work is, in my opinion, that it focuses solely on locally tree-like graphs. Although this kind of model is widespread in the literature, the topologies present in many of the applications mentioned are not expected to be of this kind. Indeed, in many social settings, for example, the kinds of networks that are observed have diverse clustering coefficient values. Though the paper undoubtedly has theoretical value, this will probably limit the impact it could have in more applied research communities. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: General comments: - The analytical tools used require the graphs to be locally tree-like, and extending the analysis to non-locally-tree-like graphs is clearly beyond the scope of the work. But it is my opinion that it would make the case for the paper much stronger if the authors could add some numerical simulations comparing the classifier proposed and other estimators in some non-locally-tree-like settings. This would address to some extent the comment in the Weaknesses section above. - The analysis presented focuses mainly on the case of two classes. This is understandable as it renders the presentation clearer. However, it would be good if the authors could add a brief discussion on how the complexity of the estimator depends on the number of classes. Is it feasible to compute it for reasonable graph sizes and a moderately large number of classes? Some particular comments: - In line 270, the word after the full stop should be capitalised. - In Figure 1(a), is the value of $\Gamma$ used above the transition? I guess it is, but it would be good if this were explicitly stated. - The convergence rate of Theorem 4 is rather slow. It would thus be especially interesting to have some estimate of the constant for this bound. Do the authors think that this would be possible by some adaptations of the proof? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Although the limitations of the work are not explicitly stated, the context and reach of the results are clear in the presentation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive review and encouraging comments about our work! We address the questions below. > Maybe the most relevant weakness of the work is, in my opinion, that it focuses solely on locally tree-like graphs. Although this kind of model is widespread in the literature, the topologies present in many of the applications mentioned are not expected to be of this kind. Indeed, in many social settings, for example, the kinds of networks that are observed have diverse clustering coefficient values. Though the paper undoubtedly has theoretical value, this will probably limit the impact it could have in more applied research communities. Thank you for raising this important concern. We agree that the analysis in our paper is limited to locally tree-like graphs, since we look at the regime of constant degree. Although the architecture we obtained can be evaluated on denser graphs as well, it may not necessarily be optimal in that regime. We believe that our work will inspire further research in this direction, where the quest for optimal architectures is pursued for other topologies suited to many other applications. We will add a discussion about this in the conclusion section of the paper. > The analytical tools used require the graphs to be locally tree-like, and extending the analysis to non-locally-tree-like graphs is clearly beyond the scope of the work. But it is my opinion that it would make the case for the paper much stronger if the authors could add some numerical simulations comparing the classifier proposed and other estimators in some non-locally-tree-like settings. This would address to some extent the comment in the Weaknesses section above. We agree with the reviewer that extending the analysis to non-locally-tree-like graphs is beyond the scope of the current work and is a very interesting potential direction for future research. We believe that a comparative study of this architecture against other baselines on graphs that are not locally tree-like would be very interesting and would make the architecture more appealing, and we thank the reviewer for this suggestion. We omitted this in the current work as our primary focus (and the main result, Theorem 1) is the regime where the average degree is constant and $\ell<c\log n$ for a suitably bounded constant $c$. > The analysis presented focuses mainly on the case of two classes. This is understandable as it renders the presentation clearer. However, it would be good if the authors could add a brief discussion on how the complexity of the estimator depends on the number of classes. Is it feasible to compute it for reasonable graph sizes and a moderately large number of classes? Our implementation and comparison with MLP and GCN are done for the binary case; however, our main result (Theorem 1) and the architecture we describe (Architecture 1) are for the general case of multiple classes. The estimator has a nice closed form and the architecture makes the estimator realizable. The preprocessing step is somewhat computationally expensive for very large graphs in the case of multiple classes due to the calculation of the non-backtracking walk matrices. However, we believe that further research in this direction will improve on our preprocessing and help us realize this estimator more efficiently for large graphs. > In Figure 1(a), is the value of $\Gamma$ used above the transition? I guess it is, but it would be good if this were explicitly stated.
The plots in Figure 1 show the accuracy of three different neural networks (MLP, GCN, and Architecture 1) trained on the CSBM. The value of $\Gamma$ is fixed for Fig. 1(a), and the accuracy is plotted for a test set from the same distribution but with previously unseen features and graph. > The convergence rate of Theorem 4 is rather slow. It would thus be especially interesting to have some estimate of the constant for this bound. Do the authors think that this would be possible by some adaptations of the proof? We agree that the convergence rate $O(1/\log^2 n)$ is very slow. For an estimate of the constant, we refer to L. Massoulié. Community Detection Thresholds and the Weak Ramanujan Property. In Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing, pages 694–703, 2014. Unpacking Lemma 4.2 in the reference, we see that the constant depends on many other constants that are used throughout its proof, and one can choose suitable values for these constants to arrive at a constant $<2$. --- Rebuttal Comment 1.1: Comment: I would first like to thank the authors for their detailed response. > Thank you for raising this important concern. We agree that the analysis in our paper is limited to locally tree-like graphs, since we look at the regime of constant degree. Although the architecture we obtained can be evaluated on denser graphs as well, it may not necessarily be optimal in that regime. We believe that our work will inspire further research in this direction, where the quest for optimal architectures is pursued for other topologies suited to many other applications. We will add a discussion about this in the conclusion section of the paper. Okay. So your point is that the motivation for the setting considered is not that it is interesting for applications per se, but rather that the ideas contained could motivate further research in settings that are interesting for applications. Is this right? I would appreciate it if this were discussed in the revised version. > We agree that the convergence rate $\mathcal{O}(1/\log^2(n))$ is very slow. For an estimate of the constant, we refer to L. Massoulié. Community Detection Thresholds and the Weak Ramanujan Property. In Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing, pages 694–703, 2014. Unpacking Lemma 4.2 in the reference, we see that the constant depends on many other constants that are used throughout its proof, and one can choose suitable values for these constants to arrive at a constant $<2$. Thanks for the clarification. This answers my concern. It would be interesting if this were mentioned in the revised version. I would also add that I share the opinion of the area chair that a comparison with previous message-passing algorithms for large-dimensional features would enrich the manuscript. I would be interested to see how this comparison plays out. --- Reply to Comment 1.1.1: Title: Further response and thanks Comment: > Okay. So your point is that the motivation for the setting considered is not that it is interesting for applications per se, but rather that the ideas contained could motivate further research in settings that are interesting for applications. Is this right? I would appreciate it if this were discussed in the revised version. Thank you for the suggestion. This will help us clarify the objective and motivate further research! We agree and will include a discussion on this aspect in the revision.
Our work is also partly in response to a line of work attempting to design and study GNN architectures that go beyond message-passing, which we mention in the introduction and related works sections. > I would also add that I share the opinion of the area chair that comparison with previous message-passing algorithms for large dimensional features would enrich the manuscript. I would be interested to see how this comparison pays out. As suggested, we performed these experiments by implementing the AMP algorithm in Deshpande et al. 2018. Please see our general response for details on this.
Summary: The authors propose a local analysis of the CSBM. They define and derive a local Bayes-optimal classifier and show it can be achieved by a message-passing architecture. They give an interpretation of it in the limiting cases when the graph carries no or all information and support it with numerical experiments. Strengths: The topic of this article is of great interest. The CSBM has been widely used as an artificial dataset, and deriving an optimal GNN for it would be valuable. For this reason I think this article should be published. Weaknesses: 1 In the CSBM one usually considers the high-dimensional limit where the dimension d of the features is proportional to n, to model what happens in ML. Here the authors consider d=O(1), which seems limited. I do not see if this is just for convenience or if it is a stronger limitation of this work. For the high-dimensional Gaussian mixture, Bayes-optimal classifiers are known and the MLP the authors use works. Could the authors better explain this point? 2 A weakness of this article is a lack of clarity: – the analysis the authors give is local, the graph is tree-like. For l ~ O(log n) it will not hold. I was a bit confused; maybe the authors should emphasize that l is fixed compared to n, that there are no loops, and that many things are independent; – in the definition of Architecture 1, the authors should specify that the l and L of the first line (for H^(l)) have nothing to do with the calligraphic l of the second line (the size of the neighborhood). L. 121 "as a simple MLP" –> "as the output of a simple L-layer MLP"; – maybe introducing the model before Theorem 1 is counter-intuitive. The authors could explain that the model is a way to realize Theorem 1, and that the NN just learns rho and Q. This would ease the introduction of tilde A; – the experiments of Figs. 1 and 2 are not clear. What are the training procedure, l, L, training nodes, number of runs, ...? 3 It is a pity that the numerical experiments deal with the interpolation and not the major point of the article: the optimality. The authors could compare against other GNNs. Also, one has access to the conjecturally-optimal performance on the CSBM (in Deshpande '18 for instance); how far is this l-neighborhood model? Does the large-l limit converge to these, or is there a gap (due to the cycles)? 4 How does this model deal with train labels? In Architecture 1 the authors assume there is no train node in the l-neighborhood of u, no? Otherwise the Bayes-optimal classification would take the labels into account, M_{u,i} = (H_{u,j=label of u} + log Q_{i,j=label of u}). The prior distribution of node labels is not iid uniform in the semi-supervised setting. I would be pleased to give a higher rating to this article if the authors improve or comment on these points. If the authors develop them in their revised manuscript, maybe they could summarize Part 3.5. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Some more general questions: 1 As to the training: can the performance of the trained model be compared to Theorem 2? Do the learned Q and rho match the ones of the binary CSBM? 2 Does this neural network generalize well to other datasets? 3 Would the authors have an idea how to generalize their results to non-local large-l estimators? Typos: l. 150 "for a class of estimators this general," ? Ref. l. 409: Andrea Montanari is missing. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their encouraging review and insightful comments that helped us improve the clarity and credibility of our work. We address these below. > In the CSBM usually one considers the high-dimensional limit. The authors consider d=O(1), which seems limited. We agree that in the CSBM literature the high-dimensional limit is usually considered, where $d$ is proportional to $n$. However, in popular benchmark datasets for node classification, the number of features ($d$) is very small relative to $n$; see, for example, the OGB datasets: products (n=2.4m, d=100), proteins (n=130k, d=8), papers100m (n=100m, d=128). Therefore, it is not very clear that the high-dimensional scaling is the relevant one. The existing literature on the CSBM is focused on $d$ being proportional to $n$, while we consider the setting with fixed $d$. We have added a discussion about this in the revision. > the analysis is for locally tree-like graphs. For $\ell = O(\log n)$ it will not hold... Although the locally tree-like property will not hold for $\ell= O(\log n)$ in general, in Proposition 3.1 we show that if $\ell<c\log n$ for a bounded $c$, then with high probability the graph is still locally tree-like. We also show in Theorem 4 that for this setting of $\ell$, even in the non-asymptotic case the performance of the classifier $h^{*}$ is close to that of the optimal classifier $\mathrm{argmin}_{h\in\mathcal{C}_\ell}\,\mathcal{E}_n(h)$. Thus, although we do not consider the entire regime where $\ell= O(\log n)$, we do consider the case beyond fixed $\ell$. > Definition of arch 1: l and L of the first line have nothing to do with the calligraphic l. Thank you! We have incorporated this in the revision. > The authors could explain that the model is a way to realize theorem 1... We understand that it may seem reasonable to introduce the model after Theorem 1 from a theoretical perspective, but we also wanted to show the message-passing architecture before a theoretical analysis, and then motivate the architecture using Theorem 1. Right after Theorem 1, we discuss how Architecture 1 realizes the classifier in Theorem 1. > What are the training procedure, l, L, training nodes, number of runs... The plots are for $\ell=2$ (two-hop neighbourhoods) and $L=1$ (single-layer MLP). Our experiments with $\ell,L\in\{1,2,3\}$ yield the same result in terms of the plots. The networks are trained on a dataset with 10k nodes and tested on another dataset with 10k nodes, plotting the average test accuracy across 50 trials. We have added this in the revision. > the numerical experiments deal with interpolation and not optimality. The authors could compare against other GNNs and the conjecturally-optimal performances on CSBM We aimed to show optimality to some extent through the interpolation. We find that with a strong graph signal, the optimal classifier mimics a GCN, while with a weak graph signal, it mimics an MLP, disregarding the noisy graph. This interpolation showcases model optimality across signal strengths, ranging from the MLP to the GCN extremes and surpassing both throughout. The conjecturally optimal performance on the CSBM is for the case where $d$ is proportional to $n$, and our regime of study does not consider this setting. As suggested by the reviewer in an earlier comment, we have added a discussion about this in the revision. > how far is the l-neighborhood?... We take $\ell$ to be as large as $c\log n$ for a suitably bounded constant $c$, as mentioned in Proposition 3.1 and Theorem 4.
Beyond this limit, we are not able to guarantee that the $\ell$-neighbourhoods of a substantial fraction of nodes in the graph are cycle-free with high probability; hence we do not consider larger values of $\ell$. > In architecture 1 the authors assume there is no train node in the l-neighborhood of u, no? Otherwise the Bayes-optimal classification would take the labels into account. The prior distribution of node labels is not iid uniform in the semi-supervised setting. We agree that the architecture doesn't assume labels for the $\ell$-neighbourhood of $u$. The problem's objective is: given a node's features and its $\ell$-neighbourhood, predict its label. The learning process computes gradients using only node $u$'s training label, despite the output considering features of all $\ell$-neighbourhood nodes. Although the prior label distribution might not be iid uniform in the semi-supervised setup, introducing this complexity alters the Bayes classifier, complicating the analysis. Nonetheless, generalizing to this scenario isn't challenging, and the model would adapt to learn the appropriate node label distribution in that case. > Can the performances of the trained model be compared to theorem 2? do the learned Q and rho match the ones of the binary CSBM? Yes! Our plots in Figs. 1 and 2 are for randomly initialized models that were trained. The learned $Q$ and $\rho$ match the optimal values for the binary CSBM. We have now added this information in the supplementary document, with plots showing the convergence of the model parameters to the right $Q$ and $\rho$ against the training iterations. **These plots can be found in our rebuttal one-page pdf response**. > Does it generalize to other datasets? We evaluate the model on a completely different dataset (but with the same distribution) than the one it is trained on. Figures 1 and 2 are plots of the performance on these unseen datasets. > Can results generalize to non-local large l estimators? Thanks for this important question! We are currently limited in this regard because, to increase $\ell$ beyond $c\log n$, we need mathematical tools that can deal with the large amount of correlation in the data due to the presence of a large number of cycles in the neighbourhoods of a non-diminishing fraction of nodes. We leave this as a very interesting direction for future work. Thank you for pointing out the typos. We have fixed them in the revision. --- Rebuttal Comment 1.1: Comment: > The plots are for $\ell=2$ (two-hop neighbourhoods) and $L=1$ (single-layer MLP). Our experiments with $\ell,L\in\{1,2,3\}$ yield the same result in terms of the plots. The networks are trained on a dataset with 10k nodes and tested on another dataset with 10k nodes, plotting the average test accuracy across 50 trials. We have added this in the revision. Thanks for the details. I did not understand that the test was on a new graph sampled from the CSBM. A more common and realistic setting is the semi-supervised setting, where one has only one graph and a fraction of its labels are revealed (e.g., PubMed, Cora or OGB arxiv:2005.00687). The authors should make this precise when introducing the model. Could the authors specify what a training batch consists of? Does the network see all the node labels of the train graph?
> We agree that the architecture doesn't assume labels for the $\ell$-neighbourhood of $u$. The problem's objective is: given a node's features and its $\ell$-neighbourhood, predict its label. The learning process computes gradients using only node $u$'s training label, despite the output considering features of all $\ell$-neighbourhood nodes. Although the prior label distribution might not be iid uniform in the semi-supervised setup, introducing this complexity alters the Bayes classifier, complicating the analysis. Nonetheless, generalizing to this scenario isn't challenging, and the model would adapt to learn the appropriate node label distribution in that case. If the test nodes and the train nodes are not connected, then my comment has less importance: the classifier cannot directly use the train labels to predict the test label. However, in the semi-supervised setting, calling this a Bayes-optimal classifier would be a bit misleading: one could do better simply by taking into account the given train labels. > Yes! Our plots in Figs. 1 and 2 are for randomly initialized models that were trained. The learned $Q$ and $\rho$ match the optimal values for the binary CSBM. We have now added this information in the supplementary document, with plots showing the convergence of the model parameters to the right $Q$ and $\rho$ against the training iterations. These plots can be found in our rebuttal one-page pdf response. Ok, thanks. >> Does it generalize to other datasets? > We evaluate the model on a completely different dataset (but with the same distribution) than the one it is trained on. Figures 1 and 2 are plots of the performance on these unseen datasets. When turning the classifier of Theorem 2 into a neural network, one expected advantage would be to make it more robust and more able to generalize, e.g., on CSBMs with different parameters. --- Overall, this article is interesting and worth publishing, I think, because it proposes a way to derive an optimal (in a particular sense) GNN. It raises many questions. A limitation, as pointed out by reviewer S322, is that a classifier on a tree is of limited interest. Also, the properties of the resulting network seem interesting, but they are not well studied. --- Reply to Comment 1.1.1: Title: Further response and thanks Comment: > Thanks for the details. I did not understand that the test was on a new graph sampled from the CSBM. A more common and realistic setting is the semi-supervised setting, where one has only one graph and a fraction of its labels are revealed (e.g., PubMed, Cora or OGB arxiv:2005.00687). The authors should make this precise when introducing the model. Thank you! We have conducted further experiments for the semi-supervised setting, and as expected from a theoretical standpoint, we obtain precisely the same results. This is expected because, given that our dataset is synthetic, there is no difference between the distributions of an unseen node in the same graph as the training set and a new node in a completely unseen graph with the same distribution. > Could the authors specify what a training batch consists of? Does the network see all the node labels of the train graph? In the current set of experiments, this is true: the network sees all node labels in the train graph, but the test graph is completely new. As the reviewer says, the classifier cannot use the labels of the train graph directly. However, we also ran the experiments suggested by the reviewer in the semi-supervised setting and obtained similar results.
We will include a discussion in the revision for clarity.
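To make the data model discussed in this thread concrete, here is a minimal sketch (ours, not the authors') of sampling a binary CSBM of the kind used in these experiments. The exact parameterization is an assumption: we use the common convention of intra-/inter-class edge probabilities $a/n$ and $b/n$ (so the expected degree is O(1)) and Gaussian node features with class-dependent mean $\pm\mu$; the precise model is defined in the paper's Section 3.

```python
import numpy as np

def sample_binary_csbm(n=2000, a=5.0, b=1.0, d=16, seed=0):
    """Sample a sparse binary CSBM (assumed parameterization, illustration only).

    Labels are iid uniform over {-1, +1}; an edge (u, v) appears with
    probability a/n if y_u == y_v and b/n otherwise, so the expected degree
    is O(1); node features are Gaussian with class-dependent mean y_u * mu.
    """
    rng = np.random.default_rng(seed)
    y = rng.choice([-1, 1], size=n)
    mu = np.ones(d) / np.sqrt(d)
    X = y[:, None] * mu[None, :] + rng.standard_normal((n, d))
    same = np.equal.outer(y, y)
    p = np.where(same, a / n, b / n)
    upper = np.triu(rng.random((n, n)) < p, k=1)  # sample each pair once
    A = (upper | upper.T).astype(np.int8)         # symmetric adjacency matrix
    return A, X, y
```

With such a sampler, "testing on a completely different dataset with the same distribution" simply means drawing a second graph with a fresh seed, which is why the semi-supervised and new-graph evaluations coincide here.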
Summary: The focus of this research paper is the investigation of the node classification problem on sparse graphs that exhibit locally tree-like structures. The authors introduced asymptotic local Bayes optimality to define the ideal performance standard for node classification tasks. Using this criterion, the paper derived the optimal classifier for a wide range of statistical data models with diverse distributions of node features and edge connectivity. The paper further assessed the generalization error of this classifier and conducted a theoretical comparison of its performance with existing learning methods. This evaluation was carried out on a model with identifiable signal-to-noise ratios (SNRs) in the data. Strengths: * Existing works explored conventional message-passing graph neural network architectures. However, their analyses heavily depend on two key assumptions: firstly, that the graph is not excessively sparse, and secondly, that the node features are represented as a Gaussian mixture. In contrast, this study investigates the regime of highly sparse graphs, where the expected degree of a node is on the order of O(1). Moreover, it considers nodes beyond the immediate neighbours, encompassing nodes at fixed distances. * Their result holds for a general multi-class statistical model with arbitrary continuous or discrete feature distributions and arbitrary edge-connectivity probabilities between all pairs of classes. * They showed that in scenarios where the graph signal-to-noise ratio (SNR) is extremely low, the architecture simplifies to a basic MLP that disregards the underlying graph structure. Conversely, when the SNR is significantly high, their architecture transforms into a standard convolutional network that aggregates information from all nodes within the local neighborhood. In the intermediate SNR regime, it interpolates between the two and outperforms both the simple MLP and the typical graph convolutional network (GCN). Weaknesses: * The analysis in the paper seems interesting, but more experimental results are required to support the claims; for example, comparison with the baselines on existing benchmarks in the GCN literature, as well as more synthetic types of graphs (e.g., scale-free, random, etc.) with controllable degree distribution to enforce different levels of sparsity. * The proposed method relies on a pre-processing step to calculate $\tilde{A}^{(k)}$, which models a non-backtracking walk of length k and considers new nodes in the distance-k neighborhood that were not previously discovered. Specifically, $\tilde{A}^{(k)}_{uv}=1$ if and only if v is present in the distance-k neighborhood of u but not within the distance-(k-1) neighborhood. Such a calculation seems expensive and might not be scalable. Would you please elaborate on that? * $Q^k$ models the probability of observing a distance-k path between a pair of nodes in two classes. I am wondering why $\log(Q^k)$ is considered in the calculation of M. What is the intuition? What would happen if $Q^k$ were used instead? Having an ablation study would help in understanding. * How does the performance of the proposed approach compare to traditional GCN baselines on real datasets? The paper just considered synthetic graphs with controllable degree, but more experimental results are needed on existing benchmarks. Also, for the synthetic graphs, what type of graph is used and what is the underlying distribution?
Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Analysis of the cost associated with the preprocessing step and discussion of the scalability of the proposed approach * What is the impact of $\log(Q^k)$ vs. $Q^k$ on the performance? Please see the weaknesses for more detailed questions. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper discussed the limitations of the proposed approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful comments and questions. We address the comments below. > The analysis in the paper seems interesting, but more experimental results are required to support the claims; for example, comparison with the baselines on existing benchmarks in the GCN literature, as well as more synthetic types of graphs (e.g., scale-free, random, etc.) with controllable degree distribution to enforce different levels of sparsity. We sincerely request the reviewer to consider that our claims are stated as theorems and are rigorously proved in the paper. We agree that more experimental results would further demonstrate our results, but in our opinion, they do not provide more insight unless we change the setting of the experiments substantially (one example would be what the reviewer suggested: scale-free or power-law degree-distributed graphs). We note that the primary focus of our work is to show that the optimal GNN architecture for the CSBM with arbitrary feature distributions is a message-passing architecture. We believe that this is a very important result in itself, and we decided to leave similar results on many other random graph models (other than the stochastic block model) as future work. Since our theorems specifically rely on the graphs being sparse enough that depth-$\ell$ neighbourhoods with $\ell=c\log n$ are tree-like for a substantially large number of nodes, we do not consider dense graphs in the experiments. > The proposed method relies on a pre-processing step to calculate $\tilde{A}^{(k)}$, which models a non-backtracking walk of length k and considers new nodes in the distance-k neighborhood that were not previously discovered. Specifically, $\tilde{A}^{(k)}_{uv}=1$ if and only if v is present in the distance-k neighborhood of u but not within the distance-(k-1) neighborhood. Such a calculation seems expensive and might not be scalable. Would you please elaborate on that? We completely agree with the reviewer that the pre-processing step is expensive and may not be scalable for extremely large graphs with more than tens of millions of nodes. However, there exist neighbour-sampling techniques that can be explored as a potential future direction of work to make the architecture practically more useful. The primary scope of our paper is to introduce a message-passing architecture that is provably optimal for a very general statistical data model, and thus, we decided to leave the study of computational pre-processing costs as potential future work. We sincerely believe that further research will be able to construct more efficient pre-processing techniques for implementing this architecture on large-scale graphs. > $Q^k$ models the probability of observing a distance-k path between a pair of nodes in two classes. I am wondering why $\log(Q^k)$ is considered in the calculation of M. What is the intuition? What would happen if $Q^k$ were used instead? Having an ablation study would help in understanding. The $\log Q^k$ shows up in the architecture because of the maximization of the log-likelihood. Our goal was to show that the optimal architecture in the regime we study is a message-passing architecture. Message-passing architectures aggregate messages from all the nodes in the neighbourhood to classify a node of interest. The distribution of the neighbourhood can be expressed as a product of probabilities, and taking a $\log$ lets us express this distribution as a sum of log-probabilities.
We could have used $Q^k$ instead of $\log Q^k$ in the architecture, but then the message-passing mechanism would need to consider products instead of sums of messages from the nodes. > How does the performance of the proposed approach compare to traditional GCN baselines on real datasets? The paper just considered synthetic graphs with controllable degree, but more experimental results are needed on existing benchmarks. Also, for the synthetic graphs, what type of graph is used and what is the underlying distribution? Thank you for this important question! The main objective of our paper is to establish the theoretical foundation and principles behind the optimal message-passing architecture. We humbly emphasize that our goal is to provide insights about message-passing that could be applied across various domains, and conducting experiments on real datasets is an interesting future endeavour that could follow our theoretical work. We define the synthetic data model that we experiment with in detail in Sections 3.2 (general case) and 3.4 (binary case and Gaussian case), where we describe the underlying distributions of both the graph and the node features.
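For readers wondering what the pre-processing discussed above involves in practice, here is a minimal sketch (ours, not the paper's implementation) that computes the matrices $\tilde{A}^{(k)}$ by one truncated breadth-first search per node; on sparse graphs with O(1) expected degree, each truncated BFS touches only a small neighbourhood.

```python
from collections import deque

def distance_k_neighbourhoods(adj, k_max):
    """Compute tilde-A^(1), ..., tilde-A^(k_max), where tilde-A^(k)[u]
    contains v iff the shortest-path distance between u and v is exactly k.

    `adj` is an adjacency list: adj[u] = iterable of neighbours of u.
    """
    n = len(adj)
    tilde = [[set() for _ in range(n)] for _ in range(k_max)]
    for u in range(n):
        dist = {u: 0}
        queue = deque([u])
        while queue:
            v = queue.popleft()
            if dist[v] == k_max:      # truncate the BFS at depth k_max
                continue
            for w in adj[v]:
                if w not in dist:     # first visit gives the shortest distance
                    dist[w] = dist[v] + 1
                    tilde[dist[w] - 1][u].add(w)
                    queue.append(w)
    return tilde
```

This also makes the cost concrete: one truncated BFS per node, which is where neighbour-sampling techniques would plug in for very large graphs.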
Rebuttal 1: Rebuttal: In this comment, we respond to the general comments and questions of the reviewers about the convergence of the architecture's parameters, referred to as $(\theta_1, \theta_2)$ in the attached pdf, to the optimal values associated with $\rho$ and $Q$. We emphasize that Figures 1 and 2 in the paper are indeed for a network that is initialized uniformly at random and then trained using a CSBM dataset. In the attached pdf of this rebuttal, we show two plots demonstrating the convergence of the parameters of the architecture to the optimal values associated with $\rho$ and $Q$, i.e., $\mu$ and $\log(\frac{1+\Gamma}{1-\Gamma})$. In the first plot, we show that the normalized inner product between $\theta_1$ and $\mu$ converges to $1$ as the number of iterations increases. Similarly, in the second plot we show that $\theta_2$ converges to $\log(\frac{1+\Gamma}{1-\Gamma})$ as the number of iterations increases (here we show that the absolute difference between the values goes to 0). The settings of these plots are the same as those in Fig 1b with $\Gamma=0.2$. We address all other comments and questions individually for each reviewer. Pdf: /pdf/54ea9b7808e090d8cb76cf8663b65a9b293927f4.pdf
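The two diagnostics tracked in these plots are straightforward to compute; a sketch (variable names are ours) of what the rebuttal's plots measure:

```python
import numpy as np

def convergence_diagnostics(theta1, theta2, mu, gamma):
    """Convergence diagnostics for the trained parameters (theta1, theta2):
    (1) the normalized inner product between theta1 and mu, which should
        converge to 1 over training, and
    (2) |theta2 - log((1 + Gamma) / (1 - Gamma))|, which should converge to 0.
    """
    cos = float(theta1 @ mu) / (np.linalg.norm(theta1) * np.linalg.norm(mu))
    target = np.log((1.0 + gamma) / (1.0 - gamma))
    return cos, abs(float(theta2) - target)
```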
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
VRA: Variational Rectified Activation for Out-of-distribution Detection
Accept (poster)
Summary: This paper conducts a fine-grained analysis of ReAct, a simple but effective OOD method based on rectifying high activations. The authors apply a variational method to derive the optimal activation function by clipping and amplifying features. Strong empirical results are presented to support the effectiveness of the proposed methods. Strengths: 1. The proposed method is well-motivated both intuitively and theoretically. ReAct presented a promising solution to distinguish OOD samples by clipping activations, so it is natural to perform a fine-grained analysis of how to better clip activations. Actually, ASH [1] has a similar idea, but this paper takes a step further by finding the optimal clipping solution from a theoretical point of view. The variational method seems valid, and the assumption of Hilbert space is mostly valid for activations of deep neural networks. The theoretical analysis well motivates the methodology and shaping. 2. The authors propose two different variants of the method: one with simple clipping as in ReAct, and the other with middle activation amplification. This allows more possibilities to perform OOD detection. 3. The empirical results are very strong. In the ImageNet benchmark, it even outperforms MOS, which is an approach that requires training. >[1] Extremely Simple Activation Shaping for Out-of-Distribution Detection. ICLR 23. Weaknesses: 1. My main concern is that the methodology seems to need to calculate $p_{out}$ in advance. However, the statistics of OOD data are usually unknown to the model, so having access to the density might reveal some information about OOD data, which might be unfair to the baselines. This might limit the real-world usage of the method. 2. There are some very relevant OOD papers missing in the reference [1,2,3,4]. As stated before, ASH [1] has actually done a similar fine-grained analysis of ReAct. Furthermore, the concept of shaping in ASH covers more operations such as pruning, binarization, and scaling. RankFeat [2] is also motivated by ReAct and removes the dominant singular value of the last feature map. Moreover, it analyzes the lower bound of their method and ReAct. The bound analysis seems to apply to the method of this paper. The comparison and discussion with [1,2] are thus needed in the paper. 3. Some more operations such as binarization and scaling could be explored. Currently, only amplification and pruning are supported. >[1] Extremely Simple Activation Shaping for Out-of-Distribution Detection. ICLR 23. > >[2] RankFeat: Rank-1 Feature Removal for Out-of-distribution Detection Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see weaknesses and limitations. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: My primary concern is the availability of OOD statistics required by this method. I suggest the authors at least explicitly mention it in the limitation section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
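Since this review and the rebuttals below all revolve around ReAct-style rectification, a minimal sketch of that baseline operation may help; the 90th-percentile threshold below is only an illustrative choice (the percentile is a hyper-parameter, and the activations here are stand-in data, not real features):

```python
import numpy as np

def react(z, c):
    """ReAct rectification: truncate activations above the threshold c."""
    return np.minimum(z, c)

# The threshold is estimated from ID data only, e.g. as a high percentile
# of penultimate-layer activations.
id_activations = np.abs(np.random.default_rng(0).standard_normal(10_000))
c = np.percentile(id_activations, 90)
rectified = react(id_activations, c)
```

VRA, discussed below, extends this one-sided truncation with a lower cutoff and amplification of intermediate activations.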
Rebuttal 1: Rebuttal: # Response to Reviewer JjUp We thank the reviewer for appreciating the writing and the extensive experimental results. We try to address your concerns as follows: **Q1**: My main concern is that the methodology seems to need to calculate $p_{out}$ in advance. However, the statistics of OOD data are usually unknown to the model. So having access to the density might reveal some information about OOD data, which might be unfair to the baselines. My primary concern is the availability of OOD statistics required by this method. I suggest the authors at least explicitly mention it in the limitation section. **A1**: Thanks for your valuable comments. As you pointed out, VRA does need to know $p_{out}$. If we knew $p_{out}$ in advance, we could achieve near-perfect OOD detection results (see Table 4). But in real-world scenarios, we cannot obtain real OOD data in advance. For a fair comparison with the baselines, we use Gaussian noise images as virtual OOD data in our implementation (see Section 3.1). According to Tables 1$\sim$2, we still achieve state-of-the-art performance in OOD detection. Therefore, our comparison with the baselines is fair. In the revised paper, we will add more discussion in the limitation section. **Q2**: There are some very relevant OOD papers missing in the reference. As stated before, ASH has actually done a similar fine-grained analysis of ReAct. Furthermore, the concept of shaping in ASH covers more operations such as pruning, binarization, and scaling. RankFeat is also motivated by ReAct and removes the dominant singular value of the last feature map. The comparison and discussion are thus needed in the paper. **A2**: Thank you for your valuable comments. Although VRA, ASH, and RankFeat share some similarities, they are quite different in motivation, method, and results: *Motivation*: ASH argues that representations produced by over-parameterized neural networks are excessive for the task at hand, and therefore could be greatly simplified without much deterioration in the original performance while yielding a surprising gain in OOD detection. RankFeat observes that the OOD feature matrix tends to have a larger dominant singular value than the ID feature, and that the class predictions of OOD samples are largely determined by it. Unlike ASH and RankFeat, VRA tries to find the theoretically optimal operation that maximizes the gap between ID and OOD. Therefore, the motivations behind these methods are quite different. *Method*: Although all three methods extend ReAct, their implementations differ. ASH uses sample-wise pruning to remove a majority of the activations of the entire representation. Like ASH, RankFeat is also a sample-wise pruning strategy: it removes the rank-1 matrix composed of the largest singular value and the associated singular vectors from the high-level feature. In contrast, VRA follows ReAct and uses feature-wise pruning. According to Tables 1$\sim$2, our method achieves the best results among all baselines. *Results*: We compare the performance of VRA-based methods with ASH and RankFeat. For a fair comparison, we use the same ID data, OOD data, and network architecture. Experimental results in Tables 1$\sim$2 demonstrate that VRA-based methods outperform previous works in OOD detection, verifying the effectiveness of our method. We will add these comparison results in the revised paper. **Table 1: Comparison with ASH.
We use DenseNet-101 for CIFAR and ResNet-50 for ImageNet.** |Method|CIFAR-10(FR/AU)|CIFAR-100(FR/AU)|ImageNet(FR/AU)|Average(FR/AU)| |-|-|-|-|-| |ASH-P|23.45/95.22|64.53/82.71|50.32/89.04|46.10/88.99| |ASH-B|20.23/96.02|48.73/88.04|22.73/95.06|30.56/93.04| |ASH-S|15.05/96.61|41.40/90.02|22.80/95.12|26.42/93.92| |VRA|17.74/96.47|47.12/90.21|25.49/94.57|30.12/93.08| |VRA+|15.89/96.90|43.31/90.61|23.32/94.96|27.51/94.16| |VRA++|15.52/96.87|35.20/91.80|18.63/95.75|23.12/94.81| **Table 2: Comparison with RankFeat. We use ResNetv2-101 for ImageNet.** | Method | FR/ AU | | - | - | | RankFeat (Block 4)|39.69/87.84| | RankFeat (Block 3)|45.80/90.80| | RankFeat (Block 3+4)|36.80/92.15| | VRA|34.95/93.34| | VRA+|**30.85**/**93.71**| **Q3**: RankFeat analyzes the lower bound of their method and ReAct. The bound analysis seems to apply to the method of this paper. **A3**: Thanks for your valuable comments. We have carried out a similar analysis, but the space here is too limited to present it; we will include it in our subsequent discussion and in the appendix. **Q4**: Some more operations such as binarization and scaling could be explored. Currently, only amplification and pruning are supported. **A4**: Good idea! In this paper, we had not considered additional operations such as binarization and scaling. We denote these new variants as VRA-B and VRA-G: \begin{equation} \text{VRA-B}(z)=\begin{cases} 0, z \leq \alpha \nonumber \\\\ \alpha, z> \alpha\nonumber \end{cases}, \end{equation} \begin{equation} \text{VRA-G}(z)=\begin{cases} 0, z< \alpha\nonumber \\\\ kz, \alpha \leq z \leq \beta \nonumber \\\\ \beta, z > \beta \\\\ \end{cases}, \end{equation} where $\alpha$ and $\beta$ are thresholds and $k$ controls the scaling factor. For VRA-G, we treat the $\eta_\alpha$-quantile (or $\eta_\beta$-quantile) of activations estimated on ID data as $\alpha$ (or $\beta$), in line with VRA. In Table 3, we investigate the performance of different VRA-based variants. Experimental results demonstrate that VRA++ outperforms the other variants in OOD detection. **Table 3: Performance of different VRA-based variants with DenseNet-101 on CIFAR-10.** |Method|FR/AU| |-|-| |VRA|17.74/96.47| |VRA+|15.89/**96.90**| |VRA++|**15.52**/96.87| |VRA-B$(\alpha=0.5)$|30.01/94.02| |$\alpha=0.6$|**20.37**/**95.84**| |$\alpha=0.7$|21.18/95.64| |$\alpha=0.8$|35.84/93.31| |VRA-G(k=0.5)|28.14/94.12| |k=0.8|19.62/96.11| |k=1.0|17.74/96.47| |k=1.2|17.32/96.64| |k=1.5|**16.81**/**96.66**| |k=2.0|17.05/96.56| --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for the response! The reply addresses most of my concerns, but I have one final question: >A3: Thanks for your valuable comments. We have carried out a similar analysis, but the space here is too limited to present it; we will include it in our subsequent discussion and in the appendix. Can the authors give the results of the analysis here? There is no need to write the full derivations, but I am just interested in the final results of the analysis. --- Reply to Comment 1.1.1: Title: Thanks for the response! Comment: Thank you for taking your precious time to read our rebuttal and give us a response. Based on your valuable suggestions, we provide a theoretical proof from the perspective of bound analysis to answer your final question. Consider output logits $f(\mathbf{z})=\mathbf{W}\mathbf{z}+\mathbf{b}$, where $\mathbf{z}$ is the feature vector of the penultimate layer. We denote the $i^{th}$ element of $f(\mathbf{z})$ as $f_i$, and $c$ is the number of categories.
We compute the energy score of the output logits $f(\mathbf{z})$ as the OOD score, consistent with RankFeat: \begin{align} S(\mathbf{z}) = \log \sum_{i=1}^c e^{f_i} = \log \sum_{i=1}^c e^{f_i - \max_{j}{f_j}} + \max_{j} f_j \le \log c + \lVert f(\mathbf{z}) \rVert_{\infty}. \nonumber \end{align} For any norm $\lVert \cdot \rVert_{p} (p > 0)$, there exist $k_1>0$ and $k_2>0$ satisfying: \begin{align} k_1 \lVert f(\mathbf{z}) \rVert_p \le \lVert f(\mathbf{z}) \rVert_{\infty} \le k_2 \lVert f(\mathbf{z}) \rVert_p. \end{align} Then, we get: \begin{align} S(\mathbf{z}) \le k_2 \lVert \mathbf{W}\mathbf{z}+\mathbf{b} \rVert_{p} + \log c. \nonumber \end{align} Using the triangle inequality $\lVert \mathbf{W}\mathbf{z}+\mathbf{b} \rVert _p \leq \lVert \mathbf{W}\mathbf{z}\rVert _p + \lVert \mathbf{b} \rVert_p $ and the consistency of matrix norms $\lVert \mathbf{W}\mathbf{z}\rVert _p \leq \lVert \mathbf{W}\rVert _p \lVert \mathbf{z} \rVert_p$, and setting $p = 1$, we get: \begin{align} S(\mathbf{z}) \le k_2 \lVert \mathbf{W} \rVert _1 \lVert \mathbf{z} \rVert _1 + k_2 \lVert \mathbf{b} \rVert_1 + \log c. \end{align} According to the above inequality, maximizing $\mathbb{E}_{\text{in}}[z] - \mathbb{E}_{\text{out}}[z]$ in VRA increases the gap between the upper bounds for ID and OOD. For a clear comparison, we restate the result from RankFeat: \begin{align} S(\mathbf{z}) \le K\lVert \mathbf{W}\rVert_{\infty} (\sum_{i=1}^N s_i) + \lVert \mathbf{b} \rVert_{\infty} + \log c, \end{align} where $K>0$ and $s_i$ is the $i$-th singular value of the high-level feature map. Specifically, RankFeat "removes the rank-1 matrix from the high-level feature", which means replacing $\sum_{i=1}^Ns_i$ with $\sum_{i=1}^N s_i - s_1$, where $s_1$ is the largest singular value. RankFeat argues that the "OOD feature usually has a much larger $s_1$", so its remove-the-rank-1-matrix operation increases the gap between the upper bounds for ID and OOD; VRA maximizes $\mathbb{E}_{\text{in}}[z] - \mathbb{E}_{\text{out}}[z]$, which also increases this gap, although the upper bound differs between VRA and RankFeat. This is the analysis of VRA from a perspective similar to RankFeat's.
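The first inequality in this derivation is easy to verify numerically; a small self-contained check (ours, with stand-in logits) of $S(\mathbf{z}) \le \log c + \lVert f(\mathbf{z}) \rVert_{\infty}$:

```python
import numpy as np

def energy_score(logits):
    """Energy-style OOD score S(z) = log sum_i exp(f_i), computed stably
    via the log-sum-exp trick used in the derivation above."""
    m = logits.max()
    return m + np.log(np.exp(logits - m).sum())

rng = np.random.default_rng(0)
c = 1000                              # number of classes
f = 5.0 * rng.standard_normal(c)      # stand-in logits f(z) = Wz + b
assert energy_score(f) <= np.log(c) + np.abs(f).max() + 1e-9
```

The bound is tight up to the additive $\log c$ term, which is why shrinking $\lVert \mathbf{z} \rVert_1$ more on OOD inputs than on ID inputs separates the scores.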
Summary: The authors propose a technique called "Variational Rectified Activation (VRA)", which mimics suppression and amplification operations using piecewise functions. Theoretical analysis is provided to justify VRA. Extensive experiments demonstrate the effectiveness and generalization of the proposed method. Strengths: 1. It is technically sound to use variational methods to find the optimal function that maximizes the gap between ID and OOD. 2. The whole method is simple and effective, and performs well across different datasets and models. The proposal is also compatible with different scoring functions. 3. The paper is well written, well-structured, and easy to understand. Weaknesses: 1. The proposed method of simultaneously truncating high and low activations is very similar to the existing method in [1], which also corrects features by suppressing high and low activations, thus limiting the novelty of this proposal. 2. The formula in Eq. 3 confuses me a bit. Why does maximizing $E_{in}(z) - E_{out}(z)$ maximize the gap between ID and OOD? Why not maximize the squared difference? What happens if you maximize $E_{out}(z) - E_{in}(z)$? 3. The VRA piecewise function is designed based on the feature histograms of one model and one ID dataset, which may not hold for other datasets and model architectures. For example, for a Transformer-based model, the features in the penultimate layer contain many negative values; how should the piecewise function be designed in that case? [1] Boosting Out-of-distribution Detection with Typical Features. NeurIPS 2022. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Please compare the VRA framework with [1] and discuss the similarities and differences between the two. 2. In Fig. 1, I'm curious what the optimal function looks like if you use ViT as the classification model and CIFAR as the ID dataset. Could you provide more visualizations in the appendix? [1] Boosting Out-of-distribution Detection with Typical Features. NeurIPS 2022. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Please see the comments above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer J9PQ We sincerely thank the reviewer for the detailed comments and suggestions. We try to address each comment as satisfactorily as possible: **Q1**: The proposed method of simultaneously truncating high and low activations is very similar to the existing method in BATS, which also corrects features by suppressing high and low activations, thus limiting the novelty of this proposal. Please compare the VRA framework with BATS and discuss the similarities and differences between the two. **A1**: Thank you for your valuable comments. Although VRA and BATS share some similarities, they are quite different in motivation, method, and results: *Motivation*: BATS relies on BatchNorm. For OOD detection, BATS exploits the phenomenon that the features of the training data more frequently fall in the interval $[ \mu -\lambda \sigma, \mu + \lambda \sigma ]$. In contrast, VRA does not depend on BatchNorm; it tries to find the theoretically optimal operation that maximizes the gap between ID and OOD. *Method*: BATS relies on BatchNorm and uses the mean and std to determine the thresholds for low and high activations. In contrast, VRA does not rely on BatchNorm and determines the thresholds for low and high activations via quantiles. The suppression also differs: we suppress towards zero, while they suppress towards the mean. Experimental results in Table 6 of our main text show that VRA achieves good performance both with and without BatchNorm (see VGG-16 and VGG-16-BN). Therefore, VRA has better compatibility with different backbones than BATS. In addition, VRA+ and VRA++ further amplify intermediate activations; Table 1 shows that this amplification further improves OOD detection performance. *Results*: We further compare the performance of VRA-based methods and BATS; we use DenseNet-121 for CIFAR and ResNet-50 for ImageNet. Experimental results in Table 1 demonstrate that VRA-based methods outperform BATS in OOD detection, verifying the effectiveness of our method. We will add these comparison results in the revised paper. **Table 1: Comparison with BATS. (FR / AU)** | Method |CIFAR-10 | CIFAR-100 | ImageNet | | - | - | - | - | |BATS|24.30/95.32|59.32/86.79|27.11/94.28| |VRA|17.74/96.47|47.12/90.21|25.49/94.57| |VRA+|15.89/96.90|43.31/90.61|23.32/94.96| |VRA++|15.52/96.87|35.20/91.80|18.63/95.75| **Q2**: The formula in Eq. 3 confuses me a bit. Why does maximizing $E_{in}(z) - E_{out}(z)$ maximize the gap between ID and OOD? Why not maximize the squared difference? What happens if you maximize $E_{out}(z) - E_{in}(z)$? **A2**: The objective function $\max_g \mathbb{E}_{\text{in}}[g(z)] - \mathbb{E}_{\text{out}}[g(z)]$ is derived from ReAct, a well-known and effective OOD detection technique. ReAct proves that increasing $\mathbb{E}_{\text{in}}[g(z)] - \mathbb{E}_{\text{out}}[g(z)]$ leads to better OOD detection performance. In this paper, we extend the idea of ReAct and propose a new operation, VRA, to maximize $\mathbb{E}_{\text{in}}[g(z)] - \mathbb{E}_{\text{out}}[g(z)]$. Experimental results show that we achieve better performance than ReAct in OOD detection. **Q3**: The VRA piecewise function is designed based on the feature histograms of one model and one ID dataset, which may not hold for other datasets and model architectures. **A3**: Thanks for your valuable comments. In fact, we designed VRA based on the results of multiple models and multiple ID datasets. Due to page limitations, we did not include all visualization results in this paper.
In Figure 2 (see attached PDF in Global Response), we provide more visualization results and observe the same phenomenon. We will add these visualization results in the appendix. **Q4**: For a Transformer-based model, how should the piecewise function be designed? I'm curious what the optimal function looks like if you use ViT as the classification model. Could you provide more visualizations in the appendix? **A4**: Good question! Our core function comes from ReAct, which relies on ReLU-based backbones where the penultimate layer does not contain negative values. For the case of many negative values, we reason by analogy as follows. For positive values, ReAct tries to increase $\mathbb{E}_{\text{in}}[g(z)] - \mathbb{E}_{\text{out}}[g(z)]$; it uses an operation that suppresses OOD data more than ID data. Therefore, for negative values, we should also suppress OOD data more than ID data. This results in an increase of $\mathbb{E}_{\text{in}}[-g(z)] - \mathbb{E}_{\text{out}}[-g(z)]$. To unify the positive and negative cases, we should maximize $\mbox{sgn}(z)(\mathbb{E}_{\text{in}}[g(z)] - \mathbb{E}_{\text{out}}[g(z)])$, where $\mbox{sgn}(\cdot)$ is the sign function. Similar to Eq. 3$\sim$13, we obtain the optimal activation: \begin{align} g^*(z) = z + \lambda \mbox{sgn}(z) \left(1 - \frac{p_{out}(z)}{p_{in}(z)}\right). \end{align} We visualize the optimal function $g^*$ on different ID and OOD data. Due to page limitations, we will put these visualization results in the appendix. We observe a different $g^*$ in ViT compared to ReLU-based backbones. Specifically, we should amplify activations with low absolute values and suppress activations with high absolute values. To mimic this operation, we design a new piecewise function called VRA-ViT: \begin{equation} \text{VRA-ViT}(z)=\begin{cases} -\alpha, z \leq -\alpha \nonumber \\\\ \beta z, -\alpha < z < \alpha\nonumber \\\\ \alpha,z \ge \alpha \nonumber \end{cases}, \end{equation} where $\alpha>0$ controls the threshold for determining low and high activations, and $\beta>0$ controls the gradient. Experimental results in Table 2 show the effectiveness of VRA-ViT. **Table 2: Compatibility with ViT on ImageNet.** |Method|Backbone|FR / AU| | - | - | - | |Energy|B/16|67.41/74.30| |+ReAct||64.99/80.74| |+VRA-ViT||60.99/85.76| |Energy|B/32|76.69/76.40| |+ReAct||75.22/79.41| |+VRA-ViT||65.98/84.20| |Energy|L/16|68.48/74.59| |+ReAct||64.99/81.67| |+VRA-ViT||62.45/86.16| |Energy|L/32|72.22/73.98| |+ReAct||70.61/79.09| |+VRA-ViT||66.64/84.75| --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: I would like to thank the authors for their rebuttal. But my main concerns have not been addressed. 1. The VRA optimal operation is more like a simple variant of existing feature clipping methods, such as ReAct [1], BATS [2], and ASH [3]. ReAct clips high activations, BATS clips both low and high activations, and ASH-B clips low activations. Although the authors emphasize the difference from BATS, in fact VRA only adds three hyper-parameters, while BATS utilizes the mean and variance to get the min and max thresholds. 2. There are still some problems in the interpretation of formula 3. ReAct clips the high activations and proves that the average activation reduction for OOD is larger than the reduction for ID, which is reasonable. Eq. 3 lacks insight; the authors could provide more explanation of why the objective function (Eq. 3) benefits OOD performance. 3. The optimal operation is based on feature histograms of the ID set and OOD set.
For different models and OOD datasets, the estimated optimal operation functions might be different, which makes it difficult to obtain the optimal function. For example, the activation functions on iNaturalist, SUN, and CIFAR-100 are quite different. Besides, it seems that the optimal function is estimated based on the test ID/OOD set, rather than a validation set. In general, it is interesting and novel to utilize a variational method to estimate the optimal activation function. But there are some significant shortcomings, making it hard to accept at NeurIPS. I believe the paper could be significantly improved if the authors provided a more reasonable and insightful objective function and estimated a robust operation function. - [1] ReAct: Out-of-distribution Detection with Rectified Activations. - [2] Boosting Out-of-distribution Detection with Typical Features - [3] Extremely Simple Activation Shaping for Out-of-Distribution Detection. --- Reply to Comment 1.1.1: Title: Thanks for your response. Comment: We greatly appreciate your reply and want to address your concerns as much as possible. **Q1**: The VRA optimal operation is more like a simple variant of existing feature clipping methods, such as ReAct, BATS, and ASH. ReAct clips high activations, BATS clips both low and high activations, and ASH-B clips low activations. Although the authors emphasize the difference from BATS, in fact VRA only adds three hyper-parameters, while BATS utilizes the mean and variance to get the min and max thresholds. **A1**: Thank you very much for your valuable comments. In addition to our responses to Reviewer J9PQ (see A1) and Reviewer JjUp (see A2), we restate the differences between VRA-based methods and existing approaches. In this paper, we extend ReAct and propose new activation functions for OOD detection. VRA-based approaches are not simple variants of existing approaches but have strong motivations and theoretical guarantees. Compared with ReAct, BATS, and ASH, which merely truncate low and high activations, we further prove the necessity of amplifying intermediate activations. Such a simple modification achieves state-of-the-art OOD detection performance, which fully validates its effectiveness. As Reviewer JjUp comments, our VRA-based methods "allow more possibilities to perform OOD detection". **Q2**: There are still some problems in the interpretation of formula 3. ReAct clips the high activations and proves that the average activation reduction for OOD is larger than the reduction for ID, which is reasonable. Eq. 3 lacks insight; the authors could provide more explanation of why the objective function (Eq. 3) benefits OOD performance. **A2**: Thanks for your comments, and we apologize for our unclear description. First, let us review the theoretical analysis in ReAct (see Section 5 of ReAct). It proves that the rectification operation affects OOD activations more severely than ID activations, resulting in a large $\mathbb{E}_{\text{out}} [ z_i - \bar{z_i} ] - \mathbb{E}_{\text{in}} [ z_i - \bar{z_i} ]$ (see Remark 1 of ReAct). The increased separation between OOD and ID activations transfers to the output space as well (see Remark 2 of ReAct), thus enlarging the gap between the OOD and ID scores. Rather than just increasing the separation like ReAct, this paper attempts to maximize it. Based on this motivation, we propose the objective function in Eq. 3. Experimental results on benchmark datasets also demonstrate the effectiveness of our method.
**Q3**: The optimal operation is based on feature histograms of the ID set and OOD set. For different models and OOD datasets, the estimated optimal operation functions might be different, which makes it difficult to obtain the optimal function. For example, the activation functions on iNaturalist, SUN, and CIFAR-100 are quite different. **A3**: Thanks for your valuable comments. In Figure 2 (see attached PDF in Global Response) and Figure 1 (see the main text), we provide visualization results on multiple datasets and observe the same phenomenon. From this phenomenon, we verify the necessity of suppressing abnormally low and high activations and amplifying intermediate activations. Although different models and OOD datasets might have different optimal operation functions, we can adjust the hyper-parameters of the activation functions to approximate the optimal operation function. **Q4**: Besides, it seems that the optimal function is estimated based on the test ID/OOD set, rather than a validation set. **A4**: In fact, we use Gaussian noise images as the validation set for hyper-parameter tuning (see Implementation Details in the main text). If we chose the best hyper-parameters according to the test ID/OOD set, we could achieve even higher OOD detection performance. For example, we set $\eta_{\alpha} = 0.6$, $\eta_{\beta} = 0.95$ and report "17.74/96.47 (FPR95/AUROC)'' in Table 1 of our main text. If we chose the best hyper-parameters based on the test ID/OOD set, we would get "16.78/96.63 (FPR95/AUROC)''.
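To illustrate how the optimal operation discussed in this thread can be estimated in practice, here is a sketch (ours) that applies the closed form $g^*(z) = z + \lambda (1 - p_{out}(z)/p_{in}(z))$ from the rebuttal to histogram estimates of the two densities, with activations from Gaussian-noise images standing in for $p_{out}$ as in the validation protocol described above:

```python
import numpy as np

def estimate_optimal_g(z_id, z_ood, lam=1.0, bins=100):
    """Histogram estimate of g*(z) = z + lam * (1 - p_out(z) / p_in(z)).

    z_id:  penultimate-layer activations collected on ID data.
    z_ood: activations collected on virtual OOD data (e.g. Gaussian noise
           images pushed through the same network).
    Returns bin centers and the estimated optimal activation per bin.
    """
    edges = np.histogram_bin_edges(np.concatenate([z_id, z_ood]), bins=bins)
    p_in, _ = np.histogram(z_id, bins=edges, density=True)
    p_out, _ = np.histogram(z_ood, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    ratio = p_out / np.maximum(p_in, 1e-12)   # guard against empty ID bins
    return centers, centers + lam * (1.0 - ratio)
```

The piecewise VRA variants can then be read off as cheap approximations to the estimated curve, which is what the hyper-parameter tuning over $\eta_{\alpha}$, $\eta_{\beta}$ amounts to.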
Summary: This work leverages the variational method to find the optimal operation and proposes a new technique, variational rectified activation (VRA), for out-of-distribution (OOD) detection. The paper finds the best operation for OOD detection and verifies the necessity of suppressing abnormally low and high activations and amplifying intermediate activations. The proposed VRA method is compatible with different network architectures and scoring functions. Extensive experiments on a number of benchmark datasets show the effectiveness of the proposed method. Strengths: 1. This work tackles out-of-distribution (OOD) detection, which is important for building reliable machine learning models in the real world. 2. The key idea of variational rectified activation, which mimics suppression and amplification operations using piecewise functions, is simple and flexible enough to be compatible with different scoring functions and network architectures. This work also provides a theoretical understanding of the VRA method for OOD detection from the perspective of the variational method. 3. This paper conducts extensive experiments on several benchmark datasets to validate the effectiveness of the proposed VRA method. Weaknesses: 1. Can the authors report the mean and std for the main results in the experiment section? 2. It would be interesting if the authors could compare the visualization of the distribution of ID and OOD uncertainty scores before and after variational rectification. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Refer to the detailed comments on weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This work does not present any potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer Tdya We thank the reviewer for appreciating our clear writing and extensive experimental results. We try to address your concerns as follows: **Q1**: Can the authors report the mean and std for the main results in the experiment section? **A1**: Thanks for your valuable comments. Since the standard deviations are relatively small, we did not include them in the main results. In Table 1, we take the main results on CIFAR-10 as an example. **Table 1: Mean and std for the main results. We use DenseNet-101 for CIFAR-10.** | Method | CIFAR-10 (FR $\downarrow$ / AU$\uparrow$) | | ------------- | ----------------------------------------- | | MSP | 48.74$\pm$0.00 / 92.46$\pm$0.00 | | ODIN | 24.57$\pm$0.00 / 93.71$\pm$0.00 | | Mahalanobis | 31.42$\pm$0.00 / 89.15$\pm$0.00 | | Energy | 26.55$\pm$0.00 / 94.57$\pm$0.00 | | ReAct | 26.45$\pm$0.00 / 95.30$\pm$0.00 | | KNN | 25.83$\pm$0.00 / 94.39$\pm$0.00 | | DICE | 20.84$\pm$1.58 / 95.25$\pm$0.24 | | SHE | 26.82$\pm$0.00 / 92.98$\pm$0.00 | | VRA | 17.74$\pm$0.00 / 96.47$\pm$0.00 | | VRA+ | 15.85$\pm$0.00 / 96.91$\pm$0.00 | **Q2**: It would be interesting if the authors could compare the visualization of the distribution of ID and OOD uncertainty scores before and after variational rectification. **A2**: Thanks for your valuable suggestion. In Figure 1 (see attached PDF in Global Response), we visualize the distribution of ID and OOD uncertainty scores before and after variational rectification. We observe that our rectification operation reduces the overlap between ID and OOD data, verifying the effectiveness of our strategy. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thanks for the authors' rebuttal. I read the response and the other reviewers' comments. After consideration, I decided to keep my original score. --- Reply to Comment 1.1.1: Title: Response to Reviewer Tdya Comment: Thank you for your valuable comments. We have decided to present our novelty, contribution, related work, and technology more clearly. We kindly ask the reviewer to reconsider the rating. Novelty and Contribution: To the best of our knowledge, this is the first work that exploits the variational approach to find the optimal operation and validates the necessity of amplifying intermediate activations in OOD detection. VRA-based approaches are not simple variants of existing works but have strong motivations and theoretical guarantees. As Reviewer JjUp comments, our VRA-based methods "allow more possibilities to perform OOD detection". Experimental results on multiple benchmark datasets demonstrate that our method outperforms existing post-hoc strategies. Therefore, this paper is novel and points out a new direction for OOD detection. Related Works: In this paper, we have taken the reviewers' suggestions and compared our VRA-based methods with more related works, including SSD+, CSI, CIDER, MaxLogit, HEAT, BATS, ASH, and RankFeat. Compared with existing works, our method is different in motivation and practical form (see our previous response). Such a simple strategy achieves state-of-the-art OOD detection performance, which fully validates the effectiveness of our method. Please specify which relevant works we are missing; we can add more discussion and comparisons. Technology: We have taken the reviewers' suggestions and proposed more variants of VRA, including VRA-G (see the response to Reviewer GN4S), VRA-ViT (see the response to Reviewer J9PQ), and VRA-B (see the response to Reviewer JjUp).
Our method allows more possibilities for OOD detection.
Summary: In this paper, the authors study out-of-distribution detection. They propose Variational Rectified Activation, based on ReAct. Specifically, they suppress both the high and low values of the penultimate layer rather than only focusing on high values. Their performance on several benchmark datasets surpasses existing post-hoc strategies. Strengths: 1. They give a theoretical analysis of the proposed method. 2. They compare VRA-based methods with competitive post-hoc strategies, and theirs performs the best. Weaknesses: 1. The theory part should be clearer about the relationship between the objective you minimize and the final score. 2. Please add more baselines to demonstrate the effectiveness of VRA, comparing your method with contrastive-learning-trained backbones (SSD+ [1], CSI [2], CIDER [3]) and other state-of-the-art post-hoc methods (MaxLogit [4] and HEAT [5]). [1] https://openreview.net/forum?id=v5gjXpmR8J [2] https://openreview.net/forum?id=o5RKoLQlK4olF [3] https://openreview.net/forum?id=aEFaE0W5pAd [4] https://arxiv.org/abs/1911.11132 [5] https://openreview.net/forum?id=tpCynHFviX Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In equation 3, why is it needed to maximally preserve the input? And with this term, why does the optimal operation maximize the gap between ID and OOD? 2. In figure 1, your optimal g function does not seem to agree with the VRA function. Have you ever tried to adjust the gradient when z is between alpha and beta? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: It seems that the paper did not include a limitation section or paragraph explicitly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer GN4S We thank the reviewer for appreciating our theoretical analysis and rich experimental results. We try to address your concerns as follows: **Q1**: The theory part should be clearer about the relationship between the objective you minimize and the final score. **A1**: ReAct has proved that increasing the gap between ID and OOD (see Eq. 2) improves the final score in OOD detection. We adopt the same objective function as ReAct and try to find the optimal operation to widen the gap. Therefore, the theoretical proof of the relationship between the objective function and the final score is the same as in ReAct. Due to page limitations, we have not included these parts. **Q2**: Please add more baselines to demonstrate the effectiveness of VRA, comparing your method with contrastive-learning-trained backbones (SSD+, CSI, CIDER) and other state-of-the-art post-hoc methods (MaxLogit and HEAT). **A2**: Due to the good compatibility of VRA, for a fair comparison, we test the performance of VRA under the same experimental settings as the baselines (including ID data, OOD data, and network architecture). For contrastive-learning-based methods, we conduct experiments under the same settings as CIDER. For post-hoc methods, we conduct experiments under the same settings as HEAT. Experimental results in Tables 1$\sim$2 demonstrate that VRA outperforms existing OOD detection strategies. **Table 1: Comparison with contrastive-learning-trained backbones. We use ResNet-34 for CIFAR-100.** | Method | FR $\downarrow$ / AU$\uparrow$ | | ------ | --------------------------------------- | | SSD+ | 67.2 / 85.9 | | CSI | 67.5 / 84.8 | | CIDER | 46.9 / 87.7 | | VRA | **41.9** / **87.9** | **Table 2: Comparison with post-hoc methods. We use ResNet-50 on ImageNet.** | Method | FR $\downarrow$ / AU$\uparrow$ | | -------- | --------------------------------------- | | MaxLogit | 58.0 / 87.0 | | HEAT | 34.4 / 92.6 | | VRA | **25.5** / **94.6** | **Q3**: In equation 3, why is it needed to maximally preserve the input? And with this term, why does the optimal operation maximize the gap between ID and OOD? **A3**: We apologize for the unclear description. In this paper, we treat $\max_g \mathbb{E}_{\text{in}}[g(z)] - \mathbb{E}_{\text{out}}[g(z)]$ as the core objective function derived from ReAct and $\min_g \mathbb{E}_{\text{in}}[(g(z)-z)^2]$ as the regularization term. The motivation behind this regularization term is twofold: (1) Many post-hoc methods do not change the feature space but still achieve promising results in OOD detection (such as MSP, Energy, and ODIN). Therefore, $g(z)=z$ is an acceptable operation. (2) If we only maximized $\mathbb{E}_{\text{in}}[g(z)] - \mathbb{E}_{\text{out}}[g(z)]$ without the regularization term, the optimal solution would not exist. The proof is by contradiction: suppose there is an optimal operation $g^*(\cdot)$; then we can easily get a better operation $2g^*(\cdot)$ with a larger $\mathbb{E}_{\text{in}}[g(z)] - \mathbb{E}_{\text{out}}[g(z)]$. Therefore, this regularization term guarantees the existence of the optimal solution. There may be better regularization terms; in the future, we will explore other regularization terms for OOD detection. In Eq. 14$\sim$15, we prove that the optimal operation $g^*(\cdot)$ driven by our objective function widens the gap between ID and OOD by at least $\frac{1}{2\lambda} \mathbb{E}_{\text{in}}[(g^*(z)-z)^2] \geq 0$. Therefore, adding this term does not change the core objective of VRA.
Meanwhile, Table 4 (see the submitted paper) shows that $g^*(\cdot)$ achieves near-perfect results in OOD detection. Therefore, although this regularization term slightly changes the objective function, it does not significantly affect the OOD detection performance but ensures the existence of the optimal solution. This is analogous to regularization terms in neural networks: although a regularization term changes the loss function, it prevents overfitting and improves the generalization ability of the network. **Q4**: In figure 1, your optimal g function does not seem to agree with the VRA function. Have you ever tried to adjust the gradient when z is between alpha and beta? **A4**: Good idea! In this paper, VRA+ introduces a hyper-parameter $\gamma$ to amplify intermediate activations, but we had not considered adjusting the gradient for $\alpha \leq z \leq \beta$. We denote this new variant as VRA-G: \begin{equation} \text{VRA-G}(z)=\begin{cases} 0, z< \alpha\nonumber \\\\ kz, \alpha \leq z \leq \beta \nonumber \\\\ \beta, z > \beta \\\\ \end{cases}, \end{equation} where $k$ controls the gradient. As with VRA, we treat the $\eta_\alpha$-quantile (or $\eta_\beta$-quantile) of activations estimated on ID data as $\alpha$ (or $\beta$). It is worth noting that VRA-G with $k=1.0$ is equivalent to VRA. Experimental results in Table 3 show that setting the gradient a little steeper (i.e., $k>1$) amplifies intermediate activations and achieves better performance, but over-amplification leads to performance degradation. **Table 3: Performance of VRA-G. We use DenseNet-101 for CIFAR-10.** | Method | CIFAR-10 (FR $\downarrow$ / AU$\uparrow$) | | --------------- | ----------------------------------------- | | VRA | 17.74 / 96.47 | | VRA+ | **15.89** / **96.90** | | VRA-G $(k=0.5)$ | 28.14 / 94.12 | | VRA-G $(k=0.8)$ | 19.62 / 96.11 | | VRA-G $(k=1.0)$ | 17.74 / 96.47 | | VRA-G $(k=1.2)$ | 17.32 / 96.64 | | VRA-G $(k=1.5)$ | **16.81** / **96.66** | | VRA-G $(k=2.0)$ | 17.05 / 96.56 | --- Rebuttal Comment 1.1: Comment: Thanks for the response. I appreciate the author(s)' effort in this work, but I also realize that the paper has some weaknesses, e.g., in the aspects of novelty, contribution, related work, and technology, as the other reviewers and I raised in the comments. So, I am inclined to maintain my initial rating. --- Reply to Comment 1.1.1: Title: Response to Reviewer GN4S Comment: **Q1:** I appreciate the author(s)' effort in this work, but I also realize that the paper has some weaknesses, e.g., in the aspects of novelty, contribution, related work, and technology, as the other reviewers and I raised in the comments. So, I am inclined to maintain my initial rating. **A1:** Thank you for your valuable comments. To address your concerns, we present our novelty, contribution, related work, and technology more clearly. We kindly ask the reviewer to reconsider the rating. *Novelty and Contribution:* To the best of our knowledge, this is the first work that exploits the variational approach to find the optimal operation and validates the necessity of amplifying intermediate activations in OOD detection. VRA-based approaches are not simple variants of existing works but have strong motivations and theoretical guarantees. As Reviewer JjUp comments, our VRA-based methods "allow more possibilities to perform OOD detection". Experimental results on multiple benchmark datasets demonstrate that our method outperforms existing post-hoc strategies.
Therefore, this paper is novel and points out a new direction for OOD detection. *Related Works:* In this paper, we have taken the reviewers' suggestions and compared our VRA-based methods with more related works, including SSD+, CSI, CIDER, MaxLogit, HEAT, BATS, ASH, and RankFeat. Compared with existing works, our method is different in motivation and practical form (see our previous response). Such a simple strategy achieves state-of-the-art OOD detection performance, which fully validates the effectiveness of our method. Please specify which relevant works we are missing; we can add more discussion and comparisons. *Technology:* We have taken the reviewers' suggestions and proposed more variants of VRA, including VRA-G (see the response to Reviewer GN4S), VRA-ViT (see the response to Reviewer J9PQ), and VRA-B (see the response to Reviewer JjUp). Our method allows more possibilities for OOD detection.
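For reference, the piecewise variants discussed across these responses each fit in a line or two; a sketch (ours) following the formulas given in the rebuttals, where the thresholds $\alpha$, $\beta$ are the $\eta$-quantiles of ID activations. VRA's own form below is our reading of the description (suppress low activations to zero, clip high ones), since its exact equation appears in the main paper rather than in this thread:

```python
import numpy as np

def vra(z, alpha, beta):
    """VRA: suppress low activations (< alpha) to 0, clip high ones to beta."""
    return np.where(z < alpha, 0.0, np.minimum(z, beta))

def vra_b(z, alpha):
    """VRA-B: binarize; 0 at or below the threshold, alpha above it."""
    return np.where(z <= alpha, 0.0, alpha)

def vra_g(z, alpha, beta, k):
    """VRA-G: like VRA, but scale the intermediate activations by k."""
    return np.where(z < alpha, 0.0, np.where(z <= beta, k * z, beta))

def vra_vit(z, alpha, beta):
    """VRA-ViT: clip z to [-alpha, alpha], scaling intermediate values by beta."""
    return np.where(z <= -alpha, -alpha, np.where(z < alpha, beta * z, alpha))
```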
Rebuttal 1: Rebuttal: # Global Response Dear Reviewers, Area Chairs, and Program Chairs: We would like to express our gratitude to all reviewers for taking their valuable time to review our paper. We sincerely appreciate all reviewers for their positive comments on our theoretical analysis and technical soundness. For example, Reviewer JjUp points out that "the theoretical analysis well motivates the methodology and shaping" and notes that the proposed method is "well-motivated both intuitively and theoretically". We also thank the reviewers for noting that this paper is "well-written, well-structured, and easy to understand" (J9PQ), that the proposed method is "promising" (JjUp) and "allows more possibilities to perform OOD detection" (JjUp), and that the performance is "competitive" (GN4S), "effective" (Tdya, J9PQ), and "strong" (JjUp). Meanwhile, we appreciate the reviewers for pointing out the shortcomings; your valuable comments help us improve this paper. We try to address each comment as satisfactorily as possible. In our responses, we compare with more baselines (including SSD+, CSI, CIDER, MaxLogit, HEAT, BATS, ASH, and RankFeat), verify the compatibility of our method with more backbones (such as ViT), present more visualization results (including the distribution of ID and OOD uncertainty scores before and after variational rectification, and the optimal operation on CIFAR-10 and CIFAR-100), and study more VRA-based variants (such as VRA-B and VRA-G). Please find the responses to each reviewer's comments below. We kindly ask the reviewers to take the above clarifications into account when considering score adjustments. We welcome any further discussion with the reviewers. Best regards, Paper4986 Authors Pdf: /pdf/3a5c60d697d614a3e1d2f201a9a7eb0aa526335a.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
On the Role of Randomization in Adversarially Robust Classification
Accept (spotlight)
Summary: The paper investigates randomized classifiers and their robustness against adversarial examples. The authors' contributions lie along three different axes. First, the authors give necessary and sufficient conditions under which, from a set of deterministic classifiers, one can build a randomized classifier that outperforms the best deterministic classifier from the original set (w.r.t. adversarial risk). The authors then present some results in the opposite direction for the binary classification case; e.g., for every randomized ensemble classifier there exists a deterministic weighted ensemble with better adversarial risk. Finally, the authors identify the conditions under which randomization can provably help on a deterministic base hypothesis set. After rebuttal: Upgraded my score from "Weak Accept" to "Accept". Strengths: Some of the ideas of the paper build on ideas and findings of prior work by Dbouk and Shanbhag; namely, the observation that a randomized classifier can achieve a decisive advantage over any deterministic classifier, because the adversary can only force a misclassification on a subset of all vulnerable classifiers, where "vulnerable" means classifiers that can be fooled with perturbations of a certain magnitude. This observation and the continuation of the work are interesting, and the paper is well-written for the largest part, though near the end I found it denser than I would have preferred. I expect this paper to influence to some extent the work that will be done on the robustness of randomized classifiers in the near future. Overall an interesting paper with interesting findings, though, for completeness, I would like to see the authors add some information in the supplementary material even for simple claims; e.g., clearly write down the calculations for Figure 1b, or clarify other small remarks that they have here and there. I also like Section 6; the conclusion that the authors have written is a clear summary of their contributions and of ideas related to their work that can be explored as future work. Weaknesses: Figure 1 appears 2 pages after it is first mentioned. One page of difference is ok, but two pages is strange. The authors assume that the label y does not change within the perturbation budget $\epsilon$ around $x$. While this is a reasonable assumption, and in fact the prevailing assumption in adversarial robustness, it is not the only assumption one can make. I believe that clarifying this and citing either of the following two papers (or both), so that someone can find more information about the topic, can be important and potentially also motivate further avenues for randomized classification. The two papers are: [M1] Adversarial Risk and Robustness: General Definitions and Implications for the Uniform Distribution https://proceedings.neurips.cc/paper_files/paper/2018/file/3483e5ec0489e5c394b028ec4e81f3e1-Paper.pdf [M2] On the Hardness of Robust Classification https://www.jmlr.org/papers/volume22/20-285/20-285.pdf Line 102: Write down explicitly the adversarial risk of a deterministic classifier in the appendix, if it does not fit in the main part of the paper, so that there is no ambiguity. I think the "Matching Penny" notions are not clearly defined in the paper. It is apparent that some of the ideas have appeared in prior work, but the authors spend nearly no time explaining the original ideas and the original context. I believe this should change.
In particular, the authors cite two books in lines 168-170 without saying much more; the reader may have no idea about these books and this line of work. It is quite unfortunate that Definition 1 is split across two pages, because the notation is a bit overloaded and it is only clarified on the following page. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1. In lines 144-146 it is stated "Consequently, we get that there always exists a probabilistic classifier that is at least as good as the best deterministic classifier in the BHS $\mathcal{H}_b$." Can you please explain why and how this follows from the previous sentence? Q2. In lines 210-211 we have "Observe that every classifier in $H_b$ is vulnerable at $(x,y) = (0,1)$ and so $R_{\epsilon}(h) = 1$ for all $h\in H_b$". I am not sure I follow; since $x = 0$, then $w^T x = 0$ and therefore $\mathbb{1}\{w^Tx < 1\}$ should return 1, which is really the label $y$. Can you explain? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I think the paper is ok in this regard, especially in the light of the clear ideas for future work that one can pursue and are listed in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and for pointing out relevant references. We are very pleased to know you found our work "interesting and well-written for the largest part". We also share the expectation that our contribution will "influence to some extent the work that will be done on the robustness of randomized classifiers in the near future". Below we answer your questions and concerns. > The authors assume that the label $y$ does not change within the perturbation budget $\epsilon$ around $x$ Yes, we indeed work on the *corrupted-instance* setting according to [1] (*constant-in-the-ball* [2]), as opposed to the *error region* or *prediction change* formulations [1]. As discussed in [1, 2], this definition can be problematic because even the true concept class can have positive and even constant adversarial risk. However, we consider the *corrupted-instance* setting because it is commonly studied in the adversarial examples literature [3, 4, 5] and in the work that is closely related to ours, like [6, 7]. We will include in the paper a comment mentioning the different notions of adversarial risk and clearly state that our work corresponds to the *corrupted-instance* setting. > I think the "Matching Penny" notions are not clearly defined in the paper Please refer to the Global response. > In lines 144-146 it is stated "Consequently, we get that there always exists a probabilistic classifier that is at least as good as the best deterministic classifier in the BHS $\mathcal{H}_b$." Can you please explain why and how this follows from the previous sentence? We thank you for pointing out this sentence, as we agree it needs to be rephrased to really transmit the correct message. There is a simpler argument to support this remark, which is expressed in the proof of Theorem 3.1 in the Supplementary Material, lines 482-486. The idea is that the original deterministic classifiers $h \in \mathcal{H}\_b$ can be seen as probabilistic classifiers by considering $\mathbf{h}\_{\mu}$ with $\mu = \delta\_{h}$, the Dirac measure over $h$. This being said, one can take the infimum over all probabilistic classifiers over $\mathcal{H}_b$, and obtain that for any $h' \in \mathcal{H}_b$ the following holds: $$\inf\_{\mu \in \mathcal{P}(\mathcal{H}_b)} \mathcal{R}\_{\epsilon}(\mathbf{h}\_{\mu}) \le \mathcal{R}\_{\epsilon}(\mathbf{h}\_{\delta\_{h'}}) = \mathcal{R}\_{\epsilon}(h')$$ > In lines 210-211 ... [About Example 1] As you state, each classifier $w$ satisfies that $w^Tx = 0$, and therefore $\mathcal{1}\\{w^Tx < 1\\} = 1$, which means that all $w$ predict the correct label for the *clean* input $x$. Now we want to see that every $w$ is vulnerable at $x$. Recall that we defined $\mathcal{H}\_b$ as the space of linear classifiers $w$ such that $\left\lVert w \right\rVert\_{\star} = \frac{1}{\epsilon}$. In our case, $\mathcal{X} = \mathbb{R}^d$ with (usually) an $L_p$ norm $\left\lVert \cdot \right\rVert$, so we can rewrite the dual norm as $\left\lVert w \right\rVert_{\star} = \sup\_{\left\lVert z \right\rVert=1} \{ w^Tz \}$. Moreover, this supremum is attained by some $z_w$, so for each $w$ we get $z_w$ of norm 1, such that $w^T z_w = \frac{1}{\epsilon}$. Now, for each $w \in \mathcal{H}_b$, consider the adversarial example $\epsilon \cdot z_w$. It is a valid adversarial example because it has norm $\epsilon$, and $w^T (\epsilon \cdot z_w) = \epsilon \cdot \frac{1}{\epsilon} = 1$, meaning that the classifier predicts $\mathcal{1}\\{w^T (\epsilon \cdot z_w) < 1\\} = 0$, the wrong class.
The next step is to guarantee that this perturbation is unique for each $w$, and your question made us realize that we need to add an extra assumption for this: we need to work with the $L_2$ norm so that $z_w$ is the only vector attaining the supremum on the dual norm. Having that $\epsilon \cdot z_w$ fools $w$ and *only* $w$, we can consider, for simplicity, $\mu$ to be the uniform distribution over $\mathcal{H}_b$. Let us compute the sets $\mathcal{H}\_{vb}$ and $\mathcal{H}^{max}\_{svb}$ to be able to compute the *matching penny gap* for this example. As we just saw, every $w$ is itself vulnerable. This means that $\mathcal{H}\_{vb} = \mathcal{H}\_{b}$, and therefore $\mu(\mathcal{H}\_{vb}) = 1$. Given the uniqueness of $z_w$, we know that no two classifiers can be fooled by the same perturbation. Therefore the family of simultaneously vulnerable classifiers only contains singletons $\{w\}$. As $\mu(\{w\}) = 0$ for every $w$, taking the supremum gives $\mu(\mathcal{H}^{max}_{svb}) = 0$. Finally, the matching penny gap is $\pi_{\mathbf{h}\_{\mu}}(x,y) = \mu(\mathcal{H}\_{vb}) - \mu(\mathcal{H}^{max}\_{svb}) = 1$. In other words, the mixture in this example has the best adversarial risk possible, even though it is built from classifiers with the worst adversarial risk possible individually. We will add the complete explanation in the Supplementary material, and add to the main paper that the example works for the $L_2$ norm. [1] Adversarial Risk and Robustness: General Definitions and Implications for the Uniform Distribution [2] On the Hardness of Robust Classification [3] Towards Deep Learning Models Resistant to Adversarial Attacks [4] The Limitations of Deep Learning in Adversarial Settings [5] The Many Faces of Adversarial Risk [6] Adversarial Vulnerability of Randomized Ensembles [7] Mixed Nash Equilibria in the Adversarial Examples Game --- Rebuttal Comment 1.1: Title: Thank you for the responses Comment: I have read the other reviews as well as the response that the authors have provided to the various issues that were raised in the reviews and I am happy with the explanations. I can see that the paper will have more clarity and better content overall in the end and I will upgrade my vote from "Weak Accept" to "Accept". Thank you for a very interesting paper! --- Reply to Comment 1.1.1: Title: Thank you Comment: We are very glad that our response has made the paper clearer. We thank you for your upgrade!
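To make the Example 1 construction in this thread concrete, here is a minimal numerical sketch (not from the paper or rebuttal; the values of eps, d, and M and the NumPy implementation are illustrative assumptions). It samples linear classifiers with $L_2$ dual norm $1/\epsilon$, checks that each is fooled only by its own attack $\epsilon \cdot z_w$, and recovers a matching penny gap close to 1:

```python
import numpy as np

# Hypothetical sketch of Example 1 under the L2 norm: M linear classifiers w
# with ||w||_2 = 1/eps predict 1{w^T x < 1}; all classify x = 0 correctly,
# each is fooled by its own attack eps * z_w with z_w = w / ||w||_2, and
# (generically) no single attack fools two classifiers at once.
rng = np.random.default_rng(0)
eps, d, M = 0.1, 16, 200  # illustrative values, not from the paper

W = rng.normal(size=(M, d))
W *= (1 / eps) / np.linalg.norm(W, axis=1, keepdims=True)  # ||w||_2 = 1/eps

attacks = eps * W / np.linalg.norm(W, axis=1, keepdims=True)  # eps * z_w

# fooled[i, j] = attack crafted for classifier i also fools classifier j;
# a small tolerance absorbs the floating-point boundary case w^T x' = 1.
fooled = attacks @ W.T >= 1 - 1e-9

print(fooled.diagonal().all())   # True: every w is individually vulnerable
print(fooled.sum(axis=1).max())  # 1: no attack fools two classifiers at once
# Matching penny gap of the uniform mixture: mu(H_vb) - mu(H_svb^max) ~ 1 - 1/M.
print(1.0 - fooled.sum(axis=1).max() / M)
```

The off-diagonal entries are (numerically) never fooled because for random directions in $\mathbb{R}^d$ the inner product $\epsilon \, w^T z_{w'}$ stays strictly below 1 unless $w$ and $w'$ are aligned, which mirrors the uniqueness argument above.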
Summary: Several proposed adversarial defenses involve randomized classifiers. Prior theoretical work analyzes both the randomized and non-randomized cases. This paper answers: 1) When does randomization 'help' in adversarial classification? 2) When does it suffice to consider deterministic classifiers? The authors start by describing three common paradigms (randomized ensemble classifier (REC), weight-noise injection classifier (WNC), input-noise injection classifier (INC)) for constructing randomized classifiers. Subsequently, the authors give a condition describing when a randomized classifier over a hypothesis set $\mathcal{H}_b$ will outperform deterministic classifiers. Next, the authors show that one can always threshold a binary randomized classifier to get a better performing deterministic classifier. Lastly, the authors show that if $\mathcal{H}_b$ is all measurable sets, then randomization over $\mathcal{H}_b$ does not improve the adversarial risk. Strengths: - The paper answers an important question in the field of adversarial learning: when does randomization help with adversarial classification? - Proofs seem simple and clear Weaknesses: The exposition is not great. Here are some specific issues. 0. Please state your threat model. Your adversary has access to the randomized classifier, but does not know which random function the learner uses every time a point is classified. 1. Equation 1: $\mathcal{R}_\epsilon^y$ is not defined before equation (1). Note that you also need to explain that $\sup_{x'\in B_\epsilon(x)} \ell^{0-1}((x',y),h)$ is measurable. See appendix A of the following reference for this result: Existence and minimax theorems for adversarial surrogate risks in binary classification. N. S. Frank, J. Niles-Weed. 2. Footnote 1: I understand how such a proof would go, but I would expect to see a proof with a maximizing sequence rather than a maximizer in the supplementary material 3. I couldn't understand the argument in lines 191-192 4. lines 203-204: I couldn't understand the sentence. "More generally, assumption..." 5. I had a hard time understanding the parallel decision boundaries discussion in example 2 6. line 276: I think an REC built from 2 base classifiers takes on continuously many values. Can you better explain 276-278? 7. The supplementary material is quite disorganized Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - line 108: The Borel sigma algebra is defined in terms of the open sets on a topological space. How are you defining the topology on the set $\mathcal{H}_b$? It seems that in the REC, WNC, INC paradigms, there is a "continuous" map from $\mathbb R^d$ to $\mathcal{H}_b$. Perhaps you could use this map. - Can you explain the name "penny matching gap"? - In section 4.1, I would expect that one can always take the threshold $\alpha=1/2$. Why is this not the case? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: Some of the results only hold in the binary case. The paper discusses this limitation Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for all the effort you put in reviewing our work and pointing out relevant references. We appreciate that you find that our paper "answers an important question in the field of adversarial learning". We will put all our effort into clarifying the exposition of the paper to make it clearer and avoid confusion. Below are some comments addressing your suggestions and questions. Also, as your remarks and questions were very important and numerous, we will be answering some of them in an official comment to this rebuttal if we are allowed. > Please state your threat model We will add in Section 2 the details about our threat model, in which, as you state, the adversary has full access to the models (parameters, gradients, number of models, etc.), but no control over, nor access to, the randomness used by the defender to choose which model to use for each inference pass. That is, we are in a white-box threat model without access to or control over the random number generators. > line 108: About the Borel sigma algebra over $\mathcal{H}_b$ In all our particular use cases, the space $\mathcal{H}_b$ is either finite or identifiable with some usual subset of $\mathbb{R}^d$. In the case of INC, $\mathcal{H}_b$ is identifiable with the input space $\mathcal{X} \subset \mathbb{R}^d$. For WNC, when classifiers are parametrized over some $\mathbb{R}^p$ (neural networks, linear classifiers), this is also the case. So in any of these cases, the topology considered is the usual topology and the Borel $\sigma$-algebra is defined accordingly. In Remark 4 we consider a setting closer to [1, 2, 3], where $\mathcal{H}_b$ is much more complex. However, this remark holds when one takes any finite number of optimal classifiers, so even if the family $\mathcal{H}^*$ of optimal classifiers is complicated, we can simply take any pair of them ($\mathcal{H}_b$ finite, with two elements) and say that their matching penny gap must be zero. We thank you for pointing out the need for clarification. We will make it clear in Section 2 that, for this work, $\mathcal{H}_b$ will always be either finite or directly identifiable with some subset of $\mathbb{R}^p$. Furthermore, we will also reformulate Remark 4 accordingly. > Note that you also need to explain that $\sup_{x' \in B_{\epsilon}(x)} \ell^{0-1}((x', y), h)$ is measurable We totally agree that the measurability of the 0-1 loss under attack is non-trivial, and it is important to discuss it. Your remark sparked an important discussion within the team that led to the following conclusions: To ensure measurability, we add the assumption that for every $y$, the evaluation function $f : \mathcal{X} \times \mathcal{H}_b \to \mathbb{R}$ defined as $f(x, h) = h(x)_y$ (the $y$-th component of $h(x)$ seen as a vector function) is Borel measurable in $\mathcal{X} \times \mathcal{H}_b$. This might not be true in general (see this thread https://mathoverflow.net/a/28114), but it holds in our setting. What we mean is that in practice, $f$ takes a simpler form, like $f(x, w) = (w^T x)\_y$ in the case of linear classifiers, or $f\_{w}(x)\_y$ in the case of parametrized neural networks. In all these settings, the function $f$ can be assumed to be a well-behaved function from $\mathbb{R}^d \times \mathbb{R}^p$ to $\mathbb{R}^K$ that satisfies the measurability condition.
Assuming that for every $y$, the mapping $(x, h) \to h(x)\_y$ is Borel measurable, we can then write the expected 0-1 loss as $$\ell^{0-1}((\cdot, y),\mathbf{h}_{\mu}) = \mathbb{E}_{h \sim \mu} \left[ \mathcal{1}\{h(x) \ne y \} \right] = \mathbb{E}_{h \sim \mu} \left[ 1 - h(x)_y \right] = \int_{\mathcal{H}_b} 1 - h(x)_y \, d\mu(h)$$ and by the Fubini-Tonelli theorem, this function will be Borel measurable in $\mathcal{X}$. The final step would be to apply [1, Appendix A, Theorem 27] to prove that the loss under attack is universally measurable, and that the adversarial risk is well-defined in the universal $\sigma$-algebra. > Proof with a maximizing sequence We will add this proof in the supplementary material. We will also modify the definitions on page 5 accordingly. In particular, we will define $\mu^{max}(x, y) = \sup_{\mathcal{H}' \in \mathcal{H}_{svb}} \mu(\mathcal{H}')$ instead of using the $\arg\max$, which somehow assumes the existence of such an $\mathcal{H}^{max}\_{svb}$ (and $x^{max}$ in the proof). > I couldn't understand the argument in lines 191-192 The main intuition is that the optimal attack against a mixture of models, i.e. one in which the defender randomly samples a model at inference, consists exactly in crafting an attack that simultaneously fools as many classifiers as possible. These lines, however, will be modified according to the last answer. > Can you explain the name "penny matching gap"? Please refer to the global response. [1] Existence and minimax theorems for adversarial surrogate risks in binary classification [2] The Many Faces of Adversarial Risk [3] On the Existence of the Adversarial Bayes Classifier (Extended Version) [4] Randomization matters. How to defend against strong adversarial attacks [5] Adversarial Risk via Optimal Transport and Optimal Couplings --- Rebuttal Comment 1.1: Comment: **measurability issue** That's interesting about measurability! The heart of the issue seems to be showing that a projection of a Borel measurable function is measurable. Under mild assumptions, the projection of a Borel measurable function is measurable with respect to the analytic sigma algebra (which is contained in the universal sigma algebra). The reference [1] includes a variant of this argument for $\mathbb R^d\times S$, where $S\subset \mathbb R^d$. You might also be able to find the fact you need in [6]. **further references** Relating to Theorem 4.1: Section 5.2 of [7] provides an example of a distribution with 3 classes for which randomized classifiers perform strictly better than deterministic classifiers. [6] Stochastic optimal control: the discrete time case [7] The multimarginal optimal transport formulation of adversarial multiclass classification --- Reply to Comment 1.1.1: Title: Thanks again for the useful references Comment: We would like to thank you for pointing out these useful references. After reading [6], we found that another way of ensuring the Borel-measurability of the projections would be to satisfy [6, Proposition 7.14]. We would need $\mathcal{X}$ to be a Borel space and $\mathcal{Y}$ a product of Borel spaces (already satisfied), and this would yield that Borel-measurability of $h: \mathcal{X} \to \mathcal{Y} \subset \mathbb{R}^K$ is enough to ensure measurability of the individual components $h_y$. These assumptions are simpler and close to the settings that are used in practice. With respect to the example in [7, Section 5.2], we find it fascinating and closely related to our setting. Thank you for pointing it out.
In their Case 4-i, the authors show that a randomized classifier that assigns $\frac{1}{2}$ to every "conflicting" point or region is a saddle point. This is closely related to the Nash equilibrium in the original game of matching pennies. Indeed, if one restricts to a pair of points, there is a matching pennies game going on: the authors say "The adversary gathers classes as much as possible and distributes them as uniform as possible". That is, at each point $\bar{x}_{ij}$ there is the same probability that this point came from class $i$ and $j$, so the equilibrium strategy is to predict uniformly at random between class $i$ and $j$. It is also interesting that the optimal randomized classifier in this example (weak partition, page 39 of [7], bottom) can be built as a uniform mixture of 6 deterministic ones $f_{ijk}$ that predict $i$ in $B\_{\epsilon}(x_i)$, $j$ in $B\_{\epsilon}(x_j) \setminus B\_{\epsilon}(x_i)$ and $k$ in $B\_{\epsilon}(x_k) \setminus (B\_{\epsilon}(x_i) \cup B\_{\epsilon}(x_j))$. Each $f\_{ijk}$ has standard power of 1, and adversarial power of $w_i$ against an optimal attack, like the constant classifiers. However, the uniform mixture of these 6 classifiers would have standard power of 1, and adversarial power of 0.5, meaning an increase in performance of $0.5 - w_1 > 0$. One can compute the matching penny gap of this mixture of 6 classifiers on this dataset, and conclude it is $\frac{1}{6}$. The average risk is $\frac{2}{3}$, and the risk of the mixture is $\frac{2}{3} - \frac{1}{6} = \frac{1}{2}$. Lastly, this example shows that Theorem 4.1 does not hold in multiclass! Indeed, the authors show that their randomized classifier is an equilibrium, and that "In fact, it is unavoidable to introduce weak partitions". They further say that "In other words, even this simple discrete measures, it is necessary to extend strong partition to weak partition in order to achieve the minimax value". We will definitely add this important remark in the paper. The conclusion would therefore be that in the multiclass setting, there exist data distributions for which randomization is necessary, proving an even stronger point for randomization for adversarial robustness. Understanding these situations would be an interesting avenue of future work, because as the authors say, it "depends on both the geometry of data distributions and their magnitudes" and even with discrete toy examples, there are non-trivial situations. **Answers to other questions** > Parallel decision boundaries To simplify the example, consider $A = \\{0\\} \subset \mathbb{R}$ and the parallel classifiers $h_r = A \oplus B_r(0)$ of the form $h_r(x) = \mathcal{1}\\{ x \in (-r, r) \\}$ for $r > 0$. Recall that a matching penny configuration for two classifiers arises when 1) **both are vulnerable**, but 2) **not simultaneously**. We will see that this cannot arise with this family of "parallel" classifiers. W.l.o.g., take any point $x > 0$ and suppose it is of class 0. Take any two classifiers $h\_{r_1}, h\_{r_2}$ with $r_1 < r_2$ and fix the attacker budget to $\epsilon$. Note that $h_{r_1}$ is vulnerable at $x$ if and only if $x - \epsilon \le r_1$. That is, the attacker must be able to move $x$ inside $(-r_1, r_1)$ with its budget of $\epsilon$. This also holds for $h_{r_2}$. To satisfy condition 1), we must ensure that $x - \epsilon \le r_1$ and $x - \epsilon \le r_2$.
However, any $x$ that satisfies $x - \epsilon \le r_1$ immediately satisfies $x - \epsilon \le r_2$, as $r_1 < r_2$, meaning that the same perturbation induces an error on both classifiers. In conclusion, if condition 1) is satisfied, 2) cannot be. For the case of general parallel sets $A^{r_1}, A^{r_2}$, note that if $r_1 < r_2$ then $A^{r_1} \subset A^{r_2}$. Then the argument is similar to the example above.
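The one-dimensional version of this "parallel boundaries" argument is simple enough to check mechanically. Below is a toy sketch (the values of eps, r1, and r2 are made up for illustration) confirming that whenever the attack fools $h_{r_1}$ it also fools $h_{r_2}$, so condition 2) of the matching penny configuration can never hold:

```python
import numpy as np

# Illustrative check of the parallel-boundaries argument above for
# h_r(x) = 1{ x in (-r, r) } on the real line.
eps, r1, r2 = 0.5, 1.0, 1.2   # made-up values with r1 < r2

def h(r, x):
    return int(-r < x < r)

# Scan class-0 points to the right of both boundaries; the strongest
# single perturbation towards the boundaries is delta = -eps.
for x in np.linspace(r2, r2 + 2 * eps, 21):
    fools_r1 = h(r1, x - eps) == 1   # prediction flips from 0 to 1
    fools_r2 = h(r2, x - eps) == 1
    if fools_r1:                      # condition 1) for h_{r1} ...
        assert fools_r2               # ... forces simultaneous vulnerability
print("no matching penny configuration found")
```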
Summary: This paper provides an analysis of the role of randomization in developing adversarially robust classifiers. It explores the conditions under which randomized ensembles outperform deterministic classifiers regarding adversarial risk. The study also highlights the existence of deterministic classifiers that achieve at least the same adversarial risk as the probabilistic model, and the authors also connect the theory to weighted ensembles and randomized smoothing. Overall, the paper offers valuable insights into randomization for building robust classifiers in adversarial settings. Strengths: 1. The paper is well-written, ensuring clarity and ease of understanding for readers. 2. The paper offers a foundational understanding of the significance of randomization in building robust models. Especially, I find constructing a probabilistic model from a deterministic model and back particularly interesting. 3. The paper includes an insightful analysis of existing methods (e.g., randomized smoothing), showcasing their effectiveness. Weaknesses: 1. While the theoretical analysis presented in this paper is intriguing, it leaves me questioning its practical implications in constructing robust classifiers. Specifically, I am concerned about how these theoretical findings translate into practical strategies for building classifiers that perform effectively in real-world scenarios. For example, Theorem 4, in its current form, only addresses the binary case, limiting its applicability. I think the authors can add more discussion on how to guide future empirical robust model design. 2. The authors show there exists "a probabilistic classifier that is at least as robust as the best deterministic classifier." However, I think it is more important to guide future research to find such a better probabilistic classifier in a principled way. 3. Minor: It is recommended to explain the term "Borel σ-algebra." 4. Minor: Moving Figure 1 closer to page 4 would be better. As I am not an expert in theoretical research, evaluating the novelty and contribution is hard for me. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In the adversarial literature, we might consider that attackers can submit malicious input multiple times, where one misclassification is considered a "successful attack." I believe the probabilistic model might be more likely to be broken. What happens to the current theorem if considering such an attacking scenario? I found a recent work [1] which is considering this issue. 2. This paper considers a BHS composed of infinitely many linear classifiers. How does the theorem change if the BHS is composed of non-linear classifiers? 3. The paper mainly discussed robustness. How does the clean performance change when building a probabilistic model from deterministic models and back? [1] Lucas et al. Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators. https://arxiv.org/pdf/2302.13464.pdf Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors are suggested to include a paragraph discussing the limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your review. We are very happy to know that you find that our work offers a foundational understanding of the significance of randomization in building robust models. Furthermore, we also agree with you that this theoretical contribution should be followed by practical ones. Particularly, we are very eager to work on practical algorithms to optimize the *matching penny gap* of a set of classifiers. Below we address your remarks and questions. > In the adversarial literature, we might consider that attackers can submit malicious input multiple times, where one misclassification is considered a "successful attack." Thank you for the interesting reference. We found that it shares some ideas with our work. More precisely, their conclusion seems to agree with our conclusion in Section 4, where taking out the noise of a probabilistic classifier could give a potentially better deterministic one. However, they conclude this under a threat model that is particularly pessimistic for randomized models. With the definitions we use for probabilistic classifiers, at any point $(x, y)$ we have that $\mathbf{h}(x)_y$, the $y$-th component of $\mathbf{h}(x)$, is the probability that we predict the correct label for $x$. In general, these probabilities will lie between 0 and 1, without being exactly 0 or 1, which means that simply making $n$ inferences on the same point, without even computing an attack, will decrease the chance of correctly predicting $n$ times in a row. Under this threat model, the performance of a probabilistic classifier will degrade as $n$ gets bigger. Consider a point $(x, y)$ and $M$ classifiers that are in a *matching penny* configuration, that is, **they are all individually vulnerable, but no attack can fool two classifiers simultaneously**. Recall that this is the scenario in which a mixture of classifiers provides the greatest gains in robustness w.r.t the original $M$ models considered. All of the $M$ original models have 0 accuracy on $(x,y)$ (under attack). However, using the mixture of the $M$ models gives an expected accuracy of $\frac{M-1}{M}$ because no matter which classifier the attacker chooses, there is a probability of $\frac{M-1}{M}$ that we use another model to predict. This represents a gain of $\frac{M-1}{M}$ in expected accuracy. Now, under the threat model with $n$ inference passes, this gain passes from $\frac{M-1}{M}$ to $(\frac{M-1}{M})^n$, which will decrease rapidly as $n$ increases. As an example, if $M=5$ and $n=5$, the gain passes from $0.8$ to about $0.33$. If $n=10$, then it is only $0.107$. On the other side of the spectrum, increasing $n$ is also detrimental in those cases in which considering a mixture actually *added a vulnerability*. Just as the gains of the mixture become less important, the new vulnerabilities become more critical. To see this, imagine the situation in which at a point $(x, y)$ there are $M-1$ models that are robust, and only 1 classifier that is vulnerable. An optimal attacker will attack this single model, as it is the only vulnerable one. In our setting, the best model had an accuracy of 1 at the point, and the mixture had an expected accuracy of $\frac{M-1}{M}$, meaning the mixture introduced a loss in performance of $\frac{1}{M}$ at the point.
Now if we introduce $n$ and the attacker proposes the same attack, the mixture will have an expected accuracy of $(\frac{M-1}{M})^n$, meaning the vulnerability introduced by the mixture is now $1 - (\frac{M-1}{M})^n$, which tends to 1 as $n$ gets bigger. In conclusion, our results show that mixing can be beneficial in certain cases, but if one considers this new, more pessimistic threat model parametrized by $n$, the gains provided by mixtures decrease, and the vulnerabilities introduced increase, making it harder for the mixture to bring any improvement w.r.t the original deterministic classifiers. > How does the clean performance change when building a probabilistic model from deterministic models and back? The clean risk of a mixture equals the average risk of the base classifiers. There is no gain nor loss of accuracy when considering mixtures. This can be seen in the following equation: $$\mathcal{R}(\mathbf{h}_{\mu}) = \mathbb{E}_{x, y \sim \rho} \left[ \mathbb{E}_{h \sim \mu} \left[ \mathcal{1}\{h(x) \ne y \} \right] \right] = \mathbb{E}_{h \sim \mu} \left[ \mathbb{E}_{x, y \sim \rho} \left[ \mathcal{1}\{h(x) \ne y \} \right] \right] = \mathbb{E}_{h \sim \mu} \left[ \mathcal{R}(h) \right]$$ The average risk is lower bounded by the lowest risk achievable in $\mathcal{H}_b$. This means that mixing does not offer any gain in standard risk, which is in contrast with what is shown in Section 3, where we study the conditions under which mixing does improve adversarial risk. When it comes to Theorem 4.1, the proof can be replicated with equality, meaning that for a binary probabilistic classifier $\mathbf{h}: \mathcal{X} \to \left[0, 1 \right]$, it holds that $\mathcal{R}(\mathbf{h}) = \int_0^1 \mathcal{R}(h^{\alpha}) d\alpha \ge \inf_{\alpha} \mathcal{R}(h^{\alpha})$. So we can say that inside the same family of $\alpha$-threshold classifiers, there exists one with better standard risk than the original probabilistic classifier. > How does the theorem change if the BHS is composed of non-linear classifiers? We do not consider linear classifiers only, apart from certain examples, like Example 1 or Figure 1. The results that we prove hold for non-linear classifiers too. We agree we need to add some explicit assumption on the family of classifiers $\mathcal{H}_b$, but by no means is this family restricted to linear classifiers. Our work covers many practical scenarios of interest, like neural networks with continuous activations and many other parametric families of hypotheses. [1] Lucas et al. Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators. --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal. I appreciate the detailed explanation. I hope the authors can incorporate the discussion from the rebuttal to improve the paper. It is encouraged to mention [1] and discuss the differences and similarities. I think this can make the paper accessible to a broader audience. I increase my score to 7. [1] Lucas et al. Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators. --- Reply to Comment 1.1.1: Title: Thank you Comment: We are very honoured by your appreciation of our paper. We forgot to include it explicitly in our first answer, but we will indeed be including the discussion about the threat model considered in our work. There we will discuss [1], as it shows that the role of randomization can change depending on the chosen threat model and the assumptions about the attacker's capabilities.
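The $n$-pass arithmetic in the rebuttal above is easy to reproduce. A quick sketch (not from the paper; the function name is an illustrative assumption) that recovers the quoted figures for $M = 5$:

```python
# In a matching-penny configuration of M classifiers, the uniform mixture's
# expected accuracy gain (M-1)/M shrinks to ((M-1)/M)^n when the attacker
# gets n inference passes and wins if any single pass is misclassified.
def mixture_gain(M: int, n: int) -> float:
    return ((M - 1) / M) ** n

for n in (1, 5, 10):
    print(f"M=5, n={n}: gain = {mixture_gain(5, n):.3f}")
# M=5, n=1: gain = 0.800
# M=5, n=5: gain = 0.328
# M=5, n=10: gain = 0.107
```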
Summary: The paper addresses the theoretical problem of how randomization affects adversarial robustness. It establishes a condition that is both necessary and sufficient for enhancing the robustness of probabilistic models. Furthermore, it proves that in binary classification, a deterministic model can always achieve at least the same level of robustness. The paper also discusses the crucial condition for the effectiveness of randomization. Although the assumptions are strict, the theories in this work can serve as a foundation to further investigate adversarial robustness in probabilistic classifiers. Strengths: 1. The paper is well-written and well-organized. 2. The paper works on an interesting theoretical problem: the effect of using randomization on adversarial robustness. The theoretical result gives a promising hint to better understand randomization methods. Weaknesses: 1. The theories and proofs in this work heavily rely on binary classification, which makes the title misleading. This restricts the practical application of these theories in real randomization scenarios. 2. The work presented lacks application and evaluation of the theory, even in the binary case. It does not explore the potential of the proposed theory to improve randomization or identify scenarios where a deterministic classifier is possible. For instance, can we find or estimate better deterministic models corresponding to baseline probabilistic classifiers? How can we apply the theories to enhance adversarial robustness? Including quantitative evaluations would significantly improve the quality of the work. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N.A. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We agree with you that a very interesting path for future work is the practical implementation of this theory. We are very interested in exploring how we can actually boost adversarial robustness, either by considering the right probabilistic classifiers (those that maximize the *matching penny gap*), or by better understanding the threshold step used in Section 4 for binary classifiers. We believe that this theory should definitely give intuitions on practical actions towards implementable robust classifiers. Below, we address some of your remarks in more detail. 1. Section 3 does not assume binary classification; it also works in the multiclass setting. It is Section 4 which is restricted to binary classification. 2. We provide theoretical evidence that we can indeed estimate the better deterministic models by training weighted ensembles and randomized smoothing classifiers, as opposed to randomized ensembles and any kind of noise injection classifiers, which are abundant in the literature. 3. The question of enhancing adversarial robustness is very interesting, and the theory we have developed can motivate new methods that rely on diversity promotion. It should also motivate the crafting of more powerful attacks that can adapt to the mixture scenario, i.e. attacks that maximize the mass of simultaneously attacked classifiers. --- Rebuttal Comment 1.1: Comment: Thank you for your response. It has addressed all of my concerns, and the work has the potential for further investigation. I will increase my score to 6.
Rebuttal 1: Rebuttal: We thank all the reviewers for their very valuable comments and questions. Many of them also pointed out useful references that sparked interesting discussions within the team. We are glad that multiple reviewers found that our work tackles an interesting question related to adversarial robustness, and that it should contribute to future work on the use of randomized classifiers for adversarial robustness. Below we address comments that were raised by multiple reviewers. > About the term *matching penny gap* We agree that it would be better to add a more complete explanation of what the term means and why it is relevant to our analysis. Below is a more in-depth explanation that we would like to include in the Supplementary material, with a summary of it, if possible, in Section 3 of the main paper, where the matching penny gap is introduced. Recall that in the original matching pennies game between player 1 (the attacker) and player 2 (the defender), each player has a penny and has to secretly position it on either heads or tails. Then, both coins are simultaneously revealed and the outcome of the game is decided as follows: the attacker wins if both pennies match; if they do not match, the defender wins. The parallel with our work can be simply explained as follows: Take a point $(x, y)$ and two classifiers $tails, heads$ that correctly classify $x$. Suppose that both $tails$ and $heads$ are vulnerable at $x$ for some given threat model, but add the key assumption that they cannot be attacked at the same time (Figure 1.a). That is, even though an optimal attacker can fool each classifier individually, there is no point in the allowed perturbation region $B\_{\epsilon}(x)$ in which both are simultaneously fooled. Consider that the defender is using a randomized ensemble that picks either $tails$ or $heads$ at random for inference. This was introduced in [1] with the name *mixtures of classifiers*, related to mixed strategies in the context of game theory. In such a setting, the optimal attacker that faces such a mixture is now in a matching pennies situation. At each inference pass, the attacker must choose which classifier to attack in order to craft the adversarial example $x'$, and if the chosen classifier matches the one the defender used, then the attacker wins. If they do not match, the prediction made by the defender will be correct and the attacker loses. Notice that if we increase the number of choices (classifiers) in a matching pennies configuration to $M>2$, the game becomes harder for the attacker. Indeed, for every possible choice the attacker can make, only one of the defender's $M$ choices leads to a successful attack, and the other $M-1$ lead to a correct classification. An extreme example of such a benefit to the defender is shown in Example 1. > Completeness of the supplementary material We agree that adding deeper explanations of the examples and the calculations of Figure 1 would increase the overall quality and completeness of the paper. We are going to add these detailed explanations and calculations, as well as the modified proof of Theorem 3.2 with a maximizing sequence. We will also add the comment related to the measurability of the 0-1 loss under attack. Other things we will add: - Motivation of the name *matching penny gap* - Adversarial risk for deterministic classifiers (line 102) as ZKUe suggests. > Figure placement We will do our best to ensure Figure 1 appears closer to the relevant section.
*** **Supplementary pdf** In the supplementary PDF we added three plots that show how the adversarial risk of the deterministic classifiers $h^{\alpha}$ behaves w.r.t $\alpha$, and compare it with the adversarial risk of the base probabilistic classifier $\mathbf{h}$. These plots show that, following Theorem 4.1, there is always some $\alpha$ such that $h^{\alpha}$ performs better than $\mathbf{h}$. The dataset used was CIFAR-10, where classes were grouped into `animal` and `vehicle` to create a binary classification task, yielding a 60%-40% class distribution. [1] Randomization matters. How to defend against strong adversarial attacks Pdf: /pdf/0812ceb12ba1cc0ce9965ce43d902732f313a5c4.pdf
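The matching pennies analogy in the global response above can also be checked with a tiny Monte Carlo simulation. This is an illustrative sketch only (function name, trial count, and seed are assumptions, not from the paper):

```python
import random

# Defender samples one of M classifiers in a matching-penny configuration
# uniformly at inference; the attacker commits to attacking one classifier
# and succeeds only when the two choices match.
def attack_success_rate(M: int, trials: int = 100_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    wins = sum(rng.randrange(M) == rng.randrange(M) for _ in range(trials))
    return wins / trials

for M in (2, 5, 10):
    # ~1/M, so the mixture's robust accuracy at the point is ~(M-1)/M
    print(M, round(attack_success_rate(M), 3))
```

As the rebuttal argues, increasing the number of classifiers $M$ in such a configuration shrinks the attacker's success probability towards $1/M$.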
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Exploring Diverse In-Context Configurations for Image Captioning
Accept (poster)
Summary: This paper presents a case study that examines the design of prompts and its effectiveness in generative visual language models for image captioning tasks. The authors delve into four methods for image selection and four methods for assigning captions to create contextually relevant learning samples. To evaluate different prompt design formats, the authors employed OpenFlamingo, an open-source version of the Flamingo paper. Several ablations and experiments were conducted to investigate the optimal inclusion of images and captions in the prompt of the Visual Language Model (VLM) to enhance performance. The authors put forth several findings: 1. Unlike single-modal NLP cases, the performance of the model is significantly influenced by multi-modal mutual synergy. 2. The authors observe that the descriptiveness and language patterns of the captions have varying impacts on performance. In cases where selected images compensate for descriptiveness issues, better performance can be achieved by using simpler sentence patterns. 3. Additionally, the authors discover that when the in-context images resemble the test image, the VLM may take a shortcut by directly utilizing the in-context captions instead of genuinely learning to generate captions. Strengths: Generally, this paper is not novel, as there are similar papers in the NLP literature (see [1, 2]). While the authors mention [1] as one of the prior works, they appear to have overlooked [2]. In contrast to the aforementioned prior works, I believe this paper seeks to extend prior studies to more complex tasks, specifically open-ended generation tasks such as image captioning, employing visual language models. In terms of quality, the paper explores a distinct aspect of the research questions and presents compelling findings that align with some prior works. Experimentally, the paper demonstrates rigor by thoroughly exploring different prompt constructions. [1]. "Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?" EMNLP 2022. [2]. "What Makes Good Examples for Visual In-Context Learning?" presented at ICML 2023. Weaknesses: The paper has a few notable weaknesses that can be addressed: - The writing, particularly in lines 141-150, is challenging to comprehend. It would be beneficial for the authors to rephrase or provide visual aids to clarify this section, as it contains one of the paper's ablation studies. - Sections 4.2.1 and 4.2.2 should be swapped, considering that in the pipeline, images are typically selected first, followed by the assignment of captions. The current ordering of the text makes it difficult to parse. - It would be helpful to confirm the accuracy of the statement, "Then, as demonstrated in Figure 5(d), even MGC-TF@66 surpasses GTC." - The authors' choice of abbreviations throughout the manuscript makes the reading experience challenging. It requires readers to frequently refer back and forth between different parts of the text to fully understand a paragraph. It would be advisable to use less abbreviated phrasing. - Including more elaborate captions for the figures would enhance their comprehensibility as standalone visual aids. - The paper exclusively explores one visual language model, specifically OpenFlamingo, with 9B parameters. It is important to acknowledge that the findings may not be generalizable to other models. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. 
The paper could benefit from considering alternative visual language models to enhance the quality of the results. One such model is FROMAGe [1], which also utilizes a sequence of interleaved images and texts, similar to OpenFlamingo. It would be valuable for the authors to explore the use of FROMAGe in addition to OpenFlamingo and examine if the findings remain consistent across different models. 2. Additionally, it is worth investigating the impact of using smaller language models within OpenFlamingo. As per the OpenFlamingo GitHub repository, there are newer versions pre-trained on smaller LLMs. The authors could conduct ablations with these models to determine how the findings vary when using models with fewer parameters. This would provide insights into the robustness of the results across different model sizes. [1]. Grounding Language Models to Images for Multimodal Inputs and Outputs, ICML 2023 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: There are not any limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. DIIR in Lines 141-150.** In Diversity-based Image-Image Retrieval (DIIR), we first extract the discrete semantic tags, including object categories, attributes, and relations, from an image. Suppose there are $M$ kinds of tags in total; we divide them into $N$ clusters so that there are $M/N$ kinds of tags in each cluster. Then, in each tag cluster, we only use the tags in that cluster to calculate the similarity score between two images, i.e., the number of overlapping tags of the two images. DIIR-TR randomly divides all the tags into $N$ clusters, while DIIR-TT divides the tags according to their types, e.g., one cluster only contains attribute tags and another one only contains relation tags. **2. Section 4.2.1 and 4.2.2.** We will swap Section 4.2.1 and 4.2.2. **3. Accuracy Statement.** We will revise this statement into "Then, as demonstrated in Figure 5(d), in the 4-shot case, even MGC-TF@66 (the blue point) surpasses GTC (the grey point)." **4. Abbreviations.** We will use the full name at the beginning of each section and in the titles of the tables and figures to avoid confusion. **5. Figure Titles.** We will add some key takeaways in some figures. Please refer to **"7. Takeaways."** in the response to the **Reviewer vuiF** to see two examples about Figure 3 and 4. **6. Experiments on FROMAGe.** Since FROMAGe does not provide code for in-context learning (ICL) and lacks quantitative analyses in the paper, such as computing CIDEr scores, we modify parts of its code originally designed for multi-turn chat. We adopt a prompt structure like "<img1>caption1<img2>caption2...<test img>" for in-context learning. The test results are presented below.

| Image Selection | Caption Assignment | 4-shot | 8-shot |
|-----------------|--------------------|--------|--------|
| RS | GTC | 2.50 | 3.11 |

We can find that it achieves a very low CIDEr score. These observations are consistent with the analysis in the FROMAGe paper, whose Figure 7 shows that when provided with 5-shot inputs, FROMAGe's output is more story-like and less relevant to the target image. Thus FROMAGe does not support the study of in-context configurations. **7. Experiments on Smaller Open-Flamingo and Otter.** We test our methods in another two VLMs: the [Open-Flamingo 3B model](https://github.com/mlfoundations/open_flamingo) and [Otter](https://github.com/Luodian/Otter), which were proposed **after the NeurIPS submission deadline**. However, due to computational limitations and the tight rebuttal timeframe, we run a subset of the experiments to validate the robustness of our two key conclusions shown in Lines 227-229 and 302-306. We first show the results for the first conclusion, where the top and bottom tables are from the Open-Flamingo 3B model and Otter, respectively.

| Image Selection | Caption Assignment | Mean CIDEr |
|-----------------|--------------------|------------|
| RS | GTC | 90.31 |
| RS | MGC-TF@135 | 88.47 |
| SIIR-CLIP | GTC | 94.36 |
| SIIR-CLIP | MGC-TF@135 | 103.56 |

| Image Selection | Caption Assignment | Mean CIDEr |
|-----------------|--------------------|------------|
| RS | GTC | 88.36 |
| RS | MGC-TF@135 | 80.77 |
| SIIR-CLIP | GTC | 90.72 |
| SIIR-CLIP | MGC-TF@135 | 97.37 |

where RS and SIIR-CLIP denote Random Sampling and CLIP Similarity-based Image-Image Retrieval, GTC uses the first ground-truth caption among the 5 human-labelled ones, and MGC-TF@135 uses the model-generated captions. We can find that in both tables, when RS is used to select images, GTC outperforms MGC-TF@135.
For example, in the smaller Open-Flamingo model (the top table), RS-GTC achieves 90.31, which is higher than RS-MGC-TF@135's 88.47. However, when selecting similar images as in-context images, i.e., using SIIR-CLIP, MGC-TF@135 achieves higher performance, e.g., in the last two rows of the bottom table, MGC-TF@135 achieves 97.37, which is larger than GTC's 90.72. These comparisons in both tables are consistent with the findings in Lines 230-238, which confirms the robustness of our conclusion given in Lines 227-229 that when the visual cues are abundant, the consistent and simple patterns of MGC have more advantages. Then we show the results for the second conclusion, that similar images lead to short-cut inference, where the top and bottom tables show the results from the Open-Flamingo 3B model and Otter, respectively.

| | In-Context Images | GTC CIDEr | ICC CIDEr |
|---|-------------------|-----------|-----------|
| (1) | Test Image | 38.50 | 150.70 |
| (2) | SIIR-CLIP | 55.98 | 102.70 |
| (3) | RS | 70.72 | 43.50 |

| | In-Context Images | GTC CIDEr | ICC CIDEr |
|---|-------------------|-----------|-----------|
| (1) | Test Image | 66.40 | 37.90 |
| (2) | SIIR-CLIP | 73.04 | 27.90 |
| (3) | RS | 84.32 | 4.80 |

From (1) to (3), we respectively use (1) the test image itself, (2) retrieved images similar to the test image, and (3) randomly sampled images as the in-context images, where the similarity to the test image declines from method (1) to (3). We then observe that in both tables, from (1) to (3), the CIDEr scores diverge more from the in-context captions (ICC) but converge towards the ground-truth captions (GTC). These results are consistent with the findings in Lines 315-319, and thus the conclusion that "similar images lead to short-cut inference" still holds for the smaller Open-Flamingo 3B model and Otter. **8. Limitation.** We acknowledge that the findings may not be generalizable to other models in Lines 340-346. --- Rebuttal Comment 1.1: Title: Post-rebuttal comments Comment: Thanks to the authors for their detailed clarifications and for providing the experiment I requested. I have also reviewed the comments made by other reviewers and the authors' responses to them. The authors' rebuttal effectively addressed my concerns, and as a result, I will not be lowering my score. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our work and provide us with your feedback. We deeply appreciate your constructive comments and are pleased to hear that our clarifications and additional experiments were satisfactory. We will further polish the writing by following your suggestions.
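For readers unfamiliar with interleaved in-context prompting, here is a minimal sketch of how a prompt of the form "<img1>caption1<img2>caption2...<test img>" described in the rebuttal can be assembled. The special-token names and file names below are illustrative placeholders, not the actual tokens of FROMAGe or OpenFlamingo, and the images themselves would be fed to the vision encoder separately:

```python
from typing import List, Tuple

def build_icl_prompt(shots: List[Tuple[str, str]],
                     image_token: str = "<image>",        # placeholder token
                     sep_token: str = "<|endofchunk|>") -> str:  # placeholder token
    """Interleave one image slot and one caption per in-context shot, then
    leave a trailing image slot for the test image."""
    body = "".join(f"{image_token}{caption}{sep_token}" for _, caption in shots)
    return body + image_token

shots = [("coco_0001.jpg", "A dog chasing a ball in a park."),
         ("coco_0002.jpg", "Two cats sleeping on a sofa.")]
print(build_icl_prompt(shots))
# <image>A dog chasing a ball in a park.<|endofchunk|><image>Two cats sleeping on a sofa.<|endofchunk|><image>
```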
Summary: The paper studies how to choose in-context examples (images and text) for open flamingo models on the CoCo captioning task. Extensive experiments are run with many baselines. Based on these experiments, the authors conclude (1) given sufficient descriptiveness, simpler captions are better, (2) similarity of in-context images to the test image is less important when the corresponding captions to said images are poor. Strengths: S1. Strong empirical investigation of how different image and text sampling strategies affect in-context image captioning performance. The paper is correctly framed and emphasizes experimentation over methodology. S2. Extensive baselines implemented for the investigation. S3. Seemingly strong performance relative to random sampling on the train set, which is traditionally used to generate in-context examples. Weaknesses: W1. Presentation. Consider adding some of the key findings to the abstract. 7.3 CIDEr points is significant! It may be nice to mention this in the abstract as well. W2. Presentation. Some of the images in Figure 1 are hard to follow; consider making the example more demonstrative, or making the images bigger. Also grouping the good and bad captions within the same bubble was initially confusing and it took me some time to parse what was going on. W3. Presentation. L51-59 potentially have too much detail for the introduction. Consider distilling the content. The key takeaway from this paragraph seems to be that the authors experimented with many sampling strategies. W4. Experiments. It would be nice to see similar analysis for the VQAv2 dataset, which is also supported in the OpenFlamingo evaluation suite. One natural question is do some of the findings for image captioning translate to VQA-like tasks, which are potentially more similar to the QA tasks shown in Figure 1. W5. Presentation. Consider removing the abbreviations SIIR, DIIR, etc. from the intro. They were a bit tough for me to follow as they are not standard to the best of my knowledge and made reading the intro more difficult. In general, the many abbreviations in the paper make the writing difficult to follow. W6. Related work. Consider adding a related work section on image captioning, since this is a focus of the paper. W7. Clarity. A bit of a nit, but please specify how the samples are randomly sampled. I am assuming uniformly at random? W8. Clarity. Can you please expand on the SIIR-TAG baseline? In DIIR, how can SIIR-TAG be applied to a collection of images (a cluster)? W9. Missing baseline. What about using models like BLIP2 for synthetic captioning (MGC)? W10. Clarity. It is hard to tell what the takeaways from Figures 3 and 4 are from glancing at them, especially because all of the abbreviations can be confusing. Consider adding the key takeaways to the figure captions. It is not currently clear to me how to read these figures. Is the takeaway that MGC is best? In general, it is not always clear how the conclusions or claims in the paper are supported numerically by the figures or tables (e.g., claims in L231-233). W11. Presentation. In some cases numbers are averaged over multiple shots and in other cases this is not the case. Why did the authors choose to do this? And why is there no consistency in this choice? Technical Quality: 3 good Clarity: 1 poor Questions for Authors: The following questions are distilled from my most significant concerns: Can the authors address the concerns surrounding presentation (W1, W2, W3, W5, W7, W8, W10, W11)? 
While the baselines seem thorough, they are not always clear for me to follow. Additionally, while the authors run many experiments, the presentation of these experiments is not always interpretable and it is not clear to me that the experiments actually support the conclusions that the authors draw. Some more clarity surrounding this would be greatly appreciated. Is it possible to do some experiments for the best, middle, and worst baselines on VQAv2 to see if trends are similar for another image to text task (W4)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: The authors acknowledge that findings for the open flamingo model tested may not translate to the more powerful, but also closed-source flamingo model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. Presentation Issues.** We will add key findings in the "Abstract" (W1), enlarge Figure 1 (W2), divide the good and bad captions in Figure 1 (W2), distill the contents in Lines 51-59 (W3), and use the full names of the different sampling strategies in the "Introduction" and at the beginning of each section to avoid confusion (W5). **2. VQA Results (W4).** Please refer to **"3. VQA Results."** in the response to **Reviewer zeNE**. **3. Related Work about Captioning (W6).** Due to the space limitation, we show a short version of the related work about captioning and will expand it in the revision. Image Captioning (IC) [A] aims at correctly verbalizing an image in descriptive language, and can be approached through both retrieval [B] and generation [A], where the former seeks a whole sentence from a given corpus, while the latter generates the words one by one. Researchers have recently combined these approaches by first retrieving image-caption pairs and inputting them into generation models [C, D]. This process resembles in-context captioning, which also retrieves image-caption pairs to help captioning. However, in contrast to them [C, D], our work introduces novel image and caption selection strategies to study in-context captioning, and these methods can also enhance traditional retrieval-generation methods. [A] Show and tell: A neural image caption generator. [B] Probabilistic Embeddings for Cross-Modal Retrieval. [C] SmallCap: Lightweight Image Captioning Prompted with Retrieval Augmentation. [D] Retrieval-augmented transformer for image captioning. **4. Randomly Sampling (W7).** "Randomly Sampling" means uniformly sampling, and we will revise it. **5. SIIR-TAG (W8).** SIIR-TAG first calculates how many object, attribute, and relation tags overlap between each candidate image and the test image, and then returns the images with the most overlapping tags. For example, suppose the test image contains the three tags "dog", "white", "drink", image A contains the three tags "cat", "white", "sit", and image B contains the three tags "dog", "brown", "drink". Then SIIR-TAG will return image B, since it has two overlapping tags with the test one. To efficiently count how many tags are overlapping, we list all the discrete tags in a one-hot manner and then apply the "AND" operation. Please refer to **"1. DIIR in Lines 141-150."** in the response to **Reviewer tg5h** to see details of Diversity-based Image-Image Retrieval (DIIR). Briefly, DIIR divides all the tags into different clusters. Then, when calculating the ratio of overlapping tags, only the tags in that cluster are used. This is how we use SIIR-TAG to retrieve images in each cluster. **6. BLIP2 Captions (W9).** We use the official BLIP2 code from [link](https://github.com/salesforce/LAVIS/tree/main/projects/blip2) to produce model-generated captions (MGC) with a CIDEr score of 124. The subsequent table presents results using Random Sampling (RS) and CLIP Similarity-based Image-Image Retrieval (SIIR-CLIP) for image selection.

| Image Selection | Caption Assignment | Mean CIDEr |
|-----------------|--------------------|------------|
| RS | MGC-BLIP2@124 | 71.76 |
| RS | MGC-TF@88 | 76.40 |
| SIIR-CLIP | MGC-BLIP2@124 | 89.67 |
| SIIR-CLIP | MGC-TF@88 | 83.49 |

From the table, despite BLIP2 captions achieving a CIDEr score of 124, outperforming MGC-TF@88's score of 88, they only surpass MGC-TF@88 in the SIIR-CLIP case. This might be because BLIP2, trained on various captioning datasets, introduces styles different from COCO.
When images are randomly sampled, this style diversity might hurt the VLM's caption quality. However, BLIP2's detailed descriptions, owing to its extensive training, enhance the VLM's outputs when the in-context images are similar to the test image. This highlights the interdependency of image and caption selection strategies.

**7. Takeaways (W10).** We will highlight the key takeaways in the figure titles. For instance, Figure 3 suggests that regardless of the image-selection strategy employed, using MGCA (using model-generated captions to select a ground-truth caption as the in-context caption) consistently outperforms GTC (simply selecting the first ground-truth caption as the in-context caption). This is evident as the blue blocks surpass the grey dashed lines. In Figure 4, each sub-figure shows captions of varying quality. Across these sub-figures, the six image selection strategies yield diverse rankings, e.g., SIIR-CLIP ranks first in (a) and (d) but performs worse in the other sub-figures. A primary insight here is that the image selection strategy must adapt when captions of diverse quality are used, to ensure optimal performance.

**8. Claim in Lines 231-233 (W10).** The claim in Lines 231-233 (that ground-truth captions have better language patterns than model-generated captions) is not a claim derived from our experiments, but a consensus in image captioning. We employ this consensus as a foundation for analyzing our experimental results to support our claim in Lines 227-229. We will add a footnote in the revision to avoid confusion.

**9. Average or Single-Shot Scores (W11).** During analysis, in most cases, as the number of shots changes, we observe consistent conclusions, leading us to use average scores. For example, in Lines 259-273, MGCA consistently improves performance irrespective of the shot number. However, in some cases, the shot count becomes pivotal, revealing interesting details. For example, as highlighted in Lines 239-240, when only 4 in-context examples are available, simpler patterns can be better recognized by the VLM, yielding better captions, as shown in Figure 5 (b-d) where MGC-TF@88 > GTC in the 4-shot scenario. But as the number of shots increases to 16, the VLM can better recognize the complex patterns of GTCs, producing superior results, as illustrated in Figure 5 (b-d) where GTC > MGC-TF@88 in the 16-shot scenario.

---

Rebuttal Comment 1.1: Comment: Thanks to the authors for their detailed responses and additional experiments! I especially appreciate the VQAv2 results and BLIP2 captioning results. My outstanding complaint is the paper writing, which I find to be inaccessible, especially with all the abbreviations. I find that the key takeaways are not always clear. An alternative approach would be to provide a table with the axes of variation in methodology that you all consider. You can then provide checkmarks to differentiate the different methods. I am electing to raise my score to a 5.

---

Reply to Comment 1.1.1: Comment: Thanks for your appreciation of our detailed response. We will further polish the writing by following your suggestions.
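As a concrete illustration of the tag-overlap counting in point **5** above, here is a minimal sketch; it is our reconstruction for illustration only (the tag vocabulary and function names are hypothetical), not the authors' released code.

```python
import numpy as np

# Hypothetical closed vocabulary of object/attribute/relation tags.
TAG_VOCAB = ["dog", "cat", "white", "brown", "sit", "drink"]
TAG_INDEX = {t: i for i, t in enumerate(TAG_VOCAB)}

def one_hot(tags):
    """Encode a set of discrete tags as a binary vector over the vocabulary."""
    vec = np.zeros(len(TAG_VOCAB), dtype=bool)
    for t in tags:
        vec[TAG_INDEX[t]] = True
    return vec

def siir_tag_rank(test_tags, candidate_tag_sets):
    """Rank candidate images by how many tags they share with the test image.

    The overlap count is the popcount of the element-wise AND between
    one-hot encodings, matching the description in the rebuttal.
    """
    test_vec = one_hot(test_tags)
    overlaps = [int(np.sum(test_vec & one_hot(tags))) for tags in candidate_tag_sets]
    return sorted(range(len(candidate_tag_sets)), key=lambda i: -overlaps[i])

# The example from the rebuttal: image B shares two tags ("dog", "drink")
# with the test image, image A shares one ("white"), so B is ranked first.
print(siir_tag_rank({"dog", "white", "drink"},
                    [{"cat", "white", "sit"},       # image A
                     {"dog", "brown", "drink"}]))   # image B -> [1, 0]
```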
Summary: This paper examines the effects of varying configurations on Vision-Language (VL) in-context learning, specifically in the field of image captioning. The authors developed four strategies each for image selection and caption assignment to determine how different ways of configuring image-text pairs impact the learning process. The study revealed two insights: 1) Caption quality, influenced by descriptiveness and language patterns, impacts captioning performance, with simpler language yielding better results when paired with descriptive captions. 2) The effectiveness of similar images relies on caption quality, with excessive similarity potentially misleading the model when paired with low-quality captions. Strengths: + The general goal is reasonable and interesting. The field of in-context learning in multimodality is well worth exploring. + The experiments were comprehensive. The authors compared numerous in-context learning strategies, including some new proposals of their own. + The paper provides some useful insights about how to choose in-context samples. Weaknesses: - The authors said that “the performance ... heavily relies on the caption quality” in the Figure 1 caption. I'm curious about how the authors define the quality of a caption. The example given in Figure 1 suggests that the caption in the blue box is better than the one in the red box. That is easy to understand because the red box caption contains clear errors, such as repeated phrases. When selecting in-context samples, such samples obviously should not be selected. But, in practice, such samples should not even be available as candidates, right? Could the authors display which captions the model considers high-quality or low-quality? Or intuitively illustrate which captions are expected to be considered high quality among several ground truths? - One insight of the paper is that "When captions adequately describe salient image objects, simpler language patterns may yield better results." But aren't the ground-truth captions in the training set in the language style expected by the benchmark? If we use captions generated by the model, then how does the model know what style is required by the benchmark? - I'm not certain whether the experimental setting in this paper can effectively evaluate the capabilities of in-context learning. In-context learning aims to use a small number of samples as demonstrations to learn new things, such as a new task, a new input-output format, new object naming rules, etc. This work conducts experiments on the MSCOCO caption dataset. I'm unsure what the purpose of in-context learning is in this case, as these samples seem quite common for large models. So, what patterns or things is the model expected to learn from these in-context samples? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: My major concerns are about the current setting of in-context learning, and I would like to hear more about how to define the quality of the captions, which is important in choosing samples. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. Quality of Image Captions.** Image captioning quality hinges on the ability of the caption to accurately and grammatically describe the main events of an image. Inadequate grammar or descriptions that either neglect or misinterpret key events indicate poor quality. Human-labeled ground-truth captions (GTC) typically adhere to these standards and are thus deemed high-quality. This paper assesses various model-generated captions (MGC) with adjustable quality to explore their influence on in-context captioning. Directly quantifying MGC quality based on the aforementioned standards is challenging. Hence, researchers suggest computing the similarity between an MGC and several human-labeled captions, using CIDEr[A] as the primary metric. As noted in Lines 167-172, we assign a CIDEr score to each MGC to measure its quality; e.g., MGC-TF@66 denotes model-generated captions with a CIDEr score of 66. Higher CIDEr scores generally suggest superior quality. Our experiments reveal that varying caption qualities, denoted by different CIDEr scores, impact the final outcomes. Regarding Figure 1, it underscores that even when the in-context images are similar to the test one, if the in-context captions are of low quality, the generated captions will still be of low quality. To emphasize this, we present examples of particularly low-quality captions.

**2. Diversity in Ground-Truth Caption Patterns.** Since ground-truth captions (GTC) are labelled by different annotators, they usually have much more diverse sentence formats than model-generated captions (MGC). Consider the diverse styles of these four GTC from the COCO dataset, which are used as in-context captions:
- A homemade pizza with sauce and cheese on foil.
- Man leaning his mouth down to a plate that has a sandwich on it and a blue water bottle behind it.
- Bunches of bananas hanging from fire on a line.
- There are several birds that are standing at the beach.

Then, as demonstrated in Lines 236-237, we show that such diverse patterns can be more challenging for a VLM to recognize than simple, uniform patterns; hence MGC sometimes outperforms GTC. The Transformer models described in Lines 202-205 that are used to generate MGC are trained following [B], where the CIDEr scores between the generated captions and the ground-truth captions are used as the objective. In this way, these Transformer models still learn the sentence style of the GTC. For more discussion about caption quality and patterns, please refer to **"1. Consistent Format."** and **"2. Salient Objects."** in the response to **Reviewer FMu1**.

**3. Significance of Our Study.** We agree with you that the primary objective of in-context learning (ICL) is to use a few samples to learn new knowledge. However, this is not the major motivation of our research. Instead, we aim to explore how different image and caption selection strategies affect the performance of multi-modal ICL. The significance of our research is that it provides a roadmap for researchers in multi-modal ICL. For instance, our research shows that, unlike the NLP domain where using similar samples as in-context examples usually leads to good performance, in the multi-modal domain the strategies for selecting images and texts influence each other; e.g., one major finding of our study is that choosing similar images and high-quality texts is not guaranteed to always yield the best results.

[A] Cider: Consensus-based image description evaluation.
[B] Self-critical Sequence Training for Image Captioning.

---

Rebuttal Comment 1.1: Comment: Thank you for the authors' detailed response and the comments from other peers. The authors' answers have addressed my main concerns, and I will raise my score.

---

Reply to Comment 1.1.1: Comment: Thank you for reconsidering your score and acknowledging our detailed response. We appreciate your constructive feedback throughout this review process.
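As background for the CIDEr-based quality labels discussed in point **1** above (e.g., MGC-TF@66), a minimal sketch of how such scores are typically computed with the `pycocoevalcap` package follows. This is our illustration of the standard usage with made-up captions, not the authors' evaluation script, and in practice captions are first run through a tokenizer such as PTBTokenizer.

```python
from pycocoevalcap.cider.cider import Cider

# Both sides are dicts keyed by image id; each value is a list of strings.
gts = {  # human-labelled reference captions
    "img1": ["a small plate holds a pot pie and broccoli .",
             "pot pie and broccoli on a plate on a table ."],
    "img2": ["a man riding a motorcycle down a dirt road ."],
}
res = {  # exactly one model-generated caption per image
    "img1": ["a plate of food with broccoli on a table ."],
    "img2": ["a person rides a motorcycle on a road ."],
}

scorer = Cider()
corpus_score, per_image_scores = scorer.compute_score(gts, res)
# Papers commonly report this value scaled by 100; e.g., a caption set whose
# scaled corpus score is 66 corresponds to the label MGC-TF@66 in this paper.
print(100 * corpus_score)
```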
Summary: This work investigates the configuration of in-context image-caption pairs for few-shot image captioning in a VL model: OpenFlamingo. Specifically, the authors compare four different image selection methods: (1) random selection, (2) image-image similarity based selection (SIIR), (3) image-caption similarity based selection (SICR), and (4) image-image diversity based selection (DIIR). On top of the selected images, the authors also test four different caption assignment methods: (1) ground-truth caption (GTC), (2) model-generated caption (MGC), (3) iterative prompting (IP), and (4) model-generated captions as anchors (MGCA). The findings from this work suggest that (1) selecting images similar to the test image is better than random sampling; (2) simpler language patterns in captions may be better if they adequately convey the salient information of the image; and (3) similar in-context images can induce short-cut inference from their captions. Strengths: Analyses of in-context sampling methods for VL models can expand the literature. In particular, the fact that MGCA leads to better performance than random ground-truth captions is interesting. Weaknesses: All experiments are conducted on a single open-source VL model; this raises questions about whether these results generalize. Further analyses should have been conducted to back up the authors' claims: "simpler caption pattern improves caption generation" & "verbalizing the major patterns helps identify which GTC provides more detailed info about these patterns". Please see the question section. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) I find it interesting that GTC performs worse most of the time compared to MGC. Why do you think this is the case? I am very suspicious that it is due to the consistent format of captions across in-context samples rather than the simple pattern of the MGCs. However, this is hard to analyze with the current setup. MGCs will have very similar sentence structures for similar images compared to GTCs, which is also shown in Figure 7-(a). I suggest controlling the format consistency across GTCs within the in-context samples. For example, you can set captions to have the same complex structure and see how that affects performance. If simplicity is the true cause of the improved performance, a consistent but complex format should not improve performance. Also, you can try very diverse formats across captions but make them simple in structure. (2) Don't almost all GTCs include the most salient object in their captions? I find it hard to believe that MGCA improves performance because it finds the GTC that describes the salient pattern with more info. The GTCs selected by MGCA will probably have a similar structure to the MGCs, hence it reduces the format variance of GTCs across the in-context samples. What happens if you select the GTC that is least similar to the MGC? (3) When giving the in-context samples, do the images really help performance? What if you drop all in-context images and give only the captions instead? (4) I think it would be better to swap the locations of Figures 3 & 4 with Figures 5 & 6. Is there a reason why you put Figures 3 & 4 first? I found it difficult to go back and forth because the main draft explains Figures 5 & 6 first. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors also acknowledge the limitation that these analyses were conducted on a single model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. Consistent Format.** When we say that the model-generated captions (MGC) have "simple patterns", part of the meaning we wish to convey is that they often have a consistent caption format; e.g., in Lines 236-237, we mention that ground-truth captions (GTC) contain more diverse patterns than MGC, which makes it harder for the VLM to recognize one uniform format. We agree with you that the statement "consistent format of captions" is more accurate than "simple pattern", and we will use this statement in the revision. To further validate this claim and follow your suggestion, we conduct an additional experiment by manually selecting 1000 captions that have a consistent format and building a candidate set $\mathcal{D}$ from these selected captions and their corresponding images. We then apply random sampling (RS) to select images and use these captions with a more consistent format as in-context captions to build a baseline "CF", shown in the following table.

| Image Selection | Caption Assignment | Mean CIDEr |
|-----------------|--------------------|--------|
| RS | GTC | 80.04 |
| RS | CF | 82.25 |
| RS | MGCA-TF@88 | 83.06 |

Note that "GTC" in this table denotes using the first ground-truth caption among the 5 human-labelled ones, and MGCA-TF@88 denotes selecting one ground-truth caption using the caption from MGC-TF@88 as the anchor. It can be seen that using captions with a consistent format outperforms GTC, suggesting that a consistent format plays a significant role in in-context captioning. However, MGCA still outperforms CF, suggesting that consistent format is not the only factor that improves performance.

**2. Salient Objects.** One image usually contains many objects, some salient and some not. However, different human annotators have diverse preferences in that they may describe different details of an image, including some non-salient objects. For example, [one image](https://cocodataset.org/#explore?id=280810) from the COCO dataset has the following ground-truth captions:
- a small plate holds a pot pie and broccoli.
- pot pie and broccoli on a plate on a table with a laptop in the background
- a plate holds a meat pie, carrots and broccoli.
- lunch is in front of the laptop computer.
- a red and white plate topped with a pot pie and broccoli.

We can see that only the 2nd one contains both the objects "plate" and "laptop". Then an MGC "a plate near a laptop" helps select the 2nd caption, which also supplies details about the plate that may help generate better captions, as discussed in Lines 268-270. To quantitatively verify this conclusion, we use VinVL [A] to extract the salient objects from each image based on the predicted probabilities and then calculate how many salient objects overlap with the nouns in the selected caption. We use three kinds of captions: (1) the caption with the highest CIDEr score against the MGC, which attains a **0.954** overlap ratio; (2) the first caption among the five ground-truth captions, which attains a **0.845** overlap ratio; and (3) the caption with the lowest CIDEr score against the MGC, which attains a **0.737** overlap ratio. These findings confirm that MGCA-selected captions are more likely to contain salient objects, leading to improved performance. We also conduct an experiment using the captions with the lowest MGCA scores as our in-context captions.
Some results are given here:

| Image Selection | Anchor | MGCA Score | Mean CIDEr |
|-----------------|----------------|---------|------|
| RS | GTC | -- | 80.45 |
| RS | MGC-TF@88 | highest | 83.11 |
| RS | MGC-TF@88 | lowest | 73.84 |
| SIIR-CLIP | GTC | -- | 91.97 |
| SIIR-CLIP | MGC-TF@88 | highest | 99.58 |
| SIIR-CLIP | MGC-TF@88 | lowest | 74.39 |

We see that selecting the caption with the lowest MGCA score greatly reduces performance, suggesting that the captions with the lowest scores contain fewer salient objects.

**3. Dropping In-Context Images.** We follow your suggestion to drop all in-context images and give only the captions in the in-context samples. The results are given below, where the RS and GTC strategies are used to select the images and captions:

| | Mean CIDEr |
|-----------|-------|
| w/ images | 80.04 |
| w/o images | 2.99 |

We find that without the images, the model cannot generate meaningful captions. We checked the generated captions and found that they usually tend to repeat some of the provided in-context captions, suggesting the importance of the visual cues of the in-context images.

**4. Figures 3-6.** We will follow your suggestion to swap Figures 3 & 4 with Figures 5 & 6.

**5. Generalizability of the Key Conclusions.** Please refer to **"7. Experiments on Smaller Open-flamingo and Otter."** in the response to **Reviewer tg5h**.

[A] VinVL: Revisiting Visual Representations in Vision-Language Models

---

Rebuttal Comment 1.1: Comment: I have read the authors' response. Thank you for running the additional experiments. It would be great to see these experiments included in the updated version. P.S. I should have suggested randomly swapping the images rather than dropping all the images, as dropping leads to a serious OOD situation and a performance crash.

---

Reply to Comment 1.1.1: Title: What does ''randomly swapping the images'' mean? Comment: Thanks for your appreciation of our response. Regarding what ''randomly swapping the images'' means: should we swap the order of the in-context image-caption pairs, or should we swap the images to make mismatched image-caption pairs?
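For completeness, the overlap-ratio computation in point **2** of the rebuttal above can be sketched as follows; the detector output and helper inputs are hypothetical stand-ins (the rebuttal uses VinVL detections and the nouns of the selected caption).

```python
def salient_overlap_ratio(salient_objects, caption_nouns):
    """Fraction of detected salient objects that also appear among the
    nouns of the selected caption (the ratio reported in the rebuttal)."""
    salient = set(salient_objects)
    if not salient:
        return 0.0
    return len(salient & set(caption_nouns)) / len(salient)

# Toy example mirroring the COCO image discussed above (values illustrative).
detected = ["plate", "laptop"]  # e.g., top detections from VinVL
nouns = ["pie", "broccoli", "plate", "table", "laptop"]  # e.g., via a POS tagger
print(salient_overlap_ratio(detected, nouns))  # 1.0: both salient objects covered
```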
null
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper explores in-context configurations for the few-shot ability of Vision-Language Models (VLMs). To this end, the authors design four strategies for image selection and four for caption assignment to explore the influence of in-context pairs on image captioning. Extensive experiments uncover two valuable insights: (1) captions that adequately describe salient image objects with simpler language patterns may yield better results; (2) excessive similarity might cause VLMs to make short-cut inferences from the in-context captions. Finally, they introduce iterative prompting for images with limited or no ground-truth captions, which boosts model performance with an average CIDEr improvement of 7.3. Strengths: 1. **The considered problem is very relevant and timely for the AI community.** It is urgent to explore in-context configurations for vision-language pre-trained models. 2. **The observations are valuable.** They observe that better performance may be achieved with simpler sentence patterns when the selected images compensate for descriptiveness issues. Moreover, they observe that the VLM may build a shortcut rather than learn to caption when the in-context images are similar to the test one. 3. **The experiments are comprehensive.** They devised four strategies for image selection and four for caption assignment to explore the influence of in-context pairs on image captioning. Weaknesses: 1. **The selected metric is not convincing.** They use CIDEr to evaluate caption models' performance. However, CIDEr is a statistical indicator related to description length, which can hardly fully reflect the quality of the generated captions. Therefore, it is necessary to explore generative evaluation indicators, such as CLIPScore[1] and GPTScore[2], that use pre-trained models. 2. **The choices of VLMs are limited, which may lead to less objective conclusions.** They only use OpenFlamingo as the multi-modal learner. The authors should add more VLMs to explore in-context configurations for image captioning, e.g., MiniGPT4[3] and LLaVA[2]. 3. **The observations are a little overclaimed.** Observations about in-context configurations for image captioning are not equivalent to VL in-context learning in general. There are many visually-conditioned VL tasks that need to be explored, such as VQA[4] and dense image captioning. 4. **The method section on iterative prompting (IP) is confusing.** It is hard to understand how the IP algorithm works, and analysis of IP is lacking. [1] Hessel, Jack, et al. "Clipscore: A reference-free evaluation metric for image captioning." arXiv preprint arXiv:2104.08718 (2021). [2] Liu, Haotian, et al. "Visual instruction tuning." arXiv preprint arXiv:2304.08485 (2023). [3] Zhu, Deyao, et al. "Minigpt-4: Enhancing vision-language understanding with advanced large language models." arXiv preprint arXiv:2304.10592 (2023). [4] Tsimpoukelli, Maria, et al. "Multimodal few-shot learning with frozen language models." Advances in Neural Information Processing Systems 34 (2021): 200-212. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: I tend to increase the review score once the following questions are answered well; the questions are listed in the weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. CLIPScore Assessment.** Upon your recommendation, we employ CLIPScore to re-assess the quality of our generated captions. Due to space constraints, we present only the key findings here, which validate the two key conclusions shown in Lines 227-229 and 302-306. We first show the results for the first conclusion:

| Image Selection | Caption Assignment | Mean CLIPScore |
|-----------------|---------|------|
| RS | GTC | 79.14 |
| RS | MGC-TF@135 | 78.05 |
| SIIR-CLIP | GTC | 80.46 |
| SIIR-CLIP | MGC-TF@135 | 79.79 |

Here, RS denotes Random Sampling, SIIR-CLIP denotes CLIP embedding Similarity-based Image-Image Retrieval, GTC denotes using the first ground-truth caption among the 5 human-labelled ones, and MGC-TF@135 denotes using the model-generated captions whose CIDEr score is 135. From the table, it is evident that under RS, GTC (79.14) outperforms MGC-TF@135 (78.05), while utilizing SIIR-CLIP causes MGC-TF@135 (80.46) to surpass GTC (79.79). This observation is consistent with the findings given in Lines 230-238, which also supports our claim in Lines 227-229. We then show the results for the second conclusion, that similar images lead to short-cut inference, where we re-measure with CLIPScore the captions whose CIDEr scores were reported in the paper:

| | In-Context Images | GTC CLIPScore | ICC CLIPScore |
|---|-------------------|------------------|------------------|
| (1) | Test Image | 50.70 | 77.78 |
| (2) | SIIR-CLIP | 53.50 | 77.44 |
| (3) | RS | 73.90 | 59.03 |

We still observe that as the in-context images become more similar to the test image, the VLM tends to mimic the in-context captions (ICC), which is consistent with the findings in Lines 315-319. For instance, as the similarity declines from method (1) to (3), the CLIPScore diverges from ICC and converges towards the ground-truth captions (GTC). More results using CLIPScore and GPTScore will be incorporated in the revision.

**2. LLaVA and MiniGPT.** LLaVA and MiniGPT are primarily designed for instruction tuning, where they work in a multi-round chat manner instead of an in-context learning manner. The network architectures of both models cannot handle extensive, interleaved vision and language data as Flamingo does. Moreover, they lack the appropriate training losses to effectively handle few-shot prompt inputs in the manner that Flamingo does. As such, they may be capable of processing 4-shot inputs in the manner of multi-turn chat, but will fail to deal with many more shots, such as 8 or 16. However, to investigate the impact of diverse configurations, a model should be able to handle many more shots. Considering these disadvantages, we do not use them to explore in-context configurations. However, we still tried some other VLMs: please refer to **"7. Experiments on Smaller Open-flamingo and Otter."** in the response to **Reviewer tg5h** to see the results on Otter and a smaller version of Open-Flamingo, two models with in-context learning ability published after the NeurIPS submission deadline.

**3. VQA Results.** We follow your suggestion to explore another VL task to test the major conclusion of our study. Given our computational limitations and the tight rebuttal timeframe, we choose to explore VQAv2, since it differs more from image captioning than dense captioning does, which was also suggested by Reviewer vuiF.
To explore different configurations in VQA, we use two image selection strategies, Random Selection (RS) and CLIP Similarity-based Image-Image Retrieval (SIIR-CLIP). Moreover, to obtain in-context text of varying quality levels (best, middle, and worst), we use matched (MAT), half-matched (HMAT), and mismatched (MMAT) question-answer pairs. Here, MAT, HMAT, and MMAT signify that all, half, or none of the answers are correct, respectively.

| Image Selection | QA pairs | Mean Accuracy |
|-----------------|--------------|--------------|
| RS | MAT | 47.97 |
| RS | HMAT | 47.34 |
| RS | MMAT | 47.13 |
| SIIR-CLIP | MAT | 48.48 |
| SIIR-CLIP | HMAT | 47.55 |
| SIIR-CLIP | MMAT | 46.95 |

We find that when the text quality is high, i.e., when using matched question-answer pairs (MAT), SIIR-CLIP > RS, suggesting that using similar images achieves higher accuracy than random sampling. However, with low text quality, e.g., HMAT or MMAT, similar images do not always lead to better results; e.g., with MMAT, SIIR-CLIP < RS. This observation is consistent with the finding in Lines 230-238 and supports the major conclusion that image and text selection strategies influence each other.

**4. Iterative Prompting (IP).** Briefly, IP employs the VLM for iterative in-context caption generation. It is designed for scenarios with abundant images but limited human-labelled captions. Consider $\mathcal{D}=\{(I_1,C_1);...;(I_n,C_n); \tilde{I}_1;...;\tilde{I}_N\}$ where $n \ll N$ (e.g., $n=32$ and $N=10000$) and only $n$ images have human-labelled captions. Given a test image $\hat{I}$, traditional image selection becomes impractical due to the scarcity of available captions. Therefore, we suggest generating a caption $\tilde{C}$ for each $\tilde{I} \in \mathcal{D}$ using Eq. (1), where $\mathcal{S}=\{(I_1,C_1);...;(I_n,C_n)\}$. This provides each image in $\mathcal{D}$ with a corresponding caption, allowing the use of the earlier image selection methods such as SIIR or SICR (Similarity-based Image-Caption Retrieval). This process can be executed iteratively, hence the name Iterative Prompting. In essence, MGC-VLM(N) introduced in Line 169 is equivalent to employing this method once. Given the in-depth analysis of the MGC-VLM(N) captions in Section 4.2.1, we bypass their re-examination in Lines 274-286 and instead investigate how many iterations IP needs before the performance saturates. We will clarify this in the revision.
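The IP procedure described in point **4** can be summarized in a short sketch. Here `generate_caption` (the VLM call of Eq. (1)) and `select_in_context` (e.g., SIIR/SICR retrieval over the now-captioned pool) are hypothetical stand-ins for the paper's actual components; running the loop once corresponds to MGC-VLM(N).

```python
def iterative_prompting(labeled_pairs, unlabeled_images,
                        generate_caption, select_in_context, num_iters=2):
    """Sketch of Iterative Prompting (IP): bootstrap captions for a large
    unlabeled pool from a few human-labelled pairs, then refine iteratively.

    labeled_pairs   : list of (image, caption) with human labels (n items)
    unlabeled_images: list of images without captions (N >> n items)
    """
    # Iteration 0: caption every unlabeled image using only the n labelled
    # pairs as in-context examples (image selection is impractical here).
    pool = [(img, generate_caption(img, labeled_pairs)) for img in unlabeled_images]

    for _ in range(num_iters - 1):
        # Later iterations: every image in the pool now carries a caption,
        # so retrieval-based selection (SIIR/SICR) becomes possible.
        pool = [(img, generate_caption(img,
                                       select_in_context(img, labeled_pairs + pool)))
                for img, _ in pool]
    return pool  # every image in D now has a (model-generated) caption
```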
null
null
null
null
null
null
Efficient Neural Music Generation
Accept (poster)
Summary: This paper introduces an LM-guided diffusion model designed for the generation of music audio. The proposed system follows a three-step process, starting with the extraction of audio-text embeddings (MuLan tokens) using a pre-trained MuLan model. Next, semantic tokens (wav2vec tokens) are generated with the aid of a language model. Finally, the synthesized music audio is produced using a dual-path diffusion model. Experimental results highlight the superiority of the proposed model, showcasing its practical advantages in terms of sampling speed and the ability to generate music seamlessly. Strengths: 1. The paper exhibits clear and well-written sections before the experiments, ensuring the reader’s comprehension and engagement. 2. The paper addresses a highly meaningful topic and task, aligning with the current trends and hot topics in language modeling. 3. The proposed method is well-designed, effectively integrating domain knowledge in music and demonstrating a deep understanding of the subject matter. Weaknesses: 1. The overall structure of the paper’s writing could benefit from optimization. For example, important experimental results should ideally be included in the main body rather than being relegated to the appendix. 2. The paper lacks a comprehensive objective evaluation of the main experiments; in particular, objective metrics are used only in the ablation study rather than for comparing the proposed method with the baseline approaches. 3. Based on the findings in Table 3, the proposed method exhibits a noticeable drop in musicality and text correlation compared to the baseline approaches. This raises questions about the motivation behind the proposed method. Is the trade-off in performance for the sake of efficiency justified? Does the efficiency gained by the proposed method outweigh the potential loss in important aspects such as musicality and text correlation? 4. The appendix file is difficult to locate, as there is no indication or hint for readers that it is available on the demo page. Clear guidance or references within the paper would greatly assist reviewers and readers in accessing the relevant appendix materials. 5. The paper lacks a clear explanation for certain design choices in the proposed method, such as the motivation behind utilizing an angle parameterization for the diffusion schedule and the specific design of Equation 4. Without a detailed rationale and justification for these design decisions, it becomes difficult to comprehend their significance and how they improve the overall performance and effectiveness of the method. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: NA Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the reviewer's comments and suggestions. In the following, we address each of the concerns raised by the reviewer.

### Response to concern #1
**Optimizing the overall structure of this paper**: We are thankful for the reviewer's constructive comment, which we have taken seriously in revising this paper for a clearer presentation. Thanks to the concrete suggestions raised by all reviewers, we have already made several changes in the revised paper. Some of the amendments are described below:
- (1) We have re-organized Section 4 to make it more condensed and self-contained. Many details regarding the network architecture have been moved to the Appendix. We put more effort into summarizing the ideas in sentences instead of equations. Also, some leaps of definition were avoided in the revised paper. For example, the diffusion velocity was ill-defined in Section 4 (as pointed out by reviewers **4HjZ** and **ieEr**). We added a brief description in advance to make it self-contained.
- (2) As Section 4 is now more condensed, the ablation studies have been moved from the Appendix to the main paper, as suggested by the reviewer.
- (3) We also extended the related work to add more discussion on audio/music generation methods as well as audio representation learning, as suggested by reviewer **cbjn**.

---

### Response to concern #2
**Lack of comprehensive objective evaluation in the main experiments**: For each of the individual modules, we were eager to investigate suitable objective measures for rigorous leave-one-out comparisons, e.g., SI-SNR for different diffusion architectures. However, for an overall assessment of the whole generation system, since the competing systems are not publicly released and were trained on entirely different datasets, the fairness of an objective comparison cannot be guaranteed. Besides, as emphasized in the title of this paper, MeLoDy is proposed to solve the computational problem associated with the SOTA MusicLM, and to make efficient music generation from free-form text possible. We focus on the comparison against MusicLM in terms of various aspects -- speed, quality, musicality, and text correlation. Among these 4 aspects, only speed was objectively evaluated since, to the best of our knowledge, no known objective measure can be as precise as the subjective evaluations of music professionals when assessing the audio quality, musicality, and text correlation of an arbitrary audio sample.

---

### Response to concern #3
**``noticeable drop in musicality and text correlation''**: First, we would like to note that the baseline samples we used for comparisons are (at least partially) cherry-picked, as stated in the main paper of Noise2Music. In contrast, we used the non-cherry-picked ones for all the text prompts taken from MusicLM and Noise2Music. In fact, the number of samples released on our demo page exceeds 300, which precludes cherry-picking a sample for each prompt. Presumably, the sample variances of diffusion and LM could cover marginal differences in musicality and text correlation. In other words, from a practical standpoint, if we need to improve the musicality and text correlation for a given text prompt, we can re-sample with the same prompt until we get a desired sample. In contrast, the generation speed and audio quality (denoting the upper bound of the generation quality) are rather steady -- they would not vary drastically even if we drew more samples.
---

### Response to concern #4
**Appendix**: We would like to apologize for the missing appendix, which was due to sudden network issues that happened around the appendix deadline. We chose to immediately upload it to the demo page (the commit time in GitHub is traceable). Although the supplementary material is regarded as optional, the reviewer is right that we strongly recommend readers look into the Appendix for a thorough understanding of this paper. We will definitely supplement the Appendix file if this paper is accepted.

---

### Response to concern #5
**Lack of clear explanation for certain design choices**: We acknowledge the reviewer's comment regarding presentation clarity. A similar argument was made by reviewer **cbjn**. We take this seriously and have added more text explaining the motivations and the reasons behind the choice of each module (e.g., angle parameterization has been compared against noise parameterization and performed the best in [1]). As pointed out before, we have restructured Section 4 to provide more incentives in textual form instead of mathematical form. We believe our revisions sufficiently improve the clarity of the paper and address the problems the reviewer was concerned about.

[1] Flavio Schneider, et al. Moûsai: Text-to-music generation with long-context latent diffusion. 2023.
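For readers unfamiliar with the angle parameterization mentioned in the response to concern #5: in the convention popularized by v-prediction diffusion models and used in [1], the noisy latent and the velocity target can be written, up to the reparameterization between the time $t$ and the angle $\phi_t \in [0, \pi/2]$, as

$$z_t = \cos(\phi_t)\,x + \sin(\phi_t)\,\epsilon, \quad \epsilon \sim \mathcal{N}(0, I), \qquad v_t = \cos(\phi_t)\,\epsilon - \sin(\phi_t)\,x,$$

so the whole schedule is specified by a single angle, and the signal and noise scales automatically satisfy $\cos^2\phi_t + \sin^2\phi_t = 1$ (variance-preserving). This is our summary of the standard convention, not an excerpt from the paper.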
Summary: The authors present a text-to-music model with the aim of achieving faster-than-real-time generation on a V100 GPU. The authors introduce a latent-space diffusion model. The latent space is obtained with an adversarial VAE (replacing the VQ-VAE from MusicLM), and a first LM is trained on the semantic tokens just as in MusicLM. The LM is also conditioned on the quantized output of a MuLan model. Then a latent diffusion model is trained to predict the latent representation of the VAE, conditioned on the semantic tokens. In order to improve the speed of the model, the authors use a dual-path architecture, i.e., they alternate between modeling long-range and short-range dependencies. Short-range dependencies are modeled with a simple RNN, while long-range ones are modeled with an attention-based model. The model is trained on ~260k hours of music with tag labels, enriched into full descriptions with ChatGPT outputs. The authors compare with the publicly available samples of MusicLM and Noise2Music. They show that their model achieves better quality but overall worse musicality and text fidelity. They also report the FAD on MusicCaps (5.1), which is worse than that of MusicLM (4.0) or Noise2Music (2.1). Strengths: - Large improvement in terms of speed compared with the many models and diffusion steps required by Noise2Music. - Improvements on the diffusion schedule. - Interesting idea of using a dual-path architecture to improve the runtime. This allows taking full advantage of the non-causal aspect of the diffusion process used here instead of auto-regressive modeling. - Runtimes in Table 2 are impressive. - Subjective evaluations show improved quality. Weaknesses: - Still requires a large number of models to train, e.g., a MuLan model, a semantic model (w2v-BERT) plus one LM for the semantic tokens, and one dual-path diffusion model, not counting the VAE. - Ablation studies are limited to the noise schedule. There is no strong objective motivation for most of the design choices. - The authors mention that the MuLan Cycle Consistency score on the MusicCaps test set is worse than their model's. However, that could be an artefact of how the authors' MuLan model is trained, e.g., on weak labels generated from partial tags rather than full descriptions, particularly considering that their own model is trained with this MuLan model, so it could overfit to any bias it has. - When comparing the number of function calls in Table 3, the authors forget to mention that for MusicLM, each call requires the evaluation of a single time step, while their own model requires processing all the timesteps of the input sequence. In particular, in the context of batched generation, this can make a big difference between the two models. - The objective metric FAD is worse than that of existing methods, as are two of the subjective metrics (musicality and text correlation). Subjective evaluations are done on the samples released by the other methods, which might be better than average, though. Note that the authors provide their supplementary material through a website, which might not be an acceptable way of doing so, as it doesn't enforce the deadline for the supplementary material submission. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: What are "velocities"? This term is not properly defined when introduced. Why are the authors using different noise levels for different segments? I see in the supplementary material that this is used for inpainting.
Other than that, would regular sampling always use the same noise level for all segments? Is there a benefit to this training method for regular sampling? In Section 5.1, the authors mention they filter the data to focus on non-vocal music. Is the 257k-hours number given before or after this filtering? The sample rate of the proposed VAE seems relatively large compared to the one used in MusicLM; is this something the authors experimented with? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors fail to discuss the potential adverse effects of automating creative jobs. The authors fail to discuss the potential breaches of intellectual property for the data used. In particular, the authors should discuss how the dataset was assembled. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's careful reading of our paper. We hope the following fully addresses all concerns mentioned by the reviewer:

### Response to concern #1
**``large number of models to train''**: The reviewer is correct that MeLoDy comprises **5 components** that need training, i.e., a semantic LM, a diffusion model (DPD), an SSL module (Wav2Vec2-Conformer), a prompt encoder (MuLan), and an autoencoder (VAE-GAN). Yet, in comparison to the SOTA MusicLM, the design of MeLoDy is already simpler, since MusicLM consists of training **6 components**: 3 LMs (semantic, coarse & fine), an SSL module (w2v-BERT), a prompt encoder (MuLan), and an autoencoder (SoundStream). In this sense, we could reckon this work as one step towards a less complex system.

---

### Response to concern #2
**Motivations behind the design choices**: Noticeably, reviewers **cbjn** and **KL1t** share a similar argument. We take these comments seriously, and have restructured Section 4 with several amendments:
- (1) We re-organized Section 4 to make it more condensed and self-contained. We added textual explanations for the design choice of each module (e.g., angle parameterization has been compared against noise parameterization and performed the best in [1]). Many mathematical details regarding the network architecture have been moved to the Appendix.
- (2) Some leaps of definition were avoided in the revised paper. For example, the diffusion velocity was ill-defined in Section 4, as pointed out by the reviewer.
- (3) Having condensed Section 4, the ablation studies have been moved from the Appendix to the main paper, as suggested by reviewer **KL1t**.

Besides, the reviewer mentioned the **objective motivation** for the design choices. In fact, for each of the individual modules, we investigated suitable objective measures for rigorous comparisons, e.g., SI-SNR for diffusion architectures. However, for other modules such as SSL, it is very hard to tell which one is better with an objective measure (if there is one, please let us know). We therefore could only listen to some random samples (10-100 per system) produced with different SSL modules (i.e., Wav2Vec2, Wav2Vec2-Conformer, and w2v-BERT) and subjectively vote for the better module. In common practice, this has been a simple yet effective way to select an audio generative model, since no objective metric can compete with human ears in audio quality assessment.

---

### Response to concern #3
**MuLan Cycle Consistency**: The reviewer is right that there is an inductive bias in the MCC measurement. Yet, we note that the objective of using MCC here, as in MusicLM, was to examine whether the conditioning MuLan embeddings form a cycle in the conditional generation (MuLan -> MeLoDy -> MuLan). The MCC results in Table 2 were meant to confirm that our proposed DPD is capable of consistently completing the MuLan cycle at significantly lower cost.

---

### Response to concern #4
**Fairness of comparing the number of function calls**: The reviewer's argument is thought-provoking. Yet, we note that each function call in an LM also needs to take the previous tokens into account. In the case of a decoder-only LM, the computational cost of each call is $\mathcal{O}(L^2)$ with $L$ being the length of the token sequence. This is actually comparable to the cost of each call of a diffusion model ($\mathcal{O}(L^2)$ if we also use attention in the diffusion model).
Big-O notation aside, in practice the cost of each call of DPD is in fact much lower than that of a Transformer-based LM, since the segmentation leads to $L=L'/K\approx \sqrt{L'}$, where $L'$ is the original sequence length. The cost of attention in DPD thus becomes almost linear in the sequence length.

---

### Response to concern #5
**``objective metric FAD is worse than existing methods''**: Notably, it is unfair to directly compare the FAD of MeLoDy against MusicLM and Noise2Music, since MeLoDy was mainly trained on processed non-vocal data, while most samples in MusicCaps contain speech or vocals.

---

### Response to concern #6
**Worse subjective metrics (musicality and text correlation)**: First, we would like to note that the baseline samples we used for comparisons are (at least partially) cherry-picked, as stated in the main paper of Noise2Music. In contrast, we used the non-cherry-picked ones for all the text prompts taken from MusicLM and Noise2Music. In fact, the number of samples released on our demo page exceeds 300, which precludes cherry-picking a sample for each prompt. Presumably, the sample variances of diffusion and LM could cover marginal differences in musicality and text correlation. In other words, from a practical standpoint, if we need to improve the musicality and text correlation for a given text prompt, we can re-sample with the same prompt until we get a desired sample. In contrast, the generation speed and audio quality (denoting the upper bound of the generation quality) are rather steady -- they would not vary drastically even if we drew more samples.

---

### Response to other questions
- The definition of velocity ($\mathbf{v}_t:=\frac{\partial \mathbf{z}_t}{\partial t}$) has been added to the main text.
- Using different noise levels for different segments makes infinitely continuable generation possible (see the detailed response to reviewer **hBX7**).
- The 257k hours of data is the amount after the filtering.
- Yes, we have tried a number of different configurations for VAE-GAN, and we can conclude that the output rate of the encoder (a.k.a. the frequency of the latent sequence) is especially important, as the audio reconstruction from a higher-frequency latent sequence (e.g., 250Hz in MeLoDy) is much better than from a lower-frequency one (e.g., 50Hz in MusicLM). We have added this justification to our revised paper.

---

Rebuttal Comment 1.1: Title: about the complexity Comment: Regarding concern #4: this is only true if the attention is dominating the runtime, which should be experimentally validated. I don't see any novel elements regarding the ablation studies, although the authors state they have reorganised the paper. Having no new elements, I will not increase my score. Regarding the multiple noise levels, another possibility is to use the same noise level everywhere but to do "teacher forcing" on the already generated segments, i.e., discarding the output of the model for those segments and using the previously obtained output, interpolated with the proper amount of noise. It would be interesting to compare the two, as the "teacher forcing" approach has a simpler training procedure. Regarding the VAE-GAN: if the authors have any results on that, it would make the paper stronger to include them.

---

Reply to Comment 1.1.1: Title: Response to the reviewer Comment: Thanks for the reviewer's insightful comments.
We agree with the reviewer's suggestion to provide additional empirical evidence to strengthen our argument on concern #4. We would therefore like to supplement the actual time costs measured for the different LMs in comparison to DPD on a V100 GPU.

| Component | Time for generating 10s of music (s) |
| -------- | :-: |
| Semantic LM | 3.5 |
| Coarse Acoustic LM | 26.4 |
| Fine Acoustic LM | 67.7 |
| **DPD** (5 steps) | **1.3** |
| **DPD** (10 steps) | **2.3** |
| **DPD** (20 steps) | **4.2** |

*P.S.* The time costs of the LMs were measured with a popular unofficial implementation of MusicLM [1] (using 12 RVQ quantizers -- 4 for the coarse and 8 for the fine acoustic LM, respectively). In other words, adopting DPD in place of the coarse and fine LMs speeds up generation by roughly 22x to 63x. We will also add these results to our final paper. To address the reviewer's other concerns, we will also try to experiment with the "teacher forcing" strategy for training and to add more analysis of VAE-GAN. Yet, because of the tight schedule, please pardon us if these results cannot be released during this discussion phase. [1] https://github.com/lucidrains/musiclm-pytorch.
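To make the complexity argument in the response to concern #4 concrete, here is a back-of-the-envelope account (our illustration, under the simplifying assumption that each segment is merged to a single token before coarse-path attention). With sequence length $L'$, segment length $K = \sqrt{L'}$, and hence $S = L'/K = \sqrt{L'}$ segments,

$$\underbrace{S \cdot \mathcal{O}(K)}_{\text{recurrent fine path}} + \underbrace{\mathcal{O}(S^2)}_{\text{coarse-path attention}} = \mathcal{O}(L') + \mathcal{O}(L'),$$

i.e., near-linear in $L'$, versus $\mathcal{O}(L'^2)$ for full self-attention over the unsegmented sequence.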
Summary: The authors introduce a framework (MeLoDy) for music generation. This new framework significantly improves the inference speed of MusicLM while producing high-quality results in terms of musicality, audio quality, and text correlation. The main reason for this speedup is that the authors replaced the coarse and fine acoustic stages of MusicLM and AudioLM, switching from a language model to a dual-path diffusion model (DPD) in the latent space, while keeping an LM to perform the high-level semantic modeling, similar to MusicLM. The authors show that the resulting model can perform similarly to MusicLM while being faster at sampling. Strengths: 1. The authors address efficiency, which is a very important issue and can be easily overlooked in new research. 2. I find the idea of combining LMs and diffusion models very intuitive; it has a lot of potential. 3. The authors put a lot of effort into evaluating the generated samples by using human feedback. 4. The generated samples sound good (since they are not cherry-picked). Weaknesses: 1. Complex pipeline system: The authors propose a complex pipeline with many components, moving parts, and many hyperparameters. This makes the system hard to reproduce, and the contribution of individual components is not clear. The authors provide a limited ablation study in the appendix. I think in a system like this, an extensive ablation study is needed. Examples: (a) What role does the encoder play? (b) Would different encoders affect the quality? (c) The authors used different MuLan audio and text towers compared to MusicLM; what effect does that have? (d) The authors mention they used attention on the coarse path and an SRU on the fine path; what role do these play? (e) What is the effect of merging and repeating? I think a lot of these decisions sound arbitrary, making the work read like a technical report. 2. Reproducibility, private dataset: the dataset is very vaguely described, making the proposed system even harder to reproduce and evaluate. The dataset plays a large role in the quality of the generation. 3. Clarity: Section 4 in particular is very hard to read; the description of the system is not very clear. Additionally, some details are omitted, which may be in part because of the complexity of the pipeline. I recommend a restructuring of this section. The appendix significantly helps in understanding the details. 4. Evaluation: I think a comparison with more specialized models could greatly improve the quality of the evaluation, even if the specialized models (e.g., genre-specific ones) perform better under specific conditions. I believe it's important to illustrate the gap between these models and those that rely on free-form text generation. I think the current evaluation is limited. 5. I believe the related work section is limited. If possible, I think this section should be extended to include related work and alternatives that can be used in the proposed pipeline. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: A- Would you please address the weaknesses above? B- In line 158, you say: "In our experiments, we find that the stability of generation can be significantly improved if we use token-based discrete conditions to control the semantics of the music and let the diffusion model learn the embedding vector for each token itself." For me, it's not clear what you mean here: are the tokens coming from the SSL Wav2Vec2-Conformer?
How are you "letting the diffusion model learn the embedding vector for each token" if the embedding vectors are coming from the k-means clusters of the Wav2Vec2-Conformer? C-Line 183, The notation is not very clear for me; I assume the same MLP is performed in parallel for each token. and this is a different MLP than the one in equation 10? Is this correct? If so, I suggest stating the parameterization to make it more clear. D-Line 192, In order to reduce complexity, you set k = sqrt(L), which allows you to process long sequences with complexity that grows linearly with the sequence length L (after the self-attention over the coarse path); this is the reason you state in the abstract that "infinitely continuable generation." Is that correct? Does not this come with the limitation that as this number K grows, the coarse path becomes even more coarse (more abstract)? Is this explained somewhere? E-Line 284, why did you choose to use an outdated VGGish model? Wouldn't the results be more accurate if one used more capable models, like a transformer (in AudioGen) or models that are trained to perform musical tasks, e.g., detect musical instruments? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The authors discussed the limitations of their work. One additional thing that comes to mind is copyright in music, which is a general issue for generative models. Is there a way to determine if the model reproduces copyrighted material from its training data? Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the thorough review of our study. We provide detailed responses to all of the reviewer's concerns, as summarized below.

### Response to concern #1
The reviewer is concerned about the **complex system design** of the proposed MeLoDy, with **``many components, moving parts, and many hyperparameters''**. We conceive that the term "many" used here could be arguably subjective without a suitable target for comparison. Therefore, we would like to emphasize the following points:
- (1) Since MeLoDy is proposed for the efficient generation of high-quality music, the most reasonable point of comparison is the SOTA method/system proposed for music generation up to the date of this submission, i.e., MusicLM.
- (2) Having confirmed a suitable target, we now compare MusicLM against MeLoDy in terms of the number of components:
  - **MusicLM**: 3 LMs (semantic, coarse & fine), an SSL module (w2v-BERT), a prompt encoder (MuLan), and an autoencoder (SoundStream). In total, there are **6 components** for training MusicLM.
  - **MeLoDy**: 1 LM, 1 diffusion model (DPD), an SSL module (Wav2Vec2-Conformer), a prompt encoder (MuLan), and an autoencoder (VAE-GAN). In total, MeLoDy consists of **5 components**, which is fewer than MusicLM.
- (3) In fact, we agree with the reviewer on the direction of building less complex music generation systems for ease of reproduction and analysis of individual components. However, it remains hypothetical whether a simple system can generate comparable or even better music audio without particular designs from different perspectives of understanding, e.g., semantic and acoustic perspectives.

---

### Response to concern #2
The reviewer is also concerned about the **contribution of individual components** and the **limitation of the ablation studies**.
- Firstly, the reviewer is right that we have experimented with many possible alternative modules for the different sub-tasks, e.g., w2v-BERT vs. Wav2Vec2-Conformer. In general, this can be cast as a search problem for the best-performing model, which leads to a combinatorial cost. The ideal starting position of the search for us was to replicate the settings in MusicLM.
- Similar to many other research works, we used a leave-one-out comparison to greedily select the best module for each of the sub-tasks at an acceptable cost. It is noteworthy that the cost of training each possible recipe of our generation model is huge, as we used a large-scale training dataset (257k hours).
- For each of the individual modules, we investigated suitable objective measures for rigorous comparisons, e.g., SI-SNR for diffusion architectures. However, for other modules such as SSL, it is very hard to tell which one is better with an objective measure (if there is one, please let us know). We therefore could only listen to some random samples (10-100 per system) produced with different SSL modules (i.e., Wav2Vec2, Wav2Vec2-Conformer, and w2v-BERT) and subjectively vote for the better module. In common practice, this has been a simple yet effective way to select an audio generative model, since no objective metric can compete with human ears in audio quality assessment.
- Response to **``(a) What role does the encoder play? (b) Would different encoders affect the quality?''**: If we understand correctly, the reviewer refers to the audio encoder in the VAE-GAN. In this sense, the role of the encoder is to construct a latent feature space that is robust to generation errors, as discussed in [1].
We have tried a number of different configurations for the VAE-GAN, and we can conclude that different encoders would certainly affect the quality. The output rate of the encoder (a.k.a. the frequency of the latent sequence) is especially important, as the audio reconstruction from a higher-frequency latent sequence (e.g., 250Hz in MeLoDy) appears to be much better than from a lower-frequency one (e.g., 50Hz in MusicLM). We will add this justification to our revised paper. - Response to **``(c) The authors used different Mulan audio and text towers compared to MusicLM; what effect does that have?''**: Since the code and training data of MuLan are not publicly available, the best we could do was to train the model from scratch following the network choices stated in their papers (AST for the audio tower; BERT for the text tower). We admit that the MuLan we trained on our data cannot be aligned with the one used in MusicLM. Yet, we actually used comparable network architectures for audio and text towers. - Response to **``(d) The authors mention they used attention on the course path and an SRU on the fine path; what role do these play?''**: If we understand the question correctly, the reviewer is asking about the design choice of applying attention and an SRU for coarse-path and fine-path processing, respectively. In the main paper, we linked the de-noising task in diffusion to the separation problem. We referred to the conclusion of a separation work [2], where the authors found two consistent patterns: (1) in local modeling, the recurrent model performed better than the attention model; (2) in global modeling, the attention model performed better than the recurrent model. - Response to **``(e) What is the effect of merging and repeating?''**: The segment merging and repeating operations originate from [3], where the authors stated two benefits: (1) dynamic segment scale (termed multi-granularity in [3]), and (2) faster coarse-path processing. We will add more discussion on these effects in the revised paper. [1] R. Rombach, et al. High resolution image synthesis with latent diffusion models. In CVPR, pages 10684–10695, 2022. [2] M. WY Lam, et al. Effective low-cost time-domain audio separation using globally attentive locally recurrent networks. In 2021 IEEE SLT, pages 801–808. IEEE, 2021. [3] M. WY Lam, et al. Sandglasset: A light multi-granularity self-attentive network for time-domain speech separation. In ICASSP 2021, pages 5759–5763. IEEE, 2021. --- Rebuttal Comment 1.1: Comment: I'd like to thank the authors for the reply. First, of course, I'd like to agree with the authors that the complexity of the method is subjective. I'd like to stress what I mean for clarity: I think presenting a complex system without an ablation study that shows the contributions of the components, the design choices, and the hyper-parameters is only acceptable in the context of a technical report. I think for a conference paper, the reader should be able to (1) reproduce your work and (2) understand the effect of each component you propose on the results. > which leads to a combinatorial cost. I agree that trying different combinations is, of course, not feasible, but as you explain, leave-one-out comparisons are standard in the literature. However, I didn't seem to find the leave-one-out comparison in the paper or the appendices. For example, you explain that you started with the MusicLM framework and switched from w2v-BERT to Wav2Vec2-Conformer, but there are no results showing the effect of this change.
Another example is replacing the components of MuLan; there, you can also show the role this plays in your results. Other examples include augmenting the text for training MuLan, etc. --- Reply to Comment 1.1.1: Title: Response to the reviewer Comment: Thanks for the reviewer's insightful comments. We would like to reply point by point here: - (1) **Reproducibility**: We agree that the reproduction of MeLoDy is essential for this paper; therefore we made much effort to formally describe every detail of our proposed method (mainly the design of dual-path diffusion). We believe that our detailed presentation, together with the settings of all hyper-parameters, will facilitate the reproduction of this model and our results. - (2) **Understanding the effect of each component**: We regret that the motivations behind some design choices were not explained well, as we tried to present all the details for better reproducibility. As discussed with other reviewers, we have accordingly modified the structure of our paper, such that more explanations in textual form have been added to the main paper and many details have been moved to the Appendix. Our rebuttals to the other reviewers also include descriptions explaining the design choices of our model. For example, reviewer **ieEr** replied that our further explanations have adequately addressed all of their concerns. - (3) **Switching from w2v-BERT to Wav2Vec2-Conformer**: As mentioned in the rebuttal, considering the scale of data we train on and the difficulty of measuring the effects of the SSL module, "we can only listen to some random samples (10-100 per system) led by different SSL modules (i.e., Wav2Vec2, Wav2Vec2-Conformer, and w2v-BERT), and subjectively vote for a better module." We will also add this justification to our revised paper. We believe objectively evaluating the SSL module in music/audio generation is still an unsolved open research problem and is beyond the scope of this paper. If the reviewer has a better suggestion for this evaluation, please let us know. - (4) **Replacing the components of MuLan**: As mentioned in the rebuttal, we would like to emphasize that "we actually used comparable network architectures for audio and text towers". Please correct us if we misunderstood the meaning of "replacing the components of MuLan" stated by the reviewer. - (5) **``Other examples include augmenting the text for training MuLan''**: We agree with the reviewer that the effects of text augmentation on MuLan were not well discussed in the main paper. We apologize and will clarify the training pipeline of our MuLan in the revised paper. In fact, after the text augmentation, some objective metrics used in music retrieval, e.g., mAP, declined, yet music generation on long-form text prompts was noticeably improved (improvements were subjectively determined by music professionals). As a result, we ultimately took the subjective test as the gold standard in evaluation and set aside the somewhat contradictory objective measures. - (6) **``I didn't seem to find the leave-one-out comparison in the paper or the appendices.''**: We would like to restate our leave-one-out comparison results here. In the main paper, we highlighted two novelties, each of which has been compared to conventional methods in our experiments: 1.
The novel **dual-path architecture** for the latent diffusion model (resulting in dual-path diffusion (DPD)): the dual-path architecture has been compared to UNet-1d [1] and UNet-2d [2]:

| Architecture | Velocity MSE (↓) | SI-SNR (↑) |
| -------- | ------- | ------- |
| UNet-1d [1] | 0.13 | 5.33 |
| UNet-2d [2] | 0.15 | 4.96 |
| **DPD** (Ours) | **0.12** | **6.15** |

2. The novel **angle schedule** for angular-parameterized diffusion:

| Angle Schedule | Steps | FAD (↓) | MCC (↑) |
| -------- | ------- | ------- | ------- |
| Uniform [1] | 10 | 8.52 | 0.45 |
| Uniform [1] | 20 | 6.31 | 0.49 |
| Ours proposed in Eq. (4) | 10 | **5.93** | **0.52** |
| Ours proposed in Eq. (4) | 20 | **5.41** | **0.53** |

Notwithstanding the "technical report" argument, we sincerely ask the reviewer to reconsider the significance of this work as a novel music generation model that, for the first time, allows generating music audio of high quality and competitive musicality faster than real time on a consumer-level GPU. We believe our revised paper has been greatly improved with the amendments proposed in the rebuttal and has addressed all concerns about the justification of the design choices. [1] Flavio Schneider et al. Moûsai: Text-to-music generation with long-context latent diffusion. 2023. [2] S Forsgren and H Martiros. Riffusion - stable diffusion for real-time music generation. 2023.
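As background for the angle-schedule comparison above, the following sketch shows how a velocity-predicting diffusion model is typically sampled under an angle schedule, assuming the standard trigonometric parameterization $\mathbf{z}_\theta=\cos(\theta)\mathbf{x}+\sin(\theta)\boldsymbol\epsilon$ with velocity $\mathbf{v}=\cos(\theta)\boldsymbol\epsilon-\sin(\theta)\mathbf{x}$; the uniform schedule shown is the baseline from the table, and the paper's Eq. (4) schedule is not reproduced here:

```python
import math
import torch

def uniform_angles(n_steps: int) -> torch.Tensor:
    # Uniform baseline schedule from pi/2 (pure noise) down to 0 (clean signal);
    # the paper's Eq. (4) schedule would replace this function.
    return torch.linspace(math.pi / 2, 0.0, n_steps + 1)

@torch.no_grad()
def angular_sample(v_model, z, cond, angles):
    # z starts as Gaussian noise, i.e., the latent at angle pi/2.
    for i in range(len(angles) - 1):
        theta, theta_next = angles[i], angles[i + 1]
        v = v_model(z, theta, cond)                            # predicted velocity
        x_hat = torch.cos(theta) * z - torch.sin(theta) * v    # implied clean latent
        eps_hat = torch.sin(theta) * z + torch.cos(theta) * v  # implied noise
        z = torch.cos(theta_next) * x_hat + torch.sin(theta_next) * eps_hat
    return z
```

The "Steps" column in the table corresponds to `n_steps`, i.e., the number of angle decrements taken by the sampling loop.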
Summary: This paper proposes MeLoDy, a diffusion-based text-to-music generation system. The system comprises four parts: 1) the MuLan contrastive model, 2) semantic language modeling using Wav2Vec2-Conformer, 3) a dual-path diffusion model to generate the latent, and 4) a 1D VAE-GAN autoencoder for learning the latent. The greatest contribution of this paper is the dual-path diffusion model, while the other components follow a similar paradigm to that of previous models, albeit with minor upgrades. In the dual-path diffusion model, instead of applying diffusion on the full audio length, the paper suggests segmenting the target of length L into approximately sqrt(L) chunks. A RoFormer model is then used for inter-segment processing, while an RNN is used for intra-segment processing. The dual-path diffusion model trains on a velocity objective with a novel linear angle schedule. A subjective listening test with music producers as listeners demonstrates that the proposed model outperforms previous methods in quality but falls short in musicality and text correlation. Strengths: The paper proposes a novel dual-path diffusion model, which could be an effective alternative to current designs of diffusion models. A new schedule for the diffusion process is also proposed. The model generates satisfactory results, as shown on the demo website, and the writing in the paper is clear. Weaknesses: From my perspective, there are two minor weaknesses: 1. The authors should further justify the choice of the dual-path diffusion model in terms of its effectiveness. Compared to an alternative diffusion module for the proposed system, the prediction length in the dual-path diffusion is smaller. However, according to the paper, in the dual-path diffusion, at each diffusion (reverse) step, the input needs to run through two modules: RoFormer and SRU. While I can imagine that the RoFormer can be run in parallel for both inter-segment and intra-segment processing, I believe the SRU, as a type of RNN, still requires sequential computation. Therefore, an alternative diffusion model that works on 1D (waveform) or 2D (spectrogram) latents could leverage the benefits of parallel computing and run faster than the dual-path diffusion model on a GPU under the same sampling steps. I would appreciate the authors' insights on this point. It would be beneficial if this could be included in the paper to provide a more solid justification for the design choice. 2. The experiments show that the proposed model does not outperform in terms of musicality and text correlation, which calls the design of the proposed model into question. However, I acknowledge that the data used in each model differ, making a controlled and reproducible comparison nearly impossible. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Typos: Line 277: The sentence appears to be incomplete: "For sampling, the predicted velocity is linearly combined as ." Issues with introducing the system: In Section 4.1.1, the concept of "velocity" should be introduced beforehand. Its sudden appearance made my initial reading confusing. Also, I would like to know about the justification of the dual-path diffusion design choice. Please see the first point in the weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations of the proposed system are adequately discussed in the final section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the reviewer's overall positive response. The concerns and questions raised by the reviewer are addressed below. ### Response to concern #1 **Justification of the choice of the dual-path diffusion model**: Thanks for the insightful comments. We would like to first clarify one fact about the dual-path network used in DPD. Suppose the input to the diffusion model is $\mathbf{z}\_\text{noisy}\in\mathbb{R}^L$. After segmentation, there are two shortened sequences to be processed by the DPD blocks: - **Intra-segment sequences** (fine path): Each intra-segment sequence has a length of $K\approx \sqrt{L}$. The processing is parallelized across $S=\lceil{\frac{2L}{K}}\rceil+1$ segments. In practice, supposing the input to a DPD block has a shape of $[B, D, S, K]$, we can easily implement this parallelization in PyTorch by `.permute(0, 2, 3, 1).reshape(B * S, K, D)`, where $B$ is the batch size and $D$ is the hidden dimension. When using an SRU, the computation cost is $\mathcal{O}((BS)(D^2K))=\mathcal{O}((B\sqrt{L})(D^2\sqrt{L}))$. - **Inter-segment sequences** (coarse path): On the other hand, each inter-segment sequence has a length of $S$. The processing is parallelized across $K$ frames (or merged frames). Let us first ignore the case of segment merging and repeating. The parallelization can be similarly implemented in PyTorch by `.permute(0, 3, 2, 1).reshape(B * K, S, D)`. When using an attention-based sequence processing method, the computation cost is $\mathcal{O}((BK)(D^2S+DS^2))=\mathcal{O}((B\sqrt{L})(D^2\sqrt{L}+DL))$. As mentioned by the reviewer, since batched computation is specifically sped up on GPU (with batch sizes of $BS$ and $BK$, respectively, for the above two sequences), we separate the considerations of the batch axis and the other axes in the Big-O notation. On the other hand, considering the ``alternative diffusion model that works on 1D (waveform) or 2D (spectrogram) latents'' mentioned by the reviewer, we note that, without segmentation, an attention-based network operating on the full input sequence of length $L$ gives rise to a cost of $\mathcal{O}((B)(D^2L+DL^2))$. In this sense, we can show in Big-O notation that DPD should be faster than any attention-based network operating on the full sequence of length $L$: - When $D$ is much larger than $L$, the dominant term in the attention network of interest would be $\mathcal{O}((B)(D^2L))$. The dominant term in DPD would be $\mathcal{O}((B\sqrt{L})(D^2\sqrt{L}))$, caused by the SRU. Since the batch axis in DPD can be processed much faster on GPU, we conclude that DPD is faster than the attention network of interest. - When $L$ is much larger than $D$, the dominant term in the attention network of interest would be $\mathcal{O}((B)(DL^2))$. The dominant term in DPD would be $\mathcal{O}((B\sqrt{L})(DL))$, caused by the RoFormer. It is obvious that, even without considering the GPU acceleration on the batch axis, the cost of DPD, $\mathcal{O}(BDL^{3/2})$, is still lower than that of the attention network of interest, $\mathcal{O}(BDL^2)$. We also reached a consistent conclusion in practice. During our ablation over different architectures, we also validated that the DPD model is faster than alternative diffusion models (i.e., 1D-UNet and 2D-UNet with cross-attention) for sampling one step, when both networks are constrained to the same model size.
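To make the parallelization described above concrete, here is a minimal PyTorch sketch of the two reshapes (the SRU/RoFormer modules themselves are elided; the sizes follow the rebuttal's setting of $L=2500$ and $K=80$, giving $S=\lceil 2L/K\rceil+1=64$, while $B$ and $D$ are placeholder values):

```python
import torch

B, D, S, K = 2, 64, 64, 80   # batch, hidden dim, num segments, segment length
x = torch.randn(B, D, S, K)  # segmented latent in the [B, D, S, K] layout

# Fine path: fold the segment axis into the batch axis, so a recurrent model
# (the SRU in the paper) processes B*S independent length-K sequences in parallel.
fine_in = x.permute(0, 2, 3, 1).reshape(B * S, K, D)    # [128, 80, 64]

# Coarse path: fold the frame axis into the batch axis, so an attention model
# (the RoFormer in the paper) processes B*K independent length-S sequences.
coarse_in = x.permute(0, 3, 2, 1).reshape(B * K, S, D)  # [160, 64, 64]

print(fine_in.shape, coarse_in.shape)
```

Both folded batch axes are embarrassingly parallel on a GPU, which is the basis of the Big-O argument above.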
Regarding the particular choice of the SRU and RoFormer in DPD, we refer to the linkage between the de-noising task in diffusion and the separation problem, as stated in the main paper. In the context of separation, we follow the findings of a separation work [2]: (1) in local modeling, the recurrent model performed better than the attention model; (2) in global modeling, the attention model performed better than the recurrent model. Following these conclusions, we experimented with different RNNs (i.e., GRU, LSTM, and SRU) and attention networks (i.e., Transformer and RoFormer), and eventually selected the overall best combination (SRU+RoFormer). --- ### Response to concern #2 **``the proposed model does not outperform in terms of musicality and text correlation''**: First, we would like to note that the baseline samples we used for comparisons are (at least partially) cherry-picked, as stated in the main paper of Noise2Music. In contrast, we used non-cherry-picked samples for all the text prompts taken from MusicLM and Noise2Music. In fact, the number of samples released on our demo page is more than 300, which precludes cherry-picking a sample for each prompt. Presumably, the sample variance of diffusion and LM models could account for marginal differences in musicality and text correlation. In other words, in a practical sense, if we need to improve the musicality and text correlation for a given text prompt, we can re-sample with the same prompt until we get a desired sample. In contrast, the generation speed and audio quality (denoting the upper bound of generation quality) are rather stable: they would not drastically vary even if we drew more samples. --- Rebuttal Comment 1.1: Comment: Thanks, authors, for explaining. According to this reply and the replies to other reviewers, I believe my concerns have been addressed.
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper introduces MeLoDy, a music generation method combining Language Models (LMs) and Diffusion Probabilistic Models (DPMs). It addresses the challenge of generating music from diverse free-form text descriptions while mitigating the high computational costs associated with current state-of-the-art models like MusicLM and Noise2Music. Key contributions include the development of MeLoDy, which significantly reduces computational requirements, the proposal of efficient Dual-Path Diffusion (DPD) models, an improved sampling scheme for DPD, and a successful implementation of an audio VAE-GAN for effective continuous latent representation learning. Strengths: The DPD model is particularly innovative as it is a variant of continuous-time diffusion probabilistic models that operates on low-dimensional latent representations. As for significance, the paper offers a substantial contribution to the field of music generation by showcasing how complex relationships across long-term contexts can be modeled and by overcoming limitations of prior works. Weaknesses: The proposed method builds upon existing research, employing the diffusion probabilistic model for music generation. Although the combination with the dual-path architecture is intriguing, it still largely borrows from previous architectures like AudioLM, MusicLM, and MuLan. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: What is the rule of thumb for selecting the number of chunks during training? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have discussed the limitations of this work in the discussion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the reviewer's overall positive response. The question raised by the reviewer is addressed below. **Question:** What is the rule of thumb for selecting the number of chunks during training? **Answer:** If we understand correctly, by "the number of chunks", the reviewer refers to the hyperparameter $M$, which defines the number of chunks used to divide the diffusion input with different noise scales. To understand the rule of thumb for determining $M$, we would first like to revisit the functionality of the multi-chunk input. In essence, introducing such an input chunking technique leads to ``infinitely continuable generation'' for DPD. Recall that the network $v_\theta(\mathbf{z}\_{\text{noisy}}; \mathbf{c})$ is conditioned on a noise scale vector ($\boldsymbol\delta\in\mathbb{R}^{L}$) that records the time-aligned noise scales of all elements in $\mathbf{z}\_{\text{noisy}}\in\mathbb{R}^{L}$. Because of this time-aligned condition, the model is capable of de-noising partial regions of the noisy input. For example, given a model trained on 10s of music audio with M = 4, at the first run we can generate a 10s sample. To extend it by 2.5 more seconds, we can drop the first chunk of the generated sample (7.5s left) and append a new chunk of 2.5s white noise to the end of the sample. The resulting 10s clip is passed to the diffusion model as the input. Then, taking advantage of the input chunking scheme we used for training, we can intentionally set the first $\frac{3L}{4}$ values in $\boldsymbol\delta$ to zero and the last $\frac{L}{4}$ values in $\boldsymbol\delta$ to one. In this way, the model is capable of ignoring the first 3 chunks and de-noising only the last appended chunk. At the same time, the de-noising process of the last chunk (2.5s) depends on the previously generated 7.5s for audio continuation. Considering the effect of $M$ on generation, we select the number of chunks according to 4 rules: - (1) **The length of the training input**: MeLoDy was trained with 10s music clips; therefore it is better to select an $M$ such that $L$ is divisible by $M$. In our setting, 10s of music audio leads to $L=2500$ (250Hz * 10s). Setting $M=4$ leads to a length of $\frac{L}{M}=625$ latents in each chunk. - (2) **The segment size in DPD**: In order to perform segmentation in DPD, we should ensure $\frac{L}{M} \geq K$. In our setting, $K=80$, so the constraint is satisfied. - (3) **The speed of continuation**: A larger $M$ leads to a smaller chunk $\frac{L}{M}$, and the continuation becomes slower. - (4) **The quality of continuation**: Different $M$ values correspond to different lengths of historical content that assist the de-noising of the last chunk; with a smaller $M$, the length of previously generated content used also becomes shorter. From rule (1), we have the possible values $M\in\{1, 2, 4, 5, 10, 20, 25, 50, 100, 125, 250, 500, 625, 1250, 2500\}$ (but $M=1$ is a dummy case that does not support continuation). From rule (2), we can further narrow the value set to $M\in\{1, 2, 4, 5, 10, 20, 25\}$. From rule (3), we consider it most practical to extend one-fifth to one-fourth of the generated audio at each run of the de-noising process. From rule (4), we experimented with both $M=4$ and $M=5$ and assessed their continuation quality.
We empirically found only subtle differences in quality between $M=4$ and $M=5$, and therefore finally opted for $M=4$ for faster continuation.
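To illustrate the continuation mechanism described above, here is a minimal sketch using the rebuttal's numbers ($L=2500$, $M=4$); the `denoise` function is a hypothetical stand-in for the DPD reverse process conditioned on the noise-scale vector $\boldsymbol\delta$:

```python
import torch

L, M = 2500, 4    # latent length and number of chunks (values from the rebuttal)
chunk = L // M    # 625 latents per chunk, i.e., 2.5 s at 250 Hz

def continue_generation(denoise, z_prev, cond):
    # Drop the oldest chunk and append a fresh white-noise chunk at the end.
    z = torch.cat([z_prev[chunk:], torch.randn(chunk)], dim=0)
    # Time-aligned noise scales: 0 marks already-clean latents, 1 marks pure noise,
    # so only the appended chunk is de-noised while the rest serves as context.
    delta = torch.zeros(L)
    delta[-chunk:] = 1.0
    return denoise(z, delta, cond)
```

Each call extends the audio by one chunk (2.5 s here) while conditioning on the previously generated 7.5 s.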
null
null
null
null
null
null
Privacy Assessment on Reconstructed Images: Are Existing Evaluation Metrics Faithful to Human Perception?
Accept (spotlight)
Summary: The paper points out that current metrics, such as PSNR, MSE, SSIM, FID, and LPIPS, do not always reflect human judgment when evaluating model privacy risk under reconstruction attacks. Therefore, it proposes a learning-based measure called SemSim which is more compatible with human opinions. SemSim was trained on a binary-annotated dataset using a standard triplet loss and demonstrated good generalizability across models and datasets. Strengths: The paper has good motivation and is well written. It is the first in the literature to compare human and machine metrics for measuring privacy leakage under reconstruction attacks. The proposed measure has good generalizability. Weaknesses: This paper is based on the assumption that human perception is superior to machine metrics for measuring a model's privacy preservation ability. The number of human annotators is small and their scores are binary instead of multi-valued, e.g., from 0 to 5. The reviewer believes that using non-binary labels with a more advanced triplet loss function when training SemSim would result in better performance of the proposed metric. It is easy to guess that a metric network trained using human scores will produce scores close to human scores. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. There is an inconsistency between the claim mentioned between lines 63-65 and the experimental setting mentioned from lines 230 to 231 and 256 to 257. I think the latter is correct, right? 2. Why does the paper use binary annotations for privacy leakage, which are less informative than multi-scale ones? Furthermore, using non-binary annotations enables the use of semi-hard triplets, improving the metric network. 3. Are there any reasons not to use the absolute values of Spearman's and Kendall's correlation? Mixing positive and negative values in the tables may confuse readers. 4. Is it possible to use the target network or something similar for measurement besides the standard metrics mentioned in the paper? For example, in face recognition, a face recognition system can be used as a metric to judge how similar two facial images are. Another example is ImageNet classification. We can use an image classifier as a feature extractor, then calculate the distance between the embeddings of the original and the reconstructed image. Or maybe the perceptual loss? The reviewer believes that such scores should be investigated. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. Below we address your concerns. Please let us know if further clarification is needed. **Q1: Inconsistency between lines 63-65 and the experimental setting mentioned from lines 230 to 231 and 256 to 257. I think the latter is correct, right?** Thank you very much for pointing out this inconsistency. Our experimental setup aligns with the description in Lines 230-231 and Lines 256-257. During rebuttal, we conducted experiments using the setup described in lines 63-65. Please refer to our reply to Q2/reviewer **j2WQ** for details. We will correct lines 63-65 for consistency in the revised paper. **Q2: Why use binary annotations for privacy leakage, which are less informative than multi-scale ones? (Using non-binary annotations enables the use of semi-hard triplets, improving the metric network.)** Great question. In this work, we use binary annotations mainly for their simplicity: a simple yet effective neural network can be trained with the standard triplet loss, providing a clear and practical methodology. As discussed in lines 194-196, privacy leakage can take continuous values or multiple scales, which has the potential to offer richer supervision for training SemSim. That said, multi-level scoring might be limited by its subjectivity, as it is non-trivial to design a robust and consistent scoring procedure. We would like to explore this idea in our future investigation. **Q3: Are there any reasons not to use absolute values of Spearman's and Kendall's correlation?** Great suggestion. We confirm that the absolute values of Spearman's and Kendall's correlation can be used without causing confusion. We will use these absolute values in the revised paper for easier result interpretation. **Q4: Is it possible to use the target network or something similar for measurement besides the standard metrics mentioned in the paper? For example, in face recognition, a face recognition system can be used as a metric to judge how similar two facial images are. Another example is ImageNet classification. We can use an image classifier as a feature extractor, then calculate the distance between the embeddings of the original and the reconstructed image. Or maybe the perceptual loss? The reviewer believes that such scores should be investigated.** *Using the target network.* Yes, it is possible. We discussed this option in detail in Lines 183-193. While this method seems promising and easy to implement, it suffers from a few limitations. First, it requires the original classifier to be trained on a dataset that aligns with the categories of the task at hand. Second, the accuracy of the original classifier significantly impacts the applicability of this approach, thereby constraining its scope and effectiveness. In comparison, our method can be used in a wider scope, is not dependent on classifier performance, and is demonstrated to have a good correlation with human perception. From another perspective, using the target network for privacy leakage evaluation is an alternative to human perception (Lines 183-193). Both give insights into privacy leakage, and we choose to focus on the latter in this paper. *Using the ImageNet-trained classifier for feature extraction.* Following this suggestion, we computed the ImageNet features for original and reconstructed images during rebuttal and used their feature distance to measure privacy leakage.
The resulting absolute Spearman’s ρ values are 0.7495 and 0.4637 on the CIFAR-100 and Caltech-101 datasets, respectively, which are significantly lower than SemSim's 0.8637 and 0.8182. We will incorporate these discussions in the paper and explore more possible methods in future work. **Q5: This paper is based on the assumption that human perception is superior to machine metrics in measuring a model's privacy preservation ability.** Great comment. In Lines 183-193, we discuss at length the use of a classifier (machine) for privacy leakage evaluation and recognize that it can be an alternative to human opinions (Lines 192-193). In fact, we do not intend to compare humans and machines in privacy leakage evaluation. We simply focus on the former, which is an important source of judgment on privacy leakage and forms a natural benchmark for assessing the alignment of metrics with human perception. **Q6: It is easy to guess that a metric network trained using human scores will produce scores close to human scores.** We agree. But please note that human scores are an important source of judging privacy leakage, so we feel it makes sense to train a metric that resembles human opinions. Importantly, through this training scheme, we find that the resulting SemSim metric has very strong generalization ability across datasets, which is critical for its future use. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their responses, which address my concerns. I am looking forward to the follow-up work after NeurIPS.
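As a concrete illustration of how such correlations are computed, here is a minimal sketch of scoring a feature-distance metric against human annotations (the arrays are hypothetical inputs; any feature extractor, e.g., an ImageNet-pretrained network as in the comparison above, could produce the features):

```python
import numpy as np
from scipy.stats import spearmanr

def metric_vs_human(feats_orig, feats_recon, human_scores):
    """feats_orig, feats_recon: (N, D) features of original / reconstructed images;
    human_scores: (N,) human privacy-leakage annotations."""
    dists = np.linalg.norm(feats_orig - feats_recon, axis=1)  # per-image L2 distance
    rho, _ = spearmanr(dists, human_scores)
    return abs(rho)  # absolute Spearman's rho, as reported in the tables
```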
Summary: This paper evaluates the limitations of existing hand-crafted image quality metrics in indicating privacy leakage in reconstructed images. The findings reveal weak correlation and contradictions among these metrics and with human perception, highlighting the risks of relying solely on them. To address this issue, the authors propose SemSim, a learning-based measure that assesses semantic similarity between original and reconstructed images. SemSim shows a higher correlation with human judgment compared to existing metrics when evaluating privacy leakage risk across various models, datasets, and attack methods. The contributions of this paper include valuable insights for privacy leakage assessment and the introduction of SemSim as a more reliable metric. Strengths: - Comprehensive study: The paper conducts a comprehensive evaluation of hand-crafted image quality metrics in privacy leakage assessment across multiple datasets and models, providing valuable insights into privacy assessment for reconstructed images. - Clear problem statement: The paper effectively shows the weak correlation between existing metrics and human perception of privacy leakage, highlighting the significance of the problem in privacy assessment. - Proposed learning-based measure: SemSim, which considers the semantic similarity between the original and reconstructed images, is useful for a more precise assessment of privacy leakage. In comparison to other metrics, SemSim shows a higher association with human judgment when assessing privacy leaks. - Good results: The authors demonstrate the potential generalizability of SemSim's correlation with human judgment on different datasets. - New datasets: This paper provides new datasets for further research in privacy assessment. - Insightful discussion: The paper provides insightful discussions on privacy leakage in reconstructed images. Weaknesses: - Lack of details about the existing FID metric: Providing more information about the specific models used for feature extraction in the FID metric would enhance reproducibility and clarity. - The domain generalization ability of the proposed metric should be discussed, e.g., when only one set of reconstructed images is available for training. This is important for applying the proposed measure in the real world. - L2 distance is used by SemSim. What happens to SemSim's performance if alternative distance measures are used? An ablation study should be provided. - Typos: The paper has a few typos, such as in line 241 with the value "0-.0989," which should be corrected. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please address the above weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed the limitations, e.g., the requirement of training data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. Below we address your concerns. Please let us know if further clarification is needed. **Q1: Details of FID.** In lines 118-120 of the submission, we referred to [8], where FID is utilized to assess the similarity between generated and real images. Its calculation uses Eq. 3 in the Supplementary Material. Specifically, we use FID to compute the distance between a set of reconstructed images and a set of original images. Concretely, we use InceptionV3 pre-trained on ImageNet to extract features from both image sets, resulting in two sets of feature vectors. Then, we compute the mean and variance for each set and use Eq. 3 in the Supplementary Material to calculate FID between the reconstructed and original images. We will provide a more detailed introduction to FID computation in the Supplementary Material. **Q2: The domain generalization ability of the proposed metric should be discussed, e.g., when only one set of reconstructed images is available for training.** We greatly appreciate this suggestion and will add more discussion in the revised paper. In all of our experiments, we apply SemSim to datasets whose label spaces are (mostly) disjoint from those of the datasets on which SemSim is trained. We show that SemSim exhibits higher faithfulness to human perception than existing metrics. Notably, in Section 5.1, SemSim surpasses FID and PSNR on CIFAR-100, even when it is trained with only 50 human annotations from datasets other than CIFAR-100. This illustrates the capacity of SemSim to generalize to unseen data with limited training data. During rebuttal, following the suggestion, we trained SemSim on one dataset (CIFAR-100) and applied it to datasets other than CIFAR-100. Results in the table below indicate that SemSim achieves higher absolute Spearman correlation values on all four test sets, compared with existing metrics. This further demonstrates the generalization capability of SemSim and will be added to the revised paper.

| Dataset | Abs(Spearman’s $\rho$) | SemSim |
|--------|--------|--------|
| Caltech-101 | 0.7349 (MSE) | 0.7517 |
| Imagenette | 0.7349 (LPIPS) | 0.7653 |
| CelebA | 0.7495 (PSNR) | 0.7682 |
| Stanford Dogs | 0.5031 (LPIPS) | 0.6194 |

Limitation. We recognize that the effectiveness of SemSim may degrade when it is applied to tasks very different from its training domains, such as medical images. In Lines 177-181, we discuss potential strategies to improve SemSim's robustness to broader domain variations by annotating diverse data types, which we will investigate in future work. These discussions will be incorporated into the revised text. **Q3: L2 distance is used by SemSim. What will happen to SemSim's performance if alternative distance measures are used?** **L2 distance is also used in the triplet loss for SemSim training**, so using L2 distance during inference is a natural choice. Moreover, during both training and inference, the L2 distance is computed on the **normalized feature vectors** of reconstructed and original images. On unit-normalized vectors, the squared L2 distance equals $2-2\cos$, a monotone function of cosine similarity, so switching to cosine similarity would leave the (absolute) rank-correlation observations unchanged. We will include this discussion in the revised paper. **Q4: Typos.** Thank you for pointing out the typos. We will correct them and double-check our paper to ensure accuracy and clarity. --- Rebuttal Comment 1.1: Comment: Thanks for the response. It resolved my concerns about the generalization performance.
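As a side note on Q3 above, here is a tiny numeric check of the identity behind that argument (for unit vectors, $\|a-b\|^2 = 2 - 2\,a\cdot b$):

```python
import torch

a = torch.nn.functional.normalize(torch.randn(8, 128), dim=1)
b = torch.nn.functional.normalize(torch.randn(8, 128), dim=1)

l2_sq = (a - b).norm(dim=1) ** 2
cos = (a * b).sum(dim=1)
# The relation is monotone (decreasing), so rank correlations such as
# Spearman's rho only flip sign; their absolute values are unchanged.
print(torch.allclose(l2_sq, 2 - 2 * cos, atol=1e-5))  # True
```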
Summary: The paper establishes a system for humans to annotate whether a reconstruction (from privacy attacks) is recognizable (and thus breaches privacy). Based on the human annotation results, a new learning-based metric is proposed, which shows significantly better alignment and faithfulness with human perception. Strengths: 1. High originality and novelty. To my knowledge, the work is the first to utilize human annotation to address the faithfulness of evaluation metrics in privacy attacks. 2. Meaningful for the machine learning privacy community. In terms of breaching privacy, it is highly important and meaningful to align the results with human evaluations. This area currently lacks such faithful and aligned metrics to evaluate whether a privacy attack is indeed successful. 3. The paper is well written: the motivation is well depicted and the approach is described with good clarity. The figures give clear and convincing illustrations. Weaknesses: No obvious weaknesses; good work. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. The authors claim that the metric generalizes well among several datasets. I am curious whether it can still generalize well enough when the distribution changes a lot, for example, on medical imaging datasets. (However, as you said, the privacy definition for such tasks could differ from a professional's view; for example, a doctor may no longer think patch-level similarity is important for privacy in medical images.) Readers would be happy to see more discussion of how the definition of "privacy" could change in other cases, and of how much extra effort would be needed to extend the current approach (for example, replacing ordinary human labelers with professionals like physicians). Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: 1. If we want to make the evaluation metric more robust and more generalizable, in the future, we may consider scaling up the annotations to more images and more datasets. Even if we don't scale up, for new tasks or new datasets, we may need to go through the annotation from scratch. In other words, the scalability of the system is somewhat limited. 2. The metric is designed to evaluate reconstruction similarity. However, if such an evaluation metric is open-sourced and accessible to everyone, attackers may use this metric to train an even stronger privacy attack model. The risk needs to be further discussed and carefully handled. Flag For Ethics Review: ['No ethics review needed.', 'Ethics review needed: Privacy and Security (e.g., consent, surveillance, data storage concern)'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the constructive comments. We address the concerns raised by the reviewer below. Please let us know if further clarification is needed. **Q1: Can the proposed method generalize well enough when the distribution changes a lot, e.g., on medical images?** Great question. The proposed method is shown to perform well on a diverse range of tasks such as generic object recognition, face recognition, and fine-grained classification. For medical images, it is not feasible for us to conduct experiments in a short time to verify the effectiveness of SemSim, because expert annotations are needed to judge whether a reconstructed image still exhibits certain medical conditions. Our best guess is that SemSim would be less effective on medical images due to their very different styles (*e.g.,* X-ray and ultrasound images). We discussed potential strategies to improve SemSim's robustness to broader domain variations by annotating diverse data types (please refer to Lines 177-181), which we will investigate in future work. These discussions will be incorporated into the revised paper. **Q2: Provide more discussion on 1) how the definition of "privacy" could change in other cases, and 2) how much extra effort is needed to extend the current approach, *e.g.,* considering professionals for annotation.** Thank you for raising these points. In Lines 197-200, we discussed that the definition of privacy may vary across different tasks. For example, for object counting, privacy information may reside in the number of objects instead of the individual objects themselves. We will enhance this section by adding insights from our reply to Q1. The effort required to extend the current method to other tasks also depends on the nature of those tasks. If we evaluate privacy leakage for the counting task, it would be useful to ask human annotators to count the number of objects in the reconstructed image: if it equals the number of objects in the original image, then privacy may be considered leaked. For this particular counting task, we speculate that manageable effort will be needed to extend our current approach. On the other hand, tasks that need specialized or professional annotations, such as medical image understanding, will likely require more effort. These discussions will be integrated into the paper. **Q3: Scaling up the annotations to more images and datasets.** Great suggestion. In Lines 179-181 of the submission, we recognize the importance of scaling up data annotation to more images and datasets and will do so in our future investigation. **Q4: Risk of open-source code.** Interesting comment. By open-sourcing the metric and training data, we intend to promote transparency and collaboration. We agree on the mentioned risk. But we would like to point out that SemSim will also allow for better development of defence mechanisms and privacy-preserving models. In light of this comment, we will consider releasing SemSim under licenses that only allow academic use.
Summary: This paper studies the faithfulness of hand-crafted metrics to human perception of privacy information in reconstructed images, and discovers that hand-crafted metrics have only a weak correlation with human evaluation of privacy leakage. A learning-based measure called SemSim is proposed to evaluate the semantic similarity between the original and reconstructed images. Strengths: + a comprehensive comparison of traditional metrics for privacy leakage of reconstructed images + proposes a learning-based metric with human ratings + improves over all existing image quality metrics Weaknesses: - The definition of privacy is based on image content classification, which may miss important privacy attributes in local image regions. - Details of the human rating collection procedure are needed to ensure no subjective bias in the metric training: for example, who gave the ratings, how many participants there were, and what questions were asked of the participants. - The metric learning method is very simple. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can we just apply the original classifier on the reconstructed images as a metric for privacy leakage? - According to Table 1, how can one explain that PSNR/MSE is sometimes better than LPIPS/FID, which are more widely used to measure image generation quality? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Important details are missing for the human study, which is central to the learned metric. Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns', 'Ethics review needed: Privacy and Security (e.g., consent, surveillance, data storage concern)'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for many insightful comments. We answer the questions in what follows. Please let us know if further clarification is needed. **Q1: Can we just apply the original classifier on the reconstructed images as a metric for privacy leakage?** Yes, and we have discussed this possible method in Lines 183-193. This method has some limitations. First, it requires the classifier to be trained on a dataset that has the same categories as the task at hand. Second, the classifier accuracy significantly impacts the effectiveness of this approach. These limitations constrain the scope and effectiveness of using the original classifier for privacy leakage measurement. We will add more discussion of this method, highlighting its potential benefits and inherent constraints, in the revised paper. **Q2: According to Table 1, how can one explain that PSNR/MSE is sometimes better than LPIPS/FID, which are more widely used to measure image generation quality?** Great question. Table 1 is about privacy leakage assessment, which is different from measuring image generation quality. In Lines 201-208 we discussed the connections and differences between these two evaluation problems. Due to this difference, although LPIPS and FID are widely used to **measure image generation quality**, they are not necessarily better than PSNR/MSE at **measuring privacy leakage**. Some visual examples of using LPIPS, PSNR and MSE for privacy leakage assessment are provided in Fig. 1 of the submission: in some cases LPIPS is consistent with human opinion, while in others PSNR/MSE is more aligned. **Q3: The definition of privacy is based on image content classification, which may miss important privacy attributes in local image regions.** We agree that image content classification may miss important privacy attributes in local regions. It would be an interesting direction for improvement, along with a few other suggested ways (please refer to our reply in the "general response to all reviewers" for more discussion). Nevertheless, as an early attempt to define privacy using image semantics, our current definition based on global image content has proven to be more faithful to human perception than existing metrics in evaluating privacy leakage. We will add these discussions to our paper. **Q4: The details of the human rating collection procedure are needed to ensure no subjective bias in the metric training.** Thanks for the kind suggestion. In Section 4.1, we detailed the number of participants and the annotation procedure. Specifically, each image or image pair is annotated by 5 independent annotators randomly selected from a pool of 23 annotators. These annotators are from various backgrounds, including college students, housewives, and part-time workers from different industries. To further minimize the influence of subjective bias, we use a relatively objective formulation: whether the reconstructed image can be correctly labeled. Specifically, for CIFAR-100, Caltech-101, and Imagenette, we provide up to 20 candidate categories and see if the annotators can correctly recognize the reconstructed image; for more difficult tasks like face recognition and fine-grained classification (Celeb-A and Stanford Dogs), we give both the original and the reconstructed images and ask the annotator if they are of the same identity or species. Detailed graphical user interfaces (GUIs) for annotation are shown in Section A of the Supplementary Material. We will add these details in the revised text.
**Q5: The metric learning method is very simple.** We agree that the proposed metric is simple, and we feel that simplicity is an advantage. In our comprehensive evaluation, this simple metric outperforms existing metrics *w.r.t.* faithfulness to human perception. This conclusion would benefit future explorations in semantic-aware privacy assessment, while the metric itself creates opportunities for further refinement and study.
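Since the training recipe above is described only briefly, here is a minimal sketch of triplet-based metric training of the kind described (the encoder architecture and the way triplets are formed from the binary annotations are hypothetical; only the use of the standard triplet loss on normalized embeddings is taken from the rebuttals):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical small encoder mapping 3x32x32 images to an embedding space.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256),
                        nn.ReLU(), nn.Linear(256, 128))
triplet = nn.TripletMarginLoss(margin=1.0, p=2)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)

def train_step(original, leaky_recon, safe_recon):
    # Anchor: the original image; positive: a reconstruction annotated as leaking
    # (recognizable); negative: a reconstruction annotated as not leaking.
    emb = lambda x: F.normalize(encoder(x), dim=1)  # unit-normalized embeddings
    loss = triplet(emb(original), emb(leaky_recon), emb(safe_recon))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# At inference, the privacy-leakage score is the L2 distance between the
# normalized embeddings of an original image and its reconstruction.
```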
Rebuttal 1: Rebuttal: We thank all reviewers for the thoughtful feedback. We are greatly encouraged that the reviewers appreciate this manuscript for its originality and novelty, its implications for the machine learning privacy community (reviewer **oRwe**), clear motivation (reviewers **oRwe** and **Lywc**), comprehensive comparisons (reviewers **AN5f** and **j2WQ**), and good writing (reviewers **oRwe** and **Lywc**). Below we discuss a common question summarized from the reviewers' comments. We then answer the questions of each reviewer separately. * **How to further improve the performance of the learning-based metric?** As an initial work utilizing human annotation to address the faithfulness of evaluation metrics in privacy attacks (Reviewer **oRwe**), we acknowledge that there is room for further improvement of the proposed SemSim. As suggested by the reviewers, potential solutions include using local region information (Reviewer **AN5f**), non-binary labels (Reviewer **Lywc**), and annotating more data to enlarge the application scope (Reviewer **oRwe**). In the current manuscript, we also discussed several potential ways to further improve SemSim, *e.g.,* in Lines 180-181 and 284-256. With that said, we believe that the potential for further improvement does not diminish the contribution of this work, which offers valuable insights into privacy assessment and can further stimulate efforts within the community to develop more robust evaluation metrics. We will answer the questions of the ethics reviewers during the discussion session, following the guidelines for rebuttal replies.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
ITEM3D: Illumination-Aware Directional Texture Editing for 3D Models
Reject
Summary: The research introduces ITEM3D, a model that enhances texture editing in 3D modeling, addressing challenges like complexity and text ambiguity. Leveraging a diffusion model, ITEM3D uses images to bridge text and 3D representations, optimizing the texture and environment map. It employs a relative editing direction, reducing the noise difference and semantic ambiguity between source and target texts, and adjusts the direction during optimization to limit texture domain deviation. Experiments show ITEM3D outperforms previous methods and can effectively control lighting. Strengths: + The research introduces an optimization pipeline for texture and environment map editing in 3D models, adhering to text prompts. + This paper presents a relative-direction diffusion-based approach for 3D texture optimization, mitigating issues of noisy details and inconsistent appearances resulting from semantic ambiguity between texts and images. Weaknesses: 1. This paper asserts its realism and efficiency in generating new textures through optimization, yet its claims are evaluated solely on rendered images. It remains uncertain whether this method is applicable to in-the-wild objects. I suggest that the authors conduct additional experiments using real 3D objects, such as those in the DTU dataset. Furthermore, I find a lack of supporting material regarding its efficiency claims. Given that the method is optimization-based, I question whether it can truly generate high-quality textures swiftly, as stated in the introduction. 2. The introduction lacks clarity and adequate context. The paper outlines two challenges when applying the diffusion model to 3D objects, but it doesn't discuss any related works aimed at addressing these challenges. This omission makes it difficult to gauge the novelty of the proposed method in relation to existing solutions. 3. The proposed relative direction loss appears strikingly similar to that of NeRF-Art [1], although the latter employs the CLIP model to implement this constraint. The authors should highlight the distinguishing factors between their method and NeRF-Art in the introduction and add a direct comparison. [1] Wang, C., Jiang, R., Chai, M., He, M., Chen, D. and Liao, J., 2023. Nerf-art: Text-driven neural radiance fields stylization. IEEE Transactions on Visualization and Computer Graphics. 4. In Figure 4, the examples provided are not representative enough. It would be beneficial to showcase results with brown, golden, and porcelain materials. The current examples make it difficult to evaluate the method's performance. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. The reconstruction results presented in Figure 5 seem inferior to those achieved by the original NeRF. The results appear quite coarse. Could the authors elaborate on the reasons for this? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: This paper has acknowledged its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude for your valuable feedback on our paper. **Q1: [Experiments on a real-world dataset and experiments about efficiency]** **A1**: We have incorporated a qualitative experiment on a real-world 3D dataset, as shown in **Fig.1** of the rebuttal PDF. The dataset comprises hundreds of products listed in E-commerce, and each object is reconstructed from 300 multi-view images captured by a professional camera. We present three examples from the dataset: a vegetable cat, a piggy doll, and a red sneaker. As demonstrated in **Fig.1** of the rebuttal PDF, our model, ITEM3D, successfully synthesizes realistic and natural textures based on the given text guidance. For instance, when provided with the prompt "a vegetable tiger toy", ITEM3D edits the original object into a cute toy tiger with detailed fur and vegetable decorations. When given the prompt "golden sneaker", ITEM3D can bake the golden material, instead of the original material, into the texture, yielding a realistic shoe. Similarly, the example of "a pink porcelain piggy toy" showcases ITEM3D's capacity for appearance and material editing. To support our efficiency claims, we have conducted experiments comparing the efficiency of our approach with the state-of-the-art method, Instruct-NeRF2NeRF. In the table below, we present a comparison of editing time and memory consumption. Our findings demonstrate that ITEM3D outperforms Instruct-NeRF2NeRF, requiring significantly less time (50 times less) while maintaining comparable memory consumption during texture editing.

| method | Instruct-NeRF2NeRF | ITEM3D |
| :-----------: | :----------------: | :----: |
| training time | 10h | 10min |
| GPU memory | 15GB | 9GB |

**Q2: [The introduction lacks clarity and adequate context]** **A2**: We appreciate your attention to the lack of clarity and adequate context in the introduction. We will ensure appropriate revisions are made in the final version. Previous generative models, such as DreamFusion, faced the challenge of injecting 2D priors into 3D objects. This resulted in occasional failure to generate accurate 3D shapes and corresponding textures due to insufficient 3D knowledge. Additionally, the direct utilization of the SDS loss, as proposed in DreamFusion, can mislead the object's starting point, leading to blurry images. Alternative works, such as Instruct-NeRF2NeRF and HeadSculpt, utilized InstructPix2Pix to provide a relative direction. In contrast, our ITEM3D proposes to obtain the relative direction rather than directly modifying the prompt. Our method yields comparable results, as shown in **Fig.2** of the rebuttal PDF. **Q3: [The proposed relative direction loss is similar to that of NeRF-Art]** **A3**: As mentioned in line 206 of our study, our method draws inspiration from the idea presented in StyleGAN-Nada. StyleGAN-Nada first proposed the concept of a directional loss for the CLIP model, and NeRF-Art is one of the approaches that follows this direction for CLIP-based editing tasks. However, applying the directional CLIP loss directly to diffusion models is not a straightforward task. In our research, we have explored the similarities and differences between CLIP-based editing and diffusion-based editing.
By understanding these distinctions, we were able to make improvements based on the specific characteristics of the SDS loss. This enabled us to successfully leverage the advantages of the directional loss and apply them to texture editing based on the diffusion prior. By adapting and refining the directional loss for diffusion models, we aimed to enhance the texture editing capabilities and achieve more realistic and accurate results. Besides, we compare our method with a CLIP-based model in **Fig.4** of the rebuttal PDF to demonstrate the advantages of diffusion-based models. **Q4: [Results with brown, golden, and porcelain materials]** **A4**: We present additional editing results on real-world objects in **Fig.1** of the rebuttal PDF. Our demonstrations showcase a remarkable proficiency in object editing, exemplified by the conversion of an original cat toy into a vegetable tiger toy, a piggy doll into a pink porcelain piggy toy, and a sneaker into a golden sneaker. The edited cat toy exhibits a fantastic furry material, which is truly impressive. Similarly, the golden sneaker showcases the distinctive texture of a metallic golden material. Furthermore, the piggy doll demonstrates our capability in appearance and material editing through the use of the "pink porcelain piggy toy" prompt, resulting in a toy made of porcelain material. **Q5: [The reason for coarse results in Fig.5]** **A5**: In **Fig.5** of the original paper, there are two main reasons that contribute to the coarse results observed. The first reason is the environment lighting condition. In some cases, strongly bright lighting can obscure or hide certain texture details, resulting in a coarser appearance of the edited object. The second reason is related to the use of nvdiffrec for efficient differential rendering. While nvdiffrec enables faster rendering compared to the lengthy optimization process of the original NeRF, the learning process of nvdiffrec may occasionally lead to inferior results. This may affect the overall quality and fine details of the geometry in certain cases. It is important to note that these limitations are inherent to the specific techniques employed in our study. Future advancements in differentiable rendering may help address these challenges and improve the quality of texture editing results. We appreciate your understanding and acknowledgment of these limitations. --- Rebuttal Comment 1.1: Comment: The authors have addressed most of my concerns, including the loss function, results with various materials such as brown, golden, and porcelain, and experiments conducted on a real-world dataset. However, I continue to have the following concerns: (1) The comparison of the relative direction loss with NeRF-Art should be introduced and discussed in the introductory section. (2) The efficiency comparison as it currently stands is not equitable, since different 3D representations are employed in Instruct-NeRF2NeRF and the proposed method. This discrepancy is likely the reason why the paper demonstrates much faster results than Instruct-NeRF2NeRF. The authors should compare both methods under the same 3D representation conditions and provide a thorough analysis to explain why this method is more efficient. (3) The quality of the coarse results presented in Fig.5 appears to be lacking. I do not believe that the lighting conditions are to blame. In contrast, other works such as NeRFactor seem to handle similar cases with more competence.
It seems that the reconstruction itself may be the issue. I would like to understand why this work is unable to effectively address this specific case. --- Reply to Comment 1.1.1: Comment: Thank you for your insightful comments and suggestions regarding our paper. We have carefully considered your feedback. Below is our response: **A1**: It would indeed be better to include a discussion of the relative direction loss in the introduction to elucidate our contribution towards enhancing the SDS loss. We will discuss the relative CLIP loss first proposed in StyleGAN-Nada and explore related works such as NeRF-Art. **A2**: Indeed, one of the primary factors contributing to the faster speed of our method compared to Instruct-NeRF2NeRF is the utilization of DMTet instead of NeRF. However, unlike Instruct-NeRF2NeRF, our approach circumvents the need for iterative dataset modifications to maintain multi-view consistency when editing objects. By directly editing the disentangled texture map, our method preserves the inherent multi-view consistency and avoids the alternating updates of the 3D representation and the data, resulting in time savings. On the other hand, it is true that ITEM3D exhibits lower efficiency compared to end-to-end architectures due to the optimization process involved. However, end-to-end architectures typically require a large volume of 3D data and are often limited to class-specific tasks, such as Rodin [1]. **A3**: We acknowledge that the reconstruction method we adopted, Nvdiffrec [5], does have certain limitations. The challenge of reconstructing surfaces with high specular reflectance has been extensively discussed in a recent work called ENVIDR [2]. When faced with this challenge, most existing methods take one of two major approaches. The first category of methods, such as the original NeRF and its extensions [3, 4], involves explicitly representing virtual lights or images underneath the surface to capture complex view-dependent appearance. While this approach can improve rendering quality, it often sacrifices the accuracy of the reconstructed surface and limits the ability to edit scenes. Nvdiffrec, adopted by our method, falls into the second category, which incorporates knowledge of inverse rendering to model the interaction between light and surface. These methods [5, 6, 7] decompose rendering parameters, but they often suffer from relatively lower rendering quality compared to top-performing NeRF models. This is because the simplified or approximated rendering equation used in these models cannot account for all complex rendering effects. While NeRFactor [7] also decomposes rendering parameters, it builds upon a pre-trained NeRF and employs additional joint optimization to further enhance the quality, so NeRFactor can handle such cases with more competence. However, the complex procedure of NeRFactor results in a runtime of nearly 8 hours. Even excluding the base NeRF reconstruction, the decomposition and joint optimization process alone requires approximately 50 minutes. To address these challenges and improve the quality of reconstruction results, we will explore advancements such as ENVIDR [2] in our future work. ENVIDR learns an approximation of physically based rendering (PBR) using three decomposed MLPs, which are trained on images with various materials and environments synthesized by existing PBR renderers. This may help to mitigate the limitations we have encountered. [1] Wang T, Zhang B, Zhang T, et al.
Rodin: A generative model for sculpting 3d digital avatars using diffusion[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 4563-4573. [2] Liang R, Chen H, Li C, et al. ENVIDR: Implicit Differentiable Renderer with Neural Environment Lighting[J]. arXiv preprint arXiv:2303.13022, 2023. [3] Liu L, Gu J, Zaw Lin K, et al. Neural sparse voxel fields[J]. Advances in Neural Information Processing Systems, 2020, 33: 15651-15663. [4] Yu A, Li R, Tancik M, et al. Plenoctrees for real-time rendering of neural radiance fields[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 5752-5761. [5] Munkberg J, Hasselgren J, Shen T, et al. Extracting triangular 3d models, materials, and lighting from images[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 8280-8290. [6] Zhang K, Luan F, Wang Q, et al. Physg: Inverse rendering with spherical gaussians for physics-based material editing and relighting[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 5453-5462. [7] Zhang X, Srinivasan P P, Deng B, et al. Nerfactor: Neural factorization of shape and reflectance under an unknown illumination[J]. ACM Transactions on Graphics (ToG), 2021, 40(6): 1-18. --- Rebuttal 2: Comment: Reviewer CxP2, Please read the rebuttal provided by the authors and raise a discussion if your concerns are not well addressed. Best, AC
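For readers following the relative direction discussion in this thread: the directional loss first proposed in StyleGAN-Nada, and adopted by NeRF-Art, aligns the change between two image embeddings with the change between two text embeddings in CLIP space. A minimal PyTorch sketch of that loss follows; the embedding arguments are placeholders for the outputs of any CLIP-style encoder, not code from either paper.

```python
import torch.nn.functional as F

def clip_directional_loss(img_feat_src, img_feat_tgt, txt_feat_src, txt_feat_tgt):
    # Edit direction in image space: how the rendering changed.
    delta_img = img_feat_tgt - img_feat_src
    # Edit direction in text space: how the prompt changed,
    # e.g. "a chair" -> "a green wooden chair".
    delta_txt = txt_feat_tgt - txt_feat_src
    # Encourage both directions to point the same way in CLIP space.
    return 1.0 - F.cosine_similarity(delta_img, delta_txt, dim=-1).mean()
```

The rebuttal's point is that this objective lives in CLIP embedding space, whereas ITEM3D forms its relative direction from diffusion noise predictions, so transferring the idea to diffusion models is not one-to-one.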
Summary: Given a set of multi-view calibrated images, the authors aim to first reconstruct the 3D objects based on the input images and then edit them via a pre-trained diffusion model. To accomplish this, they developed ITEM3D, which utilizes a differentiable marching tetrahedron (DMTet) as its 3D implicit representation. To achieve the editing task, the authors proposed two methods: (1) a relative-direction-based optimization that respects both descriptive and editive prompts, and (2) a gradient direction adjustment that integrates changes from previous iterations to improve network robustness. Qualitative and quantitative experiments demonstrated ITEM3D's superior capabilities compared to current state-of-the-art methods. Strengths: The strengths of this paper can be summarized as: 1. The paper employs both descriptive and editive directions to effectively address the editing task. Although it's adapted from the recent DDS paper, I believe the tricks proposed by the authors are useful. 2. Additionally, the authors integrate changes to adjust the gradient direction, enhancing the network's robustness. Weaknesses: The weaknesses of this paper can be summarized as: 1. The paper overlooks several noteworthy recent studies, such as Make-it-3D, DreamBooth3D, DreamAvatar, Instruct-NeRF2NeRF, Rodin, and so on, which should be included in the references. 2. The authors fail to provide a thorough discussion and qualitative comparisons with Instruct-NeRF2NeRF, the most recent and relevant paper. 3. The effect of the ablation study presented in Figure 4 may not be immediately apparent. The "w/o adj" result may be the better one for the "A brassy cattle" scenario. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Besides the weaknesses listed above, I have several questions for the authors to consider in the rebuttal: 1. The descriptive prompt may not perfectly describe the object since the 3D DMTet is initially optimized via the input multi-view images, which could potentially impact the final results in a negative manner. 2. Following the approach proposed in HeadSculpt, it may be beneficial to input the rendered images from the original DMTet instead of the optimized DMTet. I am wondering whether the IESD loss proposed in HeadSculpt would achieve comparable or superior results compared to the approaches in ITEM3D. 3. Figure 2 currently depicts prompts as "A green chair" and "A green wooden chair." It may be worth exploring the possibility of changing the prompts to "A green chair" and "make it wooden," as proposed in HeadSculpt. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Limitations and potential impacts have been discussed in the submission. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you sincerely for your positive and encouraging feedback on our paper. **Q1: [Lack of related works]** **A1**: We appreciate your suggestion to include these references and provide a more comprehensive discussion of the related works. Based on recent state-of-the-art studies, we have investigated and briefly discussed the relevant works here. Among these recent studies, Make-it-3D, DreamBooth3D, and Instruct-NeRF2NeRF are more closely related to our methods as they also utilize text prompts to guide the appearance editing of 3D objects. These studies employ NeRF as the 3D representation, whereas ITEM3D utilizes a disentangled representation consisting of mesh, texture, and environment map. Instruct-NeRF2NeRF and DreamBooth3D both utilize 2D diffusion to update the training images and fine-tune the NeRF. They also design updating strategies to maintain multi-view consistency during the editing process. Make-it-3D, following DreamFusion, incorporates the SDS loss to directly optimize the NeRF and further refines the results using a textured point cloud. In contrast, ITEM3D chooses to optimize the texture map directly while preserving consistency due to its disentangled representation. Other works, such as Rodin, DreamAvatar, and AvatarBooth, focus on diffusion-based text-driven avatar synthesis. Rodin trains a tri-plane-based diffusion model using approximately 100k 3D head models. DreamAvatar and AvatarBooth optimize a NeRF-like representation using the SDS loss, similar to DreamFusion and Make-it-3D. These methods leverage SMPL to initialize a parameterized body, which is used in the body generation process. In contrast, our method focuses on generalized editing, and we aim to explore our capability in avatar editing in future work. Your input helps enhance the context and understanding of our research within the broader field of text-guided 3D object editing and synthesis. We will add the discussion above in our final version. **Q2: [Qualitative comparisons with Instruct-NeRF2NeRF]** **A2**: We provide qualitative comparisons with Instruct-NeRF2NeRF in **Fig.2** of the rebuttal PDF. Both Instruct-NeRF2NeRF and ITEM3D are capable of achieving text-consistent texture editing. However, Instruct-NeRF2NeRF falls behind our method in certain aspects. For example, when editing the chair using prompts such as "Turn the chair into a red stone chair" or "Turn the chair into a green wooden chair", Instruct-NeRF2NeRF exhibits a lack of details and presents a smooth material. In contrast, ITEM3D's edited textures showcase clear patterns and wood grain, resulting in a more realistic appearance. Furthermore, the ficus case highlights another limitation of Instruct-NeRF2NeRF in handling disentangled editing. When prompted to "Turn the pot into a blue pot," Instruct-NeRF2NeRF applies the blue color to both the pot and the ficus, failing to disentangle the two components. In contrast, our method, ITEM3D, successfully disentangles the pot and accurately turns it blue while leaving the ficus unchanged. These qualitative comparisons demonstrate that ITEM3D outperforms Instruct-NeRF2NeRF in terms of capturing fine details, maintaining texture patterns, and effectively handling disentangled editing tasks. **Q3: [The ablation study presented in Fig.4 of the original paper]** **A3**: In the example you mentioned, the result labeled as "w/o adj" in our figure fails to eliminate the original two cute eyes and instead shows four eyes on the cattle head.
We acknowledge that this misrepresentation could be overlooked due to visual deception. We sincerely apologize for any confusion or misunderstanding caused by this figure. To rectify the situation, we have revised the figure and marked the problematic areas with red circles to draw attention to the issue. This clarification aims to ensure transparency and accuracy in our presentation. We appreciate your understanding and diligence in pointing out this discrepancy. **Q4-Q5: [Ablation study on IESD loss]** **A4-A5**: HeadSculpt is a recent arXiv paper submitted in June. We recently investigated the method proposed in the HeadSculpt paper and introduced its IESD (Identity-aware editing score distillation) loss into our ablation study, as shown in **Fig.1** of the rebuttal PDF. The IESD loss utilized in HeadSculpt aims to balance the source object and target object, presented as $\alpha\left(\epsilon_{\phi}\left(z_{t};\hat{y},t\right)-\epsilon\right)+\beta\left(\epsilon_{\phi}\left(z_{t};y,t\right)-\epsilon\right)$, where in HeadSculpt, $\alpha$ and $\beta$ are set to 0.6 and 0.4, respectively. In contrast, our method employs $\alpha$ and $\beta$ values of 1 and -1, respectively, with both methods sharing a unified mathematical form. As depicted in **Fig.1** of the rebuttal PDF, both our loss function and the IESD loss function yield comparable results. They are capable of editing the texture into another realistic and natural texture based on the given prompt, with minimal differences between the outcomes. Further research is needed to determine which of these two parameter settings is superior. **Q6: [Changing the prompts to "make it wooden"]** **A6**: We have conducted an ablation study, as shown in **Fig.5** of the rebuttal PDF, to explore the effect of different text descriptions. As shown, using the prompt "make it wooden" in ITEM3D causes degradation in the editing results. This is because our method introduces a relative direction for editing, which makes it more appropriate to use source and target text prompts as starting and ending points. On the other hand, prompt formats that directly describe the direction, such as "make it wooden", may be more suitable for models like InstructPix2Pix, which is the base model of HeadSculpt and Instruct-NeRF2NeRF. --- Rebuttal Comment 1.1: Comment: Post Rebuttal: Thanks for the efforts from the authors. With regard to my preliminary review and the authors' rebuttal: A1. I kind of agree with the authors. But I believe it would be beneficial to reference and incorporate discussions involving similar existing methods, even if they do not address the exact same task. I encourage the authors to conduct a thorough literature review to ensure that no relevant papers have been overlooked. A2. I believe the comparisons with Instruct-NeRF2NeRF are evident, even though determining superiority becomes challenging since it relies more on personal evaluation rather than quantitative evaluation. A3. But I think the results with two cute eyes are better, as we may want to keep the original identity, right? A4-A5. Thanks for the experiments. A6. Thanks for the ablation studies. Then I think it would be better to solve this problem, because in real-world applications we often lack a specific and definitive text prompt for the subject, such as "A green chair." Overall I tend to keep or upgrade the original rating. Thanks. --- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback. We appreciate your acknowledgment of the value of our experiments.
**A1**: We sincerely acknowledge your point about the importance of thoroughly reviewing the existing literature. While these works may not focus on the exact same task, we agree that they can contribute significantly to our work. Therefore, we have recently conducted an exhaustive review of recent works in this area. The revised version will include a thorough discussion of these works, enabling us to situate our research in the broader context and highlight its unique contributions. **A3**: To some extent, you are correct. We have taken your suggestion into careful consideration regarding the target of our editing. It has come to our attention that preserving the identity holds value in specific downstream tasks; meanwhile, it should not come at the cost of the reasonableness of the results. In this case, the ranking of the results is as follows: two cute eyes > two real eyes (the result with adjustments) > four eyes (the erroneous result without adjustments). Consequently, we should make an effort to prioritize both identity preservation and the prevention of erroneous outcomes in similar future cases. **A6**: We agree that the subject lacks a definitive description. In fact, our method does not necessarily require an exact prompt. The source and target prompts serve the purpose of determining the relative direction of the editing process. Even a general and coarse description can have an impact on complex objects. Our main objective is to express our intention as the distinction between the source and target texts, while ensuring that the prompts remain reasonable. However, it is indeed more convenient to express the editing purpose directly, as in "make it wooden". We will explore possibilities for improving our method in this respect.
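To make the unified form quoted in A4-A5 concrete, here is a small, self-contained sketch of the weighted score-distillation direction. The tensor shapes and the closing assertion are illustrative only; `eps_target` and `eps_source` stand for the frozen diffusion model's noise predictions under the target prompt $\hat{y}$ and the source prompt $y$.

```python
import torch

def score_distillation_direction(eps_target, eps_source, eps, alpha=1.0, beta=-1.0):
    """Unified form from the rebuttal:
        alpha * (eps_phi(z_t; y_hat, t) - eps) + beta * (eps_phi(z_t; y, t) - eps)
    (alpha, beta) = (0.6, 0.4) corresponds to HeadSculpt's IESD loss, while
    (1, -1) gives ITEM3D's relative direction."""
    return alpha * (eps_target - eps) + beta * (eps_source - eps)

# With the ITEM3D setting, the sampled noise eps cancels out and only the
# source-to-target difference of the noise predictions remains:
eps_t, eps_s, noise = torch.randn(3, 4, 64, 64)
assert torch.allclose(score_distillation_direction(eps_t, eps_s, noise),
                      eps_t - eps_s)
```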
Summary: This paper proposes ITEM3D, a model for automatic 3D object editing according to text prompts. ITEM3D bridges the gap between 3D representation and natural images using rendered images. It optimizes disentangled texture and environment maps using a relative editing direction to bypass ambiguous text descriptions. It gradually adjusts the editing direction to reduce deviation caused by texture projection. Results demonstrate ITEM3D outperforms SDS-based methods on various 3D objects. It also allows for explicit control over lighting with text-guided relighting. This paper also suggests directly applying the diffusion model to 3D objects is challenging due to conflicts in optimization goals and extreme semantic bias. Strengths: Optimizing disentangled texture and environment maps using a relative editing direction to bypass ambiguous text descriptions looks interesting. Results demonstrate ITEM3D outperforms SDS-based methods on various 3D objects. The explicit control over lighting with text also looks interesting. Weaknesses: One major weakness of the proposed method is the lack of comparisons with other important related works, such as CLIPNeRF, ARF[1], and SINE[2]. While CLIPNeRF is cited, the others are not. Additionally, there are several related works that do not use text descriptions for editing NeRF, including NeRV[3], NeRD[4], NerFactor[5], and EditNeRF[6], that are also missing from the comparison. Another related line of work, SDFusion[7], is not discussed at all. Although the authors may have overlooked recent related works in the crowded field of text-based editing with NeRF + Diffusion models, it is crucial to compare the proposed method to CLIPNeRF and ARF to understand its advantages. Comparisons to intrinsic image decomposition-based methods are also important, especially for relighting, to determine the strengths and weaknesses of the proposed method. While the expectation is not for the method to outperform NeRD or NeRFactor, comparing it to these methods is important to identify what needs to be done to achieve similar results. Another important and simple baseline is missing: how do the results compare to a simple 2D-based image editing method without using explicit 3D texture and illumination map optimization? [1] Zhang, Kai, et al. "Arf: Artistic radiance fields." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. \ [2] Bao, Chong, et al. "Sine: Semantic-driven image-based nerf editing with prior-guided editing field." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. \ [3] Srinivasan, Pratul P., et al. "Nerv: Neural reflectance and visibility fields for relighting and view synthesis." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. \ [4] Boss, Mark, et al. "Nerd: Neural reflectance decomposition from image collections." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. \ [5] Zhang, Xiuming, et al. "Nerfactor: Neural factorization of shape and reflectance under an unknown illumination." ACM Transactions on Graphics (ToG) 40.6 (2021): 1-18. \ [6] Liu, Steven, et al. "Editing conditional radiance fields." Proceedings of the IEEE/CVF international conference on computer vision. 2021. \ [7] Cheng, Yen-Chi, et al. "Sdfusion: Multimodal 3d shape completion, reconstruction, and generation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. 
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: It would be helpful to compare the current method with the other recent works mentioned in the weaknesses section to better understand its effectiveness. Additionally, it is unclear how text-guided lighting is mapped to the disentangled illumination representation. Is it solely through the use of diffusion priors, or are new text embeddings or directions learned for specific lighting conditions? If diffusion models already include such priors, would optimization in 2D suffice without the need for explicit 3D texture and illumination map optimization? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The paper lacks several key related works and comparisons to them, including simple 2D-based comparisons. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for providing additional references and addressing the lack of comparison with other baselines in our paper. **Q1-Q2: [Lack of related works and comparison experiments]** **A1-A2**: CLIP-NeRF, ARF, and SINE are CLIP-based editing models that, compared to diffusion-based models, lack the capability to handle complex editing tasks in generalized objects. To demonstrate the advantages of NeRF + Diffusion models, we have compared our ITEM3D approach with two baselines: CLIP-NeRF and a 2D baseline called ControlNet. In **Fig.4** of our rebuttal PDF, we present the results of this comparison. For the CLIP-NeRF baseline, we utilized the official code and the provided pre-trained Lego NeRF model from the CLIP-NeRF repository to reproduce its performance. Given the prompt "A red real excavator", we observed that the 2D baseline successfully edited the input views into a realistic red excavator. However, the results lacked multi-view consistency, with distinct patterns and slight variations in the excavator's color across the collection of five-view images. On the other hand, while CLIP-NeRF achieved 3D consistency, it failed to edit the excavator into a red color and instead changed the color of the base to red. In contrast, our ITEM3D approach was able to both edit the color according to the prompt and maintain the inherent 3D consistency of the object. Besides, in **Fig.2** of our rebuttal PDF, we present the results of the comparison experiment between ITEM3D and the recent state-of-the-art work, Instruct-NeRF2NeRF. Both ITEM3D and Instruct-NeRF2NeRF are capable of achieving text-consistent editing results with natural textures. However, there are certain areas where Instruct-NeRF2NeRF falls short compared to our approach. For example, when given the prompt "Turn the pot into a blue pot", Instruct-NeRF2NeRF fails to disentangle the pot and the ficus, resulting in undesired changes to the ficus. On the other hand, our ITEM3D approach is able to successfully edit the pot into a blue color while preserving the integrity of the ficus. When editing chairs, Instruct-NeRF2NeRF tends to synthesize smooth textures with few details. In contrast, our ITEM3D approach performs better in capturing and preserving fine details during texture editing. Similar to Instruct-NeRF2NeRF, SDFusion optimizes the neural radiance field (NeRF) to edit the texture of a reconstructed 3D object. The optimization process is guided by the SDS loss proposed in DreamFusion. For non-text editing works, NeRV enables the rendering of novel views with varying lighting conditions, including indirect illumination effects. NeRD decomposes a scene into its shape, reflectance, and illumination components, enabling real-time rendering with novel illuminations. NeRFactor introduces a method to reconstruct the shape and spatially-varying reflectance of an object from multi-view images, allowing for rendering novel views under different lighting conditions and material editing. These methods utilize decomposed representations to edit illumination and material aspects, which aligns with the approach of our method. However, the main contribution of our work lies in text-guided texture editing, which is an aspect they do not specifically focus on.
**Q3: [How to map text-guided lighting to the disentangled illumination representation]** **A3**: In our pipeline, we employ a process that disentangles the 3D model into three components: the appearance texture map, the environment lighting map, and the geometric mesh. This disentanglement allows us to edit each component separately to achieve the desired changes. During the illumination editing stage, we fix the texture and mesh components and focus on optimizing the environment lighting map based on the given text prompt. By leveraging the differential rendering process, where the 3D model is rendered into 2D images, our diffusion model can utilize the lighting prior information from these 2D images to guide the optimization process for the lighting map. However, it's important to note that directly optimizing the 2D images using the diffusion model cannot maintain multi-view consistency. This limitation is overcome by our method, which preserves the multi-view consistency by utilizing the inherent 3D representation of the object. By disentangling the 3D model and leveraging the differential rendering process, our approach maintains the 3D consistency of the object while achieving desired edits. This ensures that changes made to the lighting map, texture map, or geometric mesh are coherent across different views and result in visually consistent and realistic edits. --- Rebuttal Comment 1.1: Title: Response to author rebuttal Comment: Thank you for taking the time to address the concerns raised in the review. I appreciate your efforts in providing clarifications. However, there are still some areas that require further elaboration and evidence. Please find my comments below: - **Comparison with ARF and SINE:** Thank you for the comparison to CLIP-NeRF (CVPR 2022). However, a comparison between the proposed approach, ARF (ECCV 2022), and SINE (CVPR 2023) would be valuable, especially since both have been shown to outperform CLIP-NeRF. The claim that these methods "lack the capability to handle complex editing tasks in generalized objects" needs experimental evidence. For instance, SINE has demonstrated strong editing capabilities on real cars, which have intricate material properties. I'd like evidence supporting that ITEM3D outperforms ARF and SINE. - **Related Work:** The distinction provided with NeRV, NeRD, and NeRFactor is insightful. I hope these papers will be incorporated into the related work section in any revised version of this paper. - **Text-guided Lighting:** Thank you for clarifying how text-guided lighting is mapped. However, the claim that "directly optimizing the 2D images using the diffusion model cannot maintain multi-view consistency" needs substantiation. Methods like SDFusion and DreamFusion have shown strong multi-view consistency. It would be beneficial to see experimental evidence backing this claim. --- Reply to Comment 1.1.1: Comment: Thank you for your detailed and insightful feedback. * **Comparison with ARF and SINE**: Regarding the comparison with ARF and SINE, we apologize for the coarse discussion in our initial rebuttal. We would like to address these points explicitly here and provide a more thorough discussion in our revised paper. ARF and SINE are both NeRF-based optimization methods that utilize image guidance. In contrast, our method focuses on text-guided texture editing. While ARF and SINE can be extended to text-prompt editing, they mainly rely on an off-the-shelf text-guided method called Text2live [1], as mentioned in the SINE paper.
In the context of text-editing applications, SINE first utilizes Text2live to edit a single-view image corresponding to its pre-trained NeRF. The edited single-view image then serves as the input target image to optimize the NeRF representation, following the same procedure as the image-guided pipeline. There are several differences in the objectives of these methods. ARF aims at scene stylization, focusing on global color in the 3D scene but ignoring fine-grained details. On the other hand, SINE emphasizes detailed semantic editing. To summarize the differences among CLIP-NeRF, SINE, ARF, and our ITEM3D:

| | Guidance | Task | Representation |
| :-----------: | :------: | :---------: | :---------------------------: |
| **CLIP-NeRF** | Text | Editing | NeRF |
| **SINE** | Image | Editing | NeRF |
| **ARF** | Image | Stylization | NeRF |
| **ITEM3D** | Text | Editing | Explicit texture and mesh |

As shown, CLIP-NeRF and SINE are more closely aligned with our task, while ARF is not. We would have liked to conduct a comprehensive comparison with both CLIP-NeRF and SINE. However, since the code for SINE has not been released yet, it would be challenging for us to reproduce its results within the limited timeframe of this rebuttal. Therefore, we have provided a comparison with CLIP-NeRF in our current submission. Once the official code for SINE becomes available, we will perform the necessary comparison and include the results in our final version. Additionally, to demonstrate the effectiveness of our method in editing 3D objects using text, we conducted a comparative experiment with Instruct-NeRF2NeRF [2] in **Fig.2** of the rebuttal PDF to provide further support for ITEM3D, since Instruct-NeRF2NeRF, a text-guided, diffusion-based editing method on a NeRF representation, is the work most closely related to our ITEM3D. **Thank you once again for your valuable reference, and we assure you that all the points raised in the review will be incorporated in our revised paper.** * **Related Work**: We appreciate the acknowledgement. We will certainly incorporate these papers into the related work section to highlight their contributions and distinguish them from our own method. * **Text-guided Lighting**: Indeed, SDFusion and DreamFusion are capable of preserving 3D consistency; however, they do not directly optimize 2D images but instead optimize the implicit 3D representation. By "directly optimizing 2D images", we refer to the utilization of diffusion-based methods or other 2D methods to manipulate the multi-view 2D images. DreamFusion employs the SDS loss to optimize the NeRF representation and applies shading based on a sampled point light, while SDFusion leverages the SDS loss to optimize the SDF representation conditioned on an input image. Despite the absence of explicit 3D texture and illumination maps, both DreamFusion and SDFusion incorporate NeRF and SDF as implicit 3D representations. The inherent 3D representations in both methods are the primary contributing factor in maintaining 3D consistency, as opposed to 2D images, which lack such a 3D representation. The issue of 3D inconsistency in the direct optimization of 2D images is also mentioned in recent diffusion-based methods, such as Instruct-NeRF2NeRF [2], Make-it-3D [3], and others. These methods share a common objective of resolving the problem arising from the presence of 3D inconsistency among multi-views of 2D edited images.
The 3D inconsistency observed when directly modifying the illumination of 2D images resembles the inconsistency encountered when editing the appearance of 2D images. As we are restricted from providing additional results during the discussion period, please refer to **Fig.4** of our rebuttal PDF. [1] Omer Bar-Tal, Dolev Ofri-Amar, Rafail Fridman, Yoni Kasten, and Tali Dekel. Text2live: Text-driven layered image and video editing. arXiv preprint arXiv:2204.02491, 2022. [2] Haque A, Tancik M, Efros A A, et al. Instruct-nerf2nerf: Editing 3d scenes with instructions[J]. arXiv preprint arXiv:2303.12789, 2023. [3] Tang J, Wang T, Zhang B, et al. Make-it-3d: High-fidelity 3d creation from a single image with diffusion prior[J]. arXiv preprint arXiv:2303.14184, 2023.
Summary: This paper tackles the task of texture editing for 3D models with the guidance of text. The task aims to manipulate the surface properties based on the text guidance to create a corresponding visually realistic appearance. The prior works that this paper targets to improve upon are SDS-loss-based methods. At a high level, this paper represents the scene with a 3D representation, differentiably renders the representation into 2D images, and uses a pretrained diffusion model to guide the editing through an optimization process. The proposed method contains two key components. First, the 3D scene is a decomposed representation, containing triangle meshes, a texture map, and an environment map (following the prior work nvdiffrec). Second, the paper explores a better editing direction during the optimization process. Specifically, the relative direction and a gradual direction adjustment scheme are proposed to achieve better editing results. Evaluation is done on synthetic object data. The results are qualitatively visualized, and quantitatively evaluated with CLIP-based scores and a user study. Strengths: This paper contains a good amount of originality: - The task definition is well motivated. - The adoption of the decomposed representation from nvdiffrec is novel and a well-grounded design choice. - The analysis and method design of the edit direction are informative and sensible. Quality: - The qualitative performance of the current model still has significant room for improvement. But it's acceptable considering the challenges of the task. Clarity: - This paper is well written and easy to follow. - The analysis is informative and convincing. Significance: - The functionality this paper achieves is among the first in the literature. Weaknesses: - The quantitative evaluation is relatively weak. There is no quantitative ablation for the proposed method designs. - The "illumination-aware" in the title is slightly misleading -- the major results and the demo video do not show editing capability for illumination. Fig.5 is one experiment to show simple editing of light intensity. But the quality is still preliminary, and it does not show evidence of editing lighting effects such as reflections and shadows. It fits the acronym quite well, but maybe it is not the most accurate in describing the method. Maybe it is more like "reflectance-aware"? - The method adopts the decomposed representation following nvdiffrec. Can the proposed method properly handle the ambiguity of material and light? To give a concrete example, suppose the original reconstructed object is under flat and uniform lighting, but the intended edit result is under directional lighting and thus contains strong shadows. In this case, will the proposed method still bake the shadow directly into the texture map? If this is the case, it's best to also explicitly mention it in the main paper as a limitation. - The current editing is mainly done on synthetic objects. Can it handle real-world reconstructed objects? - The editing is often quite simple, such as editing of color or brightness. How does the model work for more complex editing, e.g., adding a mustache or a hat to the cow / fish? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please see weaknesses section above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors discussed some limitations at the end of the paper. I would encourage the authors to also properly discuss the model's ability and limitation on editing of lighting effects. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for bringing up this insightful comment. **Q1: [Quantitative evaluation]** **A1**: When evaluating generative models, quantitative metrics are often limited. In our study, we utilize CLIP scores, which are commonly employed in evaluations of Instruct-NeRF2NeRF and other related research, for quantitative assessment. Additionally, we conduct a quantitative ablation study to compare the performance of our model with and without the direction adjustment component, thereby demonstrating the impact of this specifically designed feature. Besides, we add quantitative results comparing against Instruct-NeRF2NeRF in the tables below.

| Method | SDS-based | ITEM3D | Instruct-NeRF2NeRF |
| :-------------------------: | :-------: | :----: | :----------------: |
| Global Score$\uparrow$ | 0.30 | 0.32 | **0.33** |
| Directional Score$\uparrow$ | 0.18 | **0.23** | **0.23** |

| method | Instruct-NeRF2NeRF | ITEM3D |
| :-----------: | :----------------: | :----: |
| training time | 10h | 10min |
| GPU memory | 15GB | 9GB |

**Q2: [The "illumination-aware" in the title is slightly misleading]** **A2**: Our illumination editing is achieved through the optimization of the environment lighting map. However, it is true that our current capability is limited when it comes to editing shadows. We apologize for any misdirection caused by the term "illumination-aware" in the title of our paper. In the revised version, we will make the necessary changes to address this issue. **Q3: [Limitation in editing shadows]** **A3**: We have indeed encountered difficulties in editing shadows with our ITEM3D. Despite our attempts to explore directional lighting conditions in an experiment, we were unable to achieve satisfactory results. We will make sure to explicitly mention this limitation in our paper. **Q4-Q5: [Real-world editing and complex editing]** **A4-A5**: We additionally conducted a qualitative experiment with our ITEM3D on a real-world object dataset. The objects in this dataset are reconstructed from hundreds of multi-view images captured by professional cameras. On this dataset, we also performed more complex editing to demonstrate the performance on challenging cases. As shown in **Fig.1** of the rebuttal PDF, our method achieves impressive editing results on complex objects, such as the vegetable cat, Peppa Piggy dolls, and shoes. For example, given the prompt "a vegetable tiger toy", our model can edit the texture in exquisite detail, conforming to the original structure, and synthesize a cute toy tiger with realistic fur and vegetable embellishments. Furthermore, the cases of the "golden sneaker" and "porcelain pig" demonstrate our ability to edit materials effectively. These cases exemplify the capability of ITEM3D in handling complex editing tasks. However, it is important to acknowledge that there are still challenges for ITEM3D in certain scenarios, such as adding a mustache or a hat. In the mustache case, we attempted to add a mustache to the cow, as shown in **Fig.3** of the rebuttal PDF. While it did generate a black mustache, it inadvertently added the mustache to the cow's body instead of its face. This highlights a limitation in our current approach. In the "hat" case, our model currently lacks the ability to edit the mesh, leading to another failure case. We will continue our research and strive to enhance the capabilities of ITEM3D in handling mesh editing. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the rebuttal.
The rebuttal addresses my concerns. --- Reply to Comment 1.1.1: Comment: We are glad that our rebuttal has effectively addressed your concerns, and we would like to express our sincere gratitude for your positive response.
Rebuttal 1: Rebuttal: **General Response**: Dear Reviewers, We would like to express our gratitude for your thoughtful feedback on our submitted academic paper. Your comments and suggestions have been invaluable in refining and strengthening our work. In this general response, we will address the three important points that were commonly mentioned in the discussions, namely the experiment on a real-world dataset, the comparison experiment with Instruct-NeRF2NeRF, and the efficiency claims. **(We attach a PDF file that provides additional results at the bottom of the global response.)** ●**Experiment on Real-World Dataset** [4exv, pR6T, CxP2]: We acknowledge the importance of evaluating our proposed method on a real-world dataset to demonstrate its applicability and generalization. We have taken your advice into consideration and conducted extensive experiments on a diverse and challenging real-world dataset, shown in **Fig.1** of the rebuttal PDF. By providing qualitative results on this dataset, we aim to showcase the effectiveness and robustness of our approach in handling real-world scenarios. ●**Comparison Experiment with Instruct-NeRF2NeRF** [4exv, 67rK, 1P22]: We appreciate your suggestion to compare our ITEM3D approach with the state-of-the-art work, Instruct-NeRF2NeRF. In response, we have performed a thorough comparison experiment between the two methods in **Fig.2** of the rebuttal PDF. The results clearly demonstrate the superior performance of ITEM3D in disentangling objects and preserving fine details during texture editing. We have highlighted these advantages in our rebuttal, emphasizing our method's ability to maintain object consistency while achieving text-guided appearance editing. ●**Efficiency Claims** [4exv, CxP2]: We greatly appreciate your concerns regarding the efficiency of our method. In order to address these concerns and showcase the efficiency of our model, we have performed an experiment to substantiate our claims. We have included comparisons of editing time and memory consumption with a relevant method to provide a comprehensive understanding of the efficiency of ITEM3D, as shown in the table below.

| method | Instruct-NeRF2NeRF | ITEM3D |
| :-----------: | :----------------: | :----: |
| training time | 10h | 10min |
| GPU memory | 15GB | 9GB |

One of the key reasons behind the efficiency of ITEM3D is its utilization of an explicit texture/normal/environment map as the bridge between 2D rendered images and the 3D representation. This allows ITEM3D to directly optimize the 2D texture, rather than optimizing the complex 3D representation. As a result, the number of parameters to be optimized is significantly reduced, leading to a more efficient editing process. We believe that these revisions and additions significantly strengthen our paper and contribute to the advancement of the field. We are grateful for your valuable input and appreciate the opportunity to enhance the quality and impact of our work. Thank you once again for your time and consideration. Pdf: /pdf/57b4f66b25f82d12b5b9f558c6e6f8385c6a57f1.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper introduces ITEM3D, an illumination-aware model for automatic 3D texture editing based on text prompts. The authors address the challenges of complex 3D models and ambiguous text descriptions in texture editing. They propose leveraging the power of the diffusion model and optimizing disentangled texture and environment map representations using rendered images as an intermediary. The paper introduces a relative editing direction based on noise differences between source and target texts to improve appearance consistency. The authors also gradually adjust the editing direction to mitigate unexpected deviations caused by texture projection. The contributions include an efficient optimization pipeline for texture editing, the introduction of the relative editing direction, and the proposal of gradual adjustment to handle deviations. Strengths: The paper presents a novel approach to automatic 3D texture editing based on text prompts. It combines the power of the diffusion model with the use of rendered images as intermediaries, introducing a relative editing direction and gradual adjustment techniques. This combination of methods and the focus on addressing challenges specific to 3D modeling make the approach original and distinct from previous works. The paper is well-written and provides detailed explanations of the proposed method and its components. The authors offer clear insights into the challenges of texture editing in 3D models and provide a sound rationale for their approach. The experiments and evaluations are thorough, demonstrating the effectiveness of ITEM3D in generating visually natural textures and enabling relighting. The use of qualitative and quantitative analyses strengthens the quality of the results. The paper addresses an important task in 3D modeling (automatic texture editing) and offers a practical solution with significant implications. By leveraging the diffusion model and incorporating text prompts, the proposed method enables users to manipulate the surface properties of 3D models in a realistic and visually appealing manner. Weaknesses: While the paper has several strengths, there are also a few areas where it could be improved: The paper mainly focuses on the editing of textures in the synthetic NeRF dataset. However, it would be valuable to investigate the generalization of the proposed method to more complex 3D models and real-world scenes, such as those with intricate geometry or high levels of detail. Assessing the performance and scalability of ITEM3D on such complex models would enhance the practicality and applicability of the proposed method. While the paper mentions the importance of efficiency in texture editing, it would be beneficial to provide more insights into the computational requirements and optimization strategies employed by ITEM3D. Specifically, discussing methods to improve computational efficiency, reduce memory consumption, and address scalability issues would enhance the practical usability of the proposed method. Comparison with Alternative Approaches: The paper compares the proposed ITEM3D with diffusion-based editing methods and mentions their limitations. However, it would be valuable to include a more comprehensive comparison with other state-of-the-art methods for texture editing in 3D models, such as Instruct-NeRF2NeRF. This would provide a clearer understanding of the advantages and limitations of ITEM3D in relation to existing alternatives.
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Generalization and Real-World Application: Can the proposed ITEM3D method be applied to more complex and real-world 3D models beyond the synthetic NeRF datasets? The paper briefly mentions the importance of efficiency in texture editing, but it would be beneficial to provide more insights into the computational requirements and optimization strategies employed by ITEM3D. While the paper compares ITEM3D with diffusion-based editing methods, it would be valuable to include a more comprehensive comparison with other state-of-the-art methods for texture editing in 3D models. How does ITEM3D fare against other approaches, such as Instruct-NeRF2NeRF or other relevant techniques? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The limitation is discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your kind words. We appreciate your advice. **Q1: [Experiment on real-world dataset]** **A1**: We have conducted additional experiments on a real-world dataset, which comprises several complex objects reconstructed from multi-view images captured by professional cameras. As depicted in **Fig.1** of the rebuttal PDF, our ITEM3D approach showcases remarkable texture editing capabilities for real products, including a shoe, a piggy doll, and a toy cat. For instance, when given the text prompt "a vegetable tiger toy", ITEM3D effectively transforms the texture into that of a cute, furry tiger while preserving the fundamental structure of the original toy cat. Furthermore, when the prompts involve both material and texture descriptions (e.g., "golden" and "porcelain"), our model successfully achieves realistic material editing along with consistent texture modifications as per the provided text. These results clearly demonstrate the generalization ability of ITEM3D in handling complex real-world objects. However, it is important to acknowledge that our method primarily focuses on texture editing for 3D models. Manipulating real scenes, which often lack readily available texture maps, presents a challenge for ITEM3D. We agree that addressing this limitation will be a valuable aspect of our future research. **Q2: [Computational efficiency]** **A2**: Conducting experiments to compare efficiency is crucial in substantiating our claims. In our study, we compared the editing time and memory consumption of our approach with those of the state-of-the-art method, Instruct-NeRF2NeRF. As demonstrated in the table below, our ITEM3D outperforms Instruct-NeRF2NeRF, requiring significantly less time (50 times less) while maintaining comparable memory consumption during texture editing.

| method | Instruct-NeRF2NeRF | ITEM3D |
| :-----------: | :----------------: | :----: |
| training time | 10h | 10min |
| GPU memory | 15GB | 9GB |

**Q3: [Comparison with the state-of-the-art method]** **A3**: In response, we conducted a comparative experiment between our method and Instruct-NeRF2NeRF, considering its recent release. In **Fig.2** of the rebuttal PDF, we visually demonstrate that both ITEM3D and Instruct-NeRF2NeRF are capable of achieving prompt-consistent editing for general objects. However, Instruct-NeRF2NeRF exhibits some limitations compared to our approach in certain aspects. For instance, when editing chairs with prompts like "Turn the chair into a red stone chair" or "Turn the chair into a green wooden chair", Instruct-NeRF2NeRF produces results with fewer details and smooth textures, lacking the fine grain that our method captures. Furthermore, when editing a ficus plant with the prompt "Turn the pot into a blue pot", Instruct-NeRF2NeRF faces challenges in distinguishing between the pot and the plant, resulting in both elements being edited to the same blue color. In contrast, our ITEM3D leverages the concept of the relative direction to achieve disentangled editing, effectively addressing such issues. --- Rebuttal 2: Comment: Reviewer 4exv, Please read the rebuttal provided by the authors and raise a discussion if your concerns are not well addressed. Best, AC --- Rebuttal 3: Comment: The response resolves most of my concerns. Thus, I raise my final rating to borderline accept. --- Rebuttal Comment 3.1: Comment: We are pleased to hear that our answers have addressed most of your issues and led to an improved final rating of borderline accept.
We greatly appreciate your careful consideration of our work.
null
null
null
null
null
null
GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction
Accept (poster)
Summary: The goal of this paper is to utilize locally available models, such as Vicuna and OPT, to learn the tool-use capability of black-box models like GPT-3.5. They use the large models to generate instructions and samples using image context and apply instruction tuning on these data with the fine-tunable models. Various data augmentation techniques, such as negative samples and context samples, were also applied. Furthermore, they propose several automated evaluation methods, including a success rate based on the combination of thought, action, and arguments. Experimental results demonstrate significant improvement in tool invocation effectiveness for their fine-tuned models compared to the original backbone models. Strengths: 1. This paper proposes a method for generating instructions using image context, which enables the large language model to consider visual information when generating multimodal instructions. 2. The method of generating negative data has not been considered in previous works. 3. This paper introduces a novel automated evaluation metric that combines multiple success rates. Weaknesses: 1. The method proposed in this paper, which utilizes self-instruction techniques of large models to enable tool-using capabilities for primitive models, is not novel. Similar techniques have been proposed in several works, such as tool-llama (https://arxiv.org/abs/2304.08354), tool-alpaca (http://arxiv.org/abs/2306.05301), and gorilla (http://arxiv.org/abs/2305.15334). The authors have not made significant contributions to the SFT method. 2. One of the main contributions of this paper, image context, is not thoroughly elaborated in this paper. The self-instruction method seems to be also similar to that of MM-REACT and LLaVA. 3. For the evaluation metrics, the success rate of thought is not clearly explained. Thought should be natural-language text with a certain diversity, but it is unclear whether the generated thought needs to be identical to the ground-truth thought to achieve a success rate of 1. If they need to be identical, I suspect that the success rate of thought would not be as high as shown in the experimental results, as the model may generate thoughts that are semantically consistent but not textually identical to the ground truth. If they do not need to be identical, it would be helpful to provide details on how the success rate of thought is calculated. 4. Some recent strong multimodal baselines, such as HuggingGPT (http://arxiv.org/abs/2303.17580) and gorilla (http://arxiv.org/abs/2305.15334), are missing in this paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is there a technical error in Equation 1? It states that "Y" is sampled from the teacher model based on "X" to obtain the prompt. However, based on the textual description, Equation 1 should refer to sampling the instruction based on "X" and "P_t." 2. How was the image context constructed? 3. How is the success rate of thought computed? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1:** *The method proposed in this paper, which utilizes self-instruction techniques of large models to enable tool-using capabilities for primitive models, is not novel. Similar techniques have been proposed in several works, such as Tool-LLaMA (https://arxiv.org/abs/2304.08354), Tool-Alpaca (http://arxiv.org/abs/2306.05301), and Gorilla (http://arxiv.org/abs/2305.15334). The authors have not made significant contributions to the SFT method.* **A1:** We appreciate the comments but believe there is a significant misunderstanding. Our GPT4Tools is the first public work (before all the mentioned works) to enable a primitive language model to use multi-modal tools [51], including a demo and code. Besides, all the mentioned works were preprinted after the submission deadline. Please note that the mentioned Tool-LLaMA may refer to [https://arxiv.org/pdf/2307.16789.pdf] rather than BMTools [https://arxiv.org/abs/2304.08354]. BMTools relies on prompt engineering of ChatGPT instead of off-the-shelf language models. By contrast, our GPT4Tools aims to empower open-source language models to invoke tools through self-instruction tuning. Our contributions mainly lie in three points: (1) Our method is the first to enable primitive open-source language models to use multi-modal tools, eliminating the dependence on advanced proprietary LLMs like ChatGPT. (2) We design a new approach based on multi-modal contexts for self-instruction and augmentation, which significantly promotes multi-modal tool usage and can be deployed in different approaches. (3) We propose a new benchmark to assess the effectiveness of using tools, and our method shows remarkable improvements. [51] Yin S, Fu C, Zhao S, et al. A Survey on Multimodal Large Language Models[J]. arXiv preprint arXiv:2306.13549, 2023. **Q2:** *One of the main contributions of this paper, the image context, is not thoroughly elaborated. The self-instruction method also seems similar to that of MM-REACT and LLaVA.* **A2:** Please note that our GPT4Tools is concurrent work with MM-REACT and LLaVA. Besides, as far as we know, MM-REACT is based on prompt engineering instead of self-instruction, and LLaVA aims to achieve a better VQA model rather than pursuing the capability of invoking tools. LLaVA can also serve as a tool in our GPT4Tools to enhance visual understanding. In addition, the image context consists of the image captions and the coordinates of objects. As presented in Section 3.1 and Table 10 of the supplementary material, we use these image contexts to build prompts for ChatGPT and then generate tool-related user instructions. We will clarify this in the revision. **Q3:** *For the evaluation metrics, the success rate of thought is not clearly explained. Thought should be natural-language text with a certain diversity, but it is unclear whether the generated thought needs to be identical to the ground-truth thought to achieve a success rate of 1. If they need to be identical, I doubt that the success rate of thought would be as high as shown in the experimental results, as the model may generate thought that is semantically consistent but not textually identical to the ground truth. If they do not need to be identical, it would be helpful to provide details on how the success rate of thought is calculated.* **A3:** Different from many open-ended tasks, invoking tools requires strictly following a pre-defined interaction format.
If the model arbitrarily outputs in unexpected formats, such as wrong tool names or numbers of parameters, it will fail to properly invoke the tools. As presented in Section 3.3, we evaluate the models from three aspects: whether a tool is needed, tool type, and tool parameters. For open-ended text in the tool parameters, we adopt the BLEU metric with a threshold to assess correctness. **Q4:** *Some recent strong multimodal baselines, such as HuggingGPT (http://arxiv.org/abs/2303.17580) and Gorilla (http://arxiv.org/abs/2305.15334), are missing from this paper.* **A4:** Please refer to Q1. Besides, we evaluate our model against Visual ChatGPT in Table 2 and Table 5, which uses the same GPT-3.5 LLM as HuggingGPT. **Q5:** *Is there a technical error in Equation 1, where it states that "Y" is sampled from the teacher model based on "X" to obtain the prompt? Based on the textual description, Equation 1 should refer to sampling the instruction based on "X" and "P_t."* **A5**: As shown in Table 10 of the supplementary material, P_t is the system prefix prompt, which consists of the basic task definition, the usage descriptions of the tools, and a placeholder for the image context. We utilize ChatGPT to generate diverse user instructions by placing different image contexts into the placeholder of the system prompt. More details will be added in the revision. **Q6:** *How was the image context constructed?* **A6:** Please refer to Q2. As illustrated in Figure 1 and Table 10 of the supplementary material, the image context consists of the image captions and the coordinates of objects. We use these image contexts to build prompts for ChatGPT and then generate tool-related user instructions. We will clarify this in the revision. **Q7:** *How is the success rate of thought computed?* **A7:** Please refer to Q3. As presented in Section 3.3, we evaluate the models from three aspects: whether a tool is needed, tool type, and tool parameters. For open-ended text in the tool parameters, we adopt the BLEU metric with a threshold to assess correctness. --- Rebuttal Comment 1.1: Title: Final review on Submission1848 by Reviewer F8mq Comment: Thank you to the authors for their rebuttal. The authors have explained the difference of their approach, which uses open-source language models to use multi-modal tools, and that theirs is the first work to do this. It appears that the authors have addressed the weaknesses in their response, so I revise my score from 4 to 5. As mentioned by Reviewer MACc, I keep the score at this level due to the limited novelty of the proposed method.
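For concreteness, a minimal sketch of how the format-strict, BLEU-thresholded evaluation described in A3/A7 might be implemented is given below. The field names, data layout, and threshold value are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the three-aspect success-rate evaluation from A3/A7.
# Field names and the BLEU threshold are assumptions, not the paper's code.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

BLEU_THRESHOLD = 0.5  # assumed cutoff for open-ended tool arguments

def success_rates(prediction: dict, ground_truth: dict):
    """Score one sample along the three aspects of Section 3.3:
    whether a tool is needed, tool type, and tool arguments."""
    # Thought: did the model correctly decide whether a tool is needed?
    sr_t = int(prediction["needs_tool"] == ground_truth["needs_tool"])
    # Action: strict exact match on the tool name.
    sr_act = int(prediction["tool_name"] == ground_truth["tool_name"])
    # Arguments: open-ended text is scored with thresholded BLEU.
    smooth = SmoothingFunction().method1
    bleu = sentence_bleu(
        [ground_truth["tool_args"].split()],
        prediction["tool_args"].split(),
        smoothing_function=smooth,
    )
    sr_args = int(bleu >= BLEU_THRESHOLD)
    # Overall success requires all aspects to be correct simultaneously.
    return sr_t, sr_act, sr_args, sr_t * sr_act * sr_args
```

The key design point reflected here is that tool name and format are matched strictly, while only the free-form argument text is scored with a soft metric.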
Summary: This work introduces GPT4Tools, a self-instruction method that teaches large language models (LLMs) to utilize tools for solving vision tasks. The authors construct an instruction-following dataset generated by ChatGPT from combinations of a set of images and 23 image-related tools. The dataset is augmented to enable the capability of not using tools or using tools in multiple turns. The authors then apply Low-Rank Adaptation (LoRA) to open-source LLMs, like LLaMA and Vicuna, enabling these models to use tools (a minimal illustrative sketch of a LoRA layer is given after this review). The paper also proposes to evaluate the model's tool-using performance by measuring the success rate from multiple aspects. Empirical results reveal that GPT4Tools can effectively use tools and demonstrates the capability to adopt new tools. Strengths: - The paper presents an innovative approach to teaching LLMs to use tools for visual tasks through self-instruction - The impact of enabling LLMs to utilize tools could be substantial and pervasive. Weaknesses: - The paper falls short in providing a comparison with VisProg [1], which also instructs LLMs to use visual tools through few-shot demonstrations. Although GPT4Tools has the benefit of not utilizing proprietary LLMs in inference, a performance comparison would enrich the paper. - The evaluation method appears less compelling because both the training and testing data are generated by using the specific set of tools. Evaluation on standard benchmarks, such as GQA as in [1], could be more convincing. - The paper could benefit from enhanced clarity in its descriptions and explanations. Suggestions include: - Clarifying the variables P_t, X_C, Y, possibly through their corresponding parts in Figure 1 or by mentioning corresponding examples in the appendix. - The subscript of P_t could be confusing as the total number of tools is defined as N. - The abundant font colors in Figure 1 and Figure 3 may confuse readers, and it would be helpful to indicate which parts are generated by LLMs - (Minor) Some crucial details appear to be omitted, such as the process of extracting X_C from images, the image source, and the filtering process of the dataset. - (Minor) Typos: - Page 6, line 172: "and and" - Page 6, line 173: ""items.." [1] Gupta, Tanmay, and Aniruddha Kembhavi. "Visual programming: Compositional visual reasoning without training." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Can GPT4Tools effectively do multi-turn planning, given the multi-turn examples in the augmented data? Why does the evaluation data only contain single-turn examples? - How scalable is GPT4Tools when using a larger number of tools? - (Minor) What do the numerical codes (e.g., 179.44, 105.55) represent in the image content shown in Figure 1?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 4 excellent Limitations: While the authors discuss potential overfitting for longer tuning iterations and the inferior performance of OPT-13B, more detailed discussions regarding the limitations of the proposed evaluation method, the multi-turn capability, and the scalability of GPT4Tools would add value. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
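As referenced in the summary above, here is a minimal sketch of a LoRA-adapted linear layer of the kind used for the fine-tuning in this paper; the class name, rank `r`, and scaling `alpha` are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal LoRA sketch: freeze the pretrained weight and learn a low-rank
# additive update W_eff = W + (alpha / r) * B @ A. Hyperparameters are
# illustrative assumptions, not the paper's actual settings.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weight stays frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)
```

Because `B` is initialized to zero, training starts from the frozen pretrained behavior, which is why LoRA is a cheap way to adapt LLaMA- or Vicuna-scale models on instruction data.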
Rebuttal 1: Rebuttal: **Q1:** *The paper falls short in providing a comparison with VisProg [1], which also instructs LLMs to use visual tools through few-shot demonstrations.* **A1:** Different from VisProg, which utilizes few-shot prompt engineering, our method uses a smaller open-source language model in a zero-shot manner. As shown in Table 2, our method demonstrates good generalization capability on unseen tools. Furthermore, to further validate efficacy, we conducted ablation studies on GQA, following VisProg, by creating a subset for evaluation. As shown in Table 15, GPT4Tools further improves the accuracy by 3.4% over the ViLT-VQA baseline model. We will add more experiments in the revision.

Table 15: Comparison on the GQA dataset.

| **Method** | **Accuracy** |
|:----------:|:------------:|
| ViLT-VQA   | 47.8 |
| GPT4Tools  | 51.2 |

**Q2:** *The paper could benefit from enhanced clarity in its descriptions and explanations.* **A2:** We appreciate the valuable suggestions! Accordingly, we will improve the presentation by clarifying the symbols and the font colors of the figures in the revision. **Q3:** *Some crucial details appear to be omitted, such as the process of extracting X_C from images, the image source, and the filtering process of the dataset.* **A3**: All images utilized in GPT4Tools are sourced from the training set of COCO [43]. X_C consists of the ground-truth captions and bounding boxes with tags corresponding to the images, which are used to construct prompts for ChatGPT to generate data. We then utilize postprocessing to filter out data with duplicate instructions, incorrectly formatted instructions, calls with incorrect tool names, and calls with incorrect tool-argument formats. A comprehensive description of the image source and the filtering procedure will be provided in the revision. Meanwhile, the corresponding code will be made available. **Q4:** *Can GPT4Tools effectively do multi-turn planning, given the multi-turn examples in the augmented data? Why does the evaluation data only contain single-turn examples?* **A4:** Yes, GPT4Tools can perform multi-turn planning. For example, as shown in Figure 7 of the supplementary material, the user instruction in one sample, "Generate a man watching a sea based on the pose of the woman.", requires the following multi-turn planning: detect the woman in the image, extract the pose of the woman, and generate an image of a man watching the sea using the pose. In GPT4Tools, we term these multi-turn planning samples context samples, which are also included in the validation set. **Q5:** *How scalable is GPT4Tools when using a larger number of tools?* **A5:** LLMs are currently evolving towards supporting long context; for example, LongNet [49] supports 1B-token inputs. This means that prompts for a massive number of tools will not be an unresolvable issue in the future. In addition, for models with limited context length, we can use tool-retrieval techniques to filter a small subset of the toolset that may be relevant to the user input, and then apply GPT4Tools for inference. To demonstrate effectiveness, we utilized BM25 [50] to retrieve the top-K tools from the 23 tools based on the user input for inference; the results are shown in Table 14. This strategy can mitigate the reliance on long-context models for a large number of tools to some extent. We will provide more experiments on this in the revision. Table 14: GPT4Tools using the top-K related tools.
| Model | SR_t | SR_act | SR_args | SR |
|---|---|---|---|---|
| Vicuna-13B | 69.2 | 24.5 | 24.6 | 12.3 |
| + GPT4Tools (Top-1) | 87.0 | 55.8 | 57.5 | 54.0 |
| + GPT4Tools (Top-2) | 93.1 | 70.0 | 69.5 | 67.8 |
| + GPT4Tools (Top-3) | 95.8 | 74.4 | 72.9 | 73.1 |
| + GPT4Tools (All) | 99.2 | 98.1 | 92.9 | 89.8 |

[49] Ding J, Ma S, Dong L, et al. LongNet: Scaling Transformers to 1,000,000,000 Tokens[J]. arXiv preprint arXiv:2307.02486, 2023. [50] Robertson, S. E., S. Walker, S. Jones, M. M. Beaulieu, and M. Gatford. Okapi at TREC-3. TREC-3 (1994), pp. 109–126. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for the comprehensive response and for conducting additional experiments. Many of my initial concerns have been addressed, prompting me to revise my score from 4 to 5. However, the score remains at this level due to the limited novelty of the proposed method (instruction tuning) and the clarity issues mentioned above. Thank you for your understanding.
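To make the top-K retrieval step in A5 concrete, here is a minimal, self-contained sketch: each tool description is scored against the user input with BM25 [50] and only the K best tools are kept in the prompt. The tool names, descriptions, and helper function are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch of BM25-based top-K tool retrieval (Okapi BM25 [50]).
import math
from collections import Counter

def bm25_top_k(query: str, tool_descriptions: dict, k: int = 3,
               k1: float = 1.5, b: float = 0.75):
    docs = {name: d.lower().split() for name, d in tool_descriptions.items()}
    n_docs = len(docs)
    avg_len = sum(len(d) for d in docs.values()) / n_docs
    # Document frequency of each term across tool descriptions.
    df = Counter(t for d in docs.values() for t in set(d))
    scores = {}
    for name, doc in docs.items():
        tf = Counter(doc)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (n_docs - df[term] + 0.5) / (df[term] + 0.5))
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(doc) / avg_len))
        scores[name] = score
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Made-up toolset for illustration only.
tools = {"detector": "detect objects and boxes in the image",
         "pose": "extract the human pose from the image",
         "caption": "describe the content of the image"}
print(bm25_top_k("find the woman in the photo and get her pose", tools, k=2))
```

Only the retrieved subset of tool definitions would then be placed into the prefix prompt, which is what keeps the prompt length bounded as the toolset grows.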
Summary: This paper utilizes ChatGPT to generate instruction data for training moderately sized open-source models to use tools. The authors fine-tune LLMs using LoRA and demonstrate their effectiveness on various visual problems. Additionally, the paper establishes a benchmark for evaluating the LLMs' tool-usage ability. Strengths: 1. This paper represents an early effort to use ChatGPT with visual information to produce diverse instruction data for training moderately sized open-source models to use tools, with effective learning outcomes. 2. This paper sets a benchmark standard for evaluating the competency of LLMs in tool usage. 3. The experimental results demonstrate the efficacy of instruction tuning, as evidenced by the superior performance of the tuned 13B model in comparison to the 175B GPT-3.5 model. Additionally, the study highlights the framework's capacity for generalization to unseen tools. Weaknesses: 1. It would be intriguing to understand the reasons behind the significant discrepancy in generalization ability between OPT and LLaMA in Table 2. 2. The prefix prompt used in the study includes both system messages and tool definitions. Although the current context length of open LLMs appears to be sufficient for the 23 tools examined in the paper, it may not be enough if a broader range of tools is considered. Therefore, the present framework would need to address this limitation if it is to accommodate a more diverse range of tools. Technical Quality: 3 good Clarity: 3 good Questions for Authors: please refer to Weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: please refer to Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1:** *It would be intriguing to understand the reasons behind the significant discrepancy in generalization ability between OPT and LLaMA in Table 2.* **A1:** Compared to Vicuna and LLaMA, one possible reason is that OPT has weaker capabilities for multi-turn interaction. There are many samples in Validation-II that require multi-turn tool invocations, and OPT demonstrates poor generalization performance on these samples. In contrast, Vicuna, which is optimized through multi-turn dialogues, performs the best. On the other hand, as shown in Table 5 of the supplementary material, OPT exhibits good generalization performance on Validation-III, which only contains single-turn interactions. **Q2:** *The prefix prompt used in the study includes both system messages and tool definitions. Although the current context length of open LLMs appears to be sufficient for the 23 tools examined in the paper, it may not be enough if a broader range of tools is considered. Therefore, the present framework would need to address this limitation if it is to accommodate a more diverse range of tools.* **A2:** LLMs are currently evolving towards supporting long context; for example, LongNet [49] supports 1B-token inputs. This means that prompts for a massive number of tools will not be an unresolvable issue in the future. In addition, for models with limited context length, we can use tool-retrieval techniques to filter a small subset of the toolset that may be relevant to the user input, and then apply GPT4Tools for inference. To demonstrate effectiveness, we utilized BM25 [50] to retrieve the top-K tools from the 23 tools based on the user input for inference; the results are shown in Table 14. This strategy can mitigate the reliance on long-context models for a large number of tools to some extent. We will provide more experiments on this in the revision.

Table 14: GPT4Tools using the top-K related tools.

| Model | SR_t | SR_act | SR_args | SR |
|---|---|---|---|---|
| Vicuna-13B | 69.2 | 24.5 | 24.6 | 12.3 |
| + GPT4Tools (Top-1) | 87.0 | 55.8 | 57.5 | 54.0 |
| + GPT4Tools (Top-2) | 93.1 | 70.0 | 69.5 | 67.8 |
| + GPT4Tools (Top-3) | 95.8 | 74.4 | 72.9 | 73.1 |
| + GPT4Tools (All) | 99.2 | 98.1 | 92.9 | 89.8 |

[49] Ding J, Ma S, Dong L, et al. LongNet: Scaling Transformers to 1,000,000,000 Tokens[J]. arXiv preprint arXiv:2307.02486, 2023. [50] Robertson, S. E., S. Walker, S. Jones, M. M. Beaulieu, and M. Gatford. Okapi at TREC-3. TREC-3 (1994), pp. 109–126.
Summary: The authors propose GPT4Tools, a new approach to data collection for tool use, along with a novel evaluation method. In this approach, ChatGPT is utilized as a teacher to generate instruction-following data for vision-language tasks related to tools. In addition to the single-turn samples, the authors suggest synthesizing negative samples to instruct the student model on when not to use a tool. They also propose using context samples to guide the student model in utilizing relevant tools based on contextual information. Experimental results, measured in terms of the proposed success rate(s), indicate that open-source models like LLaMA-13B and Vicuna-13B achieve significant performance improvements after LoRA fine-tuning. Furthermore, the authors demonstrate a certain level of capability in utilizing unseen tools after the fine-tuning process. Strengths: * The proposed procedure for instruction data collection is straightforward and easy to follow. The performance improvement of the open-source models is evident. The effectiveness of the proposed negative samples and context samples is clearly demonstrated in facilitating downstream tool usage. * The proposed evaluation metric is logical and valuable in establishing a standard for future research in this field. * The conducted ablation study on data augmentation, model size, and tuning iterations provides valuable insights into understanding the behavior of the model in depth. Weaknesses: * I believe the work focuses on a straightforward procedure for instruction data collection and a standardized evaluation method. Therefore, I don't identify any significant weaknesses, but rather some points for further discussion: * The process of creating synthetic data from ChatGPT and using it to fine-tune smaller models can be seen as a distillation process. Previous studies have demonstrated that, for many tasks, distilled student language models can outperform their teacher language models across various tasks [A1]. What are the authors' thoughts on the relationship between instruction tuning and distillation in the context of tool use? * Vision tasks often exhibit stronger interdependencies. How effectively can the proposed instruction data collection procedure generalize to other domains, such as manipulating a vast number of APIs for cloud services, etc.? * I did not find a clear distinction between the two sub-figures in Figure 2. Could the authors emphasize the key area that readers should focus on? * There is a typo on line 155: "single-tune" should be corrected to "single-turn." [A1] Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes. ACL 2023. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see the weaknesses section above. Overall, I enjoyed reading the submission. Since the work mostly discusses an instruction data collection procedure, it would be great to expand the discussion between the proposed method and other related fields, such as distillation, other instruction-tuning works, etc. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations are included Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1:** *The process of creating synthetic data from ChatGPT and using it to fine-tune smaller models can be seen as a distillation process. Previous studies have demonstrated that, for many tasks, distilled student language models can outperform their teacher language models across various tasks. What are the authors' thoughts on the relationship between instruction tuning and distillation in the context of tool use?* **A1:** From the perspective of training data, distillation requires input samples to be given and then fed into a network to generate training targets. On the other hand, self-instruction tuning uses a network to generate both the input samples and the training targets. Therefore, distillation can also be a way to build training targets for self-instruction tuning. However, the most critical challenge in enabling language models to use tools is the lack of diverse input samples, specifically user instructions. Therefore, we leverage the self-instruction technique to utilize advanced language models in generating user instructions based on image content and tool descriptions. In addition, the effectiveness of distillation is limited by the capabilities of the teacher model, which may introduce considerable noise into the training targets. However, in our GPT4Tools, since we know in advance the toolchains involved in the input samples, we can greatly reduce the noise in the training targets through heuristic rules. **Q2:** *Vision tasks often exhibit stronger interdependencies. How effectively can the proposed instruction data collection procedure generalize to other domains, such as manipulating a vast number of APIs for cloud services, etc.?* **A2:** This is a thoughtful question. In order to enable LLMs to use tools, the following points need to be achieved: judge whether a tool is needed, select the appropriate tool, generate suitable parameters, and interact with the tool in the correct format. In GPT4Tools, we mainly enable the LLM to understand the prompts for the toolset definitions and output in the correct format, while selecting the right tools and parameters relies on the models' own capabilities. This allows our method to perform well on unseen toolsets, as shown in Table 2 and Table 5. We will add more tools unrelated to vision in the revision. **Q3:** *A clear distinction between the two sub-figures in Figure 2. Could the authors emphasize the key area that readers should focus on?* **A3:** Figure 2 illustrates the diversity comparison of the collected instructional dataset with and without the image content. Without image-context priors, language models tend to generate objects of visual instructions from a small subset, which is reflected in the t-SNE as sparser clusters. In contrast, using the image context notably increases diversity, reflected in the visualization as denser and more widely distributed results. Therefore, the language model tuned by image-conditioned data has a higher success rate compared to models without the image content (81.6% vs. 36.9%). --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their rebuttal. I had similar concerns to those raised by reviewers F8mq and MACc. It appears that the authors have addressed these concerns in their response. The authors mostly addressed my concerns as well.
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
On the Learnability of Multilabel Ranking
Accept (spotlight)
Summary: This paper is mainly about the learnability of multilabel ranking. The authors show that a ranking hypothesis class embeds $K^2$ different binary hypothesis classes, and they define two families of ranking loss functions that capture most, if not all, ranking losses used in practice. The paper addresses the fundamental question of when a multilabel ranking problem is learnable, which has remained unanswered despite the vast literature on multilabel ranking. Strengths: The paper provides a significant contribution to understanding when a multilabel ranking problem is learnable, addressing a fundamental question that has remained unanswered despite the extensive literature on multilabel ranking. The authors also provide sufficient conditions for the agnostic Probably Approximately Correct learnability of score-based ranking hypothesis classes. Weaknesses: This paper does not contain any experiments for its theories and algorithms. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I have no questions about this paper. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: This paper does not contain experiments for any of the proposed algorithms and losses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for noting that our work provides a significant contribution to the learnability of multilabel ranking problems. **Q**: *"This paper does not contain any experiments for their theories and algorithms"* A: We thank the reviewer for noting this concern. We would like to emphasize that the main focus of our work is theoretical as we aim to characterize the learnability of multilabel ranking problems. No prior characterization of learnability was known for multilabel ranking. Developing ranking algorithms with strong practical guarantees has been extensively studied, but we believe it is out of scope for this paper.
Summary: This work studies the learnability of multilabel ranking problems in both batch and online settings for different families of ranking losses. The authors show that multilabel ranking learnability is equivalent to the learnabilities of a family of binary classification problems. Strengths: * The paper tackles a well-motivated problem. Besides, since the theory on the learnability of multilabel ranking was blank before this work, the novelty and originality of this paper seem assured. * The paper is overall well-written. The presentation of the theoretical results is rigorous. * The proofs are solid based on my judgment. * The results are surprisingly concise. * The authors present an example showing that the linear ranking hypothesis class is learnable, which demonstrates that their results are practical. Weaknesses: As the authors confirmed, the bounds may not be optimal. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for finding that our paper tackles a well-motivated problem and presents a novel and original contribution to the theoretical foundations of multilabel ranking.
Summary: The authors in their work focus on a single theoretical contribution - showing that the multilabel ranking hypothesis class is PAC learnable for a wide range of ranking losses in both batch and online settings. The authors first define two general families of ranking losses that are equivalent to popularly used losses (like Pairwise Rank Loss, Discounted Cumulative Gain, Reciprocal Rank, Average Precision, and Precision@p). Then they show that, for the defined losses and both batch and online settings, the multilabel ranking hypothesis class $\mathcal{H}$ can be expressed using binary hypothesis classes $\mathcal{H}_i^j$ for $i, j \in [K]$ (where $K$ is the number of labels), where $\mathcal{H}_i^j$ tells whether the label $i$ should be ranked in the top $j$ positions (a rough symbolic sketch of this reduction is given after this review). The authors show that the binary class $\mathcal{H}_i^j$ is PAC learnable, and as a result of that, so is $\mathcal{H}$, and that existing combinatorial dimensions for batch and online learning, like the VC and Littlestone dimensions, characterize learnability in the considered multilabel setting. To be honest, I'm far from the best person to review this work, as I don't know the related works that are most important to this paper except the book of Shalev-Shwartz and Ben-David, which I read some time ago, so I cannot judge it in the context of recent works in this area. Since the proof is only slightly outlined in the main paper, I tried to check the appendix as much as time allowed me. I checked Appendix A, B, and most of C. I did not check Appendix D and E. Assuming that all the parts are correct, I believe this is a solid contribution that should be accepted. Strengths: - The paper contributes to the fundamental understanding of statistical learning theory for learning multilabel ranking. - The presented proofs are non-trivial but, at the same time, clearly presented in the appendix. Weaknesses: - The main proofs are only slightly outlined in the main paper; by slightly, I mean they don't give any idea behind the proof, only the intermediate results. Because of that, the main paper feels more like an introduction to notation, and then it basically forces the reader to switch to reading a long appendix (or the reader can just stop reading and trust the authors that the result is valid). The paper would definitely benefit from a longer form. Also, the appendix would benefit from including the original theorems and lemmas to reduce the need for switching between the main paper and the appendix. - It seems to me that saying the two families of defined ranking losses capture "most if not all ranking losses used in practice" is a bit overstated. The result is not surprising, as losses equivalent to these families (listed in Appendix A) are simple losses with (I think) the same form of an optimal hypothesis that predicts the top $p$ or $K$ labels in descending order of expected relevance. These families may not capture more complex measures, but I agree that they are very popular. - This is a notation-dense paper; as a reader, I would appreciate some more comments/full-sentence explanations of the notation in Sections 2 and 3 to confirm my understanding of it before jumping to the main results. For example, it was not clear to me whether relevance is descending, meaning B is the most relevant and 0 is the least relevant, or the opposite. I could only confirm it by reading the definitions of $\ell_{\text{sum}}^{@p}$ and $\ell_{\text{prec}}^{@p}$ and figuring out which one makes sense, but that was one page later. Again, I believe this is a fault of limited space.
- NIT: The discussion section is not really a discussion, just a short summary. - NIT: The lack of equation numbers is inconvenient when reading the paper like this (for example, to make notes on equations, as there is no defined way to address them). - NIT: There are equations that go over the right margin, which is a bit inelegant when there is a strict space limit. - NIT: Bit string? Why not just binary vector? To me, a bit string is a compact implementation of a binary vector. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I don't really have questions. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There are no direct negative social impacts of this work. The limitation of tightness of sample complexity and regret is mentioned in the last section. I do not see any other limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
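As referenced in the summary above, here is a rough symbolic sketch of the binary reduction; the notation is approximate and the paper's exact definitions may differ in details.

```latex
% Rough sketch of the reduction described in the review's summary.
% H is the ranking hypothesis class over K labels; rank_i(h(x)) denotes
% the position at which h ranks label i on input x.
\[
\mathcal{H}_i^j \;=\; \Bigl\{\, x \mapsto \mathbb{1}\bigl[\operatorname{rank}_i(h(x)) \le j\bigr] \;:\; h \in \mathcal{H} \,\Bigr\},
\qquad i, j \in [K].
\]
% The paper shows, roughly, that H is learnable for the considered loss
% families if and only if each of the K^2 binary classes H_i^j is learnable,
% so VC/Littlestone dimensions of these classes govern the batch/online rates.
```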
Rebuttal 1: Rebuttal: We thank the reviewer for finding that our paper contributes to the fundamental understanding of statistical learning theory for multilabel ranking. **Q1**: *"The main proofs are only slightly outlined in the main paper; "* A1: We thank the reviewer for pointing this out. We will make sure to add detailed proof sketches in the main text for the camera-ready version. In addition, we will include the original theorems and lemmas in the Appendix. **Q2**: *"It seems to me that two families of defined ranking losses capture "most if not all ranking losses used in practice" is a bit overstated. "* A2: We will dial back this claim and instead rephrase it to "many popular ranking losses used in practice". That said, our loss families do capture most of the ranking losses used in practice. For example, all the ranking losses found in the popular MULAN library can be categorized into one of the two families. **Q3**: *"This is a notation-dense paper;"* A3: We agree with the reviewer that this is a notation-dense paper. We will make sure to add more explanations of the notation in Sections 2 and 3. Moreover, we will make sure to explicitly highlight that larger scores imply higher relevance. **Q4**: *"The discussion section is not really a discussion, just a short summary."* A4: We will expand on the discussion section by including limitations and future directions in the camera-ready version. **Q5**: *"NITS"* A5: We thank the reviewer for pointing out these issues. We will address all these comments in the camera-ready version. --- Rebuttal Comment 1.1: Title: Re: Rebuttal by Authors Comment: Thank you for your response. Regarding your response to Q2, I don't think the MULAN lib is a good indicator/example here, as it is no longer popular, widely used, or actively developed. Also, it would be nice to see a somewhat more complete answer to my Q4 and to Q4 of Reviewer edyo; meaning, I would like to see at least a summary of these promised discussions. --- Reply to Comment 1.1.1: Title: Response to Reviewer euAG Comment: Our discussion will roughly address the following points: 1. Our lower and upper bounds on the sample complexity and regret are not optimal. Given a particular ranking loss, it will be interesting to derive the optimal sample complexity and regret in the realizable and agnostic settings. In addition, our bounds depend on the number of labels K. Recently, K-free bounds have been achieved for multiclass classification problems in both batch and online settings. An interesting future direction is to explore whether K-free bounds are possible for multilabel ranking. 2. Since our focus was on establishing learnability, our algorithms are not computationally efficient. However, in practice, people typically run ERM. So, it is an interesting future direction to tightly quantify the sample complexity of ERM in the batch setting. 3. In learning theory, combinatorial dimensions play an important role in giving a tight quantitative characterization of learnability. It is an interesting future direction to identify combinatorial dimensions that characterize multilabel ranking learnability for specific loss functions. 4. Our paper only studies a specific class of ranking loss functions. Accordingly, we leave it open to characterize the learnability of other natural ranking loss functions. Two such loss functions are recall@p and the thresholded version of the sum loss described in answer A7 of our response to Reviewer edyo.
With regard to consistency, we will expand on the following: several works have studied the consistency of convex surrogates of natural ranking losses, such as the pairwise ranking loss, NDCG, Average Precision, and so forth [1, 2, 3, 4, 5, 6]. However, even for these aforementioned losses, the question of learnability has remained open. We close this gap by characterizing the learnability of these natural losses. Citations: 1. Duchi, John C., Lester W. Mackey, and Michael I. Jordan. "On the Consistency of Ranking Algorithms." ICML. 2010. 2. Buffoni, David, et al. "Learning scoring functions with order-preserving losses and standardized supervision." The 28th International Conference on Machine Learning (ICML 2011). 2011. 3. Gao, Wei, and Zhi-Hua Zhou. "On the consistency of multi-label learning." Proceedings of the 24th annual conference on learning theory. JMLR Workshop and Conference Proceedings, 2011. 4. Ravikumar, Pradeep, Ambuj Tewari, and Eunho Yang. "On NDCG consistency of listwise ranking methods." Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. JMLR Workshop and Conference Proceedings, 2011. 5. Calauzenes, Clément, Nicolas Usunier, and Patrick Gallinari. "On the (non-) existence of convex, calibrated surrogate losses for ranking." Advances in Neural Information Processing Systems 25 (2012). 6. Dembczynski, Krzysztof, Wojciech Kotlowski, and Eyke Hüllermeier. "Consistent multilabel ranking through univariate losses." arXiv preprint arXiv:1206.6401 (2012).
Summary: The authors study the problem of learnability of label ranking for a large family of ranking losses. The paper is purely theoretical. The main results are built on the equivalence of realizable and agnostic learnability in the PAC model for binary classification. Using the techniques from Hopkins et al. (2022) and Raman et al. (2023), the authors show the learnability of label ranking in batch and online settings. Furthermore, they prove that ranking based on linear functions is learnable in the batch setting. Strengths: - Important theoretical contribution - Practical learning problem The paper is an important theoretical contribution to a practical learning problem. The result is novel and seems to be correct. Nevertheless, it is rather a straightforward extension of previous results. Weaknesses: - Confusing problem setup - No discussion of related work - Structure of the paper The problem setting seems to be a bit confusing. I would call it rather label ranking (potentially with ties) instead of multi-label ranking. The latter suggests that the feedback information is binary. Alternatively, a wider discussion of the differences between the learning problems should be given. The paper lacks a discussion of related work. The obtained results should be discussed in light of the previous theoretical results on multi-label ranking (e.g., Dembczynski et al., 2012, Gao and Zhou, 2011, Koyejo et al., 2015). Unfortunately, the link to those results is not given in the paper. Having a page limit, it is always hard to properly divide the results between the main text and the appendix. I appreciate that the authors decided to give the most important proofs in the main text. Unfortunately, the price for it is that the paper lacks the context (clear definition of the problem, discussion of ranking loss functions), wider discussion of the results (examples, related work), and discussion of limitations and practical implications. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Learnability is an important concept, but consistency is important as well. How do the results from the submitted paper relate to the previous results concerning the consistency of multi-label ranking? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The realizable-to-agnostic conversion is not computationally efficient. How does it impact the main findings of the submission? It seems there are many other loss functions of a "ranking" type in multi-label classification. How would those results apply to, for example, recall@p? It seems that macro-metrics could also be interpreted as a kind of "ordering" function. The abstract says that the results "capture most, if not all, losses used in practice." Is the statement not too strong? What are examples of losses not included in the two families considered in the paper? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for noting that this work is an important theoretical contribution to a practical learning problem. **Q1**: *"The problem setting seems to be a bit confusing. I would call it rather label ranking (potentially with ties) instead of multi-label ranking. The latter suggests that the feedback information is binary."* A1: We thank the reviewer for pointing this out. We note that by taking B = 1 in Line 55, our model allows the feedback to be binary. We will also include a broader discussion of the differences between the learning problems in the camera-ready version. **Q2**: *"The paper lacks a discussion of related work."* A2: We thank the reviewer for pointing out these works. We will make sure to include a discussion of these works, along with a comparison to our results, in the camera-ready version. **Q3**: *"Unfortunately, the price for it is that the paper lacks the context (clear definition of the problem, discussion of ranking loss functions), wider discussion of the results (examples, related work), and discussion of limitations and practical implications."* A3: We thank the reviewer for bringing up these concerns. We will consider moving all proofs to the Appendix and expand more on the problem setup, ranking loss functions, related work, limitations, and practical implications. **Q4**: *"Learnability is an important concept, but consistency is important as well."* A4: We thank the reviewer for pointing out the problem of consistency. While we do think that consistency is important, the main focus of this paper is establishing the learnability of multilabel ranking, which, a priori, was not known. In the camera-ready version, we will make sure to reference and state the prior results on consistency for multi-label ranking and mention that the main focus of this work is on learnability. **Q5**: *"The realizable-to-agnostic conversion is not computationally efficient."* A5: We thank the reviewer for pointing this out. Indeed, our realizable-to-agnostic conversion is not computationally efficient. However, the main focus of this work is statistical and, more specifically, to provide a characterization of multilabel learnability. That said, we do think that establishing learnability via efficient reductions and constructing computationally efficient learning algorithms for multilabel ranking is an interesting future direction. **Q6**: *"It seems there are many other loss functions of a "ranking" type in multi-label classification. How would those results apply to, for example, recall@p?"* A6: We thank the reviewer for pointing out that there are other loss functions of a ranking type in multilabel classification, like the recall@p loss (under binary relevance-score feedback). Unfortunately, the recall@p loss does not fall in either the sumloss@p family or the precision@p loss family. This is because if the number of relevant items is more than p, then recall@p will never be 0 (since it is not possible to return all the relevant items). In addition, the pairwise ranking loss is a popular "ranking"-type loss in multi-label classification, which we do capture in the sumloss family. Overall, we agree that the loss functions captured by the sumloss@p and precloss@p families are not exhaustive. However, we do still believe that they capture the most popular ranking loss functions used in practice. **Q7**: *"The abstract says that the results "capture most, if not all, losses used in practice." Is the statement not too strong?
What are examples of losses not included in the two families considered in the paper?"* A7: We will dial back this claim and instead rephrase it as "most ranking losses used in practice". That said, our loss families do capture most of the ranking losses used in practice. For example, all the ranking losses found in the popular MULAN library can be categorized into one of the two families. One example of a ranking loss that is not included in either of the two families is a thresholded version of the sum loss. That is, consider the loss function defined by first computing the sum loss and then setting it to zero if its value is smaller than some preselected threshold. --- Rebuttal Comment 1.1: Title: After rebuttal Comment: I thank the Authors for their responses. In general, they answer my questions. Nevertheless, the paper needs substantial editorial work, as the Authors promise to improve the text by adding additional clarifications, to move "all proofs to Appendix" and to "expand more on the problem setup," but also to "add detailed proof sketches in the main text" (as promised to Reviewer euAG). This is quite a challenge, but I hope the Authors will succeed with it.
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Demystifying Softmax Gating Function in Gaussian Mixture of Experts
Accept (spotlight)
Summary: This paper addresses the long-standing challenge of parameter estimation in softmax gating Gaussian mixture of experts. The authors identify three theoretical challenges related to softmax gating: (i) the non-uniqueness of parameter estimation, (ii) the interaction between softmax gating and expert functions, and (iii) the complex dependence in the conditional density. They propose Voronoi loss functions to overcome these challenges and establish convergence rates for the maximum likelihood estimator (MLE). Additionally, they discover a connection between the MLE rate and a polynomial-equation solvability problem when the number of experts is unknown or over-specified. Strengths: The paper proposes a novel solution to a long-standing problem in the literature, addressing three fundamental challenges associated with softmax gating. By introducing Voronoi loss functions and establishing convergence rates for the maximum likelihood estimator, the authors offer a fresh perspective on parameter estimation in these models. The paper contributes to the theoretical understanding of softmax gating Gaussian mixture of experts. It tackles the issues of non-uniqueness, the interaction between gating and expert functions, and the complex dependence in the conditional density. By resolving these challenges, the authors enhance our knowledge of the underlying principles and dynamics of these models. The paper establishes a connection between the MLE rate and a solvability problem for a system of polynomial equations. This connection adds an interesting aspect to the study, highlighting the potential relationships between statistical estimation and mathematical problem-solving. Weaknesses: Firstly, the absence of a comparison or discussion of popular techniques such as Gumbel-softmax [1] and max-propagate [2] raises concerns. These techniques are widely used in probabilistic MOE methods, and their exclusion from the paper may limit its completeness and applicability. Secondly, the lack of experimental validation is another potential issue. The practical relevance of the proposed methods is claimed by the authors, but without empirical evidence or experiments, it is challenging to assess the true impact and effectiveness of the proposed approach in real-world scenarios. [1] E. Jang, et al. "Categorical Reparameterization with Gumbel-Softmax." International Conference on Learning Representations (2017). [2] J. Ren, et al. "Probabilistic mixture-of-experts for efficient deep reinforcement learning." arXiv preprint arXiv:2104.09122 (2021). Technical Quality: 3 good Clarity: 3 good Questions for Authors: No Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The absence of a comparison or discussion of popular techniques such as Gumbel-softmax and max-propagate raises concerns. These techniques are widely used in probabilistic MOE methods, and their exclusion from the paper may limit its completeness and applicability.** Thanks for your suggestion; we will consider adding a discussion of Gumbel-Softmax and max propagation to the revision of our paper. However, we would like to emphasize that the main objective of this work is to study the convergence rates for parameter estimation in the Gaussian mixture of experts with softmax gating, which is commonly used in many machine learning and deep learning applications [1, 2] but has long lacked theoretical understanding. Therefore, the Gumbel-Softmax and max-propagation techniques go beyond the scope of our paper, and we believe that proof techniques need to be further developed to handle these settings. **Q2: The lack of experimental validation is another potential issue, which makes it challenging to assess the true impact and effectiveness of the proposed approach in real-world scenarios.** Thanks for raising your concerns. In Appendix C of the supplementary material, we already carried out several experiments to empirically verify our theoretical results; please refer to that appendix for further details of the experimental validation. Regarding the implications of the theoretical results for real-world scenarios, please refer to Common Question 2 in the General Response section. **References** [1] Hussein Hazimeh, Zhe Zhao, Aakanksha Chowdhery, Maheswaran Sathiamoorthy, Yihua Chen, Rahul Mazumder, Lichan Hong, and Ed H. Chi. Dselect-k: Differentiable selection in the mixture of experts with applications to multi-task learning. In Advances in Neural Information Processing Systems 34, 2021. [2] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In International Conference on Learning Representations 2017. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal! My concerns have been sufficiently addressed and I raised my score. --- Reply to Comment 1.1.1: Title: Thank You Comment: We thank Reviewer 8L63 for your positive evaluation of our paper after the rebuttal and for increasing the score to accept. Best, The Authors
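For reference, a minimal sketch of the Gumbel-Softmax sampling technique raised in Q1 (reference [1] in the review) is shown below; it is included only to illustrate the technique the reviewer mentions and is not part of this paper's softmax-gating analysis. The function name and temperature value are illustrative.

```python
# Minimal sketch of Gumbel-Softmax sampling: a differentiable,
# approximately one-hot draw over mixture experts. Illustrative only.
import numpy as np

def gumbel_softmax(logits: np.ndarray, tau: float = 1.0, seed=None):
    rng = np.random.default_rng(seed)
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))        # Gumbel(0, 1) noise
    y = (logits + g) / tau         # tau -> 0 approaches a hard one-hot
    y = np.exp(y - y.max())        # numerically stable softmax
    return y / y.sum()

print(gumbel_softmax(np.array([2.0, 0.5, -1.0]), tau=0.5, seed=0))
```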
Summary: The authors explore the theoretical complexity of the standard Gaussian mixture of experts and propose a Voronoi loss which estimates different components of the MLE using different Voronoi cells. Update: I have read the rebuttal and the other reviewers' responses, and will keep the score at 7 (accept). Strengths: The authors establish versions of the Voronoi loss in multiple scenarios, both in exact-fitted settings and in over-fitted settings. The Voronoi loss function in both settings is computationally efficient up to a linear order of the number of experts, assuming that the true number of experts is fixed. Weaknesses: Unfortunately, there does not seem to be any practical exploration of the Voronoi loss in a real-world setting to verify the theoretical findings; that said, this should be fine given the paper's positioning as a more theoretical work. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Minor notes: there seems to be a switching between Vononoi vs Voronoi in the paper. Make sure to standardize them! Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Unfortunately, there does not seem to be any practical exploration of the Voronoi loss in a real-world setting to verify the theoretical findings; that said, this should be fine given the paper's positioning as a more theoretical work.** Thanks for your comment. We would like to refer the reviewer to Appendix C in the supplementary material, where we ran several experiments under synthetic settings to empirically validate our theoretical results. For the connection of our theoretical results to real-world settings, please refer to Common Question 2 in the General Response section. **Q2: Minor notes: there seems to be a switching between Vononoi vs Voronoi in the paper.** Thanks for pointing this out; we will correct it in the revision of our paper.
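A rough sketch of a Voronoi-cell construction of the kind described in these reviews follows; the notation is an assumption on our part and the paper's exact definition may differ. Each fitted atom $\theta_i$ is assigned to the cell of the nearest true atom $\theta_j^*$.

```latex
% Sketch of a Voronoi-cell construction for the Voronoi loss; notation is
% approximate. k fitted atoms are partitioned by nearest true atom among
% k^* true atoms:
\[
\mathcal{A}_j \;=\; \bigl\{\, i \in [k] \;:\; \|\theta_i - \theta_j^*\| \le \|\theta_i - \theta_\ell^*\| \ \text{for all } \ell \in [k^*] \,\bigr\},
\qquad j \in [k^*].
\]
% The loss then aggregates, cell by cell, the parameter error of the fitted
% atoms in A_j together with the mismatch between their total weight and
% the true weight of component j.
```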
Summary: This paper establishes convergence rates for parameters (Gaussian mean / variance and expert weight) for softmax Gaussian mixture of experts. The rates depend on the setting: exact-fitted vs. over-fitted. The solutions are in terms of Voronoi losses and Hellinger distances. Strengths: **Originality** To the best of my knowledge, this paper is the first to provide convergence rates for softmax Gaussian mixture of experts, making it an original paper. Likewise, the methodology for obtaining this result involves a rather involved setup, which struck me as highly non-trivial, again supporting claims of originality. Of course, this is not my area of expertise, so it is difficult for me to accurately assess these points. **Quality** I believe the quality of the paper is strong. The authors provide a thorough description of the setup for softmax Gaussian mixture of experts, with rigorous proofs for the cases of exact-fitted and over-fitted number of experts. I have not assessed the quality of the proofs in the appendix. **Clarity** Despite tackling a complex theoretical problem and proposing a fairly involved solution, I found the paper was presented well. Terms are defined clearly, and the authors walk the reader through the main results before diving into more detail. **Significance** The results seem marginally significant, as the authors claim that they have some practical implications for setting the number of experts and post-processing techniques, like merge-truncate-merge. However, fully demonstrating these practical insights, as well as empirical verification of the convergence rates, are not included in the paper, somewhat limiting their significance. Weaknesses: I have several concerns regarding the **significance** of the results. First, these results target the convergence of the softmax mixture of expert parameters; however, in practical settings, I would imagine that these parameters are the outputs of deep networks, which may complicate the convergence picture. Second, the convergence results are derived by considering the number of “true” experts; however, it’s unclear to me whether there is a notion of a “true” expert, particularly if the parameterization is used inside a neural network / model to model a latent variable. Finally, while I understand that this is an entirely theoretical paper, connecting it more closely to the practical settings in which softmax mixtures of experts are used would help to more clearly demonstrate the significance of the paper. For instance, even if it’s in a toy setting, is there any way to empirically verify the convergence rates for exact-fitted vs. over-fitted setups? Regarding **presentation**, I found the paper overall quite clear given the complexity of the result. However, some diagrams (there are none in the main paper) could possibly fill in any gaps in the reader’s understanding. For instance, diagrams showing the softmax Gaussian mixture of experts setting, as well as other concepts, like Voronoi cells, Hellinger distance, etc., would be useful additions to the paper. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: My main question is regarding the significance of the result and connecting the result to practical empirical settings — what can be done to make this connection clearer? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: The authors have adequately addressed the main limitations of their work in the Discussion section of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The paper targets the convergence of the softmax mixture of expert parameters; however, in practical settings, I would imagine that these parameters are the outputs of deep networks, which may complicate the convergence picture.** Thanks for your question. Regarding the softmax gating Gaussian mixture of experts with general expert functions (including deep neural networks), the conditional density of $Y$ given $X$ is given by $$g_{G_*}(Y|X):=\sum_{i=1}^{k_*}\dfrac{\exp((\beta^*_{1i})^{\top}X+\beta^*_{0i})}{\sum_{j=1}^{k_*}\exp((\beta^*_{1j})^{\top}X+\beta^*_{0j})}f(Y|h_1(X,\theta^*_{1i}),h_2(X,\theta^*_{2i})),$$ where $f(\cdot|\mu,\sigma)$ denotes a Gaussian density with mean $\mu$ and variance $\sigma$, while $h_1:\mathcal{X}\times\Theta_1\to\Omega_1\subset\mathbb{R}$ and $h_2:\mathcal{X}\times\Theta_2\to\Omega_2\subset\mathbb{R}^+$ are referred to as expert functions, which can be chosen as deep neural networks. We say that $h_1$ and $h_2$ are algebraically independent if they are twice differentiable w.r.t. their parameters $\theta_1\in\Theta_1\subset\mathbb{R}^{q_1}$ and $\theta_2\in\Theta_2\subset\mathbb{R}^{q_2}$, and all the following sets are linearly independent w.r.t. $X$: (S.1) $\{\dfrac{\partial h_1}{\partial\theta_1^{(u)}}(X,\theta_1)\dfrac{\partial h_1}{\partial\theta_1^{(v)}}(X,\theta_1), X^{w}\dfrac{\partial h_2}{\partial\theta_2^{(z)}}:1\leq u\leq v\leq q_1, 1\leq z\leq q_2,0\leq |w|\leq 1\}$, (S.2) $\{\dfrac{\partial h_2}{\partial\theta_2^{(u)}}(X,\theta_2)\dfrac{\partial h_2}{\partial\theta_2^{(v)}}(X,\theta_2):1\leq u\leq v\leq q_2\}$, (S.3) $\{\dfrac{\partial h_1}{\partial\theta_1^{(u)}}(X,\theta_1)\dfrac{\partial h_2}{\partial \theta_2^{(v)}}(X,\theta_2):1\leq u\leq q_1, 1\leq v\leq q_2\}$, (S.4) $\{X^{w}\dfrac{\partial h_1}{\partial\theta_1^{(u)}}(X,\theta_1):1\leq u\leq q_1,0\leq|w|\leq 1\}$, where $\theta_1^{(u)}$ denotes the $u$-th entry of $\theta_1$. For example, if $h_1(X,\theta_1)=\theta_1^{\top}X$ and $h_2(X,\theta_2)=\theta_2$, then $h_1$ and $h_2$ are algebraically independent. Under over-fitted settings, if $h_1$ and $h_2$ are algebraically independent expert functions, we propose using the following Voronoi loss for the convergence analysis: $$\mathcal{D}(G,G_*):=\inf_{t_1,t_2}\Bigg(\sum_{j:|\mathcal{A}_j|>1}\sum_{i\in\mathcal{A}_j}\exp(\beta_{0i})\big\|(\Delta_{t_2}\beta_{1ij},\Delta\theta_{1ij},\Delta\theta_{2ij})\big\|^2+\sum_{j:|\mathcal{A}_j|=1}\sum_{i\in\mathcal{A}_j}\exp(\beta_{0i})\big\|(\Delta_{t_2}\beta_{1ij},\Delta\theta_{1ij},\Delta\theta_{2ij})\big\|+\sum_{1\leq j\leq k_*}\Big|\sum_{i\in\mathcal{A}_j}\exp(\beta_{0i})-\exp(\beta^*_{0j}+t_1)\Big|\Bigg),$$ where $\Delta_{t_2}\beta_{1ij}:=\beta_{1i}-(\beta^*_{1j}+t_2)$, $\Delta\theta_{1ij}:=\theta_{1i}-\theta^*_{1j}$, and $\Delta\theta_{2ij}:=\theta_{2i}-\theta^*_{2j}$. In particular, we find that the MLE $\widehat{G}_n$ defined as in our work still converges to the true mixing measure $G_*$ under the above loss at a rate of $\mathcal{O}(n^{-1/2})$ (up to some logarithmic term). Consequently, the estimation rates for over-fitted parameters $\beta^*_{1j},\theta^*_{1j},\theta^*_{2j}$ are $\mathcal{O}(n^{-1/4})$, while those for exact-fitted parameters are $\mathcal{O}(n^{-1/2})$. However, when $h_1$ and $h_2$ are not algebraically independent, the estimation rates for true parameters will vary with the choice of expert functions. 
For instance, in our work, $h_1(X,(a,b))=a^{\top}X+b$ and $h_2(X,\sigma)=\sigma$ are not algebraically independent since the corresponding set (S.4) is linearly dependent. **Q2: The convergence results are derived by considering the number of “true” experts; however, it’s unclear to me whether there is a notion of a “true” expert.** Thanks for your question. We would like to clarify that we did not use the term “the number of true experts” in our paper. However, we used two other terms which are possibly related to that term: 1) “the true number of experts” (line 118), which was denoted by $k_*$; 2) “true experts” (first line on page 6), which indicates the experts with true parameters, e.g. $f(Y|(a^*_i)^{\top}X+b^*_i,\sigma^*_i)$. **Q3: Is there any way to empirically verify the convergence rates for exact-fitted vs. over-fitted setups?** Thanks for your question. One can refer to Appendix C in the supplementary material, where we conducted several experiments to empirically justify the theoretical convergence rates for both exact-fitted and over-fitted settings. **Q4: Some diagrams could possibly fill in any gaps in the reader’s understanding. For instance, diagrams showing the softmax Gaussian mixture of experts setting, as well as other concepts, like Voronoi cells, Hellinger distance, etc.** Thanks for your suggestion; we will add the illustration of Voronoi cells in the PDF file (attached at the end of the General Response section) to the revision of our paper. Image Caption: Illustration of Voronoi cells under the over-fitted settings. Blue triangles represent true components, while red circles indicate fitted components. By definition, each Voronoi cell corresponds to only one true component, i.e. one blue triangle, and its cardinality is the number of corresponding fitted components, i.e. the number of red circles. For instance, in cell 1, the blue triangle is approximated by two circles, implying that the cardinality of cell 1 is two. Then, from the comments on Theorem 2 (lines 237–244), since $\bar{r}(2)=4$, the estimation rates for $\beta^*_{11},b^*_1$ are $\mathcal{O}(n^{-1/8})$, while those for $a^*_1,\sigma^*_1$ are $\mathcal{O}(n^{-1/4})$ (up to some logarithmic term). **Q5: Regarding the significance of the result and connecting the result to practical empirical settings — what can be done to make this connection clearer?** Thanks for your question. Please refer to Common Question 2 in the General Response section for our response. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thank you for your rebuttal. After reading the other reviews and the authors' responses, I have decided to raise my score. --- Reply to Comment 1.1.1: Comment: We thank Reviewer txMC for your positive evaluation of our paper after the rebuttal and for increasing the score to accept (7). We really appreciate that! Best, The Authors
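To make the conditional density in the rebuttal above concrete, here is a minimal NumPy sketch of $g_{G}(Y|X)$ with the linear-mean experts used in the paper ($h_1(X,(a,b))=a^{\top}X+b$, $h_2(X,\sigma)=\sigma$). This is an illustrative sketch rather than the authors' implementation, and the parameter shapes are assumptions:

~~~python
import numpy as np
from scipy.stats import norm

def moe_density(y, x, beta1, beta0, a, b, sigma):
    """Softmax-gated Gaussian mixture-of-experts density g(y|x).

    Assumed shapes: beta1 (k, d) gating slopes, beta0 (k,) gating offsets,
    a (k, d) expert slopes, b (k,) expert offsets, sigma (k,) variances.
    """
    logits = beta1 @ x + beta0             # gating scores beta_{1i}^T x + beta_{0i}
    gates = np.exp(logits - logits.max())  # subtract max for numerical stability
    gates /= gates.sum()                   # softmax gating weights
    means = a @ x + b                      # expert means a_i^T x + b_i
    # f(. | mean, variance) is Gaussian, so the scale is sqrt(variance)
    return float(gates @ norm.pdf(y, loc=means, scale=np.sqrt(sigma)))
~~~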
Summary: This paper studies the convergence rate of the Maximum Likelihood Estimate (MLE) of parameters of Gaussian mixture of experts with softmax gating. Unlike some of the previous papers, the authors allow the gating probabilities to be input dependent. A convergence rate (upper bound) is provided both for the setting where the number of components is known, and when only a lower bound is known. For the latter case, the authors show a connection between the MLE convergence rate and the solvability of a central system of polynomial equations. Strengths: * Makes progress on a difficult problem complicated by lack of identifiability and complex interactions between parameters of the gating mechanism and of the individual components. * Proposes a new "Voronoi loss", which lower bounds the Hellinger distance and is crafted to capture properties of the mixture of experts model (the involved Voronoi cells roughly correspond to the subsets of the input space dominated by a particular expert under the ground truth). * AFAICT provides the first MLE rates for models where the gating mixture is input dependent. Weaknesses: * As someone with only a cursory understanding of the area (emergency reviewer), I found the paper hard to follow and full of technical detail that to me felt like it was obfuscating rather than facilitating understanding. * Unclear what a practitioner can take away, except that they should set the number of components as low as possible. (This is a very minor weakness, as I don't think every paper needs to have a direct application.) Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * With the problem of identifiability, what happens if we introduce some penalty ($\ell^1$, $\ell^2$)? I was surprised that this (perhaps naive) solution was not discussed in the background section or anywhere else. * Could you make more explicit how the level of separation between individual components of the ground truth impacts the convergence rate? Intuitively, the more separated, the easier the estimation problem should be. * Potentially related question: What do you mean when you say that a particular Voronoi cell contains only one component of the MLE? Isn't the MLE a random variable? (Wouldn't that imply that the identity of cells which have only one component changes? Also what about the cells which have zero MLE components in them?) Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: As someone with only a cursory understanding of the area, I found the paper hard to follow and full of technical detail that to me felt like it was obfuscating rather than facilitating understanding.** Thanks for your comment. We would like to refer you to Common Question 1 in the General Response to understand our paper better. **Q2: Minor issue: Unclear what a practitioner can take away, except that they should set the number of components as low as possible.** Thank you for raising your concerns. In fact, there are two other important messages: (1) The slow convergence rates of the MLE may provide important thresholds for the merge-truncate-merge procedure, a procedure that has been used to estimate the true number of components in standard mixture models [1], to consistently estimate the true number of experts $k_*$ in our mixture of experts; (2) We also illustrate the heterogeneous convergence rates of the MLE for solving the parameter estimation of the softmax gating Gaussian mixture of experts via exact-fitted and over-fitted models. Although these empirical rates of convergence are similar for the two models, they imply that the convergence behaviour of the individual fitted parameters is very different in an over-fitted setting, which is discussed in more detail in Section 3.2 of our paper. **Q3: With the problem of identifiability, what happens if we introduce some penalty $(\ell_1,\ell_2)$?** Thank you for your question. We are not sure whether introducing an $\ell_1$ or $\ell_2$ penalty would solve the identifiability problem. Furthermore, the proof techniques would need to be further developed to extend our proposed approach to these settings in order to establish the convergence rates for the parameter estimation problem in penalized mixtures of experts. **Q4: Could you make more explicit how the level of separation between individual components of the ground truth impacts the convergence rate? Intuitively, the more separated, the easier the estimation problem should be.** Thanks for your question. In this work, we do not impose any assumptions on the level of separation between ground-truth parameters. Moreover, it can be seen from Theorem 1 and Theorem 2 that the estimation rates for ground-truth parameters are independent of that separation level. Instead, under the over-fitted settings, those estimation rates depend on the solvability of a system of polynomial equations. For instance, Theorem 2 implies that a true parameter $a^*_j$ has an estimation rate of order $\mathcal{O}(n^{-1/\bar{r}(|\mathcal{A}_j|)})$ (up to some logarithmic term) for any $j\in\{1,2,\ldots,k_*\}$. Since $\bar{r}$ is a monotonically increasing function of $|\mathcal{A}_j|$, the lower the cardinality of the Voronoi cell $\mathcal{A}_j$ is, the better the previous estimation rate for $a^*_j$ becomes. **Q5: What do you mean when you say that a particular Voronoi cell contains only one component of the MLE? Isn't the MLE a random variable? (Wouldn't that imply that the identity of cells which have only one component changes? Also what about the cells which have zero MLE components in them?)** Thanks for your question. We would like to emphasize that as the MLE $\widehat{G}_n$ is a random variable, the Voronoi cells $\mathcal{A}^n_j$ for $j\in\{1,2,\ldots,k_*\}$ corresponding to the MLE are also random. 
Under the over-fitted settings, the number of components of the MLE $\widehat{G}_n$ is strictly larger than that of the true mixing measure $G_*$, i.e. $\widehat{k}_{n}>k_*$. Thus, according to the pigeonhole principle, there must exist some Voronoi cell having at least two elements. Additionally, we could also find some Voronoi cell $\mathcal{A}^n_j$ which has only one element $i$, and this element corresponds to the component $(\widehat{\beta}_{1i},\widehat{a}_i,\widehat{b}_i,\widehat{\sigma}_i)$ of the MLE. This is what we mean when we mention a Voronoi cell which contains only one component of the MLE. Regarding the question about Voronoi cells having zero MLE components, we get from Theorem 2 that the Voronoi loss $\mathcal{D}_2(\widehat{G}_n,G_*)$ converges to zero as $n$ goes to infinity. Then, it follows from the definition of this Voronoi loss that $(\widehat{\beta}_{1i},\widehat{a}_i,\widehat{b}_i,\widehat{\sigma}_i)$ converges to $(\beta^*_{1j},a^*_{j},b^*_{j},\sigma^*_{j})$ for any $i\in\mathcal{A}^n_j$ and $j\in\{1,2,\ldots,k_*\}$. This result implies that each Voronoi cell $\mathcal{A}^n_j$ must have at least one element. **References** [1] A. Guha, N. Ho, and X. Nguyen. On posterior contraction of parameters and interpretability in Bayesian mixture modeling. Bernoulli, 27(4):2159–2188, 2021.
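For concreteness, the Voronoi-cell construction discussed in this thread can be sketched in a few lines of Python. The sketch below stacks each component's parameters into a single vector, assigns fitted components to the nearest true component, and evaluates a simplified Voronoi loss (it omits the translation infimum over $(t_1,t_2)$ that handles the softmax non-identifiability); all names are hypothetical:

~~~python
import numpy as np

def voronoi_cells(fitted, true):
    """Assign each fitted component (a stacked parameter vector) to the
    Voronoi cell of its nearest true component."""
    cells = {j: [] for j in range(len(true))}
    for i, comp in enumerate(fitted):
        j = int(np.argmin([np.linalg.norm(comp - t) for t in true]))
        cells[j].append(i)
    return cells

def voronoi_loss(fitted, weights, true, true_weights):
    """Simplified Voronoi loss: cells with several fitted components
    contribute squared norms, singleton cells contribute plain norms,
    plus the per-cell mismatch of aggregated mixture weights."""
    cells = voronoi_cells(fitted, true)
    loss = 0.0
    for j, members in cells.items():
        power = 2 if len(members) > 1 else 1
        for i in members:
            loss += weights[i] * np.linalg.norm(fitted[i] - true[j]) ** power
        loss += abs(sum(weights[i] for i in members) - true_weights[j])
    return loss
~~~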
Rebuttal 1: Rebuttal: **General Response** Dear AC and reviewers, Thanks for your thoughtful reviews and valuable comments, which have helped us enhance our paper substantially. In this section, we will address some common concerns from the reviewers, and then include these changes in the revision of our paper. **CQ1: Main challenges in the paper?** To the best of our knowledge, previous works on comprehending the convergence rates for the parameter estimation problem in mixtures of experts or mixture models [1] all assume that the corresponding gating or mixing proportion is independent of the covariates $X$. This assumption is limited and far from recent applications of mixture of experts in machine learning and deep learning [2], which leverage covariate-dependent gating such as softmax gating and its variants. For those reasons, understanding parameter estimation of softmax gating Gaussian mixture of experts has remained a long-standing open problem in the literature. There are three main challenges in our work: **(i)** Firstly, parameters $\beta^*_{1i}, \beta^*_{0i}$ of the softmax gating are not identifiable, unlike those of the covariate-independent gating in previous work. Instead, they are identifiable up to translation; that is, the softmax gating does not change when we translate the parameters as follows: $\beta^*_{1i}+t_1$ and $\beta^*_{0i}+t_2$. As a consequence, we need to introduce an infimum operator in the Voronoi loss function to deal with this issue. **(ii)** Secondly, it is clear that the numerators and denominators of softmax gating are dependent. Thus, in Step 1 of our proofs, if we applied the Taylor expansion directly to the conditional density difference $g_{G_n}(Y|X)-g_{G_*}(Y|X)$ as in previous work [1], we would be unable to represent the conditional density difference as a linear combination of elements belonging to some linearly independent set, which is a key step in the proof techniques. To this end, we consider the product of the softmax gating's denominator and the conditional density difference, which is denoted by $Q_n$. Subsequently, we decompose the product $Q_n$ such that the decomposition includes two functions $\exp(\beta_{1i}^{\top}X)f(Y|a_i^{\top}X+b_i,\sigma_i)$ and $\exp(\beta_{1i}^{\top}X)g_{G_n}(Y|X)$ as in line 291. Then, we have to apply two Taylor expansions of different orders to these functions, respectively, rather than only one Taylor expansion as in [1]. Now, $Q_n$ is written as a linear combination, but there are some linearly dependent terms due to the intrinsic interaction between the softmax gating's numerator $\exp(\beta_{1i}^{\top}X+\beta_{0i})$ and the Gaussian density function $f(Y|a_i^{\top}X+b_i,\sigma_i)$ via the partial differential equation in Eq.(3): $$\frac{\partial u(X,Y)}{\partial\beta_1}\cdot\frac{\partial u(X,Y)}{\partial b}=\frac{\partial u(X,Y)}{\partial\beta_0}\cdot\frac{\partial u(X,Y)}{\partial a},$$ where $u(X,Y):=\exp(\beta_1^{\top}X+\beta_0)f(Y|a^{\top}X+b,\sigma)$. Therefore, it took us considerable effort to group these linearly dependent terms together as in Eq.(25) to formulate $Q_n$ as a linear combination of linearly independent terms. **(iii)** Finally, in Step 2 of our proofs, we assume that all the ratios of coefficients in the above linear combination to the Voronoi loss converge to zero. Then, via some transformations, those limits lead to the system of polynomial equations in Eq.(6). 
Compared to the system in previous work [Eq.(6), 1], our system is much more complex and challenging due to the interaction between the softmax gating's numerator and the Gaussian density. As a result, it took us greater effort to analyze our system in Lemma 1. **CQ2: Connection between theoretical results and real-world settings?** There are two different real-world settings that the current theoretical results will yield important insights into: (i) Well-specified setting: In this setting, we assume that the data are generated from a Gaussian mixture of experts with softmax gating. It is the setting that we mainly study in the current work. Our results in the paper suggest that when we overspecify the number of experts in the well-specified setting, the convergence rates of some parameters can be very slow and depend on the number of components by which we overspecify the model, while those of the remaining parameters can be very fast. These rates can be captured precisely by the novel Voronoi losses. The precise rates are important for developing model selection to choose the correct number of experts in the well-specified settings, which is important for reducing the complexity of using Gaussian mixtures of experts in practice (please refer to the Practical Implication paragraph in the Introduction). (ii) Misspecified setting: In this setting, the data are not generated from a Gaussian mixture of experts with softmax gating, yet we fit the data by a Gaussian mixture of experts. Different from the well-specified setting, the MLE $\widehat{G}_{n}$ converges to the mixing measure $\bar{G}$, which is an optimal solution for the problem of minimizing the following Kullback-Leibler divergence: $KL(g_{G}(Y|X), P(Y|X))$, where $P(Y|X)$ is the true conditional density function of $Y|X$, which is not a Gaussian mixture of experts, and $g_{G}(Y|X)$ is a Gaussian mixture of experts with mixing measure $G$. The insights from our theories in the well-specified setting indicate that the Voronoi losses can be used to obtain the precise rates of individual parameters of the MLE $\widehat{G}_{n}$ to those of $\bar{G}$. The detailed theoretical development of parameter estimation under the misspecified setting is beyond the current scope of the paper and left for future work. **References** [1] N. Ho, C.-Y. Yang, M. I. Jordan. Convergence rates for Gaussian mixtures of experts. Journal of Machine Learning Research, 2022. [2] H. Hazimeh. DSelect-k: Differentiable selection in the mixture of experts with applications to multi-task learning. In NeurIPS, 2021. Pdf: /pdf/bd0a02a1c5367aa0ba4be94e6de3123463f4824e.pdf
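As a sanity check on the interaction described in CQ1 (ii), the PDE in Eq.(3) can be verified directly from the definition of $u$ (a worked derivation added for illustration). Writing $E:=\exp(\beta_1^{\top}X+\beta_0)$ and $f':=\partial f/\partial\mu$ evaluated at the mean $\mu=a^{\top}X+b$, we have $$\frac{\partial u}{\partial\beta_1}=XEf,\qquad \frac{\partial u}{\partial\beta_0}=Ef,\qquad \frac{\partial u}{\partial b}=Ef',\qquad \frac{\partial u}{\partial a}=XEf',$$ so both sides of Eq.(3) equal $XE^{2}ff'$, confirming $\frac{\partial u}{\partial\beta_1}\cdot\frac{\partial u}{\partial b}=\frac{\partial u}{\partial\beta_0}\cdot\frac{\partial u}{\partial a}$.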
NeurIPS_2023_submissions_huggingface
2023
Summary: This work considers the problem of parameter estimation for the **softmax-gated mixture of Gaussian experts**. For simplicity, first consider the case in which the mixture weights do not depend on the covariates X. Then, this model can be seen as a mixture of linear regressions where each “regression component” is corrupted by Gaussian noise. The paper considers the much more challenging case where the mixture weights depend on the covariates. The mixing weights are modeled as linear functions which are normalized by applying softmax to the vector of mixing weights. The paper proposes a novel "Voronoi distance" for the parameter space, which takes into account the invariances of the parametric model, and is used to determine convergence rates for the maximum likelihood estimator (MLE). In the well-specified case where $k$, the number of components of the model used, is equal to $k^*$, the true number of components, the MLE parameters converge at the parametric rate $n^{-1/2}$. In the overparametrized case where $k > k^*$, the rates of the component parameters are slower and depend on how many components of the overparametrized model fall inside the Voronoi cell of the true model’s components. Strengths: - The paper overcomes significant technical challenges that come from using softmax gates for the mixing weights. The main contribution of the paper is showing that the expected total variation distance of the MLE’s conditional distribution $\hat{p}(Y|X)$ from the true conditional distribution $p(Y|X)$ is *lower bounded* by the proposed Voronoi distance. The Voronoi distance takes care of the inherent symmetries of the parameter space. For example, the mixing weight parameters can only be identified up to global translations due to the softmax operation. - Since one can show that the expected TV distance of the MLE converges at the parametric rate $n^{-1/2}$ by computing the bracketing number of the parametric family [van de Geer, 2000; Theorem 7.2], this leads to an upper bound on the Voronoi distance and thus the $\ell_2$ distance of the MLE parameters. - To be more precise, the paper proposes a family of Voronoi distances, for both the well-specified case and overparameterized case. The convergence rates for overparametrized cases are determined implicitly by a system of polynomial equations which in turn come from analyzing the Taylor expansion of the model’s conditional density w.r.t. the parameters. Weaknesses: **Exposition.** Though the technical contributions of this paper are solid, its exposition could be improved to make it more accessible to non-expert readers. - **System of polynomial equations (Section 1).** The full details of the system of polynomial equations in the introduction are difficult to follow and seem unnecessary since they are reproduced in Section 3. It would be easier and less intimidating for the readers if the authors omitted the full details and simply noted that $r(m)$ is related to whether some system of polynomial equations admits no non-trivial solutions. Then, one could end with “we conjecture r(m) = 2m, and establish this for the first few cases m=1,2,3, in Lemma 1.” It would also be great if the authors could provide more explanation for how one arrives at these polynomial equations. - **Computation of the Voronoi loss function (Section 3.2).** This would fit better in the experiment section. In typical estimation settings, we do not have access to the true parameters anyway. 
This quantity is only computable in synthetic settings where the experimenter knows the true parameters. - **Practical implication paragraphs.** These paragraphs in Section 1 and Section 3.2 are redundant. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - The authors mention in Section 1 that understanding the parameter estimation problem for mixture of experts models has been a long-standing open problem. Could the authors provide explanation for why this was the case? What were the technical barriers and what recent technical advances or breakthroughs have paved the way for the results established here? - Which aspects of the Voronoi distances are novel? Assigning the model’s components according to the Voronoi cells of true components seems like a natural thing to do for any mixture model. Is it the \inf over translations of the mixing weights? Or the rate functions r(m) related to the system of polynomial equations? **Editorial comment** - In the introduction, it would motivate the paper’s result more if the authors provided simple and practical examples showing the importance of parameter estimation in this particular mixture of experts model rather than just estimating the density. Perhaps there are examples in which the parameters in the mixture of experts model are interpretable and have practical value? - Typo. First paragraph of Section 3.3. Total variation distance should be V not h. - It would be helpful to explain what mixing measures are when they are first introduced in Section 1. Perhaps after Eq.(1) in the **Problem setting** paragraph. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The full details of the system of polynomial equations in the introduction are unnecessary and should be omitted. Instead, the authors should briefly introduce the quantity $r(m)$ and its properties. Additionally, how do the authors arrive at these polynomial equations?** Thanks for your suggestion; we will consider it and edit our paper accordingly. Regarding the derivation of the system of polynomial equations, please refer to parts (ii) and (iii) of Common Question 1 in the General Response section for a brief explanation. For further details, one can refer to Step 2 of the proof of Theorem 2 (lines 629–649) in the supplementary material. **Q2: Regarding the computation of the Voronoi loss functions, since we do not have access to the true parameters in typical estimation settings, these functions are only computable in synthetic settings where we know true parameters.** We agree with the reviewer that our proposed Voronoi loss functions are only computable when one has access to the true parameters. However, we would like to emphasize that the important feature of the proposed Voronoi losses is their ability to provide exact convergence rates of individual parameters of the MLE, i.e., we know the separation among the estimated parameters based on their distances to the true parameters, or how close the weights of certain experts are to 0 (see our response to Q5 for more details). From the practical perspective, these characterizations of separation among estimated parameters or of the values of expert weights give insight into the thresholds for merging these parameters when they become too close, or for truncating certain experts when their weights are too small. This procedure yields a useful method to reduce the complexity of the Gaussian mixture of experts and consistently determines the correct number of experts when the sample size is sufficiently large. **Q3: The practical implication paragraphs in Section 1 and Section 3.2 are redundant.** Thanks for your comment; we will consider removing the practical implication paragraph in Section 3.2 in the revision of our paper. However, we may keep the one in Section 1, as it provides insight into how to choose the number of experts to achieve the best possible convergence rates of parameter estimation and a connection to the merge-truncate-merge procedure, which is used to estimate the true number of experts. **Q4: Could the authors explain why understanding the parameter estimation problem for mixture of experts models has been a long-standing open problem? What were the technical barriers and what recent technical advances or breakthroughs have paved the way for the results established here?** Thanks for your questions. We would like to refer you to Common Question 1 in the General Response for further details. **Q5: Which aspects of the Voronoi distances are novel? Is it the $\inf$ over translations of the mixing weights? Or the rate functions $r(m)$ related to the system of polynomial equations?** Thanks for your question. Our proposed Voronoi loss functions are considered novel since they are able to accurately characterize the estimation rate for each parameter. In particular, from our Theorem 2, we know that the Voronoi loss $\mathcal{D}_{2}(\widehat{G}_n,G_*)$ converges to zero at a rate of order $\mathcal{O}(n^{-1/2})$ (up to some logarithmic term). 
Then, it follows from the formulation of this loss that for each $j=1,2,\ldots,k_*$, all the resulting estimation rates for parameters $\beta^*_{1j},a^*_{j},b^*_{j},\sigma^*_{j}$ depend on the quantity $\bar{r}(|\mathcal{A}_j|)$, which possibly admits distinct values for different indices $j$. More specifically, if two Voronoi cells $\mathcal{A}_1$ and $\mathcal{A}_2$ do not share the same cardinality, i.e. $|\mathcal{A}_1|\neq|\mathcal{A}_2|$, then the estimation rates for $\beta^*_{11},a^*_1,b^*_1,\sigma^*_1$ are different from those of $\beta^*_{12},a^*_2,b^*_2,\sigma^*_2$, respectively. By contrast, if we adopted the Wasserstein loss as in [1], the estimation rate for $\beta^*_{1j}$ (resp. $a^*_j,b^*_j,\sigma^*_j$) would remain unchanged for any $j=1,2,\ldots,k_*$, which is not accurate. **Q6: In the introduction, the authors should provide simple and practical examples showing the importance of parameter estimation in this particular mixture of experts model.** Thanks for your suggestion. Mixture models and mixtures of experts are well known for modelling heterogeneous data, and they are commonly used in medicine [2] and the physical sciences [3]. In these applications, mixture parameters play a vital role in capturing the heterogeneity of the data; thus, the main objective in [2, 3] is to conduct statistical inference for those parameters. This leads to a need for determining the convergence rates for parameter estimation in mixture models and mixtures of experts. We will add this discussion to the revision of our paper. **Q7: Typo. First paragraph of Section 3.3. Total variation distance should be $V$ not $h$.** Thanks for pointing this out; we will correct it in the revision of our paper. **Q8: It would be helpful to explain what mixing measures are when they are first introduced in Section 1.** Thanks for your suggestion; we will add the definition of a mixing measure to the revision of our paper. Basically, a mixing measure refers to a mixture of measures. For instance, in line 53 of our work, since $G_*$ is defined as a mixture of Dirac measures, it is called a mixing measure. **References** [1] N. Ho, C.-Y. Yang, and M. I. Jordan. Convergence rates for Gaussian mixtures of experts. Journal of Machine Learning Research, 2022. [2] Q. Li, R. Shi, and F. Liang. Drug sensitivity prediction with high-dimensional mixture regression. PloS one, 2019. [3] M. Kuusela. Semi-supervised anomaly detection—towards model-independent searches of new physics. In Journal of Physics: Conference Series, volume 368, 2012. --- Rebuttal Comment 1.1: Comment: Thank you for your thorough response and clarification! --- Reply to Comment 1.1.1: Comment: We thank Reviewer PaPA for your positive evaluation of our paper after the rebuttal and for keeping your score of 7. Best, The Authors
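For readers who want to reproduce the kind of empirical check referenced in Appendix C, here is a minimal Python sketch: fit the model at increasing sample sizes, evaluate a Voronoi loss against the true parameters, and regress log-loss on log-n. The fitting routine itself is hypothetical; only the rate-extraction step is shown:

~~~python
import numpy as np

def empirical_rate(sample_sizes, losses):
    """Least-squares fit of log(loss) = c - r * log(n); returns the
    estimated exponent r (e.g. ~0.5 suggests an n^{-1/2} rate)."""
    slope, _ = np.polyfit(np.log(sample_sizes), np.log(losses), 1)
    return -slope

# Example with losses decaying like n^{-1/2} (numbers are illustrative):
print(empirical_rate([1e3, 1e4, 1e5], [0.032, 0.010, 0.0032]))  # ~0.5
~~~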
null
null
null
null
null
null
Counterfactual Generation with Identifiability Guarantees
Accept (poster)
Summary: This paper proposes a framework for unsupervised style transfer by disentangling the content and style representations. Unlike previous research, they do not assume independence between the content and style variables in the generation process, but rather only a lower influence of the style variable compared to the content variable on the generation process. Based on this assumption, the paper then introduces two theorems with corresponding proofs of the identifiability of both the style and content variables under these assumptions. That is, the content information can be preserved without needing style. Given the theoretical discussion, the paper then proposes a framework based on variational autoencoders (VAE) to address the task of style transfer through style and content disentanglement. The framework is then evaluated empirically on the task by comparing it to other state-of-the-art baselines. In the results, the paper demonstrates the gain from their implementation in most measures considered. The ablation study further illustrates the importance of every component added to their model. Strengths: - This paper contributes to the style transfer task by eliminating the independence assumption between style and content. - It supports its claims by providing both theoretical backing and empirical evidence. - It is clear in most of its parts. Weaknesses: I do not see any major weaknesses in this paper. However, the authors can address some clarity issues in the questions section. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - The submission title differs from the title of the paper. Is that on purpose? - Line 153-156: If I understood correctly, then the described subspace should consist of all vectors whose entries are *not zero* for all indices in S, but the paper says *zero for all indices in S* - In Section 5.3, the paper mentions that the s_(transfer) is computed from the average of randomly sampled style values of the *desired style*. What is the desired style here? - The paper considers the G-score the most important measure because of its comprehensive assessment. It would be nice if the paper clarified what it means to have a comprehensive assessment. - In Section 6.1, it would be nice if the paper clarified the implemented model and the baselines. I learned later in the results section that the model is built on top of CPVAE - In Table 2, what are src1, 2, and 3? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: - The central limiting aspect of this paper is the assumption that style has a lower influence on the generated text compared to the content, which doesn't hold for all text generation tasks. However, this limitation is acknowledged by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for the valuable comments and suggestions. We also hope that our method could provide insights for follow-up research. We respond below to address your concerns. ### Title Thank you for letting us know this. The title should be “Counterfactual Generation with Identifiability Guarantees”. We will make it consistent. ### Subspace and zero-vector You are totally right! Thank you for spotting this typo; we will correct it in our manuscript. ### Desired style Great question! For general sentence transfer, the desired style is the target attribute of the sentence, e.g., positive sentiment, formal writing style, female subject, etc. We calculate the $s_\text{transfer}$ over sentences with the target attribute. ### Comprehensive assessment Thank you for the good question! There is an inherent trade-off between sentiment transfer and semantic preservation, because a larger perturbation could lead to a higher sentiment flip success rate but may distort the original semantics to a greater extent. See Table 2 in [1] for an illustration “$\beta$-VAE under 1/2/3 sigma perturbation”. Therefore, G-score was proposed in [2] as a single metric to simultaneously consider both aspects and reflect the overall quality of the transfer, and it has been adopted by much related work [3,4]. ### Implementation clarification Thanks for pointing out this issue. You are right -- our model is built on CPVAE. We will improve the baselines section in the revised manuscript to make it clearer. ### src in Table-2 src-* refers to the original sentences that express one particular sentiment and will be transferred to new sentences with the opposite sentiment by the listed approaches. Please let us know if you have further concerns -- thank you! ### References: [1] On variational learning of controllable representations for text without supervision. ICML20. [2] Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach. ACL18. [3] Reformulating Unsupervised Style Transfer as Paraphrase Generation. EMNLP20. [4] Deep Learning for Text Style Transfer: A Survey. CL22. --- Rebuttal Comment 1.1: Comment: Thanks a lot for clarifying my questions. I hope that these answers get integrated into the final version of the paper. --- Reply to Comment 1.1.1: Comment: We will improve our draft according to your suggestions as indicated above—many thanks for the time and effort dedicated to our work!
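For concreteness, and assuming (as in [2]) that the G-score is the geometric mean of the style-transfer accuracy and the content-preservation BLEU, the metric can be computed as in the sketch below; the function name and the example numbers are illustrative:

~~~python
import math

def g_score(acc, bleu):
    """Geometric mean of transfer accuracy and BLEU (same scale for both),
    so that gains on one axis cannot mask a collapse on the other."""
    return math.sqrt(acc * bleu)

# A large perturbation may flip more sentiments but distort semantics:
print(g_score(0.90, 0.10))  # 0.30 -- high accuracy, poor preservation
print(g_score(0.60, 0.45))  # ~0.52 -- a better-balanced transfer
~~~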
Summary: In the paper, the authors take varying dependence between content and style into account in the counterfactual generation process. The identification problem is addressed when using a VAE for the task. It is proved that the subspaces of latent variables for content and style are identifiable. Then, based on the theorems and assumptions, the authors propose to build a VAE-based model with sparsity regularization to solve the problem. The proposed framework is tested on four datasets and gets relatively high scores compared with other unsupervised baselines. An ablation study and a case study are also conducted to consolidate the conclusion. Strengths: 1. Detailed mathematical proof is done for the conclusion. 2. The model performs well compared with other unsupervised baselines. 3. The proposed framework may be applied in various models in the future. Weaknesses: 1. Human evaluation might be necessary to prove the framework actually reaches the original expectation. The automatic metrics may not be adequate. 2. The MATTE does not perform well enough in the experiments. It performs worse in accuracy and perplexity than beta-VAE. The higher scores based on word-overlap can mean little in such a situation. 3. Performance of VAE-based models always varies according to random factors, and it’s better to note the mean and std. of the results in multiple trials. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Thanks for the authors' detailed rebuttal, which addressed my concerns. I'll keep my score. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitation discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for the insightful comments and suggestions. We hope that generation tasks, including those with advanced LLMs, would benefit from principles for representation learning, as developed in our work. ### Human Evaluation Thank you for providing us with valuable suggestions to enhance our work. In line with the settings in [10], we have established the following criteria to assess the style, meaning, fluency, and overall transfer quality. Due to time constraints, we randomly selected 100 examples from the four datasets and gathered the generated sentences from the top-performing baselines in each group, namely CPVAE, Optimus, and our approach. We invited three English-fluent evaluators to rate the first three metrics using a 5-point Likert scale (higher scores indicating better performance) and asked them to rank the three generated sentences (tied ranks are permitted). The Cohen’s Kappa coefficient between every pair of annotators on the rank-based metric is 0.46. The results are supplied in **Table 1** in the attached PDF. The results show that human annotators favour Optimus in terms of content preservation and fluency, which can be attributed to its powerful decoder generating fluent sentences. After the style transfer correctness is taken into account, Matte ranks as the best-performing method, with more than 58% support. ### $\beta$-VAE performance Thanks for pointing out this result. $\beta$-VAE indeed demonstrates higher accuracy and lower perplexity. However, we have observed that many generated sentences follow simple but repetitive patterns. For example, 2.2% of transferred sentences in the Yelp domain contain the phrase “I highly recommend”, while only 0.6% of original sentences do. While these sentences are fluent and express the desirable sentiment, they significantly differ from the original sentences and indicate a generation degradation problem. In this context, BLEU can play a role in avoiding pitfalls, because a significantly lower BLEU probably indicates a decline in the model's reconstruction and language-modelling abilities. Therefore, we use G-score as the primary metric to balance sentiment accuracy and semantic preservation. We also note that another metric, “diversity-n” [9], can be a good indicator of oversimplified and repetitive generation. It measures the ratio of distinct n-grams among all the n-grams in the generated sentences. All the other methods except for $\beta$-VAE generated sentences with similar diversity-2 to the original sentences, but the sentences generated by $\beta$-VAE have much lower diversity than the original ones (see Table below). We will make it clearer to the audience. Thank you for your valuable comments again. **Table**: Diversity-2 for the transferred sentences. Diversity for the original sentences is added in brackets for comparison. $\beta$-VAE has significantly fewer distinct 2-grams than the original datasets. | Dataset | IMDB (0.34) | Yelp(0.63) | Amazon(0.64) | Yahoo(0.44) | |-------------|-------------|------------|--------------|-------------| | $\beta$-VAE | 0.11 | 0.46 | 0.37 | 0.22 | | JointTrain | 0.21 | 0.59 | 0.56 | 0.37 | | CPVAE | 0.32 | 0.59 | 0.57 | 0.45 | | MATTE | 0.32 | 0.62 | 0.61 | 0.45 | ### BLEU metric Thank you for sharing your insights. 
Although many existing papers adopt BLEU [1, 2, 3, 4, 5] and it is still an open question how to measure semantic preservation under word variation [6], we completely agree with you that BLEU has limitations in capturing semantic relatedness beyond literal word-level overlap. To make this issue clearer to the audience, we will add a footnote in the paper. We can observe that another metric, sentiment accuracy, also favours our methods similarly. Furthermore, we adopt one metric, CTC score [7], to avoid the aforementioned issues of BLEU, as it matches embeddings, i.e., it uses the cosine similarity of pretrained word embeddings rather than a “hard match”. The results in **Table 4** in the attached PDF show that $\beta$-VAE displays the least impressive performance, while Optimus and Matte exhibit the overall best results. Admittedly, the results are less discriminative than BLEU -- this phenomenon is also observed in [8]. ### Mean and std. Thank you for the suggestion! We will add the mean and std to our draft. We also included the results in **Figure 1** in the PDF. Please let us know if you have further concerns -- thank you! ### References: [1] Semi-Supervised Formality Style Transfer with Consistency Training, ACL22 [2] Deep Learning for Text Style Transfer: A Survey, CL21 [3] A Probabilistic Formulation Of Unsupervised Text Style Transfer. ICLR20 [4] Style transformer: Unpaired text style transfer without disentangled latent representation, ACL19 [5] Unsupervised text style transfer using language models as discriminators. Neurips18 [6] Repairing the cracked foundation: A survey of obstacles in evaluation practices for generated text. JAIR23 [7] Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation. EMNLP21 [8] Composable Text Controls in Latent Space with ODEs. Arxiv 2022 [9] A diversity-promoting objective function for neural conversation models. ACL16 [10] A Review of Human Evaluation for Style Transfer. GEM21 --- Rebuttal Comment 1.1: Comment: Thanks a lot for your clarification with details. I wonder if you could update the paper properly within the page limit, since you give so many updates for the reviewers. --- Reply to Comment 1.1.1: Comment: Thank you for the thoughtful question. We have incorporated the indicated updates as follows to ensure that the paper is informative and meets the page limit. - We included the human evaluation results and results from the tense-transfer task as two separate tables in Section 6.1 and Section 6.3 (a new subsection). - We included a discussion on the degradation issue of $\beta$-VAE in Section 6.1 (sentiment transfer performance), and we refer the readers to Appendix A.4 for the Diversity-2 measurement table. - All results now feature both mean and std. - We included remarks on comparing our principle-based generation approach with the antonym-replacement method and ChatGPT at the closing of Section 5, and we refer the readers to Appendix A.5 for experimental results. We included the CTC score–BLEU alternative evaluation and multiple style classifier evaluation results in Appendix A.3 and allude to them in Section 6 (Evaluation metric). - We included suggested references in Section 2 (related work). - For short text edits (e.g., typos, minor remarks, more details, and footnotes), we made adjustments to the corresponding paragraphs. 
To abide by the page limit, we shortened and merged the two “contrast with prior work” paragraphs in Section 4.1 and Section 4.2 and placed the abridged version by the end of Section 4. We shortened the baseline description in Section 6.1 and deferred the detailed version to the Appendix. We condensed texts in Sections 1 (introduction) and 7 (conclusion). As uploading drafts is not permitted at this stage, we share a few revised paragraphs below. The discussion on the comparison with antonym-replacement methods and ChatGPT (now in Section 5): >As well acknowledged, large language models (LLMs) have demonstrated an impressive ability to generate fluent text. That said, we view the principles for counterfactual generation as complementary to the development of LLMs, and we hope that our theoretical insights can further enhance LLMs. We supply examples in Table 11 (Appendix) showing that LLMs falter on sentiment transfer by overlooking overall and implicit sentiments, although they can precisely replace sentiment-related words. Thus, one may anticipate that LLMs would benefit from principles for representation learning, as developed in our work. Contrast with prior work (previously Section 4.1 and 4.2, now merged and placed in Section 4.2): >Zheng et al. enforce absolute sparsity constraints on latent component influences. In comparison, Theorem 1 requires relative sparsity between two subspaces, which could be better suited for language-related applications. Unlike Kong et al., which assumes subspace independence, our method acknowledges the interdependence between the two subspaces, a common scenario in language contexts. Assumption 2 enforces separate influences for content and style to facilitate style identification. Conversely, Kong et al. hinge their proof on content-style independence, limiting its applicability here. We refer the readers to Appendix A.8 for a detailed comparison. The degradation of $\beta$-VAE issue is added in Section 6, sentiment transfer performance: >Among LSTM baselines, $\beta$-VAE shows high sentiment transfer accuracy and fluency but poor content preservation. We have observed that many generated sentences follow simple but repetitive patterns, e.g., 2.2% of transferred sentences in Yelp contain the phrase “I highly recommend” while only 0.6% of original sentences do. These sentences are fluent and express the desirable sentiment but significantly differ from the original sentences, indicating generation degradation.~\footnote{The metric diversity-n (Li et al.) can also indicate a repetitive generation pattern, as it measures the ratio of distinct n-grams among all the n-grams in the generated sentences. We add the complete evaluation results in Table 9 (Appendix).} Thank you for your constructive comments, and please let us know if you have any suggestions – many thanks!
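As a side note, the diversity-n metric described earlier in this thread (the ratio of distinct n-grams to all n-grams in the generated sentences) is straightforward to compute. A minimal sketch, assuming whitespace tokenization:

~~~python
def diversity_n(sentences, n=2):
    """Ratio of distinct n-grams to total n-grams over a corpus;
    lower values indicate repetitive, degenerate output."""
    total, distinct = 0, set()
    for sent in sentences:
        tokens = sent.split()
        grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(grams)
        distinct.update(grams)
    return len(distinct) / total if total else 0.0

print(diversity_n(["i highly recommend it", "i highly recommend this place"]))
~~~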
Summary: This paper wants to do controllable text style transfer by tackling the dependence between the content and the style variables. They adopt the concept of influence sparsity, requiring the influence of the style variable to be sparser than that of the content variable. They evaluate their method on several NLP datasets to show the style-transferred text generation. Strengths: The paper aims to do controllable text style transfer, which is an interesting application in the NLP domain. By relaxing the independence assumption between style and content, the paper fills a gap in the literature. Weaknesses: The main idea of this paper is to disentangle the content from the style. However, throughout the whole paper, I don’t think the authors clearly define what a “style” is. In the literature, people usually define the sentence structure (e.g., dependency parsing tree) as the “style” and the semantics as the “content”. In this paper, in the Introduction (line 37), the authors mention that a positive sentiment is “style”, but later in Sec. 3 (line 113), they refer to positive descriptions of something as “content”. The experiments only show style transfer from the sentiment perspective and lack comparisons to many advanced baselines. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - The authors mentioned their method could leverage data from multiple domains. How is this reflected in your results? - In your experiment, the goal is to do a “style” transfer, which means transferring the sentiment. However, the accuracy (which means the intended attributes are expressed, Line 299) is very low; how can you guarantee your style transfer is successful? Also, what is the label distribution for each dataset? What is the accuracy of a random guess in each case? - From Table 2, the transferred texts seem to only change several tokens. So how does your method compare to a naive antonym replacement? - There are other unsupervised text-style transfer learning algorithms that are more up-to-date, please check: https://github.com/fuzhenxin/Style-Transfer-in-Text Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for the valuable comments and suggestions. We address each point as follows. ### Advanced baselines and evaluation on other styles Thanks for providing the comprehensive list! It should be noted that our proposed method does not rely on attribute/style labels in the training, nor on paired data, while the unsupervised methods in the list are only free from paired data. They still rely on the style annotation for each training sample. In this sense, our method is similar to unsupervised disentangled learning ($\beta$-VAE, DAAE [1], DCTC [2]). As far as we know, the most recent work without style labelling should be CPVAE [3], and both CPVAE and our Matte only need a small set of sentences with the desired styles to derive $\tilde{s}_\text{transfer}$ when conducting the sentiment transfer (inference phase). See Sec. 5.3. Our main contribution is to propose two flexible assumptions to replace existing unrealistic assumptions. They can be applied to broader generation problems -- sentiment transfer is one example that satisfies our assumption that sentiment is relatively sparse and is influenced by the content. Thanks for your comments about experiments on more styles; we add an experiment on tense transfer, which transfers the tense between present and past. Specifically, we follow [1] to extract the tense of the main verb using the StanfordNLP parser, and measure the tense of the transferred sentences with the parser. We compare with DAAE and DCTC and the results are in **Table 3** in the PDF. Our method outperforms the other baselines by over 6%, with an illustration of a discriminative representation s for different tenses. ### Style or Content Thanks for raising this fundamental question. We agree that the usage of 'style' might cause confusion, as you pointed out. In this paper, we decompose latent variables into two parts, which follow the assumed generating process to facilitate counterfactual generation. We use a broader sense of "style": latent variables that have a sparse influence on the text (i.e., relative to the "content" latent variables) and are influenced by specific content variables. We believe that sentiment, in many cases, falls into such a category, as discussed in the examples, e.g., lines 36-39. Thanks to your comment, we will consider renaming the two latent variables to avoid confusion caused by “content” and “style”, or add a footnote to make this point clear to readers. ### Domain in results Thank you for the nice question! We train all baselines and the proposed approach with multi-domain datasets. They differ in how they leverage the domain information. The effects can be seen from both quantitative and qualitative aspects: In Table 1, all the baselines in the LSTM group first derive a domain embedding $u$ and then concatenate $u$ to the sentence-level representation to differentiate their domain source. For our approach built on CPVAE, we adopt $u$ in the content-style dependency modelling described in Eqn(2). We can see a clear improvement over CPVAE. Besides, in Table 2, the first example, “The guy is an awful actor → The guy is very flavour”, shows that the simple concatenation of domain indexes in CPVAE fails to capture the domain-adaptive content-style dependency: the sentiment change leads to unnatural content-style dependence, although the polarity is correctly flipped. 
### Analysis of style acc results We apply a sentiment classifier with 95% accuracy on all the validation datasets to evaluate whether the intended sentiment is expressed in the transferred sentences. That is, if the original sentence is positive/negative (according to the data annotation) and the transferred sentence is negative/positive (evaluated by the classifier), then the sentiment accuracy numerator is increased by one. Hence, unlike in binary classification, there is no 50% random-guess baseline for the sentiment accuracy: a model that never flips the sentiment scores 0%. ### Antonym replacement baseline Thanks for sharing your insights. We totally agree that word substitution is important in many text rewriting tasks, and naive antonym replacement can flip the overall sentiment with minor modifications when the given sentences have simple structures and explicit sentiment indications. We supply the comparison results with antonym replacement over the four datasets in the table below. Specifically, we adopt the NLTK tool to POS-tag the original sentence and replace the words annotated as "ADJ" (adjective) with their antonyms found in WordNet (an illustrative sketch of this baseline is given at the end of this thread). The performance varies considerably across datasets, while our method performs relatively stably. This can be partly explained by the variation in sentence structure; a representation-learning approach benefits generalizability. Some examples are shown below. We will add a footnote to Table 2 to elaborate on this insight. **Table**: Sentiment acc of antonym replacement and MATTE. | | Imdb | Yelp | Amazon | Yahoo | |---------------------|-------|-------|--------|-------| | Antonym replacement | 11.40 | 14.40 | 10.30 | 21.75 | | MATTE | 32.43 | 34.30 | 34.50 | 38.45 | We also supply some cases below: ~~~ src: Need a cheap car charger and it seems to do the job. anto: Need a expensive car charger and it seems to do the job . ours: Need a cheap car charger and it seems to be unaffordable. src: I will stick to my Xbox for now, thanks. anto: I will stick to my Xbox for now, thanks. ours: I will buy it to replace my Xbox, thanks. ~~~ Please let us know if you have further concerns, and please consider raising the score if we have addressed your concerns -- thank you! ### References: [1] Educating Text Autoencoders: Latent Representation Guidance via Denoising. ICML20 [2] Disentangling Generative Factors in Natural Language with Discrete Variational Autoencoders. EMNLP21 [3] On variational learning of controllable representations for text without supervision. ICML20 --- Rebuttal Comment 1.1: Comment: We've taken your initial feedback into careful consideration and incorporated it into our manuscript as indicated in our response. Could you kindly confirm whether our responses have appropriately addressed your concerns? If you find that we have properly addressed your concerns, we kindly request that you consider adjusting your initial score accordingly. Please let us know if you have further comments. Thank you for your time and effort in reviewing our work. --- Rebuttal Comment 1.2: Comment: Thanks for the rebuttal. However, after reading the other reviews as well, I feel quite some effort is still needed for the paper to improve and resolve all the reviewers' concerns. I will keep my original score. --- Rebuttal Comment 1.3: Comment: Thanks for your follow-up insights! As indicated in our responses to other reviewers, we have responded to all the explicit concerns raised by all the reviewers and updated our manuscript accordingly.
As uploading drafts is not permitted at this stage, we share some revised paragraph examples below: To highlight the **effects of domain $u$ in our results**, we modified the paragraph **sentiment transfer performance** in **Section 6.1**. >Our model is built on top of CPVAE with the proposed causal influence modules and sparsity regularisations. Specifically, we adopt $u$ to establish the domain-varied dependency between content and style, illustrated in Eqn (2), while all the baselines in the LSTM group first derive a domain embedding and then concatenate it to the sentence-level representation to differentiate their domain sources. We can see clear improvements over the best-performing baseline CPVAE, which can be partly explained by our novel method of incorporating domain information. Moreover, the first example "*The guy is very flavour*" in Table 2 shows that the simple concatenation of the domain index in CPVAE fails to capture the domain-adaptive content-style dependency and leads to an unnatural content-style match. We included remarks on comparing our principle-based generation approach with the **antonym-replacement method and ChatGPT at the closing of Section 5**, and we refer the readers to **Appendix A.5, Table 11 and Table 12** for experimental results. > **Comparison with large language models**. As well acknowledged, large language models (LLMs) have demonstrated an impressive ability to generate fluent text. That said, we view the principles for counterfactual generation as complementary to the development of LLMs, and we hope that our theoretical insights can further enhance LLMs. We supply examples in Appendix Table 11 showing that LLMs falter on sentiment transfer by overlooking overall and implicit sentiments, although they can precisely replace sentiment-related words. Thus, one may anticipate that LLMs would benefit from principles for representation learning, as developed in our work~\footnote{We also provide the results of antonym replacement in Appendix, Table 12, to further compare the interventions that occur in input-space and latent-space.}. We included the results of the tense transfer task in **Section 6.3 (a new subsection)**, with newly added **Table 5** and **Figure 6** as illustrations. >To further verify our theoretical insights, we apply MATTE to tense transfer between past and present, in which tense is the style variable with a relatively sparse influence on the sentence. We reuse the model trained on the above four datasets and do inference on the Yelp dataset. Specifically, we follow Shen et al. to identify the tense of sentences by extracting the main verb using the StanfordNLP Parser. In order to transfer the tense from past to present, we collect 100 present sentences in the dev set to derive $s_\text{transfer}$ as a replacement of the original $s$ for the past sentences, and vice versa. >The tense transfer accuracy results on the Yelp test set (Table 5) show a significant improvement of 6% for MATTE over the best-performing baseline. We compare the learnt style variable $s$ derived from CPVAE (left) and MATTE (right) in Fig 6. There is a clear boundary between past and present sentences in MATTE, while some past sentences (blue dots) are mixed into the bottom of the red region in CPVAE, which implies that MATTE learns a better-disentangled tense representation. We hope the updated text can address your concerns. Please kindly let us know if you have further concrete questions or concerns. Thank you for engaging with our work!
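For concreteness, the tense-extraction step described above could look as follows. This is our illustrative sketch, not the authors' code: we use the `stanza` package (the successor of the StanfordNLP parser) and take the dependency root as the main verb; the exact pipeline configuration is an assumption.
~~~
import stanza

# English pipeline with POS and dependency parsing (needed to find the root verb).
nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")

def main_verb_tense(sentence):
    """Return the UD 'Tense' feature ('Past', 'Pres', ...) of the root verb."""
    doc = nlp(sentence)
    for sent in doc.sentences:
        for word in sent.words:
            if word.head == 0 and word.upos in ("VERB", "AUX"):  # dependency root
                feats = dict(f.split("=") for f in (word.feats or "").split("|") if "=" in f)
                return feats.get("Tense", "unknown")
    return "unknown"

# Tense-transfer accuracy: fraction of outputs whose parsed tense is the target.
outputs = ["She walks to work.", "She walked to work."]
acc = sum(main_verb_tense(s) == "Pres" for s in outputs) / len(outputs)
~~~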
--- Reply to Comment 1.3.1: Comment: Dear reviewer zc3n, Once again, we are grateful for your time and efforts. Since the discussion period will end in one hour, we will be online waiting for your feedback on the further response we provided yesterday. We would highly appreciate it if you could take into account our response when updating the rating and having discussions with AC and other reviewers. Thanks for your contribution to NeurIPS 2023! Authors of #1309
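As a concrete reference for the antonym-replacement baseline compared earlier in this thread, here is a minimal sketch of the described procedure (NLTK POS tagging plus WordNet antonyms). It is our illustration under stated assumptions, not the authors' implementation, and it deliberately keeps the baseline naive (no article agreement or negation handling):
~~~
import nltk
from nltk.corpus import wordnet as wn

for pkg in ("punkt", "averaged_perceptron_tagger", "wordnet"):
    nltk.download(pkg, quiet=True)

def antonym(word):
    """First WordNet antonym of an adjective, or None if none exists."""
    for synset in wn.synsets(word, pos=wn.ADJ):
        for lemma in synset.lemmas():
            if lemma.antonyms():
                return lemma.antonyms()[0].name()
    return None

def flip_sentiment(sentence):
    """Replace every adjective (tags JJ/JJR/JJS) with a WordNet antonym."""
    out = []
    for tok, tag in nltk.pos_tag(nltk.word_tokenize(sentence)):
        repl = antonym(tok.lower()) if tag.startswith("JJ") else None
        out.append(repl or tok)
    return " ".join(out)
~~~
Sentences with implicit or structure-level sentiment leave this baseline nothing to replace, which is consistent with its low and dataset-dependent accuracy reported in the rebuttal.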
Summary: The work "Controllable Text Generation with Identifiability Guarantee" presents a model for unsupervised counterfactual generation with disentangled style and content. The model is based on a variational autoencoder augmented with two flow-based models operating in the latent space. The flow-based models perform the disentanglement and, since they are invertible, allow intervening on the style variable for counterfactual generation. Strengths: Overall, I find the basic idea and the model itself interesting and novel. I like how the paper presents theoretical results that are then illustrated by the model rather than just an experimental evaluation. However, there are several reservations that weigh the scales towards rejection for me. Weaknesses: First, the experimental evaluation is underwhelming, for several reasons: (i) the only setting provided is sentiment transfer, which significantly restricts the claims made for counterfactual generation in general; in fact, I would argue that the introduction and abstract are much more general than the actual results, and would advise rewriting the introduction to mention that the model is only proven to work for sentiment transfer; (ii) I'm not sure that the BLEU metric and consequently the G-score make a lot of sense here since BLEU simply shows how much of the original wording is preserved; e.g., replacing a word with a synonym reduces BLEU but, all else being equal, arguably makes for better counterfactual generation since it makes generated sentences more varied and hence useful, e.g., as synthetic data; (iii) moreover, I'm not sure I understood the accuracy metric as presented: e.g., the IMDB dataset only has positive and negative sentiments, the authors claim that their classifier has 95% accuracy on the original validation sets (this makes sense), but then Table 1 shows IMDB accuracies for counterfactuals ranging from 14% to 38% -- so that's much worse than chance for all methods including the supervised upper bound? It may be a misunderstanding on my part, but the paper does not clarify this at all; (iv) the qualitative results in Table 2 are also unclear: e.g., the authors claim that Optimus alters the semantics, but doesn't MATTE also do it in Src 2? Plus, I couldn't understand Src 4 at all: neither the original nor the transferred version makes any sense. Second, I'm afraid that these days a text generation model has to compare itself with modern large language models, while the authors choose a GPT-2-based model from 2019 as their best baseline (and a baseline that serves as an unachievable upper bound since it's supervised). I wonder how well GPT-3.5 would do if you just asked it nicely to "please invert the sentiment while preserving content as much as possible in the following sentence that originates from the domain of movie reviews", maybe with a couple of generic examples in the prompt? Third, related work would benefit from a section on topic modeling, which has developed joint sentiment-topic models such as ASUM or USTM that are quite similar in their basic assumptions to the presented model. Topic models obviously cannot serve for counterfactual generation, but it looks like they were an influence. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please see previous section Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The approach proposed in the paper has some technical limitations that lead to a quality decrease. However, the proven identifiability guarantees mitigate some of the potential societal risks. E.g., if the algorithm is used to change the style of some text, its semantics should be kept intact, avoiding the accidental creation of misinformation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for the valuable comments and suggestions. We have revised the paper and included some results, and we hope the revision addresses your concerns. ### Task is limited Thank you for the suggestion! We will make it explicit that our empirical study is limited to sentiment transfer. At the same time, to make our contribution even clearer, we will highlight that our primary contribution is to establish a theoretical foundation for counterfactual generation and reason about theoretical guarantees, which is often missing in the existing literature. The resulting theory and framework are general and can inform the algorithmic design for many other tasks. Furthermore, we evaluate MATTE on the tense transfer task, and the results are in **Table 3** in the PDF. Specifically, we follow [5] to extract the tense of the main verb using the StanfordNLP Parser and evaluate the transferred sentences with the same parser. MATTE outperforms the best baseline by over 6\% with impressive style-representation mapping results. ### BLEU is not good Thank you for sharing your insights. Although many existing papers adopt BLEU [1, 2], we completely agree with you that BLEU has limitations in capturing semantic relatedness beyond literal word-level overlap. To make this issue clearer to the audience, we will add a footnote in the paper. We can observe that another metric, sentiment accuracy, also favours our methods in the same way. Furthermore, we adopt another metric, the CTC score [3], to avoid the aforementioned issues of BLEU, as it matches embeddings, i.e., the cosine similarity of pretrained word embeddings, rather than requiring a "hard" lexical match. The results in **Table 4** (in the PDF) show that $\beta$-VAE displays the least impressive performance, and Optimus and MATTE exhibit the overall best results. Admittedly, the differences in results are less discriminative than with BLEU -- this phenomenon is also observed in [4]. ### Sentiment acc We apologize for any confusion. The classifier achieves 95% accuracy for binary sentiment classification on the original sentences in the dev set, showcasing its reliability as a sentiment classifier. Consequently, we employ this classifier to assess whether the intended sentiment is accurately conveyed. Specifically, if the transferred sentence is assessed as the opposite sentiment (evaluated by the sentiment classifier), then the sentiment accuracy numerator is incremented by one. In the worst case, all the generated sentences fail to express the opposite sentiment after the intervention, and the sentiment accuracy is 0. Therefore, the random sentiment transfer accuracy is not 50\%, as it would be in classification problems. ### Case study examples are not clear Thanks for your valuable comments on the qualitative results. We will improve this part by providing clearer examples and descriptions in the manuscript. ### Compared to GPT3.5 We fully agree with you that evaluating LLMs will provide valuable insights. As well acknowledged, LLMs have demonstrated an impressive ability to generate fluent text. That said, we view the principles for counterfactual generation as complementary to the development of LLMs, and we hope that our theoretical insights can further enhance LLMs. In fact, we supply examples below showing that LLMs falter on sentiment transfer. ChatGPT can precisely detect the sentiment words and find a good replacement to complete a fluent sentence. This word-level replacement favours sentences with explicit and straightforward sentiment expressions.
Our methods, based on representation learning, can probably better capture mixed and implicit sentiment expressions, where ChatGPT neglects the overall sentiment. Thus, one may anticipate that LLMs would benefit from principles for representation learning, as developed in our work. Examples: Source sentence (S), ChatGPT (C), MATTE (M). ~~~ S: I had to leave a message, and they never called me back. C: I had to leave a message, and they promptly called me back. M: I had to leave a message, and they kept thinking that ended up that finally helped me. S: This case is cute however this is the only good thing about it. C: This case is not cute; however, it is the only good thing about it. M: This case is cute and overall a valuable product. S: The buttons to extend the arms worked exactly one time before breaking. C: The buttons to extend the arms failed to work from the beginning, never functioning even once. M: The buttons to extend the arms worked exactly as described. ~~~ ### Connection with topic modelling: Thank you for the great suggestion! Taking ASUM as an example, it is built on Latent Dirichlet Allocation (LDA) and models the generation of words based on a hierarchy of latent variables that is conditional on sentiment. Words are generated from the sentiment-aspect-word multinomial distribution. It requires a predefined list of polarity words to set the asymmetric sentiment priors. In contrast, in our model, the style (or sentiment) depends on the content (or aspect), and we don't need to pre-define the sentiment prior. More concretely, the key differences are: (a) we introduce the domain index $u$ to allow for varying dependencies between content and style across different domains; (b) we can apply an arbitrary function to model the dependency between content and style, i.e., Eqn (2); (c) based on the generation process, we propose two flexible constraints on the distributions to achieve the identifiability guarantee. Please let us know if you have further concerns, and please consider raising the score if we have clarified your concerns -- thank you! ### References [1] Semi-Supervised Formality Style Transfer with Consistency Training [2] Deep Learning for Text Style Transfer: A Survey [3] Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation [4] Composable Text Controls in Latent Space with ODEs [5] Educating Text Autoencoders: Latent Representation Guidance via Denoising --- Rebuttal Comment 1.1: Comment: We've taken your initial feedback into careful consideration and incorporated it into our manuscript as indicated in our response. Could you kindly confirm whether our responses have appropriately addressed your concerns? If you find that we have properly addressed your concerns, we kindly request that you consider adjusting your initial score accordingly. Please let us know if you have further comments. Thank you for your time and effort in reviewing our work. --- Reply to Comment 1.1.1: Comment: Dear reviewer pDz1, Once again, we are grateful for your time and efforts. We have been eagerly waiting for your feedback on our point-to-point response. We will be here waiting and hope to see it before the discussion period ends. We understand that you are very busy, but would highly appreciate it if you could take into account our response when updating the rating and having discussions with the AC and other reviewers.
Thanks for your time, Authors of #1309 --- Rebuttal Comment 1.2: Comment: Thanks authors for carefully addressing my comments. New experiments provided in the rebuttal dismiss most of my concerns. I raise the score to 6. --- Reply to Comment 1.2.1: Comment: Thank you so much for providing valuable feedback and acknowledging our work! We will incorporate them carefully into the future version -- many thanks for your time and effort!
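For readers following the accuracy discussion in this thread, here is a minimal sketch of the transfer-accuracy protocol as we read it from the rebuttal; `clf` is a hypothetical binary sentiment classifier (0 = negative, 1 = positive) with roughly 95% accuracy on the original validation sets:
~~~
def transfer_accuracy(labels, transferred, clf):
    """Fraction of transferred sentences judged as the *opposite* polarity.

    Unlike plain classification, the floor is 0% rather than 50%: a model
    that copies its input verbatim flips nothing and scores near zero.
    """
    hits = sum(clf(gen) == 1 - label for label, gen in zip(labels, transferred))
    return hits / len(transferred)
~~~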
Rebuttal 1: Rebuttal: We thank all reviewers for their valuable feedback and dedicated time! We address the individual comments in separate responses and will incorporate the reviewers' suggestions in our revision. Pdf: /pdf/d6b520c328c048451d9ef12b9feb9e47344c4bec.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper discusses controllable text generation and tackles the dependence between content and style in the counterfactual generation task. Identifiability guarantees are proven and used to enhance the disentanglement of the variables. Strengths: Style and content disentanglement, especially for scenarios with sparse data, is very challenging and important to language generation. The theoretical discussion on sparsity constraints is interesting. The proposed practice is motivated by the theory, and the experimental results are quite positive. Weaknesses: + The writing requires improvement: The definitions of some key concepts of this paper are missing, such as 'identifiability guarantee' and 'relative sparsity'. The connection of the generative model (described in Sec 3) to previous works is not discussed. A comparison to an existing counterfactual generation framework would be appreciated. The paper confounds 'sentiment' and 'style', which gets worse as sentiment is heavily used as the running example throughout the paper. Sentiment is more of a semantic aspect of the text. Many methods and technical choices are only expressed in math without enough explanation and motivation. + Some technical questions: $u$ is used in Sec 3 but missing in Sec 4. How is it considered? What is the definition of $T(z)$? Assumption 1.i requires $g$ to be invertible, but sparse matrices are usually not invertible, as they are not full rank. The connection between the proposed theory and practice is loose. One issue is that the assumptions are strict, while they are used as losses (which means the assumptions may not hold during training). Considering there are many missing definitions and explanations, I cannot judge the correctness of the theoretical part. The latest best baseline methods [28,52] were published in 2020; you may consider later work such as [A,B] as baselines. There is no human study, and style classification relies on only one model; there could be some biases in the evaluation. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: + Presentation Issues What is 'relative sparsity'? L21: 'state-of-art' -> 'state-of-the-art' 'grey shade' -> 'grey-shaded nodes' Not sure about the difference between ';' and ',' in Eqn 2. Some explanation of the assumptions using natural language would help readers understand the work. + Missing references [A] A causal lens for controllable text generation (NeurIPS 2021) [B] Variational Autoencoder with Disentanglement Priors for Low-Resource Task-Specific Natural Language Generation (EMNLP 2022) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 3 good Limitations: The evaluation is limited to 1) only automatic methods without human inspection and 2) a single model architecture in the experiments. All these may lead to biased discussion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable time and your detailed feedback! We address each point as follows. ### Writing **(1) Key Concepts.** **Identifiability guarantee** refers to the standard notion of identifiability in statistics (https://en.wikipedia.org/wiki/Identifiability), which describes the possibility of learning the true statistical model (up to certain equivalence classes) from its samples. The formal characterization of identifiability is presented in Theorem 1 (line 174) and Theorem 2 (line 216). In the revision, we will highlight that identifiability refers to the standard notion in statistics. In case you see any other missing concepts, please let us know. **Relative sparsity** is formally presented in Assumption 1.iii and Figure 2. Intuitively, it prescribes that the influence from the style latent variable on the text is sparser than (thus "relative") that from the content latent variable, as discussed in lines 188-198. This assumption encodes the belief that content information often plays a more prominent role in determining the text than style information. **(2) Comparison with existing methods.** Thanks for your constructive suggestion -- we will highlight this part to improve our paper. Our main contribution is to propose two flexible assumptions to replace existing unrealistic assumptions (e.g., the independence between $c$ and $s$) via regularisations. It can be applied to broader generation problems -- sentiment transfer is one example that satisfies our assumptions. Our framework is fully unsupervised in training -- without paired data and style labels. In contrast, [A] incorporates style classifiers and [B] is pretrained on style-labeled data. Thus, they are not directly comparable with our approach, which is similar to unsupervised disentangled learning (Line 74). Based on CPVAE, the SOTA unsupervised style transfer model, MATTE has the following merits: (i) It is driven by the identifiability guarantee that enforces style sparsity and intersection minimization. Therefore, it is theory-grounded for the content and style disentanglement. (ii) Domain variables are introduced in the data generation process, and the dependency between content and style varies across different domains. CPVAE does not consider the multiple-domain situation. **(3) Sentiment is a style?** Thanks for raising this fundamental question. We agree that the usage of 'style' might cause confusion, as you pointed out. In this paper, we decompose latent variables into two parts, following the assumed generating process to facilitate counterfactual generation. It uses a broader sense of "style" to refer to the latent variables with a sparse influence on the text that are influenced by specific content variables. We believe that sentiment, in many cases, falls into such a category, as do attributes such as gender and tense. Thanks to your comment, we will consider renaming the two latent variables to avoid confusion or adding a footnote to make this point clear to readers. ### Technical **(1) $u$ not discussed in Sec 4.** Great question! Actually, even without multiple domains, the properties (graph structures & sparsities) of the generating process grant identifiability. Therefore, we didn't discuss $u$ in Section 4. We will add a remark to clarify this point in our revision. **(2) Definition of $T(z)$.** $T(z)$ (defined in line 164) is the transition matrix between the Jacobian of the true generating function $J_{g}$ and that of the estimated generating function $J_{\hat{g}}$.
**(3) Invertibility and sparsity.** Excellent question! We note that our conditions do not constrain the absolute sparsity of the entire Jacobian matrix. Concretely, Assumption 1.iii only requires the dimensions of the Jacobian matrix corresponding to the style variable to be sparser than those for the content variable, which can be true even if the Jacobian matrix is dense. Therefore, the sparsity and the invertibility assumptions are compatible. **(4) Assumptions may not hold during training.** Thank you for raising this question! We note that the assumptions are made w.r.t. the true data-generating process that generates the dataset. We implement losses to drive the estimated model to satisfy these conditions at the training optimum. It does not affect our theory whether these conditions are met during training; the decrease in the corresponding loss actually indicates better identifiability. **(5) Human evaluation.** Thank you for the valuable suggestions. Inspired by [1], we established criteria to assess four aspects: style, meaning, fluency, and overall transfer quality. The first three metrics use a 5-point Likert scale (higher scores indicate better performance), and the last is rank-based. We randomly selected 100 examples and compared the results from the top-performing baselines in **Table 1** in the PDF. **(6) Multiple style evaluators.** We totally agree that using multiple classifiers is better to avoid performance fluctuations, though much existing work neglects this issue [2,3,4]. We adopt the pretrained BertClassifier and fine-tune it for one epoch. The model with the best dev performance (96.23%) is used in our evaluation. The results evaluated by BertClassifier are in **Table 2** in the PDF. ### Presentation We use a semicolon ";" to delineate the function inputs and the function parameters, i.e., $f(\text{input}_1, \ldots; \text{parameter}_1, \ldots)$, which is often adopted in the literature. In Eqn 2, we use this notation to indicate that $g_{s}$ transforms $\tilde{s}$ to $s$, and this transformation is determined by $c$ and $u$. Please let us know if you have further concerns, and please consider raising the score if we have clarified your concerns -- thank you! [1] A Review of Human Evaluation for Style Transfer. GEM21 [2] Style Transfer from Non-Parallel Text by Cross-Alignment. Neurips17 [3] Multiple-attribute text rewriting. ICLR19 [4] A Probabilistic Formulation Of Unsupervised Text Style Transfer. ICLR20 --- Rebuttal Comment 1.1: Comment: Thank you for your clarifications! After reading the responses, I feel this paper could be improved in the next round of modification, regarding (1) clarifying a series of fundamental concepts; (2) better explanations of the methodology and its assumptions; and (3) better organization and presentation of the paper. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback and your effort in reviewing our work. As indicated in our previous response, we have explicitly incorporated your feedback into our manuscript, including explicitly defined concepts like identifiability, a discussion on the references you pointed us to, and suggestions on evaluation. As uploading drafts is not permitted at this stage, we share a few revised paragraphs below: We explicitly defined identifiability at the opening of Section 4: >The notion of identifiability describes the possibility of learning the true statistical model (up to certain equivalence classes) from its samples [4].
The identifiability of a variable $z$ indicates that the estimated variable $\hat{z}$ contains all the information of $z$ without mixing in information from other latent variables. Formally, there exists an *invertible* mapping $h$ s.t. $\hat{z} = h(z)$. This notion of identifiability is widely adopted in prior work [24,46,50]. Suggested references [A] and [B] have been added to Section 2 (related work), shown below: > A line of work adopts pretrained language models as the encoder and the decoder in their model. For instance, one of the most popular pretrained VAE models, Optimus [30], employs BERT as its encoder and GPT2 as its decoder. To train a latent connector between BERT and GPT2, it is pretrained on the wikitext dataset via an unsupervised reconstruction objective. On top of Optimus, the model in [A] introduces a pretrained classifier conditioned on the style labels, as well as two counterfactual objectives. [B] focuses on transfer learning after pretraining on a style-annotated training dataset. Our model also leverages the latent variables to model the data-generating process and conducts style transfer in the latent space. However, unlike [A,B], we do not need style labels in the training phase. The human evaluation schema and results have been added in two parts of Section 6 (Experiments), i.e., evaluation metric and sentiment transfer performance: >Evaluation metric: We conduct both automatic and human evaluation. For human evaluation, we invited three English-fluent evaluators to rate sentiment reversal, semantic preservation, fluency, and overall transfer quality using a 5-point Likert scale (higher scores indicating better performance) and to rank the generated sentences from different models (ties are permitted in the ranking). >Sentiment transfer performance: Based on the automatic evaluation results, we randomly selected 100 examples from the four datasets and gathered the generated sentences from the top-performing baselines in each group, namely CPVAE, Optimus, and MATTE. The results in Table 2 (updated with the human evaluation results) show that human annotators favour Optimus in terms of content preservation and fluency; however, once style-transfer correctness is taken into account, MATTE ranks as the best-performing method, supported by more than 58% of the rankings. We were wondering whether your technical concerns have been properly addressed by our responses so far (if yes, could you please adjust the rating accordingly?). Please let us know if you have further concrete questions or concerns that we can address. Thank you for your engagement with our work.
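For reference, one plausible formalization of the two notions discussed in this thread, written as our paraphrase of the rebuttal (the exact statement and notation in the paper may differ):
~~~
% Identifiability up to an invertible map: the estimate recovers the true
% latent variable up to an invertible transformation h.
\exists\, h \text{ invertible} \quad \text{s.t.} \quad \hat{z} = h(z)

% Relative sparsity (Assumption 1.iii, paraphrased): the columns of the
% generator Jacobian J_g associated with the style variable s are sparser
% than those associated with the content variable c.
\big\| [J_g(z)]_{:,\,s} \big\|_0 \;<\; \big\| [J_g(z)]_{:,\,c} \big\|_0
~~~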
null
null
null
null
null
null
Efficient RL with Impaired Observability: Learning to Act with Delayed and Missing State Observations
Accept (poster)
Summary: This paper studies online learning in tabular MDPs with impaired observability, which covers one of two models: states are observed with stochastic delay, or some observations are missing (sampled independently). The authors present algorithms based on optimism and prove regret bounds that scale optimally with $S,A,K$ and polynomially with $H$. Strengths: **Summary:** I don't have a strong conviction about this paper. On the one hand, the paper is well-written and the model is interesting and novel, but on the other hand, the regret bounds are straightforward and there is no interesting algorithmic novelty in my opinion. **Strengths:** 1. RL with impaired observability is an interesting model which obviously has a lot of real-world applications. The authors do a good job of introducing the model, the challenges and the motivation. 2. The paper is generally well-written and easy to follow (except for a few technical parts, see questions). It conveys well the need for the impaired observability model and the idea behind the algorithmic approaches to solve it. 3. The algorithms are simple and the regret bounds are close to optimal (although one can argue about what is close to optimal in this setting). Weaknesses: **Weaknesses:** 1. I think that the novelty of this paper is limited to the definition of the new models. In terms of algorithmic contribution, the application of optimism here is straightforward and so is the regret analysis. In fact, unless I am missing something, the augmented MDP is a factored MDP, and then the algorithms (and analysis) actually exist already, so there is no surprise that the regret is polynomial. If that is the case, then the authors should also discuss the factored MDP literature, e.g., "Near-optimal Reinforcement Learning in Factored MDPs" by Ian Osband and Benjamin Van Roy. 2. The algorithms run in time exponential in $H$. I understand that this is probably inevitable, but it still makes the contributions of this paper very limited. 3. The regret analysis is similar to previous works and does not investigate in depth the dependence on the delays or the missing-observation rate. The regret of Algorithm 2 "hides" the dependence on the distribution of the delays in the extra $H$ factors. I think that the actual dependence is interesting in this setting because it can show which delays actually become a problem. While the dependence on the missing rate is shown in the regret of Algorithm 3, the optimality is not discussed, and instead there is an additional algorithm that replaces this optimality with another assumption (that to me looks like it makes the whole thing easy because very few observations are missing). Moreover, there is not enough discussion about Assumptions 2.1 and 2.2 and what happens if they are relaxed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: **Questions:** 1. The definition of the augmented MDP is very hard to follow. Can the authors please explain in words what this MDP looks like? 2. Can the authors please discuss the assumptions a little more? For example, is the assumption that feedback arrives in order necessary? What happens when there are large delays, and why is the regret always bounded within $H$? 3. Is the augmented MDP a factored MDP? 4. Can you think of a scenario where the computational complexity is not exponential? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments! >**Q1**: The novelty of this paper is limited to the definition of the new models. It appears to be a factored MDP with known solutions. **A1**: We respectfully disagree. Our augmented MDP is not a factored MDP. Factored MDPs encode rewards and transitions exhibiting some conditional independence structure among factors. Although our augmented MDP has sparse transition probabilities, it does not have immediate conditional independence structures. Thus, there was no prior solution to the delayed MDP except for naive exponential-regret methods. More importantly, we are the first to provide concrete regret bounds that scale with $\sqrt{S A}$, by developing several novel techniques regarding the augmented MDP formulation and the construction and analysis of the bonus function. 1). To tackle the exponentially large state space in the augmented MDP, we construct a reformulation $\widetilde{\text{MDP}}_{\text{aug}}$ with past rewards and an extended horizon. This new reformulation provably attains the same expected reward as $\text{MDP}_{\text{aug}}$ and serves as the basis of Algorithm 2. 2). In Algorithm 2, we provide a novel construction of the bonus function to ensure optimism with high probability. The intriguing fact about the bonus function is that it is akin to the bonus of UCBVI applied to the original MDP. However, this is a consequence of our analysis exploiting the sparse structure in the transition kernel. A direct application of the UCBVI algorithm to $\widetilde{\text{MDP}}_{\text{aug}}$ will easily end up with problematic counting numbers $N(s_{t_h}, a_{t_h:h-1})$, which lead to exponential regret. In addition, we also provide uncertainty quantification on the arbitrary delay distribution, which, to the best of our knowledge, has not been done in prior work. >**Q2**: The algorithms run in exponential time. **A2**: There seems to be some misunderstanding. While our main focus is the learning efficiency, our proposed algorithms actually work with any planning oracle and achieve polynomial sample complexity. The technical contributions are discussed in response A1. Depending on the planning oracle chosen by a practical user, the time complexity might vary and would be exponential in $H$ only with the worst choice. In practice, one often solves the planning problem using policy gradient methods with function approximation, which prove quite efficient even in large-scale problems. >**Q3**: The regret analysis is similar to previous works and does not investigate in depth the dependence on the delays or the missing-observation rate. **A3**: We do not assume any distribution on the length of delay. Hence, the regret bound holds for arbitrary delay in the worst case. The idea is that we consider finite-horizon MDPs; therefore, we can truncate the length of delay at $H$ (Line 221-225). When the delay is much better behaved, e.g., bounded by some constant $D_{\max}$ smaller than $H$, we can obtain better regret bounds. With the same algorithm, a very slight modification of our proof leads to a regret of $\tilde{\mathcal{O}}((D_{\max}+1)^{3/2} H^{5/2} \sqrt{SAK})$. As can be seen, when $D_{\max}$ is small, the regret becomes small. Moreover, when $D_{\max} = 0$, i.e., there is no delay, the regret recovers that of standard MDPs without delay. Please also see the response A3 to **Reviewer yWj8** for technical details of how to obtain the modified regret bounds.
>**Q4**: While the dependence on the missing rate is shown in the regret of Algorithm 3, the optimality is not discussed, and instead there is an additional algorithm that replaces this optimality with another assumption. **A4**: Proposition 5.1 confirms that obtaining polynomial regret with missing observations is possible. Yet the $S^2$ dependence may not be optimal. With an assumption on the missing rate, we indeed show a $\sqrt{SA}$ regret, which matches the minimax-optimal dependence on $S$, $A$ in the standard MDP setting. We further discuss the assumption on the missing rate in Line 304 - 310, while leaving the study of missing rates larger than $1/A$ as future work. The analysis of Theorem 5.2 is rather complicated, in contrast to the seemingly "easy" regime of small missing rates. The major difficulty is to handle the summation over multi-step counting numbers (from Line 593 to the end of Appendix C). >**Q5**: Moreover, there is not enough discussion about Assumptions 2.1 and 2.2 and what happens if they are relaxed. **A5**: Assumptions 2.1 and 2.2 are fairly general themselves and encode arbitrary delay and missing distributions (see the discussion in Line 123-127). Our theory holds even in the worst scenario. Further relaxing Assumptions 2.1 and 2.2, such as allowing the interarrival time $\Delta_h$ to be negative (i.e., $s_{h}$ can be observed before $s_{h-1}$) or letting the missing rate depend on the underlying state, goes beyond the scope of the current paper. >**Q6**: The definition of the augmented MDP is very hard to follow. **A6**: Roughly speaking, the augmented MDP forms an enlarged state space from the nearest observed state and all the history of actions taken thereafter (see the definition of $\tau_h$ in Line 154). The reward and transition probabilities are slightly complicated, but built upon the original MDP. The reward function is the expected reward (Line 157 - 158) over the belief of the unseen current state, given all the observed information. The transition probabilities are sparse, as past actions cannot be altered (Line 164 - 165). >**Q7**: A scenario where the computational complexity is not exponential? **A7**: While investigating practical planning algorithms for specific problems is beyond the scope of our paper, we believe that our augmented MDP formulation is compatible with *any* planning oracle for accelerated solutions of RL. As a particular example, when the underlying transition is nearly deterministic, Thomas J Walsh, Ali Nouri, Lihong Li, and Michael L Littman, "Planning and learning in environments with delayed feedback", show polynomial planning complexity. --- Rebuttal Comment 1.1: Comment: I thank the authors for all their detailed responses. I will keep my positive score.
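To make the augmented-state construction in A6 concrete, here is a minimal sketch of the bookkeeping for $\tau_h = (s_{t_h}, a_{t_h:h-1})$: the most recently observed state plus all actions taken since. The names and structure are our assumption, not the paper's code:
~~~
from dataclasses import dataclass, field

@dataclass
class AugmentedState:
    t: int                 # step of the latest observed state, t_h
    s_obs: int             # that observed state, s_{t_h}
    pending: list = field(default_factory=list)  # actions a_{t_h : h-1}

    def act(self, a):
        self.pending.append(a)  # grow the action window while blind

    def observe(self, arrivals):
        # Delayed observations (step, state) arrive in order (Assumption 2.1);
        # each arrival shifts the window start and drops the covered actions.
        for step, s in arrivals:
            self.pending = self.pending[step - self.t:]
            self.t, self.s_obs = step, s
~~~
An executable policy, in the sense of the paper, is then any map from such an `AugmentedState` (plus the step index) to an action.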
Summary: The paper considers reinforcement learning with delayed and missing observations. It is shown that an MDP with delayed observations is equivalent to an augmented MDP with perfect observations. Based on the augmented MDP, an optimistic algorithm is proposed for delayed MDPs, and it is shown to achieve a near-optimal regret order. For MDPs with missing observations, two optimistic algorithms are proposed with near-optimal regret order under some assumptions on the missing rate. Strengths: - For MDPs with delayed observations, under an independence assumption and an assumption requiring that the delayed observations still arrive in order, the paper constructs an equivalent augmented MDP with perfect observations. Based on the augmented MDP, an optimistic algorithm is proposed for the delayed MDP. Despite the increased cardinality of the augmented MDP, the proposed algorithm is shown to achieve a regret with sharp dependence on the state and action spaces. - The effect of observation delay on performance degradation is discussed, with some performance gap bounds provided between the optimal policy with and without observation delays. - For MDPs with missing observations, one optimistic algorithm is proposed, and its regret is shown to be sub-linear in the number of episodes. - When the missing rate is small, another algorithm based on the augmented MDP is proposed. This algorithm is shown to have near-optimal regret order if the number of episodes is large enough. Weaknesses: - There are some possible typos in key equations which may significantly affect the derivations and results. - In the definition of the transition below line 164, the collections of actions for time $h$ and $h+1$ may have different sizes. Is that just a typo, or do some additional things need to be handled? - In the equation above line 182, should the reward be $r_{t_h}$? - In the bonus function in Algorithm 2, $a_h$ seems to be a typo. - There seem to be several missing steps in the proof of Proposition 5.1. The entire derivation of the proof should be conditioned on some high-probability event for the inequalities to hold, but those steps are missing. Inequality (i) is claimed to be true from valid optimism, but optimism for Algorithm 3 is not provided. Inequality (ii) might also need some further discussion. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Can the authors fix some of the typos and missing steps? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations are adequately addressed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments! >**Q1**: In the definition of the transition below line 164, the collections of actions for time $h$ and $h+1$ may have different sizes. Is that just a typo, or do some additional things need to be handled? **A1**: The varying size of the action collection is a consequence of our general delay distribution: at time $h$, there is a possibility that no new state observation arrives, in which case we can only extend the action sequence in the augmented space. Our analysis handles these varying sizes: we bound the worst-case cardinality of the augmented state space in Line 467, and optimism holds for any augmented state. >**Q2**: In the equation above line 182, should the reward be $r_{t_h}$? **A2**: Thanks for pointing out the typo. >**Q3**: In the bonus function in Algorithm 2, $a_h$ seems to be a typo. **A3**: Thanks for pointing out the typo. >**Q4**: There seem to be several missing steps in the proof of Proposition 5.1. The entire derivation of the proof should be conditioned on some high-probability event for the inequalities to hold, but those steps are missing. Inequality (i) is claimed to be true from valid optimism, but optimism for Algorithm 3 is not provided. Inequality (ii) might also need some further discussion. **A4**: We are confident about the correctness and rigor of our proof. Thanks for the comment, and we will add more clarifications. The high-probability event we condition on is that the ground-truth transition (denoted by $\theta^*$) falls into the set $\mathcal{B}_k$. This can be shown by a direct application of Hoeffding's inequality. We will add this argument in the next version. Inequality (i) of Proposition 5.1 follows from Line 4 of Algorithm 3. As the ground-truth environment is contained in $\mathcal{B}_k$, the value of the best executable policy for the ground-truth environment is no larger than the largest value under the double maximization. Inequality (ii) follows from a standard telescoping expansion of the value function over time by utilizing the fact that $Q_h(s, a) = r_h(s, a) + [P_h V_{h+1}](s, a)$ for any $h$. We apply this expansion to both $V_{\theta^k}^{\pi^k}$ and $V_{\theta^*}^{\pi^k}$. --- Rebuttal 2: Title: Are you satisfied by the answers? Comment: Dear reviewer, Would you please indicate whether the authors' response is satisfactory for you? If not, please engage with the authors, so we can get a better assessment of this work. Thank you, Area Chair --- Rebuttal Comment 2.1: Title: Follow-up Comment: I'd like to follow up on this. You raised an issue with the proof of Prop 5.1. Is the authors' response satisfactory?
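For readers unfamiliar with the construction mentioned in A4, a generic count-based Hoeffding-style bonus of the kind used in optimistic tabular RL looks as below; the constants are illustrative, not the paper's exact choice, and the paper's actual bonus additionally accounts for the delay distribution:
~~~
import numpy as np

def hoeffding_bonus(counts, H, delta):
    """b(s, a) = H * sqrt(2 * log(1/delta) / max(N(s, a), 1))."""
    n = np.maximum(counts, 1)
    return H * np.sqrt(2.0 * np.log(1.0 / delta) / n)

def optimistic_q(r, P_hat, V_next, counts, H, delta):
    """One optimistic backup: Q <- r + P_hat V + bonus, clipped at H.
    Shapes: r, counts: (S, A); P_hat: (S, A, S); V_next: (S,)."""
    return np.minimum(r + P_hat @ V_next + hoeffding_bonus(counts, H, delta), H)
~~~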
Summary: This paper provides a theoretical analysis of reinforcement learning with delayed and missing state observations and establishes near-optimal regret bounds for both the delayed and missing observation settings. Despite impaired observability posing significant challenges to the policy class and planning, the results demonstrate that learning remains efficient, with the regret bound depending optimally on the state-action size of the original system. The performance of policies under impaired observability is also evaluated. Strengths: 1. The problem addressed is critical for applying RL to real-world scenarios. 2. The theoretical analysis is comprehensive and well-executed. Weaknesses: 1. The paper does not include preliminary experimental results, which may hinder the empirical understanding of the methods. 2. Building connections between the theoretical analysis and existing empirical studies in the literature would be beneficial. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. How can the theoretical analysis inspire further algorithm design to address impaired observations? 2. What are the main challenges preventing you from conducting empirical studies for the proposed method? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: 1. The scalability to large action spaces is limited. 2. The lack of empirical studies makes the applicability of the method uncertain. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments! >**Q1**: The paper does not include preliminary experimental results, which may hinder the empirical understanding of the methods. **A1**: Thanks for the suggestion. In the RL theory literature, developing theory for tabular MDPs is usually the first step, and most papers in this category did not need experiments due to the nature of tabular problems [1-3]. In contrast, we followed your suggestion and **conducted an additional experiment on a toy model (see https://openreview.net/attachment?id=Pd2GMnkFUx&name=pdf for results)**. In particular, we consider constant delays and an MDP with horizon $H = 6$. Detailed transitions and rewards are summarized in Table 1 and Table 2 in the attached PDF. We vary the length of delay to be 1 and 2, and run our proposed policy learning Algorithm 2 for 10000 episodes. The regret is plotted in Figure 1. As can be seen, the regret increases as the length of delay increases. In Figure 2, we investigate the performance degradation caused by delayed observations. When the transition is almost deterministic, we observe that the gap is relatively small. However, when the transition is more random, the gap increases. These observations validate our main theoretical results, such as Propositions B.2 and 4.2. [1] "Near-optimal reinforcement learning in polynomial time" by Michael Kearns and Satinder Singh. [2] "Minimax Regret Bounds for Reinforcement Learning" by Mohammad Gheshlaghi Azar, Ian Osband, Rémi Munos. [3] "Near-optimal regret bounds for reinforcement learning" by Thomas Jaksch, Ronald Ortner, and Peter Auer. >**Q2**: The scalability to large action spaces is limited. **A2**: Our main focus is the learning efficiency. The proposed algorithms actually work with any planning oracle and achieve polynomial sample complexity. Depending on the planning oracle chosen by a practical user, the time complexity might vary and would be exponential in $H$ only with the worst choice. In practice, one often solves the planning problem using policy gradient methods with function approximation, which prove quite efficient even in large-scale problems. --- Rebuttal 2: Title: Are you satisfied by the answers? Comment: Dear reviewer, Would you please indicate whether the authors' response is satisfactory for you? If not, please engage with the authors, so we can get a better assessment of this work. Thank you, Area Chair
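The toy experiment described in A1 can be mirrored by a small delayed-observation wrapper around a tabular episodic MDP; the sketch below is our illustration (constant delay, rewards accrued on the true state, and a hypothetical `policy(h, arrivals)` interface), not the authors' code:
~~~
import random

class DelayedObsEnv:
    """Tabular episodic MDP whose state observations arrive `delay` steps late.
    P[h][s][a] is a probability vector over next states; r[h][s][a] a reward."""

    def __init__(self, P, r, H, delay):
        self.P, self.r, self.H, self.delay = P, r, H, delay

    def run_episode(self, policy, s0=0):
        s, buffer, total = s0, [(0, s0)], 0.0
        for h in range(self.H):
            # Release observations whose delay has elapsed (in order).
            arrivals = [(t, x) for t, x in buffer if t + self.delay <= h]
            buffer = [(t, x) for t, x in buffer if t + self.delay > h]
            a = policy(h, arrivals)          # the policy sees delayed info only
            total += self.r[h][s][a]
            dist = self.P[h][s][a]
            s = random.choices(range(len(dist)), weights=dist)[0]
            buffer.append((h + 1, s))
        return total
~~~
With `delay = 0` this reduces to a fully observed MDP, matching the rebuttal's remark that the no-delay regret is recovered.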
Summary: This paper studies reinforcement learning in the setting where the MDP has delayed state information, but delayed observations still arrive in order, and in the setting where the state information of the MDP could be missing and never observed. For both settings, this paper provides provably efficient RL algorithms with sub-linear regrets, whose dependencies on the number of episodes are optimal. In the setting with delayed state information, this paper shows the impact of the delay $d_h$ on the performance of the optimal policy. In the setting with missing state information, this paper shows the impact of the observation rate $\lambda_h$ on the regret. The results seem interesting to me. Strengths: 1. This paper studies MDPs with delayed state information and MDPs with missing state information, which seems to be an important problem. 2. This paper provides algorithms for these two settings and proves the regret guarantees. 3. This paper shows the impacts of the values of delays and observation rates on the performance of the policies for learning in MDPs with impaired observability. Weaknesses: 1. The novelty of the proposed algorithms is not clear to me. For example, Algorithm 2 seems to be a simple application of a tabular RL algorithm to the case with delayed state information? 2. It is not clear to me how to understand the regret. For example, why is the benchmark in the regret the optimal policy with impaired observability? Is it more reasonable to compare with a stronger optimal policy, e.g., one with full observability? If the optimal policy also operates with impaired observability, why is the problem more challenging? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Why is the regret in Theorem 4.1 for delayed state information independent of the delay? Is it because the optimal policy also suffers from the delay? Intuitively, the performance of the online policy with a larger delay should be worse. For example, should we hope for much better regret in the case with no delay than in the case with infinite delay? 2. Even when the delay $d_h$ is 0, there is still a gap in the regret in Theorem 4.1? Could you give some insight into whether this can actually be sharpened, or why this is true? 3. Similarly, in Section 5 with missing state information, even when the missing rate is 0, the regret seems to remain larger than the best regret that could be obtained? 4. Above line 119, the executable policy class is defined based on the action sequence. However, based on the notation, the actions are always known, so why is the action sequence $a_{t_h:h-1}$ important there? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Please see weaknesses and questions above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments! >**Q1**: Novelty of the proposed algorithms. **A1**: A naive application of tabular RL algorithms to our problem would yield exponential $\tilde{\mathcal{O}}({\rm poly}(H) \sqrt{SA^HK})$ regret. We are able to achieve regret that scales with $\sqrt{S A}$ by utilizing structures of the delayed MDP and a novel construction of bonus functions. 1). To tackle the exponentially large state space in the augmented MDP, we construct a reformulation $\widetilde{\text{MDP}}_{\text{aug}}$ with past rewards and an extended horizon. This new reformulation provably attains the same expected reward as $\text{MDP}_{\text{aug}}$ and serves as the basis of Algorithm 2. 2). In Algorithm 2, we provide a novel construction of the bonus function to ensure optimism with high probability. The intriguing fact about the bonus function is that it is akin to the bonus of UCBVI applied to the original MDP. However, this is a consequence of our analysis exploiting the sparse structure in the transition. A direct application of the UCBVI algorithm to $\widetilde{\text{MDP}}_{\text{aug}}$ will easily end up with problematic counting numbers $N(s_{t_h}, a_{t_h:h-1})$, which lead to exponential regret. In addition, we also provide uncertainty quantification on the arbitrary delay distribution, which, to our knowledge, has not been done in prior work. >**Q2**: How to understand the regret. **A2**: Our theory provides comparisons to both the best executable policies and the even stronger full-observability optimal policies, where the former is termed the regret (Line 204) and the latter the gap (Line 232-233). Briefly speaking, the regret measures the learnability of the best executable policy, and the gap characterizes the unavoidable performance degradation due to impaired observability. In our separation theory (Proposition 4.2), we have shown that the gap is heavily instance-dependent and can be large in the worst case. We also provide a general bound on the performance gap in Proposition B.2, which is deferred to the appendix due to space limits. In the worst case, there is a constant gap compared to the optimal policy without impaired observability, indicating the divergence of the cumulative gap as $K$ increases. On the other hand, even compared to the optimal executable policy, as mentioned in the introduction, naive approaches easily result in exponential regret. Our algorithm and analysis lead to the first **tractable** solution to RL with impaired observability, i.e., achieving complexity that scales polynomially with $S$ and $A$. >**Q3**: Regret in Theorem 4.1 independent of delay? **A3**: We do not assume any distribution on the length of delay. Hence, the regret bound holds for arbitrary delay in the worst case. The idea is that we consider finite-horizon MDPs; therefore, we can truncate the length of delay at $H$ (Line 221-225). When the delay is much better behaved, e.g., bounded by some constant $D_{\max}$ smaller than $H$, we can obtain better regret bounds. With the same algorithm, a very slight modification of our proof leads to a regret of $\tilde{\mathcal{O}}((D_{\max}+1)^{3/2} H^{5/2} \sqrt{SAK})$. As can be seen, when $D_{\max}$ is small, the regret becomes small. Moreover, when $D_{\max} = 0$, i.e., there is no delay, the regret recovers that of standard MDPs without delay. (Technical details of how to obtain these regret bounds are provided at the end.) It is worth pointing out that the dependence on $S$, $A$, and $K$ stays the same and is sharp.
Technical details on how to show the $\tilde{\mathcal{O}}((D_{\max}+1)^{3/2} H^{5/2} \sqrt{SAK})$ regret: 1) The cardinality of $\mathcal{S}_{\text{aug}}$ is now bounded by $HSA^{D_{\max}+1}$ (Line 467 in Appendix B.1). 2) The high-probability event holds uniformly over $\mathcal{S}_{\text{aug}}$. Therefore, the confidence band is narrowed by replacing $H$ with $D_{\max} + 1$ in the square root. 3) In Eqn. (B.7) and (B.10), we replace a factor of $H$ by $D_{\max}+1$, as the delay is bounded by $D_{\max}$. Putting 1), 2), and 3) together leads to the claimed regret bound. 4) In the case of no delay, we do not need to estimate the distribution of delays. Therefore, the bonus function is simplified, and there is no need for the summation in Eqn. (B.4). With these simplifications, we recover the regret bound of standard MDPs. >**Q4**: When the delay $d_h$ is 0, is there still a gap in the regret in Theorem 4.1? **A4**: Our regret in Theorem 4.1 is sharp in $S$, $A$, and $K$. As mentioned, we recover the minimax-optimal dependence on $S$, $A$, and $K$ in the standard MDP setting. Our analysis covers arbitrary delay in the worst case. When the length of delay is bounded, we can modify the regret bound as shown in the previous response A3. >**Q5**: When the missing rate is 0, the regret seems to be larger than the best regret? **A5**: In Proposition 5.1 (the last inequality above Line 558), when the observation rate $\lambda_0$ is 1, i.e., there are no missing observations, we recover the same regret as optimistic planning in standard MDPs. Also, as discussed in Line 278 - 279, the dependence on the observation rate is approximately $1 / \lambda_0^2$ (the square comes from requiring consecutive observations for transition estimation). In Theorem 5.2, the dependence on the missing rate appears in the non-dominating term. When the missing rate is 0, i.e., $v = \infty$, we can recover the regret bound of standard MDPs with a slight modification of the proof, similar to response A3. >**Q6**: The executable policy class is defined based on the action sequence. However, based on the notation, the actions are always known. Why is the action sequence $a_{t_h:h-1}$ important? **A6**: One has to keep track of past actions $a_{t_h:h-1}$ and choose actions based on them because they influence the current unknown state. It is necessary to augment the state space with both past actions and the nearest observed state $s_{t_h}$ in order for the Markov property to hold and make the augmented MDP well defined. Missing any of $a_{t_h:h-1}$ would lead to incomplete information and an ill-defined probability space. --- Rebuttal Comment 1.1: Comment: Thank you for the response and clarification. --- Rebuttal 2: Title: Are you satisfied by the answers? Comment: Dear reviewer, Would you please indicate whether the authors' response is satisfactory for you? If not, please engage with the authors, so we can get a better assessment of this work. Thank you, Area Chair
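Similarly, the missing-observation model of Assumption 2.2 admits a one-line sketch (our illustration, not the paper's formal definition): each state is revealed independently with observation rate `lam[h]`, and a `None` marks a missing observation.
~~~
import random

def observe_with_missing(states, lam):
    """Reveal s_h independently with probability lam[h]; None means missing."""
    return [s if random.random() < lam[h] else None
            for h, s in enumerate(states)]
~~~
The small-missing-rate regime discussed above then corresponds to `lam[h]` close to 1 (missing rate `1 - lam[h]` below `1/A` for $A$ actions).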
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their thoughtful reviews and valuable comments, which help to improve the quality of the paper. We have provided responses to your individual questions and concerns. In addition to clarifying our contributions and technical novelties, we conducted numerical experiments as suggested by Reviewer kTAd. The results are summarized in the attached PDF file. Pdf: /pdf/17f51d42c027dbdcce6d42d13681ce94dc997bb1.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null