title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Online Learning of Delayed Choices | Accept (poster) | Summary: The paper studies the task of learning MNL parameters with delayed feedback. The authors study both the case where feedback with extremely high delay is ignored and the case where it is taken into account. They prove that for both settings the optimal regret is $\tilde{\Theta}(\sqrt{NT})$, where $T$ is the horizon and $N$ is the number of products. They also conducted experiments that support their theoretical findings.
Strengths: 1) Assuming that results are correct, the paper gives a first solution to a question that seems natural in the domain of online advertising.
2) The bounds are fairly tight.
3) The paper includes an extensive literature review.
4) The authors conducted experiments to support their theoretical findings.
Weaknesses: 1) I think that the main results could have been better presented. In the current formulation, it seems like there is no difference between the guarantees of DEMBA and PA-DEMBA, which makes the reader wonder why we need both of them. It also makes it hard to understand what improvements we can hope to achieve in future work.
2) The setup seems kind of specific to this problem. It could have been interesting to know if one can define it more generally, and still use the same or similar solutions. This could have helped the reader to conjecture whether the same techniques may be used to solve other variations of the problem.
Technical Quality: 3
Clarity: 3
Questions for Authors: Questions:
1) What is $\mathcal{S}$ in algorithm 1? The set of all assortments of size at most K?
2) $K$ is not mentioned in the upper bounds. They hold for all K?
3) What is the tradeoff between DEMBA and PA-DEMBA? The guarantees mentioned in Theorems 5.1 and 6.2 are identical.
4) In line 288, I am not sure how you reached the conclusion that "Our lower bound suggest an improvement on regret by considering the delay distribution via $\psi_\mu$." (there is also a typo here: suggest --> suggests). I don't understand how a lower bound can suggest an improvement to the upper bound. For example, $\Omega(1)$ is also a valid lower bound.
Suggestions:
1) The guarantees written in Theorems 5.1, 6.2 are identical. Consider showing the factors that make a difference between the results.
2) I think that the introduction is a bit lacking in justifications or examples. For example, in line 23, you write "fall short in scenarios....". Why? Is there a line of work showing this?
3) A minor suggestion: In line 237, it is not clear why to use $\tilde{O}(\cdot)$ in this context. First, the lower bound has no logarithm factors in it so there's no point in concealing them. Second, $O(\sqrt{NT})$ can also be O(1), which is not the point of the lower bound. Therefore, I suggest writing "...better regret than $\Omega(\sqrt{NT})$".
Typos:
1) line 181: "at least at least".
2) Theorem 5.1: Add "Then" before "$\pi^{DEMBA}$".
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First of all, we would like to extend our sincere thanks for your detailed review and constructive feedback. We have carefully considered your comments and have addressed them as follows:
> W1) I think that the main results could have been better presented. In the current formulation, it seems like there is no difference between the guarantees of DEMBA and PA-DEMBA, which makes the reader wonder why we need both of them. It also makes it hard to understand what improvements we can hope to achieve in future work.
We appreciate your feedback regarding the presentation of the results. You are correct that the current use of $\tilde{O}$ notation may obscure the differences between the guarantees of DEMBA and PA-DEMBA. To address this, we will revise the manuscript to include all relevant factors in our regret bounds. This change will highlight the differences more clearly and provide a better understanding of the improvements that can be achieved with each algorithm.
> W2) The setup seems kind of specific to this problem. It could have been interesting to know if one can define it more generally and still use the same or similar solutions. This could have helped the reader to conjecture if the same techniques may be used to solve other variations of the problem.
Regarding the generalizability of our setup, we acknowledge that our work is indeed focused on the specific problem of delayed learning in the multinomial logit model. This specificity allows us to provide detailed analysis and guarantees for this particular setting, which is important in areas such as online advertising and recommendation systems. However, we believe that some of our techniques, particularly those dealing with delayed feedback, could potentially be adapted to other choice models or learning scenarios with delayed information. Future work could explore extending our approach to other discrete choice models or investigating how our methods for handling delay could be applied in different online learning contexts. While our current focus is on providing a thorough solution for the multinomial logit model with delayed feedback, we agree that exploring broader applications could be a valuable direction for future research.
> Q1) What is $\mathcal{S}$ in algorithm 1? The set of all assortments of size at most \(K\)?
$\mathcal{S}$ represents the set of all assortments of size at most K. We will make this clearer in the revised manuscript.
> Q2) $K$ is not mentioned in the upper bounds. Do they hold for all \(K\)?
> Q3) What is the tradeoff between DEMBA and PA-DEMBA? The guarantees mentioned in Theorems 5.1 and 6.2 are identical.
$K$ and the difference between DEMBA and PA-DEMBA appear in logarithmic terms of our regret bounds; in the revised manuscript, we will include the logarithmic terms to clarify the distinctions between the algorithms.
> Q4) In line 288, I am not sure how you reached the conclusion that "Our lower bound suggests an improvement on regret by considering the delay distribution via $\psi_\mu$." (there is also a typo here: suggest --> suggests). I don't understand how a lower bound can suggest an improvement to the upper bound. For example, $\Omega(1)$ is also a valid lower bound.
The key point we were aiming to make is that our lower bound and upper bound do not match in terms of the threshold terms $\psi_\mu$ and $\mu$, and this suggests a possible improvement in handling the delay and/or the threshold mechanism in our analysis. This distinction will also be clearer when we explicitly include the logarithmic terms in our regret guarantees in the revised manuscript.
> S1) The guarantees written in Theorems 5.1, 6.2 are identical. Consider showing the factors that make a difference between the results.
We agree with your suggestion. In the revised manuscript, we will explicitly detail the factors that differentiate the results, including constants and other relevant terms.
> S2) I think that the introduction lacks justifications or examples. For example, in line 23, you write "fall short in scenarios....". Why? Is there a line of work showing this?
We will modify this sentence with an example to illustrate that classical multi-armed bandit (MAB) models are not suitable for our setting. Specifically, MAB models require selecting one arm at each round, while in our case, we need to select multiple products (arms) to form an assortment. Moreover, adding or subtracting a product from the assortment affects the probabilities of selection for other products, which introduces a complexity not captured by traditional MAB models. This stochastic reward process and the impact of assortment changes on future decisions are better captured by our discrete choice model using the multinomial logit (MNL) framework. We will provide relevant justifications and examples to illustrate these points.
> S3) A minor suggestion: In line 237, it is not clear why to use $\tilde{O}(\cdot)$ in this context. First, the lower bound has no logarithmic factors in it, so there's no point in concealing them. Second, $O(\sqrt{NT})$ can also be $O(1)$, which is not the point of the lower bound. Therefore, I suggest writing "...better regret than $\Omega(\sqrt{NT})$"
Thank you for pointing this out. We will correct this and use $\Omega$ when referring to our lower bound.
> Typos
We will correct the repetition of “at least” and we will add “Then” before “$\pi^{DEMBA}$" as suggested.
We appreciate your feedback and believe these revisions will significantly improve the manuscript. Thank you for the opportunity to address these issues.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for addressing my comments and questions. | Summary: The authors consider the setting in which a business is required to present a set of options to a customer in order to maximize the generated revenue. This task is challenging as the options presented to a customer may interact with each other and alter the choice of the customer, and the feedback on the choice of the customer could be received by the business after a considerable delay. This class of tasks can be addressed in the literature using multinomial logit (MNL) models. The authors aim to address two challenges that arise in this scenario, namely the unknown MNL parameters and the delayed feedback. The authors consider two settings, *threshold* and *non-threshold*, and propose two algorithms to address them, DEMBA and PA-DEMBA, respectively.
Strengths: The paper is clear and well-written. A deep analysis of related works is provided by the authors, clearly stating what gap in the literature the work aims to fill.
The work combines existing ideas to propose a solution to a problem that might have some significance to some members of the NeurIPS community.
Weaknesses: W1) The definition of the "feedback observed by the seller", $o_{i,s,t}$ is inconsistent and seems also incorrect. At line 148, it is defined as:
$$
o_{i,s,t} = c_{i,s,t} a_{i,t},
$$
whereas in the proof sketch of Lemma 4.1 it is used as $o_{i,s,t} = c_{i,s,t} a_{i,s}$.
Both definitions seem to be incorrect, as in the first case (i.e., with $a_{i,t}$), $o_{i,s,t}$ evaluates to 1 considering the option chosen by the consumer at round $t$ instead of round $s$, in which the product was sold.
In the second case (i.e., with $a_{i,s}$), this issue is solved; however, the definition of $c_{i,s,t}$ makes it so that $o_{i,s,t}$ would evaluate to 1 at every round $t \ge d_s + s$ (under the condition that $d_s \le \mu$), potentially causing a choice of the consumer to count more than once towards the estimation of the preference.
W2) The proof sketch of Lemma 4.1 at line 188 (and in the appendix) seems to contain an error, as the conditions of $c_{i,s,t}$, which in the definition are in a logical AND, are split using a summation, which seems to be incorrect, and could invalidate all the subsequent steps of the proof.
W3) The observation that "each alternative can act as a substitute or competitors to others, impacting the customer's final decision" stated in Section 1 would have been better represented in the problem formulation with attraction parameters that depend on the options in the proposed set. Indeed, the provided formulation of the customer choice probabilities works fine, but cannot comprehensively capture product substitution dynamics.
Technical Quality: 2
Clarity: 4
Questions for Authors: The reviewer would like the authors to address the reported weaknesses, and to respond to the following questions:
Q1) Can the authors provide a formal proof of the first passage of the proof of Lemma 4.1 at line 184:
$$\frac{ \sum_{\tau \in E_{\tau} (i) } \tilde{v}\_{i, \tau}}{| E_{\tau} (i) |} = \frac{\sum_{s=1}^{t_{\tau}^{end}} o_{i,s,t_{\tau}^{end}}}{| E_{\tau} (i) |},$$
as it is simply stated as a trivial definition however by applying the definition of $\tilde{v}_{i, \tau}$ it does not seem to be correct, considering also the remarks of W1).
Q2) Can the authors clarify how the epochs are defined?
Q3) Why have the authors not considered any of the algorithms reported in Section 2 in the experimental evaluation of the proposed algorithms?
Confidence: 4
Soundness: 2
Presentation: 4
Contribution: 2
Limitations: No limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First of all, we would like to extend our sincere thanks for your comments. We have taken due diligence to address the concerns raised and we believe your suggestions have greatly helped us in improving our manuscript.
> W1) The definition of the "feedback observed by the seller", $o_{i,s,t}$ is inconsistent and seems also incorrect.
Thank you for identifying this typo. We have updated line 148 as $o_{i,s,t} = c_{i,s,t} a_{i,s}$. We will explain the potential issue of multiple counting in the discussion of Q1.
> W2) The proof sketch of Lemma 4.1 at line 188 (and in the appendix) seems to contain an error, as the conditions of \(c_{i,s,t}\), which in the definition are in a logical AND, are split using a summation, which seems to be incorrect and could invalidate all the subsequent steps of the proof.
We can write the condition containing the logical AND as
$$
\mathbb{I}(d_s \le t-s \text{ and } d_s \le \mu) = \mathbb{I}(d_s \le \min(t-s, \mu)).
$$
In line 188, we split the expression based on the periods where the minimum attains a particular value. Therefore, our analysis is valid. We will add an explanation to make this step clear in our revised manuscript.
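As a quick sanity check, this identity can be verified numerically. The snippet below is purely illustrative (not part of the paper's code); the names `d_s`, `t`, `s`, and `mu` follow the notation in the rebuttal.

```python
# Numeric check of the identity used in the proof sketch:
# I(d_s <= t - s and d_s <= mu) == I(d_s <= min(t - s, mu)).

def lhs(d_s, t, s, mu):
    # indicator with the logical AND, as in the definition of c_{i,s,t}
    return int(d_s <= t - s and d_s <= mu)

def rhs(d_s, t, s, mu):
    # equivalent indicator with the minimum, as used in line 188
    return int(d_s <= min(t - s, mu))

# exhaustively compare the two forms on a small grid of integer values
for d_s in range(8):
    for t in range(8):
        for s in range(t + 1):
            for mu in range(8):
                assert lhs(d_s, t, s, mu) == rhs(d_s, t, s, mu)
print("identity holds on all tested values")
```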
> W3) The observation that "each alternative can act as a substitute or competitor to others, impacting the customer's final decision" stated in Section 1 would have been better represented in the problem formulation with attraction parameters that depend on the options in the proposed set. Indeed, the provided formulation of the customer choice probabilities works fine but cannot comprehensively capture product substitution dynamics.
We agree that the multinomial logit (MNL) model has limitations. While the MNL model does not explicitly make attraction parameters dependent on the proposed set, the relative nature of the choice probabilities ensures that each item’s selection probability is influenced by the presence or absence of other items in the assortment. This captures a form of substitution effect. We acknowledge that more complex substitution dynamics could be modeled by making attraction parameters assortment-dependent. However, our current formulation offers a balance between capturing essential substitution behavior and maintaining model tractability. Exploring more nuanced substitution dynamics could indeed be an interesting direction for future research.
> Q1) Can the authors provide a formal proof of the first passage of the proof of Lemma 4.1 at line 184:
$$
\frac{ \sum\_{\tau \in E\_{\tau} (i) } \tilde{v}\_{i, \tau}}{| E\_{\tau} (i) |} = \frac{\sum\_{s=1}^{t\_{\tau}^{end}} o\_{i,s,t\_{\tau}^{end}}}{| E\_{\tau} (i) |},
$$
as it is simply stated as a trivial definition; however, by applying the definition of $\tilde{v}\_{i, \tau}$, it does not seem to be correct, considering also the remarks of W1).
Considering your comment in W1, we have refined our approach for representing the total observation count to avoid potential multiple counting. Hence, we modified the definition of $\tilde{v}\_{i, \tau}$ as
$$
\tilde{v}\_{i, \tau} = \sum\_{s=1}^{t^{end}\_\tau} o_{i,s, t^{end}\_\tau}
$$
and we can compute $\hat{v}\_{i, \tau}$ directly as
$$
\hat{v}\_{i, \tau} = \frac{\tilde{v}\_{i, \tau}}{| E\_{\tau} (i) |} = \frac{\sum\_{s=1}^{t^{end}\_\tau} o_{i,s, t^{end}\_\tau}}{| E\_{\tau} (i) |}.
$$
By this representation, we avoid any possible double counting and simplify the step asked in Q1.
> Q2) Can the authors clarify how the epochs are defined?
Epochs are defined by immediate no-purchase decisions. When an immediate no-purchase decision occurs, it closes an epoch and starts a new one. In lines 164-165, we mention that epochs are based on immediate no-purchase outcomes; however, we agree that this expression is not clear enough. We will make this definition explicit in our revised manuscript.
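The epoch structure described above can be sketched in a few lines. This is a hypothetical illustration (the flag list and helper function are not from the paper): each round carries a flag marking whether an immediate no-purchase outcome occurred, and such an outcome closes the current epoch.

```python
# Sketch of the epoch definition: an epoch ends (and a new one begins)
# whenever an immediate no-purchase outcome is observed.

def split_into_epochs(no_purchase_flags):
    """Group round indices into epochs; each immediate no-purchase closes an epoch."""
    epochs, current = [], []
    for t, no_purchase in enumerate(no_purchase_flags):
        current.append(t)
        if no_purchase:          # immediate no-purchase observed at round t
            epochs.append(current)
            current = []
    if current:                  # trailing rounds without a closing no-purchase
        epochs.append(current)
    return epochs

# rounds 0..6, with immediate no-purchase outcomes at rounds 2 and 4
print(split_into_epochs([False, False, True, False, True, False, False]))
# -> [[0, 1, 2], [3, 4], [5, 6]]
```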
> Q3) Why have the authors not considered any of the algorithms reported in Section 2 in the experimental evaluation of the proposed algorithms?
Initially, we did not consider any algorithms from the literature in our experiments since our algorithm is the first one that considers delays and does not have a competitor in this regard. However, following your suggestion, we performed additional experiments comparing our algorithm (DEMBA) with MNL-Bandit (Agrawal et al.) and included them in the revised manuscript. We have also uploaded a file with these figures. We have three figures: starting with no delay and increasing the delay in the second and third figures. We observe that when there is no delay, the performance of MNL-Bandit and DEMBA are almost identical. However, as we increase the delay, the performance of MNL-Bandit deteriorates, clearly indicating that it fails to address delayed feedback effectively, whereas DEMBA continues to perform well.
Once again, we would like to convey our sincere thanks for your thorough reading and your valuable feedback. We also hope that our revised version addresses all your comments. | Summary: The authors studied the problem of learning with delayed feedback under the multinomial logit (MNL) model. Prior work in bandits with delayed feedback does not accommodate settings where multiple items can be offered simultaneously. The authors instead proposed two algorithms: DEMBA for the thresholded setting, where the seller discards delays longer than a certain threshold, and PA-DEMBA for when the threshold is infinity (when all delayed feedback is considered). Both algorithms achieve an $O(\sqrt{NT})$ regret bound, and the authors provided a matching lower bound up to a log term. Finally, the authors provided a set of numerical experiments to support their theoretical findings.
Strengths: - The studied problem of delayed feedback in bandits, while not new, is interesting in the case where the seller can offer a slate of items to the customer at every round.
- The paper is well-written and easy to follow.
- The theoretical regret guarantee is provided with a matching lower bound. These results and the numerical experiments provide a complete set of results for this setting.
- The proof-sketch provided offers good intuition to help understand the analysis and the result.
Weaknesses: - The analysis did not attempt to learn the unknown delay distribution.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Does the problem setup change dramatically when the rewards for each item in the assortment are not drawn i.i.d.? For example, if the position of the item is correlated with the reward such that items put first in the list yield a higher reward (or lower cost of browsing), does the current analysis still hold?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and for your positive evaluation of our work. We appreciate your recognition of the strengths of our paper and your constructive feedback.
> Does the problem setup change dramatically when the reward for each item in the assortment is not drawn i.i.d.? For example, if the position of the item is correlated with the reward such that items put first in the list yield higher reward (or lower cost of browsing), does the current analysis still hold?
In our current setting, we use the multinomial logit model, which assumes that the consumer choice behavior is solely affected by the subset of products being offered. We can extend this model by making the choice probabilities, and therefore the expected reward of a product, dependent on its position. We can define $\gamma_k$ as the visibility coefficient of the product in position $k$. We can also assume that $\gamma_k$ for each position is known to the learner. Indeed, $\gamma_k$ can be estimated from historical data. Then, the attraction parameter will be multiplied by the visibility parameter to calculate the expected reward, i.e., $r_i \frac{\gamma_{k(i)} v_i}{\sum_{j \in S} \gamma_{k(j)} v_j}$.
Our analysis holds for this setting if we make an additional assumption that the customer viewed the whole assortment. Without this assumption, we cannot calculate choice probability by $\frac{\gamma_{k(i)} v_i}{\sum_{j \in S} \gamma_{k(j)} v_j}$ and we would need a different model for this case. In scenarios where customers may not view the entire assortment, our current model would require significant modifications to accommodate partial visibility. While other approaches such as cascading bandits (e.g., Combes et al., Craswell et al., Kveton et al.) explicitly model sequential item examination, integrating such models into our framework would be an interesting direction for future research rather than a direct solution within our current setup. With our whole-assortment viewing assumption and by using an appropriate argmax oracle (e.g., the work of Abeliuk et al. provides an efficient algorithm), we can cover the position-dependent scenario within our framework.
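The position-dependent extension discussed above can be illustrated with a short snippet. This is an assumption-laden sketch, not the paper's code: it follows the quoted formula $r_i \gamma_{k(i)} v_i / \sum_{j \in S} \gamma_{k(j)} v_j$, and, like that formula, omits the no-purchase term; all variable names and values are illustrative.

```python
# Position-dependent MNL sketch: attraction v_i is scaled by a known
# visibility coefficient gamma_k for the display slot k(i) the item occupies.

def choice_probs(v, gamma):
    """P(choose item in slot k) = gamma[k]*v[k] / sum_j gamma[j]*v[j],
    with v and gamma aligned by display position."""
    weights = [g * vi for g, vi in zip(gamma, v)]
    total = sum(weights)
    return [w / total for w in weights]

def expected_revenue(r, v, gamma):
    """sum_i r_i * gamma_{k(i)} v_i / sum_j gamma_{k(j)} v_j."""
    probs = choice_probs(v, gamma)
    return sum(ri * p for ri, p in zip(r, probs))

v = [1.0, 2.0, 1.0]        # attraction parameters (illustrative)
gamma = [1.0, 0.5, 0.25]   # visibility decaying with position (illustrative)
r = [3.0, 2.0, 1.0]        # per-item revenues (illustrative)
print(expected_revenue(r, v, gamma))   # ~2.333 (= 21/9)
```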
We appreciate your insightful question and the opportunity to address it.
A. Abeliuk, G. Berbeglia, M. Cebrian, and P. Van Hentenryck. Assortment optimization under a multinomial logit model with position bias and social influence. In 4OR, 14:57-75, 2016.
R. Combes, S. Magureanu, A. Proutière, and C. Laroche. Learning to rank: Regret lower bounds and efficient algorithms. In Proc. of the 2015 ACM SIGMETRICS Int. Conf. on Measurement and Modeling of Computer Systems, 2015.
N. Craswell, O. Zoeter, M. Taylor, and B. Ramsey. An experimental comparison of click position-bias models. In Proc. of the Int. Conf. on Web Search and Data Mining. ACM, 2008.
B. Kveton, C. Szepesvári, Z. Wen, and A. Ashkan. Cascading bandits: Learning to rank in the cascade model. In Proc. of the 32nd Int. Conf. on Machine Learning, 2015. | Summary: This paper works under the MNL bandit setting where the environment feedback is delayed, motivated by real-world application scenarios like e-commerce platforms, while balancing exploitation and exploration. For the two proposed algorithms, the authors provide corresponding theoretical analysis, resulting in regret upper and lower bounds. Experiments are also conducted to show the effectiveness.
Pros:
- The paper is generally well-organized with clear narratives and derivations. Indeed, delayed feedback is an important characteristic for modern recommender systems where users can remain neutral before disclosing their final preference towards recommendations.
- From my personal perspective, the theoretical analysis pipeline is novel, and the results look decent, with both the regret upper bound and lower bound presented.
Cons and questions:
- One question from my side is that for the current theoretical analysis, the regret bound mainly depends on the expectation of the delay, without modeling the skewness of the delay distribution $f_d$. In this case, I am wondering what the regret bound would look like if we take the skewness/variance of the distribution into account. For example, with $f_d$ being Gaussian, how will the variance interact with the final regret upper bound/lower bound? Is it possible to achieve a tighter regret bound when the distribution is decaying super fast compared to that of a long-tail distribution?
- Although the contribution of this paper mainly lies in the theoretical analysis perspective, it would be better if the authors could include more algorithms for comparison in their experiments. Some MNL bandit baselines from the related works section would be good.
Strengths: Please see my comments above.
Weaknesses: Please see my comments above.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see my comments above.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see my comments above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First of all, we would like to extend our sincere thanks for your detailed review and for the insightful comments.
> One question from my side is that for the current theoretical analysis, the regret bound mainly depends on the expectation of the delay, without modeling the skewness of the delay distribution $f_d$. In this case, I am wondering what the regret bound would look like if we take the skewness/variance of the distribution into account. For example, with $f_d$ being Gaussian, how will the variance interact with the final regret upper bound/lower bound? Is it possible to achieve a tighter regret bound when the distribution is decaying super fast compared to that of a long-tail distribution?
Taking the skewness of the delay distribution into account can improve the regret upper bound, especially in the non-thresholded setting, although it will not change the asymptotic bound. However, this consideration is important in practice, as it can lead to tighter regret bounds under certain conditions. This would involve using Bernstein-type inequalities, assumptions on tail characteristics (e.g., sub-exponential tails), or a Gaussian assumption with known or unknown variance.
In our revised manuscript, we will add a remark to explain that in practice, variations in the delay distribution with certain assumptions can be favorable. Specifically, for distributions with fast decay rates, such as Gaussian distributions, we can expect a better regret performance compared to long-tail distributions.
Regarding the improvement mentioned in Remark 5.3 (reducing the $\mu$ factor in our upper bound), our current analysis does not admit improvement by incorporating the skewness of the delay distribution because currently we are using $\mu$ directly to bound one term in our analysis. We leave improving it as a future research direction.
> Although the contribution of this paper mainly lies in the theoretical analysis perspective, it would be better if the authors could include more algorithms for comparison in their experiments. Some MNL bandit baselines from the related works section would be good.
We agree and performed additional experiments comparing our algorithm (DEMBA) with MNL-Bandit (Agrawal et al.), and we will include them in our revised manuscript. We have also uploaded a file with these figures. We have three figures: we start with no delay and increase the delay in the second and third figures. We observe that when there is no delay, the performance of MNL-Bandit and DEMBA is almost identical. However, as we increase the amount of delay, the performance of MNL-Bandit deteriorates, clearly indicating that it fails to address delayed feedback.
Once again, we appreciate your insightful comments, which have significantly contributed to the enhancement of our manuscript. We hope these clarifications adequately address your concerns.
---
Rebuttal 2:
Title: Thank you for the response
Comment: I would like to thank the authors for their explanations. I will keep my current positive evaluation of your manuscript. | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewers’ thoughtful and comprehensive comments and feedback. As per the suggestions of the review team, we’ve performed an additional experiment and are sharing the results in this file.
Pdf: /pdf/200d37a3e05dd134af4d047a1907ad17e7ae5e04.pdf | NeurIPS_2024_submissions_huggingface | 2024 | Summary: The paper considers an online learning problem in the setting of discrete choice models with delayed feedback. The paper assumes a multinomial logit model where a decision maker has some (unknown) valuation v_i for item i. When presented with a menu S of choices, the agent chooses a single item from S such that the probability of selecting item i is proportional to v_i. In the online learning setup, at each time step t, the learner offers an assortment (menu) S_t and then the agent chooses one item from the assortment. The paper considers a setup with delayed feedback, where the feedback regarding which item was chosen is delayed (according to an unknown distribution).
The paper considers two settings - (i) with censorship - where feedback delayed by more than some fixed deadline \mu is censored, and (ii) without censorship - where feedback can be delayed indefinitely (but the expected delay is known to the learner). In both settings, the paper presents algorithms that obtain almost the best possible regret \tilde O(\sqrt{NT}), where N is the total number of items and T is the time horizon.
The algorithms are based on UCB and perform learning in epochs - i.e. offer the same assortment at all time steps throughout an epoch in order to reduce variance. An epoch is determined by times when the learner receives explicit negative feedback, i.e., the agent did not pick any item from the assortment.
Strengths: - The paper considers online learning of discrete choice models in a general setting with delayed feedback. The setup is broadly applicable.
Weaknesses: - I found the comparison with prior work a bit lacking. Since I am not directly familiar with works in this area, I would have appreciated more details about how the paper differs from prior work. In particular, Online learning with MNL choice models (but no delayed feedback) admits UCB based algorithms [Agrawal et al]. How much does the current work differ from that work? What additional technical complications are introduced by delayed feedback? Are they different from the challenges introduced by delayed feedback in other online learning settings (say classic MAB)?
Technical Quality: 3
Clarity: 3
Questions for Authors: See above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and constructive feedback on our paper. We appreciate your recognition of the strengths of our work and the detailed suggestions for improvement.
Following your suggestion, we will expand the related work section in our revised manuscript to include a more comprehensive discussion of how our contributions differ from other available online learning methods with MNL choice models and other delayed learning settings. We also performed additional experiments to compare our algorithm with the literature.
In particular, Agrawal et al. rely on unbiased estimations of the attraction values (i.e., $v_i$). However, in our setting with delayed feedback, we only observe biased samples, which requires a different approach for estimation. We develop a concentration inequality in Lemma 1 that accounts for not-yet-observed choices. Without considering not-yet-observed choices, the high-probability bounds would be invalid. Moreover, we performed additional experiments comparing our algorithm (DEMBA) with MNL-Bandit (Agrawal et al.), and we will include them in our revised manuscript. We have also uploaded a file with these figures. We have three figures: we start with no delay and increase the delay in the second and third figures. We observe that when there is no delay, the performance of MNL-Bandit and DEMBA is almost identical. However, as we increase the amount of delay, the performance of MNL-Bandit deteriorates, clearly indicating that it fails to address delayed feedback.
The challenges in our problem are, at a high level, similar to those in other online learning settings with delays: dealing with the uncertainty due to not-yet-observed rewards. For UCB-based solutions, correcting bias due to delays is a common approach in the delayed bandit literature with stochastic delays. However, existing solutions for other bandit algorithms are not applicable to our problem due to the unique nature of assortment feedback in discrete choice models. In particular, in MAB settings with delayed feedback, the primary concern is updating estimates of arm rewards. In our discrete choice model, we must maintain and update an assortment of items, creating a more complex interaction between choices and feedback. The expected reward structure is determined by an assortment of items, and we consider the bias at the item level. Delayed feedback affects both item value estimation and the assortment composition offered at each time step. Therefore, we developed a novel concentration inequality and used it to construct upper and lower confidence bounds on item attraction values, resulting in optimistic assortments.
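To make the bias issue concrete, here is a toy, purely illustrative sketch (our own simplification; the function name, the Hoeffding-style radius, and the pending-fraction inflation are all assumptions, not the paper's Lemma 1) of an optimistic estimate that treats delayed, not-yet-observed offers separately from observed ones:

```python
import math

def optimistic_attraction(chosen, offered, pending, t, delta=0.01):
    """Toy UCB-style optimistic estimate of an item's attraction value.

    chosen:  times the item was chosen (feedback already arrived)
    offered: times the item was offered in an assortment
    pending: offers whose delayed feedback has not yet arrived

    Counting pending offers as "not chosen" would bias the naive mean
    downward, so only resolved offers enter the mean, and the confidence
    radius is widened by the pending fraction (purely illustrative).
    """
    observed = offered - pending
    if observed <= 0:
        return float("inf")  # no resolved feedback yet: stay optimistic
    mean = chosen / observed
    radius = math.sqrt(2 * math.log(t / delta) / observed) + pending / offered
    return mean + radius
```

The inflation term `pending / offered` is schematic; the paper derives a specific concentration inequality tailored to assortment feedback for this purpose.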
Once again, we thank you for your valuable feedback and for helping us improve our paper. We hope these clarifications adequately address your concerns. | null | null | null | null | null | null |
DH-Fusion: Depth-Aware Hybrid Feature Fusion for Multimodal 3D Object Detection | Reject | Summary: This study reveals that modalities have varying impacts depending on depth, leading to the proposal of DH-Fusion. This method dynamically adjusts feature weights using depth encoding, improving multi-modal 3D object detection. Results on nuScenes show DH-Fusion outperforms prior methods.
Strengths: 1. This paper is well-presented. The structure is clear and easy to follow.
2. Comprehensive experiments on the nuScenes dataset are conducted to validate the effectiveness of the proposed DH-Fusion.
Weaknesses: 1. Lack of Novelty: The Depth Encoder in DH-Fusion is similar to the 3D Position Encoders in PETR (PETR: Position embedding transformation for multi-view 3d object detection). The Depth-Aware Global Feature Fusion (DGF) module and Depth-Aware Local Feature Fusion (DLF) module in DH-Fusion are analogous to the Hierarchical Scene Fusion (HSF) module and Instance-Guided Fusion (IGF) module in IS-Fusion (IS-Fusion: Instance-scene collaborative fusion for multimodal 3d object detection). In conclusion, the contribution of this work seems like "A+B," which is limited.
2. For the nuScenes test leaderboard, DH-Fusion achieved a Top 10 ranking only with 384x1056 image size and SwinTiny backbone.
Please provide results when using larger 900x1600 image size and ConvNeXtS backbone.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Please analyze the theoretical reasons for DH-Fusion's robustness advantage against various corruptions, as in nuScenes-C.
2. In Table 1, for experiments on the nuScenes dataset, it is necessary to include metrics like mATE, mASE, mAOE, mAVE, and mAAE, as done in other papers.
3. Do the authors plan to release the code and provide pre-trained models, especially nuScenes test leaderboard models, for further research?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude to you for the valuable comments and constructive feedback. Below, we address each question or comment in detail.
**Comment 1:** "Lack of Novelty: The Depth Encoder in DH-Fusion is similar to the 3D Position Encoders in PETR (PETR: Position embedding transformation for multi-view 3d object detection). The Depth-Aware Global Feature Fusion (DGF) module and Depth-Aware Local Feature Fusion (DLF) module in DH-Fusion are analogous to the Hierarchical Scene Fusion (HSF) module and Instance-Guided Fusion (IGF) module in IS-Fusion (IS-Fusion: Instance-scene collaborative fusion for multimodal 3d object detection). In conclusion, the contribution of this work seems like "A+B," which is limited."
**Response 1:** We disagree. In this work, we point out for the first time, with insightful analysis, that depth is an important factor to consider during feature fusion. This new finding makes our method fundamentally different from previous fusion methods. On one hand, our proposed depth encoder is used to adaptively adjust the weights of features from different modalities at different depths; it is conceptually different from the 3D Position Encoders in PETR, which are used to relate 2D image features with reference point positions for obtaining 3D features. On the other hand, our proposed DGF and DLF modules perform feature fusion in an adaptive way, i.e., the RGB/point cloud features are assigned different weights as depth varies; such a dynamic mechanism is novel and has not been used by IS-Fusion or previous fusion methods. Therefore, we believe our method is novel, and the provided insights are inspiring for the community.
**Comment 2:** "For the nuScenes test leaderboard, DH-Fusion achieved a Top 10 ranking only with 384x1056 image size and SwinTiny backbone. Please provide results when using larger 900x1600 image size and ConvNeXtS backbone."
**Response 2:** Thank you for your feedback. We provide a more comprehensive evaluation of the performance of DH-Fusion with different image encoders, including using larger 900x1600 image sizes and the ConvNeXtS backbone. We observe a further improvement while using a larger backbone and image size, indicating the scalability of our method.
**Table 1: Performance with different image encoders**
| Image Encoder | Resolution | NDS | mAP |
|---------------|------------------|------|------|
| ResNet18 | 256 × 704 | 73.3 | 69.8 |
| ResNet50 | 320 × 800 | 74.0 | 71.2 |
| SwinTiny | 384 × 1056 | 74.4 | 72.3 |
| ConvNeXtS | 900 × 1600 | 74.9 | 72.9 |
**Comment 3:** "Please analyze the theoretical reasons for DH-Fusion's robustness advantage against various corruptions, as in nuScenes-C."
**Response 3:** DH-Fusion’s robustness against various data corruptions in the nuScenes-C dataset is attributed to the combined effects of depth encoding and cross-attention. On one hand, depth encoding is helpful in the way of allowing the model to dynamically weigh features as depth varies. For example, on foggy days, those objects at a far distance are more invisible on RGB images than those at a near distance, and thus the RGB features should be down-weighted at a far distance. On the other hand, the cross-attention mechanism allows DH-Fusion to dynamically focus on the most relevant features and suppress those ineffective features caused by corruptions across modalities at global and local levels. In this way, it reduces the negative impact of corruptions to some extent.
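As a purely illustrative sketch of the depth-conditioned weighting mechanism described above (our own toy construction: the fixed sigmoid schedule, the `max_depth` value, and all names are assumptions, whereas the actual DGF/DLF modules learn this behavior via transformer encoders with depth encoding):

```python
import math

def fuse(rgb_feat, lidar_feat, depth, max_depth=60.0):
    """Blend RGB and LiDAR feature vectors with a depth-dependent gate.

    Here the gate is a fixed, monotone function of normalized depth; in a
    learned model its shape and direction would be trained, so that an
    unreliable modality (e.g., RGB at far range in fog) is down-weighted.
    """
    d = min(depth, max_depth) / max_depth              # normalize depth to [0, 1]
    w_rgb = 1.0 / (1.0 + math.exp(8.0 * (d - 0.5)))   # gate in (0, 1)
    return [w_rgb * r + (1.0 - w_rgb) * l
            for r, l in zip(rgb_feat, lidar_feat)]
```

Because the gate is a convex combination per feature channel, degrading one modality can be compensated by the other, which is the intuition behind the robustness argument.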
**Comment 4:** "In Table 1, for experiments on the nuScenes dataset, It's necessary to include metrics like mATE, mASE, mAOE, mAVE, and mAAE as done in other papers."
**Response 4:** Thank you for your valuable suggestion. We will provide the results of our method on the metrics mATE, mASE, mAOE, mAVE, and mAAE.
**Table 2: Results on nuScenes test set**
| Methods | NDS ↑ | mAP ↑ | mATE ↓ | mASE ↓ | mAOE ↓ | mAVE ↓ | mAAE ↓ |
|-----------------------|-------|-------|--------|--------|--------|--------|--------|
| DH-Fusion-light (Ours)| 74.2 | 70.9 | 26.1 | 24.3 | 32.4 | 17.8 | **12.2** |
| DH-Fusion-base (Ours) | 74.7 | 71.7 | 25.2 | 23.6 | 32.9 | 18.5 | 12.7 |
| DH-Fusion-large (Ours)| **75.4** | **72.8** | **24.7** | **23.2** | **32.1** | **17.7** | 12.5 |
**Table 3: Results on nuScenes validation set**
| Methods | NDS ↑ | mAP ↑ | mATE ↓ | mASE ↓ | mAOE ↓ | mAVE ↓ | mAAE ↓ |
|-----------------------|-------|-------|--------|--------|--------|--------|--------|
| DH-Fusion-light (Ours)| 73.3 | 69.8 | 27.2 | 25.0 | **26.4** | 17.9 | 18.3 |
| DH-Fusion-base (Ours) | 74.0 | 71.2 | 26.8 | 24.8 | 27.9 | 17.9 | 18.2 |
| DH-Fusion-large (Ours)| **74.4** | **72.3** | **26.3** | **24.7** | 26.5 | **17.8** | **18.2** |
**Comment 5:** "Do the authors plan to release the code and provide pre-trained, especially nuScenes test leaderboard models for further research?"
**Response 5:** We appreciate the reviewer's interest in our work and the potential for further research using our models. We plan to release our code and pre-trained models, including those used for the nuScenes test leaderboard, upon the acceptance of our paper.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed response. The completeness and rigorousness are enlarged with the new experiment results. My dominant concern is still the significance of the contribution, which is also expressed by other reviewers. The universality for 3D detection task and other datasets has not been properly verified, especially when the current performance promotion is not significant against the SOTA methods. I would like to keep my current recommendation.
---
Rebuttal 2:
Comment: Thank you for your continued feedback. We would like to highlight that our method balances both accuracy and efficiency. Specifically, our DH-Fusion-light is the first to achieve over 10 FPS on an RTX 3090 GPU, while maintaining comparable accuracy, thereby meeting the requirements for real-time applications. Additionally, while the latest methods we compared against are evaluated solely on the nuScenes dataset, we have further evaluated our methods on nuScenes-C, providing a more comprehensive evaluation. Furthermore, as reviewer qiTv also noted, the datasets we employed are sufficient to demonstrate the effectiveness of our approach. | Summary: This paper proposes a LiDAR-camera feature fusion method based on depth encoding for robust 3D object detection. Based on the observation that LiDAR and camera modality information should have dynamic relative importance depending on the distance of the object to be detected, the paper proposes a Depth-Aware Hybrid Feature Fusion (DH-Fusion) strategy, which consists of a Depth-Aware Global Feature Fusion (DGF) module and a Depth-Aware Local Feature Fusion (DLF) module. Experiments on the public nuScenes and nuScenes-C datasets demonstrate that the proposed method is robust to various kinds of corruptions and achieves SOTA performance on 3D object detection.
Strengths: 1. The idea of depth-aware multimodality feature fusion for 3D object detection is reasonable, especially for the detection of distant objects.
2. The ablation study clearly demonstrates the effectiveness of the proposed DGF&DLF module when using BEVFusion as baseline
3. The presentation is clear and the ability of the proposed method on the detection of distant object in Figure 6 is impressive
Weaknesses: 1. How about the algorithm's performance on small object detection? A small object could be a normal-sized object at far distance or a small-sized object at near distance. Is it possible that the proposed depth-aware module hurts the detection performance of small-sized objects at near distance? According to Figure 5, the LiDAR modality has relatively larger weights at near distance, but it is low-resolution, and thus not well suited to small object detection.
2. Compared with SOTA, the achieved performance improvement is not that significant. As shown in Table 1, the performance gap between the proposed method and IS-Fusion is small, and IS-Fusion even achieves slightly better mAP. It is not clear whether the proposed method can achieve the same performance improvement indicated in the ablation study when using IS-Fusion as the baseline.
3. In Figure 5(b), it would be good to add a color bar to indicate the magnitude corresponding to each color
Technical Quality: 2
Clarity: 3
Questions for Authors: The main concern is on the experiment verification of the proposed method, as listed in the weakness part. I may adjust my rating if such concerns are well addressed
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: There is no paragraph explaining the weakness of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude to you for the valuable comments and constructive feedback. Below, we address each question or comment in detail.
**Comment 1:** "How about the algorithm's performance on small object detection? small object could be normal-sized object at far distance or small-sized object at near distance, is it possible that the proposed depth-aware module hurts the detection performance of small-sized object at near distance? since according to Figure 5, LiDAR modality will have relatively larger weights at near distance, but it is in low resolution, so not good for small object detection."
**Response 1:** Thank you for raising this important point. To address your concerns, we consider cars as normal-sized objects, and pedestrians, motorcycles, and bicycles as small-sized objects. We conduct experiments to evaluate our method on normal-sized objects at far distance and small-sized objects at near distance. For these above small objects, our method outperforms the state-of-the-art method IS-Fusion, as well as our baseline BEVFusion, demonstrating our robustness for small object detection. It is true that LiDAR modality has larger weights at near distance, as it is in high resolution as shown in Fig. 1(b) of the original submission, so it does not hurt but in fact helps small-sized object detection at near distance.
**Table 1: Performance on small objects, including normal-sized objects at far distance (>30m) and small-sized objects at near distance (0-20m). The numbers are AP.**
| Methods | Car (>30m) | Pedestrian (0-20m) | Motorcycle (0-20m) | Bicycle (0-20m) |
|-----------------------|----------|------------|------------|---------|
| BEVFusion | 72.1 | 92.9 | 89.9 | 75.7 |
| IS-Fusion | 76.1 | 94.1 | 90.2 | 78.4 |
| DH-Fusion-large (Ours)| **77.2** | **94.2** | **91.5** | **78.6** |
**Comment 2:** "Compare with SOTA, the achieved performance improvement is not that significant. As shown in Table 1, the performance gap between the proposed method and IS-Fusion is small and IS-Fusion even achieves slightly better mAP, it is not clear whether the proposed method can achieve similar performance improvement as indicated in ablation study when using IS-Fusion as baseline."
**Response 2:** Thank you for your feedback. To address the concern, we conduct an experiment using IS-Fusion as the baseline, integrating our depth encoder into its IGF module, which allows it to adjust the weights of image features with depth during instance feature fusion. We note that our method still achieves improvements when applied to this stronger baseline, demonstrating the generalization ability of our approach across different baselines. We plan to dedicate more time to refining our experiments in future work to achieve more significant performance improvements.
**Table 2: Ablation studies using IS-Fusion as the baseline.**
| Methods | NDS | mAP |
|--------------|----------------------|--------------------|
| IS-Fusion | 73.6 | 72.5 |
| w/ DE | **74.1 (+0.5)** | **72.7 (+0.2)** |
**Comment 3:** "In Figure 5(b), it would be good to add a color bar to indicate the magnitude corresponding to each color."
**Response 3:** Thank you for your valuable suggestion. We will add a color bar to Figure 5(b) in the final version of the paper to indicate the magnitude corresponding to each color.
**Comment 4:** "There is no paragraph explaining the weakness of the proposed method."
**Response 4:** We apologize for the oversight. We condense the discussion of our method's limitations into the conclusion due to limited space, and do not dedicate a separate section to it. We will address this in the final version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their detailed response with additional experiments. Most of my concerns have been addressed, so I will increase my rating.
---
Rebuttal 2:
Comment: Dear Reviewer kR8p
Thanks for reviewing this work. Would you mind checking the authors' feedback to see whether it resolves your concerns, or whether you have further comments?
Best wishes
AC | Summary: This paper introduces a novel strategy for LiDAR-camera 3D object detection that emphasizes the importance of depth information in feature fusion processes. The authors argue that different modalities, such as LiDAR point clouds and RGB images, contribute variably at different depths, and this variation has been overlooked in previous works. The key contribution is the Depth-Aware Hybrid Feature Fusion (DH-Fusion) strategy that dynamically adjusts the weights of point cloud and image features based on depth encoding at both global and local levels. The DH-Fusion method surpasses previous state-of-the-art methods in terms of NDS on the nuScenes dataset and demonstrates robustness to various data corruptions. In general, the design is reasonable and performance is impressive.
Strengths: 1. The paper is well-structured, with a clear abstract, introduction, methodology, experiments, and conclusion sections that logically flow from one to the next.
2. The authors effectively communicate complex ideas through clear language and comprehensive illustrations, aiding the reader's understanding of the proposed method.
3. The motivation of design is clear and experiments are extensive.
4. The idea of depth encoding for dynamical fusion is interesting and reasonable.
5. The performance is very impressive and the robustness makes the method more applicable to challenging scene.
Weaknesses: The paper has no obvious weakness, except that the authors did not conduct experiments on other datasets.
But I think nuScenes is already large enough to demonstrate the general effectiveness.
Technical Quality: 3
Clarity: 3
Questions for Authors: I noticed this paper encodes depth information using cosine functions, but I haven't seen experiments validating the impact of cosine functions. Would there be a significant performance drop if distances were used directly instead of cosine functions?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: There is no discussion of limitation in main text, but a justification is given in Checklist: using an attention-based approach to interact with the two modalities makes the detection results sensitive to modality loss.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude to you for the valuable comments and constructive feedback. Below, we address each question in detail.
**Comment 1:** "I noticed this paper encodes depth information using cosine functions, but I haven't seen experiments validating the impact of cosine functions. Would there be a significant performance drop if distances were used directly instead of cosine functions?"
**Response 1:** We appreciate the reviewer's question regarding the use of cosine functions for encoding depth information. We conduct experiments using normalized depth directly as the depth encoding in our feature fusion module, without applying cosine functions. Our experimental results in the table below show a performance drop when using normalized depth directly. We believe that depth encoding benefits from the use of cosine functions to capture the periodicity and symmetry of the depth information relative to the ego vehicle. The cosine functions help in better representing the variations in depth, leading to improved model performance.
**Table 1: Ablation studies of cosine functions.**
| Methods | NDS | mAP |
|------------------------|-----------------|-----------------|
| Baseline + DGF | 72.4 | 69.4 |
| w/o cosine functions | 72.1 (-0.3) | 68.5 (-0.9) |
| Baseline + DLF | 72.7 | 69.3 |
| w/o cosine functions | 72.3 (-0.4) | 68.6 (-0.7) |
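For illustration, a multi-frequency cosine encoding in the spirit of transformer positional encodings might look like the following (our own sketch; the paper's exact frequencies, normalization, and dimensionality may differ):

```python
import math

def depth_encoding(depth, dim=8, max_depth=60.0):
    """Map a scalar depth to a dim-dimensional vector of cosines at
    geometrically spaced frequencies, instead of using the raw value.

    Multi-frequency encodings let a network distinguish both small and
    large depth differences, which a single normalized scalar cannot.
    """
    d = min(depth, max_depth) / max_depth      # normalize depth to [0, 1]
    return [math.cos(d * math.pi * (2 ** k)) for k in range(dim)]
```

The "w/o cosine functions" rows in Table 1 correspond to feeding the scalar `d` directly in place of such a vector.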
**Comment 2:** "There is no discussion of limitation in main text, but a justification is given in Checklist: using an attention-based approach to interact with the two modalities makes the detection results sensitive to modality loss."
**Response 2:** We apologize for the oversight. We condense the discussion of our method's limitations into the conclusion due to limited space, and do not dedicate a separate section to it. We will address this in the final version of the paper.
---
Rebuttal Comment 1.1:
Comment: After thoroughly reviewing the feedback from reviewers and the author's responses, I've noted that some reviewers advocate for further validation on additional datasets. However, I think that the conducted experiments on the nuScenes and nuScenes-C datasets provide enough evidence to substantiate the efficacy of the proposed method. The author has addressed my concerns, and as such, I maintain my original recommendation without alteration. | Summary: The paper introduces DH-Fusion, a novel Depth-Aware Hybrid Feature Fusion strategy for multimodal 3D object detection that leverages LiDAR and camera data. The key innovation lies in dynamically adjusting the weights of point cloud and RGB image features based on depth encoding at both global and local levels. The authors propose two modules: Depth-Aware Global Feature Fusion (DGF) and Depth-Aware Local Feature Fusion (DLF), which enhance feature integration and compensate for information loss during the transformation to Bird's-Eye-View (BEV) space. Experiments on the nuScenes dataset demonstrate that DH-Fusion surpasses state-of-the-art methods in terms of the nuScenes Detection Score (NDS) and is more robust to data corruptions, as evidenced by superior performance on the nuScenes-C dataset.
Strengths: 1. The paper proposes a novel feature fusion strategy that adaptively adjusts the weights of LiDAR point cloud and RGB image features based on depth
2. The introduction of depth encoding at both global and local levels allows for more nuanced and context-aware feature integration, enhancing the detector's ability to understand the scene's depth structure.
Weaknesses: 1. The authors only present results on the nuScenes dataset. The algorithms should also be evaluated on other prevailing public datasets such as KITTI.
2. The depth-aware fusion might be tailored to the specific characteristics of the training dataset, potentially leading to overfitting and reduced performance on diverse or unseen data.
3. While the paper includes ablation studies, a more extensive set of experiments that isolate the impact of different components of the system could provide deeper insights.
Technical Quality: 2
Clarity: 3
Questions for Authors: What is the computational complexity of the DH-Fusion model, and how does it compare with other state-of-the-art methods in terms of runtime and resource usage?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: 1. While the method shows strong performance on the nuScenes dataset, its generalizability to other datasets or varied real-world conditions might require further investigation.
2. The paper does not provide a detailed discussion on the computational efficiency, which is crucial for practical applications, especially in terms of processing time and resource usage.
3. The method assumes high-quality, synchronized data from LiDAR and camera sensors, which might not always be guaranteed in real-world scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude to you for the valuable comments and constructive feedback. Below, we address each question or comment in detail.
**Comment 1:** "What is the computational complexity of the DH-Fusion model, and how does it compare with other state-of-the-art methods in terms of runtime and resource usage?"
**Response 1:** Thank you for your question. The parameter counts of our DH-Fusion-light, DH-Fusion-base, and DH-Fusion-large are 40.38M, 53.05M, and 56.94M, respectively. The FLOPs of these models are 271.6G, 822.8G, and 1508.2G, respectively. The runtimes of these models are 72.46ms, 114.94ms, and 175.44ms on an RTX 3090 GPU, respectively. In Table 1 of the original submission, we compare our method with other SOTA methods in terms of FPS. Specifically, under the same configuration, our DH-Fusion-light runs faster than BEVFusion and achieves real-time inference speed; our DH-Fusion-base maintains inference speed comparable to FocalFormer3D; and our DH-Fusion-large runs 2x faster than IS-Fusion.
**Comment 2:** "The authors only present results on nuScenes dataset. The algorithms should be also evaluated on other prevailing public dataset like KITTI; While the method shows strong performance on the nuScenes dataset, its generalizability to other datasets or varied real-world conditions might require further investigation."
**Response 2:** In our original submission, we have actually presented results on two datasets: the nuScenes dataset and the nuScenes-C dataset. The detailed results can be found in Table 1 and Table 2. In particular, our experiments on the nuScenes-C dataset, which includes various realistic noise conditions, show that our method exhibits high robustness under diverse real-world corruption conditions and very good generalization ability. Since the KITTI dataset provides only stereo images instead of multi-view images, our method cannot be directly applied to it, and we believe that the nuScenes dataset, providing 700 different scene sequences for training and 300 scene sequences for validation and testing, is sufficiently large and diverse, serving as a standard benchmark in the field of 3D object detection. The methods we compare against [1, 2] typically conduct their experiments solely on this dataset as well.
[1] Liu, Z., Tang, H., Amini, A., Yang, X., Mao, H., Rus, D.L., Han, S.: Bevfusion: Multi-task multi-sensor fusion with unified bird’s-eye view representation. In: ICRA (2023)
[2] Yin, J., Shen, J., Chen, R., Li, W., Yang, R., Frossard, P., Wang, W.: Is-fusion: Instance-scene collaborative fusion for multimodal 3d object detection. In: CVPR (2024)
**Comment 3:** "The depth-aware fusion might be tailored to the specific characteristics of the training dataset, potentially leading to overfitting and reduced performance on diverse or unseen data."
**Response 3:** No, our method is not tailored to any dataset. Specifically, our depth-aware fusion method adaptively adjusts the weights of different modalities based on depth, which is inherently independent of the specific dataset used for training. We acknowledge that the performance of our method on the nuScenes-C dataset, where we achieve 68.67 NDS and 63.07 mAP, shows a decrease compared to the nuScenes dataset. The reduced performance on nuScenes-C can be attributed to the increased difficulty of this dataset, which includes various corruptions. Despite this, our method still outperforms other approaches, demonstrating its robustness.
**Comment 4:** "While the paper includes ablation studies, a more extensive set of experiments that isolate the impact of different components of the system could provide deeper insights."
**Response 4:** We appreciate the reviewer's suggestion for a more extensive set of experiments to isolate the impact of different components of the system. In fact, we have already provided a detailed discussion on the impact of various components on the model in the paper. Specifically, section 4.4 thoroughly examines the influence of different components, including the effect of DGF and DLF, and the effect of the depth encoding.
---
Rebuttal 2:
Comment: Dear Reviewer tCBR
Thanks for reviewing this work. Would you mind checking the authors' feedback to see whether it resolves your concerns, or whether you have further comments?
Best wishes
AC | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable comments and constructive suggestions, and are glad they appreciate that "The paper proposes a novel feature fusion strategy that adaptively adjusts the weights of LiDAR point cloud and RGB image features based on depth" (Reviewer tCBR), "The idea of depth encoding for dynamical fusion is interesting and reasonable" (Reviewer qiTv), "The idea of depth-aware multimodality feature fusion for 3D object detection is reasonable, especially for the detection of distant objects" (Reviewer kR8p), and "Comprehensive experiments on the nuScenes dataset are conducted to validate the effectiveness of the proposed DH-Fusion." (Reviewer kgkY).
Our paper presents a novel approach, Depth-Aware Hybrid Feature Fusion (DH-Fusion), for multi-modal 3D object detection. This method leverages depth encoding to adaptively adjust feature weights during fusion, which significantly enhances detection performance. We highlight the following key contributions of our work:
- To the best of our knowledge, we for the first time identify depth as a crucial factor in the fusion of LiDAR point cloud and RGB image features for 3D object detection. Our statistical and visualization analyses reveal that the role of image features varies with depth, emphasizing the need for depth-aware adjustments in feature fusion.
- We propose a depth-aware hybrid feature fusion strategy that dynamically adjusts feature weights at both global and local levels by integrating depth encoding. This strategy comprises the Depth-Aware Global Feature Fusion (DGF) module and the Depth-Aware Local Feature Fusion (DLF) module. The DGF module utilizes a global-fusion transformer encoder with depth encoding to adaptively adjust the weight of image BEV features, while the DLF module refines local instance features by utilizing the original instance features with a local-fusion transformer encoder with depth encoding. This approach ensures high-quality feature extraction and optimal utilization of multi-modal data across varying depths.
- Our method has been evaluated on the nuScenes dataset and the more challenging nuScenes-C dataset. The results demonstrate that DH-Fusion not only outperforms previous multi-modal methods but also maintains robustness against various types of data corruption. This highlights the effectiveness and reliability of our proposed method in real-world scenarios.
Considering the valuable feedback provided by all reviewers, we conduct additional experiments and provide detailed results in PDF:
- Table 1: Ablation studies of cosine functions.
- Table 2: Performance on small objects, including normal-sized objects at far distance (>30m) and small-sized objects at near distance (0-20m).
- Table 3: Ablation studies using IS-Fusion as the baseline.
- Table 4: Performance with different image encoders.
- Table 5: More detailed results on nuScenes test set.
- Table 6: More detailed results on nuScenes validation set.
We hope additional experiments address the concerns raised and further validate the effectiveness of our proposed method, which will be included in the supplementary materials.
Finally, we believe our method offers useful insights for feature fusion in the field of multi-modal 3D object detection. We hope our contributions will inspire further research and development in this area.
Pdf: /pdf/5e6a97a3a2ff584ea86046fe8545a775480bfa1c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DINTR: Tracking via Diffusion-based Interpolation | Accept (poster) | Summary: This paper applies the diffusion mechanism to the image interpolation process, enabling object tracking. Tracking with five object representations, such as bounding box, point, and text, is realized via diffusion-based interpolation. Benchmark experiments show promising results.
Strengths: 1. The diffusion mechanism applied in tracking tasks is novel, which may inspire others to apply generative methods to tracking.
2. Five object representations in tracking tasks are applied, which shows the application ability of the method.
3. Experiments are good and extensive, and results are promising.
Weaknesses: 1. The tracking process, including the training and inference process, needs to be explained more clearly.
2. Some results are still lagging behind the diffusion-based methods. For example, DiffPose in posetrack. DiffMOT in mot benchmark.
3. More ablation is needed. It lacks enough analysis of the diffusion scheme.
Technical Quality: 4
Clarity: 3
Questions for Authors: See weakness
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's feedback regarding our details and experimentation.
#### 1. Process Details
We encourage the reviewer to find our discussion about the Implementation Details with Reviewer **oiCF** and Section F in our Appendices. The training and inference processes are outlined in Algorithms F.4 and F.5. These details will be organized and included in the revision.
#### 2. Generalization vs Specialization
Our work focuses on a **comprehensive generalization** across five representations and seven tracking benchmarks, as highlighted in your Strengths section. This level of grand unification is unprecedented, surpassing not only existing diffusion-based methods (DiffPose, DiffMOT) but also methods in other video understanding tasks. That said, we included DiffPose and DiffMOT because of the similarity in approach; technically, DiffPose addresses the pose estimation task (not tracking, as ours does) and DiffMOT models motion in coordinate space.
#### 3. Ablation Study
We have significantly expanded our ablation studies, as detailed in the global response. This comprehensive analysis provides a deeper understanding of DINTR's behavior under various configurations. Key aspects of our expanded study include:
- A systematic evaluation of diffusion steps and their impact on image reconstruction quality and computational efficiency.
- A comparative analysis of different interpolation techniques, including our novel offset learning approach.
- An exploration of various temporal modeling strategies and their relative merits.
This thorough investigation enhances the robustness of our findings and provides a solid foundation for further advancements in the field.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. It addresses my concerns. I maintain the ratings as accept.
---
Rebuttal 2:
Comment: Thank you for reading our rebuttal and getting back to us with the positive rating.
We appreciate the valuable feedback you have provided and we will revise our paper accordingly in the final revision. | Summary: The paper introduces DINTR (Diffusion-based INterpolation Tracker), an object-tracking framework that uses diffusion models to perform tracking in the visual domain. It proposes a new "Tracking-by-Diffusion" paradigm that reformulates tracking based on visual iterative diffusion models. DINTR uses an interpolation approach instead of the standard denoising process, which is claimed to be faster and more stable. The method can handle multiple types of object representations (points, poses, bounding boxes, segments, text) in a unified manner.
Experiments show competitive or state-of-the-art performance across several tracking benchmarks.
Strengths: - DINTR can handle multiple types of object representations in a unified framework, which is flexible.
- The method achieves competitive or state-of-the-art results on several benchmarks across different tracking tasks.
Weaknesses: - While faster than standard diffusion approaches, the method may still be computationally expensive compared to some traditional tracking methods. A more detailed description of the model size and speed should be provided.
- The authors need to provide more experimental details. How is the training? What datasets are used during training? How are multiple tasks unified during training?
- The paper lacks an overall framework, making it difficult to understand.
Technical Quality: 4
Clarity: 3
Questions for Authors: Please refer to the weakness.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: NO.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and effort in reviewing our paper. We hope to address your concerns by directing your attention to relevant sections where these details were already discussed in our original submission.
#### 1. Model Size and Speed
We have included speed metrics in the rebuttal PDF for your reference. The model size is similar to LDM [13] and ADM [116] checkpoints as we initialize our weights from their public models. Please refer to our **Feasibility** discussion with Reviewer **ddR4**.
#### 2. Experimental Details
We provided comprehensive experimental details in Section F of Appendices. This includes Algorithm F.4, which outlines our online training for the Reconstruction process, and Algorithm F.5, which details our tracker's operation.
Our method diverges from traditional tracking approaches in its training and operation. ***We encourage viewing this work through a novel generative perspective, unlike traditional trackers*** (ddR4 - 5. Questions). Our network captures and models video content, allowing conditioned instances in different modalities to be extracted seamlessly from this modeling process. Key points include:
- No explicit object location training: Unlike conventional trackers, our approach doesn't require training on specific datasets or modalities to predict object locations.
- Frame reconstruction focus: The model learns to reconstruct actual frames of the *testing video* through an autoregressive process (essentially, next frame prediction).
- Flexible training options:
- Online fine-tuning: The model can be adjusted as a new frame of the *testing video* is received.
- Offline training: Similar to **offline** tracking methods like SUSHI [B], our model can be trained to capture the complete visual distribution of a video.
- Moreover, our model can also be distilled to operate single-step diffusion, as mentioned in our global response.
- Generalization to different modalities (point, pose, box, segment, and text)
Unification of multiple tasks: We utilize a unified diffusion approach similar to ControlNet [C]. Specific operations are used for each representation, such as a Gaussian kernel for points, a masking operation for bounding boxes and segments, and a word embedding model for text. The extracted representation is then passed to a unified attention layer to compute feature correspondence. We encourage the reviewer to refer to the DIFT [87] or ControlNet [C] implementations for this operation.
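To make the per-representation conditioning concrete, below is a minimal NumPy sketch of encoding a point indication as a Gaussian kernel heatmap, one of the per-representation operations mentioned above. The function name and parameters are illustrative assumptions, not from the paper.

```python
import numpy as np

def point_heatmap(h, w, cy, cx, sigma=2.0):
    """Encode a point indication as a Gaussian kernel heatmap centered
    at pixel (cy, cx); the peak value is 1 at the indicated pixel."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

# Example: a 16x16 heatmap peaked at row 5, column 9.
hm = point_heatmap(16, 16, 5, 9)
```

In an actual pipeline, such a heatmap (or the mask/text embedding for other representations) would feed the unified attention layer that computes feature correspondence.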
#### 3. Overall Framework
We illustrated our overall autoregressive framework in Figure B.4 in our original submission, which builds upon the conditional Diffusion process shown in Figure B.3. For your convenience, we have included these figures in the rebuttal PDF with detailed captions that describe the process thoroughly.
#### 4. Limitations
We have addressed the limitations of our approach in the paper, as noted by Reviewers **ddR4** and **kgMr**.
[B] Cetintas, Orcun, Guillem Brasó, and Laura Leal-Taixé. Unifying short and long-term tracking with graph hierarchies. In CVPR 2023.
[C] Zhang, Lvmin, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In CVPR 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses. It addressed most of my concerns. I will raise the rating to accept. I hope the authors will improve the structure and clarity of the paper in the final version.
---
Reply to Comment 1.1.1:
Comment: Thank you for reading our rebuttal and getting back to us with the positive rating! We appreciate the valuable feedback you have provided and we will revise our paper accordingly in the final revision. | Summary: The paper "DINTR: Tracking via Diffusion-based Interpolation" introduces a novel approach for object tracking using diffusion models. The proposed methodology, Diffusion-based INterpolation TrackeR (DINTR), leverages diffusion mechanics to model temporal correspondences and reconstruct actual frames in video sequences. The authors claim that their interpolation mechanism offers a more interpretable, stable, and faster approach tailored specifically for the object tracking task, outperforming existing methods on several benchmarks.
Strengths: 1. The use of diffusion models for object tracking is a novel idea that has the potential to advance the field. The authors' proposal to replace the extensive mapping to a Gaussian noise domain with a more efficient interpolation process is compelling.
2. DINTR supports multiple types of indication representations, including points, bounding boxes, segments, and textual prompts, making it versatile for different tracking tasks.
3. The experimental results show that DINTR achieves superior performance on seven benchmarks across five different indication types. The method's ability to handle both single-target and multiple-target tracking tasks is impressive.
4. The paper provides a thorough explanation of the proposed interpolation mechanism and its advantages over existing diffusion-based approaches. The inclusion of algorithm descriptions and detailed equations enhances the reproducibility of the work.
5. The authors benchmark their method against several state-of-the-art tracking methods, demonstrating the superiority of DINTR in various scenarios. The use of multiple metrics (e.g., MOTA, IDF1, HOTA) provides a comprehensive evaluation of the method's performance.
Weaknesses: 1. In the Ablation Study (Section 5.3), the impact of different configurations is briefly mentioned, but detailed analysis and discussions on parameter sensitivity are lacking.
Recommendation: Conduct comprehensive ablation studies, systematically varying key parameters such as the number of diffusion steps, noise levels, and interpolation techniques. Discuss how changes in these parameters affect the overall performance and stability of DINTR.
2. While the authors claim near real-time performance, the actual feasibility of deploying DINTR in real-time applications is not thoroughly explored. More experiments focusing on real-time constraints and efficiency improvements could strengthen the paper.
3. The paper does not mention the real-time image processing speed (i.e., frame rate) of the proposed tracking model, which is crucial for practical applications of tracking models.
4. While the authors connect multiple tracking tasks using diffusion models, there is a lack of comprehensive comparison with the latest state-of-the-art models.
Technical Quality: 3
Clarity: 3
Questions for Authors: In Section 5.2, the model is described as being fine-tuned online. Is this a fair comparison with models that are trained offline?
The paper mentions the use of 4 NVIDIA Tesla A100 GPUs for training, which may not be accessible to all researchers. Discussing the computational requirements and potential optimizations for less powerful hardware would be useful.
Include more qualitative results, such as side-by-side comparisons of tracked objects using DINTR and other methods. Adding error analysis tables to show where and why DINTR performs better or worse would be useful.
Include a table that explicitly states the experimental setup for each method being compared. This should cover the datasets used, hardware specifications, and evaluation metrics.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations have been briefly addressed in the paper
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's feedback regarding our ablation study, practicality, and comparison.
#### 1. Ablation Study
We have significantly expanded our ablation studies, as detailed in the global response. This comprehensive analysis provides a deeper understanding of DINTR's behavior under various configurations. Key aspects of our expanded study include:
- A systematic evaluation of diffusion steps and their impact on image reconstruction quality and computational efficiency.
- A comparative analysis of different interpolation techniques, including our novel offset learning approach.
- An exploration of various temporal modeling strategies and their relative merits.
This thorough investigation enhances the robustness of our findings and provides a solid foundation for further advancements in the field.
#### 2. Feasibility
We have expanded our analysis to include Table I (subtables C and D) and Table II, which provide comprehensive data on processing speed and FPS for DINTR under various configurations. Specifically, our model can be flexibly trained offline (similar to Tune-A-Video [114]) for known video lengths or applied online for variable-length videos (as detailed in Implementation Details in the main paper). We have added a new scenario in Table I.C and I.D where our offline-trained model is distilled to a single-step inferencing diffusion model (T = 1).
Please note that, **offline trackers**, such as SUSHI [B], continue to play a vital role in scenarios where comprehensive analysis (**multimodality** in our case) is needed. Additionally, as demonstrated in our Supplementary video, our interpolation process is adaptable to any framerate (e.g., sampled 10x skip frame in the video), showcasing the flexibility of the diffusion process.
#### 3. Further Practical Applications
While real-time deployment is achievable as discussed above, we emphasize that DINTR's primary contribution lies in its robust theoretical framework for **autoregressive video modeling**, focusing on flexible instance-based understanding. This method offers significant potential **beyond just tracking**, opening new avenues for video applications based on these aspects:
- Generative point and pose regression
- Generative bounding box and segmentation prediction
- Generative textual referring
Additionally, in Future Work, we elaborated on potential extensions of our framework to other instance-based understanding tasks, including visual content manipulation, visual motion modeling, and temporal displacement analysis. DINTR is groundbreaking in its ability to bridge the gap between generative methods and fine-grained instance-based understanding tasks.
#### 4. Comprehensive Comparison
We would like to emphasize the comprehensive nature of our comparisons in the main paper. Table 1 provides a feature and modality comparison with State-of-the-Art models across all paradigm categories, including tracking-by-regression, -detection, -segmentation, -attention, -unification.
For numerical comparisons, we included a wide range of approaches, extending ***beyond traditional tracking methods*** to encompass recent advancements in related fields, including PoseTrack21: FAMI-Pose, DiffPose (*Pose Estimation*, not tracking - CVPR 2023); LaSOT: methods with and without textual prompt input; MOT: diffusion-based approaches (AAAI 2023 and CVPR 2024) and unification approaches. This comprehensive comparison allows for a thorough evaluation relative to the latest SOTA across various tasks and methodologies.
We welcome the reviewer's suggestions for additional recent methods to include in our revision, further enhancing the depth of our comparative analysis.
#### 5. Questions:
5.1. Fairness of Comparison: Our fine-tuning approach is fair to compare against offline-trained models, and may even put our model at a disadvantage. Unlike existing methods that **explicitly learn to predict object location**, our model learns to reconstruct or interpolate real frames **without learning to predict direct location from training data**. Note that this work should be seen from a novel generative perspective, where an apples-to-apples comparison with traditional methods is not always appropriate.
5.2. Computational Requirements: The computational resources mentioned in our paper represent the hardware available to us for conducting this research. The minimum requirement for running our model is a GPU with at least 10GB of VRAM.
5.3. As we always strive to improve our paper quality, we will incorporate qualitative comparisons to strengthen the comparative analysis in the revision.
We believe these clarifications and additions will provide a more comprehensive understanding of our model's performance and requirements relative to existing approaches. We will include these clarifications to the revision.
[B] Cetintas, Orcun, Guillem Brasó, and Laura Leal-Taixé. Unifying short and long-term tracking with graph hierarchies. In CVPR 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, which has addressed most of my concerns. I hope that the contents of the rebuttal can be incorporated into the paper. I will maintain my original score.
---
Reply to Comment 1.1.1:
Comment: Thank you for reading our rebuttal and getting back to us. We appreciate the valuable feedback you have provided and we will revise our paper accordingly in the final revision. | null | null | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewers' insightful comments and suggestions. The feedback highlights our paper's strengths, including its novel generative approach, the method's impressive versatility, thorough explanation for reproducibility, and comprehensive evaluation. Reviewers **ddR4** and **kgMr** lean towards acceptance (Weak Accept and Accept, respectively) due to the novelty and extensive experimentation. Reviewer **9rR3** assigned a Borderline Reject rating, primarily due to a perceived lack of specific details in our original submission. We will first address the common key point (KP) about additional Ablation Study, followed by individual responses to specific comments.
### (KP) Additional Ablation Studies:
We appreciate the reviewers' feedback on our ablation study. In response, we have conducted more comprehensive ablation studies, as presented in Tables I and II of the rebuttal file:
#### 1. Diffusion Steps
We systematically varied the number of diffusion steps (1, 50, 100, 150, 200, 250) and analyzed their impact on performance and efficiency. Results show that with a timestep bound T = 250 in the reconstruction process, we can reconstruct an image extremely close to the original (per-pixel MSE of 0.04). In Table I.C and I.D, we added a new scenario where our offline-trained model is distilled to a single-step inferencing diffusion model (T = 1).
#### 2. Noise Scheduler
We maintained a linear noise scheduler across all experiments, as it is the default in all available implementations and directly dependent on the number of diffusion steps above.
#### 3. Interpolation Techniques
We compared four interpolation methods: linear (2a), two learning methods (2b & 2c), and our proposed offset technique (DINTR). Results demonstrate that our offset learning approach, which uses two anchor latents to deterministically guide the start and destination points, yields the best performance. This method provides superior control over the interpolation process, resulting in more accurate and visually coherent output. The performance difference between methods 2b and 2c, which use a single anchor at either the start or destination point respectively, is minimal. However, we observed slightly higher effectiveness when controlling the destination point (2c) compared to the starting point (2b), suggesting that end-point guidance has a marginally stronger impact on overall interpolation quality.
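For reference, the linear baseline (2a) can be sketched in a few lines of NumPy: a straight path between two anchor latents. This is only the baseline, not the paper's offset learning method, which learns corrections around such a path while keeping both endpoints anchored; all names here are illustrative.

```python
import numpy as np

def lerp_latents(z_start, z_dest, num_steps):
    """Baseline (2a): plain linear interpolation between two anchor
    latents; intermediate latents lie on the straight path between them."""
    alphas = np.linspace(0.0, 1.0, num_steps)
    return [(1.0 - a) * z_start + a * z_dest for a in alphas]

# Example: 5 interpolated latents between two 2x2 anchor latents.
path = lerp_latents(np.zeros((2, 2)), np.ones((2, 2)), 5)
```

The offset technique keeps `z_start` and `z_dest` as deterministic anchors, which is why it provides tighter control over both the start and destination points than single-anchor variants (2b, 2c).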
#### 4. Temporal Modeling
We evaluated three additional diffusion-based temporal modeling approaches: Pseudo-noise Latents (i), Inflated Self-Attention (ii), Semi-online Processing (iii). Their formulation, relative strengths, and weaknesses are discussed as follows:
**(i) Pseudo-noise Latents**
The real image $\\mathbf{I}\_{t}$ itself does not come from the training distribution of the U-Net $\\epsilon\_\\theta$. DIFT[87] proposed a straightforward approximation: sampled noise respective to time step $k$ is directly ***added*** to the real image latent $\\mathbf{z}\_{0}$. Without temporal modeling, this process approximately moves the image into the noise distribution that the U-Net was trained to reconstruct, without fine-tuning, formally presented as follows:
$$
\\tilde{\\mathbf{z}}\_{k} = \\epsilon\_\\theta(\\mathbf{z}\_{0} + \\epsilon, k), \\text{where } \\epsilon \\sim \\mathcal{N}(0, 1)
$$
It enables extracting latent features even though the real image does not match the training distribution. However, this approach can only partially bridge the distribution shift. As a result, it yields the worst performance overall.
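The DIFT-style approximation above can be sketched as follows, using a stand-in denoiser in place of a real U-Net; everything here is illustrative (the function names and the dummy denoiser are assumptions, not the paper's implementation).

```python
import numpy as np

def pseudo_noise_features(eps_theta, z0, k, rng):
    """DIFT-style approximation: add sampled Gaussian noise to the clean
    latent z0 and query the denoiser at timestep k, so latent features can
    be extracted even though z0 is outside the training distribution."""
    eps = rng.standard_normal(z0.shape)  # eps ~ N(0, 1)
    return eps_theta(z0 + eps, k)

# Stand-in denoiser for illustration only (a pretrained U-Net goes here).
def dummy_eps_theta(z, k):
    return z / (1.0 + k)

rng = np.random.default_rng(0)
z0 = rng.standard_normal((4, 8))
feats = pseudo_noise_features(dummy_eps_theta, z0, k=50, rng=rng)
```

Because the noise is simply added rather than obtained by inverting the diffusion process, the resulting latents only approximate the distribution the U-Net was trained on, which matches the performance gap reported above.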
**(ii) Inflated Self-Attention**
Instead of the one-shot fine-tuning strategy, another approach can approximate this goal. To maintain the temporal coherence, VDMs [A] proposed to further extend the spatial 2D convolution layers and self-attention to the spatiotemporal domain. Specifically, the ***inflated*** self-attention is derived as:
$$
\\text{from } Attn \\Big(\\epsilon\_\\theta(\\mathbf{z}\_k, k), \\epsilon\_\\theta(\\mathbf{z}\_k, k)\\Big) \\text{ to } Attn \\Big(\\epsilon\_\\theta(\\mathbf{z}\_k, k), \\epsilon\_\\theta\\big([\\mathbf{z}\_k \\| \\mathbf{x}\_k], k\\big)\\Big)
$$
where $[\\cdot \\| \\cdot]$ is the concatenation operation, and the attention parameters only need to be reshaped **without fine-tuning pre-trained weights**. This solution is feasible for generating longer videos due to its flexibility.
However, similar to (i), the actual distribution is not well captured, resulting in lower performance.
Converting to this approach from our DINTR base leads to a 5\%-8\% performance drop, as shown in Table I. This decrease is anticipated, as the target distributions cannot be fully incorporated into the reconstruction process.
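The inflation in (ii) amounts to widening the key/value set of self-attention with extra frames' features while leaving the parameters untouched. Below is a minimal single-head NumPy sketch of that concatenation (names and shapes are illustrative assumptions, not the VDM implementation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def inflated_attention(z_k, extra_frames):
    """Self-attention whose keys/values are the concatenation
    [z_k || x_k]: only tensor shapes change, so pretrained attention
    parameters can be reused without fine-tuning."""
    kv = np.concatenate([z_k] + extra_frames, axis=0)
    scores = z_k @ kv.T / np.sqrt(z_k.shape[-1])
    return softmax(scores) @ kv

rng = np.random.default_rng(1)
z = rng.standard_normal((3, 4))   # current-frame latent features
x = rng.standard_normal((3, 4))   # extra-frame latent features
out = inflated_attention(z, [x])  # same shape as z
```

With an empty `extra_frames` list this reduces to plain self-attention, which is exactly why the inflation is a drop-in reshaping rather than a retrained module.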
**(iii) Semi-online Processing**
In addition to frame-by-frame operation, we extend (ii) to a clip-by-clip paradigm. Formally, given a video clip $v\_t \in \\mathbb{R}^{I \times H \times W \times 3}$, where $I$ is the ***fixed*** clip length (e.g. $I$ = 16), we pass it into the conditioned diffusion model. This semi-online approach extracts multiple frame features $\\mathbf{z}^{v\_t}\_k$ via the U-Net $\\epsilon\_\\theta$. Here, a sparse causal attention computes matrices between frame $\\mathbf{z}^{v\_t}\_k$ and two previous frames $\\mathbf{z}^{v\_0}\_k$ and $\\mathbf{z}^{v\_{t-1}}\_k$ as:
$$
\\text{from } Attn\\Big(\\epsilon\_\\theta(\\mathbf{z}^{v\_t}\_k, k), \\epsilon\_\\theta(\\mathbf{z}^{v\_t}\_k, k)\\Big) \\text{ to } Attn\\Big(\\epsilon\_\\theta(\\mathbf{z}^{v\_t}\_k, k), \\epsilon\_\\theta\\big([\\mathbf{z}^{v\_0}\_k \\| \\mathbf{z}^{v\_{t-1}}\_k], k\\big)\\Big)
$$
The outputs constitute $I$ trajectory predictions across the $I$ frames of the clip. This approach achieves mediocre performance, better than (i) but lower than (ii) because of the feature discrepancy between batches.
[A] Ho, Jonathan, et al. Video diffusion models. Advances in Neural Information Processing Systems 35, 2022.
Pdf: /pdf/b64cb9affacb532a0b95a78b20127d5e14cb2906.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DHA: Learning Decoupled-Head Attention from Transformer Checkpoints via Adaptive Heads Fusion | Accept (poster) | Summary: The paper proposes a novel method to convert the Transformer attention mechanism from multi-head attention (MHA) to the proposed decoupled-head attention (DHA). DHA employs fewer unique KV heads than MHA or GQA by representing the heads as a linear combination of unique heads within the group, where each group contains similar heads. This conversion process is performed at the parameter level, so basically any MHA model can be converted to a DHA model. After conversion, continued pre-training helps mitigate performance drops. The core novelty of the proposed conversion mechanism comes from its head fusion strategy: first, it searches for similar head groups, optimizes the fusion weights to obtain many-to-one mapping, and then allocates a different number of heads to different layers within the resource budget. Experiments show that DHA requires considerably fewer training resources while achieving lower LM loss compared to GQA. Furthermore, DHA is beneficial for reducing the KV cache size.
Strengths: * The paper identifies significant redundancies in the attention KV heads and proposes a practical solution to address the problem. Empirical evidence for the motivation is also provided.
* The learning-based fusion optimization and adaptive budget allocation offer a systematic approach to finding optimal mappings, adding non-trivial value to the proposed method.
* Related works are sufficiently addressed.
Weaknesses: * It is unclear whether the paper uses inter-layer grouping of heads or only intra-layer grouping. The introduction and Figure 1 suggest that KV heads from different layers can also be merged, but this does not seem to be reflected in the method and the models used in the experiments.
* If the grouping is only performed in an intra-layer manner, the paper should be revised accordingly for clarity.
* If the grouping is performed in an inter-layer manner, the optimization process would become much more complex. Also, during inference, each layer would need to call a large number of parameters (essentially all weights of the corresponding group across layers), which would slow down the inference due to the memory-bound nature of LLMs.
* If the grouping is performed in an inter-layer manner, a visualization similar to Figure 2 across layers could strengthen the claim.
* It is assumed that "GQA" in all experiments refers to the “MHA -> GQA” conversion using simple mean pooling. Is this a correct understanding? Is it a common approach to convert MHA to GQA instead of training a GQA model from scratch? Can we consider the uniform division of heads with mean pooling for GQA construction a solid and strong baseline? For example, using CKA to identify similar heads before mean pooling could improve GQA’s performance.
* It is difficult to say that DHA shows "much less training overhead" because it starts from MHA (line 294), especially compared to the models that begin with GQA from scratch.
Technical Quality: 3
Clarity: 2
Questions for Authors: * Although the paper introduces the method as performing MHA to DHA conversion at the parameter level (checkpoint-level), the Lagrangian min-max training requires LM loss computation for training. This means the fusion step in the conversion process requires task-oriented training using real data. Is this correct?
* How long does it take for the searching process? The paper mentions 100 search steps (line 197) for each layer.
* Providing a demonstration of how merging weights change after each process step would help in understanding the method better.
* Figure 2 visualizes the weight similarity instead of CKA, which makes sense because similar weights would generate similar outputs, and the method merges similar parameters. Then, is CKA-based analysis necessary (also there is no CKA visualization)?
* (minor) Typo: Adaptvie --> Adaptive in the caption of Table 2.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The paper adequately addresses the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer wgtQ
Thank you for your thorough review and positive comments on our work.
**Q1: Inter-layer grouping of heads or only intra-layer grouping?**
**R1:** **Only intra-layer grouping and fusion** is conducted in DHA. Figure 1 is meant to illustrate the *decoupled* heads, where the number of key and value heads can differ among layers. We apologize for any misunderstanding and will improve Figure 1 and the introduction for better clarity.
The DHA method employs parameter fusion within each layer for the following reasons:
- **Higher redundancy of heads within layer for fusion**. The heads within a layer exhibit high similarity and redundancy, which provides a good starting point for parameter fusion.
- **More complex optimization for inter-layer fusion**. The optimization process between layers is very complex and, as you mentioned, requires memory operations for cross-layer calls, which inherently increases the inference cost.
- **Promising future work by introducing inter-layer fusion**. This paper represents an early exploration of applying parameter fusion methods within model parameters. The inter-layer fusion approach you suggested is indeed a valuable direction for future exploration.
**Q2: GQA method's initialization in Experiments.**
**R2:** It's a common and effective approach to convert MHA to GQA using mean pooling instead of training from scratch. The authors of GQA (Ainslie et al., 2023) tested several methods for initializing GQA and found that simple mean pooling from MHA works best. Indeed, training GQA from scratch would cost a budget of trillions of tokens to match the performance of MHA, which is inefficient and costly.
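The mean-pooling conversion described by Ainslie et al. (2023) can be sketched in a few lines: uniformly partition the per-head K (or V) projection matrices of an MHA checkpoint and average each group into one shared head. This is a simplified illustration with assumed shapes, not the exact GQA code.

```python
import numpy as np

def mean_pool_kv_heads(kv_heads, num_groups):
    """GQA-style initialization from an MHA checkpoint: uniformly
    partition the per-head K (or V) projections into num_groups groups
    and mean-pool each group into one shared head.
    kv_heads: array of shape (num_heads, d_model, d_head)."""
    num_heads = kv_heads.shape[0]
    assert num_heads % num_groups == 0, "heads must divide evenly"
    grouped = kv_heads.reshape(num_groups, num_heads // num_groups,
                               *kv_heads.shape[1:])
    return grouped.mean(axis=1)  # (num_groups, d_model, d_head)
```

DHA replaces this fixed uniform grouping and averaging with learned grouping and learned fusion weights, which is what preserves more information at the same budget.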
In response to the new baseline issue, we provided the experimental result as follows:
- **CKA-Grouping, then mean pooling**. We considered grouping the heads with high similarity using CKA, then performing GQA's mean pooling initialization as a stronger baseline. However, this approach only achieved performance comparable to the original implementation in the GQA (Ainslie et al., 2023) paper. See the table below:
| Method | DHA-7B-25\% (5B) | GQA-7B-25\% (5B) | GQA(CKA-Grouping)-7B-25\% (5B) |
|----------------|-----------------|-----------------|-------------------------|
| Avg ACC | 62.4 | 60.3 | 60.4 |
| PPL | 7.29 | 7.54 | 7.51 |
- We believe the reason for this is that the head grouping learned by DHA is based on the fusible nature between heads, which cannot be completely equated with CKA similarity. More importantly, DHA not only groups heads based on similarity but also learns the fusion parameters. This allows it to eliminate the influence of redundant parameters and retain more important information during the initialization process, which is not possible with mean initialization.
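For readers unfamiliar with the similarity measure used for grouping, linear CKA between two matrices (e.g., flattened head weights or head outputs, rows as examples) can be computed as below. This is the standard linear CKA formula, offered only as a reference sketch.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two (n_examples, n_features) matrices.
    Invariant to orthogonal transforms and isotropic scaling;
    values lie in [0, 1], with 1 meaning maximal similarity."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, 'fro') ** 2
    den = np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')
    return num / den
```

Pairwise CKA scores over heads give the similarity matrix used for grouping; as noted above, high CKA similarity alone does not guarantee that two heads fuse well.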
**Q3: DHA's Training overhead compared with GQA.**
**R3:** As mentioned in our response to Question 2, both DHA and GQA are constructed based on MHA. The difference is that DHA employs a training method, whereas GQA uses pooling.
- **Construction based on MHA**. DHA achieved comparable performance to GQA using only 0.2B tokens, whereas GQA requires 12 times the amount of training data to reach the same level of loss as DHA (see line 253). This is because DHA effectively retains information through its fusion method.
- **Training from scratch**. We supplement the experiment in the setting of training from scratch, see the [Anonymous Figure](https://mjj.today/i/jqaTqd). DHA demonstrates a faster training speed than GQA. This is sufficient to prove that the DHA architecture is more efficient in training.
**Q4: DHA's conversion process requires task-oriented training using real data?**
**R4:** You are correct that DHA requires data to learn the fusible relationships between heads, but it needs only a small portion (0.25\%) of the total training data. DHA is primarily used in the pre-training phase of very large models and can save a considerable amount of training cost compared to the widely used GQA.
**Q5: Time in DHA's Search Step.**
**R5:** Using the experimental setup described in the paper, the search process for the LLaMA2 model requires 42 minutes.
**Q6: Providing a demonstration of how merging weights change.**
**R6:** Thank you for your suggestions. You can refer to Figure 3(a), where we show the weight variation diagram. In the fusion process of heads 0-3, head 0 initially constitutes 100\% as the starting head of the MHA. As the fusion process progresses, the parameters of the important heads increase, and the proportions of all heads become more balanced. This indicates that the algorithm attempts to retain information from different heads by balancing the parameter proportions of each head. This process results in a slight, but not significant, increase in loss. We will include more relevant information in the final version's appendix.
**Q7: Visualizing weight similarity with CKA.**
**R7:** We apologize for any misunderstanding. The parameter similarities defined in Figures 2, 3b, and 9 are calculated using CKA, as noted in lines 93-94 and Appendix B.1. Based on this consistent measurement method, we can clearly see that DHA is inspired by similarity redundancy and ultimately achieves a similar effect through merging. We will provide more detailed explanations in the final version.
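For reference, linear CKA between two parameter matrices can be computed in a few lines. The sketch below uses random matrices as stand-ins for the actual head parameters; it only illustrates the metric itself, not the paper's exact measurement pipeline.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between feature matrices.

    X: (n, d1), Y: (n, d2). Returns a value in [0, 1]; 1 means the two
    representations are identical up to an orthogonal transform and scaling.
    """
    X = X - X.mean(axis=0, keepdims=True)  # center each column
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 16))  # stand-in for one head's parameters
B = rng.standard_normal((64, 16))  # stand-in for another head's parameters
print(linear_cka(A, A))  # identical heads -> 1.0 (up to floating point)
print(linear_cka(A, B))  # unrelated heads -> a much smaller value
```

Note that linear CKA is invariant to isotropic scaling, which makes it a reasonable choice for comparing heads whose parameter magnitudes differ.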
**Q8: Typos**
**R8:** Thank you for your careful proofreading! We will address this in the final version.
---
(Ainslie et al., 2023) GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints. EMNLP 2023.
---
Rebuttal 2:
Title: Thank you for the response
Comment: Thank you for addressing my concerns and for the additional experiments.
I increased my score from 5 to 7.
In addition to my comments, please consider defining "head similarity" first - readers may understand "head similarity" in two ways: (1) parameter similarity and (2) attention weight similarity (paying attention to similar positions). Also consider specifying "parameter" and "attention probability" instead of solely using the term "weight".
---
Rebuttal Comment 2.1:
Title: Response to Reviewer wgtQ’s Feedback
Comment: Thank you so much for taking the time to read our response and for acknowledging our work! We are pleased that our reply could address your concerns, and we are grateful for the valuable suggestions you provided to improve the quality of our article. Your suggestion for more rigorous use of these technical terms is reasonable and necessary, and we will take special care to revise this part in subsequent versions. | Summary: This paper introduces a novel mechanism to optimize large language models (LLMs) by addressing the computational and memory costs associated with the Multi-Head Attention (MHA) mechanism. The authors propose Decoupled-Head Attention (DHA), which adaptively configures the sharing of key and value heads across layers, aiming to strike a balance between performance and efficiency. The transformation from MHA to DHA is achieved through a process of head parameter clustering and linear fusion, which retains the knowledge from the original model with minimal performance degradation.
Strengths: - The paper demonstrates a remarkable balance between performance and efficiency, showing that DHA can achieve high performance with significantly reduced computational resources.
- The proposed method is not limited to a specific model size, indicating that it could be broadly applied to various existing MHA Transformer models.
Weaknesses: The effectiveness of DHA on a wider range of model architectures remains unexplored. Could the method work well on models beyond LLaMA?
The process of transforming MHA to DHA involves several stages of training and fusion that might be complex to implement and reproduce for researchers without access to similar computational resources.
The paper relies solely on linear methods for parameter fusion. There might be room for improvement by exploring non-linear fusion techniques that could potentially offer better optimization.
Technical Quality: 3
Clarity: 2
Questions for Authors: See the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer Txec
Thank you for your appreciation of the novelty of our work and for your thoughtful reviews.
**Q1: The effectiveness of DHA on a wider range of model architectures**
**R1:** Thanks for your suggestion! DHA is primarily designed for models based on the Transformer decoder architecture and can be adapted to any model with this architecture. We chose LLaMA as the experimental baseline because it is a classic decoder-architecture LLM. Other open-source LLMs differ from LLaMA only in certain details (such as activation functions and training methods), which do not affect DHA's training. Successfully applying DHA to LLaMA indicates that it can be used in most decoder-only models. Due to time constraints, we will include experiments with other models in the final version.
**Q2: Complex to implement and reproduce for researchers without access to similar computational resources.**
**R2:** DHA is primarily used in the pre-training phase of very large models and can save a considerable amount of training costs compared to the widely used GQA. For researchers without access to similar computational resources, we can consider the following methods:
- Conduct experiments with the DHA architecture on smaller models before applying it to large-scale models.
- The efficient architecture of DHA can serve as a reference, and adopting the DHA configuration in model design can directly accelerate training. As shown in the [Anonymous Figure](https://mjj.today/i/jqaTqd), when not constructed from MHA, both DHA and GQA start training from random initializations, and DHA demonstrates a faster training speed than GQA.
- Models trained with DHA can be combined with other optimization methods.
**Q3: Exploring non-linear fusion techniques.**
**R3:** Thank you for your valuable suggestions! To the best of our knowledge, DHA is the first attempt to explore learned linear fusion within models for fine-grained LLM parameter compression, and we believe it holds significant value in the current academic field. We are continuing to explore this in our subsequent research, including using non-linear fusion methods for more refined integration.
---
Rebuttal 2:
Comment: Dear Reviewer Txec
The author-reviewer discussion is ending soon.
The authors have provided a rebuttal with explanations and new data.
Please take part in the discussion and respond directly to the authors.
Best regards
AC | Summary: The paper introduces Decoupled-Head Attention (DHA), a new efficient attention mechanism for large language models. DHA adaptively configures group sharing for key and value heads across layers, transforming Multi-Head Attention checkpoints through a three-stage process. It achieves 97.6% of the original performance while using only 0.25% of the pre-training budget and reducing KV cache by 75%. DHA outperforms other efficient attention methods in terms of training speed and performance under limited resources.
Strengths: 1. Novel approach: Introduces Decoupled-Head Attention (DHA), an innovative method to optimize attention mechanisms in large language models.
2. Efficiency gains: Achieves significant reductions in computational and memory costs (75% KV cache reduction) with small performance loss.
3. Rapid adaptation: Requires only 0.25% of the original pre-training budget to achieve 97.6% of performance.
Weaknesses: 1. While the proposed method seems to be effective for MHA, the authors did not demonstrate its compatibility with GQA, which has become increasingly popular and is starting to replace MHA in almost all state-of-the-art models.
2. While efficient adaptation is important, it is more important to see whether the performance of DHA is similar to GQA/MQA at the end of the training process, when the entire model converges. If DHA merely converges faster, I think its value is rather limited.
3. While the accuracy loss is not as large as that of other KV compression methods, a ~5% performance drop for Llama2-7B is still too large. It's almost equivalent to the performance gap between Llama-2-7B and Sheared-Llama-2.7B. This indicates that the process is extremely lossy, and the resulting model might not be useful after the DHA process.
4. It is unclear how DHA combines with existing KV cache compression techniques, such as KV cache quantization.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please answer my questions listed in "Weaknesses"
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer FLzu
Thank you for your appreciation of the novelty of our work and for your thoughtful reviews.
**Q1: DHA's Compatibility with GQA**
**R1:** Thank you for your suggestions! Here, we provide two feasible methods to convert GQA to DHA.
- **Easiest method, in less than 1 minute**. GQA can be losslessly converted into MHA by simply replicating GQA's KV heads. Then, we can perform the DHA transformation on the resulting MHA architecture.
- **Minor modification by grouping KV**. DHA only needs to group and fuse the Key and Value heads. When constructing DHA on GQA, we initially group the Key and Value, maintaining alignment with GQA functionality. During the training phase, the fused head parameters can replace the original GQA heads for sharing.
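The replication approach in the first bullet can be sketched in a few lines of numpy; the tensor layout below (one K or V projection per KV group) is illustrative rather than that of any particular codebase.

```python
import numpy as np

def gqa_kv_to_mha(kv_heads, group_size):
    """Losslessly expand grouped KV heads to one head per query head.

    kv_heads: (num_kv_heads, head_dim, hidden) -- one shared K (or V)
    projection per GQA group. Repeating each head `group_size` times yields
    an MHA layout that computes exactly the same attention outputs.
    """
    return np.repeat(kv_heads, group_size, axis=0)

# Toy example: 2 KV groups, each shared by 4 query heads -> 8 MHA KV heads.
kv = np.arange(2 * 3 * 5, dtype=float).reshape(2, 3, 5)
mha = gqa_kv_to_mha(kv, group_size=4)
print(mha.shape)  # (8, 3, 5)
```

The resulting MHA-shaped checkpoint could then be fed into the usual DHA transformation.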
**Q2: The performance of DHA at the end of the training process.**
**R2:** Under the same training budgets, DHA performs better than GQA.
- **Steady improvement of DHA over GQA in the limited-training-budget setting**. In Figure 6, we showed that with a limited 5B training budget, DHA effectively and consistently outperforms GQA on downstream tasks. At the end of the 5B training, the optimization is converging while the performance gap remains.
- **Improvement remains with more training budgets**. We have supplemented our experiments with more data budgets, and the results are shown below. Our method consistently achieves better perplexity (PPL) than GQA. In the final version, we will include experiments with more training budgets.
| Model\\ Training Tokens | 5B | 7.25B | 10B |
|-------------------|-----|-------|-----|
| GQA-7b-25\% (PPL) | 7.41| 7.15 | 6.72|
| DHA-7b-25\% (PPL) | 7.2 | 7.03 | 6.61|
The superior performance of DHA over GQA is mainly due to two factors: 1. DHA retains more important information through linear fusion; 2. The DHA architecture is more efficient than GQA. By allocating more parameters to more important components, training directly with the DHA architecture also yields better performance than GQA. Refer to the [Anonymous Figure](https://mjj.today/i/jqaTqd).
**Q3: Accuracy loss after transformation.**
**R3:** The performance gap between the results shown in the paper and MHA is primarily due to the following reasons:
- **The gap in pre-training data**. The MHA model was not trained on the same data used for DHA. Since LLaMA's training data is not directly open-sourced, we used an open-sourced pre-training dataset following Sheared-LLaMA (Xia et al. 2024). In Table 1, we directly reported the performance of the original MHA model without continue-training, as we found the performance of the MHA model declined when continue-training on the open-sourced pre-training data. In other words, improved pre-training data would close the gap between DHA and MHA. In the table below we provide a performance comparison with MHA after pre-training on the same data.
| Model | SciQ | PIQA | WinoGrande | ARC-E | ARC-C (25) | HellaSwag (10) | LogiQA | BoolQ (32) | LAMBADA | AVG |
|---------------------|------|------|------------|-------|-------------|----------------|--------|-------------|---------|------|
| LLaMA2-7B | 94.1 | 78.1 | 69.1 | 76.3 | 49.7 | 58.9 | 25.7 | 80.8 | 74.1 | 67.4 |
| LLaMA2-7B-CT(5B) | 93.7 | 77.9 | 68.8 | 76.1 | 49.2 | 58.3 | 23.7 | 79.2 | 73.8 | 66.7 |
| GQA-7B-50% (5B) | 90.8 | 76.2 | 64.8 | 69.2 | 41.6 | 52.8 | 22.5 | 70.5 | 65.2 | 61.5 |
| DHA-7B-50% (5B) | 93.8 | 76.5 | 66.6 | 72.0 | 43.8 | 55.3 | 21.4 | 75.7 | 67.2 | 63.6 |
| GQA-7B-25% (5B) | 89.5 | 75.7 | 61.9 | 68.0 | 38.9 | 50.9 | 23.8 | 67.1 | 64.1 | 60.0 |
| DHA-7B-25% (5B) | 91.7 | 76.8 | 64.4 | 70.9 | 42.8 | 54.2 | 21.8 | 74.6 | 68.4 | 62.8 |
- **The limited training budget setting**. In Table 1, we presented DHA's experimental results, which reflect the early training phase with about 5B tokens, not the performance of Sheared-LLaMA after 50B tokens. More training budget can be allocated to DHA to further improve its performance.
- **Parameter size difference**. Compared to MHA, DHA compresses 50% or 25% of the attention heads, requires only 0.05\% of the pre-training data, and incurs approximately 5% performance loss. The parameter count of MHA is much larger than that of DHA, so some performance loss is inevitable during conversion. Compared with GQA, a strong baseline with the same number of parameters, DHA has shown higher training efficiency and performance advantages. Due to its high efficiency, DHA can use more heads than MHA at the same parameter count, and thus has the opportunity to achieve better performance.
**Q4: DHA combines with existing KV cache compression techniques.**
**R4:** DHA is an efficient GQA-style architecture, so it has similarly good compatibility. In Sections 2 and 4.1, we demonstrated the relationship between DHA and GQA. We tested the compatibility of the DHA model with the KV cache eviction method NACL (Chen et al., 2024). "NACL 25\%" indicates retaining only 25\% of the KV cache. The experimental results are shown in the table below. DHA and GQA exhibit equally good compatibility with KV cache compression techniques. We will include more experiments in the final version.
| Method | log(PPL) |
|-------------------------------|----------|
| GQA-7b-25\% | 2.89 |
| DHA-7b-25\% | 2.84 |
| GQA-7b-25\% (NACL 25\%) | 3.01 |
| DHA-7b-25\% (NACL 25\%) | 2.93 |
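For context on what "NACL 25%" means operationally, a generic top-k KV cache eviction step can be sketched as follows. This is a deliberate simplification (NACL's actual eviction criterion is more elaborate), with `scores` standing in for whatever per-token importance proxy the method computes.

```python
import numpy as np

def evict_kv(keys, values, scores, keep_ratio=0.25):
    """Retain only the top `keep_ratio` fraction of cache entries.

    keys/values: (seq_len, head_dim); scores: (seq_len,) per-token
    importance proxy (e.g. accumulated attention mass).
    """
    keep = max(1, int(len(scores) * keep_ratio))
    idx = np.sort(np.argsort(scores)[-keep:])  # top-k, original order kept
    return keys[idx], values[idx]

rng = np.random.default_rng(0)
keys, values = rng.standard_normal((16, 4)), rng.standard_normal((16, 4))
scores = np.arange(16.0)  # illustrative: later tokens deemed more important
k_kept, v_kept = evict_kv(keys, values, scores)
print(k_kept.shape)  # (4, 4): 25% of the 16 cached entries survive
```

Because DHA already shares KV heads across groups, an eviction step like this acts on an already-smaller cache, which is why the two techniques compose.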
---
(Xia et al., 2024) Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning. ICLR 2024.
(Chen et al., 2024) NACL: A General and Effective KV Cache Eviction Framework for LLMs at Inference Time. ACL 2024.
---
Rebuttal 2:
Comment: Dear Reviewer FLzu
The author-reviewer discussion is ending soon.
The authors have provided a rebuttal with explanations and new data.
Please take part in the discussion and respond directly to the authors.
Best regards
AC
---
Rebuttal 3:
Title: Acknowledgement of rebuttal
Comment: I appreciate the authors' substantial efforts in addressing the concerns raised. After careful consideration, I have decided to update my score to **5: Weak Accept**. My decision is based on the following:
1. Why I improved the score: The authors have successfully demonstrated performance gains over GQA within a fixed training budget.
2. Why not a higher score: The results presented were based on a 5B-token training set, which is considerably smaller than ideal (e.g., Sheared-Llama's 50B tokens), though I acknowledge the time constraints of the rebuttal period.
Should this paper be accepted, I strongly encourage the authors to include results from training on significantly larger corpora in the camera-ready version. This would provide a more comprehensive evaluation of DHA's effectiveness.
---
Rebuttal Comment 3.1:
Title: Response to Reviewer FLzu’s Acknowledgement
Comment: We sincerely thank the reviewer and AC for their extraordinary efforts during the discussion phase! We feel fortunate to have encountered such diligent, responsible, and sincere reviewers and chairs.
We are delighted that reviewer FLzu recognized the merits of our work and provided detailed reasoning, while also showing a reasonable understanding of the time constraints during the discussion. As authors, we understand reviewer's reasonable request to see DHA’s performance with more training data. Therefore, we continued training DHA during this stage and have supplemented the results with DHA’s performance under 10B training, as shown in the table below:
|Model|SciQ|PIQA|WinoGrande|ARC-E|ARC-C (25)|HellaSwag (10)|LogiQA|BoolQ (32)|LAMBADA|AVG|
|---|---|---|---|---|---|---|---|---|---|---|
|LLaMA2-7B-CT(5B)|93.7|77.9|68.8|76.1|49.2|58.3|23.7|79.2|73.8|66.7|
|DHA-7B-25% (10B)|92.6|76.9|66.2|72.2|45.2|57.1|23.6|77.6|71.7|64.8|
|DHA-7B-25% (5B)|91.7|76.8|64.4|70.9|42.8|54.2|21.8|74.6|68.4|62.8|
From the table above, we can see that after training with an additional 5B of data, DHA achieved an approximately 2-point absolute performance improvement, retaining 97.16% of the original performance while significantly reducing the number of heads. In our preliminary experiments, training a 500M-sized decoder-only DHA model with 50B of data (<5% of the pretrain budget) retained 99% of the original model's performance. We commit to supplementing the next version of this paper with DHA's performance under a 50B or even larger training budget to more comprehensively demonstrate DHA's performance ceiling. We deeply appreciate reviewer FLzu's feedback, which has made this paper more thorough and reliable.
Additionally, we would like to bring a small note to the reviewer’s attention. After NeurIPS 2023, the scoring standards have changed from “5: Weak Accept” to “6: Weak Accept.” For more details, please refer to the [2024 Reviewer Guidelines](https://neurips.cc/Conferences/2024/ReviewerGuidelines) under the Review Form section. We would be very grateful if the reviewer could consider the new scoring standards.
Once again, thank you for your exceptional contributions! We welcome any further questions and will respond promptly! | Summary: The paper introduces a novel attention mechanism, Decoupled-Head Attention (DHA), designed to enhance the efficiency of large language models (LLMs) with minimal performance loss. DHA adaptively configures key and value heads across layers by leveraging insights from attention redundancy, leading to significant savings in computational and memory costs. The authors propose a method to transform a standard multi-head attention (MHA) model into DHA with a minimal pre-training budget, achieving near-original performance while reducing the key-value (KV) cache requirements substantially. The experiments demonstrate the effectiveness of DHA in maintaining performance with reduced resources.
Strengths: The paper proposes a novel Decoupled-Head Attention (DHA) mechanism that addresses the computational and memory costs associated with Multi-Head Attention (MHA) in large language models (LLMs). The idea of adaptively configuring group sharing for key and value heads is innovative. DHA initialization can recover the performance with a very small amount of restorative pre-training, compared to Group-Query Attention.
Weaknesses: 1. **Comparison with GQA and MHA Training Data Requirements:** It would be beneficial for the authors to report the performance of the DHA model when continue pretrain with a larger dataset, compared to GQA and MHA models.
2. There is a noticeable accuracy gap between DHA and MHA models on academic datasets.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could the authors clarify why an increase in the fusion phase token budget leads to a decrease in average accuracy post-CT as observed in Table 3?
2. Are there any experimental results available that demonstrate how the DHA model performs on larger datasets?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The author has mentioned limitations in Appendix
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer aEta
Thank you for your appreciation of our work and your insightful comments.
**Q1: Comparison with GQA and MHA Training Data Requirements in Large Dataset.**
**R1:** Under the same training budgets, DHA performs better than GQA.
- **Steady improvement of DHA over GQA in the limited-training-budget setting**. In Figure 6, we showed that with a limited 5B training budget, DHA effectively and consistently outperforms GQA on downstream tasks. At the end of the 5B training, the optimization is converging while the performance gap remains.
- **Improvement remains with more training budgets**. We have supplemented our experiments with more data budgets, and the results are shown below. Our method consistently achieves better perplexity (PPL) than GQA. In the final version, we will include experiments with more training budgets.
| Model\\ Training Tokens | 5B | 7.25B | 10B |
|-------------------|-----|-------|-----|
| GQA-7b-25\% (PPL) | 7.41| 7.15 | 6.72|
| DHA-7b-25\% (PPL) | 7.2 | 7.03 | 6.61|
The superior performance of DHA over GQA is mainly due to two factors:
1. DHA retains more important information through linear fusion;
2. The DHA architecture is more efficient than GQA. By allocating more parameters to more important components, training directly with the DHA architecture also yields better performance than GQA. Refer to the [Anonymous Figure](https://mjj.today/i/jqaTqd).
**Q2: Accuracy gap between DHA and MHA models on academic datasets.**
**R2:** The performance gap between the results shown in the paper and MHA is primarily due to the following reasons:
- **The gap in pre-training data**. The MHA model was not trained on the same data used for DHA. Since LLaMA's training data is not directly open-sourced, we used an open-sourced pre-training dataset following Sheared-LLaMA (Xia et al., 2024). In Table 1, we directly reported the performance of the original MHA model (LLaMA2-7B) without continue-training, as we found the performance of the MHA model declined when continue-training on the open-sourced pre-training data. Compared to S.-LLaMA-1.3B trained on the same dataset, our method loses only 3.5% when compressed by 50% with only a 5B data budget. In other words, improved pre-training data would close the gap between DHA and MHA. In the table below we provide a performance comparison with MHA after pre-training on the same data.
| Model | SciQ | PIQA | WinoGrande | ARC-E | ARC-C (25) | HellaSwag (10) | LogiQA | BoolQ (32) | LAMBADA | AVG |
|---------------------|------|------|------------|-------|-------------|----------------|--------|-------------|---------|------|
| LLaMA2-7B | 94.1 | 78.1 | 69.1 | 76.3 | 49.7 | 58.9 | 25.7 | 80.8 | 74.1 | 67.4 |
| LLaMA2-7B-CT(5B) | 93.7 | 77.9 | 68.8 | 76.1 | 49.2 | 58.3 | 23.7 | 79.2 | 73.8 | 66.7 |
| GQA-7B-50% (5B) | 90.8 | 76.2 | 64.8 | 69.2 | 41.6 | 52.8 | 22.5 | 70.5 | 65.2 | 61.5 |
| DHA-7B-50% (5B) | 93.8 | 76.5 | 66.6 | 72.0 | 43.8 | 55.3 | 21.4 | 75.7 | 67.2 | 63.6 |
| GQA-7B-25% (5B) | 89.5 | 75.7 | 61.9 | 68.0 | 38.9 | 50.9 | 23.8 | 67.1 | 64.1 | 60.0 |
| DHA-7B-25% (5B) | 91.7 | 76.8 | 64.4 | 70.9 | 42.8 | 54.2 | 21.8 | 74.6 | 68.4 | 62.8 |
- **The limited training budget setting**. In Table 1, we presented DHA's experimental results, which reflect the early training phase with only about 5B tokens, not the performance of Sheared-LLaMA after 50B tokens. More training budget can be allocated to DHA to further improve its performance.
- **Parameter size difference**. Compared to MHA, DHA compresses 50% or 25% of the attention heads, requires only 0.05\% of the pre-training data, and incurs approximately 5% performance loss. The parameter count of MHA is much larger than that of DHA, so some performance loss is inevitable during conversion. Compared with GQA, a strong baseline with the same number of parameters, DHA has shown higher training efficiency and performance advantages. Due to its high efficiency, DHA can use more heads than MHA at the same parameter count, and thus has the opportunity to achieve better performance.
**Q3: Clarify why an increase in the fusion phase token budget leads to a decrease in average accuracy post-CT.**
**R3:** Table 3 shows how the budget is allocated between fusion and CT throughout the training phase. Since DHA fusion is very efficient and fast, allocating only 0.1B of the budget to fusion already provides a good starting point, after which pre-training can quickly restore the model's capabilities. If we allocate more (0.2B) to the fusion budget, we can achieve an even better starting point, but within the total constraint of a 5B budget, we are left with less budget (4.8B) for CT. When the starting point is already sufficiently good, increasing the CT budget is the better choice. Overall, allocating additional tokens to the fusion phase does not itself degrade performance, but under a fixed total budget it reduces the tokens available for CT and thus raises overall training costs.
---
(Xia et al., 2024) Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning. ICLR 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response; it resolved part of my concerns, and I will keep my rating.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer aEta’s Feedback
Comment: Thank you very much for reading our response and for all your efforts in the review process! Your valuable comments have greatly helped us improve the paper. If you have any other concerns, we are happy to answer them. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Parameter Disparities Dissection for Backdoor Defense in Heterogeneous Federated Learning | Accept (poster) | Summary: This paper focuses on backdoor defenses in the setting of heterogeneous federated learning. The authors reveal that benign and malicious clients present distinct parameter importance degree. Based on these observations, they propose a method to exclude malicious participants by evaluating parameter importance. The paper includes a thorough comparison with various methods across different datasets.
Strengths: - The paper is well-organized and easy to follow. The authors provide a comprehensive literature review.
- The targeted topic is critical in federated learning. The proposed method uses random public dataset for defense under heterogeneous scenarios, without relying on two assumptions in many current methods: homogeneous distributions and proxy datasets.
- The observation that benign and malicious clients exhibit distinct parameter importance degrees is interesting. The corresponding method is novel and reasonable.
- Experiments under various FL datasets, heterogeneity degrees, malicious ratios, and random datasets show the effectiveness of the proposed method.
Weaknesses: - On page 6, the authors discuss the clustering methods in the cooperative cluster module and compare them with K-Means and DBSCAN. But it is unclear how these methods are configured, e.g., how the authors choose the hyper-parameters.
- For Figure 1, would the client parameter importance degree similarity be affected at a large client scale?
Technical Quality: 4
Clarity: 3
Questions for Authors: no
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## **Response to Reviewer AqVq**
Dear Reviewer AqVq:
We thank the reviewer for the appreciation and valuable comments. We are pleased you found our paper well-organized and our literature review comprehensive. Your recognition of our novel approach and its effectiveness in various scenarios is encouraging. We aim to address your concerns in detail below.
### Weakness
**W1: In page 6, the authors discuss the clustering methods in the cooperative cluster module and compare them with K-Means and DBSCAN. But it is unclear how these methods are achieved, e.g., how the authors choose the hyper-parameters.**
A1: In our study, we compare both K-Means and DBSCAN. K-Means iteratively assigns points to a fixed number of groups, partitioning the observed samples into k clusters; each instance belongs to the cluster with the nearest cluster center, thereby minimizing within-cluster variance. We set the number of clusters to 2, corresponding to the benign and malicious groups. DBSCAN is a density-based clustering algorithm that identifies densely packed areas of data points and distinguishes them from sparser, noise-labeled regions. DBSCAN operates with two primary hyperparameters: eps, the maximum distance between two points for one to be considered in the neighborhood of the other, and min_samples, the minimum number of points required to form a dense region. For our experiments, we selected eps = 0.05 and min_samples = 1. We will add this introduction and the hyperparameter details to the manuscript!
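With these hyperparameters, both baselines can be run in a few lines of scikit-learn, as sketched below; the feature vectors are synthetic stand-ins for the real client importance vectors, constructed so the two groups are well separated.

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

rng = np.random.default_rng(0)
# Synthetic importance features: 11 benign clients near 0, 4 malicious near 1.
features = np.vstack([
    rng.normal(0.0, 0.001, size=(11, 8)),
    rng.normal(1.0, 0.001, size=(4, 8)),
])

# K-Means with 2 cluster centers (benign vs. malicious groups).
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# DBSCAN with the hyperparameters quoted above.
db_labels = DBSCAN(eps=0.05, min_samples=1).fit_predict(features)

print(km_labels)
print(db_labels)
```

On well-separated groups like these, both methods recover the benign/malicious split; the interesting differences arise when the groups overlap.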
**W2: For Figure1, whether this Client parameter importance degree similarity would be affected in the large client scale.**
A2: The similarity in client parameter importance degree is driven by client behaviors, i.e., benign vs. malicious intents, and thus remains unaffected by the scale of client participation. We further validate this through experiments conducted on the CIFAR-10 dataset ($\beta=0.5$) involving 15 clients, including eleven benign participants and four backdoor attackers, as shown in the following table. The results demonstrate a distinct difference in client parameter importance between benign and malicious clients.
*Table: **The similarity matrix for client parameter importance** reveals significant parameter importance differences between benign and malicious groups. Experiments were conducted on the CIFAR-10 dataset ($\beta=0.5$) with **four** backdoor and **eleven** benign clients. We measured the parameter importance based on the Fisher Information.*
|Client Index |0|1|2|3|4|5|6|7|8 |9 |10 |11 (Evil)|12 (Evil)|13 (Evil)| 14 (Evil)|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|**0**|1.0000| 0.1930| 0.3412| 0.5356| 0.5621| 0.1975| 0.3853| 0.4329| 0.1797| 0.2294| 0.4735| 0.0784| 0.1114| 0.1776| 0.0665|
|**1**|0.1930| 1.0000| 0.1448| 0.1967| 0.2762| 0.1802| 0.2707| 0.5343| 0.5597| 0.1454| 0.2181| 0.0622| 0.0949| 0.1249| 0.0481|
|**2**|0.3412| 0.1448| 1.0000| 0.2716| 0.2752| 0.2747| 0.3368| 0.3262| 0.1652| 0.2502| 0.4663| 0.0651| 0.0841| 0.1722| 0.1077|
|**3**|0.5356| 0.1967| 0.2716| 1.0000| 0.5244| 0.1536| 0.2647| 0.3254| 0.2212| 0.1416| 0.5704| 0.0642| 0.0605| 0.0927| 0.0395|
|**4**|0.5621| 0.2762| 0.2752| 0.5244| 1.0000| 0.2014| 0.4969| 0.5118| 0.3136| 0.3476| 0.4479| 0.1006| 0.1742| 0.1462| 0.0853|
|**5**|0.1975| 0.1802| 0.2747| 0.1536| 0.2014| 1.0000| 0.2942| 0.2802| 0.1090| 0.1484| 0.1820| 0.0548| 0.1157| 0.1601| 0.0643|
|**6**|0.3853| 0.2707| 0.3368| 0.2647| 0.4969| 0.2942| 1.0000| 0.5005| 0.1661| 0.4577| 0.3612| 0.0962| 0.1296| 0.1181| 0.0665|
|**7**|0.4329| 0.5343| 0.3262| 0.3254| 0.5118| 0.2802| 0.5005| 1.0000| 0.2883| 0.4163| 0.4292| 0.1024| 0.1614| 0.1712| 0.0640|
|**8**|0.1797| 0.5597| 0.1652| 0.2212| 0.3136| 0.1090| 0.1661| 0.2883| 1.0000| 0.1025| 0.2159| 0.0741| 0.0955| 0.1207| 0.0654|
|**9**|0.2294| 0.1454| 0.2502| 0.1416| 0.3476| 0.1484| 0.4577| 0.4163| 0.1025| 1.0000| 0.2142| 0.0761| 0.1473| 0.1354| 0.0904|
|**10**|0.4735| 0.2181| 0.4663| 0.5704| 0.4479| 0.1820| 0.3612| 0.4292| 0.2159| 0.2142| 1.0000| 0.0414| 0.0727| 0.0814| 0.0467|
|**11(Evil)**|0.0784| 0.0622| 0.0651| 0.0642| 0.1006| 0.0548| 0.0962| 0.1024| 0.0741| 0.0761| 0.0414| 1.0000| 0.2374| 0.2529| 0.1504|
|**12(Evil)**|0.1114| 0.0949| 0.0841| 0.0605| 0.1742| 0.1157| 0.1296| 0.1614| 0.0955| 0.1473| 0.0727| 0.2374| 1.0000| 0.5076| 0.5568|
|**13(Evil)**|0.1776| 0.1249| 0.1722| 0.0927| 0.1462| 0.1601| 0.1181| 0.1712| 0.1207| 0.1354| 0.0814| 0.2529| 0.5076| 1.0000| 0.4377|
|**14(Evil)**|0.0665| 0.0481| 0.1077| 0.0395| 0.0853| 0.0643| 0.0665| 0.0640| 0.0654| 0.0904| 0.0467| 0.1504| 0.5568| 0.4377| 1.0000|
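As a rough illustration of how such a similarity matrix can be produced (not the paper's exact estimator), the sketch below forms diagonal Fisher importance scores from synthetic per-sample gradients and compares clients by cosine similarity; benign clients stress one subset of parameters and malicious clients another.

```python
import numpy as np

def fisher_importance(per_sample_grads):
    """Diagonal Fisher information estimate: mean squared gradient."""
    return np.mean(per_sample_grads ** 2, axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
dim, n = 100, 32
benign_mask = np.concatenate([np.ones(50), 0.1 * np.ones(50)])  # which params matter
evil_mask = benign_mask[::-1]                                   # disjoint emphasis

clients = [fisher_importance(rng.standard_normal((n, dim)) * benign_mask)
           for _ in range(3)]          # clients 0-2: benign
clients += [fisher_importance(rng.standard_normal((n, dim)) * evil_mask)
            for _ in range(2)]         # clients 3-4: malicious

sim = np.array([[cosine(a, b) for b in clients] for a in clients])
print(np.round(sim, 2))  # high within each group, low across groups
```

The block structure of `sim` mirrors the matrix above: within-group entries are large, cross-group entries are small, independent of how many clients participate.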
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. The authors have addressed my concerns. I will maintain my positive score. | Summary: This paper presents an innovative approach to mitigating backdoor attacks in federated learning systems. The authors introduce the Fisher Discrepancy Cluster and Rescale (FDCR) method, which leverages Fisher Information to assess parameter importance in local distributions. By reweighting client parameter updates and identifying significant discrepancies, the FDCR method effectively identifies and mitigates backdoor attackers. The paper demonstrates the efficacy of this approach through empirical results on various federated learning scenarios, highlighting its robustness and effectiveness.
Strengths: 1. Innovative Methodology: The FDCR method introduces a novel approach by using Fisher Information to measure parameter importance, which is a significant contribution to backdoor defense in heterogeneous federated learning.
2. Combination of Client Selection and Parameter Aggregation: The dual approach of client selection and parameter aggregation enhances the overall effectiveness of the method, addressing multiple aspects of the backdoor attack problem.
3. Robustness Across Multiple Scenarios: The empirical results demonstrate that FDCR consistently outperforms other methods across different datasets and backdoor attack scenarios, highlighting its robustness and generalizability.
Weaknesses: 1. Long-term Stability: The long-term stability of the FDCR method over multiple communication rounds is not fully evaluated. It is important to assess whether the method remains effective as the federated learning process continues over many iterations.
2. Clustering Description: Although the authors provide a full comparison of different clustering methods, the selected clustering method (Finch) is not thoroughly introduced. This could confuse readers. The authors should provide a clear description of how Finch is used in their method.
Technical Quality: 3
Clarity: 4
Questions for Authors: The authors are expected to address the concerns in the block of "Weaknesses".
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limitations are discussed in the paper. For the negative societal impact, I didn't find any concern from my side.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## **Response to Reviewer 7qoM**
Dear Reviewer 7qoM:
Thank you for your thoughtful review. We appreciate your recognition of our innovative use of Fisher Information in our method, the dual approach of client selection and parameter aggregation, and its robustness across various scenarios. We aim to address your concerns in our detailed responses below, hoping to provide clarity and demonstrate the effectiveness of our proposed approach.
### Weakness
**W1: Long-term Stability: The long-term stability of the FDCR method over multiple communication rounds is not fully evaluated. It is important to assess whether the method remains effective as the federated learning process continues over many iterations.**
A1: We acknowledge the importance of evaluating the long-term stability of our method over multiple communication rounds. To address this, we have conducted extensive experiments to assess the effectiveness of our method over an extended number of iterations, i.e., 100 epochs. Our results, as shown in the following table, demonstrate that the FDCR method maintains robust backdoor defense capabilities across different datasets and varying degrees of data heterogeneity. We will update the experimental results in the final version!
*Table: **Comparison with the state-of-the-art backdoor robust solutions**: in Cifar-10, Fashion-MNIST,
and USPS scenarios with skew ratio $\beta=0.5$ and malicious proportion $\Upsilon=20\%$. *A* and *R* mean federated benign performance and backdoor failure rate. *V* measures the heterogeneity and robustness trade-off.*
| Dataset | | Cifar-10 | | |Fashion-MNIST | | |USPS| |
|:---:| :---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Method |*A*| *R*| *V* |*A*| *R*| *V* |*A*| *R*| *V*|
| DnC | 62.97 | 77.52 | 70.24| 87.34 | 88.54 |**87.94** | 95.18 | 4.56 | 49.87 |
| FLTrust| 48.29 | 69.91 | 59.10 |71.02 | 6.36 | 38.69| 94.76 | 72.01 | 83.38 |
| SageFlow|65.90 | 48.88 | 57.39 |88.55| 4.85 | 46.70| 96.32 |4.94 |50.63|
| Our FDCR| 55.91 | 95.66 |**75.79**| 86.39 | 88.05 | 87.22 | 96.67 | 89.86 |**93.26**|
**W2: Clustering Description: Although the authors provide a full comparison of different clustering methods, the selected clustering method (Finch) is not thoroughly introduced. This could confuse readers. The authors should provide a clear description of how Finch is used in their method.**
A2: The selected clustering method, Finch, considers the nearest neighbor of each sample as sufficient support for grouping and implicitly selects characteristic prototypes, as prototypes from different domains are less likely to be first neighbors. In our work, we use the Euclidean metric to evaluate the distance between any two client gradient update discrepancies and view the weight with the minimum distance as its “neighbor”, sorting it into the same set. After clustering, we regard the group with the maximum mean weight as the malicious clients and then eliminate their aggregation weights for backdoor defense in heterogeneous federated learning. In our final version, we will provide a comprehensive description of the selected cluster method. | Summary: The paper addresses the issue of backdoor attacks in federated learning systems, where malicious clients introduce triggers in their local models to compromise the global model. They use Fisher Information to determine parameter importance, reweight client updates, and identify malicious clients. The method is designed to handle backdoor attacks even in heterogeneous federated scenarios, showing empirical effectiveness through various experiments.
Strengths: 1. The method is novel. By leveraging Fisher Information to quantify parameter importance and reweight client updates, the paper presents a unique and effective solution to a challenging problem.
2. The method is specifically designed to work in heterogeneous federated learning environments. The empirical results demonstrate its robustness and effectiveness, making it a valuable contribution to the field.
3. The paper provides extensive empirical validation through experiments on multiple datasets and backdoor attack scenarios. This thorough evaluation adds credibility to the proposed method and shows its practical applicability.
4. The method shows faster and more stable convergence rates in various experimental settings.
Weaknesses: 1. More details about Data Heterogeneity can be provided in Section 4.1
2. More analysis on the details of the experimental results can be provided to support the conclusion.
Technical Quality: 3
Clarity: 4
Questions for Authors: See weakness.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: There is no negative impact in this paper. The limitations mentioned by the authors can provide a more comprehensive method in the future research direction.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## **Response to Reviewer oCkz**
Dear Reviewer oCkz:
Thank you for affirming our work and raising insightful questions. We are pleased you found our method novel and effective, leveraging Fisher Information to quantify parameter importance and reweight client updates. We appreciate your acknowledgment of its practical applicability and faster, more stable convergence rates.
### Weakness
**W1: More details about Data Heterogeneity can be provided in Section 4.1.**
Thank you for your advice. Regarding data heterogeneity, we focus on generating non-independent and identically distributed (non-IID) distributions among clients. In our work, we draw $p_c \sim Dir_c (\beta)$ from the Dirichlet distribution and allocate a $p_{c,k}$ proportion of the instances of class $c$ to participant $k$, where $\beta$ is a concentration parameter controlling the similarity among clients. With this partitioning strategy, increased data heterogeneity results in each party having relatively fewer data samples in some classes. Thus, the smaller the $\beta$ value, the more imbalanced the local distribution. We set $\beta$ to 0.5 and 0.3 for the subsequent experimental comparisons. We will update the details about data heterogeneity to make them easier to understand!
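For concreteness, the Dirichlet-based non-IID partitioning described above can be sketched as follows (a minimal illustration; the function and variable names are ours, not the paper's code):

```python
import numpy as np

def dirichlet_partition(labels, num_clients, beta, seed=0):
    """Split sample indices into non-IID client shards.

    For each class c, draw p_c ~ Dir(beta) over clients and hand a
    p_{c,k} fraction of that class's samples to client k. Smaller beta
    yields more skewed (heterogeneous) local distributions.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    shards = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        p = rng.dirichlet([beta] * num_clients)
        # cumulative proportions -> split points inside this class
        cuts = (np.cumsum(p)[:-1] * len(idx)).astype(int)
        for k, part in enumerate(np.split(idx, cuts)):
            shards[k].extend(part.tolist())
    return shards

# toy check: 1000 samples, 10 classes, 5 clients, beta = 0.5
labels = np.repeat(np.arange(10), 100)
shards = dirichlet_partition(labels, num_clients=5, beta=0.5, seed=0)
```

With $\beta = 0.5$ each shard ends up with a visibly imbalanced class histogram, and smaller $\beta$ concentrates whole classes on a few clients.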
**W2: More analysis on the details of the experimental results can be provided to support the conclusion.**
Thanks for your suggestion! We provide a comprehensive discussion comparing existing methods with our approach. As the severity of data heterogeneity and the number of backdoor attackers increase, various methods naturally exhibit a certain degree of defense degradation. Specifically, Distance Difference Defense methods, such as Multi-Krum and DnC, measure the distance among client updates to identify backdoor attackers. These methods face a significant decrease in defense ability under challenging data heterogeneity, i.e., $\beta=0.3$. Furthermore, statistical distribution defense methods, such as Trimmed Mean and Bulyan, calculate general statistical information to depict normal client behavior, making them sensitive to large numbers of malicious attackers. For instance, with a malicious ratio $\Upsilon=30\%$, these methods demonstrate fragile defensive capabilities against backdoor attackers. In contrast, our method leverages inherent network characteristics to measure parameter importance under agnostic distributions, revealing that benign and malicious clients exhibit distinct degrees of parameter importance. Thus, our method demonstrates stable robustness toward varying data heterogeneity and different scales of backdoor attacks. We will provide a detailed experiment analysis in our final version!
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I have raised my score. | Summary: This paper studies backdoor defenses in federated learning. Existing backdoor defenses either assume homogeneous data, existence of validation data or client optimization conflicts. In order to circumvent these limitations, the authors proposed FDCR method. FDCR is based on the observation that parameter importance degree is different between benign heterogeneous distribution and malicious triggered distribution. In particular, FDCR identify malicious clients by importance degree re-weighted parameter discrepancy. Empirical results demonstrate the robustness of the proposed FDCR.
Strengths: * The motivation that the parameter discrepancy between benign heterogeneous gradient and malicious gradient different is different is novel. The method is clearly motivated.
* The paper is clearly written and easy to follow.
* The experiments are extensive and sufficient.
Weaknesses: * The computation cost of Fisher information matrix is unclear.
* Theoretical analysis can better validate the effectiveness of the proposed FDCR
Technical Quality: 4
Clarity: 4
Questions for Authors: Please refer to weaknesses.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: There is no potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## **Response to Reviewer M9TV**
Dear Reviewer M9TV:
We sincerely appreciate your time and effort in reviewing our paper. Your positive feedback on the novelty of our approach, the clarity of our writing, and the comprehensiveness of our experiments is very encouraging. We are glad that our method of addressing the parameter discrepancy between benign and malicious gradients was well received, and that our writing and experiments were clear and thorough. We hope that our responses below will address your concerns and that you will consider updating your score.
### Weaknesses
**W1: The computation cost of Fisher information matrix is unclear.**
A1: Thank you for the feedback. In our work, we require different clients to calculate the parameter importance based on the local distribution via the Fisher Information [1,2]. To save computational effort, we follow previous works and approximate the Fisher information matrix by its diagonal, i.e., ${F}_w \in \mathbb{R}^{|w|}$ [3]. Thus, the computational complexity for the Fisher information matrix in our methodology is $\mathcal{O}(|w|)$, where $|w|$ is the number of network parameters. We will add a computation cost discussion to enhance readability in our revised manuscript!
[1] Fisher, R. A. On the mathematical foundations of theoretical statistics. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, 222(594-604):309–368, 1922.
[2] Amari, S. Neural learning in structured parameter spaces-natural riemannian gradient. Advances in neural information processing systems, pp. 127–133, 1997.
[3] Kirkpatrick J, Pascanu R, Rabinowitz N, et al. Overcoming catastrophic forgetting in neural networks[J]. Proceedings of the national academy of sciences, 2017, 114(13): 3521-3526.
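As a rough illustration of the diagonal approximation discussed in A1, the empirical diagonal Fisher can be estimated from per-sample gradients of the log-likelihood. The toy sketch below does this for a simple logistic model (our own illustration, not the paper's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def diagonal_fisher(w, X, y):
    """Empirical diagonal Fisher for logistic regression.

    F_w[i] ~ E[(d log p(y|x;w) / d w_i)^2], estimated over local data.
    This is an O(|w|) vector instead of the full |w| x |w| matrix.
    """
    p = sigmoid(X @ w)                       # predicted probabilities
    per_sample_grad = (y - p)[:, None] * X   # grad of log-likelihood per sample
    return np.mean(per_sample_grad ** 2, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
w = rng.normal(size=4)
y = (sigmoid(X @ w) > 0.5).astype(float)
F = diagonal_fisher(w, X, y)   # one importance score per parameter
```

In a deep network the same quantity is usually accumulated from squared parameter gradients over mini-batches, which keeps the per-client cost linear in $|w|$ as stated above.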
**W2: Theoretical analysis can better validate the effectiveness of the proposed FDCR.**
A2: In our work, we introduce the Fisher Discrepancy Cluster and Rescale, abbreviated as FDCR, to establish backdoor defense in heterogeneous federated learning. The core insight is to distinguish benign and malicious behaviors based on different parameter importance degrees. We estimate the importance of each parameter using an empirical approximation of Fisher information for each client distribution. The rationale is that Fisher information measures the information carried by an observable random variable about the unknown parameters of the distribution. Thus, the Fisher Information Matrix measures parameter importance by quantifying the sensitivity of the likelihood function to parameter changes, capturing the curvature of the likelihood surface, and reflecting the precision of parameter estimates. With respect to our method, we find that benign and malicious clients manifest noticeable parameter importance discrepancies because they fit distinct distributions and thus exhibit different information content for the same parameter elements. Therefore, we employ the Fisher Information to distinguish between benign and malicious distributions based on distinct degrees of parameter importance.
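A minimal sketch of this pipeline — reweighting client update discrepancies by diagonal Fisher importance, grouping clients by first neighbors, and flagging the group with the larger mean discrepancy — might look like the following. The simple first-neighbor union here stands in for the full Finch algorithm, and all names are illustrative:

```python
import numpy as np

def flag_malicious(deltas, fishers):
    """Cluster Fisher-reweighted client updates by first neighbors.

    deltas, fishers: (K, |w|) arrays of per-client parameter updates
    and diagonal Fisher importances. Each client is linked to its
    nearest neighbor (Finch-style); the resulting group with the
    larger mean discrepancy magnitude is flagged as malicious.
    """
    disc = fishers * deltas                        # importance-weighted discrepancy
    dist = np.linalg.norm(disc[:, None] - disc[None, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    nn = dist.argmin(axis=1)                       # first neighbor of each client

    # union each client with its first neighbor to form groups
    parent = list(range(len(nn)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in enumerate(nn):
        parent[find(i)] = find(j)
    groups = {}
    for i in range(len(nn)):
        groups.setdefault(find(i), []).append(i)

    # flag the group whose members have the larger mean |discrepancy|
    score = {r: np.mean(np.abs(disc[m])) for r, m in groups.items()}
    worst = max(score, key=score.get)
    return set(groups[worst])

# 5 benign clients with small updates, 2 malicious with large ones
rng = np.random.default_rng(1)
deltas = np.vstack([rng.normal(0, 0.01, (5, 8)), rng.normal(5, 0.01, (2, 8))])
flagged = flag_malicious(deltas, np.ones_like(deltas))
```

The server would then zero out the aggregation weights of the flagged clients before averaging, matching the defense step described in the A2 answer above.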
Furthermore, we demonstrate the effectiveness of the proposed method from the communication aspect. Our method conducts backdoor defense from the server side. The server collects the updated client models and corresponding parameter importance matrices to identify client behavior and mitigate the backdoor effect during aggregation. Thus, compared to existing solutions, while our method linearly increases computation cost, it significantly enhances backdoor defense effectiveness, as shown in the following table. We will provide a computation complexity comparison in the final version. Thanks for the advice!
*Table: **Computation burden comparison** on Cifar-10 with $\beta=0.5$ and evil ratio $\Upsilon=0.2$. $w$ refers to the network, $K$ represents client scale, and *O* indicates complexity degree. *A* and *R* mean federated benign performance and backdoor failure rate. *V* measures the heterogeneity and robustness trade-off.*
| Method | Computation Burden |*A*| *R*| *V*|
|:---:| :--:|:---:|:---:|:---:|
| Multi Krum |$\mathcal{O}(K \times \|w\|)$| 50.93 | 85.27 | 68.10 |
|TrimmedMedian |$\mathcal{O}(K\times \|w\|)$|46.80 | 73.69 | 60.25|
| DnC|$\mathcal{O}(K \times \|w\|)$| 60.87 | 84.70 | 72.78 |
| Our FDCR|$\mathcal{O}(2 K\times \|w\|)$ | 65.19 |93.59 |**79.39**|
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your response. I will keep my score. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Fully Explicit Dynamic Gaussian Splatting | Accept (poster) | Summary: The authors propose a Fully Explicit Dynamic Gaussian Splatting method to decrease training time and memory cost without losing high fidelity. The core of the proposed approach, based on the decomposition of static and dynamic Gaussians during training, is to sample dynamic Gaussians at sparse timestamps and then interpolate to estimate the dynamic Gaussians within each time interval. Additionally, the proposed backtracking technique is used to prune useless dynamic Gaussian points. However, there are still some unclear questions. I will consider increasing the rating if the authors can provide a reasonable response.
Strengths: 1. unlike the existing works, the authors attempt to select keyframes and utilize different interpolation techniques on keyframes to model the dynamic Gaussians.
2. The results on the N3V dataset show that the proposed method achieves comparable or higher performance than the existing methods, while largely reducing the memory and training time cost.
3. The novel techniques to extract the dynamic points and model the temporal opacity seem very interesting.
Weaknesses: 1. From the results shown in Fig. 6 and the supplementary materials, the points extracted by the 'dynamic points extraction' technique actually seem to be points with color changes. I think the results are not concrete enough to support the contribution. More experiments on scenes including objects with large movements might be helpful.
2. From the proposed opacity modeling technique and the limitations claimed in Sec. 6, it seems the proposed method can only work well in scenes where the objects are stable, without new objects coming in. I would like to know the performance of the proposed method when new objects come in during progressive processing. It seems that the results do not contain such scenarios.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In lines 238 and 239, why are the top 2% of points converted? Why consider the distant points as a potential bias?
2. Can the authors provide more details on how the parameters of keyframes are optimized?
3. In line 258, what does it mean by 'use only COLMAP point clouds from the first frame'?
4. What if the movements between key frames are not regular motions?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Please refer to 'Weaknesses'.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: [W1] The main motivation for separating dynamic points is to allow FEDGS to handle temporal changes with either color or displacement variations, so stationary objects with temporally changing colors are also trained as dynamic points.
We carry out an additional experiment for the case where points with color changes are regarded as static points. We train 3DGS by masking only the moving parts of N3V, and measure the performance gain (To measure overall gain, we replace dynamic parts with ours) on the remaining parts. Please refer to Table A for the detailed results.
As shown in the evaluation, if we do not consider the points with color changes as dynamic points, it is hard to obtain satisfactory results.
In the case of large movements, we conduct an experiment with longer frames on the birthday scene in Figure D of the uploaded PDF file. The rendering result shows that FEDGS detects the movements of objects as well as the color changes. The quantitative result in detail for the large movements will be covered in the next answer.
[W2] We conduct an experiment with different frames of the Technicolor dataset to see this issue. The selected interval is from the timestamp when the person is completely invisible to the timestamp when the bag is placed on the desk. The result is reported in Table B and Figure D of the uploaded PDF file. In this case, the initialized point clouds only use the COLMAP of the first timestamp (no prior point cloud for the person). This result shows that FEDGS can learn about appearing objects on the scene. This is because the densification of 3DGS works well due to the explicit representation that allows splitting from neighboring Gaussians.
[Q1] This is because the 2% of static points with the biggest movements are related to the dynamic part. We note that the 2% is determined empirically; we observe that extracting more points causes too many dynamic points, and extracting fewer points results in not enough dynamic points. The performance change according to the ratio is shown in Table C.
[Q2] Keyframes are only associated with dynamic points. The parameters optimized in each keyframe are the position and rotation values, and they are optimized by RAdam.
Using Equation 7, the CHIP interpolates the position values stored in the four neighboring keyframes for the dynamic points at timestamp $t$. The interpolated position is represented as an $\mathbb{R}^3$ vector whose gradient can be obtained by the differentiable rasterizer of 3DGS. After obtaining the gradient, by Equation 6, the gradient of each keyframe's position can also be obtained by partial differentiation (we use PyTorch's autograd function) and used to update the position values stored in each keyframe. It is worth noting that thanks to CHIP, FEDGS can also learn the velocity change and momentum of each dynamic point.
Similarly, the keyframe rotation value can also be optimized via Equation 8 and the differentiable rasterizer of 3DGS. The difference from the keyframe position is that it uses two neighboring keyframes.
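To make the keyframe interpolation concrete, here is a toy sketch of cubic Hermite interpolation over explicitly stored keyframe positions, with Catmull-Rom-style tangents taken from finite differences of neighboring keyframes. This is our own assumption of the form; the paper's Equation 7 may scale the tangents differently:

```python
import numpy as np

def chip_position(P, t):
    """Cubic Hermite interpolation of keyframe positions.

    P: (n_keyframes, 3) positions stored explicitly per keyframe.
    t: continuous time in keyframe units. Tangents come from finite
    differences of neighboring keyframes, so the interpolant carries
    velocity/momentum between keyframes.
    """
    i = int(np.clip(np.floor(t), 1, len(P) - 3))   # segment [i, i+1]
    u = t - i
    p0, p1 = P[i], P[i + 1]
    m0 = 0.5 * (P[i + 1] - P[i - 1])               # tangent at keyframe i
    m1 = 0.5 * (P[i + 2] - P[i])                   # tangent at keyframe i+1
    # Hermite basis polynomials
    h00 = 2*u**3 - 3*u**2 + 1
    h10 = u**3 - 2*u**2 + u
    h01 = -2*u**3 + 3*u**2
    h11 = u**3 - u**2
    return h00*p0 + h10*m0 + h01*p1 + h11*m1

# sanity check on a linearly moving point
v = np.array([1.0, 0.0, 2.0])
P = np.arange(5)[:, None] * v
```

Because the interpolant is differentiable in the stored keyframe positions, gradients from the rasterizer can flow back to each keyframe through the same basis polynomials, as described in the answer above.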
[Q3] This means that only the images from the earliest timestamp are used. For the N3V dataset, for example, the COLMAP result is provided for the first frame (about 20 images) out of a total of 300 frames. Note that 4DGS models or STGs use all frames (about 6000 images). This prevents COLMAP from taking excessive computational time to optimize.
[Q4] Irregular motion can be handled by CHIP, given a small enough keyframe interval. CHIP uses the tangent and position of two neighboring keyframes. As shown in Equation 7, the difference in positions between two adjacent keyframes is used as the tangent. The position values are explicitly stored in each keyframe. With these two values, CHIP approximates dynamic motion as a polynomial. If the keyframes are too sparse, it becomes difficult to approximate the motion with CHIP (i.e., higher-degree polynomials are required), but if they are dense enough, most of the motion can be approximated. This is true even if the motion is velocity-variant or non-linear, as shown in the results in Table D. The experiment is performed on the cook spinach scene in the N3V dataset. This scene includes a motion that changes direction, accelerates, and decelerates. We also include experiments that skip frames to simulate larger movements (the more frames skipped, the larger the movement). These results show that if the keyframes of FEDGS are dense enough, even irregular motion can be handled. In general, setting a keyframe interval of 10 is reasonable considering overall performance and storage size.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their explanations. However, there are still some remaining questions.
1. In your answer to [W2], I think you want to claim that your proposed method can work well in scenarios with new objects appearing. However, you also claimed in the limitations that this is one of the limitations, which confuses me. The qualitative result presented seems good, but the quantitative result is not as good as your method's performance on the other dataset.
2. Please clarify which data the experiments on the dynamic point ratio in Table C are conducted on.
3. I am still concerned that the empirically selected dynamic point ratio and the intervals between keyframes might limit the generalization of the proposed method.
4. I am still confused about the answer to [Q3]. What do you mean by 'the COLMAP result is provided for the first frame (about 20 images) out of a total of 300 frames'? Does it mean these 20 images are multi-view images captured at the same time?
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback, your comments are valuable to us. Here are the answers to your questions:
1. We mean that the newly appeared object mentioned in Section 6 has **"no relevant 3D Gaussians in neighboring frames"**. Here "relevant 3D Gaussians" refers to **3D Gaussians that are close enough to be densified from the object**. These Gaussians are not only Gaussians corresponding to the same object, but also Gaussians of neighboring objects (in Figure D of the uploaded pdf, neighboring Gaussians of the person could be the table and decorations). So, it is true that FEDGS can handle newly appearing objects. However, it is still difficult to represent suddenly appearing objects with no neighboring objects. Although we wanted to show such an example, unfortunately, there is no scene where objects suddenly appear in the public datasets. Instead, as shown in Figure D of the uploaded pdf, we display the result on slowly appearing objects. \
In this rebuttal, we chose the video with a different time range for the additional experiment. The frame range includes newly appearing objects **(Frame #50~#169)**. Therefore, the performance can be different between the additional experiments and the main paper **(Frame #151~#200)**.
2. We experiment Table C on the Cooking Spinach scene from the N3V dataset.
3. First, for the static point extraction, **the percentage does not determine the ratio of dynamic points**. In our model, pruning and densification are applied to both static and dynamic points, automatically reducing or increasing the number of both. Even if we convert more static points to dynamic points, as long as it is not excessive, the static points will be split and the dynamic points will be pruned to correct the erroneous conversion, and vice versa. Therefore, the extraction percentile varies during optimization; it is not a fixed value that determines the ratio of dynamic points. In fact, when we used the same extraction percentile, 56.4% of the points were finally considered dynamic on the Birthday scene from the Technicolor dataset, while 32.3% were dynamic on the Cook Spinach scene from the N3V dataset. Unlike the extraction percentile, we do not control the ratio of dynamic points; it depends on the dataset or scene. \
For more generality of the keyframe interval, as in SWinGS (ECCV 2024), mentioned by reviewer Vpue, an adaptive keyframe interval selection based on the magnitude of optical flow between frames could be one solution. With this, dynamic keyframe allocation becomes possible, varying keyframe intervals even within the same scene. Overall, we agree with your concern about generality and will mention it as one of our future works in the revised version of this paper.
4. Exactly. In the case of the N3V dataset, it is a multi-view video dataset with 300 frames captured by approximately 20 time-synchronized cameras (the number of cameras varies from scene to scene). We only use the 20 images taken in the first frame.
We hope our answers will help you better understand our work. And if you have any other questions, don't hesitate to ask us.
---
Rebuttal 2:
Title: Reference tables
Comment: ### Table A. Quantitative results of the experiment on handling color changes without dynamic points.
| Model | PSNR | SSIM | LPIPS |
|-|-|-|-|
| 3DGS | 21.69 | 0.851 | 0.126 |
| 3DGS + ours dynamic | 26.07 | 0.891 | 0.089 |
| Ours | 29.03 | 0.922 | 0.068 |
$~$
$~$
### Table B. Quantitative results on the Birthday scene in the Technicolor dataset.
| Technicolor Birthday longer | PSNR | SSIM | LPIPS |
|-|-|-|-|
| Ours | 29.12 | 0.900 | 0.094 |
$~$
$~$
### Table C. Experimental results on the conversion rates to dynamic points.
| Percent | PSNR | SSIM | LPIPS |
|-|-|-|-|
| 0.5 | 32.55 | 0.956 | 0.043 |
| 1 | 32.86 | 0.957 | 0.042 |
| 2 | 33.04 | 0.956 | 0.041 |
| 4 | 32.90 | 0.956 | 0.042 |
| 8 | 31.72 | 0.955 | 0.042 |
$~$
$~$
### Table D. Experimental results on different keyframe intervals and skipped frames.
| Skipped frames | 1 | | | | 2 | | | | 3 | | | |
|-|-|-|-|-|-|-|-|-|-|-|-|-|
| Keyframe interval | PSNR | SSIM | LPIPS | Size (MB) | PSNR | SSIM | LPIPS | Size (MB) | PSNR | SSIM | LPIPS | Size (MB) |
| 1 | 31.17 | 0.948 | 0.057 | 595 | 31.47 | 0.948 | 0.056 | 415 | 31.81 | 0.946 | 0.051 | 142 |
| 2 | 32.06 | 0.952 | 0.051 | 314 | 32.33 | 0.954 | 0.049 | 322 | 31.81 | 0.953 | 0.044 | 101 |
| 5 | 31.70 | 0.953 | 0.047 | 206 | 32.29 | 0.954 | 0.043 | 126 | 32.53 | 0.954 | 0.045 | 80 |
| 10 | 33.04 | 0.956 | 0.041 | 119 | 32.65 | 0.956 | 0.043 | 93 | 31.79 | 0.953 | 0.046 | 74 |
| 20 | 32.78 | 0.955 | 0.043 | 90 | 32.07 | 0.952 | 0.047 | 78 | 32.08 | 0.953 | 0.048 | 73 |
| 50 | 32.14 | 0.955 | 0.046 | 79 | 31.93 | 0.951 | 0.052 | 72 | 30.91 | 0.949 | 0.056 | 73 | | Summary: The paper proposes a new method in the field of novel view synthesis for video input. The authors propose a Gaussian Splatting-based algorithm that introduces a fully explicit representation at keyframes and models interpolation of gaussians (position, rotation, opacity) between the frames. Additionally, the paper proposes to learn scene separation into static and dynamic parts in the progression of training. The method is evaluated on popular video novel view synthesis literature datasets.
Strengths: - The idea of modelling the scene explicitly in keyframes is certainly an interesting choice - it has the potential to offer a solution to modelling long-range dependencies. In combination with the use of Gaussian Splatting, it is definitely a novel concept.
- The choice of interpolators seems to be good and well supported through experiments, specifically two-gaussian opacity modelling is a neat way of dealing with objects appearing and disappearing.
- Separation into static and dynamic Gaussians is a good idea and based on the results and visualisations seems to work well in the proposed methodology. The premise of the majority of the scene being modelled as static was explored in NeRF-based solutions. However, this work provides a novel solution of how to implicitly divide the scene into dynamic and static gaussians.
- The description of the method is easy to follow.
- The authors provide a large number of ablation experiments with many components of the methods focused on.
- The proposed method reports very convincing training times, especially for training on videos.
- The supplementary video presents the qualitative results in a very clear manner.
Weaknesses: - I would like to see more detailed descriptions of the approach throughout the paper. Mainly:
- How does the temporal opacity modelling relate to the densification of Gaussians?
- More details on keyframe selection would be useful - why fixed step, how is the step chosen.
- What is the motivation of static gaussians having linear position interpolation?
- It would be good to see the same detailed breakdown per scene for the Technicolor dataset. Similarly, the Technicolor experiment shows fewer comparisons to other methods (I would try to include more, at least Gaussian Splatting based ones).
- The keyframe interpolation naturally draws attention to the method potentially being better at handling big movements or longer videos. Therefore, I think such a comparison is missing in this work. It could be done on the same datasets. Other works compare only short portions of the Technicolor dataset which hides the potential issues with longer videos and bigger movements, however, full scenes are available. Similarly, Neural 3D Video offers a very long sequence for the *Flame Salmon* scene.
- Even though the ablation is extensive, there could be more insights on the results provided.
- Firstly, we are missing the information on what is the ablation performed on - which dataset, one or more scenes.
- Is w/o Dynamic point extraction treating everything as static, just with position linear interpolation?
- It seems that all the components are highly relevant (removal of one degrades the quality by almost $0.5dB$ at least). Is that a consistent result between multiple runs? Also, is this scene-specific or an average?
- It would be great if the authors provided some statistical analysis of the results.
Technical Quality: 3
Clarity: 3
Questions for Authors: - HumanRF [1] pays particular attention to how to manage bigger and smaller movements in the video via non-uniform segmentation into manageable chunks - did you notice the high sensitivity of your method to keyframe interval?
- Relating to mentioned more detailed descriptions:
- It seems that only the initial set of gaussians is being modified throughout the training. Therefore, how does the appearance of a new object happen? For a big object, there have to be a lot of new gaussians. Also, all appearances in the video have to be modelled with two gaussians. Are there any issues when an object appears and disappears repetitively (e.g. the rotating decoration in the Technicolor Birthday scene)?
- Point backtracking for pruning is not fully clear to me. To make sure, does $\mathcal{D}$ include all training views in all timesteps? Further, what is the pruning rule (is it carried over from the original Gaussian Splatting)?
- Regarding linear position change for static gaussians - why is this necessary given the gaussians should be static? Is it to correct tiny movements? Is there no need to allow linear rotation interpolation as well?
- I would encourage authors to check [2] - it seems that this method also models the gaussian representation explicitly in keyframes, however, takes a different approach for motion modelling. Note that this has only just been published at ECCV and does not affect my opinion on this paper, it may just be a worthwhile mention in related work of how the proposed method stands out with respect to others.
[1] Mustafa Işık, Martin Rünz, Markos Georgopoulos, Taras Khakhulin, Jonathan Starck, Lourdes Agapito, Matthias Nießner, *HumanRF: High-Fidelity Neural Radiance Fields for Humans in Motion*, ACM Transactions on Graphics (TOG), 2023
[2] Richard Shaw, Jifei Song, Arthur Moreau, Michal Nazarczuk, Sibi Catley-Chandar, Helisa Dhamo, Eduardo Perez-Pellitero, *SWinGS: Sliding Windows for Dynamic 3D Gaussian Splatting*, European Conference on Computer Vision, 2024
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors provided a brief limitations paragraph. I believe the issue with new object appearance is a good mention.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: [W1] Each answer is as follows. The revised version will reflect this description.
1. FEDGS follows the densification algorithm of the original 3DGS. This algorithm accumulates the gradient magnitudes of visible Gaussians (that is, the gradients in the x/y directions of image space) for each camera during training, selects Gaussians whose gradient magnitudes are greater than a threshold, and applies splitting. Temporal opacity does not directly take part in densification; rather, it increases or decreases the 2D gradient according to visibility over time, so a dynamic Gaussian accumulates gradient only at the timestamps in which it is visible.
2. The reason we fix the keyframe interval is an optimization issue. While varying keyframes have advantages in terms of storage (e.g., wider intervals for slow motion and narrower intervals for fast motion), they impose a high indexing cost, since the keyframe index of each Gaussian must be searched for every timestamp. With fixed intervals, this cost can be greatly reduced by simple indexing (we can chunk keyframe data into a vector). We choose fixed-step keyframes because we believe the reduced computational cost outweighs the storage wasted by fixed timestamps, and in practice the model trains to a reasonable storage size.
The keyframe interval is empirically chosen in consideration of rendering quality and efficiency. Shorter intervals can handle more complex motion but are prone to overfitting and require more storage. On the other hand, large keyframe intervals are often inadequate for complex motion. We add test results for a variety of intervals on the N3V dataset in Table A.
3. The reason we assume that static Gaussians have linear motions is to distinguish dynamic points from the remaining static points. We observe a high correlation between spatial movement and dynamicity when training a scene with dynamic motion. We find that when we apply only a linear transformation (i.e., Equation 5) to the 3DGS model, the points with the largest transformation closely coincide with the dynamic part (e.g., selecting the top 2% of points by movement magnitude in the N3V dataset matches the points corresponding to the person). Of course, we could adopt more sophisticated motion assumptions, but the linear motion assumption is enough to account for this. We note that the linear motion is normalized by the distance between each Gaussian and the camera, which imposes higher weight on an object close to the camera than on a distant one when both move by the same distance.
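As a rough illustration of the dynamic-point selection described in W1-3, here is a minimal Python sketch. All function and variable names are hypothetical, and the exact normalization and threshold in the paper may differ; this only shows the idea of ranking points by camera-distance-normalized displacement and taking the top fraction.

```python
import numpy as np

def select_dynamic(pos_t0, pos_t1, cam_center, ratio=0.02):
    """Flag the Gaussians whose linear displacement between two frames,
    normalized by their distance to the camera, is in the top `ratio`."""
    # Per-point displacement magnitude under the linear-motion assumption
    disp = np.linalg.norm(pos_t1 - pos_t0, axis=1)
    # Normalize by distance to the camera so nearby objects weigh more
    dist = np.linalg.norm(pos_t0 - cam_center, axis=1)
    score = disp / np.maximum(dist, 1e-8)
    k = max(1, int(ratio * len(score)))
    # Indices of the highest-scoring (most dynamic) points
    return np.argsort(score)[-k:]
```

With 100 points and `ratio=0.02`, the two points with the largest normalized displacement are returned as dynamic candidates.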
[W2] Thanks for pointing this out. We have added Table B, Table C and Table D with scene breakdown results of PSNR, SSIM and LPIPS respectively for the Technicolor dataset. The reason why there are no results for the other methods is that there are implementation issues because the Technicolor dataset is not officially supported. Nevertheless, we add some baselines to reflect this as best we can.
[W3] We test our model on a 1000-frame scene from the N3V dataset as recommended. We have added a quantitative evaluation in Table E. The rendered images can be found in Figure E of the uploaded PDF file. This result shows reasonable performance and memory usage.
[W4] The answers for each item are shown below. We will clarify them in the revised version of this paper.
1. We report the average results over all scenes of the N3V dataset in the ablation study.
2. In w/o dynamic points, the linear motion transformation in Equation 5 is applied to the static points.
3. As described in W4-1, our ablation reports averages over all scenes in the N3V dataset. We have added Tables F, G and H with per-scene ablation results for PSNR, SSIM and LPIPS, respectively, on the N3V dataset. We ran each ablation 5 times to obtain multi-run statistics, reported below. Compared to the other cases, ours shows the highest mean PSNR and the lowest STD values. Please check it, and let us know if you have more questions.
[W5] We report multi-run experiments on all scenes in the N3V dataset, corresponding to the "ours" rows of Tables F, G and H. We run our method 10 times per scene and report the mean and STD for each. Our method has lower STD values than the other cases, which suggests that each component of FEDGS contributes to stable performance. Please refer to Table I for detailed results.
The answers to [Q1, 2] follow in the next comment.
---
Rebuttal 2:
Title: Answers to [Q1, 2]
Comment: [Q1] The sensitivity to motion depends on the keyframe interval. However, instead of directly measuring motion, we determine the keyframe interval based on rendering quality. As explained in W1-2, the interval is chosen considering both storage usage and rendering quality. Here, we assume that a sufficiently small keyframe interval (in our case, 10) can handle most complex motion. We report the experimental evidence in Table J. To simulate and measure motion speed, we intentionally skip frames in the videos.
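The fixed-interval indexing benefit described in W1-2 and Q1 can be sketched as follows. This is a hedged illustration in Python: the array layout and names are assumptions, and a linear blend stands in for the paper's cubic Hermite position interpolator.

```python
import numpy as np

def keyframe_segment(t, interval):
    """With a fixed keyframe interval, the two keyframes bracketing
    timestamp t follow from integer division -- no per-Gaussian search."""
    k0 = int(t // interval)              # left keyframe index
    u = (t - k0 * interval) / interval   # normalized offset in [0, 1)
    return k0, k0 + 1, u

# Keyframe attributes can be stored as one chunked array per property,
# so every Gaussian shares the same keyframe index at a given timestamp.
num_keyframes, num_gaussians = 6, 4
positions = np.random.rand(num_keyframes, num_gaussians, 3)

k0, k1, u = keyframe_segment(t=23.0, interval=10)
# Linear blend between bracketing keyframes (shown for brevity; the
# paper interpolates positions with a cubic Hermite scheme)
interp = (1 - u) * positions[k0] + u * positions[k1]
```

With a variable interval, `keyframe_segment` would instead require a search (e.g., binary search) over per-Gaussian keyframe timestamps, which is the indexing cost the rebuttal refers to.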
[Q2] Each answer is as follows:
1. As described in W1-1, we adopt the densification algorithm of the original 3DGS model, and this densification is applied to both static and dynamic points. In addition, static points are periodically converted to dynamic points, so even a new object can be treated as dynamic by splitting the Gaussians around it. In our experiments, training starts with about 7,000 initial Gaussians, but by the end the scene usually includes more than 200,000. Densification can handle new objects as long as there are Gaussians around the dynamic object that can be split (e.g., static Gaussians can be converted to dynamic after splitting, or dynamic Gaussians can be split and optimized). Unfortunately, when an object appears suddenly, as described in Section 6, it is difficult to learn because there are no Gaussians around it to split. This is the same problem as in the original 3DGS, which does not learn well unless there is a Gaussian nearby to split (e.g., under random initialization). For objects that appear repeatedly, there is no way to determine whether the disappearing object and the reappearing object are the same, because temporal consistency would be required to decide this. Our model represents them as separate objects when an object is temporally discontinuous. For the rotating decoration in our experiment, which is occluded and then reappears, our model learns that the old dynamic points disappear and new dynamic points appear, as shown in Figure C of the uploaded PDF file. In this figure, you can see that the number of points changes before and after the object flips, which shows that FEDGS creates different Gaussians when the object reappears. Nevertheless, the rendering results look good.
We further carry out an experiment with the presence of occlusion. We have added comparison evaluation in Table K. Figure A of the uploaded PDF file shows the comparison results when occlusion is present. We select 100 frames of the train scene in the Technicolor dataset and compare FEDGS with the other models. We use the first frame as input and do not use the point cloud information of other frames. The results show that FEDGS renders dynamic objects well, even when they disappear and reappear. 4DGS learns dynamic parts well, but struggles to render static parts, while STG struggles to render dynamic parts. 4D Gaussians fails to render dynamic objects.
2. $\mathcal{D}$ represents the images of all timestamps in all training sets. The original 3DGS does not have this step; we add it because the opacity-based pruning of the original 3DGS does not completely remove floaters caused by dynamic objects, since some dynamic objects are trained to move to an invisible area instead of vanishing. Our method measures how much each Gaussian contributes to the error in the rendered image, much like accumulating a gradient during densification. More specifically, for a rendered image we obtain the sum of the error (L1 or SSIM error) and the sum of the alpha values of all the Gaussians in the image. Dividing this cumulative error by the sum of the alpha values normalizes it. We then threshold this normalized error to remove Gaussians that cause large errors.
3. The reason we assume that static points have linear motion is to distinguish dynamic points, as explained in W1-3; it does not aim to model the motion of static parts exactly. We agree with your idea of modeling the rotation of static points, but in our method the magnitude of the position change is enough to distinguish dynamic points. Unlike the original 3DGS, which learns a fixed position for each static point, all static points in FEDGS are approximated as having linear movement. We then measure the magnitude of the motion between frames and reassign a static point as dynamic if the magnitude exceeds a predefined threshold.
Interestingly, although static points are represented with only linear motion over time, this suffices to capture all temporal changes, including color, rotation, and opacity: a static point exhibiting any temporal change is treated as a moving point during optimization. Therefore, instead of using a more complex model for static points, we use a simple yet effective criterion that further reduces the computational complexity.
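A minimal Python sketch of the point-backtracking pruning rule described in Q2-2. The names and threshold are illustrative assumptions; the actual implementation accumulates error and alpha contributions during rendering rather than taking precomputed arrays.

```python
import numpy as np

def backtrack_prune_mask(err_contrib, alpha_contrib, threshold):
    """Accumulate each Gaussian's per-view error contribution over all
    training views and timestamps, normalize by its accumulated alpha,
    and flag Gaussians whose normalized error exceeds the threshold."""
    err_sum = err_contrib.sum(axis=0)        # shape: (num_gaussians,)
    alpha_sum = alpha_contrib.sum(axis=0)    # shape: (num_gaussians,)
    norm_err = err_sum / np.maximum(alpha_sum, 1e-8)
    return norm_err > threshold              # True = prune this Gaussian
```

Normalizing by the accumulated alpha prevents a Gaussian that is simply visible in many views from being pruned just because its raw error sum is large.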
The answer to [Q3] follows in the next comment.
---
Rebuttal 3:
Title: Answer to [Q3]
Comment: [Q3] Thanks for introducing a relevant paper. We have looked at it and state the differences between it and our paper below.
SWinGS disentangles static and dynamic scenes using MLP. For dynamic regions, the model slices the timestamp using a sliding window strategy and trains a deformation MLP of the canonical space by window.
SWinGS has similarities with FEDGS in that it divides static and dynamic parts and slices timestamps, but the differences between them are as follows:
* __Detecting dynamic parts__: SWinGS uses an MLP to classify static and dynamic in a binary manner; it performs classification by thresholding the L1 error and uses spatial location as input. FEDGS, on the other hand, determines dynamic points based on the magnitude by which a static Gaussian moves. This can distinguish dynamic points even if two points are at the same location.
* __Dynamic representation__: SWinGS uses an implicit function defined for each window to find the deformation in the canonical space of the dynamic grid. On the other hand, we store the dynamic motion of each point independently in an explicit way. Therefore, we do not need the coordinates of the canonical space. We only need the position information of the neighboring keyframes.
* __Keyframe selection__: SWinGS uses the magnitude of the optical flow to measure the magnitude of the motion and divides the keyframes based on this. FEDGS, on the other hand, uses a fixed keyframe to take advantage of indexing, as described earlier. This interval is a hyperparameter.
* __Optimization__: SWinGS adds a loss of pixel difference to achieve temporal consistency. In FEDGS, point backtracking for dynamic point pruning, conversion of static points to dynamic points, and progressive learning scheme are added.
---
Rebuttal 4:
Title: Reference tables
Comment: ### Table A. Experimental results on different keyframe intervals on the N3V dataset.
| Keyframe interval | PSNR | SSIM | LPIPS | Size (MB) |
|-|-|-|-|-|
| 1 | 31.17 | 0.948 | 0.057 | 595 |
| 2 | 32.06 | 0.952 | 0.051 | 314 |
| 5 | 31.70 | 0.953 | 0.047 | 206 |
| 10 | 33.04 | 0.956 | 0.041 | 119 |
| 20 | 32.78 | 0.955 | 0.043 | 90 |
| 50 | 32.14 | 0.955 | 0.046 | 79 |
$~$
$~$
### Table B. Quantitative evaluation on the Technicolor dataset using PSNR.
| PSNR | Birthday | Fabien | Painter | Theater | Train | Average |
|-|-|-|-|-|-|-|
| DyNeRF | 29.20 | 32.76 | 35.95 | 29.53 | 31.58 | 31.80 |
| HyperReel | 29.99 | 34.70 | 35.91 | 33.32 | 29.74 | 32.73 |
| STG(Sparse) | 31.97 | 34.54 | 36.50 | 30.55 | 32.66 | 33.25 |
| 4DGS | 28.04 | 26.22 | 33.80 | 31.50 | 27.93 | 29.50 |
| 4D Gaussians | 30.87 | 33.56 | 34.36 | 29.81 | 25.35 | 30.79 |
| Ours | 32.55 | 34.54 | 36.56 | 31.11 | 31.72 | 33.30 |
$~$
$~$
### Table C. Quantitative evaluation on the Technicolor dataset using SSIM.
| SSIM | Birthday | Fabien | Painter | Theater | Train | Average |
|-|-|-|-|-|-|-|
| DyNeRF† | 0.952 | 0.965 | 0.972 | 0.939 | 0.962 | 0.958 |
| HyperReel | 0.922 | 0.895 | 0.923 | 0.895 | 0.895 | 0.906 |
| STG(Sparse) | 0.942 | 0.886 | 0.925 | 0.877 | 0.942 | 0.915 |
| 4DGS | 0.905 | 0.867 | 0.900 | 0.874 | 0.839 | 0.877 |
| 4D Gaussians | 0.906 | 0.867 | 0.886 | 0.848 | 0.729 | 0.847 |
| Ours | 0.943 | 0.884 | 0.929 | 0.875 | 0.921 | 0.910 |
†: Use structural similarity function from scikit-image library
$~$
$~$
### Table D. Quantitative evaluation on the Technicolor dataset using LPIPS.
| LPIPS | Birthday | Fabien | Painter | Theater | Train | Average |
|-|-|-|-|-|-|-|
| DyNeRF | 0.067 | 0.242 | 0.146 | 0.188 | 0.067 | 0.142 |
| HyperReel | 0.053 | 0.186 | 0.117 | 0.115 | 0.072 | 0.109 |
| STG(Sparse) | 0.039 | 0.135 | 0.098 | 0.122 | 0.033 | 0.086 |
| 4DGS | 0.089 | 0.199 | 0.138 | 0.157 | 0.166 | 0.150 |
| 4D Gaussians | 0.088 | 0.188 | 0.162 | 0.189 | 0.272 | 0.180 |
| Ours | 0.043 | 0.152 | 0.089 | 0.143 | 0.071 | 0.100 |
$~$
$~$
### Table E. Quantitative results on the extremely long duration video sequences (Flame Salmon) in the N3V dataset.
| | PSNR | SSIM | LPIPS | Size (MB) |
|-|-|-|-|-|
| Ours | 28.77 | 0.919 | 0.076 | 392 |
---
Rebuttal 5:
Title: Reference tables
Comment: ### Table F. Statistical analysis (Mean ± STD) of ablation studies measured by PSNR.
|PSNR|Coffee Martini|Cook Spinach|Cut Roasted Beef|Flame Salmon|Flame Steak|Sear Steak|Average|
|-|-|-|-|-|-|-|-|
|w/ Linear position|28.04 ± 0.3174|32.26 ± 0.3547|33.02 ± 0.1712|28.93 ± 0.2574|32.92 ± 0.3782|33.46 ± 0.1081|31.44 ± 0.2645|
|w/ Linear rotation|28.25 ± 0.1379|32.21 ± 0.1264|32.78 ± 0.4129|28.68 ± 0.3950|33.08 ± 0.2415|33.49 ± 0.2382|31.41 ± 0.2587|
|w/ Linear position&rotation|28.17 ± 0.2974|32.48 ± 0.2138|32.61 ± 0.9545|28.99 ± 0.4750|32.92 ± 0.1230|33.27 ± 0.3726|31.41 ± 0.4061|
|w/o Dynamic point extraction|27.59 ± 0.3643|28.95 ± 1.0637|29.44 ± 1.3829|27.43 ± 0.2988|29.18 ± 1.4691|31.39 ± 0.9176|29.00 ± 0.9161|
|w/o Temporal opacity|28.18 ± 0.3947|32.23 ± 0.2807|32.18 ± 1.1642|28.81 ± 0.4199|32.84 ± 0.1935|33.58 ± 0.0855|31.30 ± 0.4231|
|w/o Progressive growing|28.26 ± 0.2780|32.36 ± 0.1803|32.92 ± 0.3817|29.49 ± 0.2209|32.70 ± 0.6805|32.99 ± 0.3908|31.45 ± 0.3554|
|w/o Regularization|28.06 ± 0.4647|32.39 ± 0.1820|32.70 ± 0.7968|28.72 ± 0.2011|33.10 ± 0.3345|33.37 ± 0.1111|31.39 ± 0.3484|
|w/o Point backtracking|28.02 ± 0.3477|32.38 ± 0.3267|32.93 ± 0.2917|28.63 ± 0.6187|32.99 ± 0.1653|33.48 ± 0.1177|31.40 ± 0.3113|
|Ours|28.43 ± 0.1691|32.77 ± 0.2060|33.31 ± 0.1972|29.11 ± 0.1202|33.41 ± 0.2379|33.23 ± 0.2406|31.71 ± 0.1952|
$~$
$~$
### Table G. Statistical analysis (Mean ± STD) of ablation studies measured by SSIM.
|SSIM|Coffee Martini|Cook Spinach|Cut Roasted Beef|Flame Salmon|Flame Steak|Sear Steak|Average|
|-|-|-|-|-|-|-|-|
|w/ Linear position|0.9180 ± 0.0019|0.9518 ± 0.0012|0.9549 ± 0.0008|0.9234 ± 0.0014|0.9621 ± 0.0010|0.9622 ± 0.0002|0.9454 ± 0.0011|
|w/ Linear rotation|0.9178 ± 0.0010|0.9507 ± 0.0005|0.9550 ± 0.0008|0.9213 ± 0.0016|0.9621 ± 0.0005|0.9624 ± 0.0008|0.9449 ± 0.0009|
|w/ Linear position&rotation|0.9174 ± 0.0020|0.9526 ± 0.0010|0.9551 ± 0.0005|0.9233 ± 0.0017|0.9620 ± 0.0006|0.9616 ± 0.0006|0.9453 ± 0.0011|
|w/o Dynamic point extraction|0.9102 ± 0.0014|0.9345 ± 0.0048|0.9402 ± 0.0047|0.9166 ± 0.0015|0.9501 ± 0.0039|0.9519 ± 0.0026|0.9339 ± 0.0032|
|w/o Temporal opacity|0.9164 ± 0.0016|0.9505 ± 0.0013|0.9530 ± 0.0004|0.9213 ± 0.0024|0.9612 ± 0.0006|0.9622 ± 0.0004|0.9441 ± 0.0011|
|w/o Progressive growing|0.9166 ± 0.0025|0.9503 ± 0.0015|0.9556 ± 0.0008|0.9266 ± 0.0010|0.9611 ± 0.0018|0.9613 ± 0.0008|0.9453 ± 0.0014|
|w/o Regularization|0.9179 ± 0.0013|0.9518 ± 0.0006|0.9545 ± 0.0017|0.9231 ± 0.0008|0.9628 ± 0.0011|0.9625 ± 0.0004|0.9454 ± 0.0010|
|w/o Point backtracking|0.9176 ± 0.0008|0.9521 ± 0.0011|0.9547 ± 0.0015|0.9208 ± 0.0036|0.9622 ± 0.0007|0.9624 ± 0.0005|0.9450 ± 0.0014|
|Ours|0.9154 ± 0.0013|0.9562 ± 0.0004|0.9572 ± 0.0004|0.9253 ± 0.0005|0.9621 ± 0.0007|0.9603 ± 0.0009|0.9461 ± 0.0007|
$~$
$~$
### Table H. Statistical analysis (Mean ± STD) of ablation studies measured by LPIPS.
|LPIPS|Coffee Martini|Cook Spinach|Cut Roasted Beef|Flame Salmon|Flame Steak|Sear Steak|Average|
|-|-|-|-|-|-|-|-|
|w/ Linear position|0.0723 ± 0.0011|0.0490 ± 0.0009|0.0428 ± 0.0003|0.0759 ± 0.0011|0.0350 ± 0.0006|0.0365 ± 0.0003|0.0519 ± 0.0007|
|w/ Linear rotation|0.0723 ± 0.0014|0.0495 ± 0.0008|0.0441 ± 0.0011|0.0777 ± 0.0019|0.0351 ± 0.0002|0.0363 ± 0.0006|0.0525 ± 0.0010|
|w/ Linear position&rotation|0.0720 ± 0.0029|0.0486 ± 0.0003|0.0437 ± 0.0011|0.0762 ± 0.0012|0.0354 ± 0.0002|0.0372 ± 0.0007|0.0522 ± 0.0011|
|w/o Dynamic point extraction|0.0786 ± 0.0010|0.0749 ± 0.0051|0.0716 ± 0.0096|0.0836 ± 0.0014|0.0642 ± 0.0065|0.0617 ± 0.0070|0.0724 ± 0.0051|
|w/o Temporal opacity|0.0725 ± 0.0023|0.0490 ± 0.0004|0.0452 ± 0.0010|0.0772 ± 0.0026|0.0354 ± 0.0007|0.0368 ± 0.0007|0.0527 ± 0.0013|
|w/o Progressive growing|0.0699 ± 0.0021|0.0511 ± 0.0007|0.0443 ± 0.0005|0.0734 ± 0.0008|0.0384 ± 0.0036|0.0403 ± 0.0034|0.0529 ± 0.0019|
|w/o Regularization|0.0713 ± 0.0030|0.0495 ± 0.0008|0.0437 ± 0.0012|0.0756 ± 0.0016|0.0343 ± 0.0004|0.0369 ± 0.0006|0.0519 ± 0.0013|
|w/o Point backtracking|0.0716 ± 0.0019|0.0491 ± 0.0003|0.0452 ± 0.0038|0.0777 ± 0.0027|0.0350 ± 0.0007|0.0364 ± 0.0004|0.0525 ± 0.0016|
|Ours|0.0721 ± 0.0019|0.0417 ± 0.0004|0.0416 ± 0.0005|0.0659 ± 0.0005|0.0346 ± 0.0010|0.0357 ± 0.0012|0.0486 ± 0.0009|
$~$
$~$
### Table I. Statistical analysis (Mean ± STD) on the N3V dataset.
|Scene|PSNR|SSIM|LPIPS|
|-|-|-|-|
|Coffee Martini|28.43±0.169|0.915±0.0013|0.0720±0.0009|
|Cook Spinach|32.77±0.206|0.956±0.0004|0.0420±0.0004|
|Cut Roasted Beef|33.31±0.197|0.957±0.0004|0.0416±0.0005|
|Flame Salmon|29.11±0.120|0.925±0.0005|0.0659±0.0005|
|Flame Steak|33.41±0.238|0.962±0.0007|0.0346±0.0010|
|Sear Steak|33.23±0.241|0.960±0.0009|0.0357±0.0012|
|Average|31.71±0.195|0.946±0.0007|0.0490±0.0009|
---
Rebuttal 6:
Title: Reference tables
Comment: ### Table J. Experimental results on different keyframe intervals and skipped frames.
| Keyframe interval | PSNR (skip 1) | SSIM (skip 1) | LPIPS (skip 1) | Size, MB (skip 1) | PSNR (skip 2) | SSIM (skip 2) | LPIPS (skip 2) | Size, MB (skip 2) | PSNR (skip 3) | SSIM (skip 3) | LPIPS (skip 3) | Size, MB (skip 3) |
|-|-|-|-|-|-|-|-|-|-|-|-|-|
| 1 | 31.17 | 0.948 | 0.057 | 595 | 31.47 | 0.948 | 0.056 | 415 | 31.81 | 0.946 | 0.051 | 142 |
| 2 | 32.06 | 0.952 | 0.051 | 314 | 32.33 | 0.954 | 0.049 | 322 | 31.81 | 0.953 | 0.044 | 101 |
| 5 | 31.70 | 0.953 | 0.047 | 206 | 32.29 | 0.954 | 0.043 | 126 | 32.53 | 0.954 | 0.045 | 80 |
| 10 | 33.04 | 0.956 | 0.041 | 119 | 32.65 | 0.956 | 0.043 | 93 | 31.79 | 0.953 | 0.046 | 74 |
| 20 | 32.78 | 0.955 | 0.043 | 90 | 32.07 | 0.952 | 0.047 | 78 | 32.08 | 0.953 | 0.048 | 73 |
| 50 | 32.14 | 0.955 | 0.046 | 79 | 31.93 | 0.951 | 0.052 | 72 | 30.91 | 0.949 | 0.056 | 73 |
$~$
$~$
### Table K. Quantitative comparisons on the Train scene in the Technicolor dataset.
| Technicolor Train longer | PSNR | SSIM | LPIPS |
|-|-|-|-|
| STG | 32.17 | 0.940 | 0.035 |
| 4DGS | 29.11 | 0.877 | 0.119 |
| 4D Gaussians | 23.31 | 0.657 | 0.385 |
| Ours | 32.18 | 0.938 | 0.044 |
---
Rebuttal Comment 6.1:
Title: Thanks for the response
Comment: I'd like to thank the authors for their detailed response to my comments. With respect to the questions regarding explanations, I am now fully clear on what the authors did in their method (and I think this should be reflected in the final version of the manuscript). The breakdown per scene for Technicolor looks interesting as well (it invites some analysis/comments on why the authors believe their method improves in particular scenes, etc.). As for the long scene experiment, it would be great to see at some point a comparison with other methods (I am not asking for this within the rebuttal; I just believe it could increase the value of the experimental section in the future).
---
Reply to Comment 6.1.1:
Comment: We are pleased that our responses help to clarify the understanding of our work. Your insights and suggestions are valuable to us. We are also interested in experimenting with long durations and seeing the results under different hyperparameters/iterations. Additionally, we report the experimental results on the long-duration video for the comparison methods below:
| | PSNR | SSIM | LPIPS |
|-|-|-|-|
| 4DGS | 26.73 | 0.899 | 0.112 |
| 4D Gaussians | 28.48 | 0.905 | 0.095 |
| **Ours** | **28.77** | **0.919** | **0.076** |
We will add the additional results and analysis to the revised version. | Summary: This paper models dynamic scenes using 3DGS, unlike other methods that model dynamic scenes with both implicit and explicit representations. It proposes Fully Explicit Dynamic Gaussian Splatting (FEDGS), a method that models 4D scenes using a purely explicit approach. FEDGS employs a Cubic Hermite Interpolator to predict positions at different times and a Spherical Linear Interpolation for predicting rotations at different times. FEDGS can fit the dynamic scenes with sparse initialization in the N3V dataset without any auxiliary modules or tricks to encode lengthy temporal information.
Strengths: 1. Modeling 4D scenes with a purely explicit method is rare, and FEDGS can render at 120fps under 157MB, and the training speed is fast.
2. FEDGS, as a purely explicit method, achieves better results in the case of sparse COLMAP point clouds. Many methods based on 3DGS for dynamic scene reconstruction face challenges with sparse initial point clouds.
Weaknesses: 1. In the supplementary video, compared to other methods, many positions that should be static are moving, causing flickering in objects like hats and tables, and even some Gaussian spheres can be seen rotating. Is this a drawback of the method described in section 4.2.2? This is completely different from the results of previous methods like NeRFPlayer [69] that separate dynamic and static elements, and this part lacks further explanation and clarification.
2. The most attractive aspect of FEDGS is its dynamic scene reconstruction results under sparse COLMAP point cloud conditions, which may be an advantage of the purely explicit approach. However, the paper does not clearly explain this, and providing more experiments could highlight the advantages of the purely explicit method, enhancing the paper's contribution.
3. Compared to 4DGaussians, the rendering speed is slower. Is this due to slower prediction of positions and rotations for 3DGS? However, 4DGaussians uses hexplane predictions, which should have a higher computational cost than Equations 6 and 9.
Technical Quality: 3
Clarity: 2
Questions for Authors: Why the purely explicit approach can bring improvements under sparse COLMAP point cloud conditions is not clearly explained in the paper. I would be pleased to see the author address the issues mentioned in the weakness section, improving the quality and contribution of the paper.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: After separating dynamic and static Gaussians, the rendering result of the static parts may experience jitter, which is the main limitation, and it's worth more exploration here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: [W1] We thank you for your careful comment about the static points. In practice, we observe temporal color and luminance changes even when a point's position does not change. Our method is designed to handle all such temporal changes as dynamic points. This includes not only the movement of objects, but also changes over time such as appearances, disappearances, and color changes. In some cases, such as the hats and tables in our supplementary video, there are temporal shadows or luminance changes at some timestamps. In these cases, dynamic points account for the light or rotation changes as well, thanks to our spherical harmonics.
To validate this, we conduct an experiment in which a stationary object is learned as completely static; see Figure B of the uploaded PDF file. We train 3DGS on the coffee martini scene with a mask on the dynamic objects, which are a person and a glass. We report test-set results when 3DGS is used alone and when the dynamic parts of 3DGS are replaced with our model. The numerical results are shown in Table A. As you can see, learning completely static points, even those whose positions do not change, leads to poor rendering quality, because they cannot handle the other temporal information such as color changes or shadows. Therefore, it is beneficial to treat them as dynamic points.
[W2, Q1] The reason we have an advantage in the sparse COLMAP condition is that the explicit representation makes the densification algorithm of 3DGS work better. Since the sparse COLMAP condition starts training with fewer points, the empty space needs to be filled with new Gaussians split from existing ones. In 3DGS, the splitting algorithm acts on points whose accumulated image-space (x/y-direction) gradient exceeds a threshold; this gradient must be large enough for a Gaussian to be split. However, when using an implicit function, the deformation of the implicit function is optimized first, rather than the gradient of the Gaussian. This interferes with Gaussian splitting, so implicit models require a more accurate 3D prior.
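The gradient-threshold splitting rule this argument relies on can be sketched as follows. This is a hedged illustration in Python: the threshold value and names are only indicative of the 3DGS-style densification convention, not the authors' exact implementation.

```python
import numpy as np

def split_candidates(grad_accum, num_obs, grad_threshold=0.0002):
    """Select Gaussians to densify: average each Gaussian's accumulated
    2D (image-space) gradient magnitude over the views in which it was
    visible, and flag those whose average exceeds the threshold."""
    avg_grad = grad_accum / np.maximum(num_obs, 1)
    return avg_grad >= grad_threshold   # True = candidate for split/clone
```

Under sparse COLMAP initialization, filling empty space depends on this mask firing often enough; if an implicit deformation field absorbs the error instead, the per-Gaussian gradients stay small and fewer splits occur.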
We perform several experiments in this regard. The following experiments show results for scenes that are difficult to train. In these experiments, the ability to reconstruct dynamic scenes from sparse COLMAPs implies that these scenes can also be handled by FEDGS.
First, we select the 100 frames including the occlusion of dynamic objects in the train scene of the Technicolor dataset and compare FEDGS with others. We use the point cloud prior of the first frame to give no information about the reappearing object after the object is occluded. The results are reported in Table B and the example is displayed in Figure A of the uploaded PDF.
In these results, the explicit models STG, 4DGS, and ours perform significantly better. STG does not learn the dynamic part well in later frames, and while 4DGS may render the dynamic part well, it has difficulty handling the static part, which has a negative impact on overall performance. In particular, 4DGaussians, which is an implicit model, fails to disentangle the static and dynamic parts, resulting in missing renderings of the dynamic part. Our model, on the other hand, shows good performance and the ability to learn both static and dynamic parts well.
Next, we carry out an experiment to see whether FEDGS can learn newly appearing objects. To handle new objects well, we need to split the dynamic part well. To test this, we select 120 frames from the Technicolor Birthday scene in which a person appears from nowhere, and we use the point cloud prior from a frame where the person is invisible. The numerical results are shown in Table C and the rendered images in Figure D of the uploaded PDF file. This result shows that FEDGS is beneficial for splitting Gaussians in dynamic scenes and can handle newly appearing dynamic objects.
Finally, we conduct an experiment with longer frames (1000 frames, 20,000 images in total) of the flame salmon scene from the N3V dataset. The result is reported in Table D and the rendered image in Figure E of the uploaded PDF file. This result shows that our model can learn well with reasonable storage, even for extremely long videos.
[W3] This is because 4DGaussians can be trained with fewer Gaussians on the N3V dataset than on complex scenes such as those in the Technicolor dataset. However, 4DGaussians suffers a large FPS drop when the minimum number of Gaussians needed is large (i.e., complex scene configurations). To demonstrate this, we conduct an additional experiment on a more complicated scene from Technicolor; the result is in Table E.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response!
I have carefully read the rebuttal, and most of my concerns have been addressed. Although Table E shows good rendering speed, it does not show the storage cost, which is 392MB in the complex scene shown in Table D. There is still a concern about the balance between speed and storage in the explicit method proposed in this paper.
---
Reply to Comment 1.1.1:
Comment: Thanks for your comments that our rebuttal addresses almost all your concerns. Here is the answer to your additional question:
We report the results on both the Technicolor dataset and the N3V dataset. As shown in the tables below, the 4D Gaussians model has a larger variation in FPS, despite having a similar storage size in both datasets. This validates that rendering speed is more dependent on the scene configuration than on storage size. Our method, on the other hand, shows consistent performance regardless of scenes.
We note that, regarding storage size, around 300MB should be manageable for most devices these days (note that the storage size of the public datasets used is on the order of gigabytes). As another aspect, we measure the shared GPU memory of ours and 4D Gaussians on a machine with one NVIDIA 2080Ti. The difference between them is marginal, which means both models can run on the same GPU machine.
Nevertheless, we recognize that our model takes more storage than 4D Gaussians. As future work, we will incorporate compression techniques [1, 2] into our model to reduce storage size. We expect this to be feasible because our model preemptively separates the static part of a scene, which is easily compressed.
$~$
### Table F. Averaged results from Technicolor dataset
| Method | PSNR | SSIM | LPIPS | FPS | Size (MB) | GPU memory (GB) |
|-|-|-|-|-|-|-|
| 4D Gaussians | 30.79 | 0.847 | 0.180 | 45 | **44** |**1.3**|
| **Ours** | **33.30** | **0.910** | **0.100** | **62** | 307 |1.6|
$~$
### Table G. Averaged results from N3V dataset
| Method | PSNR | SSIM | LPIPS | FPS | Size (MB) | GPU memory (GB) |
|-|-|-|-|-|-|-|
| 4D Gaussians | 26.69 | 0.923 | 0.074 | **147** | **34** |1.4|
| **Ours** | **29.04** | **0.940** | **0.052** | 121 | 157 |**1.1**|
$~$
[1] Simon N. et al., _Compressed 3D Gaussian Splatting for Accelerated Novel View Synthesis_, Conference on Computer Vision and Pattern Recognition, 2024
[2] Joo Chan L. et al., _Compact 3D Gaussian Representation for Radiance Field_, Conference on Computer Vision and Pattern Recognition, 2024
---
Rebuttal 2:
Title: Reference tables
Comment: ### Table A. Quantitative results of the experiment on handling color changes without dynamic points.
| Model | PSNR | SSIM | LPIPS |
|-|-|-|-|
| 3DGS | 21.69 | 0.851 | 0.126 |
| 3DGS + ours dynamic | 26.07 | 0.891 | 0.089 |
| Ours | 29.03 | 0.922 | 0.068 |
$~$
$~$
### Table B. Quantitative comparison on the Train scene in the Technicolor dataset.
| Technicolor Train longer | PSNR | SSIM | LPIPS |
|-|-|-|-|
| STG | 32.17 | 0.940 | 0.035 |
| 4DGS | 29.11 | 0.877 | 0.119 |
| 4D Gaussians | 23.31 | 0.657 | 0.385 |
| Ours | 32.18 | 0.938 | 0.044 |
$~$
$~$
### Table C. Quantitative results on the Birthday scene in the Technicolor dataset.
| Technicolor Birthday longer | PSNR | SSIM | LPIPS |
|-|-|-|-|
| Ours | 29.12 | 0.900 | 0.094 |
$~$
$~$
### Table D. Quantitative results on the extremely long duration video sequences (Flame Salmon) in the N3V dataset.
| Flame Salmon longer | PSNR | SSIM | LPIPS | Size (MB) |
|-|-|-|-|-|
| Ours | 28.77 | 0.919 | 0.076 | 392 |
$~$
$~$
### Table E. Quantitative evaluation on the Technicolor dataset.
| Method | PSNR | SSIM | LPIPS | FPS |
|-|-|-|-|-|
| 4D Gaussians | 30.79 | 0.847 | 0.180 | 45 |
| Ours | 33.30 | 0.910 | 0.100 | 62 | | Summary: The authors propose a fully explicit dynamic Gaussian splatting method based on keyframe interpolation. The authors separate a dynamic scene into static Gaussians and dynamic Gaussians during training and apply interpolation techniques under a temporal explicit representation, including a polynomial basis interpolator for position, a spherical interpolator for rotation, and a simplified Gaussian mixture model for opacity. Additionally, the authors introduce a progressive training scheme and a point-backtracking technique to improve the final convergence. The proposed method was validated on the Neural 3D Video dataset and the Technicolor dataset, and outperforms the baselines.
Strengths: 1. The paper is well-written and easy to understand.
2. The proposed designs are well-validated. The authors have done a lot of ablation experiments to validate their designs.
3. As a fully explicit method, the paper shows impressive results in terms of rendering quality, model size, and rendering efficiency.
Weaknesses: 1. Some important baselines are missing, including [1], [2], and [3]. All of them had released their code before the NeurIPS deadline. They have shown better quality than most baselines selected by this paper.
2. Some metrics are missing. I am not sure why in Table 1, the authors only report PSNR. The average LPIPS and SSIM comparison should be included.
3. The quality improvement is minor compared with some baselines. According to Table 1, the average PSNR is just about 0.2dB higher than some baselines.
[1] 3DGStream: On-the-Fly Training of 3D Gaussians for Efficient Streaming of Photo-Realistic Free-Viewpoint Videos
[2] 4K4D: Real-Time 4D View Synthesis at 4K Resolution
[3] Im4D: High-Fidelity and Real-Time Novel View Synthesis for Dynamic Scenes
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I am curious about the comparison results between this method and the baselines I mentioned in the weaknesses section.
2. I am curious about the choice of keyframe interval. According to L259, the time interval is set to 10. I am curious why the authors chose 10 as the final hyperparameter. Would a different interval, such as 5 or 20, affect the final quality?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations have been well discussed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: [W1, Q1] Thank you for pointing us to these relevant papers; we believe an additional comparison with them makes this paper more solid. We have thoroughly reviewed these models and will include them in the revised version. You can find the updated results in Table A, where we report all available values. For 3DGStream [1], we have directly referred to the results from their paper. For 4K4D [2] and Im4D [3], the results have been taken from the STG paper [4].
These works showcase impressive results and offer valuable insights that we deeply respect. However, our approach does have certain advantages over theirs. First, 3DGStream [1] does not support random access in time, which means one cannot directly access the desired scene at an arbitrary timestamp. Moreover, 3DGStream requires a trained 3DGS model for initialization.
Second, both 4K4D [2] and Im4D [3] require background scenes for training. Unfortunately, the N3V dataset does not provide background scenes; our reproduction of Im4D achieves 30.34 dB PSNR.
[W2] Thanks for pointing this out. We attach SSIM and LPIPS in Tables B, C, and D. Our results include the models mentioned in W1. We will add them to the revised version. Note that there are two implementations of the SSIM metric: SSIM$\_{1}$ is the structural similarity function from the _scikit-image_ library, and SSIM$_{2}$ is the implementation borrowed from the 3DGS codebase.
[W3] Other NeRF baselines may have comparable PSNR, but they have drawbacks in rendering/training time or model size. It can also be seen that our method gains more when starting from the sparse initial 3D points from COLMAP, compared to the other Gaussian Splatting baselines.
[Q2] While final rendering quality depends on the keyframe selection, keyframes are also related to efficiency. Shorter keyframe intervals improve rendering quality because they allow more complex motions to be approximated, but once the keyframe interval is small enough, further reducing it tends to overfit and requires larger storage. Therefore, we need to select an appropriately large keyframe interval. We report the numerical evaluation of FEDGS with respect to the keyframe interval in Table E.
We note that the empirical setting may vary according to the properties of the dataset. We promise that the evaluation results will be included in the revised version of this paper.
[4] Zhan Li, Zhang Chen, Zhong Li, and Yi Xu. Spacetime gaussian feature splatting for real-time dynamic view synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
---
Rebuttal Comment 1.1:
Comment: Thanks for your replies.
Although the papers have shown some strengths and the rebuttal has answered some of my concerns, it still misses some important baselines as I mentioned, and its overall performances on different metrics (PSNR, LPIPS, and SSIM) are not SOTA. Therefore, I am inclined to maintain my score.
I strongly suggest the authors include more baselines and metrics in the revised paper.
---
Reply to Comment 1.1.1:
Comment: We are sorry that our response did not fully address your concern. However, it is notable that we have tried to reproduce the comparison baselines that you suggested. To be specific, both 4K4D and Im4D require trained backgrounds of scenes, but these are not provided in their official source code, which makes it impossible to run them. For 3DGStream, we also used the authors' provided code, but were unable to reproduce the performance reported in the original paper: we observe only 32.55 dB PSNR on the Cook Spinach scene, lower than the reported 33.31 dB. We are now contacting the authors to investigate these issues, and will make this clear in the revised version.
---
Rebuttal 2:
Title: Reference tables
Comment: ### Table A. Quantitative evaluation on the N3V Dataset using PSNR.
| PSNR | Coffee Martini | Cook Spinach | Cut Roasted Beef | Flame Salmon | Flame Steak | Sear Steak | Average |
|-|-|-|-|-|-|-|-|
| NeRFPlayer | 31.53 | 30.56 | 29.35 | 31.65 | 31.93 | 29.13 | 30.69 |
| HyperReel | 28.37 | 32.30 | 32.92 | 28.26 | 32.20 | 32.57 | 31.10 |
| Neural Volumes | N/A | N/A | N/A | 22.80 | N/A | N/A | 22.80 |
| LLFF | N/A | N/A | N/A | 23.24 | N/A | N/A | 23.24 |
| DyNeRF | N/A | N/A | N/A | 29.58 | N/A | N/A | 29.58 |
| HexPlane | N/A | 32.04 | 32.55 | 29.47 | 32.08 | 32.39 | 31.71 |
| K-Planes | 29.99 | 32.60 | 31.82 | 30.44 | 32.38 | 32.52 | 31.63 |
| MixVoxels-L | 29.63 | 32.25 | 32.40 | 29.80 | 31.83 | 32.10 | 31.34 |
| MixVoxels-X | 30.39 | 32.31 | 32.63 | 30.60 | 32.10 | 32.33 | 31.73 |
| Im4D | N/A | N/A | 32.58 | N/A | N/A | N/A | 32.58 |
| 4K4D | N/A | N/A | 32.86 | N/A | N/A | N/A | 32.86 |
| Dense COLMAP point cloud input | | | | | | | |
| STG | 28.41 | 32.62 | 32.53 | 28.61 | 33.30 | 33.40 | 31.48 |
| 4DGS | 28.33 | 32.93 | 33.85 | 29.38 | 34.03 | 33.51 | 32.01 |
| 4DGaussians | 27.34 | 32.46 | 32.90 | 29.20 | 32.51 | 32.49 | 31.15 |
| Sparse COLMAP point cloud input | | | | | | | |
| STG | 27.71 | 31.83 | 31.41 | 28.06 | 32.17 | 32.67 | 30.64 |
| 4DGS | 26.51 | 32.11 | 31.74 | 26.93 | 31.44 | 32.42 | 30.19 |
| 4DGaussians | 26.69 | 31.89 | 25.88 | 27.54 | 28.07 | 31.73 | 28.63 |
| 3DGStream | 27.75 | 33.31 | 33.21 | 28.42 | 34.30 | 33.01 | 31.67 |
| Ours | 29.04 | 32.46 | 33.21 | 29.56 | 33.23 | 33.84 | 31.89 |
$~$
$~$
### Table B. Quantitative evaluation on the N3V dataset using SSIM$_{1}$.
| SSIM$_{1}$ | Coffee Martini | Cook Spinach | Cut Roasted Beef | Flame Salmon | Flame Steak | Sear Steak | Average |
|-|-|-|-|-|-|-|-|
| NeRFPlayer | 0.951 | 0.929 | 0.908 | 0.940 | 0.950 | 0.908 | 0.931 |
| HyperReel | 0.892 | 0.941 | 0.945 | 0.882 | 0.949 | 0.952 | 0.927 |
| Dense COLMAP point cloud input | | | | | | | |
| STG | 0.916 | 0.952 | 0.954 | 0.918 | 0.960 | 0.961 | 0.944 |
| 4DGS | N/A | N/A | 0.980 | 0.960 | N/A | N/A | 0.970 |
| 4DGaussians | 0.905 | 0.949 | 0.957 | 0.917 | 0.954 | 0.957 | 0.940 |
| Sparse COLMAP point cloud input | | | | | | | |
| STG | 0.904 | 0.946 | 0.946 | 0.913 | 0.954 | 0.955 | 0.936 |
| 4DGS | 0.902 | 0.948 | 0.947 | 0.904 | 0.954 | 0.955 | 0.935 |
| 4DGaussians | 0.893 | 0.944 | 0.913 | 0.896 | 0.946 | 0.946 | 0.923 |
| Ours | 0.915 | 0.947 | 0.948 | 0.917 | 0.956 | 0.959 | 0.940 |
$~$
$~$
### Table C. Quantitative evaluation on the N3V dataset using SSIM$_{2}$.
| SSIM$_{2}$ | Coffee Martini | Cook Spinach | Cut Roasted Beef | Flame Salmon | Flame Steak | Sear Steak | Average |
|-|-|-|-|-|-|-|-|
| Neural Volumes | N/A | N/A | N/A | 0.876 | N/A | N/A | 0.876 |
| LLFF | N/A | N/A | N/A | 0.848 | N/A | N/A | 0.848 |
| DyNeRF | N/A | N/A | N/A | 0.960 | N/A | N/A | 0.960 |
| HexPlane | N/A | 0.983 | 0.985 | 0.980 | 0.988 | 0.986 | 0.984 |
| K-Planes | 0.953 | 0.966 | 0.966 | 0.953 | 0.970 | 0.974 | 0.964 |
| MixVoxels-L | 0.951 | 0.968 | 0.966 | 0.949 | 0.971 | 0.976 | 0.964 |
| MixVoxels-X | 0.954 | 0.968 | 0.971 | 0.953 | 0.973 | 0.976 | 0.966 |
| Im4D | N/A | N/A | 0.970 | N/A | N/A | N/A | 0.970 |
| 4K4D | N/A | N/A | 0.972 | N/A | N/A | N/A | 0.972 |
| Dense COLMAP point cloud input | | | | | | | |
| STG | 0.910 | 0.947 | 0.950 | 0.913 | 0.956 | 0.958 | 0.939 |
| 4DGS | N/A | N/A | N/A | N/A | N/A | N/A | 0.972 |
| Sparse COLMAP point cloud input | | | | | | | |
| STG | 0.898 | 0.940 | 0.939 | 0.907 | 0.949 | 0.950 | 0.931 |
| 4DGS | 0.894 | 0.944 | 0.943 | 0.896 | 0.951 | 0.951 | 0.930 |
| 4DGaussians | 0.886 | 0.939 | 0.907 | 0.889 | 0.942 | 0.942 | 0.917 |
| Ours | 0.922 | 0.951 | 0.953 | 0.923 | 0.960 | 0.963 | 0.945 |
$~$
$~$
### Table D. Quantitative evaluation on the N3V dataset using LPIPS.
| LPIPS | Coffee Martini | Cook Spinach | Cut Roasted Beef | Flame Salmon | Flame Steak | Sear Steak | Average |
|-|-|-|-|-|-|-|-|
| NeRFPlayer | 0.085 | 0.113 | 0.144 | 0.098 | 0.088 | 0.138 | 0.111 |
| HyperReel | 0.127 | 0.089 | 0.084 | 0.136 | 0.078 | 0.077 | 0.096 |
| Neural Volumes | N/A | N/A | N/A | 0.295 | N/A | N/A | 0.295 |
| LLFF | N/A | N/A | N/A | 0.235 | N/A | N/A | 0.235 |
| DyNeRF | N/A | N/A | N/A | 0.083 | N/A | N/A | 0.083 |
| HexPlane | N/A | N/A | N/A | 0.098 | N/A | N/A | 0.098 |
| K-Planes | 0.024 | 0.017 | 0.017 | 0.024 | 0.015 | 0.013 | 0.018 |
| MixVoxels-L | 0.106 | 0.099 | 0.088 | 0.116 | 0.088 | 0.080 | 0.096 |
| MixVoxels-X | 0.081 | 0.062 | 0.057 | 0.078 | 0.051 | 0.053 | 0.064 |
| Dense COLMAP point cloud input | | | | | | | |
| STG | 0.069 | 0.043 | 0.042 | 0.063 | 0.034 | 0.033 | 0.047 |
| 4DGS | N/A | N/A | 0.041 | N/A | N/A | N/A | 0.055 |
| Sparse COLMAP point cloud input | | | | | | | |
| STG | 0.087 | 0.056 | 0.060 | 0.074 | 0.046 | 0.046 | 0.062 |
| 4DGS | 0.079 | 0.041 | 0.041 | 0.078 | 0.036 | 0.037 | 0.052 |
| 4DGaussians | 0.095 | 0.056 | 0.104 | 0.095 | 0.050 | 0.046 | 0.074 |
| Ours | 0.068 | 0.049 | 0.048 | 0.075 | 0.035 | 0.036 | 0.052 |
---
Rebuttal 3:
Title: Reference table
Comment: ### Table E. Experimental results on different keyframe intervals and skipped frames.
| Skipped frames | 1 | | | | 2 | | | | 3 | | | |
|-|-|-|-|-|-|-|-|-|-|-|-|-|
| Keyframe interval | PSNR | SSIM | LPIPS | Size (MB) | PSNR | SSIM | LPIPS | Size (MB) | PSNR | SSIM | LPIPS | Size (MB) |
| 1 | 31.17 | 0.948 | 0.057 | 595 | 31.47 | 0.948 | 0.056 | 415 | 31.81 | 0.946 | 0.051 | 142 |
| 2 | 32.06 | 0.952 | 0.051 | 314 | 32.33 | 0.954 | 0.049 | 322 | 31.81 | 0.953 | 0.044 | 101 |
| 5 | 31.70 | 0.953 | 0.047 | 206 | 32.29 | 0.954 | 0.043 | 126 | 32.53 | 0.954 | 0.045 | 80 |
| 10 | 33.04 | 0.956 | 0.041 | 119 | 32.65 | 0.956 | 0.043 | 93 | 31.79 | 0.953 | 0.046 | 74 |
| 20 | 32.78 | 0.955 | 0.043 | 90 | 32.07 | 0.952 | 0.047 | 78 | 32.08 | 0.953 | 0.048 | 73 |
| 50 | 32.14 | 0.955 | 0.046 | 79 | 31.93 | 0.951 | 0.052 | 72 | 30.91 | 0.949 | 0.056 | 73 | | Rebuttal 1:
Rebuttal: First, we would like to thank all the reviewers for giving us valuable opinions on our paper. All reviewers agree that FEDGS is comparable in terms of performance and efficiency to other models. Reviewers Tqxz and Vpue comment on the clarity of the paper. Reviewers Tqxz, Vpue, and MZ9L highlight the novelty of our fully explicit model. Reviewers Vpue and MZ9L point out that keyframe-based interpolation is interesting, and that treating static and dynamic parts separately is a good idea. In response to the reviewers' questions, we further analyze our model in various situations, including replacing the dynamic part, reappearing objects, occluded objects, appearance from the void, and extremely long durations. We have uploaded these analyses and additional supporting results as a PDF file. We hope all the reviewers will carefully check them and give us further comments.
Pdf: /pdf/71d0a1747af6e7ed0496cceeea3cc2770eabe64f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Decision-Focused Learning with Directional Gradients | Accept (poster) | Summary: The paper introduces a new family of surrogate losses to the DFL with linear costs, called perturbation gradient losses (PG loss). It provides theoretical analysis to bound the approximation errors and regret bounds and uses extensive experiments to demonstrate the advantages of the proposed method.
Strengths: - The proposed loss is more efficient than the standard DFO loss. It only requires the computation of the optimal decision variable without the need for implicit differentiation through $\hat{\pi}(t)$.
- The approximation error of the surrogate functions decreases as the number of samples increases, which is advantageous for large-sample applications.
- In the numerical experiment section, the PG losses perform very well even in the misspecification case.
Weaknesses: - The (sub)differentiability of the PG loss is not well-established. In Line 172, the differentiability is doubtful. I don’t think the subgradient is well defined when $\hat{\pi}$ is not unique. The paper should make more efforts to develop the theory rigorously and in writing.
- In Line 148, the directional derivative is not well-defined. If $V(t)$ is non-smooth, you should take infimum over all the $\hat{\pi}(t)$.
- The paper presents a continuous relaxation of the DFL objective, with the error bounded by the finite difference error h. However, the resulting problem is still nonconvex and nonsmooth, which may not be efficiently solved in theory. It seems more effective to use direct smoothing by random perturbation or by adding a regularization term (e.g., [30]). I encourage the authors to compare their approach with other smoothing methods in theory.
- Both experiments are for discrete decision variables and simulated data. The paper does not sufficiently address the performance of the proposed method when decisions are continuous.
Technical Quality: 2
Clarity: 2
Questions for Authors: - Lemma 2.2 Can you develop the result when $\ell_h^b$ is nonsmooth?
- Line 176, what is $Y_j$?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: Not applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **Danskin's Theorem**
There are many versions of Danskin's Theorem that apply under different regularity conditions. The version we use is well-summarized [here](https://statisticaloddsandends.wordpress.com/2022/11/10/what-is-danskins-theorem/). The key is part 4: when there are multiple solutions, the theorem holds but with "gradient" replaced by "subgradient." An intuitive example is $f(t) = \max_{-1 \leq z \leq 1} t z = | t |$. At $t = 0$, the set of optimal solutions is $[-1, 1]$ (non-unique), so there is no derivative. Still, there are subgradients, and any optimal solution gives a subgradient.
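To make this concrete, here is a small self-contained Python sketch (our own illustration, not from the paper) that checks the $f(t) = |t|$ example numerically: away from $t = 0$ a finite difference recovers the unique maximizer, and at $t = 0$ every $z \in [-1, 1]$ satisfies the subgradient inequality.

```python
# Numeric illustration of Danskin's theorem on f(t) = max_{-1<=z<=1} t*z = |t|.
def f(t):
    return abs(t)

def argmax_z(t):
    # A maximizer of t*z over [-1, 1]; at t = 0 every z in [-1, 1] is optimal.
    return 1.0 if t > 0 else (-1.0 if t < 0 else 0.0)

# Away from t = 0, the derivative of f equals the (unique) maximizer.
h = 1e-6
for t in (-2.0, 0.5, 3.0):
    fd = (f(t + h) - f(t - h)) / (2 * h)  # central finite difference
    assert abs(fd - argmax_z(t)) < 1e-6

# At t = 0, every z in [-1, 1] is a subgradient:
# f(t) - f(0) >= z * (t - 0) for all t.
for z in (-1.0, -0.3, 0.0, 0.7, 1.0):
    assert all(f(t) - f(0.0) >= z * t for t in (-1.0, -0.1, 0.1, 1.0))
```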
Given your excellent questions, we intend to add a statement of Danskin's Theorem at the end of Section 1.3.
### **Addressing Weaknesses (1-2)**
1. As in part 4 of reference above, Danskin’s theorem holds when $\hat \pi(\cdot)$ is not unique. Since $\hat \pi(\cdot)$ is AN optimal solution, it is a subgradient of V(t). See also footnote 1 of pg. 5 of our paper.
2. Again, we are invoking part 4. The "grad" operator should be interpreted as a directional gradient if $\hat \pi(t)$ is unique and a Clarke differential otherwise.
### **Addressing Weakness 3: Comparison to Smoothing and Regularization**
#### ***NP Hardness***
Optimizing decision-loss (DL) generalizes binary classification and is NP-Hard [6]. Hence, ANY method (including smoothing and regularization) that aims to learn a best-in-class policy must either suffer similar computational challenges as our method or else fail to recover a best-in-class policy.
#### ***Regularization [30]***
In [30], the authors
i) relax combinatorial constraints,
ii) smooth by introducing a regularized policy class $\hat \pi^\rho(t) \in \arg\min_{z \in \mathcal Z} \langle t, z \rangle + \rho || z ||^2$
iii) approximately optimize the decision-loss over these regularized policies
iv) implement the non-regularized $\hat \pi(t)$ using the parameters learned from the regularized $\hat \pi^\rho(t)$.
They provide no theoretical analysis of the approximation error. (For combinatorial problems, intuition suggests the error from the first step might be very large.) Even in very simple settings -- take binary classification with $\mathcal Z = [-1, 1]$ and $Y \in \{-1, 1\}$ -- their loss can have large flat regions and is non-convex. Hence, step iii) does not appear (to us) to be any less challenging than our optimization problem.
The method of [30] is also computationally expensive. Evaluating gradients in step iii) entails solving a QP. By contrast, our method only involves solving the nominal LP. In our Shortest-Path example, there are specialized algorithms that solve that LP problem EXTREMELY fast (e.g. a vectorized Dijkstra’s algorithm), but we do not know of any specialized algorithms for solving the QP to high accuracy. Generic solvers are orders of magnitude slower.
#### ***Randomized Smoothing***
We have already compared to a convexification of a randomized smoother (the Fenchel-Young Loss from [1]). As seen in Fig. 2 and the new experiments (See Global Response Doc) our method can outperform this method empirically because it does not guarantee learning a best-in-class policy.
On the other hand, the reviewer may have instead intended the "DPO" procedure from [1] (see also [29] in Section 3.4.3), which replaces $\hat \pi(t)$ with a smoothed version $\mathbb E_\xi[\hat \pi(t + \sigma \xi)]$ and then attempts to optimize the decision-loss with this smoothed version. [1] provides no theoretical guarantees on performance. Moreover, one can check that the resulting loss is still non-convex (take $\mathcal Z = [-1, 1]$, $Y \in \{-1, 1\}$). In THEORY, its gradients are Lipschitz (an advantage). We say in theory because, in practice, the method uses Monte Carlo estimation (see "Practical Implementation" on pg. 5 of [1]) with a small number of samples, and when the expectation is replaced with a finite sum, the smoothness is lost.
Thus, it is not clear that this optimization is better behaved than our proposal. Numerical experiments from [29] suggest that the performance is poor because of this Monte Carlo sampling. They write in Section 6 "Notably DPO was not shown because its overall subpar performance," i.e., it was omitted from all plots.
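The loss of smoothness under a finite sample is easy to see in a toy sketch (our own illustration, using the binary-classification setup $\mathcal Z = [-1, 1]$ from above): with a *fixed* Monte Carlo sample, the smoothed-policy estimate is piecewise constant in $t$, so its gradient vanishes almost everywhere, even though the exact smoothed policy is smooth.

```python
# Sketch: Monte-Carlo smoothing of pi_hat(t) = argmin_{z in [-1,1]} t*z = -sign(t).
# The exact smoothed policy E_xi[pi_hat(t + sigma*xi)] is smooth in t, but a
# fixed finite-sample estimate is piecewise constant (zero gradient a.e.).
import numpy as np

rng = np.random.default_rng(1)
sigma, xi = 0.5, rng.standard_normal(1000)   # fixed Monte-Carlo sample

def pi_hat(t):
    return -np.sign(t)

def mc_smoothed(t):
    return pi_hat(t + sigma * xi).mean()

t = 0.123                                    # generic point (not a breakpoint)
assert mc_smoothed(t + 1e-9) == mc_smoothed(t)  # locally flat: zero gradient
```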
Finally, if the reviewer's concern is our lack of Lipschitz gradients, note that since we represent our loss as a difference-of-convex functions, we can leverage "out-of-the-box" [existing results on smoothing for DC functions](https://arxiv.org/abs/2104.01470). These results relate stationary points and the global minimum of the smoothed function to those of the original function.
**To summarize**
- NP-Hardness shows that ANY approach to this problem must either solve a hard optimization problem or cannot consistently learn best-in-class policies. We argue that although the problem is NP-Hard, our method finds high-quality, sub-optimal solutions.
- Regularization approaches [30] introduce significant computational challenges, are still non-convex, and do not induce Lipschitz gradients.
- We have already compared to the Fenchel Young Loss in the paper (a convexified randomized smoothing approach) and found this loss might not find a best-in-class policy (Fig 2 and Global Response Doc).
- Direct randomized smoothing of the policy (DPO) induces a non-convex loss, though it does have Lipschitz gradients in theory. However, prior work suggests that the challenges of Monte Carlo sampling outweigh this benefit, and the method performs poorly.
- We can combine our method with existing approaches for smoothing DC function to recover Lipschitz gradients out of the box.
### ***Weakness 4: Additional experiments***
- Fig. 1 has a continuous region, synthetic data
- Shortest path experiments are combinatorial, synthetic data
- *NEW* portfolio experiment (continuous region), real data
### Questions:
1. Yes, the lemma already holds in this case (see our previous response on Danskin’s theorem).
2. Typo: Should say $Y$, not $Y_j$
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response, which partially addresses my concerns. The new empirical results look promising, and I am hopeful of increasing the score.
As for Line 148 and eq 4, can you derive the directional gradient more carefully using Danskin's theorem, line by line? Also, please do not cite a blog post, please use more formal citations.
---
Rebuttal 2:
Title: Proof for Eq 4
Comment: Thank you for your response! We are glad to hear about the positive feedback on our new empirical results. We apologize for the confusion about the blog post. The citation we would use in a galley proof is Prop B.22,
*Bertsekas DP. Nonlinear Programming. 2nd ed. Athena Scientific; 1999.*
We provided the website for the rebuttal period as it provides the same version of Danskin's theorem as the above textbook, but the textbook might not be available to everyone. In particular, the blog post copies the result nearly verbatim from the textbook, modulo some light formatting. (The blogpost also cites the textbook above.)
**Line by line proof of Eq. (4)**
To more clearly match the notation of the reference, first rewrite
$V(t) \equiv \min_{z \in \mathcal Z} t^{\top}z = - \max_{z \in \mathcal Z} -t ^\top z$, and define $\phi(t, z) \equiv -t^\top z$ and $f(t) \equiv \max_{z \in \mathcal Z} \phi(t, z)$.
Then with these new notations, $V(t) = -f(t)$ and $\hat \pi(t) \in \arg \max_{z \in \mathcal Z} \phi(t, z)$. (We provide the transformation because we are dealing with a "min" and the theorem is for a "max.")
Note $\mathcal Z$ is a compact set, $\phi(t, z)$ is convex in its first argument, and $t \mapsto \phi(t, z) = -t^\top z$ is (everywhere) differentiable in $t$ for any $z$ because it's just a linear function.
Let us first consider the case where there is a unique maximizer at $t_0$, i.e., $\hat \pi (t_0)$ is unique. Recall that $t \mapsto \phi(t, z)$ is differentiable in $t$ for all $z$, and in particular, is differentiable in $t$ at $z = \hat \pi(t_0)$. By part 3 of the blog post (equiv. the statement "If Z(x) consists of a unique point $\bar z \ldots$ " in the Bertsekas textbook), we have that
$\nabla f(t_0) = \frac{\partial \phi(t, \hat \pi(t_0)) }{\partial t} \Big|_{t = t_0} = -\hat \pi(t_0),$
from the definition of $\phi$.
Then, since $V(t) = -f(t)$, we conclude that $\nabla V(t_0) = - \nabla f(t_0) = \hat \pi(t_0)$ for any $t_0$ where $\hat\pi(t_0)$ is the unique optimizer. Hence the map $\lambda \mapsto V(t_0 + \lambda y)$ is differentiable (in $\lambda$), and by the chain-rule, $\frac{\partial}{\partial \lambda} V(t_0 + \lambda y) = \langle \nabla V(t_0 + \lambda y), \frac{\partial}{\partial \lambda } (t_0 + \lambda y) \rangle = y^\top \nabla V(t_0 + \lambda y) = y^\top \hat \pi(t_0 + \lambda y)$. Evaluating at $\lambda = 0$ proves Eq. 4 in this case.
We now prove the statement when $\hat \pi (t_0)$ is not the unique maximizer. Recall again that $t \mapsto \phi(t, z) = -t^\top z$ is differentiable in $t$ for all $z$, and the derivative $\frac{\partial \phi}{\partial t} = -z$ is continuous in $z$ for all $t$ (because it's just a linear function).
Hence, by part 4 of the blogpost (equiv. part b of the Bertsekas Textbook) the set of subgradients of $f(t_0)$ is
$\partial f(t_0) \ = \ \text{conv} \\{ \frac{\partial \phi(t, z)}{\partial t} \big|_{t = t_0} : z \text{ is a solution to } \max_{z \in \mathcal Z} \phi(t_0, z) \\} \ = \ \text{conv} \\{ -z : z \text{ is a solution to } \max_{z \in \mathcal Z} \phi(t_0, z) \\}.$
Thus, since $\hat \pi (t_0) \in \arg\max_{z \in \mathcal Z} \phi(t_0,z)$, we have $-\hat \pi (t_0)$ is a subgradient of $f(t_0)$. This implies that for any $\lambda$,
$f(t_0 + \lambda y) - f(t_0) \geq -\lambda \hat \pi(t_0)^\top y$.
Recalling $V(t) = -f(t)$ and multiplying by $-1$ shows,
$V(t_0 + \lambda y) - V(t_0) \leq \lambda \hat \pi(t_0)^\top y$, i.e., $\hat \pi(t_0)^\top y$ is a subgradient of $\lambda \mapsto V(t_0 + \lambda y)$ at $\lambda = 0$. This concludes the proof of Eq. 4.
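As a sanity check, the final inequality can be verified numerically. The following sketch (our own illustration, with an arbitrary finite set of candidate decisions standing in for $\mathcal Z$) confirms $V(t_0 + \lambda y) - V(t_0) \leq \lambda\, \hat\pi(t_0)^\top y$ on random instances:

```python
# Numeric check of V(t0 + lam*y) - V(t0) <= lam * <pi_hat(t0), y>,
# where V(t) = min_{z in Z} <t, z> over a finite feasible set Z.
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((20, 3))   # 20 candidate decisions in R^3

def V(t):
    return (Z @ t).min()

def pi_hat(t):
    return Z[(Z @ t).argmin()]     # an optimal decision for cost vector t

for _ in range(100):
    t0, y = rng.standard_normal(3), rng.standard_normal(3)
    g = pi_hat(t0) @ y             # candidate subgradient of lam -> V(t0 + lam*y)
    for lam in (-1.0, -0.1, 0.1, 1.0):
        assert V(t0 + lam * y) - V(t0) <= lam * g + 1e-12
```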
---
Rebuttal Comment 2.1:
Comment: Hi, thank you for the detailed reply.
When V is nonsmooth, you showed $\hat{\pi}(t_0)^\top y$ is a subgradient, and the finite-difference (line 152,153) is approximating this value. But which $\hat{\pi}(t_0)$ is specifically chosen? As h converges to zero, the finite-difference must converge to a specific limit, and hence there must be a specific $\hat{\pi}(t_0)$. Should you consider the directional derivative where $\hat{\pi}(t_0)$ is the one that aligns with $y$ most?
---
Rebuttal 3:
Comment: Thank you for the positive signals and the new question. We apologize for editing this response; we discussed internally and think we now better understand the heart of your question. (If we still misunderstood, please accept our apologies. We are eager to clarify once we better understand the question.)
How we currently understand your question is: Fix a $t_0$ such that $\hat \pi(t_0)$ is **not** the unique optimizer. Then, we have shown that $y^\top\hat \pi(t_0)$ is **a** subgradient of $\lambda \mapsto V(t_0 + \lambda y)$ at $\lambda = 0$. It's also clear that the finite difference $\frac{1}{h} (V(t_0) - V(t_0- hy))$ approximates **a** subgradient of this function at $\lambda = 0$. Why is it that these are the same two subgradients (since there are multiple subgradients)? In other words, why is it that $\lim_{h\rightarrow 0} \frac{1}{h} (V(t_0) - V(t_0 - hy)) = \hat \pi(t_0)^\top y$?
This is an excellent and subtle question that highlights the role of Assumption 3.1 in our results.
First, it is **not** the case that we can guarantee that $\lim_{h\rightarrow 0} \frac{1}{h} (V(t_0) - V(t_0 - hy)) = \hat \pi(t_0)^\top y$. This is a "path-by-path" requirement that is very strong.
Why is this not a problem for our results? Note, Eq. (4) is meant to be motivation (it does not occur as a formal theorem or in a proof). It illustrates the intuition behind our PG losses. The formal result is given in Lemma 3.2 (and subsequent results that build on it). The key idea is that although the above limit doesn't hold path by path, under Assumption 3.1, it **does** hold in expectation, i.e.,
$\lim_{h\rightarrow 0} \mathbb E[ \frac{1}{h}(V(f(X)) - V(f(X) - hY))] = \mathbb E[ \ell(f(X), Y)]$. (Lemma 3.2 actually proves a stronger statement by explicitly giving the rate.) Here the role of $t_0$ is played by $f(X)$ which is random. Holding in expectation is a weaker requirement, and since the ERM approximation concentrates at its true expectation uniformly (Thm 3.4 and Thm 3.7), it's enough that it holds in expectation.
This of course raises an interesting question of whether one could make a stronger assumption than Assumption 3.1 and derive a path-by-path result. We have not explored this idea.
Did we correctly understand your question? We're happy to clarify further however we can.
---
Rebuttal Comment 3.1:
Comment: My main point is you can use the definition of the directional derivative of a convex function to find out the specific $\hat{\pi}(t_0)$, then the whole thing can go through. I don't understand why this is difficult.
Although I am still a bit skeptical about the argument and proof of Lemma 3.2, the conclusion of Lemma 3.2 appears reasonable, as the nonsmooth points have zero measure and, when you do the integration, you should expect smooth behavior.
Strengths: The paper is well written and considers a principled approach.
Weaknesses: The role of $h$ in the approximations of $\ell$ appearing in Theorem 2.1 should be made clear -- $\hat\ell^b$ and $\hat\ell^c$ have not been defined. Please give details of the proof of part c).
The main weakness is that the method appears to work well on a very simple synthetic problem. The performance appears to deteriorate on a slightly more complex problem, and then no interesting example is provided. I feel that for this outlet, this is a major weakness and would like to see how the method performs on a real world and/or large-scale problem. There are a number of such examples in the cited supporting literature.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can you please address the issues above?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are not addressed. Potential negative societal impact is negligible.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **Clarifying Minor Weaknesses**
We believe the reviewer meant Lemma 2.1, as there is no Theorem 2.1.
#### **Role of $h$**
The role of $h$ is described intuitively at the top of pg. 5, Line 154 (i.e., just before Lemma 2.1), and elaborated quantitatively after Corollary 3.3 (pg. 6, Line 214).
#### **Definitions of Key Surrogates**
The quantities $\hat \ell^b$ and $\hat \ell^c$ are defined at the bottom of pg. 4 (Line 152), just before Lemma 2.1.
#### **Proof of part c**
We are happy to add the details of part c) in a galley version. Here are the details for you to verify. (As an aside, there are many versions of Danskin’s Theorem under different regularity conditions; [this version](https://statisticaloddsandends.wordpress.com/2022/11/10/what-is-danskins-theorem/) is sufficient for the proof below.)
From the definition of $\hat \ell^b(t, y)$, we have
$\nabla_t \hat{\ell}^b(t, y) = \frac{1}{h} \left( \nabla_t V(t) - \nabla_t V(t - hy) \right)$.
We next evaluate each of the "gradients" on the right using Danskin’s Theorem. We say "gradients" because, as we will show, these are gradients when $\hat \pi(t)$ and $\hat \pi(t - hy)$ are unique, and subgradients otherwise. First, we verify the conditions of Danskin's Theorem for the first term. Specifically,
$V(t) = \max_{z \in \mathcal Z} \langle t, z \rangle$.
The function $\phi(t, z) = \langle t, z \rangle$ is continuous and differentiable by inspection, and $\mathcal Z$ is compact by assumption.
Thus, the conditions of Danskin’s theorem are met, and
$\nabla_t V(t) = \hat\pi(t)$, where the left side is a gradient if $\hat \pi(t)$ is unique, and a subgradient otherwise (see part (iv) of the above-mentioned reference).
We can treat $\nabla_t V(t - hy)$ in the same fashion. Combining the two proves part c) for $\hat \ell^b$. The proof for $\hat \ell^c$ is similar.
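For intuition, here is a minimal numeric sketch of this computation (not from the paper; the toy decision set and all names are ours), assuming $\hat\ell^b(t,y) = \frac{1}{h}(V(t) - V(t-hy))$ consistent with the gradient formula above. It checks that a finite-difference gradient of $V$ matches the maximizer $\hat\pi(t)$ wherever the maximizer is unique:

```python
import numpy as np

# Toy decision set Z: columns are the candidate decisions z.
Z = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])

def V(t):
    """Support function V(t) = max_{z in Z} <t, z>."""
    return np.max(t @ Z)

def pi_hat(t):
    """A maximizer achieving V(t); a (sub)gradient of V at t by Danskin."""
    return Z[:, np.argmax(t @ Z)]

def pg_grad_b(t, y, h):
    """(Sub)gradient of the backward PG loss: (pi(t) - pi(t - h*y)) / h."""
    return (pi_hat(t) - pi_hat(t - h * y)) / h

t = np.array([0.3, -0.2])
y = np.array([1.0, 1.0])
h = 0.1

# Finite-difference gradient of V matches pi_hat(t) (the maximizer is unique here).
eps = 1e-6
fd = np.array([(V(t + eps * e) - V(t - eps * e)) / (2 * eps) for e in np.eye(2)])
assert np.allclose(fd, pi_hat(t), atol=1e-4)
```

At this particular $t$, the maximizer at both $t$ and $t - hy$ is the first column of `Z`, so `pg_grad_b(t, y, h)` is zero, illustrating the flat regions discussed elsewhere in this thread.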
### **Re Weaknesses: Experimental Evaluation**
Thank you for pushing us in this direction. In the Global Response Document, we’ve added two additional experiments: i) a harder instance of a shortest-path problem and ii) a portfolio optimization problem with **real data** and a low signal-to-noise ratio. In both cases, our method has an advantage over all baseline methods.
#### ***New, Harder Shortest Path Instance***
The difference between this new shortest path and the original shortest path instance is the data generation. In the original shortest path instance (following [6] and others), the costs of the arcs are exchangeable with respect to the network. There’s no special relationship between these costs and their location in the network. (For example, the edges of the square aren't systematically more expensive than internal roads.) Consequently, many candidate paths have similar costs, and the problem is arguably not too difficult. That is why many baselines perform similarly.
In our new instance, we generate the arc costs in a way that depends on the network. Specifically, we first embed two “good” paths along the diagonal (a safe one and a risky one) (see Global Response Doc), then ensure that any other path has a high cost, and finally add noise to try to hide the “good” paths and confuse the safe and risky ones. This is a more challenging setting because each method must first identify the good paths and then choose between them to do well. As you can see, our approach has an edge in performance for large enough $n$.
#### ***Portfolio Optimization***
We study the same problem as [6, 26, 32] but use **real data**, specifically the 12 Fama French Industry Sector indices from the [Fama French Library](https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html). These indices represent returns of different asset classes and realistically mirror the asset allocation problem faced by wealth managers. We sample a month $t$ at random from the last 10 years, and let $Y = r_t$ be the return of the $d=12$ indices, and let $X = r_{t-1} + \mathcal N(0, 0.5 \Sigma)$ ($p=12$) where $\Sigma$ is the covariance of $r_t$. The additional noise lowers the signal-to-noise ratio while maintaining the correlation matrix of $X$ and makes the problem harder. See Global Response document.
Because of limited computational resources, we only present the strongest benchmarks (SPO+, FYL, 2Stage PtO, and our method). We again see that we have a distinct advantage.
We're happy to include these additional experiments in a galley version to strengthen the empirical evaluation of the methods.
#### ***Aside: On Value of Empirical Evaluation***
Finally, we stress that the benchmarks above have NO theoretical guarantees in misspecified settings. We believe offering a theoretically justified surrogate for misspecified settings is interesting in its own right, beyond its empirical evaluation.
### Limitations
Limitations are discussed on pg. 3, Line 76. We’re happy to label this discussion more clearly with a section header “Limitations” in a galley proof if it would help it stand out.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I have read the response, and it has helped clarify things. I am satisfied that these additions have improved the paper, and adjusted my score accordingly | Summary: This paper addresses the predict-then-optimize problem by proposing a new family of surrogate loss functions. The key motivation is derived from Danskin's Theorem, which connects the expected downstream decision loss with the directional derivative of a particular plug-in objective. This objective is then approximated using zero-order gradient methods. The paper includes numerical experiments conducted on both a synthetic environment and a shortest path problem.
Strengths: 1. The paper is well-motivated.
2. The properties of the proposed surrogate loss are thoroughly derived.
3. Theoretical analysis shows that the approximation error of the proposed loss diminishes as the number of samples increases. Consequently, it can outperform existing surrogate losses even in misspecified settings.
Weaknesses: 1. The experimental section of the paper is relatively weak.
2. While the proposed method performs well in a simple synthetic environment, it does not demonstrate a clear advantage over FYL and SPO+ in the shortest path problem.
3. More experiments are needed to illustrate the advantages of the proposed method in real-world scenarios.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. How should the parameter h be selected in practice? Have you empirically studied how it affects performance?
2. In the shortest path experiment, why does FYL perform so well? Theoretically, the PG losses should be superior to FYL. Could you provide insights into this discrepancy?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: I would suggest the paper add a limitation section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **Re Weaknesses: Experimental Evaluation**
Thank you for pushing us in this direction. In the Global Response Document, we’ve added two additional experiments: i) a harder instance of a shortest-path problem and ii) a portfolio optimization problem with **real data** and a low signal-to-noise ratio. In both cases, our method has an advantage over all baseline methods.
#### ***New, Harder Shortest Path Instance***
The difference between this new shortest path and the original shortest path instance is the data generation. In the original shortest path instance (following [6] and others), the costs of the arcs are exchangeable with respect to the network. There’s no special relationship between these costs and their location in the network. (For example, the edges of the square aren't systematically more expensive than internal roads.) Consequently, many candidate paths have similar costs, and the problem is arguably not too difficult. That is why many baselines perform similarly.
In our new instance, we generate the arc costs in a way that depends on the network. Specifically, we first embed two “good” paths along the diagonal (a safe one and a risky one) (see Global Response Doc), then ensure that any other path has a high cost, and finally add noise to try to hide the “good” paths and confuse the safe and risky ones. This is a more challenging setting because each method must first identify the good paths and then choose between them to do well. As you can see, our approach has an edge in performance for large enough $n$.
#### ***Portfolio Optimization***
We study the same problem as [6, 26, 32] but use **real data**, specifically the 12 Fama French Industry Sector indices from the [Fama French Library](https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html). These indices represent returns of different asset classes and realistically mirror the asset allocation problem faced by wealth managers. We sample a month $t$ at random from the last 10 years, let $Y = r_t$ be the return of the $d=12$ indices, and let $X = r_{t-1} + \mathcal N(0, 0.5 \Sigma)$ be the previous month's return plus Gaussian noise ($p=12$), where $\Sigma$ is the covariance of $r_t$. The additional noise lowers the signal-to-noise ratio while maintaining the correlation matrix of $X$. See Global Response document.
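As an illustration, the sampling scheme above might be sketched as follows (a hedged sketch only: we substitute synthetic returns for the actual Fama French data, and all names are our own placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 12        # number of industry indices
T = 120       # 10 years of monthly returns

# Stand-in for the real monthly return matrix (rows = months, cols = indices).
r = 0.05 * rng.standard_normal((T, d))
Sigma = np.cov(r, rowvar=False)   # covariance of the returns

def sample_pair():
    """Draw (X, Y): Y = r_t and X = r_{t-1} plus noise with covariance 0.5*Sigma."""
    t = rng.integers(1, T)
    Y = r[t]
    X = r[t - 1] + rng.multivariate_normal(np.zeros(d), 0.5 * Sigma)
    return X, Y

X, Y = sample_pair()
```

Adding zero-mean noise whose covariance is proportional to that of the returns scales the covariance of $X$ without changing its correlation matrix, which is the stated goal.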
Because of limited computational resources, we only present the strongest benchmarks (SPO+, FYL, 2Stage PtO, and our method). We again see that we have a distinct advantage.
We're happy to include these additional experiments in a galley version to strengthen the empirical evaluation of the methods.
#### ***Aside: On Value of Empirical Evaluation***
Finally, even if the reviewer feels our method performs comparably to existing benchmarks, it should be stressed that those benchmarks have NO theoretical guarantees in misspecified settings. We believe offering a theoretically justified surrogate for this setting (even with comparable performance) is interesting in its own right.
### **Question: How to select $h$?**
We selected $h$ using a hold-out validation set of size $200$ in our experiments. In general, we found the method insensitive to the choice of $h$ as long as it was reasonably small. Please see the plot in the Global Response Doc.
### **Question: Why does FYL perform well in (original) Shortest Path?**
This is a great question. Our current conjecture is that it is a combination of two features:
1. Since arc costs are exchangeable across the network, there are many good candidate paths, and FYL is finding one of them.
2. By looking at its gradient, we argue that FYL essentially searches for a policy $T(\cdot)$ such that $\hat \pi(T(X)) \approx \hat \pi ( Y) $. Notice, this isn't the same as the oracle optimality condition, which would seek a policy such that $\hat \pi(T(X)) \approx \hat \pi (f^*(X))$.
These two conjectures informed our new (harder) shortest-path example, where we i) embed only two good paths (risky and safe), so that to perform well, a method must identify these two paths among all others and choose between them, and ii) lower the signal-to-noise ratio so that $Y$ is further from $f^*(X)$, and, hopefully, $\hat\pi(Y)$ is more distinct from $\hat\pi(f^*(X))$. As seen in the Global Response doc, this does seem to affect FYL's performance.
We also note that FYL performs surprisingly poorly in our portfolio allocation experiment. (See Global Response doc)
### **Limitations**
We discuss limitations on pg. 3, Line 76. We’re happy to label this discussion more clearly with a section heading “Limitations” in a galley proof if it would help it stand out.
Strengths: * The paper identifies and addresses an important problem in the PtO literature--performance guarantees under model misspecification.
* I'm not a theoretician, but the theoretical results seem non-trivial and relevant to practice.
* On the empirical side, they compare to a reasonable set of baselines from the literature.
Weaknesses: I have reviewed this paper in the past, and my two major issues were (a) that it overclaimed and (b) it had weak empirical analysis. While the paper has improved significantly on both fronts, I still have some gripes:
* _Regarding Claims:_ While the paper has added a paragraph about the approach's limitations in the introduction, I'm not sure that I understand it. My issue with the approach is that if the _true_ loss is non-convex in the policy parameters, it will be hard to optimize for, even in the limit of infinite data when $n \to \infty$. This isn't the same as the issue you describe for small $n$ in Figure 2(b), or even the statistical complexity issue of cleverly choosing $h$ as discussed in Section 3.2. It's that if the true loss $\ell(t(\theta), y)$ is piecewise constant, then for small $h$ the PG-loss $\hat{\ell}^b_h$ will indeed be close to $\ell$ but that means it will be close to piecewise constant and, as a result, hard to optimize for. The current theory assumes that you can optimize for $\hat{\ell}^b_h$ but not $\ell$, but that's a big assumption that should be discussed. Using the analogy of the ramp loss from the introduction, if your initial prediction is sufficiently far from $t = 0$, the gradient of the ramp loss will still be 0 (same as the $sgn$), and you won't be able to use first-order methods to learn a good model $\hat{f}$. Perhaps this is something that comes under your buckets of (a) the difficulty of optimizing for a "difference of convex functions" in the introduction or (b) the bias-complexity tradeoff in choosing $h$ that you allude to in the conclusions, but I couldn't immediately see the connection. Could you talk more about this?
* _Empirical Evaluation:_ I appreciate that you have included $SPO+$ and $PFYL$ as baselines, and also added a PyEPO domain. However, the results aren't conclusive even under significant model misspecification (e.g., you do no better than PFYL in either case, even when PFYL has no guarantees under misspecification, and SPO+ does roughly the same for uniform noise). Given that you've implemented one domain, running tests on the other domains in the benchmark should be fairly easy. Have you run those experiments? What do they look like?
Technical Quality: 4
Clarity: 3
Questions for Authors: Could you address my comments in the weaknesses section?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The paper does an okay job of addressing the limitations of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **Overview**
We’d like to recall that it is NP-Hard to optimize the (true) decision-loss over linear functions [6], essentially because it generalizes binary classification. Hence, any method (including ours) that aims to learn a best-in-class policy for all data generation mechanisms MUST also be NP-Hard. That said, theory aside, not all NP-Hard problems are created equal. Some, e.g., knapsack or bin-packing, admit “practically efficient” algorithms that solve most real-world instances in reasonable amounts of time. In contrast, others (e.g., non-metric TSP) are so hard that we do not even have reliably good heuristics for large-scale instances. What we are trying to argue in our paper is that although optimizing our surrogate is NP-Hard (as it MUST be), even simple gradient descent algorithms recover very high-quality local minima that are suitable for applications.
### **Specific Questions Re Claims**
You are correct that part of the difficulty is that far away from the “heart” of the function, the true loss is flat, and, hence, our loss is flat. To be more concrete, let’s focus on the ramp loss. Basically, for a single data point $(X_i, Y_i)$, if $T(X_i)$ is more than $O(h)$ from zero, both losses will be flat. This challenge is shared by other losses, e.g., the losses proposed in [30, 24].
As you’ve also observed, this issue connects both to a) the difference-of-convex function representation and b) the bias-complexity tradeoff of choosing h.
#### _The Bias-Complexity Tradeoff_
As above, in the case of the ramp loss, if we have a single data point $(X_i, Y_i)$ and if $T(X_i)$ is more than $O(h)$ from zero, the loss is flat. However, when we have $n$ data points, the empirical loss is only flat if we are more than $O(h)$ away from ALL data points. Hence, for large $n$, our loss is unlikely to be flat in regions of high data density. Moreover, the larger we make $h$, the less likely we’ll end up in a bad region during gradient descent (provided we initialize at a “smart” point, see below). One of the takeaways of our theoretical analysis is characterizing precise conditions and how large we can make $h$ (to minimize the chances of reaching a flat region) while still guaranteeing a good enough approximation to learn the best-in-class policy, and how this should scale with $n$.
#### _Difference of Convex Function Representation_
We represent our loss explicitly as a difference of convex functions. This means we know A LOT about the structure of the loss landscape. DC optimization is a growing field [F1], [F2], and there are recent works on how to smooth DC functions [F3], [F4], [F5] to improve computational performance, and even new algorithms for identifying smart starting points for multistart gradient descent [F6]. (These could serve as the aforementioned “smart” starting points to ensure your algorithm doesn’t get stuck in the flat parts far away from the “heart” of the function.) By contrast, the loss landscape of the original decision loss is much less understood, and so it’s less obvious how to optimize it directly (even heuristically).
#### **Summary**
We are not trying to mislead; we ARE replacing one NP-Hard optimization problem with a different NP-Hard optimization problem. However, in practice, some NP-Hard problems admit algorithms that find high-quality solutions very efficiently for practical instances, and we argue our loss leads to one such problem. Trying to quantify this “improved tractability” is necessarily subtle.
#### _Addendum on Non-Convexity_
Finally, we’d politely point out that the issue of convexity vs. non-convexity is often moot in applications. When using a nonlinear hypothesis class (e.g., a neural network with more than 1 layer), even surrogates like SPO+ and PFYL induce non-convex loss functions. These more powerful hypothesis classes are often preferred in practice, and, for these settings, optimizing these losses is theoretically no easier than optimizing our surrogate.
### **Specific Questions Regarding Empirical Evaluation**
Thank you for pushing us. Based on feedback from you and the review team, we have added two experiments: 1) a harder instance of Shortest Path, where we've hidden good paths in the network, and 2) a portfolio optimization example with a low signal-to-noise ratio. In both of these "harder" settings, simple gradient descent procedures (ADAM) on our loss recover local minima that substantively outperform benchmarks. ***See Global Rebuttal Document.*** Obviously, leveraging tools from the DC literature, one might be able to further improve upon these solutions.
Finally, in addition to the observed empirical benefits of our methods, we stress that existing benchmarks have NO theoretical guarantees in misspecified settings. Offering a theoretically justified surrogate for misspecified settings is interesting in its own right.
- F1: https://link.springer.com/article/10.1007/s11081-015-9294-x
- F2: https://link.springer.com/article/10.1007/s10107-018-1235-y
- F3: https://arxiv.org/abs/2104.01470
- F4: https://ieeexplore.ieee.org/document/9304514
- F5: https://www.sciencedirect.com/science/article/pii/0022247X91901875
- F6: https://pubsonline.informs.org/doi/abs/10.1287/ijoc.2022.1238
---
Rebuttal 2:
Title: Response to Rebuttal
Comment: Thank you for clarifying the theoretical contributions and adding new experiments. **While I still have a few questions, I will increase my score to a 7 and recommend acceptance.** I think that (even without the theoretical properties) the paper has proposed a novel predict-then-optimize surrogate and shown improved performance on (variants) of standard domains in the literature. This is the bar to which papers in this domain have been held in the past.
As for my remaining questions/concerns:
### Experiments
**[Q1]** Why didn't you run experiments on the other domains from PyEPO (knapsack and TSP) or even existing implementations of the Portfolio Optimization problem? I find this a bit confusing because it should have been easier than creating the new domains that you have presented in your paper. Additionally, it would allow us to compare the effectiveness of the proposed approach to a much larger set of surrogates, which have also been evaluated on these more standard datasets. Even if it seems like PGC doesn't really beat SPO+ or alternatives, it would be (IMO) useful to know.
**[Q2]** How are the features generated in the new shortest path example? In the old version of the problem, the features for each edge seem to be generated independently (from a normal distribution). But if this were the case, I don't see how any model would be able to isolate the "safe" paths based on just the features.
Also, I have run experiments on the portfolio optimization domain from [26, 32] and have never seen improvements as large as those you've found in your paper. I hope you release your code, and look forward to investigating this version of the problem in more detail!
### Theoretical Properties
I think these clarifications are super useful, and I hope that they will be included in the final version of the paper. However, I still have some basic questions.
**[Q3]** When you say:
> "What we are trying to argue in our paper is that although optimizing our surrogate is NP-Hard (as it MUST be), even simple gradient descent algorithms recover very high-quality local minima that are suitable for applications."
> "However, in practice, some NP-Hard problems admit algorithms that find high-quality solutions very efficiently for practical instances, and we argue our loss leads to one such problem."
I don't understand how your theorems show this (although your experiments do). From my understanding, your theorems show that _if you can optimize the surrogate loss_ for some value/schedule of $h$, you will be able to optimize the true loss. However, it says nothing about being able to optimize for the surrogate with gradient descent, which seems to be critical to these arguments that you're making above. Am I misunderstanding something? Could you also link me to the theorem statement that shows:
> One of the takeaways of our theoretical analysis is characterizing precise conditions and how large we can make $h$ (to minimize the chances of reaching a flat region) while still guaranteeing a good enough approximation to learn the best-in-class policy, and how this should scale with $n$.
**[Q4]** When you say:
> However, when we have $n$ data points, the empirical loss is only flat if we are more than $O(h)$ away from ALL $n$ data points. Hence, for large $n$, our loss is unlikely to be flat in regions of high data density.
This still does not guarantee that you will be able to optimize for the surrogate loss function. What stops gradient descent from reaching a local optimum in which you do better for the subset of points that have a non-zero gradient and do badly for those with zero gradients?
**[Q5]** Also, when you talk about optimizing for the PG losses as solving a "difference in convex functions" problem, (based on my skimming the abstracts) the papers that you link seem to use some sort of clever smoothing to solve the problem. However, you don't seem to be smoothing your PG losses. Why can you still solve the optimization problem for PG losses, then?
---
Rebuttal 3:
Comment: Thank you for the positive feedback. To be clear (for future ACs who might be skimming): when you say "even without the theoretical properties," did you mean
1) That the empirical/methodological contributions merit publication on their own, and the theoretical contributions are "bonus" or
2) that you have some unresolved questions about the proof/statements of the theoretical results?
If it's 2), please flag the questions for us and we are happy to address them. From our own viewpoint, we provide some of the first theoretical guarantees for best-in-class behavior in a misspecified setting using a surrogate that supports gradient descent, and we see this as an important contribution.
**[Q1]**
>Why didn't you run experiments on the other domains from PyEPO (knapsack and TSP) or even existing implementations of the Portfolio Optimization problem? ....
We agree that more benchmarking is undoubtedly helpful. Our choice of experiments in the response document was determined by 1) space constraints, 2) requests from Reviewers LgZs and CTUs for real data, and 3) computational time limits in the rebuttal period. We intend to present a full set of benchmarks in a journal version of the paper for researchers to use.
More specifically, the PyEPO experimental setup for both knapsack and TSP is based on synthetic, random data. Given the other reviewer requests for real data, we thought this would add little to our (existing) synthetic data experiments.
We *do* use a standard formulation of the portfolio optimization problem from [26]. We changed the dataset to the Fama French data because we wanted a setting with high misspecification. Indeed, for the QuandlWIKI dataset from [26, Table 1], 2 Stage MSE does almost as well as the best decision-focused methods, suggesting (to us) that the dataset is close to well-specified. In other words, there's seemingly not a lot of "room" for *any* decision-focused method to shine. We conjecture this might be because, for daily stock returns, i) the time scale is short enough that yesterday's stock price is a good predictor of today's stock price and ii) the various stocks are very highly correlated. By contrast, the longer timescale of the monthly Fama-French returns makes predictions more difficult, and the different asset classes make the signal weaker.
**[Q2]**
> How are the features generated in the new shortest path example?...
In the original shortest path problem [6, 29], each problem instance is generated with 5 features $\mathbf{X} \in \mathbb{R}^5$ drawn from a multivariate normal distribution, and the weight of edge $i$ is $f_i(\mathbf{X}) = \frac{1}{3.5^6} \left( \left(\frac{1}{\sqrt{p}}\beta_i^{\top}\mathbf{X} + 3\right)^6 + 1 \right)$,
where the $\beta_i \in \mathbb{R}^5$ are independently generated Bernoulli vectors.
In the new experiment, we add a new feature so that $\mathbf{X} \in \mathbb{R}^6$, with the new feature $X_6$ drawn from a uniform distribution on $[0,2]$. We modify $f_i$ for the two paths highlighted in the global response doc. For the red path, we let $f_i(\mathbf{X}) = 2$ for all $i$ on the path; for the blue path, we let $f_i(\mathbf{X}) = 4X_6$ if $0 \le X_6 \le 0.55$ and $f_i(\mathbf{X}) = 2.2$ otherwise. Finally, for all other edges, we let $f_i(\mathbf{X}) = \frac{1}{3.5^6} \left( \left(\frac{1}{\sqrt{p}} \sum_{j=1}^5 \beta_{ij}X_j + 3\right)^6 + 1 \right) + 2.2$, which is the same as in the original shortest path experiment but shifted up by $2.2$. This shift ensures the red and blue paths are better than the rest in expectation, and the best one depends on the value of $X_6$. Finally, we add independent noise (Gaussian or Uniform) to all edges, just as in the original experiment.
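Under this description, the expected-cost generator might be sketched as follows (a hedged sketch: the graph size, the edge index sets for the two embedded paths, and all names are our own placeholders; noise would be added afterward as in the original experiment):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5              # original number of features
num_edges = 40     # illustrative graph size
red_edges = {0, 1, 2, 3}    # hypothetical edge indices of the safe diagonal path
blue_edges = {4, 5, 6, 7}   # hypothetical edge indices of the risky diagonal path

# Independent Bernoulli coefficient vectors beta_i, one row per edge.
beta = rng.binomial(1, 0.5, size=(num_edges, p))

def expected_edge_costs(X):
    """Expected costs for X in R^6, where X[5] is the new uniform feature X_6."""
    # Default: original polynomial cost, shifted up by 2.2.
    costs = (1 / 3.5**6) * ((beta @ X[:p] / np.sqrt(p) + 3) ** 6 + 1) + 2.2
    for i in red_edges:
        costs[i] = 2.0                                   # safe path: constant cost
    for i in blue_edges:
        costs[i] = 4 * X[5] if X[5] <= 0.55 else 2.2     # risky path: cheap iff X_6 small
    return costs

X = np.concatenate([rng.standard_normal(p), rng.uniform(0.0, 2.0, size=1)])
c = expected_edge_costs(X)
```

By construction, every edge off the two embedded paths costs at least $2.2$ in expectation, so the red and blue paths dominate, and which one is better depends on $X_6$.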
---
Rebuttal 4:
Comment: **[Q3]**
> When you say:``What we are trying to argue $\ldots$"
> Am I misunderstanding something?
You are not misunderstanding. Our theory does say something ***slightly*** stronger -- namely, because we prove uniform convergence, we've shown that if you can find a hypothesis $f(\cdot)$ that has low empirical PG loss, then it will also have low expected decision loss. So one need not perfectly optimize the PG loss; it suffices to find a "good enough" sub-optimal solution. But again, our theory does not guarantee that gradient descent will necessarily find such an $f(\cdot)$.
What we meant by our original comment (which you also correctly summarized) is that the ***empirical*** experiments suggest that simple gradient descent procedures do find high-quality, sub-optimal solutions.
Unfortunately, in light of the NP-Hardness of the problem, it seems difficult (or impossible?) to formulate a theoretically rigorous tractability result that would apply generally. (This difficulty applies to ***any*** surrogate that achieves best-in-class performance, not just ours.) So these empirical demonstrations are all we can (currently) offer. In many ways, this mirrors the state of the art with deep learning, where theory suggests the problem is hard/intractable, but empirical experience suggests we can reliably find high-quality local optima with (multi-start) stochastic gradient descent.
**[Q3] Continued**
> Could you also link me to the theorem statement that shows $\ldots$
Happily! Please see Theorems 3.4 and 3.7.
Without the theorems, intuition suggests that if we want the empirical PG loss to well-approximate the expected decision loss as $n\rightarrow \infty$, we need $h_n \rightarrow 0$. Indeed, any such sequence should suffice. Since we want $h_n$ to be big (to avoid the flat regions), this suggests choosing a large $h$ that decays slowly. This is essentially the suggestion in [24], which advocates for very large $h$, like $h=10$. Experiments from [29] suggest this doesn't work well.
By contrast, our Theorem 3.7 gives a tighter result and hence more insight. It shows the error between the empirical PG loss and the expected decision loss is roughly $\tilde O(\max(h, 1/\sqrt n))$. Hence, taking $h$ larger than $O(1/\sqrt n)$ slows the convergence rate. Thus, we might choose $h = 1/\sqrt n$, i.e., as large as possible without affecting the convergence rate. (See also Line 242.) Our theorems provide this kind of practical insight, and choosing $h$ in this manner drives (some of) our numerical improvements.
A similar analysis holds for Theorem 3.4 and relates $h$ to the Rademacher complexity of the chosen class. See Line 227.
---
Rebuttal Comment 4.1:
Comment: **[Q4]**
> When you say "However, when we have $\ldots$" $\ldots$
> What stops gradient descent from reaching a local optimum $\ldots$ ?
You are correct: On its own, there is no guarantee we can find the global optimum of our surrogate by gradient descent, and the NP-Hardness result suggests we can't globally optimize any surrogate that consistently achieves best-in-class performance. All we are hoping for is a high-quality local optimum. This mirrors the case of training a neural network.
One reason to *intuitively* believe that we should be able to find good local optima is that, under Assumption 3.1, as $n\rightarrow \infty$, the decision-loss curves become smoother and more well-behaved. In other words, there are fewer "flat" locations and fewer bad local minima. See the right panel of Fig. 2. In fact, implicit in the proof of Lemma 3.1 is the fact that (under Assumption 3.1) the function $t \mapsto \mathbb E[\ell(t, y)]$ (the expected decision loss) is differentiable with Lipschitz gradients. Since, by Theorems 3.4 and 3.7, the empirical PG loss looks closer and closer to this function, this gives us some hope for gradient descent methods for large $n$. (Again, this is not a proof that we can find the optimum, just intuition that we should converge to a stationary point.)
We discuss this implicit fact on the bottom of pg. 5 with respect to how it affects approximation error, but we are happy to i) make this implicit fact explicit as a standalone lemma (with proof) and ii) connect this fact to the (intuitive) performance of first-order methods if you think it would help with the intuition.
---
Reply to Comment 4.1.1:
Comment: **[Q5]**
> Also, when you talk about optimizing the PG Losses as solving a ``difference in convex functions" problem $\ldots$. Why can you still solve the optimization problem or PG losses, then?
We apologize for any confusion. First, in our experiments, we are not explicitly leveraging any special DC structure; we're just doing SGD. SGD can be run ``out of the box" without smoothing. Our comment indicated that exploiting the DC structure through specialized algorithms can only improve the empirical performance.
Second, the most classical approach to DC problems is some form of DCA [F7], which solves a sequence of convex upper bounding problems to find a local optimum. DCA also does not require that the constituent DC functions be differentiable or smooth; it can also be applied ``out-of-the-box" in our case (but we didn't do this.)
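To make the DCA idea concrete, here is a toy 1-D sketch (not our experimental pipeline; the objective below is a made-up DC function chosen so the convex subproblem has a closed form):

```python
def dca(convex_argmin, h_grad, x0, iters=20):
    """Basic DC algorithm for f = g - h with g, h convex:
    at each step, linearize h at x_k and exactly minimize the
    convex upper bound g(x) - h(x_k) - h'(x_k) * (x - x_k)."""
    x = x0
    for _ in range(iters):
        x = convex_argmin(h_grad(x))
    return x

# Toy DC objective f(x) = x^2 - 2|x|, with g(x) = x^2 and h(x) = 2|x|.
# The convex surrogate x^2 - s*x (s a subgradient of h) is minimized at s/2.
h_grad = lambda x: 2.0 if x >= 0 else -2.0   # a subgradient of 2|x|
convex_argmin = lambda s: s / 2.0
x_star = dca(convex_argmin, h_grad, x0=0.5)  # converges to the local optimum x = 1.0
```

Note that DCA only finds a local optimum of $f$ (here $x = 1$ or $x = -1$ depending on the start), which mirrors the point above: no global guarantee, but convergence to a stationary point.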
The references we gave on smoothing ([F3], [F4], [F5]) are recent works that argue that clever smoothing of the constituent functions does not affect stationary points/local optima. They are part of an area of research focusing on using first-order methods to optimize DC functions (see references). They show that this smoothing induces various nice properties (Lipschitz differentiability, coercivity/level boundedness, etc.), and that, as a consequence, various first-order methods (on the smoothed function) converge to a stationary point of the original (non-smooth) function. Thus, we believe smoothing the PG loss in this way before running SGD is a promising area of future research. Again, intuition suggests that it can only improve our numerical performance, but we have not yet tried it. Again, these results do not guarantee we find the global optimum, just that we should be able to identify a stationary point.
- [F7]: Lipp, Thomas, and Stephen Boyd. "Variations and extension of the convex-concave procedure." Optimization and Engineering 17 (2016): 263-287. | Rebuttal 1:
Rebuttal: Attached is our global response document, which includes the following:
i) An updated shortest path experiment that embeds two "good" paths that methods must identify and choose between based on the context. This experiment increases the difficulty and reward of finding the oracle policy compared to the initial shortest path experiment. This allows us to show our PG Losses can learn better policies compared to existing surrogate benchmarks.
ii) We highlight how the choice of $h$ affects the learned policy for the shortest path problem and see that, for our choices, it had minimal effect as long as $h$ was sufficiently small.
iii) We introduce a new portfolio optimization experiment that was generated with **real** data. Our formulation follows existing benchmarks [6, 26, 32]. We again consider a linear objective of maximizing returns, but our feasible set is constructed with both a quadratic constraint and linear constraint. We plot the relative regret (lower is better) and show our approaches significantly out-perform existing benchmarks.
Pdf: /pdf/9197f0ab854cdea6d6954cd8332f51aa46607bd4.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Point-PRC: A Prompt Learning Based Regulation Framework for Generalizable Point Cloud Analysis | Accept (poster) | Summary: In this work, the authors propose a regularization method for the prompt learning of generalizable point cloud analysis, which can strengthen the performances of learned representations on downstream 3D task while keeping its generalizability. The regularization consists of three components: mutual agreement constraint, text diversity constraint, and model ensemble constraint, which is a plug-and-play method for existing 3D large multi-modal models. Moreover, this work also includes new benchmarks for the evaluation of 3D point cloud domain generalization. Results on the proposed benchmark confirm the effectiveness of the proposed regularization method.
Strengths: 1. The whole framework is simple but effective;
2. The writing is good and easy to follow;
3. The construction of new benchmarks may be beneficial to the community.
Weaknesses: My major concern about this work is its novelty, as prompt tuning has been well studied in other areas, e.g., text-to-image generation. The proposed regularization constraint in Eq. 2 is somewhat similar to the preservation loss term proposed in [1], while the other terms improve robustness through a straightforward averaging operation. I am not sure the novelty is sufficient.
[1] DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The relationship between $h_T$ and the Text Diversity Constraint is not so clear. Would it be used for the regularization in Eq. 2, or just for inference?
2. Some details are not intuitively presented, such as the basic framework of existing multi-modal models. It would be better to add some diagrams to present existing works and your improvements;
3. How is the sensitivity of the Model Ensemble Constraint to its hyperparameters? Besides, I am also not so sure about the necessity of such ensembling over all epochs. Why can't we just select the best checkpoint through a validation set?
4. What's the advantages of doing prompt learning for domain generalization over fine-tuning methods such as Low Rank Adaption?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful comments. Below we address your concerns one by one. Further questions are welcome and we are happy to respond.
**Q1: Major concern on novelty**
We understand your concern regarding the novelty. We answer this question in the global response section; kindly refer to that part. We think the preservation loss in DreamBooth is not very relevant to Eq. 2 in our work, and text-to-image generation is a different topic from point cloud analysis.
**Q2: The relationship between $h_{T}$ and the Text Diversity Constraint is not clear**
Yes, $h_{T}$ would be used in Eq. (2). Since LLMs will generate multiple text descriptions for each point cloud category, we integrate these text features into $h_{T}$.
**Q3: The basic framework of multi-modal prompt learning and inclusion of new diagrams**
We have added diagrams to explain the multi-modal framework and the proposed method in our work; see Figures 1 and 2 in the uploaded PDF file.
Figure 1 highlights our research motivation, distinguishes our work from existing methods, and demonstrates superior 3DDG ability on unseen new classes as well as better performance on base classes.
Figure 2 illustrates the overall pipeline of the proposed approach and showcases how we incorporate the regulation constraints in the pipeline.
**Q4: The Model Ensemble Constraint sensitivity to its hyperparameters**
We add a sensitivity analysis of the model ensemble constraint to the hyperparameters, mean $\mu$ and variance $\sigma^2$, and report the results in sub-tables (a) and (b) below. As observed, increasing $\mu$ gives more weight to the models from later epochs and improves the base-class accuracy while compromising generalization on unseen new classes. In general, the changes are not very sharp; the same holds for the variance $\sigma^2$.
Table r3-2. The sensitivity of the Model Ensemble Constraint to the hyperparameters. Here the mean $\mu$ and variance $\sigma^2$ are the hyperparameters of a Gaussian distribution. ULIP-2 is deployed as the 3D foundation model and the experiments are conducted on the base-to-new benchmark.
In sub-table (a), the variable is $\mu$ and $\sigma^2 = 1$.
| Metric | 7 | 9 | 11 | 13 | 15 |
|--------|------|------|------|------|------|
| Base | 72.71 | 72.84 | 73.17 | 73.25 | **73.67** |
| New | **74.63** | 74.41 | 74.45 | 74.40 | 74.27 |
| HM | 73.66 | 73.62 | 73.80 | 73.82 | **73.97** |
In sub-table (b), the variable is $\sigma^2$ and $\mu = 15$.
| Metric | 25 | 16 | 9 | 4 | 1 |
|-------|-------|-------|-------|-------|-------|
| Base |72.51 | 72.77 | 72.83 | 73.29 | **73.67** |
| New | **74.95** | 74.89 | 74.66 | 74.41 | 74.27 |
| HM |73.71 | 73.81 | 73.73 | 73.85 | **73.97** |
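As an illustrative sketch of such Gaussian weighting over epochs (our reading of the Model Ensemble Constraint; function names and the flat-vector prompt representation are assumptions, not the authors' implementation):

```python
import math

def gaussian_weights(num_epochs, mu, sigma2):
    """Normalized Gaussian weights over epochs 1..num_epochs; a larger mu
    shifts mass toward later, better task-adapted checkpoints."""
    raw = [math.exp(-((e - mu) ** 2) / (2.0 * sigma2))
           for e in range(1, num_epochs + 1)]
    total = sum(raw)
    return [w / total for w in raw]

def ensemble_prompts(checkpoints, mu, sigma2):
    """Weighted average of per-epoch prompt parameter vectors
    (each checkpoint is a flat list of prompt parameters)."""
    weights = gaussian_weights(len(checkpoints), mu, sigma2)
    dim = len(checkpoints[0])
    return [sum(w * ckpt[d] for w, ckpt in zip(weights, checkpoints))
            for d in range(dim)]
```

With 20 epochs and $\mu = 15$, the weight peaks at epoch 15, matching the trade-off in the table: later checkpoints favor base-class accuracy, earlier ones preserve generalization.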
Selecting the best checkpoint through a validation set is a common practice. In theory, this greedy strategy favors the highest performance on the downstream tasks, meaning the small number of learnable prompts/parameters is well adapted to those tasks. It is equivalent to our framework without the Model Ensemble Constraint (MEC).
However, purely optimizing the small number of learnable prompts toward the target tasks inevitably hinders the generalization ability of the large 3D models, as we analyzed in the paper.
We also provide an ablation study for this question. As the following table indicates, the method without MEC has slightly lower accuracy on new classes (75.59% vs. 76.10%) and a lower harmonic mean. However, when the factors of MAC and TDC are removed, the role of MEC becomes prominent: it raises the overall performance remarkably, especially for unseen new classes (5.28 absolute points).
Table r3-2. Ablation study for the framework without the model ensemble constraint. The results are averaged over 5 datasets. MAC: mutual agreement constraint, TDC: text diversity constraint, MEC: model ensemble constraint. HM: harmonic mean of the Base and New class accuracies.
| MAC | TDC | MEC | Base | New | HM |
|--------|----------------|-------|--------|--------|--------|
| x | x | x | 77.91 | 67.91 | 72.57 |
| x | x | √ | **82.42** | **73.19** | **77.53** |
| √ | √ | x | **83.30** | 75.59 | 79.26 |
| √ | √ | √ | 83.18 | **76.10** | **79.48** |
**Q5: The advantages of prompt learning over low-rank adaptation**
Prompt tuning and low-rank adaptation (LoRA) are orthogonal techniques for parameter-efficient fine tuning. We prefer prompt tuning over LoRA since it does not change the architecture and parameters of the 3D foundation models. In contrast, LoRA needs to change the architecture of the foundation models by introducing low-rank matrices, which might not be desirable in practice since foundation models are usually more precious and hard to obtain.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks to the authors for the careful rebuttal. After checking the paper, I apologize for my misunderstanding about the relation between Eq. 2 and the DreamBooth preservation loss. The response has resolved my concerns well, so I have decided to raise my rating to weak accept.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your careful consideration and positive feedback on our rebuttal. We greatly appreciate your willingness to re-evaluate our work and your understanding regarding the clarification of the relationship between Eq. 2 and the Dreambooth preservation loss.
We are also pleased to hear that your concerns have been resolved. Thank you again for your time and effort in reviewing our submission.
Kind regards,
Authors of Submission 1657 | Summary: This paper investigates the 3D domain generalization (3DDG) ability of large 3D models using prompt learning. They utilize parameter-efficient prompt tuning to boost the performance of 3D point cloud recognition models. The paper observes that while prompt tuning improves downstream tasks, it often reduces the generalization ability of the models. Thus, they introduce a comprehensive framework to maintain good generalization by allowing learnable prompts to interact actively with the pre-trained general knowledge in large 3D models. This framework imposes explicit three regulation constraints on the prompt learning trajectory, maximizing mutual agreement between task-specific predictions and task-agnostic knowledge. They also develop three new benchmarks to evaluate 3D domain generalization: base-to-new class generalization, cross-dataset generalization, and few-shot generalization.
Strengths: 1. The newly created benchmarks provide a more holistic evaluation of 3D domain generalization, addressing real-world challenges such as transferring to unseen classes and handling corrupted data.
2. This paper achieves consistent improvements in generalization ability across various large 3D models and benchmarks, demonstrating its effectiveness.
3. The use of lightweight prompt tuning makes the framework computationally efficient, reducing the need for extensive retraining of large models.
Weaknesses: 1. According to the paper's introduction, there are already substantial works on domain adaptation and domain generalization for 3D point clouds, including both object-level data and real scanned radar data. Many advanced methods also utilize beyond PointNet and ModelNet dataset. Consequently, the authors need to provide a more rigorous and detailed motivation for their study.
2. The right part of eq.(1) needs a more detailed explanation to describe its components clearly.
3. The method proposed in this paper seems overly simplistic and lacks novelty. Beyond the three general constraints mentioned, are there any specific designs for integrating LLMs with 3D point cloud multimodal learning?
4. This paper does not introduce any additional designs for domain adaptation. Although it proposes a new benchmark, it essentially applies transfer learning.
Technical Quality: 2
Clarity: 3
Questions for Authors: See the weaknesses section.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: This paper lacks an analysis of its limitations. The effectiveness of the text diversity constraint relies on the quality and relevance of the text descriptions, which may vary depending on the source (LLMs or manual templates).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback. Below we address your concerns one by one. Follow-up questions are welcome if something remains unclear.
**Q1: More rigorous motivation of this study is needed**
Thanks for your comments. We need to clarify the following points.
As we stated in the introduction (Line 36), there are only a few methods discussing domain adaptation (PointDAN [45]) and domain generalization (MetaSets [19], PDG [58]) for 3D point clouds, rather than substantial works.
To make it more clear, we added Figure 1 in the uploaded one-page PDF to explain our motivation. In summary, our motivation is to enhance the performances of large 3D models on downstream 3D recognition tasks while simultaneously maintaining good 3DDG ability. Previous works focus on downstream tasks but fail to consider the generalization on unseen data and lack relevant evaluations. Their limitations include
* **Generalization among a fixed set of categories:** Our approach based on large 3D models can conduct open-set recognition and generalize to any unseen class.
* **Limited Scale and Scope:** The large 3D models we investigate usually have hundreds of millions of parameters, e.g., ULIP-2 with 100M+ params, and the 3D encoders in previous works cannot match this scale. Our method is evaluated on up to 215 classes while previous methods are only tested on up to 11 classes (Sim-to-Real).
* **Compromised generalization ability:** Previous methods leveraging large 3D models focus on the downstream performances by lightweight adaptation (e.g., PPT [52], IDPT [68], DAPT [82], Point-PEFT [53]), while this strategy compromises generalization ability.
In short, our work offers a pioneering investigation of 3DDG ability of large 3D models and presents a simple yet effective solution. We believe that the new benchmarks will benefit researchers and drive future advancements in the field.
**Q2: Further explanation of the right part of Eq. (1)**
The general idea of the right part of Eq. (1) is that we try to find optimal point and text prompt parameters $\{E^{P*}, E^{T*}\}$ that minimize the expected cross-entropy loss over the ground truth data distribution $\mathcal{D}_{gt}$. Let's dive into the components of this equation.
* $\{\textbf{\textit{E}}^{P*},\ \textbf{\textit{E}}^{T*}\}$: These are the optimal parameters that we are trying to find.
* $argmin_{\{\textbf{\textit{E}}^P,\ \textbf{\textit{E}}^T\}}$: This notation indicates that we are looking for the arguments (in this case, the parameters $\{\textbf{\textit{E}}^P,\ \textbf{\textit{E}}^T\}$) that minimize the following expression.
* $\mathbb{E} _ { (P, y) \sim \mathcal{D} _ {gt}}$: This denotes the expected value over the ground truth data distribution $\mathcal{D}_{gt}$, where $P$ represents the input point clouds and $y$ represents its real class labels.
* $\mathcal{L}_{CE}(\tilde{\mathcal{D}}, y)$: This is the cross-entropy loss function. $\tilde{\mathcal{D}}$ represents the predicted class distribution, and $y$ is the true class label.
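To tie these components together, here is a minimal numeric sketch (illustrative only: `logits_fn` is a hypothetical stand-in for the prompt-conditioned model, and a plain softmax over class logits stands in for the predicted distribution $\tilde{\mathcal{D}}$):

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(pred_dist, y):
    """L_CE(D_tilde, y): negative log-likelihood of the true class y."""
    return -math.log(pred_dist[y])

def empirical_objective(logits_fn, dataset):
    """Sample average approximating E_{(P, y) ~ D_gt}[L_CE(D_tilde, y)];
    prompt learning searches the prompt parameters {E^P, E^T} hidden
    inside logits_fn for the argmin of this quantity."""
    return sum(cross_entropy(softmax(logits_fn(P)), y)
               for P, y in dataset) / len(dataset)
```

Minimizing `empirical_objective` over the prompt parameters is the finite-sample analogue of the expectation on the right-hand side of Eq. (1).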
**Q3: The method seems simple and lacks novelty**
We answer this question in the global response section, kindly refer to that part.
In addition, when it comes to leveraging LLMs for prompt learning, we mainly regard them as powerful tools to produce diverse text descriptions of the point clouds. The specific designs are simple but effective. We customize three types of instructions to LLMs, and the details of the design are also visualized in Figure 3 of the uploaded one-page PDF:
* _Question Answering_
* _Caption Generation_
* _Making Sentences_
After that, we encode these responses with the text encoder of the large 3D multi-modal models to obtain the representations of different 3D categories.
**Q4: Lack of specific designs for domain adaptation**
Thanks for your comments. Our work mainly investigates the 3D domain generalization (3DDG) ability of large 3D models, instead of domain adaptation (Line 1 of the main paper).
There are two key differences between domain adaptation (DA) and domain generalization (DG). _First_, DA methods can access target domain data during training while DG methods cannot. _Second_, DA methods aim to minimize performance drop when transferring to a specific known target domain while DG methods try to ensure generalization and robustness to unseen domains.
The proposed regulation constraint framework is designed for boosting task-specific performances while maintaining the generalization ability simultaneously. On the base-to-new benchmark, the models conduct lightweight prompt learning while directly test on unseen new classes. Similarly on the cross-dataset benchmark, the models learned from the source domain are directly tested on target domains.
---
Rebuttal Comment 1.1:
Comment: I appreciate the author’s response and apologize for the delayed reply. However, I still have some concerns of this paper:
1. I recognize that this paper is the first to introduce LLM to 3D DG. However, considering that in terms of prompt learning and DA/DG technical contributions, I believe it still lacks novelty.
2. There are indeed many 3D DA/DG methods, especially for LiDAR data in autonomous driving. While they don’t focus on object-level point clouds, their ideas are still valuable. So, please consider improving the literature review.
3. I'm very clear about the difference between DA and DG. My main concern is that this paper lacks a specific design to address domain gaps, whether at the category or dataset level. As the authors claim, ‘the models learned from the source domain are directly tested on target domains’, which seems more like transfer learning.
Sorry again for my late reply. I have also carefully read the comments from other reviewers and recognize the contribution of this paper to introducing LLM to 3D DG. I am willing to raise my rating if the authors can address my remaining concerns or if other reviewers lean toward acceptance.
---
Rebuttal 2:
Title: Thanks for your very valuable comments and looking forward to discussion on our rebuttal
Comment: Dear Reviewer UDVe,
We hope you’ve had a pleasant weekend. We wanted to thank you for the detailed feedback you provided on our submission. Your insights have been very valuable, and we have carefully considered your comments in our rebuttal.
If there are any additional concerns or points you would like to discuss further, we would be more than happy to clarify or provide further information. Your guidance is greatly appreciated, and we would welcome the opportunity to address any unresolved issues.
Thank you again for your time and consideration.
Best regards,
Authors of Submission 1657
---
Rebuttal 3:
Title: Reply to reviewer's questions (part 1/2)
Comment: Dear Reviewer UDVe,
Thank you for taking the time to respond to our rebuttal, especially given this busy period. We appreciate your feedback and the opportunity to further clarify the points you find unclear. Below we will answer these questions point by point.
**Q1: Concern on technical contributions**
Thanks for your comments. We understand your concern regarding the novelty. In terms of technical contributions, we have the following points to explain.
- Previous DA/DG methods based on relatively small models for 3D point cloud recognition design common feature space among source and target domain or adapt the meta-learning framework to handle the domain shifts. They provide very valuable insights and solutions to advance the development of 3D DA/DG.
- Recent works, especially large 3D models (e.g., ULIP, ULIP-2, Uni3D), demonstrate much better zero-shot recognition performance across a wide range of target tasks. The results imply that domain gaps can also be narrowed more effectively by pre-training large 3D models on large-scale datasets (e.g., ULIP-2 is pre-trained on million-scale pointcloud-text-image triplets). You can treat these as different technical routes/stages for solving the 3D DA/DG problem.
- This work is built on the latter route and the idea of improving the 3DDG ability is to exploit the power of large models. To this end,
- (1) we design an active interaction strategy to align with the pre-trained knowledge in large 3D models,
- (2) we deploy LLMs as powerful interfaces to produce high-quality descriptions to various point clouds,
  - (3) we synthesize the opinions from different learning stages with Gaussian-weighted voting.
- Then, these three components are effectively incorporated into a unified regulation framework to handle the category/dataset shifts between seen and unseen domains.
- Finally, we verify the effectiveness of distinct components by ablation experiments and validate the proposed framework on multiple large 3D models and multiple benchmarks to reflect the boosted generalization ability and robustness.
- We also want to note that our framework shows promising model-agnostic attribute, which implies that with the increasing abilities of large 3D models and LLMs, the 3DDG gains will increase.
- For the prompt learning part, our design is distinguished from previous works in:
  - we conduct multi-modal prompt learning (on both the text and 3D branches) while previous works conduct prompt tuning on a single modality. For instance, PPT [52] only tunes text prompts, while IDPT [68], DAPT [82], and Point-PEFT [53] tune prompts only in 3D. At the beginning of this project, we compared these two solutions and found that our strategy achieves better generalization than single-modal tuning on our benchmarks.
**Q2: Improving literature review**
Thanks for your suggestions. We appreciate that the reviewer clarifies there are many 3D DA/DG methods for **lidar data in autonomous driving**.
After receiving the feedback, we have made every effort to find related papers (these works are attached in another reply due to the character limitation). Many of them focus on semantic segmentation, object detection, and registration. We will read them carefully. There is no doubt that these 3D DA/DG methods are valuable to the field, and we will reflect them in our revised related work section by:
* (1) summarizing these works according to the methodology they proposed;
* (2) explaining the differences and relations to our work.
**Q3: Lacks a specific design to address domain gaps**
Thanks for your comments. This question is similar to Q1 (concern on technical contributions), and we answer it in the corresponding section; kindly refer to that part. Regarding the question about transfer learning, we have the following points.
First, we agree transfer learning and DG methods share many similarities. They are both techniques used to improve the performance of models on tasks where there is limited or no data.
Second, they approach this problem from different angles. Let's examine the difference through some specific examples.
- In transfer learning, a model is typically pre-trained on a large dataset (source domain) and then fine-tuned on a smaller, task-specific dataset (target domain). For example, a model is pre-trained on the ImageNet dataset (which contains millions of images across 1,000 classes) and then fine-tuned on a smaller dataset of medical images to classify different types of skin diseases. By this way, the learned useful features from ImageNet can be transferred to the target domain with relatively little additional training.
- In domain generalization, the model is only trained on source domain and directly tested on unseen target domains, without requiring fine-tuning on the target domain data. In our base-to-new class and cross-dataset settings, we do not fine-tune the prompts on new classes or target datasets.
---
Rebuttal 4:
Title: Reply to reviewer's questions (part 2/2)
Comment: Below we attach the related works on 3D DA/DG for lidar data in autonomous driving. We will summarize them and discuss their relations to our work in the revision. In general, our work focuses on object-level recognition, while these methods deal with scene-level 3D tasks, such as semantic segmentation and object detection.
1. 3DDA methods
- Saleh et al. Domain Adaptation for Vehicle Detection from Bird's Eye View LiDAR Point Cloud Data. ICCV 2019
- Xu et al. SPG: Unsupervised Domain Adaptation for 3D Object Detection via Semantic Point Generation. ICCV 2021
- Yi et al. Complete & Label: A Domain Adaptation Approach to Semantic Segmentation of LiDAR Point Clouds. CVPR 2021
- Zhao et al. ePointDA: An End-to-End Simulation-to-Real Domain Adaptation Framework for LiDAR Point Cloud Segmentation. AAAI 2021
- Achituve et al. Self-Supervised Learning for Domain Adaptation on Point Clouds. WACV 2021
- Jiang et al. LiDARNet: A Boundary-Aware Domain Adaptation Model for Point Cloud Semantic Segmentation. ICRA 2021
- Shen et al. Domain Adaptation on Point Clouds via Geometry-Aware Implicits. CVPR 2022
- Yang et al. No-Reference Point Cloud Quality Assessment via Domain Adaptation. CVPR 2022
- Liang et al. Point Cloud Domain Adaptation via Masked Local 3D Structure Prediction. ECCV 2022
- Wang et al. SSDA3D: Semi-supervised Domain Adaptation for 3D Object Detection from Point Cloud. AAAI 2023
- Saltori et al. Compositional Semantic Mix for Domain Adaptation in Point Cloud Segmentation. TPAMI 2023
- Katageri et al. Synergizing Contrastive Learning and Optimal Transport for 3D Point Cloud Domain Adaptation. WACV 2024
2. 3DDG methods
- Wu et al. SqueezeSegV2: Improved Model Structure and Unsupervised Domain Adaptation for Road-Object Segmentation from a LiDAR Point Cloud. ICRA 2019
- Robey et al. Model-Based Domain Generalization. NeurIPS 2021
- Lehner et al. 3d-vfield: Adversarial augmentation of point clouds for domain generalization in 3d object detection. CVPR 2022
- Sanchez et al. Domain generalization of 3d semantic segmentation in autonomous driving. ICCV 2023
- Kim et al. Single Domain Generalization for LiDAR Semantic Segmentation. CVPR 2023
- Qu et al. Modality-Agnostic Debiasing for Single Domain Generalization. CVPR 2023
- Xiao et al. 3D Semantic Segmentation in the Wild: Learning Generalized Models for Adverse-Condition Point Clouds. CVPR 2023
- Li et al. BEV-DG: Cross-Modal Learning under Bird's-Eye View for Domain Generalization of 3D Semantic Segmentation. ICCV 2023
- Wang et al. Towards Domain Generalization for Multi-View 3D Object Detection in Bird-Eye-View. CVPR 2023
- Guo et al. An Accurate Outlier Rejection Network With Higher Generalization Ability for Point Cloud Registration. RAL 2023
- He et al. Domain Generalization-Aware Uncertainty Introspective Learning for 3D Point Clouds Segmentation. MM 2024
- Sanchez et al. ParisLuco3D: A High-Quality Target Dataset for Domain Generalization of LiDAR Perception. RAL 2024
- George Eskandar. An Empirical Study of the Generalization Ability of Lidar 3D Object Detectors to Unseen Domains. CVPR 2024
- Jiang et al. DG-PIC: Domain Generalized Point-In-Context Learning for Point Cloud Understanding. ECCV 2024
---
Rebuttal Comment 4.1:
Comment: Thank you for your detailed responses. Most of my concerns are addressed and I will raise my rating to Borderline Accept. Please carefully revise the paper on motivation/contribution to explicitly state how to address domain shifts and improve the literature review section as well.
---
Reply to Comment 4.1.1:
Comment: Dear Reviewer UDVe,
Thank you very much for your thoughtful reconsideration of our submission and for raising the rating. We are especially grateful for your detailed suggestions regarding the motivation and contribution sections, as well as your advice on improving the literature review.
We will carefully revise the paper to state how to address domain shifts explicitly and enhance the clarity of our contributions. Your guidance is invaluable, and we appreciate the time and effort you’ve dedicated to helping us improve our work.
Thank you again for your support.
Best regards,
Authors of Submission 1657 | Summary: This paper investigates the 3D domain generalization (3DDG) capability of large 3D models based on prompt learning. The authors propose a comprehensive regulation framework that employs lightweight prompt learning to improve both task-specific performance and domain generalization ability. The framework consists of three main components: mutual agreement constraint, text diversity constraint, and model ensemble constraint. Additionally, the authors introduce three new 3DDG evaluation benchmarks: base-to-new, cross-dataset, and few-shot generalization benchmarks. Experimental results demonstrate that the proposed method significantly enhances model generalization while improving specific task performance.
Strengths: 1 The paper demonstrates significant originality by being the first to address the 3DDG problem for large multi-modal 3D models and proposing a novel regulation framework with innovative constraint mechanisms.
2 The quality of the research is evident in its comprehensive experimental design and the significant improvements shown across multiple benchmarks and models.
3 The work's significance lies in addressing the critical issue of domain generalization in 3D point cloud analysis, potentially impacting related fields broadly. Furthermore, the introduction of new benchmarks provides valuable tools for future research in 3DDG.
Weaknesses: 1 There's no detailed comparison of training time between the proposed method and baseline approaches or full fine-tuning of large 3D models.
2 Limited validation across diverse point cloud tasks and real-world scenarios.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1 What specific advantages do the newly proposed benchmarks have compared to previous DG methods?
2 How do you view the scalability of this method on larger datasets or more complex 3D tasks?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: With a more detailed discussion, the authors could provide a more balanced view of their work, demonstrating scientific rigor and offering valuable insights for researchers looking to build upon or apply their method. This would significantly strengthen the paper and its contribution to the field.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback. We address your concerns point by point. Feel free to ask follow-up questions if something remains unclear.
**Q1: Training time comparison between baselines and our method.**
As requested, we have added a comparison of the training time between our method and the baselines. The results are shown in the following table. The proposed method consumes a similar amount of time per epoch compared to the baseline, with a slight increase due to the inclusion of our framework.
Table r1-1. Running time comparison of a strong baseline ULIP-2 and the proposed approach.
We conduct prompt learning based on ULIP-2 for 20 epochs on the base-to-new class benchmark,
and the experiments are run three times with different seeds. The settings are consistent with those in the main paper. Time is measured in seconds over all 20 epochs on an RTX 4090.
| | seed | MN40 | S-PB_T50_RS | S-OBJ_BG | S-OBJ_ONLY | SNV2 | Avg. |
|------|:------:|:------:|:-------------:|:----------:|:------------:|:------:|:------:|
| | 1 | 132 | 106 | 48 | 53 | 307 | 129.2|
| ULIP-2 | 2 | 132 | 106 | 48 | 53 | 305 | 128.8|
| | 3 | 133 | 108 | 48 | 51 | 305 | 129.6|
| |1 | 159 | 112 | 60 | 60 | 344 | 147.0|
| +**RC**(Ours)|2 | 159 | 114 | 60 | 59 | 345 | 147.4|
| |3 | 159 | 113 | 59 | 60 | 345 | 147.2|
The number of learnable parameters in our framework is 16,896, while fully fine-tuning ULIP-2 involves 82.3M learnable parameters (text and 3D encoders only). According to the reported details of ULIP-2, pre-training on Objaverse [9] utilizes 8 A100 GPUs and takes 1.5 days, so full fine-tuning of ULIP-2 is also expensive.
[9] Deitke et al. Objaverse: A Universe of Annotated 3D Objects. CVPR 2023
**Q2: Limited validation across diverse point cloud tasks and real-world scenarios.**
We would like to point out that we have already considered multiple real-world scenarios. In particular, there are four datasets in our new benchmarks collected from real-world scenarios, including the three variants of ScanObjectNN [55] and Omni3D [60]. ScanObjectNN is widely used in the community and Omni3D is a recently released dataset that contains a large vocabulary of 3D objects. Both of them
pose great challenges to existing point cloud recognition methods, according to the results in Table 1 and Table 2 in the main paper.
Our work mainly focuses on the recognition task since the large 3D models (e.g., ULIP, ULIP-2) are pre-trained using the contrastive objective (similar to CLIP), and they are good at the global alignment of 3D objects and their text descriptions with class names.
Other 3D tasks like object detection and segmentation deal with scene-level point cloud data. They follow different paradigms and methodologies compared to object-level recognition. We leave it to future work to explore the 3DDG ability of our framework on these tasks.
[55] Revisiting Point Cloud Classification: A New Benchmark Dataset and Classification Model on Real-World Data. ICCV 2019
[60] Wu et al. OmniObject3D: Large-Vocabulary 3D Object Dataset for Realistic Perception, Reconstruction and Generation. CVPR 2023
**Q3: The advantages of the new benchmarks compared to existing ones.**
First, our benchmarks provide new evaluation dimensions for 3DDG methods. These new evaluation dimensions are key indicators of domain generalization ability but are absent in existing benchmarks. Specifically,
* the domain generalization evaluation in existing PointDA and Sim-to-Real focuses on the shared categories between the source and target domains, without considering unseen new classes. We think this is a critical limitation, especially when evaluating the generalization ability of large 3D models that can conduct open-set recognition. In contrast, our created base-to-new and cross-dataset benchmarks provide evaluations on both seen and unseen data.
* previous benchmarks fail to evaluate the generalization ability when the target domain contains data corruptions, which are common in 3D point cloud analysis. In contrast, our cross-dataset benchmark introduces this kind of evaluation to measure the model's robustness against common data corruptions.
* the few-shot benchmark inspects the model's generalization ability under an extremely low-data regime (e.g., 1-shot learning). This kind of evaluation has not been covered by previous benchmarks.
Second, our benchmarks are more diverse and challenging. There are only 10 classes in PointDA and 11 classes in Sim-to-Real. Our newly created benchmarks contain 7 different datasets and up to 216 point cloud categories, which will drive future research.
**Q4: The scalability of this method on larger datasets**
During rebuttal, we further test our framework on a larger dataset named Objaverse-Lvis and the results are promising. This dataset is a subset of the recently released Objaverse and only serves as a test set (target domain). Objaverse-Lvis contains 46,205 point clouds and 1,156 classes, and some classes only have a single object, posing great challenges to existing point cloud recognition methods. In the experiments, we select representative ULIP and ULIP-2 as baselines and compare them with the models with our regulation framework.
The results in the following table verify that the proposed approach can also bring considerable gains (+3.27% absolute points for ULIP-2) on such a larger and more challenging dataset.
| Method | Source (ShapeNetV2) | Target (Objaverse-Lvis) |
|-------------|:----------------------:|:-------------------------:|
| ULIP | 87.33 (0.95) | 0.83 (0.05) |
| +RC(Ours) | **90.43** (0.86) | **1.10** (0.08) |
| ULIP-2 | 76.70 (1.37) | 14.80 (0.22) |
| +RC(Ours) | **76.70** (1.59) | **18.07** (0.49) |
---
Rebuttal Comment 1.1:
Comment: The author's response addressed my concerns, so I changed my rating to weak accept.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer kpdQ,
We wanted to express our sincere gratitude for your thoughtful re-evaluation of our submission. Your willingness to reconsider the rating and provide us with constructive feedback is truly appreciated.
We are pleased to hear that our response helped address your concerns, and we value the time and effort you’ve dedicated to reviewing our work. Your insights have been instrumental in improving the clarity and quality of our submission.
Thank you again for your careful consideration and for raising your rating.
Best regards,
Authors of Submission 1657
---
Rebuttal 2:
Title: Thanks for your very professional review; we look forward to feedback on our rebuttal
Comment: Dear Reviewer kpdQ,
I hope this message finds you well. I wanted to express my gratitude for the time and effort you’ve invested in reviewing our submission. I understand that this is a busy period, and I sincerely appreciate your attention to and recognition of our work.
If there are any further clarifications or questions regarding our rebuttal, we are more than happy to provide additional information.
Thank you again for your valuable feedback and consideration.
Best regards,
Authors of Submission 1657 | null | null | Rebuttal 1:
Rebuttal: First of all, we sincerely thank all reviewers and ACs for reviewing our paper and providing valuable comments. There is no doubt that these suggestions and feedback are very valuable for refining the paper. We are encouraged by the positive comments from the reviewers: ''significant originality by being the first ...'' (Reviewer kpdQ), ''The whole framework is simple but effective'' (Reviewer d557), ''consistent improvements in generalization ability across various large 3D models and benchmarks'' (Reviewer UDVe). Moreover, all reviewers recognize that our constructed benchmark for 3D domain generalization is valuable to the community.
In the following, we will give a global response to the common concern on novelty of this work.
**Q1: The method proposed in this paper seems simple and lacks novelty.**
A1: We agree that our method is simple, but effective. The novelty of our work lies in the fact that we are the first to investigate the 3D domain generalization capability of large 3D models and present a simple yet effective regulation framework to address this critical issue. The originality is highly appraised by Reviewer kpdQ.
Moreover, we construct three new benchmarks that provide new evaluation dimensions for 3D domain generalization (3DDG), including generalization on unseen new classes, corrupted data, and few-shot generalization. These are vital indicators to measure the 3DDG ability in real-world scenarios but ignored by previous works. The merits of our constructed benchmarks are recognized by all three reviewers: ''... new benchmarks provide valuable tools for future research in 3DDG'' (Reviewer kpdQ). ''... newly created benchmarks provide a more holistic evaluation of 3D domain generalization'' (Reviewer UDVe). ''... new benchmarks may be beneficial to the community'' (Reviewer d557).
Pdf: /pdf/5854b6b2189caed7111116eec949a101710d647b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
CODE: Contrasting Self-generated Description to Combat Hallucination in Large Multi-modal Models | Accept (poster) | Summary: This paper introduces the CODE decoding method, which contrasts the original image with a VLM-generated image description to reveal missed or hallucinated content in the naive decoding process. The method contains two innovations: 1) a Bounded Divergence-guided selector that provides a dynamic combining weight, and 2) an adaptive information constraint, also based on Bounded Divergence. The experimental results show promising and consistent performance improvements.
Strengths: 1. The paper is well-written and addresses a crucial problem in MLLMs.
2. The idea of contrast image and caption in VLM decoding process is novel.
3. The proposed CODE method is simple and effective on multiple benchmarks.
Weaknesses: 1. The dynamic information flow control does not have that much technical novelty, as it is quite close to what IBD does.
2. Like CODE, VCD also contrast image with modified inputs (i.e. corrupted images), but not discussed in detail in this paper. What makes contrasting image descriptions better than contrasting with corrupted images?
3. What makes a good image description for CODE to use is not discussed.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: the authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1. The dynamic information flow control does not have that much technical novelty, as it is quite close to what IBD does.**
**A1.** We thank the reviewer for the valuable feedback, and we would like to clarify the primary differences between our CODE and IBD [R1].
The dynamic information flow control in our method consists of two main regulating terms, $\alpha_{t}$ and $\beta_{t}$. The first term, $\alpha_{t}$, is used to adjust the penalization of the original logit values in contrastive decoding. In IBD, this term is obtained by constructing another image-biased model and calculating the JSD with the vanilla model. In contrast, our method uses only self-generated image descriptions from the model itself (thus reflecting its current understanding of the visual inputs in the textual representation) to calculate BD and obtain $\alpha_{t}$. This approach eliminates the need to build a new model and allows for seamless application to other LMMs.
The second term, $\beta_{t}$, is designed to dynamically control the threshold of $V_{head}$. In IBD, $\beta$ is set to a constant value of 0.1 and is not dynamically regulated. Thus, $\beta_{t}$ is a novel feature of CODE that expands the token searching pool when the next-token prediction takes into account the distributional difference between the visual contents and their comprehensive descriptions.
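For illustration only, here is a minimal numpy sketch of the general shape of such a contrastive decoding step. This is a toy version, not the actual CODE implementation: `bounded_divergence` is simply a base-2 Jensen-Shannon divergence (one common choice of bounded measure), and the $\beta_{t}$ schedule shown is hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def bounded_divergence(p, q):
    # Base-2 Jensen-Shannon divergence: symmetric and bounded in [0, 1].
    m = 0.5 * (p + q)
    def kl2(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))
    return 0.5 * kl2(p, m) + 0.5 * kl2(q, m)

def contrastive_step(logits_visual, logits_desc):
    p_v, p_d = softmax(logits_visual), softmax(logits_desc)
    alpha_t = bounded_divergence(p_v, p_d)  # dynamic contrast weight
    beta_t = 0.1 * (1.0 - alpha_t)          # hypothetical adaptive threshold
    adjusted = logits_visual + alpha_t * (logits_visual - logits_desc)
    # Plausibility constraint: drop tokens far below the visual mode.
    adjusted = np.where(p_v >= beta_t * p_v.max(), adjusted, -np.inf)
    return int(np.argmax(adjusted))
```

When the two distributions agree, `alpha_t` collapses to zero and the step reduces to ordinary greedy decoding on the visual logits.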
---
**W2. Like CODE, VCD also contrasts image with modified inputs (i.e. corrupted images), but not discussed in detail in this paper. What makes contrasting image descriptions better than contrasting with corrupted images?**
**A2.** We respectfully argue that we have discussed the methodological differences between our framework and VCD in detail (lines 43-44, 102-104, 160-162, and 200-201), because VCD is one of the important baselines for experimental comparison. We would like to highlight again that, unlike VCD, which leverages visual inputs contaminated with Gaussian noise, our method utilizes self-generated descriptions from the models themselves as contrasting counterparts. Our design is more reasonable because the comprehensive description reflects the model's current understanding of the visual inputs and integrates this understanding into the decoding process by penalizing the original logit information, whereas VCD depends on the number of noise injection steps $T$ and could suffer from unexpected adversarial effects resulting from the noise. To improve clarity, we will add this discussion to the potential final version.
---
**W3. What makes a good image description for CODE to use is not discussed.**
**A3.** As described in Table 5 of Appendix A, we carefully design the instruction for LMMs to obtain comprehensive descriptions of the given visual contents, and the curated prompt aims for the self-generated descriptions to span the possible visual contents thoroughly, answering any potential questions about the given image.
Additionally, as analyzed in Sec. 3.1 and Fig. 2, following the amateur-model selection philosophy of contrastive decoding, the generated comprehensive descriptions reflect the models' own visual understanding and are subsequently contrasted with the original logit information to enhance response coherence during the decoding phase.
Although we have demonstrated the effectiveness of self-generated descriptions in our framework across 6 LMMs and 6 benchmarks, as the reviewer pointed out, there could be more optimal descriptions that can further mitigate hallucinatory responses. This could be an interesting future research direction, and we will definitely include this discussion in the potential final version.
---
[R1] IBD: Alleviating Hallucinations in Large Vision-Language Models via Image-Biased Decoding, *arxiv preprint*, 2402.18476
---
Rebuttal Comment 1.1:
Comment: The response addresses my concerns on technical novelty and comparison with baselines. However, I am still curious about the effect of CODE with different image description prompts.
---
Rebuttal 2:
Comment: We thank the reviewer for replies and further questions. As the reviewer requested, to investigate the effectiveness of our design choice for the detailed description prompt, we compared it with a different variation, which is a base prompt to obtain a detailed description.
- Base prompt: “Provide a detailed description of the image.”
- Our prompt: “Provide a detailed description of the image, covering all visible elements and their interactions, so as to thoroughly answer any potential questions about the image."
As in the table below, we have analyzed the performance on mmHalBench and compared the average $\sharp$ token lengths of the responses generated by the models. The results show that our curated prompt, designed to elicit a more comprehensive description of all visible elements in the image, yields longer responses (approx. 24\% more tokens) and slightly improved performance compared to the base prompt. This indicates that the information content of the comprehensive description indeed affects CODE by reflecting the current visual understanding of LMMs, and effectively mitigates hallucinatory responses. We will include these results and discuss how prompt designs can boost CODE performance in the final version.
| Models | | LV1.5 | | | IXC2-VL | | | IVL-1.5 | |
|----------|:-----:|:---------:|:---------------:|:-----:|:---------:|:---------------:|:-----:|:---------:|:---------------:|
| mmHal | Overall$\uparrow$ | Hal$\downarrow$ | $\sharp$ token | Overall$\uparrow$ | Hal$\downarrow$ | $\sharp$ token | Overall$\uparrow$ | Hal$\downarrow$ | $\sharp$ token |
| Base | 2.34 | 52.08 | 100.54 | 3.19 | 30.21 | 56.98 | 3.42 | 33.00 | 136.16 |
| Ours | 2.49 | 51.00 | 115.46 | 3.46 | 25.00 | 80.73 | 3.52 | 30.21 | 195.54 | | Summary: The paper proposes a contrastive decoding method named CODE for large multi-modal models. CODE, as its name suggests, uses self-generated descriptions as contrasting references during the decoding phase of LMMs to mitigate hallucination issues. CODE works by dynamically considering the variations between the visual features and their corresponding pure language features (the description) to improve response alignment with the actual visual content and misalignment with the wrong parts of the descriptions. On top of contrastive decoding, the authors propose a dynamic restriction, which regulates the information flow, and an adaptive information constraint, which filters out less plausible tokens in the contrastive decoding phase. The proposed method is verified on 6 benchmarks.
Strengths: 1. The writing logic is relatively clear and easy to understand. The figures and tables are neat, beautiful and intuitive.
2. The proposed method is reasonable and has a good performance.
3. The paper conducts sufficient foundational experiments on 6 VQA benchmarks. The ablation study shows that both DC and AIC can improve the performance on MMVP and LLaVA QA 90.
Weaknesses: 1. The method has to generate a comprehensive description for each image before doing the visual question answering, the inference time is long and not convenient.
2. Only conduct ablation study of DC and AIC on MMVP and LLaVA QA 90, lack of ablation study on other benchmarks, such as, POPE.
3. Lack of analysis: why the performance of proposed method is not optimal or even inferior to the underlying Greedy decoding on some problem types of MMVP. For example, Color and Appearance in MMVP.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Have the authors tried other divergence measures such as KL divergence for the Bounded Divergence used in CODE?
2. For the computational analysis in Table 4, have you considered the cost of generating the comprehensive descriptions?
3. A question about the DC that I don't understand: when $\mathcal{D}_{\mathrm{bd}}\left(P_t^v \| P_t^d\right)$ is small, which means there is little difference between decoding based on visual features and decoding based on pure language features, it is intuitive to decode without regard to the variations, but why does CODE highly consider the variation?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: As mentioned in the first point of Weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1. The method has to generate $\cdots$ convenient.**
**A1.** Although we have discussed this in the Discussion section and the computational analysis in Table 4, we acknowledge that our method requires additional computational resources to obtain (self-generated) textual descriptions from the models themselves as visual counterparts for CD. However, we would like to highlight that, by contrasting with the models' descriptive self-understanding, our method can mitigate hallucinatory responses without further training, which is also an important research avenue for real-world applications.
---
**W2. Only conduct ablation study of DC and AIC $\cdots$ such as, POPE.**
**A2.** Thank you for the valuable feedback. We would like to clarify the effectiveness of the DR and AIC designs by conducting further ablation studies. In Table 3 of our manuscript, we selected two benchmarks for the ablation study, one each from the discriminative and generative categories (MMVP, LLaVA-QA). As the reviewer asked, we conducted additional ablation studies on the remaining benchmarks (POPE and mmHalBench). Note that we randomly sampled 500 examples from POPE and report accuracy; for mmHalBench, we report overall scores. As shown in the table below, the use of DR and AIC progressively enhances performance on both benchmarks, which indicates that dynamically restricting information is an important design element of the CODE implementation. We will incorporate these results in the potential next version.
| | | | POPE | | | mmHal | |
|:--------------:|:--------------:|:-------:|:------:|:--------:|:-------:|:-------:|:--------:|
| DR | AIC | LV1.5 | LV-N | IVL1.5 | LV1.5 | LV-N | IVL1.5 |
| X | X | 74.4 | 84.8 | 75.8 | 1.99 | 3.09 | 3.39 |
| X | O | 78.8 | 85.0 | 82.0 | 2.08 | 3.33 | 3.49 |
| O | X | 82.6 | 85.0 | 84.0 | 2.02 | 3.13 | 3.40 |
| O | O | **86.8** | **85.6** | **84.8** | **2.49** | **3.43** | **3.52** |
---
**W3. Lack of analysis: why the performance $\cdots$ in MMVP.**
**A3.** We appreciate the reviewer’s comment. First of all, there is a minor typo in the IXC2-VL result (in Table 1, “Presence of Specific Features”), which should be 6.0 $\rightarrow$ 60.0.
We have added statistical results to the table below for each category and conducted an analysis over 54 results (6 models $\times$ 9 question categories). As the summarized results below show, CODE boosts performance in the specific visual categories where vanilla greedy decoding scored very low. This is attributed to the utilization of self-generated descriptions, which can effectively contextualize and integrate nuanced information for complex categories like “State and Condition” and “Structural and Physical Characteristics”.
Although our method improves overall performance consistently across 6 models, in visual categories such as "Orientation and Direction" it slightly underperformed compared to greedy decoding. This may be because such categories (including other categories where the improvements are seemingly marginal) require straightforward decisions that are sufficiently handled by greedy decoding. For example, in the orientation and quantity categories, the answer is typically deterministic (*e.g.,* left, right, 1, 2, 3), which does not require a wider token searching pool compared to other types of questions. In these instances, CODE does not seem to optimally benefit simpler tasks. We will definitely add these discussions and analyses to the potential final version.
| | Total | orient. | feature. | state. | quantity. | position. | color. | structure. | text. | viewpoint. |
|:------:|:-------:|:-------:|:--------:|:-------:|:---------:|:---------:|:------:|:----------:|:------:|:----------:|
| greedy | 34.11 | 33.34 | 30.15 | 13.89 | 27.08 | 28.33 | 57.78 | 13.89 | 70.00 | 32.50 |
| ours | 37.95 | 32.70 | 34.09 | 22.59 | 27.08 | 36.67 | 60.38 | 22.22 | 71.67 | 34.17 |
| $\Delta$ | +10.1\% | -1.9\% | +11.6\% | +38.5\% | +0.0\% | +22.73\% | +4.3\% | +37.5\% | +2.4\% | +4.9\% |
---
**Q1. Have the authors tried $\cdots$ used in CODE?**
**A4.** Yes, our initial design used KL-divergence to measure divergence. However, we would like to kindly note that the most challenging point is that vanilla KLD is not bounded (it has an infinite upper bound), so it is not practical to use it directly as the divergence measurement. Utilizing the Bounded Divergence (BD) in our CODE implementation addresses this challenge by restricting the divergence range to [0, 1], which is easily incorporated into DR and AIC as an adaptive hyper-parameter.
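To make the boundedness point concrete, here is a small illustrative comparison, assuming a BD-style measure behaves like a base-2 Jensen-Shannon divergence (bounded in [0, 1]); the exact BD used in CODE may differ. The `eps` clipping is only there to keep vanilla KLD finite for display.

```python
import numpy as np

def kld(p, q, eps=1e-12):
    # Vanilla KL divergence: unbounded as supports diverge.
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def jsd(p, q, eps=1e-12):
    # Base-2 Jensen-Shannon divergence: always within [0, 1].
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl2(a, b):
        a = np.clip(a, eps, 1.0)
        b = np.clip(b, eps, 1.0)
        return float(np.sum(a * np.log2(a / b)))
    return 0.5 * kl2(p, m) + 0.5 * kl2(q, m)

p = np.array([1.0, 0.0])
q = np.array([0.0, 1.0])
print(kld(p, q))  # large (~27.6 with eps=1e-12); grows without bound as eps -> 0
print(jsd(p, q))  # ~1.0: saturates at the upper bound
```

Because the JSD-style value already lives in [0, 1], it can be plugged in directly as an adaptive weight without any extra normalization.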
---
**Q2. For the computational analysis $\cdots$ comprehensive descriptions?**
**A5.** Yes, the computational analyses in Table. 4 have considered all the computational costs of generating self-generated descriptions.
---
**Q3. A question about the DC, $\cdots$ why does the CODE highly consider the variation?**
**A6.** We would like to clarify a small misunderstanding. As the reviewer mentioned, if the two logit values from the visual input and its textual description are almost identical (thus minimal variation), the decoding process should indeed prioritize the original information (the reviewer’s understanding is correct). The one missing point is that our CODE process adds the multiplication of the logit variation and $\alpha_{t}$. Thus, in the ideal case, when the logit difference nearly reaches zero, the final addition to the original logits is canceled out, even if $\alpha_{t}$ is close to 1. Please note that this design can be universally applied without loss of generality across different logit-difference scenarios.
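As a toy numpy sketch of this cancellation property (the actual CODE update may include further terms beyond this illustrative form):

```python
import numpy as np

def code_adjust(logits_v, logits_d, alpha_t):
    # Final logits = original + alpha_t * (visual - description) variation.
    return logits_v + alpha_t * (logits_v - logits_d)

logits_v = np.array([3.0, 1.0, 0.5])
# When both views agree, the contrastive term cancels for any alpha_t:
assert np.allclose(code_adjust(logits_v, logits_v, alpha_t=1.0), logits_v)
```

So the correction is driven by the logit difference itself; $\alpha_{t}$ only scales a term that already vanishes when the two views agree.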
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response, it addresses my concerns on some details of the method. I tend to hold the score. | Summary: Large Multi-modal Models (LMMs) have made significant strides in understanding visual context and generating coherent responses. However, they face challenges such as hallucinations, where responses are incorrect and unrelated to visual inputs. To tackle this issue, this paper proposes COuntering DEscription Contrastive Decoding (CODE). CODE uses self-generated descriptions as reference points during decoding to mitigate hallucinations. By aligning responses with visual content through dynamic adjustments in token predictions, CODE enhances coherence and informativeness without requiring additional training. Experimental results demonstrate CODE's effectiveness in reducing hallucinations and improving cross-modal consistency across various benchmarks and state-of-the-art LMMs.
Strengths: 1. The paper is well-written and easy to understand.
2. The use of description to enrich language information for contrastive decoding to address hallucination problems is interesting.
3. Extensive experiments are conducted to evaluate the proposed method.
Weaknesses: 1. From Table 1, it can be seen that CODE does not bring much performance improvement compared to greedy decoding in practice.
2. Did the authors consider evaluating with the CHAIR metric on generative tasks?
3. Can CODE be used in conjunction with VCD or OPERA?
Technical Quality: 2
Clarity: 2
Questions for Authors: see weaknesses.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1. From Table 1, it can be seen that CODE does not bring much performance improvement compared to greedy decoding in practice.**
**A1.** We respectfully argue that the use of CODE shows consistent performance improvements across 6 different models of varying sizes. In particular, when considering the performance improvements on the more challenging generative benchmarks (Table 2) and in-the-wild benchmarks (Fig. 5), which extend beyond simple multiple-choice tests, our method shows improvements exceeding 10%, as described in lines 272-277. These gains are not marginal, and importantly, the improvements are achieved without further training of the models.
---
**W2. Did the authors consider evaluating with the CHAIR metric on generative tasks?**
**A2.** We thank the reviewer for this valuable discussion point! In the initial design of our experiments, we considered the CHAIR benchmark for the generative task. However, as Ben-Kish *et al.* [R1] pointed out, CHAIR is an outdated measurement limited to only 80 object annotations in MS-COCO. As a result, we selected recently introduced GPT-aided measurements (MMHal-Bench and LLaVA-QA) for more detailed generative evaluation.
However, we further conducted experiments to clarify the effectiveness of our method on the CHAIR benchmark. We randomly sampled 500 samples from COCO and report two metric variations, the per-sentence ($C_{S}$) and per-instance ($C_{I}$) proportions, with a context length of 64 in the table below. As the table shows, our method achieves competitive and more consistent results than the baseline decoding methods (greedy, VCD, and OPERA) across LMMs of varying sizes (LLaVA-1.5, IXC2-VL, and InternVL 1.5). We will incorporate the full CHAIR results into the potential next version.
| CHAIR | | | LLaVA-1.5 | | | | IXC2-VL | | | | InternVL | |
|:--------:|:------:|:----:|:---------:|:--------:|:------:|:--------:|:-------:|:--------:|:------:|:----:|:--------:|:-------:|
| | greedy | vcd | opera | ours | greedy | vcd | opera | ours | greedy | vcd | opera | ours |
| $C_{S}$ | 26.4 | 28.6 | 25.6 | **24.8** | 26.8 | 23.8 | 24.8 | **22.0** | 18.2 | 19.2 | **17.4** | 17.6 |
| $C_{I}$ | 11.1 | 12.1 | 11.5 | **10.9** | 11.8 | **10.2** | 11.1 | 10.6 | 10.1 | 10.4 | 10.8 | **9.7** |
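For clarity, the CHAIR metrics reported above are commonly computed as follows. This is a simplified sketch: the official implementation additionally maps mentioned words through MS-COCO synonym lists before matching.

```python
def chair_metrics(mentioned_per_caption, gt_objects_per_image):
    """CHAIR_S: fraction of captions containing >= 1 hallucinated object.
    CHAIR_I: fraction of all object mentions that are hallucinated."""
    halluc_captions = 0
    halluc_mentions = 0
    total_mentions = 0
    for mentioned, gt in zip(mentioned_per_caption, gt_objects_per_image):
        bad = [obj for obj in mentioned if obj not in gt]
        halluc_captions += 1 if bad else 0
        halluc_mentions += len(bad)
        total_mentions += len(mentioned)
    c_s = halluc_captions / len(mentioned_per_caption)
    c_i = halluc_mentions / max(total_mentions, 1)
    return c_s, c_i

# "dog" is absent from the first image's ground truth, so one of the two
# captions is hallucinated: CHAIR_S = 0.5, CHAIR_I = 1/3.
print(chair_metrics([["cat", "dog"], ["car"]], [{"cat"}, {"car"}]))
```

Lower is better for both metrics, which matches the $\downarrow$ direction in the table above.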
---
**W3. Can CODE be used in conjunction with VCD or OPERA?**
**A3.** CD-based decoding methods essentially require a contrasting counterpart to penalize the original information with the logit variance. Our method contrasts with the information from a self-generated comprehensive description, while VCD and OPERA rely on distorted visual inputs and logit penalties on premature layers, respectively. Due to the different contrastive designs of each framework, combining them is not a feasible option. Additionally, simply concatenating the decoding processes may result in over-penalization of the original information, so we cannot ensure correct model responses. Also, considering the increased inference time of each decoding framework, we believe that directly combining CD-based methods is not practical.
---
[R1] Mitigating Open-Vocabulary Caption Hallucinations, *arxiv preprint*, 2312.03631
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses! I still have some concerns.
1. How about the performance of the greedy decoding in Figure 5?
2. From Tables 1 and 2, I think CODE does not always provide much improvement compared with greedy decoding. It seems that only LLaVA-NeXT and InternVL show an obvious improvement. I think this is a limitation.
---
Rebuttal 2:
Comment: We would like to appreciate to the reviewer for the active engagement and discussion. We address each question separately below.
---
**A1.** As the reviewer asked, we conducted additional experiments on two in-the-wild benchmarks (RealW-QA and LLaVA(W)) and report the greedy decoding performance in the table below. Note that greedy decoding also slightly underperforms the other baseline decoding methods.
| | LV1.5 | LV1.5 | IXC-VL | IXC-VL | IVL-1.5 | IVL-1.5 |
|----------|-------:|------:|-------:|-------:|--------:|--------:|
| | greedy | ours | greedy | ours | greedy | ours |
| RealW-QA | 51.2 | 56.2 | 56.9 | 62.4 | 63.4 | 66.3 |
| LLaVA(W) | 69.3 | 72.6 | 73.3 | 82.3 | 84.8 | 85.7 |
---
**A2.** We acknowledge that the performance gain is not always significant, especially considering our extensive experimental comparison across **6 benchmarks and 6 LMMs of varying sizes**. We will definitely integrate the pointed-out limitation into the final version.
Additionally, we kindly encourage the reviewer to refer to our response to R#FLJA (A3). Based on the additional analysis on MMVP, our CODE method shows stronger performance especially on generative tasks that require a more comprehensive understanding of the given visual context, compared to deterministic tasks (such as multiple-choice questions), which are relatively simple and straightforward. We believe this is attributed to our CODE strategy, which utilizes the comprehensive description from the models themselves to restrict the information flow when sequentially generating response tokens, thus producing fewer hallucinatory responses during longer context generation (beyond a single answer: yes/no or multiple choice).
Rebuttal: We would like to thank the reviewers for the constructive feedback, which we will incorporate into the revised version. We also thank all reviewers (*Z6jD*, *FLJA*, *9CKq*) for acknowledging the novelty of our paper, namely the use of self-generated comprehensive descriptions to mitigate hallucinatory responses from existing LMMs.
We have carefully reviewed the concerns the reviewers raised and responded to each question individually. Please feel free to raise any remaining concerns during the discussion period so that we can further improve our manuscript. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SPARKLE: A Unified Single-Loop Primal-Dual Framework for Decentralized Bilevel Optimization | Accept (poster) | Summary: This paper proposes SPARKLE, a single-loop primal-dual framework for solving bilevel optimization in the decentralized setting. Specifically, multiple devices collaborate to solve bilevel optimization problems and exchange information via communications over a network. From a theoretical angle, the authors prove that SPARKLE algorithm achieves better transient iteration complexity and handle gradient dissimilarity more effectively under weaker assumptions than previous works. From an empirical angle, the authors conduct experiments to show that their algorithm has better performance than existing results.
Strengths: 1. The authors provide a single-loop primal-dual framework for solving decentralized bilevel optimization. The technical contributions and the claims are sound, and the experiments can support the claims from an empirical perspective.
2. Moreover, the framework unifies many state-of-the-art strategies used in distributed optimization, like ATC, GT, EXTRA, momentum, etc., and obtains the best results compared to existing works.
Weaknesses: 1. Novelty. The novelty of this paper seems a little limited. There are already many existing works proposing various algorithms for solving decentralized bilevel optimization problems, as listed in Table 1 of this paper. I am not sure if the improvement on the transient time and communication complexity over previous works is significant enough to make this work pass the bar of acceptance. The authors may want to clarify this a bit.
2. Presentation. The authors also mention that there are mixed strategies in this paper like ATC, GT, EXTRA, auxiliary-level updates, momentum update, and so on. The organization of the techniques might not be easy for a layman to follow. It would be better if the authors can highlight the reasons and benefits of using these techniques in a table.
3. Experiments. Many important baselines are missing in the figures of Section 4. Some figures have standard deviation of 10 trials while some do not.
Technical Quality: 3
Clarity: 2
Questions for Authors: I wonder if the authors have any responses to the weaknesses I listed above.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: 1. Novelty. One major concern is the limited novelty of this paper, as mentioned in the weakness section.
2. There are also some limitations mentioned by the authors. For example the lower-level problem is strongly convex, and it is unclear if the condition number is optimal in the upper bounds. I agree that these are the common limitations of existing decentralized bilevel optimization works, and are interesting to explore.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the invaluable comments. We have thoroughly addressed all questions. Should there be any additional concerns or inquiries, we are more than willing to provide further clarification.
Re Weakness:
**1. Novelty.**
We respectfully disagree that our work has limited contributions. SPARKLE is a unified framework that **yields brand new algorithms**, and achieves **state-of-the-art** convergence rates under **more relaxed assumptions** compared to existing methods. Please refer to the global response for the clarification of the contribution.
Additional clarifications on the reviewers' specific concerns are provided below.
**[Transient iteration complexity is critical for decentralized optimization]** Transient iteration complexity is critical for both theoretical and empirical reasons.
- From a theoretical standpoint, transient iteration complexity is key for assessing decentralized algorithms' performance compared to centralized methods and distinguishing between different decentralized algorithms. Without analyzing transient iteration complexity, many decentralized bilevel algorithms would seem to have similar performance based solely on asymptotic convergence rates, potentially aligning with centralized approaches. However, this overlooks the discrepancies evident in practical performance. By examining transient iteration complexity, we can identify more effective algorithms with shorter transient iterations, aligning theoretical insights with empirical results.
- From an empirical perspective, decentralized bilevel algorithms can only achieve their asymptotic convergence rate after a sufficiently large number of iterations. However, practical scenarios often allow only a limited number of algorithmic iterations due to constraints on time or computational resources. Consequently, these algorithms typically exhibit a decentralization-incurred slowdown in convergence, as illustrated in Fig. 1 from reference [1]. Evidently, the transient iteration complexity provides a more practical representation of decentralized algorithms' performance in real-world applications.
**[Transient iteration complexity reflects the impact of the network topology]** Decentralized optimization relies on communication networks, unlike centralized optimization with fully-connected networks. While the asymptotic rate remains unaffected, the transient iterations are influenced by the network topology. A sparsely-connected network has $\rho \to 1$, which enlarges the number of transient iterations. By comparing the transient iteration complexity of SPARKLE and existing baselines, we find that SPARKLE is more robust to sparse topologies.
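To make the topology dependence concrete, here is a small self-contained sketch (our own illustrative example, not an experiment from the paper): for a ring graph where each node mixes uniformly with its two neighbors, the mixing matrix is circulant with eigenvalues $(1+2\cos(2\pi j/n))/3$, so the spectral quantity $\rho$ approaches 1 as the ring grows.

```python
import math

def ring_spectral_gap_rho(n):
    """Second-largest eigenvalue modulus of the mixing matrix of an
    n-node ring where every node uses weight 1/3 for itself and for
    each of its two neighbors (circulant, doubly stochastic)."""
    # Circulant eigenvalues: lambda_j = (1 + 2*cos(2*pi*j/n)) / 3
    return max(abs((1 + 2 * math.cos(2 * math.pi * j / n)) / 3)
               for j in range(1, n))

# rho tends to 1 on larger (sparser) rings, so the number of transient
# iterations, which grows with 1/(1 - rho), blows up accordingly.
for n in (8, 16, 40):
    print(n, round(ring_spectral_gap_rho(n), 4))
```

For $n=8$ the value is about $0.80$, while for $n=40$ it already exceeds $0.99$, matching the statement that sparser topologies enlarge the transient phase.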
**[Communication complexity is critical for decentralized optimization]** In distributed systems, communication often takes more time than computation, especially with high-dimensional data. SPARKLE improves on existing methods by allowing different correction techniques and communication topologies for the upper and lower levels. Specifically, some SPARKLE variants, like SPARKLE-ED, can use sparser communication for the upper-level variable x without affecting transient iteration complexity. This reduces communication costs while maintaining efficiency, making SPARKLE more time-efficient compared to other algorithms.
**[More relaxed assumptions]** SPARKLE achieves the best transient time under the most relaxed assumptions (Table 1). We emphasize that the bounded gradient (BG), Lipschitz continuity (LC), and bounded gradient dissimilarity (BGD) assumptions for the upper-level function $f$ and the lower level function $g$ in other works may not hold in typical applications. For example, we consider $f(x,y)=\frac{1}{n}\sum_{i=1}^n f_i(x,y)$ with $f_i(x,y):=\frac{1}{2}x^\top A_i x$, where $A_i$ are different positive semi-definite matrices and $x\in \mathbb{R}^p$. The gradients (w.r.t. $x$) $A_i x$, and gradient dissimilarity $(A_i-\frac{1}{n}\sum_{j=1}^n A_j)x$ are unbounded for $x\in\mathbb{R}^p$. This highlights the significance of our theoretical improvement in transient time.
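This unboundedness is easy to verify numerically; the snippet below is our own scalar instantiation of the example above (scalar $A_i$, i.e., $f_i(x) = a_i x^2/2$) and shows the gradient dissimilarity growing linearly with $|x|$.

```python
def grad_dissimilarity(a, x):
    """Max gradient dissimilarity max_i |(a_i - mean(a)) * x| for the
    scalar quadratics f_i(x) = a_i * x**2 / 2, whose gradients are a_i * x."""
    abar = sum(a) / len(a)
    return max(abs((ai - abar) * x) for ai in a)

a = [1.0, 2.0, 3.0, 4.0]       # distinct curvatures, playing the role of A_i
d1 = grad_dissimilarity(a, 1.0)
d10 = grad_dissimilarity(a, 10.0)
# the dissimilarity scales linearly in |x|, so no uniform bound (BGD) can hold
```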
**[Sharp analysis]** This is the first result showing that bilevel optimization essentially subsumes the convergence of single-level optimization.
**2.Presentation.**
We thank the reviewer for the comment. Here is a brief summary of the benefits; we will provide a detailed table soon.
- GT/ED/EXTRA: correct data heterogeneity, improve the transient stage
- Momentum: guarantees convergence, enables relaxed assumptions
- Mixed network topology: saves communication cost
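As a concrete illustration of the heterogeneity correction provided by GT, here is a minimal scalar sketch of standard ATC gradient tracking on our own toy quadratics (the ring weights, step size, and objectives are illustrative choices, not taken from the paper):

```python
# Minimal ATC gradient-tracking (GT) sketch: n nodes on a ring, node i holds
# f_i(x) = a_i * (x - b_i)**2 / 2, so the local minimizers b_i all differ.
a = [1.0, 2.0, 3.0, 4.0]
b = [1.0, 2.0, 3.0, 4.0]
n = len(a)
x_star = sum(ai * bi for ai, bi in zip(a, b)) / sum(a)  # global minimizer

def grad(i, x):
    return a[i] * (x - b[i])

def mix(v):
    # Ring mixing: weight 1/2 on self, 1/4 on each neighbor (doubly stochastic).
    return [0.5 * v[i] + 0.25 * v[(i - 1) % n] + 0.25 * v[(i + 1) % n]
            for i in range(n)]

alpha = 0.05
x = [0.0] * n
g = [grad(i, x[i]) for i in range(n)]   # tracker starts at the local gradients
for _ in range(3000):
    x_new = mix([x[i] - alpha * g[i] for i in range(n)])
    g = [gi + grad(i, xn) - grad(i, xo)
         for i, (gi, xn, xo) in enumerate(zip(mix(g), x_new, x))]
    x = x_new
# every node ends up at x_star, not at its own local minimizer b_i
```

Despite the heterogeneous local minimizers $b_i$, every node converges to the minimizer of the average objective, which is what "correcting data heterogeneity" means here.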
**3.Experiments**
Thanks for the comment. Here are our responses.
**Baselines**: We added MA-DSBO and MDBO as additional baselines in the hyper-cleaning problem, as shown in Table 1 of the manuscript. The step sizes $\alpha,\beta,\gamma$ for MA-DSBO and MDBO and the moving-average parameter of MA-DSBO are the same as those of SPARKLE and D-SOBA. The number of inner and outer iterations of MA-DSBO is set to 5, and the number of Hessian-inverse estimation iterations of MDBO is set to 5. The average test accuracy is shown in Figure 1 (supplementary PDF), which shows that SPARKLE outperforms MA-DSBO and MDBO. The other algorithms in Table 1 are not included in the comparison: DSBO, Gossip DSBO, and SLAM suffer from worse transient complexity and have already been outperformed by other decentralized SBO algorithms, while LoPA requires a personalized lower-level problem, which does not match the hyper-cleaning setting.
**Standard deviation**: Thanks for the comment. The standard deviation over the last 40 iterations in hyper-cleaning with $p=0.3$ is shown in Table 5 of the manuscript, and we will add the standard deviations for the other scenarios soon. Table 1 shows the average loss of the different algorithms over the last 500 iterations across 10 independent trials for the distributed policy evaluation task in reinforcement learning.
[1] Exponential Graph is Provably Efficient for Decentralized Deep Training
---
Rebuttal 2:
Title: Can we have your valuable feedback on our rebuttals?
Comment: Dear Reviewer ehMq,
We sincerely thank you for your valuable comments and appreciate the time and effort dedicated to providing constructive feedback on our submission. We have carefully considered your suggestions and made significant efforts to address them. Given the limited timeframe of the rebuttal period, we would greatly appreciate if you could review our rebuttal and let us know if any concerns remain. Your insights are invaluable as we strive to enhance the quality of our work.
Best,
The authors of paper 7209
---
Rebuttal Comment 2.1:
Title: Reply to the rebuttal
Comment: Dear Authors,
Thank you for your detailed rebuttal, which addressed my questions and concerns. I would be glad to increase my score.
Best,
Reviewer ehMq
---
Reply to Comment 2.1.1:
Title: Thanks for your reply!
Comment: We are delighted that your concerns have been resolved, and we sincerely appreciate your positive feedback. We will incorporate your suggestions in our later revision. Thank you again for your valuable input. | Summary: This paper introduces SPARKLE, a unified single-loop primal-dual framework for decentralized stochastic bilevel optimization. SPARKLE is highly versatile: it can incorporate various heterogeneity-correction techniques and allows different strategies for solving the upper- and lower-level problems. The authors provide a unified convergence analysis for SPARKLE and its variants, showing state-of-the-art convergence rates compared to existing decentralized bilevel algorithms. Theoretical findings are supported by numerical experiments.
Strengths: The proposed method SPARKLE is highly versatile: it can support various decentralized mechanisms and topologies across optimization levels. The paper provides a comprehensive theoretical analysis for all SPARKLE variants, demonstrating that they achieve state-of-the-art convergence rates compared to existing decentralized bilevel algorithms. In addition, the authors show that the convergence performance of SPARKLE variants is comparable to their single-level counterparts, and that employing mixing strategies outperforms using GT alone.
Weaknesses: 1. In Line 303-304, the authors claim that all the SPARKLE-based algorithms generally achieve higher test accuracy than D-SOBA, while ED and EXTRA especially outperform GT. However, in Figure 1, particularly in the middle and right figures, the test accuracies are very similar, making such claims insufficiently supported. Additionally, such small differences might be due to randomness.
2. The baseline algorithms used in different experiments are different. It is unclear why the authors did not use the same baselines. Furthermore, just one or two existing decentralized SBO algorithms in the hyper-cleaning or distributed policy evaluation experiments is not enough to evaluate the performance of SPARKLE. More state-of-the-art decentralized SBO algorithms need to be added.
3. It would be important for this paper to evaluate more diverse and larger-scale tasks, including non-linear models such as neural networks (e.g., ResNet), applied to various bilevel problems (e.g., meta-learning).
4. The authors did not provide the code to reproduce the experiments, raising concerns about reproducibility and making it difficult for other researchers to verify their results.
Minor:
1. In Assumption 1, it would be clearer to specify the variable with respect to which the function is Lipschitz continuous.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. The authors use the same decentralized mechanism for the lower-level and auxiliary variables. Can lower-level and auxiliary variables use different decentralized mechanisms?
2. Could the authors provide more explanation on why SPARKLE can support utilizing different mixing matrices across levels? Are there any applications for such scenarios?
3. The authors only show the upper level loss in the policy evaluation experiment. What about the test accuracy?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: As the authors stated, SPARKLE supports only strongly-convex problems in the lower-level optimization, and the condition number of the lower-level problem significantly impacts the performance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the invaluable comments. We have thoroughly addressed all questions. Should there be any additional concerns or inquiries, we are more than willing to provide further clarification.
Re Weakness:
1. Thanks for raising this concern. In fact, the second row of Table 5 in the manuscript (page 62) also gives the average test accuracy and standard deviation of SPARKLE with different communication strategies when $p=0.3, \theta=0.2$ over 10 independent trials, corresponding to the right subfigure of Fig. 1. The test accuracy of SPARKLE with ED and EXTRA is 1-2% higher than with GT, with an acceptable standard deviation. Thus we can claim that, for a suitable $\theta$, there is a **real** benefit from ED and EXTRA, and that it does not stem from randomness.
2. Thank you for the comment. We added MA-DSBO and MDBO as additional baselines in the hyper-cleaning problem, as shown in Table 1 of the manuscript. The step sizes $\alpha,\beta,\gamma$ for MA-DSBO and MDBO and the moving-average parameter of MA-DSBO are the same as those of SPARKLE and D-SOBA. The number of inner and outer iterations of MA-DSBO is set to 5, and the number of Hessian-inverse estimation iterations of MDBO is set to 5. The average test accuracy is shown in Figure 1, which shows that SPARKLE outperforms MA-DSBO and MDBO. The other algorithms in Table 1 are not included in the comparison: DSBO, Gossip DSBO, and SLAM suffer from worse transient complexity and have already been outperformed by other decentralized SBO algorithms, while LoPA requires a personalized lower-level problem, which does not match the hyper-cleaning setting.
3. Thanks for this comment. Here we consider a meta-learning problem in the decentralized scenario with $N=8$ nodes and an adjusted-ring graph. We consider a 5-way 5-shot task on the miniImageNet dataset and compare SPARKLE with D-SOBA and with MAML using decentralized communication. For D-SOBA and SPARKLE, the step sizes are $\beta=\gamma=0.1$ and $\alpha=0.001$. For MAML, the inner step size is 0.1, the outer step size is 0.001, and the number of inner-loop steps is 3. For all algorithms, the number of tasks is set to 32. We repeated the experiment only once due to time limitations. The training and test accuracy of the three algorithms is shown in Figure 4 of the PDF document. It can be observed that SPARKLE outperforms the other two algorithms.
4. Thanks for this comment. As SPARKLE is based on stochastic gradient descent, we believe the experiments in the manuscript are not difficult to reproduce, which is why we did not provide the code for SPARKLE. However, we will consider releasing the code soon.
**Minor**: We thank the reviewer for the suggestion. The Lipschitz continuity is with respect to both $x$ and $y$.
Re Questions:
1. SPARKLE permits the use of different decentralized mechanisms for lower-level and auxiliary variables. As noted in Corollary 1, we have $\delta_y = \delta_z$ and $\hat{\delta_y} = \hat{\delta_z}$ when $\mathbf{W}_y = \mathbf{W}_z$ ( See more details in Section **3.4. Different strategies across optimization levels** and Lemma 18). Given that the influence with respect to $y$ and $z$ is identical ($n^3,n$ terms), we employ the same decentralized mechanisms for both variables for simplicity. However, different decentralized mechanisms can be used for $y$ and $z$—for instance, applying EXTRA to $y$ and GT to $z$. The theoretical transient iteration complexities for these configurations can be readily computed using Lemma 18. In SPARKLE, the primary differences lie between the variables $x,y$ and $x,z$, rather than between $y,z$. Therefore, we focus on scenarios where mixing strategies are applied to $x,y$, while maintaining a uniform strategy for $y,z$.
2. Each mixing matrix $\mathbf{W}$ represents a communication graph or topology.
Formally, we can directly use different mixing matrices $\mathbf{W}_x,\mathbf{W}_y,\mathbf{W}_z$, which corresponds to different communication topologies. Our key finding regarding the use of different mixing matrices is that some SPARKLE variants, such as SPARKLE-ED, can maintain the transient iteration complexity (up to a constant factor) even when the communication topology for the upper-level variable $x$ is sparser within certain ranges. In other words, by fixing the communication graphs for $y,z$ and using the same communication graphs for both $y,z$, while applying a sparser communication graph for $x$, the transient iteration complexities will remain the same as if the same communication graph were applied to $x$ along with $y,z$. This implies that a sparser communication graph for $x$ can reduce communication costs while preserving the same transient iteration complexity. Please refer to more detailed discussions in Section **3.5. Different topologies across optimization levels** and Section **C.2.2 Theoretical gap between upper-level and lower-level, Appendix**. We provide upper bounds for the relative sparsity of the topology for $x$ compared to $y,z$ to ensure that the transient iteration complexity remains unchanged.
There are many practical scenarios where using different communication topologies for $x,y,z$ is beneficial. For instance, when the dimension of $x$ is larger than that of $y$, or when the computational overhead for computing the gradient of the upper-level function $f$ is higher than that of the lower-level function $g$, employing a relatively sparser communication topology for $x$ compared to $y$ can reduce communication time and costs.
3. We thank the reviewer for the valuable comment. As the policy evaluation experiment is a reinforcement learning problem on a synthetic dataset, it is hard to provide a "test accuracy". To verify the test performance of SPARKLE, we constructed a fixed test set with 10,000 samples, generated in the same way as the training samples. The average test loss of the different algorithms is shown in Figure 3, which illustrates the better test performance of SPARKLE.
---
Rebuttal 2:
Title: Can we have your valuable feedback on our rebuttals?
Comment: Dear Reviewer AnxT,
We sincerely thank you for your valuable comments and appreciate the time and effort dedicated to providing constructive feedback on our submission. We have carefully considered your suggestions and made significant efforts to address them. Given the limited timeframe of the rebuttal period, we would greatly appreciate if you could review our rebuttal and let us know if any concerns remain. Your insights are invaluable as we strive to enhance the quality of our work.
Best,
The authors of paper 7209
---
Rebuttal Comment 2.1:
Comment: I thank the authors for the rebuttal. Since all my concerns have been addressed, I am happy to increase my score.
---
Reply to Comment 2.1.1:
Title: Thanks for your reply!
Comment: We are delighted that your concerns have been resolved, and we sincerely appreciate your positive feedback. We will incorporate your suggestions in our later revision. Thank you again for your valuable input. | Summary: The paper introduces SPARKLE, a unified framework for decentralized stochastic bilevel optimization that addresses several limitations in existing approaches. SPARKLE incorporates various heterogeneity-correction techniques, including EXTRA, Exact Diffusion, and Gradient Tracking, and allows for different strategies in upper and lower-level problems. This flexibility enables SPARKLE to outperform previous methods, achieving state-of-the-art performance in terms of convergence rate, gradient complexity, communication cost, and transient iteration complexity. Notably, the framework demonstrates that EXTRA and Exact Diffusion are more suitable for decentralized bilevel optimization than Gradient Tracking. SPARKLE provides a unified convergence analysis applicable to all its variants without requiring restrictive assumptions like bounded gradients or data heterogeneity. Through numerical experiments, the paper demonstrates that SPARKLE achieves better performance compared to existing decentralized bilevel optimization algorithms.
Strengths: 1.The SPARKLE framework's unified approach to decentralized bilevel optimization, incorporating multiple heterogeneity-correction techniques (EXTRA, Exact Diffusion, and Gradient Tracking), reveals an important insight: Gradient Tracking (GT) is not the optimal choice in some situations.
2.The authors demonstrate its superiority of SPARKLE in several theoretical aspects of decentralized bilevel optimization, achieving state-of-the-art results across multiple performance metrics. The framework exhibits faster convergence rates, lower gradient complexity, reduced communication costs, and improved transient iteration complexity compared to existing methods.
3.Extensive experimental results show SPARKLE consistently outperforming existing methods across various benchmark problems and real-world scenarios, particularly in terms of convergence rates. This empirical evidence strongly supports SPARKLE's practical utility and efficiency in solving complex optimization tasks in decentralized settings.
Weaknesses: 1.Although the paper proposes mixing strategies to handle heterogeneity, it does not provide a strong motivation or clear explanation for the necessity and benefits of these mixing strategies over choosing a single strategy, such as EXTRA or ED. This lack of clarity can make it difficult for practitioners to understand the rationale behind the proposed approach and its potential advantages.
2.The primal-dual approach employed in SPARKLE incurs increased communication and storage overhead per communication round compared to non-primal-dual methods, due to the necessity of maintaining and updating dual variables in addition to primal variables across the decentralized network.
3.The paper appears to lack an explicit analysis of consensus error, a crucial metric for evaluating solution agreement across nodes in decentralized networks. This omission represents a significant gap in the framework's evaluation, potentially limiting our understanding of SPARKLE's effectiveness in achieving consistent solutions in distributed optimization scenarios.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1.Over an undirected time-invariant graph with non-negative, symmetric and doubly stochastic mixing matrices, do the algorithms SPARKLE with GT, EXTRA, and ED have different ranges of application? If so, how do their application ranges differ in the context of decentralized bilevel optimization?
2.Can the convergence rate of SPARKLE be improved beyond the results presented in Table 1, considering that the algorithm without momentum step in reference [27] achieves comparable convergence results in single-level scenario? In the absence of the momentum step, how would the convergence results of Algorithm 1 be affected? If the primal-dual framework is removed, how would the convergence results of Algorithm 1 be affected?
3.Could the paper provide a detailed comparison of the computational time and communication rounds required to achieve specific test accuracy or loss thresholds among the SPARKLE algorithms with different heterogeneity-correction techniques in experimental results?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1.By requiring strong convexity in the lower-level problem, SPARKLE excludes a wide range of practical scenarios where the lower-level optimization may be non-convex or only generally convex. This constraint substantially narrows the scope of decentralized bilevel optimization problems that SPARKLE can effectively address.
2.The performance of SPARKLE is significantly impacted by the condition number of the lower-level problem. This sensitivity could limit its effectiveness in certain problem scenarios.
3.The experiments seem to lack a comparison with single-level algorithms, which could provide valuable insights into the relative performance of SPARKLE in the distributed policy evaluation in reinforcement learning. While the study demonstrates SPARKLE's superior performance against other decentralized stochastic bilevel optimization (DSBO) approaches like MDBO and SLDBO, it fails to benchmark against established single-level methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the invaluable comments. We have thoroughly addressed all questions. Should there be any additional concerns or inquiries, we are more than willing to provide further clarification.
Response to Weakness:
1.**Motivation for the necessity and benefits of mixing strategies over choosing a single strategy.**
**[Motivation]** Bilevel optimization presents distinct challenges for the upper- and lower-level problems, with the upper-level often being non-convex and the lower-level strongly convex. This difference motivates us to explore whether using different update strategies for each level could provide advantages over the uniform gradient tracking (GT) strategy throughout all optimization levels used in existing approaches. To our knowledge, this question has not been addressed in prior literature.
**[Benefits]** Whether mixing strategies would bring in benefits should be discussed in separate cases.
- Before our work, existing works utilized DGD or GT in decentralized bilevel algorithms. Compared to these works, we find that algorithms with mixing strategies, such as SPARKLE-GT-ED, converge **faster** than employing GT alone; see Table 2 in the paper.
- On the other hand, our results in Table 2 further show that mixing strategies achieve the **same** convergence rate as a single strategy if ED or EXTRA is utilized.
With these results, we can conclude that mixing strategies do not necessarily bring benefits compared to the best single strategies, such as EXTRA and ED (which we propose).
**[Contributions]** The main aim of this paper is to showcase the general SPARKLE framework rather than advocate for mixing strategies specifically. Our goal is to highlight the power of SPARKLE in analyzing mixing-strategy algorithms, which was not possible with previous methods. Our key findings are:
- We have clarified the theoretical performance of mixing strategies, and that mixing strategies may not always offer advantages, which is still significant. Prior to our work, it was unclear whether mixing strategies would be beneficial.
- We reveal that the performance of both single-strategy and mixing-strategy algorithms depends largely on the lower-level optimization (Theorem 1 and Corollary 4).
2.**Extra overhead brought by the primal-dual approach.**
We agree that primal-dual methods introduce additional storage overhead due to the presence of dual variables. However, the communication cost per iteration does not necessarily increase. The dual variables of EXTRA and ED can be eliminated from the recursion; see the recursions in Table 3 as well as Section B.2 (Appendix). As observed in Table 3, only the primal variables need to be communicated for SPARKLE-EXTRA and SPARKLE-ED, making them as communication-efficient as primal approaches like D-SOBA.
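To illustrate the dual elimination, here is a toy scalar instantiation of the standard EXTRA recursion $\mathbf{x}^{k+2}=(I+\mathbf{W})\mathbf{x}^{k+1}-\tilde{\mathbf{W}}\mathbf{x}^{k}-\alpha(\nabla f(\mathbf{x}^{k+1})-\nabla f(\mathbf{x}^{k}))$ with $\tilde{\mathbf{W}}=(I+\mathbf{W})/2$ (the quadratics, ring weights, and step size below are our own illustrative choices, not from the paper); note that only the primal iterates are ever communicated:

```python
# EXTRA with the dual variable eliminated: each iteration communicates only
# the primal iterates (via W and W_tilde).  Node i holds f_i(x) = a_i*(x-b_i)**2/2.
a = [1.0, 2.0, 3.0, 4.0]
b = [1.0, 2.0, 3.0, 4.0]
n = len(a)

def grad(x):
    return [a[i] * (x[i] - b[i]) for i in range(n)]

def mixW(v):       # ring mixing: weight 1/2 on self, 1/4 on each neighbor
    return [0.5 * v[i] + 0.25 * v[(i - 1) % n] + 0.25 * v[(i + 1) % n]
            for i in range(n)]

def mixWtilde(v):  # W_tilde = (I + W) / 2
    w = mixW(v)
    return [0.5 * (v[i] + w[i]) for i in range(n)]

alpha = 0.05
x_prev = [0.0] * n
x = [wi - alpha * gi for wi, gi in zip(mixW(x_prev), grad(x_prev))]  # first step
for _ in range(3000):
    g, g_prev = grad(x), grad(x_prev)
    x_next = [xi + wi - wt - alpha * (g[i] - g_prev[i])
              for i, (xi, wi, wt) in enumerate(zip(x, mixW(x), mixWtilde(x_prev)))]
    x_prev, x = x, x_next
# all nodes converge to the consensus minimizer sum(a_i*b_i)/sum(a_i) = 3.0
```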
3.**Lacking an explicit analysis of consensus error.**
We provide a brief analysis of the consensus error in a separate **Comment**. The asymptotic consensus error for SPARKLE variants using EXTRA, ED, or GT is given by
$\frac{1}{K}\sum_{k=0}^K\mathbb{E}\left[\frac{\|\mathbf{x}^k-\bar{\mathbf{x}}^k\|^2}{n}+\frac{\|\mathbf{y}^k-\bar{\mathbf{y}}^k\|^2}{n}\right]
=\mathcal{O}\left( \frac{n}{K}\left(\frac{1}{1-\rho_y}+\frac{1}{1-\rho_z}\right)\right),$
where $\rho_y,\rho_z$ are spectrum gaps of relevant mixing matrices.
Re Questions:
**Q1: Application of EXTRA, ED and GT.**
EXTRA, ED, and GT can be used for the same applications. However, bilevel algorithms based on GT incur more communication overhead (refer to the detailed update rules in Table 3) and suffer from longer transient iterations compared to the other two. For this reason, we recommend using SPARKLE-EXTRA or SPARKLE-ED for decentralized bilevel optimization problems.
**Q2: The influence of each algorithmic component.**
**[Momentum cannot be removed]** The momentum step, or moving average, is essential for mitigating hyper-gradient estimation errors. These errors arise from sampling variance and estimation inaccuracies in the lower-level solution $y^\star(\bar{x}^k)$ and auxiliary-level solution $z_\star^k$, which differ from single-level optimization. Properly selecting the moving average parameter $\theta$ helps reduce these negative effects and ensures SPARKLE's convergence. Without the moving average (i.e., $\theta=1$), convergence cannot be guaranteed. For more details, please refer to **Remark 6** (lines 829-836, Appendix) and our proof.
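The variance-damping role of the moving average can be seen in a short simulation (our own toy stand-in, not the paper's estimator: an exponential moving average with parameter $\theta$ applied to i.i.d. noisy estimates of a constant has stationary variance roughly $\theta/(2-\theta)$ times the raw variance):

```python
import random

random.seed(0)
theta = 0.1          # moving-average parameter
true_val = 1.0       # the quantity being estimated
h = true_val         # averaged estimate, initialized at the true value
raw, ema = [], []
for _ in range(20000):
    noisy = true_val + random.gauss(0.0, 1.0)  # noisy stand-in for a hyper-gradient estimate
    h = (1 - theta) * h + theta * noisy        # moving-average update
    raw.append(noisy)
    ema.append(h)

def var(xs):
    m = sum(xs) / len(xs)
    return sum((v - m) ** 2 for v in xs) / len(xs)

ratio = var(ema) / var(raw)   # close to theta / (2 - theta), i.e. about 0.053
```

With $\theta=0.1$ the averaged estimate carries roughly a twentieth of the raw noise variance, which is the effect exploited to tame hyper-gradient estimation errors.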
**[Primal-dual framework cannot be removed]** The primal-dual framework is crucial for formulating heterogeneity correction methods and deriving specific algorithms, as detailed in Table 3. It enables the recovery of algorithms by defining the matrices $A, B, C$. This framework is vital for the basic transformation in Section C.1.2 and for establishing SPARKLE's convergence under Assumptions 1, 2, and 4.
**Q3:Detailed comparison of the computational time and communication rounds.**
We thank the reviewer for the suggestion. For the data hyper-cleaning problem, we report the average gradient-computation time required to reach a given test accuracy for SPARKLE with different communication strategies, as well as for D-SOBA, MDBO, and MA-DSBO-GT, in Figure 1 of the PDF document. It can be observed that SPARKLE needs less computation than D-SOBA and MA-DSBO-GT to achieve a given test accuracy, and SPARKLE with EXTRA for the lower-level communication outperforms the other strategies.
Response to Limitations:
**1.**Please refer to Section 2 in **Global Rebuttal**.
**2.Lacking a comparison with single-level algorithms.**
We thank the reviewer for the suggestion. We have included single-level EXTRA algorithms in the comparison of the different algorithms for distributed policy evaluation. Table 1 shows the average loss of the different algorithms over the last 500 iterations across 10 independent trials. We can observe that SPARKLE has convergence performance similar to that of the single-level algorithms.
---
Rebuttal 2:
Title: Analysis of Consensus errors (Part 1)
Comment: **Lemma 18**: Suppose that Assumptions 1-4 hold. Then there exist constant step-sizes $\alpha, \beta, \gamma, \theta$, such that Lemma 17 holds and
\begin{equation} \begin{aligned} \frac{1}{K}\sum_{k=0}^K\mathbb{E}\left[\frac{\Vert \mathbf{x}^k-\bar{\mathbf{x}}^k\Vert ^2}{n}+\frac{\Vert \mathbf{y}^k-\bar{\mathbf{y}}^k\Vert ^2}{n}\right] \lesssim_K \frac{n}{K}\left(\frac{\Vert \mathbf{O}_z\Vert ^2\Vert \mathbf{O}_z^{-1}\Vert ^2\Vert \mathbf{\Lambda}\_{za}\Vert ^2}{1-\Vert \mathbf{\Gamma}_z\Vert }+\frac{\Vert \mathbf{O}_y\Vert ^2\Vert \mathbf{O}_y^{-1}\Vert ^2\Vert \mathbf{\Lambda}\_{ya}\Vert ^2}{1-\Vert \mathbf{\Gamma}_y\Vert }\right), \end{aligned} \end{equation} where $\lesssim_K$ denotes the asymptotic rate when $K\to \infty$.
**Proof**
Suppose $\alpha$, $\beta$, $\gamma$, and $\theta$ satisfy the constraints given in Eq. (89) and Eq. (90), which ensures that Theorem 1 (Lemma 17) holds.
For clarity, we define the constants: \begin{equation}
\begin{aligned} c_1=\frac{9\alpha^2L_{z^\star}^2}{\gamma^2\mu_g^2}+\frac{438\kappa^4\alpha^2}{\beta^2\mu_g^2}L_{y^{\star}}^2, \quad c_2=10\left(L^2+\frac{\theta\sigma_{g,2}^2}{n}\right). \end{aligned} \end{equation} Then there exist $\alpha$, $\beta$, $\gamma$, and $\theta$ that satisfy the constraints in Eq. (89) and (90), and also: \begin{equation} c_1\le 0.01 L^{-2}, \quad c_2\le 11 L^2. \quad (*)
This implies that $c_1 c_2<0.2$. We take such values for the step-sizes in the following proof.
We proceed by substituting Eq. (41) into Eq. (57), yielding:
\begin{equation}
\begin{aligned} \sum_{k=-1}^K\mathbb{E}[I_k] \leq 4 c_1\left(\frac{\Phi(\bar{x}^0)-\inf\Phi}{\alpha}+c_2 \sum_{k=0}^{K-1}\mathbb{E}\left[\frac{\Delta_k}{n}+I_k\right]+\frac{3\theta}{n}K\left(\sigma_{f,1}^2+2\sigma_{g,2}^2\frac{L_{f,0}^2}{\mu_g^2}\right)\right) +510\kappa^4\sum_{k=0}^K\mathbb{E}\left[\frac{\Delta_k}{n}\right]+\frac{3\Vert z^1_{\star}\Vert ^2}{\mu_g\gamma} \\\\+\frac{6(K+1)\gamma}{\mu_g n}\left(3\sigma_{g,2}^2\frac{L_{f,0}^2}{\mu_g^2}+\sigma_{f,1}^2\right) +73\kappa^4\left(\frac{4}{\beta\mu_g}\Vert \bar{y}^{0}-y^{\star}(\bar{x}^{0})\Vert ^2 +\frac{4K\sigma_{g,1}^2}{n\mu_g}\beta\right). \end{aligned} \end{equation}
Subtracting $4c_1c_2\sum_{k=0}^{K-1}\mathbb{E}[I_k]$ from both sides, we get:
\begin{equation} \begin{aligned} \sum_{k=-1}^K\mathbb{E}[I_k] \lesssim \frac{\Phi(\bar{x}^0)-\inf\Phi}{\alpha}+\frac{\theta}{n}K\left(\sigma_{f,1}^2+\sigma_{g,2}^2\frac{L_{f,0}^2}{\mu_g^2}\right) +\kappa^4\sum_{k=0}^K\mathbb{E}\left[\frac{\Delta_k}{n}\right]+\frac{\Vert z^1_{\star}\Vert ^2}{\mu_g\gamma} \\\\+\frac{K\gamma}{\mu_gn}\left(\sigma_{g,2}^2\frac{L_{f,0}^2}{\mu_g^2}+\sigma_{f,1}^2\right) +\kappa^4\left(\frac{1}{\beta\mu_g}\Vert \bar{y}^{0}-y^{\star}(\bar{x}^{0})\Vert ^2 +\frac{K\sigma_{g,1}^2}{n\mu_g}\beta\right). \end{aligned} \end{equation}
Substituting Eq. (57) into Eq. (41), we obtain:
\begin{equation} \begin{aligned} \frac{1}{4}\sum_{k=0}^K\mathbb{E}\left\Vert \bar{r}^{k+1}\right\Vert ^2&\leq\frac{\Phi(\bar{x}^0)-\inf\Phi}{\alpha}+c_2\sum_{k=0}^K\mathbb{E}\left[\frac{\Delta_k}{n}\right]+c_2c_1\sum_{k=0}^K\mathbb{E}\Vert \bar{r}^k\Vert ^2 +\\\\&c_2 \left( 510\kappa^4\sum_{k=0}^K\mathbb{E}\left[\frac{\Delta_k}{n}\right]+\frac{3\Vert z^1_{\star}\Vert ^2}{\mu_g\gamma} +\frac{6(K+1)\gamma}{\mu_gn}\left(3\sigma_{g,2}^2\frac{L_{f,0}^2}{\mu_g^2}+\sigma_{f,1}^2\right) +73\kappa^4\left(\frac{4}{\beta\mu_g}\Vert \bar{y}^{0}-y^{\star}(\bar{x}^{0})\Vert ^2 +\frac{4K\sigma_{g,1}^2}{n\mu_g}\beta\right)\right) \\\\&+\frac{3\theta}{n}(K+1)\left(\sigma_{f,1}^2+2\sigma_{g,2}^2\frac{L_{f,0}^2}{\mu_g^2}\right). \end{aligned} \end{equation}
Subtracting $c_2c_1\sum_{k=0}^K\mathbb{E}\Vert \bar{r}^k\Vert ^2$ from both sides, we get
\begin{equation} \begin{aligned} \sum_{k=0}^K\mathbb{E}\left\Vert \bar{r}^{k+1}\right\Vert ^2 \lesssim&\frac{\Phi(\bar{x}^0)-\inf\Phi}{\alpha}+\kappa^4\sum_{k=0}^K\mathbb{E}\left[\frac{\Delta_k}{n}\right]+\frac{\theta}{n}K\left(\sigma_{f,1}^2+\sigma_{g,2}^2\frac{L_{f,0}^2}{\mu_g^2}\right) \\\\&+\frac{\Vert z^1_{\star}\Vert ^2}{\mu_g\gamma} +\frac{K\gamma}{\mu_gn}\left(\sigma_{g,2}^2\frac{L_{f,0}^2}{\mu_g^2}+\sigma_{f,1}^2\right) +\kappa^4\left(\frac{1}{\beta\mu_g}\Vert \bar{y}^{0}-y^{\star}(\bar{x}^{0})\Vert ^2 +\frac{K\sigma_{g,1}^2}{n\mu_g}\beta\right). \end{aligned} \end{equation}
---
Rebuttal 3:
Title: Analysis of Consensus errors (Part 2)
Comment: Combining previous upper bounds for $ \sum_{k=-1}^K\mathbb{E}[I_k]$ and $\sum_{k=0}^K\mathbb{E}\left\Vert \bar{r}^{k+1}\right\Vert ^2$ with Eq. 83, we obtain
\begin{equation} \begin{aligned} &\sum_{k=0}^K\mathbb{E}\left[\Delta_k\right] \lesssim(\eta_1+\kappa^2L_{y^{\star}}^2\eta_2)\alpha^2\sum_{k=0}^K\mathbb{E}\Vert \bar{\mathbf{r}}^{k+1}\Vert ^2+\kappa\eta_2\beta\Vert \bar{\mathbf{y}}^0-\mathbf{y}^{\star}(\bar{x}^{0})\Vert ^2+K\eta_2\beta^2\sigma_{g,1}^2 \\\\+&\underbrace{\left(\frac{\kappa^2\Vert \mathbf{O}_x^{-1}\Vert ^2\Vert \mathbf{O}\_x\Vert ^2\Vert {\mathbf{\Lambda}}\_{xa}\Vert ^2\alpha^2}{(1-\Vert \mathbf{\Gamma}_x\Vert )^2} +\frac{\Vert \mathbf{O}\_z\Vert ^2\Vert \mathbf{O}_z^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{za}\Vert ^2}{1-\Vert \mathbf{\Gamma}_z\Vert }\cdot\frac{\gamma^2(L\_{g,1}^2+(1-\Vert \mathbf{\Gamma}_z\Vert )\sigma\_{g,2}^2)}{1-\Vert \mathbf{\Gamma}_z\Vert }\right)}\_{\eta_3}\sum\_{k=-1}^K\mathbb{E}[nI_k] \\\\+&\frac{\kappa^2\beta^2K\Vert \mathbf{O}_y\Vert ^2\Vert \mathbf{O}_y^{-1}\Vert ^2 \Vert \mathbf{\Lambda}\_{ya}\Vert ^2}{1-\Vert \mathbf{\Gamma}_y\Vert } n\sigma\_{g,1}^2+\frac{\kappa^2\Vert \mathbf{O}_y\Vert ^2\mathbb{E}\Vert \hat{\mathbf{e}}_y^{0}\Vert ^2}{1-\Vert \mathbf{\Gamma}_y\Vert }+\frac{\Vert \mathbf{O}_z\Vert ^2\mathbb{E}\Vert \hat{\mathbf{e}}_z^{0}\Vert ^2}{1-\Vert \mathbf{\Gamma}_z\Vert } \\\\+& \frac{\kappa^2\Vert \mathbf{O}_x\Vert ^2\mathbb{E}\Vert \hat{\mathbf{e}}_x^{0}\Vert ^2}{1-\Vert \mathbf{\Gamma}_x\Vert }+\frac{\kappa^2\alpha^2\Vert \mathbf{O}_x\Vert ^2\Vert \mathbf{O}_x^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{xa}\Vert ^2}{\theta(1-\Vert \mathbf{\Gamma}_x\Vert )^2} \left\Vert \widetilde{\nabla}\mathbf{\Phi}(\bar{\mathbf{x}}^{0})\right\Vert ^2 \\\\+&Kn\gamma^2\frac{\Vert \mathbf{O}_z\Vert ^2\Vert \mathbf{O}_z^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{za}\Vert ^2}{1-\Vert \mathbf{\Gamma}\_z\Vert }\left(\kappa^2\sigma\_{g,2}^2+\sigma\_{f,1}^2\right) +Kn\kappa^2\alpha^2\theta\frac{\Vert \mathbf{O}_x\Vert ^2\Vert \mathbf{O}_x^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{xa}\Vert ^2}{(1-\Vert \mathbf{\Gamma}_x\Vert )^2}\left(\kappa^2\sigma\_{g,2}^2+\sigma\_{f,1}^2\right) \\\\
\lesssim&\left[(\eta_1+\kappa^2L\_{y^{\star}}^2\eta_2)\alpha^2+\eta_3\right]\cdot \kappa^4\sum\_{k=0}^K\mathbb{E}\left[\Delta_k\right]+\kappa\eta_2\beta\Vert \bar{\mathbf{y}}^0-\mathbf{y}^{\star}(\bar{x}^{0})\Vert ^2+K\eta_2\beta^2\sigma\_{g,1}^2 \\\\+&n\left[(\eta_1+\kappa^2L\_{y^{\star}}^2\eta_2)\alpha^2+\eta_3\right]\left[\frac{1}{\alpha}+\frac{\theta}{n}K\left(\sigma\_{f,1}^2+\kappa^2\sigma\_{g,2}^2\right) +\frac{1}{\mu_g\gamma} +\frac{K\gamma}{\mu_gn}\left(\kappa^2\sigma\_{g,2}^2+\sigma\_{f,1}^2\right) +\kappa^4\left(\frac{1}{\beta\mu_g} +\frac{K\sigma\_{g,1}^2}{n\mu_g}\beta\right)\right] \\\\+&\frac{\kappa^2\beta^2K\Vert \mathbf{O}_y\Vert ^2\Vert \mathbf{O}_y^{-1}\Vert ^2 \Vert \mathbf{\Lambda}\_{ya}\Vert ^2}{1-\Vert \mathbf{\Gamma}_y\Vert } n\sigma\_{g,1}^2+\frac{\kappa^2\alpha^2\Vert \mathbf{O}_x\Vert ^2\Vert \mathbf{O}_x^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{xa}\Vert ^2}{\theta(1-\Vert \mathbf{\Gamma}_x\Vert )^2} \left\Vert \widetilde{\nabla}\mathbf{\Phi}(\bar{\mathbf{x}}^{0})\right\Vert ^2 \\\\+&\frac{\kappa^2\Vert \mathbf{O}_y\Vert ^2\mathbb{E}\Vert \hat{\mathbf{e}}_y^{0}\Vert ^2}{1-\Vert \mathbf{\Gamma}_y\Vert }+\frac{\Vert \mathbf{O}_z\Vert ^2\mathbb{E}\Vert \hat{\mathbf{e}}_z^{0}\Vert ^2}{1-\Vert \mathbf{\Gamma}_z\Vert } +\frac{\kappa^2\Vert \mathbf{O}_x\Vert ^2\mathbb{E}\Vert \hat{\mathbf{e}}_x^{0}\Vert ^2}{1-\Vert \mathbf{\Gamma}_x\Vert } \\\\+&Kn\gamma^2\frac{\Vert \mathbf{O}_z\Vert ^2\Vert \mathbf{O}_z^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{za}\Vert ^2}{1-\Vert \mathbf{\Gamma}_z\Vert }\left(\kappa^2\sigma\_{g,2}^2+\sigma\_{f,1}^2\right) +Kn\kappa^2\alpha^2\theta\frac{\Vert \mathbf{O}_x\Vert ^2\Vert \mathbf{O}_x^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{xa}\Vert ^2}{(1-\Vert \mathbf{\Gamma}_x\Vert )^2}\left(\kappa^2\sigma\_{g,2}^2+\sigma\_{f,1}^2\right). \end{aligned} \end{equation}
Eq. (89) and Eq. (90) imply that
\begin{equation} \begin{aligned} &\eta_1\lesssim \kappa^2+\kappa^2\frac{\Vert \mathbf{O}_x\Vert ^2\Vert \mathbf{O}_x^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{xa}\Vert ^2}{(1-\Vert \mathbf{\Gamma}_x\Vert )^2},\quad \eta_2\lesssim \kappa^2, \quad (\eta_1+\kappa^2L\_{y^{\star}}^2\eta_2)\alpha^2\lesssim \kappa^{-4},\quad \eta_3\lesssim \kappa^{-4}. \end{aligned} \end{equation}
---
Rebuttal 4:
Title: Analysis of Consensus errors (Part 3)
Comment: Taking $\alpha,\beta,\gamma,\theta$ satisfying Eq. (89), Eq. (90), and Eq. (*), and such that $\kappa^4[(\eta_1+\kappa^2L_{y^{\star}}^2\eta_2)\alpha^2+\eta_3]$
is a sufficiently small constant, we can derive the following result: \begin{equation} \begin{aligned} &\frac{1}{K}\sum_{k=0}^K\mathbb{E}\left[\frac{\Delta_k}{n}\right]
\\\\ \lesssim&\frac{\kappa\eta_2\beta}{K}+\eta_2\beta^2\frac{\sigma_{g,1}^2}{n} \\\\+&\left[(\eta_1+\kappa^2L_{y^{\star}}^2\eta_2)\alpha^2+\eta_3\right]\left[\frac{1}{\alpha K}+\frac{\theta}{n}\left(\sigma_{f,1}^2+\kappa^2\sigma_{g,2}^2\right) +\frac{1}{\mu_g\gamma K} +\frac{\gamma}{\mu_gn}\left(\kappa^2\sigma_{g,2}^2+\sigma_{f,1}^2\right) +\kappa^4\left(\frac{1}{\beta\mu_g K} +\frac{\sigma_{g,1}^2}{n\mu_g}\beta\right)\right]
\\\\+&\frac{\kappa^2\beta^2\Vert \mathbf{O}_y\Vert ^2\Vert \mathbf{O}_y^{-1}\Vert ^2 \Vert \mathbf{\Lambda}\_{ya}\Vert ^2}{1-\Vert \mathbf{\Gamma}_y\Vert } \sigma\_{g,1}^2+\frac{\kappa^2\alpha^2\Vert \mathbf{O}_x\Vert ^2\Vert \mathbf{O}_x^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{xa}\Vert ^2}{\theta K(1-\Vert \mathbf{\Gamma}_x\Vert )^2}
\\\\+&\frac{\kappa^2\Vert \mathbf{O}_y\Vert ^2\mathbb{E}\Vert \hat{\mathbf{e}}_y^{0}\Vert ^2}{(1-\Vert \mathbf{\Gamma}_y\Vert )Kn}+\frac{\Vert \mathbf{O}_z\Vert ^2\mathbb{E}\Vert \hat{\mathbf{e}}_z^{0}\Vert ^2}{(1-\Vert \mathbf{\Gamma}_z\Vert )Kn} +\frac{\kappa^2\Vert \mathbf{O}_x\Vert ^2\mathbb{E}\Vert \hat{\mathbf{e}}_x^{0}\Vert ^2}{(1-\Vert \mathbf{\Gamma}_x\Vert )Kn} \\\\+&\gamma^2\frac{\Vert \mathbf{O}_z\Vert ^2\Vert \mathbf{O}_z^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{za}\Vert ^2}{1-\Vert \mathbf{\Gamma}_z\Vert }\left(\kappa^2\sigma\_{g,2}^2+\sigma\_{f,1}^2\right) +\kappa^2\alpha^2\theta\frac{\Vert \mathbf{O}_x\Vert ^2\Vert \mathbf{O}_x^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{xa}\Vert ^2}{(1-\Vert \mathbf{\Gamma}_x\Vert )^2}\left(\kappa^2\sigma\_{g,2}^2+\sigma\_{f,1}^2\right)
\\\\ \lesssim&\frac{\kappa^5\eta_2\alpha}{K}+\kappa^{10}\alpha^2\frac{\sigma\_{g,1}^2}{n}+\frac{\kappa}{ K}\left[(\eta_1+\kappa^2L\_{y^{\star}}^2\eta_2)\alpha+\frac{\eta_3}{\alpha}\right] \\\\+&\left[(\eta_1+\kappa^2L\_{y^{\star}}^2\eta_2)\alpha^2+\eta_3\right]\left[\frac{\theta}{n}\left(\sigma\_{f,1}^2+\kappa^2\sigma\_{g,2}^2\right) +\frac{\kappa^5\alpha}{n}\left(\kappa^2\sigma\_{g,2}^2+\sigma\_{f,1}^2\right) +\kappa^9\frac{\sigma\_{g,1}^2}{n}\alpha\right] \\\\+&\frac{\kappa^2\Vert \mathbf{O}_y\Vert ^2\Vert \mathbf{O}_y^{-1}\Vert ^2 \Vert \mathbf{\Lambda}\_{ya}\Vert ^2}{1-\Vert \mathbf{\Gamma}_y\Vert } \sigma\_{g,1}^2\kappa^8\alpha^2+\frac{\kappa^{-1}\alpha\Vert \mathbf{O}_x\Vert ^2\Vert \mathbf{O}_x^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{xa}\Vert ^2}{ K(1-\Vert \mathbf{\Gamma}_x\Vert )^2}
\\\\+&\alpha^2\frac{\kappa^{10}\Vert \mathbf{O}_y\Vert ^2\Vert \mathbf{O}_y^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{ya}\Vert ^2\Vert {\mathbf{\Lambda}}\_{yb}^{-1}\Vert ^2\zeta^y_0}{K(1-\Vert \mathbf{\Gamma}_y\Vert )}+\alpha^2\frac{\kappa^{8}\Vert \mathbf{O}_z\Vert ^2\Vert \mathbf{O}_z^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{za}\Vert ^2\Vert {\mathbf{\Lambda}}\_{zb}^{-1}\Vert ^2\zeta^z_0}{K(1-\Vert \mathbf{\Gamma}_z\Vert )}
\\\\+&\alpha^2\frac{\kappa^2\Vert \mathbf{O}_x\Vert ^2\Vert \mathbf{O}_x^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{xa}\Vert ^2\Vert {\mathbf{\Lambda}}\_{xb}^{-1}\Vert ^2\zeta^x_0}{K(1-\Vert \mathbf{\Gamma}_x\Vert )}
\\\\+&\kappa^8\alpha^2\frac{\Vert \mathbf{O}_z\Vert ^2\Vert \mathbf{O}_z^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{za}\Vert ^2}{1-\Vert \mathbf{\Gamma}_z\Vert }\left(\kappa^2\sigma\_{g,2}^2+\sigma\_{f,1}^2\right) +\kappa^2\alpha^2\theta\frac{\Vert \mathbf{O}_x\Vert ^2\Vert \mathbf{O}_x^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{xa}\Vert ^2}{(1-\Vert \mathbf{\Gamma}_x\Vert )^2}\left(\kappa^2\sigma\_{g,2}^2+\sigma\_{f,1}^2\right), \end{aligned} \end{equation}
where the second inequality uses Eq. (90).
From Eq. 89 and Eq. 90, we can determine the following asymptotic orders for $\alpha,\beta,\gamma$ and $\theta$
\begin{equation}
\alpha=\mathcal{O}\left(\kappa^{-4}\sqrt{\frac{n}{K\sigma^2}}\right),\quad \beta=\mathcal{O}\left(\sqrt{\frac{n}{K\sigma^2}}\right),\quad \gamma=\mathcal{O}\left(\sqrt{\frac{n}{K\sigma^2}}\right),\quad \theta=\mathcal{O}\left(\kappa\sqrt{\frac{n}{K\sigma^2}}\right).
\end{equation}
Then we get
\begin{equation}
\begin{aligned}
&\frac{1}{K}\sum_{k=0}^K\mathbb{E}\left[\frac{\Delta_k}{n}\right]
\lesssim_K \frac{\kappa^2n}{K}\left(\frac{\Vert \mathbf{O}_z\Vert ^2\Vert \mathbf{O}_z^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{za}\Vert ^2}{1-\Vert \mathbf{\Gamma}_z\Vert }+\frac{\Vert \mathbf{O}_y\Vert ^2\Vert \mathbf{O}_y^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{ya}\Vert ^2}{1-\Vert \mathbf{\Gamma}_y\Vert }\right),
\end{aligned}
\end{equation}
where $\lesssim_K$ denotes the asymptotic rate when $K\to \infty$.
---
Rebuttal 5:
Title: Analysis of Consensus errors (Part 4)
Comment: Then using Eq. 36 and the definition of $\Delta_k$, we get
\begin{equation} \begin{aligned} &\frac{1}{K}\sum\_{k=0}^K\mathbb{E}\left[\frac{\Vert \mathbf{x}^k-\bar{\mathbf{x}}^k\Vert ^2}{n}+\frac{\Vert \mathbf{y}^k-\bar{\mathbf{y}}^k\Vert ^2}{n}\right] \lesssim_K \frac{n}{K}\left(\frac{\Vert \mathbf{O}_z\Vert ^2\Vert \mathbf{O}_z^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{za}\Vert ^2}{1-\Vert \mathbf{\Gamma}_z\Vert }+\frac{\Vert \mathbf{O}_y\Vert ^2\Vert \mathbf{O}_y^{-1}\Vert ^2\Vert {\mathbf{\Lambda}}\_{ya}\Vert ^2}{1-\Vert \mathbf{\Gamma}_y\Vert }\right). \end{aligned} \end{equation} In particular, the corresponding result of SPARKLE variants that using EXTRA, ED or GT is \begin{equation} \begin{aligned} &\frac{1}{K}\sum\_{k=0}^K\mathbb{E}\left[\frac{\Vert \mathbf{x}^k-\bar{\mathbf{x}}^k\Vert ^2}{n}+\frac{\Vert \mathbf{y}^k-\bar{\mathbf{y}}^k\Vert ^2}{n}\right] \lesssim_K \frac{n}{K}\left(\frac{1}{1-\rho_y}+\frac{1}{1-\rho_z}\right), \end{aligned} \end{equation} where $1-\rho_y,1-\rho_z$ are spectrum gaps of relevant mixing matrices for $y,z$ respectively.
We finish the proof.
**To our knowledge, this is the first theoretical result on consensus errors for decentralized stochastic bilevel algorithms whose asymptotic convergence rate exhibits linear speedup.**
---
Rebuttal 6:
Title: Can we have your valuable feedback on our rebuttals?
Comment: Dear Reviewer 4NJf,
We sincerely thank you for your valuable comments and appreciate the time and effort dedicated to providing constructive feedback on our submission. We have carefully considered your suggestions and made significant efforts to address them. Given the limited timeframe of the rebuttal period, we would greatly appreciate it if you could review our rebuttal and let us know whether any concerns remain. Your insights are invaluable as we strive to enhance the quality of our work.
Best,
The authors of paper 7209 | Summary: This paper studies a primal-dual framework for decentralized bilevel optimization. It unifies several heterogeneous correction techniques (gradient tracking and EXTRA). It also provides a shared rate analysis that applies to all variants and avoids several assumptions like gradient boundedness. Several other insights include that deploying ED/EXTRA at the lower level is more beneficial than using GT across both upper/lower levels.
Strengths: - This paper is overall well written, with a clear introduction of the bilevel optimization problem; it shows a comparison of many different approaches and how they fit under the unified framework. A unified convergence rate is provided and discussed in detail when specialized to different settings.
- Several interesting insights are provided, including the benefit of using EXTRA and ED compared to GT.
Weaknesses: - The unification of several algorithms seems a superficial contribution. One just has to introduce a few unifying notations to bring these into the same framework. Probably the more non-trivial part is a unified analysis of the unified algorithm, but the paper doesn't seem to make the case that analyzing the unified algorithm is more difficult or more challenging than analyzing the individual algorithms.
- The rate analysis in Theorem 1 contains many numerical constants like 16/3 and 14/3, which do not seem to be tight. Can the authors comment on the tightness of these constants?
Due to the above reasons, I would say the paper's contribution is marginal.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the invaluable comments. We have thoroughly addressed all questions. Should there be any additional concerns or inquiries, we are more than willing to provide further clarification.
1. **Contribution of our work**
We appreciate the reviewer’s insightful feedback. However, we respectfully disagree that SPARKLE represents a superficial contribution. We clarify this point in Section **Novelty and contribution of our work** of **Global Rebuttal**.
2. **It is non-trivial to establish a unified decentralized bilevel framework**
The reviewer commented that "the unification of several algorithms seems a superficial contribution," which we respectfully disagree with. Existing single-level decentralized algorithms like EXTRA, ED, and GT have significantly different update rules and recursions (**Table 3**). They cannot be unified by simply "making a few unifying notations." To effectively unify these algorithms, our approach transforms each of them into a primal-dual formulation, extracts the principal structures behind their different recursions, and finally introduces unifying notations.
3. **Challenges in our analysis**
This paper provides a novel analysis for a **unified** algorithmic framework, offering **improved convergence rates** and operating under **more relaxed assumptions** compared to existing literature. We have addressed significant challenges to achieve these results.
- [**Unified analysis**] SPARKLE unifies ED, EXTRA, and many GT variants. It allows different update strategies and network topologies across different levels. This unified and general framework poses significant challenges for analysis: (1) The analyses of ED, EXTRA, and GT provided in the literature are drastically different from one another, necessitating the identification of the fundamental mechanisms and core principles that can unify their analysis. (2) Since the SPARKLE framework is a unifying structure, the specific properties endowed by certain algorithms or network topologies cannot be leveraged, forcing the analysis to rely solely on the basic properties, which significantly complicates the proof. (3) The SPARKLE framework is based on a primal-dual structure, whereas existing works, which primarily focus on DGD and GT, rely on the primal structure. This necessitates the development of new analytical techniques for decentralized primal-dual bilevel algorithms, further adding to the complexity of the analysis.
- [**Improved rates**]
All SPARKLE variants achieve state-of-the-art convergence rates compared to existing algorithms. Specifically, SPARKLE is theoretically proven to have shorter transient stages and better robustness to network topologies, surpassing the results achievable in existing works.
(1) Key steps in our proof involve transforming primal-dual forms (Section C.1.2, Appendix) and reformulating the SPARKLE update rules (Equations 33-35). We address consensus biases for each variable $s \in \{x, y, z\}$ and their gradients in aggregated vectors. Analyzing the coupled consensus errors collectively allows us to obtain tighter bounds compared to related works.
(2) While our bilevel algorithms can be reduced to single-level ones under certain trivial lower-level loss functions (Section C.4, Appendix), deriving single-level convergence rates directly from the bilevel analysis is challenging. It requires precise estimation of each update procedure. For more details, please refer to **Comment**.
- [**More relaxed assumptions**] Our theoretical analysis achieves state-of-the-art convergence rates under more relaxed assumptions compared to existing algorithms. Unlike some related works [2-6], we do not assume that the upper-level loss functions $f_i$ and lower-level loss functions $g_i$ are Lipschitz continuous. We handle the bias of gradient estimation through additional steps, which only require high-order Lipschitz constants. Besides, we utilize heterogeneity correction techniques such as EXTRA, ED, and GT to remove the assumptions on bounded gradient dissimilarity. Please refer to **Comment** for more discussion.
4. **Numerical Constants in Theorem 1.**
The focus of our paper is to provide a new unified decentralized bilevel framework that yields **brand new algorithms**, and achieves **state-of-the-art** asymptotic rate, gradient complexity, communication cost, and transient iteration complexity under **more relaxed assumptions** compared to existing methods. Establishing tight bounds for all constants is beyond the scope of our work. Notably, our convergence rate, while potentially not tight for every constant, outperforms the algorithms listed in Table 1 of our paper in terms of $K$, $n$, and $\rho$.
We suspect the reviewer is asking whether the influence of the condition number on our convergence rate is tight. Here are some discussions:
- **[Tight bound in the dominant term]** The term $\kappa^5$ in the dominant term $\kappa^5 \sigma/\sqrt{nK}$ represents state-of-the-art result among single-loop decentralized bilevel algorithms, which is consistent with those presented in [1].
- **[Possibly loose bounds in the higher-order terms]** The condition numbers associated with the higher-order terms, such as $\kappa^{\frac{16}{3}}$, represent our best effort to provide tight upper bounds. However, we acknowledge that these terms may not be optimal, as we have not conducted a comprehensive analysis of the lower bound. It is important to note that many existing works in this field do not provide a detailed analysis of the influence of $\kappa$; instead, its impact is often treated as a trivial constant hidden in the Big-O notation [2-5 of Global Rebuttal Reference]. In comparison to these works, our research represents a significant step towards a better understanding of how the condition number influences the convergence rate.
[1] Optimal algorithms for stochastic bilevel optimization under relaxed smoothness conditions.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Thank you for your response. I can see the contribution and challenges better now. I have increased the score.
---
Rebuttal 2:
Title: Supplement to the omitted parts of the rebuttal to Reviewer E8rC
Comment: - [**Improved rates**]
All SPARKLE variants achieve state-of-the-art convergence rates compared to existing algorithms. In particular, it is theoretically established that SPARKLE has shorter transient stages and demonstrates better robustness to network topologies. These refined results surpass those achievable through existing analyses.
**(1)** A fundamental challenge is capturing the common essence of these heterogeneity correction algorithms. A key step in our proof involves the transformation that utilizes the unified primal-dual forms outlined in Section C.1.2 (Appendix) and the reformulation of the update rules for SPARKLE, as specified in Equations (33-35). We address the biases of each variable $s \in \{x, y, z\}$ along with their corresponding gradients in aggregated vectors $\hat{\mathbf{e}}_s$. By analyzing the coupled consensus errors simultaneously and recursively through the contractive matrices $\Gamma_s$, we obtain tighter bounds than those found in related works.
**(2)** Though our bilevel algorithms can formally reduce to single-level algorithms under certain trivial lower-level loss functions (see Section **C.4**, Appendix), deriving the convergence rates for single-level algorithms directly from the bilevel framework analysis presents its own challenges. We rigorously analyze the hyper-gradient norms in Lemma 8 and provide a tight upper bound. The precise results in Lemma 8 are crucial for deriving the convergence rates of the single-level degeneration versions of the bilevel algorithms within our framework.
- [**More relaxed assumptions**] Our theoretical analysis achieves state-of-the-art convergence rates under more relaxed assumptions compared to existing algorithms.
**(1) Lipschitz Continuous Loss Functions.** Most analyses in existing works rely on Lipschitz continuity of the upper-level loss functions $f_i$ and lower-level loss functions $g_i$ [2-6]. In contrast, we handle the bias of gradient estimation through additional steps, which only require high-order Lipschitz constants. For instance, we estimate the bias of $\nabla_1 f_i(x_i^k, y_i^{k+1})$ as follows:
$$\|\nabla_1 f_i(x_i^k, y_i^{k+1}) - \nabla_1 f_i(\bar{x}^k, y^{\star}(\bar{x}^k))\|^2 \leq 2 \|\nabla_1 f_i(x_i^k, y_i^{k+1}) - \nabla_1 f_i(\bar{x}^k, \bar{y}^{k+1})\|^2 + 2 \|\nabla_1 f_i(\bar{x}^k, \bar{y}^{k+1}) - \nabla_1 f_i(\bar{x}^k, y^{\star}(\bar{x}^k))\|^2 \leq 2L_{f,1}^2 (\|x_i^k - \bar{x}^k\|^2 + \|y_i^{k+1} - \bar{y}^{k+1}\|^2 + \|\bar{y}^{k+1} - y^{\star}(\bar{x}^k)\|^2).$$In contrast, other works may bound gradient or gradient estimation errors directly by a constant, based on assumptions of Lipschitz continuous loss functions.
**(2) Bounded Gradient Dissimilarity.** We utilize heterogeneity correction techniques such as EXTRA, ED, and GT to remove the assumptions on bounded gradient dissimilarity. A common feature of these techniques is that each eigenvalue of the corresponding $L_s$ matrices in **Assumption 3** has magnitude less than 1, ensuring that $\|\Gamma_s\|<1$ in basic transformations (**Eq. (33-35), Appendix**). This allows us to recursively bound consensus errors using $\Gamma_s$. In contrast, other decentralized algorithms, such as Decentralized SGD, do not satisfy Assumption 3 and therefore rely on additional assumptions.
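As a hedged single-level illustration of why such correction techniques remove bounded-dissimilarity assumptions (the quadratic losses, two-agent averaging matrix, and step size below are our own toy choices, not the paper's setting), gradient tracking drives all agents to the global minimizer even when their local minimizers differ arbitrarily:

```python
def gradient_tracking(b, W, alpha, iters):
    """Single-level gradient tracking on f_i(x) = 0.5 * (x - b_i)^2.
    x[i] is agent i's iterate; g[i] tracks the network-average gradient
    via g <- W g + grad_i(x_new) - grad_i(x_old)."""
    n = len(b)
    x = [0.0] * n
    g = [xi - bi for xi, bi in zip(x, b)]  # grad f_i(x_i) = x_i - b_i
    for _ in range(iters):
        x_new = [sum(W[i][j] * x[j] for j in range(n)) - alpha * g[i]
                 for i in range(n)]
        g = [sum(W[i][j] * g[j] for j in range(n))
             + (x_new[i] - b[i]) - (x[i] - b[i]) for i in range(n)]
        x = x_new
    return x

W = [[0.5, 0.5], [0.5, 0.5]]  # doubly stochastic mixing matrix
xs = gradient_tracking([0.0, 10.0], W, alpha=0.1, iters=300)
# Both agents approach the consensus minimizer mean(b) = 5.0 despite
# the heterogeneous local optima at 0 and 10.
```

Plain decentralized SGD with a fixed step size would instead stall at a bias that grows with the gradient dissimilarity, which is why it needs the extra assumption.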
---
Rebuttal 3:
Title: Thanks for your reply!
Comment: We are delighted that your concerns have been resolved, and we sincerely appreciate your positive feedback. Thank you again for your valuable input. | Rebuttal 1:
Rebuttal: We sincerely appreciate the detailed feedback provided by all reviewers. Here we present our response to the common concerns raised by multiple reviewers and results of newly added experiments.
**1.Novelty and contribution of our work.**
SPARKLE yields **brand new algorithms**, and achieves **state-of-the-art rates** under **more relaxed assumptions** compared to existing methods. In particular, SPARKLE implies the following novel results.
- [**SPARKLE yields new algorithms utilizing EXTRA and ED**] Before SPARKLE, existing bilevel algorithms primarily employ decentralized gradient descent (DGD) and gradient tracking (GT) in algorithm development. SPARKLE expands this paradigm by incorporating EXTRA and ED methods to address data heterogeneity, resulting in novel decentralized bilevel algorithms—SPARKLE-EXTRA and SPARKLE-ED—**which were not previously considered in the existing literature**. Furthermore, theoretical analysis demonstrates that SPARKLE-EXTRA and SPARKLE-ED exhibit superior convergence complexities compared to existing algorithms.
- [**SPARKLE yields new algorithms with different update strategies across optimization levels**] SPARKLE is the first algorithm framework that introduces mix strategies for solving upper-level, lower-level, and auxiliary-level problems. For example, it allows the use of GT to update the lower-level variable $y$, while ED is used for the upper-level variable $x$ and the auxiliary-level variable $z$. This approach yields novel decentralized bilevel algorithms such as SPARKLE-ED-GT and SPARKLE-EXTRA-GT. In contrast, existing literature on decentralized bilevel optimization primarily utilizes **the same strategy** across the upper and lower optimization levels.
- [**SPARKLE yields new algorithms with different network topologies across optimization levels**] SPARKLE is the first algorithm framework that proposes using different network topologies and mixing matrices for solving upper-level, lower-level, and auxiliary-level problems. Given that the dimensions of variables can vary significantly across different levels, exploring how diverse mixing matrices can reduce communication costs represents an essential novel contribution. By using different weight matrices, all SPARKLE variants are more efficient in communication costs; see Table 1 in our paper. In contrast, existing literature on decentralized bilevel optimization primarily utilizes **the same mixing matrix** across the upper and lower optimization levels.
- [**State-of-the-art convergence rates and complexities**] As demonstrated in Table 1 of our paper, all SPARKLE-derived algorithms achieve state-of-the-art convergence rates and complexities compared to existing baselines. Specifically, all SPARKLE variants exhibit: (1) Lower asymptotic computational and communication costs; (2) Shorter transient stages; (3) Most importantly, greater robustness to network topologies. For instance, it is known that the ring topology has $1 - \rho$ on the order of ${1}/{n^2}$. When utilizing this topology, the transient stages of MDBO, Gossip DSBO, and D-SOBA are on the order of $n^{19}$, $n^{11}$, and $n^{11}$, respectively. In contrast, SPARKLE variants have a transient stage of $n^7$, which is significantly shorter than existing baselines.
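For a back-of-the-envelope sense of this gap (using an illustrative network size of our own choosing; the exponents are those quoted above for the ring topology):

```python
def transient_iterations(n, exponent):
    """Transient-stage length n**exponent for an n-node ring,
    where 1 - rho is on the order of 1/n^2."""
    return n ** exponent

n = 10  # illustrative network size
mdbo = transient_iterations(n, 19)     # MDBO: n^19
dsoba = transient_iterations(n, 11)    # Gossip DSBO / D-SOBA: n^11
sparkle = transient_iterations(n, 7)   # SPARKLE variants: n^7
# Even at n = 10, SPARKLE's transient stage is 10^4 times shorter than
# D-SOBA's and 10^12 times shorter than MDBO's.
```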
- [**More relaxed assumptions**] As shown in Table 1 of our paper, the convergence of existing algorithms relies on **stringent assumptions**, including bounded gradients, bounded gradient dissimilarity, or functional Lipschitz continuity. In contrast, SPARKLE does not depend on any of these assumptions.
In summary, we hope to emphasize that our main contribution extends beyond demonstrating how different approaches fit under a unified framework. Our work also: (1) Derives novel algorithms; (2) Provides improved analysis and convergence rates; (3) Offers fresh insights into decentralized bilevel optimization.
**2. Limitation: Strongly convex lower-level function and impact of the condition number**
While we have acknowledged the following limitations in the paper, we can provide additional discussions:
- **[Lower-level strong convexity]** We agree that our work does not address the more general case where the lower-level loss function is merely convex. **This limitation is common in most existing research**, as there are currently no decentralized bilevel algorithms specifically designed for cases with generally convex lower-level loss functions. In fact, such problems are significantly challenging as the lower-level problems might not have unique solutions, and the final objective function could be non-differentiable [1]. Addressing these issues requires fundamentally different approaches, which are beyond the scope of this work.
- **[Condition number]** We agree that the condition number impedes convergence significantly, **which is a common limitation in most existing research**. However, it is important to note that most related works treat the condition number as a constant and obscure its effect using Big-O notation [2-5]. Our analysis offers a more detailed understanding of how $\kappa$ affects the convergence rates. To mitigate the influence of the condition number, Nesterov acceleration can be used to accelerate solving the lower-level optimization problem; we leave this as future work.
**3. Experiments.**
The results are shown in the PDF document attached to this global rebuttal. Please refer to the individual rebuttals for more details.
[1] On Finding Small Hyper-Gradients in Bilevel Optimization: Hardness Results and Improved Analysis.
[2] Decentralized stochastic bilevel optimization with improved per-iteration complexity.
[3] A stochastic linearized augmented Lagrangian method for decentralized bilevel optimization.
[4] Decentralized gossip-based stochastic bilevel optimization over communication networks.
[5] Decentralized bilevel optimization over graphs: Loopless algorithmic update and transient iteration complexity.
Pdf: /pdf/1dc952fa847830cdc313e3b22b7c5d776a52a280.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
To Learn or Not to Learn, That is the Question — A Feature-Task Dual Learning Model of Perceptual Learning | Accept (poster) | Summary: This paper aims to replicate a variety of results from the perceptual learning literature using a model with two different forms of learning. It shows results that capture how specificity occurs under certain training conditions and transfer occurs under others. The two forms of learning are different in terms of supervision and location of weight changes.
Strengths: The paper tackles an interesting topic, one that is ripe for modeling influence.
Technically they are able to replicate many findings.
Weaknesses: The modeling choices made in this work made it hard to interpret the results and also made the connection to known biology more difficult. There are also, at times, some issues in the authors' interpretation of the results.
Specifically:
In the feature-based learning method, the authors tie the weights from different orientations at the same location (this is justified by reference to cortical columns). I don't understand the mechanism that would cause all neurons representing a single location to have the same weight updates. Rather, I would expect there to be orientation-specific learning as orientation is represented differently by different cells.
Different task-based networks are trained for different orientations. The authors do not make this very clear in the main text, but it is of course very important for understanding the results. What is this meant to correspond to biologically? (of course the rest of the ventral stream doesn't change based on which orientation is being used.) The authors also refer to this form of learning as fast, but it includes the learning of many more parameters than the feature-based learning and takes many epochs.
The task-based network also performs convolutions over a space where dimensions represent orientation and space. This means that a downstream cell only gets input from 3 nearby spatial locations and 3 nearby orientations (in the first layer). The spatial specificity is warranted as cells do have spatially restricted receptive fields. But what is the justification for the restricted orientation connections? Furthermore, the fact that the weights are convolutional in this space means that there may actually be orientation transfer in these networks (for the same reason there is location transfer when the task network is trained alone). However this was not tested because different task networks were used for different orientations. The location transfer exhibited by the task-based network also depends on weights being tied (i.e. using a convolution). What is the biological explanation for this feature of the model?
The Hebbian learning (combined with normalization) seems to really mess up the representations at non-trained locations. This is more extreme than simply not transferring to them. How is this interpreted?
To help with interpretation, it would be good if the ablation studies could be run on all the experimental results.
Technical Quality: 2
Clarity: 2
Questions for Authors: Is Eqn 2 actually a convolution? It seems like each spatial location has its own separate weight that the activity is multiplied by so I don't see what about it is convolutional. (Also it isn't necessary to include Eqn A4 if the only scenario used is $\alpha=0$).
What values are $A_{t=0}$ initialized as?
The main text makes reference to the task-based network as being a convolutional layer, but it seems like it is actually 3. Also, are there nonlinearities between these layers?
For the third experiment the authors say "At the untrained condition, the threshold first decreases and then increases or saturates, indicating a transition from transfer to specificity". I'm not sure where the increase or saturation is seen. The data is a little noisy but still looks like a continued decrease on average.
I am confused about how the locations of the gabor filters are spread out across the image. Are they overlapping or no? if I just take the width and divide by 40 (as suggested by line 406), the filters would be 20 pixels wide but the standard deviation is 30 so that can't be right. Is there a stride that's missing?
In fig 7, isn't the relative lack of transfer to ori1_loc2 inconsistent with the experimental results? Also 7b makes it look like there is improvement for ori1_loc2 but 7c doesn't. Also it seems the labels are wrong on the right hand side of 7c.
It is possible the authors broke anonymity on line 105 (no need to confirm or deny, just a reminder to be careful)
There are typos in line 145 and 148.
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors should probably speak more to the limitations due to the non-biological components of the model discussed here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the very detailed comments.
We summarize the reviewer's main concerns and address them one by one.
**Q1:** On the biological plausibility of location-specific plasticity. Based on a body of experimental evidence, we believe that location-specific plasticity is possible in certain conditions.
- First, the Double Training experiments (Xiao et al. 2008, Zhang et al. 2010, Wang et al. 2012, Zhang et al. 2014, Wang et al. 2016, and Xiong et al. 2016) and those involving attention to induce transfer (Donovan et al. 2015, 2018, 2020) have indicated that location-specific plasticity is possible. For instance, Wang et al. (2012) showed that "Double Training" involving stimuli unrelated to orientation, such as distinguishing between a Gabor stimulus and the letter E, could achieve transfer to untrained locations. Xiong et al. (2016) used two experimental paradigms: one in which subjects were presented stimuli without awareness (bottom-up driven), and another in which subjects were informed of the presence of stimuli although no such stimuli were actually presented (top-down driven); in both cases, the transfer effect to untrained locations was observed.
- Second, neuroimaging studies have suggested that perceptual learning can transform the processing of location information mediated by top-down signals into processing realized by bottom-up implementations. For instance, fMRI findings from Sigman et al. (2005) showed that during the initial stages of learning a visual search task, there was significant activity in the frontal-parietal network, suggesting a pronounced top-level regulatory role; however, as learning progressed, the frontal-parietal network activity decreased, while the activity in the lower visual cortex increased.
Taken together, we argue that neurons with different orientation preferences can achieve location-specific plasticity through top-down signals (such as attention). The columnar structure can be seen as a way to support position-specific learning.
**Q2:** On the use of different read-out networks. In our simplified model, we only extract features related to angle and position, necessitating the use of different task-based networks to read out different orientation information. We agree that in real biological systems, which have much more complicated structures, a single network module can process different orientation information. Nevertheless, since the focus of this study is on elucidating the dual-learning framework, in particular the effect of the different speeds of the two learning processes, we could extend the current model to have a single task-based learning module handling different orientations, which would not change our results qualitatively.
**Q3:** On the use of CNN. The CNN in our model is solely for simulating the task-based learning process with the capability of location generalization. Indeed, there may also be some degree of orientation transfer. Since our study focuses on location specificity, we employed different task-based networks for different orientation discrimination to avoid this interference. It ensures that the learning effect is specific to the location without confusion with orientation changes.
**Q4:** Indeed, the combination of Hebbian learning with normalization can induce distorted feature representations at untrained locations and subsequently degrade learning effects there. However, this phenomenon is not biologically implausible.
Real human-subject experiments, as cited in Fig 1C, show that repeated training at specific locations can indeed decrease subjects' performance at untrained locations.
**Q5:** Equation 2 presents a general learning rule. In practice, we simplified the operation. As shown in Equations A3 and A4, we considered
$$
\lim_{a\to0} W_{t+1}(\mathbf{x}, \mathbf{x}^\prime) = \frac{A_{t+1}(\mathbf{x}^\prime)}{\sqrt{2 \pi} a} \exp \left[-\frac{(\mathbf{x} - \mathbf{x}^\prime)^2}{2 a^2}\right],
$$
which shows that feature-based learning employs a Gaussian kernel to perform weighted convolution on representations. For simplicity, we considered the scenario where the parameter $a$ approaches zero. In this case, significant weight updates occur only when the input and output positions are exactly the same, effectively creating a one-to-one connection between neurons at identical locations.
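For intuition, the limiting behavior of this kernel can be sketched numerically. The snippet below is a minimal illustration with hypothetical 1-D positions and unit amplitudes (not the paper's actual code): as $a$ shrinks, the relative weight each output unit receives concentrates on the matching position, approaching a one-to-one connection.

```python
import numpy as np

def kernel_weights(A, a):
    """Gaussian-kernel weights W[x, x'] = A(x') / (sqrt(2*pi)*a) * exp(-(x - x')^2 / (2*a^2))."""
    n = A.size
    x = np.arange(n, dtype=float)
    d2 = (x[:, None] - x[None, :]) ** 2
    return A[None, :] / (np.sqrt(2 * np.pi) * a) * np.exp(-d2 / (2 * a * a))

A = np.ones(9)  # hypothetical amplitudes A_{t+1}(x')
for a in (1.0, 0.3, 0.05):
    W = kernel_weights(A, a)
    W = W / W.sum(axis=1, keepdims=True)  # relative weight received by each output unit
    print(f"a = {a:4.2f}: mean off-diagonal mass = {1.0 - np.diag(W).mean():.6g}")
```

The off-diagonal mass shrinks rapidly with $a$, so in the limit only same-location connections carry significant weight.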
**Q6:** See reply to Q4 of the Reviewer UGym.
**Q7:** See reply to Q11 of the Reviewer Wypk.
**Q8:** Apologies for the typo. It should be "divide by 10".
**Q9:** Apologies for the confusion; indeed, the labels "loc1\_ori2" and "loc2\_ori1" in Fig.7b were reversed. After correcting the labels, it can be seen that our results are consistent with the experimental results in Xiao et al. 2008 (Fig.3).
After re-evaluating the improvement results shown in Fig.7b and Fig.7c, we find that the inconsistency arises because Fig.7b presented the average of 100 simulation runs, while the improvement in Fig.7c was calculated by averaging the rate of improvement from each individual simulation, where a few outliers affected the results. After excluding 4 outliers, the results are consistent; see Fig.3 in the attached PDF.
**Q10:** Thanks for the reminder.
**Q11:** Typos will be corrected.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response.
With respect to the answer to Q1: The behavioral studies that show location transfer is possible do not directly support this specific mechanism of location transfer. That would require evidence on the neurophysiological level. I believe the second point is trying to say that changes in top-down modulatory influence might be responsible for aiding location transfer (but the mechanism is vague and not implemented here). In any case, the authors should acknowledge this modeling gap more directly in the main text so as to not mislead the reader about the mechanisms of the model. Furthermore, it will help explain some trends in results such as the slight transfer to orientation 2 at location 1 seen in Figure 4B.
WRT Q 2&3: The authors say "Since our study focuses on location specificity", but the study is presenting results about orientation transfer as well. So it does seem important to reflect on how the choice of readout model/task-based learning influences that process. To even test orientation transfer requires several epochs of training of a different readout model; the details of this model/training process could influence the results. It is also said that "a single task-based learning module to handle different orientations [..] will not change our results qualitatively." My point is that it could, depending on how it is built. That is why I'm advocating for more transparency in the main text about how the model is built, how transfer is tested, and how these design choices might impact or explain the results. I think a reader would benefit from the author's careful reflection on these issues.
Q4: I looked at reference 24 and I don't see much evidence of extended training causing a decrease in performance at the transfer location. There were maybe 1 or 2 individual subjects where the average discrimination threshold on the first transfer session was slightly higher than the threshold on the first training session, but I'm not clear if these were even statistically significant differences. The more common trend however was a slight decrease in threshold on the first transfer sessions (i.e. partial transfer, even for extended training).
---
Reply to Comment 1.1.1:
Comment: Thank you for the instructive comments.
- Regarding question 1: We agree that the detailed neural mechanisms for perceptual learning in the brain remain unclear. The goal of this study is not to assert that the nervous system actually employs the models/learning algorithms we used in this work, but rather to propose a plausible explanatory framework for perceptual learning. As suggested by the reviewer, in the revised manuscript we will discuss the biological plausibility of the models/learning algorithms in much more detail to clarify the underlying assumptions.
- Regarding questions 2 & 3: Thanks for the suggestion. We will further clarify the settings of the models in the paper to avoid potential misunderstandings. Our previous statement that "a single task-based learning module handling different orientations [...] would not qualitatively change our results" is based on the current structure and settings of our model.
- Regarding question 4: Thanks for the comments. We should have made our points clearer. In the paper [24], the authors found that with more training on the first task, the initial threshold in the new task became higher, and so did the final achieved threshold (thus learning performance was decreased compared to the less trained case). This indicates that extensive training on the first task can alter neural representations, increasing the difficulty of learning the new task (as shown in our Fig1C; and Fig2 in [24], comparing the training of two and twelve blocks, T2 black and T12 green curves). | Summary: In this article, the authors propose a novel model that accounts for two different phenomena observed in human learning: i) specificity, a feature-based mechanism restricted to the very specific statistics of the environment condition, and ii) transfer, a task-based mechanism that allows knowledge to transfer to untrained locations or features. The model proposed by the authors is made of a stack of 3 components: a feature extraction stage (mimicking LGN), a feature-based learning stage (mimicking early processing), and a task-based learning stage (mimicking higher processing). Training this model on a vernier discrimination task, the authors demonstrate that it accounts for numerous psychophysical phenomena: i) specificity in condition-specific training, ii) transfer in various training conditions, and iii) a transition from transfer to specificity when trained over an increased number of training sessions.
Strengths: This article is well written, is well motivated, and leverages simple, yet very informative, experiments that could be easily compared to human data. Overall it was pleasant to read. Great work!
Weaknesses: I have noted various (rather minor) points that needed to be better clarified/explained (see question section). My only concern is about the generalization of the proposed model to more complex tasks (including for example natural image discrimination). I fully agree that simple tasks like the Vernier discrimination are a very good starting point to propose a novel model accounting for human learning, but scaling such models to more complex data would have drastically improved the impact of this article. Overall I don’t think this scaling-up argument is enough to reject this article, but this is something the authors need to keep in mind if they want their article to reach a more ‘global’ audience. Note that I am willing to increase my rating if the points (in the question section) are properly addressed/clarified.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1 - In my understanding, feature-based learning is supposed to account for selectivity, which is thought to happen when over-training takes place (i.e. when stimuli are relatively similar and the environment is relatively unchanged). But this seems to contradict the sentence (l75-77): « Feature-based learning … only takes place when the external environment is substantially changed ». In my understanding, if the environment is « substantially changed » then we are not in the over-training condition... Am I missing something? Could you explain?
2 - In Figure 2A, it is not clear what loc(x) corresponds to. Is it encoded as the x,y coordinates of the stimulus? Please give more detail.
3 - In Appendix 1 (Eq.A4), you suggest a=0, which involves two divisions by zero (one in the exponential multiplier, and another inside the exponential). Is that a typo, or is a really set to 0? If so, how do you handle the division by zero?
4 - What is the grey line in Figure 3?
5 - In Fig 3C, how do you explain that the accuracy for the training sample is decreasing with only feature-based learning? I would expect this accuracy to improve if feature-based learning models the specificity phenomenon well, as we observe an increase in performance for human data (Fig 1Aii).
6 - I am not sure the claim in line 173 is true: « feature based learning… reinforces the model's performance at the trained location ». This is the opposite of the trend observed in Fig 3C… Could you discuss this point?
7 - In Fig 3D: There is no mention of the method used to compute the similarity in Fig 3D. Could you elaborate?
8 - Line 182: « The combination of feature-based and task-based learning accelerates the training process at the train location ». This is not visually obvious when we compare Fig 3B and 3E… Could you quantify the difference in convergence speed? Ideally, make sure this difference is statistically significant.
9 - Could you explain a bit more about the difficulty threshold in section 4? And what is the difference between the discrimination threshold and the difficulty threshold? Are they the same? If I understood well, the difficulty threshold is the offset between the two Gabor filters (the smaller the offset, the harder the task). But in line 191, you introduce the discrimination threshold, which seems to be more related to the intensity of the stimuli than to the offset. Could you clarify this point?
10 - Figure 6: Could you run a statistical test to make sure that T4 is indeed below T8 (in Fig 6B, in the transfer setting)?
11 - Not sure what you mean in line 218: « In the untrained condition, the threshold first decreases and then increases or saturates »… I am not sure I see that in Fig 6B… Could you please be more precise here?
### Typo :
* Fig2C : learining —> learning
* Line 144: Sec. A.1 —> Sec A.2
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The author properly discusses the limitations of their paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the careful and valuable comments.
We summarize the reviewer's main concerns and address them one by one.
**Q1:** On the goal of feature-based learning. In this study, we argue that the goal of feature-based learning is to capture the statistical characteristics of external features. Therefore, it only takes effect when there is a significant change in the distribution of external features. In our experiments, when the stimuli are presented repeatedly at the same location many times, triggering the brain's sense of a statistical change in the external stimuli, the weight changes associated with feature-based learning become pronounced, eventually dominating the model's performance and demonstrating specificity. This mechanism highlights how feature-based learning adapts to repetitive patterns, enhancing the model's ability to specialize in recognizing consistent (salient) features in its environment.
**Q2:** Apologies for the confusion. Indeed, $\mathbf{x}$ should be a vector indicating position. Specifically, for the task at hand, $loc(\mathbf{x})$ represents the coordinates as $\mathbf{x}=(x,y)$, where $x$ and $y$ denote the horizontal and vertical coordinates, respectively.
**Q3:** Apologies for the confusion. Here, we actually consider the case of limit. Specifically:
$$
\lim_{a \to 0} W_{t+1}(\mathbf{x}, \mathbf{x}^\prime) = \frac{A_{t+1}(\mathbf{x}^\prime)}{\sqrt{2 \pi} a} \exp \left[-\frac{(\mathbf{x} - \mathbf{x}^\prime)^2}{2 a^2}\right].
$$
As the width $ a $ approaches zero, the Gaussian kernel becomes very sharp, indicating that weight updates are concentrated at $ \mathbf{x}=\mathbf{x}^\prime $. In this limit scenario, significant weight updates occur only when the input and output positions are exactly the same, corresponding to the one-to-one connection.
**Q4 and Q8:** In Fig.3B and Fig.3E, the gray line indicates the number of training epochs required for the model to achieve a 90\% accuracy rate. This metric serves as the reference point for assessing the convergence speed of the model under different learning conditions.
Under the task-based learning only condition (as shown in Figure 3B), the model requires approximately 75 training epochs to reach 90\% accuracy.
However, under the condition of combining task-based and feature-based learning (as shown in Figure 3E), the model converges faster, needing only about 68 training epochs to achieve the same level of accuracy.
A statistically significant difference was found between these two conditions, with a p-value of approximately 0.00148 (< 0.005) over one hundred simulations.
**Q5-7:** Questions 5 to 7 are closely related. Feature-based learning is independent of task requirements, so it may even reduce the model's performance. For human subjects, feature-based learning and task-based learning occur simultaneously, so their effects cannot be compared separately. This explains why we do not observe enhanced feature processing at the trained locations in Fig.3C. Rather, we should focus on Fig.3D, where we assess similarity by calculating the correlation between the representations before and after feature-based learning. The correlation is calculated as follows:
$$
corr = \frac{n \left(\sum F^*_t F_t \right) - \left(\sum F^*_t \right)\left(\sum F_t \right)}{\sqrt{\left[n \sum (F^*_t)^2 - \left(\sum F^*_t \right)^2\right] \left[n \sum (F_t)^2 - \left(\sum F_t \right)^2\right]}}.
$$
Here, $ F^*_t $ and $ F_t $ represent the representations before and after the feature-based learning module, respectively, and $ n $ is the total number of representations.
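This is the standard Pearson correlation applied to the flattened representations. A minimal sketch, with random arrays standing in for $F^*_t$ and $F_t$ (hypothetical data, not the paper's simulations):

```python
import numpy as np

def representation_similarity(F_star, F_t):
    """Pearson correlation between two representations, following the formula above."""
    f1, f2 = np.ravel(F_star).astype(float), np.ravel(F_t).astype(float)
    n = f1.size
    num = n * (f1 @ f2) - f1.sum() * f2.sum()
    den = np.sqrt((n * (f1 ** 2).sum() - f1.sum() ** 2)
                  * (n * (f2 ** 2).sum() - f2.sum() ** 2))
    return num / den

rng = np.random.default_rng(0)
F_before = rng.normal(size=(8, 8))                    # representation before learning
F_after = F_before + 0.1 * rng.normal(size=(8, 8))    # slightly perturbed afterwards
print(representation_similarity(F_before, F_after))   # correlation near 1: similar representations
```

The result matches `np.corrcoef` on the flattened arrays, which can serve as a cross-check.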
**Q9:** In Section 4, all our model simulations utilize difficulty thresholds. Specifically, we adjusted the offset between two vertical Gabor filters in the task to create a series of difficulty levels, and we selected the difficulty level that corresponds to 80\% accuracy of the model as the threshold (mentioned in line 193 and Appendix C2). The "discrimination threshold" introduced in line 191, however, refers to the threshold used in psychophysical experiments to assess learning performance.
Although these two types of thresholds differ, their roles are similar: they are both used to measure the limit at which the model achieves a certain performance under specific conditions.
**Q10:** The results of the t-tests conducted on the data from Fig.6 are presented below, highlighting the comparison between T4 and T8. As observed, the statistical difference between these two conditions is not significant. This finding aligns with the results depicted in Fig.1C(ii), indicating consistency across different parts of the study.
| | T2 | T4 | T8 |T12 |
|-----|-------------------|-------------------|--------------------|--------------------|
| **T2** | 1 | 0.025 | 4.1e-05 | 3e-07 |
| **T4** | 0.025 | 1 | 0.075 |0.0018|
| **T8** | 4.1e-05 | 0.075 | 1 |0.11|
| **T12** | 3e-07 | 0.0018 | 0.11 |1|
Table 1: Statistical comparison of Fig.6
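The p-values above come from pairwise two-sample t-tests across simulation runs. A hedged sketch of the procedure on synthetic threshold data (assuming SciPy is available; the sample sizes and distributions here are illustrative, not the paper's actual simulation outputs):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
# Hypothetical per-run transfer thresholds for two training lengths
T4 = rng.normal(loc=1.00, scale=0.2, size=100)
T8 = rng.normal(loc=1.05, scale=0.2, size=100)

t_stat, p_value = ttest_ind(T4, T8)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

A large p-value (as in the T4 vs. T8 cell of the table) indicates the two conditions cannot be statistically distinguished.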
**Q11:** We revised the description as follows: "The results are presented in Fig.6B, which shows that the thresholds decrease gradually in the trained condition (left panel in Fig.6B). In conjunction with Fig.3E, it can be observed that in the untrained condition, the threshold should first decrease and then either increase or saturate. This indicates that with increased training, there is a transition from transfer to specificity at untrained locations." This describes how training intensity influences learning dynamics differently in trained versus untrained locations.
---
Rebuttal Comment 1.1:
Title: response to author
Comment: I am convinced by the author's rebuttal. I increase my rating to 7
---
Reply to Comment 1.1.1:
Comment: Thank you for the improved score. We are grateful for the valuable suggestions you provided, which have enhanced the clarity of our work. We will incorporate these insights into our revised manuscript. | Summary: 1. The paper proposes a dual-learning model to reconcile two seemingly contradictory phenomena in perceptual learning: specificity and transfer.
2. The model consists of two learning processes:
- Task-based learning: Fast, enables quick adaptation to new tasks using existing neural representations.
- Feature-based learning: Slow, refines neural representations to reflect changes in the environment.
3. The model is implemented as a hierarchical neural network with three stages:
- Feature extraction
- Feature-based learning
- Task-based learning
4. The interactions between these two learning processes explain the observed specificity and transfer effects in perceptual learning experiments:
- Specificity occurs when feature-based learning dominates (due to excessive training on the same stimulus).
- Transfer occurs when task-based learning dominates (due to varied training conditions).
5. The model successfully reproduces key experimental findings in perceptual learning, including:
- Specificity in condition-specific training
- Transfer in varied training conditions
- Transition from transfer to specificity with increased training sessions
- Transfer effects in double training paradigms
Strengths: The paper proposes a simple, novel dual-learning model that effectively reconciles the conflicting phenomena of specificity and transfer in perceptual learning. With the help of this model, the authors successfully reproduce classical findings from perceptual learning experiments. The paper is easy to follow and the experiments seem sound.
Weaknesses: I am not a computational neuroscientist and only follow this field very sparsely. I found the paper interesting but cannot judge the novelty of the paper, its methodology, or its results. Nevertheless, I would argue that the experimental section is missing ablations or theoretical results to explain the (many) moving bits and pieces, and therefore the authors' design choices, of the proposed model.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please ablate the model.
1) Is the first hand-designed feature extractor necessary? What happens if you mess this part up?
2) What happens if you train the middle part with backprop?
3) What happens if you always train the first and the last parts with backprop simultaneously?
Generally I think the proposed model seems quite ad hoc, though it potentially makes intuitive sense. Please back up design choices with experimental results. Show me when model behaviors start to misalign with the expected findings of perceptual learning.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Seems ok to me.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the valuable comments of the reviewer.
We summarize the reviewer's main concerns and address them one by one.
**Q1:** On the removal of the feature-extraction module.
The feature extraction module is akin to a vision representation extractor that has been trained through extensive experiences over time. If we remove or disrupt this module, other parts of the model will not be able to operate effectively, rendering the entire model ineffective.
**Q2:** On the choice of Hebbian learning or backpropagation.
In the dual-learning framework, feature-based learning aims to capture the distribution variation of external stimuli, which is task-independent and requires a significant number of observations. Therefore, we used slow and unsupervised Hebbian learning to achieve this goal. In contrast, task-based learning is directly linked to task execution using the existing representations, and it can be achieved quickly by modifying the read-out weights. We therefore chose fast and supervised BP to implement this part. Overall, the choice of learning method is based on biological plausibility.
**Q3:** The newly added ablation studies.
We add two ablation studies to highlight the importance of the relative speeds of two learning processes.
**1. Accelerating feature-based learning.** As shown in Fig.1 of the attached PDF, we increase the learning rate of feature-based learning by tenfold and replicate the four experiments in the paper. The results are:
- **In Exp1**, the results do not differ significantly from the original model, but the accelerated feature learning markedly modifies feature representations, leading to degraded performance at untrained locations (ori1\_loc2).
- **In Exp2**, due to the continuously changing experimental conditions and the accelerated feature learning rate, the representations are constantly altered, preventing the model from performing well on the learning task and from achieving human-like transfer.
- **In Exp3**, the transferability of learning to new conditions diminishes with an increasing number of training sessions; with the significant changes induced by accelerated feature-based learning, the learning curves under new conditions are no longer distinct and become entangled, showing a decreasing trend in learning gain (except for T12) with increased error bars. When these differences are statistically analyzed using a t-test and compared to the results in the main text (as detailed in the reply to Reviewer Wypk), the p-values have increased, suggesting reduced statistical significance between conditions.
| | T2 | T4 | T8 | T12 |
|-----|-------|-------|-------|-------|
| T2 | 1 | 0.049| 0.00088| 0.12 |
| T4 | 0.049 | 1 | 0.29 |0.5|
| T8 |0.00088 | 0.29 | 1 |0.053|
| T12 | 0.12 | 0.5 | 0.053 | 1|
Table 1: Statistical comparison of Exp3 with accelerating feature-based learning
- **In Exp4**, double training no longer achieves the transfer of learning, and performances in both transfer conditions are even worse after double training.
**2. Slowing down task-based learning:** As depicted in Fig.2 of the attached PDF, we decreased the learning rate of task-based learning by tenfold. As expected, this adjustment resulted in a degradation of the learning effect, rendering the model less capable of mastering the tasks efficiently.
- **In Exp1**, although the model cannot fully master the current task, a comparison with Exp1 in Fig.1 reveals that there is no significant difference in the effects of training on other conditions.
- **In Exp2**, especially under random conditions, it is observed that despite the model's inability to master the current task effectively, the improvements brought about by learning still transfer to new conditions.
- **In Exp3**, due to the model's inability to effectively master the current task, the learning curves for all four different conditions are almost entangled. Significance testing shows that p-values have increased, indicating smaller differences between different training conditions:
| | T2 | T4 | T8 | T12 |
|-----|-------|-------|-------|-------|
| T2 | 1 | 0.68 | 0.089 | 0.94 |
| T4 | 0.68 | 1 | 0.02 | 0.76 |
| T8 | 0.089 | 0.02 | 1 | 0.095 |
| T12 | 0.94 | 0.76 | 0.095 | 1 |
Table 2: Statistical comparison of Exp3 with slowing down task-based learning
- **In Exp4**, due to poor learning outcomes, double training did not facilitate the transfer of learning. However, when compared with Exp4 in Fig.1, it can be seen that there is minimal disruption to transfer conditions.
Due to time constraints, the above ablation study results are only statistical outcomes from 20 simulations. We plan to conduct 100 experimental simulations consistent with the main text and will perform a more detailed analysis. However, we do not expect significant deviations from the current ablation study results.
These two sets of ablation studies demonstrate that both slow feature-based learning and fast task-based learning are necessary for our model to reproduce perceptual learning phenomena.
---
Rebuttal Comment 1.1:
Title: Thank you!
Comment: Thank you for the additional clarification and data. I will raise my score to 5, but highlight my lack of knowledge w.r.t. the biological plausibility and relevance of the algorithm for neuroscientists - so I cannot judge the contribution of the paper fairly.
---
Reply to Comment 1.1.1:
Comment: Thank you for the score bump. We are pleased that the ablation study you suggested has enhanced the clarity of our work, and we will include these changes in our revised manuscript. | Summary: The paper puts forth a theoretical framework for perceptual learning, in which two separate learning processes contribute to learning a perceptual task (a fast, flexible task-based learning that relies on existing feature representations; and a slow, task-specific feature learning). Repeated learning sessions with the same stimulus conditions triggers feature-based learning, which will be specific to the training stimulus conditions. The framework is instantiated as a neural network model, and is used to account to both specific and transfer phenomena in psychophysical experiments.
Strengths: The paper proposes a novel computational model to explain when specificity and transfer are observed in perceptual learning. The crux of the proposed dual-learning theory is that experiments exhibiting transfer are dominated by task-based learning (which operates over existing feature representations and is adaptable), and that experiments exhibiting specificity are dominated by feature-based learning, triggered by excessive exposure to the same stimulus condition, which adapts the feature representations to these new environment statistics.
The computational framework is instantiated as a neural network model with feature extraction (via a set of basis functions), feature-based learning which modifies the feature representations via hebbian learning, and a task network (CNN that performs the task). This network recapitulates an impressive array of human psychophysical phenomena, showing specificity when repetition of the same stimulus conditions is high, and transfer when there is more variability in the stimulus conditions. The framework impressively captures nuances of learning dynamics across multiple task variations.
As such, the work provides a strong framework for understanding diverse and seemingly contradictory findings in human psychophysics. It also presents an opportunity for impact on machine learning theory, if the slow-feature-learning and fast-task-learning framework has broader benefits for machine learning systems (e.g., could this inform approaches to life-long continual learning research?).
Weaknesses: One important limitation of this work is that the networks appear to be trained from scratch on the “probe task” (e.g., vernier acuity). In contrast, with human perception, the person’s visual system is tuned over their lifetime (presumably via some self-supervised objective), and then they are put into an experimental setting where, 1-2 hours per day for N days, they perform this new task. So the impact in terms of perceptual experience is set within this very broad context of “learning to see”, and a visual system that’s capable of much more than just the task at hand (e.g., vernier acuity). I suppose this begs the question of whether this all works as well if the network is initially pretrained (say on some self-supervised learning objective), and then the same types of experiments are run. I don’t see in principle why these same ideas wouldn’t apply.
Minor: When the computational framework is first introduced, the different architectural choices are unmotivated (a classical basis function network for image processing — What is this and why this? A Hebbian network for feature learning – Why a Hebbian network? Why is feature learning a separate step from the basis function learning? And then a CNN with max pooling for task-based learning. Why a CNN for this purpose, and a Hebbian network for the feature learning?). I realize these choices are all motivated; they just come out of the blue all at once, and it’s hard for the reader to grok the motivation for this setup.
Minor: The methods introduce the Feature Extraction as akin to the retina or LGN, but what’s described appears to be an orientation-tuned Gabor Wavelet Basis Set (and orientation tuning is believed to arise later, in V1 and beyond).
Minor: Can you provide more information on the architecture of the task-based CNN?
Minor: Can the authors provide further justification/clarification for the different learning algorithms implemented at each level of the network? e.g., Why shouldn’t Hebbian learning play a role “all the way to the top” of the task network? Is this biologically motivated, or a convenient way to implement assumptions of the model (e.g., location-specific learning)?
Minor: Other related work includes “How variability shapes learning and generalization” (Raviv et al, Trends in Cognitive Sciences)
Technical Quality: 4
Clarity: 4
Questions for Authors: Can the authors clarify what makes the task-based learning “fast”? Is this accomplished via learning rate hyperparameter tuning?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: OK
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your encouraging and valuable comments. We summarize the reviewers' main concerns and address them one by one.
**Q1:** In the current study, we have used a feature extraction module that remains unchanged during the learning process to reflect the pre-processing of visual inputs in the brain. This setup mimics the capability of the human visual system, which has undergone extensive exposure to diverse environments throughout life. In typical psychophysical experiments, which last at most a few hours, we regard this pre-processing stage as unchanged. To accomplish the Vernier discrimination task in this work, we have employed a simple basis function network for pre-processing. Certainly, if more complicated tasks are included, we can employ more sophisticated pre-trained models.
**Q2:** As for the choice of models and learning methods, please refer to the overall rebuttal.
**Q3:** Indeed, as pointed out by the reviewer, orientation tuning emerges in the visual cortex. Our description of feature extraction in the paper was not precise. The feature extraction module in our model should include the input layer of V1 (responsible for extracting visual features). We will revise the statement in the manuscript accordingly.
**Q4:** The CNN used in our model is a streamlined convolutional neural network comprising three convolutional layers.
- The initial layer, layer 1, inputs a single channel and uses a 3x3 convolutional kernel to output 6 channels.
- This is followed by layer 2, which processes these 6 channels through another 3x3 kernel to produce 10 channels.
- The concluding layer, layer 3, compresses these 10 channels into a single output channel using a 3x3 convolution.
The network employs ReLU activation after the first two convolutional layers to add non-linearity and includes a sequence of flattening and a 1D max pooling on the final output to structure the output appropriately.
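To make the shape bookkeeping of the described architecture concrete, here is a minimal sketch of how the three layers transform tensor dimensions, assuming valid (no-padding) convolutions. The 32x32 input resolution is an assumption for illustration only; the rebuttal does not state the actual input size:

```python
def conv2d_shape(c_in, h, w, c_out, k=3):
    # Valid (no-padding) 2D convolution: each spatial dim shrinks by k - 1.
    return (c_out, h - (k - 1), w - (k - 1))

# Channel progression described in the rebuttal: 1 -> 6 -> 10 -> 1,
# all with 3x3 kernels. Input resolution 32x32 is an assumption.
shape = (1, 32, 32)
for c_out in (6, 10, 1):
    shape = conv2d_shape(*shape, c_out)

print(shape)  # (1, 26, 26): three 3x3 valid convs trim 6 pixels per spatial dim
```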
**Q5:** As for the model choice, please refer to our overall rebuttal. Indeed, the most fundamental difference lies in the objective functions of the two learning processes. Utilizing Hebbian learning (Hebb) or backpropagation (BP) mainly simplifies the model implementation.
**Q6:** From the perspective of the dual-learning framework, variability enhances generalizability primarily because it prevents feature-based learning from confining learning effects to specific feature combinations, thus allowing task-based learning to dominate and exhibit transferability.
On the other hand, specificity also plays an important role in learning. For instance, according to Reicher's study in 1969, native English speakers, due to extensive reading in English, can recognize words significantly faster than individual letters or strings that do not conform to phonetic rules. This indicates that prolonged training on specific types of inputs can significantly enhance the efficiency and accuracy of processing these inputs.
Overall, our model aims to simulate these learning dynamics observed in real-world scenarios. It emphasizes the interaction between feature-based and task-based learning, highlighting their joint effects on perceptual recognition and task execution capabilities. Both forms of learning are valuable for the brain to adapt to and interact with complex environments.
**Q7:** Yes, we set a much higher learning rate for task-based learning, resulting in the network parameters related to task-based learning being updated much more quickly, which allows the model to rapidly adapt to the task relying on the existing feature representations. | Rebuttal 1:
Rebuttal: We acknowledge the very careful and valuable comments of all reviewers. We realize that there are common concerns about the aim of this study and the models we used to demonstrate the framework. In the below, we briefly summarize the motivation and main results of this work to clarify these concerns.
Overall, in this work, we explored the neural mechanisms underlying perceptual learning and proposed a dual-learning framework in which the interplay between **the rapid, supervised, task-based learning** and **the slow, unsupervised, feature-based learning** generates the rich phenomena of perceptual learning. Our model reconciles the seemingly conflicting phenomena of specificity and transfer observed in diverse experiments.
To our knowledge, our work is the first to use a computational model to elucidate the dual-learning framework for perceptual learning.
Since our focus is on elucidating the dual-learning framework, we have chosen models that are as simple and as biologically plausible as possible to implement each part of the learning. Nevertheless, if other models can capture the characteristics of one part of dual-learning, they can be used in our framework.
- Specifically, in our model, the feature extraction module is dedicated to transforming images into visual features, representing a stable representation system in the brain that rarely changes. To accomplish the visual discrimination task in this work, we employed the classical basis function network. Certainly, if more complicated tasks are considered, more sophisticated pre-trained models can be used.
- The feature-based learning module aims at learning the variation in the statistical distribution of visual inputs. It takes effect only when significant changes have occurred in the distribution of visual inputs, and hence it is relatively slow and goes in an unsupervised manner. We employed the classical Hebbian learning with a slow learning rate to implement feature-based learning.
- The task-based learning module is driven by the task at hand, aiming to extract task-relevant information from existing representations to accomplish the task rapidly. Hence, we used a CNN model with supervised learning (such as backpropagation) having a relatively large learning rate to implement this learning process.
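As a hedged illustration of the feature-based module's learning rule (the activity model, input pattern, and learning rate here are toy assumptions, not the paper's actual implementation), a basic Hebbian update with a small rate looks like:

```python
def hebbian_step(w, x, eta=0.001):
    # Hebbian rule: weight change proportional to pre- x post-synaptic activity.
    y = sum(wi * xi for wi, xi in zip(w, x))  # post-synaptic response
    return [wi + eta * y * xi for wi, xi in zip(w, x)]

w = [0.5, -0.2]
for _ in range(100):
    w = hebbian_step(w, x=[1.0, 0.0])  # repeated exposure to one input pattern

# The weight aligned with the repeated input grows slowly;
# the weight for the absent input component is untouched.
print(w)
```

Because the rate is small, the representation drifts only under sustained, repeated exposure, which matches the intended "slow, unsupervised" role of this module.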
To highlight the importance of the interplay between the rapid task-based learning and the slow feature-based learning, we have carried out an additional ablation study (see Fig.1 and Fig.2 of the attached PDF), which varies the learning rates of the two learning processes and demonstrates that the relative speed ratio between the two is essential for perceptual learning.
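The role of the learning-rate ratio can be sketched with a toy model; the quadratic loss, target, step counts, and rates below are illustrative assumptions, not the paper's settings:

```python
def descend(lr, steps=20, w=0.0, target=1.0):
    # Plain gradient descent on the quadratic loss (w - target)^2.
    for _ in range(steps):
        w -= lr * 2 * (w - target)
    return w

fast = descend(lr=0.2)   # stand-in for rapid task-based learning
slow = descend(lr=0.02)  # stand-in for slow feature-based learning (10x slower)

# After the same number of steps, only the fast learner has converged.
print(fast, slow)
```

Within a session the fast process reaches the target while the slow process has barely moved, which is the qualitative asymmetry the ablation study probes by perturbing the two rates.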
Pdf: /pdf/f286356f90521b9bd0d78812de9cef1ae827abaa.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Speculative Monte-Carlo Tree Search | Accept (poster) | Summary: The paper proposes a new variant of parallelization in Monte Carlo Tree Search (MCTS) algorithm in the context of AlphaZero and the game of Go. The modification builds on the anytime nature of MCTS and consists in forking the search for subsequent moves (actions) before the search of a given move (action) is completed. It is experimentally verified in the paper that the proposed parallelization leads to reduction of training latency.
Strengths: The experiments show that the proposed method leads to time savings, at least in a few setups considered in the paper.
Weaknesses: 1. The experimental evaluation is insufficient for a comprehensive evaluation of the proposed MCTS parallelization. It seems that the strength of the method highly depends on the base number of simulations. How to choose a proper number of them is not explained in the paper.
2. The ablation section should be significantly extended – see particular questions in the section below.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Is the method applicable in the contexts other than AlphaZero and the game of Go? Can it be applied beyond the domain of games?
2. How have the base numbers of simulations, i.e., 1600/800 been selected? Why these particular values are considered?
3. The method should work nicely with sufficiently high number of simulations per move, since lowering (division of) their count would still lead to meaningful move estimations. What would happen if the base number of simulations was significantly lowered, e.g., from 1600 to 400?
4. The authors show that lowering the number of simulations per move from 1600 to 800 changes the selected move in almost 20% of the cases. However, what can be said about the strength of MCTS with 1600 simulations per move is not discussed in the paper. I would suggest making the following experiment: use sufficiently high number of simulations to achieve almost perfect move selection and then compare the accuracy of move selection with 1600 and 800 simulations, respectively (both of them versus a perfect selection case). Such a comparison should present the real difference in strength between these two cases (1600 vs. 800 simulations).
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The limitations of the proposed method are briefly mentioned in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q**: The experimental evaluation is insufficient for a comprehensive evaluation of the proposed MCTS parallelization. It seems that the strength of the method highly depends on the base number of simulations. How to choose a proper number of them is not explained in the paper.
**A**: As discussed in general response #2, our method does not depend strongly on the base number of simulations, and we adopt the same simulation counts used in many closely related works.
**Q**: The ablation section should be significantly extended – see particular questions in the section below.
Is the method applicable in the contexts other than AlphaZero and the game of Go? Can it be applied beyond the domain of games?
**A**: Yes, as discussed in general response #3, any applications that can utilize the AlphaZero algorithm can benefit from our work, including chemistry synthesis and material science.
**Q**: How have the base numbers of simulations, i.e., 1600/800 been selected? Why these particular values are considered?
**A**: As discussed in general response #2, the base number of simulations is a hyperparameter of the training, and 800 and 1600 are the values commonly selected in prior works.
**Q**: The method should work nicely with sufficiently high number of simulations per move, since lowering (division of) their count would still lead to meaningful move estimations. What would happen if the base number of simulations was significantly lowered, e.g., from 1600 to 400?
**A**: As discussed in general response #2, common applications use simulation counts around 800 to 1600 since the strength will be reduced if trained with a lower simulation count. Nevertheless, we include evaluation and discussion with various simulation counts ranging from 50 to 3200 in the general response #2. Additionally, in Section 6.3, we have mentioned the potential limitation when the simulation count is low since, given a limited computation amount (i.e., to make a decision in a very short time or a small number of resources), we can only have a limited benefit from parallelization.
**Q**: The authors show that lowering the number of simulations per move from 1600 to 800 changes the selected move in almost 20% of the cases. However, what can be said about the strength of MCTS with 1600 simulations per move is not discussed in the paper. I would suggest making the following experiment: use sufficiently high number of simulations to achieve almost perfect move selection and then compare the accuracy of move selection with 1600 and 800 simulations, respectively (both of them versus a perfect selection case). Such a comparison should present the real difference in strength between these two cases (1600 vs. 800 simulations).
**A**: As discussed in general response #2, we do not treat 1600 simulations as a golden/perfect or game-theoretical value to the current decision. Instead, the simulation count is a hyperparameter to the training, which is commonly decided by the compute budget for the training.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answers, in particular for presenting the results with a smaller number of MCTS simulations. I've read the other reviews and the rebuttal. I’ve raised my initial score. | Summary: This paper proposes Speculative Monte Carlo Tree Search (MCTS), which predicts the next move of MCTS before completing the search on the current node. This concept is similar to the branch prediction algorithm in CPU pipelining. Speculative MCTS proceeds to the next node by predicting the branching direction of the current node. If the prediction fails, it reverts to the current node and flushes all previous computations.
Additionally, the paper introduces a caching technique to store the inference results of MCTS during the search. By combining speculation and caching, the training speed of an AI Go program is accelerated by 2x.
Strengths: + The writing is clear, with well-articulated motivations, extensive experimental details, and thorough discussions of related works.
+ The idea presented in this paper is straightforward yet highly effective. The 2x training speedup with just one lookahead step is impressive. The authors have done excellent work in implementation and in conducting a fair comparison with existing methods.
Weaknesses: + It is unclear whether accelerating the training of a Go program is beneficial to the community, given the existence of many strong Go programs and the infrequent need to train a new one from scratch. Although the method itself is great, the lack of application scenarios could diminish the overall contribution.
Technical Quality: 3
Clarity: 4
Questions for Authors: N/A
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: + While the caching strategy is effective in board games (and perhaps card games), it does not apply to video games such as Atari.
+ For end-to-end experiments, only 1-step lookahead speculation is evaluated. While the authors claim that more lookahead steps can potentially improve performance, this should be validated in real experiments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q**: It is unclear whether accelerating the training of a Go program is beneficial to the community, given the existence of many strong Go programs and the infrequent need to train a new one from scratch. Although the method itself is great, the lack of application scenarios could diminish the overall contribution.
**A**: The game of Go is known for its complexity, and several open problems remain unsolved, such as adversarial attacks [1] and game-theoretical values including life-and-death [2] and game solving [3], even though many strong Go programs exist. Moreover, many recent research studies have investigated multiple variants of the Go game, such as Go with various board sizes or variations in the winning condition, such as the Killall Go game [4]. Specifically, our evaluations cover 9x9 NoGo, 9x9 Go, and 19x19 Go games. Hence, being able to train a new Go program from scratch faster can accelerate research in the field. We also discussed possible application scenarios in general response #3.
1. Wang, T. T., Gleave, A., Tseng, T., Pelrine, K., Belrose, N., Miller, J., ... & Russell, S. (2023, July). Adversarial policies beat superhuman go AIs. In International Conference on Machine Learning (pp. 35655-35739). PMLR.
2. Kishimoto, A., & Müller, M. (2005, July). Search versus knowledge for solving life and death problems in Go. In AAAI (pp. 1374-1379).
3. Randall, O., Müller, M., Wei, T.H. and Hayward, R., 2024. Expected Work Search: Combining Win Rate and Proof Size Estimation. arXiv preprint arXiv:2405.05594.
4. Wu, T. R., Shih, C. C., Wei, T. H., Tsai, M. Y., Hsu, W. Y., & Wu, I. C. (2022). AlphaZero-based proof cost network to aid game solving. In International Conference on Learning Representations.
**Q**: While the caching strategy is effective in board games (and perhaps card games), it does not apply to video games such as Atari.
**A**: Our work applies to games that can utilize the AlphaZero algorithm; thus, video games such as Atari are outside our primary scope of applications. However, as discussed in general response #3, other variants of the AlphaZero algorithm may apply to these video game applications.
On the other hand, in Section 5.3, we identified two sources of cache hits: (1) intra-game and (2) inter-game hits. We believe that video games also exhibit similar characteristics. Consider the following search paths for two consecutive moves in each search tree:
Search path on the tree of move 1: (root) s1 => s2 => s3 => s4 => **s5**
Search path on the tree of move 2: (root) s2 => s3 => s4 => **s5** => s6
, where s1, …, s6 represent game states. For instance, in the "Pac-Man" Atari game, the first search path can correspond to a sequence of moves like "right=>**right=>right=>up**", which substantially overlaps with the second search path, "**right=>right=>up**=>up", when the player navigates towards a goal. Hence, we can observe that when speculatively executing two tree searches in parallel, the inference result for s5 can be reused across consecutive moves within the same game, showing intra-game temporal locality. Similar observations also apply to inter-game caching.
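The overlap described above can be sketched with a dict-backed cache; the state labels and the stand-in evaluation function below are toy assumptions, not a real network inference:

```python
cache = {}
hit_count = 0

def evaluate(state):
    # Stand-in for an NN inference, memoized by game state.
    global hit_count
    if state in cache:
        hit_count += 1
    else:
        cache[state] = len(state)  # placeholder value, not a real NN output
    return cache[state]

for s in ["s1", "s2", "s3", "s4", "s5"]:  # search path on the tree of move 1
    evaluate(s)
for s in ["s2", "s3", "s4", "s5", "s6"]:  # overlapping path on the tree of move 2
    evaluate(s)

print(hit_count)  # 4: every state shared between the two paths hits the cache
```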
**Q**: For end-to-end experiments, only 1-step lookahead speculation is evaluated. While the authors claim that more lookahead steps can potentially improve performance, this should be validated in real experiments.
**A**: End-to-end experiments can help us demonstrate the effectiveness of our proposed approach, but additional evaluations also require a lot more compute resources. Hence, we adopt a bottom-up approach to elaborate our work from an analytical analysis, caching evaluation, and latency evaluation to the final end-to-end evaluations.
---
Rebuttal Comment 1.1:
Title: Thank you for the detailed response.
Comment: I've read the authors' public response and the responses to all reviewers. The response to me has addressed my concern about the application opportunities of Speculative MCTS.
I've noticed that my colleague reviewers have some concerns about the acceleration ratio with a smaller simulation count. This is not an issue in my opinion, since the proposed method is a cost-free acceleration regardless of the number of simulation steps.
I keep my score unchanged and still vote for acceptance. Thank you again for your efforts in the paper submission and rebuttal. | Summary: The paper considers Monte-Carlo Tree Search (MCTS) and aims to increase parallelism.
It leverages the fact that MCTS is an Anytime Algorithm, meaning early termination can still yield a feasible solution.
This property is used for prediction - the MCTS for the next move is started before the previous move's MCTS has finished.
The paper proposes "inter-decision" parallelization and speculative MCTS, where future moves are speculatively executed to reduce overall training latency.
It also explores synergies between speculation and neural network caching, which seems reasonable.
Strengths: Proposes ideas of "inter-decision" parallelization, speculative MCTS to speculatively execute future moves and reduce overall training latency.
Explores synergizing speculation and neural network caching in a natural and reasonable way.
Provides empirical evaluation demonstrating significant speedups.
Weaknesses: Some aspects are not clearly explained (see Questions section).
The efficiency analysis in the Speculation Analysis section does not consider the possibility of failure at each step of a move, implying there are more than just two cases to analyze.
Technical Quality: 2
Clarity: 2
Questions for Authors: - Fundamentally, is the speedup from speculative MCTS achieved through higher degrees of parallelism?
- Is the real-world Go model training already highly parallelized, e.g. with the common inter-game parallelism?
- The paper states "training latency remains constrained by the sequential inter-decision MCTS self-play, thus limiting the potential benefits from the increasingly powerful high-performance computing (HPC) resources and systems." Why is this the case? Where is the bottleneck? Can't enough inter-game parallelism saturate the compute resources? Where does idling occur?
- Was inter-game parallelism used when measuring the 5.8x speedup?
- Would n-way inter-game parallelism provide similar speedups as Figure 4 without the risk of failed speculation and potentially higher efficiency?
- Can the difference between intra-decision and inter-decision parallelism be clarified further? The boundary seems blurry. Isn't considering different nodes of the same tree as root analogous to making different decisions, as leveraged in NN caching?
- How is accuracy defined in Figure 2? Why is 1600 used as ground truth - would results change with more simulations, say 3200?
- In Figure 4, is it possible for the prediction from move2 to move3 to fail? If so, shouldn't the dimensionality of q be higher than 2 to account for this?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Some important details are lacking in the paper (see Questions section).
The code provided in the appendix is insufficient to reproduce the speculative MCTS process.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q**: Fundamentally, is the speedup from speculative MCTS achieved through higher degrees of parallelism?
**A**: Yes, the speedup comes from both higher degrees of parallelism and the synergies of caching to utilize available compute resources.
**Q**: Is the real-world Go model training already highly parallelized, e.g. with the common inter-game parallelism?
**A**: Yes, current Go model training, such as KataGo, ELF OpenGo, and AlphaZero, is already highly parallelized in both inter-game and intra-decision.
**Q**: The paper states "training latency remains constrained by the sequential inter-decision MCTS self-play, thus limiting the potential benefits from the increasingly powerful high-performance computing (HPC) resources and systems." Why is this the case? Where is the bottleneck? Can't enough inter-game parallelism saturate the compute resources? Where does idling occur?
**A**: As in the example introduced in general response #1, inter-game parallelism is limited due to the fixed number of games collected for a training iteration. Once inter-game parallelism saturates the throughput, the training latency cannot be further reduced even with more compute resources. Hence, this motivated us to introduce inter-decision parallelism, which aims to shorten the latency of long sequential decision processes.
**Q**: Was inter-game parallelism used when measuring the 5.8x speedup?
**A**: Yes, inter-game parallelism served as our baseline in all our evaluations, including the comparison baseline, KataGo.
**Q**: Would n-way inter-game parallelism provide similar speedups as Figure 4 without the risk of failed speculation and potentially higher efficiency?
**A**: When n is moderate, inter-game parallelism has high efficiency since it's an embarrassingly parallelizable technique. However, n often reaches its upper limit in most AlphaZero training. Moreover, as general response #1 mentioned, inter-game parallelism can only improve throughput. Hence, this motivated us to further accelerate the training by reducing the latency, as shown in Figure 4.
**Q**: Can the difference between intra-decision and inter-decision parallelism be clarified further? The boundary seems blurry. Isn't considering different nodes of the same tree as root analogous to making different decisions, as leveraged in NN caching?
**A**: Intra-decision parallelism focuses on searching for the current move and making one decision, while inter-decision parallelism focuses on multiple consecutive moves and speculating the most promising sequence of decisions. Hence, in inter-decision parallelism, the NN cache can be shared among threads spread across multiple moves where each search tree is likely to overlap. In contrast, in intra-decision parallelism, where multiple threads evaluate different positions of the same tree, the benefit of NN caching is limited since the cache hits only occur when there are two identical positions in the tree.
**Q**: How is accuracy defined in Figure 2? Why is 1600 used as ground truth - would results change with more simulations, say 3200?
**A**: We clarified the definition in general response #2, and the simulation count is a hyperparameter to the training, which does not represent a golden or game-theoretic decision. We also include additional results ranging from 50 to 3200 in the attached file.
**Q**: The code provided in the appendix is insufficient to reproduce the speculative MCTS process.
**A**: We will provide the link to the code and the artifacts.
**Q**: In Figure 4, is it possible for the prediction from move2 to move3 to fail? If so, shouldn't the dimensionality of q be higher than 2 to account for this?
**A**: Figure 4 shows two cases of the prediction from move-m to move-(m+1), represented by move1 and move2. After predicting move2 from move1, the pipeline proceeds, and the state machine shown in Figure 5 transits to the corresponding state/case. Then, the pipeline continues to predict move3 based on move2, which follows the two cases in Figure 4. Hence, the dimension of q is 2 in our analysis.
**Q**: The efficiency analysis in the Speculation Analysis section does not consider the possibility of failure at each step of a move, implying there are more than just two cases to analyze.
**A**: As answered in the previous question, the two cases represent two states in the finite-state machine. Hence, the pipeline transitions to the next state/case after making a successful or failed prediction.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answers and clarifying my misunderstanding about Figure 4,5. I have changed my score. | Summary: The paper introduces Speculative Monte-Carlo Tree Search, which speculates by reducing the number of simulations in MCTS. Experiments demonstrate a two-fold acceleration in training on the Go game.
Strengths: The paper is well-written and easy to understand, even for readers not familiar with RL.
The proposed method experimentally demonstrates improved training efficiency.
Weaknesses: Inter-game and intra-decision parallelism seem to offer better acceleration than inter-decision parallelism, and they do not require speculation. Can the authors provide experimental evidence to demonstrate the importance of inter-decision parallelism?
For non-speculative MCTS methods, can we use caching? Based on the paper, caching does not appear to be exclusively beneficial for speculative MCTS.
There is a lack of comparison with [19] in terms of efficiency.
Technical Quality: 3
Clarity: 4
Questions for Authors: see weakness
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: na
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q**: Inter-game and intra-decision parallelism seem to offer better acceleration than inter-decision parallelism, and they do not require speculation. Can the authors provide experimental evidence to demonstrate the importance of inter-decision parallelism?
**A**: As discussed in general response #1, inter-decision parallelism reduces training latency and improves overall resource utilization. All our evaluations in the paper use inter-game parallelism as a baseline. Overall, we want to emphasize that the two existing kinds of parallelism are not able to leverage increasingly powerful HPC compute resources.
**Q**: For non-speculative MCTS methods, can we use caching? Based on the paper, caching does not appear to be exclusively beneficial for speculative MCTS.
**A**: Caching is not exclusive to our proposed method, but our research highlights its synergies with our proposed speculative parallelism of MCTS. Even though caching has been utilized in practice, including AlphaZero, KataGo and ELF Open Go training, it lacks prior literature discussion, especially in the context of caching NN results.
**Q**: There is a lack of comparison with [19] in terms of efficiency.
**A**: The prior work [19] can only be applied to gameplay or inferences, and cannot be applied to AlphaZero training since the method requires an additional pre-trained small NN to guide the speculative prediction. Hence, we do not compare our work to theirs, instead we discussed their speculation ideas in our related works section.
---
Rebuttal Comment 1.1:
Comment: Given the fact that
> Even though caching has been utilized in practice, including AlphaZero, KataGo and ELF Open Go training, it lacks prior literature discussion, especially in the context of caching NN results.
I think I need to lower the score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for the response. We want to emphasize the following:
1. Caching is a ubiquitous idea in accelerating computation, but unfortunately, most AlphaZero implementations barely mention caching's contributions to the overall training speedup in their research papers.
2. Prior research studies on parallel MCTS were primarily proposed before AlphaZero and thus did not include discussions and evaluations with neural networks.
3. In contrast, we draw conclusions about caching's contribution in Section 5.3 from our evaluations to explain the synergy of NN caching and speculative parallelization.
In particular, we show that our speculative MCTS can provide more intra-game hit opportunities. For instance, in Figure 9, the cache hit rate at move seven can increase from 36% with no speculation to 62% with 5-step ahead speculation, resulting in an improvement of 26%. | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable feedback. Below, we answer the questions/concerns common among multiple reviewers.
## #1. Why is inter-decision parallelism necessary beyond inter-game and intra-decision parallelism?
Inter-game parallelism can only enhance training throughput, while intra-game parallelism improves both latency and throughput. However, the speedups from throughput improvements are capped and cannot increase even with extra compute resources. Consider the following example:
Suppose a training iteration requires collecting 8,192 games, with a batch size of 1024 per GPU. An 8-GPU machine can fully parallelize all 8,192 games via inter-game parallelism. Yet, when more computing resources are available than a single 8-GPU machine, additional training parallelism is required to fully utilize these resources, and thus intra-game parallelism becomes crucial in this case. Moreover, in Section 2.2 of our paper, we discussed the limitations of intra-decision parallelism and related prior works. Furthermore, as HPC systems grow increasingly powerful, the batch size per GPU increases while maintaining the same latency. For instance, if the batch size is doubled to 2048 on next-gen GPUs, inter-game parallelism alone will be insufficient to saturate an 8-GPU machine, since four next-gen GPUs can already achieve a throughput of 8,192 games per iteration.
Hence, the time-to-solution for training cannot be reduced via throughput improvement, even with infinite compute resources, especially in AlphaZero training, which typically experiences high latency due to the long game length. As a result, our proposed inter-decision parallelism complements existing parallelism methods, offering further parallelization opportunities in training.
## #2. Clarifications on 1600 simulations
The number of simulations for MCTS is a hyperparameter in AlphaZero training, with values commonly chosen between 800 and 1600 in prior works such as AlphaGo Zero, AlphaZero, ELF OpenGo, KataGo, and MuZero. For our evaluation, we selected 1600 to demonstrate that our proposed approach is scalable to higher MCTS simulation counts compared to many prior works. In Section 2.2, we referred to the number 1600 as performing a complete tree search, which can also be set to 800 or other reasonable figures. Also, the result from a tree search with 800 or 1600 simulations should not be seen as the golden or optimal solution to a move (i.e., not a game-theoretically optimal move), but rather as a reference point for training, meaning that the simulation results can improve upon the previous iteration.
We also want to clarify the definition of "prediction accuracy" in Figure 2. In the figure, we use the search results from lower simulation counts to predict the results from 1600 simulations. "Prediction accuracy" is then defined as the fraction of correct predictions when using fewer simulations compared to the results of 1600 simulations. Similar evaluations can be done with the number 1600 substituted by other reasonable figures representing the compute resource budget for a complete tree search. Note also that the prediction accuracy is measured against the baseline MCTS without speculation, not against the game-theoretically optimal move, which we do not know. More importantly, we emphasize that we can reasonably predict the result of a complete tree search, thus motivating speculation.
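For concreteness, the accuracy metric described in this response could be computed as in the following sketch. The `best_move` function is a hypothetical stand-in (not the paper's code) for the move returned by a tree search with a given simulation budget:

```python
def prediction_accuracy(positions, best_move, low_sims, ref_sims=1600):
    """Fraction of positions where a low-budget search agrees with the
    reference-budget (e.g. 1600-simulation) search result."""
    agree = sum(
        best_move(pos, low_sims) == best_move(pos, ref_sims)
        for pos in positions
    )
    return agree / len(positions)


# Toy stand-in: the low-budget search disagrees only on position 2.
def best_move(pos, sims):
    return 0 if sims >= 800 or pos != 2 else 1


print(prediction_accuracy([1, 2, 3, 4], best_move, low_sims=100))  # -> 0.75
```

As the response notes, the reference budget (here 1600) is a hyperparameter, not a game-theoretic oracle; swapping in a different `ref_sims` gives the analogous evaluation against another compute budget.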
Furthermore, we include additional evaluations with varying simulations ranging from 50 to 3200 in the attached file. Figure 1 from the file shows that the prediction accuracy generally follows a similar trend to the 1600 simulation results in the paper. The results not only indicate that MCTS has good scaling in prediction but also suggest that our speculative MCTS is scalable across various simulations.
## #3. Applications beyond the Go game
Our method applies to applications that utilize the AlphaZero algorithm, including board games, material sciences [1], chemical synthesis [2], and video compression [3]. In general, our proposed speculative MCTS parallelization also applies to AlphaZero-like algorithms, including MuZero [4], Stochastic MuZero [5], and other variants, since our method primarily leverages the Anytime Algorithm characteristics of MCTS; hence, our analysis can be easily extended to them. For instance, MuZero can train on Atari games and other video games.
Moreover, our proposed approach is not bound to a specific application domain. We implement and evaluate speculative MCTS on AlphaZero and Go games due to the well-known robustness and complexity of the algorithm and game. Also, we provided high-level insights into the potential for applying speculative parallelism to other Anytime Algorithms. Future research could explore similar analyses for other search algorithms with Anytime properties.
1. Gaymann, A., & Montomoli, F. (2019). Deep neural network and Monte Carlo tree search applied to fluid-structure topology optimization. Scientific reports, 9(1), 15916.
2. Segler, M. H., Preuss, M., & Waller, M. P. (2018). Planning chemical syntheses with deep neural networks and symbolic AI. Nature, 555(7698), 604-610.
3. Mandhane, A., Zhernov, A., Rauh, M., Gu, C., Wang, M., Xue, F., ... & Mann, T. (2022). Muzero with self-competition for rate control in vp9 video compression. arXiv preprint arXiv:2202.06626.
4. Schrittwieser, J., Antonoglou, I., Hubert, T., Simonyan, K., Sifre, L., Schmitt, S., ... & Silver, D. (2020). Mastering atari, go, chess and shogi by planning with a learned model. Nature, 588(7839), 604-609.
5. Antonoglou, I., Schrittwieser, J., Ozair, S., Hubert, T. K., & Silver, D. (2021, October). Planning in stochastic environments with a learned model. In International Conference on Learning Representations.
Pdf: /pdf/e35225e5a1a57ee8fc8f25191bf62ab5c22100d3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
GFlowNet Assisted Biological Sequence Editing | Accept (poster) | Summary: The author proposed GFNSeqEditor, a novel sequence editing and generation model built on GFlowNet, which provides different modifications for each sequence to enhance desired features. Several experiments have demonstrated the performance of the proposed algorithm.
Strengths: A new biological sequence editing method based on GFlowNet has been proposed, capable of identifying and editing positions in a given sequence. It has been demonstrated that the lower and upper bounds on the number of edits performed by GFNSeqEditor can be controlled by adjusting hyperparameters.
Weaknesses: There are shortcomings in the model comparison, as it does not compare with existing biological sequence design methods. In terms of sequence evaluation, mainstream evaluation experiments were not used, leading to doubts about the model's effectiveness.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Existing biological sequence design methods, such as evolutionary approaches, can still perform sequence editing in specified regions. Examples include AdaLead[1], PEX[2], Coms[3], and BiB[4]. The author must compare their method with these state-of-the-art algorithms to demonstrate its advantages.
2. The author used AMP and CRE datasets and trained Oracles independently. Evaluating with self-trained models can lead to inconsistent standards, as the author can modify their trained Oracles at any time to demonstrate the algorithm's superiority. Unlike Coms[3] or BiB[4], which use numerous sequence evaluation problems, this leads to unfair experimental evaluations. It is recommended that the author evaluate under existing various evaluation standards to demonstrate the model's advantages better.
[1] Sinai, Sam, et al. "AdaLead: A simple and robust adaptive greedy search algorithm for sequence design." arXiv preprint arXiv:2010.02141 (2020).
[2] Ren, Zhizhou, et al. "Proximal exploration for model-guided protein sequence design." International Conference on Machine Learning. PMLR, 2022.
[3] Trabucco, Brandon, et al. "Conservative objective models for effective offline model-based optimization." International Conference on Machine Learning. PMLR, 2021.
[4] Chen, Can, et al. "Bidirectional learning for offline model-based biological sequence design." International Conference on Machine Learning. PMLR, 2023.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your review and letting us know your valuable comments. Please find below our responses to your comments.
## Evolutionary-Based Methods
We would like to clarify that we have already included the evolutionary method in AdaLead (reference [30] in the paper) among our baselines. As indicated on line 265 of page 7, the DE baseline refers to the evolutionary method presented in [30]. However, instead of naming this baseline as AdaLead, we referred to it as DE, which stands for Directed Evolution. To avoid any confusion, we can revise the name of the DE baseline to AdaLead.
PEX is an evolutionary method that operates through multiple rounds of interactions with the lab. However, these wet lab evaluations can be costly and time-consuming. The focus of this paper is to propose edits without the need for any wet lab interactions. Therefore, we compared the performance of GFNSeqEditor with other baselines that do not have such need. Indeed, if we run PEX only in one iteration without any additional wet lab experiments, the PEX result would be similar to the DE reported in the paper. To address the reviewer's concern, we will add some notes about PEX in our literature review.
## Model-Based Optimization Methods
Both Coms and BiB are model-based optimization (MBO) methods. To generate sequences, they perform several rounds of optimization on a set of sequences. However, in biological sequence editing, we often aim to generate sequences similar to a pre-specified seed sequence. Using MBO-based methods for this purpose requires performing optimization on each seed sequence, which makes computations infeasible, especially when it comes to editing thousands of sequences. Moreover, adapting Coms and BiB for sequence editing purposes may require some changes in these algorithms. The advantages of GFNSeqEditor over MBO-based methods are summarized as follows:
1. To edit a sequence, GFNSeqEditor employs a pre-trained flow function. It performs only inference with this flow function, so employing GFNSeqEditor does not involve any model training. This makes GFNSeqEditor computationally efficient.
2. Our paper theoretically proves that the amount of edits can be controlled when using GFNSeqEditor, whereas MBO-based methods cannot provide such a guarantee.
3. MBO-based approaches require evaluating the properties of unseen sequences using a proxy model, whereas GFlowNet-based approaches can operate without this. Proxy models may provide misleading predictions for out-of-distribution sequences.
Furthermore, we have already compared GFNSeqEditor with Ledidi, an optimization-based baseline specifically designed for biological sequence editing. To address your concern, we can include background information on BiB in our literature review.
## Evaluations
**We did not train oracles for AMP and CRE datasets and the oracles are not self-trained**. For AMP, we employed the oracles used by [14] which can be downloaded from Github repo https://github.com/MJ10/BioSeq-GFN-AL. For CRE, we used the Malinois model [10] which can be obtained from GitHub repo https://github.com/sjgosai/boda2. Therefore, we believe that our comparisons are fair. We clarify this in Appendix E.1.2 on page 16, where we provide detailed information about the oracles used in this paper. Furthermore, we performed experiments on both DNA and protein sequence datasets. Finally, following previous works (see e.g., [10], [14], [23]) and addressing the specific needs of this study, we evaluated the performance of the algorithms based on several important metrics in the biological sequence domain, including property improvement, diversity, and edit percentage.
---
Rebuttal 2:
Title: Edit Official Comment by Reviewer Ns1N
Comment: Thank you for addressing my concerns and providing detailed explanations in your rebuttal. As a result, I have raised my score. | Summary: This paper introduced a new algorithm for biological sequence editing, GFNSeqEditor. This algorithm is designed based on pre-trained Generative Flow Networks (GFNs), and improves target properties by identifying and editing sub-optimal sites of input sequences. Through theoretical analysis and experiments on three datasets, this new algorithm shows that it can improve biological properties with diversified edit while minimize the number of edits to ensure safety and predictability. By comparing with a few baseline models, the GFNSeqEditor outperforms the state-of-the-art methods on property improvements, diversity, and edit percentage.
Strengths: The paper includes comprehensive theoretical analysis of the algorithm, such as bounds on expected reward, property improvements, and the number of edits. Together with extensive experiments, this paper provides sufficient information on the influence of hyperparameter choices on algorithm performance, which offers valuable insights for downstream studies.
This paper shows the versatility of GFNSeqEditor through examples of sequence generation and sequence length reduction, indicating that this algorithm can not only be used on sequence editing but also in a broader potential applications in synthetic biology.
Weaknesses: 1. The algorithm was built on the assumption that fewer edits lead to safer and more predictable modifications. I would argue this claim is valid only under the condition that, compared with generating a new sequence, limited editing is possibly a safer choice. Many studies on single nucleotide variants (SNVs) show that a single nucleotide change can lead to functional change or property loss. To avoid unexpected confusion, it would be better to either include follow-up analysis on structural changes caused by sequence editing, which I believe would be too much work to do, or improve the statement on the safety claim.
2. Lack of details about the selection of non-AMP samples. Previous study [1] have shown that inappropriate selection of negative samples can introduce bias. Clarification on the criteria of selecting negative samples would help the audience better evaluate the validity of the results.
3. While GFNSeqEditor shows improvements compared with the baselines, the performance improvements are not substantial, especially on the aspect of editing percentage. Any possible reasons or statistical analysis on the significance?
[1]. Sidorczuk K, Gagat P, Pietluch F, et al. Benchmarks in antimicrobial peptide prediction are biased due to the selection of negative data[J]. Briefings in Bioinformatics, 2022, 23(5): bbac343.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. I would suggest refining the statement that fewer edits lead to safe and predictable sequence editing. Follow-up analysis on how sequence editing influences structural and functional change would be great to have, but is not necessary.
2. Please consider adding more details on how the dataset selects non-AMP samples, and briefly mention the negative-sample problem for clarity.
3. Including statistical analysis, or digging into possible reasons and future improvements, would help the audience understand the significance of this work and look for future improvements.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for taking time to review our paper and let us know your valuable comments. Please find below our responses to your comments and questions.
## Safety Assumption
It is generally expected that fewer modifications in biological sequences are less likely to result in significant functional changes or property loss. However, we acknowledge that even limited modifications can lead to functional changes in the edited biological sequence. We agree that the safety assumption has its uncertainties and that investigating structural changes due to sequence editing is beyond the scope of this paper. Therefore, we will revise the safety statement in the paper to reflect this uncertainty.
## Selection of non-AMP Samples
For the AMP dataset, we utilized predictive models and data splits from published works [1,14, 27]. To address the reviewer's concern, we will provide additional details about the data sampling process for the AMP dataset in Appendices E.1.1 and E.1.2, using the explanations provided in [1,14, 27]. Furthermore, we will highlight the importance of negative data selection in the performance of predictive models, as discussed in the study by Sidorczuk et al. [R1], suggested by the respected reviewer.
## Statistical Analysis of Results
In our experiments, we observed that the property improvements provided by GFNSeqEditor and other baselines depend on the edit percentage. To ensure fair comparisons, we fixed the edit percentage across all algorithms to a similar level whenever possible. This is reflected in Table 1, where the edit percentages for virtually all algorithms are quite similar. According to Table 1, the property improvements provided by GFNSeqEditor are significant for the AMP and CRE datasets compared to other baselines. Furthermore, Figure 2 demonstrates the property improvements of GFNSeqEditor and other baselines as the edit percentage changes. To provide a statistical analysis of the results, Figure 6 in Appendix E.3 illustrates the distribution of properties of edited sequences. We are open to conducting additional statistical analyses based on your suggestions.
## Reference
[R1] Sidorczuk K, Gagat P, Pietluch F, et al. Benchmarks in antimicrobial peptide prediction are biased due to the selection of negative data[J]. Briefings in Bioinformatics, 2022, 23(5). | Summary: They propose a new sequence editing method using GFlowNets as priors and suggest additional hyperparameters to tune suboptimal gaps, randomness, and penalization. They theoretically analyze how these new hyperparameters can effectively control the lower and upper bounds of the number of edits. The performance results demonstrate some effectiveness of the proposed method.
Strengths: This approach introduces a new sequence editing method by leveraging the pretrained generative model, GFlowNets, as a prior. I appreciate their motivation for DNA editing and the clear narrative they present. Their theoretical analysis is robust and effectively explains the new hyperparameters.
Weaknesses: **The baselines used in the study are too weak:** I don't think this method offers significant advantages over conditional GFlowNets, which can translate input sequences to output sequences while maintaining constraints on sequence distance. There are many other methods capable of achieving this, including Seq2Seq, which the authors used as a baseline. The Seq2Seq implementation seems overly simplistic; there are potentially better techniques for DNA editing. For example, hierarchical variational inference in latent space, combined with a well-designed reward model, could optimize the process more effectively. Any thoughts on this?
**This is not a GFlowNet with multiple backward paths:** It appears that GFlowNets are used for sequence generation in an autoregressive manner (one-way generation) where \( P_B = 1 \). This approach is equivalent to Soft-Q-learning and path consistency learning (PCL) [1], so relevant soft RL literature should be clearly explained. The paper's title is somewhat misleading (though it need not be changed) as it essentially describes using a PCL-trained agent to assist in sequence editing. Reviewers suggest that the authors acknowledge this literature.
**The literature review for GFlowNets should include works published in 2024:** Reviewers also expect the authors to include recent GFlowNets literature targeting biological or chemical applications, such as those published in ICLR and ICML 2024. Additionally, there are preprints discussing improvements and evolutionary methods (e.g., genetic algorithms) using GFlowNets; engaging with this existing literature would be beneficial. The current literature review seems not very up-to-date in 2024.
[1] Nachum, Ofir, et al. "Bridging the gap between value and policy based reinforcement learning." Advances in neural information processing systems 30 (2017).
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. There are many hyperparameters, and biological tasks often involve expensive oracle functions. How can we tune these hyperparameters in real-world applications?
2. Is there any tendency for the proposed method to work better on large sequences? Please provide insights on this.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: They address their limitations in the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude for taking the time to review our paper and letting us know your thoughtful comments. Please find below our responses to your comments.
## Conditional GFlowNets
Conditional GFlowNets can be used for sequence editing by training the flow function with a sequence distance constraint. However, implementing this approach for sequence editing is not scalable and can be computationally intractable. The distance constraint depends on the seed sequence, requiring a separate flow function for each seed sequence to obtain diverse edits. This approach encounters several challenges:
- Training GFlowNets for each seed sequence is infeasible for large datasets.
- This approach requires evaluating the properties of unseen sequences using a proxy model, which can lead to inaccurate predictions for out-of-distribution sequences, resulting in poorly trained flow functions.
- It may reduce the generalizability of the flow function, with the trained flow functions potentially containing only local information.
Moreover, it is uncertain whether such an approach can provide theoretical guarantees on the amount of edits, similar to those provided by the proposed GFNSeqEditor.
In contrast, using GFNSeqEditor, one can train one flow function on offline data. Then for sequence editing, GFNSeqEditor only makes inferences with the flow function. This makes GFNSeqEditor computationally efficient. Moreover, the paper theoretically proves that employing GFNSeqEditor, the amount of edits can be controlled using three hyperparameters.
## Variational Inference
In order to compare GFNSeqEditor with variational inference based methods, we include LaMBO [32] as a baseline. LaMBO maps sequences into a latent space and uses Bayesian optimization in this space to generate sequences with relatively higher properties. We find controlling the amount of edits challenging using LaMBO. To sum up, we believe that performing sequence editing with variational inference based methods can be an interesting future research direction as it requires careful and intricate algorithm design.
## Soft-Q-Learning and PCL Literature Review
We will acknowledge the relevant literature on Soft-Q-Learning and path consistency learning (PCL). We will add that “Generating sequences in an autoregressive fashion using GFlowNet involves only one path to generate a particular sequence. In such cases, generating biological sequences with GFlowNet can be viewed as a Soft-Q-Learning [R1, R2, R3] and path consistency learning (PCL) [R4] problem.”
## GFlowNets Literature Review
We will expand our literature review to include more recent studies on GFlowNets. We will add notes on references [R5]--[R13]. Several novel GFlowNet training methodologies are proposed in [R5, R7, R10, R12]. The application of GFlowNets when a predefined reward function is not accessible is explored in [R6]. Distributed training of GFlowNets is discussed in [R8]. Accelerating GFlowNet training is investigated in [R9]. Moreover, [R11] employs GFlowNets for designing DNA-encoded libraries. To reduce the need for expensive reward evaluations, [R13] proposes a new GFlowNet-based method for molecular optimization.
## Responses to Questions
**Response to Question 1**: The theoretical bounds obtained in Section 4.3 can be used to determine a range for hyperparameters. Subsequently, a few experiments can be conducted within this range to identify the optimal hyperparameter values.
**Response to Question 2**: The intuition behind GFNSeqEditor's superior performance on larger sequences is its utilization of a flow function that is trained to capture global information about the sequence space. In contrast, the performance of local search-based baselines decreases as sequence length increases. This is because, as the search space expands, it becomes more challenging for local search methods to find sequences with optimal performance.
## References
[R1] T. Haarnoja, H. Tang, P. Abbeel and S. Levine, “Reinforcement Learning with Deep Energy-Based Policies,” ICML 2017.
[R2] J. Grau-Moya, F. Leibfried and P. Vrancx, “Soft Q-Learning with Mutual-Information Regularization,” ICLR 2019.
[R3] S. Mohammadpour, E. Bengio, E. Frejinger and P. Bacon “Maximum entropy GFlowNets with soft Q-learning,” AISTATS 2024.
[R4] O. Nachum, M. Norouzi, K. Xu and D. Schuurmans “Bridging the Gap Between Value and Policy Based Reinforcement Learning,” NeurIPS 2017.
[R5] H. Jang, M. Kim, S. Ahn, “Learning Energy Decompositions for Partial Inference in GFlowNets,” ICLR 2024.
[R6] Y. Chen and L. Mauch, “Order-Preserving GFlowNets,” ICLR, 2024.
[R7] M. Kim, T. Yun, E. Bengio, D. Zhang, Y. Bengio, S. Ahn and J. Park, “Local Search GFlowNets,” ICLR, 2024.
[R8] T. Silva, L. M. Carvalho, A. H. Souza, S. Kaski and D. Mesquita, “Embarrassingly Parallel GFlowNets,” ICML, 2024.
[R9] M. Kim, J. Ko, T. Yun, D.i Zhang, L. Pan, W. C. Kim, J. Park, E. Bengio and Y. Bengio “Learning to Scale Logits for Temperature-Conditional GFlowNets,” ICML, 2024.
[R10] P. Niu, S. Wu, M. Fan and X. Qian, “GFlowNet Training by Policy Gradients,” ICML, 2024.
[R11] M. Koziarski, M. Abukalam, V. Shah, L. Vaillancourt, D. A. Schuetz, M. Jain, A. van der Sloot, M. Bourgey, A. Marinier and Y. Bengio, “Towards DNA-Encoded Library Generation with GFlowNets,” ICLR 2024 Workshop on Generative and Experimental Perspectives for Biomolecular Design, 2024.
[R12] S. Guo, J. Chu, L. Zhu, Z. Li and T. Li, “Dynamic Backtracking in GFlowNets: Enhancing Decision Steps with Reward-Dependent Adjustment Mechanisms,” Arxiv, 2024.
[R13] H. Kim, M. Kim, S. Choi and J. Park, “Genetic-guided GFlowNets for Sample Efficient Molecular Optimization,” Arxiv, 2024.
---
Rebuttal Comment 1.1:
Comment: Most of my concerns were addressed, so I increased the score accordingly. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
GraphTrail: Translating GNN Predictions into Human-Interpretable Logical Rules | Accept (poster) | Summary: The authors introduce a new global explanation generation method for GNNs. Their method uses the fact that message passing GNNs break a graph into a set of computation trees. They use Shapley values to then compute the influence of each computation tree. This is then mapped to a boolean formula over concepts. They note that the search space of trees is linear vs the exponential search space of all possible subgraphs. They compute computation trees as paths for each node. Therefore, there are n possible computation trees; one for each node. After each of the computation trees are evaluated via Shapley values the top-k trees are sent to their logical formulator where logical rules are generated. They then evaluate and compare fidelity scores across several different GNN architectures and datasets. They also evaluate the effect of varying the choice of k.
Strengths: S1. The paper fills a gap in the literature; utilizing shapley values to generate global explanations of computation trees.
S2. The idea seems sound and successfully reduces the computationally intractable problem of computing shapley values over all possible subgraphs to just the computational trees of graphs.
S3. Experimental results seem to support some of the authors’ claims and some ideal behaviour required in interpretable AI can be witnessed.
Weaknesses: W1. The experimental results are not sufficient to justify this method's superiority as an explanation method: the fidelity of the explanations is compared only against GLG. It is understandable that, as a global explanation method that uses logical rules, it should be compared to other similar global explanation methods, but there should still be some comparison to other explanation methods.
W2. The explanations generated can be more interpretable than other global explanation methods. However, the explanations are still not very easy to read and can be very long and complex.
Technical Quality: 3
Clarity: 3
Questions for Authors: I suggest the authors add additional experiments that strengthen this method's claims. Adding other global explanation methods and even instance-level explanations could demonstrate the strengths of this method.
Please also address all the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1/W1: I suggest the author’s add additional experiments that strengthen this method’s claims. Adding other global explanation methods and even instance-level explanations could demonstrate the strengths of this method.**
*Answer:* We did not compare with any other explainer since none of the existing algorithms generate a logical formula over concepts to explain GNN predictions. Nonetheless, based on this suggestion, we have added three more explainers: GNNExplainer (local) [1], PGExplainer (local) [2] and XGNN (global) [3].
For local explainers, which produce an explanation subgraph for each input graph, we generated explanations for all graphs in the training set. We then applied the logical formula as an OR operation over all these subgraphs. In practice, this means that if any of the local explanations from a graph with a GNN-predicted class Y is present in a test graph, then that test graph is predicted to be from class Y.
XGNN, being a global explainer, operates differently. Instead of producing a graph, it uses a graph generator that creates graphs, maximizing the prediction likelihood of a target class label by the GNN being explained. For our comparison, we generated 3 graphs per class from XGNN and, similar to the local explainers, employed a logical formula that is an OR operation over all these explanations.
The results of this expanded comparison are presented in the table below. Notably, XGNN, despite being a global explainer, shows the weakest performance. This is likely because the graphs it generates are rarely contained (subgraph isomorphic) in any of the test graphs. Local explainers also demonstrate lower efficacy, as their local explanations fail to capture global patterns effectively.
|Algorithm|BAMultiShapes|Mutag|Mutagenicity|
|--|--|--|--|
|GNNExplainer|0.54|0.73|0.46|
|PGExplainer|0.54|0.71|0.44|
|XGNN|0.45|0.30|0.43|
|GLGExplainer|0.51|0.74|0.62|
|GraphTrail|**0.87**|**0.83**|**0.72**|
[1] Rex Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. 2019. GNNExplainer: generating explanations for graph neural networks. NeurIPS '19.
[2] Dongsheng Luo, Wei Cheng, Dongkuan Xu, Wenchao Yu, Bo Zong, Haifeng Chen, and Xiang Zhang. 2020. Parameterized explainer for graph neural network. In NeurIPS '20.
[3] Hao Yuan, Jiliang Tang, Xia Hu, and Shuiwang Ji. 2020. XGNN: Towards Model-Level Explanations of Graph Neural Networks. In KDD '20.
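For concreteness, the OR-composition used for the local-explainer baselines above could be sketched as follows. This is only an illustrative reconstruction, not the code used in the experiments: containment is approximated by labeled-edge-set inclusion rather than the full subgraph-isomorphism check, and all motifs and names are hypothetical.

```python
# Sketch of the OR-composition over local explanations: a test graph is
# assigned class Y if any stored explanation for class Y is contained in it.
# NOTE: edge-set inclusion is a simplification of subgraph isomorphism.

def predict_by_or(test_edges, explanations_by_class, default=None):
    """Return the first class whose explanation set contains a present motif."""
    test = set(test_edges)
    for cls, expl_list in explanations_by_class.items():
        if any(set(expl) <= test for expl in expl_list):
            return cls
    return default

# Hypothetical labeled-edge motifs standing in for mined local explanations.
explanations = {
    "mutagenic": [[("N", "O"), ("N", "O2")]],   # an NO2-like motif
    "non-mutagenic": [[("C", "C")]],
}

pred = predict_by_or([("N", "O"), ("N", "O2"), ("C", "H")], explanations)
# pred is "mutagenic": the NO2-like motif is contained in the test graph.
```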
**W2. The explanations generated can be more interpretable than other global explanation methods. However, the explanations are still not very easy to read and can be very long and complex.**
*Answer:* We acknowledge the potential for further enhancing GNN explainability. However, it's crucial to recognize that in datasets where the underlying property is inherently a function of multiple concepts (such as motifs), the ground-truth logical formula itself may necessarily be complex. This is exemplified in the Mutag and Mutagenicity datasets, where the property is attributed to the presence of eight distinct toxicophores (subgraphs) [4]. Similarly, the BAMultiShapes dataset, which provides ground-truth logical formulas, exhibits formula sizes (variables + operators) exceeding 10 for both classes.
Given these intrinsic complexities, the challenge lies in striking a balance between interpretability and fidelity. Generating concise, easily interpretable formulas while maintaining high fidelity to the underlying data patterns remains an open research problem.
[4] Debnath AK, Lopez de Compadre RL, Debnath G, Shusterman AJ, Hansch C. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. J Med Chem. 1991 Feb;34(2):786-97.
-----
# Appeal to the reviewer:
We appreciate the reviewer's constructive feedback on our work. We have incorporated additional baselines as suggested. We would be grateful if the reviewer could reassess our paper in light of these improvements and consider adjusting the rating accordingly.
---
Rebuttal Comment 1.1:
Title: I would like to keep the rating as it is.
Comment: I would like to keep the rating as it is due to the following reasons.
1. The additional experiments should be done in a rigorous manner in the originally submitted paper instead of in the feedback in a rather ad-hoc manner.
2. For W2, the authors' feedback does not satisfactorily address the weak point.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer AvDp,
Thank you for your feedback on our rebuttal. We appreciate your input and would like to seek further clarification on two points you raised:
1. Experimental Design:
You mentioned that our comparison seems ad hoc. Could you please elaborate on which aspects you find ad hoc? We would value your insights on how to design a more robust experiment. As we noted in our rebuttal, GLG is currently the only technique we're aware of that generates a logical formula, which limits our comparison options.
2. Interpretability of Formulas:
You suggested that existing global explainers might produce more interpretable formulas. Could you provide specific examples of such explainers? To our knowledge, GLG is the only existing baseline in this area. In our analysis, we demonstrated that our approach outperforms GLG in both accuracy and interpretability.
We look forward to your responses, as they will help us improve our work and address any remaining concerns.
Thank you for your time and expertise.
Best regards,
The Authors
---
Rebuttal 2:
Comment: Dear reviewer,
Please respond to the rebuttal as is expected of you as a NeurIPS reviewer asap! Thanks | Summary: This paper proposes a novel method for providing global explanations for GNNs by constructing logical formulas that offer easy-to-understand interpretations for each class. The authors first propose using computation trees instead of subgraphs to construct explanations. They then suggest using Shapley values to evaluate the contribution of each computation tree to the classification results. Finally, they derive logical formulas corresponding to each class through symbolic regression. Compared to traditional instance-level explanation methods, this approach can explain the decision-making process of GNNs from a higher level. Moreover, compared to other model-level explanation methods, this method does not require prior knowledge about the dataset and is easier to understand. Overall, this paper is innovative, has a clear approach, and provides convincing experiments.
Strengths: - ### Originality
- Compared to previous works that employ concept-based explanations for GNNs (e.g., GLGExplainer), the proposed concept vector form is more comprehensible. This concept vector also addresses a significant challenge in subsequent Shapley value calculations. Additionally, this method is end-to-end, reducing complexity.
- The use of computation trees instead of instance-level subgraph extractors as concepts results in explanations that better reflect the inherent properties of the dataset and are less influenced by the quality of instance-level explainers. The exploration of Shapley values in the context of graphs is relatively novel. The paper analyzes the reasons for the high complexity of Shapley value calculations on graphs and successfully reduces this complexity through a combination of various methods.
- ### Quality
- This paper starts with the problem of providing global explanations for GNNs, reviewing existing solutions and identifying their shortcomings. It proposes a new approach that uses computation trees and Shapley values to construct concept formulas. The approach is clear, identifying the deficiencies of current methods and proposing solutions. The paper then analyzes the challenges of calculating Shapley values and successfully addresses these challenges using existing methods. This section details the difficulties of computing Shapley values on graphs, providing an effective method for efficient computation.
- Next, the paper uses symbolic regression to generate logical formulas from concept vectors. This process is clear and intuitive. Finally, it tests the proposed method on multiple datasets using GLGExplainer as a baseline, evaluating Fidelity, Robustness, and Data Efficiency. The results demonstrate that this is a reliable and effective method. Overall, the paper presents a clear approach, offering a new method for explaining GNNs using logical formulas and an efficient way to compute Shapley values on graphs, with detailed and comprehensive experiments.
- ### Clarity
- The paper provides detailed explanations of the problem definition, relevant concepts (such as Shapley value, concept, graph isomorphism, rooted computation tree), and experimental aspects. It clearly describes the task objectives and provides corresponding mathematical formulations, with strict mathematical definitions of the relevant concepts. The experimental section is well-organized, detailing the GNNs used, the process of selecting the hyperparameter \( k \), and the standards used in the explanation process.
- Overall, most of the content in this paper is clearly presented and easily understandable by readers.
- ### Significance
- The significance of this paper lies primarily in the construction of subgraphs as concepts using computation trees and the efficient computation of Shapley values on graphs.
- First, constructing subgraphs using computation trees greatly reduces the complexity of the search space. Given the message-passing paradigm of GNNs, using computation trees aligns better with the information processing of GNNs compared to other graph structures.
- Second, the paper studies methods for computing Shapley values on graphs, proposing an efficient computation scheme based on existing methods. This contributes to the application of Shapley values in the graph domain.
Weaknesses: - The main weakness of this paper is that it does not adequately describe the specific calculation methods for Shapley values and the symbolic regression methods used. Instead, it merely cites the sources of these methods without providing appropriate descriptions in the text or the appendix. For example, in line 207, it only states that it uses methods from [A unified approach to interpreting model predictions](https://arxiv.org/abs/1705.07874) without further describing the sampling strategy. Similarly, in lines 247-248, it mentions the [multi-population evolutionary algorithm](https://arxiv.org/abs/2305.01582) used for symbolic regression but lacks a basic description of this method. In the appendix, lines 521-526 only state that the [Depth-First Canonical Form (DFCF)](https://dl.acm.org/doi/10.5555/2993947.2993981) is used to convert computation trees to Canonical Form for efficient tree isomorphism testing, but it also lacks specific descriptions.
- Such a presentation, where the methods are only cited without basic descriptions, makes it difficult for readers to understand the proposed methods. Additionally, Table D in the paper shows the accuracy of the GNNs used in the experiments on various datasets, but the accuracy appears to be lower than that reported in other papers. This may indicate that the GNNs used in the experiments were not sufficiently trained (this issue will be elaborated on in the Questions section).
Technical Quality: 3
Clarity: 2
Questions for Authors: - According to the experimental setups in [EiG-Search](http://arxiv.org/abs/2405.01762), [D4Explainer](https://arxiv.org/abs/2310.19321), and [GLGExplainer](http://arxiv.org/abs/2210.07147), the accuracies of GNNs trained on the BAMultiShapes, Mutag, Mutagenicity, and NCI1 datasets are higher than those presented in Table D of this paper. Were the GNNs used in this paper's experiments sufficiently trained?
- In line 225, the top \( k \) computation trees with the highest Shapley values are selected and denoted as \( \mathcal{C}^{*} \). From which dataset or set are these computation trees selected? Is this set composed of computation trees from every node of all graphs in the training set?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Apart from the limitations discussed in Section 5 of the paper, in Eq. 7, when calculating \(\Phi\left(\mathcal{G}^{S}\right)\), the GNN may not provide correct classification probabilities for the embedding \(h^S_{\mathcal{G}}\) due to insufficient generalization ability. This can affect the calculation of the Shapley values and, consequently, the overall performance of the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1: The main weakness of this paper is that it does not adequately describe the specific calculation methods for Shapley values and the symbolic regression methods used. Instead, it merely cites the sources of these methods without providing appropriate descriptions in the text or the appendix. For example, in line 207, it only states that it uses methods from A unified approach to interpreting model predictions without further describing the sampling strategy. Similarly, in lines 247-248, it mentions the multi-population evolutionary algorithm used for symbolic regression but lacks a basic description of this method. In the appendix, lines 521-526 only state that the Depth-First Canonical Form (DFCF) is used to convert computation trees to Canonical Form for efficient tree isomorphism testing, but it also lacks specific descriptions.**
Answer: Due to space constraints, we decided to omit the referred details, as these algorithms serve as tools rather than original contributions of our work. We understand that this decision impacted the comprehension of certain sections. Based on your feedback, we propose to include the more detailed descriptions provided in the **global response above**. These descriptions will be added to the appendix, with references from appropriate sections in the main manuscript.
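As a rough illustration of the kind of sampling-based Shapley estimation involved (the paper follows the cited SHAP work for its exact scheme, so treat this permutation-sampling sketch only as an illustration of the general idea; the `value` set function and concept indices are hypothetical):

```python
import random

# Monte Carlo (permutation-sampling) Shapley estimate over "players"
# (here standing in for computation-tree concepts). For each sampled
# permutation, every player is credited with its marginal contribution.

def shapley_mc(players, value, n_samples=2000, seed=0):
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        order = players[:]
        rng.shuffle(order)
        coalition = set()
        prev = value(coalition)
        for p in order:
            coalition.add(p)
            cur = value(coalition)
            phi[p] += cur - prev  # marginal contribution of p
            prev = cur
    return {p: v / n_samples for p, v in phi.items()}

# Toy value function: the coalition "is mutagenic" iff it contains concept 0.
vals = shapley_mc([0, 1, 2], lambda s: 1.0 if 0 in s else 0.0)
# vals[0] == 1.0 while vals[1] == vals[2] == 0.0: concept 0 carries all value.
```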
**Q1: GNN accuracy in EiG-Search, D4Explainer, and GLGExplainer, are higher. Were the GNNs used in this paper's experiments sufficiently trained?**
*Answer:* GNN accuracies are hard to replicate exactly due to variabilities such as train:test splits and training on a single seed (while we have reported results across three splits and three seeds). Nonetheless, to address this concern, we have taken the exact model weights from EiG-Search and measured the performance of GraphTrail and GLGExplainer. The results are in **Table 6** of the pdf attached to the global response. Consistent with previous results, GraphTrail outperforms by similar margins.
| EiG-Search | MUTAG | Mutagenicity | NCI1 |
|-------------|---------------|---------------|---------------|
| GLGExplainer | 0.56 ± 0.21 | 0.61 ± 0.03 | 0.55 ± 0.01 |
| GraphTrail | **0.86 ± 0.06** | **0.75 ± 0.01** | **0.72 ± 0.01** |
**Q2: In line 225, the top $k$ computation trees with the highest Shapley values are selected and denoted as $\mathcal{C}^{*}$. Is this set composed of computation trees from every node of all graphs in the training set?**
*Answer:* This is correct.
**L1: Apart from the limitations discussed in Section 5 of the paper, in Eq. 7, when calculating $\Phi\left(\mathcal{G}^{S}\right)$, the GNN may not provide correct classification probabilities for the embedding $h^S_{\mathcal{G}}$ due to insufficient generalization ability. This can affect the calculation of the Shapley values and, consequently, the overall performance of the method.**
*Answer:* As a global GNN explainer, our primary objective is to derive a logical formula that accurately reflects the outputs of the GNN, rather than the ground truth labels. This approach means that even in cases where the GNN's prediction is incorrect due to poor generalizability, our explainer aims to produce a logical formula that predicts the same incorrect label. This fundamental property guides our design choice to compute Shapley values based on the GNN's predictions rather than the ground truth labels. By doing so, we ensure that our explanations faithfully represent the GNN's decision-making process, including any potential errors or biases, thus providing a true reflection of the model's behavior rather than an idealized version of what it should do.
-------
# Appeal to the reviewer:
We sincerely thank the reviewer for their thoughtful and constructive feedback on our work. In response to the suggestions provided, we have made the following significant improvements to our manuscript:
1. Included detailed descriptions of the additional content recommended by the reviewer.
2. Provided further empirical evidence demonstrating the efficacy of GraphTrail on an open-source model.
In light of these improvements, we kindly request the reviewer to *reassess our work and consider adjusting the rating to reflect these enhancements.* We remain open to any further suggestions or queries you may have.
---
Rebuttal Comment 1.1:
Title: Eagerly awaiting feedback
Comment: Dear Reviewer Sg1i,
Thank you for your constructive feedback. We have addressed your comments in our rebuttal. As we are less than a day away from the end of the discussion phase, we would greatly appreciate your feedback on the clarifications provided in our rebuttal.
Regards,
The Authors
---
Rebuttal Comment 1.2:
Comment: Thanks for your response. I will keep the current score. | Summary: This paper introduces GraphTrail, an end-to-end global GNN explainer providing logic formulas over subgraph-level concepts. These concepts are extracted at the subgraph level using Shapley values, and then the GNN predictions are mapped into logic via symbolic regression. Different experiments show that GraphTrail improves over other GNN explainers and exhibits robust and accurate performance.
Strengths: - The problem investigated in the paper is very interesting, the paper is well written and the problem well formulated.
- The way different existing techniques have been combined to design GraphTrail is very interesting.
Weaknesses: - Some aspects of the comparison against GLGExplainer should be importantly clarified. Also as these two models have such a similar core, more experimental analysis should be done to validate the advantages of GraphTrail wrt to GLGExplainer.
Moreover, I had a look at the GLGExplainer paper, and the fidelity values reported there (Tables 2 and 4) and the ones in this paper are very different, hence I'm asking if Table 2 of the GraphTrail paper is actually reliable or whether there is an explanation for the difference in results reported in the two papers. While here it is claimed that "both GRAPHTRAIL and GRAPHTRAIL-s significantly outperform GLGEXPLAINER", the results in the GLGExplainer paper for the fidelity in the two datasets (on the test set) are:
BAMultiShapes 0.96 ± 0.03
Mutagenicity 0.81 ± 0.01
the results reported in the GraphTrail paper for GLGExplainer are:
BAMultiShapes 0.51 ± 0.03
Mutagenicity 0.62 ± 0.02
Hence I'm wondering whether the GLGExplainer authors had a mistake in their implementation, the authors of GraphTrail were not able to correctly reproduce the GLGExplainer performance, or there are some additional hypotheses distinguishing these experiments that I didn't understand.
Similarly for the provided explanations in the different datasets (the comparison in Figure 5 here), they seem quite different from the ones reported in the GLGExplainer paper. This should be clarified in order to understand whether GraphTrail actually performs better than one of its main competitors.
- Definition 5 is a bit unclear, could you make it more precise? Even if Fig 2 helps a lot to understand the tree construction.
- I see the computational advantage of only considering Rooted Computation Trees as concepts, but I guess it is tricky in practice to find an L that works for all the graphs in D, and that part of the complexity of the problem is understanding the minimum L that makes a good explanation.
Minor comments and typos:
- Please split the rigorous definition from the comment in Definition 4. Hence in practice, the concepts of a graph are all its subgraphs, i.e., this includes the original graph itself, disconnected nodes and so on, right?
- Curiously, in the abstract of GLGExplainer and GraphTrail there is this common sentence: "..making GLGExplainer a promising diagnostic tool for learned GNNs." and "..GraphTrail makes it an invaluable diagnostic tool for refining GNNs.." It seems strange that it is a coincidence, as GraphTrail describes in details the GLGExplainer paper. I'd suggest to rephrase in a more original way (in addition to having changed the adjective "promising" to "invaluable").
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) "The objective of our work is to develop an end-to-end global explainer that (1) mines the subgraph concepts used by a black-box GNN model, and then (2) uncovers the boolean logic used by the GNN over these concepts to make its predictions." If I understand correctly, your model is a post-hoc GNN explainer, even if I think this is never mentioned in the paper. Or I'm missing something here?
2) What do you mean by "unique" subgraph? This has not been defined as far as I see.
3) Obs 1 and 2 are simply theorems (or lemmas) with a proof, also the proofs seem formal to me. So I don't understand why the authors are calling them observations.
4) Is it possible to evaluate the logic rules in terms of accuracy to make the prediction? The fidelity is great, but also the accuracy of the extracted explanations should be evaluated to measure the quality of the extracted explanations for a certain task.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The model limitations have been correctly described.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1(a). Comparison vs GLGExplainer**
**Ans:** The algorithmic foundations of GLG and GraphTrail are significantly different (please see Lines 47-63). These include:
1. **Non-reliance on local explainers:** GLG assumes local explanations as an input and then operates over these to identify the logical formula. GraphTrail observes that relying on local explanations creates a disconnect with the objective, as they lack a global understanding of the model. Hence, GraphTrail mines the concepts directly from the training data.
2. **Concepts:** In GLG, each concept in the formula corresponds to a feature vector and not a subgraph. These vectors represent the embedding of a cluster of subgraphs generated by the instance explainer. Hence, in its original form, the formula is not human-interpretable. To convert into a human-interpretable formula, GLG randomly selects a subgraph from the cluster, assuming all subgraphs in a cluster are similar. We show in App. F that the graphs in a cluster are diverse and hence randomly picking one is not a faithful explanation. GraphTrail doesn't suffer from this issue since the concepts map to computation trees.
3. **Formula construction:** While GLG uses an entropy layer [1] to construct the logical formula, GraphTrail uses symbolic regression.
**W1(b) Conduct additional experiments.**
*Ans.* We have conducted the following experiments, which are reported in the **rebuttal pdf**:
* **Accuracy of generated formulas** against ground-truth (`Table 1`)
* The **statistical significance** of the results via paired T-tests (`Table 2`)
* **Precision, recall and f-score** (`Tables 3-5`) of the generated formulas against GNN output.
**W1(c). ...the fidelity values in GLGExplainer and the ones in this paper are different...**
**Ans:** Indeed, GLG is not reproducible. We highlight this explicitly in lines 307-308 of our main draft with a pointer to Appendix D offering a discussion on the possible causes. As we note in Appendix D, there is definite **test data leakage** in the Mutagenicity dataset. We include the screenshot of the corresponding code in Fig G. Specifically, in this dataset, the presence of NO2 and/or NH2 motifs indicates a positive class label (mutagenic). However, a graph may be positive due to other factors (See [2]). GLG alters the train set by **only** including graphs that contain NO2 and NH2. This selection bias dramatically simplifies the explanation task, as the model is essentially fitted to explain NH2 or NO2 presence. When we include all graphs, the result deteriorates.
In other datasets, GLG has released the instance explanations being used for deriving the global explanation, but not the exact train sets used that generate these instance explanations. When we run the GLG code (released by authors) on raw datasets using the reported hyper-parameters, the produced instance explanations don't match the released instance explanations. Hence, the fidelity numbers are different.
If we use the released explanations, the reported results in GLG match. This isolates the reproducibility issue to the generation of instance explanations, similar to our observation in Mutagenicity.
We had reached out to the authors via email requesting: (i) the pipeline that produces the released explanations, and (ii) justification for the train set selection in Mutagenicity. We have not received any response.
For transparency, our code and datasets are publicly available. Our results can be verified independently.
**W2. Simplify Def. 5**
**Ans:** We'll change it as follows.
> A *computation tree*, also known as a receptive field, is a fundamental concept in GNNs that describes how information propagates through the graph during the neural network's operation. Formally, it is defined as follows.
>
> Given a graph $G=(V,E,X)$, a node $v \in V$ and the number of layers $L$ in the GNN, the computation tree $T^L_v$ rooted at $v$ is constructed as follows:
> * Enumerate all paths (including those with repeated vertices) starting from $v$ and extending up to $L$ hops away.
> * Merge these paths into a tree structure where, (i) The root is always $v$, and (ii) Two nodes from different paths are merged if:
> a. They are at the same depth in their respective paths, and
> b. All their ancestors in the paths have already been merged.
>This tree represents how information flows to node $v$ during $L$ rounds of message passing in the GNN.
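To make the definition concrete, here is a minimal recursive sketch of the unrolling (illustrative only; the adjacency list and node names are made up, and paths revisiting nodes are kept, as the definition requires):

```python
# Build the L-hop computation tree rooted at v by recursively unrolling
# neighborhoods; this is equivalent to the path-merging construction above.

def computation_tree(adj, v, L):
    """Return the tree as nested (node, children) tuples."""
    if L == 0:
        return (v, [])
    # Repeated vertices are allowed: a neighbor may re-expand the root.
    return (v, [computation_tree(adj, u, L - 1) for u in adj[v]])

adj = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
tree = computation_tree(adj, "a", 2)
# The root "a" reappears at depth 2 under both "b" and "c".
```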
**W3: Determine $L$?**
**Ans:** The value of $L$ is known to us: it is the number of layers in the GNN being explained. Since each node's embedding is determined by its $L$-hop computation tree, $L$ is not derived but given as input.
**W4: Split Def. 4 from comment. What is the space of concepts?**
*Ans.* Sure.
Space of concepts: In GLG, the space of all subgraphs includes the graph itself, disconnected nodes, etc. However, GraphTrail notes that a node embedding is solely determined by its $L$-hop neighborhood. Consequently, this space is limited to only the $L$-hop computation tree of each node.
**W5: similar sentence in abstract.**
*Ans.* We will rephrase.
**Q1: ...your model is a post-hoc explainer**
*Ans.* Yes. We'll state this explicitly.
**Q2: What's meant by "unique" subgraph?**
*Ans.* A unique subgraph (or tree) refers to a subgraph that is not isomorphic to any other subgraph in the collection extracted from the graph(s).
**Q3: Obs 1 & 2 are theorems/lemmas.**
*Ans.* We'll change to Lemmas.
**Q4: Accuracy of logical rules.**
*Ans.* Added as discussed in W1(b).
[1] Entropy-based logic explanations of neural networks, AAAI'22
[2] Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. J Med Chem'91
[3] What graph neural networks cannot learn: depth vs width. ICLR'20
------
# Appeal
In light of the additional experiments and clarifications, we kindly request the reviewer to reassess the rating of our work.
---
Rebuttal Comment 1.1:
Title: Post rebuttal comment
Comment: I thank the authors for the response and the provided clarifications. As a side note, I also would like to reassure the authors that they don't need to appeal for the reassessment of their work as done to all the reviewers, as the most important aspect in a rebuttal is not increasing the score obtained but clarifying the important aspects of their research and the soundness/originality of the proposal. Moreover, any reviewer is perfectly knowing that he/she can update the score during/after the rebuttal.
I see your points on the bias introduced in the Mutagenicity dataset, but I didn't understand if this selection only regards the training set or also the test set. Otherwise it would not be clear why enlarging the training set would reduce the GLG performance. Have you tried your model on the "reduced" dataset considered by GLG? I think this would make a fairer comparison wrt finding the hyperparameters that maximise the performance of both models.
Curiously, I saw that in their paper the GLG authors report: "As discussed in Section 3, we used PGExplainer (Luo et al., 2020) as the Local Explainer. However, we modified the procedure for discretizing weighted graphs into a set of disconnected motifs. Indeed, **the authors in Luo et al. (2020) limited their analysis to graphs that contained the ground truth motifs and proposed to keep the top-k edges as a rule-of-thumb for visualization purposes.** For Mutagenicity, over which PGExplainer was originally evaluated, we simply selected the threshold θ that maximises the F1 score of the local explainer over all graphs, **including those that do not contain the ground-truth motif.**
If I understood correctly the ground-truth motif refers to NH2 and NO2, hence it seems exactly the opposite of what you found in the code. Could you point out in the github repository of the GLG paper where you found the snippet you indicated in the paper: https://github.com/steveazzolin/gnn_logic_global_expl/tree/master
I think this is a crucial point to clarify.
Clarified this point I'll be happy to consider reassessing my score.
---
Reply to Comment 1.1.1:
Title: Clarification on queries related to GLGExplainer non-reproducibility
Comment: We appreciate the reviewer's engagement in this discussion phase and are pleased to provide clarifications to their queries below. Additionally, we would like to express our apologies for requesting a re-assessment of ratings. We appreciate the opportunity to address the concerns and improve our submission.
**Clarification 1:**
>As discussed in Section 3, we used PGExplainer (Luo et al., 2020) as the Local Explainer. However, we modified the procedure for discretizing weighted graphs into a set of disconnected motifs. Indeed, the authors in Luo et al. (2020) limited their analysis to graphs that contained the ground truth motifs and proposed to keep the top-k edges as a rule-of-thumb for visualization purposes. For Mutagenicity, over which PGExplainer was originally evaluated, we simply selected the threshold θ that maximises the F1 score of the local explainer over all graphs, including those that do not contain the ground-truth motif.
**If I understood correctly the ground-truth motif refers to NH2 and NO2, hence it seems exactly the opposite of what you found in the code.**
*Answer:* As the GLG authors note in their quoted statement, they only alter the discretization process, which is an operation conducted **post** training ("However, we **modified the procedure for discretizing weighted** graphs into a set of disconnected motifs.") Test-data leakage happens during training due to only including graphs that contain NO2 and NH2 for the mutagenic class. In discretization, where they select the optimal value of $\theta$, no filtering is performed. Let us elaborate.
Given a graph with adjacency matrix $A\in\lbrace 0,1\rbrace^{n\times n}$, PGExplainer outputs a continuous-valued mask $M\in[0,1]^{n\times n}$. The mask represents the probability of an edge being part of the local explanation. To extract the local explanation, this mask needs to be discretized into a binary matrix $\lbrace 0,1\rbrace^{n\times n}$. $\theta$ is used as a threshold in this particular process of converting the continuous valued mask into a binary matrix. The optimal $\theta$ is selected by maximizing the F1 score of the local explanations. The set for optimizing $\theta$ is unfiltered.
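The threshold selection described here can be sketched as follows (an illustrative NumPy snippet with names of our own choosing; it is not taken from either codebase, and real pipelines operate on per-graph edge masks):

```python
import numpy as np

def f1_at_threshold(mask, gt, theta):
    """F1 of the binarized continuous mask against ground-truth explanation edges."""
    pred = mask >= theta
    tp = np.sum(pred & gt)
    if tp == 0:
        return 0.0
    precision = tp / np.sum(pred)
    recall = tp / np.sum(gt)
    return 2 * precision * recall / (precision + recall)

def select_theta(masks, gts, candidates=np.linspace(0.05, 0.95, 19)):
    """Pick the threshold maximizing mean F1 over ALL graphs (no filtering)."""
    scores = [np.mean([f1_at_threshold(m, g, t) for m, g in zip(masks, gts)])
              for t in candidates]
    return candidates[int(np.argmax(scores))]
```

The key point, matching the quoted statement, is that the optimization set passed to `select_theta` is unfiltered.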
**Clarification 2**
**Could you point out in the github repository of the GLG paper where you found the snippet?**
*Answer:* The code snippet is from the PGExplainer repo. It can be found in lines 73-90 at https://github.com/flyingdoog/PGExplainer/blob/master/MUTAG.ipynb?short_path=39bc9a0. This is the codebase used by GLG to generate the local explanations.
Subsequently, while loading these explanations, the GLG code again explicitly checks for the presence of NO2 and NH2 in each graph and performs class attribution of molecules *solely* based on their presence/absence. See lines 269-283 in https://github.com/steveazzolin/gnn_logic_global_expl/blob/master/code/local_explanations.py. The NO2 and NH2 structures are initialized in lines 23-36.
**Clarification 3:**
**I didn't understand if this selection only regards the training set or also the test set. Otherwise it would not be clear why enlarging the training set would reduce the GLG performances. Have you tried your model on the "reduced" dataset considered by GLG?**
*Answer:* The selection process applies exclusively to the training set. However, it is crucial to understand that this is **NOT** a case of reduced training set. A reduced training set would involve randomly selecting a smaller subset of graphs from both mutagenic and non-mutagenic classes. Instead, what we have here is an engineered training set.
In this engineered set, specifically those graphs from the mutagenic class that contain NO2 and/or NH2 are selected. These groups are known to be part of the ground-truth motifs associated with mutagenicity. Ordinarily, this knowledge should be withheld from the training process to maintain the integrity of the supervised learning paradigm.
By incorporating this label-related information into the training set selection, we are essentially leaking data that should be part of the evaluation criteria into the learning process. This violation undermines a fundamental principle of supervised learning: the separation of training data from the ground truth used for evaluation. | Summary: The authors propose a novel method to provide instance-level GNN explanations that uncover the combinatorial reasoning learned by a GNN from the training data. They do so by mining discriminative subgraph-level concepts using Shapley values and mapping them to human-interpretable boolean formulas over these concepts through symbolic regression.
Strengths: - the authors describe a novel instance-level explainability method they designed for GNN models
- the authors compare their method against a SOTA technique and outperform it
- the authors assess their method against multiple datasets
Weaknesses: - the real-world datasets the authors use to assess their method only correspond to datasets describing collections of molecules
- we miss some ablation studies to understand the contribution of particular components of the proposed technique
Technical Quality: 3
Clarity: 3
Questions for Authors: We consider the paper interesting and relevant. Nevertheless, we would like to point to the following improvement opportunities:
GENERAL COMMENTS
(1) - did the authors consider trying their method on some datasets from a different domain / not describing collections of molecules?
(2) - did the authors consider performing some ablation studies to understand, e.g., what is the contribution of selecting trees by considering Shapley values vs. a random selection?
(3) - did the authors consider the statistical significance of the results obtained? E.g., the Wilcoxon signed-rank test.
FIGURES
(4) - All figures: consider a color palette that would be friendly for colorblind people. The following link could be helpful in this regard: https://davidmathlogic.com/colorblind/#%23D81B60-%231E88E5-%23FFC107-%23004D40
TABLES
(5) - Table 1: align numeric values to the right to make differences in magnitude evident. Use the same number of decimals across rows.
(6) - Table 2: report the same number of decimals in all cases. Apart from the averages, are the authors reporting the standard deviation?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately acknowledged the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments on our work. Please find below our responses to the suggestions and concerns raised.
**Q1/W1: Did the authors consider trying their method on some datasets from a different domain / not describing collections of molecules?**
*Answer:* We note that BAMultishapes is not a molecular dataset. In this dataset, the gap between GraphTrail and GLGExplainer is the highest ($0.87$ vs. $0.51$).
**W2/Q2: What is the contribution of selecting trees by considering Shapley values vs. a random selection?**
*Answer:* We have conducted this ablation study. As evident from the table below, choosing random trees significantly deteriorates the fidelity, indicating the value of Shapley-based selection.
| Algorithm | BAMultishapes | Mutag | Mutagenicity | NCI1 |
|---|---|---|---|---|
| GraphTrail with Shapley | **0.87 ± 0.01** | **0.83 ± 0.08** | **0.72 ± 0.03** | **0.70 ± 0.03** |
| GraphTrail with random trees | 0.54 ± 0.03 | 0.68 ± 0.04 | 0.55 ± 0.01 | 0.54 ± 0.02 |
**Q3 - did the authors consider the statistical significance of the results obtained? E.g., the Wilcoxon signed-rank test.**
*Answer:* We have conducted paired T-tests to measure the statistical significance of the results. The results are presented below, which shows that the improvement obtained by GraphTrail over GLGExplainer is statistically significant.
|BAMultishapes | MUTAG | Mutagenicity | NCI1|
|--|--|--|--|
|t=58.952, $p$-value $\approx 0$ |t=2.908, $p$-value=0.017 | t=23.519, $p$-value $\approx 0$ | t=12.159, $p$-value $\approx 0$|
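For reference, the paired t-statistic used above can be computed directly (a NumPy sketch with hypothetical per-seed fidelity scores, illustrative only and not our reported numbers):

```python
import numpy as np

def paired_t(a, b):
    """Paired t-statistic and degrees of freedom for matched samples a, b."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    n = len(d)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Hypothetical per-seed fidelity scores for the two methods (illustrative only).
t_stat, dof = paired_t([0.86, 0.89, 0.87, 0.85, 0.88],
                       [0.51, 0.53, 0.50, 0.52, 0.51])
# Reject the null of equal means when |t| exceeds the critical value,
# e.g. t_crit ≈ 2.776 for dof = 4 at alpha = 0.05, two-sided.
```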
**Q4 FIGURES - All figures: consider a color palette that would be friendly for colorblind people.**
**Answer:** Sure, we will change the color palette as per the guidelines in the link.
**Q(5) - Table 1: align numeric values to the right to make differences in magnitude evident. Use the same number of decimals across rows.**
**Answer:** We will make this correction.
**Q(6) - Table 2: report the same number of decimals in all cases. Apart from the averages, the authors are reporting the standard deviation?**
**Answer:** We will modify the table as suggested.
------
# Appeal to the reviewer
With the inclusion of statistical significance tests, ablation study and clarifications, we hope the reviewer finds our manuscript improved. If the reviewer agrees, we would appreciate support for our work by increasing the rating accordingly.
---
Rebuttal Comment 1.1:
Comment: We have read the authors' response and their responses to the rest of the reviewers. Thank you for providing the clarifications. We have no further questions and have decided to keep our positive score. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their insightful and constructive feedback. Below, we provide a comprehensive point-by-point response to their comments. Additionally, **we have attached a PDF document** containing various new empirical analyses as suggested by the reviewers. Key revisions include:
1. Enhanced Empirical Benchmarking:
* Integration of **three new baselines**: GNNExplainer, PGExplainer, and XGNN
* **Additional assessment metrics** (along with Precision, Recall, F1) to measure efficacy of explainers.
* **Accuracy of the generated formulas** against ground-truth labels.
* **Statistical Significance** of the results with paired T-tests.
* **Ablation study** on selection of computation trees with Shapley values vs. a random selection.
2. Clarifications and presentation improvements:
* Potential reasons behind non-reproducibility in GLGExplainer.
* Additional details on symbolic regression, canonical label construction, and optimizing Shapley computation.
* Various other presentation enhancements as suggested by the reviewers
We hope these revisions have strengthened our manuscript. *We kindly request the reviewers to reassess their ratings in light of this rebuttal.* We are open to further engagement with the reviewers for any additional queries or suggestions.
----------------
## Detailed descriptions of Symbolic Regression, Depth-first canonical form and efficient Shapley computation as requested by Reviewer `Sg1i`
**Symbolic Regression:**
>In symbolic regression, given a set of $n$ input-output pairs $\{(x_i, y_i)\}_{i=1}^n$, where $x_i \in \mathbb{R}^{d}$ is an input vector and $y_i \in \mathbb{R}$ is the output value (or label), and a set of operators, such as addition, subtraction, etc., the goal is to find a symbolic equation $e$ and corresponding function $f_e$ such that $\forall i,\: y_i \approx f_e(x_i)$, while also reducing the complexity (number of operators and variables) of the formula. The loss function is presented in Eq. 11. Finding the optimal formula is not computationally tractable since the number of formulas grows exponentially with the number of variables and operations. Hence, among the various approximation strategies in the literature [23], we use [6], which leverages an evolutionary algorithm.
>
> The process begins by initializing $n_p$ populations $P_1, \ldots, P_{n_p}$, each with $L$ random expressions of complexity $C$ ($C=3$ in our experiments). For each $P_i$, a set $M_i$ is created to store the expressions with the smallest loss (Eq. 11) at each complexity level within that population. Additionally, a global set $H$ is maintained to store the expressions with the lowest loss at each complexity across all populations.
>
>The algorithm iterates over each population $P_i$ for a fixed number of epochs, allowing them to evolve. Evolution is driven by running tournaments within each population, where the expression $E$ with the lowest loss is declared the winner. A copy of the winner is created and chosen for *mutation* or *crossover*, forming $E^*$. Mutating an expression involves repeatedly applying random operations from a set of operations, e.g., replacing/adding operators and variables. Crossover selects the two best expressions, $E_1, E_2$, from the population and swaps random sub-expressions between them, forming $E_1^*, E_2^*$. The newly formed expression(s) replace the oldest expression(s) in the population.
>
>Once evolution within each population is complete, the sets $M_i$ and $H$ are updated with the best expressions. Once the chosen number of epochs is complete, the expression from $H$ with minimum loss is returned.
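To make the evolutionary loop concrete, here is a heavily simplified sketch in Python (single population, mutation only, and a toy MSE-plus-complexity loss standing in for Eq. 11; all names and constants are ours, not those of [6]):

```python
import random

OPS = {'add': lambda a, b: a + b, 'mul': lambda a, b: a * b}

def rand_expr(depth=2):
    """Random expression tree over one variable x and small integer constants."""
    if depth == 0 or random.random() < 0.3:
        return ('x',) if random.random() < 0.5 else ('c', random.randint(-2, 2))
    op = random.choice(list(OPS))
    return (op, rand_expr(depth - 1), rand_expr(depth - 1))

def evaluate(e, x):
    if e[0] == 'x':
        return x
    if e[0] == 'c':
        return e[1]
    return OPS[e[0]](evaluate(e[1], x), evaluate(e[2], x))

def complexity(e):
    return 1 if e[0] in ('x', 'c') else 1 + complexity(e[1]) + complexity(e[2])

def loss(e, data, lam=0.01):
    """Toy stand-in for Eq. 11: MSE plus a complexity penalty."""
    mse = sum((evaluate(e, x) - y) ** 2 for x, y in data) / len(data)
    return mse + lam * complexity(e)

def tournament(pop, data, k=3):
    """The expression with the lowest loss wins the tournament."""
    return min(random.sample(pop, k), key=lambda e: loss(e, data))

def mutate(e):
    """Replace a random subtree of the tournament winner's copy."""
    if e[0] in ('x', 'c') or random.random() < 0.3:
        return rand_expr(2)
    i = random.choice([1, 2])
    return tuple(rand_expr(2) if j == i else c for j, c in enumerate(e))

def evolve(data, pop_size=60, epochs=200):
    pop = [rand_expr() for _ in range(pop_size)]
    best = min(pop, key=lambda e: loss(e, data))     # stand-in for the set H
    for _ in range(epochs):
        child = mutate(tournament(pop, data))
        pop[random.randrange(pop_size)] = child      # oldest, in the original
        if loss(child, data) < loss(best, data):
            best = child
    return best
```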
**Depth-first Canonical Form (DFCF):**
>**Canonical Labeling:** *A canonical label of a graph $\mathcal{G}$ is a unique representation that remains unchanged under isomorphisms, i.e., two graphs $\mathcal{G}_1$ and $\mathcal{G}_2$ have the same canonical label iff they are isomorphic.*
>
>We construct the canonical label of a rooted computation tree through *Depth-First Canonical Labeling*. From a labelled rooted unordered tree we can derive many labelled rooted ordered trees as shown in `Fig. 1 in the rebuttal pdf`. There is a one-to-one correspondence between a labelled rooted ordered tree and its depth-first string encoding. The ordering of the strings orders the trees and the minimum in the ordering is the canonical label. Each string is a depth-first (preorder) traversal that uses 'dollar' to represent a backtrack and 'hash' to represent the end of the string encoding. In sorting, 'hash' is greater than 'dollar' and both these symbols are greater than other labels.
>The Depth-First Canonical Form (DFCF) is constructed by sorting the vertices of a rooted unordered tree level by level. At each level, the vertices are sorted first by their labels and then by the ranks of their children at respective levels. `Fig. 2 in the rebuttal pdf` illustrates the process.
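As an illustration of the encoding described above, a minimal recursive construction could look as follows (our own sketch, not the paper's implementation; trees are given as `(label, children)` pairs, and sibling blocks are sorted under the stated ordering, labels < '$' < '#'):

```python
BACKTRACK, END = '$', '#'

def _key(s):
    """Sort key implementing the character ordering: labels < '$' < '#'."""
    order = {BACKTRACK: 1, END: 2}
    return [(order.get(c, 0), c) for c in s]

def dfs_encoding(label, children=()):
    """Minimal depth-first string encoding of a labelled rooted unordered tree:
    each child block ends with a backtrack symbol, and sibling blocks are
    ordered so the concatenation is minimal."""
    blocks = sorted((dfs_encoding(*c) + BACKTRACK for c in children), key=_key)
    return label + ''.join(blocks)

def canonical(tree):
    """Canonical label: minimal encoding terminated by the end symbol."""
    return dfs_encoding(*tree) + END
```

Two differently ordered (isomorphic) trees then receive the same canonical label, while non-isomorphic trees receive different ones.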
**Shapley**
> SHAP represents the Shapley value (SV) explanation as an additive feature attribution method and fits a weighted linear regression model to approximate the model's output. The weights in this model correspond to the SVs (contribution of features). It specifies the explanation as: $g(z)=\phi_0 + \sum_{j=1}^M \phi_j z_j$,
>where $g$ is the explanation model, $z\in \{0,1\}^M$ is the coalition vector, $M$ is the maximum coalition size, and $\phi_j \in \mathbb{R}$ is the SV for feature $j$.
>Kernel-SHAP is a SHAP variant to estimate SVs. It calculates SVs via:
>* **Coalition Sampling:** It randomly selects subsets of features, coalitions $(z_k)$. It assigns weights to each coalition using the SHAP kernel:
>$\pi(z) = \frac{M - 1}{\binom{M}{|z|}\, |z|\, (M - |z|)}$; $M=$ total features, $|z|=$ coalition size.
>* **Predictions:** For each coalition, Kernel-SHAP maps the coalition back to the original features and obtains a prediction afterwards.
>* **Fitting:** Using the calculated weights and predictions, Kernel-SHAP fits a linear model.
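A minimal Kernel-SHAP-style estimator following the three steps above might look like this (our own sketch, not the SHAP library's implementation; degenerate all-zero/all-one coalitions, which receive infinite kernel weight, are simply dropped):

```python
import numpy as np
from math import comb

def shap_kernel_weight(M, s):
    """SHAP kernel pi(z) for a coalition of size s out of M features."""
    return (M - 1) / (comb(M, s) * s * (M - s))

def kernel_shap(f, x, baseline, n_samples=2048, seed=0):
    """Estimate Shapley values of model f at x via weighted linear regression."""
    rng = np.random.default_rng(seed)
    M = len(x)
    Z = rng.integers(0, 2, size=(n_samples, M))
    sizes = Z.sum(axis=1)
    Z = Z[(sizes > 0) & (sizes < M)]          # drop empty/full coalitions
    # Map each coalition back to feature space: present features come from x,
    # absent ones from the baseline; then query the model.
    X = np.where(Z == 1, x, baseline)
    y = np.array([f(row) for row in X])
    w = np.array([shap_kernel_weight(M, int(s)) for s in Z.sum(axis=1)])
    # Weighted least squares for g(z) = phi_0 + sum_j phi_j z_j.
    A = np.hstack([np.ones((len(Z), 1)), Z])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y[:, None] * sw, rcond=None)
    return coef[0, 0], coef[1:, 0]            # phi_0 and per-feature SVs
```

For a linear model with a zero baseline, the recovered SVs coincide with the model's coefficients, which is a convenient sanity check.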
**Please see the attached pdf.**
Pdf: /pdf/8b3b37b941c24012149b275460001699ae035ccc.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Agent-to-Sim: Learning Interactive Behavior from Casual Videos | Reject | Summary: The paper presents ATS (Agent-To-Sim), a framework to enable agent behavior modeling from multiple casual video captures in indoor scenarios captured during long spans of time. The proposed pipeline consists in (1) 4D reconstruction of the scene geometry and observer and agent motion, and (2) controllable agent behavior learning and generation.
For the first stage, multi-video registration uses coarse-to-fine registration to globally align the cameras to a shared canonical space derived from DINOv2 per-frame features (initialized with a walkthrough clip of the environment) and then jointly optimizes the 3D structures while adjusting the cameras locally with novel featuremetric losses (which makes the optimization robust to changes of lighting and appearance and improves alignment accuracy) and standard photometric and regularization losses. With the proposed (annealed) swapping of latent per-video codes during optimization, missing information is shared across videos, while video-specific details are kept.
For the controllable agent behavior modeling, in order to generate plausible interactive behaviors, the generated behavior conditions on an encoding of the scene, observer, and past from the agent's egocentric perspective, which avoids overfitting to specific locations in the scene. Then, the ego-perception-conditioned generation of full body motion proceeds hierarchically via diffusion: Generated goals Z condition generated paths P, which finally condition generated body motions G.
The included experiments reflect the quality of the 4D reconstructions achieved by the proposal, the improvements in displacement errors compared to two baselines (as well as ablations of the proposed method), and a qualitative analysis of the effects of the behavior conditioning signals.
Strengths: - Great technical achievement to reconstruct agent behavior in indoor settings, exploiting the shared information across different videos captured at different times via robust alignment based on semantic features from foundational image models (DINOv2) and diffusion-based short-term hierarchical motion generation.
- Plausible long-horizon generation of agent motion for different bodies, conditioned on the environment, observer, and past trajectory.
- Despite the complexity of the system, the description is relatively brief and complete, which, along with the rest of the paper, is excellently written.
Weaknesses: - The paper focuses on environment-aware motion of agents in the presence of a (human) observer. Even if out of scope for this paper, it would be interesting to discuss more complex agent-environment interactions (see my questions below).
- I believe the current experiments use a small number of environments/scenes, which makes it hard to justify considering the system for larger-scale deployment, but I'll be happy to update my score if the authors correct me.
Technical Quality: 3
Clarity: 4
Questions for Authors: - In the appendices (L.507) and implementation details (L. 245) the training time horizon is 6.4 seconds, but in L.181 the motion modeling sets a horizon of 5.6 seconds. Is my understanding correct that, for each window, the first 0.8 seconds are used as previous trajectory and the remainder 5.6 seconds as targets?
- How is an additional (moving) agent in the scene (e.g. a person or another animal moving in the background) currently handled by the 4D agent-observer-scene modeling method described for a single agent?
- Are there many examples of videos not reconstructible due to notable changes in the scene layout (e.g. movement of large furniture) in the captured data? If so, how is it handled? I did not see any reference to this issue in the original manuscript, but I think it is a reasonable discussion when attempting to employ this framework at scale.
- How could this framework be extended towards modeling complex interactions with the environment (e.g. opening a door, sitting on a chair after moving it, etc.)?
- How many scenes and different agents are used in training and validation? Do the specific agents overlap across splits, or just the types of agents?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors reasonably address limitations and social impact in the appendices.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive feedback! Below please find our responses to your questions and comments.
**Q1 The paper focuses on environment-aware motion of agents in the presence of a (human) observer. Even if out of scope for this paper, it would be interesting to discuss more complex agent-environment interactions. How is an additional (moving) agent in the scene (e.g. a person or another animal moving in the background) currently handled by the 4D agent-observer-scene modeling method described for a single agent?**
Our model is designed to handle interactions of a single agent with the observer and the scene. The other agents that appear in the videos are treated as part of the scene. However, due to the static scene assumption (Eq. 1), those moving agents are either averaged out or not well reconstructed (please see Fig. D in the [rebuttal pdf](https://openreview.net/attachment?id=abfRu0bgF7&name=pdf) for a visual example). Solving re-identification and multi-object tracking [E] in 3D will enable introducing multiple agents, which is an exciting future work.
[E] Rajasegaran, Jathushan, et al. "Tracking people by predicting 3d appearance, location and pose." CVPR 2022.
**Q2 Are there many examples of videos not reconstructible due to notable changes in the scene layout (e.g. movement of large furniture) in the captured data? If so, how is it handled? I did not see any reference to this issue in the original manuscript, but I think it is a reasonable discussion when attempting to employ this framework at scale.**
Thanks for the suggestion. As shown in Fig. D of the [rebuttal pdf](https://openreview.net/attachment?id=abfRu0bgF7&name=pdf), our method fails to reconstruct notable layout changes when they are only observed in a few views, e.g., the cushion and the large boxes (left) and the box (right). Leveraging generative image priors to inpaint the missing regions is a promising direction to tackle this problem [F].
[F] Weber, Ethan, et al. "Nerfiller: Completing scenes via generative 3d inpainting." CVPR. 2024.
**Q3 How could this framework be extended towards modeling complex interactions with the environment (e.g. opening a door, sitting on a chair after moving it, etc.)?**
To reconstruct complex interactions with the environment, one idea is to extend the scene representation to be hierarchical (represented as a kinematic tree), such that it consists of articulated models of interactable objects [G, H, I]. To generate plausible interactions between the agent and the scene (e.g., opening a door), we can extend the agent state $G$ to include both the agent and the articulated objects (e.g., door), and learn a behavior generator to generate their trajectory jointly [J].
[G] Song, Chaoyue, et al. "REACTO: Reconstructing Articulated Objects from a Single Video." CVPR. 2024.
[H] Liu, Jiayi, Ali Mahdavi-Amiri, and Manolis Savva. "Paris: Part-level reconstruction and motion analysis for articulated objects." ICCV. 2023.
[I] Wei, Fangyin, et al. "Self-supervised neural articulated shape and appearance models." CVPR. 2022.
[J] Li, Jiaman, et al. "Controllable human-object interaction synthesis." ECCV 2024.
**Q4 How many scenes and different agents are used in training and validation? Do specific agents overlap across splits, or just the types of agents?**
We demonstrate our approach on four types of agents with different morphology, i.e., cat, human, dog, and bunny, in three different scenes, where human and cat share the same scene. We train an instance-specific model for each agent, and the data is not mixed up across agents. For quantitative evaluation of behavior generation, we report the performance of the cat agent on the held-out test sequences of the same cat. Training a model across different agent identities and types will be an interesting future work.
**Q5 The current experiments use a small number of environments/scenes, which makes it hard to justify considering the system for larger-scale deployment.**
For the cat dataset, we use 26 video clips over the span of a month, which is not super large-scale but we believe this is an important step to go beyond a single video. The major difficulty towards large-scale deployment is the efficiency and robustness of 4D reconstruction algorithms. In terms of robustness, we showed a meaningful step towards scaling up 4D reconstruction by neural initialization (Eq. 6). We believe this paper is a step toward large scale deployment, and we will release code for reproducibility and make it easy to follow up.
**Q6 In the appendices (L.507) and implementation details (L. 245) the training time horizon is 6.4 seconds, but in L.181 the motion modeling sets a horizon of 5.6 seconds. Is my understanding correct that, for each window, the first 0.8 seconds are used as previous trajectory and the remaining 5.6 seconds as targets?**
You are totally correct about this!
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses and clarifications. I think the paper is nice to read and shows promising results, so I am keeping my score. | Summary: This paper discusses using an iPhone's RGBD camera to collect several hours of videos within a room over a time span of one month. Through these multi-view videos, a 4D reconstruction of the room is generated. A collection of rigid bodies is used to simulate agents (such as cats, dogs, etc.) in the room. Utilizing goal-conditional path generation technology, users can ultimately control the movement of these agents by setting goals.
Strengths: 1. The video presented in this paper is very effective; it reconstructs 4D video from a single view and reconstructs a complete room from multiple views.
2. In addition to reconstruction, the paper also discusses how to control the movement of the agent through goal-condition path generation.
3. Intuitively, I think this is a good paper and may inspire researchers in the field of 4D reconstruction.
Weaknesses: 1. While I am not an expert in 4D reconstruction, I find the presentation of this paper rather unclear, particularly the methodology section, which is extremely difficult to understand. My confusion began around lines 126-127. What are the color and feature descriptors of the video? I later noticed that ψ is described as the DINOv2 [40] feature of the input image. So, is ψ a feature of an image? How to obtain it? The paper should clarify this. Additionally, what is X, and is it a point cloud obtained from a mobile phone? If so, how does the point cloud acquire its color in Equation 2?
2. I suggest using a table to explain each symbol in detail. If the explanation of a symbol requires context from the paper, ensure it is as understandable as possible. For technical terms, provide detailed explanations within the paper. A comprehensive symbol table in the appendix would significantly enhance the paper's clarity.
3. The paper lacks detailed quantitative experiments to demonstrate the effectiveness of the method.
Technical Quality: 3
Clarity: 2
Questions for Authors: What is the practical use of this work?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors adequately addressed the limitations and potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive feedback! We added a table of notations to improve the clarity, and we will expand the explanation of individual symbols in the paper.
| Notation | Description |
|-------------|------------------------------------------------------------------------------------------------------|
| **Global Symbols** |
| $B$ | The number of bones of an agent. By default $B = 25$. |
| $M$ | The number of videos. |
| $N_i$ | The number of image frames extracted from video $i$. |
| $I_i$ | The sequence of color images $\{I_1, \ldots, I_{N_i}\}$ extracted from video $i$. |
| $\psi_i$ | The sequence of DINOv2 feature images $\{\psi_1, \ldots, \psi_{N_i}\}$ extracted from video $i$. |
| $T_i$ | The length of video $i$. |
| $T^*$ | The time horizon of behavior diffusion. By default $T^* = 5.6s$. |
| $T'$ | The time horizon of past conditioning. By default $T' = 0.8s$. |
| $Z \in \mathbb{R}^3$ | Goal of the agent, defined as the location at the end of $T^*$. |
| $P \in \mathbb{R}^{3 \times T^*}$ | Path of the agent, defined as the root body trajectory over $T^*$. |
| $G \in \mathbb{R}^{6B \times T^*}$ | Pose of the agent, defined as the 6DoF rigid motion of bones over $T^*$. |
| $\omega_s \in \mathbb{R}^{64}$ | Scene code, representing the scene perceived by the agent. |
| $\omega_o \in \mathbb{R}^{64}$ | Observer code, representing the observer perceived by the agent. |
| $\omega_p \in \mathbb{R}^{64}$ | Past code, representing the history of events happened to the agent. |
| **Learnable Parameters:** | **4D Reconstruction**|
| $T$ | Canonical NeRFs, including a scene MLP and an agent MLP. |
| $\beta_i \in \mathbb{R}^{128}$ | Per-video code that allows NeRFs to represent variations across videos. |
| $\mathcal{D}$ | Time-varying parameters, including $\{\xi, G, W\}$. |
| $\xi_t \in SE(3)$ | The camera pose that transforms the scene to the camera coordinates at $t$. |
| $G^b_t \in SE(3)$ | The transformation that moves bone $b$ from its rest state to time $t$ state. |
| $W \in \mathbb{R}^B$ | Skinning weights of a point, defined as the probability of belonging to bones. |
| $f_\theta$ | PoseNet that takes a DINOv2 feature image as input and produces a camera pose. |
| **Learnable Parameters:** | **Behavior Generation**|
| MLP$_{\theta_z}$ | Goal MLP that represents the score function of goal distributions. |
| ControlUNet$_{\theta_p}$| Path UNet that represents the score function of path distributions. |
| ControlUNet$_{\theta_G}$| Pose UNet that represents the score function of pose distributions. |
| ResNet3D | Scene perception network that produces $\omega_s$ from 3D feature grids. |
| MLP$_{\theta_o}$ | Observer MLP that produces $\omega_o$ from observer’s past trajectory in $T'$. |
| MLP$_{\theta_p}$ | Past MLP that produces $\omega_p$ from agent’s past trajectory in $T'$. |
Below please find our responses to your questions and comments.
**Q1 What are the color and feature descriptors of the video?**
Given a video $i$ with $N_i$ frames, we extract color images $\{I_1, \ldots, I_{N_i}\}$ and compute their dense feature descriptors $\{\psi_1, \ldots, \psi_{N_i}\}$ using a pre-trained DINOv2 network.
**Q2 What is X, and is it a point cloud obtained from a mobile phone? If so, how does the point cloud acquire its color in Equation 2?**
${\bf X}$ in Eq. 1-3 are not point clouds. They are continuous 3D coordinates used to define the density and color of the NeRFs in their implicit representation. The color at a location ${\bf X}$ can be queried by evaluating the MLP in Eq. 2. The parameters of the MLP are learned via the differentiable rendering optimization in Eq. 7. We will clarify this in the paper.
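To illustrate what evaluating an MLP at a continuous coordinate means, here is a toy NeRF-style query in NumPy (our own minimal sketch; the paper's actual networks, conditioning, and activations differ):

```python
import numpy as np

def positional_encoding(X, n_freqs=4):
    """Sin/cos features of continuous 3D coordinates, NeRF-style."""
    freqs = 2.0 ** np.arange(n_freqs) * np.pi
    angles = X[..., None] * freqs                    # (..., 3, n_freqs)
    return np.concatenate([np.sin(angles), np.cos(angles)],
                          axis=-1).reshape(*X.shape[:-1], -1)

def query_color_density(X, params):
    """Evaluate a toy MLP at continuous coordinates X of shape (N, 3),
    returning RGB colors in [0, 1] and non-negative densities."""
    h = positional_encoding(X)
    for W, b in params[:-1]:
        h = np.maximum(h @ W + b, 0.0)               # ReLU hidden layers
    W, b = params[-1]
    out = h @ W + b                                  # (N, 4): rgb + sigma
    rgb = 1.0 / (1.0 + np.exp(-out[:, :3]))          # sigmoid -> [0, 1]
    sigma = np.maximum(out[:, 3], 0.0)               # non-negative density
    return rgb, sigma
```

Any continuous 3D point can be queried this way; no discrete point cloud is stored, which is the sense in which the representation is implicit.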
**Q3 The paper lacks detailed quantitative experiments to demonstrate the effectiveness of the method.**
We provided additional quantitative comparisons and analysis in the [global response](https://openreview.net/forum?id=fzdFPqkAHD&noteId=abfRu0bgF7) Tables A-C, especially on camera localization, 4D reconstruction, and behavior learning. We found that our design choices are necessary to achieve good performance.
**Q4 What is the practical use of this work?**
Our goal is to learn “world models” that one can interact with from videos. This is a fundamental question that has practical application in generating contents for VR/AR, as well as robot learning with plausible agent simulation. For VR/AR applications, our approach enables generating data-driven agents that can interact with humans and scenes in a realistic manner. For robotics, the learned realistic behavior simulation can be used to pretrain robot policies that have a smaller sim-to-real gap, before adapting to the real world.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. The table is very clear and addresses most of my concerns. I would suggest that, in future versions, this information be included in the appendix, where it can be expanded in greater detail, given that space is not as limited there. This addition would greatly enhance the reader's understanding of the paper.
While it is challenging to reproduce the results based solely on the paper's description, the availability of the code is a significant asset. If the code is complete, I believe that with improvements in the clarity of the writing and the inclusion of more detailed explanations in the appendix, this paper could be strong enough for publication in NeurIPS. However, my primary concern is that the current presentation makes the paper somewhat difficult to follow, which could impact its accessibility to a broader audience.
My remaining question is whether the authors could restate the paper’s task, objectives, inputs and outputs, datasets, key modules, and the corresponding inputs and outputs for those key modules in a way that is more accessible to readers. I believe these elements could be added to the appendix to further aid reader comprehension.
---
Reply to Comment 1.1.1:
Comment: Thanks for your response and additional feedback. We provide the requested elements below and will improve the presentation accordingly. Please kindly let us know if anything is missing or unclear.
**Task and Objectives.** We develop a method to learn interactive behavior models of agents from casual videos captured over a long time horizon. The objectives include:
- **Casual 4D reconstruction**: Enabling low-cost capture of agent’s shape, motion, and interactive behavior signals from casual videos (e.g., captured with an iPhone);
- **Interactive behavior modeling**: Learning behavior of agents interacting with the environment and the observer; and
- **Flexible representation**: Extending behavior learning to broader agent categories, such as animals.
These will ultimately contribute to VR/AR and robotics by generating interactive content for VR/AR, as well as enabling robot learning with plausible agent simulation.
**Dataset.** We demonstrate our approach on four types of agents with different morphologies, i.e., cat, human, dog, and bunny, in three different scenes, where the human and cat share the same scene. Here is a breakdown of the data we used:
| | # Videos | # Frames | # Days | Time span (days) |
|--------|--------------|--------------|------------|------------------|
| Cat | 26 | 15391 | 9 | 37 |
| Human | 5 | 5668 | 2 | 4 |
| Dog | 3 | 4330 | 1 | 1 |
| Bunny | 2 | 1080 | 1 | 1 |
**Input/output**. We provide the input/output of the global system and key submodules below. We integrated the above into a pipeline figure and will add it to the appendix.
- **Global Input/output**
- Input: A walk-through video of the environment and a video collection of the target agent.
- Output: An interactive behavior generator of the agent.
- **Neural localization** (Sec 3.2, L154-165)
- Input: Neural localizer $f_\theta$ and the video collection of the agent.
- Output: Camera poses for each video frame.
- **4D Reconstruction with feature-metric alignment** (Sec 3.2, L167-176)
- Input: Video collection of the agent and corresponding camera poses.
- Output: Reconstruction of the geometry ${\bf T}$, agent motion ${\bf G}$ and observer motion ${\boldsymbol \xi}$.
- **Behavior learning** (Sec. 3.3)
- Input: Reconstruction of the scene geometry ${\bf T}$, agent motion ${\bf G}$ and observer motion ${\boldsymbol \xi}$.
- Output: An interactive behavior generator of the agent.
- **Behavior generation** (Fig. 3)
- Input: Ego-centric scene feature grid, agent's past trajectory over horizon $T'=0.8s$, observer's past trajectory over $T'=0.8s$.
- Output: Goal, path, and a sequence of full body motion of the agent over $T^*=5.6s$. | Summary: This paper presents Agent-to-Sim, an approach to learn a 3D agent in a 3D environment from casual videos of the same agent captured over a long horizon. ATS first conducts 4D spatio-temporal reconstruction from the set of videos, including a deformable agent, the background scene, and a moving observer. This is done with a coarse-to-fine video registration method. Then, given the 4D reconstruction, ATS learns a hierarchical diffusion model over the agent's goal, path, and pose trajectories. The overall approach is tested on a dataset of iPhone videos over several types of agents and motion patterns.
Strengths: - I am not a subject matter expert in this field. However, the paper was clear and well-written such that even a non-expert like myself can understand the proposed high-level approach. The attached supplementary materials give a great visual overview of the paper.
- The paper outlines several limitations of the proposed approach and future directions to address them. The limitations are meaningful and help the reader better understand the problem setting, modelling assumptions, and future directions.
- The paper tackles a challenging problem on the path towards building scalable and realistic simulators.
Weaknesses: - Certain technical details are not clear for readers unfamiliar with the related literature. This limits understanding and reproducibility. See questions.
- Evaluation of the method seems limited and is mostly limited to qualitative comparisons. I suppose this is inevitable given that ATS tackles a new problem setting than related work. However, it does limit the reader's ability to evaluate the significance of this methodology.
- For behavior generation evaluation, I don't understand why certain baselines were selected. In particular, FaF seems like a detection + multi-agent motion forecasting paper for self-driving, so it's not immediately clear how it can be adapted to this setting.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the video code $ \beta $ and how is it used?
2. How are the ego-perception codes used in behavior generation?
3. What is B in L182?
4. Why is each module in behavior generation evaluated separately, conditioned on GT inputs? Since the task is behavior prediction, another natural evaluation setting seems to be an end-to-end evaluation setting comparing body motion prediction from ego-perception inputs. This would open up other ablation studies to understand the efficacy of the hierarchical model; e.g., by comparing against a non-hierarchical diffusion model.
5. In L191-196, how are the two diffusion models used? Are they combined to use a form of classifier-free guidance?
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed limitations and potential social impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive feedback! We plan to expand on details in the additional page of the final version as well as the appendix. In [the response to Reviewer C9Gr](https://openreview.net/forum?id=fzdFPqkAHD&noteId=cNiL3khmUC), we also added a table of notations to improve the clarity. Our code and data will be released for reproducibility and our goal is to allow researchers to continue working along this path.
Below please find our responses to your questions and comments.
**Q1 Evaluation of the method seems limited and is mostly limited to qualitative comparisons.**
We provided additional quantitative comparisons and analysis in the [global response](https://openreview.net/forum?id=fzdFPqkAHD&noteId=abfRu0bgF7) Tables A-C, specifically on camera localization, 4D reconstruction, and behavior learning. We found that our design choices are necessary to achieve good performance.
**Q2 Why is each module in behavior generation evaluated separately, conditioned on GT inputs? Since the task is behavior prediction, another natural evaluation setting seems to be an end-to-end evaluation setting comparing body motion prediction from ego-perception inputs. This would open up other ablation studies to understand the efficacy of the hierarchical model; e.g., by comparing against a non-hierarchical diffusion model.**
Thanks for the great suggestion! We re-did the evaluation of the behavior prediction using the suggested end-to-end setup without using GT goals/paths. As a result, the hierarchical model outperforms the one-stage model by a large margin on all metrics. We posit that the hierarchical design makes it easier to learn the individual modules. *Please see the [global response](https://openreview.net/forum?id=fzdFPqkAHD&noteId=abfRu0bgF7) for details.*
**Q3 What is the video code 𝛽 and how is it used?**
The video code 𝛽 is a 128-dimensional latent code that is concatenated with the Fourier positional code as the input to the NeRFs, similar to GIRAFFE [B]. We use this idea to represent scenes with slightly different layouts given a shared NeRF backbone.
[B] Niemeyer, Michael, and Andreas Geiger. "Giraffe: Representing scenes as compositional generative neural feature fields." CVPR. 2021.
**Q4 How are the ego-perception codes used in behavior generation?**
The ego-perception codes are used as conditioning signals for behavior generation. Specifically, we concatenate the perception codes with the positional encoding of the diffusion timestep (representing the noise level $\sigma$) to predict the amount of noise. The predicted noise is then subtracted from the input noisy signal. This process is repeated 50 times until a clean signal is obtained.
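For intuition, the described conditioning loop can be sketched in a few lines. This is a toy NumPy illustration: `predict_noise` is a hypothetical stand-in for the trained denoiser, and all shapes, the schedule, and constants are made up, not the paper's.

```python
import numpy as np

def positional_encoding(t, dim=8):
    """Sinusoidal encoding of the diffusion timestep (noise level)."""
    freqs = 2.0 ** np.arange(dim // 2)
    return np.concatenate([np.sin(t * freqs), np.cos(t * freqs)])

def predict_noise(noisy, cond):
    """Hypothetical stand-in for the trained denoiser network.

    A real model would be a neural net conditioned on `cond`; here we
    return a deterministic function of the inputs so the loop runs.
    """
    return 0.1 * noisy + 0.01 * cond.sum()

def denoise(perception_codes, signal_dim=16, steps=50, seed=0):
    """Iteratively subtract predicted noise, conditioned on perception codes."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(signal_dim)  # start from pure noise
    for t in range(steps, 0, -1):
        # Concatenate perception codes with the timestep encoding,
        # as described in the rebuttal, then subtract predicted noise.
        cond = np.concatenate([perception_codes, positional_encoding(t)])
        x = x - predict_noise(x, cond)
    return x

clean = denoise(np.zeros(32))
print(clean.shape)  # (16,)
```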
**Q5 What is B in L182?**
$B$ in $G \in \mathbb{R}^{6B \times T^*}$ is the number of bones of the agent. Each bone has 6 degrees-of-freedom including a center location (3DoF) and orientation (3DoF).
**Q6 In L191-196, how are the two diffusion models used? Are they combined to use a form of classifier-free guidance?**
Our behavior model consists of three diffusion models, for goal, path, and full body motion generation respectively. Each diffusion model is trained with random dropout of the conditioning [C]. At test time, we use classifier-free guidance to mix the conditional and unconditional score estimates with a guidance scale s = 2.5, following MDM [57].
To enable precise user control for the path and full body models, we follow ControlNet [72] and OmniControl [62], which use two networks with identical architectures and dense skip connections in between. The first network receives the perception codes $\omega$ only; the second network receives additional control inputs (i.e., goal and path) and modulates the intermediate features of the first network. We found the ControlUNet architecture allows precise control when a goal and path are provided by the user, as shown in Tab. D in the response to [Reviewer m6Ge](https://openreview.net/forum?id=fzdFPqkAHD&noteId=IUiGgGniSH).
[C] Ho, Jonathan, and Tim Salimans. "Classifier-free diffusion guidance." arXiv preprint arXiv:2207.12598 (2022).
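The score-mixing step in the answer above is standard classifier-free guidance; a minimal sketch follows. Only the guidance scale s = 2.5 comes from the rebuttal; the array values and names are arbitrary.

```python
import numpy as np

def cfg_mix(eps_cond, eps_uncond, s=2.5):
    """Classifier-free guidance: extrapolate from the unconditional
    noise estimate toward the conditional one with guidance scale s."""
    return eps_uncond + s * (eps_cond - eps_uncond)

eps_cond = np.array([1.0, 2.0])    # toy conditional estimate
eps_uncond = np.array([0.0, 1.0])  # toy unconditional estimate
print(cfg_mix(eps_cond, eps_uncond))  # [2.5 3.5]
```

With s = 1 this reduces to the purely conditional estimate; s > 1 amplifies the effect of the conditioning signal.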
**Q7 FaF seems like a detection + multi-agent motion forecasting paper for self-driving, so it's not immediately clear how it can be adapted to this setting.**
This baseline represents goal, path, and full body motion as Gaussians, and learns to predict both the mean and variance of Gaussian distributions by minimizing the negative log-likelihood [D]. We implemented and trained it using the same data as ATS.
This baseline was named FaF because its inputs/outputs are close to those of FaF’s motion forecasting module. To avoid confusion, in the new [Table A](https://openreview.net/forum?id=fzdFPqkAHD&noteId=abfRu0bgF7) and [Table D](https://openreview.net/forum?id=fzdFPqkAHD&noteId=IUiGgGniSH), we renamed it as Gaussians.
[D] Kendall, Alex, and Yarin Gal. "What uncertainties do we need in bayesian deep learning for computer vision?." NeurIPS 2017.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I agree with Reviewer m6Ge that the paper could benefit from more detailed exposition and evaluation. The authors have also agreed to improve the paper's exposition with an additional page and provided additional evaluation of its methodology in camera localization, 4D reconstruction, and behavior simulation. Considering this, I would like to maintain my rating and recommend accepting this paper. | Summary: The paper presents a method for learning interactive behaviors of various agents, including humans, cats, dogs and a bunny, by leveraging unstructured videos captured casually. The various videos are registered together in a common frame, offering a 4D reconstruction of the agent and the environment. Based on this reconstruction, the multi-modal distribution describing different agent behaviors is learned by using diffusion models and Control UNets.
Strengths: The paper addresses the very challenging problems of learning agent behaviors from a collection of unstructured videos captured over different sessions. To learn interactive behaviors, both the trajectories of the agent and the surrounding environment need to be reconstructed, as to have relevant context of the behavior. Additionally, the motion of the camera/observer need to be reconstructed as well, to allow the registration of the videos in a common frame. As the videos are collected over a potentially large period of time, change in the environment can occur, complicating the tasks of registration and reconstruction.
The idea of using ego-perception encoding for the learning and generation of plausible interactive behaviors is another strong point. After the agent and the environment are reconstructed, ego-perception encoding learns perception codes of the scene, the observer, and the past trajectory, factors that condition the generation of the agent's body motion.
Behavior generation considers the generation of the goal and the conditioned generation of the path, taking into account the goal.
Weaknesses: There are numerous models employed in the proposed framework. Due to the limited space available, few details are provided about their motivation and their implementation. This makes both understanding of the work and its reproducibility very challenging.
A particular aspect which is not addressed in detail is the modeling of the agents, especially of animals like cats that are quite challenging due to their non-rigid nature. In particular, it is not clear how eq.2 is combined with eq.3, and why the same number of "bones" (b=25, L.137) is used for all agents. Also, the nature of G^b is not discussed in detail.
Additionally, details on how NeRF-type reconstructions are combined with feature descriptors, and how this helps in handling layout changes is not discussed in detail.
More examples like the previous can be given for different aspects covered in the paper, like camera localization (eq.6), scene alignment (eq.7) and behavior learning (eq.10 and 11). Each of these aspects would certainly require more space for describing in detail the corresponding models and support the relative claims in the experimental evaluation.
Regarding experimental evaluation in particular, only high-level results regarding the agent behavior prediction are provided, while it would be crucial to quantitatively assess the quality of 4D reconstruction and, importantly, to include a detailed ablative study.
Overall, although some very interesting ideas are proposed in this work, both for 4D reconstruction of agent behaviors and behavior learning and generation, I think that the paper is too densely packed without having enough space to describe the paper contributions in sufficient detail. In my view, even describing in detail one of the 4D reconstruction or agent behavior modeling parts alone would be challenging in the space available. This affects also the experimental evaluation, as not all claims are supported by the results.
### Minor comments
- L.35: "Such systems do not scale well"
- Figure 1, caption: incomplete sentence "conditioned different observer trajectories"
- L.88: "whiling accounts"
- L.113: what are "longitudinal videos"?
- Figure 3, caption: what does "low latency" mean in this context?
- L.215: "we collect the a"
Technical Quality: 2
Clarity: 2
Questions for Authors: - How are the agents, and especially the animals, modeled?
- What happens if a behavior is observed only once in the dataset or conversely, how many times need a behavior be observed to be included in the model?
- How robust is the method with respect to changes in the environment?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: Limitations of the work are discussed in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive feedback! Due to the complex nature of the problem, it is difficult to unpack all the details in the limited space. We plan to expand on details in the additional page of the final version as well as the appendix. In [the response to Reviewer C9Gr](https://openreview.net/forum?id=fzdFPqkAHD&noteId=cNiL3khmUC), we also added a table of notations to improve the clarity. Our code and data will be released for reproducibility and our goal is to allow researchers to continue working along this path.
Below please find our responses to your questions and comments.
**Q1 How are the agents, and especially the animals, modeled? A particular aspect which is not addressed in detail is the modeling of the agents, especially of animals like cats that are quite challenging due to their non-rigid nature.**
We use the bag-of-bones model from BANMo [65], which accounts for both articulated motion and non-rigid deformation. The deformation is computed by blending the motion of a set of unstructured 3D coordinates/bones that rotate and translate over time.
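As a rough illustration of the bag-of-bones idea, the sketch below deforms a point by blending per-bone rigid transforms. The weights, transforms, and linear blending used here are generic and illustrative, not BANMo's exact skinning formulation.

```python
import numpy as np

def blend_point(x, bone_rotations, bone_translations, weights):
    """Deform a 3D point by blending per-bone rigid transforms.

    x: (3,) point in the canonical (time-invariant) space
    bone_rotations: (B, 3, 3) rotation per bone
    bone_translations: (B, 3) translation per bone
    weights: (B,) skinning weights, summing to 1
    """
    # Apply each bone's rigid transform to the point: (B, 3) candidates.
    transformed = np.einsum('bij,j->bi', bone_rotations, x) + bone_translations
    # Weighted average of the B candidates gives the deformed point.
    return weights @ transformed

B = 3
R = np.stack([np.eye(3)] * B)  # identity rotations for simplicity
t = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
w = np.array([0.5, 0.25, 0.25])
print(blend_point(np.zeros(3), R, t, w))  # blends to [0.25, 0.25, 0.0]
```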
**Q2 In particular, it is not clear how eq.2 is combined with eq.3, and why the same number of "bones" (b=25, L.137) is used for all agents. Also, the nature of G^b is not discussed in detail.**
We model the density and color of an agent in the time-invariant space (Eq. 2), which can be mapped to the deformed space at a given time instance (Eq. 3). We use $B=25$, as it is a superset of the bones needed for all the agents we processed. $G^b_t$ is a rigid transformation representing the state of bone $b$ at time $t$.
**Q3 How NeRF-type reconstructions are combined with feature descriptors, and how this helps in handling layout changes?**
Similar to distilled feature fields [A], we extend a static NeRF to represent feature fields (Eq. 1), and optimize them together with the motion $\mathcal{D}$ using DINOv2 descriptors. This is referred to as feature-metric bundle adjustment (FBA) in Eq. 7. We find FBA is robust to moderate layout changes, since DINOv2 feature descriptors are robust to local appearance changes [40]. We quantitatively validate the effect of FBA in the [global response](https://openreview.net/forum?id=fzdFPqkAHD&noteId=abfRu0bgF7) Tab B. *This is also evident in Fig. D of the rebuttal pdf, where we can localize cameras despite the layout changes of the scene.*
[A] Kobayashi, Sosuke, Eiichi Matsumoto, and Vincent Sitzmann. "Decomposing nerf for editing via feature field distillation." NeurIPS 2022.
**Q4 Clarification on designs (Eq. 6-7, Eq. 10-11).**
We added an ablation of Eq. 10-11 in Table D row (d). We find that replacing ControlUNet with concatenation (L197-198, concatenating goals with perception codes) produces worse results (e.g., Path error: 0.115 vs 0.146). We also provided additional quantitative comparisons and analysis on camera localization (Eq. 6), feature-metric bundle adjustment (Eq. 7) in the [global response](https://openreview.net/forum?id=fzdFPqkAHD&noteId=abfRu0bgF7) Table B. We found that those designs are necessary to achieve good performance.
Table D: Evaluation of Behavior Control. We separately evaluate path and full body motion generation, given guidance signals of goal and path respectively. The metrics are minimum average displacement error (minADE) with standard deviations (±σ). The best results are in bold.
| Method | Path (m) ↓ | Orientation (rad) ↓ | Joint Angles (rad) ↓ |
|-------------------------------|-------------------|----------------------|----------------------|
| Gaussian [31, 44] | 0.206±0.002 | 0.370±0.003 | 0.232±0.001 |
| ATS (Ours) | **0.115±0.006** | **0.331±0.004** | **0.213±0.001** |
| | | | |
| (a) w/o observer $ω_o$ | 0.126±0.011 | **0.330±0.004** | **0.212±0.001** |
| (b) w/o scene $ω_s$ | 0.179±0.003 | **0.329±0.004** | **0.212±0.001** |
| (c) ego→world [61] | 0.209±0.002 | 0.429±0.006 | 0.250±0.002 |
| (d) control-unet→concat | 0.146±0.005 | 0.351±0.004 | 0.220±0.001 |
**Q5 What happens if a behavior is observed only once in the dataset or conversely, how many times does a behavior need to be observed to be included in the model?**
Due to our ego-centric encoding (Eq. 12), we find that a behavior can be learned and generalized to novel situations even when seen only once. Although there is only one data point where the cat jumps off the dining table, our method can generate diverse motions of the cat jumping off the table while landing at different locations (to the left, middle, and right of the table). *Please see Fig B of the [rebuttal pdf](https://openreview.net/attachment?id=abfRu0bgF7&name=pdf) for the corresponding visual.*
**Q6 How robust is the method with respect to changes in the environment?**
We noticed displacements of chairs and the presence of new furniture in our captured data. Our method is robust to these in terms of camera localization (Tab B of the global response, Fig D of the rebuttal pdf). However, 3D reconstruction of these transient objects is challenging and we leave it as future work.
**Q7 What "longitudinal videos" are?**
Longitudinal videos come from the term “longitudinal study”, which refers to a research design that involves repeated observations of the same variables (e.g., people) over long periods of time. We think it fits well with our study of learning a behavior model of agents from videos captured over a long-horizon. We will clarify.
**Q8 What does "low latency" mean in this context?**
Low-latency indicates the model can generate goals at an interactive frame-rate. We will clarify.
---
Rebuttal 2:
Title: Comments after rebuttal
Comment: I thank the authors for their responses and their comments. I appreciate the additional results provided as well as the evaluation regarding camera localization and 4D reconstruction. Their answers have clarified some aspects, especially regarding agent reconstruction. As mentioned in the reviews, there are issues regarding notation and clarity which, to some extent, are due to the limited space available for describing all the claimed contributions in sufficient detail. In my view, the one additional page of the final version will still not be sufficient. This also concerns reproducibility, as sufficient details need to be provided in the paper itself, even if the code will be published. I increase my rating to borderline reject, as I appreciate the contributions of this work and the answers provided by the authors, yet I feel that a more detailed presentation and evaluation would make the paper much stronger. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their feedback. We propose an approach to learn an interactive behavior model of agents from casual videos captured over a long time horizon. Reviewers note that we tackle a "challenging" problem (m6Ge, Kj9h) with “very interesting”, “effective” ideas (m6Ge, C9Gr), “great technical achievement to reconstruct agent behavior” (xDUi) as well as “great visuals” (Kj9h) that “may inspire researchers in the field of 4D reconstruction” (C9Gr).
This paper received 3 above-accept reviews: Accept, WA, BA, and one Reject. The reject recommendation is due to the lack of space and detailed description given the complexity of the problem addressed. Other reviewers requested quantitative evaluations. We report more quantitative results on behavior prediction (m6Ge, Kj9h), camera registration and 4D reconstruction (m6Ge, Kj9h, C9Gr). Please also note that we will make our code/data available for reproducibility, and improve the exposition based on the feedback using the extra one-page allowance.
**End-to-end evaluation and comparison to a 1-stage model (Kj9h)**: We re-did the evaluation of the behavior prediction using the suggested end-to-end setup without using GT goals/paths (please see the new Table A), and added comparison against the 1-stage model (row a). Our hierarchical model outperforms the 1-stage model by a large margin on all metrics. We posit that the hierarchical design makes it easier to learn the individual modules. We also re-ran the other ablations in this setting (rows b-d), which verifies our design choices.
Table A: End-to-end Evaluation of Interactive Behavior Prediction. We report results of predicting goal, path, orientation, and joint angles, using $K = 16$ samples across $L = 12$ trials. The metrics are minimum average displacement error (minADE) with standard deviations (±σ). The best results are in bold.
| Method | Goal (m) ↓ | Path (m) ↓ | Orientation (rad) ↓ | Joint Angles (rad) ↓ |
|---------------------------------|-------------------|----------------------|----------------------|----------------------|
| Location prior [94] | 0.663±0.307 | N.A. | N.A. | N.A. |
| Gaussian [31, 44] | 0.942±0.081 | 0.440±0.002 | 1.099±0.003 | 0.295±0.001 |
| ATS (Ours) | **0.448**±0.146 | **0.234**±0.054 | **0.550**±0.112 | **0.237**±0.006 |
| | | | | |
| (a) hier→1-stage [73] | 1.322±0.071 | 0.575±0.026 | 0.879±0.041 | 0.263±0.007 |
| (b) w/o observer $ω_o$ | 0.647±0.148 | 0.327±0.076 | 0.620±0.092 | 0.240±0.006 |
| (c) w/o scene $ω_s$ | 0.784±0.126 | 0.340±0.051 | 0.678±0.081 | 0.243±0.007 |
| (d) ego→world [61] | 1.164±0.043 | 0.577±0.022 | 0.873±0.027 | 0.295±0.006 |
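For reference, the minADE metric reported in Table A can be sketched as follows. This is a generic implementation of minimum average displacement error over K sampled trajectories, with made-up toy data; it is not the paper's evaluation code.

```python
import numpy as np

def min_ade(samples, gt):
    """minADE: among K predicted trajectories, take the smallest
    mean per-step Euclidean distance to the ground truth.

    samples: (K, T, D) predicted trajectories
    gt: (T, D) ground-truth trajectory
    """
    dists = np.linalg.norm(samples - gt[None], axis=-1)  # (K, T)
    return dists.mean(axis=1).min()  # best sample's average error

gt = np.zeros((4, 2))  # toy 4-step 2-D ground truth
samples = np.stack([np.ones((4, 2)), np.zeros((4, 2))])  # one perfect sample
print(min_ade(samples, gt))  # 0.0
```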
**Camera localization (m6Ge, Kj9h, C9Gr)**: We added an experiment on camera localization using GT cameras from annotated GT correspondences. *A visual of the annotated GT correspondence and 3D alignment can be found in Fig. C of the attached pdf.*
We report camera translation and rotation errors in Table B. We observe that removing neural localization (Eq. 6) produces significantly larger localization error (e.g., Rotation error: 6.35 vs 37.59). Removing feature-metric bundle adjustment (Eq. 7) also increases the error (e.g., Rotation error: 6.35 vs 22.47). Our method outperforms multi-video TotalRecon by a large margin due to the above innovations.
Table B: Evaluation of Camera Registration: The best results are in bold.
| Method | Rotation Error (°) ↓ | Translation Error (m) ↓ |
|-----------------------------|----------------------|-------------------------|
| Ours | **6.35** | **0.41** |
| w/o Neural Localizer | 37.59 | 0.83 |
| w/o Featuremetric BA | 22.47 | 1.30 |
| Multi-video TotalRecon | 59.19 | 0.68 |
**4D reconstruction (m6Ge, Kj9h, C9Gr)**. We added an experiment to evaluate the accuracy of 4D reconstruction using synchronized videos captured with two moving iPhone cameras looking from opposite views. We compute the GT relative camera pose between the two cameras from 2D correspondence annotations. One of the synchronized videos is used for 4D reconstruction, and the other one is used as held-out test data. For evaluation, we render novel views from the held-out cameras and compute novel view depth accuracy DepthAcc (depth accuracy thresholded at 0.1m) for all pixels, agent, and scene, following TotalRecon [52].
Our method outperforms both the multi-video and single-video versions of TotalRecon by a large margin in terms of depth accuracy and LPIPS, due to its ability to leverage multiple videos. *Please see Fig A in the rebuttal pdf for qualitative comparison.*
Table C: Evaluation of 4D Reconstruction. The best results are in bold.
| Method | DepthAcc (all) ↑ | DepthAcc (fg) ↑ | DepthAcc (bg) ↑ | LPIPS (all) ↓ | LPIPS (fg) ↓ | LPIPS (bg) ↓ |
|----------------|------------------|-----------------|-----------------|---------------|---------------|---------------|
| Ours | **0.708** | **0.695** | **0.703** | **0.613** | **0.609** | **0.613** |
| Single-video TotalRecon | 0.533 | 0.685 | 0.518 | 0.641 | 0.619 | 0.641 |
| Multi-video TotalRecon | 0.093 | 0.644 | 0.047 | 0.622 | 0.616 | 0.623 |
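For concreteness, a DepthAcc-style metric (fraction of pixels whose rendered depth lies within the 0.1 m threshold) can be sketched as below. This is our reading of the thresholded depth-accuracy metric described above, with toy per-pixel depths, not TotalRecon's actual evaluation code.

```python
import numpy as np

def depth_acc(pred, gt, thresh=0.1):
    """Fraction of pixels whose predicted depth is within `thresh`
    meters of the ground-truth depth."""
    return np.mean(np.abs(pred - gt) < thresh)

gt = np.array([1.0, 2.0, 3.0, 4.0])     # toy ground-truth depths (m)
pred = np.array([1.05, 2.5, 3.0, 4.2])  # toy predicted depths (m)
print(depth_acc(pred, gt))  # 0.5
```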
Pdf: /pdf/de2f4a7d2e5618e828401c649a3739e004afdcbd.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SLIM: Style-Linguistics Mismatch Model for Generalized Audio Deepfake Detection | Accept (poster) | Summary: This paper proposed a new approach named Style-Linguistic Mismatch (SLIM) for generalizable audio deepfake detection. The authors claimed that a certain dependency between linguistic information and style information can generalize well for audio anti-spoofing tasks. Additionally, the proposed method can also explain the final decision of the deep learning model. In order to prove the claim, a proof-of-concept experiment was conducted to show that real audio has a higher correlation coefficient between linguistics and style in Table 1. Based on this, the authors proposed a two-stage learning framework. In the first stage, the dependency is captured by two compression modules based on SSL features. In the second stage, a simple projection head is trained on the combination of the extracted dependency with the SSL features.
The authors conducted sufficient experiments to demonstrate the generalization ability of the proposed method. Two in-domain and two out-of-domain evaluation datasets were used for this purpose. Moreover, the method beats several SOTA baselines on these datasets, enhancing the reliability of this work. The authors also provided ample analysis and visualization to support their hypothesis.
Overall, it is a good paper for generalizable audio deepfake detection task.
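The proof-of-concept described in this summary (real audio showing a higher correlation between style and linguistic features than fake audio) can be illustrated with a toy sketch. The frame-level features, mean-pooling, and Pearson correlation below are illustrative stand-ins, not SLIM's actual feature extractors or dependency measure.

```python
import numpy as np

def style_linguistics_corr(style_feats, ling_feats):
    """Pearson correlation between paired frame-level style and
    linguistic features (T frames each, mean-pooled to 1-D per stream)."""
    s = style_feats.mean(axis=1)  # collapse feature dims per frame
    l = ling_feats.mean(axis=1)
    return np.corrcoef(s, l)[0, 1]

rng = np.random.default_rng(0)
base = rng.standard_normal((100, 8))  # shared "content" driving both streams
real_style = base + 0.1 * rng.standard_normal((100, 8))  # style tracks content
real_ling = base
fake_style = rng.standard_normal((100, 8))  # style decoupled from linguistics
print(style_linguistics_corr(real_style, real_ling) >
      style_linguistics_corr(fake_style, real_ling))  # True
```

Under these toy assumptions, real speech (style coupled to content) yields a high correlation while the decoupled "fake" stream does not, mirroring the trend the reviewer cites from Table 1.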
Strengths: 1. Explicitly explored the dependency between style and linguistics for audio deepfake detection
2. Pay attention to interpretable and generalizable audio deepfake detection simultaneously, which is novel to the community.
3. Good performance was shown in the experiments, especially the significant improvement on the out-of-domain datasets, which is crucial for anti-spoofing tasks.
Weaknesses: 1. The first training stage adopts an idea from anomaly detection, in which only real audio (normal data) is used for training. However, in practice real normal data exists at a much larger order of magnitude than what was used in the experiments. These corner cases should be considered in the work.
Technical Quality: 3
Clarity: 3
Questions for Authors: I am curious whether the authors applied their method to other unseen attack scenarios. For example, in [1], the authors proposed a new unseen scenario, where some audio genres in the evaluation dataset are not present in the training dataset. I think if the authors could provide some results on this scenario, it would make their proposed method more convincing.
[1] Zeng, Chang, et al. "Improving Generalization Ability of Countermeasures for New Mismatch Scenario by Combining Multiple Advanced Regularization Terms." arXiv preprint arXiv:2305.10940 (2023).
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time spent reviewing our manuscript and pointing us to another dataset that we believe would be a great fit for further evaluation of the proposed method. Below is our point-by-point response:
- “magnitude of stage 1 training data”
We acknowledge that our current approach is limited to available real data. As listed in Section 2, the datasets for Stage 1 are currently limited to CommonVoice and RAVDESS. As a follow-up for more robustness, we plan to leverage larger pre-training datasets, for example, those used in learning self-supervised speech representations. We will clarify this in the Limitations section.
- “applying SLIM to other unseen attack scenarios”
Thank you for the reference; this is indeed an interesting setup and we will extend our experiments accordingly in the future. However, our current scope of “unseen” attacks is limited to English datasets and VC/TTS systems in public evaluation datasets. (Please also see our response to Reviewer 4aqZ regarding our problem scope.) Since the recommended dataset is in Chinese, our model would need to be retrained on Chinese data to perform the evaluation. In the future, we plan to extend our investigation to include more recent TTS/VC methods as well as more varied genres and languages.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I am keeping my score as is. | Summary: The paper suggests a novel method for detecting synthesized speech. Namely, the framework introduced in the paper allows the detection of a statistically significant mismatch between the style (i.e., paralinguistic attributes) and linguistic characteristics of synthesized speech samples, which helps to differentiate them from real speech samples. The framework is based on frozen pre-trained SSL encoders and relatively small learnable parts that make the experiments computationally feasible for a small cluster.
The authors compare their method with several baselines, and it outperforms the SoTA model in a cross-domain setup.
Strengths: - The idea of the proposed method is non-trivial and allows us to better understand the differences between real and synthesized speech.
- The proposed method outperforms SoTA on out-of-domain data and is on par with SoTA on in-domain data.
- The evaluation is good, and the method is compared with a number of very decent baselines.
- The paper is well-written and easy to follow.
Weaknesses: - The Analysis section is somewhat limited (especially the "Interpretation of model decisions" part). The authors claim that the success of their method is connected with very particular artifacts in synthesized speech; however, this point is weakly supported since only four small cherry-picked correctly classified examples were provided (see the "Questions" part).
Technical Quality: 3
Clarity: 3
Questions for Authors: Questions:
- Is the confidence of your detector connected with the severity of the artifacts in the synthesized speech samples? Do the mel spectrograms of the most confidently correctly detected synthesized samples contain the most clearly visible artifacts?
- Which types of TTS models are easier to detect, and which are harder to detect by your method?
- You provided examples of correctly classified speech samples. What about incorrectly classified ones? Can you share an explanation or guess what are the peculiarities of these "complicated" samples that prevent them from being correctly classified by your method?
- Do you plan to upload your code to GitHub?
Suggestions for the current paper:
- I suggest adding the visual analysis of mel spectrograms of the **incorrectly** classified speech samples to the paper.
Suggestions for future research:
- Your current research makes a step forward toward more explainable synthesized speech detection. Another step in the same direction was made in Topological Data Analysis for Speech Processing by Tulchinskii et al. (2023). They have shown that synthetic and real speech samples can be separated using the barcodes of the attention maps of the HuBERT model. The sum of the bars in the barcode for real speech is bigger than that for synthesized speech, at least on some attention heads of HuBERT. It would be interesting to investigate in more detail how the topological characteristics (i.e., barcodes or other characteristics) of the speech embeddings are connected with the linguistic and paralinguistic properties of the speech.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors addressed the limitations adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time spent reviewing our manuscript and for finding our work innovative. A point-by-point response to your questions can be seen as follows.
- “Is the confidence of your detector connected with the severity of the artifacts in the synthesized speech samples? Do the mel spectrograms of the most confidently correctly detected synthesized samples contain the most clearly visible artifacts?”
We did notice that the low-quality samples (i.e., those with NISQA MOS < 2) were labeled as deepfakes with high confidence by SLIM. These samples were correctly classified with high confidence by the baseline models compared in Table 2. For the ones that did not contain a significant amount of noise, we did not observe significant correlation between the detector confidence scores and the severity of artifacts. We also did not see clearly separable patterns in the mel-spectrograms when comparing the most confident correctly detected samples with less confident ones. While mel-spectrograms are useful as a supplementary tool for studying samples, in our experience they don’t fully reveal all the deepfake artifacts that are important for a model to make a decision.
We would also like to point out that the actual source of the deepfakes remains an open question. This is one of the reasons why recent works have begun focusing on "interpretation-driven detection," e.g., formant analysis to report deepfakes. In our case, the “interpretation” is incorporated into our model design, where the distance between pairs of style-linguistics dependency features can be directly used to quantify the mismatch (Figure 2, Page 8). We also show that the dependency features are complementary to the features that focus on the deepfake artifacts. Such complementarity can be seen from Table 2 - SLIM variants (Page 7), where models using only SSL features perform better on ASVspoof2021, whereas the models using dependency features outperform on MLAAD. Fusion of the two resulted in the best performance.
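To make the quantification idea concrete, the distance between a style–linguistics dependency feature pair can be reduced to a scalar mismatch score. This is an illustrative sketch, not the paper's actual implementation; the function name and feature shapes are assumed:

```python
import numpy as np

def mismatch_score(style_dep: np.ndarray, ling_dep: np.ndarray) -> float:
    """Cosine distance between a pair of dependency features.

    A larger distance indicates a stronger style-linguistics mismatch,
    which the rebuttal describes as evidence that a sample may be a
    deepfake. Shapes and names here are hypothetical.
    """
    cos = np.dot(style_dep, ling_dep) / (
        np.linalg.norm(style_dep) * np.linalg.norm(ling_dep) + 1e-12)
    return 1.0 - float(cos)

v = np.ones(8)
w = np.zeros(8); w[:2] = [1.0, -1.0]   # orthogonal to v

print(mismatch_score(v, v))  # ~0.0 (identical features: no mismatch)
print(mismatch_score(v, w))  # ~1.0 (orthogonal features: strong mismatch)
```

A real system would feed such per-sample scores, alongside artifact-focused SSL features, into the downstream classifier.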
- “Which types of TTS models are easier to detect, and which are harder to detect by your method?”
As some details of the TTS models used in the employed datasets are not known, we performed an analysis on the recent ASVspoof5 dataset and presented a breakdown of model performance for different attacks and codecs (Table 1 and 2 in the rebuttal PDF). In general, degradation is seen when codecs with lower bit rates are applied. We found that systems with zero-shot capability are harder to detect than other methods (e.g., YourTTS).
- “You provided examples of correctly classified speech samples. What about incorrectly classified ones? Can you share an explanation or guess what are the peculiarities of these "complicated" samples that prevent them from being correctly classified by your method?”
We will add incorrectly classified samples to the paper. In general, we observed that severely degraded samples (e.g. audio too short/noisy) were commonly misclassified. This could be due to the design of our model, which by nature may require longer duration to capture the style-linguistics mismatch. The observation here also aligns with the difference seen between style/linguistics-only features and dependency features in Table 2, where the former performs better on ASVspoof2021 and the latter performs better on MLAAD.
- ”Do you plan to upload your code to GitHub?”
Since we are filing for IP, we do not currently plan to release the training code. However, we provided details in the paper to facilitate easy implementation of our model. The Appendix includes a detailed description of the model architecture (Appendix A.3), training hyperparameters (Appendix A.6), the list of pre-training datasets (Appendix A.2), and a PyTorch-style pseudocode of the training objective (Appendix A.4). | Summary: "SLIM: Style-Linguistics Mismatch Model for Generalized Audio Deepfake Detection" describes a motivation and systematic approach to disentangling different components of speaking characteristics, in order to perform audio deepfake detection. This paper demonstrates a working 2-stage training pipeline, numerous ablations, and metrics over several English-language-focused datasets. In addition, qualitative demonstrations of the space of features learned and used by the model form a study which backs the philosophical approach of the paper with regard to disentangling stylistic characteristics of real speakers, in order to have a generalized defense against audio deepfakes.
Strengths: The overall approach taken here is useful, the problem is relevant, and the overall experiments cover a portion of the necessary ground to match the claims.
Qualitative studies both in the main body of the paper, and the appendix are generally interesting and it is worth considering if these experiments can be fit into the unified paper "flow" for this publication, especially given final conclusion writing. Overall ablation study and description of methods are well done, and form a high quality "core" to the paper. Architecture figures and ablation tables are both outstanding in terms of clearly communicating approach, and variations in the results. Use of open source toolkits, and sharing of hyperparameters should help reproducibility.
Weaknesses: The portion of the title "... Generalized Audio Deepfake Detection" claims a robust and general method for "audio deepfake detection". However, there are a broad range of prosodic styles from people with impaired speech, new learners of languages, children developing their ability to communicate and so on.
For a paper about generalized deepfake detection, where such detection keys particularly on the prosody of at least one example, I would like to see a larger study and example of voices in-the-wild, as opposed to the current examples which seem to be reasonably fluent speakers and performers, who may have "trained" speech patterns by and large. This is despite the use of the "in-the-wild" dataset, which doesn't seem truly wild in terms of robustness testing. Defense approaches should have some example study and discussion of False Positives beyond pure metrics (though the metrics discussions here are well done) - particularly when out-of-domain speaking patterns may have heavy overlap with "deepfake" data and the features used for classification, demonstrated as part of the paper. The examples in Figure 4. along with accompanied writing are a start down this road but not sufficient.
Mozilla Common Voice has a decent amount of this type of truly-in-the-wild speech for some qualitative study, and there are existing papers which use the same dataset for few- and zero-shot TTS and voice conversion. The "in-the-wild" dataset here seems to largely focus on imitative TTS and voice conversion, and their "real" counterparts, which would generally point to celebrities, politicians and other public figures who (very likely) do not have the types of speech patterns mentioned previously. Though dysarthric speech is mentioned briefly in the limitations section, the issues which crop up from the study of dysarthric speech are also found, in more subtle ways, in many "typical" speakers, so directly addressing this with some examples would strengthen the core claim of the paper with regard to "generalized detection".
MLAAD is multilingual, but some details of the dataset construction lead to limitations in its testing (outside the scope of this paper, beyond the continued critique that broader datasets and synthetic generation methods are needed to test generalization). However, here only the EN subset appears to be used - which again reduces the claims from the title since it means the bulk of testing is on English locales. This is not a problem in terms of the experiments, but the writing and claims of the paper should be limited around this fact. Additionally, these are speech deepfakes not the broader category of "audio" per-se, so maybe some further adjustment is warranted, though other papers in this subarea tend to use "audio deepfake" to describe speech deepfakes.
As it stands, the examples shown do not convince me that the "attacks" used here are sufficiently high-quality to claim a generalized defense, though the developed method seems to perform well on the datasets used, and the overall scientific study (though limited) is well done.
Technical Quality: 3
Clarity: 3
Questions for Authors: What are the systems tested in Table 1? Either by name, or citation? What is the source of the speakers? If these have PII, a description of the speakers broad categorizations is sufficient. If pulled from an existing dataset, speaker ids would be good. As it stands this table is largely uninformative, without any material information beyond a general design motivation for follow-on work (since CCA shows some behavior differences between methods).
Given the importance of both sample rate and noise in audio, it would be very useful to test this approach under those forms of degradation - e.g. does the method scale down to data of narrow bandwidth, at low samplerate or under the presence of additive noise / background sound (such as music, crowd noise, applause, and so on). The prosodic example may hold under reasonable conditions, but how many detections are relying on prosodic features versus simpler acoustic artifacts? Figure 2. hints at this to some extent, but some explicit description and study would be useful.
Generally the data examples shown are extremely noisy, and the synthesis methods are not particularly high quality. Testing on both clean audio, and higher quality synthesis, as well as under controlled degradations could raise my score. After all it is plausible an attacker may use telephony as a transmission channel - especially if the degradations imposed by the channel give the attacker a further advantage.
As a general direction - it may be useful to directly answer some of the questions posed by the titles of the citations in this paper e.g. "Does audio deepfake detection generalize?" - the claim here being "yes", but demonstrations being limited to existing datasets rather than further tests with recently developed technologies / APIs and so on. "Does deepfake detection rely on artifacts?" - the claim here is also (somewhat) "yes", which hurts the counterclaim of being generalized to some degree, unless these artifacts are general across a broad swath of methods, which would be a surprising finding given existing demonstrations.
The primary concern in order to raise my score would be a more proper scoping of the generalization claims, and the domain claims around this method given limitations of the testing datasets. The conclusion also discusses a fair bit about qualitative analyses which are largely relegated to the appendix, so there is further mismatch between the chosen title and the final claim.
Larger and more diverse datasets (multi-lingual being one option, more unusual speaker styles would be another), or more particularly use of a variety of recent, high performing methods would raise my score if the writing is mostly unchanged. Some of these methods may only be available by API, which is unfortunate but perhaps necessary - additionally TortoiseTTS and spinoffs should have specific, stronger synthesis exemplars than those demonstrated, especially under the assumption an attacker may be doing manual selection given a corpus of intermediate generations to choose the best final result.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed some limitations of their work, however this review is partly hinging on the gap between claims, and the effective limitations and demonstrated results. More writing on the limitations, and particularly potential harms of deploying unbalanced "defense" methods in terms of accessibility would be beneficial.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time spent reviewing our work and for sharing your detailed comments which helped us to revise our work. A point-by-point response to the posted questions can be seen below:
- “What are the systems tested in Table 1?...”
As the samples referred to in Table 1 are part of the ASV2019 training data, the detailed PII is not available. We agree that detailed information of the generative models will be useful. Considering that each deepfake attack in ASV2019 has its own synthesis pipeline, and the relevant details are already summarized in the ASV2019 summary paper [1], we will add a reference to the appropriate section and guide readers to refer to the summary paper for details.
- “..., it would be very useful to test this approach under those forms of degradation.”
In response to the first part of the question on robustness to degraded speech, we performed an evaluation of SLIM on 12 different codecs within the more recent ASVspoof5 dataset (released in July 2024, two months after our initial submission to NeurIPS). The results are summarized in Table 1 in the global rebuttal PDF. Note that ASVspoof5 includes the Opus codec, which is used in telephony systems. In addition, we point out that ASVspoof 2021, one of our test datasets, includes different types of lossy codecs with varied bitrates, typically used for media storage.
Regarding the second half of the question on prosodic features vs simpler acoustic artifacts, we agree that separating the two categories of speech samples would definitely help in identifying the true source of deepfake. However, when we performed listening tests and spectrogram visualizations on the deepfake samples, there were many cases where a sample could manifest a combination of artifacts and style-linguistics mismatch pattern. It was therefore challenging to divide samples into two distinct categories and perform testing separately. To gain some insight, we performed an ablation on SLIM (Table 2 - SLIM variants), where we experimented using only the SSL features (rows 1-3 under `SLIM variants’; corresponding to the artifact cases), only the dependency features (row 4; corresponding to the mismatch cases), and the combination of the two (row 5; corresponding to leveraging both sources). The difference in performance can be used to gauge the question of the actual source of the deepfake samples. Our results show that the mismatch cases could be a smaller portion in ASV2021, since the dependency-alone performance was much worse than using SSL features. For the In-the-wild and MLAAD datasets, the performance of dependency features is on par with, if not better than, SSL features, demonstrating that these two datasets have more mismatch cases.
- “Testing on both clean audio, and higher quality synthesis, as well as under controlled degradations could raise my score.”
We evaluated our model on the ASVspoof5 dataset released in July 2024, where the most recent generative models were used together with different codec degradations. We provide a breakdown of the model performance with regard to different types of unseen attacks, as well as unseen codecs. These results can be found in Table 1 and Table 2 in the rebuttal PDF.
- “The primary concern in order to raise my score would be a more proper scoping of the generalization claims,...”
We agree that our current system only operates on English data and cannot yet handle all prosodic styles (as briefly discussed in Limitations). In the introduction (Page 1, line 30), we specified that the current SOTA methods lack generalizability to unseen attacks, which we aim to tackle in this study. Following your suggestion, we will revise the manuscript in various parts, including the abstract, introduction, and limitations, to scope-limit our use of the term generalization to “unseen attacks”.
Audio deepfake detection (ADD) in the ADD community indeed most commonly refers to speech deepfakes; we will clarify this in our paper. Regarding multilinguality, our current approach leverages pretrained embeddings that were trained on tasks using English data (e.g., emotion recognition), so we are currently limited by the availability of language-specific high-quality pre-trained embeddings.
Regarding evaluation on prosody styles, we point out that our test sets cover a decent variety of speakers (e.g., 58 celebrities in In-the-wild, and 100+ speakers in total across all test datasets). Performance across diverse test sets indicates our model’s ability to do well on a variety of speaking styles. However, we acknowledge that an extensive evaluation on special prosodic styles, such as pathological speech, children's speech, or new learners of a language, has not been included in our study due to small sample size, limited variability of speech content, and noise issues (e.g., [2], [3]). We will acknowledge this limitation in the paper.
- “Larger and more diverse datasets (multi-lingual being one option, more unusual speaker styles would be another), or more particularly use of a variety of recent, high performing methods would raise my score if the writing is mostly unchanged.”
We performed an evaluation of SLIM on the ASVspoof5 data, where some of the recent mainstream platforms (e.g., TortoiseTTS) are used for generating the deepfake data. The results are reported in Table 1 and Table 2 in the rebuttal PDF.
References:
[1] X. Wang et al., "ASVspoof 2019: A large-scale public database of synthesized, converted and replayed speech," Computer Speech & Language, vol. 64, 2020.
[2] H. Coppock et al., "COVID-19 detection from audio: seven grains of salt," The Lancet Digital Health, vol. 3, no. 9, 2021, pp. e537-e538.
[3] G. Schu et al., "On using the UA-Speech and TORGO databases to validate automatic dysarthric speech classification approaches," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023.
---
Rebuttal Comment 1.1:
Title: Reply to Authors
Comment: The additional analysis, study under various degradations, clarifications around data, and particularly addition of ASVspoof5 data cover a reasonable portion of my prior concerns. Combined with writing which more accurately scopes the claims made, this has strengthened the impact of the work and should be of interest to a larger audience, now and into the future. | Summary: This paper proposes a new method for audio deepfake detection by first employing self-supervised pre-training on real samples only and then used to do real/fake classification. The proposed method achieves SOTA performance in both within-domain and cross-domain scenarios.
Strengths: 1. The proposed technique is sound and reasonable. Learning the correlation between style and linguistics for detecting deepfakes makes sense to me.
2. The comprehensive ablation studies in Table 2 further verify the effectiveness of the proposed method.
3. The analysis (Interpretation of model decisions) and visualization (Mel-spectrograms) are reasonable.
Weaknesses: 1. The idea of capturing the mismatch between style and linguistics is promising, but it's unclear how this mismatch correlates with deepfake samples. More intuitive illustrations and examples are needed to better validate this motivation.
2. In Figure 2 (training framework for ADD), the source of supervision for style and linguistics is not apparent. How do you ensure that each encoder learns the corresponding features? Additionally, how do you achieve perfect disentanglement between the style and linguistics encoders?
3. It appears that the latest dataset used is ASVspoof2021, which is quite old. Why not incorporate more recent and advanced deepfake datasets for evaluation?
Technical Quality: 3
Clarity: 4
Questions for Authors: See the Weaknesses part.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: I did not see any obvious limitations for this work. It is a fairly good paper but not very impressive to me.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time reviewing our manuscript and for acknowledging our contribution to the field. We have provided a point-by-point response as follows:
- “The idea of capturing the mismatch between style and linguistics is promising, but it's unclear how this mismatch correlates with deepfake samples.”
As the idea of style-linguistics mismatch has not been systematically investigated before, we first referred to existing linguistic studies that show concrete examples of how these two aspects are correlated in real speech (Section 2.2, Page 3, lines 111-113), e.g., the impact of emotional status on word choices. Given that mainstream TTS and VC methods model these two aspects independently, such subtle correlation in real speech may be missing in deepfakes. For example, VC systems swap the original voice identity with a new one, without considering if the new voice would match in style with the word choices. To verify that the hypothesized mismatch does exist in deepfakes, we then provided preliminary results of a CCA analysis (Section 3.1, Pages 3-4, lines 117-142; Table 1) that show a significantly higher correlation value between the two aspects in real speech and a lower correlation in deepfake speech, which also aligns with the distance between the dependency features learned by SLIM (Page 8, Figure 2). Although it is challenging to exhaustively list all mismatch cases, we provided a spectrogram illustration in Figure 4 (top right), which demonstrates a deepfake sample identified by SLIM that shows an abnormal rhythm of pauses when uttering certain words. We agree that more examples could benefit the understanding of the mismatch, which requires a more systematic and detailed investigation. We plan to pursue this in future analysis.
- ”In Figure 2 (training framework for ADD), the source of supervision for style and linguistics is not apparent. How do you ensure that each encoder learns the corresponding features? Additionally, how do you achieve perfect disentanglement between the style and linguistics encoders?”
We acknowledge that a perfect disentanglement of the two aspects is a challenging task. However, based on existing works on how information propagates through self-supervised learning (SSL) model layers (references listed in Section 2.2, Page 3, lines 106-109), it is possible to obtain two representations, each of which has maximal information about one of the aspects while retaining minimal information about the other. In our work, we tried to limit the entanglement by choosing and freezing pretrained embeddings fine-tuned for tasks that are likely independent of each other, i.e., ASR for linguistics and SER for style. To ensure a satisfactory disentanglement of our adopted representations, we performed a correlation analysis in Appendix 1, Figure 5, where the average correlation value of the two representations is close to 0. These results help to ensure that the two input representations are maximally (if not perfectly) disentangled. Due to the page limit, we were not able to integrate these analyses into the main text.
- “It appears that the latest dataset used is ASVspoof2021, which is quite old. Why not incorporate more recent and advanced deepfake datasets for evaluation?”
We employed four test datasets, out of which both the In-the-wild and MLAAD are newer datasets than ASVspoof2021. The MLAAD was the latest one at the time of writing, of which the latest version was released in April 2024. While the ASVspoof2021 is not the latest dataset, it does have the advantage of covering a variety of attacks, which is summarized in Appendix 2, Table 3 (Page 16). The ASVspoof 2021 dataset also includes different types of lossy codecs with varied bitrates, typically used for media storage, facilitating the evaluation of model robustness to codecs. | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful comments and suggestions; we appreciate all reviewers’ positive feedback on our fundamental approach motivated by the style-linguistics mismatch modeling for deepfake speech detection, our experiments, and our overall paper presentation.
The reviewers’ common concerns are mainly on the scope limit of our model/claims and the extension to more diverse and multilingual data. We clarify that our current scope is limited to speech deepfake detection in English. The proposed model performs well on a variety of unseen deepfake attacks and varied types of signal degradation. We agree that it is important to extend our study to multiple languages as well as to more diverse speaking styles. Both these directions are currently on our research roadmap.
There were similar questions related to robustness to compression codecs (Reviewer 4aqZ), detailed analysis on performance obtained under different / more recent generative models (Reviewer chQg and Reviewer 4aqZ), and how the model performs on more challenging datasets / datasets with more genres (Reviewer 4aqZ and Reviewer 1UM9). At the time of our initial submission, MLAAD was the most recent open-source dataset (version 3 released in April 2024), which we employed as one of the evaluated datasets. However, it did not incorporate different codecs. While ASVspoof2021 indeed includes codecs, the types of generative models were not state-of-the-art. Following the questions brought up by the reviewers, we performed an extra round of evaluation of SLIM on the ASVspoof5 dataset (released in July 2024, after our initial submission to NeurIPS) which has 10+ types of codecs and more recent TTS and VC systems. We report a breakdown of the evaluation results in the attached PDF file. Restricted by the constraints on training data in the ASVspoof5 challenge, we were not able to use the same Wav2vec-XLSR as backbones, and substituted them with WavLM-Base backbones. Apart from a few resulting changes in the training hyperparameters, the overall training strategy remains very much the same. To respect the anonymity rule, we confirm that the results in the PDF file were analyzed and created only for the rebuttal phase and they do not overlap with any public information.
The following results can be found in the attached PDF file:
- Breakdown of SLIM’s performance under clean and 12 different codec conditions
- Breakdown of SLIM’s performance for 16 different unseen attacks
In general, we see that the proposed model generalizes well across different codecs and attack types in ASVspoof5.
Pdf: /pdf/54be0e509415883e7c96e15ee7e66598129532c5.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
ReFT: Representation Finetuning for Language Models | Accept (spotlight) | Summary: The paper introduces Representation Finetuning (ReFT) - a family of methods to learn interventions directly on model representations, rather than model weights. The authors compare ReFT to Parameter Efficient Finetuning (PEFT), and find that it yields similar performance while being significantly more parameter-efficient.
Strengths: - The paper is well-written.
- The main ideas in the paper are clear and easy to understand.
- Novel and impactful contribution.
- The paper presents a new paradigm of learning modifications to representations, rather than learning modifications to weights. This new paradigm is more parameter-efficient, while appearing to have a similar level of expressivity. PEFT methods have been incredibly impactful, and I could see ReFT being similarly impactful.
Weaknesses: - Lack of representation-editing-based baselines
- The paper focuses on comparing ReFT to PEFT methods. Appendix B discusses existing representation-editing methods, and casts them in terms of the ReFT framework.
- I think it would strengthen the paper to compare ReFT and DiReFT to these existing representation-editing methods. I am curious to understand the difference in performance of an intervention learned by gradient descent (e.g. ReFT) vs an intervention learned by contrastive pairs (e.g. activation addition, RepE).
- Some unclear presentation
- Line 102
- What does "the hidden representation created at row $i$ and column $k$" refer to? As far as I can tell, this does not integrate with the previous notation defined in the second paragraph of Section 3. I assume this refers to layers and positions - if this is the case, then it would be clearer to say so explicitly.
- Inconsistent variable names
- I think the presentation would be clearer if variable names were used consistently across sections.
- Section 3.1 uses $\mathbf{b}$ to represent a hidden state, whereas Section 3.2 uses $\mathbf{b}$ to refer to a bias vector. I think this can be easily fixed by using $\mathbf{h}$ to represent the hidden state in Section 3.1.
- Section 3 uses $m$ to represent the number of layers, whereas Section 3.2 uses $m$ to refer to the length of the output sequence $\mathbf{y}$.
- Typo in Table 17 (?)
- LLaMA-7B/DiReFT/AQuA: 221.3.
- It might be worth double checking your numbers more generally if tables are not generated by code.
- Other suggestions
- Section 3.2
- The paper could benefit from a motivation of the LoReFT expression, and how it was selected over the other expressions mentioned in Appendix E.
- Define dropout more explicitly
- What does dropout refer to in this case? My assumption is that with probability $p$, the intervention is not performed. But in the cases where the intervention is not performed ("dropped out"), what is being optimized?
- Include baseline generations in Appendix I
- Only LoReFT outputs are given in Appendix I - it's hard to interpret these generations without having baselines to compare them to. For example, even examining the difference between the baseline (no intervention) and LoReFT would be helpful, particularly for the long-form generations.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Practical recommendations for using ReFT:
- Appendix E mentions that there is not one clear best expression of ReFT. Which variation would the authors recommend individuals use, and why?
- Is there a recommended methodology for determining hyperparameters in practice?
- In what scenarios should one use PEFT vs ReFT vs other inference-time interventions?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: - The authors address the following limitations:
- The classes of models evaluated is limited.
- Hyperparameter selection seems fairly complicated, and automating this selection will be valuable for future adoption of the methodology.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks so much for raising these great questions and providing helpful feedback!
### RepE baseline.
We agree that gradient-free methods such as activation addition or RepE could be effective in steering models for tasks such as style transfer [4]. On the other hand, **we argue that it could be hard for these methods to steer models to achieve general finetuning goals** (e.g., steer a model to become a binary sentiment classifier or to answer multiple-choice questions). These methods usually rely on statistical techniques (e.g., PCA) to propose a set of static steering vectors based on a few-shot training dataset (i.e., usually a handful of training examples). Mathematically, **these methods have the same expressivity as BitFit** [5] (only learning the bias term of LMs), yet BitFit is learned with SGD on large datasets. Meanwhile, BitFit, despite its much lower trainable parameter count (comparable to ReFT), usually underperforms other PEFTs (see Table 4 on pg. 9). However, we agree that learning-free methods such as RepE have other unique advantages:
- they could be very effective when the steering objective is more generic (e.g., style transfer for a chat model) without heavy shifts in the model’s behavior;
- they could be very useful for LM personalization applications with very limited training resources that do not require high precision.
### Need to include the baseline generations.
Thanks for raising this issue. We agree. For all examples in Appendix I, we will add the original model outputs. **We include one example from GSM8K in our attached rebuttal pdf**.
### The best expression of ReFT.
Indeed, there is no clear winner for math reasoning benchmarks. Here are some additional pointers in terms of choosing different intervention functions:
- **DiReFT removes the orthogonality constraint which improves the memory footprint** and trades compute efficiency (i.e., DiReFT trains faster) for a slight drop in performance.
- **LoReFT generally converges quicker** and is less sensitive to learning rate and other hyperparameters partly due to its orthogonality constraint.
- **The orthogonality constraint offers composability and interpretability**. Given the constraint, subspaces are orthogonal to each other (i.e., changing one subspace should not affect the others). As a result, we think this gives us a nice property for composing LoReFT interventions together at inference time. We explore this a bit in Appendix G.1.
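To make the subspace edit concrete, here is a minimal numerical sketch (our illustration, not the paper's codebase): the intervention overwrites the coordinates of a hidden state inside a low-rank subspace with a learned target value, leaving orthogonal directions untouched. The function name `loreft` and all shapes are hypothetical stand-ins for the learned parameters.

```python
import numpy as np

def loreft(h, R, W, b):
    # Edit h only inside the r-dimensional subspace spanned by the
    # orthonormal rows of R: replace the subspace coordinates Rh
    # with the learned target value Wh + b.
    return h + R.T @ (W @ h + b - R @ h)

# Toy sizes (hypothetical): hidden dim d = 8, subspace rank r = 2.
rng = np.random.default_rng(0)
d, r = 8, 2
Q, _ = np.linalg.qr(rng.normal(size=(d, r)))  # (d, r), orthonormal columns
R = Q.T                                       # (r, d), orthonormal rows
W = rng.normal(size=(r, d))
b = rng.normal(size=r)
h = rng.normal(size=d)

h_new = loreft(h, R, W, b)
# Inside the subspace, the edited state equals the target Wh + b.
assert np.allclose(R @ h_new, W @ h + b)
# Directions orthogonal to R's rows are untouched, which is what makes
# separately trained orthogonal subspaces composable.
v = np.eye(d)[0] - Q @ (Q.T @ np.eye(d)[0])   # a direction outside the subspace
assert np.allclose(v @ h_new, v @ h)
```

Dropping the orthogonality constraint (as in DiReFT) gives up these invariants in exchange for a cheaper update.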
### Hyperparameter selection with ReFT.
**We address this question in our general responses** by providing additional details on the hyperparameter searching process of ReFT!
### PEFT vs ReFT vs other inference-time interventions.
Thanks for the question! It would be best for users to benchmark these methods for a specific domain. If we were allowed to guess, here are some pointers:
- Inference-time intervention or activation addition does not require training. If the use case is not mission-critical, one could use these methods for quick turnarounds and showcases.
- In general, since ReFT allows gradient descent, it should be more effective than non-training methods, because it actively searches for causal pathways to steer the model. ReFT also works for quick adaptation (e.g., n-shot training where n <= 10), as shown in G.2.
- The composability of ReFT could be better than that of PEFT and ITI (e.g., combining a set of directional steering operations, such as changing the tone, language, and length of the generated text). We include a preliminary exploration of composability in Appendix G.2.
Again, we feel like this is an open-ended question. And introducing ReFT definitely pushes the community to think about the differences among these methods.
### The classes of models evaluated is limited.
**We addressed this question in our general responses** by applying ReFT to other models and tasks!
### Unclear presentation.
Thanks for all these suggestions! We will address the following items in camera-ready.
> Line 102: What does "the hidden representation created at row and column" refer to?
Sorry about the confusion here. Row and column map to the layer and position of the intervening residual stream. We will remove these two redundant notations $i$ and $k$, and rewrite the current sentence as: *“Let b be the hidden representation created at a specific residual stream, located at a specific position and layer, …”*
> Variable names were used across sections and typos.
We will make the following changes:
- We will revisit our notations in Sec. 3 and keep a consistent format in our next revision. Please feel free to raise additional suggestions, and we will incorporate them.
- We will replace our notation for $b$ in Eqn. 1 and related parts with $h_b$ to represent the base representation.
- We will replace $m$ in Sec. 3.2 with $k$ to avoid overloading our symbols.
- We also noticed this typo ("LLaMA-7B/DiReFT/AQuA: 221.3") after we submitted our draft. The correct entry should be 21.3. We will update the number and check the existing ones in our next revision. Our result tables (i.e., not the hyperparameter tables) are semi-automatically generated, but human error is still possible.
> Other suggestions on writings.
We will make the following changes:
- We will better motivate how we arrived at the current LoReFT formula. We did cut some text on this from an earlier draft due to length concerns. We will motivate LoReFT from the interpretability literature by bridging Sec 3.1 and Sec 3.2 together better in the next revision.
- We will clarify the use of dropout in Eqn. (2) and (3).
[4] Zou et. al., 2023, “Representation Engineering: A Top-Down Approach to AI Transparency”
[5] Zaken et. al., 2022, “BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models”
---
Rebuttal Comment 1.1:
Comment: I've read the author rebuttal and the general response. I thank the authors for their diligent engagement. I think the proposed writing edits will improve the manuscript. I elect to maintain my overall score. | Summary: This work proposes a novel method for fine-tuning language models (LM) called Representation Fine-tuning (ReFT), which updates only a small number of parameters. Unlike existing parameter-efficient fine-tuning methods such as LoRA, ReFT enables fine-tuning with minimal parameter updates by learning small interventions in the representation of specific layers and token positions. The authors introduce Low-rank Linear Subspace ReFT as a method within ReFT, demonstrating experimentally that it can fine-tune LMs effectively with fewer parameter updates compared to existing parameter-efficient fine-tuning methods.
Strengths: - **Originality**: While representation engineering has been utilized in various works, applying it to parameter-efficient fine-tuning is unprecedented. The motivation from interpretability works is also interesting. From the perspective of originality, this work is commendable.
- **Quality**: The proposed method’s effectiveness is empirically validated on multiple LMs based on Llama and RoBERTa across approximately 20 datasets and four tasks, proving its practical applicability. Additionally, the authors provide extensive experimental results with various hyperparameters in the appendix, offering sufficient reference data for future users and making it easy to identify scenarios requiring caution. Thus, this paper can be considered a complete piece of work.
- **Clarity**: The paper is written very clearly, including appropriate figures to facilitate easy implementation by the reader.
- **Significance**: This paper holds significant value as it suggests a more effective way to fine-tune LMs through interventions in representation rather than model weights, unlike existing PEFT methods. It can serve as a drop-in replacement for the widely used LoRA, potentially having a substantial impact on future LM fine-tuning research.
Weaknesses: - **Quality**: Although the extensive experimental results in numerous settings sufficiently demonstrate the method's significance compared to existing methods, including results from models other than Llama, such as Mistral or Phi, would emphasize the method's applicability in various scenarios.
- **Significance**: The need for extensive hyperparameter optimization to decide which layer and position of hidden representation to apply the intervention function is a potential weakness. This issue is well explained and mentioned in the Limitations section.
Technical Quality: 4
Clarity: 4
Questions for Authors: ### Questions
Practically speaking, when do the authors believe this method should be attempted instead of LoRA? In other words, in what situations is LoReFT most appropriate? Based on the experimental results, it appears that LoReFT might be unsuitable for achieving high performance on inference tasks such as GSM8K.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors adequately addressed the limitations in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for assessing our paper to be a significant contribution, and for your question!
### ReFT with LMs other than LLaMAs.
**We addressed this question in our general responses** by exploring other model types such as Mistral and Phi! As shown by our initial results, both LoReFT and DiReFT work for other types of LMs.
### Hyperparameter selection process of ReFT.
**We took up this suggestion in our general responses** by providing more details on the current hyperparameter searching process.
### Practically, when to choose ReFT over LoRA?
Thanks for raising the question! We wish to include more of these insights in the main text with the additional space allowed in the next revision. For now, here is some practical guidance on using ReFT:
- **LoReFT works better when the base LM is strong**: in our experiments, we usually find ReFT scales with the quality of the base LM for harder generation tasks. For instance, the gap between ReFT and LoRA is much smaller when applying ReFT to larger LLaMA models for our math reasoning benchmarks as shown in Table 2.
- **LoReFT (or ReFT) is composable by nature**: LoReFT localizes task-dependent interventions into orthogonal subspaces: you can partition the subspaces of a single LoReFT intervention across different tasks. Specifically, you can train different subspaces for different tasks, and compose them together to learn a combined skill. We showed some initial results in Appendix G.2. Although LoRA weights can be squished together, ReFT is much more interpretable. Additionally, the number of intervenable representations in LLMs is abundant. As a result, it becomes much more feasible to overload and stack interventions together in a zero-shot fashion.
- **Practically, ReFT could be a better solution for a multi-tenant finetuned-model serving service**: imagine a case where we are serving thousands of finetuned models: for a batch of user queries, we want to call different finetuned models. We cannot serve thousands of SFT LMs without enormous costs. One alternative is to have a single base LM with thousands of LoRA weights cached in memory. For this approach, you have to hot-swap between LoRA weights in memory. For ReFT, it potentially becomes much easier, since ReFT only needs to intervene on the prompt representations once for the batch, and pass the intervened prompt KV cache to the inference engine without inference-time overhead. We only realized this after we submitted the paper, and we hope to discuss this further in our next revision.
### To improve ReFT's performance on tasks such as GSM8K.
**We addressed this in our general responses** by improving LoReFT’s performance on math with additional interventions on decoding steps!
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their response. I would appreciate it if the details provided in the rebuttal could be included in the final version of the paper. I will maintain my score as I have no further concerns about this work. | Summary: The authors propose an alternative PEFT method based on representation intervention techniques that are used in interpretability research. They evaluate their method in a variety of settings including multiple architectures and finetuning dataset families.
Strengths: * The method presented by the paper uses an order of magnitude fewer parameters than comparable baselines (e.g., LoRA, DoRA) while still maintaining comparable accuracies after finetuning.
* The method achieves consistently stronger performance than the baselines on commonsense reasoning tasks
* Evaluation was thorough: the authors evaluated multiple architectures across scale and multiple dataset families.
Weaknesses: LoRA and other baselines can adjust the number of tunable parameters via the rank parameter. The authors should evaluate specifically how their method compares to the baselines when there are a comparable number of tunable parameters, e.g, by lowering the LoRA rank. The original LoRA paper suggests that performance can sometimes increase (and often, at least, not decrease) when lowering the rank.
Given that the ReFT intervention occurs on a fixed set of positions, the authors should evaluate if ReFT is effective even in long-context settings.
While there are performance improvements on the commonsense reasoning tasks, performance decreases on other tasks (sometimes fairly substantially, e.g., in Table 2), which limits the applicability of this method. While the authors evaluate multiple settings, it would be interesting to present results on a wider class of evaluations and finetuning datasets to establish where ReFT would be preferable to other methods. (Note: While this would be interesting and would make for a stronger paper, I do not believe it is necessary for a solid paper.)
Technical Quality: 4
Clarity: 4
Questions for Authors: What is the memory footprint of ReFT in comparison with other methods?
Since the intervention occurs on a fixed number of prefix tokens, what happens when the prompt prefix is shorter than the number of intervened positions?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors adequately address the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive feedback and questions!
### LoRA with fewer parameters has been tried.
We want to clarify that **the baseline numbers we have in our tables are the best performance after hyperparameter tuning** done by the original LLM-Adaptor [3] paper. For instance, this adaptor benchmarking paper searches for **the best adaptor location** (out of {`Attention` only, `MLP` only, `both`}) as well as the best rank (out of {`4`, `8`, `16`, `32`}) for LoRA. By applying LoRA only to the `Attention` module with rank=`4`, LoRA can reach a much lower parameter count while sacrificing performance (e.g., see Figure 3 on pg. 5 of their paper). We will clarify this in our next revision.
### ReFT with long-context tasks.
Thanks for the suggestions! **We took up this suggestion in our general responses** by applying ReFT to a long-context summarization task.
### ReFT with LMs other than LLaMAs.
**We addressed this in our general responses** by applying ReFT to two other LM types: Mistral and Phi-3. We additionally tried to close the gap in the math benchmark by enhancing LoReFT.
### Memory footprint of ReFT.
If we understand the question correctly, **the memory footprint should be largely bounded by the number of training parameters**. Thus, one can use the Params (%) column in our result tables (e.g., Table 1 on pg. 6) to rank the memory footprint for various methods. One thing to note is that LoReFT’s orthogonal constraint does require more memory due to the orthogonalization process compared with DiReFT. However, this might not be the dominating factor.
Moreover, compared against LoRA, **ReFT does require some inference-time overhead** since ReFT is an intervention-based method which cannot merge its learned weights into the original model. Nevertheless, since we constrain ourselves to intervene on the prompt tokens, the overhead is limited (fractional inference time increase is less than 1% in various settings). We provided a detailed analysis in Appendix H on pg. 38, and we hope to highlight this in our next version when more space is allowed in the main text.
### Prompt prefix is shorter than the number of intervention positions.
Thanks for bringing this up, and **this is a great technical question**! We introduced the concept of intervention position padding in our ReFT Python library. In short, it will pad the prompt with a single padding token, and we will perform dummy interventions on this token if needed. The attention mask and loss calculation will bypass this token to make sure other tokens are not affected.
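A rough sketch of how such padding could work (our illustration of the behavior described above, not the library's actual API; all names are hypothetical):

```python
def pad_intervention_positions(prompt_ids, positions, pad_id):
    """If any requested intervention position falls beyond the prompt,
    append a single pad token, redirect those positions to it, and mask
    it out so attention and the loss ignore it."""
    ids = list(prompt_ids)
    mask = [1] * len(ids)
    if any(p >= len(ids) for p in positions):
        ids.append(pad_id)  # one dummy token receiving the dummy interventions
        mask.append(0)      # excluded from attention and loss calculation
    last = len(ids) - 1
    positions = [p if p < len(prompt_ids) else last for p in positions]
    return ids, mask, positions

# A 3-token prompt, but interventions requested at 4 positions:
ids, mask, pos = pad_intervention_positions([11, 12, 13], [0, 1, 2, 3], pad_id=0)
assert ids == [11, 12, 13, 0] and mask == [1, 1, 1, 0] and pos == [0, 1, 2, 3]
```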
[3] Hu et. al., 2023, "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
---
Rebuttal Comment 1.1:
Comment: Thanks for the response and additional experiments.
> LoRA with fewer parameters has been tried.
Thanks for pointing this out, although I still believe there are some issues with the tuning setup. I think this brings up another issue that I missed in the review: you are using the evaluation from a different paper. How can you be sure that your evaluation setup exactly matches their setup? Further, from their paper, it seems they tune the LoRA rank parameter on math reasoning datasets, which, as your results suggest, have potentially different behavior than other datasets. Also, your method requires tuning the specific layers to which the adapter is applied—what happens if you do this with LoRA? It may be the case that applying LoRA to the same set of layers as you use for ReFT leads to similar performance and a substantial drop in the number of adapter parameters.
In my view, the primary positive aspect of your method is that it requires fewer parameters than LoRA, but generally performs similarly (better on some datasets, worse on others). This is why being extremely careful in your comparison to the number of parameters that LoRA requires is so important.
For the other responses—thank you, I appreciate the clarification.
---
Rebuttal 2:
Title: Response from the authors.
Comment: Thanks for raising these methodological points!
Throughout the paper, we report published numbers for other approaches rather than running our own evaluations of those approaches. Our assumption is that the other authors are the most expert in how to get their approaches to work best, and so this provides us with the most ambitious comparison points. We have carefully verified that we are using the same protocols for evaluation and use of the training data.
For the hyperparameter searching comparisons, our method for choosing hyperparameters seems stricter than the norm. In particular, we use GSM8K (also in math reasoning domain) to select our hyperparameters and apply these to both the Math and Commonsense Reasoning benchmarks, to avoid any implicit model selection based on test runs. The other authors use the Math10K train/test split to do model selection.
For the Math and Commonsense Reasoning benchmarks, both LoRA and ReFT are applied to all layers. For each layer, ReFT intervenes on the residual stream (which is weightless and therefore cannot be targeted by LoRA), while LoRA is applied to multiple weight matrices such as the Q/K/V projections. Although the LLM-Adaptor paper did not attempt to tune ranks on the Commonsense Reasoning benchmark, the DoRA paper (a newer variant of LoRA) [6] tried to halve the rank of DoRA (i.e., reducing the parameter count by 50%) and found that performance consistently dropped across all LLaMA models, as we reported in the paper (see DoRA (half) in our Table 1).
We report ReFT evaluations with much smaller parameter counts than the other methods, which would seem to put us at a disadvantage rather than an advantage. We could double-check this by increasing the ReFT parameter count to match the LoRA numbers. We would worry about lowering the LoRA counts and running our own experiments, for the reason we noted above (LoRA advocates might argue for different settings than we would choose).
[6] Liu et. al., 2024, “DoRA: Weight-Decomposed Low-Rank Adaptation”
**Additional clarification:** regarding the evaluation setup, we would like to clarify that we *directly copied* the publicly available codebase for the LLM-Adaptor paper to ensure a fair comparison (e.g., same datasets, evaluation metrics, decoding strategies, etc..).
---
Rebuttal 3:
Comment: Thanks for the response!
> Further, from their paper, it seems they tune the LoRA rank parameter on math reasoning datasets, which, as your results suggest, have potentially different behavior than other datasets
To clarify what I mean by this: you find that your methods reasonably underperform e.g., LoRA, on arithmetic reasoning tasks (Table 2), which, as an example, could arise because LoRA has more trainable parameters. If this is true, then when tuning on an arithmetic reasoning task (GSM8K, Math10k, or otherwise), your hyperparameter selection might favor higher LoRA ranks because it adds additional trainable parameters. The most striking claim of your paper is that your method outperforms LoRA specifically on commonsense reasoning tasks with an order of magnitude fewer parameters. It's possible, then, that one could tune LoRA specifically to perform well on these commonsense reasoning tasks by using the same tricks that you use to tune your method (e.g., applying it to specific layers).
I also did read your global response and see that with some additional tweaking you match LoRA on math reasoning datasets, which is interesting—what I'm arguing is simply that it's important to put the same effort into tuning LoRA as into your method. For example, as I proposed above, adding a hyperparameter for LoRA (analogous to the one for your method) to specify which layers it is applied to.
With that said I think you have a good paper and I am voting for acceptance, the reason my score isn't higher is because (1) I am not fully convinced that tuning LoRA (e.g., by applying it to specific layers and tuning the rank a bit) could not perform similarly to your method, and (2) because this method might not serve as a good drop-in replacement for LoRA because it underperforms on important datasets (arithmetic reasoning) without extra manual effort. | Summary: This paper proposes representation finetuning for efficient tuning or intervening for task-specific representations in models while keeping the base model frozen. They define LoReFT and unify several current representation intervention methods under their framework. They conduct extensive experiments on several types of NLP benchmarks and models to demonstrate the efficiency and better understand the effectiveness of ReFT.
Strengths: 1. The paper proposes a new and potentially useful paradigm for efficient fine-tuning of model representations for specific tasks. Their proposed LoReFT inherits the merits of previous representation intervention methods and operates on a low-rank subspace to control the representations. They also put LoReFT under a bigger framework of representation intervention and discuss its relationship with previous methods. They provide good insight into the development process of current representation-level interpretability work.
2. Their experiments are extensive and solid. They successfully demonstrate the efficiency of ReFT, as well as its effectiveness to some extent. I admire that the authors are willing to show the performance limitations of their method on some types of tasks to faithfully argue its benefits.
3. They have an open-sourced package for reproducing the whole pipeline. They also have detailed documentation of their hyperparameter tuning process. I think this is especially important for new methods like ReFT, which I imagine would require some deep understanding to tune the hyperparameters.
4. They have some interesting intervention examples in the Appendix. They also provide very good ablation of LoReFT design in the Appendix.
Weaknesses: 1. I feel the hyperparameter tuning is still pretty heavy, although the authors try to show that they only need to finetune the model on one task of a specific type and use that set of hyperparameters for other tasks of the same type. I expect to see more results on how robust the hyperparameters are across models in the wild. For example, if we find a set of hyperparameters for Llama 7B, will that generalize to Llama2 7B or models of the same size? I also want to see how much variance there is in the best set of hyperparameters across different models and tasks.
2. I also expect to see how ReFT would fit into the current pipeline of the SFT + RLHF alignment paradigm. Some results, for example on the instruction tuning datasets, lag behind the current SOTA by too far. I'm not saying the method should achieve SOTA, but I would expect to see its potential to push new boundaries in the current context of LLM fine-tuning.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. What do the prefix and suffix refer here for classification tasks and generation tasks in your hyperparameter tuning descriptions?
2. You might not have experiments on this, but just out of curiosity: can ReFT be used to optimize the objective in preference-based learning, like the Bradley-Terry model in DPO, and how would it perform?
3. For the ultrafeedback fine-tuning, do you select the best responses to do MLE? This dataset is usually used for preference learning.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for appreciating our work and raising interesting questions!
### Generalization of hyperparameters.
Yes, we try to challenge the generalizability of ReFT by testing whether a set of hyperparameters for one task transfers to another, as we do hyperparameter search on separate dataset splits. Moreover, we indeed **tried the exact setting you are proposing here** for our instruction-tuning benchmarks in Sec. 4.4. We selected the hyperparameters on LLaMA-1 7B and tested those settings with Llama-2 7B without additional hill-climbing (see L248 where we mention this). Although this is not the full picture, our findings do show that selected hyperparameters transfer well across models. We will highlight this in our next revision.
### ReFT with the SFT + RLHF alignment paradigm.
We want to clarify that our focus is not to establish a new SoTA on model alignment compared to SFT+RLHF or DPO. Rather, we seek to offer comprehensive comparisons with other PEFTs.
While ReFT saves parameters and maintains better performance compared with LoRA in our instruction-tuning experiments, **our results do suggest other important applications of ReFT** that we wish to discuss in depth if space allows:
- **Much quicker iteration on the alignment pipeline.** For instance, if we want to evaluate the alignment dataset quality of Alpaca-15K and Ultrafeedback, we could use ReFT to finetune a model and compare the performance instead of doing SFT. This potentially allows much quicker iterations on the data pipeline, especially when datasets and models are large and constantly evolving.
- **Towards understanding the mechanism of instruction-tuning.** The fact that a base LM can be finetuned to follow instructions with extremely lightweight interventions worth no more than 0.0019% - 0.0039% of the original parameter count is surprising. Our finding can shed light on the training dynamics of instruction-tuning.
### Clarifications on the prefix and suffix tokens.
In the current draft, we experiment with a simple intervention strategy: intervening only on **the leading tokens** (“prefix” or the first $n$ tokens) and **the trailing tokens** (“suffix” or the last $n$ tokens) of the input prompt (i.e., there is no intervention being applied to output tokens). For classification tasks, we only intervene on the prefix tokens.
The **intuition of this simple strategy** is that interventions on the prefix tokens change the “information read-out” of all following tokens working as anchors (since attention processes them differently now), and the interventions on the suffix tokens steer the generations.
### ReFT with DPO and the usage of the preference dataset Ultrafeedback.
Indeed, DPO (or any arbitrary loss function) can be integrated with **a single model loaded into memory** (taking advantage of the fact that ReFT only trains the interventions). Additionally, we have integrated DPO trainers into our library, which will be open-sourced.
And yes, you are right - we are under-utilizing these preference datasets at this point. We only do SFT with the best-rated responses, without using the contrastive signals.
---
Rebuttal Comment 1.1:
Title: Reply to the rebuttal
Comment: I appreciate the clarifications the authors made regarding my questions. I especially like the point about quicker iteration for data selection in the alignment pipeline using ReFT.
I would keep my overall positive feedback on the paper and will keep my score unchanged. | Rebuttal 1:
Rebuttal: We thank all reviewers for their useful comments. We remark on some of the shared questions here. All other questions are addressed in individual reviewer responses.
## Re: The significance of ReFT over LoRA and others.
Although in almost all other responses, we focus on comparing ReFT with other PEFTs by judging who achieves the SoTA performance, the most significant and surprising insight that we think ReFT brings to the table is **“minimally manipulating representations of LLMs can achieve strong finetuning performance”**.
ReFT also offers **a new generic finetuning paradigm** under which more variants could emerge as the community starts to explore the “steering” power of LLMs’ hidden representations. We showed some of these explorations in Appendix F-H, yet there is much more to study, both as research directions and as practical applications: How do we automate hyperparameter search for intervention locations? How do we design a better intervention function? Is the intervention latent space interpretable? How do we serve finetuned models with ReFT at scale? How do we allow quick personalization of LLMs with ReFT? etc.
## Re: Remarks on hyperparameter selection with ReFT.
Multiple reviewers ask about the hyperparameter selection process with ReFT.
We want to clarify that **ReFT actually has a similar number of hyperparameters to LoRA and other PEFTs**. ReFT only introduces two new hyperparameters, the intervening layers and the intervening positions, while *removing LoRA hyperparameters such as alpha and the applied components (i.e., which components to apply LoRA to)*. We will clarify this further in the camera-ready. We also agree that it would be best if we could automate this process in the future. For now, we provide a practical guide for choosing ReFT hyperparameters in Appendix D.2.
## Re: ReFT with other LLMs, such as Mistral and Phi-3.
Some reviewers raise the question regarding whether ReFT works with other model types.
To give a quick preview of ReFT with Mistral and Phi-3, we reran our quick adaptation experiments from Appendix G.2 with Mistral-7B-instruct-v2.0 and Phi-3-small-8k-instruct, finetuning these two models to kindly refuse all user queries using 5 training examples and a rank=4 LoReFT on the last token at layer 15. **We include qualitative results in our attached rebuttal pdf**.
Additionally, we finetune various models from the Mistral and Phi-3 families as well as LLaMA-3-8B-Instruct on our math benchmarks with DiReFT. We use the same set of hyperparameters mentioned in the paper without additional hyperparameter searching.
| Name | % Params | AQuA | GSM8K | MAWPS | SVAMP | Avg |
|--------------------------|----------|------:|------:|------:|------:|----------:|
| LLaMA-7B | 0.031% | 21.3% | 24.1% | 74.5% | 42.7% | 40.6% |
| Mistral-7B-v1.0 | 0.031% | 24.0% | 53.2% | 85.3% | 64.0% | 56.6% |
| LLaMA-3-8B-Instruct | 0.031% | 31.9% | 68.8% | 88.7% | 78.0% | 66.8% |
| Mistral-7B-Instruct-v0.2 | 0.031% | 30.7% | 55.2% | 83.2% | 69.0% | 59.5% |
| Phi3-small-8k-Instruct | 0.031% | 37.0% | 79.8% | 92.0% | 84.1% | **73.2%** |
These results definitely cannot provide the full picture without comparing against PEFTs. However, it is clear that ReFT (LoReFT and DiReFT) works for models from the Mistral and Phi-3 families. Bonus: Phi3 is obviously the best here as we expected since it’s actually tuned heavily on math and reasoning tasks!
## Re: Ways to improve ReFT on Math, and a showcase of ReFT with long-context summarization.
Multiple reviewers raise the question about what tasks ReFT is more suited for, and how to improve performance in benchmarks such as math reasoning.
We agree that LoReFT currently underperforms LoRA on math reasoning tasks. One potential reason is that we only apply ReFT to the prompt tokens and not to the generation tokens, which trades off ReFT's steering power for lower inference overhead. To verify this hypothesis, we ran additional experiments by **applying ReFT to selected prompt tokens as well as ALL decoding tokens** (LoReFT w/ decoding). Given our limited resources, we only ran it on our math reasoning benchmarks for a single seed with LLaMA-7B:
| Name | % Params | AQuA | GSM8K | MAWPS | SVAMP | Avg |
|-----------------------------|----------|------:|------:|------:|------:|----------:|
| LoRA | 0.826% | 18.9% | 37.5% | 79.0% | 52.1% | 46.9% |
| LoReFT | 0.031% | 21.4% | 26.0% | 76.2% | 46.8% | 42.6% |
| LoReFT w/ decoding | 0.062% | 20.1% | 31.2% | 80.7% | 54.8% | 46.7% |
This essentially closes the gap with LoRA. In addition, we note that this result is likely far from optimal: we ran this experiment without hyperparameter tuning, and we applied interventions on the attention output, not the residual stream. The additional parameters came solely from the decoding step interventions.
**Reviewer d959 brought up another related point: how applicable is ReFT to tasks requiring long-form context?** To address this with limited resources, we ran a validation experiment applying ReFT to long-context summarization, where we finetuned LLaMA-7B with only 10 examples sampled from WikiSum [2] (input > 1000 tokens) and a single rank=4 LoReFT intervention (# of trainable params=32,772). We compared our model with a publicly available SFT summarization model. **We include one example in our attached rebuttal pdf**. ReFT can certainly adapt our base LM to do long-context summarization! In the next revision, we might consider adding a full-fledged experiment.
[1] Meng et al., 2022, “Locating and Editing Factual Associations in GPT”
[2] Cohen et al., 2021, “WikiSum: Coherent Summarization Dataset for Efficient Human-Evaluation”
Pdf: /pdf/23d6066de21db2a1361a70ccaca9df4e86f40d3e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Rethinking Imbalance in Image Super-Resolution for Efficient Inference | Accept (poster) | Summary: In this paper, the authors propose a novel framework called Weight-Balancing Super-Resolution (WBSR) that reformulates the SR task as an imbalanced distribution transfer learning problem.
Strengths: The key contributions of the paper are: 1. Introduction of a Hierarchical Equalization Sampling (HES) strategy to tackle data distribution imbalances by enhancing feature representation from texture-rich samples. 2. Development of a Balanced Diversity Loss (BDLoss) function that focuses on learning texture regions while ignoring redundant computations in smooth areas, aiming to correct model optimization imbalances. 3. Presentation of a gradient projection dynamic inference strategy for accurate and efficient inference without changing the original model structure or training data. 4. Extensive experimental results demonstrating that the proposed method achieves comparable or superior performance to existing approaches with a significant reduction in computational cost (approximately 34%).
Weaknesses: 1. In lines 203 to 205, the author introduces the gradient dynamic projection inference method but does not describe the specific network this method uses. The article would be stronger if the structure of this network were described more carefully.
2. The article designs a data sampling technique, a network training technique, and a network inference technique to enhance super-resolution. However, the experimental section appears to conduct only a limited number of experiments to support these claims. Adding more comparative experiments to substantiate the claims would significantly improve the article.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In lines 203 to 205, the author introduces the gradient dynamic projection inference method but does not describe the specific network this method uses. The article would be stronger if the structure of this network were described more carefully.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 2. The article designs a data sampling technique, a network training technique, and a network inference technique to enhance super-resolution. However, the experimental section appears to conduct only a limited number of experiments to support these claims. Adding more comparative experiments to substantiate the claims would significantly improve the article.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thanks for your positive evaluation and valuable suggestions.**
**Weaknesses:**
***[Q1:]: In lines 203 to 205, the author introduces the gradient dynamic projection inference method but does not describe the specific network this method uses. The article would be stronger if the structure of this network were described more carefully.***
***[A1:]***: Thank you for this suggestion. Our gradient dynamic projection method directly computes the mean and standard deviation of the gradient magnitude from testing images for classification, allowing for efficient inference without the need for an additional specific network structure. Our framework is designed to be plug-and-play, enabling the use of different restoration backbone networks depending on the specific task. For a detailed explanation of the inference process, please refer to Figure 2(b) in the original manuscript.
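A minimal sketch of the classification step described in this answer could look as follows (the scoring rule, thresholds, and function name are our assumptions for illustration, not the paper's actual values):

```python
import numpy as np

def patch_difficulty(patch, thresholds=(10.0, 30.0)):
    """Classify a patch's restoration difficulty from gradient statistics.

    The score combines the mean and standard deviation of the gradient
    magnitude; the thresholds here are illustrative only.
    """
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.sqrt(gx ** 2 + gy ** 2)
    score = mag.mean() + mag.std()
    if score < thresholds[0]:
        return "easy"     # smooth region -> small subnetwork
    elif score < thresholds[1]:
        return "medium"
    return "hard"         # texture-rich region -> full supernet

smooth = np.full((32, 32), 128.0)                         # constant patch, zero gradients
textured = np.random.default_rng(0).uniform(0, 255, (32, 32))
print(patch_difficulty(smooth), patch_difficulty(textured))  # easy hard
```

The point of such a test-time rule is that no extra classifier network (and hence no extra parameters) is needed to route a patch to a subnetwork.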
***[Q2:]: The article designs a data sampling technique, a network training technique, and a network inference technique to enhance super-resolution. However, the experimental section appears to conduct only a limited number of experiments to support these claims. Adding more comparative experiments to substantiate the claims would significantly improve the article.***
***[A2:]***: Following your suggestion, we conduct additional comparative experiments to provide more robust support for our proposed data sampling, network training, and inference techniques. The results of these experiments are presented in Tables 4, 5, and 7 of the attached PDF in the Author Rebuttal. These experiments demonstrate the robustness and effectiveness of our approach, providing stronger evidence to support our claims. We will incorporate these additional experiments into the revised manuscript to enhance the overall evaluation of our methods.
---
Rebuttal Comment 1.1:
Title: The rebuttal is not very satisfying.
Comment: The concerns raised in the review are addressed in this rebuttal, but not to my satisfaction.
1. The authors did not answer my question: which backbone networks does your plug-and-play part work on? Please explain this at least in the experiments. This content should also be reflected in the camera-ready version of the paper.
2. The authors gave a page of further experiments, but the experiments lack organization and explanation, and the notation used in them is not explained. For example, the difference between WBSR and WBSR+ in Table 5 is not given, so listing the experiments actually raises more questions. I suggest the authors add appropriate explanation and organization of the experiments to enhance their persuasiveness.
---
Rebuttal 2:
Comment: Thank you very much for your further feedback. We sincerely apologize for the lack of organization and explanation of the experiments and notation in the PDF attached to the author rebuttal.
Table 1, Table 2, and Table 3 present the comparison results between three SR networks (RCAN, Fusformer, and SRResNet) and our methods integrated with these backbone networks. The compared results are evaluated on four datasets including an autonomous driving scene dataset (KITTI2015), two satellite remote sensing datasets (CAVE and NTIRE2020), and a low-light super-resolution dataset (RELLISUR), which validate the generalization of our method across diverse scenarios.
Table 4 provides the comparison between the vanilla RCAN method and the combined method with our sampling method (RCAN+HES).
Table 5 shows the comparison results between the backbone RCAN and the combined methods when it is integrated with three existing sampling methods (+BSPA, +SamplingAug, and +DDA), our sampling method (+HES), our loss function (+BDLoss), and both our sampling method and loss function (+WBSR† and +WBSR).
Here, "WBSR" indicates dynamic model inference using multiple smaller subnetworks with lower computational costs, while "WBSR†" indicates inference using the whole supernet at maximum computational cost (i.e., 100% FLOPs).
Table 6 presents the comparison results between two transformer-based SR backbones (SwinIR and Fusformer) and the combined methods when they are integrated with our WBSR.
Table 7 provides the comparison results between the backbone SRResNet and the combined methods when it is integrated with the classifier-guided method (Classifier) and our gradient-based method, respectively.
Table 8 presents the comparison results between two other restoration backbones (Image denoising backbone FFDNet and JPEG compression artifact removal backbone RNAN) and the combined methods when they are integrated with our WBSR, which demonstrates the generalization of our method to other tasks.
Figure 1 showcases visualization examples of our method on medical and remote sensing datasets.
"SR" in the first and third columns represents the SR results of our method.
"Patch Classification" in the second and fourth columns represent the visualization results of image patches with different restoration difficulty after classification using our gradient-based method.
Different color patches represent different restoration difficulties, e.g., dark blue represents texture areas that are difficult to restore, and green represents smooth areas that are easy to restore.
**[Q1:]**: the author did not answer my question, what is the backbone network of your plug and play part worked on? Please explain at least in the experiment. Also, this content should be reflected in camera ready of this paper.
**[A1:]**: In the above explanations, we clearly indicated the backbone networks used in the experiments (i.e., the first method in each table). We will reflect this in the camera-ready version of the paper.
**[Q2:]**: The authors gave a page of further experiments, but there was a lack of organization and explanation for the experiments, as well as no explanation for the notation in the experiments. For example, in Table 5, the difference between WBSR and WBSR+ was not given, so listing experiments actually brings more problems. I suggest the authors add appropriate explanations and organization of experiments to enhance persuasiveness.
**[A2:]**: Thanks for this suggestion. As explained above, in Table 5, "WBSR" indicates dynamic model inference using multiple smaller subnetworks with lower computational costs, while "WBSR†" indicates inference using the whole supernet at maximum computational cost (i.e., 100% FLOPs).
We will follow the reviewer's suggestion and add appropriate explanations and organization of experiments to enhance persuasiveness. | Summary: This paper rethinks the imbalance problem in image SR and proposes a plug-and-play weight-balancing framework. It combines a Hierarchical Equalization Sampling strategy and a Balanced Diversity Loss to reduce computational cost while keeping or improving SR performance. Extensive experimental results demonstrate the effectiveness and superiority of the proposed method.
Strengths: 1. Exploring Data imbalance for low-level tasks is meaningful.
2. This paper is well-written and easy to understand.
3. Extensive quantitative experiments are provided that help the reader understand the significance of the whole proposed method.
Weaknesses: 1. The proposed method makes incremental contributions. This work is not the first to explore imbalance in image SR, which has been explored by previous work, such as
[1] Xiaotong Luo, Yuan Xie, Yanyun Qu: Learning Re-sampling Methods with Parameter Attribution for Image Super-resolution. In NeurIPS, 2023.
2. Data sampling has also been explored in image SR. The experiments lack comparison and discussion with the latest related works, such as [1] and:
[2] Shizun Wang, Ming Lu, Kaixin Chen, Jiaming Liu, Xiaoqi Li, Chuang Zhang, and Ming Wu. Samplingaug: On the importance of patch sampling augmentation for single image super-resolution. In BMVC, 2021.
[3] Xinyi Zhang, Tao Dai, Bin Chen, and Shu-Tao Xia. DDA: A dynamic difficulty-aware data augmenter for image super-resolution. In IJCNN, 2023.
3. Transformer-based SR backbones should be included for comparison in the experiments.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Were other classification criteria tried, such as the PSNR measurement used in ClassSR? Actually, the mean and standard deviation of the gradient magnitude of the input samples may not reflect the sample reconstruction difficulty well.
2. How well do the proposed methods generalize to other restoration tasks, such as image denoising and JPEG compression artifact removal? It seems that the mean and standard deviation of the gradient magnitude would be affected by noise or more complex degradation factors.
3. There are some writing problems, such as
- Line 24-25
- Line 68
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See the above comments
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thanks for your positive evaluation and valuable suggestions.**
**Weaknesses:**
***[Q1:]: The proposed method makes incremental contributions. This work is not the first to explore imbalance in image SR.***
**[A1:]**: We want to clarify the contributions and novelties of our work from both the motivation and methodology aspects as follows.
**Motivation**. Although previous works [1-3] have explored imbalance in image SR, they primarily focus on imbalance in the data distribution. In contrast, our method addresses both data distribution imbalance and model optimization imbalance, inspired by our theoretical analysis and experimental verification showing that imbalance in model optimization is more critical to SR performance.
**Methodology**. To solve both data distribution imbalance and model optimization imbalance, we propose a novel data sampling strategy, network optimization method, and efficient inference framework.
Regarding **data sampling**, we design the Hierarchical Equalization Sampling (HES) strategy to address data distribution imbalance with sample- and class-level sampling, enabling balanced generalization of feature representation from diverse samples.
Regarding **network optimization**, we propose the Balanced Diversity Loss (BDLoss) to correct model optimization imbalances based on the distribution transformation theorem, which focuses on learning texture regions while disregarding redundant computations in smooth regions.
Regarding the **efficient inference framework**, we present a gradient projection dynamic inference strategy that adaptively allocates subnetworks, without any additional parameters, by calculating a gradient projection map.
***[Q2:]: Data sampling has also been explored in image SR. The experiments lack comparison and discussion with the latest related works.***
***[A2:]***: Data sampling has indeed been explored in image SR [1,2,3] to address data distribution imbalance.
In particular, the dual-sampling technique [1] alternates sampling between uniform samples and hard samples, which causes unreliable learning because the model oscillates between the two types of samples.
The greedy sampling approach [2] focuses on collecting more hard samples, which leads to overfitting to specific samples and poor generalization across diverse data.
The dynamic sampling method [3] controls the sampling probability of each class by the relative loss, which can be influenced by various factors (e.g., noise) and leads to instability as the loss values fluctuate during training.
Different from the above approaches, our method designs a novel two-tier data sampling strategy (Hierarchical Equalization Sampling, HES) to address data distribution imbalance.
HES first performs extensive sample-level sampling to learn generalized feature representations, followed by selective class-level sampling that focuses on texture-rich regions to correct sample bias, providing stable learning and preventing overfitting; this mitigates the model oscillation of [1] and addresses the overfitting problem of [2]. Furthermore, our HES achieves balanced, stable training with our BDLoss over diverse samples at each training step, which solves the instability and training bias issues of [3].
In Table 5, we conduct additional experiments comparing our HES with these works [1-3], which show that HES outperforms the previous best sampling method, BSPA, by an average of 0.1 dB in PSNR, demonstrating the superiority and generalization capability of our HES.
With "WBSR†", we achieve an even greater performance gain of 0.18 dB by integrating our HES with our BDLoss.
***[Q3:]: The transformer-based SR backbones should be considered.***
***[A3:]***: Thanks for this suggestion. We conduct additional comparisons with transformer-based SR backbones. As shown in Table 6 of the attached PDF in the Author Rebuttal, our approach achieves performance improvements on both natural and remote sensing datasets with average gains of 0.16 dB and 0.26 dB, respectively.
**Questions:**
***[Q4:]: Is other classification criteria tried, such as PSNR measurement like ClassSR?***
***[A4:]***: We actually tried the PSNR measurement of ClassSR to assess sample reconstruction difficulty before the paper submission. However, this approach requires introducing additional classification networks to categorize images, leading to increased training cost and added computational complexity, which contradicts our goal of achieving efficient inference. We therefore abandoned the PSNR measurement and adopted the gradient-based method.
Following this suggestion, we present a comparison between our gradient-based method and the classifier-guided method (PSNR measurement) in Table 7 of the PDF attached to the author rebuttal. As we can see, our gradient-based method achieves performance comparable to the more complex classifier-guided method while maintaining a lower computational cost, which proves its effectiveness.
***[Q5:]: How about the generalization of the proposed methods on other restoration tasks.***
***[A5:]*** Thanks for your valuable suggestion. We conduct additional comparative experiments on various restoration tasks, including image denoising and JPEG Compression Artifact Removal (CAR) in Table 8.
Our method achieves a performance improvement of 0.07 dB over the denoising baseline FFDNet, and 0.11 dB over the CAR baseline RNAN.
Due to the interference of noise and blocking artifacts, these improvements are not as substantial as the 0.18 dB gain achieved on the super-resolution baselines; nevertheless, our method remains effective even under complex degradation factors, because our weight-balancing approach effectively mitigates the imbalance issues prevalent in these restoration fields.
***[Q6:]: There are some writing problems.***
***[A6:]***: Thanks for this suggestion. We will proofread the whole paper and correct the writing problems in the revised manuscript.
---
Rebuttal 2:
Comment: We thank the reviewer very much for their great effort in reviewing our paper. We kindly wish to remind the reviewer to consider our response and additional experiments. We are more than willing to provide further clarifications if there are any lingering questions or concerns. | Summary: This paper proposes a Weight-Balancing framework to address the imbalanced learning issues in image super-resolution. Two categories of imbalance are involved, including data distribution imbalance and model optimization imbalance. Experiments demonstrate the effectiveness of the proposed method.
Strengths: 1. The idea of addressing the imbalanced issues in image SR is reasonable, straightforward, and effective.
2. The experiments demonstrate that the proposed method enables computation cost reduction while maintaining pixel-domain accuracy.
Weaknesses: 1. Some grammatical errors should be corrected, e.g., "are used to accelerate inference have been widely xxx" in Line25-26. Please check the whole paper.
2. I doubt the assumption that the distribution of the training set is imbalanced, whereas the independent testing set is balanced. Why is there this difference between the training and test sets?
3. The description of the proposed HES sampling method is confusing. For example, what do the "classes" in image SR mean? How is the number of classes K determined? Please improve the clarity and add more technical details.
4. From a probabilistic view, in my opinion, the prediction of an SR network trained with L1 loss should correspond to the median of a latent noisy prediction distribution. Therefore, I doubt the correctness of Eq. (2).
5. In the ablation study, the impact of the dynamic supernet model should be isolated. For instance, I wonder about the performance gain of RCAN+HES over the vanilla RCAN.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see the weaknesses part.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Please see the weaknesses part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thanks for your positive evaluation and valuable suggestions.**
**Weaknesses:**
***[Q1:]: Some grammatical errors should be corrected, e.g., "are used to accelerate inference have been widely xxx" in Line25-26. Please check the whole paper.***
***[A1:]***: Thanks for this suggestion. We will proofread the whole paper and correct the grammatical errors in the revised manuscript.
***[Q2:]: I doubt the assumption that the distribution of the training set is imbalanced, whereas the independent testing set is balanced. Why is there this difference between the training and test sets?***
***[A2:]***: We assume the training sets are imbalanced and the testing sets are balanced due to the inherent characteristics of real-world datasets and the objectives of model evaluation. This imbalance assumption is common in machine learning and computer vision, for example in imbalanced learning [1,2], long-tailed classification [3,4], and anomaly detection [5,6].
Specifically, training sets with a large number of samples follow a skewed Gaussian distribution, typically reflecting the natural distribution of real-world data, where some classes are overrepresented (major classes) and others are underrepresented (minor classes).
To achieve fair model evaluation, testing sets are usually considered to have a balanced uniform distribution with an equal number of samples in each class, as each sample is independently tested by the model.
When the testing dataset contains a large number of simple smooth images and a small number of complex texture images, even if the texture images are poorly restored, the overall PSNR will still be high, which will affect the accuracy of objective measurement.
Thus, balanced testing sets ensure that evaluation metrics accurately reflect the model's ability to handle different types of features, thereby providing more reliable evaluation results.
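A toy average illustrates why an imbalanced test set can mask poor texture restoration (the per-patch PSNR numbers below are purely illustrative):

```python
import numpy as np

# 90 smooth patches restored at 40 dB; 10 texture patches restored at either
# 20 dB (poorly) or 30 dB (much better). Illustrative numbers only.
smooth = np.full(90, 40.0)
overall_bad = np.concatenate([smooth, np.full(10, 20.0)]).mean()
overall_good = np.concatenate([smooth, np.full(10, 30.0)]).mean()
print(overall_bad, overall_good)  # 38.0 39.0
```

A 10 dB difference in texture quality moves the overall average by only 1 dB, so the aggregate metric barely registers the failure mode that matters.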
Lastly, extensive experiments on diverse real-world datasets in Tables 1-3 of the attached PDF in the Author Rebuttal demonstrate that our methods are robust and effective across various real-world applications, supporting the validity of our assumptions and method.
[1] He H, Garcia E A. Learning from imbalanced data[J]. IEEE TKDE 2009.
[2] Haixiang G, Yijing L, et al. Learning from class-imbalanced data: Review of methods and applications[J]. ESWA, 2017.
[3] Park S, Lim J, Jeon Y, et al. Influence-balanced loss for imbalanced visual classification[C]. In CVPR 2021.
[4] Wang P, Han K, Wei X S, et al. Contrastive learning based hybrid networks for long-tailed image classification[C]. In CVPR 2021.
[5] Zhang G, Yang Z, Wu J, et al. Dual-discriminative graph neural network for imbalanced graph-level anomaly detection[J]. In NeurIPS 2022.
[6] Dragoi M, Burceanu E, Haller E, et al. AnoShift: A distribution shift benchmark for unsupervised anomaly detection[J]. In NeurIPS 2022.
***[Q3:]: The description of the proposed HES sampling method is confusing. For example, what do the "classes" in image SR mean? How is the number of classes K determined? Please improve the clarity and add more technical details.***
***[A3:]***: To solve the imbalance problems of image SR, we classify the imbalanced training dataset into multiple classes. Here, "classes" refer to the levels of restoration difficulties of image patches. For example, texture-rich patches are usually considered as samples from difficult classes, while smooth patches are considered as samples from easy classes.
The number of classes K is set manually to 10 in our method to balance performance and computational cost. Different values of K can lead to varying restoration performance due to the allocation of different subnetworks for inference under computational cost limitations. A higher K allows for more suitable subnetwork selection and potentially better inference performance, but it also requires more computational cost.
For a detailed analysis and experimental results, please refer to the supplementary material, "Analysis of the class K of samples", in the original manuscript.
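As a minimal sketch of how patches could be bucketed into K = 10 difficulty classes from a scalar difficulty score (quantile binning here is our assumption for illustration; the paper's exact criterion may differ):

```python
import numpy as np

def assign_classes(scores, K=10):
    """Bucket patches into K difficulty classes by quantiles of their
    difficulty scores (quantile binning is an illustrative assumption)."""
    edges = np.quantile(scores, np.linspace(0, 1, K + 1)[1:-1])
    return np.searchsorted(edges, scores, side="right")

rng = np.random.default_rng(0)
scores = rng.exponential(scale=1.0, size=1000)  # skewed: many easy patches
classes = assign_classes(scores, K=10)
print(np.bincount(classes, minlength=10))       # roughly 100 patches per class
```

Quantile binning deliberately equalizes class sizes even when the raw score distribution is heavily skewed, which matches the goal of balanced sampling across difficulty levels.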
***[Q4:]: From a probabilistic view, in my opinion, the prediction of an SR network trained with L1 loss should correspond to the median of a latent noisy prediction distribution. Therefore, I doubt the correctness of Eq. (2).***
***[A4:]***: Thanks for pointing out this error. Your observation is correct. From a probabilistic perspective, when a super-resolution (SR) network is trained using L1 loss, the prediction aligns with the median of the latent noisy prediction distribution. This is because L1 loss minimizes the sum of absolute differences between the predictions and the ground truth, which corresponds to estimating the median of the error distribution. In contrast, L2 loss minimizes the sum of squared differences, which is more closely associated with estimating the mean of a Gaussian distribution.
It is important to note that our Distribution Transformation theory remains applicable and valid whether L1 or L2 loss is employed. For a balanced uniform distribution, the median and the mean are equivalent. Therefore, both L1 and L2 losses provide different perspectives on the training set distribution without affecting the fundamental principles of our theory.
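This correspondence can be checked numerically: over a grid of candidate predictions, the empirical L1 risk is minimized at the sample median while the L2 risk is minimized at the sample mean (the sample values below are arbitrary):

```python
import numpy as np

samples = np.array([1.0, 2.0, 3.0, 10.0, 14.0])  # skewed, so mean != median
grid = np.linspace(0.0, 15.0, 1501)              # candidate predictions

l1_risk = np.abs(grid[:, None] - samples).sum(axis=1)   # sum of absolute errors
l2_risk = ((grid[:, None] - samples) ** 2).sum(axis=1)  # sum of squared errors

print(grid[l1_risk.argmin()], np.median(samples))  # L1 minimizer = median (3.0)
print(grid[l2_risk.argmin()], samples.mean())      # L2 minimizer = mean (6.0)
```

For a symmetric (e.g., balanced uniform) distribution the two minimizers coincide, which is why the choice of L1 vs. L2 does not affect the distribution-transformation argument.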
***[Q5:]: In the ablation study, the impact of the dynamic supernet model should be isolated. For instance, I wonder the performance gain of RCAN+HES over the vanilla RCAN.***
***[A5:]***: As shown in Table 4 of the attached PDF in the Author Rebuttal, we provide the results for this comparison by using the whole supernet with 100% FLOPs for inference, i.e., RCAN+HES.
It can be seen from the results that RCAN+HES improves performance by 0.11 dB compared to the vanilla RCAN, which demonstrates the effectiveness of our HES.
For a more detailed comparison of our method with other methods and ablation studies, please refer to Table 5 of the attached PDF in the Author Rebuttal.
---
Rebuttal 2:
Comment: We thank the reviewer very much for their great effort in reviewing our paper. We kindly wish to remind the reviewer to consider our response and additional experiments. We are more than willing to provide further clarifications if there are any lingering questions or concerns. | Summary: To address imbalance and parameter redundancy problems, the authors propose the Weight-Balancing framework (WBSR), which balances model learning without altering the original model structure or training data. The approach includes a Hierarchical Equalization Sampling (HES) strategy to handle data distribution imbalances and a Balanced Diversity Loss (BDLoss) function to optimize learning for texture-rich regions while reducing redundant computations in smooth areas. They also introduce a gradient projection dynamic inference strategy for accurate and efficient inference.
Strengths: The Weight-Balancing framework (WBSR) achieves balanced model learning without altering the original model structure or training data, addressing dataset imbalances and parameter redundancy effectively
The Balanced Diversity Loss (BDLoss) function optimizes model learning by focusing on texture regions and minimizing redundant computations in smooth areas, leading to more efficient training
The method achieves comparable or superior performance to existing approaches with a 34% reduction in computational cost, demonstrating significant efficiency improvements.
Weaknesses: The approach may still face scalability challenges when applied to extremely large datasets or high-resolution images, limiting its applicability in some real-world scenarios.
While focusing on texture-rich regions can improve feature representation, it may also lead to overfitting, reducing generalization performance on smooth or less textured areas.
Technical Quality: 3
Clarity: 3
Questions for Authors: To enhance generalization, consider testing the proposed methods on a broader range of datasets, particularly those with varying characteristics (e.g., different textures, lighting conditions). This will help validate the robustness of the framework across diverse scenarios.
Address the potential complexity of the Weight-Balancing framework (WBSR) by providing a more user-friendly implementation or detailed guidelines. Including example codes or a simplified version could facilitate adoption by practitioners with varying levels of expertise.
Provide well-defined interfaces for each module with clear input and output specifications. This will help users know what data to provide and what to expect in return.
Include a section that addresses common issues users might encounter, along with solutions and tips for resolving them.
Include additional examples showcasing how WBSR can be applied in various real-world scenarios, such as different types of images or specific applications (e.g., medical imaging, satellite imagery)
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: While the approach emphasizes learning from texture-rich regions, this focus may result in suboptimal performance on images with predominantly smooth areas, leading to a risk of overfitting to specific features while neglecting others.
Although the framework is validated on various models and datasets, it may not have been tested extensively across all possible scenarios, raising concerns about its generalization capabilities in diverse applications.
The formulation of the super-resolution task as an imbalanced distribution transfer learning problem relies on certain statistical assumptions that may not hold in all real-world scenarios, potentially limiting its effectiveness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thanks for your positive evaluation and valuable suggestions.**
**Questions:**
***[Q1:]: To enhance generalization, consider testing the proposed methods on a broader range of datasets (e.g., different textures, lighting conditions).***
***[A1:]***: Following this suggestion, to validate the robustness of our method across diverse scenarios, we conduct additional experiments on four datasets including an autonomous driving scene dataset (KITTI2015), two satellite remote sensing datasets (CAVE, NTIRE2020), and a low-light super-resolution dataset (RELLISUR). The experimental results are shown in Tables 1-3 of the attached PDF in the Author Rebuttal. As we can see, our method achieves good generalization and robustness across diverse scenarios. Specifically, our method obtains an average PSNR improvement of 0.11 dB on the autonomous driving scene dataset, 0.27 dB and 0.25 dB on the two satellite remote sensing datasets, and 0.1 dB on the low-light condition SR dataset, respectively.
***[Q2:]: Address the potential complexity of the Weight-Balancing framework (WBSR) by providing a more user-friendly implementation or detailed guidelines.***
***[A2:]***: Thanks for this suggestion. To facilitate the implementation of the WBSR framework, we will provide implementation details in the supplementary materials of the revised manuscript and we will also release the source codes of our method.
***[Q3:]: Provide well-defined interfaces for each module with clear input and output specifications.***
***[A3:]***: In the revised manuscript, we will provide the well-defined interfaces for each module with clear input and output specifications. Additionally, we will include a section in the supplementary materials to discuss common issues that users might encounter.
***[Q4:]: Include additional examples (e.g., medical imaging, satellite imagery).***
***[A4:]***: In Figure 1 of the attached PDF in the Author Rebuttal, we provide additional examples that show how WBSR performs in medical MRI images and satellite hyperspectral images, which also have imbalanced sample classes.
In addition, we provide additional quantitative results of our WBSR on a broader range of datasets in Tables 1-3 of the attached PDF, which show that our method performs well in various real-world applications.
**Limitations:**
***[Q5:]: While the approach emphasizes learning from texture-rich regions, this focus may result in suboptimal performance on images with predominantly smooth areas, leading to a risk of overfitting to specific features while neglecting others.***
***[A5:]*** Existing SR networks are prone to overfitting smooth areas and underfitting texture-rich areas, since the latter contain diverse and intricate features that are harder to learn. Although our approach emphasizes learning from texture-rich regions, it will not overfit to them.
In HES, we employ only a small amount of selective class-level sampling to focus on texture-rich regions, while relying more on sample-level sampling to learn generalized feature representations. In addition, our BDLoss encourages balanced learning of more diverse features via distribution transformation to avoid overfitting to specific features, and it includes an L2 regularization term that reduces the complexity of the model weights to further prevent overfitting.
***[Q6:]: Raising concerns about its generalization capabilities in diverse applications.***
***[A6:]*** Regarding the generalization capabilities of our method, please see our response to Q1.
***[Q7:]: The formulation of the super-resolution task as an imbalanced distribution transfer learning problem relies on certain statistical assumptions that may not hold in all real-world scenarios, potentially limiting its effectiveness.***
***[A7:]*** We assume the training sets are imbalanced and the testing sets are balanced due to the inherent characteristics of real-world datasets and the objectives of model evaluation. This imbalance assumption is common in machine learning and computer vision tasks, such as imbalanced learning [1,2], long-tail classification [3,4], and anomaly detection [5,6].
Specifically, training sets with a large number of samples follow a skewed Gaussian distribution, typically reflecting the natural distribution of real-world data, where some classes are overrepresented (major classes) and others are underrepresented (minor classes).
To achieve fair model evaluation, testing sets are usually considered to have a balanced uniform distribution with an equal number of samples in each class, as each sample is independently tested by the model.
When the testing dataset contains a large number of simple smooth images and a small number of complex texture images, even if the texture images are poorly restored, the overall PSNR will still be high, which will affect the accuracy of objective measurement.
Thus, balanced testing sets ensure that evaluation metrics accurately reflect the model's ability to handle different types of features, thereby providing more reliable evaluation results.
Lastly, extensive experiments on diverse real-world datasets (shown in Tables 1-3) demonstrate that our methods are robust and effective across various real-world applications, supporting the validity of our assumptions and method.
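The evaluation-bias argument above — that a smooth-heavy test set can keep the overall PSNR high even when texture restoration is poor — can be illustrated with a toy calculation. The per-image PSNR values below are invented purely for illustration:

```python
def mean_psnr(psnrs):
    """Average the per-image PSNR scores of a test set."""
    return sum(psnrs) / len(psnrs)

# Hypothetical per-image scores: smooth images restore easily (~40 dB),
# while texture-rich images restore poorly here (~25 dB).
smooth_db, texture_db = 40.0, 25.0

imbalanced_set = [smooth_db] * 90 + [texture_db] * 10  # 90% smooth images
balanced_set = [smooth_db] * 50 + [texture_db] * 50    # equal class sizes

print(mean_psnr(imbalanced_set))  # 38.5 dB -- poor texture restoration is hidden
print(mean_psnr(balanced_set))    # 32.5 dB -- the weakness becomes visible
```

On the imbalanced set the aggregate score barely reflects the 15 dB gap on textured images, which is the motivation the authors give for evaluating on balanced test sets.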
[1] He H, Garcia E A. Learning from imbalanced data[J]. IEEE TKDE 2009.
[2] Haixiang G, Yijing L, et al. Learning from class-imbalanced data: Review of methods and applications[J]. ESWA, 2017.
[3] Park S, Lim J, et al. Influence-balanced loss for imbalanced visual classification[C]. In CVPR 2021.
[4] Wang P, Han K, et al. Contrastive learning based hybrid networks for long-tailed image classification[C]. In CVPR 2021.
[5] Zhang G, Yang Z, Wu J, et al. Dual-discriminative graph neural network for imbalanced graph-level anomaly detection[J]. In NeurIPS 2022.
[6] Dragoi M, Burceanu E, Haller E, et al. AnoShift: A distribution shift benchmark for unsupervised anomaly detection[J]. In NeurIPS 2022. | Rebuttal 1:
Rebuttal: We thank all reviewers for their positive comments and valuable suggestions.
**Reviewer D78d (Rating: 7 - Accept)** gives positive comments on both our method and experimental results. The reviewer's concerns are about the applicability and generalization in some real-world scenarios. In response, we have presented additional experiments on four datasets to validate the generalization of the proposed method.
**Reviewer FS1Z (Rating: 5 - Borderline accept)** has main concerns about the presentation of the paper and the ablation study. In response, we have presented an additional ablation study and will improve the presentation in the revision.
**Reviewer SYab (Rating: 3 - Reject)** has main concerns about the contributions and the need for additional experiments. In response, we have clarified our contributions from both the motivation and methodology aspects. In addition, we have presented additional experiments to evaluate the generalization of the proposed method.
**Reviewer Cw8X (Rating: 7 - Accept)** acknowledges our contributions and performance improvement. The reviewer's concerns are about the details of the network structure and the expectation of more experimental evaluation. In response, we have explained the network structure and presented additional comparative experiments.
Our responses to individual comments of each reviewer are posted in the rebuttal under each reviewer's report. All the required experimental results are presented in the PDF attached in this rebuttal.
Specifically:
- **Table 1** , **Table 2**, and **Table 3** show quantitative experiments of our method on four datasets with varying scenes.
- **Table 4** , **Table 5**, and **Table 7** provide ablation studies of our method and quantitative comparison results with other related methods.
- **Table 6** and **Table 8** present quantitative experiments of our method on transformer-based SR backbones and other restoration backbones.
- **Figure 1** showcases representative visualization examples for our method on medical and remote sensing datasets.
For convenience, we highlight the figure and tables relevant to each reviewer's comments as follows:
- **Reviewer D78d**: Table 1, Table 2, Table 3, and Figure 1.
- **Reviewer FS1Z**: Table 4 and Table 5.
- **Reviewer SYab**: Table 5, Table 6, Table 7, and Table 8.
- **Reviewer Cw8X**: Table 4, Table 5, and Table 7.
We sincerely hope this rebuttal addresses the reviewers' concerns and convinces the reviewers to reconsider their ratings.
Pdf: /pdf/2ef0a7b7bacf8ddd64f3598e71f40a41e9155e41.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
HC-GAE: The Hierarchical Cluster-based Graph Auto-Encoder for Graph Representation Learning | Accept (poster) | Summary: This paper proposes a novel Hierarchical Cluster-based Graph Auto-encoder (HC-GAE) for unsupervised graph representation learning. HC-GAE can reduce the over-smoothing problem and generalize to multiple downstream tasks.
Strengths: 1. The motivation is clear and easy to understand.
2. The proposed method seems to be simple and effective.
3. This paper is easy to follow.
Weaknesses: 1. The novelty is limited. (1) Neither motivation is novel. Over-smoothing is a classic problem on GNNs and has been discussed for many years. Multi-task ability has also been discussed in previous literature [1], which achieves much better multi-task performance than the proposed method. (2) The method is not very novel either. It is just a combination of existing methods (DiffPool, VGAE) with simple modifications.
2. One of the motivations of this paper is to enhance the performance of GAEs on multiple downstream tasks, and the authors also emphasize this for the proposed method. The authors are therefore encouraged to conduct experiments on the link prediction task, which is also very important in graph learning and very different from node classification and graph classification. If HC-GAE is not designed for this task or cannot perform well on it, the authors should discuss the limitation on downstream tasks and highlight that discussion.
3. The loss function requires computing multiple KL-divergences and may take a long time to compute. The authors are encouraged to analyze the time and space complexity, and also conduct experiments to show the computational cost.
4. The baselines are not enough and kind of old. The authors are encouraged to include more recent baselines, including auto-encoder based methods (e.g., GraphMAE2 [2]) and contrastive learning based methods (e.g., CCA-SSG [3], GGD [4] and [5]).
5. The authors are also encouraged to include experimental results on more datasets.
6. There is no sensitivity test on the hyper-parameters.
[1] Multi-task Self-supervised Graph Neural Networks Enable Stronger Task Generalization. In ICLR 2023.
[2] GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner. In WWW 2023.
[3] From Canonical Correlation Analysis to Self-supervised Graph Neural Networks. In NeurIPS 2021.
[4] Rethinking and Scaling Up Graph Contrastive Learning: An Extremely Efficient Approach with Group Discrimination. In NeurIPS 2022
[5] SEGA: structural entropy guided anchor view for graph contrastive learning. In ICML 2023.
Technical Quality: 1
Clarity: 3
Questions for Authors: Please refer to "Weaknesses".
Confidence: 4
Soundness: 1
Presentation: 3
Contribution: 1
Limitations: Please refer to "Weakness 2".
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: Both motivations are not novel. Over-smoothing is a classic problem and multi-task ability has also been discussed. The authors are encouraged to conduct experiments on the link prediction task and discuss the limitations on downstream tasks.
A1: We would like to further explain and emphasize the motivation and contribution of this work. First, the reviewer is correct that the over-smoothing problem is a very classical problem. However, it still seriously affects most GNN-based methods today, which is why it has continued to be discussed for many years; over-smoothing remains an important issue for the development of novel GNN-based methods.
On the other hand, DiffPool and GAE are both popular methods proposed after 2018. Specifically, DiffPool can be seen as a hierarchical pooling method that constructs pyramid-like hierarchical structures over the original graph in order to extract meaningful structural characteristics. These hierarchical structures are formed by gradually compressing the nodes into a number of clusters based on node features extracted through a GNN model, i.e., the node features are computed by performing the graph convolution operation. GAE likewise extracts node features through the graph convolution operation in both its encoder and decoder. Since the graph convolution operations required by both DiffPool and VGAE suffer from the over-smoothing problem, these two classical methods still have theoretical drawbacks.
To overcome these drawbacks, the novel contributions of this work are twofold. First, unlike DiffPool, which relies on graph convolution over the global graph structure, the proposed HC-GAE is based on a new subgraph convolution framework. Specifically, we employ hard node assignment to partition the nodes into separated clusters, resulting in separated subgraphs. The convolution operation is performed within each individual subgraph, so node information cannot propagate between different subgraphs; this significantly reduces the over-smoothing problem and improves performance. Second, unlike some classical GAE-based methods (see also R#3-Q2&A2), the proposed HC-GAE can simultaneously reconstruct both the topological structure and the node features through a hierarchical strategy associated with node assignment, i.e., hard node assignment for the encoder and soft node assignment for the decoder. Moreover, unlike some hierarchical GAE methods (see also R#3-Q2&A2) that essentially rely on node drop pooling (i.e., a masking strategy) and focus mainly on node feature reconstruction, the proposed HC-GAE does not drop any node information, reducing topological information loss and effectively extracting bidirectionally hierarchical structural features of the original sample graph.
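A rough sketch of the separated subgraph convolution idea is given below; the function names and the mean-aggregation rule are our illustrative assumptions, not the paper's exact implementation. The key step is masking the adjacency matrix so that only intra-cluster edges survive:

```python
import numpy as np

def hard_assign(S):
    """Turn a soft assignment matrix S (n_nodes x n_clusters) into a
    hard one-hot assignment by taking the argmax per node."""
    H = np.zeros_like(S)
    H[np.arange(S.shape[0]), S.argmax(axis=1)] = 1.0
    return H

def separated_subgraph_conv(A, X, H, W):
    """One graph-convolution step restricted to each subgraph.

    Masking A with H @ H.T zeroes every edge whose endpoints fall in
    different clusters, so messages cannot cross subgraph boundaries.
    """
    A_sep = A * (H @ H.T)                # keep only intra-cluster edges
    A_hat = A_sep + np.eye(A.shape[0])   # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    return np.maximum((A_hat / deg) @ X @ W, 0.0)  # mean-aggregate + ReLU
```

Because the masked adjacency contains no cross-cluster edges, a node's representation can only mix with nodes assigned to the same subgraph, which is exactly the mechanism claimed here to limit over-smoothing.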
In summary, both the proposed HC-GAE and DiffPool belong to the family of hierarchical-strategy GNN methods, but they are still theoretically different in their detailed definitions and motivations. Moreover, the proposed HC-GAE also differs from classical GAE methods and can significantly overcome their drawbacks. We will follow the reviewer's suggestion and add these statements to the manuscript, making it more polished.
For the link prediction problem, we thank the reviewer for the constructive suggestion. It enlightens our future work, and we will discuss it in the conclusion as future work.
Q2: The authors are encouraged to analyze the time and space complexity.
A2: Thanks for the suggestion. We have briefly analyzed the complexity as follows.
For the encoding and decoding process, given an input graph with node set $V$ and edge set $E$, the proposed model requires storage complexity of $\mathcal{O}(\frac{1}{\mu}\|V\|^2)$, where each cluster has $\mu$ nodes. By contrast, DiffPool needs $\mathcal{O}(\|V\|^2)$. For the time complexity, if we set the number of layers to $k$, the proposed model requires $\mathcal{O}(k\mu\log\frac{\|V\|^2}{\mu})$. Clearly, the proposed model has similar time and space complexity to DiffPool.
Furthermore, we will follow the reviewer's suggestion and generate a family of toy sample graphs with increasing sizes (e.g., from 50, 100, and 150 up to 5000), and evaluate how the time and space costs vary with increasing size, demonstrating the efficiency.
Q3: The baselines are not enough and kind of old. The authors are encouraged to include more recent baselines, including auto-encoder based methods.
A3: Thank you for the constructive suggestion. In fact, in the current experiments we have already compared the proposed model with several methods published within the past three years. For node classification, these include S2GAE (WSDM 2023) and GraphMAE (KDD 2022). For graph classification, these include S2GAE (WSDM 2023), GraphMAE (KDD 2022), and InfoGCL (NeurIPS 2021). To make the experiments more convincing, we have also compared the proposed method with the newest methods published in 2024; the results are shown below. More experiments will be conducted during the review stage, so please feel free to discuss them with us. Finally, we will also run sensitivity tests on the hyper-parameters, following the reviewer's suggestion.
Table 1. Node classification accuracy on Cora / CiteSeer / PubMed:
- Graph U-Nets (2024): 84.4 ± 0.60 / 73.2 ± 0.50 / 79.6 ± 0.20
- Hi-GMAE (2024): 86.4 ± 0.50 / 74.5 ± 0.80 / 85.3 ± 0.20
- Ours (HC-GAE): 88.0 ± 0.10 / 75.3 ± 0.10 / 87.6 ± 0.40

Table 2. Graph classification accuracy on PROTEINS / COLLAB:
- Graph U-Nets (2024): 77.68 / 77.56
- Hi-GMAE (2024): 76.63 / 82.16
- Ours (HC-GAE): 78.13 / 80.41
---
Rebuttal 2:
Title: About out responses
Comment: Dear Reviewers,
Thanks for your efforts and the constructive suggestions for this paper. We have provided the response based on your comments.
Please read our responses and feel free to discuss with us during the reviewer-author discussion stage, if you have any concerns for our response.
Best Regards,
The authors
---
Rebuttal 3:
Comment: Thanks for the author's response, but my concerns are not addressed.
---
Rebuttal 4:
Comment: After reading the rebuttal, I feel that my concerns are not addressed.
1. I have listed 6 weak points in the review. However, the authors only give responses to 3 points.
2. I have listed some baselines, including references, but the authors did not use them. Those baselines are very fast to run.
Besides, the authors seem not to be familiar with this field.
1. I still cannot understand the novelty. This method is just a combination of existing techniques.
1. There have been some surveys on Graph Pooling for Graph Neural Networks [1].
2. "Hard node assignment" has been widely used in graph coarsening [2, 3, 4, 5, 6, 7, 9].
3. "Simultaneously reconstruct both the topological structures and the node features" has been used in [7, 8, 10].
2. The complexity analysis is wrong. In graph neural networks, the adjacency matrices are implemented with sparse matrices, in order to reduce the complexity. Thus the complexity of the DiffPool is $O(|E| + |V| n_1)$ [7], where $|E|$ is the number of edges in the graph, $|V|$ is the number of nodes in the graph, and $n_1$ is the number of nodes after the first coarsening operation.
---
[1] Graph Pooling for Graph Neural Networks: Progress, Challenges, and Opportunities. In IJCAI 2023.
[2] Training Large-Scale Graph Neural Networks via Graph Partial Pooling. IEEE Transactions on Big Data.
[3] Efficient Representation Learning of Subgraphs by Subgraph-To-Node Translation. ICLR 2022 workshop.
[4] CC-GNN:A Community and Contraction-based Graph Neural Network. In ICDM.
[5] SizeShiftReg: a Regularization Method for Improving Size-Generalization in Graph Neural Networks. NeurIPS 2022.
[6] SMGRL: A Scalable Multi-resolution Graph Representation Learning Framework. Arxiv
[7] GraphZoom: A multi-level spectral approach for accurate and scalable graph embedding. In ICLR 2020.
[8] HARP: Hierarchical representation learning for networks. In AAAI 2018.
[9] Spectral Clustering with Graph Neural Networks for Graph Pooling. In ICML 2020.
[10] Graph U-Nets. In ICML 2019.
---
Rebuttal 5:
Title: Replying to About out responses
Comment: Thank you for the responses as well as the constructive suggestions.
After reading the reviewer's new response, we were surprised and concerned that the reviewer may feel disrespected during the rebuttal stage. Perhaps there are some unnecessary misunderstandings between us; we would like to explain in detail and again express our respect to the reviewer.
Please note that this does not mean we are pleading for acceptance of this paper. In fact, we know this work still has some weaknesses, since it is the first work of a student who will start his PhD this September. The other authors have published in many top conferences and journals, including nearly 30 TPAMI papers, so we clearly understand the concerns raised by the reviewer, especially for a top conference like NeurIPS. Rather, we simply hope that the rebuttal can genuinely help the student recognize the drawbacks of his work and learn how to polish this paper in the future.
First, we have expressed our appreciation for the reviewer's efforts in reviewing this paper. We also thanked the reviewer for the suggestions at the beginning of each response. In summary, we also said, "please feel free to discuss with us". Thus, we trust that we have expressed our sincere respect to the reviewer.
Second, we have tried our best to answer each of the reviewer's concerns. Because some of the answers simultaneously address more than one point, it may seem that there are only three answers. Specifically, A1 corresponds to points 1 and 2 raised by the reviewer, A2 corresponds to point 3, and A3 corresponds to points 4-6, respectively.
Third, as explained in the rebuttal, the aim of this paper is not only to utilize hard node assignment. Moreover, the hard node assignment is not used merely to construct coarsened nodes. More specifically, we employ hard node assignment to generate a number of separated subgraphs and perform the subgraph convolution operation to block node information propagation between different subgraphs, reducing the over-smoothing problem. As a result, this is quite different from most existing works.
Fourth, thank you for correcting the mistake in the time complexity analysis and for providing these new references. They will inform the student's future work.
Finally, we thank the reviewer for the new suggestions, and we have given our responses above to the new concerns raised. We hope our new response can eliminate the misunderstanding between us; please trust that we always appreciate the reviewer's suggestions.
Thank you very much again for the responses.
Best Regards,
The authors | Summary: The paper presents a Hierarchical Cluster-based Graph Auto-Encoder (HC-GAE) for improved graph representation learning. HC-GAE uses hard node assignment for encoding and soft node assignment for decoding; thus, it enables hierarchical compression and expansion of graphs. The authors argue their method can reduce the over-smoothing effect by limiting message-passing to individual subgraphs. HC-GAE effectively improves performance in node and graph classification tasks on real-world datasets.
Strengths: The paper is well-written and includes all the necessary details. The proposed model solves a significant problem in the graph representation community and numerical results are promising.
Weaknesses: The code should be released at the review stage to check reproducibility. Especially for empirical work, releasing codes is a prerequisite for acceptance to me.
The proposed HC-GAE is similar to existing models that employ graph auto-encoders with hierarchical (multi-level, multiresolution) pooling (or coarsening). The authors should include a discussion comparing HC-GAE with these models, both qualitatively and quantitatively. Without this discussion, the paper's claim of novelty cannot be highlighted.
1. Graph U-Nets
1. Mesh Variational Autoencoders with Edge Contraction Pooling
1. Masked Graph Auto-Encoder Constrained Graph Pooling
1. GRAPH AUTOENCODER FOR GRAPH COMPRESSION AND REPRESENTATION LEARNING
1. Multiresolution equivariant graph variational autoencoder
1. Hi-GMAE: Hierarchical Graph Masked Autoencoders
It is unclear whether the over-smoothing effect does not exist in HC-GAE truly, as there are no experimental results against the number of layers. If GNN is applied to the coarsened graph, message-passing between coarsened nodes will occur, potentially leading to over-smoothing. To support their argument, the authors should empirically validate the claim that HC-GAE reduces over-smoothing (i.e., how performance varies by the number of layers).
Technical Quality: 2
Clarity: 3
Questions for Authors: - I think “compressing procedure during the decoding process (line 9 – 10)” should be “compressing procedure during the encoding process”. Is it a typo?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: The code should be released at the review stage to check reproducibility. Especially for empirical work, releasing codes is a prerequisite for acceptance to me.
A1: Thanks for the constructive suggestion. We have provided a demo to the reviewer; please see the official comment with the anonymized link, which complies with the policy of the reviewing/discussion process. The reviewer can also run our code on the other standard datasets mentioned in the manuscript. We trust that this will help the reviewer verify the reproducibility of the proposed method. Please feel free to discuss the code with us during the discussion stage. Moreover, we promise to release our code on GitHub after the review process.
Q2: The proposed HC-GAE is similar to existing models that employ graph auto-encoders with hierarchical (multi-level, multiresolution) pooling (or coarsening). The authors should include a discussion comparing HC-GAE with these models, both qualitatively and quantitatively. Without this discussion, the paper's claim of novelty cannot be highlighted. E.g., 1) Graph U-Nets; 2) Mesh Variational Autoencoders with Edge Contraction Pooling; 3) Masked Graph Auto-Encoder Constrained Graph Pooling; 4) GRAPH AUTOENCODER FOR GRAPH COMPRESSION AND REPRESENTATION LEARNING; 5) Multiresolution equivariant graph variational autoencoder; 6) Hi-GMAE: Hierarchical Graph Masked Autoencoders
A2: Thanks for the constructive suggestion. Although some existing hierarchical GAE methods also employ an auto-encoder with hierarchical pooling, there are still important theoretical differences between the proposed HC-GAE and these alternative methods.
First, unlike these alternative hierarchical GAE methods, the proposed HC-GAE not only hierarchically constructs a series of coarsened graphs of shrinking sizes, but is also defined in association with separated subgraph convolution operations during the hierarchical pooling process. Thus, unlike the alternative methods, which rely on global convolution over the whole graph structure, the proposed HC-GAE naturally restricts information propagation to each individual separated subgraph, significantly reducing the over-smoothing problem. Second, the alternative hierarchical GAE methods are essentially based on node drop pooling (i.e., a masking strategy), which focuses more on node feature reconstruction, resulting in loss of topological information and weakening the structure reconstruction. By contrast, the proposed HC-GAE simultaneously reconstructs both the topological structure and the node features through the hierarchical pooling strategy associated with node assignment. Thus, the proposed HC-GAE can effectively extract bidirectionally hierarchical structural features of the original sample graph, in terms of both the adjacency matrices (i.e., topological information) and the node feature matrix (i.e., node representations). We will cite the references suggested by the reviewer and add the above discussion to the manuscript, making the paper more polished.
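A minimal sketch of this hard-encode / soft-decode pairing might look as follows; the sum-pooled features and the DiffPool-style $H^\top A H$ coarsening are our illustrative assumptions rather than the authors' exact formulas:

```python
import numpy as np

def softmax(Z, axis=1):
    """Row-wise softmax, used to form a soft assignment matrix."""
    E = np.exp(Z - Z.max(axis=axis, keepdims=True))
    return E / E.sum(axis=axis, keepdims=True)

def encode(A, X, H):
    """Hard-assignment coarsening: pool nodes into their clusters.

    H is a one-hot (n_nodes x n_clusters) assignment matrix.
    """
    A_c = H.T @ A @ H  # coarsened, cluster-level adjacency
    X_c = H.T @ X      # summed node features per cluster
    return A_c, X_c

def decode(A_c, X_c, S):
    """Soft-assignment expansion: distribute cluster-level information
    back to node level to reconstruct both structure and features."""
    A_rec = S @ A_c @ S.T  # reconstructed adjacency
    X_rec = S @ X_c        # reconstructed node features
    return A_rec, X_rec
```

Since both the adjacency and the features are expanded back through the soft assignment, reconstruction losses can be placed on the topology and on the node features simultaneously, which is the bidirectional reconstruction claimed above.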
Furthermore, to empirically demonstrate the effectiveness, we will also compare the proposed HC-GAE model with the alternative hierarchical GAEs suggested by the reviewer. As a preliminary evaluation, we have run several of the suggested models; the results are shown below, and our method significantly outperforms the alternatives. Note that more experiments are underway, and we will add the new experimental results as well as the above theoretical discussion to the final manuscript, making this paper more polished.
Table 1. Node classification accuracy on Cora / CiteSeer / PubMed:
- Graph U-Nets: 84.4 ± 0.6 / 73.2 ± 0.5 / 79.6 ± 0.2
- Hi-GMAE: 86.4 ± 0.5 / 74.5 ± 0.8 / 85.3 ± 0.2
- Ours (HC-GAE): 88.0 ± 0.1 / 75.3 ± 0.1 / 87.6 ± 0.4

Table 2. Graph classification accuracy on PROTEINS / COLLAB:
- Graph U-Nets: 77.68 / 77.56
- Hi-GMAE: 76.63 / 82.16
- Ours (HC-GAE): 78.13 / 80.41
Q3: It is unclear whether the over-smoothing effect does not exist in HC-GAE truly, as there are no experimental results against the number of layers. If GNN is applied to the coarsened graph, message-passing between coarsened nodes will occur, potentially leading to over-smoothing. To support their argument, the authors should empirically validate the claim that HC-GAE reduces over-smoothing (i.e., how performance varies by the number of layers).
A3: Thanks for the constructive suggestion. From a theoretical viewpoint, the proposed HC-GAE is defined with the separated subgraph convolution operation, so node information cannot be propagated to other individual subgraphs, significantly reducing the over-smoothing problem. From an empirical viewpoint, we have selected three alternative methods (those we have already evaluated plus the ones suggested by the reviewer) and evaluated how the classification accuracies vary with the number of layers. We find that the proposed HC-GAE significantly outperforms these alternative methods, especially with more than 3 layers for either the encoder or the decoder. We will add these new results to the final manuscript, making the experiments and the statement more self-contained.
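The subgraph-restricted propagation argued for above can be sketched in a few lines (our own illustration, not the authors' code; the toy graph, cluster labels, and mean-aggregation layer are hypothetical stand-ins for HC-GAE's actual operators):

```python
import numpy as np

def subgraph_masked_adjacency(A, labels):
    """Zero out edges between nodes assigned to different clusters."""
    mask = (labels[:, None] == labels[None, :]).astype(A.dtype)
    return A * mask

def propagate(A, X, layers=3):
    """Simple mean-aggregation propagation (a stand-in for a GCN layer)."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)  # row-normalize
    for _ in range(layers):
        X = D_inv * (A_hat @ X)
    return X

# A 4-node path 0-1-2-3; nodes {0,1} and {2,3} form separated subgraphs.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
labels = np.array([0, 0, 1, 1])
H = propagate(subgraph_masked_adjacency(A, labels), np.eye(4))

# No information from nodes 2, 3 ever reaches nodes 0, 1 (and vice versa),
# regardless of depth -- the mechanism claimed to limit over-smoothing.
assert H[0, 2] == 0 and H[0, 3] == 0 and H[2, 0] == 0
```

Because the masked adjacency is block-diagonal under the cluster labels, stacking more layers mixes features only within a subgraph, which is the property the argument above relies on.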
---
Rebuttal Comment 1.1:
Comment: Thank you for opening the codes. I will raise my score to 4.
However, I still cannot find a clear difference between the proposed and existing models.
About Q&A2-1. Separate subgraph convolution: If the separated subgraph convolution is the first novel contribution that makes a difference, why does the paper focus only on the auto-encoder? Why do the authors not apply this operation for the general GNN layers to reduce the over-smoothing effect? (e.g., separated subgraph GCN, separated subgraph GAT, ...). I cannot find a good reason why this paper is positioned as it is now.
About Q&A2-2. Node assignment: From what I understand from the responses and paper description so far, it is difficult to see clear contributions other than a 'combination' of hard and soft assignments. I think existing pool and un-pool methods have used at least one kind of operation in this paper.
---
Reply to Comment 1.1.1:
Comment: Thanks for your appreciation. We will update our paper, following the reviewer’s suggestions.
About Q&A2-1.
We think that the separated subgraph convolution has great potential for solving the over-smoothing problem. In future work, we will apply it to improve the GNN layers. Thanks for your advice about the separated subgraph convolution. It helps us recognize the worth of the proposed convolution operation. In this paper, we focus more on the graph representation learning and the multiple downstream tasks.
About Q&A2-2.
Node assignment is one of the key contributions mentioned in our paper, but not the only one. To explain in detail, we summarize our contributions as follows.
The first contribution of the proposed HC-GAE model is that it performs the subgraph convolution operation within each individual subgraph. Compared to other hierarchical GAEs, our subgraph generation reduces information propagation through edges, which is the main cause of over-smoothing. On the other hand, we adopt the soft node assignment to reconstruct the original graph structure in the decoding, and the outputs of the decoder can be employed as node-level representations. Since HC-GAE is based on the hierarchical framework, it can effectively extract bidirectionally hierarchical structural features, supporting training for multiple downstream tasks.
Second, we re-design the loss function to suit our model on multiple downstream tasks. Since HC-GAE generates bidirectionally hierarchical structural features beyond the original GAEs, the traditional loss function might neglect their rich semantics during training. Existing hierarchical pooling or un-pooling methods hardly discuss a loss related to the bidirectionally hierarchical features.
Last but not least, we improve the performance of hierarchical GAEs. The results show that our model effectively learns graph representations for multiple downstream tasks.
---
Rebuttal 2:
Title: The anonymized link for the code requested by the reviewer
Comment: The reviewers asked for code during the rebuttal stage, below is the anonymized link for the code.
https://anonymous.4open.science/r/HC-GAE-ECD7
---
Rebuttal 3:
Title: About our responses and the updated anonymous link for the CODE
Comment: Dear Reviewers,
Thanks for your efforts and the constructive suggestions for this paper. We have provided the response based on your comments.
Moreover, we have updated the code requested by the reviewer, please see the anonymous link: https://anonymous.4open.science/r/SSHPool-FB16.
Please read our responses and feel free to discuss with us during the reviewer-author discussion stage, if you have any concerns for our response.
Best Regards,
The authors | Summary: This paper develops a novel GAE, namely the Hierarchical Cluster-based GAE (HC-GAE) model, to learn effective features for either node classification or graph classification. To extract the bidirectionally hierarchical structural features of the original graph, the paper first utilizes hard node assignment to transform the original graph into a family of coarsened graphs, and then utilizes soft assignment to reconstruct the original graph. During the encoding process, the convolution operation is restricted within each separated subgraph, so HC-GAE can alleviate the over-smoothing problem. The new model shows superior performance on both node classification and graph classification tasks.
Strengths: 1.The idea of this work is interesting, and the bidirectionally hierarchical structure learning based on the adaptive node assignment seems novel to me.
2.The proposed HC-GAE model is flexible for either node classification or graph classification.
3.The paper is clearly written and easy to follow, and the experimental results also demonstrate the effectiveness of the new HC-GAE model.
Weaknesses: Although this paper introduces a novel graph representation learning method, some problems still need to be addressed or made clearer.
1. Why does the HC-GAE model utilize the hard and soft assignments for the encoder and decoder, respectively? As I see it, the authors could use just one of these assignment strategies, right?
2. If the hard assignment helps the new model reduce the over-smoothing problem, what about the soft assignment?
3. Why are the reconstructed features effective for node classification? Why not use the middle features?
4. Although this paper is clearly written, the writing of Section 3 could be further polished.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weakness raised above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Although this paper introduces a novel graph representation learning method, some problems still need to be addressed or made clearer.
Q1: Why does the HC-GAE model utilize the hard and soft assignments for the encoder and decoder, respectively? As I see it, the authors could use just one of these assignment strategies, right?
A1: One key innovation of the proposed HC-GAE model is that it performs the subgraph convolution operation within each individual subgraph, so that node information can only be propagated within a separated subgraph, significantly reducing the over-smoothing problem. To eliminate the structural connections between different subgraphs, each node of the original input graph needs to be assigned to a unique cluster. Soft assignment would assign each node to multiple clusters with different probabilities, so it cannot help us reduce the over-smoothing problem.
Q2: If the hard assignment helps the new model reduce the over-smoothing problem, what about the soft assignment?
A2: Thanks for the suggestion. The reason is similar to the above. Soft assignment cannot guarantee that each node is assigned to a single cluster, so one node's information may appear in all other subgraphs, and we cannot restrict its propagation to an individual separated subgraph. In other words, each node can still propagate its information to all other nodes. As the model architecture becomes deeper, the over-smoothing problem appears.
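The contrast between hard and soft assignment can be shown with a small numerical sketch (our own toy example, not the paper's operators; the random assignment matrix is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_clusters = 6, 2
logits = rng.normal(size=(n_nodes, n_clusters))
S_soft = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # row softmax
S_hard = np.eye(n_clusters)[S_soft.argmax(axis=1)]                   # one-hot argmax

X = np.eye(n_nodes)            # identity features: column j tracks node j
X_soft = S_soft.T @ X          # coarsened features under soft assignment
X_hard = S_hard.T @ X          # coarsened features under hard assignment

# Soft assignment: every node leaks into every cluster (all entries positive),
# so information can still travel between subgraphs.
assert np.all(X_soft > 0)
# Hard assignment: each node contributes to exactly one cluster.
assert np.all((X_hard > 0).sum(axis=0) == 1)
```

The asserts make the rebuttal's point concrete: under soft assignment no cluster is isolated from any node, whereas the one-hot matrix confines each node's information to a single cluster.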
Q3: Why are the reconstructed features effective for node classification? Why not use the middle features?
A3: Sorry that we didn't explain this in detail. The proposed HC-GAE model is a hierarchical structure-based GNN method that hierarchically constructs a series of coarsened graphs with shrinking sizes. Thus, the coarsened graph in the middle layer is smaller than the original graph, i.e., many nodes of the original graph are compressed into coarsened nodes. Clearly, the coarsened nodes are not suitable for classifying the original nodes.
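The coarsening step being described follows the standard assignment-based pooling algebra (DiffPool-style notation; the toy graph, assignment, and features below are our own assumptions, not the paper's exact operators):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], float)                    # 4-node toy graph
S = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], float)  # 4 nodes -> 2 clusters
X = np.arange(8.0).reshape(4, 2)                       # toy node features

A_coarse = S.T @ A @ S    # coarsened adjacency (2 x 2)
X_coarse = S.T @ X        # coarsened features  (2 x 2)

# The 4 original nodes are compressed into 2 coarsened nodes, so middle-layer
# features no longer correspond one-to-one with the original nodes.
assert A_coarse.shape == (2, 2) and X_coarse.shape == (2, 2)
```

This is why the middle-layer representation cannot be used directly for original-node classification: there simply is no per-node feature left at that level.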
Q4: Although this paper is clearly written, the writing of Section 3 could be further polished.
A4: Thanks for the suggestion. We will further revise this section.
---
Rebuttal 2:
Title: About our responses
Comment: Dear Reviewers,
Thanks for your efforts and the constructive suggestions for this paper. We have provided responses based on your comments.
Please read our responses and feel free to discuss with us during the reviewer-author discussion stage, if you have any concerns for our response.
Best Regards,
The authors
---
Rebuttal Comment 2.1:
Comment: The responses address my concerns about this paper. Thus, I maintain my recommendation of acceptance and raise my score for this paper. | Summary: The authors propose a new GNN-based representation learning method (HC-GAE) that can extract effective local node features and global graph features. These features can be used for both node and graph classification. The new HC-GAE method consists of two main computational modules: the encoder associated with the hard node assignment, and the decoder associated with the soft assignment. Moreover, the deep representation in the middle layer can be seen as global graph features, and the output of the last layer can be seen as local node features. All these features encapsulate bidirectionally hierarchical structural information of the original sample graph based on the hierarchical strategy. Finally, the authors also propose a new loss function for integrating information from both the encoder and the decoder.
Strengths: 1. The idea of this work seems novel and interesting for me, performing the local convolution operation within the separated subgraphs through the hard node assignment not only addresses the over-smoothing problem, but also forms hierarchical representation for the graphs.
2. The descriptions are clear, the experiments demonstrate the effectiveness, and the new model is technically sound.
Weaknesses: Overall, the writing is easy to understand, but I see minor typos and grammar mistakes in Sections 3 and 4. I didn't check these in detail; the authors should carefully correct them for the final manuscript.
The loss function feels a little strange to me, because it tends to minimize the differences in either the structures or the node features between the input (to the encoder) and the output (of the decoder). As I see it, if the authors use the output for node classification, why not directly use the original input node features? Or, for node classification, do you eliminate the effect of the node features in the loss function? This is not very clear.
Appendix B: what do you mean by Ghazan? I didn't see any explanation of this word. Is it an alternative method or something else?
The authors discuss some theoretical advantages of the proposed method in Sec 3.4, but they don't discuss why the proposed method performs well in Sec 4. They simply show that the accuracies are better. Some more analysis is needed.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the questions I asked in weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: The loss function feels a little strange to me, because it tends to minimize the differences in either the structures or the node features between the input (to the encoder) and the output (of the decoder). As I see it, if the authors use the output for node classification, why not directly use the original input node features? Or, for node classification, do you eliminate the effect of the node features in the loss function? This is not very clear.
A1: We are sorry that we didn't explain the loss function in more detail. For graph classification, the proposed model needs to simultaneously reconstruct the topological structures (i.e., the adjacency matrices) and the node feature information (i.e., the node feature matrices). On the other hand, for node classification, we only need to reconstruct the structure information. We will explain this more clearly in the manuscript.
Q2: Appendix B, what do you mean for Ghazan? I didn’t see any explanation of this word. Is it a kind of alternative method or others?
A2: We are sorry for the typo; in fact, it should be HC-GAE. We will correct this.
Q3: The authors discuss some theoretical advantages of the proposed method in Sec 3.4. But they don’t discuss any reason about why the proposed method performs well in Sec 4. They just simply show the accuracies are better. Some more analysis is needed.
A3: Thanks for the constructive suggestion. The theoretical reasons for the effectiveness are twofold. First, the proposed HC-GAE is based on the new subgraph convolution framework. Specifically, we employ the hard node assignment to assign the nodes into separated clusters, resulting in separated subgraphs. The convolution operation is performed within each individual subgraph, and node information cannot be propagated between different subgraphs; this significantly reduces the over-smoothing problem and improves performance. Second, the proposed HC-GAE can simultaneously reconstruct both the topological structures and the node features through the hierarchical strategy associated with the node assignment operation. Moreover, it does not drop any node information, reducing topological information loss and effectively extracting bidirectionally hierarchical structural features of the original sample graph. We will add this theoretical discussion to the manuscript together with the classification accuracies, following the reviewer's suggestion.
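A minimal sketch of a joint structure-plus-feature reconstruction objective of the kind described above (our own simplified stand-in, not the paper's exact loss; the weighting `lam` is hypothetical):

```python
import numpy as np

def reconstruction_loss(A, X, A_rec, X_rec, lam=1.0):
    """Penalize both adjacency and node-feature reconstruction errors."""
    structure_term = np.mean((A - A_rec) ** 2)   # topological reconstruction
    feature_term = np.mean((X - X_rec) ** 2)     # node-feature reconstruction
    return structure_term + lam * feature_term

A = np.array([[0.0, 1.0], [1.0, 0.0]])
X = np.array([[1.0, 0.0], [0.0, 1.0]])

# A perfect reconstruction drives the loss to zero; dropping the structure
# (A_rec = 0) is penalized even when the features are reconstructed exactly.
assert reconstruction_loss(A, X, A, X) == 0.0
assert reconstruction_loss(A, X, np.zeros_like(A), X) > 0.0
```

The second assert illustrates the rebuttal's point that a purely feature-based loss would ignore topological information loss, whereas the joint objective does not.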
---
Rebuttal Comment 1.1:
Comment: The authors’ responses seem reasonable for my questions. Please revise the paper as promised in the rebuttal.
---
Rebuttal 2:
Title: About our responses
Comment: Dear Reviewers,
Thanks for your efforts and the constructive suggestions for this paper. We have provided responses based on your comments.
Please read our responses and feel free to discuss with us during the reviewer-author discussion stage, if you have any concerns for our response.
Best Regards,
The authors | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Diffusion Twigs with Loop Guidance for Conditional Graph Generation | Accept (poster) | Summary: The paper presents Twigs, a score-based diffusion framework for enhancing conditional generation tasks. It includes a central trunk diffusion process for graph structure and stem processes for graph properties. The innovative loop guidance strategy manages information flow between trunk and stem processes. Experiments show Twigs significantly outperforms contemporary baselines, especially in tasks like inverse molecular design.
Strengths: 1. A new guidance strategy that explicitly models the coupling effects between condition and data.
2. Impressive experimental results.
Weaknesses: 1. The main drawback of the proposed method is the limited flexibility. The loop guidance requires the joint training of $s_{\theta}$ and $s_{\phi}$ and it cannot provide guidance for unseen conditions.
2. Some of the experimental settings and the evaluation pipeline are unclear. Please at least specify them in the appendix.
3. Minor: Please put back the guidelines of paper checklist.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How will the sampling step size affect the generation? Can we use different step sizes when sampling $y_s$ and $y_i$?
2. Classifier guidance helps the diffusion model to generalize to new conditions. Can Twigs provide guidance of unseen conditions by, for example, separating the training of $s_{\theta}$ and $s_{\phi}$?
3. How will the incorporation of multiple $s_{\phi}$ influence the total training time of Twigs? Can you provide the training time comparison as well?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The limitations mentioned in the manuscript read more like future directions for applying a similar strategy to other domains. Please provide more discussion if possible.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your many excellent suggestions! We have acted on all of these, and additionally, address all your questions and comments below.
**RE: Twigs cannot guide unseen conditions (W1) (Q2)**
Thanks for raising this point. Based on your suggestion, we have added an experiment to tackle the issue you raised. The Twigs framework is very flexible: it can be trained in an unconditional setting and combined with a classifier to achieve generalization, as you mentioned. We provide an example here.
Specifically, we first train an unconditional version of Twigs, which consists of a diffusion model over $k$ graph properties. Secondly, we train a separate discriminator for the new property to achieve prediction guidance. This idea is the same as in classifier guidance, and in MOOD (Lee et al 2023), which allows the model to generalize to properties not seen during the training of the diffusion model.
The table below shows the unconditional version of Twigs trained with three properties (excluding the unseen property) and a separate property predictor (for the unseen property), on the `community-small` dataset. The results show that Twigs benefits from having diffusion over multiple properties.
MAE values:

| Property | Twigs p=3 | MOOD | Digress | GDSS |
| ------------- | ---------- | -------- | ------- | ---- |
| Density | **2.12** | **2.12** | 2.34 | 2.95 |
| Clustering | **9.94** | 11.3 | 10.6 | 12.1 |
| Assortativity | **15.8** | 16.7 | 17.8 | 19.6 |
| Transitivity | **8.68** | 8.76 | 9.42 | 11.4 |
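The classifier-guidance recipe used above for an unseen property can be sketched in one dimension (a standard construction: add the gradient of a separately trained predictor's log-likelihood to the unconditional score; the toy score and predictor below are hypothetical):

```python
def guided_score(x, y_target, uncond_score, predictor, sigma=1.0, eps=1e-4):
    """Unconditional score plus the gradient of a predictor's log-likelihood."""
    log_p = lambda z: -0.5 * ((predictor(z) - y_target) / sigma) ** 2
    grad = (log_p(x + eps) - log_p(x - eps)) / (2 * eps)  # numerical gradient
    return uncond_score(x) + grad

uncond = lambda x: -x            # toy unconditional score
predictor = lambda x: 2.0 * x    # toy predictor for the "unseen" property
s = guided_score(0.0, y_target=1.0, uncond_score=uncond, predictor=predictor)
assert s > 0   # the guidance term pushes samples toward the target property
```

Because the predictor is trained separately from the diffusion model, the same mechanism generalizes to properties never seen during diffusion training, as in the experiment above.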
**RE: experimental settings and evaluation pipeline are unclear (W2)**
Thank you, based on your comment, we have added the hyperparameters used in the experiments, which we will integrate into the main text. In particular, we will clarify the following settings.
For Sections 4.1 and 4.2 we follow the same hyperparameters from Huang et al (2023).
For Section 4.3 we follow the hyperparameters from Lee et al (2023), for the MOOD baseline, we explore the OOD coefficient between $0.01$ and $0.09$.
For Section 4.4 we follow the hyperparameters from Jo et al (2022).
**Guidelines checklist (W3):**
Yes, we will bring it back.
**RE: how step sizes affect generation. Q1**
For the step sizes in Sections 4.1 and 4.2, we follow the same hyperparameters as Huang et al (2023), including the step sizes of the sampling procedure. For Section 4.3 we follow the hyperparameters from Lee et al (2023), which leverages GDSS.
**RE: can we have different main and secondary process stepsizes. Q1**
To generalize to unseen properties, we do not need to separate the diffusion; as shown above, we can train a separate predictor on unseen properties. Therefore we can keep the same step sizes across the diffusion flows.
**RE: provide the training time comparison in multiple diffusion (Q3)**
We show the impact of multiple diffusion flows on the community-small and Enzymes datasets. Specifically, we report the average time (in seconds) to train a single epoch of Twigs with one and with three secondary diffusion flows.
We observe that our models incur a small overhead compared to GDSS and Digress; however, we believe this is a good tradeoff for the performance gains.
| Dataset | Twigs p=1 | Twigs p=3 | GDSS | Digress |
| --------------- | --------- | --------- | ------ | ------- |
| Community-small | 0.2747 | 0.2997 | 0.2294 | 0.2382 |
| Enzymes | 4.8669 | 5.0304 | 4.8260 | 4.8451 |
**Final comment**
We are grateful for your thoughtful review. We hope our response has addressed your questions and concerns, and will appreciate the same being reflected in your stronger support for this work.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: Thank you for the response and the additional experimental results. The comparison of per epoch training time suggests a slight overhead. Since different models will have different numbers of optimal training epochs, can you also include the total training time in comparison?
Regarding the experimental settings, please explicitly write out the dataset preprocessing, training/test data splitting, evaluation metrics, and the choice of hyperparameters in addition to the citations in later revision.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer Gpe5, thank you for requesting additional information, which helps to highlight the advantages of our method. As shown in the table below, the overall training times indicate that the overhead of our model is negligible. To ensure fairness, we trained all models for 5,000 epochs on both datasets.
| Dataset | Twigs p=1 | Twigs p=3 | GDSS | Digress |
| --------------- | --------- | --------- | ------ | ------- |
| Community-small | 0h 22m | 0h 24m | 0h 19m | 0h 20m |
| Enzymes | 6h 45m | 6h 59m | 6h 42m | 6h 43m |
**RE: experimental settings**. Thanks for making the request explicit which helps improving our work. We will update all the experimental details of the data and hyperparameters systematically in the main text.
---
Rebuttal 2:
Title: Thank you for the reply
Comment: Thank you for the additional results. Given the above discussion, it seems the proposed method provides limited performance improvement while introducing small computational overheads. Can you elaborate more on the advantages of the proposed method compared to MOOD and JODO?
---
Rebuttal Comment 2.1:
Comment: Thanks for the opportunity to clarify the strengths of our method compared to JODO and MOOD.
**Novelty of Twigs**
The idea behind our method (Twigs) stems from the goal of uncovering the intricate relationships between the graph structure and each of the target properties. As a result, we define a hierarchy of diffusion flows, aimed at improving the representational power of the neural network. The experiments show that the orchestration of our novel hierarchical model leads to richer conditional representations and to improvements over previous guidance methods, including MOOD and JODO.
Specifically, we define two types of diffusion processes with the following roles: (1) multiple *Stem* processes, which aim to unravel the interactions between the graph structure and single properties with the networks $s_{\phi_i}$, and (2) the *Trunk* process, which orchestrates the combination of the graph structure score from $s_\theta$ with the stem process contributions from $s_{\phi_i}$. Twigs resembles a cycle going from the stem process into the trunk process, which we name *loop guidance*.
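A heavily simplified, one-dimensional sketch of the trunk/stem coupling just described (the step rule, toy score functions, and names are our own assumptions, not the paper's exact sampler; the diffusion noise term is omitted):

```python
def trunk_step(x, ys, s_theta, s_phis, dt=0.01):
    """One trunk update: combine the structure score with stem contributions."""
    score = s_theta(x)
    for s_phi, y in zip(s_phis, ys):
        score = score + s_phi(x, y)   # loop guidance: stem feeds back into trunk
    return x + dt * score

# Toy scores: the trunk pulls x toward 0, one stem pulls x toward its property
# target y; the combined step moves x between the two attractors.
s_theta = lambda x: -x
s_phis = [lambda x, y: y - x]
x_new = trunk_step(x=1.0, ys=[0.5], s_theta=s_theta, s_phis=s_phis)
assert x_new < 1.0   # combined score (-1.0 + -0.5 = -1.5) moves x downward
```

The point of the sketch is the information flow: each stem contribution `s_phi(x, y)` is computed from the current structure state and looped back into the trunk update, rather than being applied as a post-hoc guidance term.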
**Summary of computational experiments**
In practice, our method outperforms JODO on generation with single and multiple target properties, and MOOD on both property optimization and multi-property conditional generation.
**Comparison with JODO**
First, we report the MAE results for single quantum properties from the `QM9` dataset (lower is better). Twigs outperforms JODO on all properties.
| Model | $C_v$ | $\mu$ | $\alpha$ | $\Delta \epsilon$ | $\epsilon_\text{HOMO}$ | $\epsilon_\text{LUMO}$ |
| --------- | ------------------- | ------------------- | ----------------- | ----------------- | ---------------------- | ---------------------- |
| JODO | 0.581 (± 0.001) | 0.628 (± 0.003) | 1.42 (± 0.01) | 335 (± 3) | 226 (± 1) | 256 (± 1) |
| **Twigs** | **0.559 (± 0.002)** | **0.627 (± 0.001)** | **1.36 (± 0.01)** | **323 (± 2)** | **225 (± 1)** | **244 (± 3)** |
Secondly, we show that the advantage holds for multiple properties. We consider three molecular properties ($\alpha$, $\mu$, $\Delta \epsilon$) for the `QM9` dataset. Our model (Twigs) consistently achieves significantly lower MAE values compared to JODO, underscoring the enhanced accuracy and reliability of our predictions.
MAE values:

| Model | $\alpha$ | $\mu$ | $\Delta \epsilon$ |
| ----- | -------------------- | -------------------- | ----------------- |
| JODO | 2.749 $\pm$ 0.03 | 1.162 $\pm$ 0.04 | 717 $\pm$ 5 |
| Twigs | **2.544 $\pm$ 0.05** | **1.094 $\pm$ 0.02** | **640 $\pm$ 3** |
**Comparison results with MOOD**
First, we report the Novel top 5% docking scores on `ZINC250k` (higher is better). This metric considers multiple constraints over QED, SA, and Tanimoto similarity. Our model achieves improved scores in 4 out of 5 cases.
| Model | parp1 | fa7 | 5ht1b | braf | jak2 |
| ----- | -------------------- | ------------------- | -------------------- | -------------------- | ------------------- |
| MOOD | 10.409 (± 0.030) | 7.947 (± 0.034) | 10.487 (± 0.069) | **10.421 (± 0.050)** | 9.575 (± 0.075) |
| Twigs | **10.449 (± 0.009)** | **8.182 (± 0.012)** | **10.542 (± 0.025)** | 10.343 (± 0.024) | **9.678 (± 0.032)** |
Secondly, we report the Novel hit ratio on `ZINC250k` (higher is better). This metric represents the fraction of hit molecules constrained on Tanimoto similarity. The richer representations of our model lead to improvements on 4 out of 5 targets.
| Model | parp1 | fa7 | 5ht1b | braf | jak2 |
| ----- | ------------------- | ------------------- | -------------------- | ------------------- | ------------------- |
| MOOD | 3.400 (± 0.117) | 0.433 (± 0.063) | 11.873 (± 0.521) | **2.207 (± 0.165)** | 3.953 (± 0.383) |
| Twigs | **3.733 (± 0.081)** | **0.900 (± 0.012)** | **16.366 (± 0.029)** | 1.933 (± 0.023) | **5.100 (± 0.312)** |
Finally, we report MAE values for graph generation conditioned on three properties on the `community-small` dataset (lower is better). Twigs achieves lower errors on all the considered target properties.
| Model | MAE Density | MAE Clustering | MAE Assortativity |
| ----- | ----------- | -------------- | ----------------- |
| MOOD | 2.53 | 11.4 | 17.3 |
| Twigs | **2.27** | **10.6** | **16.1** |
Please, let us know if we have addressed your concerns. | Summary: This study proposed a conditional generative model with guided diffusion. The authors introduce a new mechanism called loop guidance to include conditions. Empirical analysis includes small molecule generation on a diverse of datasets and properties, with both quantitative evaluation and visualizations.
Strengths: 1. The paper presents a well-structured framework for conditional generation with guided diffusion, including a neat summary of relevant methods in this field.
2. The presentation of the methods and results is clear and easy to follow.
3. The authors provide extensive experimental comparisons, offering a thorough evaluation of the proposed method against various baselines.
Weaknesses: The weaknesses mainly arise from the insufficient empirical evidence:
1. No comparison has been provided regarding the computational cost, making it unclear how efficient the proposed method is relative to others.
2. Although the authors claimed that conditional generation is a fast-developing research field, the baseline methods used for comparison are generally from before 2023, which may not reflect the current SOTA.
3. No ablation study or hyperparameter selection/analysis has been reported.
4. Some terminologies could be introduced and analyzed more carefully (see question 1 below).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What are the meanings of the properties $C_v$, $\mu$, $\alpha$, $\Delta\epsilon$, $\epsilon_{\text{LUMO}}$, and $\epsilon_{\text{HOMO}}$? Why are they important impact factors for molecule generation? Are they practically easy to obtain? Since these properties are used as guides when generating small molecules, do the generated molecules actually have properties close to the guide?
2. What is the relationship and difference between the proposed method and other recent works on guided diffusion, such as https://arxiv.org/pdf/2406.01572, https://arxiv.org/abs/2305.20009, and https://openreview.net/pdf?id=8NfHmzo0Op? (Note: I'm not requiring a quantitative comparison, especially for the first one, which was uploaded after the conference submission deadline. However, for the latter two papers, as they have been published for more than 3 months, I believe they should be included in the paper.)
3. How should the top row of Figure 2 be interpreted? Are they different samples from the same guidance, or are they the same sample shown from different views?
4. Similarly, it is not clear how to interpret the results in Figure 3 (and this figure was not referenced in the main text). Are all the samples generated from the same input properties? How can the legitimacy of these generations be validated? What does the ground-truth molecule look like? To what extent are they better than the generations of the baseline methods?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors listed one limitation of the current work as not generalizing the framework to other types of data (text, image). No additional analysis on the broader impact has been provided.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks!
**RE: computational cost (W1)**
In Table 12 of the paper, we present the inference times of our model compared to MOOD, showing that while we incur a small overhead, we achieve superior performance.
In addition, we report below the average time (in seconds) required to train a single epoch for the `community-small` and `Enzymes`. For Twigs, we consider one property (density).
| Dataset | Twigs p=1 | GDSS | Digress |
| --------------- | --------- | ------ | ------- |
| Community-small | 0.2747 | 0.2294 | 0.2382 |
| Enzymes | 4.8669 | 4.8260 | 4.8451 |
**baseline methods (W2)**
We provide a list of the recent methods (after 2023) in our experiments.
In Sections 4.1 and 4.2, we compare Twigs with the following methods on QM9: TEDMol (Luo et al 2024), Jodo (Huang et al 2023), EEGSDE (Bao et al 2023), GeoLDM (Xu et al 2023), EquiFM (Song et al 2023). In Section 4.3, we compare with Lee et al (2023). In Section 4.4, we compare with Lee et al (2023), and Vignac et al (2023) on the `community-small` dataset.
**Hyperparameters (W3)**
Here we report the hyperparameters used in the experiments.
For Sections 4.1 and 4.2 we follow the same hyperparameters from Huang et al (2023).
For Section 4.3 we follow the hyperparameters from Lee et al (2023), for the MOOD baseline, we explore the OOD coefficient between $0.01$ and $0.09$.
For Section 4.4 we follow the hyperparameters from Jo et al (2022).
**Ablation study (W3)**
We have added a new study to understand the impact of multiple properties over the `community-small` dataset. We report the results in the "Additional experiments with multiple properties" part of the [global comment](https://openreview.net/forum?id=fvOCJAAYLx¬eId=SQNeH19zc9).
**Refs**
Huang et al (2023). Learning joint 2d & 3d diffusion models for complete molecule generation.
Lee et al (2023). Exploring chemical space with score-based out-of-distribution generation.
Jo et al (2022). Score-based generative modeling of graphs via the system of stochastic differential equations.
Bao et al 2023. Equivariant Energy-Guided SDE for Inverse Molecular Design.
Luo et al 2024. Text-guided diffusion model for 3d molecule generation.
Xu et al 2023. Geometric latent diffusion models for 3d molecule generation.
Song et al 2023. Equivariant flow matching with hybrid probability transport for 3d molecule generation.
Vignac et al 2023. Digress: Discrete denoising diffusion for graph generation
**properties close to the guide (Q1)**
The generated molecules' properties closely align with the target values, as evidenced by the MAE values presented in Tables 3, 5, and 8. The MAE quantifies the deviation between the desired ground truth properties and those of the generated molecules.
**meanings of the properties (Q1)**
The mentioned properties are standard quantum properties used in the QM9 dataset for modeling molecules (Hoogeboom et al 2022). The molecule properties are obtained from the data.
- $\alpha$ Polarizability: Tendency of a molecule to acquire an electric dipole moment when subjected to an external electric field.
- $\epsilon_{\text{HOMO}}$: Highest occupied molecular orbital energy.
- $\epsilon_{\text{LUMO}}$: Lowest unoccupied molecular orbital energy.
- $\Delta \epsilon$ Gap: The energy difference between HOMO and LUMO.
- $\mu$: Dipole moment.
- $C_v$: Heat capacity at 298.15K.
**Refs:**
Hoogeboom et al 2022. Equivariant Diffusion for Molecule Generation in 3D. ICML.
**related works (Q2)**
Our approach differs from the mentioned methods in two key ways, which we will elaborate on in the related works section of our paper.
**Framework**: The first two studies (Nisonoff et al. 2024, Gruver et al. 2023) are based on discrete frameworks. The third study (Klarner et al. 2024) adopts a plug-and-play approach using a diffusion model, which is agnostic to the underlying framework. Differently from the above, our method operates within a continuous framework utilizing stochastic differential equations (SDEs).
**Contributions**: The primary contribution of Nisonoff et al. (2024) is a framework for diffusion guidance within discrete spaces. The main contribution of Gruver et al. (2023) addresses the challenges of optimizing discrete sequences via a novel sampling technique.
Klarner et al. (2024) propose to learn two models: a diffusion model trained without labels and an additional discriminator model trained with property labels.
Our contribution is separate from all the above methods, as we uniquely leverage multiple diffusion flows, defined over two distinct types of flows within a hierarchical structure. Specifically, we define a primary flow for capturing structure and a secondary flow for modeling properties.
**Refs**
Nisonoff et al (2024). Unlocking Guidance for Discrete State-Space Diffusion and Flow Models.
Gruver et al (2023). Protein Design with Guided Discrete Diffusion.
Klarner et al (2024). Context-Guided Diffusion for Out-of-Distribution Molecular and Protein Design.
**Clarification on Figs (Q3-Q4)**
Figure 2: Depits uncurated samples obtained via our model conditioned on $C_v$ property. The performance of conditioning is given in Table 3 First column ($C_v$). In addition, to validate the legitimacy, those molecules are representative of the "Molecule quality" results from Table 4.
Figure 3. It shows uncurated samples of molecules produced under two desired properties ($C_v$ and $\mu$). The performance corresponding to Fig 3 is represented in Table 5 first two columns ($C_v$ and $\mu$).
Note that in this setup, our goal is not to reconstruct a specific molecule there could be multiple different molecules that have the desired properties and do not reconstruct the training set. In this sense, the target properties should be considered separately from the molecule.
We would be grateful if this could be reflected in an increased score for our work. Thank you!
---
Rebuttal Comment 1.1:
Title: Follow up
Comment: Dear Reviewer 8xx4,
Thanks again for your thorough review and constructive comments.
Following up, we would greatly appreciate any updates or feedback you may have regarding our rebuttal.
We would also appreciate any further comments and questions. Your constructive feedback can help us enhance the quality of our manuscript.
Thank you and we look forward to your response. | Summary: This paper introduces Twigs, a new score-based diffusion model for graph generation conditioned on graph-level properties. It employs two diffusion processes, one for graph data (trunk process) and one for graph-level properties (stem process). The underlying generation process corresponds to a factorization of the joint distribution into the unconditional distribution of the graph data and conditionally independent distributions of graph properties given the graph data. Empirical studies across diverse benchmarks demonstrate the effectiveness of the proposed approach.
Strengths: **S1:** The overall presentation is clear and easy to follow.
**S2:** Extensive empirical studies demonstrate the effectiveness of the method for conditional molecule and graph structure generation.
Weaknesses: **W1**: The idea of hierarchical (conditional) diffusion model for graph generation has been explored in previous works. For example, GDSS [1] and EDGE [2] explore the factorization of the joint distribution of graph structure and node attributes for unconditional molecule generation. GraphMaker [3] explores this idea for node-label-conditioned generation of large attributed graphs.
**W2**: The assumption of the conditional independence of the multiple graph properties may be too strong.
**W3**: The empirical studies performed consider at most modeling two graph properties at a time and there is a lack of understanding in how the model performance changes as more graph properties are modeled simultaneously. In addition, there is a lack of understanding in how the model performance varies as the properties get more correlated.
**W4**: Some presentation details can be further clarified. For example:
- The terms "hierarchical modeling" and "hierarchical conditional diffusion" are used without a formal definition in table 1.
- I guess $y_{s,t}$ between L110 and 111 refers to $y_{s}$ at time $t$, but it is not formally defined.
- L106 says that $y_s$ is the primary variable (graph structure) and takes shape $\mathbb{R}^{N\times D}$. Does this consist of both adjacency matrix and node features like atom types?
[1] Jo et al. Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations.
[2] Chen et al. Efficient and Degree-Guided Graph Generation via Discrete Diffusion Modeling.
[3] Li et al. GraphMaker: Can Diffusion Models Generate Large Attributed Graphs?
Technical Quality: 2
Clarity: 3
Questions for Authors: In addition to the issues mentioned in the "Weaknesses" section,
**Q1**: Why did you not include the results of EDM and TEDMol in table 4, as reported in [1]?
[1] Luo et al. Text-guided Diffusion Model for 3D Molecule Generation.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The paper assumes conditional independence between multiple graph properties, which does not necessarily hold in practice.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks.
**Re: related works (W1)**
The suggested works, including GDSS (Jo et al. 2022), GraphMaker (Li et al. 2024), and EDGE (Chen et al. 2023), are relevant to our research and will be included in our paper. Below we report the differences with our method. Refer to the table of the global comment for a summary.
First, these methods use different hierarchical notions in their frameworks. In EDGE and GDSS, edges and nodes are modeled symmetrically. In contrast, we introduce a unique hierarchical approach by learning multiple asymmetric diffusion flows. This distinction is crucial: while the mentioned methods use diffusion flows in the same roles, our method uses a secondary process to learn interactions with properties, feeding back into the main process to learn the graph structure.
Secondly, our methodology is designed specifically for generating graphs with desired conditional properties, which the mentioned papers do not focus on. GDSS and EDGE do not consider conditional modeling, and while GraphMaker explores node-label conditioning, this differs from our goal of generating graphs constrained by specific properties. Our method learns a joint distribution of graph structures (nodes, edges) and properties, unlike the mentioned methods that only learn a joint distribution of nodes and edges.
**RE: conditional independence (W2)**
Assuming conditional independence among the properties $\alpha$, $\epsilon_{\text{HOMO}}$, $\epsilon_{\text{LUMO}}$, $\Delta \epsilon$, $\mu$, and $C_v$ given the molecular graph can simplify the modeling process. This assumption leverages the fact that the molecular graph captures the essential structural dependencies, allowing us to treat the properties as independent for computational efficiency and ease of interpretation, even if slight interdependencies exist.
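For concreteness, this assumption amounts to the following factorization (our sketch of the notation, with $G$ the molecular graph and $y_1, \ldots, y_K$ the properties):

$$p(y_1, \ldots, y_K \mid G) \;=\; \prod_{k=1}^{K} p(y_k \mid G),$$

so that, once $G$ is given, each property can be modeled by its own conditional distribution.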
**RE: Modeling multiple properties (W3)**
We show here two additional results on 2 and 3 properties. We study the models over the `community-small` dataset. Specifically, our method (Twigs) is trained using multiple secondary processes (one for each property).
**Two properties**
We show the MAE results for property pairs involving density. Our model achieves the lowest MAE in both cases. We notice that while the properties may present some form of correlation, our model can achieve good performance in generating graphs with the desired properties.
| Model | MAE Density | MAE Clustering |
| ------- | ----------- | -------------- |
| GDSS | 2.95 | 13.3 |
| Digress | 2.82 | 12.1 |
| MOOD | 2.43 | 12.0 |
| Twigs | **2.34** | **11.0** |
| Model | MAE Density | MAE Assortativity |
| ------- | ----------- | ----------------- |
| GDSS | 2.61 | 19.8 |
| Digress | 2.52 | 18.1 |
| MOOD | 2.40 | 17.2 |
| Twigs | **2.39** | **16.7** |
**Three properties**
The Table below shows that our method achieves the lowest MAE values (the lower the better) across all three required properties.
| Model | MAE Density | MAE Clustering | MAE Assortativity |
| ------- | ----------- | -------------- | ----------------- |
| GDSS | 2.97 | 12.5 | 19.4 |
| Digress | 2.65 | 11.2 | 18.2 |
| MOOD | 2.53 | 11.4 | 17.3 |
| Twigs | **2.27** | **10.6** | **16.1** |
**RE: Hierarchical conditional diffusion (W4.1)**
Here's a clarification of "Hierarchical Conditional Diffusion" and the distinction from "unconditional hierarchical models", which will be integrated into the paper.
In lines 39-43, we define "Hierarchical Conditional Diffusion" as follows: ".. rather than treating heterogeneous structural and label information uniformly within the hierarchy, we advocate for the co-evolution of multiple processes with distinct roles. These roles encompass a primary process governing structural evolution alongside multiple secondary processes responsible for driving conditional content."
To distinguish our method from "unconditional" hierarchical models by Jin et al. (2020) and Qiang et al. (2023), we label those models as "Hierarchical Modeling" in Table 1. While they model hierarchical structures, our approach is unique in leveraging a hierarchy of branching diffusion processes for conditional generation based on desired properties.
**References**
Jin et al. 2020. Hierarchical generation of molecular graphs using structural motifs.
Qiang et al. 2023. Coarse-to-fine: a hierarchical diffusion model for molecule generation in 3d.
**RE: Variable $y_s$ (W4.2)**
In the variable $y_{s,t}$, the subscript $t$ indicates time. We will add this to the main text.
**RE: Dimension of graph (W4.3)**
We have two cases:
In the 3D case (Appendix B.1), we denote the variable $y_s$ as a 3D graph $G = (A, x, h)$, with node coordinates $x = (x^1, \ldots, x^N) \in \mathbb{R}^{N \times 3}$, node features $h = (h^1, \ldots, h^N) \in \mathbb{R}^{N \times d_1}$, and edge information $A \in \mathbb{R}^{N \times N \times d_2}$.
In the 2D case (Appendix B.2), we denote $y_s$ as a 2D graph with $N$ nodes and consider the variable $y_s = (X, A) \in \mathbb{R}^{N \times F} \times \mathbb{R}^{N \times N}$, where $F$ is the dimension of the node features, $X \in \mathbb{R}^{N \times F}$ are the node features, and $A \in \mathbb{R}^{N \times N}$ is the weighted adjacency matrix. We define the perturbed property $y_i \in \mathbb{R}$ and the (fixed) property $y_C \in \mathbb{R}$.
**RE: EDM and Tedmol results (Q1)**
Apologies for the misleading results presentation. Due to space constraints, we placed the results for EDM and Tedmol in Table 10 of Appendix C.1. We will move it to the main text. These methods were included in the appendix because they are less competitive compared to the other baselines.
**Final comment**
We hope our response addresses your concerns and reinforces your support for this work.
---
Rebuttal 2:
Comment: Thank you for the detailed response, and I've read it carefully. Still, my main concern is the lack of studies on modeling >2 properties for real-world datasets like QM9, where the assumption of conditional independence can be insufficient.
---
Rebuttal Comment 2.1:
Comment: Thank you for proposing the experiment, as it has further highlighted the strengths of our approach. The table below presents the results on the QM9 dataset, comparing three molecular properties ($\alpha$, $\mu$, $\Delta \epsilon$) for our model, Twigs, and the JODO method. Our model consistently achieves significantly lower MAE values across all three properties, underscoring the enhanced accuracy and reliability of our predictions.
| Model | $\alpha$ (MAE) | $\mu$ (MAE) | $\Delta \epsilon$ (MAE) |
| ----- | -------------------- | -------------------- | ----------------- |
| JODO | 2.749 $\pm$ 0.03 | 1.162 $\pm$ 0.04 | 717 $\pm$ 5 |
| Twigs | **2.544 $\pm$ 0.05** | **1.094 $\pm$ 0.02** | **640 $\pm$ 3** | | Summary: This paper proposes a novel score-based diffusion framework called Twigs that incorporates multiple co-evolving flows to capture complex interactions and dependencies for enriching conditional generation tasks. It consists of a central trunk process and additional stem processes, coordinated by a loop guidance strategy during sampling. Extensive experiments on conditional graph generation demonstrate Twigs' strong performance gains over baselines, highlighting its potential for challenging generative tasks like inverse molecular design.
Strengths: 1. Based on my knowledge, the idea of incorporating multiple co-evolving diffusion processes proposed in this paper is very novel, especially in the field of graph generation.
2. The paper is well written and provides sufficient background information, such as Table 2, to help understand the differences between the method proposed in this paper and other classifier-based and classifier-free methods.
3. The experiments are very comprehensive and thorough, and the experimental results fully demonstrate the outstanding performance of the proposed method.
Weaknesses: 1. The content in Section 3 is written too generally and does not address the specific characteristics of graph data very well. Additionally, the mathematical symbols used appear somewhat disorganized.
2. I am a little bit confused about the dimension of the graph structure $y_s \in \mathbb{R}^{N\times D}$ on top of Eq. (1). Does $D$ cover a lot more information, e.g. node coordinates (dim 3), node features (dim $d_1$), and edge information (dim $N \times d_2$), as indicated in Appendix B.1?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is the external context $y_C$ one of the $k$ dependent graph properties $y_k$ in Eq. 2?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Not applied.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your thoughtful comments. We address all your concerns, as described below.
**RE: Section 3 is written too generally and does not address the specific characteristics of the graph (W1)**
We maintain the section in a general format to accommodate multiple cases, specifically one for 2D graphs and another for 3D graphs. We achieve the desired flexibility by introducing a variable $y_s$ that encompasses both node features and the adjacency matrix (2D case), as well as the coordinates (3D case). Below are the detailed descriptions of the $y_s$ variable.
**RE: Dimension of the graph structure in Eqn (1) (W2)**
We address two cases:
**3D Case (Appendix B.1)**: We denote the variable $y_s$ as a 3D graph $G = (A, x, h)$, where node coordinates are represented as $x = (x^1, \ldots, x^N) \in \mathbb{R}^{N \times 3}$, node features as $h = (h^1, \ldots, h^N) \in \mathbb{R}^{N \times d_1}$, and edge information as $A \in \mathbb{R}^{N \times N \times d_2}$.
**2D Case (Appendix B.2)**: Here, we denote $y_s$ as a 2D graph with $N$ nodes. The variable is defined as $y_s = (X, A) \in \mathbb{R}^{N \times F} \times \mathbb{R}^{N \times N}$, where $F$ represents the dimension of the node features. In this case, $X \in \mathbb{R}^{N \times F}$ are the node features, and $A \in \mathbb{R}^{N \times N}$ is the weighted adjacency matrix. Additionally, we define the perturbed property $y_i \in \mathbb{R}$ and the fixed property $y_C \in \mathbb{R}$.
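As a quick sanity check of the dimensions above (a sketch with illustrative sizes only; the values of `N`, `d1`, `d2`, and `F` are arbitrary, not from the paper), the total dimensionality $D$ of a flattened $y_s$ in each case would be:

```python
# Illustrative sizes (assumptions, chosen arbitrarily for this sketch).
N, d1, d2, F = 9, 5, 2, 5  # N nodes

# 3D case (Appendix B.1): y_s is the graph G = (A, x, h)
x_shape = (N, 3)        # node coordinates
h_shape = (N, d1)       # node features
A3_shape = (N, N, d2)   # edge information tensor

# 2D case (Appendix B.2): y_s = (X, A)
X_shape = (N, F)        # node features
A2_shape = (N, N)       # weighted adjacency matrix

# Total dimensionality D of the flattened variable in each case
D_3d = N * 3 + N * d1 + N * N * d2
D_2d = N * F + N * N
```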
**RE: External Context $y_C$ and Dependent Graph Properties (Q1)**
Indeed, the external context $y_C$ can be **one or more** of the $k$ dependent graph properties. Specifically, $y_C$ represents the graph property (or properties) upon which we are conditioning. For instance, in the context of conditional generative modeling for drug design, we might seek a molecule with a specific $\epsilon_{\text{LUMO}}$ value. In this scenario, $\epsilon_{\text{LUMO}}$ serves as the $y_C$ variable. The properties listed in Equation (2) encompass all relevant characteristics of the molecule, such as $\alpha$, $C_v$, $\epsilon_{\text{HOMO}}$, and others. Our model aims to perform multiple $k$ diffusion processes for each property, with each process conditioned on the context $y_C$ and the property $y_k$.
**Final Comment**
Thank you so much for your constructive feedback. If you believe we sufficiently addressed your concerns, we would appreciate an increase in your score for this paper.
---
Rebuttal Comment 1.1:
Title: Follow up
Comment: Dear Reviewer GMeF,
Thanks again for your thorough review and constructive comments.
Following up, we would greatly appreciate any updates or feedback you may have regarding our rebuttal.
We would also appreciate any further comments and questions. Your constructive feedback can help us enhance the quality of our manuscript.
Thank you and we look forward to your response.
---
Rebuttal Comment 1.2:
Title: To reviewer GMeF
Comment: Dear Reviewer GMeF,
Please let the authors know if your concerns have been addressed.
PS (to authors): Graphs with adjacency matrices that contain edge features are tensors, not 3D graphs.
Thanks,
AC
---
Rebuttal 2:
Comment: Thanks for your response. My questions are well-resolved. I will keep my score. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Back to the Continuous Attractor | Accept (poster) | Summary: The authors study continuous attractor networks, and their famous instability under noise. They show that continuous attractors, despite being unstable, are functionally robust, and analyse some behaviours when noise is introduced.
Strengths: The main thrust of the paper was very interesting and very novel. I am a theoretical neuroscientist, and the studied noise-instability of CANs is a classic problem that we are taught in lectures. This paper appears to resolve this problem, using the very clever proof technique of 'knowing a relatively esoteric piece of maths (to me!) that solves the problem of interest as a corollary'. If I've understood this right, it should be applauded!
The rest of the paper was a smorgasbord of somewhat interesting characterisation of the behaviour of noise perturbed CANs, illustrating some interesting phenomenology.
A lot of the exposition was very clear, especially on the micro-scale: individual ideas were explained well. Figures 1 and 3 were good.
The bound on error in figure 5A was cool (if true).
Weaknesses: I got very confused by what most of the paper was trying to show. The main result, section 3, is, almost jaw-droppingly, nice. (Though I'm partly trusting that the authors are applying the results correctly; they seem to be, because the step from the stated theorem to their claims is not large.) The rest is, ..., a bit more meh, especially in how it was presented. I would have liked a lot more signposting: why are these sections the right ones to care about? What are you showing, and why?
For example, section 2 shows that noise degrades CANs - a nice literature review - and then shows a categorisation of perturbed fixed point dynamics. I guess the idea was to show that noise-instability was a problem for all CAN types in the literature? If so you could do with signposting that. If not, why did you include all of that? Why was it important that some patterns appear and not others in the perturbed dynamics?
Then there were a lot of quite confusing things, especially in the figures:
Fig 1) A was a great figure, B was never mentioned in the text, despite being very pretty.
Fig 2) Why were there hexagons everywhere? I never could find any reason for there to be hexagonal figures, did you just make the ring a hexagon for fun? If so, tell the reader! Further, in B and C, are the blue lines numerical? How did you choose what to colour blue? Should I be using it as evidence for your big theoretical claim? Or is it just illustrative?
Fig 4) What am I looking at in figure 4A2: it says example trajectory? But there are many trajectories, no? The dots in B and C are presumably fixed points, why then is it called a limit cycle (line 246)? It doesn't look like that? Why is 4D described as around a repulsive ring invariant manifold? Are you describing the ring inside the torus? (rather than the presumably attractive limit cycles that are marked on the figure) What does the colouring of the torus (the greys) denote? I didn't get told what the task was for figure A1 so had to go to the appendix to see that this is apparently the output of the integration? Why are you plotting the output space, and not the recurrent space as in all the other figures?
On that last point, why include details of the (standard) discretisation scheme, MSE loss, and network details, when key steps to understand what I am looking at (e.g. figure A1 = output space) are missing?
Figure 5A) Why did one of the finite time results go above the line? Shouldn't this be an exact theoretical result, yet it appears not to be true?
Seemed obtuse to claim a linear relationship then show a log-linear plot, fig 5C? How should I see this?
Did you define the memory capacity?
Did you need to introduce omega-limit set, especially given the likely audience for this paper?
Finally, some other points:
You should definitely cite and discuss the relationship to this paper: Information content in continuous attractor neural networks is preserved in the presence of moderate disordered background connectivity, Kuhn & Monasson, 2023.
What was section 5.2 trying to show? First it claims that 2.2 presents a theory of approximate solutions in the neighbourhood of continuous attractors (news to me; as far as I could tell, that section showed me that all CAN models were unstable to noise and turn into a variety of different fixed points under noise, which doesn't sound like a theory at all? Section 3 seems to be the theory?) Then you list four conditions on what sounds like exactly the same problem? What is the difference between dynamical systems having robust manifolds and the working memory task being solved? Isn't the whole point of the model that these two are the same? (i.e. you can solve a working memory task with a CAN). Is this supposed to be a concluding section that says when working memory can be solved? Then why have you suddenly defined state and dynamical noise that haven't been used before? I thought we had a perfectly nice definition of perturbations on the network (equation 2). This section seemed... strange; in my eyes the paper would be improved by removing it.
Smaller things:
- line 107 - comment left in doc
- Figure 2 caption line 1, missing 'of' and pluralisation of 'implementation'
So all in all, I think the exposition, despite, as I said, often being very clear when describing a single idea, is, on a macro scale, a mess. I had a very hard time following, and the figures were quite tough-going. I think this paper should probably get in, but I think it should be given a good clean up first, at least for a humble neuro-theorist, rather than a dynamical systems guru, to understand.
Technical Quality: 3
Clarity: 2
Questions for Authors: My, many, questions and confusions ended up being in the weaknesses section.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations were somewhat discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their valuable comments. They have significantly enhanced the quality of our manuscript.
> signposting
We thank the reviewer for this feedback and agree with it. We improved the text by:
- Changing the last paragraph of the introduction, summarizing each section
- Highlighting the main questions arising from the discussed examples
- Adding sentences at the start of each (sub)section to point out the main message
> Fig 1) A was a great figure...
We now added the needed reference to Fig.1B.
> Fig 2) Why were there hexagons everywhere?
This is a property of the ring attractor in [1]. The attractor implemented by 6 neurons is made up of 6 line attractors, fused at their ends. At these fusion points there is a lack of smoothness (similar to the function $\mathrm{abs}(x)$). Therefore, this piecewise-straight "ring" attractor, when projected onto two dimensions, looks like a hexagon. We have added a clarification of this to the main text.
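A minimal sketch (our illustration, not the code from [1]; the preferred directions spaced 60 degrees apart are an assumption) of why the projection is a hexagon: the 2D population-vector readout is linear, so each line-attractor segment between adjacent single-neuron corner states maps to a straight edge between the 6 projected vertices.

```python
import math

K = 6  # number of neurons in the ring attractor of [1]
# Preferred direction of each neuron, spaced 60 degrees apart (assumption).
dirs = [(math.cos(2 * math.pi * k / K), math.sin(2 * math.pi * k / K))
        for k in range(K)]

def project(activity):
    """Population-vector readout: a linear 2D projection of a 6D state."""
    x = sum(a * d[0] for a, d in zip(activity, dirs))
    y = sum(a * d[1] for a, d in zip(activity, dirs))
    return (x, y)

# Corner states of the piecewise-straight ring (one neuron maximally active).
corners = [[1.0 if i == k else 0.0 for i in range(K)] for k in range(K)]

# A state halfway along the line attractor joining corners 0 and 1...
mid_state = [(a + b) / 2 for a, b in zip(corners[0], corners[1])]

# ...projects onto the midpoint of the straight hexagon edge between the two
# projected corners, because the projection is linear.
p0, p1, pm = project(corners[0]), project(corners[1]), project(mid_state)
assert abs(pm[0] - (p0[0] + p1[0]) / 2) < 1e-9
assert abs(pm[1] - (p0[1] + p1[1]) / 2) < 1e-9
```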
> Further, in B and C...
We find the connecting orbits between the fixed points by identifying the slow part of simulated trajectories. From these slow trajectories, we choose the trajectory that is closest to two fixed points. These trajectories go from a saddle node to a stable fixed point. We do this for every pair of neighbouring fixed points.
> Should I be using it as evidence for your big theoretical claim?
In this section, we aim to motivate the relevance of our theory by demonstrating two key points: 1) that continuous attractors are inherently fragile to noise in the parameters (a well-known fact), and 2) that all bifurcations from and approximations of continuous attractors share a common feature: a slow invariant attractive manifold.
In the subsequent sections, we provide an explanation of these universal features.
> What am I looking at in figure 4A2...
We appreciate the remarks and have revised the text accordingly.
1. You are correct; this figure shows multiple example trajectories. We have corrected this in the text and caption.
2. We have corrected the mistake of referring to Fig. 4B and C as a limit cycle.
3. We did not include the slow repulsive points that we found, as it decreased the interpretability of the figure and this structure is not relevant for how the network solves the task.
4. The grey lines on the torus represent simulated trajectories, indicating the network dynamics after a saccade.
5. Fig. 4A1 shows the output of an integrated angular velocity, illustrating the task in addition to the solution type in Fig. 4C.
Fig. 4A1 and 4A2 are different in that they show how the networks behave (in output space).
The other subfigures in Fig.4 illustrate the stability structures of the networks, i.e., to which part of state space they tend over time in the absence of input.
> On that last point...
We included these details to support reproducibility.
> Figure 5A) Why did one of the finite time results go above the line?
This is indeed an exact theoretical result; however, our numerical methods are not exact. Because we numerically approximate the invariant manifold of each trained RNN, on which we calculate the uniform norm of the vector field, we cannot guarantee that the vector field followed by the simulated trajectories is exact. Additionally, the network initialization along the invariant manifold is approximate due to our parametrization (using a cubic spline). Nevertheless, it is important to note that this method has only a single violation of our theory among approximately 250 networks tested.
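To illustrate why the numerical check can slightly miss the exact bound, here is a minimal sketch (a hypothetical vector field, not the paper's code or trained RNNs): the uniform norm $\sup_{x \in \mathcal{M}} \|f(x)\|$ is estimated by taking the maximum speed over finitely many samples of an approximated manifold, which only approximates the true supremum.

```python
import math

def vector_field(x, y):
    """Hypothetical slow flow near a ring of radius 1 (stand-in for an RNN)."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x)
    fr = -(r - 1.0)                 # radial attraction to the ring
    ft = 0.01 * math.sin(3 * theta)  # slow tangential drift along the ring
    return (fr * math.cos(theta) - ft * math.sin(theta),
            fr * math.sin(theta) + ft * math.cos(theta))

# Sample the (approximated) invariant manifold and take the maximum speed.
# Here the manifold is parameterized directly as the unit circle; in the
# paper it would instead come from a numerical fit such as a cubic spline.
speeds = []
for i in range(1000):
    theta = 2 * math.pi * i / 1000
    fx, fy = vector_field(math.cos(theta), math.sin(theta))
    speeds.append(math.hypot(fx, fy))

uniform_norm = max(speeds)  # finite-sample estimate of sup ||f(x)|| on M
```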
> Seemed obtuse...
We appreciate the reviewer pointing this out. We have corrected the mistake and now observe the log-linear relationship, which aligns with the theoretical expectation. The angular error should asymptotically approach zero as the number of fixed points increases, making the log-linear plot the appropriate representation for this relationship.
> Did you define the memory capacity?
We determine the location of the fixed points through points of reversal of the flow direction (the system evolves along a 1D subspace). We calculate the probability of a point converging to a stable fixed point by assessing the local flow direction (which allows us to characterize the basin of attraction). The memory capacity is the entropy of this probability distribution. The definition is in S.4.3.1., referenced in the main text. We hope these definitions are clearer in the new version of Sec. 5.
> Did you need to introduce omega-limit set...
We believe that this definition supports the definition of memory capacity.
As we explained above, the memory capacity is calculated from the omega-limit set of each of the points on the invariant ring.
This idea can be applied more generally to systems with other omega-limit sets, like limit cycles or chaotic orbits, and we therefore included this definition.
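As a concrete reading of this definition (our sketch, not the code from S.4.3.1, with hypothetical basin probabilities): if the perturbed system has a finite set of stable fixed points, the memory capacity is the entropy of the probability that a point on the invariant ring converges to each of them.

```python
import math

def memory_capacity(basin_probs):
    """Entropy (in bits) of the distribution over stable fixed points to
    which points on the invariant ring converge (their omega-limit sets)."""
    return -sum(p * math.log2(p) for p in basin_probs if p > 0)

# Example: a ring attractor that broke into 4 stable fixed points with equal
# basins of attraction retains log2(4) = 2 bits of memory; unequal basins
# store less.
cap_uniform = memory_capacity([0.25, 0.25, 0.25, 0.25])
cap_skewed = memory_capacity([0.7, 0.1, 0.1, 0.1])
```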
> You should definitely cite...
[2] analyzes an Ising network perturbed with specially structured noise in the thermodynamic limit.
Although their analysis elegantly shows that the perturbation does not destroy the Fisher information about the input carried by the population activity (i.e., instantaneous encoding), they do not consider a scenario where the ring attractor is used as a working-memory mechanism. In contrast, our analysis concerns how the working-memory content degrades over time due to the dynamics. We are not aware of any mean-field analysis that covers this aspect.
We now include this work in our discussion of continuous attractors in mean-field approaches.
> What was section 5.2 trying to show?
See the shared rebuttal for our clarification of Sec. 5.2.
### References
[1] Noorman, M. et al. (2022). Accurate angular integration with only a handful of neurons. bioRxiv, 2022-05.
[2] Kühn, T., & Monasson, R. (2023). Information content in continuous attractor neural networks is preserved in the presence of moderate disordered background connectivity. Physical Review E, 108(6), 064301.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I thank the authors for their detailed and impressive responses.
Broadly, my comments have been answered. That said, they were largely about clarity, so it is hard to verify if the paper does indeed make more sense.
Certainly including the details of the network is important for reproducibility (sec 4.1) and apologies for making comments that sound like they're suggesting these details should be removed. That was not my intention! Rather, it seemed strange to put such standard details in the main text rather than the appendix, when the space could be used to explain so many other things.
I continue to think the paper should get in, and will raise my score to 6 on the assumption that the resulting paper will indeed be much clearer. (I agree with reviewer 8M6u that this paper in its first version has an audience problem)
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer again.
For our proposal to increase the readability of the paper and to ameliorate the audience problem, we would like to point the Reviewer to our comment to reviewer 8M6u: https://openreview.net/forum?id=fvG6ZHrH0B&noteId=GM02jaKd5z. | Summary: The manuscript investigates the stability and robustness of continuous attractors to small deformations in vanilla recurrent neural networks. Continuous attractors were once heavily studied in the context of working memory, but they are inherently fragile: even small perturbations lead to topologically distinct attractor landscapes. This study, on the other hand, focuses on finite-time behavior around these distinct attractor landscapes. Specifically, the authors theoretically discuss and empirically show that continuous attractors exhibit slow manifold structures under perturbations, which persist despite their theoretical instability.
Strengths: - The study of continuous attractors in the finite time limit, opposite of what is usually studied with attractor landscapes, is a fresh look, novel, original, and interesting.
- The authors provide a superb theoretical motivation, which convinced me of their results' correctness.
Weaknesses: - The presentation in section 5 becomes unclear, and perhaps too dense. The authors may want to expand on this section significantly.
- Some more control experiments need to be added for proving the generality (See below).
Technical Quality: 4
Clarity: 2
Questions for Authors: I believe the work warrants a borderline accept as is, yet I would feel very supportive of its publication (up to a strong accept) if the authors performed the following changes:
- I believe the slow manifold picture the authors are introducing here is not too different from [1], specifically Fig. 7. Can the authors please clarify the differences in the main text?
- The fast-slow decomposition does not seem to be specific to vanilla RNNs. Can the authors please include experiments with LSTMs and GRUs, which would support their claims on generality.
- Similarly, the experiments on Section 4 are centered around ring-like attractors. Can you please show an example with other structures? For example, you can consider the delayed addition/multiplication tasks discussed in [2]. Relatedly, similar slow dynamics arguments are made in [2], which I believe the authors should list as a close related work and explain the differences in their approach.
- A more detailed discussion of Section 5 is desired. I was able to understand the main takeaways, but could not evaluate the validity of the claims. Perhaps the authors may want to explain in simpler terms the evidence presented in Fig. 5.
- The title is not descriptive of the manuscript's content and feels as if it belongs to a blog post. Could you please update the title to be representative of the paper's conclusions?
Citations
[1] Opening the Black Box: Low-Dimensional Dynamics in High-Dimensional Recurrent Neural Networks, David Sussillo and Omri Barak, Neural Computation, 2013
[2] Schmidt, D., Koppe, G., Monfared, Z., Beutelspacher, M., & Durstewitz, D. (2019). Identifying nonlinear dynamical systems with multiple time scales and long-range dependencies. arXiv preprint arXiv:1910.03471.
**Edit:** My main remaining concern is the presentation, which I find to be unnecessary complicated. That being said, this work is an important piece of contribution to neuroscience and I support its acceptance.
Confidence: 4
Soundness: 4
Presentation: 2
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their valuable comments and suggestions (and appreciate the recent edit). They have significantly enhanced the quality of our manuscript.
### Weaknesses:
> Some more control experiments need to be added for proving the generality (See below).
We address the specific questions below, but would like to emphasize that the main contribution of this submission is theoretical.
The numerical experiments are meant to illustrate our results, rather than prove them.
### Questions:
> I believe the slow manifold picture the authors are introducing here is not too different from [1], specifically Fig. 7. Can the authors please clarify the differences in the main text?
We appreciate the remark, and will revise the text to include the following:
1. The two approaches are indeed similar, however there are subtle technical differences.
1. In [1], Sussillo and Barak are primarily concerned with a pointwise definition of slowness; by comparison, normal hyperbolicity requires a uniform separation of timescales over the entire invariant manifold.
1. Our theory can explain why perturbations of the trained RNN (with zero-mean random Gaussian noise) still leave the same approximate plane-attractor dynamical structure in place. The Persistence Theorem guarantees that for small perturbations (of size $\epsilon$) the persistent invariant manifold will remain at approximately the same place (at a distance of order $\mathcal{O}(\epsilon)$). See Figure 9 for the experiments with structural perturbations in [1]; they do not provide an explanation for their observations.
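An informal sketch of the uniform rate condition (our own notation, not taken from the manuscript): for the flow $\Phi^t$ of an attracting normally hyperbolic invariant manifold $M$, there exist $C > 0$ and rates $0 < \mu < \lambda$ such that, uniformly over all $x \in M$ and $t \ge 0$,

```latex
\|D\Phi^t(x)\,v\| \le C e^{-\lambda t}\,\|v\|
  \quad \text{for normal directions } v \in E^s_x,
\qquad
\|D\Phi^t(x)\,w\| \le C e^{\mu t}\,\|w\|
  \quad \text{for tangential directions } w \in T_x M.
```

The key point is that the same pair $(\mu, \lambda)$ must work everywhere on $M$, in contrast to a pointwise notion of slow points.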
> The fast-slow decomposition does not seem to be specific to vanilla RNNs. Can the authors please include experiments with LSTMs and GRUs, which would support their claims on generality.
We have now trained and analyzed both LSTMs and GRUs on the angular integration task. We include our preliminary results in a separate document uploaded to OpenReview.
> Similarly, the experiments on Section 4 are centered around ring-like attractors...
We appreciate the comment, and have included an additional task where the approximate continuous attractor is of higher dimension, namely a double angular velocity integration task. Please see the shared reply to all reviewers on our new findings.
Specifically regarding addition or multiplication tasks, an idealized solution to either would require that the RNN represent $G = (\mathbb{R},+)$ or $G = (\mathbb{R}_{+},\times)$, which are not compact.
Because this contradicts our technical assumptions, we opt to focus on tasks where the invariant manifolds are naturally compact.
> similar slow dynamics arguments are made in [2]...
We thank the reviewer for pointing out this work, we will reference it accordingly.
This work identifies asymptotic behaviors in dynamical systems: fixed-point dynamics and, in more general cases, cycles and chaos.
We look beyond asymptotic behavior and characterize attractive invariant manifolds, thereby also identifying connecting orbits (or heteroclinic orbits) between fixed points.
We would like to reiterate that we believe that the main contribution of the paper is a new theory of approximations of continuous attractors.
Although we developed new analysis methods for dynamical systems to find slow manifolds in them, we do not propose a new general framework for analysis of all dynamical systems.
Finally, [2] provides analysis tools for Piecewise-Linear Dynamical Systems, while our methods are generally applicable to RNNs with any activation function.
> The presentation in section 5 becomes unclear, and perhaps too dense...
We agree with the reviewer; the updated manuscript will include a substantial revision of Section 5, with a focus on clarity.
We have split the section into generalization properties of trained RNNs (all relating to Fig.5 and in relation to our theoretical prediction on the error bound) and the four conditions that guarantee that a system that approximates an analog memory system will be near a continuous attractor (which was section 5.2).
Please, see the shared rebuttal for our clarification of Section 5.2.
> the evidence presented in Fig. 5.
The main message of Fig.5 is to show the validity of our theoretical predictions about the bound to the memory based on the uniform norm of the flow.
Besides, we can demonstrate that even though all networks learned a continuous attractor approximation, they are distinguished from one another by their fixed point topology, which determines their asymptotic behavior and hence generalization properties (Fig.5D and E).
These results indicate the distance to a continuous attractor, as discussed in Sec. 3.2, measured by the uniform norm of the flow on the invariant slow manifold.
> The title is not descriptive of the manuscript's content and feels as if it belongs to a blog post. Could you please update the title to be representative of the paper's conclusions?
The title references the reversal of the Persistence Manifold theorem, i.e., how to get back to a continuous attractor.
Furthermore, it references that we can return to continuous attractors as a useful concept to describe neural computation because of the deep connection of all continuous attractor approximations.
We can however propose to revise the title to: "A theory of analog memory approximation: Back to the continuous attractor."
### References
[1] Opening the Black Box: Low-Dimensional Dynamics in High-Dimensional Recurrent Neural Networks, David Sussillo and Omri Barak, Neural Computation, 2013
[2] Schmidt, D., Koppe, G., Monfared, Z., Beutelspacher, M., & Durstewitz, D. (2019). Identifying nonlinear dynamical systems with multiple time scales and long-range dependencies. arXiv preprint arXiv:1910.03471.
---
Rebuttal Comment 1.1:
Comment: With the proposed changes, I believe this work is an important addition to the theoretical neuroscience literature. I will increase my score to match this belief. I do believe there is a way for authors to increase my score even further if the authors commit to addressing the following concern:
I believe one of the main readers of this work, though the authors probably do not intend it that way, will be experimental neuroscientists. I believe the current writing is too heavy for them to be able to understand this work and start testing some of these ideas with experiments. Could you comment on how you would go about making changes to the writing to be more accessible to this audience? In my mind, some of the theoretical contributions can be toned down in favor of accessibility to a more general audience, which in turn would increase your impact.
I believe your responses to all reviewers are satisfactory. I appreciated the new experiments in short time and I will be championing for the acceptance of this work. I do believe if you can provide a satisfactory answer to my concern above, I feel comfortable recommending this work for a spotlight.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback. We greatly appreciate your thoughtful consideration of the potential impact on experimental neuroscientists, a crucial audience we indeed hope to reach!
To address your concern, we propose to add a new section to the main text, simplify the language of our presentation, and add additional material in the appendix.
## We will include a new section titled: "Implications for experimental neuroscience"
Following is a draft:
Animal behavior exhibits strong resilience to changes in the underlying neural dynamics, such as continuous fluctuations in the synapses or slight variations in neuromodulator levels or temperature. Hence, any theoretical model of neural or cognitive function that requires fine-tuning, such as the continuous attractor model for analog working memory, raises concerns, as it appears biologically implausible. Moreover, unbiased data-driven models of time-series data and task-trained recurrent network models cannot recover such continuous attractor theories precisely. Our theory shows that this apparent fragility is not as devastating as previously thought: despite the "qualitative differences" in the phase portrait, the "effective behavior" of the system can be arbitrarily close, especially on behaviorally relevant time scales. We show that as long as the attractive flow toward the memory-representation manifold is fast and the flow on the manifold is sufficiently slow, the system realizes an approximate continuous attractor. Our theory bounds the error in working memory incurred over time for such approximate continuous attractors. **Therefore, the concept of continuous attractors remains a crucial framework for understanding the neural computation underlying analog memory, even if the ideal continuous attractor is never observed in practice.** Experimental observations of slowly changing population representations during "delay periods," where working memory is presumably required, do not necessarily contradict the continuous attractor hypothesis. Perturbative experiments can further measure the attractive nature of the manifold and probe its causal role by manipulating the memory content.
## Simplifying Language
We will revise the manuscript to reduce the use of technical jargon and complex theoretical language where possible. This will involve clearly defining key terms and concepts upfront and using more intuitive explanations throughout the text.
We will simplify Theorem 1 to only talk about invariant manifolds (locally invariant manifolds indeed unnecessarily complicate the statement).
We will furthermore leave out concepts such as omega-limit sets, including them only in the supplementary material for the curious theoretician, to show how to generalize certain concepts to other stability structures (in this case, going from fixed points to considering limit cycles as well). Similarly, we will relegate discussions of the Hausdorff distance to a less prominent section in the supplementary materials (this part of the theory is still important to guarantee that the persistent invariant manifold is close to the original continuous attractor).
Additionally, we will include a supplementary section explaining, in less technical and more intuitive terms, the essential dynamical-systems concepts: compact, normally hyperbolic, invariant, diffeomorphism, and heteroclinic/connecting orbits. We will further explain what a neighborhood of a vector field entails (see also our proposed intuitive definitions below).
Finally, in this new supplementary section, we will include a subsection with a visual illustration of our claims, aiming to tap into human geometric intuition. In this visualization, we show how models that behave almost like a perfect analog memory system correspond to a volume of models in the space of dynamical systems.
We will furthermore provide intuitive definitions of several key concepts used in our paper:
- Manifold: A part of the state-space that locally resembles a flat, ordinary space (such as a plane or a three-dimensional space, but more generally $n$-dimensional Euclidean space) but can have a more complicated global shape (such as a donut).
- Invariant set: A property of a set of points in the state space where, if you start within the set, all future states remain within the set and all past states belong to the set as well.
- Normally Hyperbolic Invariant Manifold: A behavior of a dynamical system where the flow orthogonal to the manifold converges to (or diverges from) the manifold significantly faster than the flow along the manifold.
- Diffeomorphism: A diffeomorphism is a stretchable map that can be used to transform one shape into another without tearing or gluing.
- $C^1$ neighborhood of a $C^1$ function: A set of functions that are close to the function in terms of both their values and their first derivatives. | Summary: This paper studies the fragility of continuous attractors, which have been used to explain various computations or functions in the brain related to memory and navigation, to perturbations. The authors mainly focus their analyses on ring attractors, which have been used to model continuous-valued memory. Under perturbation, the authors find that the bifurcations of continuous attractors exhibit structurally stable forms with different asymptotic behaviors but similar finite-time behaviors as the original attractor. For example, a stable limit cycle arising from a bifurcation of a ring attractor would be functionally similar to the ring attractor in finite time. Thus, the authors posit that signatures of the continuous attractor persist even under perturbation in the form of a persistent, attractive manifold which serves as an approximate continuous attractor. Experiments and analyses on recurrent neural networks performing analog memory tasks show that the networks learn approximate continuous attractors. Thus, the authors conclude from their theory and numerical experiments that approximate continuous attractors do remain a good model of analog memory.
Strengths: 1. The authors theoretically demonstrate the existence of a persistent attractive manifold in various bifurcations of a continuous attractor, and study the systems' finite-time behaviors to show that they are functionally equivalent. To my knowledge, this is a novel contribution and an important result to bolster the continuous attractor hypothesis.
2. Apart from the theory, the authors also carry out numerical experiments on a working memory task, characterized by a 1-D ring attractor. Through their analyses they validate their theory.
3. The authors also study the generalization properties of the approximate attractors.
Weaknesses: 1. The experiments and associated analyses focus solely on networks that approximate 1D ring attractors. This is quite simplistic, and at least for the numerical experiments, the authors could consider tasks like navigation where a planar 2D attractor is approximated by the networks.
2. The authors have only qualitatively characterized the variations in the topologies of the networks. It is perhaps possible to quantitatively characterize this by using Dynamical Similarity Analysis [1] on various trained networks.
3. For the generalization analysis, the authors could evaluate generalization performance by the nature/type of the approximate attractor as well. Furthermore, although I may have missed this, could the authors comment on what networks hyperparameters lead to which approximations?
4. The figures and presentation could be improved:
1. On line 107 there is a comment that should be removed ("add link to details").
2. Fig. 4C, caption should indicate the nature of the solution found.
3. Fig. 5B, y-axis label is missing.
4. Fig. 5D, could also show the mean $\pm$ std for classes of networks.
5. Fig. 5E, y-axis label is missing. Also, the y-axis could just follow the convention used in Fig. 5A (normalized MSE) instead of using dB.
6. Overall, the writing could be improved in several places to improve clarity. For example, the conclusions of the generalization analysis and their implications are not very clear, and how this connects to the various types of approximate attractors is not clear (related to W3).
**References:**
1. Ostrow et al. "Beyond geometry: Comparing the temporal structure of computation in neural circuits with dynamical similarity analysis." Advances in Neural Information Processing Systems 36 (2024).
Technical Quality: 3
Clarity: 2
Questions for Authors: See the Weaknesses section. I have some additional questions:
1. How do the authors identify the various kinds of approximations of the attractors? Can this be automated, perhaps by using DSA to cluster the various types?
2. At what level of performance are all trained networks compared? Are they all trained until the same loss value and how close is this MSE to 0?
**Rebuttal update:** Score increased from 5 to 7.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: There is no limitations section. The authors have mentioned in the discussion that they do not explicitly characterize all possible invariant manifolds identified. However, there are other limitations as well, such as the lack of diversity in the tasks/attractors explored. I would also encourage the authors to unpack the limitations related to the numerical fast-slow analysis, such as how sensitive the results are to the threshold hyperparameter, and cases or specific attractors where it does not work as well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their valuable comments and suggestions.
### Weaknesses
> The experiments and associated analyses focus solely on networks that approximate 1D ring attractors.
> This is quite simplistic, and at least for the numerical experiments, the authors could consider tasks like navigation where a planar 2D attractor is approximated by the networks.
We appreciate the remark, and have included an additional task where the approximate continuous attractor is of higher dimension, namely a double angular velocity integration task. Please see the shared reply to all reviewers on our new findings.
The networks develop corresponding approximate continuous attractors that have the same structure as the task requires (in this case a torus).
That being said, we would like to reiterate that the primary contribution is theoretical and the numerical experiments are meant to illustrate the theory.
Regarding navigation tasks, two points bear mentioning.
1. Planar attractors are diffeomorphic to $\mathbb{R}^2$; note that they do not conform to the assumptions on normally hyperbolic invariant manifolds, since $\mathbb{R}^2$ is not compact.
There are suitable generalizations of this theory to noncompact manifolds [2], but we do not pursue them since they require more refined tools, which would only obscure the point that we are trying to make.
2. Tangentially, we would also like to point out that we assume that neural dynamics are naturally bounded (e.g. by energy constraints) and hence sufficiently well described by compact invariant manifolds.
In the revised version of the manuscript, we will include the above limitations and provide reference to [2].
> The authors have only qualitatively characterized ...
We thank the reviewer for pointing out the reference; we applied DSA to our numerical results.
Our preliminary observations are that DSA reflects the fact that the geometry of the invariant manifold is preserved, but it cannot detect the emergence of fixed-points and saddles on the perturbed manifold.
The DSA values clustered around two points regardless of the number of fixed points.
This appears to be consistent with the results reported in the referenced paper; cf. their Figure 4, which shows a gradual increase in DSA as $\alpha \to 1$ despite a bifurcation at $\alpha = 1$.
Lastly, we would like to note that the analysis using DSA cannot be trivially automated. As pointed out by the authors of DSA:
1. The DSA 'score' is relative; one needs to compare different dynamics.
1. DSA essentially requires 'learning' or fitting a separate model, which implicitly requires performing model selection with respect to the delay embedding and the rank of the linear operator.
For these reasons, we would like to adhere to our initial analyses.
> For the generalization analysis, the authors could evaluate generalization performance by the nature/type of the approximate attractor as well.
We looked at the generalization performance by the nature/type of the approximate attractor (Fig.5D MSE vs number of fixed points).
>Furthermore, although I may have missed this, could the authors comment on what networks hyperparameters lead to which approximations?
The only network hyperparameters that we varied were the nonlinearity and the size.
In all our figures we show which nonlinearity and size corresponds to which fixed point topology (which we characterize through the number of fixed points on the invariant ring).
> The figures and presentation could be improved [...]
We appreciate the comments, and changed the manuscript accordingly.
> Overall, the writing could be improved in several places to improve clarity.
We improved the writing, focusing on overall clarity. See the shared comments.
>the conclusions of the generalization analysis and their implications are not very clear...
In the revised version, we will make a stronger point that connects the inherent slowness of the invariant manifold to the generalizability of the approximate solutions.
We also added a longer description of the implications of our numerical experiments to the main text.
### Questions:
> How do the authors identify the various kinds of approximations of the attractors? Can this be automated, perhaps by using to DSA to cluster the various types?
We identify approximations by their (1) attractive invariant manifold (as motivated by the theory) and (2) asymptotic behavior (as motivated by our analysis of perturbations and approximations of ring attractors).
The invariant manifold in our examples typically take the structure of a ring with fixed points and transient trajectories on it.
In the supplementary document (FigR.1) we show that the identified invariant manifold indeed reflects the fast-slow separation expected for a normally hyperbolic system.
We find the fixed points and their stabilities by identifying where the flow reverses by sampling the direction of the local flow for 1024 sample points along the found invariant manifold.
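As a sketch of this flow-reversal procedure (our own illustrative code, not the authors' implementation; `flow` here is a hypothetical toy vector field on the unit circle with four fixed points):

```python
import numpy as np

def tangential_flow_sign(flow_fn, ring_points):
    # unit tangent via central differences along the closed sampled ring
    tangents = np.roll(ring_points, -1, axis=0) - np.roll(ring_points, 1, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    flows = np.array([flow_fn(x) for x in ring_points])
    # sign of the flow component along the ring at each sample point
    return np.sign(np.einsum("ij,ij->i", flows, tangents))

def count_fixed_points(signs):
    # each sign reversal along the closed ring marks one fixed point
    return int(np.sum(signs != np.roll(signs, 1)))

# toy vector field on the unit circle with angular speed sin(2*theta + 0.37):
# four zeros of the angular flow -> four fixed points (alternating stable/saddle)
theta = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
ring = np.stack([np.cos(theta), np.sin(theta)], axis=1)
flow = lambda x: np.sin(2 * np.arctan2(x[1], x[0]) + 0.37) * np.array([-x[1], x[0]])
print(count_fixed_points(tangential_flow_sign(flow, ring)))  # 4
```

Stabilities then follow from the direction of each sign change (a `+` to `-` reversal along the ring marks a stable fixed point of the tangential flow).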
The only example we found that is of another type is the attractive torus (Fig.4D).
For this network, instead of finding fixed points, we identified stable limit cycles via recurrence of the simulated trajectories, i.e., where the flow returned to an initial point after a chosen number of time steps (up to a distance of $10^{-4}$).
For the difficulties of using DSA, see above.
> At what level of performance are all trained networks compared? Are they all trained until the same loss value and how close is this MSE to 0?
All networks are trained for 5000 gradient steps.
We exclude from the analysis those networks that perform worse than -20 dB in terms of normalized mean squared error, tested on a version of the task that is 16 times as long as the task on which the networks were trained.
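The exclusion criterion can be made concrete with a small sketch (our own illustrative helper, not the authors' code): -20 dB of normalized MSE means the error power is 1% of the target's power.

```python
import numpy as np

def nmse_db(target, prediction):
    """Normalized mean squared error in decibels.

    -20 dB corresponds to an error power of 10**(-20/10) = 1% of the
    target's power; networks above this threshold would be excluded.
    """
    nmse = np.mean((target - prediction) ** 2) / np.mean(target ** 2)
    return 10 * np.log10(nmse)

rng = np.random.default_rng(0)
target = np.sin(np.linspace(0, 10, 1000))
prediction = target + 0.01 * rng.standard_normal(1000)
print(nmse_db(target, prediction) < -20)  # a near-perfect prediction passes the criterion
```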
[2] Eldering, J. (2013). Normally hyperbolic invariant manifolds: The noncompact case (Vol. 2). Atlantis Press.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' rebuttal, the responses mostly address my concerns. I particularly appreciate the authors' additional experimental results with the 2D toroidal attractor. I understand that the DSA score is relative – I was hoping it could quantitatively characterize the similarity between various approximate attractors or help in clustering approximations with different numbers of fixed points, but the authors' response is illuminating. Also, related to another reviewer's concern, it might help to explicitly show the steps in the Euler-Maruyama integration to show the relationship between (6) and (7).
With the proposed changes and writing updates, I'm happy to increase my score to 7. I think this is a valuable contribution to theoretical neuroscience and dynamical systems theory, and hope to see it accepted to NeurIPS.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their additional comments.
We would like to point the Reviewer to our response to reviewer xmvZ for our notes on how we will explicitly describe the steps in the Euler-Maruyama integration to demonstrate the relationship between (6) and (7): [see](https://openreview.net/forum?id=fvG6ZHrH0B¬eId=oMNXw7U8k7). | Summary: The study explores some bifurcations from continuous attractors in neuroscience models, revealing various structurally stable forms. Through fast-slow decomposition analysis, they uncover the persistent manifold surviving destructive bifurcations. Additionally, they analyze RNNs trained on analog memory tasks, showing approximate continuous attractors with predicted slow manifold structures. An important takeaway of their work is that continuous attractors demonstrate functional robustness and serve as a valuable universal analogy for understanding analog memory.
Strengths: - The main idea of the paper is interesting, and this work can have an important takeaway, as mentioned in "Summary".
- The connection to Fenichel’s theorem is very nice.
- The visualizations in Fig. 1 are very helpful to understand the overall message
Weaknesses: 1. They discussed some interesting theoretical techniques (e.g., Theorem 1, Proposition 1) in their study. However, their theoretical investigation and results are limited to a few very simple systems, low-dimensional systems either in Section S4 or low-dimensional RNNs with specific activation functions and restrictive settings, i.e., specific parameter values (e.g., equations (1) and (10)). The bifurcation analysis of the line attractors and fast-slow decomposition in Section S2 are also studied for very simple systems. Therefore, it is difficult to determine how general their theoretical discussion is and whether it can be applied to investigate and obtain results for more general and high-dimensional cases.
2. In Sect. 3.1, the perturbation p(x) is not clear enough. Specifically, it is unclear: 1) Under what conditions does the perturbation function p(x) induce a bifurcation? 2) What types of (generic) bifurcations can arise from the perturbation p(x)? Likewise, the functions h and g are also not clear enough. It is unclear how one can obtain/choose the functions h and g such that the two systems defined by Eq. (2) and Eqs. (3) & (4) are equivalent.
3. What does **sufficiently smooth** mean in Theorem 1? As mentioned by the authors after this theorem, it applies to continuous piecewise linear systems. However, it cannot be applied to all piecewise smooth (PWS) systems, such as Filippov systems. In particular, for these systems, bifurcations involving non-hyperbolic fixed points can be analyzed using similar slow (center) manifold approaches, but only for part of the phase space. However, discontinuity-induced bifurcations cannot be examined in the same way, as there is no slow manifold in these cases.
4. It is unclear under what conditions RNN dynamics can be decomposed into slow-fast form to which we can apply Theorem 1.
5. In Sect. 4.1, line 213, it is vague how assuming an Euler integration with unit time step, the discrete-time RNN of (6) transforms to eq. (7). Is this transformation independent of the function f and matrix W in eq. (6)?
6. In S4, the sentence "All such perturbations leave at least a part of the continuous attractor intact and preserve the invariant manifold, i.e. the parts where the fixed points disappear a slow flow appears." needs more clarification. Could you explain the mathematical reasoning behind this assertion?
Technical Quality: 3
Clarity: 2
Questions for Authors: How does the selection of a specific threshold value influence the identification and characterization of slow manifolds in neural networks with continuous attractors as discussed in the first lines of section 4.2? Could you elaborate on how different threshold settings impact the dynamics of network states and the emergence of persistent manifolds?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: - The authors have discussed most limitations of the analysis in the discussion section, but I suggest making them more explicit. This could be done by either incorporating a dedicated (sub)section on limitations or adding a bold title "**Limitations**" at the beginning of the relevant paragraph within the discussion section.
- As mentioned above, another important limitation is that it is difficult to determine how general their theoretical discussion is and whether it can be applied to investigate and obtain results for more general and high-dimensional cases.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their valuable comments and suggestions.
### Weaknesses:
>their theoretical investigation and results are limited to a few very simple systems, low-dimensional systems [...]
We respectfully disagree with the stated limitation: the role of the analysis and numerical experiments is not to prove the generality of the theory but to illustrate it. Hence, we focus on low-dimensional systems, which are easier to visualize and more helpful in developing intuition.
Nevertheless, we include additional results with RNNs trained on a 2D task.
In the updated manuscript, we emphasize that the theory holds under broad, practically relevant conditions.
1. We added statements that assure that our theory is applicable regardless of the dimensionality or the invariant manifold (see shared rebuttal).
1. Furthermore, we will revise Theorem 1 to show that normal hyperbolicity is both **sufficient and necessary** for invariant manifolds to persist; see [1].
> RNNs with specific activation functions and restrictive settings
We appreciate the reviewer's insightful comment. In response, we have conducted additional experiments with LSTMs and GRUs, for which the results are included in the supplementary document and discussed in the shared rebuttal.
> Under what conditions the perturbation function p(x) induces a bifurcation?
Continuous attractors satisfy the first-order conditions for a local bifurcation; that is, they are equilibria, and their Jacobian linearization possesses a non-trivial subspace with eigenvalues having zero real parts. Consequently, any generic perturbation $p(x)$ will induce a bifurcation of the system.
For a more comprehensive discussion on this topic, see [2].
> What types of (generic) bifurcations can arise from the perturbation p(x)?
We are working to characterize codimension-1 bifurcations for a ring attractor and believe a simple polynomial normal form can be derived.
Characterizing bifurcations with codimension $n > 2$ is an open problem.
Perturbations in the neighbourhood of a ring attractor (in $C^1$ topology) will result in no bifurcation, a limit cycle, or a ring slow manifold with fixed points.
>the functions h and g are also not clear
The essence of Theorem 1 can be restated as follows: **if** the function $f$ has a normally hyperbolic invariant manifold (NHIM), **then** there exist vector fields $h$ and $g$ that satisfy the conditions for equivalence between the systems defined by Eq.2 and Eqs.3&4.
This means that the existence of these functions is guaranteed under the condition of having an NHIM, but their explicit forms are case-specific.
> What does sufficiently smooth mean in Theorem 1?
We mean that a system needs to be at least continuous, but some extra conditions apply if a system is not differentiable (discontinuous systems are not considered in our theory). Since all theoretical models and activation functions for RNNs are at least continuous and piecewise smooth, our theory is broadly applicable.
Center manifolds are not unique and are generally local in **both** space and time; therefore, invariance under the flow generally cannot be analyzed using them.
> It is unclear under what conditions RNN dynamics can be decomposed into slow-fast form to which we can apply Theorem 1.
Theorem 1 holds for all dynamical systems that have a normally hyperbolic continuous attractor. For example, RNNs with ReLU activation functions can only have such continuous attractors. The continuous attractors and their approximations that we discuss are all normally hyperbolic. In fact, there is a substantial benefit to normal hyperbolicity, as it can counteract state noise.
> line 213 [...]
The transformation from continuous time to discrete time is independent of the function $f$ and the matrix $W$.
However, it is important to note that the discretization process can result in significantly different system behavior depending on the activation functions used. For instance, discretization can introduce dynamics that are not present in the continuous-time system.
> In S4, the sentence "All such [...]
We will reformulate it as "all such perturbations leave the geometry of the continuous attractor intact as an attractive invariant slow manifold, i.e., in the parts where the fixed points disappear, a slow flow appears."
The persistence of the invariant manifold under perturbations is a direct consequence of the normal hyperbolicity condition in Theorem 1.
Therefore, for a normally hyperbolic continuous attractor there will remain an attractive slow invariant manifold.
**Questions:**
>How does the selection of a specific threshold value...
We believe that the threshold value used to identify slow manifolds is robust: we used the same value for the newly added RNN models without modification.
>Could you elaborate [...] emergence of persistent manifolds?
It is unclear to us which threshold the reviewer is referring to. The emergence of persistent manifolds happens under the four conditions we discuss.
We demonstrate that all systems with a sufficiently good generalization property (in our case, defined as networks with NMSE lower than -20 dB) must have a NHIM. The persistence of these manifolds is a direct consequence of their normal hyperbolicity.
**Limitations:**
>The authors have discussed most limitations [...]
We appreciate the reviewer's suggestion to make the limitations of our analysis more explicit. In response, we have included a dedicated Limitations subsection in the discussion section of the manuscript. Please refer to the shared rebuttal for further details.
[1] Mané, R. (1978). Persistent manifolds are normally hyperbolic. Transactions of the American Mathematical Society, 246, 261-283.
[2] Kuznetsov, Y. A., Kuznetsov, I. A., & Kuznetsov, Y. (1998). Elements of applied bifurcation theory (Vol. 112, pp. xx+-591). New York: Springer.
---
Rebuttal 2:
Comment: I really appreciate the authors' responses and new experiments, particularly the additional experiments with LSTMs, GRUs, and RNNs trained on a 2D task.
- Regarding the first weakness I mentioned, my main concern was not about the role of numerical experiments in proving the generality of the theory. Rather, it was mostly about:
1) Whether and how the theory itself is applicable to higher-dimensional systems in the sense that the conditions of the theorems can be met for high-dimensional systems (especially as many systems in neuroscience are high-dimensional). Nevertheless, I appreciate the revision of Theorem 1 to show that normal hyperbolicity is both sufficient and necessary for invariant manifolds to persist.
2) Bifurcation and stability analysis in Appendix (S2) are not only restricted to a low-dimensional system but also to a system with specific parameters (eq. (9)). One can at least consider such analysis for a more general case, e.g., W = [w11 w12; w21 w22]. Then, one can mention that in some cases, even when having a theory for low-dimensional systems, it can be extended in a similar way to higher-dimensional systems, for instance, through center manifold theory, where the higher-dimensional system has the same low-dimensional center manifold.
- Regarding the perturbation p(x) (the second mentioned weakness), thank you for the clarification. Certainly, dynamical systems with nonhyperbolic equilibria and/or nonhyperbolic periodic orbits are not structurally stable, meaning that any ϵ-perturbation (in the sense of Definition 1.7.1 on p. 38 of [1]) will induce a bifurcation of the system. But, please note that in Sect. 3.1, based on your response, "$l$" must be nonzero. Otherwise, one can consider $l=0$, which implies there is no continuous attractor, which might lead to confusion, as it did for me. So, I suggest changing "$l$" to "$l \neq 0$" in Sect. 3.1.
- Finally, I agree with Reviewer CAVv that it might help to further clarify Sect. 4.1 to demonstrate the relationship between equations (6) and (7).
------------------------------------
[1] J. Guckenheimer and P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Springer-Verlag, New York, 1983.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for the additional comments.
## Bifurcation and stability analysis
> Bifurcation and stability analysis in Appendix (S2) are not only restricted to a low-dimensional system but also to a system with specific parameters
### Analytically tractable examples
We would first like to clarify that in Supplementary Sections 1 and 2 the provided examples are illustrative rather than exhaustive.
Recognizing that the supplementary material previously lacked clear structure and signposting, we will revise the text to better convey the motivation behind the examples and to provide a clearer explanation of the analysis.
The analysis can furthermore be easily extended to a more general form $W = [w\_{11} w\_{12}; w\_{21} w\_{22}]$ that has a bounded line attractor, through a coordinate transformation.
### Extension of bounded line attractor analysis to higher dimensional systems
We agree that this analysis in S2 can be extended to higher dimensions, and we will incorporate this remark into the supplementary text.
An extension of these results from a low-dimensional system can be easily achieved by the 'addition' of dimensions that have an attractive flow normal to the low-dimensional continuous attractor or invariant manifold.
More generally, the results from a low-dimensional system can indeed be extended to higher-dimensional systems through reduction methods from center manifold theory.
On the center manifold the singular perturbation problem (as is the case for continuous attractors) restricts to a regular perturbation problem [1a].
Furthermore, relying on the Reduction Principle [2a], one can always reduce all systems (independent of dimension) to the same canonical form, given that they have the same continuous attractor. We thank the reviewer for pointing this out and will add a remark on this possibility to extend results.
## Perturbation and dimensionality
> Regarding the perturbation ...
We understand how that may have led to confusion. We changed the statement in Sec. 3.1 to $l\\neq 0$ for clarity.
## Discretization of SDE
We appreciate the suggestion to better explain the discretization procedure, i.e., the steps for going from Eq.(7) to Eq.(6).
### Steps
In the supplementary material, we will include a note on the discretization, which goes as follows.
**Discretize the time variable:** Let $t\_n = n \\Delta t$.
The Euler-Maruyama method for a stochastic differential equation
$$\\mathrm{d}{\\mathbf{x}} = (-\\mathbf{x} + f( \\mathbf{W}\_{\\text{in}} \\mathbf{I}(t) + \\mathbf{W} \\mathbf{x} + \\mathbf{b} )) \\mathrm{d}{t} + \\sigma\\mathrm{d}{W}\_{t}$$
is given by :
$$\\mathbf{x}\_{n+1} = \\mathbf{x}\_n + ( - \\mathbf{x}\_n + f ( \\mathbf{W}\_{\\text{in}} \\mathbf{I}\_n + \\mathbf{W} \\mathbf{x}\_{n} + \\mathbf{b} ) ) \\Delta t + \\sigma \\Delta W\_{n}$$
with $\\Delta W\_{n}=W\_{(n+1)\\Delta t}-W\_{n\\Delta t}\\sim \\mathcal{N}(0,\\Delta t).$
**Substitute $\\Delta t = 1$:**
$$\\begin{aligned}
\\mathbf{x}\_{t+1} &= \\mathbf{x}\_t + ( -\\mathbf{x}\_t + f(\\mathbf{W}\_{\\text{in}} \\mathbf{I}\_t + \\mathbf{W} \\mathbf{x}\_t + \\mathbf{b}) ) + \\sigma \\Delta W\_t, \\\\
&= f(\\mathbf{W}\_{\\text{in}} \\mathbf{I}\_t + \\mathbf{W} \\mathbf{x}\_t + \\mathbf{b}) + \\sigma \\Delta W\_t.
\\end{aligned}
$$
**Introduce the noise term** $\\zeta\_t = \\sigma \\Delta W\_t$, which represents the discrete-time noise term.
Thus, we have derived the discrete-time equation:
$$\\mathbf{x}\_t = f(\\mathbf{W}\_{\\text{in}} \\mathbf{I}\_t + \\mathbf{W} \\mathbf{x}\_{t-1} + \\mathbf{b}) + \\zeta\_t$$
We thought that $\\Delta t=1$ would simplify the presentation; however, it seems to have misled readers.
In our numerical experiments, we used $\\Delta t<1$.
We will update our manuscript to include $\\Delta t$ for clarity.
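For concreteness, the steps above can be sketched in code (a minimal illustration of Euler-Maruyama for this SDE; the activation, weights, and dimensions below are placeholder assumptions, not the trained networks from the paper):

```python
import numpy as np

def euler_maruyama_rnn(f, W_in, W, b, inputs, x0, dt=1.0, sigma=0.0, seed=0):
    """Simulate dx = (-x + f(W_in I(t) + W x + b)) dt + sigma dW with Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    traj = [x.copy()]
    for I_t in inputs:
        drift = -x + f(W_in @ I_t + W @ x + b)
        noise = sigma * rng.normal(0.0, np.sqrt(dt), size=x.shape)
        x = x + dt * drift + noise
        traj.append(x.copy())
    return np.array(traj)

# With dt = 1 and sigma = 0 the update collapses to the standard RNN form
# x_{t+1} = f(W_in I_t + W x_t + b), matching the derivation above.
n, m = 3, 2
W_in, W, b = np.zeros((n, m)), np.zeros((n, n)), np.zeros(n)
inputs = [np.ones(m)] * 5
traj = euler_maruyama_rnn(np.tanh, W_in, W, b, inputs, x0=np.ones(n), dt=1.0)
```

Setting `dt < 1` in this sketch recovers the setting used in our numerical experiments.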
### Integration scheme
Numerical integration of a stochastic differential equation is an extensive field by itself [3a].
We chose to use the simplest Euler-Maruyama discretization form because this leads to the standard RNN form.
Although it is generally inferior to other methods in terms of efficiency, systems with a fast-slow decomposition are stiff, which presents additional challenges for their solution.
Computational neuroscientists often train RNNs as models of neural computation [4a,5a]
and interpret them as dynamical systems.
Our experiments thus connect to this existing literature.
In future studies, it would be interesting to perform experiments with Neural SDEs [6a].
---
Rebuttal 3:
Comment: Dear Authors,
Due to your new experiments and the fact that most of your responses were convincing to me, I will wait until the end of the discussion period. I will raise my score to 6 if you can address my concerns regarding the bifurcation and stability analysis; otherwise, I will raise it to 5.
Kind regards,
Reviewer
---
Rebuttal 4:
Title: References
Comment: [1a] Fenichel, N. (1979). Geometric singular perturbation theory for ordinary differential equations. Journal of differential equations, 31(1), 53-98.
[2a] Kirchgraber, U., & Palmer, K. J. (1990). Geometry in the neighborhood of invariant manifolds of maps and flows and linearization.
[3a] https://docs.sciml.ai/DiffEqDocs/stable/solvers/sde_solve/
[4a] Mante, V., Sussillo, D., Shenoy, K. V., & Newsome, W. T. (2013). Context-dependent computation by recurrent dynamics in prefrontal cortex. nature, 503(7474), 78-84.
[5a] Sussillo, D., & Barak, O. (2013). Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks. Neural computation, 25(3), 626-649.
[6a] Tzen, B., & Raginsky, M. (2019). Neural stochastic differential equations: Deep latent gaussian models in the diffusion limit. arXiv preprint arXiv:1905.09883. | Rebuttal 1:
Rebuttal: We are grateful and encouraged that the reviewers found our work novel and interesting. Reviewers remarked that it is "a novel contribution and an important result to bolster the continuous attractor hypothesis", "fresh look, novel, original, and interesting [and] superb theoretical motivation", that "the main thrust of the paper was very interesting and very novel" and "it should be applauded."
We feel many of your suggestions have led us to changes and additions that better position the paper.
To respond to the reviewer's comments, we have performed the following analysis:
* quantify the fast-slow time scale separation on the manifold found in task-trained RNNs (Fig R1)
* trained LSTM and GRU networks (Fig R2)
* trained RNNs on a 2D task where the continuous attractor manifold is a torus (Fig R3)
# Generality of the theory
While most bifurcation analyses in theoretical neuroscience and machine learning are based on a particular parameterization (e.g., pairwise weight matrix), our theory applies to any differentiable dynamical system and to continuous piecewise smooth systems (with a global continuous attractor). Hence, the robustness of most continuous attractors is covered. The only necessary condition is *normal hyperbolicity*, as demonstrated via the separation of eigenvalues (Fig R1).
## Architecture
We tested our theory with LSTMs and GRUs to support our claim about the universality on trained RNNs.
These networks form the same normally hyperbolic invariant slow ring manifold as vanilla RNNs (Fig R2C,D), and on this manifold we find fixed points (Fig R2A,B). This consistency of structure across different RNN architectures provides further validation of our theoretical framework.
## Simple systems
The analysis of theoretical models and numerical experiments is intended to illustrate the theory's practical applicability rather than to prove its generality.
We focused on low-dimensional systems because they are easier to visualize and are a better guide to developing intuition.
We include results on RNNs trained on a 2D task (a double angular velocity integration task) to further demonstrate our theory's relevance. In the trained RNNs (Fig R3A,B), we find a slow, attractive, invariant manifold in the shape of a torus with a point topology. Additionally, we find evidence supporting the relevance of the error bound in these trained RNNs (Fig R3C,D).
## Broader impact
Approximate continuous attractor theory applies to dynamical systems inferred from data, to task-trained neural ODEs and RNNs, to finite-size effects on theoretical models, and to parametrized dynamics with limited expressive power.
We believe our theory opens up a new way of grouping similar dynamical systems for understanding the essential computation.
# Clarity
We acknowledge the reviewers' concerns regarding clarity and add details on the main topics highlighted in the reviews. If accepted, our final manuscript will be updated.
## Section 5.2
Section 5.2 outlines the conditions under which approximations to an analog working memory problem are near a continuous attractor. This section is crucial for clarifying when a situation like Proposition 1 would occur. These conditions are met for RNNs:
* C1: This translates to the existence of a manifold in the neural activity space with the same topology as the memory content. We formalize the dependence as the output mapping being a locally trivial fibration over the output manifold.
* C2: Persistence, as per the reverse of the Persistent Manifold Theorem, requires the flow on the manifold to be slow and bounded.
* C3+C4: Non-positive Lyapunov exponents correspond to negative eigenvalues of $\nabla_zh$. Along with dynamics robustness (corresponding to the persistence of the manifold), this implies normal hyperbolicity. We have expanded on this correspondence by building on [1].
## Parameter dependence for the analysis
The threshold parameter for identifying invariant slow manifolds was chosen to reflect the bimodal distributions of speeds along the integrated trajectories.
The supplementary document (Fig R1) shows that the identified invariant manifold accurately reflects the fast-slow separation expected for a normally hyperbolic system, thereby validating our method's legitimacy.
The number of fixed points (NFPS) identified depends on the number of points sampled for the angular flow on the invariant ring, but converges to the true NFPS as the grid of initial points is refined.
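As a toy illustration of this speed-threshold idea (the exact procedure used in the paper may differ; the trajectory and threshold value below are made up for illustration):

```python
import numpy as np

def slow_points(traj, dt, threshold):
    """Label trajectory points as 'slow' where the local speed falls below a
    threshold chosen at the valley of a bimodal speed histogram."""
    speeds = np.linalg.norm(np.diff(traj, axis=0), axis=1) / dt
    return speeds, speeds < threshold

# toy trajectory: fast relaxation toward the x-axis, then slow drift along it
t = np.linspace(0.0, 5.0, 501)
traj = np.stack([0.01 * t, np.exp(-5.0 * t)], axis=1)
speeds, mask = slow_points(traj, dt=t[1] - t[0], threshold=0.1)
# early points (fast relaxation) land above the threshold, late points below it
```

The bimodality of `speeds` is what makes a single threshold robust across models: the fast normal dynamics and the slow tangential flow occupy well-separated modes.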
# Limitations
We will add a separate **Limitations** subsection:
Although our theory is general, the magnitude of perturbation is measured in the uniform norm, so further analysis is needed for specific parameterizations. If the parameters are not flexible enough, the theory may not apply: for example, the "built-in" continuous attractors of RNNs such as an LSTM without a forget gate cannot be destroyed. However, in biological systems this is highly unlikely, at least at the network level.
Our empirical analysis requires the fast-slow decomposition around the manifold. Not all dynamical system solutions to the tasks that require analog memory have this property (hence sec 5.2). Solutions such as the quasi-periodic toroidal attractors or constant spirals represent challenges to the current framework of analysis in understanding the general phenomena of analog memory without continuous attractors.
Our numerical analysis relies on identifying a time scale separation from simulated trajectories. If the separation of time scales is too small, it may inadvertently identify parts of the state space that are only forward invariant (i.e., transient). However, this did not pose a problem in our analysis of the trained RNNs, which is unsurprising, as the separation is guaranteed by state noise robustness (due to injected state noise during training).
[1] Mané, R. (1978). Persistent manifolds are normally hyperbolic. Transactions of the American Mathematical Society, 246, 261-283.
Pdf: /pdf/59f7f791f554091c1da9fc916996ffae94d696dc.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Bigger, Regularized, Optimistic: scaling for compute and sample efficient continuous control | Accept (spotlight) | Summary: This paper investigates the sample efficiency problem in continuous control. The authors propose the BRO algorithm, i.e., Bigger, Regularized, Optimistic. The authors find that strong regularization allows for effective scaling of the critic networks, which, paired with optimistic exploration, leads to quite good performance. BRO achieves strong performance on numerous continuous control benchmarks, and is the first model-free reinforcement learning algorithm that can learn meaningful performance in DMC dog and humanoid tasks.
Strengths: - this paper is easy to follow and easy to understand
- the studied topic is important to the RL community. It is vital to develop stronger and more powerful model-free RL algorithms for continuous control problems
- although this work combines numerous previously well-developed tricks and strategies, the authors selectively incorporate them into one framework and demonstrate that this design choice yields quite good performance.
- the experiments are extensive and solid
- the developed BRO algorithm is the first model-free RL algorithm that can achieve meaningful performance in DMC dog and humanoid tasks
Weaknesses: this paper has the following drawbacks,
- the quality of the figures could be significantly improved. Please export figures as PDFs with matplotlib, as is conventional, instead of taking screenshots.
- BRONet seems to be a simplified version of ResNet. I do not seem to observe any significant network architecture difference between them. The authors should not over-claim on the network architecture. Any clarifications here?
- It is often unclear how the figures are plotted and which environments they cover, e.g., Figure 4, Figure 6. This should not be vague and ought to be clearly stated in the main text
- I am a bit concerned with the claim that *Algorithmic improvements matter less as the scale increases* (Line 184). Do you think that this is always correct? One should focus more on scaling instead of algorithmic improvements in the context of RL?
- missing baselines and references. The authors should compare against some other recent strong model-free RL algorithm, e.g., TD7 [1]. Meanwhile, the authors should cite the REDQ [2] paper when referring to replay ratios. REDQ should be included as a baseline approach in the paper. Moreover, a recent paper introduces sample multiple reuse (SMR) [3] that updates a fixed batch multiple times to boost sample efficiency. I think this can be a very relevant work and should be included and discussed in the paper. Also, the authors write that they introduce another actor network, and incorporate regularization techniques into critics, it reminds me of another work, DARC [4], where they leverage double actors for exploration and introduce critic regularization for better performance. It should be included in the paper.
[1] For sale: State-action representation learning for deep reinforcement learning
[2] Randomized Ensembled Double Q-learning: Learning Fast Without a Model
[3] Off-Policy RL Algorithms Can be Sample-Efficient for Continuous Control via Sample Multiple Reuse
[4] Efficient Continuous Control with Double Actors and Regularized Critics
Despite the aforementioned drawbacks, this is a solid paper that may have a potentially large impact on the RL community. I would be happy to reconsider the score if the aforementioned flaws are addressed during the rebuttal phase.
Technical Quality: 3
Clarity: 2
Questions for Authors: - how do you expect BRO to be applied in the discrete control tasks? Will BRO beat other stronger MCTS-based approaches in Atari games? Any comments here?
- can you elaborate more on explaining why scaling the actor network is not effective?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have a good discussion of the potential limitations of this work in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time reviewing our work and the suggestions on how to improve it. We are also very pleased that the reviewer found the experimental section solid. We leave our rebuttal below:
>missing baselines and references...
We thank the reviewer for suggesting these algorithms. In response, we added an experiment comparing BRO and BRO (Fast) to TD7, SMR, and RedQ on the 15 DMC tasks from our original evaluations. Given the limited time and the marginal performance improvements reported over SAC/TD3, we excluded DARC but cited and discussed it in the related work section. Our findings show that BRO and BRO (Fast) significantly outperform the additional baselines, and we detail these results in the rebuttal PDF we uploaded.
>The quality of the figures could be significantly improved...
We thank the reviewer for noticing that - indeed some of our figures were embedded as PNG files. We changed all figures to vector PDFs.
>BRONet seems to be a simplified version of ResNet…
BroNet uses layer normalization and adds an extra layer normalization before the first residual stream, as shown in Figure 12. This addition boosts performance by over 20% in DMC tasks, aligning with prior studies like [1] and [2] that reported declines when applying standard ResNets to Atari and DMC benchmarks. We named this configuration "BroNet" to highlight these crucial design choices. Additional offline RL experiments demonstrating BroNet's effectiveness are detailed in our joint response and Figure 3 of the rebuttal PDF. At the same time, we agree that it is crucial not to present BroNet as a new neural architecture but as an effective integration of existing modules for RL applications. We believe that naming this configuration aids in distinguishing specific architectures within the community. We added this discussion to the Method section. We hope this meets the reviewer’s expectations, though we are open to revising the presentation in the manuscript as needed.
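A minimal sketch of the configuration described above (the block layout, layer sizes, and naming are illustrative assumptions based on this description, not the exact implementation):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def bronet_forward(x, params):
    # Dense layer followed by LayerNorm *before* the first residual stream,
    # then residual blocks whose dense layers are each followed by LayerNorm.
    h = layer_norm(x @ params["W_in"] + params["b_in"])
    for W1, b1, W2, b2 in params["blocks"]:
        r = np.maximum(0.0, layer_norm(h @ W1 + b1))  # dense -> LN -> ReLU
        r = layer_norm(r @ W2 + b2)                   # dense -> LN
        h = h + r                                     # residual connection
    return h

# toy zero-valued parameters, just to exercise the forward pass
n = 4
params = {
    "W_in": np.zeros((n, n)), "b_in": np.zeros(n),
    "blocks": [(np.zeros((n, n)), np.zeros(n), np.zeros((n, n)), np.zeros(n))],
}
out = bronet_forward(np.ones((2, n)), params)
```

The extra normalization before the first residual stream is the detail highlighted in Figure 12 that distinguishes this configuration from a plain ResNet.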
>It is often unclear how the figures are plotted and which environments they cover, e.g., Figure 4, Figure 6...
We thank the reviewer for noticing that. We checked all figures for links to task lists and added them when missing. We use two task sets in our experiments: all 40 tasks from DMC, MW, and MYO, and 10 tasks from DMC and MW. The majority of experiments (e.g. Figure 6) were performed on all 40 tasks.
>I am a bit concerned with the claim that Algorithmic improvements matter less...
We thank the reviewer for this feedback. We acknowledge that scaling may not always be beneficial, such as in tasks with sparse rewards and complex exploration. However, our main takeaway is that scaling, alongside optimization techniques like layer normalization and replay ratio, can significantly enhance performance, often surpassing RL-specific algorithmic advancements (e.g., Figures 4 & 7). Many recent algorithms focus solely on these specific improvements while maintaining traditional architectures with two layers of 128-256 units (e.g., RedQ, SMR, TD7, SR-SAC [4]). We believe the synergy between algorithmic and scaling enhancements warrants more attention in future research. While we agree that algorithmic improvements are crucial for the field's progress, we aimed to suggest they matter "less" rather than "not at all." We are open to modifying the wording based on the reviewer’s feedback.
>how do you expect BRO to be applied in the discrete control tasks...?
We focused exclusively on continuous control because there are significant differences in best practices between continuous and discrete algorithms. For example, continuous algorithms usually require more mechanisms to curb overestimation [3] or different replay ratios [4]. However, we are happy to announce additional experiments in the Atari benchmark, where we use 3 tasks and augment the SR-SPR model with BroNet and find BroNet promising in these tasks. We summarize these results in the joint response. Due to the large computational demands associated with MCTS-style algorithms, we leave comparing BRO to those for future work. We summarize these results in the joint response and the rebuttal PDF.
>can you elaborate more on explaining why scaling the actor network is not effective?
We interpret this result as consistent with previous works showing that off-policy actor-critic algorithms are more "critic-centric" (e.g., [3] showing that the actor can be updated half as frequently, or [4] showing that resetting the critic is more important than resetting the actor). We hypothesize that critic learning is more complex because the critic must also model the action dimensions (whereas the policy changes only with respect to the state). Furthermore, since the policy optimizes the critic's output, the policy's complexity is inherently capped by the expressiveness of the critic.
We thank the reviewer again for their time, insights, and suggestions. Based on these we made several additions to our manuscript, which we describe in detail in the joint response. In particular, we hope that the additional baselines and added references address the issue of “missing baselines and references”. We also hope that the added results in the offline RL and discrete image-based RL benchmarks increase the reviewer’s confidence in the BroNet utility. If so, we kindly ask the reviewer to consider adjusting the initial score of our work.
[1] Schwarzer, Max, et al. "Bigger, better, faster: Human-level atari with human-level efficiency."
[2] Bjorck, Nils, et al. "Towards deeper deep reinforcement learning with spectral normalization."
[3] Fujimoto, S., et al. Addressing function approximation error in actor-critic methods.
[4] D'Oro, Pierluca, et al. "Sample-efficient reinforcement learning by breaking the replay ratio barrier."
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed rebuttal experiments. Please find the comments below,
> missing baselines and references
It is good to see experiments on TD7 and SMR. The experiments can be added to the paper, maybe in the appendix. Please also consider citing the references which I believe can further strengthen this work.
> figure quality
Yes, please do change all figures to vector PDFs
> BRONet and ResNet
Thanks for the clarification. I would recommend the authors not to over-claim the network architecture. Please consider revising the presentation to properly position the contribution of your work
> how the figures are plotted and which environments they cover
Please make them clearer in the final version. As commented, these should not be vague.
> the claim that Algorithmic improvements matter less as the scale increases
Thanks for the clarification. I think the authors should incorporate the rebuttal clarifications into the revision and have a more careful discussion on this.
> BRO in Atari games
Thank you for the additional experiments. I understand that the MCTS-based method can be computationally inefficient. I would recommend the authors take a look at the EfficientZero [1] algorithm, which is quite efficient and runs very fast on Atari 100k games. The performance of EfficientZero is also quite good. However, I understand that discrete control is not the focus of this work, and it is okay to see that BRONet beats SR-SPR on some Atari games.
[1] Mastering atari games with limited data
> why scaling the actor network is not effective
Thanks for the clarification.
All in all, this is a solid work. It is my hope that the authors can revise the manuscript based on my comments and suggestions. Considering its potential impact on the RL community and the rebuttal addresses all my concerns, I feel more confident to vote for accepting this paper. I am happy to increase my score from 5 to 7. Congratulations!
---
Reply to Comment 1.1.1:
Title: We thank the reviewer for their prompt response
Comment: We thank the reviewer for their prompt response and for increasing the score of our manuscript. We commit to carefully implementing the above in the final version of the paper.
We are also happy to answer new questions if any arise.
Best regards,
Authors | Summary: This paper investigates how reinforcement learning (RL) can benefit from parameter scaling. The authors introduce BroNet, a variant of ResNet with LayerNorm, as a well-regularized network structure for the critic that improves performance when scaled. They also find that when the critic is properly regularized, the common Clipped Double Q-learning trick can be replaced with optimistic exploration, further boosting sample efficiency. These findings are combined into a new algorithm called BRO.
To demonstrate the efficiency of their approach, the authors conduct extensive experiments on 40 challenging tasks across 3 benchmark suites. They also provide comprehensive ablation studies to justify their design choices.
Strengths: - The paper addresses the important and timely topic of scaling in model-free off-policy RL.
- It offers extensive large-scale studies on various design choices, providing valuable insights to the RL community.
- The proposed BRO algorithm shows promising results across a wide range of tasks.
Weaknesses: As acknowledged by the authors, the study primarily focuses on state-based off-policy RL. The transferability of their conclusions to other domains such as image-based problems and offline RL remains unclear, which limits the paper's broader impact.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The benefits of scaling the critic appear to saturate after 5M parameters. What are your projections for further scaling? Do you anticipate additional benefits beyond this point, or do you believe there are fundamental limitations?
- Could you provide more details on the evaluation metrics used, particularly for the MetaWorld benchmark? Given the various reporting methods in the literature for MetaWorld success rates, it would be helpful to clarify which specific metric was employed in this study.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Yes, the authors have discussed the limitation in section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and valuable suggestions. We are also happy that the reviewer found the insights provided by our manuscript valuable. Please find our rebuttal below:
>As acknowledged by the authors, the study primarily focuses on state-based off-policy RL. The transferability of their conclusions to other domains such as image-based problems and offline RL remains unclear, which limits the paper's broader impact.
We thank the reviewer for pointing us towards these domains. As described in the joint response, we are happy to announce two major additions to our manuscript.
Firstly, we add experiments on two offline RL benchmarks: AntMaze (6 tasks) and Adroit (9 tasks) [1]. We design these experiments to test the effectiveness of the naive application of BroNet to popular offline approaches. To this end, we run Behavioral Cloning (BC) in the pure offline setting, Implicit Q-Learning (IQL) in the offline + fine-tuning setting, and SAC with an RLPD buffer in the online-with-offline-data setting. We run all of these algorithms with both the default network and BroNet backbones. We find that the naive application of BroNet leads to performance improvements across all tested algorithms.
Secondly, we add experiments on the discrete image-based RL benchmark Atari 100k [2]. Similarly to the offline experiments, we evaluate the application of BroNet to the popular discrete RL algorithm SR-SPR. In this experiment, we substitute the regular critic network with a BroNet (without changing the convolutional encoder or other parts of the model, such as the learning rate). We run 5 seeds on 3 tasks and find that BroNet can improve the performance of SR-SPR, but its effectiveness depends on the reset configuration and thus demands future work.
We show these experimental results in the uploaded rebuttal PDF. Additionally, we added a passage to our limitations section stating that the usefulness of BRO for offline/image-based setups should be studied further. We hope that these added experimental results are valuable for readers interested in vision-based and offline control, and that they increase the reviewer’s confidence in the contribution of our manuscript.
>The benefits of scaling the critic appear to saturate after 5M parameters. What are your projections for further scaling? Do you anticipate additional benefits beyond this point, or do you believe there are fundamental limitations?
We believe that performance saturation beyond 5M parameters is due to the low complexity of the environments studied. Therefore, we anticipate that larger models will be beneficial for more challenging tasks, such as real-world robotics, multi-task learning, and image-based control in varied environments. Scaling is expected to become an increasingly significant tool in reinforcement learning. However, it is not a universal solution, and complex challenges like exploration may need approaches beyond just scaled SAC or BRO. We added this remark to Section 3.
>Could you provide more details on the evaluation metrics used, particularly for the MetaWorld benchmark? Given the various reporting methods in the literature for MetaWorld success rates, it would be helpful to clarify which specific metric was employed in this study.
We thank the reviewer for this suggestion. We added the following text to our manuscript detailing our evaluation method: “In the MetaWorld environment we follow the TD-MPC2 evaluation protocol. As such, the environment issues a truncate signal after 200 environment steps, after which we assess if the agent achieved goal success within the 200th step. We do not implement any changes to how goals are defined in the original MetaWorld and we use V2 environments”. We hope that this clarifies our evaluation procedure for MetaWorld tasks. Please let us know if anything remains unclear.
We thank the reviewer again for their valuable insights and suggestions on how to improve our manuscript. We think that implementing these suggestions resulted in additional value for readers, especially those interested in applying BRO to other domains than continuous control (e.g. offline, image-based, or discrete RL).
[1] Fu, Justin, et al. "D4rl: Datasets for deep data-driven reinforcement learning."
[2] Kaiser, Lukasz, et al. "Model-based reinforcement learning for atari."
---
Rebuttal Comment 1.1:
Title: Thanks for the additional results
Comment: I appreciate the authors providing additional results on offline RL and image-based RL. However, the experiments seem to deviate from the goal of this paper. Changing the network from an MLP to BroNet (with the same parameter size, I presume?) only shows the efficiency of BroNet, rather than that "scaling also helps offline RL and image-based RL". The correct experiments should also vary the size of BroNet, as you did in the main paper. In particular, since you argue that the results saturate at 5M parameters because the tasks are too simple, perhaps you could demonstrate that on the image-based tasks.
I understand these experiments could be too expensive during the rebuttal phase. Thus, I would encourage you to provide them in the final version.
---
Rebuttal 2:
Title: Thank you for the quick response and apologies for the confusion
Comment: We thank the reviewer for their quick response. We also apologize for not describing the additional experiments thoroughly enough and for the confusion that resulted.
In the offline and Atari experiments we use a scaled BroNet. Specifically, in the offline experiments, we substitute the standard MLP (2 layers of 256 units) with the standard BroNet variant that we use in BRO and BRO (Fast) (i.e., 6 layers with 512 units). In the Atari experiments, we leave the convolutional encoder untouched and substitute the Q-network head (1 layer of 512 units) with a slightly smaller BroNet (i.e., 4 layers with 512 units). We used a 4-layer BroNet because the Q-network head in the original implementation has only a single layer. We hope this clarifies that the newly added experiments are in line with the results presented in the main body (i.e., that we present results of a BroNet with a scaled parameter count).
We also thank the reviewer for suggesting using >5M in the image-based setup. We will provide these results in the camera-ready version.
Regards,
Authors | Summary: The paper studies how to scale up RL algorithms in the continuous domain and introduces the BRO (Bigger, Regularized, Optimistic) algorithm, designed to enhance sample efficiency with (relatively) large models. The authors conduct extensive experiments to verify the effectiveness of factors like replay ratio, regularizations, optimistic exploration, and quantile Q-values when scaling up RL algorithms. The findings from these extensive experiments lead to the novel BRO algorithm, which consists of a novel architecture with proper regularization and exploration. Empirical results demonstrate that BRO achieves state-of-the-art performance on 40 complex tasks from the DeepMind Control, MetaWorld, and MyoSuite benchmarks, outperforming leading model-based and model-free algorithms, especially in the challenging Dog and Humanoid tasks.
Strengths: 1. This paper tackles an important problem of scaling up in reinforcement learning, especially in continuous action domains.
2. The authors conduct extensive experiments on the effects of different methods on scaling up, which I found very informative.
3. The proposed algorithm, BRO, achieves strong empirical performances on various domains, especially on the Dog & Humanoid domains.
4. The paper is well-structured and well-written.
Weaknesses: Usually, scaling up benefits more when a large amount of data is available, where large models can lead to positive transfer or generalization across different tasks. However, the current setup is the same as the standard setting, where the agent is trained for 1M steps on each task separately. The work would be more significant if the model were trained on and could be transferred across different tasks.
Technical Quality: 3
Clarity: 4
Questions for Authors: See the weakness section. Is there any evidence that the proposed method can benefit from training on diverse tasks?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The authors discussed their limitations well in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and valuable feedback regarding our work. We are very pleased that the reviewer found our results on scaling interesting. Please find our rebuttal below.
>Usually, scaling up benefits more when a large amount of data is available, where large models can lead to positive transfer or generalization across different tasks. However, the current setup is the same as the standard setting, where the agent is trained for 1M steps on each task separately. The work would be more significant if the model is trained on and can be transferred across different tasks.
We thank the reviewer for their excellent question related to generalization across different tasks. In this work, we intentionally focused on the single-task setup because it is the standard approach in previous work proposing base agents [1, 2]. Furthermore, it provides a clear and isolated setting to study the properties of RL algorithms and serves as a necessary foundational step before tackling multi-task learning. During the limited rebuttal period, we enhanced our single-task analysis with additional experiments (e.g., longer training, offline, image-based, and BRO with a TD3 backbone), further confirming that BRO is an attractive option for future studies. Nevertheless, conducting informative multi-task or continual learning experiments requires significantly more work, as highlighted by, e.g., [3].
However, we commit to running simple preliminary multi-task experiments for the camera-ready version. We also added this discussion to the future work section.
>Is there any evidence that the proposed method can benefit from training on diverse tasks?
There are generic arguments that suggest that BRO could perform well in a multi-task learning environment. For example, the increased network capacity combined with robust regularization techniques should theoretically decrease catastrophic forgetting. Moreover, these design choices should also help in reducing overfitting to individual tasks, thus increasing generalization across tasks. We do not have any insights into the quality of representations learned by BRO in a multi-task setup, but such research could be interesting when tackling multi-task RL.
We thank the reviewer again for their time and insights. We are happy to answer further questions if any arise. We also welcome further discussion on the inclusion of preliminary multi-task results in the camera-ready version of our paper.
[1] Hessel, Matteo, et al. "Rainbow: Combining improvements in deep reinforcement learning."
[2] Schwarzer, Max, et al. "Bigger, better, faster: Human-level atari with human-level efficiency."
[3] Wolczyk, Maciej, et al. "Disentangling transfer in continual reinforcement learning."
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I believe the additional experiment strengthens the paper. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful feedback. We are pleased that our work was well received, and that all reviewers recognized the potential significance of our work for the RL community, the scope of our experiments, and the significant performance improvements our method provides over previous approaches. Following the reviewers' suggestions, we have made additions to our manuscript that we believe further increase its value. We have included graphs of the new results in the rebuttal PDF and summarize them here:
**New baselines** - Following the recommendation of reviewer TNcA, we have added three new baselines: TD7 (original codebase), RedQ (JaxRL), and SMR (JaxRL) and tested them on 15 DMC tasks. In these experiments, our proposed BRO achieves 250-350% better performance than the suggested algorithms. These results are displayed in Figure 1 of the rebuttal PDF.
**Extended training** - Following reviewer zd6E's suggestion, we extended BRO training beyond 1M environment steps, albeit in a single-task setup. We trained BRO and BRO (Fast) for 3M and 5M steps respectively on 7 Dog and Humanoid tasks and compared them to TD-MPC2 and SR-SAC. BRO significantly outperforms these baselines and notably almost solves the Dog Run task at 5M steps (achieving over 80% of the possible return). We show the 3M results in Figure 2 of the rebuttal PDF.
**Offline RL benchmark** - As suggested by reviewer 7YXC, we have added experiments on two offline RL benchmarks: AntMaze (6 tasks) and Adroit (9 tasks) [1]. We tested three scenarios: pure offline (comparing vanilla Behavioral Cloning (BC) to BroNet-based BC), offline with fine-tuning (comparing vanilla IQL [2] to BroNet-based IQL), and online with offline data (comparing vanilla SAC to BroNet-based SAC). Using BroNet led to noticeable improvements for all learners, as depicted in Figure 3 of the rebuttal PDF.
**Image-based benchmark** - Following Reviewer 7YXC, we added experiments on 3 tasks from the Atari 100k [3] benchmark. Here, we changed the regular Q-network of the SR-SPR (RR=2) model [4] to a BroNet, and considered changing the reset schedules to better fit the plasticity of the BroNet model. As depicted in Table 1 of the uploaded PDF, applying BroNet to discrete, image-based tasks is a promising avenue for future research.
**BRO + TD3** - To evaluate BRO performance beyond maximum entropy objectives, we tested BRO and BRO (Fast) with a TD3 backbone across 15 DMC tasks. BRO with a SAC backbone slightly outperformed TD3, though TD3 remains a viable option. This result might be helpful for practitioners interested in applying BRO to models with TD3 backbone, such as the image-based SOTA algorithm DrM [5]. These findings are illustrated in Figure 1 of the rebuttal PDF.
We hope that these results show that our approach to scaling seems promising in other branches of RL as well, and will ultimately prove helpful for readers interested in problems beyond continuous control. We believe these additions substantially increased the quality of our manuscript, and again, we are grateful to the reviewers for their suggestions. We invite the reviewers to inspect the new results in the uploaded rebuttal PDF and are happy to answer any further questions.
[1] Fu, Justin, et al. "D4rl: Datasets for deep data-driven reinforcement learning."
[2] Kostrikov, Ilya, Ashvin Nair, and Sergey Levine. "Offline reinforcement learning with implicit q-learning."
[3] Kaiser, Lukasz, et al. "Model-based reinforcement learning for atari."
[4] Schwarzer, Max, et al. "Data-efficient reinforcement learning with self-predictive representations."
[5] Xu, Guowei, et al. "Drm: Mastering visual reinforcement learning through dormant ratio minimization."
Pdf: /pdf/cca6fabc09cbc9d2d2070d331319b0831bde681c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DEX: Data Channel Extension for Efficient CNN Inference on Tiny AI Accelerators | Accept (poster) | Summary: The paper proposes a novel method to address the accuracy degradation caused by downsampling on small AI processors. The authors observed that the input layer often has a small number of channels, leading to underutilization of the processors. To mitigate this issue, they introduce a technique involving patch-wise even sampling and channel-wise stacking. This method incorporates additional spatial information, thereby improving accuracy while efficiently using processing resources that would otherwise be wasted.
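For readers unfamiliar with the idea, a rough space-to-depth-style sketch of what patch-wise even sampling with channel-wise stacking can look like is below. This is our own illustration, not the paper's exact implementation; the block size of 7, the `space_to_depth` helper name, and the 64-channel cap are assumptions chosen only to match the 3x224x224 -> 64x32x32 shapes discussed in the reviews:

```python
import numpy as np

def space_to_depth(img, s):
    """Stack the s*s evenly strided sub-grids of a CxHxW image as channels."""
    c, h, w = img.shape
    assert h % s == 0 and w % s == 0
    x = img.reshape(c, h // s, s, w // s, s)     # index [c, i, a, j, b] = img[c, i*s+a, j*s+b]
    x = x.transpose(0, 2, 4, 1, 3)               # group the (a, b) offsets with the channel axis
    return x.reshape(c * s * s, h // s, w // s)  # each offset becomes its own 32x32 channel

img = np.arange(3 * 224 * 224, dtype=np.float32).reshape(3, 224, 224)
stacked = space_to_depth(img, 7)   # (147, 32, 32): all original pixels retained
extended = stacked[:64]            # (64, 32, 32): trimmed to a 64-channel budget (our assumption)
print(extended.shape)
```

Each output channel here is one evenly strided sub-grid of the original image (e.g., channel 0 equals `img[0, ::7, ::7]`), so trimming to a 64-channel budget keeps roughly 44% of the original pixels, versus about 2% for a plain 3x32x32 downsample.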
Strengths: * The motivation for the proposed method is clear and well-justified.
* The idea of sampling and channel-wise stacking is both simple and effective, demonstrating a practical solution to a common problem in small AI processors.
* The paper provides well-defined baselines, including normal downsampling and CoordConv, and offers comparisons with various data channel extension approaches.
* The authors conduct a thorough sensitivity analysis, evaluating accuracy, model size, and latency across different channel sizes.
* The method shows low latency and minimal model size overhead, making it a promising solution for improving accuracy without significant performance trade-offs.
Weaknesses: * Further comparisons with a broader range of more complex models could strengthen the evaluation.
* There could be more discussion of potential limitations and scenarios where the method might be less effective.
Technical Quality: 3
Clarity: 3
Questions for Authors: * How does the proposed method perform on more complex models and larger datasets beyond the scope of the current evaluation?
* Are there specific scenarios or types of models where this method might be less effective?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discussed the limitations concerning small models and acknowledged the potential negative societal impact due to the increased use of computational resources to improve accuracy. In my opinion, their discussion is sufficient and adequate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and effort in providing us with positive and thoughtful comments. We respond to your question in what follows. Please also refer to the *global response* we posted together.
---
*Question 1 & Weakness 1) How does the proposed method perform on more complex models and larger datasets beyond the scope of the current evaluation?*
Thank you for the suggestion. We agree that evaluating the proposed method on more complex models and larger datasets would be beneficial. However, current tiny AI accelerators face memory and architectural limitations that restrict support for larger, more complex models. For instance, WideNet already utilizes 70% of the weight memory (432KB). Given that the MAX78000/MAX78002 are the only available platforms with disclosed hardware details and open-source tools, we plan to extend our idea to a wider variety of models on more computationally capable tiny AI accelerators in the future.
Additionally, during the rebuttal period, we conducted an experiment to see if DEX is applicable to another task, face detection, using the VGGFace2 dataset. The results demonstrated the effectiveness of DEX over downsampling, with mAP improving from 0.65347 to 0.69307. Please refer to the response to Reviewer k2BH, Question 3 for further details.
---
*Question 2 & Weakness 2) Are there specific scenarios or types of models where this method might be less effective?*
Thank you for the question. We think DEX might be less effective in certain tasks where incorporating more pixel information is not beneficial. For instance, DEX might be less effective in scene categorization where the overall structure and composition of the scene are more important than the detailed textures or pixel-level variations, such as determining if an image is an indoor or outdoor scene. In those cases, alternative data extension strategies might be used instead of patch-wise even sampling to utilize the additional channel budget.
We will incorporate this discussion in our final draft.
---
Rebuttal Comment 1.1:
Comment: Thanks for the additional experiments on the face detection task. I will keep my score.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Thank you for your response to our rebuttal! Thank you again for the positive review and valuable comments.
Best,
Authors | Summary: This paper introduces DEX, a novel technique designed to enhance the efficiency of DNN inference on resource-constrained tiny AI accelerators by extending the data channels. This approach aims to improve both resource utilization and inference accuracy by incorporating additional spatial information from the original image through patch-wise even sampling and channel-wise stacking. The authors identify that the limited memory budget on resource-constrained tiny AI accelerators requires downscaling the input image, which can degrade model quality and underutilize resources. By extending the data channel, the proposed method retains more information from the input image, thereby improving inference accuracy and maximizing resource utilization without increasing inference latency. Evaluations on real tiny AI accelerator devices demonstrate a 3.1% accuracy improvement with no additional inference latency.
Strengths: - The paper identifies the accuracy degradation and resource under-utilization issues for DNN inference on tiny AI accelerators and proposes a simple yet effective solution that can improve inference accuracy without additional inference latency.
- The evaluations are conducted on real hardware devices.
- The paper is well-written with the motivation, methodology and results being presented in a clear and logical manner.
Weaknesses: While the paper has notable strengths, several areas could be improved:
- End-to-end Performance: The impact of proposed DEX on the end-to-end latency and throughput is not evaluated. Although the authors claim no increase in inference latency, the overhead of the channel expansion from input RGB images (including several preprocessing steps) needs to be studied.
- Power/Energy Measurement: The paper does not include the analysis of power consumption and energy efficiency with DEX, which are crucial considerations for tiny AI accelerators.
- Scope of Models: The study is limited to classification DNN models. The impact on other tasks, such as object detection, face recognition, and more complex applications, should be explored.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What is the overhead of the channel expansion process in terms of computational and memory resources? How does this impact end-to-end inference performance? And on which hardware does such pre-processing happen?
- How does the power consumption vary when using the proposed technique?
- Can you comment on the applicability of DEX to other DNN tasks, such as object detection or natural language processing, which are also common tasks for tiny devices? Will DEX still be effective in improving inference accuracy for those tasks?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have acknowledged the limitations regarding the exploration of larger models. However, they have not addressed the broader applicability of DEX to other tasks beyond classification. It would be beneficial to include evaluations on other applications such as object detection, segmentation, and more to demonstrate the generalizability of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and effort in providing us with positive and thoughtful comments. We respond to your question in what follows. Please also refer to the *global response* we posted together.
---
*Question 1 & Weakness 1) What is the overhead of the channel expansion process in terms of computational and memory resources? How does this impact end-to-end inference performance? And on which hardware does such pre-processing happen?*
Thank you for the question. While our work currently focuses on AI accelerators—specifically their utilization and inference latency—considering data processing overhead is an important discussion for practical deployment. Note that the overhead and impact of data processing depend on the target application scenario and benchmark setup.
**Overhead of the channel expansion and hardware**: The latency of the channel expansion process depends on the processor's computational capability. During our evaluation, we pre-processed data on a powerful server, and thus the data processing latency was negligible.
We additionally conducted data processing on the ultra-low-power MCU processor on the board (Arm Cortex-M4) to understand the data processing overhead on less-capable devices. We measured the overhead of applying DEX to expand channels from a 3x224x224 image (a typical size for ImageNet) to 64x32x32 (the highest channel expansion used in our accelerators) on the Arm Cortex-M4 (120MHz).
This process took 2.2 ms on the Arm Cortex-M4. In terms of memory, this required an additional 62KB of SRAM (64x32x32 bytes - 3x32x32 bytes) on the processor. However, since DEX extends data to a size that the data memory in the AI accelerator can accommodate, this additional memory will not be an issue from the AI accelerator’s perspective.
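As a quick sanity check of the memory figure above (our own arithmetic, assuming the Q7 format stores one byte per value):

```python
# SRAM cost of the channel extension, assuming Q7 format (1 byte per value)
extended_input = 64 * 32 * 32   # bytes held after DEX expansion
baseline_input = 3 * 32 * 32    # bytes for the downsampled RGB input
extra_bytes = extended_input - baseline_input
print(extra_bytes)  # 62464 bytes, i.e. roughly the 62KB quoted above
```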
**Impact on end-to-end inference performance**: Note that the MCU processor and the AI accelerator are independent processing components that run in parallel. This means that if the inference latency on the accelerator is higher than the data processing latency, data can be pre-processed for the next inference during the current one (and thus **data processing latency can be hidden**). In our case, the inference latency of EfficientNet (11.7ms) exceeds the data processing latency of 2.2ms, so the inference throughput remains the same under continuous inference.
However, this depends on the scenario. The end-to-end impact of data processing latency depends on the processor's computational capability, the dimension of the data, and the size of channel expansion. For instance, in scenarios where data processing is done and transferred in more capable machines (e.g., cloud servers, smartphones, etc.) than the MCU processor on the tiny AI accelerator, the impact of data processing can be even more negligible.
We will incorporate this into our final draft.
---
*Question 2 & Weakness 2) How does the power consumption vary when using the proposed technique?*
Thank you for the question. **We measured power consumption with a Monsoon Power Monitor while varying the size of the channel extension**. The results are as follows:
| Model | Chan=3 | Chan=6 | Chan=18 | Chan=36 | Chan=64 |
|:---------:|:------:|:------:|:-------:|:-------:|:-------:|
| SimpleNet | 53.82 | 53.85 | 58.21 | 61.42 | 68.97 |
| WideNet | 60.74 | 61.37 | 63.76 | 67.92 | 77.14 |
All numbers are in milliwatts (mW).
As the number of channels increased, power consumption increased accordingly. This is because a higher number of channels uses more processors in the AI accelerator, leading to increased power consumption.
We will incorporate this into our final draft.
---
*Question 3 & Weakness 3) Can you comment on the applicability of DEX to other DNN tasks, such as object detection or natural language processing, which are also common tasks for tiny devices? Will DEX still be effective in improving inference accuracy for those tasks?*
Thank you for the question. The core idea of this work is to utilize additional channels for extra inputs to improve task accuracy. We validated the generalizability of our method across 16 cases (four datasets and four models). We believe this idea would still be beneficial for other tasks that have a low number of input channels, such as RGB images.
During the rebuttal, **we conducted an experiment to see if our idea generalizes to a face detection task**. Specifically, we used the VGGFace2 dataset and the Tiny Single-Shot Detection (Tiny SSD) model [r1]. Due to computational complexity and limited time during the rebuttal period, we downsized the dataset by taking the first 100 identities, resulting in 33K training samples and 2K test samples, and trained for 50 epochs. We used the Adam optimizer with a fixed learning rate of 0.001.
Here is the result:
| Method | Channel Size | mAP |
|--------------|:--------------:|:---------:|
| Downsampling | 3 | 0.65347 |
| DEX | 18 | 0.68317 |
| DEX | 64 | **0.69307** |
The results show that mAP (mean Average Precision) improved with DEX compared to Downsampling, illustrating that DEX works for a face detection task.
Nevertheless, we acknowledge that DEX might be less effective in certain tasks where incorporating more pixel information is less beneficial and where detailed pixel-level information might not significantly improve performance. A thorough evaluation is necessary to verify this for different tasks.
We will incorporate this in the final draft.
[r1] Tiny SSD: A Tiny Single-shot Detection Deep Convolutional Neural Network for Real-time Embedded Object Detection | Summary: Recent advancements in tiny ML accelerators, such as MAX 78000 and MAX 78002, have significantly boosted hardware processing power. On one hand, these accelerators feature 64 parallel processors with per-processor memory instances, enhancing CNN inference speed compared to traditional MCUs. On the other hand, downsampling of input images due to limited data memory can lead to accuracy degradation. To address this, this paper proposes DEX, which integrates patch-wise even sampling and channel-wise stacking to incorporate spatial information from original images into input images. Evaluation results demonstrate that DEX improves accuracy without introducing additional latency.
Strengths: + This paper presents a simple yet compelling idea to tackle CNN inference on a specific tiny ML accelerator.
+ Figures, such as Figure 5, clearly illustrate the DEX procedure.
+ The analysis in the paper provides a clear understanding of how DEX operates.
Weaknesses: - This approach appears suitable only for specific small devices.
- Some procedures are unclear and require further clarification. Detailed questions are listed below.
- Limiting the approach to processing only the first layer for simplicity may be a limitation of this work.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Lines 111 to 112 mention that 512KB memory is divided into 64 segments, giving each core an 8KB dedicated memory instance (as shown in Figure 1). Is this scenario realistic? In other words, does each core physically have its own private 8KB memory? Typically, all processors share the same 512KB memory. Moreover, the Block Diagram from the MAX78000 specification in Ref [34] does not indicate these dedicated memory instances.
2. Line 114 states that an image with an input shape of 3x224x224 may not fit MAX78000 even with Q7 format due to memory limitations for each channel. However, the MAX78000 product datasheet mentions "input image size up to 1024x1024 pixels" (Ref [34]). Could this be double-checked?
3. What if the channels of the intermediate layers are significantly fewer than the number of cores? Would it still be feasible or straightforward to extend DEX to those layers?
4. This work is similar to data augmentation but is designed for specific devices. How important or valuable is this hardware device in the context of TinyML? Even for the MAX78000/MAX78002, the contribution seems limited since DEX has only been applied to the first layer, and the accuracy improvement appears to be limited.
Ref [34]: https://www.analog.com/en/products/max78000.html
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and effort in providing us with positive and thoughtful comments. We respond to your question in what follows. Please also refer to the *global response* we posted together.
---
*Weakness 1) This approach appears suitable only for specific small devices.*
Please see our response to Question 4 below.
---
*Weakness 2) Some procedures are unclear and require further clarification. Detailed questions are listed below.*
Thank you for pointing this out. See our response below. We will incorporate the changes.
---
*Weakness 3) Limiting the approach to processing only the first layer for simplicity may be a limitation of this work.*
Please see our response to Question 3 below.
---
*Question 1) Lines 111 to 112 mention that 512KB memory is divided into 64 segments, giving each core an 8KB dedicated memory instance (as shown in Figure 1). Is this scenario realistic? In other words, does each core physically have its own private 8KB memory? Typically, all processors share the same 512KB memory. Moreover, the Block Diagram from the MAX78000 specification in Ref [34] does not indicate these dedicated memory instances.*
**The cores cannot all share the same 512KB memory.** This is due to the *per-processor memory instance* property in tiny AI accelerators, which allows for rapid data access and parallelization. The block diagram in Ref [34] provides only an abstract view of the memory architecture and does not illustrate this detail. For a more detailed explanation, please refer to the MAX78000 User Guide. To be precise, four processors share one data memory instance, as noted in our draft (lines 524-526). Nevertheless, parallelization occurs at the channel level. Hence, the data memory can be viewed as 64 segments in terms of parallelization.
---
*Question 2) Line 114 states that an image with an input shape of 3x224x224 may not fit MAX78000 even with Q7 format due to memory limitations for each channel. However, the MAX78000 product datasheet mentions "input image size up to 1024x1024 pixels" (Ref [34]). Could this be double-checked?*
Note that 1024 x 1024 pixels in Q7 format (one byte per pixel) amounts to ~1048 KB, which does not fit within the 512KB data memory. The MAX78000 datasheet description, “Programmable Input Image Size up to 1024 x 1024 pixels,” is feasible using its “streaming mode.” According to the official documentation, “Streaming allows input data dimensions that exceed the available per-channel data memory in the accelerator.” This mode leverages special hardware support, such as the streaming queue in the MAX78000. However, this comes at the cost of increased inference latency. We did not cover this in the paper as it is a special implementation specific to the hardware and may not generalize to other types of hardware. The focus of our analysis was on the standard operation mode.
---
*Question 3) What if the channels of the intermediate layers are significantly fewer than the number of cores? Would it still be feasible or straightforward to extend DEX to those layers?*
**Yes, it is both possible and straightforward to extend DEX to those layers by modifying the output channel size of those layers.** In this work, we focused on modifying the first CNN layer due to simplicity, effectiveness, and memory constraints. The first layer, representing image data in three channels (RGB), has the most unused processors after initial data assignment. Extending channels at the first layer significantly increases data utilization with minimal impact on model size. This approach aligns with the design of weight memory in tiny AI accelerators, which maximizes model capacity by collective use across processors. We discussed this in Lines 317-322 in our manuscript.
---
*Question 4) This work is similar to data augmentation but is designed for specific devices. How important or valuable is this hardware device in the context of TinyML? Even for the MAX78000/MAX78002, the contribution seems limited since DEX has only been applied to the first layer, and the accuracy improvement appears to be limited.*
While our solution might look similar to data augmentation, it is specifically designed for tiny AI accelerators to maximize both processor utilization and accuracy improvement. We strongly believe that these **tiny AI accelerators are crucial platforms in the context of tinyML**. The advent of tiny AI accelerators is bringing AI closer to us than ever before, offering reduced latency, low power cost, and improved privacy. These accelerators with small form factors are recently being integrated into wearable devices, e.g., smart earbuds, patches, watches, glasses, wristbands, and shoes [r1, r2, r3, r4].
In this paper, we focus on the MAX78000 and MAX78002 since they are the most widely used tiny AI accelerator research platforms [1, 6, 13, 39, 40, 43] thanks to their disclosed hardware details and open-source tools, enabling in-depth analysis and modification of their operations. **These tiny AI accelerators are common research platforms, similar to the STM32 series in MCU research and the NVIDIA Jetson series in Edge TPU research**.
In that context, we believe DEX is an important step in utilizing tiny AI accelerators within TinyML by improving accuracy without sacrificing inference latency. We identify inefficiencies in these accelerators and enhance accuracy through a novel data extension algorithm. We believe that **an average 3.1%p accuracy improvement is meaningful, especially for resource-constrained tiny devices**, and we focused on the first layer due to the reasons explained in our response to Question 3.
[r1] Ananta Narayanan Balaji and Li-Shiuan Peh. 2023. AI-On-Skin: Towards Enabling Fast and Scalable On-body AI Inference for Wearable On-Skin Interfaces. Proceedings of the ACM on Human-Computer Interaction 7, EICS (2023), 1–34.
[r2] OmniBuds - Sensory Earables powered by AI accelerators.
[r3] Hearables with GAP9. TWS Processor with GAP9.
[r4] Shift Moonwalkers.
---
Rebuttal Comment 1.1:
Comment: Thanks for the careful response. I still feel positive about this paper.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: We sincerely appreciate your response. Thank you once again for your positive feedback and valuable suggestions, as well as for recognizing the value of our contributions. We are happy to continue the discussion if you have further questions.
Best,
Authors | Summary: This paper indicates that current AI accelerators with limited data memory often require downsampling input images, which leads to reduced accuracy. Therefore, the proposed Data channel EXtension (DEX) includes additional spatial information from original images as informative input through two procedures: patch-wise even sampling and channel-wise stacking. This effectively extends data across input channels. As a result, DEX enables parallel execution without increasing inference latency. The numerical experiments consistently show improved model performance on four datasets.
Strengths: • The proposed method is easy to understand, with clearly written paragraphs and well-organized sections.
• The experiments conducted demonstrate the effectiveness of the proposed method.
Weaknesses: - The proposed data channel extension assumes that only a limited number of processors tied to memory instances are utilized while the remaining processors remain idle. However, there is no guarantee that this assumption always holds, so the conditions for triggering the proposed method may not always be met.
- The proposed method is as simple as an implementation trick; hence, the technical contribution is limited.
* The compared channel extension methods are all proposed by the authors, which makes a fair comparison difficult.
* It is curious what the performance of patch-wise random sampling could achieve.
Technical Quality: 2
Clarity: 3
Questions for Authors: - The primary concern of this paper is that the proposed image sampling approach is too simple to make a significant technical contribution. Moreover, the proposed method relies on the assumption of having unused per-processor memory instances to initiate the sampling process, a condition that may not always be met.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Included
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and effort in providing us with thoughtful comments. We respond to your question in what follows. Please also refer to the *global response* we posted together.
---
*Weakness 1) The proposed data channel extension requires the assumption that only a limited number of processors tied to memory instances are utilized while the remaining processors remain idle. However, it is not ensured that such an assumption is always true to trigger the proposed method.*
We acknowledge that our work targets tiny AI accelerators that feature parallel processors and per-processor memory instances for rapid data access and parallelization, as we described in Section 2. **Tiny AI accelerators with these hardware-level optimizations are crucial** for performance improvement compared to traditional MCUs. We believe these tiny AI accelerators will be widely adopted in various small devices, such as recent AI-capable smart earbuds, patches, watches, glasses, wristbands, and shoes [r1, r2, r3, r4].
Given the extensive tinyML research on the STM32 MCU series and edge AI work on the NVIDIA Jetson series in the past years, we believe **these tiny AI accelerators will become a key enabling force for true on-device AI in tiny devices such as wearables**. In this context, we focus on the tiny AI accelerator platforms (MAX78000 and MAX78002) since they are not only the most widely used tiny AI accelerator research platforms [1, 6, 13, 39, 40, 43] but also feature these hardware-level optimizations. Our insights into utilizing parallel processors for accuracy improvement without sacrificing inference latency will remain valuable as long as future AI platforms continue to incorporate these tiny AI accelerators.
[r1] Ananta Narayanan Balaji and Li-Shiuan Peh. 2023. AI-On-Skin: Towards Enabling Fast and Scalable On-body AI Inference for Wearable On-Skin Interfaces. Proceedings of the ACM on Human-Computer Interaction 7, EICS (2023), 1–34.
[r2] OmniBuds - Sensory Earables powered by AI accelerators.
[r3] Hearables with GAP9. TWS Processor with GAP9.
[r4] Shift Moonwalkers.
---
*Weakness 2) The proposed method is as simple as an implementation trick; hence, the technical contribution is limited.*
- *The compared channel extension methods are all proposed by the authors and hence failed to show a fair comparison.*
- *It is curious what the performance of patch-wise random sampling could achieve.*
**Simplicity of the method**: While the proposed method might seem simple, we provide an in-depth analysis of its rationale, impact, utilization, and constraints in Section 3. Also, this simplicity allows our solution to be generally applicable to various types of models and AI accelerators. **We would like to mention that the other reviewers pointed out our simple and effective solution as a strength of our paper** (UAVo: “simple yet compelling idea”; k2BH: “simple yet effective solution”; Zbte: “both simple and effective”). Our approach is novel, specifically designed for emerging tiny AI accelerators, and we have shown its effectiveness. We believe many impactful papers, especially in the field of AI/ML, present simple yet effective solutions.
**Baselines**: This area has been hardly explored as tiny AI accelerators are new platforms, resulting in **few existing baselines**. In our original submission, **we did conduct a comparative study with existing channel manipulation methods proposed by prior art** such as Downsampling, CoordConv, and CoordConv (r) in Tables 1 and 2. While these baselines were not originally designed for our target platforms, we believe they provide meaningful comparisons that validate our design rationales—data channel extension to achieve accuracy improvement without extra latency. In addition, we compared with other possible channel extension strategies in Table 4 proposed by us (except for Downsampling which is widely used) due to the lack of proper baselines in the literature. We believe this is a fair comparison incorporating existing approaches and possible alternatives.
**Patch-wise random sampling**: Thank you for suggesting a comparison with patch-wise random sampling. Following your suggestion, we measured its performance and integrated it into Table 4, as shown below. DEX’s data extension algorithm outperformed the baselines, including patch-wise random sampling. We will incorporate this into our final draft.
| Method | InputChan | InfoRatio (X) | Accuracy |
|:------------------------------|:---------:|:-------------:|:--------------:|
| Downsampling | 3 | 1.0 | 57.8 ± 1.2 |
| Repetition | 64 | 1.0 | 56.3 ± 0.8 |
| Rotation | 64 | 1.0 | 55.7 ± 0.6 |
| Tile per channel | 64 | 21.3 | 39.3 ± 0.9 |
| Patch-wise seq. | 64 | 21.3 | 60.4 ± 1.5 |
| **Patch-wise random sampling** | 64 | 21.3 | 60.4 ± 1.0 |
| DEX | 64 | 21.3 | **61.4 ± 0.6** |
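For concreteness, here is a minimal NumPy sketch of what patch-wise even sampling with channel-wise stacking (vs. random offsets) could look like; the patch geometry, offset pattern, and function names are our illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def extend_channels(img, factor, even=True, rng=None):
    """Keep factor*factor evenly offset sub-grids of the original image and
    stack them along the channel axis, instead of keeping only one sub-grid
    as plain downsampling does (illustrative sketch, not the DEX code)."""
    H, W, C = img.shape
    h, w = H // factor, W // factor
    img = img[:h * factor, :w * factor]  # crop to a multiple of factor
    # (h, factor, w, factor, C): index [i, dy, j, dx, c] addresses the
    # original pixel at (i*factor + dy, j*factor + dx)
    patches = img.reshape(h, factor, w, factor, C)
    if even:
        # even sampling: visit every offset inside each patch, in order
        offsets = [(dy, dx) for dy in range(factor) for dx in range(factor)]
    else:
        # random sampling: draw the per-plane offsets at random
        rng = rng if rng is not None else np.random.default_rng(0)
        offsets = [tuple(rng.integers(0, factor, 2)) for _ in range(factor ** 2)]
    planes = [patches[:, dy, :, dx, :] for dy, dx in offsets]
    return np.concatenate(planes, axis=-1)  # (h, w, C * factor**2)
```

Plain downsampling would keep only the first plane; stacking all offset planes keeps every original pixel across the extended channels, which matches the higher InfoRatio in the table.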
---
*Question 1) The primary concern of this paper is that the proposed image sampling approach is too simple to make a significant technical contribution. Moreover, the proposed method relies on the assumption of having unused per-processor memory instances to initiate the sampling process, a condition that may not always be met.*
Please see our responses to Weakness 1 and 2 above. | Rebuttal 1:
Rebuttal: # Global Response
Dear Reviewers,
We appreciate all of you for your positive reviews and for highlighting the **strengths of our work**:
**ySD7:** (1) Easy to understand, (2) clearly written, (3) well-organized, and (4) demonstrates the effectiveness of the proposed method.
**UAVo:** (1) Presents a simple yet compelling idea, (2) clear illustrations, and (3) in-depth analysis.
**k2BH:** (1) Identifies important issues in tiny AI accelerators, (2) a simple yet effective solution, (3) evaluation with real devices, (4) well-written, and (5) clear and logical motivation and methodology.
**Zbte:** (1) Well-justified motivation, (2) both simple and effective, demonstrating a practical solution, (3) well-defined baselines, (4) thorough sensitivity analysis, and (5) low latency with minimal model size overhead.
We also sincerely thank the reviewers for their **constructive comments** to improve our work. We have addressed all the questions from reviewers with clarifications and new experiments during this rebuttal period. **We summarize how we addressed the reviewers’ main questions** as follows:
**ySD7:**
- We clarified the assumption and highlighted its importance in tiny AI accelerators.
- We clarified our technical contribution.
- We clarified that we compared with existing baselines.
- We conducted an experiment to compare with patch-wise random sampling.
**UAVo:**
- We clarified the memory architecture of the tiny AI accelerators.
- We clarified the input size limitation.
- We discussed the possibility of applying DEX to intermediate layers.
- We highlighted the importance of the tiny AI accelerator platforms and the significance of the result.
**k2BH:**
- We measured the overhead of data processing with Arm Cortex-M4.
- We measured the power consumption using a Monsoon Power Monitor.
- We conducted an experiment on a face detection task.
**Zbte:**
- We clarified the scope of the experiments and conducted an experiment on a face detection task.
- We discussed scenarios where DEX might be less effective.
We will carefully incorporate these points and our responses into our final draft. Thank you once again for your valuable feedback and suggestions.
Sincerely,
Authors | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SpikedAttention: Training-Free and Fully Spike-Driven Transformer-to-SNN Conversion with Winner-Oriented Spike Shift for Softmax Operation | Accept (poster) | Summary: The paper introduces a novel method for converting Swin Transformer and BERT into SNN without additional training, achieving high accuracy and energy reduction. Key innovations include fully spike-based encoding, trace-driven matrix multiplication, and an exponent-free spike-based softmax using winner-oriented spike shift. The method maintains the original attention architecture, achieving state-of-the-art accuracy on ImageNet with a 42% energy reduction and only 0.3% accuracy loss on GLUE benchmarks while reducing energy consumption by 58%.
Strengths: 1. SpikedAttention allows for direct transformer-to-SNN conversion without requiring additional training, preserving the original attention architecture and simplifying the deployment process of energy-efficient neural networks.
2. SpikedAttention significantly reduces energy consumption by 42% compared to baseline models like Swin Transformer for image classification and by 58% for NLP tasks on the GLUE benchmark.
3. The method achieves state-of-the-art accuracy for SNNs on the ImageNet dataset (80.0%) and maintains high performance with minimal accuracy loss (0.3% on average) when converting BERT models.
Weaknesses: 1. SpikedAttention requires a longer timestep to maintain high accuracy compared to directly trained SNNs, which could limit its efficiency and responsiveness in real-time applications.
2. The method does not currently support functions like GeLU and LayerNorm, making it difficult to generalize to all types of language models and potentially limiting its applicability across diverse neural network architectures.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In Table 1, why are there two Meta-SpikeFormer?
2. In Table 1, why SpikedAttention w/o ReLU overperforms SpikedAttention w/ ReLU in terms of Acc?
3. How will the energy consumption change when the input length increases in NLP tasks?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. SpikedAttention requires a longer timestep to maintain high accuracy compared to directly trained SNNs, which could limit its efficiency and responsiveness in real-time applications.
2. The method does not currently support functions like GeLU and LayerNorm, making it difficult to generalize to all types of language models and potentially limiting its applicability across diverse neural network architectures.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Long Timestep of SpikedAttention)** Thanks for the comment about the total timestep. As you mentioned, SpikedAttention requires a longer timestep than a directly trained SNN. However, directly trained prior works [1,2] have 2x more weight parameters to achieve 80% accuracy. More parameters mean a higher number of off-chip memory accesses, which are significantly more energy-consuming operations. Also, fetching data from off-chip DRAM takes longer, which increases latency. Note that a model in [1] with a similar number of parameters to SpikedAttention is >3% less accurate than SpikedAttention.
In addition, SpikedAttention has a longer timestep but only generates one spike within the timestep (highly sparse). SpikedAttention has an average spike rate under 2%, while the previous works typically have a spike rate of 30%. Since it has significantly higher sparsity than other SNNs, it can be computed with low latency and high energy efficiency if sparse GEMM accelerators [3,4,5,6] are used. The throughput of spGEMM accelerators is determined by the total number of non-zeros (spikes) amortizing the impact of a longer timestep. For instance, if the spike rate is 10x lower, we can keep the performance at the same level even with a 10x longer timestep. Recently, accelerators targeted for sparse SNNs have been proposed to maximize the throughput and efficiency when running SNNs [7,8].
[1] M. Yao, et al., “Spike-driven Transformer V2: Meta spiking neural network architecture inspiring the design of next-generation neuromorphic chips,” in Proc. of ICLR, 2024.
[2] M. Yao, et al., “Spike-driven transformer,” NeurIPS’23.
[3] Z. Zhang, et al., “Sparch: Efficient architecture for sparse matrix multiplication,” HPCA’20.
[4] J. Kim, et al., “Harp: Hardware-based pseudo-tiling for sparse matrix multiplication accelerator,” MICRO’23.
[5] N. Srivastava, et al., “Matraptor: A sparse-sparse matrix multiplication accelerator based on row-wise product,” MICRO’20.
[6] G. Zhang, et al., “Gamma: Leveraging gustavson’s algorithm to accelerate sparse matrix multiplication,” ASPLOS’21.
[7] S. Narayanan, et al., “Spinalflow: An architecture and dataflow tailored for spiking neural networks,” ISCA’20.
[8] R. Yin, Y. Kim, D. Wu, and P. Panda, “LoAS: Fully temporal-parallel dataflow for dual-sparse spiking neural networks,” 2024.
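The throughput argument above reduces to simple arithmetic; the work model below (work proportional to spike rate times timesteps) is a simplification for illustration, using the rates quoted in the text:

```python
def total_spikes(spike_rate, timesteps, neurons=1_000_000):
    # On a sparse-GEMM accelerator, throughput is set by the number of
    # non-zeros processed, i.e., the total spike count over the window.
    return spike_rate * timesteps * neurons

# A ~2% spike rate over a 10x longer window still processes fewer
# spikes than a ~30% rate over a 1x window:
assert total_spikes(0.02, 10) < total_spikes(0.30, 1)
```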
**Meta-SpikeFormer Results in Table 1)** Thanks for the comment on the experimental results of the prior work, i.e., Meta-SpikeFormer. We trained two different versions of Meta-SpikeFormer, i.e., one for high accuracy and one for low energy consumption, for the comparison with the proposed SpikedAttention. For higher accuracy, we trained Meta-SpikeFormer with a total timestep of 4. For higher energy efficiency, we trained Meta-SpikeFormer with a total timestep of 1.
**Energy Consumption w.r.t. Input Token Length)** We agree with the reviewer that the energy consumption of an ANN/SNN varies with the input token length for NLP tasks. Therefore, we varied the input token length of MA-BERT (i.e., the target ANN) from 64 to 256 on the SST-2 dataset. Also, the same input token lengths are fed into the converted SpikedAttention model for the energy evaluation. The attached **Figure R2** (on PDF of global rebuttal) shows the energy consumption at various input lengths for both the ANN and SpikedAttention. In addition, the accuracy losses due to the ANN-to-SNN conversion, which are negligible (<1%), are presented.
For both ANN and SpikedAttention, the energy consumption increases as the maximum input length increases. This is because the number of computations increases in the attention module with the increase in the input length. For instance, MA-BERT with the input length of 128 consumes 189.7mJ of energy, while MA-BERT with the input length of 256 consumes 458.9mJ (2.4x). Since SpikedAttention benefits from fully spiked-based computations, the energy consumption for input lengths of 128 and 256 is 79.9mJ and 188.0mJ, respectively. These energy numbers imply that SpikedAttention is ~2.4x more energy-efficient compared to the ANN, regardless of the input token length. As we optimized the spiked-based computation in the attention module with WOSS and trace-based matrix multiplication, the energy reduction ratio of SpikedAttention compared to ANN slightly increases as the input length increases.
* We will add this relationship between input token length and the energy consumption in Appendix of the paper.
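As a quick sanity check of the figures above (energy values copied from this paragraph), the savings ratio can be recomputed directly:

```python
# Reported energies (mJ) for two input token lengths, from the text above.
ann = {128: 189.7, 256: 458.9}   # MA-BERT (target ANN)
snn = {128: 79.9, 256: 188.0}    # SpikedAttention

ratios = {n: ann[n] / snn[n] for n in ann}
# ~2.4x savings at both lengths, growing slightly with longer inputs
assert 2.3 < ratios[128] < ratios[256] < 2.5
```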
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' responses. They address my concerns. | Summary: The paper presents SpikedAttention, a novel method for converting pre-trained transformers into spiking neural networks (SNNs). The method introduces two key techniques: trace-driven matrix multiplication and winner-oriented spike shift (WOSS) for softmax. These techniques enable the conversion of attention modules into spike-based computations without altering the original transformer architecture or requiring any additional training. The authors demonstrate the effectiveness of SpikedAttention by converting Swin Transformer for image classification and BERT for natural language processing tasks, achieving state-of-the-art accuracy and significant energy reduction compared to previous SNN-based transformers and the original ANN models.
Strengths: 1. Novel techniques. The paper introduces innovative methods for implementing attention mechanisms in SNNs, addressing the challenges of softmax computation and matrix multiplication between spike-based matrices.
2. State-of-the-art accuracy and energy efficiency. The proposed SpikedAttention achieves impressive results on both vision and language tasks, demonstrating superior accuracy and energy savings compared to existing SNN-based transformers.
3. Direct conversion without training. The method enables the direct conversion of pre-trained transformers into SNNs without requiring any additional training or architectural modifications, making it a practical and efficient approach.
4. Applicability to both vision and language tasks. The authors showcase the versatility of SpikedAttention by successfully converting both Swin Transformer and BERT models, highlighting its potential for broader applications in various domains.
Weaknesses: 1. Longer timestep compared to directly trained SNNs. The paper acknowledges that SpikedAttention requires a longer timestep than directly trained SNNs to maintain high accuracy, which could impact its latency and efficiency in certain scenarios. Could the authors draw a figure to show the accuracy and energy with different timesteps?
2. Limited support for certain functions. The current implementation of SpikedAttention does not support GeLU and LayerNorm, limiting its applicability to most models.
3. There is no ablation study in this work. What if trace-driven matrix multiplication is absent? What if WOSS softmax is missing?
4. Limited discussion on hardware implementation: While the paper mentions the potential for hardware implementation, a more in-depth analysis of the hardware implications and optimizations would be valuable.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Impact of timestep on performance: How does the choice of timestep affect the accuracy and energy efficiency of SpikedAttention, and is there a trade-off between these factors?
2. Performance on more complex tasks: How does SpikedAttention perform on more challenging vision and language tasks, such as object detection, machine translation, or question answering?
3. Hardware implementation and optimization: What are the specific hardware considerations and potential optimizations for deploying SpikedAttention on neuromorphic chips or other specialized hardware?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper discusses limitations in the final section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Impact of Timestep on Accuracy/Energy)** Thanks for the comment about the trade-off between the total timestep and the accuracy/energy efficiency. Regarding the accuracy, as discussed in Appendix E, the larger the timestep T, the smaller the base value of a one-spike SNN can be. A smaller base reduces the conversion loss, leading to higher accuracy. Regarding the energy consumption, as discussed in Appendix A.2, a longer timestep incurs higher energy consumption due to more neuron model computations and data movements. To summarize, a longer timestep increases the energy consumption while reducing the accuracy loss.
As requested, we attached **Figure R1** (on PDF of global rebuttal) showing the accuracy loss and the energy consumption at various timesteps when converting Swin-T to an SNN for ImageNet classification. It clearly shows the trade-off between timestep and accuracy/energy consumption.
* We will add this figure to Appendix in the paper.
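To illustrate the base-vs-T trade-off described above, here is a generic one-spike temporal-code round trip; the `base**(-t)` encoding and function name are our assumptions for illustration and may differ from the paper's exact scheme:

```python
import math

def one_spike_round_trip(x, base, T):
    # A single spike at step t stands for base**(-t); quantize x in (0, 1]
    # to the nearest representable level within T steps.
    t = min(T - 1, max(0, round(-math.log(x) / math.log(base))))
    return base ** (-t)

x = 0.3
err_coarse = abs(one_spike_round_trip(x, base=2.0, T=8) - x)
err_fine = abs(one_spike_round_trip(x, base=1.2, T=32) - x)
# larger T permits a smaller base, i.e., finer levels and lower loss
assert err_fine < err_coarse
```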
**Ablation Study)** Thanks for the comment on the ablation study. We also considered adding experiment results without either trace-driven matrix multiplication or WOSS. However, those two methods are indispensable to realize the end-to-end spike-based computation for transformers (i.e., without any one of them, it is impossible to run the model without structural modification [1,2] or involving floating-point computations [3]). First, when we use AND operation instead of the trace-driven multiplication in SpikedAttention, the activation values floor to zero as the layers get deeper because the one-spike SNN is highly sparse, and the accuracy collapses to zero. Second, without WOSS, the softmax operation cannot be implemented by spike-based computations, and the attention module of the ANN needs to be restructured (e.g., removing softmax) and trained from scratch. Since our most important goal was achieving “training-free ANN-to-SNN conversion”, converting the softmax to spike-based computations without WOSS is impossible.
[1] Z. Zhou, et al., “Spikformer: When spiking neural network meets transformer,” ICLR’23.
[2] M. Yao, et al., “Spike-driven transformer,” NeurIPS’23.
[3] Z. Wang, et al., “Masked spiking transformer,” ICCV’23.
**Hardware Deployment)** SpikedAttention is not a model targeted for a specific type of hardware but is designed to support a diverse set of hardware architectures. For example, Loihi [4], i.e., Intel’s neuromorphic chip, implements a trace model in hardware that decays and tracks the per-neuron trace over time. The hardware module for updating neuron traces is implemented on a neuromorphic chip to support spike-timing-dependent plasticity (STDP), i.e., a well-known unsupervised learning rule. We intentionally designed the trace-driven matrix multiplication to utilize the already available hardware resources for neuron traces. Therefore, trace-based matrix multiplication is fully supported on neuromorphic chips. In addition, the recent Loihi2 allows users to select among various neuron models implemented in hardware. To support WOSS neurons, we can modify the existing LIF neuron hardware; hardware overhead details are provided in Appendix C of the submitted paper.
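For illustration, a generic Loihi-style decaying spike trace (the hardware primitive mentioned above) can be sketched as follows; the decay constant and function name are example assumptions, not the paper's implementation of trace-driven matrix multiplication:

```python
def spike_trace(spikes, decay=0.5):
    # Per-neuron trace: decays every timestep and is bumped by incoming
    # spikes, i.e., x[t] = decay * x[t-1] + s[t] (as used for STDP).
    x, trace = 0.0, []
    for s in spikes:
        x = decay * x + s
        trace.append(x)
    return trace

# A single spike leaves an exponentially decaying trace behind it:
assert spike_trace([1, 0, 0]) == [1.0, 0.5, 0.25]
```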
SpikedAttention has a longer timestep compared to directly trained SNN models, but only generates one spike within the timestep (highly sparse). SpikedAttention has an average spike rate under 2%, while the previous works typically have a spike rate of 30%. Since it has significantly higher sparsity than other SNNs, it can be computed with low latency and high energy efficiency if sparse GEMM accelerators [5,6,7,8] are used. The throughput of spGEMM accelerators is determined by the total number of non-zeros (spikes) amortizing the impact of a longer timestep. For instance, if the spike rate is 10x lower, we can keep the performance at the same level even with a 10x longer timestep. Recently, accelerators targeted for sparse SNNs have been proposed to maximize the throughput and efficiency when running SNNs [9,10].
[4] Davies, M., et al. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro’18.
[5] Z. Zhang, et al., “SpArch: Efficient architecture for sparse matrix multiplication,” HPCA’20.
[6] Jinkwon Kim, et al., “HARP: Hardware-based pseudo-tiling for sparse matrix multiplication accelerator,” MICRO’23.
[7] N. Srivastava, et al., “MatRaptor: A sparse-sparse matrix multiplication accelerator based on row-wise product,” MICRO’20.
[8] G. Zhang, et al., “Gamma: Leveraging gustavson’s algorithm to accelerate sparse matrix multiplication,” ASPLOS’21.
[9] S. Narayanan, et al., “Spinalflow: An architecture and dataflow tailored for spiking neural networks,” ISCA’20
[10] R. Yin, et al., “LoAS: Fully temporal-parallel dataflow for dual-sparse spiking neural networks,” 2024.
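The throughput argument above is simple arithmetic; the following back-of-envelope sketch (our simplification with hypothetical timestep counts, using the ~30% vs. <2% spike rates quoted above) makes it concrete:

```python
# Back-of-envelope model (our simplification, not the authors' measurement):
# on a sparse GEMM accelerator, work scales with the total number of
# non-zeros, i.e. spikes ~ spike_rate * timesteps (per activation).
def relative_work(spike_rate, timesteps):
    """Non-zeros processed per activation, up to a constant factor."""
    return spike_rate * timesteps

prior_snn = relative_work(spike_rate=0.30, timesteps=4)     # ~30% rate, short window
spiked_attn = relative_work(spike_rate=0.02, timesteps=48)  # <2% rate, longer window

# The 15x lower spike rate offsets the 12x longer timestep window.
assert spiked_attn < prior_snn
```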
**Performance on Complex Task)** Thanks for the comment on performance on more complex tasks. To demonstrate that SpikedAttention can reduce energy consumption for more complex tasks, we converted MA-BERT for question answering to SpikedAttention. We trained the existing MA-BERT on the SQuAD dataset [11] for question answering and converted it to spike-based computations. As presented in **Table R1** (in the PDF of the global rebuttal), SpikedAttention achieves an energy reduction of 59.5% with only 1.1% accuracy loss.
* We will add **Table R1** to Section 5.2 (Conversion of BERT to SpikedAttention) in the paper.
[11] Pranav Rajpurkar, et al., “SQuAD: 100,000+ questions for machine comprehension of text.” EMNLP’16.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal.
Comment: I appreciate the authors' responses, which address my concerns. I have increased the rating from 5 to 6. Thank you. | Summary: This paper proposes a transformer-to-SNN conversion method without modifying its attention architecture. To minimize the energy consumption, the authors apply one-spike phase coding, Trace-driven matrix multiplication, and winner-oriented spike shift for softmax. They evaluate their conversion method on vision and NLP tasks including ImageNet classification and GLUE Benchmark.
Strengths: - Solid paper structure and intuitive figures. Easy to follow the paper.
- The proposed methods are well-structured for energy efficiency.
- Comparable performance with low energy consumption
- In addition to the theoretical energy consumption, the authors provide hardware-realistic energy estimates including data movement and the update of membrane potentials.
Weaknesses: - In the vision task, the authors evaluate their method only on the ImageNet dataset. It would be helpful to include other types of datasets, such as neuromorphic datasets.
- In Table 1, the meaning of ‘without ReLU’ is ambiguous. Please elaborate on the details.
- In figure 2(b), weights should be $W_Q$, $W_K$, $W_V$.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In WOSS, the winner-spike is time-shifted and fires at t=0. It seems that the output spike always fires at t=0 regardless of when the winner-spike fires. I wonder if there is any information loss from this shifting.
- It would be better if the authors could provide the firing rate for the ImageNet dataset. This will be a good comparison with direct coding which is generally used in previous SNN transformers [1,2].
- In Appendix C, the authors compare the area and power overhead of the general and WOSS LIF in the softmax layer. How about the performance of general and WOSS LIF? It would be better to connect the overheads and performance.
[1] Zhou, Zhaokun, et al. "Spikformer: When spiking neural network meets transformer." arXiv preprint arXiv:2209.15425 (2022).
[2] Yao, Man, et al. "Spike-driven transformer." Advances in neural information processing systems 36 (2024).
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Neuromorphic Datasets)** Thanks for the comment on expanding the experiments to event-based datasets. In this work, we proposed an ANN-to-SNN conversion method for tasks that provide high accuracy using ANNs. Therefore, when submitting the paper, we did not train ANNs on event-based data to convert them to SNNs. Directly training SNNs might be more suitable for event-driven tasks. However, we agree with the reviewer’s suggestion that demonstrating performance on event-based datasets would make our method more comprehensive. In **Table R1** below, we compare the performance of SpikedAttention (Ours) with the Masked Spiking Transformer (MST) on CIFAR10-DVS [1] and N-CALTECH 101 [2]. Input spikes from each event-based dataset are fed into the model over ten timesteps. The results demonstrate that the proposed SpikedAttention successfully converts ANNs trained on event-based datasets as well. Note that we were unable to report the number of parameters and the energy consumption of MST since the pre-trained MST model on event-based data is not publicly available. Analyzing the energy consumption by considering the neuron model and the data movement of membrane potentials as provided in [3], we achieve 2.4~2.7x higher energy efficiency than ANNs with less than 0.8% accuracy loss on event-driven datasets.
* We will add **Table R1** to Appendix in the paper.
| Dataset | Model | Param (M) | Energy (mJ) | Timestep | Accuracy (%) |
|:---:|:---:|:---:|:---:|:---:|:---:|
| CIFAR10-DVS | ANN (w/ ReLU) | 27.6 | 20.6 | 1 | 88.6 |
| | MST (Unsigned) | - | - | 128 | 86.6 |
| | Ours (Unsigned) | 27.6 | 8.4 | 48 | 88.3 |
| N-CALTECH 101 | ANN (w/ ReLU) | 27.6 | 23.5 | 1 | 91.6 |
| | MST (Unsigned) | - | - | 64 | 84.7 |
| | Ours (Unsigned) | 27.6 | 8.6 | 48 | 90.8 |
**Table R1: Experimental results on event-based datasets**
[1] Li, H et al., CIFAR10-DVS: An event-stream dataset for object classification. Frontiers in Neuroscience, 2017.
[2] Orchard, G et al., Converting static image datasets to spiking neuromorphic datasets using saccades. Frontiers in Neuroscience, 2015.
**Notation in Figure 2)** In Figure 2, $S_Q$, $S_K$, and $S_V$ are the query ($Q$), key ($K$), and value ($V$) within an attention layer in spike form, not the weights required to generate $Q$, $K$, and $V$. The notation ‘$S$’ denotes a spike train. In other words, these are the activations generated by the “weights” you mentioned.
Figure 2(a) shows the attention operation used in [3], which performs the AND operation between spike-encoded $Q$ and spike-encoded $K^T$, followed by a floating-point softmax operation to obtain the attention map. Figure 2(b) shows the attention operation presented in [4], which uses the AND operation between $Q$, $K$, and $V$ in spike form. However, it omits the softmax operation, which prevents it from generalizing to arbitrary transformer models.
[3] Z. Wang, et al., “Masked spiking transformer,” ICCV’23.
[4] M. Yao, et al., “Spike-driven transformer,” NeurIPS’23.
**Information Loss in WOSS)** We appreciate the comment about information loss due to WOSS-based softmax. As mentioned in the paper, the output spikes are shifted to always have a winner spike at t=0. Since the winner spike generated at t=0 is the first output spike, no spike occurs before the winner spike. Therefore, shifting the spikes based on the winner spike does not result in any information loss. Rather, a winner-spike-based shift increases the range of time steps over which spikes can be generated for a given total timestep T. In short, the proposed WOSS method improves the precision of the softmax operation by extending useful time steps rather than losing information.
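As a toy illustration of the argument above (our sketch with invented names, not the paper's implementation), shifting every output spike time by the winner's (earliest) time sends the winner to t=0 while preserving all inter-spike intervals, so no information is lost:

```python
# Hypothetical sketch (names are ours): winner-oriented spike shift.
def woss_shift(spike_times):
    winner = min(spike_times)              # earliest (winner) spike
    return [t - winner for t in spike_times]

times = [3, 5, 9, 4]
shifted = woss_shift(times)

assert min(shifted) == 0                   # winner now fires at t=0
# every inter-spike interval is unchanged, so the shift is lossless
assert all(a - b == c - d
           for (a, c), (b, d) in zip(zip(times, shifted),
                                     zip(times[1:], shifted[1:])))
```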
**Spike Rate on ImageNet)** Thanks for the comment on the spike rate for the ImageNet dataset. When converting Swin-T (‘w/o ReLU’) to an SNN using SpikedAttention on ImageNet, we observed an average spike rate of 1.8%. For comparison, we re-implemented MST [3] and obtained an average spike rate of 30%. This means that our conversion method yields an SNN model that fires much less, resulting in higher energy efficiency.
* We will add this comparison on firing rate between SpikedAttention and MST [3] in the paper.
**Performance of WOSS Hardware)** Thanks for the comment on the performance of the general LIF and the WOSS LIF hardware modules. The general and WOSS LIF hardware modules presented in Appendix C have the same performance, in terms of latency, because they are synthesized and operate at the same clock frequency, i.e., 333MHz. It means that the membrane potential of each neuron, either general or WOSS, gets updated in 3ns.
* We will add this information in Appendix C.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses, which have been helpful in enhancing my understanding of your work. Therefore, I’ve raised the score by 1 point.
Furthermore, I want to clarify weakness #3: I was pointing out that your notation for the weights is $W_Q$ throughout, instead of $W_Q$, $W_K$, and $W_V$.
---
Reply to Comment 1.1.1:
Comment: Thanks for pointing out the typo. I believe you are mentioning the weight notations in Fig. 1(b). We will fix them in the final manuscript. | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and insightful comments which have helped improve our paper. We address a few common points in this response. All other questions are addressed in reviewer specific responses.
**Meaning of ‘w/o ReLU’)** *( for Reviewer XrJp and Reviewer uL88 )* Since a spike in SNNs can only represent {0,1}, binary spikes as an activation map cannot represent negative values, and thus have limited information capacity compared to the activation maps of ANNs. Since CNNs normally have positive activations, with ReLU as the activation function at all conv layers, it is easier to convert them using binary spikes. However, simply converting transformers to SNNs may cause accuracy loss because many layers are not followed by ReLU, e.g., normalization layers and linear layers. Therefore, in the experiments of SpikedAttention, we tested two transformer versions for the ANN-to-SNN conversion: one with ReLU inserted at all layers to always generate positive-only activations (denoted as ‘w/ ReLU’) and another without such modification (denoted as ‘w/o ReLU’). Thus, transformers without additionally inserted ReLUs, but keeping the originally placed ReLUs, are called ‘w/o ReLU’ in the paper. Since most layers keep both positive and negative activations in the original model (i.e., ‘w/o inserted ReLUs’), it shows higher accuracy than the model ‘w/ inserted ReLUs’.
In our Swin-T experiments, for instance, Swin-T without inserted ReLUs is converted to an SNN with positive and negative spikes, making the SNN ternary, similar to [1]. However, this increases the number of spikes in the converted SNN model, increasing the energy consumption (refer to Table 1; 3mJ vs. 1.8mJ). To avoid this, we may add ReLUs at all intermediate layers, i.e., Swin-T w/ ReLU, and convert the model to an SNN (like MST [2]). Then, since all activations are positive, we can easily represent them using binary spikes.
Since our final goal is to convert transformers without any modification or additional training, converting ANN w/o ReLU is desirable. Therefore, we converted MA-BERT w/o (inserted) ReLUs and reported the results in Table 3.
* We will edit the paper accordingly in order not to confuse the readers regarding the term ‘w/o ReLU’.
[1] Y. Guo, “Ternary Spike: Learning Ternary Spikes for Spiking Neural Networks”, AAAI’ 24.
[2] Z. Wang, et al., “Masked spiking transformer,” ICCV’23.
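A minimal numeric illustration of the binary-vs-ternary point above (our sketch, not the paper's actual coding scheme):

```python
# Binary spikes in {0,1} cannot carry the sign of an activation, while
# ternary spikes in {-1,0,1} can (as in Ternary Spike [1]).
import numpy as np

acts = np.array([0.7, -0.3, 0.0, 1.2, -0.9])
binary = (acts > 0).astype(int)        # negative activations collapse to 0
ternary = np.sign(acts).astype(int)    # sign is preserved

assert binary.tolist() == [1, 0, 0, 1, 0]
assert ternary.tolist() == [1, -1, 0, 1, -1]
```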
**Support of GeLU and LayerNorm)** *( for Reviewer xJgY and Reviewer uL88 )* We appreciate your comment about supporting fully spike-based GeLU and LayerNorm. As pointed out by the reviewers, the conversion method proposed in SpikedAttention does not support GeLU and LayerNorm yet. As mentioned in “Conclusion”, SpikedAttention has focused on converting the attention module and the softmax operation to fully spike-based operations, which are fundamental building blocks of all transformer models. Please consider this work as a first yet significant step towards supporting all transformers with fully spike-based computations, including the yet-to-be-conquered GeLU and LayerNorm layers. This remains our future work.
Pdf: /pdf/d0dbb52f2e90c3618b68fd335203253af1a9ad46.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Deep Learning for Computing Convergence Rates of Markov Chains | Accept (spotlight) | Summary: The authors proposed a novel computational method to estimate the convergence rate of general Markov Chains. They utilized neural network to verify if contration drift (CD) holds for a given Markov Chain. As an extension to a prior work [1], the authors provided further theoretical analysis of their methods, and proposed an explicit formula of the convergence rate. Various numerical experiments were conducted to justify the applicability.
Strengths: 1. I'm not familiar with the topic of estimating the convergence rate of a given Markov chain. In general, the idea is very novel to me. Since this problem is very difficult, it seems that the proposed method is indeed promising.
2. The paper provides detailed theoretical analysis, including the explicit formula of convergence rate and sample complexity.
Weaknesses: 1. The paper is somewhat intricate and difficult to follow. I notice that it is built upon [1], but more detailed background and related work should be provided to improve readability. Specifically, the authors should discuss more previous methods, if they exist, and make comparisons theoretically or numerically.
2. In the experiments, the authors only showed the results of their algorithms. I think the convergence rate should be verified by directly simulating the Markov chain and computing the Wasserstein distance between the sampled distribution and the stationary distribution.
3. The authors only conducted experiments in 2D, lacking high dimensional examples.
4. A more fundamental problem is that how to verify whether CD holds in practice in high dimensional case. The inequality is defined in the pointwise sense, which is difficult to verify due to the curse of dimensionality.
[1] Qu, Yanlin, Jose Blanchet, and Peter Glynn. "Computable Bounds on Convergence of Markov Chains in Wasserstein Distance." arXiv preprint arXiv:2308.10341 (2023).
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. What is the relationship between CD and conventional inequalities such as the Poincaré inequality and the log-Sobolev inequality?
2. I wonder about the choices of $U$. In all the experiments, $U$ is fixed as a constant. How is this constant selected? Also, are there other choices of $U$ that can be considered in practice? How do different choices of $U$ lead to different convergence rate estimates? What is the practicability of the sequential neural network learning in Section 3.4?
3. The convergence rate in Theorem 3 depends on $\inf/\sup V$. How is $\inf/\sup V_\theta$ computed in practice?
4. Do there always exist $U,V$ satisfying CD? If not, how can one know that from DCDC? In other words, how can one distinguish a failure of network training from CD simply not holding?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations are pointed out in the weakness and question part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your detailed feedback. In the following, we address the concerns (W1-4) and answer the questions (Q1-4).
W1. In Section 2, we only introduce necessary *analytical* concepts (e.g., random mapping representation, local Lipschitz constant, and contractive drift) to quickly set up for building the *computational* framework. In retrospect, we agree that we should add more background (e.g., Markov chains, stationary distributions, and traditional convergence analysis) to enhance the readability. This will be done in the revision.
Since DCDC is the first computational framework to bound the convergence of general state-space Markov chains, there are currently no other numerical methods available for direct comparison. Since analytical methods can only handle *stylized* (structured) Markov chains, the *realistic* (less structured) examples considered in this paper are clearly beyond their reach.
W2. Given a non-trivial Markov chain $X$, simulating $X_n$ for large $n$ is often very expensive, simulating $X_\infty$ directly is typically not feasible, and the convergence rate ($r<1$ with $C>0$ such that $W(X_n,X_\infty)\leq Cr^n$) is determined by the *infinite* sequence $W(X_0,X_\infty)$, $W(X_1,X_\infty)$, …. These three reasons make it impossible to reliably estimate the convergence rate via direct simulation in finite time.
Now that we have DCDC to generate a convergence bound $W(X_n,X_\infty)\leq Cr^n$, whose correctness is theoretically guaranteed, it is not necessary to verify the bound by estimating $W(X_n,X_\infty)$. Although estimating $W(X_n,X_\infty)$ can reveal whether the above bound is tight, it is often impractical due to the first two reasons mentioned earlier. In particular, for the examples in this paper, $X_\infty$ is unknown. In fact, complex dynamics + intractable equilibrium = notoriously hard convergence analysis (without DCDC).
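For intuition, here is a toy illustration we constructed (an AR(1) chain, not one of the paper's examples) where direct estimation of $W(X_n,X_\infty)$ is possible because the stationary law is known in closed form; even then, the tiny true distance is swamped by Monte Carlo noise:

```python
# For intuition only (not the paper's method): with a toy AR(1) chain
# X_{n+1} = a*X_n + Z, Z ~ N(0,1), the stationary law N(0, 1/(1-a^2)) is
# known, so W1(X_n, X_inf) CAN be estimated by simulation and checked
# against a bound W1 <= C*r^n with r = |a|. The estimate below is
# dominated by Monte Carlo noise (~m^{-1/2}), illustrating why very
# small distances are hard to estimate directly.
import numpy as np

rng = np.random.default_rng(0)
a, n_steps, m = 0.5, 10, 200_000

x = np.zeros(m)                      # m i.i.d. chains started at X_0 = 0
for _ in range(n_steps):
    x = a * x + rng.standard_normal(m)

x_inf = rng.standard_normal(m) / np.sqrt(1.0 - a**2)

# Empirical 1-Wasserstein distance between equal-size 1D samples:
w1 = float(np.mean(np.abs(np.sort(x) - np.sort(x_inf))))
```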
W3. We plan to apply DCDC to high-dimensional chains in future work. In the current paper, we focus on 2D examples to verify the effectiveness, visualize the CDE solution, and gain valuable insights from the shape (e.g., a sunken surface, a sloping plane, and a wedge-like curve).
W4. The pointwise verification of an equality/inequality in a high-dimensional space is indeed a fundamental issue. These issues, however, are common to other areas in which NN solvers of functional equations have demonstrated huge success [5]; for instance, neural-network-based PDE solvers face the same issue. While the sample complexity literature typically uses an L2-based criterion (based on suitable Sobolev norms), see, e.g., [6], typical applications often require approximations of the PDE solution with uniform convergence guarantees on a specific set of points. While we expect dimension-dependent complexity rates as in the PDE literature, we note that our task is easier because we only need to prove an inequality, not an equality. However, given the successful record and vast literature of PINNs, we believe that our method can be at least equally successful in a wide range of settings, and we plan to pursue a sharp sample complexity theory similar to that developed in the PDE literature in future research.
Q1. There are two primary classes of methods to bound the convergence of Markov chains: drift & minorization/contraction conditions (Chapters 9-20 of [3]) and spectral/operator theory (Chapter 22 of [3]). The Poincaré and log-Sobolev inequalities belong to the latter, while CD in [1] advances the former. “The former have been the most successful for the study of stability and convergence rates, despite the inherent difficulty of constructing an appropriate Lyapunov function” [2]. DCDC leverages deep learning to tackle this inherent difficulty (and more).
Q2. When $U$ is a constant, the constant is theoretically not important as CD is linear in $V$. However, in practice, we can use this constant to control the scale of $V$. In the SGD example, $KV=V-1$ leads to $\max V\approx1000$, so we use $0.1$ to scale $V$ down, which turns out to be easier to learn/approximate.
When $U$ is not a constant, it modifies the underlying metric (Section 2 of [1]). When a chain is expansive ($d(f(x),f(y))>d(x,y)$), we may find $U$ to make it non-expansive ($d_U(f(x),f(y))\leq d_U(x,y)$). The examples in this paper are already non-expansive, so we set $U$ to be constant. The application of DCDC to expansive chains is left for future research where sequential CDE solving (Section 3.4) becomes crucial.
Q3. In this paper, the infimum and supremum are computed over a mesh grid. The error can be controlled if the Lipschitz constant of the neural network is estimated (e.g., [4]).
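A hedged sketch of the grid-based extrema computation described above (our illustration with a toy stand-in for $V_\theta$, not the authors' code):

```python
# Compute inf/sup of a function over a mesh grid on [0,1]^2. If L bounds
# the Lipschitz constant (e.g. estimated as in [4]) and h is the grid
# spacing, every point of the square lies within h*sqrt(2)/2 of a grid
# node, so the grid extrema are within L*h*sqrt(2)/2 of the true extrema.
import numpy as np

def grid_extrema(V, n=101, L=None):
    xs = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(xs, xs)
    vals = V(X, Y)
    h = xs[1] - xs[0]
    err = None if L is None else L * h * np.sqrt(2.0) / 2.0  # worst-case gap
    return float(vals.min()), float(vals.max()), err

# Toy stand-in for a trained network V_theta (Lipschitz constant <= 0.5*sqrt(2)):
V = lambda x, y: 1.0 + 0.5 * np.sin(x + y)
vmin, vmax, err = grid_extrema(V, n=201, L=0.5 * np.sqrt(2.0))
```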
Q4. Given $U$, $V$ is an expected discounted cumulative reward (Remark in Section 2), so it always exists but can be infinite. When $V=\infty$, the neural network in DCDC diverges to infinity. When it does not diverge to infinity, the training is successful if we verify the CDE.
[1] Qu, Y., Blanchet, J., Glynn, P., “Computable bounds on Convergence of Markov chains in Wasserstein distance”, 2023
[2] Andrieu, C., Lee, A., Power, S., Wang, A.Q., “Comparison of Markov chains via weak Poincare inequalities with application to pseudo-marginal MCMC”, 2022
[3] Douc, R., Moulines, E., Priouret, P., Soulier, P., “Markov Chains”, 2018
[4] Scaman, K., Virmaux, A., “Lipschitz regularity of deep neural networks: analysis and efficient estimation”, 2018
[5] Raissi, M., Perdikaris, P., Karniadakis, G.E., “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations”, 2018
[6] Lu, Y., Blanchet, J., Ying, L., “Sobolev Acceleration and Statistical Optimality for Learning Elliptic Equations via Gradient Descent”, 2021
---
Rebuttal Comment 1.1:
Comment: Thanks for your feedback, which addresses most of my concerns. I still believe some of the additional experiments mentioned in W2 could be conducted to strengthen the paper. I understand the expensive cost of simulating a general non-trivial Markov chain, but you could try some easier ones. For instance, in the SGD-for-logistic-regression case, the computational cost is low, so one can run sufficiently many steps (e.g., 100k) to get a sufficiently accurate estimate of $X_\infty$. Then the true convergence rate can be computed (or at least approximated) and compared with the theoretical results. Nevertheless, I think this work is indeed interesting and constructive after reading the author rebuttal. I'm happy to raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply. The suggestion of trying easier Markov chains for exact rates is very helpful and we will include this comparison. Since we mainly care about whether our bounds are good for reasonably large $n$’s (but not the first several $n$’s), given the exponential convergence, we may need to estimate some very small $W(X_n,X_\infty)$, which is pretty hard in general, but we could do it efficiently in a non-trivial multidimensional process, using importance sampling. | Summary: The paper studies the problem of convergence rate analysis for general state-space Markov chains. They propose Deep Contractive Drift Calculator (DCDC), the first general-purpose sample-based algorithm for bounding the convergence of Markov chains. There are two components, a theoretical one that utilize an auxiliary function (a solution of a certain equation) to bound convergence, and an empirical one that utilize deep neural networks to approximate the auxiliary function. Furthermore, the authors provide statistical guarantees on the approximation.
Strengths: 1. Extremely well-written paper. The abstract gives a quite clear overview of the paper, pointing out the main contributions and the importance of the problem addressed without exaggeration. The wording in the paper is succinct and objective, yet pleasant to read. I'm not an expert in the field, but I quickly understood the importance of the paper. It appears to me that the whole presentation is rather mature and professional.
2. The authors study a problem of fundamental importance and give a general solution that is simple yet effective, with sound guarantees. The authors borrow the idea of Lyapunov functions approximated by neural networks and successfully apply it to the convergence problem of Markov chains, which in my opinion is of some fundamental importance. Furthermore, the authors provide statistical guarantees on the approximation, forming a rather complete story.
Weaknesses: I'm not able to find effective weaknesses in this paper.
Technical Quality: 4
Clarity: 4
Questions for Authors: No.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: I'm not seeing any limitations of essence not addressed in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your positive feedback. Please feel free to read the other rebuttals.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for your positive feedback on my positive feedback. I'll read the other rebuttals. | Summary:
The paper introduces the Deep Contractive Drift Calculator (DCDC), a novel sample-based algorithm for bounding the convergence rates of general state-space Markov chains to stationarity in Wasserstein distance. The method leverages deep learning to solve the Contractive Drift Equation (CDE), providing explicit convergence bounds. The paper includes theoretical analysis, sample complexity, and empirical validation on realistic Markov chains.
Strengths:
1. **Innovative Approach**: The use of deep learning to solve the Contractive Drift Equation (CDE) is novel and bridges a gap between deep learning and traditional mathematical analysis.
2. **Theoretical Rigor**: The paper provides thorough theoretical foundations, including the derivation of the CDE and detailed proofs of convergence bounds.
3. **Practical Implications**: The approach is validated on realistic Markov chains, demonstrating its applicability to problems in operations research and machine learning.
4. **Clarity of Exposition**: The paper is well-written, with clear explanations of the methodology and theoretical results.
Weaknesses:
1. **Computational Complexity**: The approach may be computationally intensive, particularly for high-dimensional state spaces. More discussion on computational efficiency and scalability would be beneficial.
2. **Comparison with Existing Methods**: While the paper discusses theoretical advantages, empirical comparisons with existing state-of-the-art methods for convergence analysis are limited.
3. **Generality**: The method is demonstrated on specific types of Markov chains. Extending the empirical validation to a broader range of applications would strengthen the paper.
4. **Sample Complexity**: Although the sample complexity is analyzed, practical guidelines for choosing sample sizes in different scenarios would enhance the utility of the method.
Technical Quality: 3
Clarity: 3
Questions for Authors:
1. **Computational Complexity**:
- Could you provide more details on the computational complexity of the DCDC algorithm, particularly for high-dimensional state spaces? How does the method scale with increasing dimensions?
2. **Comparison with Existing Methods**:
- How does the DCDC method compare empirically with existing state-of-the-art methods for bounding convergence rates of Markov chains? Are there specific scenarios where DCDC significantly outperforms these methods?
3. **Generality**:
- The method is validated on specific types of Markov chains. Do you foresee any challenges in applying DCDC to other types of Markov chains, such as those with more complex dynamics or in higher-dimensional spaces?
4. **Sample Complexity**:
- While you provide a theoretical analysis of sample complexity, can you offer practical guidelines or heuristics for choosing sample sizes in different applications? How sensitive is the method to the choice of sample size?
5. **Practical Applications**:
- Can you discuss potential practical applications of the DCDC method in more detail? For instance, how might this method be applied in real-world scenarios such as reinforcement learning or stochastic optimization?
6. **Assumptions and Limitations**:
- The paper discusses some assumptions and limitations. Could you elaborate on the key assumptions that are critical for the theoretical results, and how robust the method is to violations of these assumptions?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: no
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your valuable feedback and positive view about our paper. In the following, we address the concerns (1-4) and answer the questions (1-6).
1&4 *Computational and Sample Complexity*: The computational complexity, as with any typical deep learning method involving general non-convex optimization, is largely an open problem. There are, however, recent results for regularized ReLU architectures which can be studied to global convergence using convex relaxations (see [1]). We are interested in exploring the applications of these results in future work. We believe that we can use the parameter $\epsilon$ to control the complexity of these convex relaxations relative to the inequality-gap in the CD bound induced in terms of the parameter $\epsilon$. It is important to keep in mind (as we mention in the paper) that our task is simpler than finding an exact solution, because we care about a one-sided inequality. In terms of sample complexity, the curse of dimensionality is also, unfortunately, an issue that plagues virtually any algorithm that learns a function by sampling [2]. Our goal in this paper is to introduce a novel methodology (the first of its type) that enables the use of deep learning to address this important problem, but we admit that obtaining sharp sample complexity bounds is an important problem that we are also leaving for future research. We envision a sample complexity theory that depends on $\epsilon$ in such a way that as $\epsilon$ is small we recover the sample complexity guarantees that are expected if we know that the CDE is similar to the analysis in [3]. We also leave this important topic for future research.
2 *Comparison with Existing Methods*:
We are not aware of other data-driven computational frameworks to bound the convergence of general state-space Markov chains. Since analytical methods can only handle *stylized* (structured) Markov chains, the *realistic* (less structured) examples considered in this paper are clearly beyond the reach of existing methods (which are based on analytical developments that are not “automatic” or computer-based). We can, however, provide a discussion of CD versus other existing methods for building such inequalities per se; this discussion will summarize the comparison presented in [4].
3 *Generality*:
Markov chains find important applications in a wide range of disciplines (including Computer Science, Economics, Electrical Engineering, Management Science, Operations Research etc.). We use non-trivial examples in Operations Research (e.g., queueing networks) and Machine Learning (e.g., stochastic gradient descent) to illustrate the applicability of the method. For complex Markov chains (e.g., reflected Brownian motions) in high dimensional spaces, one challenge is that contraction may occur along some but not all directions (e.g., $|\partial f/\partial x_1|<1$ or $|\partial f/\partial x_2|<1$ but not both), resulting in a local Lipschitz constant of one. In our ongoing work, we will introduce vector-valued CDs to address this issue.
5 *Practical Applications*:
Thanks for this important question. Regarding the application to stochastic optimization: while we include an SGD example, admittedly this is just to showcase the applicability of the method. While the example that we provide already saturates what can be done with “standard methods” (which, again, virtually all involve pen-and-paper approaches), we recognize that a broader ablation is needed to fully understand the potential of this approach. Regarding the application to reinforcement learning, note that typical results mostly focus on finite state spaces, using assumptions that are rather difficult to verify in general state spaces. Our methods open up the development of algorithms that satisfy CD-type conditions. We will also mention this in the camera-ready version.
6 *Assumptions and Limitations*:
The current paper focuses on compact spaces. The extension to non-compact spaces is left for future research. On compact spaces, the key assumption is CD itself (i.e., CD has a solution). As discussed at the end of Section 2, the CDE solution is an expected discounted cumulative reward, so it exists but can be infinite. When it is infinite, the chain has too little contraction to converge in Wasserstein distance. In this case, the neural network in DCDC diverges to infinity, which can be viewed as a certificate of non-convergence.
[1] Ergen, T., Pilanci, M., “Global optimality beyond two layers: Training deep ReLU networks via convex programs”, 2021
[2] Raissi, M., Perdikaris, P., Karniadakis, G.E., “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations”, 2018
[3] Lu, Y., Blanchet, J., Ying, L., “Sobolev Acceleration and Statistical Optimality for Learning Elliptic Equations via Gradient Descent”, 2021
[4] Qu, Y., Blanchet, J., Glynn, P., “Computable bounds on Convergence of Markov chains in Wasserstein distance”, 2023 | null | null | Rebuttal 1:
Rebuttal: * We appreciate the feedback and comments of all of the referees. We note that two out of the three reports rate the paper with an evaluation of 8 (strong accept) whereas one of the referees has some concerns providing an evaluation of 4 (borderline reject).
* We try to focus most of the response below on answering the questions and addressing the concerns raised. The main issue has to do with the existence of a solution to the CD: the answer is yes, but the solution may diverge, and this can be practically detected during training. The second issue has to do with complexity results and the fact that learning a function in high dimensions is subject to the curse of dimensionality. But this issue is present in every single application of deep learning, which in the end involves approximating high-dimensional functions based on a limited sample.
* The bottom line is that this paper is the first one that enables the use of deep learning to estimate rates of convergence to stationarity for Markov chains that take values on a general state space. We acknowledge the limitations and we’ll be happy to add more discussion (along the lines of what we include in this report, taking as a template the literature on solutions to PDEs based on deep learning techniques). | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Harmonizing Visual Text Comprehension and Generation | Accept (poster) | Summary: This work presents TextHarmony for simultaneously comprehending and generating visual text. The paper performs theoretical and experimental analysis of the performance degradation due to the inherent inconsistency between vision and language modalities. A MoE- and LoRA-based module, Slide-LoRA, is then proposed to solve this problem by dynamically applying modality-specific and modality-independent LoRA experts. The experimental results indicate TextHarmony achieves performance comparable to modality-specific fine-tuning.
Strengths: 1. The unification of image understanding and image generation into the visual text domain is a novel approach that broadens the scope of LMM applications.
2. The results presented in this paper look good and demonstrate the effectiveness of Slide-LoRA.
3. The paper is well organized, with reasonable motivation and insights.
Weaknesses: 1. To my knowledge, there exist other multimodal generative models like DreamLLM[1] and Emu[2]. It would be better to compare with these methods.
2. The authors put related work into the supplementary material. It would be better to briefly summarize the background and representative works in the main text.
[1] Dong, Runpei, et al. "Dreamllm: Synergistic multimodal comprehension and creation." ICLR2024
[2] Sun, Quan, et al. "Emu: Generative pretraining in multimodality." ICLR2024
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could TextHarmony generate more complex glyphs such as Chinese characters?
2. Have you tried using OCR tools to further polish the captions of DetailedCaps-100K? For example, removing the images whose captions are inconsistent with the OCR results.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Please refer to weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable comments and the approval of contributions to our work. Your concerns are addressed as follows:
**W1: Compare TextHarmony with DreamLLM and Emu.**
In Table E, we compare TextHarmony with DreamLLM and Emu in terms of both visual comprehension and visual generation. As we can see, TextHarmony **performs better than** DreamLLM and Emu on visual text comprehension and generation.
> #### Table E: Comparison between TextHarmony, DreamLLM and Emu.
| | DocVQA | TabFact | TextVQA | AnyText-Bench | Mario-Eval |
| :--- | :----: | :---: | :---: | :---: | :---: |
| Emu | 13.2 | 56.3 | 22.1 | 0.13 | 0.31 |
| DreamLLM | 32.7 | 60.4 | 41.8 | 0.18 | 0.30 |
| TextHarmony | **47.1** | **62.4** | **60.2** | **0.75** | **0.35** |
**W2: Briefly summarize the background and representative works in the main text.**
Thanks for your advice. We will briefly summarize the background and representative works of our paper in the main text of the revised version.
**Q1: Could TextHarmony generate more complex glyphs such as Chinese characters?**
Yes. The training data of TextHarmony in the manuscript only involves English, so it cannot generate Chinese characters. However, the training data of TextHarmony-Align (please refer to the Global Author Rebuttal) contains Chinese characters, which makes it capable of generating them. We show some examples of Chinese character generation in **Figure C** of the PDF file submitted during rebuttal.
**Q2: Using OCR tools to further polish the captions of DetailedCaps-100K**
Thanks for the suggestion. We use DBNet [1] for text detection and Parseq [2] for text recognition. We filter out the captions that have an over-50% mis-matching rate with the OCR results. Specifically, a detected text line is called "mis-matching" if it cannot be found in the caption. After the above procedure, a total of 1839 captions are filtered out from DetailedCaps-100K, suggesting that there is room for improvement in the dataset. Restricted by the rebuttal time, we would like to leave the specific impacts of this strategy on the performance of TextHarmony for future work.
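The filtering rule described above can be sketched in plain Python as follows (an illustrative assumption of the procedure: the function names and the case-insensitive substring matching criterion are ours, while the actual pipeline uses DBNet/Parseq outputs):

```python
def mismatch_rate(ocr_lines, caption):
    """Fraction of detected text lines that cannot be found in the caption."""
    if not ocr_lines:
        return 0.0
    caption_lower = caption.lower()
    missing = sum(1 for line in ocr_lines if line.lower() not in caption_lower)
    return missing / len(ocr_lines)

def filter_captions(samples, threshold=0.5):
    """Keep (ocr_lines, caption) pairs whose mismatch rate is at most `threshold`."""
    return [(lines, cap) for lines, cap in samples
            if mismatch_rate(lines, cap) <= threshold]

# Toy example: the second caption misses both detected lines and is dropped.
samples = [
    (["OPEN", "24 HOURS"], 'A neon sign reading "OPEN 24 hours" at night.'),
    (["SALE", "50% OFF"], "A storefront window with mannequins."),
]
print(len(filter_captions(samples)))  # 1
```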
---
> [1] Real-time Scene Text Detection with Differentiable Binarization. AAAI 2020.
> [2] Scene Text Recognition with Permuted Autoregressive Sequence Models. ECCV 2022.
---
Rebuttal Comment 1.1:
Comment: I have read the authors' rebuttals and other reviewers' comments, and have a few questions for the authors. The DreamLLM and Emu papers don't include some of the benchmark results in Table E. Did you reproduce them yourselves, and which version of Emu did you use? Besides, in the global rebuttal, TextHarmony-Align outperforms Monkey and Anytext , so was the training data changed from Mario-Laion to Anytext-3m or was it all used?
---
Reply to Comment 1.1.1:
Comment: Thanks for your timely reply. Your concerns are addressed as follows:
**Q3: Did you reproduce them yourselves, and which version of Emu did you use?**
Yes. We reproduce the results of DreamLLM and Emu based on the official repositories. We use the instruction-tuned Emu, i.e., Emu-I for fair comparison, since TextHarmony has also gone through the instruction tuning stage.
**Q4: Was the training data changed from Mario-Laion to Anytext-3m or was it all used?**
The training data was changed from Mario-Laion to Anytext-3m for fair comparison. | Summary: This paper introduces a multimodal generative model (TextHarmony) for unified comprehension and generation of visual text. To overcome the performance degradation brought by modality inconsistency, the authors propose the slide-lora, which partially decouples the multimodal generation space. An image-text caption dataset, DetailedTextCaps-100K, is also developed to enhance visual text generation capabilities.
Strengths: 1. TextHarmony involves visual text comprehension and generation in a single model for the first time. It achieves comparable performance to modal-specific models. It is a solid step forward for multimodal task unification in visual texts.
2. The analysis of the modality inconsistency problem in the multi-modal generation is reasonable and the proposed solution (SlideLoRA) is well motivated, novel and effective.
Weaknesses: a) The connection between this work and visual text is not clearly stated. I understand the focus of this work is to construct a multimodal generative model, but it would be helpful to elaborate specifically how this work achieves multimodal generation in the field of visual text.
b) It would be helpful to report the model size and the inference speed of TextHarmony.
Technical Quality: 4
Clarity: 3
Questions for Authors: Line 74 ‘a versatile large multimodal’. Do you mean ‘a versatile large multimodal model’ ?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time to review our paper, the valuable comments and the approval of contributions to our work. And we are looking forward to further discussions with you. Your concerns are addressed as follows:
**W1: The connection between this work and visual text is not clearly stated. ...**
To improve the performance of visual text comprehension, we increase the resolution of the input images (specifically, from 448 to 896). Then, in the pre-training phase, we use images with rich OCR annotations (e.g., DocStruct-4M) to enhance the text perception abilities of the model. For visual text generation, we randomly mask the text portions of the image and force the model to generate these portions in order to focus training on the generation of text elements.
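The random text-region masking described above could look like the following sketch (an illustration only — the real implementation operates on image tensors, and the box format (x0, y0, x1, y1) with exclusive upper bounds is our assumption):

```python
import random

def mask_text_regions(image, boxes, fill=0, keep_prob=0.5, rng=None):
    """Randomly blank out pixels inside OCR boxes so the model must regenerate them.
    `image` is an H x W grid (list of lists); each box is left visible with
    probability `keep_prob`, otherwise its pixels are set to `fill`."""
    rng = rng or random.Random(0)
    masked = [row[:] for row in image]
    for x0, y0, x1, y1 in boxes:
        if rng.random() < keep_prob:
            continue  # leave this text region visible
        for y in range(y0, y1):
            for x in range(x0, x1):
                masked[y][x] = fill
    return masked

image = [[1] * 4 for _ in range(3)]
# With keep_prob=0.0 every listed box is masked:
print(mask_text_regions(image, [(1, 0, 3, 2)], keep_prob=0.0))
# [[1, 0, 0, 1], [1, 0, 0, 1], [1, 1, 1, 1]]
```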
**W2: It would be helpful to report the model size and the inference speed of TextHarmony.**
Thanks for the suggestion. TextHarmony has **15.4 billion parameters** in total. On AnyText-Bench, the generation of each image costs **1340ms** on average. On DocVQA, the inference time for each output text token is **92ms** on average.
**W3: Line 74 ‘a versatile large multimodal’. Do you mean ‘a versatile large multimodal model’ ?**
Yes. We will modify it in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thanks to the clarification, my concerns are nicely addressed. Like I mentioned within the Strengths, TextHarmony is a solid work with reasonable motivation and impressive experimental results, and I am inclined to accept it.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your timely feedback and the strong support for our work. We are committed to incorporating all of the clarifications you suggested in the next version of our paper. | Summary: TextHarmony is a versatile multimodal generative model designed to comprehend and generate visual text. Traditional methods struggle with the inconsistency between vision and language modalities, leading to performance issues. TextHarmony overcomes this with Slide-LoRA, which combines modality-specific and modality-agnostic LoRA experts, ensuring a unified generative process. The model is enhanced by a high-quality image caption dataset, DetailedTextCaps-100K. Experiments show that TextHarmony, with only a 2% increase in parameters, matches modality-specific fine-tuning performance and improves visual text comprehension and generation tasks.
Strengths: + Explore the possibility of integrating visual text comprehension and generation
+ The proposed Slide-LoRA method is effective to harmonize the training of different modalities and tasks
+ Experimental results show that the combination is possible and the effectiveness of the proposed model
Weaknesses: - The motivation of combining visual text comprehension and generation is not clear. Only being the first try (maybe) to integrate the two tasks may not be convincing enough.
- As shown in Table 1 and Table 3, the performance of the proposed method is not superior to existing baselines, e.g., TextHarmony vs. Monkey for comprehension and TextHarmony vs. AnyText for generation. It may not be easy to identify the advantages of combining visual text comprehension and generation. From this point of view, this work seems to simply try combining these two aspects, and does not provide valuable research insights.
- Due to the limited performance improvement, human evaluation becomes more necessary to discriminate between the proposed method and existing baseline models.
- More explanations or evidence are expected to support certain arguments, e.g., 1) “the optimization of TextHarmony is tremendously difficult due to inconsistent training objectives” in line 106, 2) “mutually exclusive” in line 108, and 3) why classifier and denoising problems are inconsistent as discussed in line 111?
- Some unclear experimental settings, such as “w/o Slide-LoRA”, n and s in Table 4.
- From the qualitative results in Figure 6, it seems that there is no evident improvement of TextHarmony compared with AnyText.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to the above weaknesses.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The interleaved generation ability of the proposed approach is unknown.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your careful review and valuable comments. We are looking forward to further discussions with you. Your concerns are addressed as follows:
**W1: The motivation ... is not clear ... first try ... may not be convincing ...**
Our motivation is more than just addressing gaps in unified visual text comprehension and generation.
A unified model combining visual text comprehension and generation is indispensable in many aspects:
- In many scenarios like multi-modal story generation [1], multi-modal document generation [2], the model is required to **generate coherent multimodal outputs**. Separate models cannot guarantee the contextual consistency of multimodal content.
- Using separate models **increases deployment and maintenance costs**.
- Recent works like GPT-4o[3] and Chameleon[2] show that the unification of visual and text generation expands the scope of MLLMs and enables a more unified process of multimodal data. Please also refer to reviewer LmXP "S1: The unification ... visual text domain is a novel approach that **broadens the scope of LMM applications**".
Concerning the unsatisfactory performance of previous multi-modal generation models in visual text (refer to Table 1 and 3 in the manuscript), we focus on perception, comprehension and generation of visual text and **address the issue of multimodal generation inconsistency**.
**W2 (1): The performance ... is not superior ...**
Please refer to the response to **Common Issue 1** in our Global Author Rebuttal.
**W2 (2): ... not easy to identify the advantages of combining ...**
- A unified generation model is indispensable (detailed in response to W1).
- The modality inconsistency brings performance degradation. We relieve this degradation so that TextHarmony "achieves comparable performance to modal-specific models" (refer to S1 of Reviewer r95i).
- Aligning the settings with Monkey and AnyText, TextHarmony-Align **outperforms Monkey and AnyText** (refer to **Response (2), Table B and C** in the Global Author Rebuttal).
- Interleaved multimodal generation abilities (Figure B in the PDF file).
**W2 (3): ... seems to simply try ... does not provide valuable research insights.**
Our work is **not** simply trying to combine visual text comprehension and generation.
- Besides unifying visual text comprehension and generation, we demonstrate that the **inconsistency in the multimodal generative space** is a cause of underperformance.
- The proposed Slide-LoRA shows 2.5% gains in comprehension and 4.0% in generation tasks, demonstrating its effectiveness against modal inconsistency.
- Our contribution is also acknowledged by other reviewers ("The analysis of the modality inconsistency problem ... is **reasonable** ... SlideLoRA is **well motivated, novel and effective**" by Reviewer r95i, "The results ... **demonstrates the effectiveness** of Slide-LoRA" by Reviewer LmXP).
**W3: Due to limited performance improvement, human evaluation ... necessary ...**
Slide-LoRA achieves 2.5% gains in comprehension and 4.0% in generation tasks compared to the baseline. TextHarmony-Align outperforms Monkey and AnyText (**Tables B and C** in the **Global Author Rebuttal**). As a result, we think the improvement is **not limited**. Regardless, we agree with you about the importance of human evaluation (**Table D**).
**W4: More explanations or evidence are expected to support certain arguments**
(1): "the optimization ... difficult due to inconsistent training objectives" in line 106
We draw this conclusion mainly from our pilot experiment (Figure 2 in the manuscript). The performance declines a lot (4%~8%) when simultaneously optimizing comprehension and generation, which is also observed in studies like [2] (i.e., the performance of Chameleon-MultiTask is much worse than Chameleon-SFT).
(2): "mutually exclusive" in line 108
Here "mutually exclusive" refers to the fact that text and images are naturally inconsistent in many aspects (information density, data structure, information granularity, etc), thus text generation and image generation require different (exclusive) feature and generation space. Thus, modality alignment has been a fundamental issue in multimodal learning[1, 2, 4].
(3): why classifier and denoising ... inconsistent ... line 111
Previous studies [5,6] show the inconsistency of classification and regression tasks in deep learning. For example, in object detection, a double-head architecture splitting classification and regression has better performance than single-head[5]. In our work, the optimization of TextHarmony contains classification (text generation) and regression (image generation), which are also inconsistent and more difficult given that it is a multi-modal generation issue.
**W5: Some unclear experimental settings, such as "w/o Slide-LoRA", n and s in Table 4.**
"w/o Slide-LoRA" refers to training TextHarmony without Slide-LoRA module. "n" refers to the total number of lora experts in Slide-LoRA. "s" represents RT, RI and RS (Line 120-122), each contains "s" lora experts, i.e., s=n/3. We will add them in the revised version.
**W6: ... there is not any evident improvement of TextHarmony compared with AnyText.**
We would like to clarify that we do not claim the performance of TextHarmony is better than AnyText. On the issue of comparisons between model performances, please refer to **Response (2), Table B and C**.
**L1: The interleaved generation ability ... is unknown.**
TextHarmony supports generating interleaved sequences, as showcased in Figure B of the PDF file.
---
>[1] SEED-Story: Multimodal Long Story Generation with Large Language Model. Arxiv 2024
>[2] Chameleon: Mixed-Modal Early-Fusion Foundation Models. Arxiv 2024
>[3] Hello GPT-4o. OpenAI, 2024
>[4] Learning transferable visual models from natural language supervision. ICML 2021
>[5] Revisiting the Sibling Head in Object Detector. CVPR 2020
>[6] D2det: Towards high quality object detection and instance segmentation. CVPR 2020
---
Rebuttal Comment 1.1:
Title: Thanks for the authors' rebuttal
Comment: Thanks for carefully considering my comments and adding the human evaluation results. Some of my concerns have been addressed, e.g., the performance comparison and certain unclear arguments. Therefore, I raise my score to 5.
However, compared with the existing work combining comprehension and generation, this work seems not to provide valuable insights (e.g., the synergetic relations in DreamLLM) in terms of research, even though some applications may be promoted by unifying these two.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your timely feedback and raising your score. We would like to clarify our research insights.
Our valuable research insights mainly lie in the word "Harmonizing". In addition to unifying visual text perception, comprehension and generation in a single model, we reveal the issue of modal inconsistency in multimodal generation through comprehensive observation (Emu2 [1], Chameleon [2], MM-Interleaved [3]) and experiments (Figure 2 in the manuscript). The different performance of DreamLLM may be due to the optimization of the model structure and training strategy. We then propose an effective solution, Slide-LoRA, which uses multiple LoRA experts to partially decouple the generative space. That's why "Harmonizing" rather than "Unifying" is chosen as the title of the work.
We are open to different perspectives and further discussion. Thank you again.
> [1] Generative Multimodal Models are In-Context Learners. CVPR 2024
> [2] Chameleon: Mixed-Modal Early-Fusion Foundation Models. Arxiv 2024
> [3] MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer. Arxiv 2024. | Summary: This work presents TextHarmony, a unified and versatile multimodal generative model proficient in comprehending and generating visual text. Simultaneously generating images and texts typically results in performance degradation due to the inherent inconsistency between vision and language modalities.
Strengths: - This work introduces TextHarmony, a versatile large multimodal that allows for the unification of diverse visual text perception, comprehension, and generation tasks. TextHarmony performs comparably to specialized models in visual text perception, comprehension, generation, and editing
- The proposed Slide-LoRA dynamically aggregates modality-specific and modality agnostic LoRA experts, partially decoupling the multimodal generative space.
- A high-quality dataset of detailed visual text image captions (DetailedTextCaps-100K) is constructed with a closed-source MLLM to enhance the performance of visual text generation
Weaknesses: - Details of the Modal-Aware Gating are not given?
- Results of visual text editing and generation in Table 2 demonstrate that Anytext achieves better performance.
- The case in Fig. 5, "Good Time" and "Summer Love", are not correctly synthesized, and the performance is worse than Textdiffuser2 and Anytext.
Technical Quality: 3
Clarity: 3
Questions for Authors: see weakness
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: see weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper. If you have any further comments or suggestions, please let us know. Your concerns are addressed as follows:
**W1: Details of the Modal-Aware Gating are not given?**
As stated in **Lines 123-127**, the Modal-Aware Gating is an MLP module containing two linear layers. It determines whether processing the input token sequence requires knowledge of text generation or image generation. In the multi-modal pretraining stage, the Modal-Aware Gating is trained with $\gamma$=1 in text generation and $\gamma$=0 in image generation, according to **Equation (3)** in the manuscript. We will update this section in the revised version.
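The gating-and-mixing mechanism described above can be sketched as follows (a toy illustration only: Equation (3) is not reproduced in this thread, so the exact mixing rule — γ weighting a text expert, 1−γ an image expert, plus an always-on modality-agnostic expert — and the tiny fixed weights are our assumptions):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gate(x, w1, w2):
    """Two-linear-layer gating MLP with a ReLU in between,
    producing a scalar gamma in (0, 1) from the input features."""
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return sigmoid(sum(w * h for w, h in zip(w2, hidden)))

def slide_lora_mix(x, text_expert, image_expert, shared_expert, gamma):
    """gamma -> 1 routes toward the text-generation expert, gamma -> 0 toward
    the image-generation expert; the modality-agnostic expert always contributes."""
    return [gamma * t + (1.0 - gamma) * i + s
            for t, i, s in zip(text_expert(x), image_expert(x), shared_expert(x))]

# Toy weights: the gate leans toward "text generation" for this input.
x = [1.0, -0.5]
gamma = gate(x, w1=[[2.0, 0.0], [0.0, 2.0]], w2=[1.5, 1.5])  # sigmoid(3.0) ~ 0.95
out = slide_lora_mix(x,
                     text_expert=lambda v: [2.0 * vi for vi in v],
                     image_expert=lambda v: [-vi for vi in v],
                     shared_expert=lambda v: [0.1 for _ in v],
                     gamma=gamma)
print(round(gamma, 3), [round(o, 3) for o in out])
```

During pre-training, γ would be supervised toward 1 on text-generation batches and toward 0 on image-generation batches, matching the γ=1 / γ=0 targets mentioned above.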
**W2: Results of visual text editing and generation in Table 2 demonstrate that AnyText achieves better performance.**
Please refer to the **Global Author Rebuttal**, in which we address your concern in detail in three points.
**W3: The case in Fig. 5, "Good Time" and "Summer Love", are not correctly synthesized, and the performance is worse than Textdiffuser2 and AnyText.**
Thanks for the careful review. The incorrectly synthesized character cases (i.e., "e" to "E" and "r" to "R") are case-sensitive text generation issues. As observed from Figure 5 of the manuscript, TextDiffuser-2, which uses the same training set to TextHarmony in visual text generation, also fails to generate the correct character cases (i.e., "Keep Focused" to "KEEP FOCUSED"). Given that the same training data (i.e. Mario-Laion) as TextDiffuser-2 is used, we check the training data and find that **some of the training data is case-insensitive**. Besides, aligned with AnyText's training set, TextHarmony is able to **generate correctly case-sensitive text**, as illustrated in Figure A of the PDF file submitted during rebuttal. For the comparison with AnyText's performance, please also refer to the response in the **Global Author Rebuttal**.
---
Rebuttal Comment 1.1:
Title: Sincere Invitation to Participate in the Discussion
Comment: Dear Reviewer SovZ,
We would like to extend our appreciation for your time and comments. Due to the rush in finalizing the writing, some aspects may cause confusion and misunderstanding. Ensuring that the rebuttal aligns with your suggestions is of utmost importance. We are open to further discussions to clarify any remaining questions or concerns. We would greatly appreciate it if you could consider improving the evaluation after reviewing our responses.
Thank you very much for your consideration.
Sincerely,
The Authors | Rebuttal 1:
Rebuttal: Thanks to the ACs and reviewers for taking time and effort to review our manuscript. And we are also looking forward to further discussion. Below, we would like to address the common issues raised by reviewers.
**Common Issue 1: Performance compared to unimodal generation models, such as TextHarmony *vs.* Monkey in image comprehension and TextHarmony *vs.* AnyText in image generation. (W2 and W3 by Reviewer SovZ, W2, W3, and W6 by Reviewer 2avc)**
- **Response (1): Unfair comparison between generic models and specialized models**
- Please kindly note that AnyText and Monkey are unimodal generation specialized models while TextHarmony is a multi-modal generation model. It is **not particularly fair** to simply compare the performance of a specific task **between specialized models and general models**. For instance, Monkey's performance on scene text recognition, text grounding and text-centric VQA is much worse than specialized models, as shown in Table A. Besides, in the field of multi-modal generation, the performance of multi-modal generation models (SEED-LLaMA [1], Emu [2], DreamLLM [3], MM-Interleaved [4], Chameleon [5]) is also inferior to unimodal generation specialized models.
- Thus, when a generic model is compared with a specialized model, the focus should be on overall performance, not just on the performance of a particular task. The overall performance of TextHarmony is **unanimously approved by all the reviewers** (Reviewer SovZ "TextHarmony performs comparably to specialized models", 2avc "Experimental results show...the effectiveness of the proposed model", r95i "achieves comparable performance to modal-specific models”, and LmXP "The results presented in this paper looks good"). In addition, among multi-modal generation models, TextHarmony achieves much better performance than other models (Table 1 and 3 in the manuscript) in visual text comprehension and generation.
> #### Table A: Comparison of the performance of Monkey and specialized models.
| | Scene Text Recognition (Union-14M) | Text Grounding (MSRA-TD500) | Text-centric VQA (DocVQA) |
| :--- | :----: | :---: | :---: |
| Monkey | 32.7 | 13.6 | 66.5 |
| Specialized Models | **85.2** (MAERec [6]) | **84.9** (DBNet [7]) | **88.4** (ERNIE-Layout [8]) |
---
- **Response (2): Aligning the settings with specialized models and human evaluation**
- Monkey and AnyText have different settings from TextHarmony in terms of model architecture, training data, etc. To address this, we conduct ablation experiments **aligning the settings** of training data (AnyWord-3M in AnyText), image resolution (1344*896 in Monkey), LLM (Qwen 7B in Monkey), and model pipeline (two-stage visual text rendering in AnyText). As shown in Tables B and C, our model (TextHarmony-Align) slightly **outperforms Monkey and AnyText** in image comprehension and image generation.
- What's more, following the constructive suggestion by Reviewer 2avc (many thanks), we conduct **human evaluation** in visual text editing following the setting established by TextDiffuser. Specifically, the questionnaire consists of 100 cases, which includes two multiple-choice questions: (1) Which of the following images has the best text rendering quality? (2) Which of the drawn texts best harmonizes with the unmasked region? We have collected 10 questionnaires, and the results are shown in Table D. TextHarmony-Align also outperforms AnyText and TextDiffuser2 in human evaluation.
> #### Table B: Comparison of the performance on visual text comprehension.
| | DocVQA| TextVQA | OCRBench |
| :--- | :----: | :---: | :---: |
| Monkey | 50.1 | 64.3 | 514 |
| TextHarmony | 47.1 | 60.2 | 440 |
| TextHarmony-Align | **52.9** | **64.5** | **523** |
> #### Table C: Comparison of the performance on visual text generation.
| | NED| CLIP Score |
| :--- | :----: | :---: |
| AnyText | **0.88** | 0.36 |
| TextHarmony | 0.75 | 0.35 |
| TextHarmony-Align | **0.88** | **0.38** |
> #### Table D: Human evaluation of visual text editing
| |TextDiffuser2| Anytext|TextHarmony|TextHarmony-Align
| :---|:---:|:---:|:---:|:---:|
|Q1|699|740|721|**765**|
|Q2|645|691|685|**698**|
---
- **Response (3): Comparison to the actual baseline**
- Our actual baseline model is the multimodal generative model without the addition of Slide-LoRA (TextHarmony* in Table 1 and 3 in the manuscript). TextHarmony equipped with Slide-LoRA shows an average improvement of **2.5%** in visual text comprehension and **4.0%** in visual text generation tasks. It demonstrates that TextHarmony considerably mitigates the issue of modal inconsistency in multimodal generation, which is what our paper focuses on.
Hopefully, our reply would address the concerns about model performance.
---
> [1] Making LLaMA SEE and Draw with SEED Tokenizer. ICLR 2024.
> [2] Emu: Generative Pretraining in Multimodality. ICLR 2024.
> [3] DreamLLM: Synergistic Multimodal Comprehension and Creation. ICLR 2024.
> [4] MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer. Arxiv 2024.
> [5] Chameleon: Mixed-Modal Early-Fusion Foundation Models. Arxiv 2024
> [6] Revisiting Scene Text Recognition: A Data Perspective. ICCV 2023.
> [7] Real-time Scene Text Detection with Differentiable Binarization. AAAI 2020.
> [8] ERNIE-Layout: Layout Knowledge Enhanced Pre-training for Visually-rich Document Understanding. EMNLP 2022 Findings.
Pdf: /pdf/94625e5c20e21120b54a2def9f1be666ad2c4458.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Differentiable Modal Synthesis for Physical Modeling of Planar String Sound and Motion Simulation | Accept (poster) | Summary: The paper presents a differentiable model that can synthesize musical string sound and simulate motion based on physical properties. The method uses a finite-difference time-domain (FDTD) solver to obtain numerical solutions and takes them as ground truth. Then a differentiable pipeline with neural network components is used to map physical properties to output waveforms. Experimental results show that it achieves better performance than the Modal synthesis and DDSPish baselines.
Strengths: - To my best knowledge, it is the first differentiable method that can generate the musical string sound and motion from physics properties.
- The proposed method is more efficient and accurate than baselines.
Weaknesses: - The proposed method relies on FDTD solutions as ground truth, and I am wondering what the gap will be between the simulated solutions and real-world audio data.
- Following the previous question, there might not be too many real-world data available for learning and usually material properties are unavailable. Is it possible to infer these properties through the differentiable pipeline?
- It needs ablation studies for using two MLPs for modulation layers, as well as an MLP for the mode estimator.
- There are reference typos in L135 and L229.
Technical Quality: 2
Clarity: 2
Questions for Authors: See questions in weakness
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Authors mentioned limitations in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer Hh9k for the extensive review. Below are the responses to your concerns. Each item in Weakness is labeled with a number following the W (from the top, W1, W2, ...)
- **W1 (The gap between the simulation and the real-world audio data)**
- We summarized some of the main differences between the simulated data (using StringFDTD-Torch [1] or DMSP) and the real-world audio data as below.
| | Simulated | Recorded |
| --- | :---: | :---: |
| Can obtain sound | ✓ | ✓ |
| Can obtain displacement (i.e. movement) | ✓ | ✗ |
| Free from measurement errors | ✓ | ✗ |
| Free from modeling errors | ✗ | ✓ |
| Close to the 'sounds' of everyday life* | ✗ | ✓ |
#### *This is not true for all FDTD simulations, but it is true for the simulations covered in this manuscript.
The first four rows illustrate the main differences in system modeling.
- **Simulated data** has the advantage that the displacements of the strings are found directly as solutions of the PDE, giving **access to the motion information for all positions** with **little measurement error**. However, since the string system is represented by a parametric PDE, the modeling itself can introduce errors, although many validation studies have shown that these **modeling errors are small enough** [2-3].
- On the other hand, **recorded audio** data has **no modeling error** because it measures the actual string directly, but it suffers a large loss of information because it uses a specific receiver sensor to record the audio. The audio data records the vibration of the measurement equipment (e.g., a microphone membrane) transmitted from the vibration of the string to the receiver at a specific location, through the medium of the measurement environment. So the **displacement of the string is unknown**, and **measurement errors** (such as microphone coloration or room reverberation) are inevitably introduced.
The last row of the table concerns a slightly different aspect than this systematic difference: the value of the data as the sound of a musical instrument.
- As described in the manuscript, and as the systematic difference suggests, simulated audio is significantly different from real recorded audio, since we are in effect “listening to the displacement” picked up at a particular location on the string. The system in which the vibration of a string propagates through the air and is recorded at the receiver, as we hear in our daily lives, is another active area of research [4]. Based on these studies and ours, we believe that in the future, **we will be able to further bridge the gap between simulated data and actual recorded data**.
[1] Lee, J. W., Choi, M. J., & Lee, K. (2024, April). String Sound Synthesizer On GPU-Accelerated Finite Difference Scheme. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1491-1495). IEEE.
[2] Bensa, J., Bilbao, S., Kronland-Martinet, R., & Smith III, J. O. (2003). The simulation of piano string vibration: From physical models to finite difference schemes and digital waveguides. *The Journal of the Acoustical Society of America*, *114*(2), 1095-1107.
[3] Ducceschi, M., & Bilbao, S. (2022). Simulation of the geometrically exact nonlinear string via energy quadratisation. *Journal of Sound and Vibration*, *534*, 117021.
[4] Bilbao, S., & Ahrens, J. (2020). Modeling continuous source distributions in wave-based virtual acoustics. *The Journal of the Acoustical Society of America*, *148*(6), 3951-3962.
- **W2 (Possibility of inferring the material properties through the proposed method)**
- In response to this question, which holds good insights for further research, we share the results of an informal experiment. From what we have tried, we believe that it is not impossible, but it requires some tricks and a well-controlled experimental setup to reach meaningful conclusions. For example, the problem of estimating a string's material properties and initial conditions from single-channel audio recordings is ill-posed, and it is well known [5] that similar problems can admit multiple solutions. However, with better DMSP models (as mentioned in the answer to the previous question) and tricks to better address the ill-posedness of the properties, we are optimistic about solving this problem (e.g., inferring the material properties and the initial conditions from the sound).
[5] Kac, M. (1966). Can one hear the shape of a drum?. *The american mathematical monthly*, *73*(4P2), 1-23.
- **W3 (Ablations with the number of MLP layers)**
- In response to a reviewer's question, we found that the difference in performance based on the number of MLP layers is insignificant, which is why we did not report it separately in the manuscript. This can be taken similarly to what we said in the text about using GRUs instead of MLPs, which is that **there is a small difference in performance, but not enough to change the ranking between models**. Modifying the details for each module did not have a significant impact on the performance change, which is consistent with conventional knowledge: Stacking more layers than fewer generally performs better, but the performance gains taper off, so we chose a modest number of layers, taking into account GPU memory and batch size. The most important factor for model performance is how the mode information is utilized, and we found the decoder structure that modulates a sinusoidal oscillator through AM and FM performs best for motion synthesis.
- **W4 (Typos and missing references)**
- Thank you so much for pointing this out. Please see the global response (`Author Rebuttal by Authors`).
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer Hh9k,
As the discussion between the reviewers and authors is coming to an end, could you please respond to the authors to confirm whether your concerns have been addressed?
Thanks!
AC
---
Rebuttal Comment 1.2:
Comment: Thanks for the response. I appreciate the clarification and additional ablation studies. My concerns are mostly addressed and I am happy to raise my score. | Summary: This paper proposes Differentiable Modal Synthesis for Physical Modeling (DMSP), which is a neural-model-based method to predict the vibration of nonlinear strings. DMSP takes the physical parameters for the differentiable equation as the input, predicts the mode frequencies and AM/FM effects, and finally outputs the waveform of vibration.
Strengths: S1: This paper is among the first attempts to study neural audio synthesis using physical properties, which makes a meaningful contribution in this sub-area.
S2: Section 2 provides some background which is helpful for understanding the context of the proposed method.
Weaknesses: W1: Some method details are not introduced with sufficient clarity. In the loss section, the detailed mathematical forms for pitch loss and mode frequency loss are not explained. Is the pitch loss L2 loss, L1 loss, or in other forms? What type of regularization is used for the mode frequency loss? Also, since there are typically many mode frequencies for a linear/nonlinear system, it is unclear how many mode frequencies the mode estimator predicts. Is it a fixed number or a variable number? If it is a variable number, how did the neural model output a variable number of mode frequencies?
W2: As the authors have also discussed, the mode predictor turns out to be not functioning appropriately based on the results in Table 2 (the big gap between DMSP and DMSP-N). Since the mode predictor is a major component of the proposed system, the design of the proposed pipeline is not well-justified.
W3: It is generally unclear what advantages the proposed physics-driven sound synthesis paradigm has over neural-based generative models, such as AudioLM or diffusion models. The latter has been shown to synthesize high-quality sounds, music, and speech. Does the proposed paradigm have higher quality, lower computational overhead, greater generalizability, or better in other aspects than the latter paradigm?
W4: (Minor) There are many empty references, e.g., line 135 and line 229, and wrong references, e.g., 'Table 1' in line 243, which should probably be 'Table 2'. Also, in line 243, 'the ablation of the mode information is studied..' Shouldn't this table be the main result? Calling it an ablation result rather than the main result may cause some confusion, especially when Table 3 shows another ablation study result.
W5: (Minor) The problem statement in 3.1 is a bit confusing to me. For example, it is unclear to me why the initial condition $u_0$, is an element of $\mathcal{U}$. $u_0$ is a function on $\Omega \times \{0\}$, whereas $\mathcal{U}$ is a set of $(x, t)$ pairs. Also, in line 138, it seems that the mapping $\mathcal{S}$ is not only a function of $\mathcal{P}$, but also a function of initial conditions. Why is the latter omitted?
Technical Quality: 3
Clarity: 1
Questions for Authors: Please see ‘weaknesses’.
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: Limitations are minimally discussed in Section 5. It would be nice to extend the discussion to include the failure of the mode predictor, the computational complexity of DMSP-N, etc.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate reviewer ygxN’s effort in reviewing our paper. Below are the responses to your concerns.
- **W1 (Clarity in methodological details)**
- We clarify the mathematical definition of the loss based on the reviewer's comments. We used the $\ell^1$ distance $\mathcal{L}_{f_0} = \|\hat{f_0} - f_0\|_1$ as the metric over all mode frequencies; however, we found no significant deviation in training results according to the $p$ value of the $\ell^p$-norms.
- As specified in Appendix D, we used 40 modes, considering the fundamental frequency and the Nyquist limit set by the temporal sampling frequency. This mode count was chosen as the largest integer $M$ such that $M$ times the maximum fundamental frequency in the dataset is at most the Nyquist frequency, and as the reviewer notes, a more detailed synthesis would be possible if the mode count could be assigned dynamically according to $f_0, \kappa$, and $\alpha$.
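As a quick illustration of this choice, the fixed mode count is the largest harmonic count that fits under the Nyquist limit; the sample rate and maximum fundamental frequency below are assumed values for illustration, not taken from the paper:

```python
# Illustrative sketch: pick a fixed mode count so that every harmonic of the
# highest fundamental stays below the Nyquist limit. The sample rate and the
# maximum fundamental frequency are assumed values, not from the paper.
sample_rate = 48_000   # Hz, temporal sampling frequency (assumed)
f0_max = 590.0         # Hz, largest fundamental in the dataset (assumed)

nyquist = sample_rate / 2
n_modes = int(nyquist // f0_max)  # largest M with M * f0_max <= nyquist
print(n_modes)  # 40 with these assumed values
```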
- **W2 (Role of the mode predictor)**
- With all due respect, we disagree with the reviewer’s point for two reasons:
- **Modal synthesis is essentially an exact solution in itself for linear strings** (the difference between Modal and FDTD comes only from the frequency-dependent damping term; compare Eqns. 1 and 3 for $\alpha=1$), so it is perfectly natural for other models, including DMSP, to lag behind Modal, as the training data does not contain $\alpha$ exactly equal to $1$ as in the Linear ($\alpha=1$) test set. Thus, the discrepancy between DMSP's Linear string test result and Modal's **does not imply that DMSP's mode estimator does not work at all.**
- As the title of this paper suggests, our main interest is in planar (thus nonlinear; see Section 2.1) strings. As the Nonlinear string test result suggests, **the mode predictor is not a major component**: DMSP still outperforms the rest of the baselines and ranks second, suggesting that even with errors in the mode estimator, the synthesis results can be better.
Our statement in L255 means that "errors can occur in estimating the mode through a neural network"; nowhere does the text state that the mode estimator is the main module. However, when it comes to the wording of this sentence, we agree that it could be potentially misleading to readers, and we will refine the sentence to be clearer about what we are claiming.
- The following additional experimental results further demonstrate how trivial the error in the mode predictor is in the overall proposal.
| | Linear | (\\(\alpha=1\\)) |||| Nonlinear | (\\(\alpha>1\\)) |||
| --- | ---: | ---: | ---: | ---: | --- | ---: | ---: | ---: | ---: |
| | **SI-SDR** | **SDR** | **MSS** | **Pitch** || **SI-SDR** | **SDR** | **MSS** | **Pitch** |
| Modal | **–3.191** | **0.681** | 18.449 | ***0.420*** || –16.611 | –1.900 | 17.254 | 2.316 |
| DDSPish | –39.478 | –2.598 | **11.047** | 5.518 || –25.951 | –2.102 | 9.745 | 3.306 |
| DDSPish-woFM | –46.609 | –2.257 | ***10.911*** | 11.304 || –46.858 | –2.272 | 10.299 | 14.013 |
| **DMSP-N** | ***–2.844*** | ***1.496*** | 12.525 | **0.792** || ***15.670*** | ***16.455*** | ***4.772*** | **1.027** |
| **DMSP** | –22.298 | –2.000 | 12.504 | 1.717 || **–10.315** | **0.221** | **5.656** | ***1.437*** |
For this additional experiment, we increased the training data and the training time (about a week using a single 2080), while keeping all other network structural details the same. For the Linear results, DMSP still lags behind DMSP-N and Modal, suffering from its disadvantage in mode estimation accuracy, but clearly outperforms the baselines in nonlinear strings. This emphasizes that the error of the mode estimator is not a critical part of the overall pipeline, but rather the decoder part.
- **W3 (Comparison with generative models)**
- While we fully understand the reviewer's curiosity, we believe that providing a formal analysis of this point in the text would detract from the main argument:
1. As we believe the reviewer also agrees with (as noted in Strength S1 of the review), **there is no prior work on generative models based on neural networks (such as AudioLM or diffusion) for synthesizing string motion**, i.e., the time-dependent displacement of a string represented by the solution of a PDE, as in this study.
2. The main point of the paper is **'making one of the physical modeling methods (modal synthesis) differentiable facilitates efficient and effective nonlinear string synthesis'**, and it is considered somewhat out of context to add claims such as 'the proposed model structure and training method is even better/worse than those trained with the diffusion framework and/or LLM training method'.
Nevertheless, we tried various architectures (such as Transformer and WaveNet) and also trained conditional generation on top of a diffusion framework (like DiffWave [1] or Music Spectrogram Diffusion [2]). Please see the PDF file attached to the global response for a comparison under different architectures. Most attempts tended to fail at motion synthesis, i.e., at modeling the physical correlation of displacement with position, whereas DMSP performs well because it directly utilizes mode information to represent this physical correlation.
[1] Kong, Z., Ping, W., Huang, J., Zhao, K., & Catanzaro, B. DiffWave: A Versatile Diffusion Model for Audio Synthesis. In *International Conference on Learning Representations*, 2021.
[2] Hawthorne, C., Simon, I., Roberts, A., Zeghidour, N., Gardner, J., Manilow, E., & Engel, J. (2022, December). Multi-instrument Music Synthesis with Spectrogram Diffusion. In *ISMIR 2022 Hybrid Conference*.
- **W4 (Typos and missing references)**
- We thank the reviewers for pointing this out. Please see the global response (`Author Rebuttal by Authors`).
- **W5 (Problem statement)**
- Please see the global response.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer ygxN,
As the discussion between the reviewers and authors is coming to an end, could you please respond to the authors to confirm whether your concerns have been addressed?
Thanks!
AC
---
Rebuttal Comment 1.2:
Comment: Thanks for the response. Most of my concerns are addressed and I am happy to raise the score. | Summary: A computational framework that can approximate the motion of nonlinear strings is proposed. The implementation is differentiable, so one can then train neural nets in the framework as usual.
Strengths: - The paper is written nicely such that I could follow the basic discussions even if I am not familiar with the topics of speech and audio. It seems to include all the information required in this kind of paper, such as technical background, model description, loss description, and experiments.
- The empirical performance of the method is clearly superior to the reasonable baselines.
Weaknesses: I have only minor comments as follows. Basically no need to respond to them.
- In Table 1, the computational complexity part does not consider the computation needed for learning unknown parameters. The same kind of complexity analysis may be difficult for that part, but at least, the fact that such a training procedure happens for some of the methods should be noted somewhere close.
- Lines 137 and 138: there seems to be confusion about the definition of the initial condition and/or the operator $\mathcal{S}$. First, $u_0 \in \mathcal{U}$ sounds strange if $\mathcal{U}$ is the space of functions $\Omega \times [0,\infty) \to \mathbb{R}$ because $u_0$ does not take the time $\in [0,\infty)$ as the argument. Second, the domain of $\mathcal{S}$ is the product of $\mathcal{P}$ and the space of initial conditions, isn't it?
- The role of the noise decoder is unclear. Is it important in the given experiments where the data are purely from the simulator?
Technical Quality: 3
Clarity: 4
Questions for Authors: I don't have major questions.
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limitations are simply stated in the conclusion section, which sound reasonable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer gawP for the constructive review. We also thank you for appreciating the novelties made by the differentiable string motion synthesis. Below are the responses to some of your comments. Each item in Weakness is labeled with a number following the W (from the top, W1, W2, ...)
- **W1 (Specifying that the training procedure happens)**
- The authors agree that it is a good idea to clarify that differentiable methods need to be trained in advance. We will mention this in the caption rather than as an additional column in the table, since a column could lead some readers to confuse training with the "Pre-computation" in the table, which refers to applying the least-squares method for fitting the modes to each specific initial condition (IC). As the reviewer recognizes, there are differences between pre-computation and training:
| | Training | Pre-computation |
| --- | ---: | ---: |
| Performed before inference | ✓ | ✓ |
| Performed every time for new ICs | ✗ | ✓ |
| Performed every time for new materials | ✗ | ✓ |
| Typical wall-time required | 0 days | 00 secs |
In the manuscript, Table 1 only refers to the inference scenario, and we believe that it is difficult to make a rigorous comparison of the computational complexity for training along the same lines. However, we agree with the reviewer that it would be nice to include the fact that neural networks require parameter optimization for inference. Following the reviewer’s suggestion, we included this in the caption of the table, albeit briefly.
- **W2 (Problem statement)**
- Please see the global response (`Author Rebuttal by Authors`).
- **W3 (Role of noise decoder)**
- The role of the noise decoder is not dominant, but it does help model the simulated nonlinear strings. As the nonlinear strings considered herein are modeled as the coupled system between the transverse and the longitudinal vibrations, the motion along the longitudinal axis contains strong near-noisy timbre along with (in)harmonic pitch skeletons (please see Figure 8.10 in [1]). This coupling leads to the appearance of timbres that can be approximated as filtered noise, especially in the transient region of the highly nonlinear strings with large pluck amplitudes. The noise decoder was designed to model such characteristics.
[1] Bilbao, S. (2009). *Numerical sound synthesis: finite difference schemes and simulation in musical acoustics*. John Wiley & Sons.
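A minimal sketch of the filtered-noise idea described above, assuming a simple exponential low-pass magnitude envelope and decay time (the sample rate, envelope shape, and decay constant are illustrative assumptions, not the DMSP noise decoder itself):

```python
import numpy as np

# Sketch of a 'filtered noise' timbre: shape white noise with a
# frequency-domain magnitude envelope, then apply a decaying amplitude
# to mimic a transient that dies out after the pluck.
# Sample rate, envelope shape, and decay time are illustrative assumptions.
sr, dur = 16_000, 0.5
t = np.arange(int(sr * dur)) / sr
noise = np.random.default_rng(0).normal(size=t.size)

spectrum = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(t.size, 1 / sr)
envelope = np.exp(-freqs / 2000.0)        # assumed low-pass magnitude envelope
filtered = np.fft.irfft(spectrum * envelope, n=t.size)

transient = filtered * np.exp(-t / 0.05)  # dies out shortly after the 'pluck'
```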
Per the reviewer's advice, we will add them to the camera-ready version. We plan to add each of the points mentioned in W1, W2, and W3 to the caption of Table 1, Section 3.1 of the main text, and the Appendix. Again, we thank the reviewer for the constructive advice.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' rebuttal, it further clarified the discussion. I would maintain my originally positive score. | null | null | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for taking the time to review the paper and for their efforts to improve the quality of the manuscript with their constructive comments. We responded to each reviewer's comments and concerns individually, and the responses below address common points made by reviewers.
- **Missing references, typos, and misleading expressions are revised as follows.**
- L135: "as specified in Equation 4,"
- L229: "linear wave solution as in Equation 1,"
- L243: "The efficacy of DMSP is studied as shown in Table 2."
- **Problem statement is revised as follows.**
- We assume that the solution $u : \Omega × [0, \infty) \to \mathbb{R}$ resides within the Banach space $\mathcal{U}$.
For a given PDE parameter $\rho\in\mathcal{P}$ and initial condition $u_0\in\mathcal{U}\_0\subset\mathcal{U}$ with $u\_0:\Omega\times\{0\}\to\mathbb{R}$, let $\mathcal{S}:\mathcal{P}\times\mathcal{U}\_0\to\mathcal{U}$ denote a nonlinear map, specifically the FDTD numerical solver tailored to the context of this study.
Assume that we are provided with observations $\\{\rho^{(i)}, u\_0^{(i)}, u^{(i)}\\}^N_{i=1}$, where $\rho^{(i)}$ and $u\_0^{(i)}$ are i.i.d. samples drawn from a probability measure supported on $\mathcal{P}$ and $\mathcal{U}\_{0}$ respectively, and $u^{(i)} = \mathcal{S}(\rho^{(i)},u\_0^{(i)})$ potentially contains noise.
Our goal is to construct an approximation of $\mathcal{S}$ denoted as $\mathcal{S}\_\theta : \mathcal{P}\times\mathcal{U}\_0 \to \mathcal{U}$, and select parameters $\theta^*\in\mathbb{R}^{N\_\theta}$ such that
$$
\min_{\theta}\\mathbb{E}\_{\rho\sim\mu\_{\mathrm{pa}},u\_0\sim\mu\_{\mathrm{ic}}}\left\\|\mathcal{S}(\rho,u\_0) - \mathcal{S}\_\theta(\rho,u\_0)\right\\|\_{\mathcal{U}}\approx\min\_\theta\frac{1}{N}\sum\_{i=1}^N\left\\|\mathcal{S}(\rho^{(i)},u\_0^{(i)}) - \mathcal{S}\_\theta(\rho^{(i)},u\_0^{(i)})\right\\|\_{\mathcal{U}}
$$
where $\rho^{(i)}\sim\mu\_{\mathrm{pa}}$ and $u_0^{(i)}\sim\mu\_{\mathrm{ic}}$. Leveraging $\mathcal{S}\_\theta$, one can compute the solution $\hat{u} = \mathcal{S}\_\theta(\rho,u\_0)$ corresponding to a new parameter $\rho\in\mathcal{P}$ and a new initial condition $u\_0\in\mathcal{U}\_0$. By specifying values for $x$ and $t$, one can then either synthesize the sound of the string picked-up (also referred to as read-out) at a specific location $x_0$ as $\hat{u}(x\_0, t)$, or simulate the motion of the string by concatenating $\hat{u}(x, t)$ across all $x \in\Omega$.
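Concretely, this objective is an empirical risk over solver input-output pairs. The sketch below uses a toy stand-in for $\mathcal{S}$ and a one-parameter surrogate for $\mathcal{S}_\theta$; both, and the sampled data, are illustrative assumptions rather than the FDTD solver or DMSP:

```python
import numpy as np

# Toy empirical-risk view of fitting S_theta to solver outputs.
# 'solver' stands in for S (e.g. an FDTD solver) and 'surrogate' for S_theta;
# both, and the data below, are illustrative assumptions.
rng = np.random.default_rng(0)

def solver(rho, u0):
    return rho * u0                 # toy 'PDE solution'

def surrogate(theta, rho, u0):
    return theta * rho * u0         # one-parameter approximation

rhos = rng.uniform(0.5, 2.0, size=32)    # sampled PDE parameters rho^(i)
u0s = rng.normal(size=(32, 64))          # sampled initial conditions u0^(i)
targets = np.stack([solver(r, u) for r, u in zip(rhos, u0s)])

def empirical_risk(theta):
    preds = np.stack([surrogate(theta, r, u) for r, u in zip(rhos, u0s)])
    return float(np.mean(np.linalg.norm(preds - targets, axis=1)))

# theta = 1 reproduces the toy solver exactly, so the empirical risk vanishes there.
```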
In addition to this answer, the **attached PDF file includes** the following:
- a table of **synthesis results for the improved model** from additional experiments,
- a table of **ablation study for the improved model**,
- a comparison table of training results for **various neural network architectures** such as Transformer and WaveNet,
- a **scatter plot of the improved scores**, and
- a figure showing the **motion synthesis results** over time for different pluck positions.
Pdf: /pdf/c1b106e3bcabeda5b89f46b681b70e36dacd7506.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Theoretical Characterisation of the Gauss Newton Conditioning in Neural Networks | Accept (poster) | Summary: The authors derived upper bounds for the condition number of the outer product of the jacobian of the neural network output (the Gauss Newton matrix) in the case of deep linear networks, and non-linear networks with a single hidden layer with piecewise-linear activation functions. They empirically evaluate these bounds using MNIST (with the exception of one plot in the main text that used CIFAR-10) and find their bounds reflect the general trend of the empirical condition number as depth (or in some cases width) are varied. The work is motivated by highlighting the importance of Hessian information in the optimisation of neural networks.
Strengths: - The empirical evaluations show that the bounds are actually quite informative, following the general behaviour of the condition number and in some cases being quite tight as well. They point out that if they replace the convex combination in their bound(s) with a maximum, the bound becomes much looser and does not show any behaviour following trends.
- The understanding of neural network loss landscapes and how best to optimise them is still in its relative infancy, so works like this can be valuable.
- Extensive proofs and extra plots are provided in the appendices
Weaknesses: - ~~The paper generally uses the same dataset (MNIST) for almost all the plots, so it's hard to tell if these good behaviour of their bounds hold on other, potentially harder to optimise, problems such as CIFAR-10.~~ **It was pointed out in the rebuttal that CIFAR-10 was included in the appendix.**
- The precise motivation of studying the Gauss Newton is slightly weak. It's not clear to me that these results actually tell us anything substantial about optimising the neural network.
- The authors were not able to extend their bound to the deep non-linear network case.
Technical Quality: 4
Clarity: 3
Questions for Authors: - To be clear, on line 82 where we refer to "outer gradient product of the loss function", this is actually the outer gradient product of the network function and has no dependence on the loss function, right?
- Suggestion: In the equation below line 182 (which should probably be numbered) some extra brackets and the use of \cdots rather than \dots would make this expression clearer to read.
Confidence: 2
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors briefly discuss some future directions their work could be taken in, but do not seriously critique the work they present here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weakness 1**:
> The paper generally uses the same dataset (MNIST) for almost all the plots, so it's hard to tell if these good behaviour of their bounds hold on other, potentially harder to optimise, problems such as CIFAR-10.
**Answer**:
Thank you for this comment. Please note that we do have experiments using CIFAR-10 in the Appendix (see, for instance, Figures 20 and 21), which showcase that our bounds are also tight in this case. Does this address your comment?
**Weakness 2**:
> The precise motivation of studying the Gauss Newton is slightly weak. It's not clear to me that these results actually tell us anything substantial about optimising the neural network.
**Answer**:
Thank you for this comment. We would like to answer your comment in two parts:
- **Preconditioning methods based on GN matrix are successful.** Although the Gauss-Newton matrix $\mathbf{G}_O$ is only an approximation to the full Hessian matrix, it does seem to capture the curvature of the loss very well given the success of many second-order optimization methods based on approximations of the Gauss-Newton matrix, such as K-FAC [Martens and Grosse, 2020], Shampoo [Gupta et al., 2018] or Sophia [Liu et al., 2023].
- **GN matrix performs better than Hessian as preconditioner in Sophia algorithm.** Particularly interesting is the last method, in which the authors observe that their optimizer based on the Gauss-Newton matrix performs even better than their optimizer based on the full Hessian matrix, implying that the Gauss-Newton matrix is a good preconditioner and captures the curvature of the loss landscape well.
- As the condition number characterizes the convergence rates of gradient-based methods at least locally, we would like to argue that our results do help in understanding the optimization process of neural networks with gradient-based methods better.
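As a toy illustration of the object under discussion, the sketch below builds the Jacobian of a two-layer linear network analytically and computes the condition number of the Gauss-Newton matrix $G = J^\top J$ restricted to its nonzero spectrum (the network sizes, Gaussian initialization, and data are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer linear network f(x) = W2 @ W1 @ x; sizes, data, and Gaussian
# initialization are arbitrary assumptions for illustration only.
d_in, d_h, d_out, n = 5, 8, 3, 20
W1 = rng.normal(size=(d_h, d_in)) / np.sqrt(d_in)
W2 = rng.normal(size=(d_out, d_h)) / np.sqrt(d_h)
X = rng.normal(size=(n, d_in))

# Per-sample Jacobian of the output w.r.t. vec(W1) and vec(W2), using the
# identity vec(ABC) = (C^T kron A) vec(B):
#   f = W2 W1 x  =>  df/dvec(W1) = x^T kron W2,  df/dvec(W2) = (W1 x)^T kron I.
rows = []
for x in X:
    J_W1 = np.kron(x[None, :], W2)                    # (d_out, d_in * d_h)
    J_W2 = np.kron((W1 @ x)[None, :], np.eye(d_out))  # (d_out, d_h * d_out)
    rows.append(np.hstack([J_W1, J_W2]))
J = np.vstack(rows)  # (n * d_out, n_params)

# Gauss-Newton matrix G = J^T J; its condition number over the nonzero
# spectrum is the ratio of the largest to smallest nonzero eigenvalue.
s = np.linalg.svd(J, compute_uv=False)
s_nz = s[s > 1e-10 * s[0]]
cond_G = (s_nz[0] / s_nz[-1]) ** 2
print(f"condition number of G over the nonzero spectrum: {cond_G:.3e}")
```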
**Weakness 3**:
> The authors were not able to extend their bound to the deep non-linear network case.
**Answer**:
We are indeed not yet able to extend our bounds to deep non-linear networks, which is something that we also mention in the limitation section. Yet we believe that our theoretical bounds on deep linear and residual networks already provide valuable insights on how different choices in the network architecture can affect the condition number of the Gauss-Newton matrix.
**Question 1**:
> To be clear, on line 82 where we refer to "outer gradient product of the loss function", this is actually the outer gradient product of the network function and has no dependence on the loss function, right?
**Answer**:
Thank you for this question. Yes, you are right, the Gauss-Newton matrix is defined in Eq.(1) (same as defined in prior work discussed above) and it has no dependence on the loss function. As elaborated in lines 84-90, this is precisely the outer product Hessian $\mathbf{H}_O$ when the loss function is the MSE loss, which is what we are considering for most part of the work.
**Question 2**:
> Suggestion: In the equation below line 182 (which should probably be numbered) some extra brackets and the use of $\cdots$ rather than $\dots$ would make this expression clearer to read.
**Answer**:
Thank you for this suggestion! We will improve the display of the equation to make it more readable.
We kindly request the reviewer to let us know if there are any remaining questions they may have. If they find that their queries have been sufficiently addressed, we would greatly appreciate it if they could reconsider their evaluation of our paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your comprehensive response.
Yes, I see now that you have also run the main experiments on CIFAR-10, so this does address my concern. I wonder whether you could fit both MNIST and CIFAR-10 in figure 5, perhaps side by side? I believe the camera-ready version allows an extra page (please double check) so do consider this in case of acceptance.
I am convinced by your explanation of the motivation. On second inspection, it does appear that this is alluded to in the introduction, but perhaps some more explicit references to work like KFAC and Sophia might make the motivation clearer, as you said in your rebuttal.
I also agree that it is fair to leave the deep non-linear case to future work.
Thank you for clarifying my points of confusion. Please make modifications to wording if you believe it will help readability.
Overall, I think my initial score of 6 (weak accept) was slightly harsh. After realising that the experiments were also run on CIFAR-10, I am happy to raise my score to an 8 (strong accept) as I believe this is good work that has been carried out to an excellent standard. | Summary: This paper examines the condition number of the Gauss-Newton matrix [1] in neural networks. It shows that normalization techniques, such as Batch Normalization [2], initial normalization, skip connections [3], and appropriate layer dimensions, reduce the condition number and therefore enhance the training stability. The objective is to provide insights into the training of deep neural networks by characterizing a new, tight upper bound for the condition number of the Gauss-Newton matrix. The paper primarily focuses on linear neural networks and shallow non-linear neural networks.
[1] Schraudolph, Nicol N. Fast curvature matrix-vector products for second-order gradient descent. Neural Computation, 14(7):1723-1738, 2002.
[2] Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448-456. PMLR, 2015.
[3] He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, et al. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
Strengths: This paper provides new theoretical insight into the training dynamics of Deep Neural Networks (DNNs) by examining the conditioning number of the Gauss-Newton Matrix. The experimental results are robust and shed light on often obscure aspects of DNN optimization, such as the impacts of normalization, layer size, residual connections, and ReLU activation function. Meanwhile, the way the authors interpret the experimental results seems reasonable. Importantly, the paper also clearly outlines its limitations in the conclusion.
Weaknesses: The paper is difficult to read and follow due to several typos and unclear ideas. The main contribution is not significant and heavily relies on Singh's work [4]. The authors discuss the K-FAC method [5], which uses the EFIM and not the Hessian, even though it can be an approximation in certain cases where the model's likelihood is in the exponential family [6]. The paper only addresses the Gauss-Newton matrix and not the Generalized Gauss-Newton matrix [6], which could be more relevant, especially when using a cross-entropy loss function instead of MSE. Additionally, the references are outdated (e.g., line 49 and most of the related work), and there are typographical errors, such as "generalized" being repeated on line 140 and "collapse" being misspelled on line 124. There is also a lack of references related to the Gauss-Newton matrix, Kronecker properties, the PyTorch library, etc.
[4] Singh, Sidak Pal, Bachmann, Gregor, and Hofmann, Thomas. Analytic insights into structure and rank of neural network hessian maps. Advances in Neural Information Processing Systems, 34:23914-23927, 2021.
[5] Martens, James and Grosse, Roger. Optimizing neural networks with Kronecker-factored approximate curvature. In International Conference on Machine Learning, pp. 2408-2417. PMLR, 2015.
[6] Martens, James. New insights and perspectives on the natural gradient method. Journal of Machine Learning Research, 21(146):1-76, 2020.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Could you clarify the statement on line 31: “imagine an entire set of neurons being dead”? What implications does this scenario have for your study?
2. How good is the approximation of the Gauss-Newton matrix $\mathbf{G}_O$ without considering the term $\mathbf{H}_F$?
3. Can you also conduct experiments on more challenging datasets like ImageNet [7] or TinyImageNet [8]?
4. Can you develop more about the time and space complexity of the computation of the condition number?
5. Can you explain and rephrase the lines 719 and 720 with the associated Figures 19 and 21?
6. Why did you compute the condition number on CPU rather than on GPUs (line 725)?
7. Can you also explore the relation between the condition number and the batch size?
[7] Deng, Jia, Dong, Wei, Socher, Richard, et al. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255, 2009.
[8] Le, Ya and Yang, Xuan S. Tiny ImageNet visual recognition challenge. 2015.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The limitations of the work are well addressed by the authors. I do not believe this work has any particular negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weakness 1**:
> The paper is difficult to read and follow due to several typos and unclear ideas. The main contribution is not significant and heavily relies on Singh's work [4].
**Answer**:
We understand the concern of the reviewer regarding the potential overlap with Singh et al. [2021].
- We would like to clarify that while our work builds on Singh et al. [2021], our **main contribution** is the introduction of **tight upper bounds for the condition number of the Gauss-Newton (GN) matrix** for linear and residual networks of arbitrary depth and width. To the best of our knowledge, this has **not been addressed before in the literature**.
- More specifically, Singh et al. [2021] derived expressions for the Hessian which we use in our analysis. Therefore, there is an **entirely different focus and a thematic difference** between our current work and the previous work in Singh et al. [2021]. However, obtaining bounds on the GN matrix based on these expressions is not an easy task: we demonstrated that naive bounds are vacuous, while presenting experimental evidence that our theoretical bounds are predictive in practice. We would be grateful if the reviewer could elaborate on which ideas are unclear in the paper; we would be happy to provide further details.
**Weakness 2**:
> The paper only addresses the Gauss-Newton matrix and not the Generalized Gauss-Newton matrix [6], which could be more relevant, especially when using a cross-entropy loss function instead of MSE.
**Answer**:
Thank you for this insight regarding the Generalized Gauss-Newton matrix.
- Our current work focuses on the Gauss-Newton matrix to build a solid foundational understanding of its properties, which we believe is important before extending our analysis to more complex scenarios.
- While we recognize the relevance of the Generalized Gauss-Newton matrix, particularly for cross-entropy loss, addressing this is currently beyond the scope of our work. We believe that this is a key direction for future research and will add this to the discussion on limitations and future work in the final version of the paper.
- Despite this, our **findings on the Gauss-Newton matrix already provide important insights** into the *initialization and training dynamics of neural networks using MSE loss*, which are valuable in their own right.
**Weakness 3**
> Additionally, the references are outdated (e.g., line 49 and most of the related work). There is also a lack of references related to the Gauss-Newton matrix, Kronecker properties, the PyTorch library, etc.
**Answer**:
Thank you for this comment. We apologize for the outdated references. We have now updated our references to cover more recent and additional relevant works, particularly on the Gauss-Newton matrix, normalization techniques, and Kronecker properties. This also includes replacing the outdated references with new ones.
**Weakness 4**:
> [...] and there are typographical errors, such as "generalized" being repeated on line 140 and "collapse" being misspelled on line 124.
**Answer**:
We apologize for the issues regarding the readability and typographical errors in the initial submission.
- We would like to point out however that it is in fact not a typographical error, as Liao and Mahoney [2021] indeed analyze a generalization of the family of generalized linear models, which they called generalized generalized linear models (G-GLM).
- We have also thoroughly revised our work to correct all typos and improve clarity, and have rephrased sections, such as line 140, for better comprehension.
**Question 1**:
> Could you clarify the statement on line 31: “imagine an entire set of neurons being dead”? What implications does this scenario have for your study?
**Answer**:
The statement on line 31 refers to the phenomenon where neurons become inactive (that is, produce zero output) due to poor initialization or training dynamics. This has a direct effect on the eigenspectrum of the GN matrix, which we illustrate with a simple example.
Consider a 2-layer linear network $F_{\theta}(\mathbf{x}) = \mathbf{W} \mathbf{V} \mathbf{x}$ in which one neuron in the hidden layer is dead, that is, it outputs only zeros. This is equivalent to the corresponding row of the matrix $\mathbf{W}$ being a zero vector, which directly implies that the rank of $\mathbf{W}$ is reduced by one, i.e., there is another zero eigenvalue in the eigenspectrum. This in turn increases the value of the pseudo condition number $\kappa(\mathbf{W})$ that appears in the upper bound in Eq. (4).
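For concreteness, here is a minimal NumPy sketch (illustrative, not part of the submission; matrix sizes and the `1e-10` threshold are arbitrary choices) showing how zeroing one row of $\mathbf{W}$ removes a nonzero singular value from the spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)

def pseudo_cond(M):
    """Pseudo condition number: ratio of largest to smallest *nonzero* singular value."""
    s = np.linalg.svd(M, compute_uv=False)
    s = s[s > 1e-10]                 # drop numerically zero singular values
    return s.max() / s.min()

W = rng.normal(size=(8, 8))          # output weights of F(x) = W V x
W_dead = W.copy()
W_dead[3, :] = 0.0                   # dead hidden neuron: its outgoing row is zero

# The rank drops by one, i.e. an extra zero appears in the spectrum.
print(np.linalg.matrix_rank(W), np.linalg.matrix_rank(W_dead))
print(pseudo_cond(W), pseudo_cond(W_dead))
```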
**Question 2**:
> How good is the approximation of the Gauss-Newton matrix $\mathbf{G}_O$ without considering the term $\mathbf{H}_F$?
**Answer**:
Thank you for this question.
- The difference between the Gauss-Newton matrix $\mathbf{G}_O$ and the Hessian of the loss function $\mathbf{H}_L$ depends on both the residual and the curvature of the network $F\_{\boldsymbol{\theta}}(\mathbf{x})$. Thus, close to convergence, when the residual becomes small, the contribution of $\mathbf{H}_F$ also becomes negligible and $\mathbf{G}_O$ is essentially equal to $\mathbf{H}_L$.
- Furthermore, Lee et al. [2019] show that sufficiently wide neural networks of arbitrary depth behave like linear models during training with gradient descent. This implies that the Gauss-Newton matrix is a close approximation of the full Hessian in this regime throughout training.
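To illustrate the first point in the scalar case: for a model $g(\theta)$ with loss $\tfrac{1}{2}(g(\theta)-y)^2$, the Hessian is $g'(\theta)^2 + (g(\theta)-y)\,g''(\theta)$, so the gap to the GN term $g'(\theta)^2$ is exactly the residual times the model curvature. A toy sketch (our own illustration; $g=\sin$ and $\theta=0.7$ are arbitrary choices):

```python
import numpy as np

theta = 0.7
g1, g2 = np.cos(theta), -np.sin(theta)   # g'(theta), g''(theta) for g = sin

for r in [1.0, 0.1, 0.001]:              # residual g(theta) - y
    gn = g1 ** 2                          # Gauss-Newton term
    hessian = g1 ** 2 + r * g2            # full Hessian of 0.5 * (g(theta) - y)**2
    print(f"residual={r:>6}: |H - GN| = {abs(hessian - gn):.6f}")  # shrinks with r
```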
---
Rebuttal 2:
Title: Answers to Questions 3-7
Comment: **Question 3**:
> Can you develop more about the time and space complexity of the computation of the condition number?
**Answer**:
Thank you for this question. In order to compute the condition number, we need to compute the eigenspectrum either of the GN matrix $\mathbf{G}_O$, which has dimension $p \times p$, where $p$ is the number of parameters, or of the matrix $\hat{\mathbf{G}}_O$, which has dimension $kd \times kd$, where $d$ and $k$ are the input and output dimensions of the network, respectively. The eigenvalue decomposition has cubic time complexity, so computing the condition number has computational complexity $\mathcal{O}(\min(p, kd)^3)$, with $\mathcal{O}(\min(p, kd)^2)$ space required to store the matrix.
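A sketch of this computation (our illustration; the toy sizes and the stacked random "Jacobian" are placeholders, not the paper's actual setup). `eigvalsh` exploits symmetry and is the cubic-cost step:

```python
import numpy as np

rng = np.random.default_rng(0)

n, k, p = 64, 3, 40                       # samples, outputs, parameters (toy sizes)
J = rng.normal(size=(n * k, p))           # stacked per-sample output Jacobians (illustrative)

G = J.T @ J / n                           # GN-style matrix: p x p, symmetric PSD
w = np.linalg.eigvalsh(G)                 # O(p^3) symmetric eigendecomposition
w = w[w > 1e-10 * w.max()]                # keep the nonzero part of the spectrum
cond = w.max() / w.min()                  # (pseudo) condition number
print(cond)
```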
**Question 4**:
> Can you also conduct experiments on more challenging datasets like ImageNet [7] or TinyImageNet [8]?
**Answer**:
Thank you for this suggestion.
- **Only marginal gain in insight is expected from experiments on ImageNet.** We would like to argue that we expect only a marginal gain in insight from additional experiments on ImageNet compared to the experiments on CIFAR-10, which we have already conducted.
In particular, note that the complexity of a given dataset appears only through the input covariance matrix $\boldsymbol{\Sigma}$, which is a separate factor in the upper bound. Furthermore, as we have mentioned in Remark R1, the effect of the conditioning of the input data on the conditioning of the GN spectra can be largely reduced by whitening or normalizing the input data.
Thus, the effect that different datasets have on the condition number of the GN matrix can be largely alleviated through preprocessing, which is common practice.
- **Scaling up experiments is expensive and challenging.** An additional aspect that we want to emphasize is the additional effort to scale up the computation of the condition number, which is challenging and not straightforward, given the time and space complexity elaborated in the previous question.
Given the above explanations, we believe that experiments on ImageNet are less relevant to our work and currently also out of scope.
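To illustrate the whitening remark above, here is a small sketch (our illustration; the feature scales are arbitrary) showing that ZCA whitening drives the condition number of the empirical input covariance $\boldsymbol{\Sigma}$ to approximately one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Badly conditioned inputs: features on wildly different scales.
X = rng.normal(size=(1000, 5)) * np.array([100.0, 10.0, 1.0, 0.1, 0.01])

def cond_of_cov(X):
    w = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    return w.max() / w.min()

print(cond_of_cov(X))                     # huge before whitening

# ZCA whitening: apply Sigma^{-1/2} to the centered data.
Xc = X - X.mean(axis=0)
S = np.cov(Xc, rowvar=False)
w, U = np.linalg.eigh(S)
X_white = Xc @ U @ np.diag(w ** -0.5) @ U.T
print(cond_of_cov(X_white))               # approximately 1
```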
**Question 5**:
> Can you explain and rephrase the lines 719 and 720 with the associated Figures 19 and 21?
**Answer**:
Thank you for this question. As one can see from Eq. (6), the condition number for each term in the sum of the first upper bound in Eq. (6) improves when $\beta$ increases. This follows from the fact that the ratio becomes dominated by $\beta$ and goes to 1 as $\beta \to \infty$. This is also what we observe empirically in Figures 19 and 21, where the condition number is smaller for $\beta = 1$ compared to the other two settings, since $\beta = 1/L < 1/\sqrt{L} < 1$ for deeper networks with $L > 1$.
**Question 6**:
> Why did you compute the condition number on CPU rather than on GPUs (line 725)?
**Answer**:
Thank you for this question.
- In the case of linear and residual networks (with no activation function), where the GN matrix can be expressed analytically through Eq. (2), we did not find a significant time advantage of running the code on GPU compared to CPU.
- In the other case, where the GN matrix had to be computed numerically through automatic differentiation (for instance, the experiments on pruning weights at initialization), we ran into memory problems on the GPUs. We resolved this by building up the GN matrix through backpropagation on the GPU and moving it to the CPU to finally compute the condition number. Despite the overhead of moving the GN matrix between GPU and CPU, this still led to an overall speed-up.
**Question 7**:
> Can you also explore the relation between the condition number and the batch size?
**Answer**:
Thank you for this interesting question!
- Our work does not explicitly consider training dynamics, so generally speaking, our bounds consider $n$ to be the total number of sample points. However, one could also interpret $n$ as the batch size during training, in which case the condition number could be interpreted as the condition number of the "local loss landscape" of a single mini-batch. Note that $n$ only appears implicitly in our theoretical upper bounds through the condition number of the empirical input covariance matrix $\boldsymbol{\Sigma}$.
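As a quick illustration of how $n$ enters through the empirical covariance (our sketch, not an experiment from the paper): for isotropic data, the condition number of the empirical covariance decays toward 1 as the batch size grows:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
conds = {}
for n in [25, 100, 1000, 10000]:
    X = rng.normal(size=(n, d))                     # true covariance: identity (cond 1)
    w = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    conds[n] = w.max() / w.min()
    print(n, round(conds[n], 2))                    # shrinks toward 1 as n grows
```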
We kindly request the reviewer to let us know if they have any remaining questions. If they find that their queries have been sufficiently addressed, we would greatly appreciate it if they could reconsider their evaluation of our paper.
---
Rebuttal Comment 2.1:
Title: Official Comment by Reviewer 6bdM
Comment: Thank you for the authors' comprehensive rebuttal and the additional analyses provided in response to the feedback from myself and the other reviewers.
The authors have well addressed my concerns, which has significantly improved my understanding of the paper. Based on this, I will raise my score to 6 (Weak Accept). | Summary: This paper characterizes the conditioning of Gauss-Newton (GN) matrix. The contribution of this paper is clear and straightforward: for deep linear networks, it establishes a bound on the condition number of GN matrix, which is further extended to 2-layer ReLU networks. These bounds could be useful in certain scenarios. Numerical experiments are conducted to support the theoretical claims.
Strengths: 1. This paper is clearly written and well-organized. The motivation of conducting the proposed research is clearly demonstrated in the introduction, i.e., why it is interesting to study the condition number of GN matrix for certain types of simplified deep neural networks. The main results and empirical results are also clearly presented.
2. Numerical experiments verify that the bounds are tight under conditions imposed by the authors.
Weaknesses: 1. My major concern is about the implications of the derived bounds. Specifically, I note that training/learning such as gradient descent learning dynamics, which is crucial in practice, is not involved in deriving the bound. Thus it is hard to see the implication of these derived bounds since they are the same for both before and after training of the model parameters, i.e., it provides no information about the effects of training and fails to characterize the properties of solutions of deep learning models in a specific task. This might even lead the bounds to be meaningless in certain scenarios. For example, in Lemma 1, let $k = 1$, then according to [1], after enough iterations of gradient descent, the largest singular value $\sigma_{\max} \to \infty$ (as there is no formal definition of $\sigma_{\max}$ in Lemma 1 I assume that my understanding of its definition is correct) while $\sigma_{\min} \to 0$ for both $W$ and $V$, thus the bound blows up and becomes meaningless.
2. I find the introduction part of this paper a bit verbose, e.g., it spends about 4 pages (almost half of the main body) to discuss existing results and related works before presenting details of main results (starting from Lemma 1). I think it would be better to emphasize more about the technical contributions of the current work, which is not clear to me since it seems that many important steps have already been solved by previous works, e.g., Eq. (2), (3), and Lemma 3.
**Reference**
[1] Ji and Telgarsky. Gradient descent aligns the layers of deep linear networks.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could the authors give some implications of the proposed bounds and how we can better use it in practice?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors addressed their limitations in Section 7. In addition, in my view, the bounds fail to capture the distinctions between the solutions of deep learning models and random parameters such as the initialization, which limits its significance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weakness 1**:
> My major concern is about the implications of the derived bounds. Specifically, I note that training/learning such as gradient descent learning dynamics [...] is not involved in deriving the bound. Thus it is hard to see the implication of these derived bounds since they are the same for both before and after training of the model parameters [...]. This might even lead the bounds to be meaningless in certain scenarios. For example, in Lemma 1, let $k = 1$, then according to [1], after enough iterations of gradient descent, the largest singular value $\sigma_{\max} \to \infty$ [...] while $\sigma_{\min} \to 0$ for both $W$ and $V$, thus the bound blows up and becomes meaningless.
[1] Ji and Telgarsky. Gradient descent aligns the layers of deep linear networks.
**Answer**:
Thank you for raising your concerns about the implications of the derived bounds, which we would like to answer in two parts:
1. **Upper bound remains tight throughout training.** As you correctly pointed out, the bounds are indeed agnostic to the learning dynamics, and it is of course conceivable to find a parametrization of the weight matrices for which the upper bounds become vacuous. However, our empirical results in **Figure 9** in **Appendix D** of the main paper show that the upper bound actually remains tight throughout training, and is thus predictive of the Gauss-Newton condition number throughout training.
2. **Reference [1] considers the extreme case of linearly separable data.** We have examined reference [1] closely after your comment and would like to note that [1] considers an extreme case in which the data is assumed to be linearly separable, which rarely holds in practice. However, this assumption is essential for the authors of [1] to show the asymptotic weight matrix alignment.
Therefore, we would like to argue that our bounds are indeed useful and relevant in practical settings, as has been illustrated in the point above.
**Weakness 2**:
> I find the introduction part of this paper a bit verbose, e.g., it spends about 4 pages (almost half of the main body) to discuss existing results and related works before presenting details of main results (starting from Lemma 1). I think it would be better to emphasize more about the technical contributions of the current work, which is not clear to me since it seems that many important steps have already been solved by previous works, e.g., Eq. (2), (3), and Lemma 3.
**Answer**:
Our intention was to provide a comprehensive background and context to ensure that readers with varying levels of familiarity with the topic could fully understand the significance of our contributions. However, your point is well-taken and we will modify the text according to your comment. Thank you for pointing this out.
**Question 1**:
> Could the authors give some implications of the proposed bounds and how we can better use it in practice?
**Answer**:
This is a very good question indeed.
- There are several implications regarding the choice of architecture that are discussed in the paper (for instance the way to scale the width in relation to the depth, and the importance of using normalization layers). The paper also gives a justification of why pruned networks are more difficult to train as they have worse condition number. Although this is not our focus, it is possible that our analysis could inspire better techniques for pruning neural networks.
- Finally, we also want to mention potential applications in architectural search that is often performed at initialization due to the prohibitive cost of training. [Mellor et al., 2021, Yu et al., 2019, Elsken et al., 2019].
**Limitations 1**:
> The authors addressed their limitations in Section 7. In addition, in my view, the bounds fail to capture the distinctions between the solutions of deep learning models and random parameters such as the initialization, which limits its significance.
**Answer**:
Thank you for this comment. We will add this limitation to our discussion in section 7.
We kindly request the reviewer to let us know if they have any remaining questions. If they find that their queries have been sufficiently addressed, we would greatly appreciate it if they could reconsider their evaluation of our paper.
---
Rebuttal Comment 1.1:
Title: Reply to author rebuttal
Comment: **Response to rebuttal of weakness 1**
My concern still remains. Fig. 9 is not a practical setting: the model for Fig. 9 is a 3-layer linear model and cannot directly perform classification for data that is not linearly separable. The first plot of Fig. 9 only reveals that the loss converges, then
1. if the model fits the data perfectly, then the data is linearly separable, which contradicts the second point of the author response.
2. if the model does not fit the data perfectly, then the first point of the author response is far from sufficient as the training dynamics is actually not a successful one.
---
Overall, my point lies in that the proposed methods overlook many aspects of practical settings, therefore the range that the proposed methods can be applied to is rather limited. | Summary: This paper is dedicated to the theoretical characterization of the condition number of the Gauss-Newton (GN) matrix in neural networks. By studying deep linear networks and two-layer nonlinear networks, the authors establish tight bounds on the GN matrix's condition number and extend this analysis to architectures incorporating residual connections and convolutional layers. The methodology is rigorous, and the experimental validation is thorough, making significant contributions to understanding optimization processes in deep learning.
Strengths: 1. This paper deeply studies the properties of Gauss-Newton matrices, especially in terms of condition numbers in deep linear networks and two-layer nonlinear networks (Leaky ReLU activation), and provides rigorous theoretical derivation and proof.
2. This paper experimentally shows that the width and depth of the network, when the parameters are initialized with a certain distribution, have a strong correlation with the condition number.
Weaknesses: 1. The discussion in this paper on the relationship between the Gauss-Newton matrix condition number and the convergence rate of network optimization could be richer.
2. Convergence Rate Analysis in Figure 17: The network's optimal solution and the corresponding minimum loss differ under various settings, making it difficult to analyze the convergence rate from the loss changes. For instance, at epoch 300, the network with a width of 15 may have nearly converged, while the network with a width of 200 still shows a significant downward trend. Thus, it is hard to draw conclusions about the convergence speed.
3. The caption of Figure 9 does not match the subfigures.
4. Figures 12-16 are not referenced or analyzed in the paper.
5. This paper lacks exploration of more general networks and recent network structures.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can this paper be extended to study the effect of network initialization on convergence rate?
2. Is it more appropriate to adjust the experiment on the effect of condition number and convergence rate in this paper to study different initializations or study the depth of the current network? They are more likely to keep the minimum loss close to each other, and make more use of the analysis of convergence rate.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As stated in the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weakness 1**:
> Convergence Rate Analysis in Figure 17: The network's optimal solution and the corresponding minimum loss differ under various settings, making it difficult to analyze the convergence rate from the loss changes. [...] Thus, it is hard to draw conclusions about the convergence speed.
**Answer**:
Thank you for pointing this out.
- To investigate this further, we have rerun the same experiment for 2000 epochs. The result can be found in Figure 2 of the attached PDF.
- As the Reviewer pointed out, we indeed observe that the two networks with the smallest width converge to a suboptimal loss. Nevertheless, we *still observe the **connection** between a **smaller condition number** and a **faster convergence rate** for the remaining four networks*.
**Weakness 2**:
> The discussion in this paper on the relationship between the Gauss-Newton matrix condition number and the convergence rate of network optimization could be richer.
**Answer**:
Thank you for this comment.
- As discussed above, we have **rerun one experiment** to investigate the connection between the convergence rate and the Gauss-Newton matrix condition number more closely.
- Moreover, we discuss below as an *Answer* to your **Question 1** how our work can be extended to study the **effect of the condition number at initialization on the convergence rate**. We hope that these additional discussions have enriched the discussion on the relationship between the condition number of the GN matrix and the convergence rate of network optimization.
**Weakness 3**:
> The caption of Figure 9 does not match the subfigures.
Thank you for pointing this out. We have corrected the caption now.
**Weakness 4**:
> Figures 12-16 are not referenced or analyzed in the paper.
Thank you for this comment. We have now added the reference to Figures 12-16 in appendix I.1. All experiments, except the VGG experiment, showcase that there is indeed a connection between the condition number of the network and the convergence speed during training.
**Weakness 5**:
> This paper lacks exploration of more general networks and recent network structures.
**Answer**:
It would indeed be interesting to extend our analysis to other architectures, such as CNNs or Transformers.
- **Straightforward extension to CNNs.** We would like to note that our results can already be directly extended to CNNs by making use of the fact that the convolution operation can be reformulated as a matrix-vector product using Toeplitz matrices, in which case we can apply the same analysis as for fully connected networks; we mention this in a remark in line 266 and further elaborate in Appendix B. Previous work, such as Pinson et al. [2023] or Singh et al. [2022], has also studied properties of linear CNNs, such as their rank, making them an interesting object of study.
- **Empirical results on CNNs.** We additionally provide empirical results for the condition number of the GN matrix of linear CNNs at initialization in Figure 1 of the attached PDF of the general response, where we examine the effect of kernel size and number of filters on the condition number of the GN matrix.
- **MLPs in Transformers suggest potential transferability of some theoretical results.** A theoretical analysis of the condition number in Transformers would indeed be very intriguing, although we are currently unable to derive bounds for this setting. Nevertheless, we would like to point out that MLPs make up a large part of standard Transformer architectures, and thus we expect that some of our theoretical results will also carry over to the Transformer setting.
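The Toeplitz reformulation mentioned above can be sketched in a few lines for the 1-D case (our illustrative example; note that what CNN layers compute is cross-correlation, which is why the kernel is flipped when comparing against `np.convolve`):

```python
import numpy as np

def corr_toeplitz(kernel, n):
    """Toeplitz-structured matrix T such that T @ x equals the 'valid'
    cross-correlation of x with kernel -- the linear map a conv layer applies."""
    m = len(kernel)
    T = np.zeros((n - m + 1, n))
    for i in range(n - m + 1):
        T[i, i:i + m] = kernel            # each row: the kernel, shifted by one
    return T

rng = np.random.default_rng(0)
x, k = rng.normal(size=10), rng.normal(size=3)
T = corr_toeplitz(k, len(x))
# np.convolve flips the kernel, so compare against the flipped kernel.
print(np.allclose(T @ x, np.convolve(x, k[::-1], mode="valid")))
```

Once the layer is expressed as the matrix `T`, the fully connected analysis applies verbatim.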
---
Rebuttal 2:
Title: Answers to Question 1 and 2
Comment: **Question 1**:
> Can this paper be extended to study the effect of network initialization on convergence rate?
**Answer**:
This is a very interesting question. As the condition number is a very local property, it is in general hard to connect the conditioning at network initialization to a global convergence rate. However, we argue below that an ill-conditioned network initialization will still affect the rate of convergence of gradient descent (GD) in the initial phase of training. For this we present a modified analysis of GD for strongly convex functions, where we use local constants $\mu(k)$ and $L(k)$ instead of the global strong convexity and smoothness constants, respectively.
Let $L$ denote the smoothness constant (the Lipschitz constant of the gradient) and $\mu$ the strong convexity constant. Furthermore, let the step size satisfy $\eta_k \leq \frac{1}{L}$. Then, by the definition of gradient descent, we have:
\begin{align*}
||\boldsymbol{\theta}\_{k+1} - \boldsymbol{\theta}^* ||^2 &= ||\boldsymbol{\theta}_k - \boldsymbol{\theta}^* - \eta_k \nabla f(\boldsymbol{\theta}_k) ||^2 \\\\
&= ||\boldsymbol{\theta}_k - \boldsymbol{\theta}^* ||^2 -2 \eta_k \nabla f \left( \boldsymbol{\theta}_k \right)^\top \left( \boldsymbol{\theta}_k - \boldsymbol{\theta}^* \right) + \eta_k^2 ||\nabla f \left( \boldsymbol{\theta}_k \right) ||^2 \\\\
& \stackrel{\text{Strong convexity}}{\leq} (1-\eta_k \mu) ||\boldsymbol{\theta}_k - \boldsymbol{\theta}^* ||^2 - 2\eta_k (f(\boldsymbol{\theta}_k) - f(\boldsymbol{\theta}^*)) + \eta_k^2 ||\nabla f \left( \boldsymbol{\theta}_k \right) ||^2 \\\\
& \stackrel{\text{Smoothness}}{\leq} (1-\eta_k \mu) ||\boldsymbol{\theta}_k - \boldsymbol{\theta}^* ||^2 - 2\eta_k (f(\boldsymbol{\theta}_k) - f(\boldsymbol{\theta}^*)) + 2 \eta_k^2 L (f(\boldsymbol{\theta}_k) - f(\boldsymbol{\theta}^*)) \\\\
&= (1-\eta_k \mu) ||\boldsymbol{\theta}_k - \boldsymbol{\theta}^* ||^2 - 2 \eta_k (1 - \eta_k L) (f(\boldsymbol{\theta}_k) - f(\boldsymbol{\theta}^*))
\end{align*}
Since we assumed that $\eta_k \leq \frac{1}{L}$, the last term is negative. Therefore:
\begin{equation}
||\boldsymbol{\theta}_{k+1} - \boldsymbol{\theta}^* ||^2 \leq (1-\eta_k \mu) ||\boldsymbol{\theta}_k - \boldsymbol{\theta}^* ||^2
\end{equation}
So by recursively applying Eq. (1) and replacing $\mu$ by the local strong convexity constants $\mu(i)$:
\begin{equation}
||\boldsymbol{\theta}_k - \boldsymbol{\theta}^* ||^2 \leq \prod\_{i=0}^{k-1} (1-\eta_i \mu(i)) ||\boldsymbol{\theta}_0 - \boldsymbol{\theta}^* ||^2
\end{equation}
One can clearly see the effect of $\mu(0)$ in the bound, which is even more dominant when $\mu(k)$ changes slowly. Of course, the effect of $\mu(0)$ attenuates over time, which is why we speak of a local effect. However, one should keep in mind that overparametrization causes the parameters to stay close to initialization (at least in the NTK regime). We are happy to add a detailed discussion in the final version of the paper.
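The effect of conditioning on the contraction rate can be seen numerically on a simple quadratic (our sketch, not from the paper): with $f(\mathbf{x}) = \tfrac{1}{2}\mathbf{x}^\top \mathrm{diag}(\mu, L)\,\mathbf{x}$ and step size $\eta = 1/L$, the distance to the optimum contracts per step by at most $(1 - \mu/L)$, i.e. by one minus the inverse condition number:

```python
import numpy as np

def gd_distance(mu, L, steps=200):
    """||x_k - x*|| after GD on f(x) = 0.5 * x^T diag(mu, L) x with eta = 1/L (x* = 0)."""
    H = np.diag([mu, L])
    x = np.array([1.0, 1.0])
    for _ in range(steps):
        x = x - (1.0 / L) * (H @ x)       # gradient step with eta = 1/L
    return np.linalg.norm(x)

print(gd_distance(1.0, 2.0))      # well conditioned (kappa = 2): essentially converged
print(gd_distance(1.0, 200.0))    # ill conditioned (kappa = 200): contracts as (1 - 1/200)^k
```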
**Question 2**:
> Is it more appropriate to adjust the experiment on the effect of condition number and convergence rate in this paper to study different initializations or study the depth of the current network? They are more likely to keep the minimum loss close to each other, and make more use of the analysis of convergence rate.
**Answer**:
We are not sure what the Reviewer is asking precisely and kindly ask the Reviewer to rephrase the question, so we can dedicate an answer to it during the Author-Reviewer discussion phase.
We kindly request the reviewer to let us know if they have any remaining questions. If they find that their queries have been sufficiently addressed, we would greatly appreciate it if they could reconsider their evaluation of our paper.
---
Rebuttal Comment 2.1:
Comment: Thank you for your explanation, which has been generally helpful. Since my concerns have been resolved, I now hold a favorable view of this work and would like to raise my rating to 6. | Rebuttal 1:
Rebuttal: Dear Reviewers,
we would like to thank you for the time that you have committed to reviewing our work and for the questions and comments, which have helped to enhance our work considerably.
We are pleased to report that we were able to address almost all of your comments and questions (except one question which we hope **Reviewer NTTB** can clarify), which we summarize below:
1. - As suggested by **Reviewer NTTB** we have extended our discussion on the relationship between the Gauss-Newton matrix condition number and the convergence rate of network optimization, which we substantiate with an experiment that we discuss below in more detail.
- Additionally, we have conducted another experiment on linear CNNs, which highlights the empirical applicability of our bounds in the CNN setting.
2. As requested by **Reviewer oXX9** we have elaborated on the practical relevance of our derived bounds, which can be seen in Figure 9 in appendix D of our main paper, and further discussed how these bounds can be better used in practice.
3. As requested by **Reviewer 6bdM** we have highlighted the novelty of our contribution and clarified questions regarding the actual computation of the condition number (e.g. CPU vs. GPU, computational complexity).
4. As requested by **Reviewer ewWf**, we provide further motivation for studying the Gauss-Newton matrix and its relevance for understanding the optimization process of neural networks.
5. We clarified some formulations, corrected some minor typos, and promised to update parts of our references.
We believe that our revision highlights the novelty of our contribution, and that the additional experiments, which we provide, support our claim. We are looking forward to the Author-Reviewer discussion period.
In the meantime, we kindly ask you to re-evaluate our paper and consider raising your scores and confidence in your assessments.
Best regards,
The Authors
### Further experiments
- **Experiments on CNNs.** As requested by **Reviewer NTTB** we conducted additional experiments on the condition number of the Gauss-Newton matrix of linear CNNs at initialization, which can be found in Figure 1 of the attached PDF. We examine the effect of kernel size and number of filters on the condition number of the GN matrix. We observe a trend where increasing the number of filters increases the condition number (in analogy to depth in MLPs) and increasing the kernel size improves conditioning (in analogy to width in MLPs). This highlights the empirical applicability of our bounds in the CNN setting.
- **Connection between condition number of GN matrix and convergence speed.** As has also been requested by **Reviewer NTTB** we have rerun an experiment, which evaluates the convergence speed of a 2-layer ReLU network with varying width, for more epochs. The result can be found in Figure 2 of the attached PDF. Although the two networks with small widths converge to a suboptimal minimum and should be discarded from the discussion on convergence speed, we can still observe the connection between a smaller condition number and a faster convergence rate for the remaining four networks.
Pdf: /pdf/f916179363e940a30fe3439da37d2fc519617402.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Localized Adaptive Risk Control | Accept (poster) | Summary: This paper proposed a novel localized adaptive risk control algorithm that provides not only average case risk guarantees but also worst-case guarantees. Simulations in several different applications are provided, demonstrating the improved performance when compared with adaptive risk control.
Strengths: The paper is very well written. The illustration figures are very helpful in explaining the localization effects of different choices of $w(x)$.
The problem is also well motivated: the worst case risk guarantee is indeed very important in applications such as medical imaging.
The proposed algorithm is novel.
Weaknesses: 1. My major concern is on the simulation results. Though I appreciate the improved performance compared with ARC in different settings, the paper does not provide any comparisons with the state-of-the-art approaches in these applications. For example, for electricity demand prediction, the SOTA method is based on quantile regression and there are many papers published trying to provide better predictions on demand. How does the proposed LARC compare with these SOTA methods? If LARC does not outperform the existing methods, what are the benefits of using LARC? Similarly, tumor image segmentation is also a standard task in medical imaging, so it will be interesting to compare LARC with the SOTA methods in that field too.
2. It seems that the worst-case guarantee is only provided for i.i.d. data, is that correct? If so, what's the challenge of obtaining a worst-case guarantee for arbitrary data sequence?
In addition, the paper mentioned that the major improvement of this paper compared with [Angelopoulos et al., 2024b] is the online setting. However, usually, i.i.d. data is obtained by sampling uniformly from an offline data set. So what is the significance of the theoretical results if the worst-case guarantee can only be provided for i.i.d. cases?
Technical Quality: 3
Clarity: 3
Questions for Authors: See above
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments. Below, we address each comment point by point:
- The focus of this work is on producing calibrated predictions, i.e., predictions with risk control guarantees, rather than on providing more accurate predictions. To this end, we compare standard L-ARC against ARC with decaying step sizes [Angelopoulos et al. 2024] (presented at ICML 2024), which, to the best of our knowledge, is the state-of-the-art online calibration scheme that offers statistical guarantees.
- The worst-case guarantee in Section 2.3.2 is valid for any sequence, not necessarily i.i.d. The statistical localized guarantee in Section 2.3.1 is obtained under the same data model considered in [Angelopoulos et al. 2024b].
- The worst-case deterministic guarantee (Theorem 2) is valid for any choice of data sequence, even if adversarial. Moreover, if the data is generated in an i.i.d. fashion—such as by sampling with replacement from a dataset or through online interaction with a stationary environment—L-ARC also provides the statistical guarantee stated in Theorem 1.
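For intuition on the scalar-threshold update mechanism that ARC-style online calibration builds on, here is a minimal toy sketch; the uniform score model, constant step size, and exact update rule are our simplifying assumptions, not the algorithm analyzed in the paper.

```python
import random

# Toy model: nonconformity scores are Uniform(0, 1); the true label is covered
# when its score falls below the current threshold q. Moving q up after a
# miscoverage and down otherwise drives the long-run miscoverage toward alpha.
random.seed(0)
alpha, eta = 0.1, 0.05   # target miscoverage level and constant step size
q, errs = 0.5, []
for t in range(20000):
    score = random.random()
    err = 1.0 if score > q else 0.0   # 1 = true label excluded from the set
    errs.append(err)
    q += eta * (err - alpha)          # raise threshold after a miscoverage
rate = sum(errs) / len(errs)
print(f"long-run miscoverage rate: {rate:.3f} (target {alpha})")
```

With i.i.d. scores the empirical miscoverage settles near the target; the worst-case guarantees discussed above concern what survives of this long-run behavior for arbitrary, possibly adversarial, sequences.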
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses. I will keep my score.
---
Rebuttal 2:
Comment: Thank you for considering our responses and for your prompt reply! Given the overall positive assessment of the paper and the comments, we would appreciate it if you could elaborate on what would have been necessary to achieve a higher score. | Summary: This paper addresses the design and analysis of the localized version of adaptive risk control (L-ARC). In the first section, the problem of classical ARC is nicely introduced, showing the threshold updating mechanism and the convergence analysis of the resulting loss. Then, the problem setting, design, and analysis of ARC are naturally generalized to those of L-ARC. Finally, L-ARC is applied to three practical problems of electricity demand forecasting, tumor segmentation, and beam selection. The result of the experiments supports the usefulness of L-ARC.
Strengths: The application scope of L-ARC is broad, and it can be utilized in various application examples. The reviewer believes L-ARC has a high potential for broad impacts. The update model (17) and (18) developed for L-ARC in this paper is novel. The theoretical guarantee, in particular, the convergence analysis of the localized loss is also given under standard assumptions.
Weaknesses: Both Theorems 1 and 2 state that the localized loss is bounded by \kappa * B, meaning the maximum value of the kernel and an upper bound of the loss. In the reviewer's understanding, the theorems are technically correct, but he/she cannot find their value. They seem to be just deriving an obvious upper bound from the assumptions.
Technical Quality: 3
Clarity: 3
Questions for Authors: Related to Theorems 1 and 2, could the author comment on their value of the upper bound of the localized loss?
L-ARC updates the threshold function, denoted by $g_t()$. The setting seems to be restrictive. Could the authors extend the setting to the case of multiple prediction sets? In other words, multiple scoring functions s_1, s_2, ... are considered and their thresholds are adaptively updated in a similar manner to (17) and (18).
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitation of L-ARC is stated in Section 5, saying that it requires a memory function of storing the input data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments. Below, we address each comment point by point:
- The terms in the bound incorporate factors that relate the algorithm's guarantees to domain-dependent quantities, such as the maximum value of the loss $B$ and the maximum value of the kernel $\kappa$. These quantities naturally appear in kernel-based algorithms [Kivinen et al. 2004, Gibbs et al 2023]. The values of these terms depend on the specific problem. For example, we have $B=1$ for the miscoverage loss and the FNR loss (experiment in Section 3.2); for the SNR regret in the experiment in Section 3.3, $B=1$; and for the electricity forecast, $B$ is given by the maximum value of the label variable $Y_t$. The value of $\kappa$ is a positive quantity controlled by the designer; it can be chosen arbitrarily, and it determines the level of localization of L-ARC.
- L-ARC can be generalized to multiple prediction sets and experts. We thank the reviewer for this suggestion, and note that this direction can be pursued by combining the results of L-ARC with those from the recent work “Improved Online Conformal Prediction via Strongly Adaptive Online Learning” by Bhatnagar et al. This paper focuses on localization in time using multiple experts active at different time instants. Combining this approach with L-ARC would provide a calibration method with localized guarantees both in the input space and in time. We will include this interesting research direction in our future work.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply comments. They convinced the reviewer.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review our paper and for your useful comments! | Summary: The paper introduces Localized Adaptive Risk Control (L-ARC), an enhancement of Adaptive Risk Control (ARC). L-ARC updates a threshold function within a Reproducing Kernel Hilbert Space (RKHS) to provide localized risk guarantees while maintaining ARC's worst-case performance. Experiments show that L-ARC improves fairness across different data subpopulations in tasks like image segmentation.
Strengths: The introduction of Localized Adaptive Risk Control (L-ARC) is a significant advancement over traditional ARC. By focusing on localized risk guarantees, L-ARC addresses the critical issue of uneven risk distribution across different data subpopulations, which is a well-known limitation of ARC.
Weaknesses: 1. The introduction of a threshold function within an RKHS and the associated online adaptation process may complicate the implementation. Practitioners might find it challenging to understand and apply the method without a significant background in RKHS and online learning algorithms.
2. The paper primarily compares L-ARC with traditional ARC. Including comparisons with other state-of-the-art risk control or calibration methods would provide a more comprehensive evaluation of L-ARC's strengths and weaknesses.
Technical Quality: 2
Clarity: 2
Questions for Authors: Refer to Weaknesses.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments. Below, we address each comment point by point:
- We aim to ease the implementation of the algorithm by providing code to reproduce the experiments. In the revised manuscript, we will also clarify the connection with online learning algorithms, enabling practitioners to leverage this connection when implementing L-ARC.
- We compare L-ARC against ARC with decaying step sizes [Angelopoulos et al. 2024] (presented at ICML 2024), which, to the best of our knowledge, is the only online calibration scheme that offers statistical guarantees. Another recent algorithm, “Improved Online Conformal Prediction via Strongly Adaptive Online Learning” by Bhatnagar et al., proposes localizing predictions in time rather than in the input space. Since these two approaches are complementary and orthogonal, a direct comparison may not be particularly insightful. Nonetheless, L-ARC can be combined with this approach to achieve both time and input space localization. We consider this an interesting research direction for future work.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I will keep my rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review our paper! We hope that the above has addressed your concerns. If not, please let us know. | Summary: This paper introduces an online calibration method to enhance the Adaptive Risk Control (ARC) framework. ARC traditionally adjusts prediction sets based on a scalar threshold to ensure long-term risk control and marginal coverage guarantees. However, as mentioned in the paper, it may unevenly distribute risk guarantees across different subpopulations. L-ARC addresses this by updating a threshold function within a reproducing kernel Hilbert space (RKHS) to provide localized statistical risk guarantees, ranging from conditional to marginal risk, while preserving the worst-case performance. My additional comments are as follows.
Strengths: - The paper is easy to read. It could have been better structured, but it is okay.
- The authors have proposed a new technique for calibration and provided theoretical and experimental results.
Weaknesses: - The concept of input space is not clear in the introduction.
- Is the pre-trained model trained on all the data sources that will be used during the calibration process if they are different?
- Is the loss function in (2) convex? And what is the intuitive explanation for why (6) should hold? Does it somehow follow from the regret analysis of the online gradient descent algorithm, which converges at the same rate?
- What is the crucial reason for using the RKHS function instead of a scalar? How is this useful?
- Ideally, the introduction of the RKHS should have helped with the rates, but it looks like the rates in (12) and (13) are worse than the rates in (9) and (6), so in what sense is localized ARC better?
- On the other hand, keeping track of a function g_t in RKHS for each t is way more expensive than just keeping track of a scalar in practice. Why the proposed method makes sense is not clear.
- What is the reason to consider the functional form in (15)
- How are the updates in (17)-(21) derived? Equation (19) is fine, which is the standard representation of a function in RKHS
- As mentioned before, the additional challenge with the RKHS is that we now need to store a^i_t from i = 1 to t because we need to update them, and for large T, this could be problematic in practice. And if you make the dictionary size for storing them constant, then it would incur additional regret in the final terms.
- The experimental results are weak. They are not descriptive enough to make sense of the advantages of the proposed techniques over those of existing methods. The long-term coverage plots look almost similar, so where does the benefit shown in the other figures come from? Also, the authors should report the downside of the proposed RKHS-based approach, which requires more memory accesses to implement than ARC does. So, the comparisons are also not fair.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please check the weaknesses.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful comments. Below, we address each comment point by point:
- The input space of the prediction model is commonly referred to as the feature space in the machine learning literature. We will be happy to clarify the terminology in the revised manuscript.
- As typically assumed in the conformal prediction literature [Angelopoulos et al. 2022, Angelopoulos et al. 2024, Feldman et al. 2022], L-ARC is agnostic to the data distribution used to pre-train the prediction model. This means that L-ARC does not assume that the model is trained on the same population used for calibration and testing. We will clarify this point in the revised manuscript.
- The loss function in (2) is not necessarily convex; for example, it could be the miscoverage loss, which is non-convex. The assumptions about the properties of the loss function used to prove Theorems 1 and 2 are stated in Assumptions 3, 4, and 5. As referenced in the main text, the result reviewed in (6) is known from conformal inference [Angelopoulos et al. 2024 and Gibbs and Candès 2021]. We will specify that the loss in (2) is not necessarily convex in the revised manuscript.
- As mentioned in the introduction and shown in Figure 1, using a single scalar threshold, “ARC may distribute such guarantees unevenly in the input space, favoring a subpopulation of inputs at the detriment of another subpopulation.” In contrast, as described in Section 1.3, L-ARC uses a threshold function in an RKHS, allowing it to “produce prediction sets with localized statistical risk control guarantees as in (9), while also retaining the worst-case deterministic long-term guarantees (6) of ARC.” As demonstrated in the experiments, which encompass both standard [Angelopoulos et al. 2022, Angelopoulos et al. 2024] and new benchmarks, L-ARC not only retains the long-term coverage of ARC but also provides fairer and more homogeneous risk control across different subpopulations of the input space, which is an essential requirement for many applications.
- As mentioned at the beginning of Section 1.2, ARC does not offer any form of localized risk guarantee. As elaborated in Section 1.3, by replacing the scalar threshold with a member of the RKHS, L-ARC can instead provide localized risk control, a more general statistical guarantee. As stated in the bullet points in Section 1.3, the convergence results are the same as those for ARC, with an additional term that depends on the level of localization of the kernel function. This function dictates the degree of localization of the guarantee, as explained in Figure 2. These guarantees recover (6)-(7) when a non-localized threshold is used.
- As mentioned in the conclusion, memory efficiency is the main drawback of using localized thresholds, which motivates future work. However, the benefit of localized risk control may justify the increased computational and memory cost in some applications for which fairness is critical. While we have decided to leave the derivation and analysis of a memory-efficient version of L-ARC for future work, the attached PDF file demonstrates how to obtain a simple variant of L-ARC with **constant memory requirements** that still exhibits improved localized risk control. We plan to include these new empirical results in the supplementary material for the revised manuscript.
- The functional form (15) highlights how the current approach generalizes ARC. The proposed threshold function combines a scalar component $c_t$, as in ARC, plus a varying component $f_t$ from an RKHS, which makes it possible to localize the risk control guarantee. We will elucidate the relation to ARC and the functional representation in the revised text.
- Equations (17)-(21) can be interpreted as the update rules of (regularized) online gradient descent. In the revised manuscript we will refer to the work of [Kivinen et al., 2004] for a derivation.
- See the previous point and the attached pdf for a solution to this problem that has **constant**, not linear, memory requirements.
- In our paper, we provide experimental results that include standard and recent benchmarks [Angelopoulos et al. [2024b], Angelopoulos et al. [2022b]], as well as a new beam selection problem with engineering significance. Across this variety of experiments, as the reviewer notes, L-ARC not only guarantees the same worst-case deterministic guarantees as ARC (the curves overlap), but it also substantially improves coverage across different subpopulations of the data, as shown in the right panels of Figures 3, 4, and 5. The price to pay for localized risk control and increased fairness is an increased memory requirement. This cost is justified in all applications for which fairness and conditional coverage are necessary, as in the proposed examples. To address the memory requirement issue, in the attached PDF we provide a variant of L-ARC that allows control and reduction of memory and computational requirements. We will be happy to include these additional experiments in the supplementary material of the paper.
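A minimal sketch, in the style of the online kernel updates of [Kivinen et al., 2004] referenced above, of how a kernel-expansion threshold function can be maintained online, with a fixed-size buffer standing in for the constant-memory variant described in the rebuttal; the RBF kernel, hyperparameters, and oldest-first truncation rule are our illustrative assumptions, not the authors' exact scheme.

```python
import math

def rbf(x, xp, gamma=2.0):
    """Radial basis function kernel on scalars (illustrative choice)."""
    return math.exp(-gamma * (x - xp) ** 2)

class OnlineKernelThreshold:
    """Threshold function f_t kept as a kernel expansion, updated online."""

    def __init__(self, eta=0.2, lam=0.01, max_terms=50):
        self.eta, self.lam, self.max_terms = eta, lam, max_terms
        self.centers, self.coeffs = [], []  # f_t(x) = sum_i a_i * k(x, x_i)

    def value(self, x):
        return sum(a * rbf(x, c) for a, c in zip(self.coeffs, self.centers))

    def update(self, x, err, alpha=0.1):
        # Regularized online gradient step: shrink existing coefficients,
        # then append a new kernel term centered at the current input.
        self.coeffs = [(1 - self.eta * self.lam) * a for a in self.coeffs]
        self.centers.append(x)
        self.coeffs.append(self.eta * (err - alpha))
        # Constant-memory stand-in: drop the oldest term once the buffer is full.
        if len(self.centers) > self.max_terms:
            self.centers.pop(0)
            self.coeffs.pop(0)

model = OnlineKernelThreshold()
for _ in range(120):
    model.update(0.0, err=1.0)  # repeated miscoverage at x = 0 raises f_t there
print("terms stored:", len(model.centers), "| f_t(0) =", round(model.value(0.0), 3))
```

Because the RBF kernel decays with distance, updates at x = 0 leave the threshold essentially unchanged far away (e.g., at x = 10), which is the localization effect that distinguishes this functional update from a single scalar threshold.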
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. However, I remain unclear about the claim that having non-sublinear regret in equation (13) is beneficial. Additionally, addressing the tradeoff between finite memory size and regret performance seems crucial for this paper, rather than leaving it to future work. Without considering finite memory in theoretical analysis, the practical applicability of the proposed approaches is limited. Because without establishing this connection theoretically, I am not sure if the theoretical contributions of this work are good enough.
While the paper includes a set of experimental results, the key takeaways are not immediately clear. The presentation needs significant improvement to facilitate understanding. For example, the reduction in average miscoverage error in Figure 3b is not sufficiently explained, leaving uncertainty about its adequacy. Similarly, the results for beam selection in Figure 5 are difficult to interpret, and the benefit is unclear. Although the authors mention that L-ARC requires less time, the actual time measurements are not reported in the figures, and the percentage gains remain unclear.
Overall, the paper shows promise, but I cannot recommend it for acceptance in its current form. To acknowledge the efforts, I will slightly increase the scores, but major revisions are necessary before this paper can be considered for publication. I encourage the authors to continue refining and improving the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for considering our response and for your valuable reply!
We never claimed that the additional term in (13) is beneficial; rather, this term is a consequence of using an RKHS threshold instead of a constant. As elaborated in the text, this term gauges the impact of RKHS localization on the final bound. The implicit benefit of including (13) is that by allowing for a “controllable” suboptimality gap, we can target the statistical localized guarantees in Theorem 1 that are not achievable using other schemes, such as ARC. We also note that this suboptimality gap is practically non-existent; as you previously observed, the long-term coverage curves of ARC and L-ARC overlap. However, the conditional risk control properties of L-ARC are substantially superior as highlighted in the experiment.
The theoretical contributions of the paper include the derivation of a novel algorithm for online calibration based on RKHS that we prove to enjoy both long-run and localized statistical guarantees. To do so we introduce several novel technical results (see our response to reviewer LXXA). We believe that the paper offers substantial theoretical advancements to the field of online calibration. We would be happy to include the memory-efficient variant of L-ARC in the additional material, although we will reserve its analysis for future work as we believe the current work provides enough contributions. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful comments. Multiple reviewers raised concerns about the linear memory requirement of L-ARC. In the original submission, we acknowledged this limitation and planned to leave the derivation and analysis of memory-efficient variants for future work. Nonetheless, in response to these comments, we have decided to include a PDF document that presents a variant of L-ARC with **constant memory requirements** and provides empirical results demonstrating how this scheme balances memory efficiency with localized risk control. We hope this addresses the reviewers’ concerns and plan to include these new results as part of the supplementary material.
Pdf: /pdf/2023f20207968d3a51d2a6ac97df563aa6852d23.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper proposes the Localized Adaptive Risk Control (L-ARC) scheme for learning to perform conformal prediction from online data. In the setting under consideration, the data is potentially non-i.i.d. and the scalar threshold parameter typically used by existing methods in the construction of the prediction set is replaced by a function learned online from the data stream via functional stochastic gradient descent over a reproducing kernel Hilbert space (RKHS). It is shown that the selection of the RKHS -- i.e., the properties of the associated kernel, such as the scale parameter of a radial basis function (RBF) kernel -- yields "localization" of the resulting conformal predictor to the data set in the sense that prediction performance (miscoverage error) remains consistent across distinct subpopulations within the data. This is demonstrated through experiments on electricity forecasting, medical image segmentation, and beam selection problems that illustrate the localization property, while theoretical results characterize the price paid for this localization in terms of looser bounds on suboptimality and convergence rate.
The key algorithmic contribution of the paper is the use of *online* kernel learning, which is well-studied (compare eqs. (19)-(21) in the present submission with (10)-(12) in [Kivinen et al., 2004]), to learn a *function* to replace the scalar threshold typically used in conformal prediction. Existing methods for online conformal prediction [Gibbs and Candès, 2021; Bhatnagar et al., 2023; Feldman et al., 2023; Angelopoulos et al., 2024] consider either a fixed scalar threshold in the construction of the prediction set, while recent work on conformal prediction with a threshold function [Gibbs et al., 2023] does not directly apply to the online setting. The primary theoretical contributions are the establishment of upper bounds characterizing the localization effects of the choice of RKHS kernel in terms of a certain notion of suboptimality and the rate of convergence to a neighborhood of optimality, where the size of the neighborhood is shown to depend on the choice of kernel. The experimental results illustrate that L-ARC outperforms the existing, non-localized ARC method [Gibbs and Candès, 2021; Feldman et al., 2022] at achieving the desired level of miscoverage error across distinct subpopulations.
Strengths: While the existence of an online method for conformal prediction over non-i.i.d. data that uses threshold functions appears to be an open problem in the recent conformal prediction literature, the significance and potential utility of using a threshold *function* taken from a RKHS instead of a scalar or another class of function are not immediately obvious. However, the theoretical and experimental results of this paper indicate that a useful notion of "localization", where the choice RKHS kernel allows reliable conformal prediction on data with distinct subpopulations, results from considering this class of threshold functions -- this is a very interesting and original insight that is likely to draw attention in the conformal prediction community, and is a major strength of the paper. The theoretical results also provide some useful insight into the effect of kernel choice in L-ARC, and the main steps in the analysis appear to be sound (I read but did not thoroughly check all details in the appendix). Finally, the experimental results provide strong support to the utility of the localization effect of the choice of kernel, without which the theoretical results would lose much of their force and the significance and potential utility of L-ARC would remain unclear.
Weaknesses: The primary weaknesses of this work arise from lack of context with previous work and lack of motivation and discussion of the technical results. These issues make it difficult to accurately judge its significance and contribution. Specifically:
1. Important context with previous work is missing. First, the L-ARC method proposed in Sec. 2.2 is essentially an adaptation of online kernel learning (see [Kivinen et al., 2004] and its many citers) to the conformal prediction setting (as mentioned in the summary above, eqs. (19)-(21) in the present submission are very similar to (10)-(12) in [Kivinen et al., 2004]), yet this connection is not mentioned. This information is important for clarifying the limits of the present paper's contribution.
2. The relationship between the technical results presented and previous analyses is unclear. In particular, it is unclear from the text (including the proofs in the appendix) what parts of the analysis draw on previous analyses of conformal prediction methods -- if the results are entirely independent and original, this can be highlighted -- and what key technical innovations were required in the theoretical analysis.
3. Motivation of the technical results and discussion and clarification of their meaning is generally lacking. As a result, the technical meaning and effect of "localization" characterized in Sec. 2.3, especially in the Assumptions and Theorem 1, remain unclear. Specific questions regarding these issues are included in **Questions** 3, 4, and 5 below.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. What are the main technical innovations in the proofs of the main results?
2. What parts of the analysis draw on previous work?
3. How reasonable are the assumptions presented in Sec. 2.3 and when do they hold (especially Assumptions 4 and 5)?
4. What is significance of the weighting inside the expectation on the LHS of eq. (22) and why is this relaxed, reweighted expectation meaningful?
5. What is the role and importance of the weighting function $w(\cdot)$ and the corresponding terms containing $f_w(\cdot)$ in eq. (22)? The presence of $w(\cdot)$ seems like an artifact arising from considering the covariate shift $w(\cdot)$ in lines 418-419 in the appendix, which might be expected to go away when $max_w$ is taken in the proof of Lemma 1 in Sec. A; why do $w(\cdot)$ and $f_w(\cdot)$ persist in the statement of Theorem 1 and how do we interpret them?
6. The present paper uses the term Adaptive Risk Control (ARC) to refer to the methods proposed in [Gibbs and Candès, 2021] and [Feldman et al., 2022] on lines 18-19, but these works call their methods Adaptive Conformal Inference (ACI) and Rolling Risk Control (Rolling RC), respectively; which of these does ARC refer to, and which is implemented in the experiments?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Aside from the issues raised above, the limitations have been adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments. Below, we address each comment point by point:
- From a methodological point of view, the main innovation lies in using functions from an RKHS to define prediction sets that are calibrated online and that ensure localized risk control. The RKHS function is optimized online using gradient descent steps applied to kernels. As the reviewer noted, this kernel-based optimization framework was originally studied in [Kivinen et al., 2004], and we apply it to online calibration. To this end, we had to develop several technical innovations. In Theorem 1, we analyze the functional derivative of the update and show that the stationary point satisfies the localized risk guarantee (41). This allows us to prove that, under Assumptions 4 and 5, we asymptotically achieve localized coverage. In Theorem 2, we prove that, for the problem of online calibration, the threshold function defined by the RKHS has a bounded infinity norm across all iterations (Proposition 3). This allows us to bound the first term in (76) and prove the worst-case guarantees. All of these intermediate results, as well as the final theorems, are novel contributions. In the revised manuscript we will clarify the connection between the L-ARC update rules and [Kivinen et al., 2004], and emphasize the technical novelties introduced in the proofs.
- The parts of the proof that draw on previous work are in Theorem 1, which leverages, as intermediate steps, the regret bounds (47) and (49) from [Kivinen et al., 2004] and [Cesa-Bianchi et al., 2004, Theorem 2], respectively. These works are referenced in the Appendix.
- Assumptions 2 and 3 are standard in the conformal risk control literature (see [Angelopoulos et al. 2022, Angelopoulos et al. 2024, Feldman et al. 2022]). These are reasonable and easily met in practice, as they state that the risk functional is bounded and that it decreases as the set grows larger, which is true for typical losses. Assumption 4 is a stronger version of Assumption 3, similar to (6) in [Angelopoulos et al. 2024]. Like the previous assumption, it states that if the set size increases, the expected risk is strictly decreasing. Finally, Assumption 5 states that the loss function is left-continuous in the threshold value, and it is automatically satisfied for many popular losses. For all other losses, we note that this assumption can easily be satisfied by replacing ≤ with < in the definition of the set in (14). We plan to make this modification in the definition of the set predictor to remove this assumption.
- As discussed in Section 1.2, the weighting term in the localized risk equation amounts to a shift in the covariate distribution. By proving that the inequality is guaranteed for every shift $w\in \mathcal{W}$, we prove that L-ARC provides risk control for all potential distribution shifts in the set $\mathcal{W}$. Specifically, inequality (22) states that for any shift $w\in \mathcal{W}$ it is possible to bound the shifted average risk by a quantity that is shift dependent, hence the presence of $f_w(\cdot)$ and $w(\cdot)$. For example, if there is no shift, $w(\cdot)=1$, and this quantity equals zero. We thank the reviewer for carefully reading the Appendix and spotting the typo in lines 418-419. In fact, the $\max$ is a leftover from a previous proof attempt and should be removed in the current version. We plan to correct this typo in the revised manuscript.
- Adaptive Risk Control refers to a generalization of ACI from conformal prediction to risk control, and is instantiated in our experiments with a decaying step size [Angelopoulos et al. 2024] to guarantee statistical coverage. We chose [Angelopoulos et al. 2024] as a benchmark because it is the state of the art for online calibration and, at the time of writing, the only online calibration scheme with statistical guarantees.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their response. It has mitigated several of my concerns and I have raised my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We are glad that we were able to address your concerns, and we thank you again for your valuable feedback! | null | null | null | null | null | null |
Active Sequential Posterior Estimation for Sample-Efficient Simulation-Based Inference | Accept (poster) | Summary: This work introduces a new way to do active learning in simulation-based inference. This algorithm allows for the sampling of data points not only in regions of interest but also in regions that lead to high information gain. They then apply this algorithm to the problem of urban travel demand calibration, which consists of estimating the expected travel demand between different points of interest.
Strengths: ### Originality
* I'm not aware of other work introducing active learning into SBI in the same way as done in this paper.
* I'm unfamiliar with the urban travel demand calibration literature, but I am also unaware of a similar piece of work.
### Quality
* The idea of targeting samples of high epistemic uncertainty in SBI is sound.
* The different approximations made give rise to a practical algorithm.
* The experiments suggest that the active learning scheme allows the reduction of the number of samples required from the simulation for the task of urban travel demand calibration.
### Clarity
* Overall, the paper is clearly written. The method and motivation are described clearly, step by step, and the urban travel demand calibration task is also well described.
### Significance
* Introducing active learning into SBI is of high significance.
* I'm unfamiliar with the urban travel demand calibration literature but I would trust the authors on the fact that this is an important topic.
Weaknesses: ### Originality
* I have no concerns regarding originality.
### Quality
* I find that Equation 4 is not well motivated. Why are the terms coming from Equation 3 and the proposal prior simply multiplied? I would have expected a hyperparameter controlling the weight assigned to both of those terms, but there is none, and the choice of simply multiplying the terms seems arbitrary.
* In my understanding, optimizing Equation 4 can be very computationally costly, as it requires evaluating the Bayesian neural network on many samples, and each evaluation of the Bayesian neural network itself consists of evaluating several neural networks. I find that this limitation is not well stated in the paper.
* RMNSE is a strange metric to use for posterior evaluation, in my opinion. This metric is designed to evaluate the quality of point estimation but not distributions. In particular, it falls short when the posterior is multi-modal; the metric would prefer an unimodal approximation in the middle to the correct multimodal distribution.
### Clarity
* The paper is a mix of a methodological and an applicative paper and does not have a clear scope. While the title suggests an applicative paper, the authors introduce a new algorithm that is valuable in full generality for the field of simulation-based inference. The experiments are, however, limited to urban travel demand calibration and hence do not demonstrate that this method is effective for other problems.
### Significance
* I have no concerns regarding significance.
Technical Quality: 3
Clarity: 3
Questions for Authors: Do you have results with a metric other than RMNSE that would not suffer from the same issues?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations regarding the computational cost of the acquisition function are not mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and effort providing detailed feedback on our work. Please find our responses to your questions and comments below.
Note: Where applicable, we prefix sections with `W-<x>`, `Q-<y>`, or `L-<z>` to reference itemized comments in Weaknesses, Questions, and Limitations, respectively, numbered by the order in which they were mentioned.
**[W-1]** The motivation for Equation 4 is primarily discussed in Section 3.3 (lines 172-175), which describes how the $\theta^*$ that maximizes this notion of distributional uncertainty is not necessarily a likely parameter under the observational data $x_o$. Multiplying by the proposal prior, which captures the most up-to-date estimate of $p(\theta | x_o)$ at each round of the algorithm, allows us to re-weight this uncertainty by the approximate likelihoods of each parameter under $x_o$ and prioritize more likely $\theta$ during acquisition. This multiplication is primarily motivated by the desire to more closely align the effective likelihoods of parameters $\theta$ with the true proposal prior, which is ultimately present and corrected for in the NDE loss (see line 178).
**[W-2, L-1]** Regarding computational cost of acquisition: Section 3.4 describes the mechanism by which Equation 4 can be optimized over a fixed set of proposal samples during each round. While we do indeed need to evaluate the NDE for several candidate parameters and many network realizations, this acquisition evaluation can be performed very efficiently as a batched inference step under the NDE model.
Further, the additional computational overhead can be compared against SNPE (i.e., no acquisition function) for our primary traffic task, as the reported wallclock times (found in subfigures (b) for each of the calibration plots) provide the raw runtimes for each method when obtaining the 128 simulation samples for each setting. Empirically, this allows us to compare the total time spent by our algorithm (including the cost of optimizing the acquisition function), and we observe a negligible difference compared to SNPE over our explored horizons.
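As a purely illustrative sketch (not the paper's implementation — the `acquire_batch` name, the `log_q(thetas, seed)` interface, and the use of the ensemble mean as a proxy for the proposal prior are all assumptions for illustration), the batched evaluation of an Eq. (4)-style acquisition score over a fixed pool of proposal samples might look like:

```python
import numpy as np

def acquire_batch(proposal_thetas, log_q, n_dropout=32, batch_size=8):
    """Score a fixed pool of proposal samples in one batched pass.

    log_q(thetas, seed) returns per-theta log-densities under one
    sampled dropout mask, i.e., one NDE realization phi' ~ p(phi | D).
    """
    logs = np.stack([log_q(proposal_thetas, seed=s) for s in range(n_dropout)])
    probs = np.exp(logs)                                  # (n_dropout, n_theta)
    ensemble = probs.mean(axis=0)                         # approximates p(theta | x_o, D)
    uncertainty = ((probs - ensemble) ** 2).mean(axis=0)  # Eq.(3)-style squared deviation
    scores = ensemble * uncertainty                       # reweight by a proposal-prior proxy
    return proposal_thetas[np.argsort(scores)[-batch_size:]]  # top-scoring candidates
```

Because every dropout realization is evaluated over the whole pool at once, the cost is a handful of batched forward passes rather than a per-candidate optimization loop, which is consistent with the negligible wallclock overhead reported above.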
**[W-3, Q-1]** You raise a valid point: RMSNE is indeed a non-standard metric for evaluating SBI methods in the literature, and is employed in this work almost entirely due to its prevalence in the OD calibration community. In our traffic calibration setting, there are a few reasons why it is used:
1. We are ultimately interested in producing good point estimates of the true OD matrices,
2. RMSNE is a standard in the OD calibration space and allows us to compare with other baseline methods like SPSA and PC-SPSA, and
3. While we may benefit downstream from having a good distributional estimate under $x_o$ (as SBI methods generally attempt to provide), for our task, we are mostly interested in how approximating the posterior can improve our method's intermediate exploration of the parameter space in service of producing good point estimates.
Nevertheless, we agree a systematic exploration of the final posterior accuracy under metrics better suited to capture distributional differences (not just point estimates) would be warranted in general. Please see our global rebuttal reply for more details here.
**[W-4]** Please see our global rebuttal reply for details on additional empirical evaluation, including discussion around new simulation benchmarks and more representative metrics.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply, I comment on it below.
> The motivation for Equation 4 is primarily discussed in Section 3.3 (lines 172-175), which describes how the $\theta^*$ that maximizes this notion of distributional uncertainty is not necessarily a likely parameter under the observational data $x_o$. Multiplying by the proposal prior, which captures the most up-to-date estimate of $p(\theta | x_o)$ at each round of the algorithm, allows us to re-weight this uncertainty by the approximate likelihoods of each parameter under $x_o$ and prioritize more likely $\theta$ during acquisition. This multiplication is primarily motivated by the desire to more closely align the effective likelihoods of parameters $\theta$ with the true proposal prior, which is ultimately present and corrected for in the NDE loss (see line 178).
I understand the need to introduce the $p(\theta | x_o)$ term. My comment was more that such acquisition
$$ \alpha(\tilde{\theta}, p(\phi|D)) = \tilde{p}(\tilde{\theta}) (E_{\phi' \sim \phi | D}[...])^\lambda$$
would also be perfectly valid for any $\lambda$, in my opinion, and the choice of $\lambda = 1$ seems arbitrary. The following acquisition would also be valid:
$$ \alpha(\tilde{\theta}, p(\phi|D)) = (1-\lambda)\tilde{p}(\tilde{\theta}) + \lambda E_{\phi' \sim \phi | D}[...].$$
Therefore, I was wondering what was motivating your choice of acquisition function.
> Regarding computational cost of acquisition: Section 3.4 describes the mechanism by which Equation 4 can be optimized over a fixed set of proposal samples during each round. While we do indeed need to evaluate the NDE for several candidate parameters and many network realizations, this acquisition evaluation can be performed very efficiently as a batched inference step under the NDE model.
Further, the additional computational overhead can be compared against SNPE (i.e., no acquisition function) for our primary traffic task, as the reported wallclock times (found in subfigures (b) for each of the calibration plots) provide the raw runtimes for each method when obtaining the 128 simulation samples for each setting. Empirically, this allows us to compare the total time spent by our algorithm (including the cost of optimizing the acquisition function), and we observe a negligible difference compared to SNPE over our explored horizons.
Thanks for the clarification.
> You raise a valid point: RMSNE is indeed a non-standard metric for evaluating SBI methods in the literature, and is employed in this work almost entirely due to its prevalence in the OD calibration community. In our traffic calibration setting, there are a few reasons why it is used:
> 1. We are ultimately interested in producing good point estimates of the true OD matrices,
> 2. RMSNE is a standard in the OD calibration space and allows us to compare with other baseline methods like SPSA and PC-SPSA, and
> 3. While we may benefit downstream from having a good distributional estimate under $x_o$ (as SBI methods generally attempt to provide), for our task, we are mostly interested in how approximating the posterior can improve our method's intermediate exploration of the parameter space in service of producing good point estimates.
> Nevertheless, we agree a systematic exploration of the final posterior accuracy under metrics better suited to capture distributional differences (not just point estimates) would be warranted in general. Please see our global rebuttal reply for more details here.
> [W-4] Please see our global rebuttal reply for details on additional empirical evaluation, including discussion around new simulation benchmarks and more representative metrics.
Thanks for the addition of new benchmarks with metrics better suited for simulation-based inference.
Given the huge improvement in the evaluation of the method which was my main concern, I increased my score to 7.
---
Rebuttal 2:
Comment: Thank you for the favorable score revision and again for your valuable feedback! It is very much appreciated. A few additional comments to the points raised:
***Regarding the formulation of the acquisition function***
Thank you for clarifying. We agree, the family of functions
$$\alpha(\tilde{\theta}, p(\phi|D)) = \tilde{p}(\tilde{\theta})(\mathbb{E}_{\phi^\prime\sim\phi |D}[\dots])^\lambda$$
under parameter $\lambda$ constitutes valid choices for the acquisition function for any $\lambda$, facilitating different levels of emphasis on uncertainties at values of $\theta$ relative to their likelihoods under $\tilde{p}$. The choice to use $\lambda = 1$ here is in part due to convenience, as we did not set out to explore this parametric family explicitly, but it intuitively captures a desirable balance in the relationship between uncertainties and likelihoods of $\theta$.
In particular, under level sets $\alpha(\cdot, p(\phi|D)) = z$ (where $p(\phi|D)$ is held constant), as likelihoods $\tilde{p}(\tilde{\theta})$ decrease by a factor of $n$, the average deviation between $p(\theta | x, D)$ and $p(\theta | x, \phi)$ need only increase by a factor of $\sqrt{n}$, i.e., changes in uncertainty are sub-linear in the likelihood ratio. With the introduction of $\lambda$, this factor generalizes to $n^{1/(2\lambda)}$, and we may need to take additional measures to balance the resulting sensitivity between the terms. We find that $\lambda = 1$ is a natural choice that reasonably captures the desire to explore potentially unlikely parameters with high uncertainties without ignoring them (e.g., $\lambda \rightarrow 0$) or relying too heavily on them (e.g., $\lambda \rightarrow \infty$). We nevertheless find it intriguing to explore the impact of $\lambda$ on the effectiveness of the acquisition function in practice, and will aim to incorporate this alongside our additional ablation tests in the final manuscript.
Regarding the additive form, we find this slightly less intuitive, and potentially lacking some of the above-mentioned qualities. In particular, it only shifts the uncertainties for choices of $\theta$ rather than scaling them by their likelihood, meaning the relationship between terms is no longer dependent on a multiplicative factor. That is, under level sets of $\alpha$, absolute differences in likelihood $\tilde{p}(\tilde{\theta})$ need to be made up for by proportional absolute differences in uncertainty around $\tilde{\theta}$, which is slightly counter-intuitive (e.g., for any two $\theta_1,\theta_2$, the needed change in uncertainty is no longer determined by their likelihood ratio under $\tilde{p}$). Additionally, for $\lambda$ close to 1, $\theta$ with low likelihood may be too readily selected provided $\tilde{p}(\theta)$ is dominated by high uncertainty found outside of high-likelihood regions under $p(\theta | x_o)$.
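The contrast between the two families can be made concrete with a toy numerical example (the likelihood and uncertainty values below are invented purely for illustration and do not come from the paper):

```python
import numpy as np

# Invented toy values: proposal-prior likelihoods and Eq.(3)-style
# uncertainty estimates at three hypothetical candidate parameters.
p_tilde = np.array([0.50, 0.05, 0.005])
unc = np.array([0.10, 1.00, 10.0])

def alpha_mult(p, u, lam=1.0):
    # Multiplicative family: p_tilde(theta) * (E[...])^lambda
    return p * u ** lam

def alpha_add(p, u, lam=0.5):
    # Additive family: (1 - lambda) * p_tilde(theta) + lambda * E[...]
    return (1 - lam) * p + lam * u

# With lambda = 1, a 10x drop in likelihood is exactly offset by a
# 10x rise in uncertainty, so all three candidates tie at 0.05:
print(alpha_mult(p_tilde, unc))
# The additive rule instead lets the high-uncertainty, low-likelihood
# candidate dominate outright, regardless of the likelihood ratio:
print(alpha_add(p_tilde, unc))
```

This illustrates the point above: under the multiplicative form, trade-offs are governed by ratios of likelihood and uncertainty, whereas under the additive form absolute differences in uncertainty can swamp the likelihood term entirely.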
We welcome further discussion if there are any outstanding questions or concerns. Thank you again! | Summary: The paper considers the problem of efficient neural density estimation for simulation-based inference, in settings where we want to estimate the parameters of a model from which we can draw samples but cannot define a likelihood function. Essentially, the main contribution of the paper is to integrate an active learning approach for exploring the parameter space efficiently, for which they define a measure by which they decide which parameter values should be "explored" next, i.e. used to generate more data to train the posterior. The approach is used for the Origin-Destination Matrix estimation problem in traffic simulation.
Strengths: The main contributions of this paper are:
1) In case of simulation-based inference, an important question is how to explore the parameter space efficiently. This paper uses the concept of "active learning" for this purpose, by choosing the next few samples to "label" (by running simulations) using a measure
2) The paper brings on the concept of model uncertainty through Bayesian Neural Network, and marries it off with simulation-based inference
3) The paper considers elaborate experiments of origin-destination matrix calibration based on SUMO traffic simulations.
Weaknesses: My general comment is that the paper has several useful and interesting ideas, but they have not been sufficiently explored or developed.
1) The experiments show improved sample complexity due to the "active learning". But if the true posterior distribution p(\theta|x_0) over parameter space is multimodal, then we may risk finding a suboptimal solution, especially if the prior is not suitable.
2) A measure is proposed to choose the next few samples to be "labelled", but we don't really get to understand why that measure should be used.
3) Although the general approach is quite generic, experiments are shown for only one task (OD calibration)
Minor comments:
Algo 1: I think you should initialize D(r) with D(r-1) outside the "for" loop for variable b, and inside the loop, it should be D(r)=D(r) U (\theta_b,x_b)
Fig 1 part b: internal text is illegible, should be expanded
Eq 4: p(\theta|x_0, \phi) should be p(\theta|x_0, \phi ')
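The Algorithm 1 bookkeeping fix suggested above can be sketched as follows (the function and variable names here are illustrative, not the paper's notation):

```python
def run_round(D_prev, candidates, simulate):
    # Initialize D(r) with D(r-1) once, outside the loop over b.
    D = set(D_prev)
    for theta_b in candidates:
        x_b = simulate(theta_b)   # run the simulator at the acquired parameter
        D = D | {(theta_b, x_b)}  # D(r) = D(r) U {(theta_b, x_b)}
    return D
```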
Technical Quality: 2
Clarity: 3
Questions for Authors: 1) How to choose a good prior for the OD matrix? Can we use some insights based on additional knowledge about the locations?
2) What are the other endogenous and exogenous parameters used for this simulation?
3) The technique proposed is quite generic, but only one application is shown: OD matrix estimation for traffic simulation. Can you discuss other applications, maybe within the traffic simulation domain itself?
4) Is there any other way of dealing with the parameters \phi without defining a distribution over them? In other words, if we do not choose 'q' to be a Bayesian Neural Network, can we still use the active learning approach to choose candidate '\theta'?
5) Is Equation 3 basically the variance of \theta? Does the active learning basically choose those values at which \theta has maximum variance according its current posterior distribution?
6) When we are sampling candidate parameter values before applying the selection criteria, should we consider a distribution different from the current posterior? Choosing the current posterior essentially means we are "exploiting" the region of the parameter space where we already have observed some values - maybe we should "explore" the other parts of the parameter space using a different distribution?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: .....
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and effort providing detailed feedback on our work. Please find our responses to your questions and comments below.
Note: Where applicable, we prefix sections with `W-<x>`, `Q-<y>`, or `L-<z>` to reference itemized comments in Weaknesses, Questions, and Limitations, respectively, numbered by the order in which they were mentioned.
**[W-1]** This is true, although SBI/Bayesian methods in general are sensitive to a choice of prior, and this is not a limitation exclusive to our method. Further, the active learning mechanism does not change the limiting behavior of the method insofar as its relation to APT [1], as discussed in Section 3.4 (i.e., we still have $q_\phi(\theta |x)\rightarrow p(\theta|x)$ as $N \rightarrow \infty$).
[1]: D. S. Greenberg, M. Nonnenmacher, and J. H. Macke. Automatic posterior transformation for likelihood-free inference, 2019.
**[W-2]** The acquisition function proposed for selecting simulation samples is motivated across Sections 3.1, 3.2, and 3.3, with the principal motivation being to improve on traditional SBI methods by exploring the simulation parameter space in a more principled manner over the course of the multi-round inference procedure. In particular, we want to do so in a way that is maximally informative to our model of the posterior, which can yield compounding benefits in posterior accuracy under $x_o$ that are particularly important in settings with low simulation budgets and/or high simulation costs.
**[W-3]** Please see our global rebuttal reply for details on additional empirical evaluation, including discussion around new simulation benchmarks and more representative metrics.
**[W-4]** _Regarding syntax issues and small figure text_ Thank you for pointing these out, they will be corrected in our final manuscript.
**[Q-2]** Many of these parameters are intrinsic to the SUMO simulator or the traffic network. For example, exogenous parameters include the route choice set and link attributes (e.g., free flow speeds, number of lanes). Endogenous variables include link and path traffic statistics (travel times, flows, speeds), and departure times.
**[Q-3]** Our proposed approach falls into the class of simulation-based inference methods, and is indeed intended to be generally applicable in settings where statistical inference under an arbitrary mechanistic model is needed. There are many scientific domains that define problems of this nature (e.g., inferring parameters of biological processes, calibrating physics models to real-world observations, etc). In the transportation space, SBI methods like the one proposed can be used not only for inferring network demand (OD matrices), but also for many other dynamics in urban settings, such as human mobility and public transportation. Urban designers and city planners can reason about effects of various proposed changes using simulation models, as well as leverage inverse models learned with SBI to better characterize parameters required to achieve desired outcomes.
**[Q-4]** Evaluating the acquisition function as-is requires some means of inducing a distributional estimate over the parameters of the chosen model. BNNs and MC-dropout are flexible ways to achieve this in practice, but without this form our acquisition function cannot be used as intended. We are not otherwise aware of a clear way to sidestep this requirement without considering an entirely different acquisition function (and no such formulation exists in the SBI/BO literature, to the best of our knowledge).
**[Q-5]** Equation 3 captures average differences in the assigned likelihoods between the posterior model ensemble ($p(\theta | x_o, D)$, see also the marginalization above line 157) and any particular model instantiation under the weight posterior. It could perhaps be likened to a notion of "distributional variance" around $\theta$ under the model weight posterior $p(\phi |D)$, but it is distinctly different from the variance of $\theta$ under the posterior $p(\theta | x_o)$.
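The "distributional variance" likening admits a tiny numerical sanity check (toy values, not from the paper): the average squared deviation of individual-realization densities from their ensemble mean is exactly the variance of those densities across realizations of $\phi'$, which is a different object from the variance of $\theta$ under $p(\theta | x_o)$.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-ins for q(theta | x_o, phi') across 100 sampled realizations
# phi' ~ p(phi | D), all evaluated at one fixed candidate theta.
densities = rng.uniform(0.1, 2.0, size=100)
ensemble = densities.mean()                       # approximates p(theta | x_o, D)
avg_sq_dev = np.mean((densities - ensemble) ** 2)
assert np.isclose(avg_sq_dev, densities.var())    # variance across realizations
```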
**[Q-6]** This is a great question, and very similar to the one that motivated this work. It is worth first noting that sequential SBI methods in general are focused on learning a particularly accurate picture of $p(\theta | x_o)$, i.e., the posterior under known observational data of interest. These methods explicitly condition on $x_o$ to produce each round’s proposal distribution, effectively “refocusing” the next round’s samples to reflect the model’s current understanding of $\theta$ that explain $x_o$. As a result, the posterior at a given round does not strictly correspond to regions previously explored in the parameter space (and so “exploitation” may be a bit misleading); rather, it captures the parameters expected to explain $x_o$ under the simulation model, given the simulation samples that have since been observed.
In any case, there are a few challenges when thinking about using the alternative distribution you mention in this case:
1. How can we ensure convergence to the true posterior if sampling from a modified distribution?
2. (similarly) How can we ensure the resulting exploration is consistent with the prior?
Our approach effectively implements your proposition by allowing the underlying model to "explore" regions of the parameter space it doesn't currently "understand" well, while remaining consistent with prior specifications and converging to the true posterior in the limit. Put another way, our method effectively produces the mentioned alternative "exploration distribution," but through a combination of components (the posterior model and a selection mechanism) that can be tractably mapped onto properties required for consistent Bayesian inference.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their meticulous responses to all the questions. My main concern about the work was that it is focussing only on one very specific task, though the approach was quite generic (I see that other reviewers too had the same concern). I am satisfied by the author's decision to add a few benchmark tasks, and now I am leaning towards recommending acceptance.
---
Reply to Comment 1.1.1:
Comment: Thank you for the favorable score revision and again for your valuable feedback! It is very much appreciated, and we welcome any further questions or discussion. | Summary: The paper addresses the problem of computational cost, or alternatively, of sample efficiency in simulation-based inference (SBI). By employing an active learning scheme in sequential neural posterior estimators (SNPE), the proposed method achieves improved sample efficiency, which is paramount when dealing with expensive to evaluate scientific simulator-based models.
Strengths: The paper has the following strengths:
* It addresses a relevant problem in SBI.
* The idea of using active learning to guide the sequential steps is interesting.
* The proposed method is applied to real-world traffic simulation problems.
Weaknesses: * The methodology is not motivated properly, and at times seems quite ad hoc. The authors first mention the expected information gain (EIG), which is a principled criteria for active learning, but then they propose equation 2 as the criteria they use. It was not clear to me why this is a good choice, and how it related to EIG. The distance $\mathcal{D}$ in equation 2 became the quadratic loss in equation 3 without any justification/reasoning. I did not understand how the equation above line 157 was estimated. This seems like a critical quantity in the proposed method, and it would be nice if there is a discussion about it.
* Existing methods for SBI using neural density estimators do not typically provide uncertainty quantification for the weights and biases ($\phi$) of the neural network. There is very little emphasis placed on discussing this aspect. The authors mention Bayesian neural networks once in Section 2.2, and talk about MC-dropout for approximating the posterior of $\phi$. The paper lacks an in-depth discussion and experiments about using them in the context of SBI (which has so far not been done, to the best of my knowledge), and how they affect the proposed method (for instance, how to set the prior for the Bayesian neural network, sensitivity to the dropout rate, etc.).
* The paper includes experiments only on the traffic simulators. This wouldn't be an issue if this was an applied paper, or if the method was motivated by certain aspects of this problem which would generalise to other simulators as well. However, the method is presented in general, which is why I would expect a varied set of benchmark experiments. Given that a major portion of the references are from transportation research literature, perhaps this work, in its current state, would be more relevant for audience of that community.
* The clarity of writing can certainly be improved.
* The paper lacks discussion of the hyperparameters of the proposed method, its sensitivity to the choice of hyperparameters, and how to set them in practice.
* The literature on active learning for simulator-based models is not cited and discussed, see for instance the following papers. This makes it difficult to judge the technical novelty of the proposed method.
* Gutmann and Corander (2016): https://arxiv.org/abs/1501.03291
* Kleinegesse et al (2019): https://proceedings.mlr.press/v89/kleinegesse19a.html
* Kleinegesse et al (2020): https://arxiv.org/abs/2003.09379
Technical Quality: 2
Clarity: 1
Questions for Authors: * I did not understand the reasoning behind adjusting equation 3. Why multiplying with the prior makes sense?
* Line 140-141: not clear what 'cost' is referring to.
* Line 172-173: I did not understand what is being conveyed.
* Lines 191-192 talk about recovering the true optimum of $\alpha$ as $N$ tends to infinity. But in practice $N$ is much smaller. Does this statement hold even in the finite case?
* What is the computational overhead of the proposed method compared to SNPE?
Typos/grammatical errors:
* Line 96
* Line 127 (genetic instead of generic)
* Line 121-122 doesn't parse well
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Please include a paragraph on the limitations of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and effort providing detailed feedback on our work.
Note: We prefix sections with `W-<x>`, `Q-<y>`, or `L-<z>` to reference itemized comments in Weaknesses, Questions, and Limitations, respectively.
**[W-1]** The discussion around EIG was principally intended to help set the stage for desirable qualities of an acquisition function, despite the fact that it cannot itself be directly optimized in LFI settings due to its reliance on the likelihood $p(x|\theta,D)$. This drawback motivates an alternative formulation that captures these qualities, which begins with our introduction of a notion of distributional uncertainty (up to a divergence measure $\mathcal{D}$) in Equation 2. This is further connected to Equation 3 via the discussion in Section 3.3, which makes concrete an approach inspired by the general form in Eq. 2 and that seen in [1].
Line 157 is an expansion of the marginalization involved over NDE model parameters $\phi$ in defining $p(\theta|x,D)$, and is included primarily to highlight the relationship between $p(\theta|x,D)$ and $p(\theta|x, \phi)$. More exposition will be added to our final manuscript to better contextualize this form.
[1]: K. Kandasamy, J. Schneider, and B. Poczos. Bayesian active learning for posterior estimation. (IJCAI ’15)
**[W-3]** Please see our global rebuttal reply for details on additional empirical evaluation, including discussion around new simulation benchmarks and more representative metrics.
**[W-4]** Appendix C.1.1 provides the hyperparameters used by our algorithm across explored experimental settings, as well as our NDE model’s structure, dropout rate, etc. We agree that a more principled analysis of hyperparameter sensitivity, and of heuristics for setting hyperparameters in practice, would be valuable. While performing such a study is prohibitive on our primary task due to computational constraints, the effects of the algorithm hyperparameters (e.g., $R$, $N$, $B$) and NDE hyperparameters (e.g., network structure, dropout rate, etc.) will be explored on the smaller-scale SBI benchmarks discussed above and reported in our final manuscript.
**[W-5]** These are valuable resources and indeed provide useful context for leveraging implicit models in Bayesian optimization contexts. While we believe mentioning these works in our literature review is warranted, there are a few reasons why they are only indirectly relevant to our work:
- They primarily leverage GP surrogate models rather than models seen in more recent SBI literature (e.g., flow-based generative models) and those used in our work.
- Our efforts are focused on formulations that only require a direct posterior approximation, rather than assuming surrogate likelihoods or likelihood ratios are also available.
- They primarily focus on experimental design contexts, which embrace a slightly different set of assumptions, formulations, and intended applications.
**[Q-1]** The adjustment to Eq. 3 is motivated in Section 3.3 (lines 172-175), which describes how the $\theta^*$ that maximizes this notion of distributional uncertainty is not necessarily a likely parameter under the observational data $x_o$. Multiplying by the proposal prior, which captures the most up-to-date estimate of $p(\theta | x_o)$ at each round of the algorithm, allows us to re-weight this uncertainty by the approximate likelihoods of each parameter under $x_o$ and prioritize more likely $\theta$ during acquisition. See also [Q-3] below.
**[Q-2]** “Cost” here is referring to the time or compute resources spent evaluating the simulator under the mentioned parameter. We agree this should be more clearly stated.
**[Q-3]** Eq. 3 captures a means of quantifying uncertainty over the likelihood values assigned to $\theta$ under the NDE and its various realizations under $\phi\sim p(\phi|D)$. This does not capture the desire for $\theta$ to also be likely under the posterior $p(\theta|x_o)$, however. Our approach, along with many sequential SBI methods, is primarily concerned with learning an accurate view of the posterior $p(\theta|x_o)$ under observational data $x_o$. Lines 172-173 simply connect this larger goal to the implications of optimizing Eq. 3, suggesting $\theta^*$ may be of low value if it’s unlikely under the observational data $x_o$. This motivates the adjustment to Eq. 3 that appears in Eq. 4 (addressed in [Q-1]).
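As a loose numerical illustration of this re-weighting (not the paper's actual acquisition function: the toy ensemble of NDE realizations, the Gaussian stand-in for the proposal prior, and the variance-based uncertainty score below are all simplified assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Candidate parameters over a bounded 1-D domain (illustrative only)
thetas = np.linspace(-3.0, 3.0, 201)

# Stand-in for log-likelihoods under NDE realizations phi ~ p(phi | D):
# each "realization" is a quadratic with a slightly shifted center.
centers = rng.normal(0.0, 0.5, size=20)
logliks = np.stack([-(thetas - c) ** 2 for c in centers])

uncertainty = logliks.var(axis=0)            # disagreement across realizations (Eq. 3-style)
proposal_prior = np.exp(-0.5 * thetas ** 2)  # stand-in for the current estimate of p(theta | x_o)
acquisition = uncertainty * proposal_prior   # re-weighted score (Eq. 4-style)

# Unweighted disagreement grows toward the domain boundary, where theta is
# implausible under x_o; the prior pulls the selected theta back inward.
theta_unweighted = thetas[np.argmax(uncertainty)]
theta_star = thetas[np.argmax(acquisition)]
```

Here the un-reweighted maximizer sits at the edge of the domain, while the prior-weighted maximizer lands at a parameter that is both uncertain and plausible, which is the behavior the adjustment in [Q-1] is after.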
**[Q-4]** Lines 191-192 convey that the optimum of $\alpha$ (call it $\alpha^*$) is captured in a sample $X_n$ of size $n$ (drawn over $\alpha$'s domain) in the limit $n \rightarrow \infty$. This could be expanded to include the relevant implications, namely,
$$p(\alpha^* \in X_n) \rightarrow 1 \text{ as } n \rightarrow \infty$$
or
$$\text{argmax}_{\theta\in X_n}\, \alpha(\theta, p(\phi | D)) \rightarrow \alpha^* \text{ as } n \rightarrow \infty$$
This does not suggest that $\alpha^*$ is present in any finite sample of size $n$; it only captures the limiting behavior for increasingly large sample sizes. In practice, we leverage this consistency with $\alpha^*$, justified by the limiting behavior.
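The limiting statement can be checked numerically with a toy example (purely hypothetical: the quadratic $\alpha$, its maximizer at 0.3, and the unit-interval domain are assumptions for illustration, not the paper's acquisition function):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical acquisition function with a known maximizer at theta = 0.3
def alpha(theta):
    return -(theta - 0.3) ** 2

gaps = {}
for n in [10, 100, 10_000]:
    X_n = rng.uniform(0.0, 1.0, size=n)   # finite sample over theta's domain
    best = X_n[np.argmax(alpha(X_n))]     # maximizer within the sample
    gaps[n] = abs(best - 0.3)             # distance to the true optimizer
# gaps[n] shrinks toward 0 as n grows, matching the limiting statement above
```

No finite sample is guaranteed to contain the exact optimizer, but the sample maximizer converges to it, which is the consistency being leveraged.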
**[Q-5]** The primary difference between ASNPE and SNPE in terms of computational overhead is the optimization of the acquisition function in Eq. 4. As is mentioned in Section 3.4, however, the acquisition function is computed over a fixed size parameter batch and NDE realizations $\phi\sim p(\phi|D)$ (with specific hyperparameters reported in Appendix C.1.1). This can be performed efficiently as a batched inference step under the NDE model, and typically yields a negligible difference in terms of raw runtimes between the two methods. For the traffic case study we explore in the paper, the raw runtimes of both methods (found in subfigures (b) for each of the calibration plots) allow us to compare the total time spent by our algorithm (including acquisition evaluation), and we observe a negligible difference compared to SNPE over our explored horizons.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed responses. I am happy to see the inclusion of some benchmark examples that definitely improve the paper.
* **W-1:** I am still unclear about the role of $\mathcal{D}$ here, as it is taken to be the squared loss (if I understand correctly), and not any divergence/distance metric on the space of distributions.
* **W-2:** I do not see any response to this comment.
* **W-4:** This is exactly why it makes sense to have some simpler benchmarking experiments when proposing new methods, so that we can test the performance by varying different settings and hyperparameters in a "sandbox" environment, and develop intuition about the limits and workings of the method. I hope such detailed analyses are included in the future versions of the manuscript.
* **W-5:** Thank you. The points you mentioned are exactly what needs to be in the paper to put the proposed method in context with relevant literature.
* **Q-1:** I feel like the problem is with the sentence in lines 172-173. It is perhaps too dense for readers to understand (seems like another reviewer had similar confusion like me). A bit of explanation might help the readers.
Given that the paper has improved quite a bit, I am happy to raise my score. All the best!
---
Reply to Comment 1.1.1:
Comment: Thank you for the favorable score revision and again for your valuable feedback! It is very much appreciated. A few additional comments to the points you raised:
- **[W-1]** Equation 2 is mostly used as a general form that highlights the central idea of computing the expected difference between the marginal posterior and individual realizations of the NDE, for $\phi\sim p(\phi | D)$. This gives us a functional form dependent on specific choices of $x$, but not specific $\theta$, as it leverages the entire distributional form of the posterior (i.e., $\mathcal{H}_{\mathcal{D}}(x)$ is a scalar). Equation 3, however, uses the exact likelihood values at particular choices of $\theta$, and while loosely borrowing the approach from Eq. 2, it is not a downstream result of selecting any particular divergence measure. That is, it provides a notion of uncertainty induced by $p(\phi | D)$ that can be measured for specific parameter candidates $\theta$, and thus a divergence measure isn’t directly applicable here.
- **[W-2]** _(Apologies, we ran out of space in our initial rebuttal)_ While quantifying uncertainty over NDE parameters is indeed uncommon in SBI settings, we viewed this mostly as a stepping stone to enable the use of our proposed acquisition function. While we provide some context in Appendix C.1.1, a more principled analysis of model hyperparameters is planned for the final manuscript as detailed in [W-4]. Additionally, while there is some exposition in Section 3.4 that highlights possible choices of NDE that both align with typical options in SBI and provide parameter uncertainties, we agree this should be further expanded upon, and will incorporate this in our final version.
- **[W-4]** We agree, and will be sure to include these results, analyses, and ablations as laid out in our global rebuttal.
- **[W-5]** We also think a discussion relating these works to our paper would help better position our method, and will include this in our updated manuscript.
- **[Q-1]** Understood, thank you for making note of this; we’ll revise the wording here.
We welcome further discussion if there are any outstanding questions or concerns. Thank you again! | Summary: This paper proposes an approach to performing neural simulation-based inference – specifically, sequential neural posterior estimation – in a simulation-efficient manner for complex and expensive simulation models. The idea is to use active sampling to sequentially generate datapoints from the simulator and to train the neural density estimator using batches of datapoints generated this way. The authors test their approach experimentally on an origin-destination estimation problem in an urban mobility simulator, comparing against other common simulation-based Bayesian inference methods ("vanilla" SNPE and ABC) and against two alternatives that are commonly used in the urban mobility modelling literature ((PC-)SPSA).
Strengths: _Originality_
As far as I am aware, active sampling via an explicit acquisition function as described in this paper has not been explored yet in the neural SBI literature, although some less formal active sampling already occurs in sequential/round-based training of neural SBI methods.
_Quality_
The method looks overall reasonably sound and some comparison against multiple alternatives is presented for the case of a complex urban mobility model.
_Clarity_
The paper was mostly clear in my reading, although there were one or two points that were a little unclear to me that I will detail below.
_Significance_
Overall I think this paper has (and approaches to making simulation-based inference procedures as simulation-efficient as possible, more generally, have) the potential to be significant and will assist practitioners in real-world settings to make use of simulation models effectively.
Weaknesses: The two main weaknesses in my reading are:
- Sorry if I missed something, but clarity was lacking in one key respect for me, namely what exactly the relationship was between $q_{\phi}(\theta \mid x)$, $p(\theta \mid x, \phi)$, $\tilde{q}_{x,\phi}(\theta)$ on Line 178, and `q_{\phi,x}(\theta)` (I had to do this last one `in this format` rather than in $math\ mode$, sorry, because it wasn't rendering properly in $math\ mode$ and I couldn't figure out why) on Line 178. Is `q_{\phi}(\theta \mid x)` (now $math\ mode$ isn't working here either...) the same as `q_{\phi,x}(\theta)`? And `\tilde{q}_{x,\phi}(\theta)` the same as `p(\theta \mid x, \phi)`? If so can you use consistent notation throughout for clarity? And if not could you please clarify what the difference between all of these guys are?
- The empirical evaluation was quite weak in my opinion. The authors present a new approach to training SNPE models and it's great that they've applied it to a complex simulator to get some idea of whether it works on real practical simulators of interest, but I think it's also important to test a new method on some simpler benchmark models for which a good (even if approximate) ground-truth posterior can be obtained. This will allow the authors to properly test whether their method produces good posteriors, which isn't really tested at the moment by looking purely at RMSNE values. In general I would like to see something that demonstrates how well the proposed Bayesian pipeline can actually estimate the full posterior distribution, since this is ultimately what it's trying to do. I think the paper needs revision to include such experiments, even if the detail on this is mostly relegated to the appendix.
A couple of less substantial weaknesses are:
- Perhaps the literature review could be a bit more extensive/comprehensive. For example, reference [1] below is a(n admittedly very) recent paper on origin-destination matrix estimation in mobility models, and while it might not be necessary or possible to compare against, it would perhaps be worthwhile including in the literature review as a SOTA method for estimating OD matrices when evaluating the likelihood is extremely expensive. Further, reference [2] below also considers the problem of efficiently performing SBI for complex and expensive simulators, albeit with a focus on obviating the task of learning summary statistics so a slightly different focus.
- Some proof-reading and fixing of formatting issues is needed. For example: Line 268 is missing a word ("...PC-SPSA is an effective extension _that over_ parameters in a lower-dimensional subspace...") as is Line 294 ("...ASNPE is outperformed by _PC-SPSA two_ of our explored settings..."). Also the lines following Lines 141 and 156 have no numbers, not sure what's happened there (not a big deal currently but might cause problems down the line).
[1] _Zachos, I., Damoulas, T., & Girolami, M. (2024). Table inference for combinatorial origin‐destination choices in agent‐based population synthesis. Stat, 13(1), e656._
[2] _Dyer, J., Cannon, P. W., & Schmon, S. M. (2022, May). Amortised likelihood-free inference for expensive time-series simulators with signatured ratio estimation. In International Conference on Artificial Intelligence and Statistics (pp. 11131-11144). PMLR._
Technical Quality: 2
Clarity: 2
Questions for Authors: See above ^ and thanks in advance for your responses!
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: I do not see that this work especially threatens any negative societal impact. The main limitations of the work that I see are already addressed in the Weaknesses section above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and effort providing detailed feedback on our work. Please find our responses to your questions and comments below.
Note: Where applicable, we prefix sections with `W-<x>`, `Q-<y>`, or `L-<z>` to reference itemized comments in Weaknesses, Questions, and Limitations, respectively, numbered by the order in which they were mentioned.
**[W-1]** There is indeed some overlap in notation, and the differences between these forms are mostly context-dependent:
- $q_{\phi}(\theta | x)$: the approximate posterior NDE model, with parameters $\phi$
- $p(\theta | x, \phi)$: a concrete posterior density produced by a particular setting of NDE weights $\phi$
- $\tilde{q}_{x, \phi}(\theta)$: the approximate proposal prior
- $q_{\phi, x}(\theta)$: same as $q_{\phi}(\theta | x)$, but with syntax adjusted to mirror the key result being used from [1] at line 178.
In particular, the form used on line 178 was constructed to resemble the setup of a key result we leverage from [1]. However, we agree that this may ultimately have made the connection to the notation elsewhere in our methodology unclear, and will revise this in our final manuscript (by both introducing each term if using a different form, and simplifying where applicable).
[1] D. S. Greenberg, M. Nonnenmacher, and J. H. Macke. Automatic posterior transformation for likelihood-free inference, 2019.
**[W-2]** Please see our global rebuttal reply for details on additional empirical evaluation, including discussion around new simulation benchmarks and more representative metrics.
**[W-3]** _Regarding paper [1]_: We distinguish between the following two problems in OD estimation:
1. General OD estimation problems (also known as travel demand estimation problems), where the outputs are stand-alone OD matrices that can be used for a variety of planning and operational network analysis, and
2. Model calibration problems, where the goal is to calibrate (or estimate) the inputs (such as the demand inputs specified as OD matrices) of a specific traffic simulation model and the output is a calibrated traffic simulator that can itself be used for analysis.
Paper [1] addresses Problem (1), while our work addresses Problem (2). We would like to stress that this distinction is an extremely important one. In particular, the main challenges of Problem (2) are due to using an intricate (e.g., stochastic, non-differentiable, high compute cost) traffic simulator. This calls for likelihood-free methods, sample-efficient methods, and methods robust to simulator stochasticity. In contrast, Problem (1) can be formulated as a differentiable problem, which can be tackled with a likelihood-based approach, has little-to-no compute cost challenges, and no need for sample efficiency. This distinction is also discussed in [A].
[A]: C. Osorio (2019) High-dimensional offline origin-destination (OD) demand calibration for stochastic traffic simulators of large-scale road networks. Transportation Research Part B: Methodological, Volume 124, Pages 18-43
_Regarding paper [2]_: This work addresses the similar problem of performing likelihood-free inference under expensive simulators/low budgets. However, it focuses almost entirely on formulations suited for time-series data, and aims principally to jointly learn both summary statistics and a classifier for performing density ratio estimation. These are no doubt important settings in this space, but this particular work ultimately differs in several significant ways from our paper (e.g., we use a single observational data point, work exclusively with direct posterior approximations, require a means of inducing uncertainty over NDE parameters, etc.).
**[W-4]** Thank you for pointing these out, lines 268 and 294 are indeed missing words and will be corrected in our final manuscript. We also noticed the missing line numbers in several places (appears to be an odd formatting bug) and will ensure these are fixed.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal.
**[W-1]**: Thanks for explaining. I do think consistency in notation (e.g., picking either bullet point 1 or bullet point 4 to use throughout) will be important for the paper's clarity. I also do not personally see why it's important to distinguish between bullet point 1 and bullet point 2. Would it not perhaps be clearer to use $\lbrace{q_{\phi} \mid \phi \in \Phi\rbrace}$ or something to refer to the NDE model and $q_{\phi}$ for a particular instantiation (i.e. the NDE at parameter values $\phi$)?
**[W-2]**: Thanks for taking the time to prepare these additional experiments, I think this is valuable to include in the updated paper.
**[W-3]**: Thanks for considering how your contribution relates to these papers. I think the differences you raise make sense, and would be important to discuss in the revision to more thoroughly contextualise your work.
With these improvements in place I would be happy to raise my score.
---
Reply to Comment 1.1.1:
Title: Follow-up on Planned Revisions
Comment: Thank you again for your feedback, we wanted to briefly follow up on our planned revisions mentioned above. Please let us know if there are any further questions we can address; we’d be happy to continue the discussion if there are outstanding concerns or revision suggestions. Thank you!
---
Rebuttal 2:
Comment: Thank you for your consideration regarding the score, as well as the additional feedback. It is very much appreciated!
**[W-1]** We agree: the use of $q_{\phi, x}(\theta)$ (bullet #4) is inconsistent and should be replaced with $q_{\phi}(\theta | x)$ (bullet #1). We will make this change in our final version.
Regarding $q_{\phi}(\theta | x)$, or $q_\phi$, we generally use this to indicate that $q$ is a family of models parameterized by $\phi$. $p(\theta|x, \phi)$ is then used in contexts when $\phi$ is concrete (e.g., in an expectation like that of Eq. 2, or in the marginalization above line 157), and parallels the generic form we see with the marginal posterior $p(\theta | x, D)$. However, as you point out, this distinction isn’t particularly important (or at least the use of two separate forms isn't necessary), and we agree simply using $q_\phi(\theta |x)$ directly is better in these contexts given it's clear $\phi$ is concrete (i.e., it doesn't clash with the broader notion that $q_\phi$ can refer holistically to the model family). We will make these substitutions in our final version.
**[W-3]** We also think this discussion is valuable, and will be sure to include it in the literature review of our final manuscript.
---
In summary, **W-1**, **W-2** (addressed in global rebuttal), and **W-3** will each be incorporated in our final version. Thank you again for your consideration, and we welcome any further questions or discussion.
---
Rebuttal 3:
Title: Request for Final Score Revision
Comment: Given you indicated a willingness to revise your score, we wanted to kindly ask if you would look over our reply, which includes our plan to incorporate your suggestions (_W-1_, _W-2_, and _W-3_). If these changes meet your expectations and warrant the aforementioned score revision, doing so prior to today’s deadline would be greatly appreciated. Thank you again for your helpful feedback during this process! | Rebuttal 1:
Rebuttal: We’d like to thank all reviewers for their time and effort in providing insightful feedback on our work. Please find our responses to your individual questions and comments in the rebuttal replies to each review.
Due to space constraints, we would like to address a common critique raised by many reviewers here in our global rebuttal regarding limited empirical evaluation. In summary, we agree that our work would benefit greatly from additional benchmark results, particularly for common simulation models in simulation-based inference (SBI) literature. To this end, we’ve evaluated our method on three tasks under three new metrics. Our hope is that these initial results provide additional depth when it comes to judging the empirical value of our contributions. These settings will be scaled up and explored in a more systematic fashion for presentation in our final manuscript.
### New benchmark tasks
We evaluated our method (ASNPE) and SNPE-C [1] on three common SBI tasks found in the literature (namely [2], which provides a suite of benchmarks across several SBI methods): **SLCP distractors**, **Bernoulli GLM**, and **Gaussian Mixture** (note that each corresponds to a reproducible task environment from [2]).
We further evaluated our method on these tasks using metrics beyond RMSNE that are better suited for capturing the accuracy of the approximate posterior, including _classifier 2-sample tests (C2ST)_, _maximum mean discrepancy (MMD)_, and _Kernelized Stein Discrepancy (KSD)_.
**Note**: Full details for each of these tasks and metrics can be found in [2].
We evaluated both methods over small sample horizons: 4 rounds at 125 samples per round, for a total of 500 simulation samples. Note that this is nearly four times larger than the sample sizes collected for the primary traffic setting explored in the paper. For reference, 125 samples in our (non-parallelized) SUMO environment take ~3 hours, whereas 125 samples from SLCP take ~5 seconds on our hardware (see also Appendix C.1.1).
Note that these trials were repeated five times for each method, and the average score and standard deviation for each metric over these trials is reported in the tables below.
### Results
| | MMD | C2ST | KSD | L2 |
| :--- | :--- | :--- | :--- | :--- |
| SNPE | 11.3820 ± 1.1043 | 0.9938 ± 0.0003 | 0.2901 ± 0.1217 | 5.9121 ± 0.3323 |
| ASNPE | **11.2673** ± 0.5796 | 0.9938 ± 0.0002 | **0.2558** ± 0.1331 | **5.6731** ± 0.1988 |
**Table 1: SLCP Distractors**
| | MMD | C2ST | KSD | L2 |
| :--- | :--- | :--- | :--- | :--- |
| SNPE | 18.7130 ± 1.8315 | **0.9952** ± 0.0028 | 0.2496 ± 0.0579 | **37.2472** ± 3.4843 |
| ASNPE | **16.1893** ± 0.5625 | 0.9980 ± 0.0007 | **0.2340** ± 0.0467 | 39.4204 ± 0.2686 |
**Table 2: Bernoulli GLM**
| | MMD | C2ST | KSD | L2 |
| :--- | :--- | :--- | :--- | :--- |
| SNPE | 1.2655 ± 0.0412 | 0.9950 ± 0.0000 | 0.0886 ± 0.0292 | 6.6242 ± 0.9629 |
| ASNPE | **1.1216** ± 0.0780 | 0.9950 ± 0.0000 | **0.0633** ± 0.0256 | **3.7100** ± 0.3214 |
**Table 3: Gaussian Mixture**
Note that _L2_ refers to the average $L_2$ distance between $x_o$ (observational data point) and samples $x\sim p(x|\theta), \theta\sim p(\theta|x_o)$. This provides some insight into how well the posterior is calibrated around our data point $x_o$, and whether samples drawn from the posterior $\theta\sim p(\theta|x_o)$ ultimately yield synthetic data close to $x_o$ under the mechanistic model.
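A minimal sketch of how such an $L_2$ check can be computed, with a toy simulator and synthetic posterior draws standing in for the real traffic model and NDE posterior (all names and distributions below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the mechanistic model (the real simulator is far more complex)
def simulator(theta):
    return theta + rng.normal(0.0, 0.1, size=theta.shape)

x_o = np.array([1.0, -0.5])  # observational data point

# Stand-in for draws theta ~ p(theta | x_o) from the approximate posterior
posterior_draws = x_o + rng.normal(0.0, 0.05, size=(500, 2))

# Average L2 distance between x_o and x ~ p(x | theta), theta ~ p(theta | x_o)
xs = np.stack([simulator(t) for t in posterior_draws])
l2 = np.linalg.norm(xs - x_o, axis=1).mean()
```

A well-calibrated posterior concentrated around parameters consistent with $x_o$ yields a small average distance; a poorly calibrated one inflates it.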
While these simulation horizons are relatively small, we find that ASNPE tends to outperform SNPE across most settings and on most metrics. In particular, ASNPE outperforms SNPE on MMD and KSD metrics across all settings. Outside of _Bernoulli GLM_, ASNPE either matches or exceeds SNPE in terms of C2ST and reported $L_2$, whereas on _Bernoulli GLM_ SNPE is better on these metrics.
### Additional studies for our final manuscript
Due to the difficulty of evaluating our transportation simulator for large simulation samples, in-depth ablation studies and hyperparameter sensitivity analyses have been difficult to carry out at scale. Given the efficiency of running our method on the much faster, lower-dimensional simulations explored above, we will use these scenarios to provide 1) hyperparameter ablations, 2) comparisons between ASNPE and SNPE over long simulation horizons, and 3) diminishing effects/failure modes of the acquisition function in the SBI loop.
### In defense of our single traffic task
We’d like to note that we intentionally focused our efforts on a single problem that captures many facets of challenging real-world settings, including high-dimensionality, realistic (traffic) dynamics, and slow simulation times. Further, the availability of real-world, metropolitan-scale traffic simulations is very limited, and most high-impact works in the urban mobility space limit their empirical evaluation to synthetic networks or small-scale road networks. This is due to the difficulty of accessing and/or developing realistic road network models for major metropolitan areas. While investigating additional simulation models is undoubtedly valuable and can improve the general appeal of our contribution, we ultimately believe the transportation problem focused on in this paper best captures the realistic qualities of slow, real-world, high-dimensional simulation models.
[1] D. S. Greenberg, M. Nonnenmacher, and J. H. Macke. Automatic posterior transformation for likelihood-free inference, 2019.
[2] Lueckmann, J. M., Boelts, J., Greenberg, D., Goncalves, P., & Macke, J. (2021). Benchmarking simulation-based inference. AISTATS. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: A recent class of methods that have shown to perform well at simulation-based inference are based on modeling the posterior density as a neural network, using mixture density networks, normalizing flows, or other popular architectures. With enough data, these methods tend to produce salient estimates of the posterior density on parameters conditional on data (or summary statistics). With any simulation-based inference method, proposing parameters efficiently is a relevant concern because the posterior can have most of its mass in a tiny region of the support of the prior. Standard SNPE constructs sequential proposal distributions from the current posterior approximation and uses importance reweighting, which is a simple heuristic but not optimal. The main contribution in this paper is an active learning method which filters potential parameter values by ones that are sufficiently large in terms of an acquisition function, such as expected information gain. The authors come up with a reasonable acquisition function that can be approximated via Monte Carlo. The ASNPE algorithm is then applied to an origin-destination calibration problem, and performs well compared to competing approaches under a budget of simulator evaluations.
Strengths: The paper is clear about the contributions, and contextualizes them well relative to the previous work. Active learning is a desired trait in simulation-based inference methods, where simulators can be costly. The active learning component that is introduced in this work seems like a clear improvement over the standard heuristic employed by SNPE.
Weaknesses: I would be interested to see more experiments/applications than just the Bayesian O-D calibration example. Particularly, I'd like to see how this method compares to competing approaches on a standard "benchmark" in likelihood-free/simulation-based inference, i.e. Lotka-Volterra, a queue model, Heston model, a Gaussian toy model as in SNL, etc. This particular simulator model seems to have an extremely high-dimensional parameter space relative to the budget of simulations used for the experiments, and I am curious if it is this particular setting in which this method shines compared to others. I.e. the benefit is clear when the simulation budget is 128, but if it is on the order of 10^4 or 10^5, does it vanish?
Technical Quality: 3
Clarity: 4
Questions for Authors: 1.) I am curious how much the computational cost increases when this methodology is used compared to the simple sequential updating in SNPE. Of course, when the simulator is extremely expensive, the cost of evaluating the acquisition function will be small in comparison, but would it still be recommended if the simulator is not too onerous?
2.) On a similar note, I would be interested for the authors to carve out and add a little exposition describing exactly the class of SBI problems that they posit would have the largest boon from this active learning approach when compared to the standard sequential updating in SNPE. What types of problems are most rewarded by the acquisition function filtering step, and on which class of problems is it less rewarding?
3.) I may have missed it, but is it obvious that q_\phi still converges to the true posterior in the presence of the filtering step? In the presence of the filtering step, the proposal is no longer just \tilde{p}(\theta), but instead it is a transformed distribution essentially weighting \tilde{p}(\theta) by the acquisition function. Shouldn't this need to be considered when updating the NDE in order to ensure q_\phi really converges to the posterior?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors provided an honest assessment of the performance of their method in comparison with other methods in the experiments section. I believe that potential negative societal impact is not a concern here. As I mentioned earlier, I would be interested to know if the authors believe there are limitations in the context of which SBI problems their method is not as well-suited for.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and effort providing detailed feedback on our work. Please find our responses to your questions and comments below.
Note: Where applicable, we prefix sections with `W-<x>`, `Q-<y>`, or `L-<z>` to reference itemized comments in Weaknesses, Questions, and Limitations, respectively, numbered by the order in which they were mentioned.
**[W-1]** Please see our global rebuttal reply for details on additional empirical evaluation.
***Regarding vanishing benefit***: While we haven’t been able to experiment with large sample sizes (e.g., of the order 10^4-10^5) in our high-dimensional traffic setting due to computational constraints in our academic environment, we expect ASNPE’s advantage over SNPE to diminish as we collect increasingly large amounts of simulation samples. This is primarily due to the fact that both ASNPE and SNPE converge to the true posterior in the limit under samples from the simulation model, and both take similar steps to improve the accuracy of the posterior approximation under the observational data of interest. Precisely at what simulation budget we might expect the difference between methods to become negligible, however, is difficult to anticipate, and would require a more principled empirical study.
To this end, we will explore this question alongside the experimental results on the common SBI benchmarks mentioned above in our final manuscript. Given the smaller scale simulation models, we can better characterize the relationship between the dimensionality of the parameter space and the behavior of ASNPE and SNPE over longer simulation horizons.
**[Q-1, Q-2, L-1]** You raise a valid concern: the cost required for evaluating the acquisition function may not always be worthwhile when the simulation model is very cheap to evaluate. In these cases, more data may be more valuable than a principled exploration of the parameter space, and the time ASNPE spends on active learning may be better spent collecting additional simulation samples.
Precisely characterizing the kinds of problems where we expect the cost for acquisition to be worthwhile is difficult, but in general we posit our method will provide the most benefit in complex settings (e.g., stochastic, non-trivial, large parameter/observation spaces) where simulation samples are limited (due either to slow simulation models, limited computational resources, or both). Our primary traffic case study is a problem very much characterized by these qualities, which is at least in part why we elected to focus most of our efforts on this single problem. In these cases, ASNPE may exhibit large benefits over traditional methods given its more principled exploration of the parameter space, and with sequential inference, early improvements can compound heavily over time (i.e., informative samples produce better proposal distributions, from which the next round’s samples are drawn, and so on).
Regarding ASNPE’s relative computational overhead, note that acquisition evaluation can be performed very efficiently as a batched inference step under the NDE model, and typically yields a negligible difference in terms of raw runtimes between the two methods. For our primary traffic task, the reported wallclock times (found in subfigures (b) for each of the calibration plots) provide the raw runtimes for each method when obtaining the 128 simulation samples for each setting. Empirically, this allows us to compare the total time spent by our algorithm (including acquisition evaluation), and we observe a negligible difference compared to SNPE over our explored horizons.
**[Q-3]** The convergence of $q_\phi$ to the true posterior is discussed in Section 3.4. In particular, we leverage a result from APT [1] (also known as SNPE-C), which states that training our NDE model parameters via maximum likelihood $\min_\phi \tilde{\mathcal{L}}$, where
$$\tilde{\mathcal{L}}(\phi) = -\sum_{i=1}^N \log \tilde{q}_{x,\phi}(\theta_i)$$
and
$$ \tilde{q}_{x,\phi}(\theta) = q_{\phi}(\theta)\frac{\tilde{p}(\theta)}{p(\theta)}\frac{1}{Z(x,\phi)} $$ (line 178)
implies $q_\phi(\theta|x) \rightarrow p(\theta|x)$ as $N \rightarrow \infty$ (along with the proposal posterior $\tilde{q}_{x,\phi}(\theta)\rightarrow \tilde{p}(\theta|x)$). This setup allows us to incorporate the true proposal prior $\tilde{p}(\theta)$ in the training loss without additional explicit corrective terms. Further, this limiting behavior holds even when optimizing $\tilde{\mathcal{L}}$ over samples drawn from the transformed distribution under the acquisition filter, as it shares its support with the proposal prior (and the same can be said for any distribution with this quality, in the limit as $N \rightarrow \infty$).
[1] D. S. Greenberg, M. Nonnenmacher, and J. H. Macke. Automatic posterior transformation for likelihood-free inference, 2019.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal.
[W-1]: I appreciate the addition of further evaluations on more standard SBI benchmarks, especially given the time constraint. My concerns were not necessarily about the lack of an in-depth evaluation of the advantage as a function of sample size, but rather that a sufficient amount of exposition should be given to elucidate to the reader the settings in which this methodology may be particularly helpful, and in which it may provide a significantly smaller benefit. I believe that the additional simulations that were described in the global rebuttal quell my concerns here.
[Q-1, Q-2, L-1]: I do agree with the authors' perspective that the traffic simulation model showcases the benefits of this methodology well. I appreciate the clarification regarding the relative computational cost of the filtering step.
[Q-3]: I also appreciate the clarification in this regard.
I believe this work has value to practitioners and researchers of SBI and lean towards recommending acceptance.
---
Reply to Comment 1.1.1:
Comment: Thank you for recommending acceptance and again for your valuable feedback! It is very much appreciated, and we welcome any further questions or discussion.
An additional note for **[W-1]**: Regarding exposition that better characterizes the problems for which our method is particularly ill/well-suited, the 2nd paragraph under **[Q-1, Q-2, L-1]** is likely the most relevant part of our original rebuttal to this point. We intend to expand on this further in our final version alongside the additional numerical results. | Summary: The paper introduces ASNPE, a modification of sequential neural posterior estimation (SNPE) that uses active learning to determine the set of most informative simulation parameters. The method is benchmarked in a synthetic scenario based on a real-world traffic network and outperforms domain-specific optimization schemes as well as simulation-based inference methods.
Strengths: - The clear and concise communication throughout the paper makes it an enjoyable read.
- It features a simple and elegant approach to target a gap in current SNPE methods.
- Considering the empirical evaluations, a high-dimensional setting with real-world importance is chosen. ASNPE demonstrates a substantial improvement over both domain-specific optimization schemes and, at least for prior 2, SNPE. Again, the results are analyzed in a clear and informative way.
- Lastly, I appreciate the detailed description of the planned open source Python packages in Appendix D.
Weaknesses: Major:
- While the reported experimental setup is exciting, the proposed method is tested only in a single setting. Since ASNPE seems to be a promising general improvement of SNPE-C, it would have substantially benefitted the work to demonstrate its usefulness in a wider range of experimental settings (e.g., by adapting tasks from [1] as a compact first experiment). I acknowledge that this is likely not feasible in the limited rebuttal period, but I believe it would make the paper much more informative for the broader SBI audience.
Minor:
- The paper jumps directly from theory to the experimental setup and results without mentioning the implementation of ASNPE, even though this would be quite informative for the reader.
- Section 4.2 does not feature all benchmark comparisons: MC-ABC and SNPE suddenly appear in the results, and I expected to find more information concerning MC-ABC at least in Appendix C. As a related minor comment, I suppose the benchmark uses APT / SNPE-C for SNPE, but it would be helpful to state this explicitly.
- Lastly, it would have been informative to see ablation studies (potentially in a computationally less demanding setting) or at least a theoretical discussion regarding the impact of the hyperparameters R, N, and B on the performance of ASNPE.
[1] Lueckmann, J. M., Boelts, J., Greenberg, D., Goncalves, P., & Macke, J. (2021). Benchmarking simulation-based inference. AISTATS.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In the training setup of SNPE and ASNPE, the number of simulations is not clear to me. The authors state that they perform R=3 rounds with selection size B=32, which should result in 96 simulations seen by the neural nets. How does this match the 128-sample simulation horizon?
- I (and maybe also future readers) would be interested in the authors' opinion on the reasons for the clear gap between SNPE and ASNPE in the prior 1 but not the prior 2 setting.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The explicit analysis of failure modes (which of course, are to be expected in any method comparison since no method always performs the best) is a big plus. As stated above, this does not incorporate a comparison between SNPE and ASNPE, which would be interesting for the wider SBI community.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and effort providing detailed feedback on our work. Please find our responses to your questions and comments below.
Note: Where applicable, we prefix sections with `W-<x>`, `Q-<y>`, or `L-<z>` to reference itemized comments in Weaknesses, Questions, and Limitations, respectively, numbered by the order in which they were mentioned.
**[W-1]** Please see our global rebuttal reply for details on additional empirical evaluation.
**[W-2]** We agree this transition could be smoother. Section 3.4, however, sets up several practical considerations for how ASNPE can be implemented (e.g., efficiently approximating Eq. 4, training neural network-based NDEs with MC-Dropout, etc.), and specific model details and hyperparameter settings are relegated to Appendix C.1.1 (primarily due to space constraints). In our final manuscript, we will incorporate a discussion on concrete implementation choices prior to the experimental setup, and direct readers to the Appendix for additional details.
**[W-3]** The intended scope of Section 4.2 was to introduce methods specifically used in OD calibration literature (i.e., just SPSA and PC-SPSA), while we relied on prior context for the likelihood-free inference methods (i.e., SNPE and MC-ABC). However, as you point out, MC-ABC is not properly introduced, and the exact version of SNPE remains unclear (although it is indeed APT). In our final manuscript, we’ll broaden the scope of Section 4.2 to include all reported methods and properly introduce them.
**[W-4]** This is a good point, and something we will incorporate alongside the smaller scale experiments run on benchmarks mentioned in [W-1]. As a quick theoretical discussion, however, the following reflects our observations on the large-scale setting we explore in the paper. Keeping the total number of simulation samples constant,
- The number of rounds $R$ dictates how many times we update the proposal distribution over the course of the simulation horizon. Increasing this value can enable quicker feedback to the NDE, requiring fewer simulation samples before re-training the model. When the prior is well-calibrated and simulation samples are representative of the observational data, this can have a positive compounding effect that boosts the rate of convergence to the desired posterior. However, for larger R the resulting batch sizes are smaller and the NDE receives noisier updates, which can have the opposite effect and hurt early performance when the prior is poor.
- The number of selected samples B per round is directly determined by R when the total number of simulations is held constant, and thus the above effects apply here.
- The number of proposal samples N per round governs the size of the parameter candidate pool over which the acquisition function is evaluated. Increasing this value allows us to consider more potentially relevant candidates under $\tilde{p}(\theta)$, and can thus increase the quality of the resulting B-sized batch. Given the acquisition function can be evaluated over this pool very efficiently (i.e., as a batched inference step through the NDE model), one can practically scale this up arbitrarily to increase the sample coverage over the proposal support (but with decreasing marginal utility).
**[Q-1]** Thank you for pointing this out. It should instead say R=4 (selection size B=32 is correct), and will be corrected in our final version.
**[Q-2, L-1]** Generally, prior I captures relatively high bias/low noise around the true OD parameter, while prior II is comparatively low bias/high noise. Our opinion as to why the gap appears between SNPE and ASNPE relies on a few observations:
1. Empirically, all methods under prior II benefit from its lower bias (seen very clearly in Figure 2) without suffering too greatly from the higher noise, indicating a particular sensitivity to the prior bias for this congestion setting and time of day.
2. ASNPE can identify useful simulation parameters more quickly than SNPE due to its explicit acquisition mechanism, aligned with uncertainty in the NDE model.
Given the sensitivity around the prior in this setting, ASNPE can have an edge over SNPE by making early strides to correct for the bias through exploration, and this can have a significant compounding effect for multi-round inference on such short simulation horizons. Put another way, ASNPE attempts to counter uncertainty in the NDE model by employing a more principled exploration of the parameter space than SNPE, and the effects of doing so are magnified under the higher bias under prior I (and less prominent under the less biased prior II).
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns. I especially value the additional experiments given the short time available.
[W-1] I appreciate C2ST as an additional metric, but think it is important to note that the score is nearly 1 for all settings and methods, and thus its informative value is very limited here. Acknowledging the paper's focus on small simulation budgets, I would still recommend additionally repeating the experiments with a bigger budget to better discriminate between SNPE and ASNPE regarding C2ST in a *final* version (not necessary during the discussion period). If I understand the authors correctly, this is already planned with "2) comparison between ASNPE and SNPE over long simulation horizons".
The other responses greatly improve the clarity for me and would be informative in the final version. I believe the paper is valuable for the NeurIPS community and increased my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for the favorable score revision and again for your valuable feedback! It is very much appreciated.
**Regarding C2ST** We agree completely, and should emphasize that this metric in particular was empirically difficult to improve under the small sample sizes tested. This is mostly consistent with the C2ST values reported in [1] on the reference benchmark tasks, where C2ST is broadly quite close to 1.0 for small sample sizes (smallest reported being $10^3$). Indeed, we intend to scale up these trials and explore performance on larger simulation samples ($10^5$+) for the final version. _(Note that our initial attempts at doing so indicate the metric appears to improve significantly for even small multipliers on sample size. For instance, on the Bernoulli GLM task with a sample size five times as large (2500 simulation draws), C2ST is much more distinctive, with SNPE averaging ~$0.87$ and ASNPE ~$0.85$ across five trials.)_
[1] Lueckmann, J. M., Boelts, J., Greenberg, D., Goncalves, P., & Macke, J. (2021). Benchmarking simulation-based inference. AISTATS. | null | null | null | null |
Private Online Learning via Lazy Algorithms | Accept (poster) | Summary: This paper studies private online prediction from experts (OPE) and online convex optimization (OCO) problems and proposes a general transformation that converts lazy (non-private) algorithms into private algorithms. By applying it to existing lazy algorithms, they obtain improved regret bounds for both problems. A lower bound for lazy algorithms is also provided, suggesting that new techniques other than lazy algorithms are needed if one wants a better regret.
Strengths: 1. The proposed method is a general transformation, namely, it applies to any algorithms that satisfy certain properties.
2. The results obtained outperform previous bounds, especially in the high privacy regime ($\varepsilon \ll 1$).
3. A matching lower bound is provided.
Weaknesses: 1. The improvement made in this paper may not be very significant. It only improves the dependency on $\varepsilon$, while in many scenarios $\varepsilon$ is treated as a constant.
2. The lower bound may not be essentially matched -- Theorem 4.2 requires the algorithm to be $\varepsilon^2$-CDP instead of $(\varepsilon,\delta)$-DP. It does not mean that every $(\varepsilon,\delta)$-DP algorithm should have that lower bound.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. The paper assumes an oblivious adversary. Can the results be extended to the adaptive adversarial setting?
2. It looks like Condition 4.1 is not limited to algorithms with a small number of switches. It only requires that once the algorithm resamples, the resampling distribution depends only on past loss functions and not on internal randomness. Is my understanding correct?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Irrelevant.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the feedback. We respond to the reviewer’s main comments below; we are glad to clarify further as needed during the discussion period.
Improvement for small $\epsilon$: It is true that in many applications, $\epsilon$ is set to be some constant number. But the online setting is a bit different. Take DP-OPE: as the non-private regret term $\sqrt{T}$ can dominate the other term, we can get privacy for free even when $\epsilon \ll 1$, and hence the high-privacy regime becomes more important and interesting. This is also crucial in scenarios where many applications of DP-OPE are necessary: in this case, our results show that we can run more such instances of DP-OPE than existing work while satisfying privacy.
Lower bound for CDP: Lower bounds in the oblivious setting turn out to be elusive, and prior work has only resulted in the trivial lower bound of $1/\epsilon$. Therefore, we restricted ourselves to the family of low-switching algorithms. The choice of CDP is made so that it is easier to prove the tight composition of the privacy budget. We can prove similar (tight) lower bounds for pure DP as it has a simple tight composition (our upper bounds are tight for pure DP).
Question 1: The privacy guarantee may be extended to adaptive adversaries, but the utility guarantee may be broken. As the algorithm is low-switching with the same model choice over a batch, an adaptive adversary may design some loss functions with very large regret based on this. We note that our main goal in this paper is to study oblivious adversaries, especially given the recent work [AFKT23], which studies adaptive adversaries and gives tight lower bounds for certain privacy regimes.
Question 2: Yes, you are right. Our lower bounds should hold for this family of algorithms as well.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my questions. I have decided to keep my positive score.
Strengths: * The improvements in results for DP-OPE and DP-OCO are highly valuable.
* The research motivation of this work is clear, and the proposed transformation (L2P) is a very solid contribution to DP online learning.
Weaknesses: * The transformation in this work is based on low-switching online algorithms, and is only applicable to oblivious adversaries. In contrast, previous DP-OCO algorithms [ST13, AS17, KMS+21] can adapt to the non-oblivious setting.
* The theoretical innovation in this paper is limited, as it largely follows previous work [AFKT23b].
Technical Quality: 3
Clarity: 3
Questions for Authors: * In footnote 2, the authors state that their algorithms will satisfy a stronger notion of differential privacy against adaptive adversaries. However, I believe that for non-oblivious adversaries, the proposed DP online algorithms in this work cannot provide regret guarantees. Is this correct? If so, I would like to understand why the low-switching private algorithms cannot handle non-oblivious adversaries, whereas previous work using binary tree techniques [ST13, AS17, KMS+21] can.
* From Theorem 3.2, it is evident that $\epsilon$ of L2P increases over time, meaning that its privacy weakens progressively. Could this be a concern for the proposed DP algorithm?
* The authors established a lower bound for low-switching private online algorithms based on **CDP** analysis, whereas the upper bound provided in this paper is based on **DP** analysis, i.e, Theorem 3.2 and 3.9. Could you clarify the implications of this difference?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the feedback. Please see our responses below; we are glad to clarify further as needed during the discussion period.
Adaptive adversary: we note that our main focus in this work is oblivious adversaries since the recent work of [AFKT23] studies adaptive adversaries and shows tight lower bounds for certain privacy regimes. Given the separation in rates between adaptive and oblivious adversaries, the goal of our work is to study and understand the fundamental limits of oblivious adversaries.
Innovation: Some works in this line implicitly show the connection between lazy algorithms and private algorithms, but we are the first to formalize this connection explicitly via our L2P framework that can convert general lazy algorithms into private ones. The framework does not follow previous works like [AFKT23b] directly and requires new techniques and analysis, such as the new correlated sampling strategy through a parallel sequence of models, or the new regret guarantees that measure the effect of batching on lazy online algorithms.
We also answer the reviewer’s questions below.
Q1: This is correct. Our regret bound may be invalid with an adaptive adversary, and there are some challenges in extending it further. For example, as our algorithm consistently produces output over one batch, the adversary can know our next $B$ predictions and might choose some bad loss functions. The privacy guarantee, however, holds against adaptive adversaries. This is similar to several results in DP ML, where we get privacy for any input sequence, while utility holds under some distributional assumptions. In other words, if our assumptions are invalid, we may get worse utility but will not lose privacy.
Q2: We will lose the privacy budget each time we make a switch, and it is unavoidable to weaken the privacy guarantee. However, we set the privacy budget for each time step in a way that guarantees that the final privacy parameter guaranteed by the algorithm is satisfactory. This requires that the number of rounds $T$ is bounded, as is the case in most private optimization procedures: for example, in DP-SGD with full batch size, the privacy weakens over iterations as well, and therefore we need to bound $T$.
Q3: Lower bound for CDP: lower bounds in the oblivious setting turn out to be elusive and prior work had only resulted in the trivial lower bound of $1/\varepsilon$. Therefore, we restricted to the family of low-switching algorithms. The choice of CDP is made so that it is easier to prove the tight composition of the privacy budget. We can prove similar (tight) lower bounds for pure DP as it has a simple tight composition (our upper bounds are tight for pure DP).
---
Rebuttal Comment 1.1:
Comment: Thanks for your response, which has addressed my concerns. I have decided to keep my positive score.
Strengths: The paper is well-written with all claims supported by proofs. Compared to previous works, the L2P algorithm proposed uses a new correlated sampling strategy to avoid accumulation of privacy cost, and the paper improves the regret of lazy algorithms due to batching. In addition, the paper provides a lower bound for the DP-OPE problem, showing that limited switching is not enough to obtain faster rate, suggesting the need for other techniques.
Weaknesses: The lower bound is only valid for the family of slow-varying algorithms, thus the algorithms proposed might not be optimal.
Technical Quality: 3
Clarity: 3
Questions for Authors: The paper is relatively dense, with many definitions related to differential privacy (Section 2), which might be unfamiliar to people with little prior knowledge of differential privacy. I'm wondering if all those definitions are necessary for the main part of the paper (maybe some could be moved to the appendix)?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes the paper has discussed the limitations and provides improvement in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review and feedback. Below, we address the main comments; we are glad to clarify further as needed during the discussion period.
1. Lower bounds: we acknowledge that the conditional lower bound is not ideal, but it is important to note that this is the first non-trivial (even conditional) lower bound for oblivious adversaries. Several papers have studied DP-OPE and DP-OCO, and so far, none has come up with a better lower bound than the trivial $1/\varepsilon$ lower bound (though for adaptive adversaries, there are some lower bounds). We, therefore, believe that our conditional lower bound (while admittedly not entirely satisfactory) is a step in the right direction towards proving general non-trivial lower bounds for oblivious adversaries. Finally, we note that our lower bound proves that our rate is the best possible using existing techniques that have been pursued by the recent line of work based on limited switching.
2. Organization: we will reorganize the paper, add more discussions, and defer less important definitions and details to the appendix, in order to make the paper more readable for people less familiar with differential privacy.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the additional comments on the lower bound! I will keep my score!
Strengths: - The problem is well-motivated. Even though previous private algorithms have good theoretical guarantees, their privatized mechanism is usually tailor-made to solve a specific problem, which limits their applicability. The new black box algorithm makes converting a non-private to a private one simple and seamless.
- The private term has improved dependence in the $\epsilon$ term which could be valuable in the high privacy regime (where $\epsilon \ll 1$).
- The authors also provide lower bounds that match their upper bound.
- The paper is well-written overall.
Weaknesses: - I'm not sure if I agree with the claim that the new bound is an improvement over the previous bound. Obviously, the high privacy regime is very important, and as the authors have mentioned, this would allow for the composition of more instances. However, as far as I know, many practical applications of DP algorithms use $\epsilon \approx O(1)$, so the new bound could be worse than previous results.
- It would be nice if there were some proof-of-concept experiments comparing the new mechanism with the algorithms in previous works.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can the author provide some intuition on why we would want to use concentrated DP to prove the lower bound? How does concentrated DP make it easier for us to construct the lower bound?
- What are some non-lazy algorithms that satisfy Assumption 3.1? What would the regret bound look like for these non-lazy algorithms?
- I'm a bit confused about the setting of $\epsilon$ in Theorem 3.2. To get the optimal regret bound, we need to set $\epsilon$ using the provided formula, which I think is roughly $O(1)$. Doesn't that defeat the point of the proposed framework, where we only see improvement when $\epsilon \ll 1$?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time in reviewing and feedback. Please see our responses below; we are glad to clarify further as needed during the discussion period.
- Improvement for small $\varepsilon$: First, by modifying the parameter settings, we can recover the previous best results when $\epsilon \ge \Omega(1)$; second, as the non-private regret $\sqrt{T}$ can dominate the private cost in regret when $\epsilon$ is large, the high privacy regime is interesting and non-trivial. In practice, people are usually forced to use constant epsilon, as smaller values of epsilon degrade the performance significantly. However, in our setting, we can get privacy for free (when the non-private regret $\sqrt{T}$ dominates) even if $\epsilon \ll 1$. Therefore, finding the smallest $\varepsilon$ (or best privacy) possible that allows the non-private regret $\sqrt{T}$ is an interesting open question, and our work provides progress towards resolving it.
- Q1, Intuition for concentrated DP: Lower bounds in the oblivious setting turn out to be elusive, and prior work has only resulted in the trivial lower bound of $1/\varepsilon$. Therefore, we restricted ourselves to the family of low-switching algorithms. The choice of CDP is made so that it is easier to prove the tight composition of the privacy budget. We can prove similar (tight) lower bounds for pure DP as it has a simple tight composition (our upper bounds are tight for pure DP).
- Q2, non-lazy algorithms: The classic Multiplicative Weights algorithm (that draws a fresh sample at each iteration) may be non-lazy, but still satisfies Assumption 3.1. As long as it satisfies Assumption 3.1, we can prove the regret bound as claimed.
- Q3, setting of $\varepsilon$ in Theorem 3.2: The theorem provides a way to calculate $\varepsilon$ based on the parameters of the algorithms such as $\eta$ and $p$. As a result, we can tolerate much smaller values of $\varepsilon$: indeed, in Theorem 3.9 and Theorem 3.11, we instantiate Theorem 3.2 with several hyperparameters that result in small values of $\varepsilon$. Hope this clarifies your concern. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Reconstruct and Match: Out-of-Distribution Robustness via Topological Homogeneity | Accept (spotlight) | Summary: The paper proposes a method for improving domain generalization and test-time adaptation by introducing a selective variant of slot attention, formulating the relationship between slots across images as topological homogeneity between hypergraphs constructed from the slots, and thereby matching occurrences of the same object across different images.
Strengths: The proposed method and its components introduce novel techniques for the domain generalization problem that are well motivated and appear to be sound.
The mathematical derivation of the employed forms of the individual steps makes sense.
Several ablations clearly demonstrate the dependence of REMA's results on its main building blocks.
Weaknesses: The proposed method consists of fairly many newly introduced steps.
While ablations show the necessity of the major modular building blocks of REMA for achieving the eventual results, these building blocks are not analyzed in detail.
For understanding them (and therefore REMA), it would be necessary to expound each individual step and confirm that it has the intended effect and that the explicit or implicit mathematical assumptions hold.
For example, the results of standard slot attention can be of varying quality, and it would be important to verify whether and how such imprecisions introduced there propagate through the next steps.
The analysis in Figure 5 goes in this direction, but it is not clear what exactly is probed and measured here. A similar analysis for the other components of the method would be helpful.
**Main issue of the paper**: There exist different communities in ML which study the generalization performance of DNNs to distribution shifts in very different settings. Unsupervised Domain Adaptation (UDA) is separate from Domain Generalization (DG) and both are separate from OOD generalization. Putting papers which study either of those in the same Table is wrong, misleading and highly confusing. All of these settings have their benefits and challenges and the numbers are simply incomparable if the initial conditions, such as which data is available during training, differ this strongly. The authors conflate and confuse all three settings on multiple occasions which makes the paper confusing and the results are impossible to understand. It is further completely unclear in which of the three their proposed method falls into. Below, I explain the issue in much more detail and provide specific problematic text instances. The paper needs a major revision which includes reworking all sections and most Tables.
line 183: "For OOD generalization, we leverage the three most widely used benchmark datasets." The datasets PACS and Office-Home are usually used in Unsupervised Domain Adaptation (UDA), not OOD generalization. Datasets commonly used to benchmark OOD generalization are datasets like ImageNet-R, ImageNet-Sketch, ObjectNet or ImageNet-V2. The choice of the baselines the authors compare to also gives the impression that they confuse domain adaptation with OOD generalization: CORAL and DANN are UDA methods.
line 66: "The goal of OOD generalization is to find a predictor f : X 7→ Y that generalizes well to all unseen target domains." This is correct, but then the used baselines are wrong because they assume access to the target domains and are trained using this information. It is not clear to me how the UDA methods (shown in Table 1) can be implemented without this access.
The setting of UDA differs drastically from OOD generalization: In UDA, we attempt to learn a model which performs well on the source domain and a set of target domains, using unlabeled data from the target domains. That is, in UDA, we assume access to the unlabeled test set at training time. This differs from the OOD generalization setting: Here, we want to train a model on a source dataset and then test its generalization to an unseen test set. That is, we assume no knowledge about the test time distribution shift. This is a crucial difference and it is wrong to confuse the two terms. The implemented baselines, e.g. CORAL and DANN, are used in a UDA setup, not in an OOD generalization setup. In this light, the wording "Following common practice, the model selection is based on a training domain validation set" is meaningless because the common practice for model selection differs depending on which setting we consider.
The related work section is similarly confusing. On the one hand, the authors wish to review works on OOD generalization where the goal is "to train a model using data from the source distribution so that it can perform well on an unknown target distribution". [41] and [42] use the DG setting, i.e., they assume access to n-1 target domains. [63] assumes access to all target domains. It is unclear why [61] is a good citation for domain-invariant learning. I haven't checked all citations, but the ones I checked **all** assumed access to the target domains.
The setting the baselines use in Table 1 is inconsistent. CORAL and DANN have been trained in an UDA setting. That is, they assume access to unlabeled target data at test time from a target domain. VNE and IRM are used in a domain generalization setting where they assume access to n-1 domains and aim to perform well on an unseen domain. These are two very different settings which are incomparable to each other and it is wrong and confusing to put them in the same table without any discussion. Further, the authors aim to test OOD generalization where no access to unseen target domain data is available (line 66) which is incompatible with either UDA or domain generalization.
The baseline numbers for the cited papers are wrong. I checked the numbers of CORAL and VNE from Table 1 with the numbers in the original papers and they do not match. If the authors reimplemented all the baselines, they need to state this and discuss where the performance differences come from.
Checking papers-with-code for the PACS dataset (https://paperswithcode.com/sota/domain-generalization-on-pacs-2), the best numbers are close to 100% for the domain generalization setting. Since the authors use DG benchmarks in their table 1, I assume that using other benchmarks in the DG domain would be valid as well. If we filter for RN50 architectures only, the best number is 90.5% which is higher compared to the best number reported by the authors.
Looking at the Office-Home dataset (https://paperswithcode.com/sota/unsupervised-domain-adaptation-on-office-home), the best number here is 90%. Here, the setting is UDA. But as mentioned before, it is not clear which setting should be used because the authors include both UDA and DG benchmarks in their Table 1. The best number in the UDA setting is higher than what the authors report in Table 1. There are actually five separate benchmarks for OfficeHome on papers with code (Domain Generalization, Domain Adaptation, Universal Domain Adaptation, UDA, Partial DA) and it is not clear which one should be used.
It is not clear how the models are trained. The test sets are comprised of different domains. Do the authors train their models on all of them or on e.g. two of the domains and test against the rest or on one of the domains and then test against the rest?
For test-time adaptation, the authors missed important works which perform better than their method. SLR achieves an error of 48.7% on the highest severity of ImageNet-C [A]. ETA and EATA [B] achieve an error of around 52%.
[A] Mummadi et al. "Test-time adaptation to distribution shift by confidence maximization and input transformation"
[B] Niu et al. "Efficient test-time model adaptation without forgetting."
Press et al. [C] showed that most test-time adaptation methods collapse when adapting for long periods of time. The authors should test their model on the proposed CCC benchmark to analyze whether their method also suffers from the collapse. The authors are writing that they test their method in the continuous test-time adaptation scenario in Fig. 4b, but it is entirely unclear on which dataset they are testing it. Is it the CCC benchmark? Notably, if the authors are testing their approach on the Continual Test-time adaptation benchmark from [D] (although I am just guessing here), Press et al. showed in [C] that one needs to adapt for longer time periods to show-case collapse and the adaptation periods proposed in [D] are insufficient. The authors must state clearly which continual learning benchmark they are using and I would suggest to report numbers on both [C] and [D].
[C] Press et al. "RDumb: A simple approach that questions our progress in continual test-time adaptation"
[D] Wang et al. "Continual test-time domain adaptation"
Technical Quality: 3
Clarity: 3
Questions for Authors: The paper requires a major revision to make it much clearer which setting the authors are targeting. Is it:
- OOD generalization as suggested by the title? Then, the authors **cannot** assume any knowledge about the distribution shift at test time. Then the baselines in Table 1 cannot be used as they do assume access to the target domains.
- Unsupervised Domain Adaptation as suggested by the baselines in Table 1, i.e. DANN and CORAL? Then, the authors need to reformulate their title, abstract, motivation etc. and remove all instances of "OOD generalization" because this is very confusing otherwise. The authors also need to remove the Domain Generalization baselines from Table 1.
- Domain Generalization as suggested by the IRM and VNE baselines in Table 1? Then, the authors need to remove the UDA baselines from their Table 1 and remove instances of mentioning OOD generalization from the paper.
The authors need to add Test-Time adaptation baselines to their Table and improve their continual Test-Time evaluation.
The authors need to make more comprehensive evaluations on the influence of the different components of their loss.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations section in the appendix is very short. A clear limitation is that the method introduces several hyperparameters which need tuning. This imposes an additional computational overhead which has not been discussed.
Another limitation which can be fixed is that the authors did not study the continual learning setting to analyze whether their method collapses when adapting for long time periods.
In 2024, most people work with large-scale pretrained models such as e.g. CLIP. It is not clear how this method could be used on those models and whether improving ResNet50's performance is relevant nowadays.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the time and effort the reviewer has invested in reviewing our paper. However, **we must point out that the review contains many factual errors and misunderstandings, rendering most comments unacceptable due to their erroneous assumptions.** We will try our best to eliminate the misunderstandings via the following responses.
> **Q1: Regarding problem definitions and experimental comparisons**
(1) Our paper is NOT related to the UDA setting; we consistently emphasize our research on OOD generalization (a.k.a. domain generalization). The reviewer's comment that "Unsupervised Domain Adaptation (UDA) is separate from Domain Generalization (DG) and both are separate from OOD generalization" is **problematic**. In our paper, **DG and OOD generalization are synonymous and can be used interchangeably**. This is also a **consensus** within the community, as exemplified by two well-known DG surveys [1,2] that treat the terms as synonymous. In the abstract of [1], it is stated: "Domain generalization (DG), i.e., out-of-distribution generalization, has attracted increasing interest in recent years."
(2) In Table 1, we ONLY compare with DG baselines. The reviewer's claim that our baseline comparisons, which include DANN and CORAL, suggest we are evaluating UDA algorithms is a **significant misunderstanding**. DANN and CORAL, originally introduced as UDA methods, are now important DG baselines that match deep features from different source domains (NOT target domain) via domain adversarial training (DANN) and correlation alignment (CORAL). This is a **common practice in the DG community**. The well-known **DomainBed** benchmark (code: https://github.com/facebookresearch/DomainBed) and many follow-up works in the field also include both DANN and CORAL as their main DG baselines. Regarding the baseline results, we cite all these numbers from DomainBed or their original papers (following the identical setting) to ensure a fair comparison. Our experiments were also conducted in the same conditions. Please refer to recent papers to avoid misunderstandings regarding benchmarks, such as [3-6]. Based on this, we need to emphasize that the comparisons in Table 1 are reasonable and fair, and our problem definition is also precise and clear.
(3) The reviewer's statement that ''The baseline numbers for the cited papers are wrong. I checked the numbers of CORAL and VNE from Table 1 with the numbers in the original papers and they do not match'' is **incorrect**. Regarding CORAL, its original paper conducted UDA (not DG) experiments on Office-31. **Moreover, it did not involve any experiments on PACS, Office-Home, or VLCS at all. Why then is it claimed that our ''cited'' results are wrong?** Regarding VNE, we used the results from Table 2 of the original paper, which were obtained using the **ERM** algorithm (see our Table 1). The results from Table 3 of the original VNE paper, which were obtained using the **SWAD** algorithm, are inconsistent with our experiments and thus cannot be used. Regarding the mentioned results from paperswithcode website, it is not standard practice in the DG community to base comparisons on them due to the potential distinction in training and inference. For example, the paper that achieved 90.5% on PACS employed a different model selection strategy (not ID val).
(4) The reviewer's statement that ''the PACS and Office-Home datasets are usually used in Unsupervised Domain Adaptation (UDA) and not OOD generalization'' is **incorrect**. **The PACS dataset was introduced in the paper "Deeper, Broader and Artier Domain Generalization," which is a DG paper, not UDA.** Actually, the datasets we use are among the most commonly utilized in the DG community for image data, as referenced in [1-6].
(5) We clarify why our paper also conducted experiments related to test-time adaptation (TTA). The goal of TTA [7] is to update the model online during testing, which complements the goal of DG, i.e., DG aims to learn a generalizable model using only source data, while TTA seeks to enhance generalization ability using unlabeled target data. On the other hand, a series of TTA works utilizes self-supervised learning tasks for both the training and testing phases (L174-177). This aligns with our slot-based approach, and our hypergraph-based matching naturally links training and test phases, enabling our framework to also work as a TTA method. We reiterate that the experiments for DG and TTA were conducted separately (Tab. 1 does not utilize any TTA strategies).
**References**
[1] Generalizing to Unseen Domains: A Survey on Domain Generalization. TKDE, 2022.
[2] Domain Generalization: A Survey. TPAMI, 2022.
[3] Diverse Weight Averaging for Out-of-Distribution Generalization. NeurIPS, 2022.
[4] Improving Out-of-Distribution Generalization by Adversarial Training with Structured Priors. NeurIPS, 2022.
[5] MADG: Margin-based Adversarial Learning for Domain Generalization. NeurIPS, 2023.
[6] On the Adversarial Robustness of Out-of-distribution Generalization Models. NeurIPS, 2023.
[7] A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts. IJCV, 2024.
> **Q2: Regarding TTA experiments**
(1) We thank the reviewer for pointing out these two works [A, B] and have added them to our paper. However, we have to note that SLR [A] is still a preprint paper (since 2021), uses extra input augmentation (compared to other baselines), and does not release code (hard to make a fair comparison).
(2) Regarding continuous TTA, we have already presented it in Fig. 4(b) (CIFAR-10C), where the x-axis represents the type of corruption and multiple SOTA TTA methods are compared. More results (CIFAR-100C and ImageNet-C) will be added to the final version.
> **Q3: Regarding ablation studies**
In the original manuscript, the ablation of REMA is shown in Tab. 4, Fig. 4, and Fig. 6. As suggested, we provide additional ablation studies in **the attached PDF (Tab. 2)**.
---
Rebuttal 2:
Title: I apologize for the misunderstandings in my review
Comment: Dear authors,
I would like to acknowledge major misunderstandings of the paper and the relevant literature on my part. I was not aware that DANN and CORAL are now also DG benchmarks. Seeing those in the table contributed to me thinking the paper is based in UDA. I apologize for missing this and will raise my score to 6 and withdraw my concerns.
Best, reviewer Eaaq
---
Rebuttal Comment 2.1:
Title: Appreciation for Your Revised Review and Understanding
Comment: Dear Reviewer EaaQ,
Thank you for your thoughtful reconsideration of our paper. We appreciate your willingness to update your evaluation and withdraw your previous concerns. Your revised understanding and increased score significantly aid in the review process, and we are grateful for your efforts to resolve these issues.
Best regards,
Submission4013 Authors | Summary: This paper presents REMA which designed to improve the robustness of deep learning models against out-of-distribution (OOD) data. REMA employs a selective slot-based reconstruction module to dynamically map dense pixels into a sparse set of slot vectors, enabling the identification of major components from objects in an unsupervised manner. Additionally, a hypergraph-based relational reasoning module is introduced to model high-order dependencies among these components, ensuring topological homogeneity. Experiments conducted on standard benchmarks demonstrate that REMA outperforms state-of-the-art methods in OOD generalization and test-time adaptation settings, highlighting its effectiveness in handling distribution shifts and enhancing the adaptability of deep models in real-world, non-stationary environments.
Strengths: 1. REMA introduces a unique combination of selective slot-based reconstruction and hypergraph-based relational reasoning to address OOD robustness, which has not been explored extensively in previous studies.
2. The framework effectively identifies and leverages major components of objects without requiring human prior knowledge or fine-grained annotations, reducing the need for extensive labeled data.
3. The paper provides extensive experimental results on multiple benchmark datasets, showing improvements over existing state-of-the-art methods in both OOD generalization and test-time adaptation scenarios.
Weaknesses: 1. The paper does not provide a detailed analysis of the computational overhead introduced by the new modules, which could impact the scalability of the approach for large-scale or real-time applications.
2. While the experiments cover several benchmark datasets, the generalizability of REMA to other types of datasets or more diverse real-world scenarios is not fully explored.
3. The effectiveness of REMA relies on several hyperparameters (e.g., the number of slots, attention iterations), and the paper does not thoroughly investigate the sensitivity of the model's performance to these parameters.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Parameter Sensitivity: How sensitive is REMA's performance to the choice of hyperparameters such as the number of slots and attention iterations? Could an automated hyperparameter tuning method improve the robustness and generalization of the model further?
2. Computational Efficiency: What is the computational overhead associated with the selective slot-based reconstruction and hypergraph-based relational reasoning modules? How does this overhead impact the scalability and real-time applicability of REMA in large-scale or resource-constrained environments?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback, which we address below:
> **Q1-1: How sensitive is REMA's performance to the choice of hyperparameters such as the number of slots and attention iterations?**
As suggested, we have provided quantitative experimental results in **the attached PDF (Fig. 3)**. Within a reasonable range, REMA is *not sensitive* to changes in hyperparameters (*max variation: ~1.8%*). Note that increasing the number of slots and iterative attention times will lead to higher computational costs.
> **Q1-2: Could an automated hyperparameter tuning method improve the robustness and generalization of the model further?**
In our main paper, to ensure fairness in comparisons, we strictly follow the traditional DG methods for model selection. Introducing automated hyperparameter tuning could alter the experimental benchmarks. Moreover, searching a set of hyperparameters for each task would increase the training cost. As demonstrated in Q1-1, we empirically found that the proposed method is not sensitive to changes in hyperparameters. Therefore, we opted to fix hyperparameters across all experiments. On the other hand, it is true that this may not be the optimal solution for individual cases. Leveraging hyperparameter searching techniques to identify potentially optimal parameters for each case could further enhance the final performance.
> **Q2-1: What is the computational overhead associated with the selective slot-based reconstruction and hypergraph-based relational reasoning modules?**
In **the attached PDF (Fig. 4 and Fig. 5)**, we have demonstrated the computational overhead of the selective slot-based reconstruction (SSR) and hypergraph-based relational reasoning (HORR) modules. (1) During training, compared to previous OOD generalization methods, our SSR, which utilizes a small number of slots, does not significantly increase computational costs; HORR also does not incur substantial overhead due to the limited number of hypergraph nodes and edges. Thus, our method (SSR + HORR) is close to previous methods in terms of param. and GFLOPs **(Fig. 4)**. (2) During inference, the speed of our method is comparable to most previous TTA methods **(Fig. 5)**. Note that without TTA, it is equivalent to the inference speed of ERM.
> **Q2-2: How does this overhead impact the scalability and real-time applicability of REMA in large-scale or resource-constrained environments?**
(1) Scalability. First, the time complexity of Slot Attention is approximately O(N×K×d), which indicates that the computational cost increases linearly with the length of the input sequence, the number of slots, and the feature dimensionality. In our case, N and K are generally O(1), and d is typically O(10^2), thus the complexity of Slot Attention is approximately O(10^2). Second, for hypergraph convolution, the time complexity is on the order of O(E×C×d), where E is the number of hyperedges, C is the average cardinality of the hyperedges (i.e., the average number of vertices that each hyperedge connects), and d is the dimensionality of the features. Similarly, E and C are generally O(1), and d is typically O(10^2), thus the complexity of hypergraph convolution is also approximately O(10^2). Based on this detailed analysis of complexities, we find that the two main operations in REMA are efficient, indicating the potential to scale up.
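To make the dominant O(N×K×d) term concrete, below is a minimal NumPy sketch of a single slot-attention update (illustrative only; the shapes and softmax-over-slots normalization follow the standard Slot Attention formulation, not our exact implementation, and all names here are hypothetical):

```python
import numpy as np

def slot_attention_step(inputs, slots):
    """One simplified slot-attention iteration.

    inputs: (N, d) flattened feature map; slots: (K, d).
    The matrix product below is the dominant cost: O(N * K * d).
    """
    logits = inputs @ slots.T                                  # (N, K)
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)                    # softmax over slots
    weights = attn / (attn.sum(axis=0, keepdims=True) + 1e-8)  # normalize per slot
    return weights.T @ inputs                                  # updated slots, (K, d)

rng = np.random.default_rng(0)
updated = slot_attention_step(rng.normal(size=(64, 128)), rng.normal(size=(5, 128)))
print(updated.shape)  # (5, 128)
```

Since N, K, E, and C stay small in our setting, both this update and the analogous hypergraph convolution remain cheap relative to the backbone forward pass.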
(2) Real-time applicability. In our main paper (Sec. 4.3 and Fig. 4(b)), we show the results of continuous test-time adaptation, which aims to evaluate the capability to handle dynamically changing environments. Our method consistently outperformed SOTA methods, showing superior stability without significant changes. As mentioned in Q2-1, our computational overhead is not substantial. Therefore, combined with our capability for continuous adaptation, this ultimately allows us to meet real-time demands—ensuring both speed and stability.
> **Q3: The generalizability of REMA to other types of datasets or more diverse real-world scenarios.**
Thank you for your suggestion. As suggested, we have added two typical applications in medical scenarios, including pneumonia classification (chest X-ray images) [1] and skin lesion classification [2]. Please refer to the original papers for details of the experimental setup. The results are presented below.
**Pneumonia Classification.** Chest X-ray images from three different sources: NIH, ChexPert, and RSNA. The task is to detect whether the image corresponds to a patient with Pneumonia or not.
| Method | RSNA | ChexPert | NIH |
| ----------- | ---- | -------- | ---- |
| ERM | 55.1 | 60.9 | 53.4 |
| IRM | 57.0 | 63.3 | 54.6 |
| CSD | 58.6 | 64.4 | 54.7 |
| MatchDG [1] | 58.2 | 59.0 | 53.2 |
| MiRe [3] | 63.6 | 65.0 | 56.4 |
| REMA (Ours) | 68.2 | 70.5 | 62.4 |
**Skin Lesion Classification.** We adopt seven public skin lesion datasets, including HAM10000, Dermofit (DMF), Derm7pt (D7P), MSK, PH2, SONIC (SON), and UDA, which contain skin lesion images collected from different equipment.
| Method | DMF | D7P | MSK | PH2 | SON | UDA | Avg |
| ----------- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| DeepAll | 24.9 | 56.8 | 66.7 | 80.0 | 86.1 | 62.6 | 62.9 |
| MASF | 26.9 | 56.8 | 68.2 | 78.3 | 92.0 | 65.4 | 64.6 |
| MLDG | 26.7 | 56.6 | 68.9 | 80.2 | 88.2 | 63.2 | 64.0 |
| CCSA | 27.6 | 57.4 | 68.3 | 75.0 | 90.5 | 67.6 | 64.4 |
| LDDG [2] | 27.9 | 60.1 | 69.7 | 81.7 | 92.7 | 69.8 | 67.0 |
| REMA (Ours) | 29.1 | 63.4 | 70.8 | 83.5 | 94.8 | 75.3 | 69.5 |
**References**
[1] Domain Generalization using Causal Matching. In ICML, 2021.
[2] Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization. In NeurIPS, 2020.
[3] Mix and Reason: Reasoning over Semantic Topology with Data Mixing for Domain Generalization. In NeurIPS, 2022.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: After reviewing the rebuttal and considering the comments from other reviewers, I will raise my score. My questions have been satisfactorily addressed. Thank you to the authors.
---
Reply to Comment 1.1.1:
Comment: We are glad to hear that our response helped resolve your questions. Thank you again for your time and constructive feedback! | Summary: The authors propose an approach to tackle OOD generalization via a method that tightly combines learning-based feature extraction with graph-based relationship modelling to explicitly learn and represent the topological structure of image data, achieving competitive results across a range of OOD and test-time adaptation benchmarks.
Strengths: **Originality & Significance:**
- Interesting method focusing on explicitly modelling the often neglected /only implicitly modelled topological structure of image data via a mixture of learning- and graph-based methods
- Proposed approach demonstrates an appealing way of ‘casting’ the common human intuition regarding our visual reasoning process, which draws on different levels of hierarchy, into an end-to-end trainable algorithm through a well-justified composition of different components
**Quality:**
- Experiments conducted with a good selection of comparative methods to gauge performance improvements across two different tasks (OOD and test-time adaptation) and multiple datasets
- The authors perform an appropriate ablation of their main components, both quantitatively and qualitatively
**Clarity:**
- The paper is well written and easy to read and follow; The topic is well motivated and well-placed within the context of related efforts, clearly pointing out which areas of research the authors build upon and what the remaining challenge is;
- Clear visualizations help the reader to quickly grasp the underlying concepts, both methodologically and architecturally
Weaknesses: _TLDR; I do not see any severe ‘prohibitive’ weaknesses in this work, but have a few questions and requests that I’d like the authors to clarify & address._
- Some insight into what the slots actually represent would be insightful and elevate the paper, see below.
- Missing detail on complexity, please see question section below.
- Missing details regarding inference and training procedure, see below.
- Some inconsistencies in notation as well as some typos that should be corrected
Technical Quality: 3
Clarity: 3
Questions for Authors: **Main concerns, questions & potential improvements:**
**[Q1]**: Consistency in notation could be improved.
The authors initially use lower-case characters to indicate functions, e.g. in equation (1), and mathcal font to indicate sets (e.g. input, output, feature space/domain);
However, this suddenly changes during the introduction of the method (l.94), where mathcal is used to refer to transformations (e.g. K_beta, which is nothing more than a function);
Then slightly later (l.135), mathcal refers to the ‘result’ of the linear transformation that essentially forms the graph nodes;
$\rightarrow$ I'd highly recommend keeping a consistent notation to avoid confusion (as it did confuse me);
**[Q2]**: Some missing details regarding training & inference procedure.
- How exactly is training performed? The authors mention that the “algorithm first trains deep models using the reconstruction objective” (l.171) – is the model (or a part of it) then frozen when the graph is created/HORR trained, or is HORR simply added and everything then trained/finetuned? The appendix section doesn’t make this clear to me either, and some more details here would be helpful.
- The authors mention in l.99 that the queries “will be refined during the T attention iterations”. How many iterations are employed in practice, and is this performed for each new image pair (i.e. each step) during training?
- How exactly is the inference actually performed at test time? Does inference require pairs as employed during training, or what is the exact setup there?
**[Q3]**: Details around complexity;
Following up from Q2, what is the ‘complexity’ of the method in contrast to other related methods, e.g. in terms of inference time?
Note that given the impressive performance and graph-based nature, this could be an interesting insight to the reader (to gauge potential trade-offs between multi- and 1-step reasoning methods)
**[Q4]**: Details regarding slots / information bottleneck:
- The authors mention in the appendix that ‘5’ slots have been typically used. Is this the same for all datasets? And how many were selected as relevant?
- Do these numbers change across datasets or even classes? And if so, could you provide possible insights into why, e.g. multiple objects / increased complexity / more components, or similar.
**[Q5]**: Insights into the actual 'slot correspondence'/representation:
- 5 slots seems quite few to represent the content of an image. Is this due to the simplicity of the images, and do the authors have some intuition how this would change for `real-world' natural settings?
- How do the slots actually 'align' with 'components' of objects: Taking the motivational picture of the horse, would your algorithm actually identify 'parts' of an object as a component, or rather different objects in an image, or entirely different? Some insight into these aspects would be highly interesting to actually see to which extent the inner workings align with what we would expect from humans (which is, after all, your underlying motivation)
---
**Additional comments:**
- typo l.92: embedding -> embeddings
- capitalization l.109: we -> We
- typo l.216: methods […] is -> are
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations have been adequately addressed by the authors in the appendix;
→ I highly appreciate the authors being honest and providing ‘proper’ limitations to their method in terms of potential applicability!
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your insightful comments and appreciation of our work. We address each question in detail and provide further clarifications below.
> **Q1: Consistency in notation.**
Sorry for the confusion. We have modified the notation in Eq. (2) to make it consistent with other formulas, i.e., Q_gamma -> f_q, K_beta -> f_k, and V_phi -> f_v.
> **Q2-1: How exactly is training performed?**
We employ a phased training strategy, starting with training SSR to enable the model to accurately extract sparse representations from images, followed by fine-tuning with HORR to equip the model with the capability to reason about these main components. The model (or a part of it) is not frozen during HORR training.
> **Q2-2: How many iterations are employed in practice, and is this performed for each new image pair (i.e. each step) during training?**
The number of iterations is fixed at 3, consistent with the original slot attention mechanism; the iterative attention progressively refines the slots used to reconstruct the original image. Yes, this iterative process is applied at every training step, i.e., for each new image pair.
> **Q2-3: How exactly is the inference actually performed at test time? Does inference require pairs as employed during training, or what is the exact setup there?**
During the inference phase, pairwise samples are not required; a single image or a batch of randomly sampled images suffices. For OOD generalization, a vanilla inference process is performed. For test-time adaptation, the model parameters need to be updated online using the test samples.
> **Q3: Details around complexity.**
We have included quantitative results regarding the complexity of the method in **the attached PDF (Fig. 4 and Fig. 5)**. When the iterative attention and graph matching are only performed during training, the inference speed (without TTA) is nearly equivalent to ERM. With TTA **(Fig. 5)**, our method's inference speed is also comparable to several mainstream approaches.
> **Q4: Details regarding slots / information bottleneck**.
As stated in the appendix, the initial number of slots is set to 5. In our experiments, this is the same for all datasets, since OOD generalization and TTA benchmarks typically consist of single-object images; there is no need to increase this value, which would bring more computational overhead. However, for scene images, such as semantic segmentation datasets, we need more slots to represent their composition. In a nutshell, the number of slots depends on the complexity of the data and is relatively insensitive across datasets of the same type.
> **Q5: Insights into the actual 'slot correspondence'/representation.**
For the first subquestion, **(1)** continuing from the previous discussion, the number of slots required is related not only to the complexity of the image content but also to the nature of the task itself. In our experiments, we focus on the relatively simple case of image classification tasks involving single objects, which require fewer slots. However, in scenarios containing multiple objects where tasks like object detection or semantic segmentation are desired, more than ten slots might be necessary, depending on the specific task. **(2)** It is important to note that more slots are not always better; on one hand, the computational cost increases, and on the other, there might be an over-segmentation issue in segmentation tasks (conversely, too few slots can lead to under-segmentation). However, for classification tasks, the granularity with which we recognize the main components of an object is flexible. For instance, in describing a person, we could (i) broadly identify the upper and lower body, (ii) recognize the head, torso, and limbs, or (iii) further divide specific parts such as the torso into more detailed segments. **(3)** In summary, classification problems require fewer slots and are less sensitive to the number of slots, whereas scene understanding tasks necessitate a larger and more sensitive allocation of slots.
For the second subquestion, **(1)** our process of identifying main components is somewhat akin to fully unsupervised attribute/concept discovery. That is, without component annotations, the model essentially learns an attention mask, where each position's value reflects its significance in relation to the class label. However, we cannot guarantee that the learned components will perfectly align with concepts readily understood by humans, as we lack fine-grained human annotations. Of course, replacing the encoder with a more powerful visual extractor like DINOv2 [1] could enhance discovery capabilities, but this would change both the whole experiment and the baseline methods. **(2)** From a methodological perspective, our proposed method can decompose images into high-level concepts in an unsupervised manner and cluster the images based on those discovered concepts. In **the attached PDF (Fig. 1)**, we provide some visual results from real-world datasets, showing that REMA can segment images into different areas (the number of areas depends on the number of slots). These areas are informative and correspond to different high-level concepts. For example, distinguishing between an animal's head, body, and legs. **(3)** As seen in Fig. 4(a) of the main paper, without REMA, the model might learn only small discriminative areas or even background regions. However, REMA enables the learning of objectness, which more completely emphasizes the entire foreground object area. In addition, the affinity matrix in Fig. 5 demonstrates that our REMA is capable of learning more accurate cross-domain correspondences, illustrating its robustness to distribution shifts.
**Reference**
[1] DINOv2: Learning Robust Visual Features without Supervision. In arXiv:2304.07193.
---
Rebuttal Comment 1.1:
Title: Thank you for the responses.
Comment: I'd like to thank the authors for their responses and the additional provided information & insights -- especially the visualisations regarding slots & image regions;
Having read the other reviews and rebuttal, I will stick with my original rating and recommend weak acceptance
---
Reply to Comment 1.1.1:
Comment: Thank you for your insightful feedback and constructive comments, which have been invaluable in enhancing our manuscript. We will incorporate the additional results and discussions in the final version. | Summary: A new methodology, REconstruct and MAtch (REMA) is introduced to learn a more robust and generalizable feature set in computer vision models. REMA relies on a slot attention module to learn sparse embeddings of features which characterize a target object, and this module is then coupled with a high-order relational reasoning (HORR) module which creates a graph representation of the object. This graph representation encodes how the sparse features relate to each other. Because the feature encodings are both simplified through the sparse attention module and contain a learned topological relation from the HORR module, they are expected to be robust to domain drift. The robustness to domain drift is demonstrated using several benchmark datasets with consistent networks and training hyperparameters shared between them. REMA consistently and clearly outperforms SOTA methods for OOD generalization; the results are convincing.
Strengths: The methodology is novel; the combination of the two modules represents a new contribution to the area of identifying robust features for computer vision. The results reported are extensive, including latent space analysis of how features are represented, ablation studies, and comparison against $\approx$20 other OOD generalization methods, with the proposed REMA method outperforming all of them. An improved method for robust OOD model training is a significant contribution, and the work put into reporting the technical aspects of the methodology will encourage dedicated researchers to continue developing this approach to robust feature identification.
Weaknesses: The report of the proposed method is exceedingly technical. Discussion of the intuition motivating the method is restricted to a brief statement concerning the human visual system, and no connection back to the human visual system is made throughout the remaining discussion of the methodology. No further motivation or context is provided for algorithmic design choices, which prevents the reader from understanding *why* the method works well. The emphasis is entirely on the *how*. I suspect that a significant literature review was performed as part of the algorithm design; sharing the literature review context for these design choices would have helped motivate and clarify the methodology. It could easily have been included in the appendix without detracting attention from the main contribution of the work. Alternately, the context could have been included in the main result, and the implementation details left to the appendix.
The benefit of the HORR module is open to some question. The impact of the HORR module vs the SSR module is discussed in Section 4.3, Table 4, and shown in Figure 6 with the tSNE embeddings. The results of Table 4 are reported without uncertainty, and the values are close enough (within 2%-8.5% of the baseline without REMA) to make uncertainty a valuable indicator of the role of each module. Given the stochastic variation inherent to the tSNE algorithm, Figure 6c (SSR w/o HORR) and 6d (SSR w/ HORR) could be considered identical since there are no repeated embeddings reported, and no figure of merit describing the variance for repeated embeddings and clustering is included. Visually, the embeddings are very close, so although the value of the SSR module is clearly demonstrated in disentangling latent features in Fig. 6c, the value of the HORR module has some doubt. (An example of suitable metrics for quantifying the author's claim of better clustering in Figure 6d vs. Figure 6c might be DBSCAN or OPTICS applied to multiple embeddings, and quantified with the mean and std of the v-measure score for the clusters.) The supplemental Figures 6-8 do little to clarify the situation; the authors state only that SSR and HORR behave differently depending on the type of corruption. An additional place to clarify the behavior of the individual SSR and HORR modules would have been in Figure 4a, where the grad cam results with and without the SSR and HORR modules could have been reported. Finally, the comment in the supplement regarding training the SSR module first suggests that although including the HORR module may be beneficial, it is not necessarily key to the success of the work as no mention of training HORR first is made.
Overall, the results are extensive, and support the conclusion that REMA is an improvement on the SOTA for OOD robustness. But the lack of contextualization and motivation for the algorithm in general, and the behavior of the different modules specifically, is a weakness of the paper.
Technical Quality: 4
Clarity: 2
Questions for Authors: 1. The stated goal of the Selective Slot-based Reconstruction (SSR) module is to create a sparse embedding of target features. In section 3.3, it would seem that a variational auto encoder type approach is used to train the encoder. Sparsity can already be considered forced due to the size of the feature vector/latent space of the VAE; a more sparse feature vector is just one that is smaller in this context. How does the additional SSR module change the latent representations to make them more sparse? Is it just removing specific frequency components? Why is it not possible to do this directly on the latent space of the encoder through a specialized loss function without the use of an additional module?
2. In section 3.1 "(A) standard MLP skip connection is applied" after the GRU; why? What is the intuition here?
3. What is the minimum and maximum connectivity of the hypergraphs constructed in Section 3.2? How would this dimensionality relate to the dimensionality of the sparse embeddings? Of the original feature vector extracted from the encoding module?
4. Please provide citation(s) for the statement in Supplemental section C.1 "sparse modeling based on slots may struggle to accurately separate the scene into several main components".
5. In the supplemental section B.2, why is it reasonable to expect that the SSR and HORR modules have different effects based on the type of data corruption? Shouldn't the keypoints of a target object (and therefore the sparse embedding of those key points as well as the topology of how those keypoints relate to each other) be unaffected by whether it is fog or frost corrupting the image?
Confidence: 2
Soundness: 4
Presentation: 2
Contribution: 4
Limitations: The authors report limitations in the supplemental material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your insightful comments and appreciation of our work. We address each question in detail and provide further clarifications below.
> **Q1:Motivation for algorithmic design choices (SSR + HORR).**
(1) Although we aim to imitate the human vision process for OOD generalization, we cannot guarantee that the learned components will perfectly align with concepts readily understood by humans, due to the lack of fine-grained human annotations. To solve this issue, our SSR is a data-driven approach that enables the deep model itself to possess the capability of abstraction from input data. The model essentially learns an attention mask, where each position's value indicates its importance/association relative to the class label. Given the need to learn objectness or high-level concepts, slot attention naturally comes to mind as a classic solution for object-centric learning. By doing so, as shown in **the attached PDF (Fig. 1)**, the regions segmented by SSR mostly align with human understanding, such as distinguishing between an animal's head and body.
(2) Having identified the main components, we naturally consider the relationships among them. Graph networks are a major tool for introducing relational inductive bias. Given that slots are sparse and require higher-order connections to fully capture the relationships, and since ordinary graphs can only model pairwise relationships, we opt for hypergraphs. Moreover, regarding how to associate objects of the same category across different domains, most previous methods utilize direct alignment (Fig. 1 of the main paper). However, now that we have identified the main components and their internal relationships, and both have been modeled as vertices and edges of a hypergraph, cross-domain connections are naturally achieved via graph matching.
> **Q2: The benefit of the HORR module.**
(1) Standard error in Tab. 4. In fact, the results in Table 4 are averaged over 3 random seeds. Please see **the attached PDF (Tab. 1)**.
(2) w/o HORR vs. w/ HORR. As suggested, we have run t-SNE 10 times for each case and then applied DBSCAN to each embedding. Then, we calculate the mean and std of V-measure score: w/o HORR (Mean V-measure: 0.65, std of V-measure: 0.08) vs. w/ HORR (Mean V-measure: 0.79, std of V-measure: 0.03).
(3) Additional Grad-CAM. We added these results to **the attached PDF (Fig. 2)**.
(4) Training Sequence. HORR first will lead to an unstable training process due to the dense latent features (without using SSR). Thus, joint training or SSR first would be better choices for our work.
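The clustering-stability check described in point (2) can be sketched as follows. This is a minimal illustration assuming scikit-learn; the synthetic data `X`, labels `y`, and the DBSCAN `eps` value are placeholders, not the paper's actual embeddings or settings:

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN
from sklearn.metrics import v_measure_score

# Synthetic stand-in for the latent features: 3 well-separated classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(i * 5, 1, size=(30, 16)) for i in range(3)])
y = np.repeat(np.arange(3), 30)

# Run t-SNE several times (it is stochastic), cluster each embedding with
# DBSCAN, and report the mean/std of the V-measure against the true labels.
scores = []
for seed in range(10):
    emb = TSNE(n_components=2, random_state=seed, perplexity=15).fit_transform(X)
    labels = DBSCAN(eps=3.0).fit_predict(emb)
    scores.append(v_measure_score(y, labels))

print(f"mean V-measure = {np.mean(scores):.2f}, std = {np.std(scores):.2f}")
```

Reporting the mean and std over repeated embeddings, as done here, accounts for the stochastic variation in t-SNE that the reviewer raised.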
> **Q3-1: A VAE-type approach is used to train the encoder.**
As suggested, we trained an encoder using a VAE for comparison, setting the number of latent means and variances equal to the number of slots. Please see **the attached PDF (Tab. 1)**. Models with the VAE achieve better performance than the ERM baseline, which validates our motivation to seek sparsity. However, there remains a performance gap compared to our SSR, demonstrating the superiority of our module.
> **Q3-2: How does the additional SSR module change the latent representations to make them more sparse?**
SSR actively reorganizes the latent space into discrete, interpretable slots, each capturing distinct and salient features of the input data. It structurally enhances sparsity by ensuring each slot is maximally informative and minimally redundant, thus facilitating sparse representations. This goes beyond just removing specific frequency components. Directly using a specialized loss function would lack this level of granularity and control. The modular nature of the SSR (data-driven) allows for targeted optimization and adaptation to diverse datasets.
> **Q4: MLP skip connection.**
This is a standard step in slot attention, where (1) MLP allows the model to learn complex patterns and relationships between the slots and the input data. (2) Skip connections aid in preserving the original information from the input throughout the network.
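As a rough illustration of the residual refinement described here (the names, shapes, and a fixed slot count of 5 are hypothetical, not the paper's implementation):

```python
import numpy as np

# Sketch of the slot-update step: after the GRU updates the slots, an MLP
# with a residual (skip) connection refines them, so the GRU output is
# preserved while the MLP adds a learned correction on top of it.
def mlp(x, W1, b1, W2, b2):
    h = np.maximum(0.0, x @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2

rng = np.random.default_rng(0)
d = 8                                  # illustrative slot dimension
slots = rng.normal(size=(5, d))        # 5 slots, as in the paper's setup
W1, b1 = rng.normal(size=(d, d)), np.zeros(d)
W2, b2 = rng.normal(size=(d, d)), np.zeros(d)

# Residual (skip) update of the slots.
slots = slots + mlp(slots, W1, b1, W2, b2)
```

The additive skip term is what "preserves the original information" in point (2): even if the MLP output is small, the slots carry the GRU state through unchanged.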
> **Q5: Details about hypergraphs and the dimensionality of embedding.**
As indicated in L143-144, the number of hyperedges is equal to the number of slots. The dimension of each slot is 256, matching the original feature dimension from the encoding module.
> **Q6: Citation(s) for the statement in Supplemental section C.1**
We aim to divide objects into several components based on class labels. However, for scenes without clear foreground and background distinctions, this approach encounters challenges. In cases of tuberculosis—a binary classification problem—the X-ray images exhibit diffuse characteristics without specific lesions like lung nodules, which are causally linked to conditions such as malignant nodules and lung cancer. Thus, the class difference is related to the image's style, making it hard to segment the original image into distinct parts based on the presence or absence of disease. While we did not find literature specifically addressing this issue, there are some related studies (e.g., [1,2]) that can be referenced.
[1] Unsupervised Learning of Discriminative Attributes and Visual Representations. In CVPR, 2016.
[2] Bridging the Gap to Real-World Object-Centric Learning. In ICLR, 2023.
> **Q7: Why is it reasonable to expect that the SSR and HORR modules have different effects based on the type of data corruption?**
Sorry for the confusion. You are correct in pointing out that we aim to ensure that "the keypoints of a target object—and therefore the sparse embedding of those keypoints as well as the topology of how those keypoints relate to each other—remain unaffected by external factors such as fog or frost," emphasizing topological homogeneity. We will revise this paragraph for clarity. | Rebuttal 1:
Rebuttal: We sincerely appreciate all four reviewers for their time and effort in providing feedback and suggestions on our work. We are glad that reviewers recognize our paper to be *novel* (jWEa, M7aK), *well-motivated* (M7aK, rhAB, EaaQ), and performing *extensive experiments and ablation studies* (jWEa, M7aK, rhAB).
We have addressed the comments and questions in individual responses to each reviewer. The main changes we made include:
- We have provided additional visual results and experimental comparisons, explained the design motivations of our algorithm, and offered insights into the proposed SSR and HORR (jWEa).
- We clarified some design details, discussed complexity, and provided insights into the actual 'slot correspondence' and representation (M7aK).
- We conducted both quantitative and qualitative discussions on parameter sensitivity, computational efficiency, and performance in real-world scenarios (rhAB).
- We provided clarifications to eliminate significant misunderstandings about the problem definition, baseline selection, and experimental comparisons (EaaQ).
If you have any further questions or require additional clarification, please feel free to raise them during the author-reviewer discussion phase. Thank you!
Pdf: /pdf/f9aaf3ffe8f9f093305f21fb3953cf288b2725a3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Long-range Brain Graph Transformer | Accept (poster) | Summary: This paper employs a random walk approach to capture long-range dependencies in the brain through a feature engineering scheme. The computed adaptive factors are incorporated into node features and used to train a Transformer model.
Strengths: 1. The idea of capturing long-range dependencies in the brain is reasonable.
2. The paper is well-written and easy to understand.
3. The experimental results are strong on the two datasets tested.
Weaknesses: 1. Some design choices in the proposed method are not well-motivated:
- Using the designed adaptive factors (Eq. 1) instead of Pearson correlations.
- Introducing the degree matrix in the random walk kernel.
2. The parameter \( k \) in Section 3.2 is not clearly defined; it is assumed that \( k = K \).
3. The descriptions of the dataset construction and experimental settings are unclear. Specifically:
- Which atlas is used, and how many ROIs does it contain?
- What exact value is used to threshold the connectivity to obtain the adjacency matrix?
- Given that the number of hops is set up to 32, it is expected that the adjacency matrix is super sparse. However, such a high threshold is uncommon in brain network analysis.
4. The paper conducts experiments on only two datasets, which is insufficient. The ADNI dataset used contains only two classes with 130 subjects. It would be better to extend the experiments to include more datasets as referenced in [1], which includes six datasets and over 1.3k subjects for ADNI.
5. There is a lack of comparison with the latest related works. For example:
- [2] introduces a graph augmentation using ALFF features.
- [3] presents a global matching-based graph kernel that captures dynamic changes in evolving brain networks.
- [4] utilizes clustering-based graph pooling for readout.
6. It would be valuable to incorporate the proposed ALGA method with GNN baselines such as GCN and GAT to evaluate whether it can improve their performance.
7. The paper lacks deeper analysis from a neuroscience perspective. It is important to provide insights from the model results, such as identifying specific ROIs related to the target disease and whether these findings align with medical literature.
References:
[1] Data-driven network neuroscience: On data collection and benchmark. NIPS 2023
[2] A-GCL: Adversarial graph contrastive learning for fMRI analysis to diagnose neurodevelopmental disorders. MIA 2023
[3] Effective Graph Kernels for Evolving Functional Brain Networks. WSDM 2023
[4] Contrastive Graph Pooling for Explainable Classification of Brain Networks. IEEE TMI 2024
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Which atlas is used in the dataset?
2. Which ROIs obtain high attention, and do these match domain observations for the diseases studied?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The limitations have been discussed in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your detailed feedback and your questions. We hope we have well addressed your concerns. If there are any other issues remaining, we are pleased to address them further.
**W1.a**
In fact, we employ Pearson correlations as the adaptive factors in the random walk, without any modification. The correlation between ROIs reflects their communication strength, which is crucial for understanding the brain's functional organization, dysfunctions, and information propagation. Taking the correlation between ROIs as adaptive factors influences the exploration mechanism of the random walk, so that ROIs with stronger connectivities exhibit higher transfer probabilities for the next hop. This mechanism simulates real-world brain-wide communication.
We recognize that this part of the presentation may not have been clearly articulated, leading to potential misunderstandings. We have therefore made the necessary revisions in the revised paper to enhance clarity.
**W1.b**
The introduction of the degree matrix helps obtain richer information about brain-wide communication and is very commonly used in network analytics [1]. Since the degree matrix records the degree of each ROI, it reflects how active each ROI is in communication, which is important for determining which ROIs play a key role in information propagation. Hence, the degree matrix determines the transfer probability from a node to its neighboring nodes and strongly influences the behavior of the walker [2]. We have incorporated these details in the revised paper.
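The transfer mechanism described above—Pearson correlations acting as adaptive edge factors, with the degree matrix normalizing them into transition probabilities—can be sketched as follows. This is a hypothetical illustration, not the authors' code; the toy matrices `A` and `C` are invented for the example:

```python
import numpy as np

def transition_matrix(A, C):
    """Correlation-weighted, degree-normalized random walk transitions."""
    W = A * C                       # correlation acts as an adaptive factor on each edge
    deg = W.sum(axis=1)             # weighted degree of each ROI
    D_inv = np.diag(1.0 / np.where(deg > 0, deg, 1.0))
    return D_inv @ W                # row-stochastic: P[i, j] = prob. of hopping i -> j

# Toy 3-ROI example: node 0 connects to nodes 1 and 2, with a stronger
# correlation to node 1 than to node 2.
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
C = np.array([[0, 0.8, 0.2], [0.8, 0, 0], [0.2, 0, 0]])
P = transition_matrix(A, C)
# Each row of P sums to 1, and the walker is more likely to hop 0 -> 1
# than 0 -> 2, mirroring the stronger connectivity.
```

This matches the rebuttal's claim that ROIs with stronger connectivities receive higher next-hop transfer probabilities, with the (weighted) degree matrix supplying the normalization.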
**W2**
We apologize for our mistake: we did indeed mix up $k$ and $K$. As you assumed, $k = K$, where $k$ generally denotes the number of hops in random walks. We apologize for not defining it explicitly; this has now been addressed in the revised paper.
**W3&Q1**
While nearly all of the experimental settings are presented in the source code, we acknowledge that our descriptions of these settings were indeed insufficient. We have now incorporated the details in Appendix A.
Specific details requested by the reviewer includes:
- For the ABIDE dataset, we use the Craddock 200 atlas with 200 ROIs. For the ADNI dataset, we use the AAL atlas with 90 cortical ROIs and 26 cerebellar ROIs.
- The threshold is 0.3.
- There was a mistake in the number of hops reported in the paper: the experiments were actually performed with 16 hops. We grid-searched over hop counts commonly used for long-range dependency capture via random walks in network analysis and chose 16. As shown in Figure 4(a) of the paper, larger hop counts do not lead to better results, since brain graphs are relatively dense.
**W4&W5**
Upon receiving the comments, we promptly reviewed the referenced papers and their corresponding open-source code. As shown in Tables 1&2&3 of the global response, we have added additional datasets (PPMI, Matai, TaoWu, and Neurocon) and baselines (A-GCL [3] and ContrastPool [4]).
Regarding the complete ADNI dataset [5], we discovered from the GitHub repository provided by the authors that, due to data protocol issues, it has not yet been made publicly available. Regarding [6], the original paper did not provide a code link, and no code has been released on GitHub. We also contacted the authors, but as of the end of the rebuttal we have not received a response. We note, however, that [6] focuses on learning from fMRI time series, using global and local matching-based graph kernels to construct dynamic brain networks, whereas ALTER is concerned with capturing long-range dependencies from static brain networks using a random walk kernel; the two are fundamentally different.
In the field of neuroscience, it is often challenging to obtain large amounts of data due to various constraints, and studies frequently use smaller datasets (≤130 individuals), which are nonetheless sufficient to support conclusions [7, 8, 9, 10]. We initially focused only on ADNI and ABIDE because they are widely used for disease prediction. However, due to the significant challenges in preprocessing, we were unable to process the complete fMRI data from the ADNI dataset. While only two classes and 130 subjects were included, our method still shows excellent performance. This demonstrates the effectiveness of ALTER with limited data and suggests that it may adapt well to other datasets of similarly limited size, potentially leading to more successful outcomes in other brain-science-related medical tasks.
**W6**
As shown in Table 5 of the global response, we conducted additional experiments and achieved better performance. This is because ALGA can capture the long-range dependencies in the brain network.
**W7&Q2**
Based on the SHAP model, we have supplemented the experiments from a neuroscience perspective and conclude that ALTER can recognize disease-related regions and assign them high attention. The results are illustrated in Figure 1(a) of the global response.
**References**
[1] Graph neural networks with learnable structural and positional representations. In ICLR 2022.
[2] How to Count Triangles, without Seeing the Whole Graph. In KDD 2020.
[3] A-GCL: Adversarial graph contrastive learning for fMRI analysis to diagnose neurodevelopmental disorders. MIA 2023.
[4] Contrastive graph pooling for explainable classification of brain networks. TMI 2024.
[5] Data-driven network neuroscience: On data collection and benchmark. In NeurIPS 2023.
[6] Effective graph kernels for evolving functional brain networks. In WSDM 2023.
[7] Spatio-Temporal Graph Hubness Propagation Model for Dynamic Brain Network Classification TMI 2024.
[8] RH-BrainFS: Regional Heterogeneous Multimodal Brain Networks Fusion Strategy. In NeurIPS 2023.
[9] Functional brain network reconfiguration during learning in a dynamic environment. Nature Communications 2020.
[10] Increased global integration in the brain after psilocybin therapy for depression. Nature Medicine 2022.
---
Rebuttal Comment 1.1:
Title: Follow-up Questions
Comment: I am glad to find that the authors have addressed most of my concerns.
However, I still have some follow-up questions:
1. Regarding the threshold used for sparsifying the connectivity to obtain the adjacency matrix, does 0.3 mean retaining edges with Pearson correlation larger than 0.3 or keeping the top 30% of edges? Additionally, does this thresholding drop all negative edges, or are absolute values used?
2. For the new experiment conducted, are you using the same settings as the original paper, or have you applied your own settings to them?
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We would like to reply to your questions and comments as follow:
**Q1.** We keep edges with Pearson correlation greater than 0.3. This threshold also removes the negative connections. This strategy aligns with the methodologies commonly adopted in previous studies [1, 2].
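This thresholding step can be sketched as follows. This is an illustrative example, not the authors' code; the toy correlation matrix is invented for the demonstration:

```python
import numpy as np

def threshold_adjacency(corr, tau=0.3):
    """Binarize a Pearson correlation matrix: keep edges with correlation
    strictly greater than tau, which also discards all negative edges."""
    A = (corr > tau).astype(float)
    np.fill_diagonal(A, 0.0)       # drop self-correlations on the diagonal
    return A

# Toy 3-ROI correlation matrix with one strong positive, one weak
# positive, and one negative correlation.
corr = np.array([[1.0,  0.45, -0.6],
                 [0.45, 1.0,   0.1],
                 [-0.6, 0.1,   1.0]])
A = threshold_adjacency(corr)
# Only the 0.45 edge survives; the 0.1 and -0.6 edges are removed.
```

Note that because negative correlations never exceed a positive threshold, this rule removes them automatically, as stated in the reply.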
**Q2.** Yes, we followed the settings in the original papers. Specifically, for ContrastPool, although it utilizes the same datasets as ours (PPMI, Matai, TaoWu, and Neurocon), we ensured fairness by using the thresholded correlation matrix as the adjacency matrix. For A-GCL, since we use different datasets than those in its paper, we performed a parameter search based on the paper's recommendations to ensure fair results: the batch size was searched over {8, 16, 32, 64}, the learning rate of the parameter µ over {0.0001, 0.0005, 0.001, 0.005, 0.01}, and the learning rate of the parameter z over {0.0005, 0.001, 0.01}.
We sincerely thank you for your efforts in reviewing our paper. We hope we have resolved all the concerns, and we will deeply appreciate it if you could reconsider the score accordingly. We are always willing to address any of your further concerns.
[1] RH-BrainFS: Regional Heterogeneous Multimodal Brain Networks Fusion Strategy. In NeurIPS 2023.
[2] BrainGNN: Interpretable Brain Graph Neural Network for fMRI Analysis. MIA 2021. | Summary: This paper highlights a significant gap in the existing literature on brain network representation learning, specifically the inadequacy of current methods to effectively capture long-range dependencies, leading to a limited integrated understanding of brain-wide communication. To bridge this gap, the paper introduces ALTER, an innovative model designed for adaptive awareness of long-range dependencies and brain network representation learning. The model consists of a long-range aware strategy for capturing long-range dependencies and a transformer framework for integrating multi-level communication understanding.
Strengths: (1) This paper is generally clear and well-written, providing a comprehensive analysis of the problem to be solved, supported by some convincing references, and emphasizing the urgent need to come up with solutions that can capture long-distance dependencies between brain ROIs. At the same time, the authors make the problem better understood with illustrations.
(2) The logic of the paper is smooth. The authors clarify the issue, after which they elaborate on the theoretical causes of the issue, and finally design a specific biased random walk strategy for the theoretical causes and obtain encouraging experimental results.
(3) I found the method presented here is technically sound with excellent results. From the experimental results, it is shown that the proposed adaptive long-range aware strategy is very effective in long-range dependencies capture and can greatly enhance the effectiveness of the disease diagnosis task. In addition, the ablation study conducted on various components reveals the superiority of the adaptive long-range aware strategy.
(4) The authors clearly and frankly understand the drawbacks and strengths of the method, e.g., that the method does not allow for a good trade-off between long- and short-range dependencies. This makes the method a clear one to build on top of, prompting further interesting work in a highly important domain.
Weaknesses: 1. Although the results of the ablation experiments verify that biased random walk is efficient, it is difficult to convince me of the necessity of introducing this module based only on the theoretical description of biased random walk in the methods section. As far as I know, random walk is common in graph representation learning. Therefore, I would like to ask the authors to provide me with a more in-depth analysis of biased random walk, otherwise I do not understand why biased random walk is effective in capturing long-range dependencies in brain networks.
2. Transformer-based graph representation learning is not original in brain network analysis. It is recommended that the authors provide a specific explanation for fusing long-range and short-range dependent embeddings using the graph Transformer.
3. Lack of interpretable analysis. Interpretive analysis of brain network representation learning models geared toward brain disease diagnosis is important. In the context of brain network analysis, I suggest that papers should explain and identify the brain regions or networks that are most relevant to the task of brain disease classification.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Have the authors considered tasks other than binary categorization?
2. Am I correctly understanding that the dimension of the original feature embedding and the long-range dependency embedding after splicing is equal to the original feature embedding dimension? In the representation in Figure 2, I get the impression that the dimension after splicing is unchanged, but this does not seem to be the case from the text. Could the authors clarify this point?
3. The authors conducted experiments on fMRI data sets, and fMRI is only single-modal. Is the biased random walk strategy of ALTER still valid on multi-modal data?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: See weaknesses and questions for details.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for acknowledging the novelty of the proposed method and for suggesting relevant analysis, which we have included in the global response. We hope to provide satisfying answers to the concerns raised. If there are any other issues remaining, we are pleased to address them further.
**W1**
Our model aims to capture long-range dependencies within brain networks by leveraging the biased random walk, whose Markovian nature captures long-range communication among brain ROIs by sampling sequences and encoding them into embeddings. We agree that the random walk is common in graph representation learning. However, different pairs of ROIs in brain networks usually exhibit different communication strengths in brain activity. As a result, traditional random walk methods are usually not applicable to brain networks and cannot capture their long-range dependencies.
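The idea above — transition probabilities weighted by communication strength — can be illustrated with a generic weighted random walk. This is a sketch only, not the paper's ALGA implementation; the weight matrix `W` is made up:

```python
import numpy as np

def biased_walk(W, start, length, rng):
    """Sample a random walk whose transition probabilities are
    proportional to edge weights (a stand-in for per-pair
    communication strengths between ROIs)."""
    walk = [start]
    for _ in range(length):
        p = W[walk[-1]]
        p = p / p.sum()  # normalize outgoing weights into a distribution
        walk.append(rng.choice(len(p), p=p))
    return walk

rng = np.random.default_rng(0)
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 2.0],   # node 1 is twice as likely to step to node 2
              [0.0, 2.0, 0.0]])
walk = biased_walk(W, start=0, length=4, rng=rng)
```

In this toy graph, stronger edges are visited proportionally more often, which is the intuition behind biasing the walk by communication strength.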
**W2**
We acknowledge that Transformer is commonly used in brain network analysis. The focus of the proposed method is on capturing long-range dependencies within brain networks. Transformer inherently has limitations in capturing long-range dependencies, which prevents a comprehensive understanding of brain-wide communication. Instead, the proposed ALTER method introduces an adaptive long-range aware strategy to explicitly capture long-range dependencies within brain networks. It then integrates long-range and short-range dependencies using the adaptive mechanism of Transformer, thereby achieving a multi-layered understanding of brain networks.
**W3**
As shown in Figure 1(a) of the global response, we have supplemented the interpretability analysis.
**Q1**
In the submitted manuscript, we evaluated performance solely on binary classification tasks using the ADNI and ABIDE datasets. To further demonstrate the effectiveness of the proposed method, we have conducted additional experiments on the PPMI dataset with four classes, as suggested by reviewer s7Az. The experimental results are presented in Table 1 of the global response. The results show that the proposed ALTER method achieved the best performance, as it effectively captures long-distance dependencies between brain ROIs.
**Q2**
Thank you very much for your question. Since we concatenate the long-range dependency embeddings directly with the original feature embeddings without any other operation, the dimensionality of the concatenated features is not equal to that of the original feature embeddings. We have now revised the paper to improve the clarity of the methodology.
**Q3**
Thank you very much for your question. The value of the biased random walk strategy in ALTER for multi-modal data is indeed a topic worth exploring. However, due to some fundamental differences between functional and structural brain networks constructed based on fMRI and DTI, directly applying the biased random walk strategy to capture long-range dependencies in them may not be feasible and would require tailoring the ALTER model. Nonetheless, the biased random walk strategy can be adjusted by learning consistent and complementary communication patterns between functional and structural brain networks, which may enhance its adaptability to multimodal brain network data. We have now incorporated this as a future work of the study in the revised paper.
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: I appreciate the detailed clarifications and additional interpretable experiments provided. These have effectively addressed my previous concerns and underscored the significance of this work. Of course, I am inclined to increase my score and recommend acceptance of the paper.
Overall, the manuscript is well written and represents a valuable contribution to the research field. I look forward to seeing the integration of the aforementioned discussion in the next version, particularly the interpretability analysis. | Summary: The study employs the adaptive long-range aware graph transformer (ALTER) to tackle the challenge of weak comprehension in whole-brain communication, which arises from the failure to capture long-range dependencies in brain networks. Initially, the study encodes long-range dependencies into long-range embeddings through biased random walk sampling, thereby enriching the embeddings with information on long-range dependencies. Subsequently, the study takes into account the significance of both short- and long-range dependencies in brain network analysis tasks. It introduces the graph transformer, which uses a self-attention mechanism to integrate these dependencies between brain ROIs. The objective is to capture varying levels of communication connections within the human brain. Experimental results demonstrate that ALTER outperforms current SOTA graph learning baselines, achieving superior AUC and ACC scores on the ABIDE and ADNI datasets.
Strengths: (1) This work is significant. How to facilitate the understanding of communication and information processing among brain ROIs is a key issue. The authors provide a clear overview of the necessity of long-range dependencies for understanding communication and information processing among brain ROIs, and propose an effective method to capture long-range dependencies in brain networks.
(2) The paper is aesthetically pleasing in its writing form, especially the diagrams and charts that make it easy for the reader to understand the exact process of the work done. Meanwhile, the authors provide detailed preprocessing and implementation details. As a result, this paper is highly reproducible.
(3) The paper provides complete experimental results. Besides comparisons with the baseline and ablation studies on modules, the appendix section validates the state-of-the-art of the adaptive long-range dependency aware strategy on multiple readout functions, further enhances reproducibility, and facilitates a deeper understanding of the method.
(4) ALTER is an ingeniously crafted framework that is able to adaptively perceive both long- and short-range dependencies in brain networks. In addition, ALTER interestingly takes inter-ROI correlations into account in the capture of long-range dependencies in brain networks.
Weaknesses: I have identified three main weaknesses that need to be clarified during the rebuttal process, which is the reason I gave a weak acceptance despite the strengths. If my points are properly addressed, I will be happy to review my scores based on the results of the rebuttal process. I numbered the comments to facilitate discussion.
1. The technical contributions are neutral. The proposed ALTER seems to be a combination of the random walk and the graph transformer, but its novelty and difficulty are unclear.
2. ALTER obtained SPE scores that were 5% lower than BrainNETTF on the ADNI dataset and SEN scores that were 1.3% lower than Graphormer on the ABIDE dataset. This performance lag is significant, but the authors do not seem to have clearly explained the reasons for the SPE and SEN lags.
3. The authors state in line 202 that ALTER uses a linear layer to inject long-range dependencies into the brain graph transformer, but direct concatenation also accomplishes the above, and the authors do not state the necessity of introducing a linear layer.
Technical Quality: 3
Clarity: 3
Questions for Authors: In addition to what I wrote in the "Weaknesses" section, I have a couple of questions:
1. How does the author define long-range dependence and short-range dependence?
2. Did the authors attempt to choose other correlation measures than the Pearson correlation coefficient to define the adaptive factors?
3. Could you elaborate on the need to introduce the graph transformer in the paper? Would replacing the graph transformer with a GNN affect the results?
4. Could the authors elaborate on the correlation between the attention map and the example graph in Figure 5?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper emphasizes the possible negative social impact of work. However, in my opinion, the negative societal impact is not only medical errors due to prediction errors, but should also include the negative ethical impact of the model. Adding ethical concerns would help the reader better understand the potential impact of the proposed methodology.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your detailed assessment of our work and for highlighting the merits of our approach, as well as the importance of the problem. We address all concerns below; if there are any other issues remaining, we are pleased to address them further:
**W1**
Brain networks are inherently dense, which has led to the increasing use of transformers for their analysis. However, a significant limitation of transformers is their inability to effectively capture long-range dependencies, which limits the integrated understanding of brain-wide communication. The proposed ALTER is concerned with the capture of long-range dependencies between ROIs, not the combination of random walk and graph transformer. In particular, we design the ALGA strategy, which simulates real-world brain-wide communication by utilizing adaptive factors to evaluate the communication strength between ROIs and capturing long-range dependencies in brain networks through a Markov process. We have now re-emphasized the novelty and difficulty of the proposed ALTER in the revised manuscript.
**W2**
We acknowledge that the proposed method exhibits a lower SPE on the ADNI dataset compared to BrainNETTF and a lower SEN on the ABIDE dataset compared to Graphormer. However, considering the standard deviation, our method demonstrates more stability, which can be attributed to its ability to capture long-range dependencies within the brain network. Moreover, our method significantly outperforms the baselines on other metrics across both datasets. In particular, on the ADNI dataset, our method shows a 10.3% improvement in the SEN metric compared to the sub-optimal method. We have provided a detailed explanation of this aspect in the revised manuscript.
**W3**
Thank you for requesting clarity on the linear layer. The main objective of introducing a linear layer to map the initial long-range embedding into the transformer is to enable end-to-end learning of the long-range embedding, allowing it to participate in updates and enhancing its expressiveness. This method enhances the expressiveness of the random walk embedding by capturing multiple types of graph propagation, which in turn facilitates the integration of global information in the brain graph. We have now added these details in the revised paper. If we do not use a linear layer and instead directly concatenate the initial long-range embedding with the original features, this reduces the expressiveness of the long-range dependencies in brain graph learning. We additionally performed ablation experiments on the ABIDE dataset to demonstrate the effectiveness of introducing the linear layer.
| Method | AUC | ACC | SEN | SPE |
| --- | --- | --- | --- | --- |
| w/o Linear | 81.0 (1.5) | 76.4 (2.3) | 73.6 (5.8) | 74.4 (5.4) |
| ALTER | 82.8 (1.1) | 77.0 (1.0) | 77.4 (3.4) | 76.6 (4.6) |
**Q1**
We utilize the proposed ALTER method to quantify these dependencies as they manifest in the data. In brain network analysis, long-range and short-range dependence refer to interactions between neurons in the spatial dimension. Short-range dependencies usually refer to interactions that occur within the same brain region or between anatomically neighboring brain regions. In contrast, long-range dependence usually spans larger spatial brain regions and involves information transmission between different functional regions. Such long-range dependencies support the integration of global information, which plays a key role in complex cognitive functions. In the manuscript, we only address the capture of long-range dependence without defining it explicitly at the neuroscience level.
**Q2**
Indeed, we have not yet explored other correlation measures besides the Pearson correlation coefficient to define the adaptive factors in our study. The proposed method represents an initial attempt to capture long-range dependencies in brain networks. In future work, we plan to investigate some non-linear correlation measures, to further refine and potentially improve the adaptability of our method. It is worth noting that studies in neuroscience predominantly rely on Pearson correlation.
**Q3**
In ALTER, the transformer's self-attention mechanism is introduced to adaptively integrate the long-range and short-range dependencies between ROIs. Meanwhile, at the suggestion of reviewer s7Az, we replaced the transformer with a 1-layer GCN and a 2-layer GCN. The experimental results, shown in Table 5, indicate that the transformer usually achieves better results due to its capacity for adaptive learning.
**Q4**
In Figure 5 we show additional examples of long-range capture at the individual level on the ABIDE dataset. In Figures 5(a), (b), and (c), despite the two red-labeled nodes being 5 hops apart, there is still a high attention value between the two ROIs. In Figure 5(d), there is a high attention value between ROIs 6 and 12, which are 6 hops apart.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. The reply has addressed my concerns. Overall, this work is of high quality and offers a novel perspective compared to recent studies. I support the acceptance of this work. | Summary: This work proposes Adaptive Long-range aware TransformER (ALTER), a brain graph transformer to capture long-range dependencies between brain ROIs utilizing biased random walk.
Strengths: 1. This work introduces a novel brain graph transformer with adaptive long-range awareness, which leverages the communication strengths between ROIs to guide the capturing of long-range dependencies.
2. The result demonstrates that ALTER consistently outperforms generalized graph learning methods and other graph learning-based brain network analysis methods.
Weaknesses: 1. Even though the proposed ALTER is better than all selected baselines, some baselines like BrainGNN can provide both the prediction and interpretation. It is unclear whether the ALTER can also explain the important disease-specific pattern and find the biomarkers.
2. It is unclear how to implement the comparable baselines and how to build the brain graph for them.
3. It is unclear how to preprocess the fMRI in detail and how to conduct the quality control.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the final loss function for this task?
2. How many times do you repeat the experiment? Have you conducted k-fold cross-validation to examine the result?
3. How about the ALTER’s ability to capture long-range dependencies in the patient's group instead of choosing one example to show in Figure 4? Can it get the same conclusion in the group-level analysis?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the comments on our work. Below we address the questions raised. We hope we have well addressed your concerns. If there are any other issues remaining, we are pleased to address them further.
**W1**
As shown in Figure 1(a) of the global response, we have analyzed the output using the SHAP model and confirmed that our results reflect AD-related ROIs. The proposed ALTER promotes an integrated understanding of brain-wide communication by capturing long-range dependencies and achieves superior performance in disease prediction tasks. Long-range dependencies, as a complement to short-range dependencies, can deliver unique insights into the organization and behavior of brain networks associated with neuropsychiatric disorders [1, 2, 3]. ALTER effectively captures long-range dependencies using the Markov process-based ALGA strategy, which can further explain important disease-specific patterns and find biomarkers. It is worth noting that the proposed model employs transformers, which are well known for extracting important features through their attention mechanism, which is effectively used for interpretability.
**W2**
We apologize for not providing a clear description of how to implement the comparable baselines and build the brain graphs for them. We have added a section in the appendix to clarify the implementation details of the comparable baselines and the construction of the brain graphs.
Specifically, to ensure a fair comparison, we use the open-source codes of BrainGNN, BrainNETTF, and FBNETGEN. For SAN, Graphormer, LRGNN, and GraphTrans, we adapt their open-source codes and modify them to suit the brain network datasets. For BrainNetGNN, we implemented it ourselves following the settings described in its paper. During parameter tuning, we follow the tuning of BrainNETTF [4] for SAN, BrainGNN, FBNETGEN, Graphormer, and BrainNETTF. For BrainNetGNN, we search the number of GRU layers over {1, 2, 3}. For LRGNN, we vary the aggregation operations over {8, 12} with the number of cells in {1, 3}. For GraphTrans, we search the number of GNN layers over {1, 2, 3, 4} with a hidden dimension of 100.
We utilized the functional connectivity matrix to compute a brain graph for BrainNETTF; it is obtained by calculating the correlation between brain regions using the processed fMRI. The details of computing these correlation matrices are also incorporated into the revised paper. For BrainNetGNN and FBNETGEN, the models require the processed fMRI as input. BrainGNN, SAN, Graphormer, LRGNN, and GraphTrans require the correlation matrix and adjacency matrix. As mentioned in the paper, the adjacency matrix is obtained by thresholding (≥0.3) the correlation matrix.
**W3**
We thank the reviewer for requesting a clear explanation of the preprocessing steps for fMRI. We preprocess the fMRI using the Data Processing Assistant for Resting-State fMRI (DPARSF) toolkit. Specifically, we removed the first 10 time points from the downloaded NIfTI data according to the default mode and performed slice timing correction with the middle layer as the reference slice. Meanwhile, we set the head motion correction to 'Friston 24' and selected automask and nuisance covariate regression; the other options were set according to the default mode. Then, considering individual differences, we performed 'Normalize by DARTEL', and for the definition of ROIs we adopted an atlas already available in DPARSF. Finally, we construct brain networks $G = \left( {V,X,A} \right)$ for each fMRI.
During the experiments, for the ABIDE dataset, since we directly adopted the processed brain network from [4] and used it as the correlation matrix, its quality control follows [4]. For the ADNI dataset, besides the above preprocessing, we performed head motion correction, slice timing correction, realignment, and normalization. Please note that we have followed the standard protocol used by other research studies to ensure that any bias and noise have been removed from the dataset.
**Q1**
The final loss function for this task is the cross-entropy loss, as the model addresses a classification task. This is already implemented in our open-source code. We have now revised the paper to improve the clarity of the methodology.
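For reference, a minimal numpy version of a batch cross-entropy loss over raw logits (illustrative only; the paper's open-source code is the authoritative implementation):

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean cross-entropy over a batch, computed from raw logits
    in a numerically stable way (log-sum-exp trick)."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Uniform logits over 2 classes give a loss of log(2).
loss = cross_entropy(np.zeros((2, 2)), np.array([0, 1]))
```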
**Q2**
For all the experiments, we repeated them 10 times. Instead of k-fold cross-validation, we conducted repeated random split validation. To ensure that our results are trustworthy, we additionally performed 5-fold cross-validation (as shown in Table 4 in the global response).
**Q3**
We have computed the average across individuals to perform group-level analysis, as this approach aligns with the methodologies commonly adopted in similar studies [5]. The average graph and the corresponding attention heatmap are illustrated in Figure 1(b)&(c) of the global response. We can observe that ALTER captures group-level long-distance dependence, but it is not very significant relative to the individual-level. This may be due to certain individual differences in patients, including age and gender, which can affect the brain-wide communication [6].
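The group-level averaging described above amounts to a simple element-wise mean over per-subject matrices; a hedged sketch (names are illustrative):

```python
import numpy as np

def group_average(mats):
    """Average per-subject matrices (e.g., adjacency matrices or
    attention heatmaps) into one group-level matrix."""
    return np.mean(np.stack(mats, axis=0), axis=0)

subjects = [np.eye(3), 2 * np.eye(3)]
avg = group_average(subjects)  # diagonal entries average to 1.5
```

As the rebuttal notes, individual differences (age, gender) can wash out effects in such averages, which is consistent with group-level patterns being weaker than individual-level ones.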
**References**
[1] Space-independent community and hub structure of functional brain networks. NeuroImage 2020.
[2] Long-range connections are more severely damaged and relevant for cognition in multiple sclerosis. Brain 2019.
[3] Engineering brain assembloids to interrogate human neural circuits. Nature Protocols 2022.
[4] Brain network transformer. In NeurIPS 2022.
[5] Structure-function coupling in the human connectome: A machine learning approach. NeuroImage 2021.
[6] Local structure-function relationships in human brain networks across the lifespan. Nature 2022. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their insightful assessment of our work, as well as the useful feedback and actionable suggestions they provided. We are pleased that they found our work to be meaningful (reviewer u6uE) and reasonable (reviewer s7Az), that the experimental results are strong (reviewers s7Az, dPgM and xvKE), and that the manuscript is clear (reviewer xvKE). We will incorporate their suggestions to improve the presentation in future revisions of the paper.
Below we provide additional empirical analysis based on the recommendations raised by the reviewers. Each reviewer's individual questions will be answered in separate responses. The new results include:
**Interpretability Analysis.** We used the SHAP model for interpretability analysis on the ADNI dataset. We calculated the SHAP values of the attention matrix. From the results, it can be observed that the hippocampal regions of AD cases have positive SHAP values and the Top-10 ROIs with the highest SHAP values are almost always correlated with ADNI prediction, which is generally consistent with the results in [1].
**More Sufficient Experiments.** We evaluate the proposed ALTER on additional datasets (PPMI, Matai, TaoWu, and Neurocon). Meanwhile, we add more baseline models, including A-GCL [2] and ContrastPool [3].
**More Ablation Studies.** (1) We used a 5-fold cross validation replacing the repeated random split validation for evaluating the proposed ALTER. (2) We evaluate the performance of the ALGA strategy in combination with different GNN baselines, including 1-layer GCN, 2-layer GCN, 1-layer GAT, and 2-layer GAT.
**Group-level Analysis.** We have computed the average across individuals to perform group-level analysis, and then presented the average graph and the corresponding attention heatmap.
**References**
[1] Multimodal deep learning for Alzheimer’s disease dementia assessment. Nature communications 2022.
[2] A-GCL: Adversarial graph contrastive learning for fMRI analysis to diagnose neurodevelopmental disorders. MIA 2023.
[3] Contrastive graph pooling for explainable classification of brain networks. TMI 2024.
Pdf: /pdf/d78aff410494252f0057f50d330cbeeeb128b659.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Efficient Leverage Score Sampling for Tensor Train Decomposition | Accept (poster) | Summary: This paper gives a better randomized alternating least squares algorithm for computing tensor factorizations. It's based on an exact characterization of leverage scores of the matrization of tensors via a suitable intermediate orthonormal representation. This is justified rigorously, and significant empirical gains were demonstrated.
Strengths: The approach taken is natural, and the bounds obtained are quite powerful.
The experiments considered both synthetic and real data sets, and demonstrated clear gains in the parameter regimes considered.
Weaknesses: I'm a bit concerned about the setting of the experiments, which seem to be 3-dimensional dense tensors. My understanding is that a lot of the more complicated tensor instances are sparse and in higher dimensions. However, I'm not sure whether those have low rank representations.
Technical Quality: 3
Clarity: 4
Questions for Authors: Would it be possible to check how the orthogonality conditions are maintained in intermediate steps under the inexact arithmetic caused by round off errors? Aka. are the conditions for the characterizations of leverage scores preserved exactly? (this has been answered)
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and feedback, please find responses below:
**Weaknesses:**
> I'm a bit concerned about the setting of the experiments, which seem to be 3-dimensional dense tensors. My understanding is that a lot of the more complicated tensor instances are sparse and in higher dimensions. However, I'm not sure whether those have low rank representations.
Correct. We added dense synthetic and real-data experiments to show that our proposed approach has a better time complexity than TT-ALS and TT-SVD and performs slightly better than rTT-SVD in terms of fit. The main point is to demonstrate that our new approach works as well as other types of TT decompositions for dense data, since SVD-based methods cannot handle high-dimensional sparse tensors. For the more complicated tensors, we added a sparse tensor experiment; as you can see in the second part, the setting is different and more complicated, as SVD-based TT decomposition cannot be used, so we compared our approach only with the classical TT-ALS method.
**Questions:**
> Would it be possible to check how the orthogonality conditions are maintained in intermediate steps under the inexact arithmetic caused by round off errors? Aka. are the conditions for the characterizations of leverage scores preserved exactly?
We did not observe any numerical instabilities due to round-off errors during the orthogonalization step (which is performed using the stable QR decomposition routine provided by numpy.linalg), and we believe the characterization of leverage scores is preserved exactly (up to machine precision).
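As a generic illustration of this point (not the paper's implementation), row leverage scores obtained from a thin QR factorization can be checked for numerical orthogonality directly:

```python
import numpy as np

def leverage_scores(A):
    """Exact row leverage scores of A via a thin QR factorization."""
    Q, _ = np.linalg.qr(A)  # reduced QR: Q has orthonormal columns
    return Q, np.sum(Q**2, axis=1)

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 8))
Q, scores = leverage_scores(A)

# Orthonormality of Q holds up to machine precision, so the equalities
# characterizing the leverage scores are preserved essentially exactly;
# the scores sum to the rank of A (here, 8).
ortho_err = np.abs(Q.T @ Q - np.eye(8)).max()
```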
---
Rebuttal Comment 1.1:
Title: thank you
Comment: Thank you for the detailed responses.
My concern w.r.t. roundoffs is that equality conditions like equation (6) might not be robust to perturbations to the original matrix. However, I now see that it's only for the initial matrix, not for the intermediate products of sampling. So I agree that it's inherently robust.
I will raise my presentation score, and leave the overall unchanged. | Summary: The authors proposed a leverage score sampling-based TT-ALS method to reduce the computational complexity of the traditional TT-ALS. Experimental results verify the performance of the proposed method.
Strengths: The paper is well written with good theoretical analysis and desired experimental performance. The method using leveraging score sampling is technically sound for TT decomposition.
Weaknesses: The contribution and novelty are not clearly stated compared with [Malik and Becker, 2021]. In [Malik and Becker, 2021], the leverage sampling was applied to TR. As TT can be treated as a special case of TR, the author does not clearly state that it is necessary to develop a new method for TT, or that there are some new theories or findings that are distinct from the previous TR structure.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. My main concern about this work is its relation to the leverage score sampling-based TR-ALS [Malik and Becker, 2021]. As TT can be seen as a special case of TR, what is the main contribution of the current work compared with leverage score sampling-based TR-ALS? Please clarify that it is necessary to develop a new method beyond [Malik and Becker, 2021] (i.e. has a better theoretical guarantee or has better performance than leverage score sampling-based TR-ALS).
2. For real data experiments, how about the performance compared with other tensor decomposition structures, such as TR? As you can set the first and last TR-rank as 1 thus it reduces to a TT-like structure.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and feedback, please find responses below:
**Weaknesses:**
> The contribution and novelty are not clearly stated compared with [Malik and Becker, 2021]. In [Malik and Becker, 2021], the leverage sampling was applied to TR. As TT can be treated as a special case of TR, the author does not clearly state that it is necessary to develop a new method for TT, or that there are some new theories or findings that are distinct from the previous TR structure.
In [Malik et al., 2021], the leverage scores were approximated using a product of simpler distributions: they sample according to the leverage scores of each core. In our paper, by contrast, we propose a novel data structure that can compute the exact leverage scores. Moreover, the time complexity to solve one least-squares problem of ALS with the approach of Malik et al. is $NIR^4$ + #iter · $NIR^{2N+2}/(\epsilon\delta)$, which is still exponential in the order of the tensor $N$, while our result has no exponential dependency on the tensor order. We will add and emphasize in the camera-ready version that our approach differs from theirs in two main aspects:
- We did not use approximation for finding the leverage scores, instead a novel data structure is proposed for finding **exact** leverage scores.
- In Malik et al., the runtime for the least-squares solve still has an exponential dependency on the order of the tensor, while our approach is free of this exponential dependency.
**Questions**
> My main concern about this work is its relation to the leverage score sampling-based TR-ALS [Malik and Becker, 2021]. As TT can be seen as a special case of TR, what is the main contribution of the current work compared with leverage score sampling-based TR-ALS? Please clarify that it is necessary to develop a new method beyond [Malik and Becker, 2021] (i.e. has a better theoretical guarantee or has better performance than leverage score sampling-based TR-ALS).
Indeed, TT is a special case of the TR decomposition. However, in [Malik and Becker, 2021], leverage score approximation is used for TR-ALS with time complexity #iter · $NIR^{2N+2}/(\epsilon\delta)$ (further improved to #iter · $N^3R^8 (R + I/\epsilon)/\delta$ in [Malik, 2022]). In contrast, our method has a lower time complexity and provides a better theoretical guarantee (please see Corollary 4.4 in our paper). Therefore, our method goes beyond the works of Malik et al. As mentioned in their works, computing leverage scores requires computing the left singular vectors, which has the same cost as solving the original least-squares problem. For this reason, they approximate the leverage scores. Instead, we propose a novel data structure to sample from the **exact** leverage scores at a lower cost than their proposed approaches.
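For intuition on the general principle under discussion (this is our illustration, not the paper's data structure, which samples without materializing the scores): leverage-score row sampling solves a small reweighted least-squares problem whose solution approximates the full one. A numpy sketch, with all names hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, J = 2000, 10, 200
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

# Exact leverage scores: squared row norms of an orthonormal basis of A.
Q, _ = np.linalg.qr(A)
lev = np.sum(Q**2, axis=1)
p = lev / lev.sum()  # normalize into a sampling distribution

# Draw J rows with probabilities p and reweight (importance sampling).
idx = rng.choice(n, size=J, p=p)
w = 1.0 / np.sqrt(J * p[idx])
x_sk = np.linalg.lstsq(w[:, None] * A[idx], w * b[idx], rcond=None)[0]
x_full = np.linalg.lstsq(A, b, rcond=None)[0]

# With high probability the sketched solution is close to the full one.
err = np.linalg.norm(x_sk - x_full)
```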
> For real data experiments, how about the performance compared with other tensor decomposition structures, such as TR? As you can set the first and last TR-rank as 1 thus it reduces to a TT-like structure.
It is true that TT is a special case of TR, but note that the main problem we address in our paper is how to scale TT-ALS to very large tensors using randomized techniques. The purpose of our experiments is to demonstrate how we efficiently achieve this goal with exact leverage score sampling. Extending our approach to TR-ALS could be interesting, however it would be very challenging due to the lack of canonical forms for TR decomposition.
We hope we have addressed all your concerns and answered your questions (in which case we kindly ask you to consider increasing your score). We are happy to clarify any additional point during the discussion period.
---
Rebuttal Comment 1.1:
Comment: Thank the reviewer for their reply. I've increased the score. | Summary: This paper presents an efficient algorithm to use leverage-score sampling to solve least squares problems arising as subproblems in a larger alternating least squares (ALS) algorithm for building an approximate Tensor Train (TT) decomposition. The paper reports empirical evaluation of the proposed algorithm, showing runtime improvement compare to the baseline un-sampled problems.
Strengths: - Empirical evaluation show a runtime improvement with respect to unsketched TT-ALS, without reducing fit.
- The paper presents an efficient algorithm for leverage score sampling of ALS subproblems in TT-ALS.
Weaknesses: Major comments (affecting recommendation):
- The proposed method is compared empirically only to a small number of baselines. It seems that the most likely competitor (Chen et al. 2023) is not compared against.
- Runtime improvement in the experiments is modest. This is a sampling-based method, so if sampling is very efficient (due to efficient leverage score approximation) I would expect a big improvement in runtime. This might not be the case if either more iterations are now needed for convergence or computing the sampling probabilities is very expensive. It is unclear from the experiments which is the case.
- The paper does not present an end-to-end analysis. It only analyzes how to subsample the ALS subproblems, and thus accelerate them. It does not analyze how solving the subsampled system in lieu of the full subproblem affects ALS convergence.
- Many details required for implementation of the algorithm are omitted. Many points regarding the algorithm are not clear.
Minor comments (do not affect the recommendation):
- Lines 125 - 126: add citation.
- Figure 3 left: is this fit in the y-axis or misfit?
- Line 108-109: I think you mean *omission* of multiplicative terms.
- Lines 58-60: shouldn't runtime be O(jIR^3)?
- Line 54: The term "left matricization" is used without defining it yet.
- Line 55: I think it should be R_j
- Eq (2): I think on the right it should be A[i,:]^T. Similar issue with Eq (3).
Technical Quality: 3
Clarity: 2
Questions for Authors: - Lines 96-97: Sketching is a good way to circumvent the need to compute leverage scores, and TensorSketch allows us to sketch matrices with Kronecker structure. And indeed, the authors point out that [Chen et al. 2023] do this. How do their results compare to the present paper? Why is what you are proposing a better idea than using TensorSketch?
- Lines 64-66: What is the significance of this observation regarding the complexity of the algorithm?
- Lines 125 - 126: Once the canonical form has been computed, can we solve the ALS subproblem efficiently?
- Did you compare empirically your method to the method of [Chen et al. 2023].
- Is your algorithm computing the canonical form anew in each ALS iteration, or are you somehow updating it?
- Does your algorithm maintain/computes the leverage scores of each row (the number of such scores is exponential), or do you present only an efficient way to sample from it without forming it?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Nothing to add.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and respond to the feedback below:
**Weaknesses:**
> [W1][Comparison against (Chen et al. 2023)]
The TensorSketch approach by Chen et al. requires an **exponential** sample count in the tensor dimension q to achieve the $(\epsilon, \delta)$-guarantee on the accuracy of each least-squares solve. See Theorem 4.4 of their work and note the 3^q term in the sample complexity, where (in their notation) q is the tensor dimension. By contrast, our algorithm requires a sample count that has **no dependence** on the tensor dimension q; the sample count depends only on the column count of the design matrix and the parameters $\epsilon$ and $\delta$. The time to draw these samples in our method is linearly proportional to q, the tensor dimension, a modest cost. The efficiency of our method stems from the fact that we sample from the exact distribution of squared row norms of the design matrix. By contrast, TensorSketch applied without tree-structure modifications, such as those proposed by Ahle et al., suffers from an exponential output sketch size. We will clarify this point at the end of Section 2 and in the Introduction.
> [W2][Runtime improvement]
The runtime improvements are most significant for large sparse tensors. Figure 4 shows the accuracy (y-axis, higher is better) vs. ALS iteration time for our method vs. non-randomized ALS. The speedup per iteration can be as high as 26x for lower ranks. For the NELL-2 tensor, the plot shows that accuracy within three significant figures of non-randomized ALS was achieved roughly 3-4x faster than an optimized non-randomized ALS baseline. We will note this at the beginning of Section 5 (Experiments) and will highlight these facts further in Section 5.2.
> [W3][End-to-end analysis]
Correct. Our analysis provides guarantees on the accuracy of individual least-squares problems, in line with prior works by Cheng et al., Larsen and Kolda, and Malik. Under additional assumptions on the tensor, convergence guarantees can be derived for sketched ALS: see, for example, https://proceedings.mlr.press/v119/gittens20a/gittens20a.pdf. The global guarantees derived in that paper by Aggour, Gittens, and Yener depend on sketching guarantees established for each linear least-squares problem, which we provide in our work.
> [W4][Implementation details]
The revised version of our draft will include (in Section 4.1) detailed descriptions of the BuildSampler and RowSample functions, i.e., the data structure described in the appendix used to draw samples from each orthonormal core flattening. Our Git repository, which is public at https://anonymous.4open.science/r/fast_tensor_leverage-EB01, includes a simple, slow reference implementation of this data structure written in Python, providing sufficient detail to understand and fully replicate our method.
> Minor comments
Thanks for catching these typos, we will correct and address them all. For Figure 3, the y-axis is the fit.
**Questions:**
>[Q1] [Our approach vs TensorSketch]
We believe we addressed this point in the comment above on the exponential complexity of TensorSketch without appropriate modifications. Our method offers sub-exponential sample complexity and worst-case runtime to achieve guarantees on the solution of each least-squares problem, whereas the approach of Chen et al. requires a worst-case sketch size exponential in the tensor dimension.
> [Q2][Complexity of the algorithm]
The significance here is that data structure construction (as well as subsequent updates to the data structure) does not increase the asymptotic complexity of ALS tensor train decomposition.
> [Q3][proceeding only with the canonical form]
Even after computing the canonical form (which makes the design matrix of each linear least-squares problem orthonormal), we must multiply the matricized tensor against the chain of TT cores placed in canonical form. **This is the major computational bottleneck**: without sketching, the runtime cost for this matrix multiplication scales as $O(nnz(T) N R^2)$, where $nnz(T)$ is the number of nonzeros in the tensor, $N$ is the order of the tensor, and $R$ is the tensor train rank. The tensor may have hundreds of millions of nonzero values. Sketching allows us to select only a subset of rows from the design matrix and the corresponding subset of rows from the matricized tensor, reducing the cost to $O(N R^2)$ × (number of selected nonzeros extracted by the sketch). We can update our draft to clarify this point in Section 1.1.
> [Q4][Empirical comparison with Chen et al. 2023]
We did not compare empirically with the results of Chen et al. To the best of our knowledge, there is no public code available to replicate their results and compare with them. Also, as explained previously, from the theoretical point of view, the method proposed by Chen et al. cannot scale to the size of the tensors that we consider in the large sparse tensor experiments.
> [Q5][Updating canonical form in each iteration]
We update the canonical form without recomputing it entirely at each substep of ALS. After solving for the optimal value for each core, we compute the QR decomposition of its left or right flattening and replace the newly-computed core with the appropriately-reshaped Q factor. In this way, only one core needs to be updated after each least-squares solve to maintain the canonical form.
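For intuition, a hypothetical numpy sketch of such a single-core update (all names are ours, not from the paper's code): the newly solved core is flattened, QR-factorized, and replaced by the reshaped Q factor; the R factor is absorbed into the neighboring core so the represented tensor is unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)
R1, I, R2, R3 = 3, 4, 3, 2  # TT ranks and physical dimension
core = rng.standard_normal((R1, I, R2))  # newly solved core
nxt = rng.standard_normal((R2, I, R3))   # its right neighbor

# Contraction of the two cores before the update (ground truth).
before = np.einsum('aib,bjc->aijc', core, nxt)

# Left flattening of the core: rows indexed by (R1, I), columns by R2.
Q, Rfac = np.linalg.qr(core.reshape(R1 * I, R2))

# Replace the core by the reshaped Q factor and absorb R into the neighbor.
core = Q.reshape(R1, I, R2)
nxt = np.einsum('ab,bjc->ajc', Rfac, nxt)

# Canonical form: the left flattening is orthonormal ...
assert np.allclose(Q.T @ Q, np.eye(R2))
# ... and the represented tensor is unchanged.
assert np.allclose(np.einsum('aib,bjc->aijc', core, nxt), before)
```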
> [Q6][Computing the leverage scores]
We do not compute the leverage scores of all rows, since, as you pointed out, there are exponentially-many rows in the tensor dimension. To design a computationally-efficient sampling algorithm, we build an efficient data structure to **sample** from this distribution without materializing it.
We hope we have addressed all your concerns and answered your questions (in which case we kindly ask you to consider increasing your score). We are happy to clarify any additional point during the discussion period.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer, considering that the end of the discussion period is approaching, do you have any further questions? We think we have answered all your concerns in our rebuttal and hope you'll consider increasing your score.
---
Rebuttal Comment 1.2:
Title: [W1][Comparison against (Chen et al. 2023)]
Comment: Thank you for the detailed answers and clarifications. It will be beneficial to include many of the clarifications in revised versions of the manuscripts. I will consider how your answers affect my final score, which is "borderline" to begin with.
However, let me follow up on the discussion of Chen et al. 2023. I see your point. However, note that Thm 4.4 in Chen et al. 2023 gives exact constants (no O() expression) and this is where the 3^q comes from. In contrast, your result is J=O(R^2/(\eps * \delta)) where the constant is unclear. For all I know, the O() hides a 3^q constant! So: if you flesh out the constant, do you really not have any dependence on the tensor order?
Re empirical comparison: even if we accept that there is a theoretical advantage in the form of the dependence on the order, this does not necessarily tell the whole story for a small q (where typically q=3 is the most common use-case). In fact, if q is fixed to 3, then 3^q is now a constant, and it really matters how it compares to the constant of your algorithm.
I still hold that Chen et al. 2023 is the most appropriate empirical baseline, and without it the empirical evaluation is lacking.
---
Reply to Comment 1.2.1:
Comment: >Thank you for the detailed answers and clarifications. It will be beneficial to include many of the clarifications in revised versions of the manuscripts.
Thank you again for your review and questions. We will include all the points mentioned in our rebuttal in the revision, which will improve the clarity.
> However, let me followup the discussion on Chen et al. 2023. I see your point. However, note that Thm 4.4 in Chen et al. 2023 gives exact constants (no O() expression) and this is where the 3^q comes from. In contrast, your results is J=O(R^2/(\eps * \delta)) where the constant is unclear. For all I know, the O() hides a 3^q constant! So: if you flesh out the constant, do you really do not have any dependence on the tensor order?
There is no hidden dependency on the tensor order in the big O.
In Corollary 4.4, the number of samples required for each least-squares problem, $J=O(R^2/(\epsilon\delta))$, is independent of the tensor order $N$ ($N$ is not hidden in the constant). Thus, for one sweep of the ALS algorithm, the complexity is linear in the tensor order, as stated in the second part of the corollary (i.e., the overall ALS complexity is $O(\#\text{it} \cdot R^4 N \sum_{j=1}^{N} (\log I_j + I_j)/(\epsilon\delta))$, where $N$ is the order of the tensor). We will clarify this point in the revision.
> even if we accept that there is a theoretical advantage in the form of a dependence of the order, this does not necessarily tell the whole story for a small q (where typically q=3 is the most common use-case). In fact, if q is fixed to 3, then 3^q is now a constant, and it really matters how it compares the constant of the your algorithm.
We agree that if q is treated as a constant, then the asymptotic complexity is the same. However, we believe that designing an algorithm with only a linear dependency on the order, instead of the exponential one given in Chen et al., constitutes a non-trivial contribution and advancement compared to Chen et al. In addition, the TT decomposition is particularly suited and relevant for (very) high-order tensors where the tensor order cannot be treated as constant (as demonstrated by its use in quantum physics for simulating many-body systems, where the order corresponds to the number of particles in the system).
> I still hold that Chen et al. 2023 is the most appropriate empirical baseline, and without it the empirical evaluation is lacking.
Unfortunately, the lack of publicly available code for the method presented in (Chen et al., 2023) did not allow us to include it in the synthetic experiment. Moreover, to the best of our understanding, the method proposed by Chen et al. could not scale to the size of the tensors that we consider in the large sparse tensor experiments. | Summary: This paper considers the problem of computing the tensor train (TT) decomposition of large tensors and proposes a novel randomized approach for solving it efficiently. In particular, the Alternating Least Squares (TT-ALS) algorithm is considered, and an exact leverage-score sampling approach is proposed to accelerate it. A data-structure-based approach is devised to efficiently compute the leverage scores of the appropriate matricizations of the tensors and sample rows from them. Theoretical analysis gives the sampling complexity. Numerical experiment results are presented to illustrate the performance of the proposed algorithm.
Strengths: The strengths of the paper are:
1. A novel randomized approach is presented for computing the TT decomposition of large tensors.
2. New data structure and sampling approach are proposed. Theoretical results are presented.
3. Numerical results show that the proposed method is efficient.
Weaknesses: The weaknesses of the paper are:
1. The randomized SVD approach seems to be more efficient in terms of cost compared to the proposed method.
2. Certain aspects of the paper can be improved (see below).
Technical Quality: 3
Clarity: 3
Questions for Authors: The paper presents an interesting approach to compute the TT decomposition of large tensors.
However, I have the following comments that can further improve the paper:
1. r-TT-SVD seems better -
Based on the numerical results presented, it appears the randomized SVD-based approach (r-TT-SVD) takes significantly lower time, yet achieves comparable performance (fit) as other methods. Are TT-ALS and rTT-ALS popular and relevant for TT decomposition? How do these methods differ in the quality of the decompositions computed?
In some cases, rTT-SVD might have a slightly lower fit. But, since the runtime is so low, perhaps a larger sketch size would yield similar fit at lower cost. Details about the sketch sizes used are missing.
2. Presentation:
Certain aspects of the presentation can be improved, and there are few minor typos.
i. Are TT decompositions popular in tensor applications? Overall, they seem to have higher computational cost and are not very interpretable. Few alternate tensor decompositions achieve better compression with lower cost. A discussion on the motivation for the use of TT decomposition will benefit the paper (readers can better appreciate the results).
ii. Introduction has tensor jargons which might not be known to general AI audience, such as 3D core chains, left-matricization, contraction of tensor, etc. It is better to avoid these or define them before use.
iii. The computational cost of solving eq (1) can be added.
iv. Theorem 1.1, point 1 has a $j$ missing in the runtime cost.
v. Not sure what is the subscript (2) in Corollary 4.4.
Many of the results in the paper are based on another arXiv paper (which is not peer-reviewed). So, the correctness of these results is not established.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See above comments
------
Post Rebuttal:
I have read other reviews and authors' responses to all reviews. The responses have adequately addressed my concerns. The work has many merits. I have raised my score.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and feedback, please find responses below:
**Weaknesses:**
> The randomized SVD approach seems to be more efficient in terms of cost compared to the proposed method.
Randomized SVD cannot scale to large tensors. Indeed, to decompose a tensor using randomized SVD, at each step we need to produce a Gaussian matrix of size $s_j\times I^{N-1}$, where $N$ is the order of the tensor, $I$ is the physical dimension of each mode, and $s_j$ is the size of the sketch. This approach works well when $N$ is small, but randomized SVD cannot handle very high-order tensors. In the experiments, we only use TT-SVD and randomized TT-SVD for the dense tensors with small $N$; these two algorithms cannot scale to the large sparse tensors in the second part of the experiments.
**Questions:**
> The paper presents an interesting approach to compute the TT decomposition of large tensors. However, I have the following comments that can further improve the paper:
r-TT-SVD seems better. Based on the numerical results presented, it appears the randomized SVD-based approach (r-TT-SVD) takes significantly lower time, yet achieves comparable performance (fit) as other methods. Are TT-ALS and rTT-ALS popular and relevant for TT decomposition?
Yes: TT-ALS is a very popular algorithm, notably in the quantum physics community, where it is closely related to the density matrix renormalization group (DMRG) algorithm.
> How do these methods differ in the quality of the decompositions computed?
Regarding the quality of the decomposition, TT-ALS usually finds better solutions than TT-SVD (TT-SVD can actually be used as an initialization for TT-ALS). Note, however, that the focus of our paper is on improving the TT-ALS algorithm using randomization. Indeed, TT-ALS is a popular algorithm, but, as mentioned previously, the cost of solving the least-squares problems in ALS is exponential in the order of the tensor, which is not efficient for decomposing very large tensors. The main goal of our paper is to make ALS efficient with randomization and feasible for high-order tensors.
> In some cases, rTT-SVD might have a slightly lower fit. But, since the runtime is so low, perhaps a larger sketch size would yield similar fit at lower cost.
That’s correct: by increasing the sketch size we can obtain a better-quality sketch for rTT-SVD. However, as mentioned earlier, the SVD-based approach is not suitable for decomposing high-order tensors: even with an increased sketch size, rTT-SVD still suffers from the curse of dimensionality when $N$ is very large. Therefore, the sketch quality of rTT-SVD cannot be compared to rTT-ALS in the case of sparse high-order tensors (Section 5.2, second part of the experiments).
> Details about the sketch sizes used are missing.
In Section 5.1, the sketch sizes are set to $J=5000$ and $J=2000$ for the synthetic and real data, respectively (we mention it in the text but we will add this information in the caption of Figure 3 and Table 1 as well).
> Presentation: Certain aspects of the presentation can be improved, and there are few minor typos.
i. Are TT decompositions popular in tensor applications? Overall, they seem to have higher computational cost and are not very interpretable. Few alternate tensor decompositions achieve better compression with lower cost. A discussion on the motivation for the use of TT decomposition will benefit the paper (readers can better appreciate the results).
TT decomposition enjoys numerical stability, while finding a rank-r CP decomposition is NP-hard and the number of parameters of a Tucker decomposition grows exponentially with the order of the tensor. In contrast, the number of parameters is linear in the order of the tensor for both TT and TR decompositions, but TR decomposition is known to suffer from numerical stability issues. For these reasons, TT is more popular in the tensor and quantum physics communities (where it is known as Matrix Product States / MPS). To address your concern about motivating the TT format, we will include a more comprehensive motivation in the introduction.
> ii. Introduction has tensor jargons which might not be known to general AI audience, such as 3D core chains, left-matricization, contraction of tensor, etc. It is better to avoid these or define them before use.
If we understand correctly, your concern is mostly about the main theorem in the introduction which may cause difficulties for other audiences. We will clarify this by clearly stating before the theorem that all relevant definitions are given in Section 3.1.
> iii. The computational cost of solving eq (1) can be added.
Solving Eq (1) exactly (without randomization) has a computational cost of $O(I^N)$. It is mentioned in the introduction but we will emphasize it in the revision in section 3.2.
> iv. Theorem 1.1, point 1 has a 𝑗 missing in the runtime cost.
Correct. We will add j in the final version. Thank you for catching this typo.
> v. Not sure what is the subscript (2) in Corollary 4.4.
This denotes the second mode unfolding of the tensor which is defined in Definition 3.1 in the paper.
> Many of the results in the paper is based on another Arxiv paper (which is not peer-reviewed). So, correctness of these results is not established.
The paper “Fast exact leverage score sampling from Khatri–Rao products with applications to tensor decomposition” by Bharadwaj et al. was published at NeurIPS 2023; we will correct the citation in the camera-ready version. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their comments and feedback, which we all answered in the individual rebuttals below.
We summarize some of our answers to the main points raised by the reviewers (see individual rebuttals for details).
- [SVD-based vs ALS-based approach]: Overall TT-ALS is very popular in the quantum physics community and usually finds better solutions than TT-SVD. Randomized TT-SVD cannot scale to high-order tensors. Each step of the randomized TT-SVD decomposition requires generating a random Gaussian matrix (classical random projection) that only can handle small-order tensors.
- [Comparison with Chen et al., 2023 paper]: The TensorSketch approach proposed by Chen et al., 2023~(https://arxiv.org/pdf/2309.08093) requires an **exponential** sketch size in the tensor dimension, while our algorithm requires a sketch size that has no dependence on the tensor dimension; the sketch size depends only on the column count of the design matrix and the parameters $\epsilon$ and $\delta$.
- [Comparison with Malik and Becker, 2021 paper]: The approach proposed by Malik et al., 2021~(https://proceedings.mlr.press/v139/malik21b/malik21b.pdf) requires **approximating** the leverage scores, whereas in our paper we propose a novel data structure that computes the **exact** leverage scores. Moreover, in Malik et al., the runtime for the least-squares solve has an **exponential dependency** on the order of the tensor, while our approach is **free** of any exponential dependency on the tensor order.
- [Dense and sparse tensors experiments]: For the experiment part, we added two separate sections, one for the dense and one for the sparse tensors. Our purpose for the dense tensor section is to show that our approach has a better time complexity than TT-ALS and TT-SVD and matches rTT-SVD in terms of fit. However, SVD-based decompositions cannot handle high-order (sparse) tensors.
For the sparse tensors section, we compared our proposed approach with the classical TT-ALS.
We hope we have addressed all your concerns and answered your questions and we are happy to clarify any additional points during the discussion period. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Persistent Test-time Adaptation in Recurring Testing Scenarios | Accept (poster) | Summary: The paper proposes a novel method called Persistent Test-time Adaptation (PeTTA), aimed at addressing the gradual performance degradation of models when used for long-term test-time adaptation (TTA). Traditional TTA methods adapt to continuously changing environments but fail to account for the cumulative errors and performance drops that can occur when these environments reoccur over time. The study demonstrates, through the simulation of a simple Gaussian Mixture Model classifier, how TTA methods might gradually fail in such recurring environments. PeTTA achieves a balance between adaptation and preventing model collapse by monitoring the model's tendency to crash and adjusting the adaptation strategy accordingly, significantly improving the model's stability and performance in long-term testing.
Strengths: (1) The study introduces a new test scenario—recurring TTA—which realistically simulates conditions that might be encountered in real-world applications. This setup is more practical than traditional static test scenarios and helps reveal potential long-term issues with TTA methods.
(2) By using simulations with Gaussian Mixture Model classifiers, the study thoroughly analyzes the reasons for performance degradation in TTA methods and proposes a theoretically supported solution. PeTTA's stable performance across multiple benchmarks validates its effectiveness.
Weaknesses: (1) Although the recurring test scenario has theoretical significance, in practical applications, environmental changes involve more than just lighting conditions; factors such as shooting angles and weather conditions also impact the captured data. Therefore, a simple Gaussian Mixture Model classifier may not fully simulate the variations in real complex scenarios.
(2) While virtual data experiments provide theoretical support, whether their results fully apply to real scenarios requires further validation. Using real images for experiments in theoretical analysis would better validate the practical applicability of the PeTTA method.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. We agree with the reviewer that the real-world scenarios are significantly more complicated. Despite its simplicity, our $\epsilon$-GMMM model can *empirically demonstrate the behavior of a collapsing TTA model on a real-world CIFAR-10-C dataset* (see the similarity between Fig. 3(a) and Fig. 4(a)). To the best of our knowledge, this is the first attempt to establish a theoretical foundation for studying the collapse of TTA in the simplest case, promoting future research on the theoretical aspects of the collapsing TTA model.
2. Yes, a theoretical study on real images would certainly validate their practical applicability. However, the challenge of modeling real images lies in the difficulty of theoretically analyzing the effect of added noise as it propagates through a highly complex machine-learning model. The idea behind $\epsilon$-GMMM is to simplify this complex process where the update rule at each step is rigorously defined. Nevertheless, the *inspiration gained from this simple study* facilitates the development of PeTTA, which *demonstrated its ability to meet several real-world continual TTA benchmarks.*
---
Rebuttal Comment 1.1:
Comment: Thank you for your response to the weaknesses I pointed out. However, I believe the rebuttal doesn't fully address the concerns regarding the practical applicability of the proposed method in real-world scenarios.
While the theoretical foundation provided by the $\epsilon$-GMMM model is appreciated, the core of my concern lies in the practical validation and applicability of your method under more complex and varied real-world conditions. Simply demonstrating empirical behavior on a dataset like CIFAR-10-C, while useful, doesn't fully capture the diversity and complexity of real-world environmental changes, such as varying shooting angles, weather conditions, and other factors beyond lighting.
---
Rebuttal 2:
Comment: - We thank the reviewer for the comment on our rebuttal. In this study, and in the area of continual TTA research, the evaluation benchmark using CIFAR-10/100-C or ImageNet-C is the *standard benchmarking approach* for all the works in this line of research.
- Those datasets are *designed to simulate* up to 15 conditions that typically appear in real-world settings (such as snow, fog, and brightness reflecting weather conditions; motion blur and zoom blur due to camera/hand motion; and JPEG compression and image noise due to capturing conditions or image sensor quality), not just the lighting factors you mentioned. Please see [1] for more information. We evaluated our method under the most severe conditions.
- Besides image corruptions, we also evaluated on DomainNet126 dataset with 4 domains: clipart, painting, real, and sketch.
- Again, the key focus of this paper is *NOT on evaluating the reality of the evaluation protocol, or closing the gap between laboratory experiments and real-world deployment*. Here, we *point out the risk of the model collapsing*, even in the simplest setting that all previous works adopted, with a small extension (longer time horizon) on the current continual TTA evaluation protocol.
- We are *thrilled to extend our experiments with your suggestion*, but currently, we are not aware of any publicly available dataset that matches the criteria you mentioned, especially given the time constraint of the rebuttal period. We believe that the current evaluations *are sufficient to convey our key message*. We agree that a more realistic evaluation is necessary, and we will keep exploring this in the future.
In summary, we acknowledge that the gap between real-world deployment and evaluation on synthetic data, which is commonly used in the test-time adaptation community, does exist. However, we provided here some follow-up comments to justify that this setup is sufficient to convey the core message of this study. We hope this perspective will be considered in the ongoing discussions and evaluation of our work.
[1] Hendrycks et al., Benchmarking Neural Network Robustness to Common Corruptions and Perturbations. ICLR’19. | Summary: This paper investigates the risks behind long-term test-time adaptation. To this end, the authors simulate a long-term test data stream called Recurring Test-Time Adaptation by repeating a single-period continual TTA setting 20 times, and propose Persistent TTA (PeTTA). Within the proposed algorithm, the authors calculate the Mahalanobis distance between the feature distributions at the current time slot and before test-time adaptation to weight the regularization term and the model update momentum. Moreover, the authors also leverage an Anchor Loss in the probability space to further avoid over-adaptation. To validate the proposed method, the authors conduct experiments on several TTA benchmark datasets, e.g., CIFAR10-C and ImageNet-C. The results demonstrate the effectiveness of the proposed method, which provides a stable adaptation process in long-term test-time adaptation scenarios.
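As a rough illustration of the divergence-sensing idea summarized above, the snippet below sketches how a Mahalanobis distance between the current test-batch feature statistics and the pre-adaptation (source) statistics could be computed. All names and the small regularization constant are hypothetical, not taken from the paper.

```python
import numpy as np

def mahalanobis_drift(feat_batch, src_mean, src_cov):
    # Illustrative sketch: Mahalanobis distance between the mean of a
    # test-time feature batch and the source (pre-adaptation) feature
    # distribution. The 1e-6 regularizer on the covariance is hypothetical.
    diff = feat_batch.mean(axis=0) - src_mean
    inv_cov = np.linalg.inv(src_cov + 1e-6 * np.eye(src_cov.shape[0]))
    return float(np.sqrt(diff @ inv_cov @ diff))

# A large drift could then be mapped to a stronger regularization weight,
# e.g. lambda_t = lambda_0 * drift (purely illustrative scaling).
```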
Strengths: 1. This paper investigates a realistic and valuable TTA setting, where the test data stream is long enough that most existing TTA methods based on self-training/self-supervised objectives produce poor results.
2. This paper is well-written and sounds interesting.
3. The authors attempted to theorize the causes of long-term TTA failures.
Weaknesses: 1. The novelty is incremental. Methodologically, the Anchor Loss proposed in this paper is similar to that used in [A] (compare Eq. 8 in the manuscript with Eq. 5 in [A]), only replacing the L2 distance between two probabilities with Cross Entropy. In the ablation study, I observe that most of the improvement is provided by this Anchor Loss module on the CIFAR100, DN, and IN-C datasets. Based on this, I suggest the authors provide experimental comparisons between these two TTA methods and more discussion of the differences between them.
2. The danger brought by self-training-based TTA is well known [B], though there is no denying that the authors may be correct in the theoretical analysis in the manuscript. Many TTA methods have been proposed to alleviate this issue; for example, the Balanced BatchNorm proposed in [A] alleviates the impact of biased classes on batch normalization, and the class diversity weighting utilized in [C] avoids the accumulation of prediction bias during test time. Unfortunately, they are neither discussed nor compared against in the experiments of the manuscript.
[A] Towards Real-World Test-Time Adaptation: Tri-Net Self-Training with Balanced Normalization, AAAI 2024.
[B] On Pitfalls of Test-time Adaptation, ICLR 2023.
[C] Universal Test-time Adaptation through Weight Ensembling, Diversity Weighting, and Prior Correction, WACV 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weakness section above.
By the way, how does a feature-alignment-based TTA method, e.g., [D], perform in the recurring TTA scenario?
[D] Revisiting Realistic Test-Time Training: Sequential Inference and Adaptation by Anchored Clustering Regularized Self-Training. TPAMI 2024.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: There is not nearly enough technical novelty, and there is a lack of discussion and comparison of some of the competing methods that should be compared.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comments on the weaknesses part:**
1. *We respectfully disagree with the comment about novelty, since this study is not all about the anchor loss*. Indeed, we *acknowledged that the anchor loss is not a new idea or a novel point of ours* on line 751 (Appendix E5) and will include a proper citation to the anchor network in [A] for completeness. The anchor loss alone is insufficient to avoid model collapse, as we responded to Reviewer rAs4. Nevertheless, the main contributions of this paper are the *theoretical analysis* and the *sensing of divergence for an adaptive model update* in PeTTA. We also acknowledge that model collapse has been observed in previous studies as an inspiration for improving performance, but to the best of our knowledge, there is no theoretical investigation of this phenomenon. The suggestion of comparing PeTTA and [A] is interesting. We conducted additional experiments and provide the discussion in the following comment.
2. We sincerely appreciate the reviewer for suggesting recent methods such as ROID (WACV'24) [C] and TRIBE (AAAI'24) [A]. We have benchmarked their performance in Tables III-V. Even though TRIBE - a SOTA model - can provide stronger adaptability, outperforming the PeTTA model and the baseline RoTTA in the first several recurrences, *the risk of model collapse still persists in TRIBE* when increasing the observation period, as demonstrated in the case of CIFAR-10-C (Fig. I(b)). Nevertheless, this result *underscores the importance of our proposed recurring TTA* setting for extended testing stream evaluation. We also experimented with the class diversity weighting utilized in ROID [C]; unfortunately, it cannot handle the temporally correlated testing stream in the recurring/practical TTA as PeTTA does, and tends to collapse at the beginning. This is consistent with the finding by ROID's authors in Tab. 4 - the ablation study of [C]. *ROID with class diversity weighting cannot handle the practical TTA scenario and falls behind PeTTA on all benchmarks*.
Since PeTTA utilizes RoTTA as a baseline approach, it would be interesting for us to combine the amazing adaptability of TRIBE - a much stronger baseline - with the collapse prevention of PeTTA to create a stronger method in future work. Overall, the further evaluations of PeTTA against the most recent approaches in this rebuttal highlight (1) the value of recurring TTA in spotting the lifelong performance degradation of continual TTA and (2) the novel design of sensing model divergence and adaptive updates in PeTTA, inspired by a theoretical analysis, to address the collapsing phenomenon of the TTA model. The revised paper will include a discussion of these methods and an experimental comparison with them, as outlined in the rebuttal PDF.
**Comments on the questions part:**
We appreciate the reviewer's suggestion regarding the feature-alignment-based TTA mentioned in [D]. While sharing several design similarities, this work does not directly study the performance degradation of continual TTA methods as PeTTA does. Nevertheless, evaluating the performance of this approach under our recurring TTA setting is interesting and straightforward due to the simplicity of our setting. We will mention [D] in the revised paper and leave the evaluation for future work, given the limited length of the rebuttal period and since [D] has only just officially appeared in the 8/2024 issue of TPAMI.
---
Rebuttal Comment 1.1:
Title: Response to Authors' Rebuttal
Comment: Thanks for the additions to the TRIBE and ROID experiments, especially the TRIBE experiments, which allayed my concerns about the anchor loss module. The discussion of similar existing work here is necessary, and I suggest that the authors include these discussions and experiments in their camera-ready revision. Up to this point, all my concerns have been addressed, and overall this is a good paper for exploring the robustness of long time TTA methods. I would like to increase my score to Weak Accept to suggest acceptance.
---
Reply to Comment 1.1.1:
Comment: We thank reviewer MNYK for the positive feedback on our rebuttal. We are excited that your concerns regarding the anchor loss, novelty of PeTTA, and additional comparison with TRIBE and ROID have been successfully addressed. Notably, the discussion here emphasizes the risk of model collapse, even with the latest state-of-the-art continual TTA methods, and highlights the significance of PeTTA. We will incorporate these points into our revised paper. | Summary: The paper provides theoretical and empirical analyses of the error accumulation and model collapse in continuous TTA scenarios. From the analyses, the authors discover the risk of using constant key hyperparameters ($\alpha$ and $\lambda$ in RoTTA) and periodic reset of model parameters (in RDumb). They propose Persistent Test-time Adaptation (PeTTA) with adaptive $\alpha_t$ and $\lambda_t$ depending on distribution shifts and anchor loss $\cal{L}_{AL}$. The proposed method shows lower and more stable error over the continuous distribution shifts.
Strengths: - The paper considers a practical scenario of TTA, continuous TTA.
- The theoretical analysis explains an interesting outcome of error accumulation in TTA, called model collapse: a collapsed model is prone to misclassifying several classes as a few classes. This is also verified in an experiment.
- Based on the findings of model collapse, the authors propose a mechanism of detecting divergence of $\theta_t$ and adaptively selecting $\alpha_t$ and $\lambda_t$, which is well justified and verified.
- The authors provide an extensive set of experiments demonstrating their findings and the superiority of the proposed method, compared to the state of the arts: RoTTA and RDumb.
Weaknesses: - The paper focuses solely on recurring TTA scenarios, potentially overlooking other types of domain shifts such as non-cyclic domain shifts and label distribution shifts. The proposed method appears too specific to recurring TTA scenarios and may be prone to failures, particularly under label distribution shifts.
- The sensitivity of hyperparameter choices, particularly $\alpha_0$ and $\lambda_0$, is not studied. Additionally, there is insufficient justification for the hyperparameter choices of other algorithms. This raises concerns about the empirical study's claim of the proposed method's superiority.
- The proposed method requires access to the source dataset, as indicated in Equation 6.
Technical Quality: 3
Clarity: 3
Questions for Authors: The following questions include the concerns in the weakness.
- Can PeTTA work under scenarios of (i) non-cyclic domain shift; and (ii) label distribution shift?
- Can you provide further results on a longer time horizon than 20? I want to check whether the error accumulation is fully resolved by PeTTA.
- Is the performance of PeTTA sensitive to the choice of $\alpha_0$ and $\lambda_0$? If so, how can we select them in practice?
- How did you choose the hyperparameters of PeTTA for the experiment result? Were the hyperparameters of other algorithms tuned in the same way of PeTTA?
- Why there is no balancing hyperparameter in front of $\cal{L}_{AL}$?
- Is PeTTA runnable without the access of source dataset?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - In the current manuscript, the limitation of this work is not described in detail. It would be great if you can provide more specific description on the direction of further improvement.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comments on the Weakness Part:**
1. We would like to emphasize that recurring TTA serves as a *diagnostic tool* for catching the lifelong performance degradation of continual TTA, and even *in this simplest case, several SOTA continual TTA algorithms fail to preserve their performance*. This raises awareness in the community when evaluating their methods (visit Appendix D.2. for further discussion). Extending our recurring TTA to include more challenging scenarios and complex types of shifts is necessary, but should be addressed in the following work.
2. We have provided additional justifications and discussions on the choices of hyper-parameters for consideration. See the comments below.
3. See the comments below.
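As a side note on the recurring setting discussed in point 1, constructing such a diagnostic stream only requires repeating the single-period corruption sequence. A trivial sketch (the corruption names below are illustrative; the recurrence count of 20 is the paper's default):

```python
def recurring_stream(corruptions, num_recurrences=20):
    # Repeat one continual-TTA period (a fixed corruption sequence)
    # num_recurrences times to form a long-term diagnostic test stream.
    return corruptions * num_recurrences

stream = recurring_stream(["gaussian_noise", "fog", "snow", "brightness"])
```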
**Comments on the Questions Part:**
1. Yes, (i) our method can handle non-cyclic domain shift. PeTTA does not make any assumptions about or utilize this property of the testing stream. As an example, it achieves good performance on CCC [1], where the corruptions are algorithmically generated and non-cyclic, and two or more corruption types can happen simultaneously - see Appdx. F.4. Regarding (ii), for all experiments, the label distribution is temporally correlated (non-iid) following [2, 3]. The class distribution within each data batch can be highly imbalanced, with some classes dominating the others. The robustness of our PeTTA to label distribution shifts is demonstrated up to this extent.
2. Absolutely, PeTTA is evaluated with 40 recurrences in Tab. II and Fig. I(a). The experimental results confirm the persistence of PeTTA. Additionally, the performance of PeTTA over an extended time horizon is presented in Table 13 (Appendix F4). In this case, the model is adapted on over 5.1 million images, which is significantly more than the default 20 recurrences.
3. In PeTTA, $\alpha_0=1e^{-3}$ is the initial learning rate for adaptation. We do not tune this hyper-parameter, and the choice of $\alpha_0$ is universal across all datasets, following the previous works/compared methods (e.g., RoTTA, CoTTA).
Since $\lambda_0$ is more specific to PeTTA, we included a sensitivity analysis with different choices of $\lambda_0$ on CIFAR-10/100-C and ImageNet-C in Table VI in the interest of time. Overall, the choice of $\lambda_0$ is not extremely sensitive; while the best value is $1e^1$ on most datasets, other choices such as $5e^0$ or $5e^1$ also produce roughly similar performance. Selecting $\lambda_0$ is intuitive: a larger value of $\lambda_0$ more strongly prevents the model from collapsing but also limits its adaptability as a trade-off.
In practice, $\lambda_0$ is just an initial value and will be adaptively scaled by the model-divergence sensing mechanism in PeTTA, meaning it does not require careful tuning. More generally, this hyper-parameter can be tuned similarly to the hyper-parameters of other TTA approaches, via an additional validation set or an accuracy prediction algorithm [4] when labeled data is not available.
4. Except for the recurring testing condition, each recurrence follows the standard continual TTA setting established in previous studies. Hence, for all compared methods, we use the best parameters provided by the authors. The performance after the first visit is manually verified to ensure the reproducibility of the original work. Notably, the primary observation of this work is to determine how long these approaches can sustain their initial performance.
5. Since the purpose of the anchor loss is to guide the adaptation under drastic domain shift (Lines 684-691, Appendix E.2), we empirically found that it is unnecessary to introduce an additional hyper-parameter, letting the adaptive regularization term take the leading role in collapse prevention.
6. No, PeTTA requires sampling from the source dataset to perform. Nevertheless, Appdx. E4 demonstrates that only a small number of samples is required for reliably estimating the empirical mean and covariance matrix. PeTTA relies solely on the typical assumptions used in the other methods it is compared against (e.g., EATA, RMT). Please visit Appendix E.4 for our discussions on the feasibility of accessing the source dataset.
[1] Robust Test-Time Adaptation in Dynamic Scenarios, CVPR'23.
[2] RDumb: A simple approach that questions our progress in continual test-time adaptation, NeurIPS'23.
[3] Gong et al., NOTE: Robust Continual Test-time Adaptation Against Temporal Correlation, NeurIPS'22.
[4] Lee et al., AETTA: Label-Free Accuracy Estimation for Test-Time Adaptation, CVPR’24.
**Comments on the Limitations Part:**
We thank the reviewer for the suggestion. Appendix E elaborates some limitations mentioned in Section 6 in detail. The direction of further improvement will be included in the revised paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed responses and additional experiments, which address most of my major concerns. I would raise my score (5->6).
---
Reply to Comment 1.1.1:
Comment: We appreciate Reviewer oiWU for the feedback on our rebuttal. It's great to know that most of your major concerns about PeTTA's evaluation on extended testing scenarios beyond our proposed recurring TTA, as well as hyper-parameter selection, have been addressed.
---
Rebuttal 2:
Title: Action required: discussion phase
Comment: Dear reviewer oiWU,
the authors have posted their rebuttal. Could you please aim to reply to the authors latest till August 12th end of day, in case there are any further comments the authors would like to add/clarify?
Thanks again for your efforts in reviewing!
-AC | Summary: The authors propose a practical TTA scenario called recurring TTA and, within this scenario, propose a TTA methodology named Persistent TTA (PeTTA), which performs best as measured on various benchmarks.
Strengths: 1. The proposed recurring TTA scenario reflects the challenging and practical situation well.
2. PeTTA is a simple yet effective solution. Especially, PeTTA excels in collapse prevention.
Weaknesses: 1. I think Corollary 1 is not a rigorous condition for model collapse. While it guarantees the decrease of the distance ($d_t^{0 \rightarrow 1}$), the distance could converge to another value because $\lim_{t \rightarrow \tau}\epsilon_t$ is not always $p_1$. Just saying 'The model collapse happens when this condition holds for a sufficiently long period' is not proper.
2. I think the role of $\mathcal{L}\_\mathrm{AL}$ is similar to $\mathcal{R}(\theta')$, but $\mathcal{L}\_\mathrm{AL}$ is not controlled by $\lambda$. However, in Table 3, adding $\mathcal{L}\_\mathrm{AL}$ leads to significant performance improvement in several benchmarks. The authors should provide the difference between $\mathcal{L}\_\mathrm{AL}$ and $\mathcal{R}(\theta')$, and explain why they chose the current design.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The authors said EATA is the baseline. However, I couldn't find any results of EATA.
2. It seems that the images on the left and right in Figure 4(c) are switched.
3. How about applying only $\mathcal{L}\_\mathrm{AL}$ without $\lambda$ in Table 3?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comments on the Weaknesses Part:**
1. In Lemma 1, we mathematically showed that under Assumption 1, we have $\lim_{t \rightarrow \tau}\epsilon_t = p_1$. Furthermore, the convergence of $\epsilon_t$ to $p_1$, and the collapsing behavior when $\epsilon_t$ is selected following Corollary 1, are both empirically validated through a numerical simulation in Sec. 5.1. Since the rate of collapse depends on various factors, both data-dependent and algorithm-dependent, Corollary 1 holds here as a generic condition for model collapse. Nevertheless, we will further explore specific conditions/settings that can make this statement more rigorous in future work.
2. The motivation and reasoning for our design choices behind the anchor loss and the regularization term are detailed in lines 684-691 of Appendix E.2. While using the anchor loss is beneficial on many benchmarks, it is not sufficient on its own to achieve PeTTA's performance, as shown in Table I of the rebuttal PDF. In response to reviewer MNYK, we expanded our evaluation to include TRIBE (AAAI'24), a more recent robust TTA algorithm that also utilizes a concept similar to anchor loss. Despite demonstrating better adaptability, this method is still prone to model collapse over a longer time horizon, necessitating the exploration of additional strategies beyond a simple anchor loss.
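For concreteness, a minimal sketch of what an anchor-style loss could look like: a cross-entropy between a frozen anchor model's predicted probabilities and the adapted model's probabilities. This is an illustrative reconstruction under our own naming, not the paper's exact Eq. 8.

```python
import numpy as np

def anchor_loss(p_model, p_anchor, eps=1e-12):
    # Illustrative sketch: cross-entropy from a frozen anchor model's
    # predictive distribution to the adapted model's distribution,
    # averaged over the batch. Not the paper's exact formulation.
    return float(-np.sum(p_anchor * np.log(p_model + eps), axis=1).mean())
```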
**Comments on the Questions Part:**
1. All tables show the performance of MECTA with the EATA backbone (line 242). MECTA is an advanced version of EATA, with all components preserved and the batch normalization blocks replaced with MECTA blocks, showing higher efficacy. For completeness, experiments with a standalone EATA adapter are also included in Tables III-V of the rebuttal PDF. In short, *EATA still suffers from performance degradation in the recurring TTA setting, just like MECTA and other methods*.
2. Yes, thank you for catching that. This figure will be updated in the revised paper.
3. In Table I, we provided an additional ablation study where a baseline model is trained with and without the anchor loss (no regularization). The results show that while having initial benefits on some benchmarks, *trivially applying the anchor loss alone is incapable of eliminating* the lifelong performance degradation in continual TTA.
---
Rebuttal 2:
Comment: Thanks for the response. My concerns are addressed and I would raise my score.
---
Rebuttal Comment 2.1:
Comment: We appreciate reviewer rAs4 for the feedback. We're pleased that your concerns about our theoretical analysis (Corollary 1) and the role of the anchor loss $\mathcal{L}_{AL}$ have been resolved after the rebuttal. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful comments and valuable feedback on our work. During the rebuttal period, we have conducted extensive additional experiments: benchmarking the performance of EATA and the most recent continual TTA methods (ROID (WACV’24) and TRIBE (AAAI’24)). The persistence of PeTTA is further justified over a recurring TTA with 40 visits (twice as long as in previous experiments). Furthermore, we provided a discussion on the role of the anchor loss $\mathcal{L}_{AL}$ and a sensitivity analysis on the choice of $\lambda_0$ - a PeTTA hyper-parameter. Lastly, we offer a point-by-point response to each reviewer's comments below.
Pdf: /pdf/d5ed201082f8b5811522135a11cf01e6cc224d05.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Graph Convolutions Enrich the Self-Attention in Transformers! | Accept (poster) | Summary: To overcome oversmoothing effects in general transformer settings, the authors consider treating the attention module as a graph filter and propose using polynomial graph filter (graph signal processing) techniques to alleviate oversmoothing. Specifically, the authors treat the attention matrix induced from the features as an adjacency matrix and construct a filter that can exhibit both high-pass and low-pass behavior. As low-pass filters provide a smoothing tendency and high-pass filters provide an edge-preserving tendency, the authors leverage an adaptive filter to alleviate oversmoothing effects in the Transformer domain.
Strengths: 1. The idea is clear and easy to understand; the authors bring ideas from the GNN field to transformer models.
2. The high-pass and low-pass filters are analyzed, and the authors use an approximation of the polynomial filter.
3. The experiments cover many different applications of transformers.
Weaknesses: 1. The novelty is very limited. The idea of building adaptive high-pass/low-pass filters is well studied in the GNN field and has been leveraged in many works to solve oversmoothing in GNNs. The idea of the attention module as a graph filter has also been proposed in many Graph Transformer models. The main contribution of this paper is simply changing the direction from using attention as graph filters to using graph filters as attention, which is not very interesting.
Technical Quality: 3
Clarity: 3
Questions for Authors: In Theorem F.1, the description is not very clear. I assume "the sum $w_0 + w_1\sigma_i + w_K\sigma_i^{K}$ will be smaller than $g(\sigma_i)$" indicates that $g(\sigma_i)$ will become smaller as $i$ increases, but this depends on the scale of $w_1$ and $w_K$: even though $\sigma_i^{K} \rightarrow 0$ when $i$ is large, if the ratio $w_K/w_1$ is large, such a tendency can only be guaranteed asymptotically; for finite $i$, this doesn't always hold true. Please correct me if I am wrong.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer bzWT for the review and feedback, highlighting our strengths in
1. Clear idea applying GNN concepts to transformer models.
2. Analytical approach to high-pass and low-pass filters with polynomial approximation.
3. Extensive experiments across various transformer applications.
Below, we would like to address each of your questions and concerns:
**Response to W1: comparison with GNN and Graph Transformer**
While inspired by graph signal processing (GSP), we believe our contribution has significant novelty:
GFSA uniquely interprets dynamic self-attention as a directed graph shift operator, unlike conventional GNNs and graph Transformers that typically deal with static undirected adjacency matrices. This difference sets GFSA apart from works like Specformer [1] and Polyformer [2].
Polyformer combines a Graph Transformer with a graph filter and defines the graph filter using the adjacency matrix as a graph shift operator. It uses Transformers to learn the coefficients of polynomial expansion. Specformer uses the eigenvalues of the adjacency matrix as input to the Transformer to design a spectral GNN, but it doesn't alter the fundamental structure of Transformers, using them as encoder-decoders.
Our approach is based on the following equation:
$$\mathbf{y} = \mathbf{H}\mathbf{x} = \begin{cases}{\color{blue}{\sum_{k=0}^{K} w_k \mathbf{\bar{A}}^k\mathbf{x}}} = {\color{red}{\mathbf{V}^\intercal \big(\sum_{k=0}^{K} w_k\mathbf{\Lambda}^k \big) \mathbf{V}\mathbf{x}}} & \text{if} \ \mathbf{\bar{A}} \text{ is undirected} \\\\ \color{blue}{\sum_{k=0}^{K} w_k \mathbf{\bar{A}}^k\mathbf{x}} & \text{if} \ \mathbf{\bar{A}} \text{ is directed} \end{cases}$$
When implementing graph filters, the red spectral decomposition requires diagonalization of the graph shift operator matrix. In symmetric cases, such as undirected graphs, spectral decomposition is always guaranteed, allowing consistent application of the red method. However, in directed graphs, spectral decomposition is not guaranteed and thus cannot always be applied.
In contrast, the blue matrix-polynomial method does not require diagonalization. This enables consistent application of graph signal processing to both directed and undirected matrices. This approach is essential for self-attention matrices where diagonalization is impossible, making GFSA more broadly applicable and theoretically sound.
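The blue matrix-polynomial evaluation can be carried out with repeated matrix-vector products and no diagonalization, which is what makes it applicable to directed (non-symmetric) attention matrices. A minimal sketch (function name and the example weights are illustrative):

```python
import numpy as np

def poly_graph_filter(A, x, weights):
    # y = sum_k w_k A^k x, evaluated with iterated matrix-vector
    # products; no eigendecomposition of A is required, so A may be
    # a directed (non-symmetric) attention matrix.
    y = np.zeros_like(x, dtype=float)
    Ak_x = x.astype(float)  # holds A^k x, starting from A^0 x = x
    for w in weights:
        y += w * Ak_x
        Ak_x = A @ Ak_x
    return y
```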
Furthermore, we propose a method to approximate higher-order polynomial terms. This approximation is key to achieving performance gains without computational costs.
In conclusion, GFSA integrates GSP with Transformers, addressing non-diagonalizable self-attention matrices and offering a scalable model applicable across various domains.
> [1] Bo et al., "Specformer: Spectral Graph Neural Networks Meet Transformers." ICLR 2023
>
> [2] Jiahong et al., “PolyFormer: Scalable Node-wise Filters via Polynomial Graph Transformer”, KDD 2024
**Response to Q1: Theorem 3.1**
Thanks for your question. We have corrected the typo $g(\sigma_i) \rightarrow \sigma_i$ on line 696.
We agree that if the ratio $w_K/w_1$ is large, the filter may not always be low-pass. Theorem 3.1 proves that under specific coefficient conditions, the graph filter becomes either low-pass or high-pass. For example, if the coefficients are all positive and their sum is 1, the graph filter becomes a low-pass filter. Alternatively, if the coefficients are $w_k=(-\alpha)^k$ with sufficiently large $K$, the graph filter becomes a high-pass filter.
Additionally, we have revised Theorem 3.1 to address your concern:
**(Theorem 3.1 Filter characteristics based on coefficient values)**. Let $\bar{\mathbf{A}}$ be a self-attention matrix interpreted as a graph with connected components. Consider the polynomial graph filter defined by $\sum_{k=0}^K w_k \bar{\mathbf{A}}^k$, where $w_2, w_3, \ldots, w_{K-1} = 0$ and only $w_0$, $w_1$, and $w_K$ are non-zero. If the coefficients $w_k$ for $k=0,1,K$ are positive and their sum is 1, then the polynomial filter acts as a low-pass filter, attenuating high-frequency components and promoting smoothness across the graph. Conversely, if $w_k=(-\alpha)^k$ for $k=0,1,K$ with $\alpha \in (0,1)$ and sufficiently large $K$, the polynomial filter exhibits high-pass filter behavior.
**Proof.** For the case of low-pass filter, the graph filter acts as low-pass filter if $|g(\sigma_i)/g(\sigma_1)|<1$ for $\forall i \geq 2$ where $\sigma_i$ indicates the $i$-th singular value of $\bar{\mathbf{A}}$ . From the assumption that the coefficients $w_k$ for $k=0,1,K$ are positive and their sum is 1, we derive that
$\left|g(\sigma_1)\right| = \left|w_0 + w_1 + w_K\right| = 1$,
and
$\left| g(\sigma_i)\right| =\left| w_0 + w_1\sigma_i + w_K\sigma_i^K \right| < \left| w_0 + w_1\sigma_i + w_K\sigma_i \right| < \left| w_0 + w_1 + w_K \right| = 1$.
Therefore, as $|g(\sigma_i)/g(\sigma_1)|<1$, the graph filter acts as a low-pass filter.
For the case of the high-pass filter, the graph filter acts as a high-pass filter if $|g(\sigma_i)/g(\sigma_1)| > 1$ for all $i \geq 2$. From the assumption that the coefficients are $w_k=(-\alpha)^k$ for $k=0,1,K$ and $\alpha \in (0,1)$ with sufficiently large $K$, we derive that
$\left| \frac{\lim_{K \rightarrow \infty} g(\sigma_i)}{\lim_{K \rightarrow \infty} g(\sigma_1)} \right| = \left|\frac{\lim_{K \rightarrow \infty} w_0 + w_1 \sigma_i + w_K \sigma_i^K}{\lim_{K \rightarrow \infty} w_0 + w_1 + w_K} \right| =\left|\frac{\lim_{K \rightarrow \infty} -\alpha + \alpha^2 \sigma_i^2 + (-\alpha)^K \sigma_i^K}{\lim_{K \rightarrow \infty} -\alpha + \alpha^2 + (-\alpha)^K}\right| = \left| \frac{ -\alpha + \alpha^2 \sigma_i^2}{ -\alpha + \alpha^2 }\right| = \left| \frac{ -1 + \alpha\sigma_i^2}{ -1 + \alpha }\right| > 1$
Therefore, as $|g(\sigma_i)/g(\sigma_1)| >1$ for sufficiently large $K$, the graph filter acts as a high-pass filter.
This proof shows that the filter's behavior depends directly on the signs and values of the coefficients.
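As a numerical sanity check of the two coefficient regimes above, one can evaluate the filter response $g(\sigma) = w_0 + w_1\sigma + w_K\sigma^K$ directly on a set of singular values with $\sigma_1 = 1$. The sketch below is our own illustration; the singular values and coefficient choices are made up for the example, not taken from a trained model:

```python
import numpy as np

def g(sigma, w0, w1, wK, K):
    # Polynomial filter response g(s) = w0 + w1*s + wK*s^K.
    return w0 + w1 * sigma + wK * sigma**K

# Illustrative singular values of A-bar, with sigma_1 = 1.
sigmas = np.array([1.0, 0.9, 0.6, 0.3, 0.1])
K = 5

# Low-pass regime: positive coefficients summing to 1.
w0, w1, wK = 0.2, 0.5, 0.3
low = np.abs(g(sigmas, w0, w1, wK, K) / g(sigmas[0], w0, w1, wK, K))
assert np.all(low[1:] < 1.0)   # high-frequency components are attenuated

# High-pass regime: w_k = (-alpha)^k.
alpha = 0.5
w0, w1, wK = 1.0, -alpha, (-alpha) ** K
high = np.abs(g(sigmas, w0, w1, wK, K) / g(sigmas[0], w0, w1, wK, K))
assert np.all(high[1:] > 1.0)  # high-frequency components are amplified
```

With these illustrative values, every ratio $|g(\sigma_i)/g(\sigma_1)|$ for $i \geq 2$ falls below 1 in the first regime and above 1 in the second, matching the theorem's claimed behavior.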
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification on the matrix power part. While I believe that such matrix powers have been commonly utilized in early works on GNNs for heterophilic graphs (such as [1]), a Taylor approximation does provide an easy computational saving. However, since the self-attention matrix changes dynamically rather than being a static adjacency matrix, this adds a matrix power computation to each forward pass, which I believe is a non-trivial computational cost. I also believe that this part should be added to the main text rather than the appendix, as it confuses the reader's understanding of what the paper is trying to achieve.
Since the paper proposed to use matrix power directly as a filter for both high and low pass filter learning, it becomes important that Theorem 3.1 should be carefully inspected as it lays the theoretical foundation for the paper. For your revised theorem and proof, I have the following questions:
1. The proof for your high-pass filter case doesn't look right. Although I think the conclusion remains, in the third step, shouldn't it be $\frac{1-\alpha \sigma_i + (- \alpha)^K \sigma_i^K}{1 - \alpha+(- \alpha)^K}$?
2. The assumption for the low-pass filter requires the learnable coefficients to be constrained to sum to 1, and the assumption for the high-pass filter is even stricter. In general, what constraints can we impose in either the loss or the forward process to ensure such conditions? What guarantees that we can learn weights satisfying these assumptions?
3. Theorem 3.1 is applied to the exact $\bar{\mathbf{A}}^K$ case. While the authors have provided a bound on how close the approximated polynomial is to the exact case, it seems non-trivial to directly adapt the conclusion of the exact singular value analysis to the approximate case, especially since the assumption requires $K$ to be sufficiently large. I made a simple derivation following the revised proof, with the formula in Equation 10 replacing the $\bar{\mathbf{A}}^K$ polynomial. From what I observed, the final ratio is:
$|g(\sigma_i)/g(\sigma_1)| = \frac{\sigma (1- \sigma)}{1-\frac{K-1}{K-2}}$ as $K \to \infty$. This suggests that the final filter depends only on the singular value (or the learned directed normalized adjacency matrix), but is independent of the learnable $w_0, w_1, w_2$. If this is the case, then I would question the necessity of the learnable parameters, as this is similar to the red case, which alters the eigenvalue of the learned $\bar{\mathbf{A}}$ directly.
I am very concerned about the accuracy of this part, as it is a key novelty (at least theoretically) and the key reason why the method should work, and I am worried that the non-trivial adaptation from the exact case to the approximate case leads to an illusion that $w_0, w_1, w_K$ are the key.
[1] Abu-El-Haija, S., Perozzi, B., Kapoor, A., Alipourfard, N., Lerman, K., Harutyunyan, H., Steeg, G.V. & Galstyan, A.. (2019). MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing. ICLR.
---
Rebuttal 2:
Comment: Dear Reviewer bzWT,
Thank you for your comments on the computational complexity of GFSA and for your careful review and follow-up questions on Theorem 3.1 and its proof. We address your questions and concerns in two main parts.
---
**[Response to the computational complexity of GFSA]**
First, to address the computational overhead of GFSA, we have implemented a selective application strategy, as detailed in *Section 6 of* *our main text*. By applying GFSA only to even-numbered layers, we effectively reduce runtime increases while maintaining comparable performance to full-layer integration.
Moreover, we understand your concern about the added complexity in the context of Transformers. As you know, the original self-attention requires quadratic computational cost, which led to the development of efficient/linear Transformers with linear complexity in recent years. To address this, we want to highlight the compatibility of GFSA with efficient/linear Transformers.
As detailed in our response to `Reviewer k2id`, we have conducted experiments showing that:
1. GFSA can be effectively integrated with Efficient attention [1], one of the linear Transformers, maintaining its efficiency benefits while improving performance. Here are some key results:
| | ListOps (2K) | Image (1K) |
| --- | --- | --- |
| Efficient Attention | 36.9% / 49.2s / 0.57GB | 40.2% / 121.1s / 1.14GB |
| **Efficient Attention + GFSA** | **37.9%** / 53.8s / 0.67GB | **40.4%** / 135.8s / 1.33GB |
> Note: Results are shown as Accuracy (%) / Running time (s/1K-steps) / Peak training memory usage (GB)
2. There are potential efficiency improvements for GFSA itself. By designing $\mathbf{\bar{A}}$ as $\mathbf{\bar{A}}=\mathbf{Q}\mathbf{K}^\intercal$, we can compute $\mathbf{\bar{A}}^2\mathbf{V} = (\mathbf{Q}\mathbf{K}^\intercal)(\mathbf{Q}\mathbf{K}^\intercal)\mathbf{V}$ in a manner similar to the Efficient attention mechanism, maintaining linear complexity even for the squared operation in GFSA.
These findings show that GFSA with a linear Transformer can enhance the performance while preserving the efficiency benefit. For a more detailed discussion of GFSA compatibility with a linear/efficient Transformer, see our response to `Reviewer k2id`, "**4. Limitations (scalability and extendability)**".
> [1] Shen, Zhuoran, et al. "Efficient attention: Attention with linear complexities." WACV 2021.
---
Rebuttal 3:
Comment: **[Responses to Theorem 3.1 and its proof]**
**Response to Q1.**
You are correct, and we thank you for pointing this out. There was a typo in the third step of the high-pass filter proof. We apologize for any confusion this may have caused. To clarify, the corrected one is as follows:
$\left| \frac{\lim_{K \rightarrow \infty} g(\sigma_i)}{\lim_{K \rightarrow \infty} g(\sigma_1)} \right| = \left|\frac{\lim_{K \rightarrow \infty} w_0 + w_1 \sigma_i + w_K \sigma_i^K}{\lim_{K \rightarrow \infty} w_0 + w_1 + w_K} \right| =\left|\frac{\lim_{K \rightarrow \infty} 1 - \alpha \sigma_i + (-\alpha)^K \sigma_i^K}{\lim_{K \rightarrow \infty} 1 - \alpha + (-\alpha)^K}\right| = \left| \frac{ 1 -\alpha\sigma_i }{ 1 -\alpha }\right| > 1$
**Response to Q2.**
As you mentioned, Theorem 3.1 assumes that the coefficients have specific values, and the coefficient of the graph filter learned through deep learning cannot always satisfy those exact values. However, what we want to clarify through this theorem is that **the value of coefficients determines the characteristics of the graph filter.**
Our goal in learning the coefficients of GFSA is not to create an exact low-pass or high-pass filter, but to design a graph filter that can appropriately employ frequency information of various scales for downstream tasks. The theorem provides a theoretical foundation for understanding how the coefficients influence the behavior of the filter. In practice, we do not enforce strict constraints on the coefficients during training. Instead, we allow the model to learn the most appropriate coefficients for the task at hand, which may result in filters that combine both low-pass and high-pass characteristics to varying degrees.
**Response to Q3.**
Theorem 3.1 is indeed a proof for the exact $\mathbf{\bar{A}}^K$. However, contrary to your concern, **the characteristic of our GFSA still depends on the value of coefficients** $w_k$. To address your concerns, we extend the proof using the approximated $\mathbf{\bar{A}}^K$ used by our GFSA.
**Proof for low-pass filter:** We prove the low-pass filter result for GFSA. For the case where $w_0, w_1$, and $w_K$ are positive and their sum is 1, we prove that $\left| g(\sigma_i) \right| < 1$.
Since $\sigma_1 = 1$, we have:
$g(\sigma_1)=w_0+w_1\sigma_1+w_K(\sigma_1+(K-1)(\sigma_1^2-\sigma_1))=w_0+w_1+w_K=1$.
Hence, proving the low-pass filter result is equivalent to showing $\left| g(\sigma_i) \right| < 1$, and $g(\sigma_i)$ is bounded as follows:
$\left| g(\sigma_i)\right| =\left| w_0 + w_1\sigma_i + w_K(\sigma_i + (K-1) (\sigma_i^2-\sigma_i)) \right| \overset{(1)}{<} \left| w_0 + w_1\sigma_i + w_K\sigma_i \right| < \left| w_0 + w_1 + w_K \right| = 1$
This inequality (1) holds since $\left| \sigma_i + (K-1) (\sigma_i^2-\sigma_i) \right| = \left| \sigma_i((K-1)\sigma_i - (K-2)) \right| < \left|\sigma_i((K-1)-(K-2))\right| = \sigma_i$.
Therefore, we complete the proof for the low-pass filter.
**Proof for high-pass filter:** When $w_k = (−\alpha)^k/(k+1)$ where $k = 0, 1, K$ and $\alpha \in (0,1)$, we prove:
$\left| \frac{\lim_{K \rightarrow \infty} g(\sigma_i)}{\lim_{K \rightarrow \infty} g(\sigma_1)} \right| = \left|\frac{\lim_{K \rightarrow \infty} w_0 + w_1 \sigma_i + w_K (\sigma_i + (K-1) (\sigma_i^2-\sigma_i))}{\lim_{K \rightarrow \infty} w_0 + w_1 + w_K (1 + (K-1) (1-1))} \right| \\ =\left|\frac{\lim_{K \rightarrow \infty} 1 - \frac{\alpha}{2}\sigma_i + \frac{(-\alpha)^K}{(K+1)}(\sigma_i + (K-1) (\sigma_i^2-\sigma_i))}{\lim_{K \rightarrow \infty} 1 - \frac{\alpha}{2} + \frac{(-\alpha)^K}{(K+1)}}\right|$
$= \left|\frac{1 - \frac{\alpha}{2}\sigma_i + \big(\lim_{K \rightarrow \infty}\frac{(-\alpha)^K}{(K+1)}(\sigma_i + (K-1) (\sigma_i^2-\sigma_i))\big)}{1 - \frac{\alpha}{2}}\right| \\ \overset{(1)}{=}\left|\frac{1 - \frac{\alpha}{2}\sigma_i}{1 - \frac{\alpha}{2}}\right| > 1$
This equality (1) holds since $\lim_{K \rightarrow \infty}\frac{(-\alpha)^K}{(K+1)}(\sigma_i + (K-1) (\sigma_i^2-\sigma_i)) = \lim_{K \rightarrow \infty}\frac{(-\alpha)^K}{(K+1)}((K-1) (\sigma_i^2-\sigma_i)) = \lim_{K \rightarrow \infty}(-\alpha)^K\frac{(K-1)}{(K+1)}(\sigma_i^2-\sigma_i) = 0$.
Therefore, this shows that the graph filter acts as a high-pass filter.
---
We appreciate your careful analysis of our work. You provide an interesting perspective, which we will gladly add to the revised paper. Your comments have allowed us to focus Theorem 3.1 more closely on our GFSA, which has significantly improved our theoretical foundation.
However, we have difficulty following the final ratio you derived: it still contains $K$ and appears to diverge as $K$ approaches infinity. *If our response does not address your concerns, could you kindly provide us with your derivation?*
---
Rebuttal 4:
Comment: $\left| \sigma_i + (K-1) (\sigma_i^2-\sigma_i) \right| = \left| \sigma_i((K-1)\sigma_i - (K-2)) \right| < \left|\sigma_i((K-1)-(K-2))\right| = \sigma_i$ looks suspicious. When $K \to \infty$, I don't think you can make an inequality by shrinking $\sigma_i(K-1)$ to $(K-1)$; this is like claiming $2\infty < 3\infty$, which doesn't make sense. Plus, it is weird to see something that tends to $\infty$ be bounded by 1.
So I don't think the low-pass filter proof is correct. In fact, I think it means that the $w_2$ and $\sigma_i$ terms dominate the result. If you divide both the numerator and denominator by $K-1$, $w_0$ and $w_1$ do not seem to have any effect on the final convergence.
---
Rebuttal Comment 4.1:
Comment: Dear Reviewer bzWT,
Thanks for your thoughtful review.
However, contrary to your concern, the proof of the low-pass filter does not assume that $K$ approaches infinity. It only assumes that the coefficients are positive and that their sum is 1, which is why the inequality holds. The assumption that $K$ goes to infinity is applied only in the proof of the high-pass filter.
We hope our response addresses your concern. | Summary: This paper proposes a novel approach to enhance the self-attention mechanism in transformers by drawing on graph signal processing principles. The authors reframe the standard self-attention as a low-pass graph filter and introduce a more generalized filter (GFSA) capable of capturing low-pass, high-pass, or combined filtering behaviors. They evaluate the effectiveness of GFSA across diverse tasks, showcasing its potential to improve performance.
Strengths: - The core idea is easily grasped and demonstrates promising results across various transformer variants in different domains.
- The authors evaluate GFSA on a wide range of tasks spanning natural language processing, computer vision, speech recognition, and graph regression, highlighting its versatility.
Weaknesses: - While the oversmoothing problem motivates the proposed method, the experiments lack a focused analysis of its impact on transformers with varying depths (number of layers).
- While the paper conducted experiments on a variety of tasks in different fields to show the versatility of the proposed method, not studying each task in depth could also be a potential weakness. In particular, for the graph regression task, the authors only considered two datasets and a single baseline model (and the improvement over the larger model on PCQMv2 is marginal: 0.860 vs. 0.862). Stronger baselines such as [GPS](https://arxiv.org/abs/2205.12454) and additional datasets would provide a more rigorous assessment of GFSA's practical value. Additionally, the choice of baselines in other tasks might suffer from the same weakness, as I believe most of them are relatively old methods. Finally, their performance statement in Fig. 1 may be misleading to practitioners without acknowledging the limitations of their selected baselines.
- Given the abundance of research on oversmoothing in transformers, the paper should include comparisons with more recent and relevant methods (e.g., [64]) to strengthen its claims. Currently, the paper only compares to very few comparison partners (ContraNorm [27] and [73]) or even without any comparison in some tasks.
- The paper states that the oversmoothing issue is due to the low-pass filter nature of the self-attention. By introducing a 2nd order term in Eq.10, they claim that their proposed method can learn both low-pass and high-pass filters, and even combined filters. To validate this empirically, a deeper analysis of the learned coefficients $w_0,w_1,w_K$ across different layers would provide concrete evidence for the GFSA's ability to learn diverse filter types.
Technical Quality: 1
Clarity: 3
Questions for Authors: - L205: for the self-attention with the skip connection, shouldn't $w_0$ be positive?
- In the NLP experiments, did the pretrained models use the standard self-attention? If so, does it make sense to replace it with GFSA only during fine-tuning?
- The paper only discusses the runtime overheads. Does GFSA introduce any potential memory overheads?
Confidence: 4
Soundness: 1
Presentation: 3
Contribution: 2
Limitations: The paper discussed the additional runtime overhead of GFSA compared to the original self-attention. However, other potential limitations are not discussed, such as potential memory overheads, scalability of GFSA to long sequences, or extendability to efficient/linear transformers.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer k2id for the review and feedback, highlighting our strengths in
1. Easily understandable core idea with promising results across diverse transformer variants.
2. Versatile performance demonstrated across a wide range of tasks in multiple domains.
Below, we would like to address each of your questions and concerns:
**Response to W1: varying depth**
We would like to highlight the comprehensive layer-wise analysis we have conducted.
Fig. 2 compares DeiT and DeiT+GFSA across all 24 layers. The cosine similarity plot (Fig. 2b) shows how GFSA mitigates oversmoothing as depth increases. DeiT shows increasing cosine similarity in deeper layers, indicating oversmoothing, while DeiT+GFSA maintains lower, more stable similarity across layers.
Similar results are observed for BERT on STS-B (Fig. 3) and Graphormer on ZINC (Fig. 4).
The singular value distributions (Fig. 2c, 3c, 4c) further support our findings.
This layer-wise analysis provides strong evidence of GFSA's effectiveness in addressing oversmoothing across the full depth of the studied models.
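For concreteness, the cosine-similarity metric underlying this layer-wise analysis can be sketched as follows. This is our own minimal implementation, not the authors' code; `X` holds one layer's token representations:

```python
import numpy as np

def mean_pairwise_cosine(X):
    """Mean pairwise cosine similarity over token representations X (n_tokens, d).

    Values close to 1 indicate near-identical tokens, i.e., oversmoothing.
    """
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T                                    # (n, n) cosine matrix
    n = X.shape[0]
    return (S.sum() - np.trace(S)) / (n * (n - 1))   # average over i != j

# Identical tokens give similarity ~1 (fully oversmoothed),
# while random tokens give similarity near 0.
oversmoothed = mean_pairwise_cosine(np.ones((16, 8)))
random_tokens = mean_pairwise_cosine(np.random.default_rng(0).standard_normal((16, 8)))
assert oversmoothed > random_tokens
```

Tracking this scalar per layer reproduces the kind of depth-wise curve described above: a value creeping toward 1 in deeper layers signals oversmoothing.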
**Response to W2: weak baseline**
To provide a rigorous assessment of GFSA's practical value, we have conducted additional experiments applying GFSA to SOTA models that use self-attention: GPS [1] and Graph-ViT [2], evaluated on LRGB [3] and Benchmarking GNNs [4]. For experimental results, we refer the reader to Table 1 and Table 2 in the PDF file of the general response.
For GPS, we replace its self-attention module with our GFSA while maintaining its best configuration and other hyperparameters. For Graph-ViT, we apply GFSA to the Hadamard self-attention method. These experiments follow the settings used in both papers.
Our new results show consistent improvements across various datasets. These additional experiments show the versatility and effectiveness of GFSA beyond our initial studies.
> [1] Rampášek et al. "Recipe for a general, powerful, scalable graph transformer." NeurIPS 2022
>
> [2] He et al. "A generalization of vit/mlp-mixer to graphs." ICML 2023
>
> [3] Dwivedi et al. "Long range graph benchmark." NeurIPS 2022
>
> [4] Dwivedi et al. "Benchmarking graph neural networks." JMLR
**Response to W3: comparison with other recent and relevant methods**
We would like to highlight the analysis already present in our paper.
As shown in Appendix Table 13, we have included comparisons with several SOTA methods. Notably, we have compared GFSA with NeuTRENO, a recent method designed to address oversmoothing. Our GFSA outperforms NeuTRENO while maintaining a similar parameter count to the original DeiT-S.
Table 13 not only includes models addressing oversmoothing but also other recent advanced models such as SpectFormer and SVT. This provides a broader context for GFSA's performance across different types of SOTA approaches.
Regarding [64], we were unable to include a direct comparison since the code is not public. However, we believe our current comparisons, which include both oversmoothing-specific methods and other recent innovations, provide a comprehensive evaluation of GFSA's effectiveness.
**Response to W4: learned filter analysis**
We have addressed your suggestion by visualizing the frequency responses, which directly represent the impact of learned coefficients, for all 12 layers of BERT with and without GFSA. These visualizations are now included in a PDF file of the general response.
Our analysis reveals:
1. GFSA learns diverse filter types across layers, evolving from low-pass in early layers to a mix of low and high-pass in middle layers and shifting towards higher-frequency responses in deeper layers.
2. BERT+GFSA consistently shows higher magnitude responses at higher frequencies compared to vanilla BERT, especially in deeper layers.
3. While vanilla self-attention functions primarily as a low-pass filter, GFSA utilizes a wider range of frequencies.
These findings provide evidence for GFSA's effectiveness in mitigating oversmoothing and preserving high-frequency information. We thank the reviewer for this valuable suggestion, which has enhanced the depth and rigor of our paper's contribution.
**Response to Q1: skip-connection**
We appreciate your careful observation regarding self-attention with skip connections. In standard Transformers, the residual term is positive. However, in our GFSA, $w_0$ serves a slightly different purpose. $w_0$ adjusts the weight of the identity matrix within each attention head, controlling how much of the original input should be preserved. Unlike traditional skip connections, $w_0$ can be learned, set to a fixed value, or even set to 0, depending on task requirements. When learned, $w_0$ provides flexibility to adapt the residual-like connection based on the task. When fixed to a positive value, it can more closely mimic traditional residual connections.
**Response to Q2: fine-tuning PLM with different architecture**
Thank you for your question about GFSA implementation in our NLP experiments.
We replaced standard self-attention with GFSA during the fine-tuning phase of pre-trained models. This approach aligns with recent practices in the field, as seen in works that modify pre-trained model structures during fine-tuning [1,2,3].
Our approach offers flexibility in coefficient initialization and learning. One option is to initialize $w_0$ and $w_K$ to 0 (see Appendix A), allowing them to be learned during fine-tuning. This strategy provides a smooth transition from standard self-attention to GFSA, enabling task-specific adaptation.
> [1] Fine-tune BERT with Sparse Self-Attention Mechanism, EMNLP-IJCNLP 2019
>
> [2] Fast transformers with clustered attention, NeurIPS 2020
>
> [3] Enhancing Self-Attention with Knowledge-Assisted Attention Maps, ACL 2022
**Response to Q3: memory overheads**
Please refer to our **Response to Q1** of Reviewer XKs2.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. However, some of my concerns are still not fully addressed.
- W1: to clarify my point, my concern is whether oversmoothing really correlates with the transformer's performance. So a more fair setup would be to measure the performance of a transformer model with varying depth and compare it with GFSA (for each layer number).
- W2: thank you for the additional results. However, the improvement is not statistically significant, as it is most of the time within the confidence interval of the baseline method.
- W3: thank you for pointing me to the relevant tables. I don't think putting important results in the Appendix is a good practice. Furthermore, after checking the results, I found only one additional baseline (DeiT-S + NeuTRENO) conducted on an image classification task, and the improvement is again very marginal. I believe a more systematic comparison of the proposed method and more recent baselines on all considered tasks would largely strengthen the conclusions of this work.
- Unfortunately, you did not address my points mentioned in the **Limitations**, particularly the scalability of GFSA to long sequences, or extendability to efficient/linear transformers. I believe these points are essential to assess the practical impact of this work.
---
Rebuttal 2:
Comment: Dear Reviewer k2id,
We would like to thank the reviewers for carefully analyzing our work and raising concerns that remain unresolved. We address the 4 concerns you raised below:
---
**1. Regarding oversmoothing correlation with Transformer performance:**
We would like to highlight that we have reported such an analysis, which provides evidence for the effectiveness in mitigating oversmoothing and improving performance across different depths.
As shown in Table 14, while DeiT can potentially improve with 24 layers due to increased parameters, our cosine similarity analysis (See Fig.2 (a)) shows that oversmoothing at deeper depths could limit performance gains. This is where GFSA shows its effectiveness. GFSA allows the DeiT to better use the increased capacity of deeper layers. For example, GFSA improves DeiT-S by 2.54 points at 12 layers and 1.46 points at 24 layers. GFSA also shows larger improvements over ContraNorm at deeper depths.
For convenience, we re-present the results of DeiT-S in Table 14 below:
| #Layers | 12 | 16 | 24 |
| --- | --- | --- | --- |
| DeiT-S | 77.32 | 78.25 | 77.69 |
| DeiT-S + ContraNorm | 77.80 | 79.04 | 78.67 |
| **DeiT-S + GFSA** | **79.86** | **80.83** | **79.15** |
This depth-wise analysis, combined with our previous layer-wise behavior studies (e.g., cosine similarity), provides our empirical results of GFSA in mitigating oversmoothing and improving performance in varying model depths.
---
**2. Statistical significance of improvements:**
While the improvements in CIFAR-10 (p = 0.184, paired t-test comparing GraphGPS and GraphGPS+GFSA) and MNIST (p = 0.108, paired t-test) may not be statistically significant, Cohen's d values of 0.455 and 0.565 respectively indicate medium effect sizes. These medium effect sizes suggest that ***our improvements are still meaningful*** in practice. As [1] argues, effect size can reveal practical significance that p-values alone might miss.
Note that statistical testing was only feasible with GraphGPS, which provided results from 10 runs, unlike Graph-ViT.
For Graph-ViT, it is significant that GFSA can support Graph-ViT to outperform the Graph-MLP-Mixer on the MolTOX21 dataset, as shown below.
| Method (GINE as a graph encoder) | MolTOX21 |
| --- | --- |
| Graph-MLP-Mixer | 0.7868 ± 0.0043 |
| Graph-ViT | 0.7851 ± 0.0077 |
| **Graph-ViT + GFSA** | **0.7895 ± 0.0069** |
In this context, we hope this demonstrates that our GFSA can be effective on broader graph benchmark datasets.
> [1] Sullivan, Gail M., and Richard Feinn. “Using effect size—or why the P value is not enough.” Journal of Graduate Medical Education 4.3 (2012): 279-282.
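For reference, the paired-samples Cohen's d cited above (mean difference divided by the standard deviation of the differences) can be computed as in the sketch below. The per-run accuracies are hypothetical placeholders we invented for illustration, not the paper's measurements:

```python
import numpy as np

# Hypothetical per-seed accuracies over 10 paired runs (illustrative only).
baseline  = np.array([77.1, 77.4, 77.3, 77.0, 77.5, 77.2, 77.6, 77.1, 77.3, 77.4])
with_gfsa = np.array([77.4, 77.5, 77.7, 77.2, 77.6, 77.7, 77.8, 77.4, 77.4, 77.6])

diff = with_gfsa - baseline
n = diff.size

# Paired t statistic; turning it into a p-value additionally requires
# the t distribution's CDF (e.g., scipy.stats.ttest_rel).
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))

# Cohen's d for paired samples: mean difference over std of differences.
cohens_d = diff.mean() / diff.std(ddof=1)
```

As the rebuttal notes, the two quantities answer different questions: the t statistic (and its p-value) measures whether a difference is detectable given the noise, while Cohen's d measures how large the difference is relative to that noise.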
---
**3. Comparison with recent baselines:**
We thank you for your suggestion of a more systematic comparison. To address this, we would like to emphasize that our full Table 12 in the Appendix includes a comprehensive comparison of models aiming to improve upon DeiT, including the most relevant baselines we could fairly compare with our approach.
To strengthen our comparison, we have now included PRepBN [2] and GTP-ViT [3], which were not initially included in our analysis. To facilitate a clear comparison with our 12-layer DeiT model, we provide a concise overview of GFSA's performance relative to the most relevant and recent baselines, including PRepBN. We believe this addition offers a more systematic evaluation on image classification.
| Method | Top-1 Acc |
| --- | --- |
| DeiT-S (12-layers) | 79.8 |
| + LayerScale | 80.5 |
| + LateInsertion | 80.5 |
| + ClassAttention | 80.6 |
| + AttnScale | 80.7 |
| + FeatScale | 80.9 |
| + HAT | 80.9 |
| + Diverse | 80.6 |
| + ContraNorm | 80.4 |
| + NeuTRENO | 80.7 |
| + *PRepBN* [2] | *80.2* |
| + *GTP* [3] | *79.5* |
| **+ GFSA** | **81.1** |
> [2] Guo, Jialong, et al. "SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-parameterized Batch Normalization." ICML 2024.
>
> [3] Xu, Xuwei, et al. "GTP-ViT: Efficient Vision Transformers via Graph-based Token Propagation." WACV 2024.
---
***continued in the next comment***
---
Rebuttal 3:
Comment: **4. Limitations (scalability and extendability):**
We apologize for not fully addressing your concerns about limitations earlier and appreciate the opportunity to clarify these important points.
We conducted additional experiments using the Long Range Arena benchmark[4], which is specifically designed for evaluating performance on long-range dependencies. We tested GFSA on ListOps (2K sequence length) and Image (1K sequence length) datasets using standard Transformers and Efficient Attention [9] models as backbones.
Regarding **scalability** to long sequences, GFSA shows improved performance on these long-sequence tasks. For the Image dataset, Transformer+GFSA outperforms Transformer and some linear Transformers such as Linformer[5], Longformer[6], and YOSO-E[7].
Concerning the **extendability** of efficient/linear Transformers, we successfully integrated GFSA with Efficient Attention[9], demonstrating its compatibility. Efficient Attention+GFSA shows performance gains over Efficient Attention. Importantly, this integration maintains the efficiency benefits of linear transformers. For instance, on the Image dataset, Efficient Attention+GFSA uses only 1.33GB memory and 135.8s/1K-steps, compared to 11.20GB and 737.2s/1K-steps for Transformer+GFSA.
Furthermore, we would like to highlight the potential for efficiency improvements in GFSA. Recent linear-complexity Transformers [8,9] propose an approximated self-attention by replacing the softmax operation and rearranging the matrix multiplication as $\mathbf{Q(K^\intercal V)}$. In GFSA, the main computational cost comes from calculating $\mathbf{\bar{A}}^2$. However, if we design $\mathbf{\bar{A}}$ as $\mathbf{\bar{A}}=\mathbf{Q}\mathbf{K}^\intercal$, then $\mathbf{\bar{A}}^2\mathbf{V} = (\mathbf{Q}\mathbf{K}^\intercal)(\mathbf{Q}\mathbf{K}^\intercal)\mathbf{V}$, which allows us to maintain linear complexity even for the squared operation in GFSA by computing it in the same manner as these efficient attention mechanisms. Therefore, the Taylor approximation of the high-order filter in GFSA can be computed efficiently.
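The reassociation described above can be illustrated with a small numerical sketch (our own illustration with random matrices and no softmax, as in the approximation the authors mention; the shapes and names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 512, 16                        # sequence length n >> head dimension d
Q = rng.standard_normal((n, d))
K = rng.standard_normal((n, d))
V = rng.standard_normal((n, d))

# Naive evaluation: materialize the n x n matrix A = Q K^T, then square it.
A = Q @ K.T
naive = A @ A @ V                     # O(n^2 d) time, O(n^2) memory

# Reassociated evaluation: (Q K^T)(Q K^T) V = Q (K^T Q) (K^T V),
# where K^T Q and K^T V are only d x d.
linear = Q @ ((K.T @ Q) @ (K.T @ V))  # O(n d^2) time, O(n d) memory

assert np.allclose(naive, linear)
```

Because the intermediate products are $d \times d$, the squared term never requires forming the $n \times n$ matrix, which is the point of the linear-complexity claim.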
These results show that our GFSA is scalable to longer sequences and can be effectively extended to efficient/linear transformers. The improvements for longer sequence lengths, along with the efficiency maintained when combined with linear transformers, address concerns about the practical impact of the task.
- Table: Accuracy (%)
| | ListOps (2K) | Image (1K) |
| --- | --- | --- |
| Transformer | 37.1 | 38.2 |
| **Transformer + GFSA** | **37.6** | **40.2** |
| Linformer | 37.3 | 37.8 |
| Longformer | 37.2 | 39.1 |
| YOSO-E | 37.3 | 39.8 |
| Efficient Attention | 36.9 | 40.2 |
| **Efficient Attention + GFSA** | **37.9** | **40.4** |
- Table: Running time (s/1K-steps) and the peak training memory usage (GB)
| | ListOps (2K) | Image (1K) |
| --- | --- | --- |
| Transformer | 198.3/5.50 | 345.1/5.88 |
| **Transformer + GFSA** | 635.8/10.87 | 737.2/11.20 |
| Linformer | 63.4/1.73 | 158.5/3.45 |
| Efficient Attention | 49.2/0.57 | 121.1/1.14 |
| **Efficient Attention + GFSA** | 53.8/0.67 | 135.8/1.33 |
> [4] Tay, Yi, et al. "Long range arena: A benchmark for efficient transformers." *arXiv preprint arXiv:2011.04006* (2020).
>
> [5] Wang, Sinong, et al. "Linformer: Self-attention with linear complexity." *arXiv preprint arXiv:2006.04768* (2020).
>
> [6] Beltagy, Iz, Matthew E. Peters, and Arman Cohan. "Longformer: The long-document transformer." *arXiv preprint arXiv:2004.05150* (2020).
>
> [7] Zeng, Zhanpeng, et al. "You only sample (almost) once: Linear cost self-attention via bernoulli sampling." ICML 2021.
>
> [8] Katharopoulos, Angelos, et al. "Transformers are rnns: Fast autoregressive transformers with linear attention." ICML 2020.
>
> [9] Shen, Zhuoran, et al. "Efficient attention: Attention with linear complexities." WACV 2021.
---
We hope that our additional responses will address your unresolved concerns.
Best regards,
GFSA authors
---
Rebuttal Comment 3.1:
Comment: Dear Reviewer k2id,
As the Reviewer-Author discussion period ends in less than 24 hours, we wanted to reach out to you to see if you are happy with our responses.
We have made efforts to address the concerns you raised, particularly:
1. Providing analysis on oversmoothing correlation with DeiT accuracy
2. Clarifying the statistical significance of our improvements
3. Expanding our comparison with recent baselines
4. Addressing limitations in scalability and extendability
We believe that your comments have allowed us to improve our paper. We would appreciate your thoughts on whether our extended responses and additional experiments have satisfactorily addressed your concerns. If there are any remaining issues or points that require further clarification, please let us know.
Thank you again for your time and dedication in reviewing our work.
Kind regards,
GFSA Authors
---
Reply to Comment 3.1.1:
Comment: Dear Reviewer k2id,
We would like to thank the reviewer once again for taking the time to provide us with valuable feedback, which has enabled us to strengthen our paper with new experiments and clarifications during this rebuttal period.
As the rebuttal period is nearing its end, we would like to inquire as to whether our additional responses adequately addressed the concerns you raised.
Furthermore, we would like to express our gratitude for your time and efforts during this rebuttal period. We hope that our responses will allow the reviewer to consider a fresh evaluation of our work.
Best regards,
GFSA Authors | Summary: This work proposes to enhance self-attention by considering the high-order power of the attention matrix inspired by graph convolution networks and graph signal processing, to overcome the oversmoothing issues of transformers.
To reduce the computational overhead, the authors propose to use a first-order Taylor approximation to the higher-order power of the attention matrix.
With the slightly larger complexity, the proposed technique can improve the performance of Transformers in various fields, across computer vision, natural language processing, etc.
Strengths: 1. This paper is well written.
2. The comprehensive empirical experiments well support the advantage of the proposed method and the statement of the work.
Weaknesses: 1. The notation in the equation on Line 99 is confusing (the "exponential" in the self-attention matrix).
2. The Taylor approximation will actually hurt the capacity of the graph filters significantly, degenerating to the power of 2, regardless of the value of $K$.
Eq. (10) degenerates to $\hat{\omega}_0 \boldsymbol{I} + \hat{\omega}_1 \bar{\boldsymbol{A}} + \hat{\omega}_2 \bar{\boldsymbol{A}}^2$ for all $K$, where $\hat{\omega}_0, \hat{\omega}_1, \hat{\omega}_2$ are learnable coefficients. Therefore, this technique only covers the power of $2$, and the Taylor approximation actually does not contribute.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Besides the runtime, can you also provide the GPU memory consumption comparisons between GFSAs and the base models?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: No specific aware.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer XKs2 for the review and positive feedback, highlighting our strengths in
1. Well-written paper.
2. Comprehensive experiments strongly support the advantage of GFSA.
Below, we would like to address each of your questions and concerns:
**Response to W1: confusing notation**
Thank you for pointing out the notation in the equation on Line 99. We have updated the paper to clarify the exponential notation in the self-attention matrix, which should resolve any confusion.
**Response to W2: Taylor approximation**
We appreciate your careful consideration of our method. We respectfully disagree with the view that our approach degenerates to only second-order terms and would like to provide a more detailed explanation of our Taylor approximation approach.
The Taylor approximation in our work is specifically used to estimate higher-order terms ($\bar{\boldsymbol{A}}^K$ for $K > 2$) efficiently. It is not intended to limit the model to second-order interactions but rather to capture the essence of higher-order terms without incurring the full computational cost. As detailed in Eqs. (6)-(8) of our paper, we use a first-order Taylor approximation at the point $a = 1$, combined with a forward finite difference method to approximate the derivative. This results in the approximation $\bar{\boldsymbol{A}}^K \approx \bar{\boldsymbol{A}} + (K-1)(\bar{\boldsymbol{A}}^2 - \bar{\boldsymbol{A}})$. This formulation includes both the $\bar{\boldsymbol{A}}$ and $\bar{\boldsymbol{A}}^2$ terms, but importantly, it scales with $K$, allowing it to capture aspects of higher-order interactions.
In Eq. (10), our GFSA filter is defined as $H_{GFSA} = w_0 \boldsymbol{I} + w_1 \bar{\boldsymbol{A}} + w_K(\bar{\boldsymbol{A}} + (K-1)(\bar{\boldsymbol{A}}^2 - \bar{\boldsymbol{A}}))$. The learnable coefficients $w_0$, $w_1$, and $w_K$ provide the model with the flexibility to balance the contributions of different order terms. Notably, $w_K$ scales the approximated higher-order term, allowing the model to adjust its influence based on the task requirements. Tables 8, 10, and 11 show improved performance as $K$ increases. This improvement would not be observed if the model were limited to second-order interactions.
This approximation allows us to capture higher-order effects without the full computational burden of calculating $\bar{\boldsymbol{A}}^K$ directly for large $K$, which would be expensive for many practical applications. While our approach does use a Taylor approximation to estimate higher-order terms, it is not equivalent to a simple second-order polynomial. The approximation retains a dependency on $K$, allowing it to capture aspects of higher-order interactions in a computationally efficient manner.
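The approximation and the resulting filter can be sketched numerically. Below is a minimal NumPy check (the matrix size, random attention matrix, and coefficient values are illustrative assumptions) that the first-order expansion at $a = 1$ with a forward finite difference reduces to $\bar{\boldsymbol{A}} + (K-1)(\bar{\boldsymbol{A}}^2 - \bar{\boldsymbol{A}})$, is exact for $K \le 2$, and still varies with $K$ for $K > 2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Illustrative row-stochastic "attention" matrix (softmax of random logits).
logits = rng.normal(size=(n, n))
A = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

def approx_power(A, K):
    # First-order Taylor approximation of A^K around a = 1, with the
    # derivative estimated by a forward finite difference:
    #   A^K ~= A + (K - 1) * (A^2 - A)
    return A + (K - 1) * (A @ A - A)

def gfsa_filter(A, w0, w1, wK, K):
    # Eq. (10)-style filter built from I, A, and the approximated A^K.
    return w0 * np.eye(A.shape[0]) + w1 * A + wK * approx_power(A, K)

H = gfsa_filter(A, w0=0.1, w1=0.5, wK=0.4, K=5)
```

Note that only $\boldsymbol{I}$, $\bar{\boldsymbol{A}}$, and $\bar{\boldsymbol{A}}^2$ are ever computed, while the $(K-1)$ factor keeps the filter dependent on the chosen order.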
**Response to Q1: GPU Memory**
Thank you for your question regarding GPU memory consumption. We have measured GPU memory usage (GB) during training for image classification and natural language understanding tasks. The results are presented in the table below.
The increase in GPU memory consumption can be attributed to the calculation of $\bar{\boldsymbol{A}}^2$ in our GFSA method. However, it's crucial to consider this increase in the context of GFSA's overall impact on model performance and efficiency. GFSA adds only about 100 parameters to the model, yet yields improvements across various tasks.
Moreover, our 12-layer DeiT+GFSA outperforms 24-layer DeiT, which has significant implications for computational efficiency. As detailed in Table 20, 24-layer DeiT requires approximately 50 minutes per epoch, while 12-layer DeiT+GFSA needs only about 30 minutes per epoch yet achieves higher accuracy. This demonstrates that GFSA can achieve better results with lower overall memory requirements and reduced runtime compared to simply increasing the number of layers in the base model.
Considering these factors - the performance improvements, the minimal increase in parameters, and the potential for reduced runtime - we believe the additional GPU memory usage is acceptable.
| Model | 12-Layer | 24-Layer |
| --- | --- | --- |
| DeiT-S | 1.6 | 3.2 |
| DeiT-S+GFSA | 2.1 | 4.0 |
| Model | CoLA | SST2 | MRPC | QQP | STSB | MNLI | QNLI | RTE | Avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT$_{BASE}$ | 2.31 | 2.73 | 3.73 | 4.42 | 4.33 | 4.45 | 4.45 | 4.45 | 3.86 |
| BERT$_{BASE}$ + GFSA | 2.39 | 2.90 | 4.13 | 5.00 | 4.86 | 4.98 | 5.00 | 4.98 | 4.28 |
| ALBERT$_{BASE}$ | 1.69 | 2.89 | 4.43 | 4.56 | 4.55 | 4.57 | 4.57 | 4.57 | 3.98 |
| ALBERT$_{BASE}$ + GFSA| 1.77 | 3.15 | 5.01 | 5.14 | 5.12 | 5.14 | 5.14 | 5.14 | 4.45 |
| RoBERTa$_{BASE}$ | 2.48 | 2.96 | 3.91 | 4.60 | 4.49 | 4.59 | 4.60 | 4.62 | 4.03 |
| RoBERTa$_{BASE}$ + GFSA | 2.55 | 3.13 | 4.30 | 5.19 | 5.05 | 5.17 | 5.20 | 5.15 | 4.47 |
---
Rebuttal 2:
Comment: Dear reviewer XKs2,
We would like to provide additional clarification to address your concern about our Taylor approximation method.
To further demonstrate that our GFSA does not degenerate to second-order terms and that the Taylor approximation contributes meaningfully, we would like to emphasize the empirical result we have already provided in our paper, specifically in Appendix S, Table 37.
For your convenience, as shown in Table 37 of the paper here:
| Datasets | #Params | CoLA | SST-2 | MRPC | QQP | STS-B | MNLI-m/mm | QNLI | RTE | Avg |
|----------|---------|------|-------|------|-----|-------|-----------|------|-----|-----|
| BERT_BASE | 110M | 56.79 | 93.81 | 88.70 | 88.32 | 88.16 | 84.96/84.15 | 91.63 | 66.06 | 82.51 |
| + GFSA (approximated $\mathbf{\bar{A}}^K$) | 110M | 59.56 | 94.15 | 90.60 | 88.46 | 88.33 | 85.12/85.06 | 91.95 | 68.95 | 83.58 |
| + GFSA (actual $\mathbf{\bar{A}}^K$) | 110M | 59.85 | 94.27 | 89.80 | 88.43 | 88.32 | 84.95/84.89 | 91.76 | 68.23 | 83.39 |
This data clearly shows that our GFSA method with the approximated $\mathbf{\bar{A}}^K$ achieves performance very close to that of using the actual $\mathbf{\bar{A}}^K$. The average performance across all GLUE tasks is higher for the approximated $\mathbf{\bar{A}}^K$ compared to the actual $\mathbf{\bar{A}}^K$.
These results address the reviewer's concern that our approach degenerates to only second-order terms. If this were the case, we would expect to see a significant performance drop when using the approximated $\mathbf{\bar{A}}^K$ compared to the actual $\mathbf{\bar{A}}^K$. Instead, we observe comparable or even slightly improved performance. Additionally, the approximation allows us to balance computational efficiency with model expressiveness.
We believe this empirical evidence, along with our previous response, addresses your concern about the effectiveness of our Taylor approximation approach. Please let us know if you have any remaining concerns, and we will happily respond. Thank you!
Best regards,
GFSA authors
---
Rebuttal Comment 2.1:
Comment: Thank you for the rebuttal.
The rebuttal addresses most of my concerns.
One thing I am not certain about is the gain and complexity trade-off. The proposed method consistently shows improvements on various tasks, with a larger computational complexity $O(N^3)$.
In practice, based on runtime and memory provided, the extra computational cost is acceptable. However, the performance improvement is not remarkable either.
Considering my current score, I tend to retain the current score for now and would like to decide whether to raise the score during the reviewer and AC discussion stage.
---
Reply to Comment 2.1.1:
Comment: Dear reviewer XKs2,
We thank you for considering the gain-complexity trade-off. We would like to address your concerns about our complexity from three perspectives below:
**[GFSA in selective layers]**
Regarding the computational overhead of GFSA, we introduced a strategy for computational complexity in Section 6. By applying GFSA only to even layers, we effectively reduced the runtime increase while maintaining performance similar to that of full-layer integration.
**[Possibility of improving complexity by using an efficient matrix operation]**
As we mentioned in Appendix P, our GFSA computes $\bar{\mathbf{A}}^2$, which has a time complexity of $O(n^2d + n^3)$. However, assuming we use the algorithm in [1], the time complexity becomes $O(n^2d + n^{2.371552})$. In practical terms, if $n^{0.371552}$ is smaller than $d$, the time complexity of GFSA approaches $O(n^2d)$.
In real-world applications, especially those leveraging GPU acceleration, the practical cost of matrix operations can be better than the theoretical bound, because GPUs with many CUDA cores parallelize matrix multiplication. This means that the actual wall-clock time for these computations can be lower than the theoretical complexity suggests.
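As a quick sanity check on that condition (with a feature dimension chosen purely for illustration), one can solve $n^{0.371552} < d$ for $n$ to see how large the sequence length can grow before the fast-matrix-multiplication term dominates:

```python
# If n^0.371552 < d, the O(n^2 d + n^2.371552) bound is dominated by n^2 d.
# Solve n^0.371552 = d for the crossover sequence length: n = d^(1 / 0.371552).
d = 64  # illustrative feature dimension
crossover = d ** (1 / 0.371552)
print(f"n^0.371552 < d holds for all n below about {crossover:,.0f}")
```

Even for a modest feature dimension, the crossover length is far beyond typical sequence lengths, so the effective complexity stays close to $O(n^2 d)$ in practice.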
**[Additional results for GFSA with linear complexity]**
We want to share **additional results** that may address your concern about the computational complexity, which we reported in our response to Reviewers `k2id` and `bzWT`. While our method has $O(n^2d + n^3)$ complexity due to the nature of self-attention, we have found that GFSA can be applied with linear complexity calculations while retaining its benefits.
In experiments using Long Range Arena benchmark [2], we successfully integrated GFSA with Efficient Attention [3], an approach that maintains linear complexity. The results show that Efficient Attention+GFSA outperforms Efficient Attention alone while preserving efficiency. For example, on Image dataset (1K sequence length), Efficient Attention+GFSA uses only 1.33GB memory and 135.8s/1K-steps, compared to 11.20GB and 737.2s/1K-steps for Transformer+GFSA.
| Method | ListOps (2K) | Image (1K) |
| --- | --- | --- |
| Transformer | 37.1 | 38.2 |
| **Transformer + GFSA** | **37.6** | **40.2** |
| Linformer | 37.3 | 37.8 |
| Longformer | 37.2 | 39.1 |
| YOSO-E | 37.3 | 39.8 |
| Efficient Attention | 36.9 | 40.2 |
| **Efficient Attention + GFSA** | **37.9** | **40.4** |
> Note: Results are shown as Accuracy (%)
| | ListOps (2K) | Image (1K) |
| --- | --- | --- |
| Transformer | 198.3/5.50 | 345.1/5.88 |
| **Transformer + GFSA** | 635.8/10.87 | 737.2/11.20 |
| Linformer | 63.4/1.73 | 158.5/3.45 |
| Efficient Attention | 49.2/0.57 | 121.1/1.14 |
| **Efficient Attention + GFSA** | 53.8/0.67 | 135.8/1.33 |
> Note: Results are shown as Running time (s/1K-steps) / Peak training memory usage (GB)
We also found a way to maintain linear complexity even for the squared operation in GFSA. By designing $\mathbf{\bar{A}}$ as $\mathbf{\bar{A}}=\mathbf{QK}$, we can efficiently compute $\mathbf{\bar{A}}^2\mathbf{V} = (\mathbf{QK})(\mathbf{QK})\mathbf{V}$ by reassociating the matrix products so that no $n \times n$ matrix needs to be materialized, similar to recent linear-complexity Transformers [3,4].
These findings show that GFSA can be implemented with linear complexity and still provide performance improvements. This addresses the concern about the computational cost and shows that the gain-complexity trade-off can be more favorable than initially presented.
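The reassociation argument above can be verified with a small NumPy sketch (shapes are illustrative, and the softmax is omitted since the trick assumes $\mathbf{\bar{A}}$ factorizes exactly as $\mathbf{QK}$): computing $(\mathbf{QK})(\mathbf{QK})\mathbf{V}$ as $\mathbf{Q}\,(\mathbf{KQ})(\mathbf{KV})$ gives the same result while only forming $d \times d$ intermediates.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 64, 8  # illustrative sequence length and feature dimension

Q = rng.normal(size=(n, d))
K = rng.normal(size=(d, n))
V = rng.normal(size=(n, d))

# Quadratic route: materialize the n x n matrix A = QK, then square it.
A = Q @ K
quadratic = A @ A @ V  # O(n^2) memory; time grows superlinearly in n

# Linear route: reassociate (QK)(QK)V = Q ((KQ)(KV));
# KQ and KV are only d x d, so memory and time are linear in n.
linear = Q @ ((K @ Q) @ (K @ V))
```

Since matrix multiplication is associative, both routes are exactly equal; only the order of evaluation, and hence the cost, differs.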
> [1] Williams et al. New bounds for matrix multiplication: from alpha to omega. In SODA, 2024.
>
> [2] Tay, Yi, et al. "Long range arena: A benchmark for efficient transformers." *arXiv preprint arXiv:2011.04006* (2020).
>
> [3] Shen, Zhuoran, et al. "Efficient attention: Attention with linear complexities." WACV 2021.
>
> [4] Katharopoulos, Angelos, et al. "Transformers are rnns: Fast autoregressive transformers with linear attention." ICML 2020.
---
We hope that our responses address your concerns and that you will be able to increase your rating. If you have any questions, please let us know.
Best regards,
GFSA authors | Summary: The paper proposed a graph-filter-based self-attention mechanism to improve its effectiveness. The authors conduct experiments in various fields, including natural language understanding, computer vision, automatic speech recognition, graph regression, and code classification, showing the generalization of the proposed method. The proposed method is claimed to make a difference in the over-smoothing problem, where representations across layers converge to indistinguishable values, leading to significant performance degradation.
Strengths: 1. Sufficient Experiments. The paper conducts experiments on natural language understanding, computer vision, automatic speech recognition, graph regression, and code classification. The authors try their best to show the generalization of the proposed method.
2. Well Organized. The paper articulated the viewpoint in a clear and organized manner.
3. Reproducibility. The paper shows the pseudo-code, which clearly presents the implementation of GFSA in the appendix. The paper shows detailed experiment settings in appendices I to O. The code is available with instructions. However, an error occurred when I tried to run it, which is reported in the following questions.
Weaknesses: 1. Sensitivity of K. The authors did not give instructions on selecting the value of K. The sensitivity of K is discussed in appendices, showing that 'For each dataset, there is optimal K and the performance of models using GFSA is generally robust to changes in K'. All the experiments searched for values of K. However, the search for K is time-consuming.
2. Misleading Figure. Fig 1 shows the performance of the proposed method. However, the inconsistent proportions exaggerate the model's effectiveness.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Missing code. When I was running the experiment of code classification, using the command “python run_exp.py --model_tag codet5_base --task defect --sub_task none” as instructed in the readme file, an error occurred reporting ‘can't open file '/run_gen.py'’. It seems that some code is missing.
2. More questions about the sensitivity of K. Is it a necessary step to search for the optimal K when facing a new dataset? Or is it robust to K? I noticed that, in Theorem 4.1, $E_k$ depends on $K$.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors addressed the limitations, including effectiveness and efficiency.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer Py4c for the review and feedback, highlighting our strengths in
1. Extensive experimentation across diverse domains demonstrating the generalizability of GFSA.
2. Clear and well-organized presentation of ideas.
3. Strong emphasis on reproducibility with detailed implementation information and code availability.
Below, we would like to address each of your questions and concerns:
**Response to W1: Sensitivity of $K$**
We understand your concern about the time required to find the optimal $K$. Our tasks across various domains include both fine-tuning and training from scratch. For fine-tuning tasks with relatively short training times, such as the GLUE benchmark, searching for $K$ values from 2 to 10 is not a significant hindrance (see Appendix I.2). However, for tasks like ImageNet that require training from scratch, we flexibly set a narrower search space of 2 to 5 for the 12-layer DeiT-S model.
**Response to W2: Misleading figure**
We acknowledge that the different scales and metrics for each task could potentially lead to misinterpretation. We have redrawn the plot based on specific backbones, showing only the percentage of improvement. We hope that the updated Figure 1 in the pdf file of the general response addresses this concern and eliminates any potential for misleading information.
**Response to Q1: Missing code**
We apologize for the inconvenience. We have uploaded the `run_gen.py` file, and the experiment should now run successfully. Please check the same link provided in the paper for the updated code repository.
**Response to Q2. Sensitivity of $K$**
Regarding the necessity of searching for the optimal $K$ for new datasets, our experiments suggest that while there is an optimal $K$ for each dataset, the performance of models using GFSA is generally robust to changes in $K$. However, for best results, we recommend conducting a search for the optimal $K$ when dealing with a new dataset.
As for the relation between $K$ and $E_k$ in Theorem 4.1, you are correct in noticing this connection. The theorem provides a theoretical foundation for understanding how $K$ affects the model's performance. While the model shows robustness to $K$ in practice, the optimal value can vary depending on the specific dataset and task.
Importantly, we would like to draw your attention to Table 8 in Appendix J.2, which demonstrates the effectiveness of GFSA across different $K$ values. This table shows that for the three datasets, all tested $K$ values (from 2 to 9) result in better performance compared to the original GPT2 model. This provides evidence that GFSA consistently improves performance across a range of $K$ values. This robustness to $K$ is a strength of our method.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer Py4c,
As the Reviewer-Author discussion period ends in less than 24 hours, we wanted to reach out regarding our response to your valuable feedback. Please let us know if you would like us to help with any further details on our response.
We believe that your comments have allowed us to improve our analysis of the sensitivity of $K$ and address potential misunderstandings in Figure 1.
Additionally, we would like to draw your attention to our response to other reviewers, `k2id` and `XKs2`, where we discussed **the scalability of GFSA and its extendability to linear/efficient Transformers**. We believe these findings might be of interest to you as they further show the broader applicability and potential of our method.
Thank you again for your time and dedication in reviewing our work!
Kind regards,
GFSA Authors
---
Rebuttal Comment 1.2:
Title: A gentle reminder
Comment: Dear Reviewer Py4c,
We thank the reviewer again for your time and feedback that allowed us to strengthen the paper with clarifications during this important rebuttal period.
As the end of the rebuttal period is fast approaching, we were wondering whether our answers were sufficient to address your main concerns.
Finally, we are very appreciative of your time and effort during this rebuttal period and hope our answers are enough for the reviewer to consider a fresh evaluation of our work, with a potential score upgrade if merited.
Kind regards,
GFSA Authors
---
Rebuttal 2:
Comment: Hello, Reviewer. The author has submitted a response to your comments. Whether or not it addresses your concerns, it would be greatly appreciated if you could acknowledge that you have reviewed the reply. | Rebuttal 1:
Rebuttal: Dear reviewers,
We sincerely appreciate your feedback and constructive comments on our paper. We are grateful for the recognition of several key strengths in our work:
1. Extensive experiments across diverse domains demonstrating the broad applicability of GFSA
2. Clear, well-organized presentation of ideas
3. Emphasis on reproducibility with detailed code and settings
4. Versatility of GFSA across different transformer variants and fields
5. Innovative application of graph filter concepts to transformers
6. Easily graspable core idea facilitating adoption
7. Comprehensive empirical support for the advantages of GFSA
**Important: We have uploaded a one-page PDF file with additional materials addressing your concerns.**
This pdf includes:
- Updated Fig. 1: The original Fig.1 was potentially misleading because different metrics for different tasks were shown in the same figure, so we changed the radar chart to show the improvement ratio compared to the backbone. This change is related to W2 of reviewer Py4c.
- New Tables 1 and 2: To address the weakness (W2) of reviewer k2id that only two datasets were considered for the tasks in the graph domain, we include additional experimental results applying GFSA to GraphGPS and Graph-ViT. We provide more rigorous evaluations of GFSA by adding 7 datasets: Peptide-Func, Peptide-Struct, ZINC, MNIST, CIFAR10, Molhiv, and MolTOX21.
- New Fig. 2: The analysis of filter responses based on the learned coefficients of GFSA across the different layers of BERT is shown. As the layer deepens, there is a noticeable shift towards higher frequency responses, indicating a move towards high-pass filtering. Therefore, Fig. 2 provides the evidence for GFSA's effectiveness in preserving high-frequency information. This figure is related to W4 of reviewer k2id.
We encourage you to review this material as it provides visualizations and additional results that may enhance your understanding of our responses to your questions and concerns.
Pdf: /pdf/72716537a6b0203adeac9e5f9dfb48251e2b2e14.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Exploiting Descriptive Completeness Prior for Cross Modal Hashing with Incomplete Labels | Accept (poster) | Summary: Cross-modal hashing (CMH) has attracted much attention due to its low computational and storage costs while maintaining high-dimensional cross-modal semantic similarity. To address the incomplete annotation challenge of CMH, this paper proposes a novel Prompt Contrastive Recovery method, PCRIL. The method includes prompt contrastive recovery (PCR) and complementary semantic augmentation (CSA) modules. Experimental evaluation verifies the effectiveness of the proposed method. Overall, this paper presents sufficient and concrete work. However, the novelty of the proposed method is limited, and the experimental comparison is also limited.
Strengths: 1. This paper addresses the problem of incomplete annotations in cross-modal hashing, which is very common and challenging in practical applications.
2. The prompt contrastive recovery (PCR) module proposed in this paper effectively perceives incompleteness through label prompts and can restore the semantic information well, which has been clearly demonstrated by experiments.
Weaknesses: 1. The authors lack comparison with some recently proposed SOTA methods; none of the baselines is from the last two years, so the experimental results are less convincing.
2. This paper conducted experiments on three datasets with different known-label ratios, but some comparison methods report results on the IAPR TC-12 dataset, and the authors did not provide corresponding results.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. There are no comparison methods from within the last two years. Can the authors evaluate some recent methods to show the effectiveness of the proposed method?
2. It is recommended that the authors conduct more experiments on the IAPR TC-12 dataset to more intuitively illustrate the effectiveness of the model.
3. Please elaborate on how negative pairs from unknown classes are removed.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Prompt construction currently relies on a pre-trained CLIP model with a limited number of textual labels, which hinders its ability to enrich labeled samples.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Authors' Responses to Reviewer `MbLT`'s Comments
> Q1. The authors lack comparison with some recently proposed SOTA methods, no one within two years, so the experimental results are not convincing.
We further compare our method with:
- CMHML [1], a recent method investigating CMH with incomplete labels.
- MITH [2], a recent deep CMH method which also adopt CLIP as the backbone.
The results are shown in **Table 2 in the PDF file**. These results further demonstrate the superiority of the proposed approach against recent SOTAs and validate its effectiveness.
[1] Ni, H., Zhang, J., Kang, P., Fang, X., Sun, W., Xie, S., & Han, N. (2023). Cross-modal hashing with missing labels. Neural Networks, 165, 60-76.
[2] Liu, Y., Wu, Q., Zhang, Z., Zhang, J., & Lu, G. (2023, October). Multi-Granularity Interactive Transformer Hashing for Cross-modal Retrieval. In Proceedings of the 31st ACM International Conference on Multimedia (pp. 893-902).
> Q2. This paper conducted experiments on three datasets with different label known ratios, but some comparison methods provided relevant experimental results on the IAPR TC-12 dataset. The author did not provide relevant experimental results.
It's worth noting that these methods give no experimental results regarding incomplete labels. We compare our method on IAPR TC-12 with them by re-implementing their code. The results are shown in **Table 3 (PDF file)**, where our method outperforms the compared baselines significantly, especially in highly incomplete cases. This demonstrates the consistency of our method across benchmarks.
> Q3. There are no comparative methods within two years. Can the author perform some recent methods to show the effectiveness of the proposed method.
> It is recommended that the authors conduct more experiments on the IAPR TC-12 dataset to more intuitively illustrate the effectiveness of the model.
As stated above, we provide results of these experiments in the **PDF** file. The effectiveness of our method is validated.
> Q4. Please elaborate on how to remove negative pairs from unknown classes.
If we understand this correctly, the question is how to remove unknown pairs. This is achieved in our work through an **Adaptive Negative Masking (ANM)** strategy.
The widely-used setting "assume negative" achieves this by turning all pairs with unknown relationship ($S_{ij}=u$) into negative ones ($S_{ij}=0$), which introduces substantial false negative sample pairs. The "only known" setting avoids pairwise learning on all unknown pairs to eliminate their influence. However, the negative relationship can disappear completely in highly unknown cases.
To remove unknown pairs while eliminating the false-negative side effect, we propose to stochastically mask the unknown entries as negative to restore a balanced ratio $r=|\mathcal N(S^D)|/|\mathcal P(S^D)|$ between positive and negative pairs. Empirically, $r$ is set to a small positive number to avoid introducing too many false negatives. In this way, we augment the final similarity supervision $S$ with fewer recovered negative values while achieving robust and higher performance.
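A minimal sketch of how such stochastic masking could work follows. The `NaN`-for-unknown encoding, function name, and toy similarity matrix are our illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def adaptive_negative_mask(S, r=1.0, seed=0):
    """Stochastically relabel unknown entries (encoded as NaN) as negative (0)
    until the negative/positive ratio |N(S)| / |P(S)| reaches roughly r."""
    S = S.copy()
    rng = np.random.default_rng(seed)
    n_pos = int(np.sum(S == 1))
    n_neg = int(np.sum(S == 0))
    unknown = np.argwhere(np.isnan(S))
    need = max(0, int(round(r * n_pos)) - n_neg)   # negatives still missing
    take = min(need, len(unknown))
    picked = unknown[rng.choice(len(unknown), size=take, replace=False)]
    S[picked[:, 0], picked[:, 1]] = 0
    return S

u = np.nan
S = np.array([[1., u, u],
              [u, 1., u],
              [u, u, u]])   # 2 known positives, 7 unknown entries
S_masked = adaptive_negative_mask(S, r=2.0)
```

With $r=2$ and two known positives, four unknown entries are masked as negative while the remaining unknown entries stay excluded from pairwise learning.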
---
Rebuttal Comment 1.1:
Title: Thanks to the authors for the response
Comment: Thank you for your prompt and comprehensive response to my review. Your detailed answers have effectively addressed my concerns. The additional experiments are a valuable addition, significantly strengthening the overall contribution of the paper. In addition, the proposed method is novel and well-explained. To the best of my knowledge, this is the first work that studies on CLIP-based label recovery strategies in cross-modal hashing. I am confident that this paper makes a significant contribution to the field of cross-modal hashing with incomplete labels and will be of great interest to the community. Therefore, I am raising my rating to Accept.
---
Rebuttal 2:
Comment: Thank you very much for your positive feedback and for raising our rating. We greatly appreciate your thorough review and the constructive suggestions regarding our experiments. We are delighted that our responses and additional experiments have effectively addressed your concerns. Your recognition of the novelty and significance of our work is highly encouraging. Thank you again for your invaluable review.
Title: Thanks to reviewer MbLT | Summary: This paper presents a novel cross-modal hashing method named PCRIL, which explores the indispensable but challenging problem of incomplete label recovery in multi-label learning. It conceives a CLIP-based prompting scheme and a complementary semantic propagation mechanism, enabling PCRIL to restore unknown labels and calibrate pairwise similarities. The paper exhibits a strong motivation, technical soundness, and a well-structured organization. Generally, the idea of the paper is interesting, especially in constructing a learnable label prompt to perceive the missing labels in cross-modal learning.
Strengths: - This paper explores the indispensable but challenging problem of incomplete label recovery in multi-label learning. The authors identify this crucial problem for cross-modal hashing and propose an effective solution for this problem. The proposed method is technically sound.
- The paper conceives a CLIP-based prompting scheme and a complementary semantic propagation mechanism, enabling the proposed method to restore unknown labels and calibrate pairwise similarities.
- The authors design a simple learnable prompt to encode class combinations into CLIP embeddings.
- The neglected semantic labels and pairwise similarities can be removed and recovered through the proposed architecture.
- Extensive experiments have been conducted on MIRFlickr-25K, NUS-WIDE, and MS COCO.
Weaknesses: - The CLIP-based prompting has been intensively studied and this paper needs to make sufficient analysis and experimental validation of its key contribution, i.e., the three types of contrastive learning, particularly in terms of their advantages over plain prompt contrastive recovery.
- Can the authors analyze the effects of the ways of the prompt construction in this task?
This paper contributes a new problem and some new ideas to the CMH literature. The authors should further explain the necessity of the given research topic in practical situations.
The Method section should provide more explanation of the figures. Currently, these parts are separated, making the method difficult to follow, especially given the article's innovative points.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses. The authors are advised to thoroughly resolve the above concerns to convince the reviewer.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Authors' Responses to Reviewer `zzeX`'s Comments
Thank you for your thoughtful review. We are pleased that you found our approach novel and technically robust. Your positive feedback on our contrastive recovery method and extensive experiments is greatly appreciated. Your insights are invaluable to us.
> Q1. The CLIP-based prompting has been intensively studied and this paper needs to make sufficient analysis and experimental validation of its key contribution, i.e., the three types of contrastive learning, particularly in terms of their advantages over plain prompt contrastive recovery.
In Section 4.3, we have provided an extensive ablation of the prompt contrastive recovery. In Table 3, we experimentally validate our advantage over the plain 'phrasal' prompt, improving greatly upon both the recovery precision and the final mAP.
If contrastive learning is performed directly on the positive set, i.e., without selecting the anchor and constructing the three types of negative sets, we argue that the PLTS search can become ineffective because no margin is learned between similar but different label sets. However, substituting PLTS with a scan of the entire label space would consume substantial time.
> Q2. Can the authors analyze the effects of the ways of the prompt construction in this task?
In our experiments, we construct a learnable prompt for class combinations and analyze it through comparisons with the plain phrasal prompt and the conventional CLIP-like single-class prompt.
The **plain prompt** is constructed by stacking textual phrases of each class to form a prompt for the label set. Such a static prompt is sensitive to the textual content chosen and performs poorly, as validated in Table 3. Compared to the plain prompt, our learnable prompt can automatically adjust class-related content through contrastive learning, gaining much higher performance.
The **single-class prompt** is a learnable version of the original CLIP prompt. However, its performance is still not comparable to our method's. The class relationships in multi-label data are intrinsically nonlinear. Therefore, by integrating different labels into a single CLIP text prompt, the model learns to capture complex relationships among classes, fostering precise contrastive learning.
> Q3. This paper contributes a new problem and some new ideas to the CMH literature. The authors should further explain the necessity of the given research topic in practical situations.
Our method addresses the challenge of cross-modal hashing (CMH) with incomplete labels. CMH extracts pairwise relationship from label annotations for efficient information retrieval across various data modalities, such as images and text. This capability is crucial for practical applications like web search engines, social media, and e-commerce platforms.
In real-world scenarios, obtaining fully annotated datasets for CMH is often impractical. **User-generated tags** on websites, particularly on social media, are frequently incomplete and biased. For example, a user searching for "gluten-free vegan dessert recipes" may struggle to find relevant results because many recipes are simply tagged as "dessert" or "vegan," without indicating whether they are also gluten-free. Another instance of incomplete annotations occurs in **expert systems**, where labeling is costly, such as with medical images that are rarely fully annotated despite the practical demands for medical image retrieval.
Our proposed method extends the traditional research topic of incomplete labels in recognition tasks to multi-modal and retrieval contexts, making CMH more robust against incomplete annotations.
> Q4. The Method section should provide more explanation of these figures. Currently, these parts are separated, making the model difficult to follow, especially given the article's innovative points.
Figures 2-4 illustrate the general framework of our method, the motivation of positive anchors for contrastive learning, and the Potential Label Tree Search (PLTS) process, respectively.
**Figure 3** statistically illustrates the long-tailed distribution of the samples' unique class combinations. This phenomenon implies that a few dominating combination cases overwhelm common CMH models and hide the infrequent cases from model fitting. This motivates the method in **Figure 2**, which learns to reconstruct sample-label correspondence through anchor subsets selected from the entire positive label set.
As our main contribution, the **top-left part of Figure 2** defines a sequence of negative sets derived from the anchor. By embedding these label sets into learnable prompts, we perform contrastive learning to pull the anchor feature together with its sample while pushing the negative sets away from it. In this way, the contrastive objective in Eq.(3) effectively separates similar label subsets with different completeness levels.
The learned model with such ability allows us to propose the PLTS, whose process is demonstrated in **Figure 4**. Meanwhile, the **top part of Figure 2** shows the relationship between PLTS and our contrastive learning. With the margin learned among label-set embeddings, the PLTS recovers potential labels via a greedy search starting from the sample's positive set. It requires each searched and added positive class to raise the score defined in Eq.(2), therefore recovering potential labels based on the contrastive model.
We will integrate the above explanations at the appropriate lines in the manuscript.
---
Rebuttal Comment 1.1:
Comment: We deeply value your insightful feedback and recommendations, which have greatly enhanced the quality of our work. We welcome any additional thoughts and discussions you may have at any time. | Summary: The manuscript tackles the challenges of generating high quality hash codes for cross-modal retrieval in the presence of incomplete labels, which creates uncertainty in distinguishing between positive and negative pairs. To address the issue, a novel Prompt Contrastive Recovery approach called PCRIL is proposed, which progressively identifies promising positive classes from unknown label sets and recursively searches for other relevant labels.
The proposed PCRIL framework jointly performs semantic recovery and pairwise uncertainty elimination for efficient cross-modal hashing with incomplete labels. In particular, they consider each subset of positive labels and construct three types of negative prompts through deletion, addition and replacement for prompt learning. Augmentation techniques are also derived for addressing extreme cases of significant unknown labels and lack of negative pairwise supervision. Experimental results show significant improvement in mAP of the proposed solution with respect to the current SOTA methods.
Strengths: (1) The idea proposed in the paper seems to be novel and interesting for resolving incompleteness in cross-modal hashing. Specifically, prompt processing and contrastive learning are combined to formulate prompt contrastive recovery, which effectively detects the potential classes and enhances the similarity learning.
(2) Moreover, the augmentation techniques are also very helpful for handling extreme cases, i.e., significant unknown labels and a lack of negative pairs, for which an asymmetric mix-up is introduced and adaptive negative masking is devised.
(3) The experimental evaluation is detailed and sufficient by evaluating the proposed method on three datasets with hyper-parameter tuning and visualization.
Weaknesses: (1) Some of the proposed techniques lack clear motivation. For instance, it is not very clear why the mix-up augmentation is asymmetric.
(2) There are already related work on contrastive learning on prompt such as
(a) https://arxiv.org/pdf/2205.01308, "Contrastive Learning for Prompt-Based Few-Shot Language Learners". NAACL 2022.
(b) https://openaccess.thecvf.com/content/CVPR2023/papers/Zhang_CLAMP_Prompt-Based_Contrastive_Learning_for_Connecting_Language_and_Animal_Pose_CVPR_2023_paper.pdf
It is important to compare the proposed method to these existing work to see if there are performance gains. Also it seems that compared to reference (b), the proposed method is quite similar as they are also based on prompt contrastive learning for cross-modal retrieval or hashing except that they are applied to different tasks (pose estimation vs hashing).
Technical Quality: 3
Clarity: 3
Questions for Authors: As mentioned above, the paper needs to highlight and clearly state the motivation of some proposed techniques.
The discussion, comparison and clarification of current methods in this field need to be included.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: From the impact, it seems the method is incremental and focuses on a relatively small scope.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Authors' Responses to Reviewer `Sq2X`'s Comments
Thank you for your thorough review and insightful comments. We are delighted that you found our PCRIL approach innovative and effective in resolving incompleteness in cross-modal hashing. Your appreciation of our proposed prompt contrastive learning is very encouraging and appreciated.
> Q1. Some of the techniques are proposed lacking clear motivation. For instance, it is not very clear that why the mix-up augmentation is asymmetric.
The motivation of our asymmetric mix-up is to further **eliminate uncertainty** in the sample-label relationship, particularly for missing labels. The **original symmetrical mix-up augmentation is not designed for label-incomplete cases**. Due to the existence of unknown classes, cases where sample $\pmb x_i$ complements $\pmb x_j$ but $\pmb x_j$ does not complement $\pmb x_i$ can occur frequently.
For instance, $\pmb l_i$ = (`sky`=1, `star`=0, `moon`=1, `person`=u) complements $\pmb l_j$ = (`sky`=1, `star`=1, `moon`=u, `person`=0) because the `moon` tag in $\pmb l_i$ can eliminate the uncertain `moon`=u in $\pmb l_j$. However, $\pmb x_j$ does not complement $\pmb x_i$, because filling in the nonexistent `person` tag from $\pmb l_j$ does not change the value of class `person` in $\pmb l_i$.
Therefore, this observation motivates our complementary mix-up design. The asymmetric matching score gives $\delta_{ij}=0$ and $\delta_{ji}=1$, which resolves such issues reasonably and effectively.
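The asymmetric complementarity described above can be sketched as a small predicate; this is an illustrative reconstruction, not the paper's implementation, and the function name `complements` and the exact rule (only known positives resolve unknowns) are our assumptions.

```python
# Label values: 1 (positive), 0 (negative), 'u' (unknown).
def complements(l_src, l_dst):
    """Return True if a known *positive* label in l_src resolves at least
    one unknown entry in l_dst; negatives carry no new evidence here,
    matching the `person` example in the rebuttal."""
    resolves = False
    for cls, v_src in l_src.items():
        if v_src == 1 and l_dst.get(cls, 'u') == 'u':
            resolves = True  # a known positive fills an unknown slot
    return resolves

l_i = {'sky': 1, 'star': 0, 'moon': 1, 'person': 'u'}
l_j = {'sky': 1, 'star': 1, 'moon': 'u', 'person': 0}

print(complements(l_i, l_j))  # True: l_i's moon=1 resolves l_j's moon=u
print(complements(l_j, l_i))  # False: no positive in l_j fills l_i's unknowns
```

The direction of the relation is exactly what makes a symmetric mix-up score inadequate here.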
> Q2. There are already related work on contrastive learning on prompt such as
(a) https://arxiv.org/pdf/2205.01308, "Contrastive Learning for Prompt-Based Few-Shot Language Learners". NAACL 2022.
(b) https://openaccess.thecvf.com/content/CVPR2023/papers/Zhang_CLAMP_Prompt-Based_Contrastive_Learning_for_Connecting_Language_and_Animal_Pose_CVPR_2023_paper.pdf
It is important to compare the proposed method to these existing work to see if there are performance gains. Also it seems that compared to reference (b), the proposed method is quite similar as they are also based on prompt contrastive learning for cross-modal retrieval or hashing except that they are applied to different tasks (pose estimation vs hashing).
We appreciate the two works that the reviewer pointed out; however, we would like to emphasize that the studied tasks are different. More distinctive challenges appear in the retrieval task with incomplete labels, e.g., vanished pairwise similarity and low-quality sample-label correspondence. Meanwhile, both our prompt construction and our contrastive learning differ substantially from the cited works.
The work [1] improves language learners through clustering text examples of the same class. The method substitutes contextual demonstrations like `It is` with different prompts (e.g. `I think it is`) to produce different views of the example. The contrastive learning objective in this work is a widely-used SupCon [3] loss.
- Compared to [1], our proposed method constructs **learnable** prompts for *the classes themselves*, in order to exploit the CLIP for classes' completeness knowledge.
- Compared to their objective, our score-based contrastive margin loss, Eq.(4), involves three types of negative subsets (insertion, deletion, and replacement) to separate class combinations of different completeness levels in terms of their CLIP scores. Such separation enables the model to discover potential positive labels in our partial-label scenario.
The work [2] proposes to leverage language information to estimate animal pose keypoints. This work is more similar to ours since they also use learnable prompts for CLIP.
- However, their **single-class textual prompt** each filled with the name of one pose is similar to the learnable version of the original CLIP prompts, while our multi-class prompt construction considers both learnability and authenticity for complex multi-label samples.
- Their contrastive learning is performed between image feature maps and their pose prompts. A ground-truth point's feature and its corresponding prompt are considered a positive pair, while others are considered negative. In our work, we randomly select subsets of positive classes as anchor sets for each sample. The positive pairs are the sample-anchor pairs, while the negative ones pair the sample with a negatively modified anchor. Because of the stochasticity in selecting the anchor set, our learning scheme produces diverse pair relationships. Therefore, one significant difference is that the pair relationship in our work is dynamic and can discover label-completeness knowledge through the learning process of our contrastive objective.
[1] Jian, Y., Gao, C., & Vosoughi, S. (2022). Contrastive learning for prompt-based few-shot language learners. arXiv preprint arXiv:2205.01308.
[2] Zhang, X., Wang, W., Chen, Z., Xu, Y., Zhang, J., & Tao, D. (2023). Clamp: Prompt-based contrastive learning for connecting language and animal pose. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 23272-23281).
[3] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., ... & Krishnan, D. (2020). Supervised contrastive learning. Advances in neural information processing systems, 33, 18661-18673.
> Q3. As mentioned above, the paper needs to highlight and clearly state the motivation of some proposed techniques.
The discussion, comparison and clarification of current methods in this field need to be included.
Please see the responses above.
---
Rebuttal Comment 1.1:
Title: Response to the authors' feedback
Comment: Thanks for the detailed feedback. The responses and clarifications have addressed most of our comments. I have also read other reviewers' comments. I tend to maintain my original score.
---
Reply to Comment 1.1.1:
Title: Thanks to reviewer Sq2X
Comment: Thank you again for your thorough review. We are glad that our clarifications have addressed your questions. We respect your decision and are grateful for your valuable input, which has significantly strengthened the clarity and focus of our paper. | Summary: The authors propose a novel approach, Prompt Contrastive Recovery for Incomplete Labels (PCRIL), for cross-modal hashing with incomplete labels in this paper. They utilize a learnable CLIP prompt to encode selected anchor class combinations and employ a contrastive learning paradigm to construct multiple negative variants of the anchor set. Additionally, they introduce tree search methods for label recovery and develop augmentation strategies to handle extreme cases of unknown labels and negative pair scarcity. Extensive experiments on various datasets validate the effectiveness of their approach.
Strengths: The paper demonstrates strong originality by combining a learnable CLIP prompt, contrastive learning paradigm, and tree search for label recovery in cross-modal hashing. It presents a well-founded and thoroughly tested solution, with comprehensive analysis and experiments that validate its effectiveness. The writing is clear and organized, making the methodology easy to understand. The contributions offer new insights and advancements that can benefit both researchers and practitioners in the field.
Weaknesses: The paper could highlight its contributions by providing a detailed comparison with existing methods and including more recent studies in the related work section. A deeper theoretical analysis explaining the effectiveness of the proposed methods is needed.
Technical Quality: 2
Clarity: 3
Questions for Authors: How does the reliance on the CLIP model's text token limit impact the overall performance of your approach? What are the specific challenges associated with needing sufficient multi-labeled samples?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have identified and acknowledged several limitations of their work. And they have mentioned the limitation of the prompt construction relying on the pretrained CLIP model with a limited number of text tokens, but would it be beneficial for the authors to provide more details on how this limitation specifically affects the performance of their model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Responses to Reviewer `MwQQ`'s Comments
We sincerely appreciate your detailed review. We thank you for your recognition of the originality and effectiveness of our approach, as well as your acknowledgment of our comprehensive analysis and clear presentation. Your feedback is highly valued and encouraging for our work.
> Q1. The paper could highlight its contributions by providing a detailed comparison with existing methods and including more recent studies in the related work section.
Existing studies in multi-label learning regarding incomplete labels mainly focus on single-modal recognition tasks. Compared with them, our method tackles missing labels in a **multi-modal problem**, i.e., Cross-Modal Hashing (CMH), in which the **pairwise relationship** can also become sparse. Most of the related methods for classification tasks can hardly be adopted directly into CMH due to distinct learning schemes. For instance, DualCoOp [1] learns class-level positive and negative prompts to transform their recognition task into a sample-class contrastive learning problem. That learning paradigm is designed only for the recognition task and only to enable learning with unknown labels. Instead, our method explicitly recovers potential classes and attempts to solve the specific issues of CMH with incomplete labels, i.e., the loss of both sample-class correspondence and pairwise relationship.
**Recent CMH studies** include several attempts [2-4] to solve the incomplete-labels problem. However, they are all **non-deep** methods without fine-grained measurement of sample-label consistency, and their ability for potential class discovery is limited to distinguishing salient classes. In contrast, our proposed method can not only consider minor cases with the selected-anchor mechanism, but also integrate precise multi-modal knowledge to recover the classes.
[1] Sun, X., Hu, P., & Saenko, K. (2022). Dualcoop: Fast adaptation to multi-label recognition with limited annotations. Advances in Neural Information Processing Systems, 35, 30569-30582.
[2] Liu, X., Yu, G., Domeniconi, C., Wang, J., Xiao, G., & Guo, M. (2019). Weakly supervised cross-modal hashing. IEEE Transactions on Big Data, 8(2), 552-563.
[3] Ni, H., Zhang, J., Kang, P., Fang, X., Sun, W., Xie, S., & Han, N. (2023). Cross-modal hashing with missing labels. Neural Networks, 165, 60-76.
[4] Yong, K., Shu, Z., Wang, H., & Yu, Z. (2024). Two-stage zero-shot sparse hashing with missing labels for cross-modal retrieval. Pattern Recognition, 155, 110717.
> Q2. A deeper theoretical analysis explaining the effectiveness of the proposed methods is needed.
Consider a label encoder model $\mathcal M$ that produces optimal label encodings under the loss function Eq.(4). Given a sample $\pmb x_i$ encoded as $\pmb h_i$ with its positive, negative, and unknown sets $K_p(0)$, $K_n(0)$, and $K_u(0)$, consider for simplicity the $\omega$-th iteration of PLTS, in which the searched class $c_u \in K_u(\omega)$ is added to acquire $K_p(\omega+1) = K_p(\omega) \cup \{c_u\}$. With the assumption of $\mathcal M$'s generalizability, we can acquire $$\Phi^i(\mathcal M(K_p(\omega+1))) - \Phi^i(\mathcal M(K_p(\omega))) \ge m.$$ This holds for all $\omega$ if the termination condition is associated with the original margin $m$ rather than the $\frac{m}{2}$ we empirically choose to gain higher recall. We can further obtain $$\Phi^i(\mathcal M(K_p(\omega^*))) - \Phi^i(\mathcal M(K_p(0))) \ge \omega^* m.$$ This nonnegligible gap $\omega^* m$ implies that PLTS exploits the model $\mathcal M$'s ability to perceive and maximize label completeness according to the score function $\Phi$, therefore effectively recovering potential classes.
> Q3:
How does the reliance on the CLIP model's text token limit impact the overall performance of your approach?
We should clarify that we have already pointed out this limitation in manuscript Line 314-315. The following is a more detailed analysis for the impact of token limit.
In **Flickr25K**, the most densely annotated of the three evaluated datasets, the sample with the most classes contains 14 distinct class annotations, which **does not exceed** the capacity of 14 classes within the CLIP model's 77 tokens. For **MS COCO**, the evaluated dataset with the most classes, only 12 samples exceed this capacity, taking up **less than 0.014\%** of all samples.
Furthermore, even when learning on a richly annotated dataset, substituting the CLIP backbone with, e.g., Long-CLIP can sufficiently expand the token capacity. Besides, learning with reduced labels further decreases the impact of the token limitation.
In a nutshell, the negative impact is limited and can be eliminated with some simple changes to the model. How to overcome the token limitation is an *open problem* and we hope our analysis would inspire future work in these directions.
> Q4. What are the specific challenges associated with needing sufficient multi-labeled samples?
We should also clarify that we have already indicated this limitation in manuscript Line 316-317. The following is a more detailed analysis.
Although the model can effectively recover labels even with high unknown proportions at **70%** as we illustrated in Table 1, it intrinsically relies on multi-label annotations to select non-trivial anchor sets. Without enough multi-labeled data (due to either the dataset itself or the high unknown proportion), it's difficult to utilize **the interactions between different labels**.
In real-world retrieval tasks, multi-label insufficiency is relatively rare. At present, there is comparatively little research on this open topic, which can inspire future work for incomplete labels.
---
Rebuttal 2:
Title: Discussion After Rebuttal
Comment: Thank you again for the time, thorough reviews, and constructive suggestions, which inspire us a lot for future work.
Based on your comments, we provided the responses, clarifications, as well as theoretical and experimental comparisons with current research on this topic.
Due to the approaching deadline of the author-reviewer discussion, we hope to further discuss with you whether your concerns have been addressed. If any parts of our work remain unclear, please let us know. Thanks. | Rebuttal 1:
Rebuttal: # Global Reply to all reviewers
We would like to extend our gratitude to all the reviewers for their insightful comments and unanimous acknowledgement of our paper in the following aspects:
1. The task addressed by this work is both interesting and significant for real-world cross-modal hashing applications (zzeX, MbLT).
2. The proposed PCRIL method has clear motivation and strong novelty by introducing prompt-based contrastive learning to perceive incomplete classes for the task (MwQQ, Sq2X, zzeX).
3. The paper is clearly written, well-structured, and easy to understand (MwQQ). Our prompt contrastive recovery is effective in addressing the challenge of incomplete labels in cross-modal hashing (Sq2X, MbLT).
4. It shows substantial performance improvements over existing methods (FiXs), and our contributions have been validated with extensive and convincing experiments (MwQQ, Sq2X, zzeX).
5. The contributions provide new insights and advancements that benefit future research in the field (MwQQ).
It is worth noting that the **PDF file** contains explanations of the **motivations and interrelationships of each component** in our method (Figure 1), as well as **the latest experimental results** including extended ablation results, comparison with recent SOTA methods, and results on a new dataset IAPR TC-12 (from Table 2 to 4, respectively).
In addition, we hereby provide highlights to some common queries regarding our current work.
1. **Motivation from Figure 3**. For cross-modal retrieval, each multi-modal instance is associated with a label annotation within a predefined class set. Figure 3 is a statistical analysis of **unique** positive label combination patterns in all samples' annotations. This illustrates the long-tailed distribution of class combinations. For rare label cases, sample-label pairs hardly exist in the data. Our proposed contrastive anchors can enrich this relationship and help label recovery in all cases.
2. **Theory for PLTS effectiveness**. If the objective Eq.(3) is fully minimized, the scores carry label-completeness information. By separating class sets at edit distance $D$ by a large gap $G=Dm$, the trained model is guaranteed to increase the CLIP score of the original incomplete label by $G$ through the potential label tree search, effectively perceiving potential classes in the unknown set.
We have provided detailed responses to each reviewer's feedback including these questions. Please find our point-to-point responses in the individual replies.
**References** for the PDF file.
[1] Ni, H., Zhang, J., Kang, P., Fang, X., Sun, W., Xie, S., & Han, N. (2023). Cross-modal hashing with missing labels. Neural Networks, 165, 60-76.
[2] Liu, Y., Wu, Q., Zhang, Z., Zhang, J., & Lu, G. (2023). Multi-Granularity Interactive Transformer Hashing for Cross-modal Retrieval. In Proceedings of the 31st ACM International Conference on Multimedia (pp. 893-902).
Pdf: /pdf/b77250f938645ead70845df2d271f4ed587fbd58.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: In this paper, to solve the problem of unknown labels in the cross-modal retrieval task, the authors progressively identify promising positive classes from unknown label sets and recursively search for other relevant labels.
Strengths: (1) Compared with existing related works, the proposed method has a large performance improvement.
(2) The authors propose a new strategy to solve the problem of missing labels in the cross-modal retrieval task.
Weaknesses: (1) The meaning of Figure 3 is unclear. What is the meaning of "sorted index of label sets"? What is its relation to the positive label subset of each sample? How can we draw the conclusion that "the number of label sets is limited to the number of training samples" from Figure 3?
(2) What are the motivation and theory behind the Negative Subsets and Contrastive Learning? For the first type of negative subsets, why is the anchor set changed to a negative subset by deleting a positive label, and why should the difference between $K_a^i$ and $K_d^{i,s}$ be minimized in the loss function of Equation 4?
(3) In the Potential Label Tree Search section, how is the score $\Phi$ computed? What is the theory behind the termination condition? How is this observation obtained?
(4) The content on the positive anchors and the PLTS is somewhat disjointed.
(5) The ablation study setting is not reasonable. As can be seen, ANM and CSA are two augmentation strategies for handling extreme cases, while the main contribution of the paper is the Prompt Contrastive Recovery (PCR). Thus, variants like B w/PCR, B w/PCR+CSA, and B w/CSA should also be explored.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the above weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The explanation of the main contribution part should be improved. The authors should give more motivation and related theory to verify its soundness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Responses to Reviewer `FiXs`'s Comments
Thank you for your thoughtful review and positive feedback. We are pleased that you recognize the improvements and the effectiveness of our method. Your comments are greatly appreciated; please find our point-to-point responses below.
> Q1. The meaning of the figure 3 is unclear. What is the meaning of “sorted index of label sets?” what is its relation to the positive label subset of each sample? How can we draw the conclusion that “the number of label sets is limited to the number of training samples.” from figure 3?
**(We illustrate our following points in the PDF Figure 1 for better clarity.)**
**Meaning of Figure 3.** Please find our explanation of the meaning of Figure 3 in the global response. Suppose annotation $l_i = (1,1,0,u)$ indicates the presence of the first two classes, `sky` and `sea`. The positive label set is therefore $K^i_p=\{sky, sea\}$. In Figure 3, the x-axis of the plot is sorted by the frequency of each set; "sorted index of label sets" therefore simply means the label-set index, sorted by frequency.
Figure 3 illustrates the **long-tailed distribution** of unique positive-label combinations: there are many **rare** label combinations in the dataset, each associated with **limited samples**. For instance, although the above $K^i_p$ is common, the combination $\{beach, sea\}$ without `sky` may appear very few times. The lack of training samples for rare label sets can cause **severe bias** in typical learning systems, which only align $x_i$ with $K^i_p$. This motivates us to use contrastive learning on anchor sets (subsets of $K^i_p$) to help recover unknown labels.
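The frequency statistic behind Figure 3 can be sketched as follows; the toy annotations are illustrative, not drawn from the evaluated datasets:

```python
from collections import Counter

# Count each sample's unique positive-label combination, then sort the
# counts; the sorted counts are the y-values of the long-tailed curve.
annotations = [('sky', 'sea'), ('sky', 'sea'), ('sky', 'sea', 'beach'),
               ('beach', 'sea'), ('sky', 'sea')]
freq = Counter(frozenset(a) for a in annotations)
ranked = sorted(freq.values(), reverse=True)
print(ranked)  # [3, 1, 1]: one dominating combination, two rare ones
```

Even in this tiny example, {`beach`, `sea`} appears only once while {`sky`, `sea`} dominates, which is the bias the anchor-set sampling is meant to counteract.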
> Q2. What are the motivation and theory behind the Negative Subsets and Contrastive Learning? For the first type of negative subsets, why is the anchor set changed to a negative subset by deleting a positive label, and why should the difference between $K_a^i$ and $K_d^{i,s}$ be minimized in the loss function of Equation 4?
The **motivation** behind the Negative Subsets and Contrastive Learning aligns with our discussion in Q1 about handling rare label combinations. The anchor sets are selected randomly to **enrich** the label-sample pairs. However, due to the reduced positive classes, they do not perfectly align with the sample. Nonetheless, we resolve this dilemma by treating missing labels as relative *edit distances* to the full label set. For instance, when a positive tag is missing from the anchor, the *edit distances* to both the full set and the anchor set increase, and the embedding (specifically, the learnable prompt) should be adjusted to reflect this change. This adjustment ensures that the model can better differentiate between similar but distinct label sets, improving its overall precision.
To clarify a potential misunderstanding in the comment: our goal is **not** to minimize the difference between $K_d^{i,s}$ (the deletion-based negative subset) and $K_a^{i}$ (the anchor set with positive labels). Instead, our objective **maximizes** this difference. In our loss function Eq.(4), we employ a contrastive strategy that **enlarges the discrepancy** between embeddings of the original anchor set and those of the negative subset. This discrepancy forces the model to create more distinct embeddings for negative subsets, thereby making the embedding of the anchor set with positive tags more accurate and distinct.
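As an illustration of this "enlarge the discrepancy" objective, a hinge-style margin loss can be sketched as follows; the function name, the plain-float scores, and the exact hinge form are our assumptions, standing in for the CLIP scores of Eq.(2) and the full objective Eq.(4):

```python
def margin_contrastive_loss(anchor_score, negative_scores, m=0.2):
    """Penalize whenever a negative subset's score comes within margin m
    of the anchor's score; zero loss once every negative is pushed at
    least m below the anchor."""
    losses = [max(0.0, m - (anchor_score - s)) for s in negative_scores]
    return sum(losses) / len(losses)

# Anchor scores 0.9; negatives at 0.5 (safe), 0.75 (inside the margin),
# and 0.9 (not separated at all).
loss = margin_contrastive_loss(0.9, [0.5, 0.75, 0.9])
print(round(loss, 4))  # 0.0833 = (0 + 0.05 + 0.2) / 3
```

Minimizing such a loss drives the anchor's score above each negative subset's score by at least the margin, which is the separation the PLTS termination rule later relies on.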
> Q3. In Potential Label Tree Search section, how to compute the score? What is the theory of the termination condition? How to get this observation?
**PLTS Scores.** The score $\Phi$ during PLTS is defined the same as Eq.(2) and is the same score used in Eq.(4). To compute the score $\Phi^i(K)$ for a given label set $K$ and sample $\pmb x_i$, we first construct a prompt $P(K)$ according to Eq.(1), then compute its CLIP score with the instance by Eq.(2).
**Termination Condition.** The termination condition $\Phi^i(K^i_p(\omega^*) \cup \{c^*_u\}) < \Phi^i(K^i_p(\omega^*)) + \frac{m}{2}$ is associated with $m$, which is the margin we used in the contrastive loss Eq.(3) to separate different levels of sample-label similarity scores, i.e., $\Phi$.
In our global response, we explain that by optimizing Eq.(3), our method separates different label sets by a gap proportional to the edit distance. The termination condition yields completed labels with largely increased CLIP scores. In our method, we empirically set the margin *during PLTS* as $\frac{m}{2}$ to attach more importance to recall over precision.
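For concreteness, the termination condition above can be written as a one-line check (our own naming, a sketch following the description above, not the authors' code):

```python
def plts_should_terminate(phi_with, phi_without, m):
    # Stop expanding the label set once adding the best candidate label
    # raises the score by less than half the contrastive margin m, i.e.
    # Phi(K ∪ {c*}) < Phi(K) + m / 2.
    return phi_with < phi_without + m / 2
```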
> Q4. The content is somewhat disjointed of the positive anchors and the PLTS.
As also responded to comment 1-1, Figure 3 demonstrates the long-tailed distribution of samples' unique label combinations, which includes many rare cases and some dominating cases. The positive anchors are chosen randomly at each training step to enable coverage of a broader range of class combinations. For instance, the combination (`sky`, `beach`, `sea`) is common, while (`beach`, `sea`) alone is quite rare. Our method selects random anchor sets that effectively involve uncommon cases. As the anchor sets are randomly selected at each training step, most class subsets are covered during training. Due to this randomness, **an anchor (positive) set can be a negative set for another larger anchor set**. Therefore, each positive anchor found in PLTS iteration $j$ is regarded as a negative set in the $(j+1)$-th iteration.
> Q5. The ablation study setting is not reasonable. As can be seen, ANM and CSA are two augmentation strategies for handling extreme cases. The main contribution of the paper is the Prompt Contrastive Recovery. Thus, variant version like B w/PCR, B w/PCR+CSA, B w/CSA should also be explored.
We provide ablation results with B w/PCR, B w/PCR+CSA, and B w/CSA in Table 1 in the PDF file. Our proposed components yield stable improvements over the default AN setting.
---
Rebuttal 2:
Comment: Thank you for the valuable comments and suggestions. We are encouraged that you appreciated our contributions including the novel strategy and large performance improvements.
Since there is limited time left for discussion, if you have any other questions, we would like to provide further clarifications and discussions about this work. Any discussion is welcome. Thanks.
---
Rebuttal 3:
Comment: Thanks for your response. I am inclined to keep my initial score. | null | null | null | null | null | null |
Searching for Efficient Linear Layers over a Continuous Space of Structured Matrices | Accept (poster) | Summary: The paper generalizes multiple existing structured matrices by means of Einsum. The scaling law of the structured matrices with different rank, compute intensity, and parameters-per-FLOPs is analyzed on GPT-2. Since the high-rank, non-parameter-sharing einsum operations obtain the best results, the paper proposes a sparse mixture of structured linear experts which also has high-rank, non-parameter-sharing taxonomy parameters with further generalization. Additionally, the learning rate scaling rule for the Einsum-induced structured matrices via maximal update parameterization is introduced.
Strengths: - The paper provides a novel point of view for understanding the structured matrices via Einsum operations.
- The paper is clearly written with sufficient details.
- The comprehensive analysis of the interesting aspects of the structured matrices leaves valuable insights for future research.
Weaknesses: - Although the text is well written, it was hard to keep track of the dozen or so letters indicating dimensions, taxonomy variables, etc. Occasionally reminding the reader of each letter's role would improve readability.
- The continuity of the taxonomy space is questionable. It seems like the taxonomy space is discrete because rank, FLOPs, and number of parameters are all integers. Even if they are normalized, the parameters would still reside in a finite sized space of rational numbers from 0 to 1.
Technical Quality: 4
Clarity: 2
Questions for Authors: - Is the scaling law of Einsum generalizable to the larger models (e.g., $\ge$7B parameters)?
- Are there any insights or possible reason that the authors think why the high-rank, no parameter sharing leads to the best results whereas the compute intensity does not affect the accuracy?
- Could the taxonomy parameters have any real numbers other than rational numbers?
Confidence: 5
Soundness: 4
Presentation: 2
Contribution: 4
Limitations: - The framework and the analysis is confined to a macroscopic point of view--the test was conducted upon fixing the einsum configuration and the FLOPs across the layers, whereas the optimal configuration and FLOPs might vary from layer to layer.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful and supportive review. We agree that the large number of variables presents readability challenges and will update the paper to provide reminders about the meanings of the indices. We now provide several clarifications and new results inspired by your comments.
**On continuity of the parameterization**.
While quantities including FLOPs, parameter count, and rank are discrete, the continuous parameterization of the space of Einsums with non-negative real-valued coordinates $\theta$ is valid. First, any non-negative $\theta$ satisfying the condition $\theta_{{X}{A}} + \theta_{{X}{B}} + \theta_{{X}{A} {B}} = \theta_{{Y}{A}} + \theta_{{Y}{B}} + \theta_{{Y}{A} {B}} = 1$ produces a valid structure once the resulting weight factors are rounded to have integer sizes. Including irrational entries in $\theta$ is completely allowed. It then suffices to show that any two such coordinates, say $\theta$ and $\theta + \epsilon$, indeed represent distinct sets of structures as the models scale, for any $\epsilon \neq 0$. This is because for large enough dimensions $d$, any small difference of $\epsilon$ in the coordinates will lead to a difference larger than 1 in the size of the weight factors along some axis, which persists even after rounding to the nearest integers. In other words, we have shown that the space of allowed $\theta$ is real-valued and is in bijection with the space of unique Einsum structures.
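A small numerical sketch of this argument (our hypothetical `factor_dims` helper, assuming axis sizes scale as $d^{\theta}$ before rounding, as described above):

```python
def factor_dims(d, theta):
    # Map continuous exponents (summing to 1 per side) to integer axis
    # sizes: each axis gets round(d ** theta_i), clipped to at least 1.
    return {k: max(1, round(d ** t)) for k, t in theta.items()}

# For small d, two nearby coordinates can collide after rounding...
small_a = factor_dims(16, {"XA": 0.50, "XB": 0.50})
small_b = factor_dims(16, {"XA": 0.51, "XB": 0.49})
# ...but for large enough d they yield distinct integer sizes, so the
# coordinates represent distinct structures at scale.
big_a = factor_dims(10**6, {"XA": 0.50, "XB": 0.50})
big_b = factor_dims(10**6, {"XA": 0.51, "XB": 0.49})
```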
**On experiments with larger models**.
In Figure 1 in the [rebuttal's pdf](https://openreview.net/attachment?id=cH4w74hFGe&name=pdf), inspired by your comments, we show results from new experiments on larger GPT-2 models. We train 12 layered models up to the size of original GPT-2 Small [5], adopting the original vocabulary with 50,257 tokens and using a context length of 512. The results agree with our findings in Section 4 which used a reduced vocabulary of 96 tokens and a shorter context length of 128 to save cost. Due to computational constraints, we cannot experiment with 7B parameter models.
In addition, we perform experiments with other architectures and datasets in Figure 2 and 3 in the [rebuttal's pdf](https://openreview.net/attachment?id=cH4w74hFGe&name=pdf), including Vision Transformers on image generation and MLP on synthetic regression. The results confirm that our findings generalize to much broader settings.
**On studying per-layer configurations.**
You raise a really interesting and exciting direction for future work. Indeed, a proper exploration of different structures per layer would require a significant amount of compute and, most importantly, a method to explore the combinatorial search space. We believe this is an important next step, and these are the types of questions that we are excited to see our work raising.
We value your support and thoughtful questions. We put a significant effort into our response and would appreciate it if you could consider increasing your score in light of our replies.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I suggest including the definition of continuity of the taxonomy parameters discussed above in the paper. I will keep my score. | Summary: This paper proposes a general framework to cover different linear layers by continuous parametrization. The authors conduct extensive experiments to demonstrate several optimal design principles for linear layers, and further propose a novel sparse MoE architecture that improves upon existing works.
Strengths: This paper is clearly written and easy to understand
Weaknesses: - The novelty of this work is not clear enough from my perspective
- Empirical results may need further improvements to better support the proposed method
Technical Quality: 2
Clarity: 2
Questions for Authors: - I am first puzzled by the novelty of this work. While the authors have conducted extensive experiments on different architectures, what are the key insights and contributions of this submission? It seems that the idea of combining BTT and (sparse) MoE is novel, but it seems straightforward and the authors may provide some more insights for it.
- While the authors mentioned in section 2 that generalization to more than 2 factors is straightforward, it seems that this is not really the case, as the selection of different indices ($\alpha$, $\beta$, … in (1)) requires manual design to ensure expressive power. As such, the authors may need to provide some examples on using more factors and see how existing methods may be covered by such more expressive frameworks.
- Throughout this paper, I only see experiments on GPT-2, which may not be sufficient to derive general conclusions regarding the scaling laws of different Einsums. The authors may need to consider different model architectures (e.g., BERT, ViT) to better support any conclusions here.
- Moreover, current experiments are only conducted on one data set, which is not sufficient to support such general conclusions in this submission either. The authors should also consider experiments on different data sets to derive general conclusions.
- Also, the comparison of different MoE architectures in Figure 6 may not be enough to support the superiority of BTT-MoE. While Figure 3 and 4 indicate that BTT may be optimal for dense architectures, it may not be directly generalized to MoE architectures. The authors may need to include some other MoE architectures, possibly low-rank-MoE as low-rank performs quite close to BTT from Figure 3.
## Post rebuttal
After checking the rebuttal, I still doubt the claim that solely two parameters $\omega$ (parameter sharing) and $\Psi$ (closeness to full-rank) reliably led to better scaling laws, which motivates the authors to design BTT and combine it with MoE. Despite such weakness, I think the authors have made their contribution clear with sufficient support (additional architectures, datasets and other MoE structures). I have increased my score towards acceptance.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: This paper does not have direct negative societal impact from my perspective.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback. Inspired by your comments, we have run additional experiments and provide clarifications below. We hope that you will consider these new results and clarifications in your final assessment.
**On the key contributions of our work:**
We highlight that this work provides at least 4 novel and impactful contributions.
1. We provide a unifying framework to conveniently parameterize a large and continuous space of hardware-efficient structured matrices via Einsums. We show this space contains many popular structured matrices explored in prior works, such as low-rank, Kronecker, Tensor-Train, Monarch, and Block Tensor-Train, while most structures within this space are novel. We further develop an informative taxonomy of the space of Einsums based on key properties relevant to machine learning, including the extent of parameter-sharing, matrix rank, and computational complexity.
2. We perform the first extensive comparison of compute-optimal scaling laws for structured matrices in language modeling. State-of-the-art large language models (LLMs), including the recent Llama 3.1 405B, are purposely trained to be compute-optimal [1, 2], whereas prior works on training with structured matrices do not compare performance under compute optimality (e.g. by training for too many epochs) [3,4]. Our results therefore provide the more appropriate comparison for evaluating structured matrices in realistic LLM training. Indeed, our results reveal that structures such as Monarch that significantly outperform dense in other contexts at best match dense performance in compute-optimal scaling laws.
3. We show that differences in the compute-optimal scaling laws across a wide range of structures are mostly governed by a small number of variables defined in our taxonomy. Small $\omega$ (less parameter sharing) and large $\psi$ (closer to full-rank) reliably led to better scaling laws, while $\nu$ (how dense-like a structure is) can be varied while leaving the scaling laws almost unchanged. These insights will make future search for performant structured matrices significantly more efficient than random.
4. Guided by the insight that full-rank ($\psi=1$) structures that maximize parameters ($\omega=0$) per unit of compute perform the best (as argued in our taxonomy and experiments), we propose BTT-MoE, a novel Mixture-of-Experts (MoE) architecture obtained by sparsifying computation in the BTT structure, proving to be 440% and 28% more compute-efficient than dense and standard MoE for training GPT-2, respectively.
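To make the Einsum view concrete, here is a toy illustration (standard textbook examples, not the paper's exact parameterization) of how familiar structured matrices arise as Einsums over two weight factors:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(6)

# Low-rank (rank 2): the two factors share one summed index r.
A, B = rng.standard_normal((6, 2)), rng.standard_normal((2, 6))
y_lowrank = np.einsum("ir,rj,j->i", A, B, x)  # equals A @ (B @ x)

# Kronecker product P ⊗ Q: the input/output are reshaped to 2x3 and the
# two factors each act along a separate axis.
P, Q = rng.standard_normal((2, 2)), rng.standard_normal((3, 3))
y_kron = np.einsum("ac,bd,cd->ab", P, Q, x.reshape(2, 3)).reshape(6)
```

Both recover the dense formulas: `y_lowrank` equals `A @ B @ x`, and `y_kron` equals `np.kron(P, Q) @ x` under row-major vectorization.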
**On experiments with additional architectures and datasets.**
Focusing on compute-optimal scaling laws limits the range of datasets that we can study and therefore we focused on language, as it is standard in scaling laws literature. However, to address your valid concern we have now incorporated Figure 2 and Figure 3 in the [rebuttal's pdf](https://openreview.net/attachment?id=cH4w74hFGe&name=pdf) where we run new experiments to verify that our findings regarding the relative performance of different Einsums continue to hold in other settings: 1) Vision transformers trained with cross-entropy loss for autoregressive image generation on CIFAR-5M; and 2) MLP trained with Mean-Squared-Error loss on synthetic data generated by a large and randomly initialized MLP. The results demonstrate that our findings in the GPT-2 experiments indeed generalize to other architectures and datasets. We will include these results in the updated paper.
**On comparing against MoE with other structures.**
Following your suggestion, we compare against two additional MoE architectures in Figure 4 in the [rebuttal's pdf](https://openreview.net/attachment?id=cH4w74hFGe&name=pdf): Dense-MoE and Low-Rank-MoE, which similarly replace all linear layers including those in the attention blocks with an MoE over dense or low-rank matrices. The results show that indeed BTT has a unique advantage over other structures for constructing structured MoEs. We will include these results in the updated paper.
**On generalization to more than 2 factors.**
The set of indices in Equation 1 does not require manual design to ensure expressive power. Instead, we obtain Equation 1 by simply allowing all possible indices to exist. Following the discussion on Line 92, this set of indices can be directly read off from a graphical representation of the Einsum, with each index corresponding to a hyperedge among the input, output, and the weight factors. As a result, generalization to $N > 2$ factors follows by constructing a graphical representation of the $N$-factor Einsum, assigning an index to each hyperedge, and writing down the resulting Einsum expression. This generalization is easier to visualize, so we now demonstrate the process for $N=3$ in Figure 4 (c) in the [rebuttal's pdf](https://openreview.net/attachment?id=cH4w74hFGe&name=pdf) and show how the resulting expression covers the general 3-factor case as well as BTT. We will include this new figure for $N=3$, as it visually makes the generalization to more factors apparent.
Thank you again for your review. We made a significant effort to address your questions, which has substantially improved our work. We would appreciate it if you would consider raising your score in light of our response.
[1] Dubey et al. 2024. The Llama 3 Herd of Models.
[2] Hoffmann et al. 2022. Training compute-optimal large language models.
[3] Dao et al. 2022. Monarch: Expressive structured matrices for efficient and accurate training
[4] Qiu et al. 2024. Compute Better Spent: Replacing Dense Layers with Structured Matrices | Summary: In this paper, the authors explore the computational efficiency of various structured layers in language modeling tasks. Specifically, they propose a general parametrization of linear operators and conduct an empirical study on the conditions for scalable decomposition based on three key characteristics: rank, compute intensity, and parameters per flops. Moreover, the authors integrate Mixture-of-Experts (MoE) with structured layers, conducting experiments and comparing the results to those obtained using the standard MoE.
Strengths: I believe that the topic of this paper is quite important and timely. With the increasing size of models, it is crucial to find effective ways to efficiently compress them. Among the various matrix and tensor factorization approaches available for compression, it becomes essential to unify them and determine which aspects make them most effective.
Weaknesses: 1) I understand that to check multiple different configurations of layers while having limited computational resources you need to somehow restrict your experiments. But what is lacking in my opinion, is verifying your findings with at least several runs in the proper setting.
2) As far as I understand, classic MoE is applied only to FFN, while you also apply it to all the layers, including Q, K, V. This may affect comparison with MoE. See, for example, in Figure 6.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1) Did you try other than BTT structured layers in the MoE experiments? Why do you only use BTT?
2) It seems to me that rounding, e.g., $d_{in}^\theta$ to the nearest integer interferes with maintaining the shape $d_{in}$. Is it actually the case and if so, how do you deal with it?
3) Do you expect that all the observations about $\omega$, $\psi$, $\mu$ will remain the same with a bigger dictionary?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thoughtful feedback. Indeed, unifying existing structured approaches and performing extensive and well-controlled comparisons between them is an important contribution of this work. We now provide additional results and clarifications to your questions.
**On experiments with larger vocabulary and longer context**.
Thank you for raising this point as these new results strengthen the paper significantly.
In Figure 1 in the [rebuttal's pdf](https://openreview.net/attachment?id=cH4w74hFGe&name=pdf), we now train models up to the size of original GPT-2 Small [1], adopting the vocabulary of 50,257 tokens and using a context length of 512. The results agree with our previous findings in Section 4 with a reduced vocabulary and a shorter context, showing that our results indeed hold in more realistic settings. Moreover, in Figure 2 and Figure 3, we evaluate on two additional datasets including image generation with Vision Transformers and synthetic regression with MLPs, further demonstrating the generality of our findings. We will include these results in the updated paper.
**On additional MoE comparisons**.
Initially we followed the standard approach of only comparing against the MoE that is only applied to the FFN layers. However, to address your concern on the lack of MoE modules in attention layers of the baseline, we now compare with two additional alternatives in Figure 4 in the [rebuttal's pdf](https://openreview.net/attachment?id=cH4w74hFGe&name=pdf): Dense-MoE and Low-Rank-MoE, which similarly replace all linear layers including those in the attention blocks with an MoE over dense or low-rank matrices. The results show that BTT still has a unique advantage over other structures for constructing structured MoEs, even when applying MoE to all layers. We will include these results in the updated paper.
**On rounding to the nearest integer**. Indeed, rounding to the nearest integers as described in Section 2 can produce a matrix $W$ whose input and output dimensions slightly deviate from the original desired shape. We take the simplest approach to address this issue by padding or truncating the input and output vectors. At scale, the number $\delta N$ of padded or truncated elements becomes vanishingly small relative to the original dimension $N$, as $\delta N / N$ scales as $O(N^{-c})$, where $c$ is the smallest non-zero element of $\theta$. We will update the paper to clarify this consideration.
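A minimal sketch of this pad-or-truncate step (our own naming, assuming the simple mismatch handling described above):

```python
import numpy as np

def fit_dim(x, target):
    # Zero-pad or truncate the last axis so the vector length matches the
    # desired dimension after the factor sizes were rounded to integers.
    n = x.shape[-1]
    if n < target:
        pad = np.zeros(x.shape[:-1] + (target - n,), dtype=x.dtype)
        return np.concatenate([x, pad], axis=-1)
    return x[..., :target]
```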
Thank you again for your constructive feedback and support. We made a significant effort to address your questions which has improved our work substantially; we would appreciate it if you would consider raising your score in light of our strong response and the significance of our work. We believe this paper will help provide a foundation for a nascent and immensely impactful new research area.
[1] Radford et al. 2019. Language models are unsupervised multitask learners
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns. Based on your comments I decided to increase my score. | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for their feedback and questions. We provide a general response here and individual replies in separate posts below. Inspired by comments from reviewers, we include multiple new experiments encompassing new datasets, new architectures, and alternative MoE structures that significantly strengthen our findings and demonstrate their applicability in much broader settings.
We appreciate the reviewer's recognition of this work's importance and timeliness. Scaling foundation models is primarily bottlenecked by compute in the dense linear layers, where structured matrices offer promising efficiency gains. Therefore, a comprehensive analysis of structured matrices' potential to enhance dense model scaling laws, along with identifying general properties of structures that correlate with their scaling behavior, is of significant value to the field.
**We highlight several particularly novel and impactful contributions in our work:**
1. We provide a unifying framework to conveniently parameterize a large and continuous space of hardware-efficient structured matrices via Einsums. We show this space contains many popular structured matrices explored in prior works, such as low-rank, Kronecker, Tensor-Train, Monarch, and Block Tensor-Train, while most structures within this space are novel. We further develop an informative taxonomy of the space of Einsums based on key properties relevant to machine learning, including the extent of parameter-sharing, matrix rank, and computational complexity.
2. We perform the first extensive comparison on the compute-optimal scaling laws of structured matrices for language modeling. State-of-the-art large language models (LLMs), including the recent Llama 3.1 405B, are purposely trained to be compute-optimal [1, 2], whereas prior works on training with structured matrices do not compare performance under compute optimality (e.g. by training for too many epochs) [3,4]. Our results therefore provide the more appropriate comparison for evaluating structured matrices in realistic LLM training. Indeed, our results reveal that structures such as Monarch, which significantly outperform dense in other contexts, at best match dense performance under compute-optimal scaling laws.
3. We show that differences in the compute-optimal scaling laws across a wide range of structures are mostly governed by a small number of variables defined in our taxonomy. Small $\omega$ (less parameter sharing) and large $\psi$ (closer to full-rank) reliably led to better scaling laws, while $\nu$ (how dense-like a structure is) can be varied while leaving the scaling laws almost unchanged. These insights will make future search for performant structured matrices significantly more efficient, providing a guiding foundation for this important emerging research area.
4. Guided by the insight that full-rank structures that maximize parameters per unit of compute perform the best, we propose BTT-MoE, a novel Mixture-of-Experts (MoE) architecture obtained by sparsifying computation in the BTT structure, proving to be 440% and 28% more compute-efficient than dense and standard MoE for training GPT-2, respectively.
**We now summarize the new experiments we run inspired by the reviewers’ feedback.** We present results and figures in the [attached pdf](https://openreview.net/attachment?id=cH4w74hFGe&name=pdf).
1. We verify that our findings regarding the relative performance of different structures continue to hold in the following additional setups:
- A more standard GPT-2 training setup. We train models up to the size of original GPT-2 Small [5], adopting the original vocabulary with 50,257 tokens and using a context length of 512. The results agree with our findings in Section 4, which used a reduced vocabulary of 96 tokens and a shorter context length of 128 to save cost.
- Vision transformers trained with cross-entropy loss for autoregressive image generation on CIFAR-5M.
- MLP trained with Mean-Squared-Error loss on synthetic data generated by a large and randomly initialized MLP.
We chose CIFAR-5M image generation and the synthetic regression as additional tasks because they contain enough training examples required to study compute-optimal scaling laws, unlike other commonly used datasets such as CIFAR-10 or ImageNet classification.
2. We show that BTT-MoE outperforms two additional MoE architectures: Dense-MoE and Low-Rank-MoE, which similarly replace all linear layers including those in the attention blocks with an MoE over dense or low-rank matrices. The results show BTT has a unique advantage over other structures when used in the proposed structured MoE architecture.
We hope the reviewers can consider these results and clarifications, and the broader context of this work and its significance, in their final assessment.
[1] Dubey et al. 2024. The Llama 3 Herd of Models.
[2] Hoffmann et al. 2022. Training compute-optimal large language models.
[3] Dao et al. 2022. Monarch: Expressive structured matrices for efficient and accurate training
[4] Qiu et al. 2024. Compute Better Spent: Replacing Dense Layers with Structured Matrices
[5] Radford et al. 2019. Language models are unsupervised multitask learners
Pdf: /pdf/b7f1d35add51703ced8c3a4dc39de2fcd8567195.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Graph Neural Networks Need Cluster-Normalize-Activate Modules | Accept (poster) | Summary: The paper introduces a novel plug-and-play module named Cluster → Normalize → Activate (CNA) to enhance the performance of Graph Neural Networks (GNNs). The CNA module is designed to address the issue of oversmoothing, which occurs in deep GNN architectures and limits their ability to solve complex tasks. The module operates by clustering nodes into super nodes, normalizing them, and applying individual activation functions. The authors demonstrate the effectiveness of CNA through extensive experiments on node classification, property prediction, and regression tasks, showing significant improvements in accuracy and a reduction in mean squared error compared to existing methods.
Strengths: * The paper presents a creative solution to a well-known problem in GNNs, oversmoothing, by introducing the CNA module. This approach is a significant advancement in the field of deep learning on graph-structured data.
* The authors provide a thorough empirical evaluation of the CNA module across various tasks and datasets, which substantiates the effectiveness of their proposed method.
Weaknesses: * The paper lacks a theoretical analysis to support the empirical findings. A more rigorous theoretical underpinning could strengthen the claims made about the CNA module's effectiveness. Could the authors provide a theoretical analysis or proof that supports the empirical results?
* The paper does not fully address, in theory or in practice, the computational complexity added by the clustering step in the CNA module, which could be a concern for very large graphs or real-time applications, even with a small number of parameters. Are there any optimizations or alternative clustering methods considered to address potential scalability issues?
* In Table 4, why does SAGEConv+CNA have fewer parameters than GraphSAGE, while GCNConv+CNA has more parameters than GCN? Also, the numbers vary a lot.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments and finding our method creative, a significant advancement, and empirical evaluation thorough. We address your concerns next.
### Q1 (Theoretical underpinnings):
We agree that an improved understanding of the mechanisms driving CNA's strong empirical performance would be very helpful. Therefore, we added a discussion of the theoretical properties of CNA and its relation to oversmoothing to the global comment above.
In summary, we first show how existing proofs of the necessity of oversmoothing don’t apply to GNNs with CNA. We continue by arguing why they cannot trivially be repaired, since one can construct a hypothetical variant of CNA that certainly inhibits oversmoothing entirely. In practical settings, CNA takes a middle ground between classic oversmoothing GNNs and this potentially immune construction.
### Q2 (Scalability):
We totally agree that scalability is an issue easily overlooked in GNNs. There are two possible approaches to this: (1) Asymptotic behavior and (2) practical scalability.
(1): The runtime of k-means scales linearly with the size of the dataset, which here is the number of nodes in the graphs (cf. lines 187-191 in the submission). Its space requirement is favorable too. This is one reason we chose it over other methods, such as hierarchical clustering, since those methods typically require at least quadratic runtime. We did initially experiment with using Gaussian Mixture Models (which still do scale linearly in the dataset size), yet found their increased overall GNN performance to be insufficient to justify their high practical costs. This leads us to the next reason, its practical performance.
(2): Interested in investigating the practical scaling properties of CNA, we applied it to the rather large *ogbn-arxiv* dataset (e.g., see Table 4). With 169k nodes and 1M edges, it shows that CNA can be applied to large-scale data.
We do recognize the pursuit of scaling to yet larger datasets as one key task for future work. Since the heuristic clusterings of k-means were already sufficient, we can be optimistic about significant further performance improvements. As perfect clustering does not seem to be necessary, subdividing the nodes into independent partitions in which we perform clustering might be feasible while maintaining good overall performance. This would allow for any clustering algorithm to get parallelized. Moreover, even faster procedures like random projections clustering might suffice [Fern et al., 2003].
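For illustration, one Lloyd iteration of k-means over node features, whose cost is $O(n \cdot k \cdot d)$ and hence linear in the number of nodes $n$ for fixed $k$ and feature dimension $d$ (a generic sketch, not the paper's implementation):

```python
import numpy as np

def kmeans_step(features, centers):
    # Assignment step: nearest center per node, O(n * k * d).
    d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)
    # Update step: each center becomes the mean of its assigned nodes
    # (empty clusters keep their previous center).
    new_centers = np.stack([
        features[assign == j].mean(axis=0) if (assign == j).any() else centers[j]
        for j in range(centers.shape[0])
    ])
    return assign, new_centers
```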
### Q3 (Parameter counts in Table 4):
The sole purpose of the experiment in Table 4 was to show that CNA allows for much smaller models at similar performance. To this end, for row (i), we trained a SAGEConv+CNA model to rival the results of SAGEConv without CNA. For a fair comparison, we also show the exact same configuration without CNA in the table below. Row (ii) similarly shows that GCNConv+CNA can surpass the state-of-the-art with fewer parameters. We also present that model's performance without CNA:
| Model | Accuracy (↑) | Number of Parameters (↓) |
| -------- | ------------ | ------------------------ |
| SAGEConv | 59.97±0.33% | 34780 |
| GCNConv | 69.66±0.27% | 388440 |
### References
(1) Fern, Xiaoli Zhang, and Carla E Brodley (2003): Random Projection for High Dimensional Data Clustering: A Cluster Ensemble Approach. In Proceedings of the Twentieth International Conference on Machine Learning (ICML-2003). Washington DC.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' responses; my concerns are resolved. I have raised my score to 5.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for acknowledging that we have resolved all your concerns. If there are any further comments from your side, we will be happy to address them before the rebuttal period ends. If there are none, then we would appreciate it if you could reconsider your rating and support our paper for acceptance.
Regards,
Authors | Summary: The authors propose a new normalization scheme they call "CNA." They propose clustering the nodes according to their features and computing the normalization statistics for each cluster.
This is essentially a batch norm with groups tailored for graphs.
This normalization is then augmented with a learnable activation function. The three components above are fused together to a "CNA" block to be used after each message passing layer in an MPGNN.
The authors then perform several experiments on node classification and regression tasks.
The authors examine the effect of CNA on the performance of MPGNNs with numerous layers on the CiteSeer and Cora datasets. They further perform benchmark experiments, showcasing impressive improvements on the CiteSeer dataset.
Afterwards, some additional ablation studies are conducted, considering different subsets of the CNA components showing how the performance is affected.
Strengths: 1. I see why the proposed method is reasonable for helping combat over-smoothing, at least in some scenarios.
After all, enforcing normalization for nodes with similar features seems to be stronger than enforcing normalization for all nodes.
2. The presentation is neat, and the writing is satisfying.
Weaknesses: 1. Although the "How" is clear, I am not fully convinced by the "Why". The authors did not provide any theoretical insight into why CNA works.
2. I also find the empirical experiments lacking. I would have expected many more experiments demonstrating why CNA helps combat over-smoothing, especially analyses on different types of graphs (e.g., homophilic vs. heterophilic, inductive vs. transductive, etc.).
3. The authors did not examine how the dataset properties affect the effectiveness of CNA.
4. The experiments do not convince why CNA is preferred over other normalization layers.
5. I do find the aggregation part of CNA slightly unrelated to the main purpose of the paper and incremental to the results (as clearly indicated by the ablation studies).
6. Some of the benchmark's improvements seem to be statistically insignificant (the improvement in the mean performance seems small compared to the stds).
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see the "Weaknesses" section
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: No potential negative societal impact is observed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments and for finding the writing satisfying. We address your concerns next.
### Q1 (Why CNA works):
We now provide theoretical underpinnings of why CNA works. Please have a look at the global response.
### Q2 (Empirical evaluation):
We already provided empirical results on
- node classification,
- multi-scale node regression,
- oversmoothing,
- parameter parsimony,
- ablation experiments, and
- hyperparameter sensitivity.
We focused on transductive node property prediction, following the works of, for example, Zhao et al. (2020), Zhou et al. (2020), and Rusch et al. (2023). We see no fundamental issues with settings where graphs vary. For instance, CNA works well in graph-level tasks too:
| Dataset | Accuracy with CNA (↑) | Accuracy with ReLU (↑) |
| -------- | --------------------- | ---------------------- |
| MUTAG | **81.60±4.18%** | 78.42±6.55% |
| ENZYMES | **50.01±3.25%** | 36.97±3.08% |
| PROTEINS | **74.44±2.49%** | 72.72±2.60% |
Here, we ran GCNConv with ReLU and compared it with CNA (@ 10 clusters) on three datasets: MUTAG, ENZYMES, and PROTEINS [Morris et al., 2020]. Results are averaged over five seeds. CNA improves classification accuracy for all data sets, confirming the effectiveness of CNA across various tasks.
We have now extended our inspection of oversmoothing by visualizing the Dirichlet energy (DE) in shallow to very deep GNNs; DE is a well-known measure of (over)smoothing [Rusch et al., 2023]. Please see Figure 1 in the rebuttal PDF. The vanilla GNNs typically start in the undersmoothing regime with low DE, transition to an optimal region with elevated DE, and finally deteriorate due to oversmoothing with low DE again. A high DE is a necessary condition for good model performance, as all CNA models and the linearized GATConv and GCNConv demonstrate. This strongly suggests that CNA inhibits oversmoothing at greater depths and thereby unlocks a large portion of its performance increase. The increased expressivity then allows it to achieve better accuracies than the linearized GNNs, combining the best of both worlds.
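As a hedged sketch of the quantity being plotted: below we compute the simple unnormalized Dirichlet energy DE(X) = (1/N) · Σ_{(i,j)∈E} ||x_i − x_j||². Rusch et al. (2023) use a degree-normalized variant, but the qualitative behavior is the same: identical node features (fully oversmoothed) give DE = 0.

```python
# Minimal sketch of the Dirichlet energy used to quantify (over)smoothing.
# Fully oversmoothed features (all nodes identical) yield DE = 0;
# distinguishable features yield DE > 0.
import numpy as np

def dirichlet_energy(X, edges):
    """X: N x d node features, edges: list of (i, j) pairs."""
    diffs = np.array([X[i] - X[j] for i, j in edges])
    return (diffs ** 2).sum() / len(X)

X_diverse = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 2.0]])
X_smoothed = np.array([[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]])
edges = [(0, 1), (1, 2), (0, 2)]

print(dirichlet_energy(X_diverse, edges))   # -> 4.0 (distinguishable)
print(dirichlet_energy(X_smoothed, edges))  # -> 0.0 (fully oversmoothed)
```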
### Q3 (Effect of dataset properties):
We are happy to shed more light on the details of our findings. Table 6 in the appendix provides an overview of the dataset properties and statistics. There are several aspects along which we can evaluate the effectiveness of CNA:
- **Number of nodes and edges**: Since CNA increases model expressivity and capacity, its performance benefit is stronger on larger datasets. In particular, in the results of Tab. 3, classic methods work better on the rather small Texas and Wisconsin tasks. Furthermore, the original ablation study in Tab. 5 was run on Cora, which is only of medium size. As our response to your (Q5) shows, the benefits are much larger on the huge ogbn-arxiv.
- **Degree of homophily**: CNA is not directly affected by a graph being homo- or heterophilic, because the clustering step and the subsequent normalization and activation are invariant to the graph's topology. Indeed, we see good performance on both homophilic graphs, such as Cora, and heterophilic ones, such as Chameleon. The homophily degrees are reported in Tab. 6 in the appendix.
- **Number of features**: CNA appears to better leverage large numbers of features. Looking again at Tab. 3, the difference due to CNA is rather small on Computers, Photo, and Pubmed, where the number of features is only moderate. Results tend to be better when more features are available, possibly due to easier clustering.
- **Number of classes** (for node classification tasks): CNA can be applied successfully to small and large numbers of classes alike, as is apparent when comparing Cora with 7 classes to CoraFull with 70 in Tab. 3.
### Q4 (Comparison to other normalization techniques):
Thank you for the question. We provide the new results. Please have a look at the global response.
### Q5 (Ablation of activation function):
Most commonly employed normalization techniques are followed by a simple scalar transformation, including the widely used BatchNorm, InstanceNorm, LayerNorm, and GroupNorm (see also Huang et al. (2023), specifically Tab. I). We extend this notion with much more powerful transformations, namely Rational activations, which are themselves universal scalar function approximators. This further increases the expressivity of GNNs.
Please also have a look at our response to (Q2) of reviewer gBSJ.
### Q6 (Statistical significance):
We kindly disagree and concur with the three other reviewers:
- “CNA module enhances the expressiveness and performance of GNNs” (Reviewer jRzY)
- “very strong improvement” (Reviewer gBSJ)
- “showing significant improvements” (Reviewer v2xE)
Table 1 shows that introducing CNA into existing architectures consistently increases their classification performance by more than 10%. Table 2 shows that CNA achieves the best overall results on regression tasks, too. In Table 3, we compare CNA to the state-of-the-art on an extensive collection of standard benchmark datasets and find that CNA is the best architecture overall.
### References
- Huang et al. Normalization Techniques in Training DNNs: Methodology, Analysis and Application. TPAMI 2023
- Morris et al. TUDataset: A Collection of Benchmark Datasets for Learning with Graphs. ICML 2020 Workshop
- Rusch et al. A Survey on Oversmoothing in Graph Neural Networks. arXiv 2023
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
After thoroughly reviewing your response to my concerns and those of other reviewers, I believe that the additional clarifications and experiments you provided help to emphasize the proposed method's contribution and quality. As a result, I have raised my score to 5.
---
Reply to Comment 1.1.1:
Comment: Thank you for the comment and raising the score. We would be happy to answer any further concerns before the rebuttal period ends. If there are none, then we would appreciate if you can reconsider your rating and support our paper for acceptance. | Summary: The paper describes a new updating rule based on a sequence of operation clustering, normalization and a learnable activation to replace the original plain relu-like update message passing and empirically show that such learnable updating function gains large performance improvement on existing benchmark datasets. The author claims the new updating rule alleviates oversmoothing to some extent and has better expressive power from a learnable activation function. Compared with existing graph normalization, the learnable involved for normalization in this paper is postponed to the learnable activation stage to ensure a better expressivity. The clustering used here is a simple Kmeans clustering and the paper suggests that simple kmeans is sufficient to guarantee performance and save computation time.
Strengths: 1. The paper presents the new update rules with a clear structure; the writing is clear and the presentation is easy to understand.
2. The empirical experiments suggest a very strong improvement over the baseline models, even compared with the existing PWC leaderboard, somewhat indicating the effectiveness of the new update method.
3. The computational resources needed are suggested to be manageable compared with more complex architectures.
4. The idea is simple and seems to be applicable universally to most MPNN structures.
Weaknesses: 1. Although the empirical results show a great performance gain on the Cora and CiteSeer datasets, the CNA module lacks theoretical analysis of why it works and where the performance gain comes from. It is very unintuitive why normalization over k-means-based clusters should perform so much better, and a comparison between existing graph normalization techniques and the proposed one is missing.
2. The ablation study in Table 5 suggests that clustering and normalization alone give most of the performance gain on the Cora data; the effect of the learnable activation module seems negligible. The authors claim the expressive power resides in this learnable activation part. So why does the performance gain mostly come from the first two parts (per the results of Table 5)? This somewhat contradicts the authors' claim.
3. To better reflect the method's ability to alleviate oversmoothing, the authors should show a layer-wise Dirichlet energy plot for the method and also consider extending the layer count from 32 to at least 64, as done in previous papers such as G2-gating.
4. The lack of discussion of the clustering method (only k-means is used) raises concerns about the stability of the method. K-means clustering is known to be unstable. It is odd that there is no discussion of sensitivity to the choice of cluster number, distance metric, etc.
Technical Quality: 2
Clarity: 3
Questions for Authors: As suggested in the weaknesses. My main concerns are points 2 and 4.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The author suggests the lack of large-scale dataset experiments and limited connection to oversmoothing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments and for acknowledging our strong results, universality of the method, and clear writing. We will address the concerns next.
### Q1 (Additional theoretical analysis and comparison to other normalization techniques):
Please see our general response/comment for more theoretical insights.
Regarding the normalization techniques, there indeed are many methods specifically for GNNs, as also discussed in Section 2 on Related Work. In particular, GraphNorm for graph-level tasks (as opposed to node-level ones) provides partial normalization, where features are normalized across each separate graph [Cai et al., 2021]. PairNorm normalizes features to zero mean and constant total pairwise squared distance between nodes [Zhao and Akoglu, 2020]. However, **considering groupings of the nodes can further improve the effectiveness of normalization. This was also shown by Zhou et al. (2020), where learned soft clustering was performed instead of the nonparametric hard k-means in our method.** Generally, one wants not only to normalize within clusters but also to apply a learned transformation [Ioffe and Szegedy, 2015]. This makes the constituents of each cluster distinct from those of the others.
We further note that we replaced the affine transformation following most normalization techniques with a more expressive Rational activation. Overall, CNA proved to be highly effective. Please see comparisons to other normalization techniques in the global comment.
We further added a discussion of theoretical properties related to oversmoothing to the global comment above. We believe that reducing the impact of oversmoothing significantly contributes to the improved performance of CNA observed in our thorough empirical evaluation. See also our response to your third question (Q3) for deeper empirical insights.
### Q2 (Ablation study):
Much like with other architectures, different components contribute to the overall performance to differing degrees, depending on the dataset. For example, if the ablation study is performed on the *ogbn-arxiv* dataset instead of on Cora, the necessity of the learned Activate step is much more visible:
| Ablated variant | Accuracy (↑) |
| --------------- | ------------------- |
| CNA | **74.64±0.13%** |
| CN + ReLU | 69.55±0.42% |
The table shows the results of training a GNN with GCNConv with cluster and normalize steps, but with (CNA) and without rational activations (CN+ReLU). We further note that full CNA improves the convergence speed of GNN learning over just CN, as shown in Figure 2 in the rebuttal PDF.
### Q3 (Alleviating oversmoothing in deeper GNNs):
Thank you for this astute comment! Please have a look at the top two graphs of Figure 1 in the rebuttal PDF. There, we reran and extended the results on Cora and CiteSeer beyond 32 layers to 64 and even 96 layers, again with five seeds. This further confirms that CNA maintains high performance even at great depths. Coming in second are again the models with removed activations ("linear"). Lastly, vanilla GNNs quickly deteriorate in performance at increasing depths.
Moreover, we visualized the final Dirichlet energy (DE) of each model in Figure 1 of the rebuttal PDF; DE is a well-known measure of (over)smoothing [Rusch et al., 2023]. The vanilla GNNs typically start in the undersmoothing regime with low DE, transition to an optimal region with elevated DE, and finally deteriorate due to oversmoothing with low DE again. A high DE is a necessary condition for good model performance, as all CNA models and the linearized GATConv and GCNConv demonstrate. This shows that CNA inhibits oversmoothing at greater depths and thereby unlocks a large portion of its performance increase. The increased expressivity then allows it to achieve better accuracies than the linearized GNNs, combining the best of both worlds.
### Q4 (Stability of k-means):
Indeed, we can provide a more in-depth discussion of the properties of k-means to justify its use and good empirical performance.
Since we were also interested in the sensitivity of CNA to the number of clusters, we performed a dedicated study showing that good configurations can easily be found by hyperparameter search. We describe those results in lines 300-304 and Figure 6. Overall, this confirms that k-means often provides decent clustering in practical settings and is sufficiently stable [Ben-David et al., 2007]. We also found that using much more computationally expensive Gaussian Mixture Models did not significantly improve the final performance of the GNNs. In our work, we compared nodes by their Euclidean distance, which we found to work reliably in our experiments. However, other applications might use different or even domain-specific distances. This flexibility is a benefit of CNA, allowing for more task context to be used in modeling.
We will add this discussion of the stability of k-means to the main paper.
### References
(1) Ben-David et al. Stability of K-Means Clustering. Learning Theory 2007
(2) Cai et al. GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training. ICML 2021
(3) Ioffe and Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. ICML 2015
(4) Rusch et al. A Survey on Oversmoothing in Graph Neural Networks. arXiv 2023
(5) Zhao and Akoglu. PairNorm: Tackling Oversmoothing in GNNs. ICLR 2020
(6) Zhou et al. Towards Deeper Graph Neural Networks with Differentiable Group Normalization. NeurIPS 2020
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed explanation and the experimental results on the Dirichlet energy. I found most of it convincing. I have, however, one question related to the theoretical argument. I understand that it is non-trivial to provide a rigorous proof of how CNA directly avoids oversmoothing, and your analysis of why the existing proofs do not apply, together with the extreme-case analysis, seems convincing. Yet I find two important aspects missing that should be investigated:
1. In the survey paper [1], Rusch mentions that, in addition to the Dirichlet energy, expressive power is equally important for improving the performance of deep GNN models. Therefore, a theoretical analysis of the expressive power of CNA compared with a non-CNA module should be provided.
2. Since k-means has shown sufficiently good performance for the CNA clustering part, I wonder what exactly it is clustering; any analysis of that part would provide valuable theoretical insight into why this method works. My current intuition is that the clustering is essentially a form of rewiring in feature space, and the normalization serves a similar function of feeding the feature-rewiring edge information to the nodes.
I think these two points should be addressed more in the theoretical analysis, as it is easy to observe that the Dirichlet energy is well preserved, and it is in fact easy to preserve such energy according to [1].
Overall, I think the paper is very valuable if the two points can somehow be addressed.
[1] Rusch, T., Bronstein, M.M., & Mishra, S. (2023). A Survey on Oversmoothing in Graph Neural Networks. ArXiv, abs/2303.10993.
---
Rebuttal 2:
Comment: Thank you for taking the time to provide the feedback on our rebuttal. In the following, we want to address both aspects you pointed out:
### 1. Expressive power of CNA
The expressive power of Rationals was studied by both Delfosse et al. (1) and Telgarsky (2). The author of (2) proved Rationals to be better approximants than polynomials in terms of convergence. Delfosse et al. (1) show in their work that Rationals endow a model with high neural plasticity. More to this point, they provide a proof that Rationals can dynamically make use of a residual connection: a Rational embeds a residual connection ⇔ m > n. See p. 4 of (1) for the theorem and its proof. Throughout our experimental evaluation, we used m=5 and n=4, which answers where the expressive power of CNA modules stems from.
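For illustration, here is a hedged sketch of a degree-(5, 4) "safe" rational activation in the style of Molina et al.'s Padé Activation Units: 6 numerator plus 4 denominator coefficients give 10 learnable parameters per activation. The coefficient values below are illustrative placeholders, not the initializations used in the paper.

```python
# Sketch of a "safe" rational activation R(x) = P(x) / Q(x) with
# P of degree m = 5 and Q(x) = 1 + |b1*x + ... + b4*x^4| (degree n = 4).
# The absolute value keeps the denominator >= 1, avoiding poles.

def rational(x, a, b):
    num = sum(a_k * x ** k for k, a_k in enumerate(a))                   # P(x)
    den = 1.0 + abs(sum(b_k * x ** (k + 1) for k, b_k in enumerate(b)))  # Q(x)
    return num / den

a = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0]  # 6 numerator coefficients (degree 5)
b = [0.0, 0.0, 0.0, 0.0]            # 4 denominator coefficients (degree 4)
assert len(a) + len(b) == 10        # 10 learnable parameters per activation
print(rational(2.0, a, b))          # -> 2.0: these coefficients give R(x) = x
```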
### 2. Role of Clustering in CNA
We agree with your intuition. More to the point, Shi et al. (3) demonstrate that clustering minimizes redundant connectivity and thereby makes message passing more effective. And, as you point out, normalization facilitates feeding the rewiring edge information to the nodes.
We will add these points to the paper. Thanks again; your input has helped make our manuscript clearer. We hope we have addressed all your concerns, and we would appreciate it if you could reconsider your score.
##### **References**
(1) Delfosse et al., Adaptive Rational Activations to Boost Deep Reinforcement Learning. ICLR 2024.
(2) Matus Telgarsky, Neural Networks and Rational Functions. ICLR 2017.
(3) Shi et al., ClusterGNN: Cluster-Based Coarse-To-Fine Graph Neural Network for Efficient Feature Matching. CVPR 2022.
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer,
Thank you again for the follow-up question, which we answered a couple of days ago. We hope we have now convinced you of the value of our method. Please do let us know if there are any further concerns from your side.
Regards,
Authors
---
Rebuttal 3:
Comment: Thank you for the comment!
### On Expressive Power of Learnable Activations in Context of GNNs
To the best of our knowledge, our work is the first to use Rationals in the context of GNNs. More broadly, research on learnable activation functions in GNNs is limited, and our work contributes to this area by exploring the use of Rationals.
Some papers do explore the effects of activation functions on the expressive power of GNNs:
Khalife and Basu (1) study the power of graph neural networks and the role of the activation function, showing that changing the activation function can drastically change the power of GNNs.
Yu et al. (2) investigate the influence of different activation functions in GNNs and find that the choice of activation function can significantly affect performance.
Although these studies do not specifically focus on learnable activation functions, they do highlight the importance of activation functions in determining the expressive power of GNNs. Both discuss this impact in the context of the Weisfeiler-Lehman (WL) test, though neither specifically addresses how activation functions could help exceed the limits of the WL test.
The first paper (1) proves that GNNs with piecewise polynomial activations cannot distinguish certain non-isomorphic graphs, while those with non-piecewise polynomial activations (like sigmoid, hyperbolic tan) can distinguish them in two iterations.
The second paper (2) investigates the expressive power of Graph Transformers with different activation functions (softmax, tanh, sigmoid) and finds that sigmoid is the most powerful, enabling the Graph Transformer to achieve the same level of expressiveness as the WL test.
Molina et al. (3) show that Rationals can approximate either tanh or sigmoid functions, allowing for end-to-end learning of deep networks without the need to manually select fixed activation functions. CNA builds upon these findings.
Neither (1) and (2) nor other related works explicitly discuss how activation functions can help exceed the limit of the WL-test. The focus is primarily on understanding how different activation functions affect the expressive power of GNNs within the bounds of the WL-test.
### Role of Clustering and Normalization in CNA
Some papers do explore the concepts of normalization and rewiring in GNNs, providing empirical evidence:
Chen et al. (4) study on learning graph normalization for GNNs discusses the importance of normalization in improving the performance of GNNs.
Caso et al. (5) propose a novel graph rewiring approach to improve GNNs' performance on graph-related tasks.
Zhou et al. (6) offer a review of methods and applications of GNNs, highlighting the role of normalization and rewiring in enhancing the efficiency and effectiveness of GNNs.
Although these studies do not specifically address how normalization facilitates feeding rewiring edge information to nodes in combination with clustering, they do emphasize the importance of normalization and rewiring in improving the performance of GNNs. As far as we are aware, sadly, there are no dedicated theoretical studies on this particular question yet. We suggest that future research could investigate this topic further, potentially leading to new insights and improvements in GNNs.
As far as Shi et al. are concerned, we agree that there is no theoretical analysis there, but the reduction of redundancy is the whole motivation behind the paper, and the experimental results show that.
### Conclusion
In conclusion, our proposed method, CNA, contributes to the current state of research on the expressive power of learnable activations in the context of GNNs. We have addressed the reviewer's concerns and provided an overview of related work on this topic, in addition to the **theoretical analysis of the combined CNA modules in the global comment**. While there are no dedicated theoretical studies on the specific role of clustering and normalization in CNA yet, we suggest that future research investigate this topic further.
### References
(1) Khalife, S., & Basu, A.. On the power of graph neural networks and the role of the activation function. arXiv preprint arXiv:2307.04661.
(2) Yu et al., Activation Function Matters in Graph Transformers, ICLR 2024
(3) Molina et al., Padé Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks. ICLR 2020.
(4) Chen et al., Learning Graph Normalization for Graph Neural Networks, Neurocomputing 2022
(5) Caso et al., Renormalized Graph Neural Networks, preprint on arXiv:2306.00707v1
(6) Zhou et al., Graph neural networks: A review of methods and applications, AI Open 2020 | Summary: This paper proposes a novel module, CNA (Cluster-Normalize-Activate), to address the oversmoothing problem in Graph Neural Networks (GNNs). The CNA module operates in three steps: clustering node features, normalizing them within clusters, and applying learnable activation functions to each cluster.
Strengths: 1. Experiments and analysis are solid. Results show that the CNA module enhances the expressiveness and performance of GNNs, particularly in node classification and regression tasks.
2. The CNA module can be applied to various GNN architectures.
3. Incorporating the CNA module requires fewer or comparable parameters than existing SOTA methods.
Weaknesses: 1. In Table 4, I would encourage to report the performance and parameter amount of SAGEConv and GCNConv baselines as well. It could help readers to have a clearer understanding of how many parameters are introduced by the proposed CNA module.
2. Can the CNA module improve the performance in graph-level tasks?
Technical Quality: 3
Clarity: 4
Questions for Authors: See in Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments and for acknowledging that our experiments and analysis are solid and that our method is easily adaptable. We will address your concerns next.
### Q1 (Results on SAGEConv and GCNConv):
Thank you for taking a closer look. We have now run new experiments using the same SAGEConv and GCNConv configurations without CNA. Please find the results in the table below. As can be seen, the accuracy drops by a huge margin, whereas the change in the number of parameters is minimal.
| Model | Accuracy (↑) | Number of Parameters (↓) |
| -------- | ------------ | ------------------------ |
| SAGEConv | 59.97±0.33% | 34780 |
| GCNConv | 69.66±0.27% | 388440 |
A Rational activation only introduces 10 learnable parameters, since the activation is performed pointwise on each entry of the node feature vector. This means that for 5 clusters and 20 layers, we add 5\*20\*10 = 1000 new learnable parameters.
### Q2 (CNA on graph-level tasks):
We gladly extended our evaluation of CNA to graph-level learning, and report the results below. In particular, we ran GCNConv with ReLU and compared it with CNA (@ 10 clusters) on three datasets: MUTAG, ENZYMES, and PROTEINS [Morris et al., 2020]. Averaged over five seeds, CNA improves classification accuracy for all data sets:
| Dataset | Accuracy with CNA (↑) | Accuracy with ReLU (↑) |
| -------- | --------------------- | ---------------------- |
| MUTAG | **81.60±4.18%** | 78.42±6.55% |
| ENZYMES | **50.01±3.25%** | 36.97±3.08% |
| PROTEINS | **74.44±2.49%** | 72.72±2.60% |
This confirms the effectiveness of CNA across a variety of tasks.
### References
(1) Morris, Christopher, Nils M Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, and Marion Neumann (2020): TUDataset: A Collection of Benchmark Datasets for Learning with Graphs. Graph Representation Learning and Beyond (GRL+), ICML 2020 Workshop.
---
Rebuttal Comment 1.1:
Comment: Thanks for the additional experiments! I will keep my positive score. | Rebuttal 1:
Rebuttal: ### Global Response/Comment
We want to thank all reviewers for their time and effort in improving this work. We particularly appreciate your acknowledgment of the relevance of the issue, the novelty of the approach, and the extensive and convincing experiments. Your comments and questions helped improve the paper, and we hope to have clarified all of them below.
Since there were multiple requests for theoretical analysis, we provide it here in the global response. We wrote dedicated answers to all your other questions. Furthermore, please see the attached PDF.
**Theoretical Analysis**
It has been suggested to more formally discuss the relationship of CNA to oversmoothing. There are two directions in which we can tackle this. (1) We can show *how* previous proofs of the necessary occurrence of oversmoothing in vanilla GNNs are not applicable when CNA is used. (2) We can provide a reason for *why* these proofs are not easily repairable and how CNA breaks free of the oversmoothing curse.
**(1)** The Rational activations of CNA trivially break the assumptions of many formalisms due to their potential unboundedness and lack of Lipschitz continuity. This includes Prop. 3.1 of Rusch et al. (2023), where the core proofs on oversmoothing are deferred to Rusch et al. (2022). There, the activation $\sigma$ is assumed to be point-wise and is further narrowed to ReLU in the proof in Appendix C.3. Regarding the more recent work of Nguyen et al. (2023), CNA again violates the assumptions neatly discussed in Appendix A. The CNA module can be modeled either as part of the message function $\psi_k$ or as part of the aggregation $\oplus$. In both cases, the proof of Prop. 4.3 (which is restricted to regular graphs) breaks down: in the former case, there appears to be no immediate way to repair the proof of Eq. (15) in Appendix C.3; in the latter, providing the upper bounds in Appendix C.2 is far from straightforward.
**(2)** This raises the question of why that is difficult. The answer is that we specifically built CNA to break free of the current limitations of GNNs. Empirically, this indeed overcame oversmoothing. In theory, this makes sense once we consider two possible extremes that arise as special cases of CNA. Consider a graph with $N$ nodes. On one end, we can consider CNA with $N$ clusters and Rationals that approximate some common activation, such as ReLU. This exactly recovers the standard MPNN architecture, which is known to be doomed to oversmoothing under reasonable assumptions (see above). We can alternatively consider the same setting with only one cluster, i.e., MPNNs with global normalization, which is also known to exhibit oversmoothing empirically [Zhou et al., 2020]. On the other end, we can consider $N$ clusters with constant Rational activations given by $R_i (x) = i$ for each cluster $i \in \{1, \dots, N\}$. Obviously, the Dirichlet energy of that output is lower bounded and does not vanish no matter the number of layers. In practice, we employ, of course, between one and $N$ clusters, thereby trading off the degree to which the GNN is affected by oversmoothing. Bounding this relationship goes far beyond the scope of the current work and is definitely an interesting direction for future work.
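To make the oversmoothing diagnostic in this argument concrete, here is a minimal sketch (our own illustration, not from the paper; the function name and unnormalized edge sum are assumptions) of the Dirichlet energy: it vanishes exactly when connected node features collapse to a common value, and stays bounded away from zero for the constant-per-cluster extreme $R_i(x) = i$.

```python
import numpy as np

def dirichlet_energy(X, edges):
    # Sum of squared feature differences across edges -- a standard
    # oversmoothing diagnostic. It is zero iff features are constant
    # on every connected component, i.e., fully oversmoothed.
    return sum(float(np.sum((X[i] - X[j]) ** 2)) for i, j in edges)

# Extreme case from the argument above: node i receives the constant output i.
X = np.arange(3, dtype=float)[:, None]           # features 0, 1, 2
edges = [(0, 1), (1, 2)]                         # a path graph
print(dirichlet_energy(X, edges))                # 2.0, bounded away from 0
print(dirichlet_energy(np.ones((3, 1)), edges))  # 0.0, collapsed features
```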
**Overall motivation: why CNA works**
- Cluster: We first identify groups of nodes with similar properties. We later learn separate projections for each such group, ensuring distinct feature representations. It is reasonable to assume such a structure exists, especially in classification datasets.
- Normalize: To maintain good stability during the propagation through the layers, we normalize the features within each cluster.
- Activate: Finally, we project the nodes with separate learned activations per cluster. These expressive functions also take up the work of the usual learned transformation after normalization steps.
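The three steps above can be sketched as follows. This is our own minimal illustration: nearest-centroid cluster assignment and fixed callables standing in for the learned Rational activations are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def cna_layer(X, centroids, activations, eps=1e-5):
    """Minimal Cluster-Normalize-Activate sketch over node features X (N, d)."""
    # Cluster: assign each node to its nearest centroid.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    labels = dists.argmin(axis=1)
    out = np.empty_like(X)
    for c in range(len(centroids)):
        mask = labels == c
        if not mask.any():
            continue
        # Normalize: standardize features within the cluster for stability.
        Z = (X[mask] - X[mask].mean(0)) / (X[mask].std(0) + eps)
        # Activate: apply the cluster-specific activation.
        out[mask] = activations[c](Z)
    return out

X = np.random.default_rng(0).normal(size=(8, 4))
centroids = X[:2]                                # two toy cluster centers
acts = [np.tanh, lambda z: np.maximum(z, 0.0)]   # stand-ins for Rationals
Y = cna_layer(X, centroids, acts)
```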
**Comparison to other graph normalization techniques**
We also provide a comparison of CNA to BatchNorm (5), Differentiable Group Normalization (DGN) (4), and PairNorm (6):
| Model and Normalization | Accuracy on Cora (↑) |
| ----------------------- | -------------------- |
| GCNConv | 82.2% |
| GCNConv + BatchNorm | 73.9% |
| GCNConv + DGN | 82.0% |
| GCNConv + PairNorm | 71.0% |
| GCNConv + CNA (ours) | **93.66±0.48%** |
Results for the other methods were taken from Zhou et al. (2020). Details on how further ablated variants of CNA fare can be found in Table 5. We also note that Table 2 already compares PairNorm with CNA.
**References**
(1) Rusch, T. Konstantin, Benjamin Paul Chamberlain, Michael W. Mahoney, Michael M. Bronstein, and Siddhartha Mishra (2023): Gradient Gating for Deep Multi-Rate Learning on Graphs. ICLR 2023.
(2) Rusch, T Konstantin, Benjamin P Chamberlain, James Rowbottom, Siddhartha Mishra, and Michael M Bronstein (2022): Graph-Coupled Oscillator Networks. ICML 2022. Baltimore, Maryland, USA.
(3) Nguyen, Khang, Hieu Nong, Vinh Nguyen, Nhat Ho, Stanley Osher, and Tan Nguyen (2023): Revisiting Over-Smoothing and over-Squashing Using Ollivier-Ricci Curvature. ICML 2023, 202:25956–79. Honolulu, Hawaii, USA.
(4) Zhou, Kaixiong, Xiao Huang, Yuening Li, Daochen Zha, Rui Chen, and Xia Hu (2020): Towards Deeper Graph Neural Networks with Differentiable Group Normalization. NIPS ’20. Red Hook, NY, USA.
(5) Ioffe, Sergey, and Christian Szegedy (2015): Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. ICML 2015.
(6) Zhao, Lingxiao, and Leman Akoglu (2020): PairNorm: Tackling Oversmoothing in GNNs. ICLR 2020.
Pdf: /pdf/67cdba094b8297511bba6b391e6fd405c4e91957.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Higher-Rank Irreducible Cartesian Tensors for Equivariant Message Passing | Accept (poster) | Summary: This work builds upon advances on equivariant and many-body architectures for the construction of neural network potentials. It lays out the formalism to substitute the conventionally-used spherical tensors in higher-rank models for Cartesian tensors. Taking as a reference the MACE architecture, the authors intend to show that this is a competitive approach to SOTA models in terms of accuracy and computational efficiency.
Strengths: The exploration of new methods to more efficiently learn machine learning force fields is an active area of research, and the use of higher-rank Cartesian approaches is quite novel, in contrast to the use of the spherical basis. The authors demonstrate that they can obtain results on benchmark datasets that compete with SOTA models (both spherical and higher-rank Cartesian). The exposition of mathematical concepts is quite clear for a reader familiar with the literature. In terms of accuracy, the model is very satisfactory.
Weaknesses: Although the achievement of competitive performance when compared to SOTA models is relevant enough, and the mathematical machinery is novel for the neural network potential field, I am not sure whether the authors have been able to demonstrate in some way why their method should be chosen in contrast to MACE, for example. The comparison of inference times seems to not favor the use of ICTP. However, I am aware of the fact that the formalism laid out in the paper allows the construction of other architectures, and that the design space of these models could be further investigated to find even more efficient models.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) It seems that the first model to explore the idea of higher-rank Cartesian tensors was TensorNet [30], even though it is not flexible enough to incorporate arbitrary ranks, and it does not explicitly account for many-body interactions. I miss a more thorough discussion of their differences, especially taking into account that [30] seems to display competitive performance to ICTP without making use of those more sophisticated approaches. I would encourage the authors to include some discussion in this regard. Is ICTP a combination of CACE and TensorNet?
2) I do not intend the authors to address the following question with more experiments, I acknowledge the limited time frame, but: experiments have been conducted on datasets consisting of single systems. Do the authors have any reference of how the model performs on datasets with varying chemical composition?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive assessment of the manuscript and have addressed each point they raised below. All numerical results for the experiments conducted for this review are presented in Tabs. 1, 2, and 3 of the attached PDF. We also include Fig. 1, illustrating inference time and memory consumption of ICTP and MACE as a function of $L$ and $\nu$.
**W1:** To demonstrate the advantage of a Cartesian approach we can consider computational complexities for equivariant convolutions ($\mathcal{O}(E N_\text{ch} 9^{L} L!/(2^{L/2}(L/2)!))$ and $\mathcal{O}(EN_\text{ch}L^5)$ for ICTP and MACE, respectively) and the product basis ($\mathcal{O}(M N_\text{ch} K (9^{L} L!/(2^{L/2}(L/2)!))^{\nu-1})$ and $\mathcal{O}(M N_\text{ch} L^{\frac{1}{2}\nu(\nu + 3)})$, respectively). For more details on the asymptotic computational complexity, see W2.1 and Q1 by Reviewer 69Cm and Q3 by Reviewer qF29. Furthermore, Tab. 1 and Fig. 1 assess the inference time and memory consumption when varying the rank of messages $L$ and of the tensors embedding atomic environments ($l_\text{max}$); we also vary the number of contracted tensors $\nu$. Our results in W2.1 by Reviewer 69Cm and Q3 by Reviewer qF29 demonstrate that MACE scales worse than ICTP when increasing $\nu$. In particular, for MACE with $L=3$, we could raise $\nu$ to a maximal value of 4 since, for larger values, we obtained an OOM error on an NVIDIA A100 GPU with 80GB. Spanning the $\nu$-space more efficiently may be important to improve the model's expressive power for tasks requiring correlations of higher body orders. These correlations become more important when, e.g., environments are degenerate with respect to lower body orders and higher accuracy is required [B, G].
Apart from the theoretical complexity and obtained inference times, as seen from an implementation perspective, symmetric tensors allow for more efficient implementations and algorithms for the general matrix-matrix multiplications (GEMM), which PyTorch has not yet provided. Finally, our approach can exploit the symmetry of tensors when computing forces and stresses, omitting the transpose calculation.
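For intuition, the two asymptotic cost terms quoted above can be evaluated directly. This is our own helper, not the authors' code; reading the half-integer factorial $(L/2)!$ for odd $L$ via the gamma function is our assumption.

```python
import math

def ictp_term(L):
    # Per-edge, per-channel cost term quoted for ICTP equivariant
    # convolutions: 9**L * L! / (2**(L/2) * (L/2)!),
    # with (L/2)! read as Gamma(L/2 + 1) for odd L.
    return 9 ** L * math.factorial(L) / (2 ** (L / 2) * math.gamma(L / 2 + 1))

def mace_term(L):
    # Corresponding per-edge, per-channel term quoted for MACE: L**5.
    return L ** 5

def product_basis_term(L, nu, model):
    # Product-basis cost terms, up to the M * N_ch * (K) prefactors.
    if model == "ictp":
        return ictp_term(L) ** (nu - 1)       # exponent linear in nu
    return L ** (0.5 * nu * (nu + 3))         # MACE: exponent quadratic in nu

# For fixed L, the MACE exponent nu*(nu+3)/2 grows quadratically in nu while
# the ICTP exponent (nu - 1) grows linearly, so MACE eventually scales worse
# as nu increases (the constants differ, so the crossover depends on L).
```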
**Q1:** Our approach includes TensorNet and CACE as special cases. A TensorNet-like architecture could be defined with $\nu=2$ and $L = l_\text{max} = 2$, though with equivariant convolution filters. We provide the corresponding results in Tab. 3. We found model configurations with $\nu=3$ and $L=2$ (and $L=1$) outperform the model configuration with $\nu=2$ and $L = l_\text{max} = 2$ by factors of $\leq$ 1.4 and $\leq$ 1.2 in energy and force RMSEs, respectively. Tab. 2 in the manuscript demonstrates a better accuracy for ICTP by $\leq$ 2.3 compared to CACE. We will add this discussion to the revised manuscript.
More specifically, TensorNet uses rank-2 reducible Cartesian tensors to embed atomic environments and decomposes them into irreducible ones before computing products with invariant radial filters, i.e., before computing messages. It includes explicit 3-body features in a message-passing layer since it computes a matrix-matrix product between node features and messages. CACE uses reducible higher-rank Cartesian tensors to embed local atomic environments and their full tensor contractions (see also MTP or GM-NN) to build invariant many-body filters. Our approach uses exclusively irreducible Cartesian tensors for embedding environments, equivariant convolutions, and product basis. Thus, we do not mix irreducible representations during our many-body message passing. We use irreducible tensor products for the equivariant convolution and go beyond invariant filters. Finally, we systematically construct equivariant many-body messages.
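As a concrete illustration of the objects discussed here, the simplest nontrivial irreducible Cartesian tensor of a bond direction is the rank-2 symmetric traceless tensor. The sketch below is our own minimal example, not the paper's implementation.

```python
import numpy as np

def irreducible_rank2(r):
    # Rank-2 irreducible Cartesian tensor of a bond vector: the symmetric,
    # traceless part of the outer product r (x) r -- the Cartesian
    # counterpart of the l = 2 spherical harmonics (5 independent components).
    r = np.asarray(r, dtype=float)
    r = r / np.linalg.norm(r)
    return np.outer(r, r) - np.eye(3) / 3.0

T = irreducible_rank2([1.0, 2.0, 2.0])
# Symmetric and traceless by construction, hence it transforms irreducibly
# under rotations and needs no further decomposition during message passing.
```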
**Q2:** We added results for ICTP and MACE for the large-scale Ta-V-Cr-W data set [C], which is diverse and includes 0 K energies, forces, and stresses for 2-, 3-, and 4-component systems and 2500 K properties in 4-component disordered alloys [C]. It contains 6711 configurations with sizes ranging from 2 to 432 atoms in the periodic cell. We were experimenting with this data set before the rebuttal and performed a hyperparameter search for both models to obtain suitable relative weights for energy, force, and virial losses. No configuration for MACE provides competitive accuracy for energies and forces simultaneously. Tab. 3 shows that MACE at most matches the accuracy of ICTP on forces but is typically outperformed by a factor of $\leq$ 2.0 on energies.
---
Rebuttal Comment 1.1:
Title: Rebuttal reply
Comment: I would like to thank the authors for their rebuttal. They have addressed my concerns satisfactorily, providing extensive clarifications, particularly on how TensorNet and CACE are related to the present work, a computational complexity comparison to MACE, and additional experiments. Furthermore, they provide good additional results, both in terms of inference times and in terms of accuracy on a more diverse dataset. Given this, and after reading the other reviewers' impressions and how they are addressed by the authors, I raise my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
We thank you for your prompt response and for raising your score. Your feedback and suggestions have significantly improved our work.
---
Rebuttal 2:
Title: Tab. 1: Inference times and memory consumption as a function of L and ν for the 3BPA data set.
Comment: All values are obtained by averaging over five independent runs. Best performances are highlighted in bold. Inference time and memory consumption are measured for a batch size of 10. Inference time is reported per structure in ms; memory consumption is provided for the entire batch in GB.
| | *L* = 1 | | *L* = 2 | | *L* = 3 | |
|:---|:---|:---|:---|:---|:---|:---|
| | ICTP | MACE | ICTP | MACE | ICTP | MACE |
| Inference times | | | | | | |
| *ν* = 1 | **0.76 ± 0.17** | 1.02 ± 0.03 | **0.87 ± 0.18** | 1.38 ± 0.04 | **0.98 ± 0.26** | 1.88 ± 0.03 |
| *ν* = 2 | **0.59 ± 0.20** | 1.12 ± 0.03 | **1.03 ± 0.21** | 1.52 ± 0.05 | **1.34 ± 0.08** | 2.0 ± 0.10 |
| *ν* = 3 | **0.79 ± 0.22** | 1.23 ± 0.03 | **1.15 ± 0.08** | 1.67 ± 0.03 | **1.85 ± 0.13** | 2.23 ± 0.03 |
| *ν* = 4 | **0.94 ± 0.17** | 1.41 ± 0.11 | **1.31 ± 0.21** | 1.83 ± 0.01 | **2.07 ± 0.20** | 2.53 ± 0.01 |
| *ν* = 5 | **1.02 ± 0.17** | 1.52 ± 0.08 | **1.72 ± 0.07** | 2.26 ± 0.03 | **3.61 ± 0.02** | OOM |
| *ν* = 6 | **1.00 ± 0.07** | 1.77 ± 0.05 | **1.83 ± 0.16** | 27.85 ± 0.01 | **16.76 ± 0.35** | OOM |
| Memory consumption | | | | | | |
| *ν* = 1 | 0.05 ± 0.00 | **0.04 ± 0.00** | 0.08 ± 0.00 | **0.06 ± 0.00** | 0.21 ± 0.00 | **0.13 ± 0.00** |
| *ν* = 2 | 0.05 ± 0.00 | **0.04 ± 0.00** | 0.08 ± 0.00 | **0.07 ± 0.00** | 0.28 ± 0.09 | **0.13 ± 0.00** |
| *ν* = 3 | 0.05 ± 0.00 | **0.04 ± 0.00** | 0.10 ± 0.00 | **0.08 ± 0.00** | 0.51 ± 0.03 | **0.23 ± 0.00** |
| *ν* = 4 | **0.05 ± 0.00** | **0.05 ± 0.00** | **0.18 ± 0.08** | 0.30 ± 0.00 | **1.07 ± 0.10** | 4.16 ± 0.00 |
| *ν* = 5 | **0.05 ± 0.00** | 0.07 ± 0.00 | **0.35 ± 0.07** | 3.18 ± 0.00 | **5.07 ± 0.02** | OOM |
| *ν* = 6 | **0.11 ± 0.09** | 0.22 ± 0.00 | **0.93 ± 0.00** | 50.49 ± 0.00 | **28.48 ± 0.03** | OOM |
---
Rebuttal 3:
Title: Tab. 2: Energy (E, meV) and force (F, meV/Å) RMSEs for the 3BPA data set and ν = 1.
Comment: All values are obtained by averaging over five independent runs. Best performances are highlighted in bold. Inference time and memory consumption are measured for a batch size of 100. Inference time is reported per structure in ms; memory consumption is provided for the entire batch in GB.
| | | ICTP (*L* = 2) | MACE (*L* = 2) |
|:-------------------------------|:---:|-------------------:|-----------------:|
| 300 K | E | **12.90 ± 1.06** | **13.50 ± 1.71** |
| | F | **29.90 ± 0.25** | **30.18 ± 0.38** |
| 600 K | E | **29.97 ± 0.94** | **31.32 ± 2.16** |
| | F | **62.80 ± 0.45** | **63.04 ± 0.73** |
| 1200 K | E | **81.03 ± 1.64** | **81.54 ± 2.02** |
| | F | **146.96 ± 1.30** | 149.44 ± 1.94 |
| Dihedral slices | E | **22.84 ± 2.96** | 28.08 ± 4.04 |
| | F | **48.82 ± 5.25** | **49.62 ± 2.92** |
| Inference time | | **2.62 ± 0.02** | 2.96 ± 0.06 |
| Memory consumption | | 32.57 ± 0.00 | **23.32 ± 0.00** |
---
Rebuttal 4:
Title: Tab. 3: Energy (E, meV) and force (F, eV/Å) RMSEs for Ta-V-Cr-W subsystems.
Comment: Results are obtained by averaging over 10 independent runs. Best performances are highlighted in bold. Inference time and memory consumption are measured for a batch size of 50. Inference time is reported per atom in μs; memory consumption is provided for the entire batch in GB.
| Subsystem | | ICTP (*L* = 2) | ICTP (*L* = 1) | ICTP (*L* = 0) | MACE (*L* = 2) | MACE (*L* = 1) | MACE (*L* = 0) | ICTP (*L* = 2, *ν* = 2) | MTP | GM-NN | EAM |
|:---|:--:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| TaV | E | **1.02 ± 0.27** | **1.21 ± 0.54** | 1.65 ± 1.06 | 1.72 ± 0.67 | 1.76 ± 0.53 | 2.24 ± 1.34 | **1.24 ± 0.50** | 1.94 | 1.54 | 32.0 |
| | F | **0.020 ± 0.002** | 0.022 ± 0.002 | 0.024 ± 0.002 | **0.019 ± 0.002** | **0.020 ± 0.003** | 0.022 ± 0.002 | 0.023 ± 0.002 | 0.050 | 0.029 | 0.404 |
| TaCr | E | **1.81 ± 0.29** | **1.94 ± 0.23** | 2.13 ± 0.19 | 3.26 ± 0.42 | 3.31 ± 0.44 | 4.18 ± 0.56 | 2.4 ± 0.33 | 3.26 | 2.98 | 43.6 |
| | F | **0.025 ± 0.007** | **0.024 ± 0.006** | 0.027 ± 0.005 | 0.029 ± 0.01 | **0.026 ± 0.007** | 0.028 ± 0.007 | **0.026 ± 0.006** | 0.057 | 0.038 | 0.343 |
| TaW | E | **1.75 ± 0.11** | 1.87 ± 0.14 | 2.45 ± 0.31 | 2.73 ± 0.53 | 3.21 ± 0.55 | 3.57 ± 0.48 | 2.19 ± 0.54 | 2.72 | 2.99 | 44.8 |
| | F | **0.017 ± 0.002** | **0.018 ± 0.002** | 0.020 ± 0.002 | **0.017 ± 0.002** | **0.018 ± 0.002** | 0.019 ± 0.002 | **0.018 ± 0.002** | 0.038 | 0.025 | 0.248 |
| VCr | E | **1.74 ± 1.2** | 2.52 ± 2.43 | **2.13 ± 1.24** | **2.19 ± 0.78** | 2.82 ± 1.28 | 3.11 ± 1.42 | **1.89 ± 1.27** | **2.29** | 2.82 | 44.8 |
| | F | **0.016 ± 0.002** | 0.018 ± 0.001 | 0.019 ± 0.001 | **0.016 ± 0.001** | **0.017 ± 0.001** | 0.018 ± 0.002 | 0.019 ± 0.001 | 0.036 | 0.025 | 0.270 |
| VW | E | **1.32 ± 0.2** | **1.46 ± 0.16** | 1.69 ± 0.21 | 1.9 ± 0.19 | 1.94 ± 0.23 | 2.42 ± 0.24 | 1.61 ± 0.16 | 2.50 | 2.00 | 21.3 |
| | F | **0.014 ± 0.002** | **0.015 ± 0.002** | 0.018 ± 0.003 | **0.014 ± 0.002** | **0.015 ± 0.002** | 0.017 ± 0.002 | 0.016 ± 0.002 | 0.037 | 0.023 | 0.292 |
| CrW | E | **2.18 ± 0.93** | **2.45 ± 1.53** | 2.76 ± 1.15 | **2.31 ± 1.18** | 2.84 ± 0.98 | 4.14 ± 1.38 | 3.12 ± 1.90 | 4.35 | 2.87 | 23.4 |
| | F | **0.018 ± 0.004** | **0.020 ± 0.005** | 0.024 ± 0.008 | **0.020 ± 0.009** | **0.019 ± 0.006** | 0.023 ± 0.007 | 0.022 ± 0.006 | 0.041 | 0.029 | 0.248 |
| TaVCr | E | **0.79 ± 0.08** | 0.92 ± 0.17 | 1.00 ± 0.24 | 2.26 ± 0.54 | 2.71 ± 0.66 | 3.92 ± 0.77 | 0.97 ± 0.13 | 2.43 | 1.97 | 34.1 |
| | F | 0.027 ± 0.001 | 0.029 ± 0.002 | 0.033 ± 0.002 | **0.023 ± 0.002** | **0.024 ± 0.001** | 0.028 ± 0.001 | 0.031 ± 0.002 | 0.054 | 0.045 | 0.313 |
| TaVW | E | **1.00 ± 0.2** | **0.98 ± 0.18** | 1.26 ± 0.23 | 1.8 ± 0.35 | 1.97 ± 0.44 | 2.29 ± 0.86 | **0.95 ± 0.25** | 1.67 | 1.70 | 39.6 |
| | F | **0.021 ± 0.001** | 0.022 ± 0.001 | 0.025 ± 0.001 | **0.021 ± 0.002** | 0.023 ± 0.001 | 0.026 ± 0.001 | 0.023 ± 0.001 | 0.043 | 0.034 | 0.321 |
| TaCrW | E | **1.16 ± 0.15** | **1.28 ± 0.13** | 1.58 ± 0.29 | 1.67 ± 0.38 | 1.48 ± 0.50 | 2.08 ± 0.57 | **1.24 ± 0.11** | 2.08 | 2.19 | 23.6 |
| | F | **0.022 ± 0.001** | 0.024 ± 0.001 | 0.027 ± 0.001 | 0.028 ± 0.002 | 0.030 ± 0.002 | 0.033 ± 0.002 | 0.026 ± 0.001 | 0.051 | 0.039 | 0.327 |
| VCrW | E | **1.00 ± 0.16** | **1.07 ± 0.14** | 1.37 ± 0.13 | 1.97 ± 0.5 | 2.21 ± 0.42 | 2.86 ± 0.64 | **1.10 ± 0.14** | 1.37 | 1.94 | 19.4 |
| | F | **0.018 ± 0.001** | 0.019 ± 0.001 | 0.022 ± 0.001 | **0.017 ± 0.001** | 0.019 ± 0.001 | 0.021 ± 0.001 | 0.020 ± 0.001 | 0.040 | 0.031 | 0.314 |
| TaVCrW (0 K) | E | **1.22 ± 0.07** | 1.30 ± 0.1 | 1.48 ± 0.16 | 2.26 ± 0.55 | 2.48 ± 0.46 | 3.60 ± 0.54 | **1.33 ± 0.17** | 2.09 | 2.16 | 50.8 |
| | F | **0.021 ± 0.002** | **0.022 ± 0.002** | 0.025 ± 0.002 | **0.022 ± 0.001** | 0.023 ± 0.002 | 0.027 ± 0.001 | 0.024 ± 0.002 | 0.049 | 0.037 | 0.488 |
| TaVCrW (2500 K) | E | **1.63 ± 0.07** | 1.74 ± 0.11 | 2.09 ± 0.09 | 2.22 ± 0.48 | 2.34 ± 0.59 | 3.68 ± 0.70 | 2.06 ± 0.09 | 2.40 | 2.67 | 59.4 |
| | F | **0.116 ± 0.002** | 0.121 ± 0.002 | 0.141 ± 0.003 | **0.119 ± 0.007** | 0.126 ± 0.006 | 0.150 ± 0.003 | 0.140 ± 0.002 | 0.156 | 0.179 | 0.521 |
| Overall | E | **1.38 ± 0.09** | 1.56 ± 0.21 | 1.80 ± 0.18 | 2.19 ± 0.31 | 2.42 ± 0.31 | 3.17 ± 0.28 | 1.67 ± 0.21 | 2.43 | 2.32 | 37.14 |
| | F | **0.028 ± 0.001** | **0.029 ± 0.001** | 0.034 ± 0.001 | **0.029 ± 0.001** | 0.030 ± 0.001 | 0.034 ± 0.001 | 0.032 ± 0.001 | 0.054 | 0.043 | 0.443 |
| Inference time | | 51.78 ± 1.18 | 25.09 ± 0.02 | 14.59 ± 0.01 | 29.48 ± 0.23 | 15.37 ± 0.04 | 4.43 ± 0.00 | 14.97 ± 0.09 | 17.57 | 7.25 | 0.50 |
| Memory consumption | | 36.78 ± 0.00 | 16.93 ± 0.00 | 8.48 ± 0.00 | 28.82 ± 0.00 | 13.87 ± 0.00 | 5.91 ± 0.00 | 13.15 ± 0.00 | – | – | – | | Summary: This paper introduces the use of higher-rank irreducible Cartesian tensors as an alternative to spherical tensors for equivariant message passing in machine learning interatomic potentials. The authors illustrate clearly how to construct these tensors and their products, prove equivariance properties, and evaluate the approach empirically on several molecular datasets.
Strengths: * The mathematical foundations are clearly illustrated, with detailed explanations of how to construct irreducible Cartesian tensors and compute their products.
* The experiments on out-of-domain extrapolation, particularly on the 3BPA and acetylacetone datasets, provide valuable insights into the generalization capabilities of the proposed method.
* The paper demonstrates that irreducible Cartesian tensor-based models can achieve comparable or sometimes better performance than state-of-the-art spherical tensor models.
Weaknesses: * The empirical evaluation is limited to relatively simple molecular datasets. The paper would be strengthened by including experiments on more challenging datasets such as MD22 or heterogeneous datasets like QM9.
* The efficiency gain and the performance gain are not that appealing to my eye. It seems little more than “instead of using that math, you can use this math!” without a very strong theoretical justification for why to do so. The authors could have done a better job of explaining the fundamental difference/advantage of the proposed Cartesian tensors compared with the spherical tensors.
Technical Quality: 2
Clarity: 3
Questions for Authors: * What are the core differences between this method and TensorNet? A clearer comparison would help position this work in the context of existing literature.
* Is the proposed model compatible with Hamiltonian prediction? This could be an interesting avenue for future work.
* Can the authors provide plots showing how their model scales with increasing L?
* The paper mentions "transferability" in line 225. Could the authors clarify what they mean by this term in this context?
* The claim that Cartesian tensors are advantageous to spherical tensors requires further explanation. From a representation power perspective, aren't they equivalent? Is it possible that the observed performance gains are due to hyperparameter tuning rather than fundamental differences in representation power?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive assessment of the manuscript. We have addressed each point they raised below. All numerical results for the performed experiments are presented in Tabs. 1–3 and Fig. 1 of the attached PDF.
**W1:** We added a large-scale data set that aims to assess the model's performance on a varying number of atom types/components and relaxed (0 K) as well as high-temperature structures. The Ta-V-Cr-W data set is diverse and includes 0 K energies, forces, and stresses for 2-, 3-, and 4-component systems and 2500 K properties in 4-component disordered alloys [C]. It contains 6711 configurations with sizes ranging from 2 to 432 atoms in the periodic cell. We were experimenting with this data set before the rebuttal and performed a hyperparameter search for both models to obtain suitable relative weights for energy, force, and virial losses. No configuration for MACE provides competitive accuracy for energies and forces simultaneously. Tab. 3 shows that MACE at most matches the accuracy of ICTP on forces but is typically outperformed by a factor of $\leq$ 2.0 on energies.
We decided not to use MD22 and QM9 since they do not include variations in atom types or MD trajectories, respectively.
**W2:** We agree that we could better motivate our approach regarding the computational advantages compared to spherical tensors. Considering results from Q3 and computational complexities (Q1 by the Reviewer 69Cm) for equivariant convolutions ($\mathcal{O}(E N_\text{ch} 9^{L} L!/(2^{L/2}(L/2)!))$ and $\mathcal{O}(EN_\text{ch}L^5)$ for ICTP and MACE, respectively) and the product basis ($\mathcal{O}(M N_\text{ch} K (9^{L} L!/(2^{L/2}(L/2)!))^{\nu-1})$ and $\mathcal{O}(M N_\text{ch} L^{\frac{1}{2}\nu(\nu + 3)})$, respectively), employing irreducible Cartesian tensors offers more than simply replacing mathematical expressions. Indeed, ICTP is more efficient in covering larger $\nu$ (and larger $L$ if $\nu$ is large) than MACE; see also Q3.
We expect our approach to inspire the development of computationally efficient models and frameworks using strategies other than those employed for spherical ones. We also demonstrate improved efficiency by leveraging the symmetries of tensor products and coupled product features; see Tab. 2 in the manuscript. Apart from the above, symmetric tensors allow for more efficient implementations and algorithms for the general matrix-matrix multiplications (GEMM), which PyTorch has not yet provided. Finally, our approach can exploit the symmetry of tensors when computing forces and stresses, omitting the transpose calculation.
**Q1:** Our approach includes TensorNet as a special case with $\nu=2$ and $L = l_\text{max} = 2$, though with equivariant convolution filters; for results, see Tab. 3. Please see also W1 and Q2 by the Reviewer 69Cm. TensorNet uses reducible Cartesian tensors to embed atomic environments and decomposes them into irreducible ones before computing products with invariant radial filters. It includes explicit three-body features since it computes a matrix-matrix product between node features and messages. Our approach uses exclusively irreducible Cartesian tensors for embedding atomic environments, equivariant convolutions, and product basis. Thus, we do not mix irreducible representations during message passing. We use irreducible tensor products for the equivariant convolution and go beyond invariant filters. Finally, we systematically construct equivariant many-body messages.
**Q2:** We see no hurdle to applying our approach to $N$-center properties such as single-particle Hamiltonian matrices; see [D]. Our approach also includes all operations required for [E] and [F].
**Q3:** We agree that an ablation study on $L$ and $\nu$ would improve our work; see also W2.1 and Q1 by the Reviewer 69Cm. Tab. 1 and Fig. 1 show the inference time and memory consumption for varying ranks $L$ (messages) and $l_\text{max}$ (tensors embedding environments); we also vary the number of contracted tensors $\nu$. Indeed, ICTP outperforms MACE for most parameter values. Particularly, ICTP allows spanning the $\nu$-space more efficiently and, thus, improves models' expressive power for tasks requiring correlations of higher body orders. These correlations become more important when, e.g., environments are degenerate with respect to lower body orders, and higher accuracy is required [B, G].
ICTP is also more computationally efficient if $\nu = 3$ and $L \leq 4$. Note that $L \leq 4$ is sufficient for most applications in physics; see, e.g., [A]. For neighborhood orientations with $L$-fold symmetries, however, at least rank-$L$ tensors may be required [B]. These symmetries are typically lifted in atomistic simulations. Fig. 1 shows that for $L > 4$, Cartesian models will also be advantageous if $\nu > 4$. These results agree with our complexity analysis in Q1 by the Reviewer 69Cm.
**Q4:** When we mention an interatomic potential's transferability, we refer to its ability to accurately predict energies, forces, and stresses for crystal structures, temperatures, and stoichiometries on which it was not trained; see [77] in the manuscript.
**Q5:** We agree that spherical and irreducible Cartesian tensors (also reducible ones) should have comparable expressive power for a fixed $L$ since both tensors are related through a linear transformation; see [40, 49, 59-61] in the manuscript. However, our results in Q3 demonstrate that MACE scales worse than ICTP when increasing $\nu$, i.e., when increasing the expressive power of the model. Particularly, for MACE with $L=3$, we could raise $\nu$ to a maximal value of 4 since, for larger values, we obtained an OOM error on an NVIDIA A100 GPU with 80GB. We also conducted a careful hyperparameter tuning for MACE and ICTP to ensure a fair comparison, expecting similar energy and force errors due to the comparable expressive power of spherical and Cartesian tensors.
---
Rebuttal 2:
Title: Tab. 1: Inference times and memory consumption as a function of L and ν for the 3BPA data set.
Comment: All values are obtained by averaging over five independent runs. Best performances are highlighted in bold. Inference time and memory consumption are measured for a batch size of 10. Inference time is reported per structure in ms; memory consumption is provided for the entire batch in GB.
| | *L* = 1 | | *L* = 2 | | *L* = 3 | |
|:---|:---|:---|:---|:---|:---|:---|
| | ICTP | MACE | ICTP | MACE | ICTP | MACE |
| Inference times | | | | | | |
| *ν* = 1 | **0.76 ± 0.17** | 1.02 ± 0.03 | **0.87 ± 0.18** | 1.38 ± 0.04 | **0.98 ± 0.26** | 1.88 ± 0.03 |
| *ν* = 2 | **0.59 ± 0.20** | 1.12 ± 0.03 | **1.03 ± 0.21** | 1.52 ± 0.05 | **1.34 ± 0.08** | 2.0 ± 0.10 |
| *ν* = 3 | **0.79 ± 0.22** | 1.23 ± 0.03 | **1.15 ± 0.08** | 1.67 ± 0.03 | **1.85 ± 0.13** | 2.23 ± 0.03 |
| *ν* = 4 | **0.94 ± 0.17** | 1.41 ± 0.11 | **1.31 ± 0.21** | 1.83 ± 0.01 | **2.07 ± 0.20** | 2.53 ± 0.01 |
| *ν* = 5 | **1.02 ± 0.17** | 1.52 ± 0.08 | **1.72 ± 0.07** | 2.26 ± 0.03 | **3.61 ± 0.02** | OOM |
| *ν* = 6 | **1.00 ± 0.07** | 1.77 ± 0.05 | **1.83 ± 0.16** | 27.85 ± 0.01 | **16.76 ± 0.35** | OOM |
| Memory consumption | | | | | | |
| *ν* = 1 | 0.05 ± 0.00 | **0.04 ± 0.00** | 0.08 ± 0.00 | **0.06 ± 0.00** | 0.21 ± 0.00 | **0.13 ± 0.00** |
| *ν* = 2 | 0.05 ± 0.00 | **0.04 ± 0.00** | 0.08 ± 0.00 | **0.07 ± 0.00** | 0.28 ± 0.09 | **0.13 ± 0.00** |
| *ν* = 3 | 0.05 ± 0.00 | **0.04 ± 0.00** | 0.10 ± 0.00 | **0.08 ± 0.00** | 0.51 ± 0.03 | **0.23 ± 0.00** |
| *ν* = 4 | **0.05 ± 0.00** | **0.05 ± 0.00** | **0.18 ± 0.08** | 0.30 ± 0.00 | **1.07 ± 0.10** | 4.16 ± 0.00 |
| *ν* = 5 | **0.05 ± 0.00** | 0.07 ± 0.00 | **0.35 ± 0.07** | 3.18 ± 0.00 | **5.07 ± 0.02** | OOM |
| *ν* = 6 | **0.11 ± 0.09** | 0.22 ± 0.00 | **0.93 ± 0.00** | 50.49 ± 0.00 | **28.48 ± 0.03** | OOM |
---
Rebuttal 3:
Title: Tab. 2: Energy (E, meV) and force (F, meV/Å) RMSEs for the 3BPA data set and ν = 1.
Comment: All values are obtained by averaging over five independent runs. Best performances are highlighted in bold. Inference time and memory consumption are measured for a batch size of 100. Inference time is reported per structure in ms; memory consumption is provided for the entire batch in GB.
| | | ICTP (*L* = 2) | MACE (*L* = 2) |
|:-------------------------------|:---:|-------------------:|-----------------:|
| 300 K | E | **12.90 ± 1.06** | **13.50 ± 1.71** |
| | F | **29.90 ± 0.25** | **30.18 ± 0.38** |
| 600 K | E | **29.97 ± 0.94** | **31.32 ± 2.16** |
| | F | **62.80 ± 0.45** | **63.04 ± 0.73** |
| 1200 K | E | **81.03 ± 1.64** | **81.54 ± 2.02** |
| | F | **146.96 ± 1.30** | 149.44 ± 1.94 |
| Dihedral slices | E | **22.84 ± 2.96** | 28.08 ± 4.04 |
| | F | **48.82 ± 5.25** | **49.62 ± 2.92** |
| Inference time | | **2.62 ± 0.02** | 2.96 ± 0.06 |
| Memory consumption | | 32.57 ± 0.00 | **23.32 ± 0.00** |
---
Rebuttal 4:
Title: Tab. 3: Energy (E, meV) and force (F, eV/Å) RMSEs for Ta-V-Cr-W subsystems.
Comment: Results are obtained by averaging over 10 independent runs. Best performances are highlighted in bold. Inference time and memory consumption are measured for a batch size of 50. Inference time is reported per atom in μs; memory consumption is provided for the entire batch in GB.
| Subsystem | | ICTP (*L* = 2) | ICTP (*L* = 1) | ICTP (*L* = 0) | MACE (*L* = 2) | MACE (*L* = 1) | MACE (*L* = 0) | ICTP (*L* = 2, *ν* = 2) | MTP | GM-NN | EAM |
|:---|:--:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| TaV | E | **1.02 ± 0.27** | **1.21 ± 0.54** | 1.65 ± 1.06 | 1.72 ± 0.67 | 1.76 ± 0.53 | 2.24 ± 1.34 | **1.24 ± 0.50** | 1.94 | 1.54 | 32.0 |
| | F | **0.020 ± 0.002** | 0.022 ± 0.002 | 0.024 ± 0.002 | **0.019 ± 0.002** | **0.020 ± 0.003** | 0.022 ± 0.002 | 0.023 ± 0.002 | 0.050 | 0.029 | 0.404 |
| TaCr | E | **1.81 ± 0.29** | **1.94 ± 0.23** | 2.13 ± 0.19 | 3.26 ± 0.42 | 3.31 ± 0.44 | 4.18 ± 0.56 | 2.4 ± 0.33 | 3.26 | 2.98 | 43.6 |
| | F | **0.025 ± 0.007** | **0.024 ± 0.006** | 0.027 ± 0.005 | 0.029 ± 0.01 | **0.026 ± 0.007** | 0.028 ± 0.007 | **0.026 ± 0.006** | 0.057 | 0.038 | 0.343 |
| TaW | E | **1.75 ± 0.11** | 1.87 ± 0.14 | 2.45 ± 0.31 | 2.73 ± 0.53 | 3.21 ± 0.55 | 3.57 ± 0.48 | 2.19 ± 0.54 | 2.72 | 2.99 | 44.8 |
| | F | **0.017 ± 0.002** | **0.018 ± 0.002** | 0.020 ± 0.002 | **0.017 ± 0.002** | **0.018 ± 0.002** | 0.019 ± 0.002 | **0.018 ± 0.002** | 0.038 | 0.025 | 0.248 |
| VCr | E | **1.74 ± 1.2** | 2.52 ± 2.43 | **2.13 ± 1.24** | **2.19 ± 0.78** | 2.82 ± 1.28 | 3.11 ± 1.42 | **1.89 ± 1.27** | **2.29** | 2.82 | 44.8 |
| | F | **0.016 ± 0.002** | 0.018 ± 0.001 | 0.019 ± 0.001 | **0.016 ± 0.001** | **0.017 ± 0.001** | 0.018 ± 0.002 | 0.019 ± 0.001 | 0.036 | 0.025 | 0.270 |
| VW | E | **1.32 ± 0.2** | **1.46 ± 0.16** | 1.69 ± 0.21 | 1.9 ± 0.19 | 1.94 ± 0.23 | 2.42 ± 0.24 | 1.61 ± 0.16 | 2.50 | 2.00 | 21.3 |
| | F | **0.014 ± 0.002** | **0.015 ± 0.002** | 0.018 ± 0.003 | **0.014 ± 0.002** | **0.015 ± 0.002** | 0.017 ± 0.002 | 0.016 ± 0.002 | 0.037 | 0.023 | 0.292 |
| CrW | E | **2.18 ± 0.93** | **2.45 ± 1.53** | 2.76 ± 1.15 | **2.31 ± 1.18** | 2.84 ± 0.98 | 4.14 ± 1.38 | 3.12 ± 1.90 | 4.35 | 2.87 | 23.4 |
| | F | **0.018 ± 0.004** | **0.020 ± 0.005** | 0.024 ± 0.008 | **0.020 ± 0.009** | **0.019 ± 0.006** | 0.023 ± 0.007 | 0.022 ± 0.006 | 0.041 | 0.029 | 0.248 |
| TaVCr | E | **0.79 ± 0.08** | 0.92 ± 0.17 | 1.00 ± 0.24 | 2.26 ± 0.54 | 2.71 ± 0.66 | 3.92 ± 0.77 | 0.97 ± 0.13 | 2.43 | 1.97 | 34.1 |
| | F | 0.027 ± 0.001 | 0.029 ± 0.002 | 0.033 ± 0.002 | **0.023 ± 0.002** | **0.024 ± 0.001** | 0.028 ± 0.001 | 0.031 ± 0.002 | 0.054 | 0.045 | 0.313 |
| TaVW | E | **1.00 ± 0.2** | **0.98 ± 0.18** | 1.26 ± 0.23 | 1.8 ± 0.35 | 1.97 ± 0.44 | 2.29 ± 0.86 | **0.95 ± 0.25** | 1.67 | 1.70 | 39.6 |
| | F | **0.021 ± 0.001** | 0.022 ± 0.001 | 0.025 ± 0.001 | **0.021 ± 0.002** | 0.023 ± 0.001 | 0.026 ± 0.001 | 0.023 ± 0.001 | 0.043 | 0.034 | 0.321 |
| TaCrW | E | **1.16 ± 0.15** | **1.28 ± 0.13** | 1.58 ± 0.29 | 1.67 ± 0.38 | 1.48 ± 0.50 | 2.08 ± 0.57 | **1.24 ± 0.11** | 2.08 | 2.19 | 23.6 |
| | F | **0.022 ± 0.001** | 0.024 ± 0.001 | 0.027 ± 0.001 | 0.028 ± 0.002 | 0.030 ± 0.002 | 0.033 ± 0.002 | 0.026 ± 0.001 | 0.051 | 0.039 | 0.327 |
| VCrW | E | **1.00 ± 0.16** | **1.07 ± 0.14** | 1.37 ± 0.13 | 1.97 ± 0.5 | 2.21 ± 0.42 | 2.86 ± 0.64 | **1.10 ± 0.14** | 1.37 | 1.94 | 19.4 |
| | F | **0.018 ± 0.001** | 0.019 ± 0.001 | 0.022 ± 0.001 | **0.017 ± 0.001** | 0.019 ± 0.001 | 0.021 ± 0.001 | 0.020 ± 0.001 | 0.040 | 0.031 | 0.314 |
| TaVCrW (0 K) | E | **1.22 ± 0.07** | 1.30 ± 0.1 | 1.48 ± 0.16 | 2.26 ± 0.55 | 2.48 ± 0.46 | 3.60 ± 0.54 | **1.33 ± 0.17** | 2.09 | 2.16 | 50.8 |
| | F | **0.021 ± 0.002** | **0.022 ± 0.002** | 0.025 ± 0.002 | **0.022 ± 0.001** | 0.023 ± 0.002 | 0.027 ± 0.001 | 0.024 ± 0.002 | 0.049 | 0.037 | 0.488 |
| TaVCrW (2500 K) | E | **1.63 ± 0.07** | 1.74 ± 0.11 | 2.09 ± 0.09 | 2.22 ± 0.48 | 2.34 ± 0.59 | 3.68 ± 0.70 | 2.06 ± 0.09 | 2.40 | 2.67 | 59.4 |
| | F | **0.116 ± 0.002** | 0.121 ± 0.002 | 0.141 ± 0.003 | **0.119 ± 0.007** | 0.126 ± 0.006 | 0.150 ± 0.003 | 0.140 ± 0.002 | 0.156 | 0.179 | 0.521 |
| Overall | E | **1.38 ± 0.09** | 1.56 ± 0.21 | 1.80 ± 0.18 | 2.19 ± 0.31 | 2.42 ± 0.31 | 3.17 ± 0.28 | 1.67 ± 0.21 | 2.43 | 2.32 | 37.14 |
| | F | **0.028 ± 0.001** | **0.029 ± 0.001** | 0.034 ± 0.001 | **0.029 ± 0.001** | 0.030 ± 0.001 | 0.034 ± 0.001 | 0.032 ± 0.001 | 0.054 | 0.043 | 0.443 |
| Inference time | | 51.78 ± 1.18 | 25.09 ± 0.02 | 14.59 ± 0.01 | 29.48 ± 0.23 | 15.37 ± 0.04 | 4.43 ± 0.00 | 14.97 ± 0.09 | 17.57 | 7.25 | 0.50 |
| Memory consumption | | 36.78 ± 0.00 | 16.93 ± 0.00 | 8.48 ± 0.00 | 28.82 ± 0.00 | 13.87 ± 0.00 | 5.91 ± 0.00 | 13.15 ± 0.00 | – | – | – |
---
Rebuttal Comment 4.1:
Title: Questions on Ta-V-Cr-W system
Comment: Thank you for your response.
I am curious about how you trained on the Ta-V-Cr-W systems. Did you train them jointly or separately? If separately, can a model trained on low-temperature data extrapolate to high temperatures? How long does it take to train the system? Also, how do you handle the heterogeneity of the system when predicting energies and forces? Can I find a reference for the dataset?
---
Rebuttal 5:
Comment: Dear Reviewer,
We thank you for your prompt response. We noticed that other reviewers cannot read your comment. Therefore, we will add it to allow them to follow our discussion:
> Thank you for your response.
>
> I am curious on how you trained the Ta-V-Cr-W systems. Did you train them jointly or separately? If separately, can model train on low temperature extrapolates to high temperature? How long does it take to train the system? Also, how do you handle the heterogeneity in the system for predicting the energies and forces? Can I find a reference for the dataset?
We have addressed each of your questions below:
* We train ICTP and MACE using all Ta-V-Cr-W subsystems simultaneously. Particularly, as already stated in the general response, all models are trained using 5373 configurations (4873 for training and 500 for early stopping), while the remaining 1338 configurations are reserved for testing the models' performance. The performance is tested separately using 0 K binaries, ternaries, quaternaries, and near-melting temperature four-component disordered alloys.
* Models trained exclusively on 0 K subsystems are not expected to generalize to near-melting temperature four-component disordered alloys. The 0 K subsystems span: (i) different atomic combinations for relaxed binary, ternary, and quaternary systems; (ii) different low-temperature ordering in the Ta-V-Cr-W family (B2 ordering, B32 ordering, random binary solid solution, BCC interface); (iii) all possible phase separations on the TaVCrW lattice (B2/B2 ordering, B2/B32 ordering, B32/B32 ordering, B2/random binary ordering, B32/random binary ordering, random binary/random binary ordering). None overlaps sufficiently in local environments with high-temperature (2500 K) disordered structures. For more details on the data set, we refer to the "Description of the data set" section of the original publication [C].
* Training a single model requires up to 12 hours on an NVIDIA A100 GPU with 80GB.
* We did not implement any specific step for handling the heterogeneity in the Ta-V-Cr-W data set. We only increased the mini-batch size to 32 for both models to account for energy statistics and reduced the relative weight of the force loss.
* For the dataset, we have referenced [C] in our previous response (K. Gubaev, V. Zaverkin, P. Srinivasan *et al.*: Performance of two complementary machine-learned potentials in modelling chemically complex systems. *npj Comput. Mater.* **9**, 129 (2023)). The data set can be accessed via the link: [https://doi.org/10.18419/darus-3516](https://doi.org/10.18419/darus-3516).
We hope we have properly addressed your questions and await your response.
---
Rebuttal 6:
Title: Thanks for the response
Comment: Sorry for the oversight. I was not paying much attention to the general response. That addresses most of my concerns. But I am still worried that the performance/efficiency improvements are not significant, and that stronger baselines (such as Equiformer V2) or more standardized datasets (such as MD22/OCP, as mentioned by Reviewer 69Cm) are needed.
---
Rebuttal 7:
Comment: Dear reviewer,
We again noticed that other reviewers cannot read your comment. Therefore, we will add it to allow them to follow our discussion:
> Sorry for the oversight. I was not paying so much attention to the general response. That addresses most of my concern. But I am still worried that the performance/efficiency improvements is not significant and stronger baselines (such as Equiformer V2) or more standardized datasets (such as MD22/OCP as mentioned by Reviewer 69Cm) are needed.
We want to point out that the official review does not mention comparing to an additional baseline such as EquiformerV2. Besides that, the baselines used in our work (MACE, Allegro, NequIP, TensorNet, and CACE) are current state-of-the-art models. Therefore, we do not see how adding experimental results for EquiformerV2 would further contribute to demonstrating the performance and efficiency advantages of our approach.
We evaluated ICTP using rMD17, 3BPA, and Acetylacetone, which are commonly used to evaluate other state-of-the-art models. Also, as requested by the reviewer, we included another, more challenging data set (Ta-V-Cr-W) and motivated our choice. Including yet another benchmark data set, such as MD22 or OC20/22, would not improve the value of our work. As we already explained, the suggested MD22 data set does not include variations in atom types; thus, it would not provide additional insights beyond those acquired with rMD17, as the models' performance would again be tested on the vibrational degrees of freedom of a single molecule. Furthermore, and in contrast to the Ta-V-Cr-W data set, OC20/22 does not allow a systematic evaluation of the models' performance across different crystal structures, temperatures, and stoichiometries.
We would appreciate further clarification of the reviewer's concerns regarding the models and data sets used in the manuscript and in our response to the official review. We are eager to better understand the specific reasons why our current evaluation remains unconvincing.
---
Rebuttal Comment 7.1:
Title: Thanks for the response
Comment: Thank you for your prompt response. I will raise my score. However, I hope that more challenging datasets will be included in the revised version. MD22 can measure long-range effects, and OCP has numerous other baseline performances reported in the literature. To provide a more comprehensive evaluation, it would be convincing to add Equiformer V2 as a baseline to the Ta-V-Cr-W system. This should be relatively straightforward, and I am interested in understanding the relative performance of this model.
---
Reply to Comment 7.1.1:
Comment: Dear reviewer,
We thank you for your feedback on our work and for raising your score. | Summary: In this work, the authors propose the higher-rank irreducible Cartesian tensor product and explore its usage in equivariant neural network design for scientific applications such as molecular modeling. The authors first prove that the irreducible Cartesian tensor product is equivariant to the O(3) group, and further show that higher-rank (e.g., > 2) operations can be used efficiently in models with many-body interactions. Experiments are conducted to demonstrate the effectiveness of the proposed approach.
Strengths: 1. The problem this work aims to tackle is of great significance in real-world scientific applications.
2. The proposed approach is interesting and can potentially improve a new class of equivariant neural networks for crucial tasks.
3. The paper is easy to follow.
Weaknesses: 1. **The motivation of this work needs to be better explained and presented**. As stated in the Introduction and Related Works, the major disadvantage of spherical tensors is that they are computationally demanding, which motivates the development of Cartesian-tensor-product-based approaches. However, the authors do not adequately discuss the disadvantages of existing Cartesian-tensor-product-based approaches (e.g., inefficiency when scaling up tensor ranks), which is necessary to solidly support the motivation of this work. Besides, the lack of a comprehensive discussion of and comparison with existing Cartesian-tensor-product-based approaches makes the actual value of this work doubtful for readers who are not familiar with the context.
2. **Experimental results are weak in demonstrating the superiority of the proposed approach**:
- Lack of detailed efficiency comparisons: in this work, the authors demonstrate that the proposed irreducible Cartesian Tensor Product can be used for both two-body and many-body feature interaction or equivariant convolutions with better theoretically-proved efficiency. However, there are no comprehensive comparisons covering different operations and also different rank L.
- Lack of large-scale experiments: all datasets used in this work (rMD17, 3BPA, and Acetylacetone) are limited in both the size of the molecular systems and the number of samples. Since the proposed approach is claimed to bring efficiency benefits, it would be necessary to verify it on larger-scale datasets such as OC20/22.
- Lack of experiments on applying iCTP for two-body operations only: MACE is mainly used to compare iCTP and spherical tensor products. However, two-body operations like equivariant feature interaction/equivariant convolution are also widely used in equivariant networks. It would be better to further verify the effectiveness of iCTP on these operations only to demonstrate its generality.
Overall, it is of great significance to design more powerful and efficient equivariant networks for real-world applications. However, several issues exist in the current submission. My recommendation is Borderline Accept, and I will carefully read the rebuttal and other reviews to decide whether to decrease or increase my scores.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Could you explain in detail the difference in computational complexity between iCTP and spherical tensor products for both two-body and many-body interactions?
2. Could you comprehensively compare this work with TensorNet and discuss their respective strengths and weaknesses?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback and comments. We have addressed each of their points below and included all new results in Tabs. 1–3 and Fig. 1 of the attached PDF.
**W1:** We agree that discussing these points would improve our work. Indeed, our approach includes TensorNet and CACE as special cases. A TensorNet-like architecture could be defined with $\nu=2$ and $L = l_\text{max} = 2$, though with equivariant convolution filters; see Tab. 3. For the computational complexity, see W2.1 and Q1.
Unlike TensorNet and CACE, which use invariant radial (both) and many-body (CACE) filters, we propose equivariant convolution filters based on irreducible Cartesian tensors. Additionally, while TensorNet and CACE embed atomic environments using reducible tensors, we use exclusively irreducible ones and ensure that irreducible representations are not mixed during message-passing. TensorNet decomposes reducible rank-2 tensors before computing messages and builds explicit 3-body features. In contrast, our approach introduces the product basis using irreducible Cartesian tensor products and systematically constructs equivariant many-body messages. The limited message-passing mechanisms in TensorNet and CACE restrict their architectures and expressive power. Our approach enables the systematic construction of O(3) equivariant models and enhances their expressive power.
**W2.1:** We agree that an ablation study on $L$ and $\nu$ would improve our work. Tab. 1 and Fig. 1 show the inference time and memory consumption for varying ranks $L$ (messages) and $l_\text{max}$ (tensors embedding environments); we also vary the number of contracted tensors $\nu$. Indeed, ICTP outperforms MACE for most parameter values. Particularly, ICTP allows spanning the $\nu$-space more efficiently and, thus, improves models' expressive power for tasks requiring correlations of higher body orders. These correlations become more important when, e.g., environments are degenerate with respect to lower body orders, and higher accuracy is required [B, G].
ICTP is also more computationally efficient if $\nu = 3$ and $L \leq 4$. Note that $L \leq 4$ is sufficient for most applications in physics; see, e.g., [A]. For environments with $L$-fold symmetries, however, rank-$L$ tensors may be required [B]. These symmetries are typically lifted in atomistic simulations. Fig. 1 shows that for $L > 4$, Cartesian models will also be advantageous if $\nu > 4$. These results agree with our complexity analysis in Q1.
**W2.2:** We added a large-scale data set that aims to assess the model's performance on a varying number of atom types/components and on relaxed (0 K) as well as high-temperature structures. The Ta-V-Cr-W data set is diverse and includes 0 K energies, forces, and stresses for 2-, 3-, and 4-component systems and 2500 K properties in 4-component disordered alloys [C]. It contains 6711 configurations with sizes ranging from 2 to 432 atoms in the periodic cell. We ran a hyperparameter search for both models to obtain suitable relative weights for the energy, force, and virial losses. No configuration for MACE provides competitive accuracy for energies and forces simultaneously. Tab. 3 shows that MACE at most matches the accuracy of ICTP on forces but is typically outperformed on energies, by up to a factor of 2.0.
We decided not to use OC20/22 as it does not allow systematic evaluation of models' performance across different crystal structures, temperatures, and stoichiometries, which can facilitate further method development.
**W2.3:** Tab. 2 shows ICTP/MACE results with $\nu=1$, i.e., using only two-body interactions. ICTP has accuracy comparable to or better than that of MACE and inference times smaller by a factor of 1.13.
**Q1:** For irreducible Cartesian tensors, the computational complexity of a tensor product is $\mathcal{O}(9^{L} L!/(2^{L/2}(L/2)!))$; see Section B3. Thus, for two-body interactions, we get $\mathcal{O}(E N_\text{ch} 9^{L} L!/(2^{L/2}(L/2)!))$ ($E$: number of edges; $L$: tensor rank; $N_\text{ch}$: number of features). The Clebsch-Gordan (CG) tensor product in spherical models has the complexity of $\mathcal{O}(L^5)$. Thus, for an equivariant convolution, we get $\mathcal{O}(EN_\text{ch}L^5)$. For many-body interactions, we obtain $\mathcal{O}(M N_\text{ch} K (9^{L} L!/(2^{L/2}(L/2)!))^{\nu-1})$ and $\mathcal{O}(M N_\text{ch} K L^{5(\nu-1)})$ for ICTP and MACE, respectively ($M$: number of nodes; $\nu$: number of contracted tensors; $K = \text{len}(\eta_\nu)$: all possible $(\nu-1)$-fold tensor contractions). Spherical models can use generalized CG coefficients, resulting in $\mathcal{O}(M N_\text{ch} K L^{\frac{1}{2}\nu(\nu + 3)})$. The factor $K$ is removed in MACE by restricting the parameterization to uncoupled features, i.e., we have $\mathcal{O}(M N_\text{ch} L^{\frac{1}{2}\nu(\nu + 3)})$. However, this choice of the product basis makes MACE more computationally efficient than ICTP only for large $N_\text{ch}$ and small $\nu$; see also W2.1. Tab. 2 in the manuscript also shows that leveraging the symmetry of tensor products and coupled features improves the computational efficiency of ICTP.
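As a reading aid, the two scalings above can be evaluated numerically. The following is a small illustrative sketch (not code from the manuscript); the function names are hypothetical, and for simplicity it assumes even rank $L$ and ignores the contraction-path count $K$.

```python
from math import factorial

def cartesian_tp_cost(L: int) -> int:
    """Cost factor of one irreducible Cartesian tensor product of rank L:
    9^L * L! / (2^(L/2) * (L/2)!). This sketch assumes even L."""
    assert L % 2 == 0, "sketch assumes even rank L"
    half = L // 2
    return 9 ** L * factorial(L) // (2 ** half * factorial(half))

def spherical_tp_cost(L: int) -> int:
    """Cost factor of one Clebsch-Gordan tensor product: L^5."""
    return L ** 5

def many_body_cost(L: int, nu: int, cartesian: bool) -> int:
    """Product-basis cost factor per node and channel, ignoring K.
    Cartesian: cost(L)^(nu-1); spherical (MACE-like): L^(nu*(nu+3)/2).
    Note the spherical exponent grows quadratically in nu, while the
    Cartesian exponent grows only linearly in nu."""
    if cartesian:
        return cartesian_tp_cost(L) ** (nu - 1)
    return L ** (nu * (nu + 3) // 2)

# Two-body (equivariant convolution) cost factors at L = 2:
print(cartesian_tp_cost(2), spherical_tp_cost(2))  # 81 32
```

Because the spherical exponent is quadratic in $\nu$ and the Cartesian one is linear, the Cartesian count eventually wins as $\nu$ grows, consistent with the discussion in W2.1.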
**Q2:** ICTP includes TensorNet as a special case with $\nu=2$ and $L = l_\text{max} = 2$, though with equivariant convolution filters; see also W1 and Tab. 3. TensorNet uses reducible Cartesian tensors to embed atomic environments and decomposes them into irreducible ones before computing products with invariant radial filters. It includes explicit three-body features since it computes a matrix-matrix product between node features and messages. Our approach uses exclusively irreducible Cartesian tensors for embedding environments, equivariant convolutions, and product basis. Thus, we do not mix irreducible representations during message passing. We use irreducible tensor products for the equivariant convolution and go beyond invariant filters. Finally, we systematically construct equivariant many-body messages.
---
Rebuttal 2:
Title: Tab. 1: Inference times and memory consumption as a function of L and ν for the 3BPA data set.
Comment: All values are obtained by averaging over five independent runs. Best performances are highlighted in bold. Inference time and memory consumption are measured for a batch size of 10. Inference time is reported per structure in ms; memory consumption is provided for the entire batch in GB.
| | *L* = 1 | | *L* = 2 | | *L* = 3 | |
|:---|:---|:---|:---|:---|:---|:---|
| | ICTP | MACE | ICTP | MACE | ICTP | MACE |
| Inference times | | | | | | |
| *ν* = 1 | **0.76 ± 0.17** | 1.02 ± 0.03 | **0.87 ± 0.18** | 1.38 ± 0.04 | **0.98 ± 0.26** | 1.88 ± 0.03 |
| *ν* = 2 | **0.59 ± 0.20** | 1.12 ± 0.03 | **1.03 ± 0.21** | 1.52 ± 0.05 | **1.34 ± 0.08** | 2.0 ± 0.10 |
| *ν* = 3 | **0.79 ± 0.22** | 1.23 ± 0.03 | **1.15 ± 0.08** | 1.67 ± 0.03 | **1.85 ± 0.13** | 2.23 ± 0.03 |
| *ν* = 4 | **0.94 ± 0.17** | 1.41 ± 0.11 | **1.31 ± 0.21** | 1.83 ± 0.01 | **2.07 ± 0.20** | 2.53 ± 0.01 |
| *ν* = 5 | **1.02 ± 0.17** | 1.52 ± 0.08 | **1.72 ± 0.07** | 2.26 ± 0.03 | **3.61 ± 0.02** | OOM |
| *ν* = 6 | **1.00 ± 0.07** | 1.77 ± 0.05 | **1.83 ± 0.16** | 27.85 ± 0.01 | **16.76 ± 0.35** | OOM |
| Memory consumption | | | | | | |
| *ν* = 1 | 0.05 ± 0.00 | **0.04 ± 0.00** | 0.08 ± 0.00 | **0.06 ± 0.00** | 0.21 ± 0.00 | **0.13 ± 0.00** |
| *ν* = 2 | 0.05 ± 0.00 | **0.04 ± 0.00** | 0.08 ± 0.00 | **0.07 ± 0.00** | 0.28 ± 0.09 | **0.13 ± 0.00** |
| *ν* = 3 | 0.05 ± 0.00 | **0.04 ± 0.00** | 0.10 ± 0.00 | **0.08 ± 0.00** | 0.51 ± 0.03 | **0.23 ± 0.00** |
| *ν* = 4 | **0.05 ± 0.00** | **0.05 ± 0.00** | **0.18 ± 0.08** | 0.30 ± 0.00 | **1.07 ± 0.10** | 4.16 ± 0.00 |
| *ν* = 5 | **0.05 ± 0.00** | 0.07 ± 0.00 | **0.35 ± 0.07** | 3.18 ± 0.00 | **5.07 ± 0.02** | OOM |
| *ν* = 6 | **0.11 ± 0.09** | 0.22 ± 0.00 | **0.93 ± 0.00** | 50.49 ± 0.00 | **28.48 ± 0.03** | OOM |
---
Rebuttal 3:
Title: Tab. 2: Energy (E, meV) and force (F, meV/Å) RMSEs for the 3BPA data set and ν = 1.
Comment: All values are obtained by averaging over five independent runs. Best performances are highlighted in bold. Inference time and memory consumption are measured for a batch size of 100. Inference time is reported per structure in ms; memory consumption is provided for the entire batch in GB.
| | | ICTP (*L* = 2) | MACE (*L* = 2) |
|:-------------------------------|:---:|-------------------:|-----------------:|
| 300 K | E | **12.90 ± 1.06** | **13.50 ± 1.71** |
| | F | **29.90 ± 0.25** | **30.18 ± 0.38** |
| 600 K | E | **29.97 ± 0.94** | **31.32 ± 2.16** |
| | F | **62.80 ± 0.45** | **63.04 ± 0.73** |
| 1200 K | E | **81.03 ± 1.64** | **81.54 ± 2.02** |
| | F | **146.96 ± 1.30** | 149.44 ± 1.94 |
| Dihedral slices | E | **22.84 ± 2.96** | 28.08 ± 4.04 |
| | F | **48.82 ± 5.25** | **49.62 ± 2.92** |
| Inference time | | **2.62 ± 0.02** | 2.96 ± 0.06 |
| Memory consumption | | 32.57 ± 0.00 | **23.32 ± 0.00** |
---
Rebuttal 4:
Title: Tab. 3: Energy (E, meV) and force (F, eV/Å) RMSEs for Ta-V-Cr-W subsystems.
Comment: Results are obtained by averaging over 10 independent runs. Best performances are highlighted in bold. Inference time and memory consumption are measured for a batch size of 50. Inference time is reported per atom in μs; memory consumption is provided for the entire batch in GB.
| Subsystem | | ICTP (*L* = 2) | ICTP (*L* = 1) | ICTP (*L* = 0) | MACE (*L* = 2) | MACE (*L* = 1) | MACE (*L* = 0) | ICTP (*L* = 2, *ν* = 2) | MTP | GM-NN | EAM |
|:---|:--:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| TaV | E | **1.02 ± 0.27** | **1.21 ± 0.54** | 1.65 ± 1.06 | 1.72 ± 0.67 | 1.76 ± 0.53 | 2.24 ± 1.34 | **1.24 ± 0.50** | 1.94 | 1.54 | 32.0 |
| | F | **0.020 ± 0.002** | 0.022 ± 0.002 | 0.024 ± 0.002 | **0.019 ± 0.002** | **0.020 ± 0.003** | 0.022 ± 0.002 | 0.023 ± 0.002 | 0.050 | 0.029 | 0.404 |
| TaCr | E | **1.81 ± 0.29** | **1.94 ± 0.23** | 2.13 ± 0.19 | 3.26 ± 0.42 | 3.31 ± 0.44 | 4.18 ± 0.56 | 2.4 ± 0.33 | 3.26 | 2.98 | 43.6 |
| | F | **0.025 ± 0.007** | **0.024 ± 0.006** | 0.027 ± 0.005 | 0.029 ± 0.01 | **0.026 ± 0.007** | 0.028 ± 0.007 | **0.026 ± 0.006** | 0.057 | 0.038 | 0.343 |
| TaW | E | **1.75 ± 0.11** | 1.87 ± 0.14 | 2.45 ± 0.31 | 2.73 ± 0.53 | 3.21 ± 0.55 | 3.57 ± 0.48 | 2.19 ± 0.54 | 2.72 | 2.99 | 44.8 |
| | F | **0.017 ± 0.002** | **0.018 ± 0.002** | 0.020 ± 0.002 | **0.017 ± 0.002** | **0.018 ± 0.002** | 0.019 ± 0.002 | **0.018 ± 0.002** | 0.038 | 0.025 | 0.248 |
| VCr | E | **1.74 ± 1.2** | 2.52 ± 2.43 | **2.13 ± 1.24** | **2.19 ± 0.78** | 2.82 ± 1.28 | 3.11 ± 1.42 | **1.89 ± 1.27** | **2.29** | 2.82 | 44.8 |
| | F | **0.016 ± 0.002** | 0.018 ± 0.001 | 0.019 ± 0.001 | **0.016 ± 0.001** | **0.017 ± 0.001** | 0.018 ± 0.002 | 0.019 ± 0.001 | 0.036 | 0.025 | 0.270 |
| VW | E | **1.32 ± 0.2** | **1.46 ± 0.16** | 1.69 ± 0.21 | 1.9 ± 0.19 | 1.94 ± 0.23 | 2.42 ± 0.24 | 1.61 ± 0.16 | 2.50 | 2.00 | 21.3 |
| | F | **0.014 ± 0.002** | **0.015 ± 0.002** | 0.018 ± 0.003 | **0.014 ± 0.002** | **0.015 ± 0.002** | 0.017 ± 0.002 | 0.016 ± 0.002 | 0.037 | 0.023 | 0.292 |
| CrW | E | **2.18 ± 0.93** | **2.45 ± 1.53** | 2.76 ± 1.15 | **2.31 ± 1.18** | 2.84 ± 0.98 | 4.14 ± 1.38 | 3.12 ± 1.90 | 4.35 | 2.87 | 23.4 |
| | F | **0.018 ± 0.004** | **0.020 ± 0.005** | 0.024 ± 0.008 | **0.020 ± 0.009** | **0.019 ± 0.006** | 0.023 ± 0.007 | 0.022 ± 0.006 | 0.041 | 0.029 | 0.248 |
| TaVCr | E | **0.79 ± 0.08** | 0.92 ± 0.17 | 1.00 ± 0.24 | 2.26 ± 0.54 | 2.71 ± 0.66 | 3.92 ± 0.77 | 0.97 ± 0.13 | 2.43 | 1.97 | 34.1 |
| | F | 0.027 ± 0.001 | 0.029 ± 0.002 | 0.033 ± 0.002 | **0.023 ± 0.002** | **0.024 ± 0.001** | 0.028 ± 0.001 | 0.031 ± 0.002 | 0.054 | 0.045 | 0.313 |
| TaVW | E | **1.00 ± 0.2** | **0.98 ± 0.18** | 1.26 ± 0.23 | 1.8 ± 0.35 | 1.97 ± 0.44 | 2.29 ± 0.86 | **0.95 ± 0.25** | 1.67 | 1.70 | 39.6 |
| | F | **0.021 ± 0.001** | 0.022 ± 0.001 | 0.025 ± 0.001 | **0.021 ± 0.002** | 0.023 ± 0.001 | 0.026 ± 0.001 | 0.023 ± 0.001 | 0.043 | 0.034 | 0.321 |
| TaCrW | E | **1.16 ± 0.15** | **1.28 ± 0.13** | 1.58 ± 0.29 | 1.67 ± 0.38 | 1.48 ± 0.50 | 2.08 ± 0.57 | **1.24 ± 0.11** | 2.08 | 2.19 | 23.6 |
| | F | **0.022 ± 0.001** | 0.024 ± 0.001 | 0.027 ± 0.001 | 0.028 ± 0.002 | 0.030 ± 0.002 | 0.033 ± 0.002 | 0.026 ± 0.001 | 0.051 | 0.039 | 0.327 |
| VCrW | E | **1.00 ± 0.16** | **1.07 ± 0.14** | 1.37 ± 0.13 | 1.97 ± 0.5 | 2.21 ± 0.42 | 2.86 ± 0.64 | **1.10 ± 0.14** | 1.37 | 1.94 | 19.4 |
| | F | **0.018 ± 0.001** | 0.019 ± 0.001 | 0.022 ± 0.001 | **0.017 ± 0.001** | 0.019 ± 0.001 | 0.021 ± 0.001 | 0.020 ± 0.001 | 0.040 | 0.031 | 0.314 |
| TaVCrW (0 K) | E | **1.22 ± 0.07** | 1.30 ± 0.1 | 1.48 ± 0.16 | 2.26 ± 0.55 | 2.48 ± 0.46 | 3.60 ± 0.54 | **1.33 ± 0.17** | 2.09 | 2.16 | 50.8 |
| | F | **0.021 ± 0.002** | **0.022 ± 0.002** | 0.025 ± 0.002 | **0.022 ± 0.001** | 0.023 ± 0.002 | 0.027 ± 0.001 | 0.024 ± 0.002 | 0.049 | 0.037 | 0.488 |
| TaVCrW (2500 K) | E | **1.63 ± 0.07** | 1.74 ± 0.11 | 2.09 ± 0.09 | 2.22 ± 0.48 | 2.34 ± 0.59 | 3.68 ± 0.70 | 2.06 ± 0.09 | 2.40 | 2.67 | 59.4 |
| | F | **0.116 ± 0.002** | 0.121 ± 0.002 | 0.141 ± 0.003 | **0.119 ± 0.007** | 0.126 ± 0.006 | 0.150 ± 0.003 | 0.140 ± 0.002 | 0.156 | 0.179 | 0.521 |
| Overall | E | **1.38 ± 0.09** | 1.56 ± 0.21 | 1.80 ± 0.18 | 2.19 ± 0.31 | 2.42 ± 0.31 | 3.17 ± 0.28 | 1.67 ± 0.21 | 2.43 | 2.32 | 37.14 |
| | F | **0.028 ± 0.001** | **0.029 ± 0.001** | 0.034 ± 0.001 | **0.029 ± 0.001** | 0.030 ± 0.001 | 0.034 ± 0.001 | 0.032 ± 0.001 | 0.054 | 0.043 | 0.443 |
| Inference time | | 51.78 ± 1.18 | 25.09 ± 0.02 | 14.59 ± 0.01 | 29.48 ± 0.23 | 15.37 ± 0.04 | 4.43 ± 0.00 | 14.97 ± 0.09 | 17.57 | 7.25 | 0.50 |
| Memory consumption | | 36.78 ± 0.00 | 16.93 ± 0.00 | 8.48 ± 0.00 | 28.82 ± 0.00 | 13.87 ± 0.00 | 5.91 ± 0.00 | 13.15 ± 0.00 | – | – | – |
---
Rebuttal 5:
Comment: Thank you for your clarifications. Most of my concerns have been addressed. I choose to increase my rating to 6.
---
Rebuttal Comment 5.1:
Comment: Dear reviewer,
We thank you for raising your score. Your feedback and suggestions have significantly improved our work. | null | null | Rebuttal 1:
Rebuttal: Dear Reviewers,
We thank you for your time and effort in evaluating the manuscript and providing positive feedback and constructive suggestions. Below, we provide a general response to your comments, with more details available in the individual discussions. We will revise the manuscript accordingly, incorporating all results produced during this review process.
All reviewers shared similar concerns:
* We improved the motivation for our work by discussing its differences from and advantages over recent Cartesian models, like TensorNet and CACE. We clarified that we are using irreducible Cartesian tensors for all parts of our message-passing layers, ensuring that irreducible representations are not mixed. Additionally, we define equivariant convolution filters that extend beyond the invariant ones used by recent work. We construct equivariant many-body messages using the product basis. Thus, our work enables a systematic construction of message-passing architectures that are equivariant under the action of $O(3)$ using irreducible Cartesian tensors, enhances the expressivity of resulting models, and captures TensorNet/CACE as special cases. We support our discussion by evaluating the TensorNet-like model on the Ta-V-Cr-W data set, which we included for this review.
* We extended our complexity analysis to further demonstrate the advantages of models based on irreducible Cartesian tensors over their spherical counterparts. For equivariant convolutions (two-body interactions), we obtain $\mathcal{O}(E N_\text{ch} 9^{L} L!/(2^{L/2}(L/2)!))$ and $\mathcal{O}(EN_\text{ch}L^5)$ for ICTP and MACE, respectively. For the product basis (many-body interactions), we obtain $\mathcal{O}(M N_\text{ch} K (9^{L} L!/(2^{L/2}(L/2)!))^{\nu-1})$ and $\mathcal{O}(M N_\text{ch} L^{\frac{1}{2}\nu(\nu + 3)})$ for ICTP and MACE, respectively. Here, $M$ is the number of nodes, $E$ is the number of edges, $N_\text{ch}$ is the number of feature channels, $L$ is the maximal tensor rank, $\nu$ is the number of contracted tensors, and $K = \text{len}(\eta_\nu)$ counts all possible $(\nu-1)$-fold tensor contractions. The obtained complexities show that ICTP becomes more computationally efficient than MACE as $\nu$ increases. Larger values of $\nu$ become more important when, e.g., local atomic environments are degenerate with respect to lower body orders, and higher accuracy is required [B, G]. We support our complexity analysis by performing an ablation study on the hyper-parameters $L$ and $\nu$ using the 3BPA data set.
* We included the large-scale Ta-V-Cr-W data set in our analysis of the models' performance [C]. This data set includes 0 K energies, atomic forces, and stresses for binary (two atom types), ternary (three atom types), and quaternary (four atom types) systems, as well as near-melting-temperature properties of four-component disordered alloys. In total, this benchmark data set contains 6711 configurations computed with density functional theory (DFT). More precisely, there are 5680 0 K structures (4491 binary, 595 ternary, and 594 quaternary), along with 1031 structures sampled from molecular dynamics (MD) at 2500 K. Structure sizes range from 2 to 432 atoms in the periodic cell. All models are trained using 5373 configurations (4873 for training and 500 for early stopping), while the remaining configurations are reserved for testing the models' performance. The performance is tested separately on 0 K binaries, ternaries, and quaternaries, and on near-melting-temperature four-component disordered alloys. ICTP systematically provides energies and forces better than the current state of the art (MTP/GM-NN) for the Ta-V-Cr-W data set. In contrast, for MACE, we were not able to identify a set of relative weights for the energy, force, and virial losses that consistently yields competitive results for both energies and forces.
All numerical results for the experiments conducted for this review are presented in Tabs. 1, 2, and 3 of the attached PDF. We also include Fig. 1, illustrating inference time and memory consumption of ICTP and MACE as a function of $L$ and $\nu$.
We look forward to your feedback and await your opinions on our responses.
Yours Sincerely,
The authors.
**References**
[A] I. Grega, I. Batatia, G. Csányi *et al.*: Energy-conserving equivariant GNN for elasticity of lattice architected metamaterials. *Int. Conf. Learn. Represent.* [https://arxiv.org/abs/2401.16914](https://arxiv.org/abs/2401.16914) (2024)
[B] C. K. Joshi, C. Bodnar, S. V. Mathis *et al.*: On the Expressive Power of Geometric Graph Neural Networks. *Int. Conf. Learn. Represent.* [https://arxiv.org/abs/2301.09308](https://arxiv.org/abs/2301.09308) (2023)
[C] K. Gubaev, V. Zaverkin, P. Srinivasan *et al.*: Performance of two complementary machine-learned potentials in modelling chemically complex systems. *npj Comput. Mater.* **9**, 129 (2023)
[D] J. Nigam, M. J. Willatt, and M. Ceriotti: Equivariant representations for molecular Hamiltonians and $N$-center atomic-scale properties. *J. Chem. Phys.* **156**, 014115 (2022)
[E] O. T. Unke, M. Bogojeski, M. Gastegger *et al.*: SE(3)-equivariant prediction of molecular wavefunctions and electronic densities. *Adv. Neural Inf. Process. Syst.* [https://arxiv.org/abs/2106.02347](https://arxiv.org/abs/2106.02347) (2021)
[F] I. Batatia, L. L. Schaaf, G. Csányi *et al.*: Equivariant Matrix Function Neural Networks. *Int. Conf. Learn. Represent.* [https://arxiv.org/abs/2310.10434](https://arxiv.org/abs/2310.10434) (2024)
[G] S. N. Pozdnyakov, M. J. Willatt, A. P. Bartók *et al.*: Incompleteness of Atomic Structure Representations. *Phys. Rev. Lett.* **125**, 166001 (2020)
Pdf: /pdf/e750edc0968f07e9a9d663cffecdf19fd537d4aa.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning predictable and robust neural representations by straightening image sequences | Accept (poster) | Summary: This paper presents a simple self-supervised learning objective which aims to "straighten" representation trajectories in latent space - maximizing the cosine similarity of consecutive representation deltas (i.e., take three time steps, compute the difference in representation between each consecutive pair, and maximize the cosine similarity between these two difference vectors).
To prevent collapse, the authors suggest using two common regularization losses: one pushes the variance of each representation dimension toward one, and the other decorrelates the representation dimensions.
The objective is used to train networks on two simple, synthetic sequential datasets and is shown to learn interesting representations, demonstrated in a variety of ways: readout accuracy and robustness to noise and adversarial attacks.
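As we read it, the straightening term can be sketched in a few lines of plain Python (the naming is ours, and the anti-collapse variance/decorrelation regularizers are omitted):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def straightening_loss(traj):
    """Negative mean cosine similarity between consecutive representation
    deltas z[t+1] - z[t]; minimal (-1) when the trajectory is a straight line."""
    deltas = [[b - a for a, b in zip(traj[t], traj[t + 1])]
              for t in range(len(traj) - 1)]
    sims = [cosine(deltas[t], deltas[t + 1]) for t in range(len(deltas) - 1)]
    return -sum(sims) / len(sims)
```

A perfectly straight trajectory attains the minimum loss of -1; shuffled or curved trajectories score higher.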
Strengths: *Originality:*
While based on some existing work (and very close to [14], as the authors note), I found the work original in the way it is applied and actually simpler than existing work (a good thing!). I also appreciate the general context of the work and the neuroscience connections that can be made here.
*Quality:*
This is very solid work - while the idea is simple, the analysis and breadth of the _existing_ experiments are very good. I appreciated the multiple angles from which the learned representation is inspected - statistically across the data, depth-wise across layers, and via downstream task readouts. This is a nice example of how investigating the results of a method sheds light on the "idea" of a paper and its significance.
Having said that, see below for some comments on the scope of experiments at large.
*Clarity:*
The paper was a pleasure to read, with clear and concise language, no attempt to make things more complicated than they are and good figures and captions. See below for some (minor) comments.
*Significance:*
Probably the weakest point of the paper - this is a very "small" paper in scope, and (see reasons below) I think its actual contribution to the community is a bit limited. Having said that, it is an "idea" paper (rather than a "results" paper), so it could serve as a basis for future work, increasing the potential contribution.
Weaknesses: *Scope of experiments:*
While I appreciated the depth and breadth of the existing experiments, I think the scope of the experimental validation is still a bit limited. I appreciate the difficulty in running larger-scale experiments with video data for many groups due to resource limitations; however, I feel that in this case, the use of synthetic data alone, and beyond that, just MNIST and CIFAR10, is a very limiting factor. I would have loved to see experiments on larger, natural(istic) datasets. For an objective like the one proposed in the paper, using sequences with very simple, almost linear transformations across time makes the results a lot less interesting than they could have been - it is no wonder that a model trained to learn simple transformations succeeds on data comprised only of simple transformations. The interesting question is what this model can do when the transformations are complex and non-linear, as in natural video - multiple objects moving, viewpoint changes, etc.
Beyond that, I would be interested to see measures of straightness for some existing, pre-trained models - especially from the family of state-of-the-art SSL models like DINO v2, MAE, etc. Do they naturally learn "straight" representations even though they were not trained on sequences? Can you fine-tune them with a straightening loss?
To be clear, I do not expect the authors to run any new experiments during the rebuttal period as I appreciate what I am suggesting here may take more time and resources than is possible for this short amount of time (but needless to say I'm happy to see any results the authors may be able to produce in this context).
*Presentation (minor):*
While I enjoyed the presentation in general, I think the use of coloured labels (Figure 2C for example) right on the plot is weird and confusing. A legend would have been more useful. A bit more content in the captions would have been nice as well - Figure 3, for example, has insets which are not explained in the caption at all.
Technical Quality: 4
Clarity: 4
Questions for Authors: Beyond the comments above I have some more minor questions:
1. The text mentions that in order to calculate the "straightness" the representation needs to be flattened - while this makes sense, I do not understand why we see such a big jump in "straightness" around flattening layers (Fig 2B and Fig 4B) - I would expect this layer to have no effect (because the flattening happens either way) but it's the biggest "jump" in the plots.
2. The authors suggest that adversarial examples for common networks have imperceptible perturbations and it is implied, I think, that adversarial examples for straightening models are different - but I think these are not shown in the paper, am I wrong?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 2
Limitations: The authors address some of the limitations in the paper, I hope to see a bit more of them in light of what's written above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and comments.
**Scope of experiments:** We acknowledge the limitations of our current experiments. In the global response, we explained why we did not use natural videos, and our plan for improvement.
**DINO v2 and MAE:**
- To give a partial answer to the reviewer’s questions, we tested pre-trained DINO on our sequential CIFAR dataset (code adapted from [8]). The representations are not naturally straightened. In fact, none of the SSL models we tested in Figure 5 naturally shows straightening; representations become straighter only after the straightening loss is added to their original loss. For DINO, adversarial robustness also improves after being augmented with straightening. We provided Figure R3 in the rebuttal PDF to emphasize this result.
- In the short time frame we were not able to find a reliable implementation of DINO v2 or MAE trained on CIFAR10. But it was shown in [16] that MAE (see Figure 24) and ViT-based DINO (see Figure 7) do not naturally straighten natural videos.
**Presentation:**
- In the rebuttal PDF we provide an updated version of Figure 2C with legends, where we also added regression results for the shuffled case.
- The insets of Figure 3 show a schematic diagram of the representation geometry for each case. A) Trajectories are more parallel if they are from the same digit and the same transformation class; B) C) D) Trajectories are more orthogonal if they are from different classes. We will revise the caption to clarify this.
**Flattening layer effects:** Flattening has no effect on straightness - in fact, flattening layers are included as a sanity check. But spatial pooling and fully-connected linear layers can improve straightness: by selectively projecting responses to a lower dimension, they alleviate noise and represent coarser, smoother features [19].
**Qualitative assessment for adversarial images:** We did not make a claim regarding the adversarial examples for straightening models - see additional discussion in the global response.
**Expanding the writing for the limitation section:** We will expand the discussion of the scope of the work in the Limitations section of the updated paper.
---
Rebuttal Comment 1.1:
Title: Thank you for the detailed response.
Comment: I thank the reviewers for the time and effort taken to respond to my (and other's) review.
As I mentioned in the original review I think that is very interesting work that should be accepted - many of my concerns have been answered in the rebuttal and the ones that haven't been (I still think natural video would have been cool to see here) are not a reason to reject the paper.
I am therefore increasing the score and am looking forward to seeing this line of work evolve in the future. | Summary: The current manuscript introduces a self-supervised learning (SSL) objective inspired by biological vision systems. It proposes an objective that promotes the "straightening" of neural representations of image sequences, facilitating linear prediction. The proposed method is tested on small, synthetic datasets like sequential MNIST and CIFAR-10, demonstrating that the learned representations are more predictive, robust to noise and adversarial attacks, and can enhance other SSL methods when used as a regularizer.
Strengths: - The paper is relatively easy to follow and well structured.
- The paper is motivated by findings from biological vision systems, providing a novel angle to the objective function design in SSL.
- The paper provides a clear geometric explanation of how *straightening* contributes to class separability, which helps in understanding the underlying mechanics of the proposed method.
Weaknesses: - The main weakness is the novelty claim regarding the objective function. Similar loss functions and concepts have been explored in other works. Examples are: [1-5]
- The paper needs to better differentiate its approach from existing methods that use linear predictors or phase-pooling for straightening.
- Another major issue is the experimental setting, which is primarily conducted on synthetic datasets. While this allows controlled comparisons, it limits the demonstration of the method's applicability to real-world data.
- Comparisons to existing state-of-the-art SSL methods are somewhat limited. More extensive benchmarking against a broader range of methods could strengthen the claims.
References:
[1] VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning published at ICLR 2022
[2] Learning a Depth Covariance Function published at CVPR 2023
[3] MLVICX: Multi-Level Variance-Covariance Exploration for Chest X-ray Self-Supervised Representation Learning
[4] Variance Covariance Regularization Enforces Pairwise Independence in Self-Supervised Representations
[5] An Information-Theoretic Perspective on Variance-Invariance-Covariance Regularization
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see the weaknesses and limitations.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The paper presents an interesting approach to SSL by leveraging a biologically inspired objective that promotes the *straightening of neural representations over time*. This approach is evaluated on synthetic datasets, showing improved robustness and predictive performance. However, the novelty of the objective function is not fully established, as similar concepts have been previously explored. Additionally, the reliance on synthetic datasets limits the demonstration of the method's real-world applicability. While the paper provides a clear geometric intuition for its approach and shows promising results, more extensive comparisons with state-of-the-art methods and testing on real-world datasets would strengthen the claims. Therefore, based on the current presentation and evaluation, the novelty and practical impact of the proposed objective function is not sufficiently justified.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and comments.
**General comments on novelty:** see global response.
**How straightening differs from the references mentioned by the reviewer:** References pointed out by the reviewer seem to focus on why and when the variance-covariance regularizer is useful, while our contribution focuses on the straightening objective. In fact, what we showed in Figure 5 is that straightening can be accompanied by various forms of regularization and still learn useful and robust representations. VICReg is one of the main reference models we compared to (and beat!).
**How straightening differs from linear prediction and phase pooling:**
- Straightening is the simplest form of second-order linear prediction in that it requires no parameter setting in the prediction step. Yet, it demonstrates excellent predictive power for the transformations we tested on (see Figure 2D and Appendix B), and yields unexpected robustness that it is not explicitly trained for. It would not be surprising that a general parametrization of a second-order linear predictor could perform at least as well as straightening, but we would argue that achieving comparable results with a simpler objective corresponds to a significant contribution (rather than a lack of novelty).
- [14] uses a complex architecture including things like phase-pooling and further relies on an autoencoder structure and a pixel-level prediction loss to prevent information collapse. Our solution succeeds with a much simpler architecture. Critically, [14] did not show any quantitative evaluation of the learned representations, while we compared our results to state-of-the-art SSL models that are difficult to beat.
**Reasons for not having used natural videos:** see global response.
**More comparison to benchmark results:** While it is unrealistic to compare to all models on the market, we have added a new comparison with the DINO method shown in Figure R3 of the rebuttal PDF. We were able to show that in this instance as well, representations become more adversarially robust when the DINO objective is augmented with a small amount of straightening (weight for straightening is 0.005).
---
Rebuttal Comment 1.1:
Title: response by Reviewer jxcp
Comment: I have carefully reviewed the feedback from other reviewers, considered the author’s rebuttal and global responses, and followed the ensuing discussion. I appreciate the authors' thorough responses, particularly their clarification on W1 and the new experimental results (B and C) during the rebuttal period. Therefore I will raise my score from 4 to 6. | Summary: This is a very interesting paper that aims to show that robustness is a consequence of perceptual straightening during training - two areas that have largely remained disconnected in vision, in particular because adversarial robustness is generally studied from a theoretical perspective or in empirical cat-and-mouse scenarios of new attacks vs. defenses (or privacy). In the case of perceptual straightening, since it has only recently been introduced as a neuroAI-inspired motif that should be added to machine vision systems, it is not clear whether straightening will provide robustness or the other way around (determining the causal factor - though some previous work has shown this to a certain degree). This paper shows that straightening a model will provide robustness.
Strengths: * The topic of perceptual straightening is relevant and heavily under-discussed at the intersection of computational vision and representational learning.
* I think it is nice to use an organic version of SSL via temporal conditioning as a way to do perceptual straightening
* The goal of the paper is easy to understand: apply perceptual straightening on a model, then test to see if it will be more robust than its non-straightened version (it is!)
Weaknesses: - The paper could do a better job exploring the **qualitative assessment** of the responses of different trained networks (with and without straightening and SSL) **for adversarial images**. The authors make a strong case that robustness to adversarial images is stronger for perceptually straightened (PS) neural networks, only showing curves, but is this actually the case if we qualitatively look at samples? Will attacking a PS-NN with a target fish label actually morph the image into a fish? See for example: Berrios & Deza. SVRHM 2022; Santurkar et al. NeurIPS 2019; and recently Gaziv et al. NeurIPS 2023.
- In Figure 2C I like that there is a shuffled condition to break the linearity of the straightening. What’s not clear to me is whether, in the other bar plots within Figure 2C, the shuffled condition achieved 0 decoding, since it is not visible (e.g. location, size, orientation)? Or was the experiment not run? Would the authors be able to clarify this?
Technical Quality: 3
Clarity: 4
Questions for Authors: * There is something odd about using synthesized data (of MNIST?) as a proxy for real video data with natural scene statistics. I wish there was an experiment with real video data such as down-sampled video data from YouTube or Autonomous driving video data (real vs shuffled). Perhaps I missed this in the paper, and am curious to know authors thoughts on this.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Missing papers:
- Santurkar et al. NeurIPS 2019. Image Synthesis with a Single (Robust) Classifier
- **Kong & Norcia. SVRHM 2021. Are models trained on temporally-continuous data streams more adversarially robust?**
- Berrios & Deza. SVRHM 2022. Joint rotational invariance and adversarial training of a dual-stream Transformer yields state of the art Brain-Score for Area V4
- Gaziv et al. NeurIPS 2023. Robustified ANNs Reveal Wormholes Between Human Category Percepts
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and comments.
**Qualitative assessment for adversarial images:** see global response.
**Regression results:** The purpose of training straightening on shuffled frames is to validate that straightening indeed makes use of the temporal correlations of inputs, and that if the correlations are destroyed, learning fails. We did not run regression on location/size/orientation for the shuffled case, because we thought the failure in decoding object identity would be sufficient. But for clarification and completeness, we’ve now verified that decoding accuracies are essentially zero - location/size/orientation information is completely lost in the shuffled case. These results are shown in Figure R2 of the rebuttal PDF.
**Reasons for not having used natural videos:** see global response.
**The Kong & Norcia paper:** Thanks for pointing out this relevant paper, which shows models trained on temporally-continuous data (the SATCam video frames) are more adversarially robust than those trained on ImageNet. We will certainly add a citation. This is complementary to our contribution suggesting that natural video statistics have temporal structure exploitable by contrastive learning. Note however that the straightening objective is fundamentally different from the two objectives used in this paper, namely temporal classification (classifying frames to the episode they belong to) and temporal contrastive learning (frames are positive examples when they are close in time). | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their comments and questions. This global response addresses questions that were raised by multiple reviewers, and the other points are addressed in individual responses.
**Why didn’t we use a natural video dataset?**
- To link robustness and straightening, our primary comparisons are to SSL models trained and tested on static image datasets, so we sought to match these in terms of training data and evaluation pipeline. Under these conditions, our model is on par or better than the competition.
- We had considered the possibility of training both the reference and straightening models on a video dataset. But typical video datasets lack sufficient object class variety [25] (for example, object-centric natural video datasets such as YouTube-BoundingBoxes [Real2017] or Objectron [Ahmadyan2021] contain only 23 and 9 object classes, respectively). Some efforts have started to align the data distribution of the two domains, but well-accepted benchmarks have not been established yet. However, given the shared concern from the reviewers we will include a minimal version of this experiment in the final paper. Under the same training and testing pipeline, we will compare the relative performance of reference models and the straightening model on robustness and object detection.
- In addition to their insufficient object class variety, the predictable structure of natural videos evolves at multiple timescales. It is not clear whether a feedforward architecture that takes in one frame at a time and makes predictions at a single temporal scale is enough to fully take advantage of such structure. As described in the Discussion section, we are currently working on a hierarchical extension of straightening that is better suited to capture the interaction across multiple spatial and temporal scales. We think this will be a separate contribution and is not the focus of this submission. Still, it is worth noting that this additional complexity is not needed for the single-scale straightening model to be on par with or better than SSL state-of-the-art.
**Qualitative assessment for adversarial images**
- Supervised and self-supervised representations exhibit similar performance for generalization and noise robustness [Geirhos2020], and both are susceptible to imperceptible adversarial attacks.
- For the invariance-based SSL models and straightening: In Figure R1 of the rebuttal PDF, we show two examples of untargeted attacks under a span of attack budgets. For small budgets, their adversarial images are indistinguishable from their original counterparts. When the budget is large (L2 norm above 2.0), attacks generated from the straightening model appear more visually apparent than the invariance counterparts. The straightening attacks seem to concentrate on key parts of the object, while the invariance attacks are distributed throughout the image. This suggests some degree of alignment between the straightened representations and human perception. However, this effect is not easily quantifiable, therefore, we did not include these in the paper. The adversarial images generated for untargeted and targeted attacks do not look qualitatively different.
- More generally, we do not think it fair to compare adversarial robustness performance or adversarial images of robustly trained networks and SSL networks, as the former are specifically trained to correct for the mistakes of adversarial images. Importantly, robustified networks are extremely costly to train while our solution achieves robustness with minimal compute costs.
**Some general comments on novelty and significance**
- Our work builds on a foundation established in several previous publications: 1) straightening provides a particular form of predictive coding, and provides a specific objective that has been shown consistent with biological representations [3, 9, 20]; 2) recent SSL developments provide effective methods for preventing representational collapse [2]; 3) recent empirical attempts to characterize, post-hoc, straightening in trained neural networks [16, 26]. The work that comes closest to ours in goals is [14], which tried using straightening as part of a learning objective with a more complex architecture. We are the first to show that straightening can actually achieve competitive SSL performance - specifically: 1) straightening learns richer semantic representations than state-of-the-art contrastive methods (as it keeps more information about the input than invariance objectives) and 2) these representations automatically inherit noise robustness. This makes our form of straightening an important new tool in the SSL toolkit, one which we expect to generalize to other learning setups (at the very least as a useful regularizer). Thus, this paper is just a first step, and we expect the community will build upon and expand our results.
Additional references: \
[Real2017] Real, E., Shlens, J., Mazzocchi, S., Pan, X., & Vanhoucke, V. (2017). Youtube-boundingboxes: A large high-precision human-annotated data set for object detection in video. In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 5296-5305). \
[Ahmadyan2021] Ahmadyan, A., Zhang, L., Ablavatski, A., Wei, J., & Grundmann, M. (2021). Objectron: A large scale dataset of object-centric videos in the wild with pose annotations. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 7822-7831). \
[Geirhos2020] Geirhos, R., Narayanappa, K., Mitzkus, B., Bethge, M., Wichmann, F. A., & Brendel, W. (2020). On the surprising similarities between supervised and self-supervised models. arXiv preprint arXiv:2010.08377.
Pdf: /pdf/cee2d1fde361540dcdc890d205e9688bebe97b2f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MatFormer: Nested Transformer for Elastic Inference | Accept (poster) | Summary: The paper proposes MatFormer, a technique to achieve elastic inference where one model is trained, encapsulating sub-models that can be extracted on demand. The main idea is to apply a Matryoshka "self-stacking" of hidden states in the FFN blocks of transformers, which are randomly sampled at train time so that all sub-models are trained concurrently. Experimental results on million-parameter transformers show that this idea works very well in producing low-complexity models at inference time.
Strengths: The idea proposed is very original: train one model and obtain many sub-models at inference time.
Experimentally, the approach is shown to work well with LLMs and ViTs, and shows promising results in terms of scaling with vanilla transformers. In addition to compression, it is shown that an actual speed-up in execution can be achieved in tandem with speculative decoding.
Finally, the paper is very well written; thank you to the authors for making everything so clear that I was able to understand everything in my review.
Weaknesses: I have identified a few weaknesses in the paper, most of which are included in the "questions" section below. Please address those:
- I have doubts on the application of the Matryoshka approach to the attention block (see questions)
- In terms of training time, it is unclear whether the proposed approach would be better than training small models, when taking into account FLOPs per iteration and speed of convergence (larger models need more tokens).
- Experiments are limited to models with less than a billion parameters, while the description uses Llama 34B, 40B, and 70B as motivating examples.
- Minor: There are no theoretical results.
Technical Quality: 3
Clarity: 3
Questions for Authors: I have several questions to the authors:
Issues in the motivation of the work:
- The authors identify compression/pruning as one option to fit a bigger model to a compute budget, but argue that such a scheme requires additional training. This is not very fair: since MatFormer itself trains from scratch, one could also do quantization-aware, pruning-aware, or compression-aware training from scratch. While this approach would still necessitate one training session for one model size, I disagree with the authors' assertion that compression "requires additional training".
- At line 40, the authors claim that MatFormer is applied to both attention and FFN blocks. However, Figure 1 shows that this is only applied to the hidden state of the FFN, and later sections in the work (Section 3 where the scheme is presented, and Section 4 where results are reported) also only use MatFormer on the FFN blocks. It seems like this is an overclaim in the introduction which should be removed - unless MatFormer can be applied to the attention. If so, how do we define MatFormer for attention? In the FFN, it is very clear how the hidden state is arbitrarily truncated to smaller chunks in a Matryoshka fashion. What is the corresponding operation in attention - is it applied to the attention heads? How would that impact LLM serving optimizations such as KV cache optimization, FlashAttention, etc.? And how would it differ from grouped-query attention? A very vague hint at this potential is given at line 147. Please discuss further, or just drop the issue of attention and explicitly state that the method is applied to the FFN only. Right now, it's a bit confusing what is or can be done for the attention block.
On the related work section: Overall this is a good survey of similar works. I would suggest comparing to Flextron [1]. This work essentially does the same thing as Matformer as far as I can tell (authors, please let me know what the differences are). It is contemporary since it came out around the same time as the Neurips deadline, so this does not in any way impact my recommendation score for this paper. However, I think it is still good to compare to Flextron, who does elastic inference on Billion-parameter models.
[1] Flextron: Many-in-One Flexible Large Language Model, Cai et al., ICML2024
Questions on the training scheme proposed:
- Does the random sampling strategy affect convergence time? Since at each iteration, we are sampling one of exponentially many sub-networks, does it take more time to train (in terms of iterations or "epochs").
- How does the computational cost of training (measured in FLOPs) compare to training small models from scratch? I'd assume that training a small model from scratch would converge faster, since parameter volume is known to be correlated with training token availability. Also, a smaller model would have a smaller computational cost per iteration. So, all in all, is it really easier to use MatFormer rather than to train small models? I would appreciate a quantitative answer to this question, if possible.
P.S., the example of Llama3 34B, 40B, 80B speaks specifically to what I am referring to (even though it doesn't take into account algorithmic convergence). We need to train for the number of FLOPs needed by Llama3-80B in order to produce a 40B model.
Comments on the experiments:
- In the earlier sections, a lot is made of Llama models, yet all reported results are on million-parameter models. I understand that Meta did not provide a training recipe for the Llamas (nor a training dataset) - but there are open-source implementations of billion-parameter models, such as Megatron-LM. It would be good to see how MatFormer scales beyond 850M parameters; e.g., Flextron does produce elastic inference up to 8B parameters. Currently, while all results are good, I am worried that using such small models may be deriving conclusions from toy examples.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, there are no worries on that, and the authors did attach the Neurips checklist to the back of their paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments and support. We clarify the main concerns below:
1."...unclear that the proposed approach would be better than training small..."
We clarify that MatFormer does not use more data or compute compared to the baselines trained separately and can be scaled like the baselines. Rather than training different sized baseline models from scratch and separately, our approach trains them together and enjoys the benefits of training common parameters jointly.
Consider 4 baselines trained using FLOPs-S/M/L/XL for Baseline-S/M/L/XL, resulting in FLOPs-TOTAL=FLOPs-S+...+FLOPs-XL. These separate baselines collectively use memory Memory-S+Memory-M+Memory-L+Memory-XL. MatFormer, in comparison, has memory=max(Memory-S,...,Memory-XL)=Memory-XL. The FLOPs used by MatFormer do not exceed FLOPs-TOTAL.
Moreover, the baselines offer only the fixed few models they were explicitly trained for, whereas MatFormer provides thousands of accurate models that were never explicitly trained.
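The FLOPs and memory accounting described in this answer can be sketched numerically. The figures below are arbitrary illustrative units for four hypothetical model sizes, not values from the paper:

```python
# Illustrative accounting for 4 separately trained baselines vs. one
# MatFormer model. The numbers are made up for demonstration only.
flops = {"S": 1.0, "M": 2.0, "L": 4.0, "XL": 8.0}    # training FLOPs (arbitrary units)
memory = {"S": 1.0, "M": 2.0, "L": 4.0, "XL": 8.0}   # parameter memory (arbitrary units)

# Separate baselines: pay every model's training FLOPs and keep every
# model's parameters in memory.
baseline_flops = sum(flops.values())
baseline_memory = sum(memory.values())

# MatFormer: same total training FLOPs, but a single nested model whose
# memory is just that of the largest granularity.
matformer_flops = sum(flops.values())     # does not exceed the baseline total
matformer_memory = max(memory.values())   # = Memory-XL

print(baseline_flops, baseline_memory)    # 15.0 15.0
print(matformer_flops, matformer_memory)  # 15.0 8.0
```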
3. "...I disagree...that compression "requires additional training"."
We'd like to clarify what we meant by “additional training”. MatFormer, once trained (using the same FLOPs as the 4 baseline models), provides 1000s of models that can be extracted without any post-hoc training. In contrast, with compression, after training a large model, additional post-hoc processes are required to obtain smaller models for each new latency budget. To generate 1000s of models, each of these processes would need to be repeated individually. We will add this nuance in the final draft.
4. "At line 40, the authors claim that Matformer is applied to both attention and FFN..."
We have preliminary results on applying MatFormer to the attention block in Figure 8 (Appendix F2). Similar to the FFN case, we can get many submodels for free using Mix’n’Match. As you correctly referenced, we apply the MatFormer structure to the number of attention heads n. Specifically, we use the “first” n/8 heads in MatFormer-S, the first n/4 heads (a superset of the n/8 heads) in MatFormer-M, the first n/2 heads in MatFormer-L, and all n heads in MatFormer-XL. This results in the attention KV cache being reduced proportionally (⅛ / ¼ / ½ for MatFormer-S/M/L). Optimization is the same as optimizing a smaller model having n/8, n/4, or n/2 heads respectively. Flash Attention works on individual heads independently, so it will act on whatever heads are being considered for a given model. For grouped query attention, MatFormer could be applied group-wise, prioritizing the first few groups and so on.
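As a rough illustration of the prefix-nested head selection described above (a hypothetical helper, not the authors' code; the head count n=16 is an assumption):

```python
# Sketch of MatFormer-style nested attention-head selection: each granularity
# uses a prefix of the heads, so smaller models are strict subsets of larger ones.
def nested_heads(n_heads, fraction):
    """Return indices of the first `fraction * n_heads` attention heads."""
    k = max(1, int(n_heads * fraction))
    return list(range(k))

n = 16  # total heads in the XL model (hypothetical)
sizes = {"S": 1 / 8, "M": 1 / 4, "L": 1 / 2, "XL": 1.0}
subsets = {name: nested_heads(n, frac) for name, frac in sizes.items()}

# Nesting property: every smaller model's heads are a prefix of the larger one's,
# which is what lets the KV cache shrink proportionally (1/8, 1/4, 1/2).
assert subsets["S"] == subsets["M"][: len(subsets["S"])]
assert subsets["M"] == subsets["L"][: len(subsets["M"])]
print({k: len(v) for k, v in subsets.items()})  # {'S': 2, 'M': 4, 'L': 8, 'XL': 16}
```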
5. "...would suggest comparing to Flextron [1]..."
We thank the reviewer for their feedback, and will include these details in the final draft.
6. "Does the random sampling .. convergence time .. sampling one of exponentially many sub-networks..."
We would like to clarify that we sample only one of the g models per step, not one of the exponentially many possible sub-networks. Convergence is not affected by this. We have also experimented with alternative sampling methods, such as rotating sequentially through Model-S, M, L, and XL; these yielded similar performance to random sampling. The total training time for MatFormer is always less than or equal to the time required to train the individual baseline models separately.
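The per-step sampling described here (one of the g granularities per optimizer step, not one of the exponentially many Mix'n'Match sub-networks) could look like the following toy loop. The step count and uniform sampling are assumptions for illustration:

```python
import random
from collections import Counter

# Sketch of per-step granularity sampling: each training step optimizes
# exactly one of the g = 4 nested submodels, chosen uniformly at random.
random.seed(0)  # seeded for reproducibility
granularities = ["S", "M", "L", "XL"]
steps = 1000

chosen = Counter(random.choice(granularities) for _ in range(steps))

# Every step trains one (and only one) submodel, so total work equals `steps`;
# the nesting adds no extra iterations on top of the training budget.
assert sum(chosen.values()) == steps
print(dict(chosen))
```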
6. "How does the computational cost of training (measured in FLOPs) compares to training..."
Consider the Llama3 model family as an example: 34B, 40B, and 80B models. Assume each model is trained for X, Y, and Z steps respectively with the same batch size.
In contrast, MatFormer trains a single 80B model, thus taking significantly less memory than maintaining three separate 34B, 40B, and 80B models. During training, MatFormer samples and optimizes the 34B submodel for X steps, the 40B submodel for Y steps, and the entire 80B model for Z steps. This approach consumes the same total FLOPs as training the individual 34B, 40B, and 80B models separately. The advantages of MatFormer include:
Enhanced Model Quality: MatFormer submodels, particularly at smaller scales, benefit from extended training of shared parameters, resulting in higher quality compared to baseline models of the same size (Appendix B4, 1st point).
Mix’n’Match: MatFormer provides 1000s of models through Mix’n’Match without requiring additional training. In comparison, baseline models only provide the explicitly trained 34B, 40B, and 80B models. Post-hoc techniques such as pruning or distillation add substantial computational overhead, as discussed previously.
Thus, MatFormer leverages the same amount of FLOPs as training individual models but achieves greater efficiency and flexibility by consolidating the training process, avoiding the need for extensive post-hoc adjustments.
We hope this explanation clarifies the computational cost and efficiency benefits of using MatFormer relative to training smaller models from scratch.
7. "... limited to models with less than a billion parameters..." "...worried that using such small models..."
We share your interest in scaling up MatFormer. Although we show this for 850M parameter models due to resource constraints, using MatFormer becomes critical as we scale to larger sizes, where training models with intermediate sizes becomes increasingly challenging. We use these larger models as motivating examples for elastic inference algorithms that enable users to obtain any sized model during inference without any post-training. We hope this clarifies our intention behind mentioning Llama models.
We will try to include billion-parameter-scale models in the final draft. Additionally, our scaling laws show that MatFormer scales as well as the baselines, which suggests that larger models will behave similarly.
---
We would be very happy to discuss any further questions about the work, and would really appreciate an appropriate increase in score if reviewers’ concerns are adequately addressed.
---
Rebuttal Comment 1.1:
Title: Responses to rebuttal
Comment: Thanks for the rebuttal. I would like to continue the conversation:
1 & 3 & 6. The authors are side-stepping the fact that smaller models converge faster. For instance, a smaller model would consume fewer tokens until its accuracy saturates. As such, training small models is much cheaper than training large models. On the other hand, how is convergence speed impacted by elastic training? Since you're training many-in-one, do you need to consume more tokens until the accuracy of the many-in-one models saturates?
4. OK and thanks for acknowledging that you only have preliminary results for the attention part.
5. Sounds good. Note that Flextron has experiments for Billion-scale models, as opposed to Matformer's results using Million-scale models. Also Flextron uses an auto-router. But again, because this work is contemporary, I do not take into consideration this in my recommendation.
7. Sounds good, thank you for agreeing that the results need to be scaled up. Good luck trying to add those to the final draft.
In conclusion, I appreciate the responses, and I am glad the authors mostly agree with my initial assessment. I maintain that this is a borderline paper leaning on the accept side. But I certainly do not think that the contribution warrants a very strong accept.
---
Reply to Comment 1.1.1:
Title: Response to the follow up
Comment: We thank the reviewer for the prompt reply. We answer their follow up question here:
> For instance a smaller models would consume fewer tokens until its accuracy saturates ... do you need to consume more tokens until the accuracy of the many-in-one models saturate?
We do not require more tokens during elastic training, regardless of whether the baseline models being compared against are small or large; rather, we require _**fewer**_ tokens than the baselines to reach the same quality and saturation. Let’s consider two baseline models, A and B, with model A being smaller than or equal in size to model B. Our MatFormer model, MatFormer-B, is of the same size as the larger model B. Within MatFormer-B, consider a subnetwork, MatFormer-A, which mirrors the size and architecture of Baseline-A.
During training, we optimize MatFormer-A for the same number of tokens and steps as Baseline-A, and MatFormer-B for the same number of tokens and steps as Baseline-B. Consequently, each granularity in MatFormer is trained for the same number of steps and tokens as its corresponding baseline model, and the total compute (FLOPs) and total memory required are also the same as or less than training the two baseline models (Baseline-A and Baseline-B) separately.
Due to parameter sharing within MatFormer, optimizing either granularity (MatFormer-A or MatFormer-B) also partially trains the other granularity for free, without requiring additional FLOPs. This results in MatFormer-A and MatFormer-B achieving performance _**at least**_ as good as that of Baseline-A and Baseline-B (Fig. 2(a)).
In other words, to achieve saturation and quality levels equivalent to Baseline-A and Baseline-B, MatFormer requires fewer tokens and less compute than the baselines, while using the same compute and tokens results in better performance.
Note this is not the main advantage of MatFormer. The primary advantage of MatFormer lies in its ability to provide thousands of intermediate models between MatFormer-A and MatFormer-B, all while using the same total FLOPs, tokens, and compute as the Baseline models, which only provide two fixed-size models.
----
We welcome further questions about the work, and would really appreciate an appropriate increase in score if reviewers’ concerns are adequately addressed | Summary: The paper introduces Matformer, an elastic modeling strategy for inference that provides flexibility for latency and cost requirements. The authors base their method on the recently proposed matryoshka representation learning (MRL) paradigm to introduce nested substructures in the transformer blocks. Specifically, they introduce this in the FFN layers of the transformer and allocate a hyperparameter $g$ to design the number of granular blocks. In the paper, they use $g=4$, where the granularities are mapped as $\{d_{ff}, d_{ff}/2, d_{ff}/4, d_{ff}/8 \}$, where $d_{ff}$ represents the inner dimension of the FFN blocks. The models are trained following the original loss functions used to train baseline models (i.e., models without MRL blocks). The paper showcases results for decoder-only LLMs, encoder-only ViTs, and adaptive retrieval following a simple NN retrieval metric. The decoder-only LLMs also show that the models exhibit scaling laws similar to the baseline models (with slightly shifted constants) and show how a single model can be used efficiently for speculative decoding scenarios, with an optional attention-cache sharing mechanism.
Strengths: 1. The paper presents a simple yet effective method to introduce elasticity into existing transformer architectures without relying on compute-intensive methods such as NAS (though the method is composable with NAS).
2. To simplify the model design, the model relies on the efficient MRL formulation (i.e., using nested sub-structures) over $g$ different layers per FFN block.
3. For inference, instead of relying on NAS, the authors propose a simple Mix-n-Match strategy to combine all the different nested sub-structures. They propose a simple "least slope" increasing method for the granularity of the model for best performance (simplifying the design space for the users).
4. The proposed method applies to different transformer networks (both decoder-only and encoder-only). The authors train decoder-only LLMs and encoder-only ViTs to showcase the method's flexibility.
5. The authors back their paper with comprehensive evaluations, ablation studies, results on scaling trends, and comparisons against relevant baselines (traditional training and other elastic inference methods).
6. For the LLM inference use case, the authors showcase how additional capabilities, such as speculative decoding, are inherent to the model's design.
Weaknesses: 1. Scaling the proposed Matformer models will be difficult. For a given model scale, to keep the models "equivalent" to the original baselines (for downstream accuracy/log perplexity loss), the MatLM models, for example, are trained with 4x more tokens. Training for such long durations is computationally intractable at model scales that are deployed today.
- The scaling trends (from the trained models) also do not look too promising. For example, at the XL scale for MatLM or the B-16 scale for ViTs, the MatLM models seem to perform very similarly to the baseline models. Also, if you look at the loss/accuracy for the MatLM-XL models, you get very similar losses/accuracy to the base model - which "sees" 4x fewer tokens, so overall gains are minimal.
2. While the authors show through ablations that their "least slope" method of Mix-n-Match works best, in practice it seems difficult to select granularities unless some quick experiments are done. For example, at the model depths presented in the paper, it is feasible to work out that M+L blocks can potentially outperform an L baseline, but as depth increases, will relying on only two granularities work, or will there be a need to define a more fine-grained slope (still following the least-slope method)?
3. While the speculative decoding scenario shown is interesting, effective speedups (1.5x to 2.5x) from speculative decoding generally rely on much smaller draft models relative to target models (e.g., in the original paper cited here, there is a ~10x gap in model sizes). Even if the MatLM models are scaled up with the current granularities, the smallest model will only be 50%-60% smaller than the largest model.
- This may not pave the way to more effective speedups unless more granular blocks $g$ are introduced. However, that works against the training FLOPs spent, as you'd effectively need to scale up tokens $\propto$ the number of $g$ blocks.
- Is the intuition here to rely on better consistency and attention-cache sharing capabilities to scale for more effective speedups?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can the authors elaborate more on the attention-cache sharing mechanism? Is the sharing speed because the draft models see a better cache from the largest model's inference pass (because the largest model has to generate 1 final token) during the subsequent generation phase? And does that eventually help improve the consistency of generated tokens? If so, this favors setups where more tokens are speculated and verified. Can the authors comment on how many tokens they speculate per step?
2. In some places, it is a little difficult to understand which results in the graphs the claims correlate to. For example, in Section 4.2.2, the authors state "For example, with a loss of < 0.5% accuracy, MatViT-L/16 can reduce compute cost by 40%", but it is unclear where in the graph this result is evident. There are many other instances like this in the paper. Can the authors either point to such results in the graphs, or be more verbose in the Appendix about how these results are inferred, so that readers can understand?
3. For some of the results where the authors claim better results, the "betterness" claims are very weak. For example, the authors claim 0.35% better accuracy for the L/16 ViT models, but this also seems very close to the seed range for some models of this size. Similarly, for the MatLM results in Section 4.1, they claim a 0.01 log perplexity improvement over the DynaBERT approach. This is, again, a very minor difference. Can the authors expand on this by showing either seed numbers (which might be difficult in rebuttal time - I understand this) or highlighting other papers that discuss this, to help understand the significance of the results?
4. For many of the downstream results in the Appendix, several results in Tables 9 and 10 are very close to random chance accuracy - making it a little difficult to gauge the gains from using the MatLM training recipe. Can the authors comment on this? Are some tasks more representative of the model's performance at these scales, which can help readers understand the significance of the results?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes, the limitations of most results and methods are presented throughout the paper. Nothing extra is needed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments and support. We clarify the main concerns below:
1. "Scaling ... will be difficult...Training .. intractable at model scales that are deployed today."
We clarify that MatFormer does not use more data or compute compared to the baselines trained separately and can be scaled similarly to the baselines. Rather than training different sized baseline models from scratch and separately, our approach trains them together and thereby enjoys the benefits of training the common parameters jointly.
Consider 4 baselines trained using FLOPs-S/M/L/XL for Baseline-S/M/L/XL, resulting in total FLOPs-TOTAL=FLOPs-S+...+FLOPs-XL. These separate baseline models collectively use memory equivalent to Memory-S+Memory-M+Memory-L+Memory-XL.
MatFormer, in comparison, has memory=max(Memory-S,...,Memory-XL)=Memory-XL. The FLOPs used by MatFormer do not exceed FLOPs-TOTAL.
2. "The scaling trends... do not look too promising..."
The aim of MatFormer is to provide 1000s of models along a #param-vs-performance curve that is at least as good as the baseline #param-vs-performance curve, while using less memory and the same FLOPs as training a fixed number of baselines. Note that MatFormer gives 1000s of models on this optimal curve for free, whereas the baseline provides only a fixed number of explicitly trained models (4 in our paper). MatFormer achieves this (Figure 2), where its #param-vs-performance curve matches around the XL scale but outperforms the baseline significantly at smaller scales.
Additionally, it is possible to adjust the sampling probability while maintaining the same total FLOPs, thereby achieving higher performance across models from S to XL (as shown in Table 6). We will add this discussion to the paper to clarify this.
3. "While the authors show through ablations their "least slope" method of Mix-n-Match..."
In initial experiments, we found that this trend was consistent at different model scales, and the least-slope method can be calculated mathematically for a given latency budget. The intuition is that since "uniform" submodels are used during training, i.e., all layers are S/M/L/XL, making the least change to these explicitly trained granularities ought to produce the best results. We discuss this further in Appendix D1, and believe the same heuristic should work when generalizing MatFormer further.
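One reading of the least-slope heuristic is that, for a given parameter budget, per-layer granularities are kept as flat as possible, e.g., using at most two adjacent granularities such as M+L. A toy sketch under that interpretation (my interpretation, not the authors' implementation; layer counts and width levels are made up):

```python
# Toy sketch of a least-slope Mix'n'Match assignment: hit a target average
# FFN width using at most two *adjacent* granularity levels, keeping the
# per-layer profile as close to uniform as possible.
def least_slope_config(n_layers, levels, target_avg):
    """Assign each layer a granularity from `levels` (sorted ascending) so the
    mean is close to `target_avg`, using only two adjacent levels."""
    for lo, hi in zip(levels, levels[1:]):
        if lo <= target_avg <= hi:
            # Solve lo*(n-k) + hi*k = target_avg*n for the number k of `hi` layers.
            k = round((target_avg - lo) / (hi - lo) * n_layers)
            return [lo] * (n_layers - k) + [hi] * k
    return [levels[-1]] * n_layers  # target at or above the largest level

config = least_slope_config(n_layers=8, levels=[1, 2, 4, 8], target_avg=3.0)
print(config)  # [2, 2, 2, 2, 4, 4, 4, 4]  (mean exactly 3.0)
```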
4. "While the speculative decoding...effective speedups unless more granular blocks are introduced."
Thank you for the insightful comment. We believe that the smallest model can be much smaller than 50% of the XL model, especially as we scale to much larger sizes. Consider the Llama-3.1 70B model, with model_dim=8k. We can have the smallest model as 1/16th of the total parameters, resulting in the S model having approximately ((70-2.5)/16+2.5)≈7B parameters. This is ~10x smaller than the 70B XL model.
Yes, having more granularities would allow for a much smaller S model. To avoid higher training costs, the smallest granularity can be quite small while maintaining the same n(granularities) - we could have 1/16, 1/4, 1/2 as granularity ratios for the S, M, L models relative to the XL model. Thank you for highlighting this, we will incorporate this discussion into the paper.
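The parameter estimate above can be checked directly; here the 2.5B figure stands for parameters assumed shared across granularities (e.g., embeddings), following the rebuttal's arithmetic:

```python
# Check of the rebuttal's estimate: a 70B model with ~2.5B non-nested
# (shared) parameters, where the S granularity keeps 1/16 of the nested part.
total_params = 70.0   # billions
shared_params = 2.5   # billions (assumed shared, not sliced by granularity)
nested = total_params - shared_params

s_model = nested / 16 + shared_params
print(round(s_model, 2))  # 6.72 -> "approximately 7B" parameters

# The S model is roughly 10x smaller than the 70B XL model, as claimed.
assert total_params / s_model > 10
```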
5. Is the intuition here to rely on better consistency ... more effective speedups?
Yes, better consistency will contribute to more effective speedups. We presented significantly higher consistency as a key advantage of MatFormer and illustrated two of the many possible use cases for it. For Decoder LMs, speculative decoding benefits from this consistency (Section 4.1.1). For Encoders, it enables adaptive retrieval (Section 4.2.2).
6. "Can the authors elaborate more on the attention-cache sharing..."
You are correct. Since S,XL models share attention blocks, when the XL model verifies the S model’s drafts, it performs a forward pass, resulting in the generation of the XL model’s KV Cache. This cache can overwrite the S model’s cache. This is not possible with baselines because their embedding spaces are different. We verified this empirically and found that baselines failed completely.
We observed higher speedups when sharing the attention-cache (Table 2). For our experiments we speculated 3 draft tokens before verification through the larger model.
7. "...little difficult to understand what the results correlate to in graphs..."
Thank you for pointing this out. In the MatViT instance, we were referring to Fig. 5(b). Here, the Mix’n’Match model having 175M parameters has less than 0.5% accuracy drop compared to the XL model which has 300M parameters, resulting in ~40% speedup. We’ll clarify this and go over the paper to ensure there are no ambiguities.
8. "... seed numbers... the significance of the results?"
For ImageNet-ViT models, the variance between runs is marginal – which we observed across multiple seeds. 0.35% gain in the range of 85% accuracy is statistically significant. We also want to mention that this gain is a bonus and is not the main takeaway for MatFormer. For LMs, 0.01 log perplexity is quite significant. On two different seeds we gave ~15% more FLOPs to DynaBERT than MatFormer, but the 0.01 log perplexity gap was not bridged. We’ll clarify this further in the final draft.
9."... readers understand the significance of the results?"
At small scales, the models may not be sufficient to provide meaningful comparisons for individual evaluation tasks, which can result in noise. To mitigate this, we use 25 different evaluation tasks and average the performance over them to judge the final performance. This provides a more reliable assessment of the model's capabilities, which strongly correlate with loss/perplexity. In future work, we aim to scale to even larger models.
---
We welcome further questions about the work, and would really appreciate an appropriate increase in score if reviewers’ concerns are adequately addressed.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response to the reviews and additional experimental results. After reading all reviews and response, and the overall global response, I will update my score to accept (score: 7).
Please incorporate necessary changes for the unclear sections in the revised draft. | Summary: This paper presents MatFormer, a nested Transformer architecture for elastic inference deployment constraints. It follows the principle of matryoshka representation learning and incorporate nested structure in the FFN modules of Transformers. Experiments show that MatFormer can (1) reliably obtain 582-850M model from a single 850M model, (2) preserve the metric-space structure for adaptive large-scale retrieval for extracted encoder, (3) friendly to speculative decoding.
Strengths: (1) Important problem and a very interesting solution.
(2) The writing is straightforward and easy to understand.
(3) Results are comprehensive and multi-dimensional. In addition to the main claim, the model's ability to be reliably used in speculative decoding shows the generalization ability of the methodology.
Weaknesses: The reviewer does not think there are major problems. Please see questions.
Technical Quality: 4
Clarity: 4
Questions for Authors: (1) Can you add a discussion section with MoE?
(2) Is it possible to obtain a MatFormer model directly from a pretrained checkpoint (e.g. in some form of up-cycling)?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Please see question 2.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their appreciation of our contributions, and answer the questions asked below:
--------
1. Can you add a discussion section with MoE?
**Answer**: We agree that a discussion section on MoE would be appropriate, and will include a discussion section on this in the final draft. Both Matformer and MoE models are conditional computation architectures that can activate certain portions of the model depending on the input. For dynamic workloads, where the compute resources or the input hardness changes for each model query, we can use the universal MatFormer model to dynamically extract the optimal submodel for token-based routing in LLMs, similar to MoE models that focus on inference efficiency (Kudugunta et al., 2021; Li et al., 2022).
- Kudugunta, S., Huang, Y., Bapna, A., Krikun, M., Lepikhin, D., Luong, M. T., & Firat, O. (2021). Beyond distillation: Task-level mixture-of-experts for efficient inference.
- Li, M., Gururangan, S., Dettmers, T., Lewis, M., Althoff, T., Smith, N. A., & Zettlemoyer, L. (2022). Branch-train-merge: Embarrassingly parallel training of expert language models.
---------
2. Is it possible to obtain a MatFormer model directly from a pretrained checkpoint (e.g. in some form of up-cycling)?
**Answer**: It is indeed possible to obtain a MatFormer model directly from a pretrained checkpoint. In Table 7 (Appendix G1), we validate this with MatViT. While training MatViT from scratch results in more accurate submodels, we are still able to obtain deployable submodels by finetuning ViT with the MatViT objective using 2% of the training budget.
---
Rebuttal Comment 1.1:
Comment: Thank you! This is great. I am particularly interested in the idea of up-cycling for potential future work - I will keep my original score of 9. Please consider accepting this paper!
---
Reply to Comment 1.1.1:
Comment: Thank you for the support of the work. We are excited about the future applications of MatFormer as well! | Summary: The authors proposed a novel Transformer architecture called MatFormer to provide elastic inference across diverse deployment constraints. Specifically, the authors incorporate a nested Feed Forward Network (FFN) block structure within a standard Transformer model. During training, the authors optimize the parameters of multiple nested FFN blocks with varying sizes, enabling the extraction of hundreds of accurate smaller models without incurring additional computational costs. Experimental results on different model classes (decoders and encoders) and modalities (language and vision) demonstrate the effectiveness of the proposed method. My detailed comments are as follows.
Strengths: 1. The idea of incorporating a nested sub-structure within the standard Transformer and optimizing all the g granularities to produce a single, universal elastic model is interesting.
2. The paper introduces Mix’n’Match, a simple heuristic with no computation overhead that finds optimal submodels within a given parameter budget, outperforming more complex NAS methods without any training cost.
3. The results show the proposed method generalizes well to both decoder-only language models (MatLM) and vision encoders (MatViT), which has great potential in practice.
4. The paper is easy to read and provides enough experimental details to reproduce.
Weaknesses: 1. The idea of jointly optimizing a nested sub-structure is similar to Slimmable networks [A] and the supernet in Neural Architecture Search. More explanations are required to clarify the differences between them. What are the new challenges of applying the nested sub-structure to transformer architectures?
2. Some important details of dynamic workloads are missing. It would be better for the authors to show more details about how to use the universal MatFormer model to dynamically extract the optimal submodel for each token or query.
3. Although the paper shows promising results in experimental settings, it lacks extensive evaluation in real-world deployment scenarios. It would be better for the author to deploy the quantized models on GPUs or CPUs and report the memory consumption, inference speed as well as accuracy.
[A] Slimmable neural networks. ICLR 2019.
Technical Quality: 3
Clarity: 3
Questions for Authors: NA
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments and support towards the paper. We clarify the main concerns raised by the reviewer:
-----------
1. The idea of jointly optimizing a nested sub-structure is similar to Slimmable networks [A] and the supernet in Neural Architecture Search. More explanations are required to clarify the differences between them. What are the new challenges of applying the nested sub-structure to transformer architectures?
**Answer**: Slimmable Networks optimizes all models simultaneously. This idea is also used in DynaBERT, which applies this training technique to transformer-based architectures rather than to CNNs as in Slimmable Networks. Hence, as a baseline we compare with DynaBERT (which is also more recent) and demonstrate the advantages of the MatFormer training methodology, which optimizes only one model at a time. This results in more gradient updates for the same FLOPs and memory. Furthermore, neither Slimmable Networks nor DynaBERT incorporates a strategy for selecting submodels beyond the few explicitly trained ones. In contrast, MatFormer employs Mix’n’Match as an efficient and near-optimal model selection strategy, offering 1000+ models at no additional cost. We will clarify these differences further in the revised paper draft.
-----------
2. Some important details of dynamic workloads are missing. It would be better for the authors to show more details about how to use the universal MatFormer model to dynamically extract the optimal submodel for each token or query.
**Answer**: In this paper, we primarily focus on pretraining a single model that can result in multiple performant submodels without any additional training. It is possible to use these resulting submodels with many types of algorithms that are geared towards using multiple models to improve latency such as speculative decoding (Leviathan et al., 2023; Kim et al,. 2023), model routing (Ong et al., 2024; FrugalGPT) and cascade algorithms geared towards inference efficiency (Narasimhan et al 2024, Kolawole et al 2024).
There are recent works which build on top of our work and apply routing for dynamic workloads. We skip mentioning them so as not to break the double-blind policy, but will certainly include a discussion in our revised version. We believe that developing new algorithm variants to use MatFormer for dynamic workloads is a promising area for future work, and will add a discussion of this to the final draft.
- Y. Leviathan, M. Kalman, and Y. Matias. Fast inference from transformers via speculative decoding. 2023.
- Kim, Sehoon, Karttikeya Mangalam, Suhong Moon, Jitendra Malik, Michael W. Mahoney, Amir Gholami, and Kurt Keutzer. "Speculative decoding with big little decoder." Advances in Neural Information Processing Systems 36 (2024).
- Ong, I., Almahairi, A., Wu, V., Chiang, W. L., Wu, T., Gonzalez, J. E., ... & Stoica, I. (2024). RouteLLM: Learning to Route LLMs with Preference Data. arXiv preprint arXiv:2406.18665.
- Chen, L., Zaharia, M., & Zou, J. (2023). Frugalgpt: How to use large language models while reducing cost and improving performance. arXiv preprint arXiv:2305.05176.
- Narasimhan, H., Jitkrittum, W., Rawat, A. S., Kim, S., Gupta, N., Menon, A. K., & Kumar, S. (2024). Faster Cascades via Speculative Decoding. arXiv preprint arXiv:2405.19261.
- Kolawole, S., Dennis, D., Talwalkar, A., & Smith, V. (2024). Revisiting Cascaded Ensembles for Efficient Inference. arXiv preprint arXiv:2407.02348.
-----------
3. Although the paper shows promising results in experimental settings, it lacks extensive evaluation in real-world deployment scenarios. It would be better for the author to deploy the quantized models on GPUs or CPUs and report the memory consumption, inference speed as well as accuracy.
**Answer**: We thank the reviewer for their feedback, and will include these details in the final draft.
We welcome further questions about the work, and if our responses address the key issues, we would greatly appreciate an appropriate increase in score.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer 51tp
Comment: Thanks for your detailed responses. After reading all reviews and responses, I will maintain my score to accept. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Referring Human Pose and Mask Estimation In the Wild | Accept (poster) | Summary: This paper introduces a new task named Referring Human Pose and Mask Estimation (R-HPM), which adopts a text/point/scribble prompt to specify a person and estimates that person's pose and segmentation mask. To achieve this goal, the paper proposes a new R-HPM dataset named RefHuman and a new method, UniPHD, to perform R-HPM. Experiments on the proposed dataset and MS COCO demonstrate the effectiveness of the proposed method.
Strengths: 1. The proposed R-HPM task is useful and complementary to existing HPM tasks, which can bring new insights into this area.
2. The proposed dataset RefHuman is large and can support the following research in R-HPM. The provided text/point/scribble annotation is complete and can accurately describe a specific person.
3. Experiments on the proposed dataset and MSCOCO demonstrate the effectiveness of the proposed method in R-HPM task.
Weaknesses: 1. The definition of point and scribble in Sec. 3.1 is not clear. Does the point prompt contain only one point, and does the scribble contain 12 points? How is the point in the point prompt defined? This paper should give a formal definition, not a textual description.
2. Some confusion in the experiments. First, what is the meaning of the dagger in Table 2? If I understand correctly, the dagger denotes adopting all images from MS COCO (~60K) for training, but RefHuman only contains 20K images, so how can UniPHD utilize the remaining 40K images without referring annotations? Second, I think * should be the default evaluation configuration (namely, choose the top-1), so what is the evaluation configuration without *, and why is it adopted as the default?
3. How about adopting multiple types of prompts to perform R-HPM? For example, using both text and point simultaneously to refer to a specific person.
4. The evaluation metric is not suitable for R-HPM. This paper adopts AP to evaluate performance, which is designed to evaluate multiple objects in one image. However, in the referring setting only a single instance is involved, so a single-instance evaluation metric such as PCKh@0.5 is more suitable for R-HPM, or simply evaluating the keypoint error/segmentation IoU would suffice.
5. To establish a comprehensive benchmark, the authors are expected to test some existing methods so that follow-up work can use them as references. None of the compared methods in Table 2 are referring-based; this paper should reimplement some referring-based methods, e.g., referring segmentation methods, and test their performance on RefHuman.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weakness.
Overall, I think the motivation and contribution of this work is pretty good. But there are still some questions that should be clarified in the revised paper.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This paper already discusses limitation in Sec.6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer DJJG for their comments and appreciation of our work. In response to the concerns expressed in Weaknesses and Questions, we provide the following answers:
> Does a point prompt contain only one point, while a scribble contains 12 points? This paper should provide a formal definition for both point and scribble.
Yes, the point prompt contains only one point, while the scribble prompt contains 12 points uniformly sampled from the curve. We will add the following formal definitions as suggested:
**Point prompt**: A single point $\mathbf{p}$=$(x,y)$ at any position in the target area, where $x$ and $y$ are the horizontal and vertical coordinates.
**Scribble prompt**: A scribble can be a continuous, free-form curve represented by an ordered set of $n$ points {$(x_{1},y_{1}), (x_{2},y_{2}), ..., (x_{n},y_{n})$} anywhere in the target area. In this work, we discretize the curve by uniformly sampling 12 points to form the scribble prompt $\mathbf{s}$ = \{$(x_{\left\lfloor kn/12 \right\rfloor},y_{\left\lfloor kn/12 \right\rfloor}) \mid k = 1, 2, \ldots, 12$\}.
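The discretization above can be illustrated with a short sketch (our own 0-indexed reading of the $\left\lfloor kn/12 \right\rfloor$ rule; the function name and index clipping are ours, not from the paper):

```python
def sample_scribble(points, k=12):
    """Uniformly subsample k points from an ordered polyline tracing a scribble.

    `points` is an ordered list of (x, y) tuples; indices follow a
    floor(j*n/k) rule, clipped to the last valid 0-based index.
    """
    n = len(points)
    idx = [min(n - 1, (j * n) // k) for j in range(1, k + 1)]
    return [points[i] for i in idx]
```

For a curve traced by 24 points, this keeps every second point plus the endpoint, always yielding a fixed-length 12-point scribble prompt.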
> Dagger denotes to adopt all images from MSCOCO into training? How UniHPD can utilize the rest 40K MS COCO images without ref annotation?
Yes, dagger means that all images from MS COCO have been used in training, consistent with previous pose estimation works. Besides the images in RefHuman, we generated point and scribble prompt annotations for the remaining MS COCO images to support the training of UniPHD$^{\dagger}$. We will release these annotations as well.
> I think * (namely choose the top-1) should be the default evaluation configuration. What is the evaluation configuration without * and why adopt it as default configuration?
We agree with the reviewer and will adopt $^*$ (top-1 ranked query group) as the default evaluation configuration in the final paper, as it reflects practical deployment where a model outputs a single result per input (L289). As shown in Table A, our method still demonstrates top-tier performance under the suggested PCKh\@0.5 and oIoU metrics.
Our method without $^*$ uses 20 query groups for Pose AP and Mask AP evaluation (oIoU always uses only the top-1), consistent with previous query-based pose estimation methods. For instance, GroupPose and ED-Pose use over 100 queries to enhance recall, even though most MS COCO images contain fewer than three people.
*Table A: Comparison on RefHuman using the top-1 query group* ($^*$).
| | Pose PCKh\@0.5 (Text) | Mask oIoU (Text) | Pose PCKh\@0.5 (Scribble) | Mask oIoU (Scribble) |
|:-|:-:|:-:| :-:|:-:|
| Uni-ED-Pose | 78.3 | 74.5 | 89.6 | 84.9 |
| Uni-GroupPose | 78.0 | 74.7 | 89.3 | 85.6 |
| Ours | **79.2** | **75.3** | **90.3** | **86.0** |
> How about adopting multiple types of prompts to perform R-HPM? For example, using both text and point simultaneously to refer a specific person.
Interesting idea! We sequentially cross-attend visual features with point and text prompts to enhance multimodal representations for R-HPM, achieving significantly improved results due to the complementary information from both prompts, as shown in Table B. Future research could use our introduced RefHuman dataset to develop more advanced models.
*Table B: Ablation of using both text and point prompts.*
| Prompts | Pose PCKh\@0.5 | Mask oIoU |
|:-|:-:|:-:|
| Text | 79.2 | 75.3 |
| Point | 88.7 | 82.5 |
| Text+Point | **91.4** | **86.6** |
> PCKh\@0.5 and IoU are more suitable as R-HPM metrics.
Thank you for this insightful advice. We have reported the overall IoU for segmentation evaluation, consistent with the referring segmentation task, and will adopt PCKh\@0.5 as the primary pose estimation metric in the revised paper. Please kindly refer to Table A in the previous response. Our model still demonstrates top-tier performance under PCKh\@0.5 and oIoU metrics.
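For reference, PCKh\@0.5 counts a keypoint as correct when it lies within half the head-segment length of the ground truth. A minimal sketch of the metric (our own illustration of the common MPII-style convention; the function name and signature are hypothetical):

```python
import math

def pckh(pred, gt, head_size, alpha=0.5):
    """Fraction of predicted keypoints within alpha * head_size of ground truth.

    pred, gt: lists of (x, y) keypoints for a single instance;
    head_size: head-segment length in pixels (per MPII-style conventions).
    """
    thr = alpha * head_size
    correct = sum(
        math.hypot(px - gx, py - gy) <= thr
        for (px, py), (gx, gy) in zip(pred, gt)
    )
    return correct / len(gt)
```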
> Re-implement some referring segmentation methods to test their performance on RefHuman.
Thanks for the suggestion. We trained popular open-sourced referring segmentation models on our dataset, with results in Table C. Using only text prompt to train our model, we achieve top-tier performance compared to competitors like SgMg and GRES. These results will be included in the revised paper.
*Table C: Comparison to language-conditioned segmentation models.*
| Method | Backbone | oIoU |
|:-|:-:|:-:|
| LAVT [1] | Swin-T | 74.5 |
| CGFormer [2] | Swin-T | 75.3 |
| GRES [3] | Swin-T | 75.9 |
| SgMg [4] | Swin-T | 75.9 |
| Ours | Swin-T | **76.3** |
[1] LAVT: Language-Aware Vision Transformer for Referring Image Segmentation, CVPR, 2022.
[2] Contrastive Grouping with Transformer for Referring Image Segmentation, CVPR, 2023.
[3] GRES: Generalized Referring Expression Segmentation, CVPR, 2023.
[4] Spectrum-guided Multi-granularity Referring Video Object Segmentation, ICCV, 2023.
---
Rebuttal 2:
Comment: Dear Reviewer DJJG,
Thank you for your diligent review of our submission. We have carefully addressed each of your concerns and provided our responses. We would greatly appreciate any additional comments, as your feedback is crucial in strengthening our work.
Your time and consideration are invaluable to us. | Summary: In this paper the authors tackle the problem of in-the-wild human pose estimation in a “referring” setting, where the goal is to determine the pose of a person referred to using either a text prompt or a positional prompt. To achieve this, the authors annotate the MS COCO dataset with over 50K instances labeled with 2D keypoints, masks, and prompts (as text, points, or scribbles). They use this dataset to train a model called UniPHD, which consists of several submodules. The results show that the authors are able to train a strong baseline model that will be useful for future research.
Strengths: S1. This paper introduces a new task of referring human pose estimation by releasing a large dataset of 50K annotations (as an extension to MS COCO) enabling researchers to train models that can interact with the model using text and points/scribble.
S2. The paper also releases a baseline method for the same task with an aim to learn end-to-end R-HPM.
Weaknesses: W1. The motivation for obtaining pose in a “referring” manner is unclear. For instance, it might be more valuable to focus on text-based human detection rather than pose estimation, because the former can open up many research avenues such as human tracking.
W2. The multimodal encoder is shallow, potentially limiting the interaction of the different encoded features. It might be helpful to increase the capacity of the multimodal encoder model.
W3. Parts of the paper are not fully clear. I understand that the paper aims to be “single-stage”, but in L205-L214 there seem to be multiple stages where candidates are detected and then used for subsequent pose estimation. (Which makes it two-stage?)
Minor comments
* Table 1 does not require the last column
* Table 3 typo “Scibble" —> “Scribble”
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1. How does Eq. 4 fit into mask prediction task? Is that applied only to the pose prediction branch?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The authors have briefly acknowledged limitations of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer apkP for their comments and appreciation of our work. In response to the concerns expressed in Weaknesses and Questions, we provide the following answers:
> The motivation for obtaining pose in a "referring" manner is unclear. It might be more valuable to focus on text-based human detection, which can open up many avenues.
Our task predicts both **mask** and pose of the referred person simultaneously using common prompts to provide detailed, identity-aware human representations that benefit human-AI interaction.
This has the following advantages over text-based human detection:
(1) Our segmentation provides finer-grained, less noisy information than detection, crucial for robotics and precision tasks.
(2) Our pose estimation adds essential semantics for behavior understanding, complementing mask/box information.
(3) Support for diverse prompts (text, point, scribble) broadens the scope of human-AI interaction.
(4) Our proposed dataset supports various settings, including text-based human detection mentioned by the reviewer.
Reviewer DJJG also thinks the motivation of this work is pretty good.
> It might be helpful to increase its capacity.
Thanks for this valuable suggestion. We enhanced the capacity of the multimodal encoder by adding three layers or increasing feature dimensions from 256 to 384. As shown in Table A, both strategies slightly improved overall performance, demonstrating that our current settings are highly effective. We will incorporate the results in the revised paper.
*Table A: Increasing the capacity of the multimodal encoder.*
| | Pose AP (Text) | Mask AP (Text) | Pose AP (Scribble) | Mask AP (Scribble) |
|:-|:-:|:-:| :-:|:-:|
| Baseline | **66.6** | 62.1 | 74.6 | 70.0 |
| w/ more layers | 66.3 | 62.0 | **74.9** | **70.4** |
| w/ higher dimensions | **66.6** | **62.8** |74.5 | 70.2 |
> The method seems to be two-stage, according to L205-214.
To address the reviewer's concern, we will remove the term "one-stage" since it does not affect our contribution. Previous papers like ED-Pose described "two-stage" methods that first perform detection, then pose estimation on cropped single-human images, or use heuristic grouping to process numerous detected instance-agnostic keypoints.
Our method uses linear layers to identify a high-scoring point $\mathbf{c}$ on multimodal features for initializing keypoint queries (L205-214), which directly regress target keypoint positions on the *entire* image.
Our query initialization is similar to the human query selection in ED-Pose, which claims to be one stage.
Hence, we call our method one stage because it is end-to-end and avoids cropping, pose estimation on cropped single-human images, or heuristic grouping.
Moreover, the initialization on L205-214 does not even play a significant role in our method, evidenced by an ablation study using only learnable keypoint queries. As shown in Table B, this results in only a minor performance decrease, demonstrating the effectiveness of our method. We will include this ablation study in the paper.
*Table B: Ablation of query initialization.*
| | Pose AP (Text) | Mask AP (Text) | Pose AP (Scribble) | Mask AP (Scribble) |
|:-|:-:|:-:| :-:|:-:|
| w/o initialization | 65.5 | 61.0 | 74.0 | 69.6 |
| Ours | **66.6** | **62.1** | **74.6** | **70.0** |
> Table 1 does not require last column and typo “Scibble" —> “Scribble”.
Thank you for pointing this out. We will remove the last column from Table 1, correct the pointed typo and carefully check the complete paper.
> How does Eq. 4 fit into mask prediction task? Is that applied only to the pose prediction branch?
Our graph attention, including edge construction (Eq. 4), is applied concurrently to both mask and pose prediction branches. Our query set $\mathbf{Q}^{IP} \in{\mathbb{R} ^{(k+1) \times D}}$ comprises one instance (prompt) query and $k$=17 keypoint queries. The graph attention treats all queries in $\mathbf{Q}^{IP}$ as nodes and models keypoint-to-keypoint, keypoint-to-instance, and instance-to-keypoint relations, enhancing all queries simultaneously. Ultimately, the instance query generates dynamic filters for mask prediction, while the keypoint queries estimate keypoint positions.
---
Rebuttal 2:
Comment: Dear Reviewer apkP,
Thank you for your diligent review of our submission. We have carefully addressed each of your concerns and provided our responses. We would greatly appreciate any additional comments, as your feedback is crucial in strengthening our work.
Your time and consideration are invaluable to us.
---
Rebuttal Comment 2.1:
Comment: The rebuttal has addressed some of the concerns. Hence, I increase the score to a weak accept. I continue to think that there is room to improve the presentation and soundness of the paper, but the contributions are valuable.
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer apkP,
We greatly appreciate your recognition of our work's contributions and the upgraded score. Your valuable comments have been crucial in refining our paper. As suggested, we will further improve the presentation and soundness in the final version. | Summary: This paper proposes a new task called Referring Human Pose and Mask Estimation and introduces the corresponding RefHuman dataset, which is beneficial for research on human behavior comprehension. Additionally, the authors present a model that leverages three types of prompts for this task. The proposed UniPHD model achieves promising performance in this area.
Strengths: 1. The proposed R-HPM task and the RefHuman dataset are beneficial to related research.
2. The proposed method shows promising performance on both the new task and traditional human pose estimation tasks.
Weaknesses: 1. The description of the method, especially in Section 4.3, is confusing. For example, what is the relationship between F^(vl) and P’? How do you enhance the template based on P’? And why do “the keypoint queries struggle to perceive the prompt information and lacks interactions with each other, challenging the target-awareness and instance coherence”? It seems that these keypoint queries can interact with each other in existing decoders. More detailed descriptions and explanations are needed to understand this work.
2. The author proposed that existing research overlooks joint human pose and mask estimation, which provides comprehensive human representations. However, the paper lacks comparisons of the proposed model with other variants, such as UniPHD without the mask head and UniPHD without the pose head. Such comparisons could help clarify the relationship and benefit of the pose estimation and mask estimation tasks.
3. The proposed method needs further validation. Providing an ablation study of the proposed query initialization method and comparing model parameters and computational complexity would be helpful.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer TbW6 for their comments and appreciation of our work. In response to the concerns expressed in Weaknesses and Questions, we provide the following answers:
> What is the relationship between $\mathbf{F}^{vl}$ and $\mathbf{P}^{'}$, and how do you enhance the template based on $\mathbf{P}^{'}$?
Thank you for the comments. We extract the multimodal embedding $\mathbf{F}^{vl}_{\mathbf{c}}$ at the highest-scoring position $\mathbf{c}$ (evaluated using linear layers) to estimate a set of $k$=17 keypoint positions $\mathbf{P}^{'}$ for better query initialization. We enhance the keypoint query template $\mathbf{Q}^{P} \in{\mathbb{R} ^{k \times D}}$ by adding semantically rich multimodal embeddings $\mathbf{F}^{vl} _{\mathbf{P}^{'}}$ extracted at positions $\mathbf{P}^{'}$, which also serve as reference points for deformable attention in the decoder. We will explain this in more detail in the revised paper.
> Why do "the keypoint queries struggle to ....."? It seems that these keypoint queries can interact with each other in existing decoders.
Perhaps there is some misunderstanding which can be removed by rephrasing the sentence as "However, *after local detail aggregation,* the keypoint queries struggle to perceive .......".
We first update each query embedding separately by capturing local details, which lacks query interactions. Therefore, we mention this in L222 to introduce our global dependency modeling, which captures keypoint-to-keypoint, keypoint-to-instance, and instance-to-keypoint relations to enhance target awareness and instance coherence. We did not claim that other works' decoders cannot perform keypoint query interactions.
> Comparisons of UniPHD with its variants, such as UniPHD without the mask head and UniPHD without the pose head.
Thanks for the valuable suggestion. We performed the comparisons in Table A, which reveal that removing either head adversely affects the performance. This confirms the effectiveness of our synergistic decoder, which facilitates keypoint-instance interactions, enabling bidirectional information flow and enhancing both predictions. We will add this ablation study to the paper.
*Table A: Ablation of prediction heads.*
| | Pose AP (Text) | Mask AP (Text) | Pose AP (Scribble) | Mask AP (Scribble) |
|:-|:-:|:-:| :-:|:-:|
| w/o pose head | - | 61.8 | - | 68.7 |
| w/o mask head | 63.3 | - | 72.7 | - |
| Ours | **66.6** | **62.1** | **74.6** | **70.0** |
> Provide an ablation study of the proposed query initialization.
Table B shows the ablation study on query initialization by removing the query enhancement discussed in our response to the first comment. Results show a minor performance decrease due to the absence of dynamic spatial priors.
*Table B: Ablation of query initialization.*
| | Pose AP (Text) | Mask AP (Text) | Pose AP (Scribble) | Mask AP (Scribble) |
|:-|:-:|:-:| :-:|:-:|
| w/o enhancement | 65.5 | 61.0 | 74.0 | 69.6 |
| Ours | **66.6** | **62.1** | **74.6** | **70.0** |
> Compare model parameters and computational complexity.
Thank you for this suggestion. Table C compares the model parameters and FPS (measured on a single RTX4090 GPU). Our method demonstrates better performance at comparable FPS while using slightly more parameters. We will include this comparison in the revised paper.
*Table C: Comparison with competitors using the scribble prompt for inference.*
| | Pose AP | Mask AP | FPS | Params |
|:-|:-:|:-:| :-:|:-:|
| Uni-ED-Pose | 72.9 | 69.2 | 50 | **175.7M** |
| Uni-GroupPose | 73.0 | 69.0 | **52** | 177.7M |
| Ours | **74.6** | **70.0** | 50 | 184.0M |
---
Rebuttal 2:
Comment: Dear Reviewer TbW6,
Thank you for your diligent review of our submission. We have carefully addressed each of your concerns and provided our responses. We would greatly appreciate any additional comments, as your feedback is crucial in strengthening our work.
Your time and consideration are invaluable to us.
---
Rebuttal Comment 2.1:
Title: Post-rebuttal
Comment: I appreciate the authors' response. The additional explanations and ablation studies improve the quality of the paper and effectively demonstrate the design's effectiveness. As a result, I have raised the score.
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer TbW6,
Thank you for your positive feedback! We are glad that our rebuttal has addressed your concerns. Your constructive comments have been very helpful in refining our work, and we will incorporate these additional results in the final paper. | null | null | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewers for their thoughtful and constructive feedback.
We are glad that **all** the reviewers recognize the importance of our newly proposed task of Referring Human Pose and Mask Estimation and appreciate the significance of our new dataset, RefHuman. The reviewers also agree that the proposed UniPHD method achieves top-tier performance on the RefHuman and MS COCO datasets.
We have conducted all additional ablations and experiments suggested by the reviewers. We will revise our manuscript according to all the comments and remain committed to continuous improvement. We eagerly await your final decision. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effective for LMMs | Accept (poster) | Summary: The authors present a new architecture, StackFormer: an effective and simple way to infuse fine-grained visual tokens from a CLIP vision transformer into the early layers of the LLaVA-1.5 and LLaVA-Next language models without increasing the sequence length of visual tokens for the LLMs. It requires no architecture change while significantly increasing the number of tokens the LLMs can take, so it improves accuracy, especially for high-resolution images and videos.
Strengths: The authors propose an effective and simple way to increase the resolution of the visual part of the VLM and thereby its accuracy: splitting the image into patches, applying ViT-CLIP to them separately, mosaicking the resulting feature maps into a single high-resolution feature map as the whole-image feature, and using residual connections to embed this feature map into the LLM.
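The residual-connection embedding described above could look roughly like the following toy sketch (our own scalar-feature illustration with hypothetical names; the actual mapping of token sets to layers follows the paper's figures):

```python
def deepstack_forward(layers, base_tokens, extra_token_sets):
    """Run transformer layers, residually adding one extra token set per early layer.

    base_tokens and each set in extra_token_sets: lists of equal-length feature
    vectors (here plain Python lists of floats). The sequence length never grows;
    high-resolution tokens are added element-wise into the hidden states instead.
    """
    h = [list(t) for t in base_tokens]
    for i, layer in enumerate(layers):
        if i < len(extra_token_sets):  # infuse high-res tokens into early layers only
            h = [[a + b for a, b in zip(tok, extra)]
                 for tok, extra in zip(h, extra_token_sets[i])]
        h = [layer(tok) for tok in h]
    return h
```

With stand-in "layers" that just scale features, one can check that the first layer sees base tokens plus the first extra set while later layers receive no additions.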
StackFormer outperforms its baseline model LLaVA on VQAv2, GQA, and POPE as well as on text-oriented and zero-shot video QA benchmarks.
StackFormer achieves its best performance when the backbone is fine-tuned, while fine-tuning the backbone without StackFormer yields only limited improvement.
Weaknesses: In Fig. 1 and 2 are missing details about the implementation of StackFormer:
- how exactly you split the high-resolution image into patches
- and how exactly you split the high-resolution visual tokens into different token sets with spatial dilation
While you write that StackFormer achieves the best trade-off between performance and effectiveness without introducing extra visual tokens, specific indicators of the overhead of using StackFormer are not provided. FLOPs, parameters, latency, memory consumption, and accuracy are not presented in a single table comparing StackFormer with other VLMs.
Technical Quality: 2
Clarity: 2
Questions for Authors: Can you show in more detail in the Figures 1 and 2:
- how exactly you split the high-resolution image into patches?
- and how exactly you split the high-resolution visual tokens into different token sets with spatial dilation?
Can you provide numerical indicators (flops, params, latency, memory consumption) that Stackformer achieves the best trade-off between performance and effectiveness compared to other VLMs?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: Limitation and Future Work:
The paper presents limited options for processing high-resolution images, naively splitting the image into many patches and applying ViT-CLIP to them separately, while there are many approaches to processing high-resolution images with transformers: ViTDet [1], SwinV2 [2], PatchFusion [3], ..., or simply resizing the ViT-CLIP model to the required image resolution using 2D interpolation of the pre-trained position embeddings [4].
1. Exploring Plain Vision Transformer Backbones for Object Detection, Yanghao Li, Hanzi Mao, Ross Girshick, Kaiming He, 2022
2. Swin Transformer V2: Scaling Up Capacity and Resolution, Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo, 2021
3. PatchFusion: An End-to-End Tile-Based Framework for High-Resolution Monocular Metric Depth Estimation, Zhenyu Li, Shariq Farooq Bhat, Peter Wonka, 2023
4. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby, 2020
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We provide a detailed architecture figure in our **rebuttal pdf**. We recommend referring to Figure. 1 and Figure. 2 in our **rebuttal pdf** for a better understanding of the high-resolution token processing.
## q1-2: How to split high-resolution images into patch crops; and how to split high-resolution visual tokens into different sets.
Please refer to **global author rebuttal**, and Fig.1 and Fig.2 in our **rebuttal pdf**.
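For readers without the rebuttal pdf, the dilation-based split could look roughly like the following (our own sketch of a stride-2 interleaved sampling; the exact scheme is in the figures):

```python
def split_tokens_by_dilation(grid, stride=2):
    """Split a 2D token grid into stride*stride interleaved token sets.

    grid: nested list grid[r][c] of tokens from the high-resolution feature map.
    Each returned set preserves the base-resolution layout, so every set has the
    same shape and can be routed to a different layer without lengthening the
    token sequence.
    """
    sets = []
    for dr in range(stride):
        for dc in range(stride):
            sets.append([row[dc::stride] for row in grid[dr::stride]])
    return sets
```

With a 4x4 grid and stride 2, this yields four 2x2 sets, one per spatial offset (0,0), (0,1), (1,0), (1,1).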
## q3: Comparison of flops, params, latency
Thanks for the great suggestion! We compare our StackFormer with LLaVA-1.5 and another representative work, VILA [1]. As shown in the table, our model improves on the baseline with only a ~3% increase in FLOPs and without significantly increasing the iteration time during training. Additionally, our StackFormer does not require the extra data or intermediate-stage training used in VILA while achieving comparable performance, making it highly training-efficient. We will include the analysis and comparison in our revision.
**Comparison on 7B models**
|Method | Training cost | Training Data (PT+SFT) | iter time |TFLOPs | Params |SEED | TextVQA | DocVQA | ChartQA | InfoVQA |
| ----| ---- | ---- |---- |---- |---- | ---- |---- |---- |---- |---- |
| LLaVA-1.5 | ~0.1k GPU hours | 558K+665K | 19.0 | 26.7 | 7.06B | 58.6 | 58.2* | 28.1 | 18.2 | 25.8 |
| VILA-1.5 $\star$ | ~5k GPU hours | 50M+1M | 17.2 | 26.7 | 7.06B | 61.1 | 64.4* | 44.5* | 25.7 | 32.5 |
| LLaVA-1.5+StackFormer | ~0.1k GPU hours | 558K+665K | 17.9 | 27.5 | 7.06B | 60.6 | 62.4* | 39.1 | 19.2 | 29.8 |
| LLaVA-1.5+StackFormer $\star$ | ~0.1k GPU hours | 558K+665K | 18.3 | 27.5 | 7.06B | 63.3 | 64.5* | 39.3 | 21.0 | 30.1 |
*For iter time, we average the time cost of 100 training iterations under the same 8xA100 machine*.
*$\star$ indicates that the vision encoder is fine-tuned during the SFT stage*.
*\* indicates that the images of the benchmark training set are observed during SFT stage*.
[1] VILA: On Pre-training for Visual Language Models
## q4: Other approaches to processing high-resolution images
Thanks for sharing different works in the related domain. In this work, our primary goal is to efficiently and effectively process image tokens extracted from high-resolution images for **large multimodal models (LMMs)**, rather than the techniques for extracting high-resolution features themselves. Therefore, we employ a simple and widely used approach, *i.e.*, adaptive multi-crop, for high-resolution feature extraction to ensure a fair comparison with LLaVA and other works.
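As a rough sketch of the multi-crop idea (assuming, for illustration, that the image has already been resized to multiples of the crop size; the adaptive resizing in LLaVA-style pipelines is more involved):

```python
def multi_crop_corners(image_hw, crop=336):
    """Enumerate top-left corners of non-overlapping crops tiling an image.

    image_hw: (height, width) of the resized image; each crop is then encoded
    separately by the vision encoder (e.g., a 336px ViT-CLIP).
    """
    h, w = image_hw
    return [(r, c) for r in range(0, h, crop) for c in range(0, w, crop)]
```

A 672x672 image, for example, yields four 336px crops, each encoded independently before the resulting tokens are reassembled.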
To further comprehend our findings, we follow the suggestion and conduct experiments using other approaches to process high-resolution images, as follows.
1. [*Our default setting*] Multi-crop sub-images + StackFormer-LLM (freeze ViT).
2. Multi-crop sub-images + StackFormer-LLM (unfreeze ViT): We unfreeze the parameters of ViT during the SFT stage, building on the approach in #1.
3. Multi-crop sub-images + StackFormer-ViT (unfreeze ViT): We use the first 20 layers of ViT to extract multi-crop high-resolution features and the last 4 layers to stack these features. The stacked features are then fed into a projection module as visual token inputs for LMMs.
4. whole image, 2D interpolation on ViT + StackFormer-LLM (unfreeze ViT): We directly interpolate the positional embedding of ViT to process high-resolution images. StackFormer is then used to stack the extracted features. ViT needs to be unfrozen because the input resolution differs from the pre-training vision encoder.
5. whole image, ViT-det + StackFormer-LLM (unfreeze ViT): We apply techniques from ViTDet, incorporating global and local attention in transformer blocks and discarding the CLS token. StackFormer is used for token stacking in the LLM.
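The 2D positional-embedding interpolation in setting 4 can be illustrated with a scalar toy version (real ViT embeddings are (H·W)·D tensors resized per channel, typically via a library interpolation routine; this only shows the align-corners index arithmetic):

```python
def interpolate_pos_embed(grid, new_h, new_w):
    """Bilinearly resize a 2D grid of scalar positional values (align-corners)."""
    old_h, old_w = len(grid), len(grid[0])
    out = []
    for i in range(new_h):
        # map target row back into the source grid
        y = i * (old_h - 1) / max(new_h - 1, 1)
        y0 = int(y); y1 = min(y0 + 1, old_h - 1); fy = y - y0
        row = []
        for j in range(new_w):
            x = j * (old_w - 1) / max(new_w - 1, 1)
            x0 = int(x); x1 = min(x0 + 1, old_w - 1); fx = x - x0
            top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
            bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```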
We conduct ablation experiments using Phi-3 (3B) as our language model. As shown in the table below, our StackFormer on the LLM/ViT (#1, #2, #3) achieves significant performance gains compared to the baseline model. However, other methods, such as 2D positional interpolation and ViT-Det, result in performance drops due to inconsistencies between image pre-training and the LMM SFT stage. These techniques, commonly used in detection, often require longer training schedules (*e.g.*, 100 epochs on COCO for ViTDet), which may be unsuitable for the one-epoch SFT pipeline used in LMMs.
#### Ablations on processing high-resolution images with Phi-3 (3B LLM)
| # | Method | AVG | SEED | TextVQA | DocVQA | ChartQA | InfoVQA |
|----|----|----|----|----|----|----|----|
| 0 | Baseline | 38.1 | 62.6 | 55.7 | 28.1 | 15.8 | 28.3 |
| 1 | Multi-crop, StackFormer-LLM | 40.9 | 62.9 | 58.4 | 37.8 | 16.6 | 28.9 |
| 2 | Multi-crop, StackFormer-LLM (unfreeze ViT) | 42.5 | 63.9 | 60.3 | 39.0 | 19.1 | 30.1 |
| 3 | Multi-crop, StackFormer-ViT | 42.0 | 64.0 | 60.1 | 38.4 | 17.1 | 30.6 |
| 4 | Whole image, 2D pos-interpolation, StackFormer-LLM | 36.1 | 62.7 | 53.4 | 25.1 | 14.8 | 24.5 |
| 5 | Whole image, ViT-Det style, StackFormer-LLM | 34.1 | 60.3 | 48.6 | 22.6 | 14.2 | 25.0 |
Furthermore, we conduct additional experiments to utilize StackFormer on LLM/ViT (#1, #2, #3) with 7/13B LLM models, which further demonstrate the effectiveness of our StackFormer. Please refer to part 3 (Additional Results on vicuna-1.5) in **global author rebuttal**.
We believe that other techniques for extracting high-resolution features with vision transformers, such as patch-fusion [1] and swin-attention [2], may have the potential to obtain higher performance than the commonly-used naive multi-crop approach. We leave these explorations in the future work.
[1] PatchFusion: An End-to-End Tile-Based Framework for High-Resolution Monocular Metric Depth Estimation.
[2] Swin Transformer V2: Scaling Up Capacity and Resolution.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the answers, detailed schemes and additional results. This makes the approach clearer. Based on this, I increase the rating of the paper to "6: Weak Accept" | Summary: This paper proposes a new visual token organization method. Specifically, it proposes to stack visual tokens instead of the commonly used stringing. Experiments show that the proposed StackFormer can improve performance on TextVQA, DocVQA, and InfoVQA.
Strengths: - The proposed method is novel, different from the commonly-used stringing method.
- Experiments show that the proposed method StackFormer can significantly improve performance on some datasets, especially the traditional VQA datasets.
Weaknesses: - There are many typos in the paper. The authors need to improve their writing and polish the paper. Like Line 18 "StackFormeruses" --> "StackFormer uses"; Line 134: Multi-modal Language Models (LLMs). Sometimes it uses LMMs and sometimes MLLMs. Both are ok, but please use only one in the same paper.
- There is no significant improvement for LLaVA-Next on MLLM benchmarks.
Technical Quality: 4
Clarity: 2
Questions for Authors: - In table 1, what do † and * mean?
- Could you explain why StackFormer cannot improve LLaVA-NeXt on MLLM benchmarks?
Confidence: 5
Soundness: 4
Presentation: 2
Contribution: 4
Limitations: the authors have discussed the limitations in the paper. StackFormer cannot significantly improve larger model LLaVA-NeXt on MLLM benchmarks. It may be another limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## q1: Typo
Thank you for pointing it out, we have polished the representation and will update it in the next version.
## q2: Improvement for LLaVA-Next on MLLM benchmarks
Thank you for your valuable comments! We suspect that two main factors contribute to the results: (1) the image resolution, and (2) the quality of the SFT dataset (please refer to L227-L228 in our main paper).
LLaVA-Next scales up the effective resolution from $336 \times 336$ to {$672\times 672$, $336\times 1344$, $1344 \times 336$} by using dynamic multi-image crops and stringing visual tokens [1]. This approach allows the model to handle most resolutions in MLLM benchmarks as mentioned. Quantitatively, we analyzed the average width and height of images from each benchmark. As shown in the table below, the average resolution of multi-modal benchmarks is significantly smaller than that of text-oriented benchmarks. Consequently, the multi-image crops already suffice to cover most of those benchmarks in LLaVA-Next. Further upsampling the image to a higher resolution using bilinear interpolation does not introduce additional information. As a result, the proposed layer stacking of visual tokens extracted from upsampled images does not necessarily show gains on them as significant as the originally high-resolution text-oriented benchmarks.
| Benchmark | Average resolution (height $\times$ width) |
| ---- | ---- |
| TextVQA | 819.4 $\times$ 952.3 |
| DocVQA | 2098.6 $\times$ 1782.8 |
| InfoVQA | 3002.4 $\times$ 1161.5 |
| SEED | 899.2 $\times$ 1090.0 |
| POPE | 478.8 $\times$ 584.7 |
| MMMU | 488.2 $\times$ 723.0 |
| MMVet | 797.4 $\times$ 1059.5 |
As discussed in our main submission, the SFT-768K data used in LLaVA-Next includes a portion of private data collected from the LLaVA demo. This part of the data is used to generate responses with GPT-4V to obtain high-quality instruction-following data. In contrast, our SFT-748K dataset lacks this component, leading to limitations on the GPT4-evaluated benchmark, *i.e.* MM-Vet.
Furthermore, we observe that MLLM benchmarks emphasize evaluating reasoning, hallucination, *etc.*, while text-oriented benchmarks prioritize fine-grained perception and understanding (please refer to Fig. 3 in **our rebuttal PDF**). Additionally, previous works [2] indicate that MLLM benchmarks, such as MMMU, benefit more from LLM capabilities than from visual representations. We suggest these differences may explain why StackFormer on LLaVA-Next does not obtain gains on MLLM benchmarks as significant as those on text-oriented benchmarks.
## q3: In table 1, what do † and * mean?
In Table 1, † indicates that the model is continuously fine-tuned based on the LLaVA-Next checkpoint. Since the training codes and the 765K instruction tuning data used in LLaVA-Next are not publicly available, we mix a 748K SFT dataset based on information from the LLaVA-Next blog. However, we do not have access to the private data used in LLaVA-Next or the exact combination ratios among the datasets. Consequently, we fine-tuned our model from the LLaVA-Next checkpoint using our 748K dataset to ensure a fair comparison.
Additionally, * denotes that the training images from the downstream benchmark are observed during the SFT stage. Besides, we mark MMMU* to indicate that we are reporting the validation results for MMMU. We will clarify these details in the next version of our paper.
[1] LLaVA-NeXT: Improved reasoning, OCR, and world knowledge
[2] LLaVA-NeXT: What Else Influences Visual Instruction Tuning Beyond Data?
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. I would like to keep my score of 7: Accept. | Summary: This paper proposes a method to add more visual information to a MM-LLM without increasing the number of tokens processed by the model. The idea is simple, just add visual tokens to the existing hidden representation between each layer of the transformer. The approach is evaluated on many tasks and shows good results.
Strengths: The paper is well written, the approach would be reproducible from the given descriptions. The idea is novel and simple and effective. The experiments are thorough.
Weaknesses: Overall the paper is well done. The experiments are thorough, the idea is well explained, and the method is reproducible.
Technical Quality: 3
Clarity: 4
Questions for Authors: None
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: na
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments! We are pleased that you find our work novel and simple! If you have any additional questions, please feel free to add detailed comments; we are happy to answer them and sincerely hope to address your concerns if any. | null | null | Rebuttal 1:
Rebuttal: First of all, we sincerely appreciate all your valuable comments and suggestions.
In this work, we proposed a new model called StackFormer to handle the long sequence of visual tokens in large multimodal models (LMMs). Unlike previous works that string visual tokens into a long sequence, we instead stack them layer by layer and feed them into the large language models using a simple residual connection. As demonstrated by our extensive empirical studies, the proposed method significantly improves the LMMs' ability to handle high-resolution images while keeping the context length unchanged for LLMs.
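A minimal sketch of this layer-by-layer stacking idea as we read the description above (the number of token sets, token count, hidden size, and exact injection points are all assumptions for illustration, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
num_sets, T, C = 4, 576, 1024        # token sets, tokens per set, hidden size (assumed)
token_sets = [rng.standard_normal((T, C)) for _ in range(num_sets)]

def llm_block(h):
    """Stand-in for one transformer layer of the LLM (identity for illustration)."""
    return h

# Feed the first set as the visual input, then inject the remaining sets
# layer by layer through residual connections instead of concatenating them,
# so the visual context length stays at T rather than growing to num_sets * T.
h = token_sets[0]
for extra in token_sets[1:]:
    h = llm_block(h)
    h = h + extra                    # residual stacking of the next token set
```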
We are encouraged that all reviewers gave positive ratings to our work and recognize the merits of our work including:
* **Novelty**: "The idea is novel" (R-QJM9), "The proposed method is novel, different from the commonly-used stringing method" (R-SQxC).
* **Simplicity**: "simple and effective" (QJM9), "The authors propose an effective and simple way" (REjR).
* **Effectiveness**: "simple and effective" (QJM9), "can significantly improve performance on some datasets" (SQxC), "The authors propose an effective and simple way" (REjR).
We carefully read the comments by all reviewers and attempted to provide comprehensive responses accordingly. Please find the rebuttal below each official review. We hope the responses could answer the questions raised by reviewers and address any concerns about our work.
Thanks again to all reviewers for the time and effort!
***
## Author Global Responses
We provide a detailed architecture figure in our **rebuttal pdf**. We recommend referring to Figure. 1 and Figure. 2 in our **rebuttal pdf** for a better understanding of the high-resolution token processing.
### 1. How to split high-resolution images into patch crops
We split high-resolution images into sub-image crops using dynamic high-resolution techniques [1,2,3]. Given the grid pinpoints template $A$={($a^{h}_i, a^{w}_i$)} and the input resolution of vision encoder $\mathrm{r}$, the resolution candidates are calculated as $R$={($r \cdot a^{h}_i, r \cdot a^{w}_i$)}. For each image $I$, we first select the best-fitting resolution from $R$ to resize $I$. The resized image is then split into fixed-size sub-images of $r \times r$ accordingly.
For example, we use CLIP-ViT-336 as the image encoder and define the grid pinpoints templates $A$ = {(1,2), (1,3), (1,4), (2,1), (2,2), (3,1), (4,1)} to ensure at most 4 sets of high-resolution tokens for stacking by default. This setup allows us to obtain the resolution candidates $R$ = {(336, 336), (336, 672), (336, 1008), (336, 1344), (672, 336), (672, 672), (1008, 336), (1344, 336)}. For each input image, we first select the best-fitting resolution. The image is then resized and split into $336 \times 336$ sub-images accordingly. Thus, the high-resolution sub-images can be directly encoded with the image encoder.
Please refer to Fig. 1 in our **rebuttal pdf** for a clearer understanding.
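The candidate-resolution and tiling steps above can be sketched as follows (a small illustration of the arithmetic only; the best-fit selection criterion itself is omitted, and the helper names are ours, not from the paper):

```python
def resolution_candidates(grid_pinpoints, r=336):
    """Map aspect-ratio templates (a_h, a_w) to pixel resolutions (r*a_h, r*a_w)."""
    return [(r * ah, r * aw) for ah, aw in grid_pinpoints]

def split_into_tiles(height, width, r=336):
    """Return the (row, col) index of every r x r sub-image of a resized image."""
    assert height % r == 0 and width % r == 0
    return [(i, j) for i in range(height // r) for j in range(width // r)]
```

For the default template $A$ above with $r = 336$, a resized $336 \times 1008$ image yields three sub-images in a single row, each directly encodable by CLIP-ViT-336.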
[1] SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal Large Language Models.
[2] SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models.
[3] LLaVA-NeXT: Improved reasoning, OCR, and world knowledge.
### 2. How to split high-resolution visual tokens into different sets
Given the high-resolution visual tokens $\mathbf{X}^{hires} \in \mathbb{R}^{(a^h \cdot h) \times (a^w \cdot w) \times C}$, we apply 2D sampling to divide the tokens into different sets. Here, $h$ and $w$ represent the image feature shape of the vision encoder, while $a^h$ and $a^w$ denote the aspect ratio of the resized image. The 2D sampled token sets for StackFormer are calculated as $\mathbf{X}^{stack} = \{\mathbf{X}^{hires}[i::a^{h}, j::a^{w}]\}$, where $i$ = {$0, 1, \ldots, a^{h} - 1$} and $j$ = {$0, 1, \ldots, a^{w} - 1 $}.
Please refer to Fig.2 in our **rebuttal pdf** for a better understanding.
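The 2D strided sampling above maps directly onto NumPy slicing; a minimal sketch (feature-grid shape and channel count are assumed values for illustration):

```python
import numpy as np

h, w, C = 24, 24, 8          # base feature grid of the vision encoder (assumed)
ah, aw = 2, 2                # aspect-ratio multipliers of the resized image
X_hires = np.random.randn(ah * h, aw * w, C)

# Each (i, j) offset selects one h x w token set; together the a_h * a_w sets
# partition the high-resolution tokens without discarding any of them.
X_stack = [X_hires[i::ah, j::aw] for i in range(ah) for j in range(aw)]
```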
### 3. Additional Results on vicuna-1.5 (7b/13b LLM)
Inspired by Reviewer REjR, we further conduct experiments on 7B and 13B LMMs. For StackFormer-ViT, we use the first 20 layers of ViT to extract multi-crop high-resolution features and the last 4 layers to stack these features. The stacked features are then fed into a projection module as visual token inputs for LMMs, without increasing the visual context length.
The results demonstrate that our StackFormer can be effectively utilized for both LLM and ViT.
*$\star$ indicates that the vision encoder is fine-tuned during the SFT stage.*
| # | Method | VQAv2 | GQA | TextVQA | DocVQA | InfoVQA | SEED | POPE | MMMU | MM-Vet |
|----|----|----|----|----|----|----|----|----|----|----|
| 0 | Llava-1.5-7b | 78.5 | 62.0 | 58.2 | 28.1 | 25.8 | 58.6 | 85.9 | 35.3 | 30.5 |
| 1 | StackFormer-LLM-7b | 79.5 | 63.1 | 62.4 | 39.1 | 29.8 | 60.6 | 86.7 | 35.7 | 29.9 |
| 2 | StackFormer-LLM-7b $\star$ | 81.1 | 63.9 | 64.5 | 39.3 | 30.1 | 63.3 | 86.7 | 37.1 | 29.8 |
| 3 | StackFormer-ViT-7b $\star$ | 80.4 | 64.1 | 63.5 | 41.0 | 30.0 | 62.3 | 87.6 | 34.9 | 33.0 |

| # | Method | VQAv2 | GQA | TextVQA | DocVQA | InfoVQA | SEED | POPE | MMMU | MM-Vet |
|----|----|----|----|----|----|----|----|----|----|----|
| 0 | Llava-1.5-13b | 80.0 | 63.3 | 61.3 | 30.3 | 28.4 | 61.6 | 85.9 | 35.3 | 30.5 |
| 1 | StackFormer-LLM-13b | 80.9 | 64.2 | 64.6 | 41.5 | 33.0 | 63.5 | 87.7 | 35.2 | 35.9 |
| 2 | StackFormer-LLM-13b $\star$ | 82.1 | 65.1 | 65.2 | 43.1 | 34.0 | 64.4 | 86.6 | 34.7 | 36.2 |
| 3 | StackFormer-ViT-13b $\star$ | 81.1 | 64.2 | 63.9 | 41.7 | 33.1 | 63.0 | 86.6 | 34.7 | 31.1 |
Pdf: /pdf/2a5f6b6d95d17569d3f95fd359735aaf45081f3a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Pretrained Optimization Model for Zero-Shot Black Box Optimization | Accept (poster) | Summary: This paper focus on zero-shot black box optimization. They propose a pretrained optimization model (POM) to pretrain on a training dataset, achieving good results on BBOB benchmark and two robot control tasks. The authors design several parts including LMM and LCM for the POM to achieve the generation of sample strategy. The experiment results are comprehensive with the main experiment and some in-depth experiments.
Strengths: 1. The design of the POM makes sense in general. The authors draw inspiration from the optimization process and hand-crafted many designs for the LMM and LCM.
2. The training process makes sense too. The construction of the training dataset considers the coverage of landscape features. The design of the loss function is really instructive, considering both convergence and diversity.
3. The algorithm presents good experimental results on both the synthetic BBOB dataset and the robot control tasks, and achieves good results in both low- and high-dimensional scenarios.
Weaknesses: 1. POM contains many hand-crafted designs but is still under the mutate-then-crossover scheme; what is the motivation for deciding that this is the best design?
2. From the 'Ablation Study', we learn that the `mask` is the most important design of POM, yet it is just a random process without any theoretical guarantee. This raises concern about the whole design of the framework, because the hand-crafted parts are not even more important than the random mask. Also, the NOTRAIN variant performs better when one of the core designs is discarded, which is weird.
3. More results could be presented (e.g., there are only results for 100D when BBOB is tested with the optimal solution disturbed).
4. The training dataset is relatively simple; it would be easy for a model to overfit to it.
5. The related work could be more comprehensive. For example, for the 'LLM for Optimization' part, there are several works on solving black-box optimization problems using LLMs (e.g., https://arxiv.org/abs/2403.01131, https://arxiv.org/abs/2401.02051). It would be nice to have a more comprehensive introduction to these related works.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What will the results be when functions in the BBOB dataset are rotated and shifted simultaneously? In the main experiment, the optimal points are all located at 0. Although the appendix includes results for the shifted version, rotation is another common operation used to examine the robustness of algorithms.
2. How do you implement those baseline methods? It will be nice to mention this.
3. For LES and LGA, were they trained on the same dataset under the setting in this paper?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see weaknees and question part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **weakness 1**
We do not claim that the current version of POM is the best design, but it does show excellent performance in experiments. We also do not insist that POM must follow the mutate-then-crossover scheme; we simply designed the LMM and LCM modules for solution generation according to the needs of POM and ensured that POM can be pre-trained end-to-end through gradients.
**weakness 2**
The issues you mentioned actually support the rationality of the POM module design.
1) mask
The reason for introducing the `mask` is explained in detail in lines 121-126. Its function is to limit information exchange within the population and prevent the population from quickly converging to the best individual and losing diversity. This design is similar to the dropout operation in deep neural networks: without dropout, a neural network easily falls into local optima and overfits the training set.
It is worth noting that the role of `mask` is to make LCM and LMM work better. `mask` is like a catalyst, while LCM and LMM can be likened to reactants. We cannot say that the catalyst itself is more important than the reactants.
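A minimal sketch of the masking idea as described above (a toy illustration with assumed shapes, not the authors' LMM; the self-link trick and the 0.5 drop rate are our assumptions): a random mask zeroes out part of the pairwise attention among individuals before the softmax, limiting how much information can flow toward the current best individual.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                   # population size (illustrative)
A = rng.random((n, n))                  # hypothetical pairwise attention scores
mask = rng.random((n, n)) < 0.5         # randomly drop roughly half of the links
mask |= np.eye(n, dtype=bool)           # keep self-links so every row stays valid
A_masked = np.where(mask, A, -np.inf)   # dropped entries vanish after softmax
W = np.exp(A_masked - A_masked.max(axis=1, keepdims=True))
W /= W.sum(axis=1, keepdims=True)       # row-normalized interaction weights
```

Like dropout, the mask is resampled at each step, so no individual can persistently dominate the information exchange.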
2) `NOTRAIN` and `NO LMM`
`NOTRAIN` performs better than `NO LMM`, which means LMM is a core component: without LMM, the trained model performs worse than `NOTRAIN`.
**weakness 3**
The experimental results on BBOB (d=100, optimal solutions disturbed), Bipedal Walker, and Enduro demonstrate the excellent performance of POM. These results already verify the optimization and generalization capabilities of POM. We further show the experimental results on BBOB (d=500, disturbed).
| **F** | **GLHF** | **CMA-ES** | **LSHADE** |
| :--------------: | :----------------------: | :--------------------: | :--------------------: |
| **F1** | **1.29E+02(1.29E+02**) | 6.12E+02(3.15E+01) | 2.06E+02(2.18E+01) |
| **F2** | 5.22E+00(5.22E+00) | 6.28E+01(3.25E+00) | **3.13E+00(5.25E-01)** |
| **F3** | 4.55E+03(4.55E+03) | 8.93E+03(2.57E+02) | 4.00E+03(1.15E+02) |
| **F4** | 9.98E+03(9.98E+03) | 2.62E+04(3.08E+03) | **8.82E+03(5.22E+02)** |
| **F5** | 5.41E+03(5.41E+03) | **1.43E+03(7.16E+01)** | 1.49E+03(6.53E+02) |
| **F6** | 3.00E+05(3.00E+05) | 8.20E+05(1.09E+05) | **2.66E+05(3.11E+04)** |
| **F7** | **1.19E+03(1.19E+03)** | 8.06E+03(3.59E+02) | 1.34E+03(2.63E+01) |
| **F8** | **8.37E+05(8.37E+05)** | 2.03E+07(3.52E+05) | 1.79E+06(2.36E+05) |
| **F9** | **3.24E+03(3.24E+03)** | 1.76E+07(1.87E+06) | 5.02E+05(3.04E+04) |
| **F10** | 4.03E+06(4.03E+06) | 5.75E+07(5.25E+06) | **3.51E+06(4.43E+05)** |
| **F11** | **4.28E+02(4.28E+02)** | 6.48E+03(2.62E+02) | 5.80E+02(1.90E+02) |
| **F12** | **1.24E+09(1.24E+09)** | 1.74E+10(1.02E+09) | 2.28E+09(1.34E+08) |
| **F13** | **1.12E+03(1.12E+03)** | 2.48E+03(1.37E+02) | 1.42E+03(4.37E+01) |
| **F14** | **1.02E+01(1.02E+01)** | 1.14E+02(1.16E+01) | 1.64E+01(9.05E-01) |
| **F15** | **4.32E+03(4.32E+03)** | 9.31E+03(1.84E+02) | 4.75E+03(3.34E+02) |
| **F16** | **4.91E+01(4.91E+01)** | 6.70E+01(1.73E+00) | 5.79E+01(2.12E+00) |
| **F17** | **2.19E+00(2.19E+00)** | 1.06E+01(8.56E-01) | 3.14E+00(1.84E-01) |
| **F18** | **8.29E+00(8.29E+00)** | 3.54E+01(2.41E+00) | 1.29E+01(7.45E-01) |
| **F19** | **7.72E+00(7.72E+00)** | 1.03E+02(6.85E+00) | 1.56E+01(8.92E-01) |
| **F20** | **-4.67E+00(-4.67E+00)** | 3.41E+05(2.53E+04) | 5.10E+03(7.19E+02) |
| **F21** | **7.79E+01(7.79E+01)** | 8.61E+01(1.45E-01) | **7.02E+01(3.09E+00)** |
| **F22** | 8.10E+01(8.10E+01) | 8.46E+01(1.08E+00) | **7.08E+01(7.65E-01)** |
| **F23** | 1.67E+00(1.67E+00) | 1.94E+00(1.02E-01) | **1.64E+00(1.29E-02)** |
| **F24** | 7.34E+03(7.34E+03) | 1.56E+04(3.21E+02) | **7.31E+03(4.71E+02)** |
| **win/tie/loss** | -/-/- | 22/1/1 | 14/2/8 |
We hope our reply addresses your concern.
**weakness 4**
POM does not overfit the training set, and experimental results show that POM exhibits generalization capabilities far beyond its training distribution.
**weakness 5**
Thank you for your valuable comments. We have added these references in the Related Work section, which makes our related work more complete (see Section 2, LLM for Optimization).
“
*LLaMoCo [38] and EoH [39] use LLM to generate code to solve optimization problems, but the performance of LLaMoCo depends on carefully designed instructions and prompts, and EoH has expensive evaluation costs.*
”
**Question 1**
We think your suggestion is very reasonable. In fact, in all BBOB-related experiments, we applied corresponding projections or rotations to the objective function; we only removed the deviation from the optimal solution in the main experiments.
**Question 2**
The implementation of these baseline methods is described in detail in Appendix section D, and their parameter settings are described in detail in Appendix section E.
**Question 3**
They were not trained on the same training set. First, we did not need to train LES and LGA, as they are claimed to be trained models that can be used out of the box. Second, the training frameworks of LES and LGA are not open source, so we could not train them. Third, the training task sets of LES and LGA contain many BBOB functions, which is beneficial to them. Even so, their performance is not as good as POM's.
---
Rebuttal Comment 1.1:
Title: Response to the Authors
Comment: Thanks for the response from the authors, the response addressed most of my questions.
Two further minor questions are as follows,
1. In the newly provided results on BBOB 500D, the data in the GLHF column seem a bit weird in that the reported values and the standard deviations are the same.
2. For the BBOB test suites, do you mean that you used the default projections and rotations provided in the original problem set?
---
Rebuttal 2:
Comment: **Q1**
We deeply apologize for the oversight in the configuration of our experimental code, which led to an error where the standard deviations were identical in the experimental results provided during the rebuttal period. We have rerun the experiment and obtained the correct mean values (and standard deviations), as shown in the table below. We are pleased to report that the performance of the POM method remains significantly superior to the best baselines, which is consistent with our previous conclusions. We are very grateful for the careful review by the reviewers.
| **F** | **POM** | **CMA-ES** | **LSHADE** |
| :--------------: | :---------------------: | :--------------------: | :--------------------: |
| **F1** | **1.29E+02(2.60E+00)** | 6.12E+02(3.15E+01) | 2.06E+02(2.18E+01) |
| **F2** | 5.75E+00(4.71E-01) | 6.28E+01(3.25E+00) | **3.13E+00(5.25E-01)** |
| **F3** | 4.72E+03(4.77E+01) | 8.93E+03(2.57E+02) | **4.00E+03(1.15E+02)** |
| **F4** | 1.02E+04(7.66E+02) | 2.62E+04(3.08E+03) | **8.82E+03(5.22E+02)** |
| **F5** | 5.46E+03(7.62E+01) | **1.43E+03(7.16E+01)** | 1.49E+03(6.53E+02) |
| **F6** | 3.18E+05(3.59E+04) | 8.20E+05(1.09E+05) | **2.66E+05(3.11E+04)** |
| **F7** | **1.13E+03(3.76E+01)** | 8.06E+03(3.59E+02) | 1.34E+03(2.63E+01) |
| **F8** | **8.62E+05(2.71E+04)** | 2.03E+07(3.52E+05) | 1.79E+06(2.36E+05) |
| **F9** | **3.24E+03(1.72E-01)** | 1.76E+07(1.87E+06) | 5.02E+05(3.04E+04) |
| **F10** | 3.53E+06(3.87E+05) | 5.75E+07(5.25E+06) | **3.51E+06(4.43E+05)** |
| **F11** | **4.25E+02(2.61E+01)** | 6.48E+03(2.62E+02) | 5.80E+02(1.90E+02) |
| **F12** | **1.19E+09(6.29E+06)** | 1.74E+10(1.02E+09) | 2.28E+09(1.34E+08) |
| **F13** | **1.13E+03(1.48E+01)** | 2.48E+03(1.37E+02) | 1.42E+03(4.37E+01) |
| **F14** | **1.02E+01(2.52E-01)** | 1.14E+02(1.16E+01) | 1.64E+01(9.05E-01) |
| **F15** | **4.15E+03(9.83E+01)** | 9.31E+03(1.84E+02) | 4.75E+03(3.34E+02) |
| **F16** | **4.67E+01(1.61E+00)** | 6.70E+01(1.73E+00) | 5.79E+01(2.12E+00) |
| **F17** | **2.18E+00(2.41E-02)** | 1.06E+01(8.56E-01) | 3.14E+00(1.84E-01) |
| **F18** | **8.40E+00(8.93E-02)** | 3.54E+01(2.41E+00) | 1.29E+01(7.45E-01) |
| **F19** | **7.62E+00(3.10E-01)** | 1.03E+02(6.85E+00) | 1.56E+01(8.92E-01) |
| **F20** | **-3.43E+00(3.29E+00)** | 3.41E+05(2.53E+04) | 5.10E+03(7.19E+02) |
| **F21** | 7.76E+01(2.88E-01) | 8.61E+01(1.45E-01) | **7.02E+01(3.09E+00)** |
| **F22** | 8.07E+01(3.29E-01) | 8.46E+01(1.08E+00) | **7.08E+01(7.65E-01)** |
| **F23** | **1.60E+00(7.20E-02)** | 1.94E+00(1.02E-01) | 1.64E+00(1.29E-02) |
| **F24** | **7.31E+03(2.04E+02)** | 1.56E+04(3.21E+02) | 7.31E+03(4.71E+02) |
| **win/tie/loss** | -/-/- | 22/1/1 | 14/3/7 |
**Q2**
We did not utilize the default projections and rotations provided in the original problem set because we aimed to test the model's performance on a broader range of scenarios on the BBOB benchmark. Consequently, we strictly adhered to the standard BBOB procedure to generate a series of BBOB function parameters, including projections and rotations.
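The standard BBOB-style construction described above composes a shift to a random optimum with a random orthogonal rotation. A hedged sketch (the sphere base function and QR-based rotation are standard choices, but the exact generation procedure used in the paper is not shown here):

```python
import numpy as np

rng = np.random.default_rng(42)
d = 5
# random orthogonal rotation matrix via QR decomposition
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
x_opt = rng.uniform(-4, 4, d)            # shifted optimum location

def sphere_rotated_shifted(x):
    """f(x) = ||Q (x - x_opt)||^2: minimum 0 attained at x_opt."""
    z = Q @ (np.asarray(x) - x_opt)
    return float(z @ z)
```

Because $Q$ is orthogonal, the rotation changes variable interactions without changing the distance to the optimum, which is what makes it a useful robustness test.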
---
Rebuttal Comment 2.1:
Title: Response to the Authors
Comment: Thanks the authors for replying, all of my concern has been addressed. Good work and I would like to raise my score to 7.
---
Reply to Comment 2.1.1:
Comment: Thank you immensely for your valuable recognition and support towards our work. Your insightful suggestions have not only guided us but also significantly contributed to the enhancement of our paper. We sincerely appreciate your time and effort in helping us refine our research. | Summary: This paper studies zero-shot optimizers for blackbox optimization problem. The core idea is to pretrain a hypernetwork that generate suitable optimization strategies on a subset of tasks; at test time, the hypernetwork can thus be deployed to propose the optimizer for a given unseen task. The key technical contribution of this work includes the architectural design for this hypernetwork as well as a meta learning algorithm for training it. Empirical studies are conducted on BBOB benchmark as well as Bipedal Walker and Enduro dataset, where the proposed method achieves superior final results compared with several prior arts.
Strengths: 1. The idea is straightforward yet grounded in the rich existing literature of hypernetwork in AutoML.
2. The hypernetwork, trained in a meta-learning fashion, exhibits fairly consistent generalization ability on the task considered.
3. Components of the proposed method are extensively studied.
Weaknesses: 1. Presentation: Some parts of this paper are a bit vanilla and take some effort to follow. For example, section 3 jumps right into detailing the architectural design. Providing a high-level overview and prioritizing the most important designs could benefit the readability of this work.
2. Analysis: The empirical results suggest that the proposed method performs strongly in high-dimensional scenarios, but the paper does not seem to provide intuition for why this is the case.
Technical Quality: 2
Clarity: 1
Questions for Authors: On Bipedal Walker task, the proposed method seems to exhibit substantially higher variance compared with other baselines. Are there any intuitions for this behavior?
Confidence: 1
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: I did not seem to find any discussion on the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **weakness 1**
Thank you very much for your valuable suggestions, which can really help us enhance the readability of the paper. We have made the following changes to Section 3: 1) first, we briefly introduce the overall model architecture of POM and give an overall model structure diagram; 2) then, we introduce the core components of POM in detail; 3) finally, we introduce the details of POM training and testing.
We hope you find these changes satisfactory!
**weakness 2**
In the paper, we have made a visual analysis of why POM achieves such excellent performance (see section 4.4). We found that both LMM and LCM can adaptively adjust their own optimization strategies to achieve a good balance between exploration and utilization. Specifically, in the LMM module, we found the following phenomenon (lines 284-289):
*“ 1) Generally, superior individuals receive higher weights during LMM, showcasing POM's ability to balance exploration and exploitation as the population converges. 2) Across diverse function problems, POM dynamically generates optimization strategies, highlighting its adaptability and contributing to robust generalization. 3) Disadvantaged individuals exhibit a more uniform weight distribution, potentially aiding in their escape from local optima and enhancing algorithm convergence.”*
In the LCM module, we found the following phenomenon (see lines 291-299):
*“LCM displays the capacity to adaptively generate diverse strategies for individuals across different ranks in the population, revealing distinct patterns among tasks and rankings. Notably, top-ranking individuals within the top 20, such as those ranked 1st, 5th, and 18th, exhibit a flexible crossover strategy. The dynamic adjustment of crossover probability with population evolution aids in preserving dominant genes and facilitating escape from local optima. Conversely, lower-ranking individuals show an increasing overall probability of crossover, promoting exploration of disadvantaged individuals and enhancing the algorithm's exploration capability. LCM proficiently generates adaptive crossover strategies across tasks, individuals, and convergence stages, significantly boosting both convergence and exploration capabilities.”*
POM has obtained strong optimization and generalization capabilities through pre-training, and has a very strong optimization efficiency. The performance of POM on all problems far exceeds the baselines of other manually designed optimization strategies (DE, ES, LSHADE, CMAES) and the baselines that obtain optimization capabilities through pre-training (LGA, LES), which proves the effectiveness and rationality of POM.
**Question 1**
The Bipedal Walker task is a very difficult robot control task with very sparse reward signals. We can see that LES failed almost immediately on this task, while DE, ES, LGA, and LSHADE all showed premature convergence. These poor-performing algorithms share a common characteristic: small variance. Both POM and CMA-ES, which perform best on this task, exhibit significantly higher variance than the other algorithms, which implies that stronger exploration capabilities are required to solve it. For example, when CMA-ES encounters an evolutionary failure, it increases the standard deviation of its search distribution and explores better points from a larger space; POM adaptively adjusts its optimization strategy. In particular, the mask module enhances the randomness of POM and prevents the LMM module from over-relying on outstanding individuals in the population, thereby improving POM's exploration ability.
We did observe in the experiment that there are a large number of failed evolutions in the population evolution process. It may be that after many consecutive generations of attempts, a new and better individual will suddenly be found. This results in a strong randomness in the convergence process, so the convergence curves of CMA-ES and POM appear to have relatively large variances.
**limitations**
Thank you very much for your comments! We have conducted a very detailed analysis of POM in the paper. We also pointed out the advantages and limitations of POM that we observed from the experiments. We have added a summary of limitations as follows:
*"limitations*
1) *Model size: In the experiment, we found that the relationship between the model size and the performance of POM is not a strict linear relationship. Although the larger the model, the more difficult it is to train, there is still no very quantitative design criterion between model size, training data volume and training difficulty.*
2) *Time performance: We introduced an operation similar to the attention mechanism, whose time complexity is $O(n^2)$, which makes POM require a lot of time cost when processing large-scale populations. How to reduce and improve the time efficiency of POM is also worthy of further study.*"
---
Rebuttal 2:
Comment: We hope that our responses have adequately addressed the concerns you previously raised. Your further insights would be greatly appreciated, as they are instrumental in enhancing the quality of our work. We understand that you are busy, and we truly value the time and effort you put into this process. Thank you in advance for your continued support. | Summary: The paper introduces POM, a neural-network-based evolutionary algorithm for black-box optimization. POM is trained on diverse optimization tasks to enable adaptation to new tasks. POM outperforms the baselines on BBOB benchmark and two robot control tasks.
Strengths: The proposed method performs better than the baselines on the BBOB benchmark and two robotic tasks.
**Increased score from 3 to 6 after discussion**
Weaknesses: - The paper is missing some key references [1, 2, 3]. These methods learn from diverse tasks and are able to adapt to new tasks without any finetuning. This invalidates some of the claims made in the paper (lines 28-29, line 78)
- Section 3 is difficult to read without a background section on population-based optimization. It should describe a general evolutionary algorithm first and how POM parameterizes different components of the algorithm with a neural network.
- Section 3.2 is overly mathy with unnecessary equations. This distracts away from the main idea of the proposed model, which is using neural networks to parameterize evolution. For example, equations (6) and (10) are simply describing multi-head attention and feed-forward layers, which are very standard in deep learning and it's unnecessary to present the equations here. It is also redundant to specify each element of the network parameters $\theta_1$ and $\theta_2$, etc.
- Section 3.3 sounds difficult to believe. How can we expect the same set of parameters ($\theta_1$ and $\theta_2$) to learn from many tasks without feeding metadata of each task? For example, if two tasks have conflicting gradient signals, they get canceled out, and the model basically learns nothing. Specifically, for a certain $x_t$, for task $i$, the model should produce $x_{t+1}$, but for task $j$, the model should produce $x_{t+1}'$, and $x_{t+1}$ and $x_{t+1}'$ are conflicting, then how does the model learn in this case?
- Section 4 is confusing without any experiment setup information. Where are the zero-shot and few-shot settings mentioned in the introduction? How is POM adapted to a new objective after training?
- POM has advantages over the baselines because it was pretrained while the baselines were not, so the better performance of POM is not surprising.
[1] Nguyen, Tung, and Aditya Grover. "Transformer Neural Processes: Uncertainty-Aware Meta Learning Via Sequence Modeling." International Conference on Machine Learning. PMLR, 2022.
[2] Nguyen, Tung, Sudhanshu Agrawal, and Aditya Grover. "ExPT: synthetic pretraining for few-shot experimental design." Advances in Neural Information Processing Systems 36 (2024).
[3] Nguyen, Tung, and Aditya Grover. "LICO: Large Language Models for In-Context Molecular Optimization." arXiv preprint arXiv:2406.18851 (2024).
Technical Quality: 3
Clarity: 2
Questions for Authors: - What do the authors mean by "zero-shot" optimization? Does it mean optimizing a new objective function without seeing any (x, y) pair from that function (is this even possible)? Or do the authors mean optimizing a new objective function without any gradient training/fine-tuning? If the latter is the case, then there already exists methods that can perform optimization without training/fine-tuning on the new objective (see my comments above).
- How did the author decide on the input to LMM (H) and LCM (Z)?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We've carefully reviewed your feedback, addressed all queries, added the refs, and made revisions as per your suggestions. Your further review and acceptance of the updated manuscript would be highly valued.
**Weaknesses 1**
We have included references [1-3] in the related work section of the paper, given their significant contributions:
*"TNPs [40], ExPT [41] and LICO [42] use transformers to solve BBO problems and have achieved good results. TNPs require contextual information about the target problem, and neither ExPT nor LICO can be directly used to solve tasks whose dimensionality differs from that of the training task."*
However, we respectfully disagree with your viewpoint for the following reasons:
In [1], it is mentioned that "*the number of evaluation points $N$ and the number of context points $m$ are generated from the same uniform distribution as in training.*" Moreover, [1] considers three experimental tasks and is retrained separately for each of them. This suggests that the method of [1] may not be suitable for zero-shot optimization problems.
In Section 2.1 of [2], it is mentioned that “during the adaptation phase, one can use the pretrained model to optimize any objective function $f$ in the same domain $X \subseteq \mathbb{R}^d$ .” In section 4.3 of [3], it is mentioned that “after training, a single LICO model can be used for optimizing various objective functions within the domain $X \subseteq \mathbb{R}^d$ .” This could suggest that references [2-3] may not be directly applicable to our particular context.
POM overcomes these limitations.
At the same time, we found these related papers, which also gave us a lot of inspiration.
- Probing the Decision Boundaries of In-context Learning in Large Language Models.
- ClimaX: A foundation model for weather and climate.
- Temporal predictive coding for model-based planning in latent space.
**Weaknesses 2**
Appendix A provides an overview of the background knowledge related to population-based optimization. Owing to constraints on space, we have chosen to include this information in the appendix rather than the main text. Section 3 meticulously outlines the specifics of POM, thereby ensuring that the reproduction of POM is feasible without a priori knowledge of population-based optimization techniques.
**Weaknesses 3**
Sec. 3.2 details POM's structure and mechanism for reproducibility. Eqs. (6) and (10) present specialized model designs that differ from standard multi-head attention and feed-forward layers in their weights, activations, and normalization.
**Weaknesses 4**
POM does not suffer from the conflict you describe. POM is used in much the same way as CMA-ES: it can directly optimize the target task without any metadata. POM learns general optimization strategies instead of fitting a specific optimization task; it only needs the fitness of the population and some intra-population features as input to adaptively generate optimization strategies.
**Weaknesses 5**
We introduced the concepts of Zero-shot Optimization and Few-shot Optimization in section 3.1 (L87-L90).
The experiment in **Section 4.2** verified the performance of POM in the Zero-shot Optimization scenario (see Fig. 2, 3 and the corresponding appendix for the results). In this section, we directly used the trained POM to solve the target problem.
In **Section 4.3 Fine-tuning Test**, we further verified the performance of POM in the **Few-shot Optimization** scenario. Here, we fine-tune POM using a small number of simulated functions of the target task. The experimental results are shown in Fig. 6.
The trained POM can be directly used to handle new target tasks, which is described in detail in Algorithm 2 (line 167).
**Weaknesses 6**
This is precisely the advantage of POM. The baselines include two algorithms (LES and LGA) pre-trained via Meta-BBO. Although they have also been pre-trained, their performance is far inferior to POM's. POM can attain superior performance through pre-training, while these baselines cannot.
**Question 1**
The concept of Zero-shot Optimization introduced in section 3.1 is as follows:
“**Definition 1 Zero-shot Optimization**. An optimizer is applied directly to solve $f$ without any tuning.”
The references [1-3] mentioned in this question cannot achieve zero-shot optimization; for details, see our reply to Weakness 1.
**Question 2**
These designs are described in detail in section 3.2.
**LMM**: The role of LMM is to generate a set of candidate solutions $\mathbf{V}^t$ based on the population $\mathbf{X}^t$. Therefore, the input of LMM is based on fitness information, that is, $\mathbf{H}^t$ (see lines 107-118 for details). The input information of LMM does not include the solution $\mathbf{x}$, because $\mathbf{x}$ will have different dimensions and follow different distributions under different tasks. If the input information includes $\mathbf{x}$, POM will not be able to generalize to different task dimensions.
**LCM**: The input of LCM needs to take into account the fitness information $[\hat{f}_i^t,\hat{r}^t_i]$ of $\mathbf{x}^t_i$. Intuitively, if $\mathbf{x}^t_i$ has a low fitness and a poor ranking in the population, then its genes are likely to be eliminated and should not enter the offspring. At the same time, LCM should take into account the cosine similarity between $\mathbf{x}^t_i$ and its candidate solutions $\mathbf{v}^t_i$. Intuitively, if the similarity between $\mathbf{x}^t_i$ and its candidate solution $\mathbf{v}^t_i$ is very low, selecting more genes of the candidate solution $\mathbf{v}^t_i$ into the offspring individuals helps encourage the exploration of the model.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response.
I still believe the writing can be significantly improved for better clarity. The main text should provide sufficient background and motivate the approach well before delving into the technical details. Moreover, the methodology section is a bit too verbal and mathy, and some equations are unnecessary. I do not see a difference between Eq (6) and (10) and the standard feed forward and attention layers.
Regarding the baselines, TNP, ExPT, and LICO can all be used for zero-shot optimization because they do not require further fine-tuning after pretraining. However, they do require the new objective to lie in the same dimension/domain as the pretraining tasks. I do not think the other papers the authors mentioned are relevant to this work.
Can the authors explain why POM does not have the conflicting issue I mentioned in the original review? Imagine there are two tasks in pretraining that have opposite objectives, I do not see how POM can learn a general optimization strategy from both tasks that have conflicting gradient information without any contextual information from each task.
I will keep my original score for now.
---
Rebuttal 2:
Comment: We hope that our responses have adequately addressed the concerns you previously raised. Your further insights would be greatly appreciated, as they are instrumental in enhancing the quality of our work. We understand that you are busy, and we truly value the time and effort you put into this process. Thank you in advance for your continued support.
---
Rebuttal 3:
Comment: First and foremost, I would like to express my sincere gratitude for your comments and suggestions regarding our manuscript submitted to NeurIPS 2024. We have diligently revised our paper in accordance with your feedback.
We earnestly hope that you will reconsider our revised manuscript and appreciate the improvements made. If you find that our efforts and the results now meet the standards of the conference, we kindly request that you consider increasing the score of our paper. We highly value your assessment and look forward to your final approval.
Thank you once again for your valuable time and expertise.
## Q1: I still believe the writing can be significantly improved for...
Thank you for your valuable suggestions, which have helped us improve the expression of our paper. To better introduce the background knowledge and clarify the motivation behind the design of POM, we have followed your advice and added the following content in Section 3 before detailing the mechanism of POM (see Section 3.2 for details).
"
***3.2 Classic Population Optimization Algorithm***
*In this section, we use Differential Evolution (DE) as an example to review classic evolutionary algorithms. DE [20,43] is a prominent family within evolutionary algorithms (EAs), known for its advantageous properties such as rapid convergence and robust performance [44,45]. The optimization strategy of DE primarily involves mutation and crossover operations.*
*The classic DE/rand/1 mutation operator is illustrated in Eq. (1) (additional examples are listed in Appendix A.2). Each mutation strategy can be viewed as a specific instance of Eq. (2); further details are provided in Appendix A.2. Additionally, we represent the mutation strategy in matrix form, as shown in Eq. (3). The matrix $\mathbf{S}$ evolves with the generation index *t*, indicating that the mutation strategy adapts across generations. Consequently, we propose a module that leverages information from the *t*th-generation population to generate $\mathbf{S}^t$, enhancing the mutation operation. This serves as the motivation for our design of the LMM.*
$\mathbf{v}\_i^t = \mathbf{x}\_{r1}^t+F\cdot(\mathbf{x}\_{r2}^t-\mathbf{x}\_{r3}^t)$ (1)
*In the crossover phase at step *t*, DE uses a fixed crossover probability $cr_i^t$ ∈ [0,1] for each individual $\mathbf{x}_i^t$ in the population, as shown in Eq. (9). The crossover strategy for the entire population can then be expressed as a vector $\mathbf{cr}^t = (cr_1^t, cr_2^t, ..., cr_N^t)$. Our goal is to design a module that adaptively generates $\mathbf{cr}^t$ using the information from the population. This approach allows for the automatic design of the crossover strategy by controlling the parameter *cr*. This serves as the motivation for our design of LCM.*
"
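As an aside, the classic DE/rand/1 step described in the quoted revision can be sketched in a few lines of numpy. This is an illustrative sketch: the fixed `F` and `cr` values here are exactly the parameters that the LMM and LCM modules would generate adaptively instead, and the population size is an arbitrary assumption.

```python
import numpy as np

def de_rand_1_step(X, f, F=0.5, cr=0.9, rng=None):
    """One generation of classic DE/rand/1 mutation with binomial
    crossover and greedy selection (minimization). F and cr are the
    fixed parameters that LMM/LCM would generate adaptively instead."""
    rng = np.random.default_rng() if rng is None else rng
    N, d = X.shape
    U = X.copy()
    for i in range(N):
        r1, r2, r3 = rng.choice([j for j in range(N) if j != i], 3, replace=False)
        v = X[r1] + F * (X[r2] - X[r3])   # mutation, Eq. (1)
        mask = rng.random(d) < cr         # binomial crossover per gene
        mask[rng.integers(d)] = True      # keep at least one mutant gene
        U[i, mask] = v[mask]
    fx = np.apply_along_axis(f, 1, X)
    fu = np.apply_along_axis(f, 1, U)
    return np.where((fu < fx)[:, None], U, X)  # greedy selection

# applying it repeatedly on the sphere function drives fitness down,
# since per-individual selection is elitist
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, (20, 3))
for _ in range(30):
    pop = de_rand_1_step(pop, lambda x: float(np.sum(x**2)), rng=rng)
```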
Title: Reply to new discussion (1/3)
---
Rebuttal 4:
Comment: ## Q2: Moreover, the methodology section is a bit too verbal and mathy, and some equations are unnecessary. I do not see a difference between Eq (6) and (10) and the standard feed forward and attention layers.
In this section, we present detailed formulas to ensure the reproducibility of POM, which are also crucial for elucidating the mechanism of POM. We will then discuss in detail the differences between these formulas and the standard self-attention and feed-forward layers.
### The difference between Eq (6) and standard self-attention
**A. Self-Attention**
1. **Input Transformation**:
$$ \mathbf{Q}= \mathbf{X}\mathbf{W}^Q, \quad \mathbf{K} = \mathbf{X}\mathbf{W}^K, \quad \mathbf{V} = \mathbf{X}\mathbf{W}^V$$
2. **Attention**:
$$\text{Self-Attention}(\mathbf{Q}, \mathbf{K}, \mathbf{V}) = \text{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^T}{\sqrt{d_k}}\right)\mathbf{V}$$
3. **Output**:
$$\text{Output} = \text{Self-Attention}(\mathbf{Q},\mathbf{K},\mathbf{V})$$
**B. Eq. (6)**
1. **Input Transformation**:
$$
\mathbf{\hat{H}}^t=\text{Tanh}(\mathbf{H}^t\mathbf{W}_{m1}+\mathbf{b}\_{m1}),\quad \mathbf{Q}^t=\text{Tanh}(\mathbf{\hat{H}}^t\mathbf{W}\_{m2}+\mathbf{b}\_{m2}), \quad \mathbf{K}^t=\text{Tanh}(\mathbf{\hat{H}}^t\mathbf{W}\_{m3}+\mathbf{b}\_{m3})
$$
2. **Attention**:
$$\text{LMM-Attention}(\mathbf{Q}, \mathbf{K}, \mathbf{X})= \text{mask}\left(\text{softmax}\left( \text{Tanh}\left(\frac{\mathbf{Q}\mathbf{K}^T}{\sqrt{d_k}}\right)\right),r_{\text{mask}}\right)\mathbf{X}$$
3. **Output**:
$$\text{Output} = \text{LMM-Attention}(\mathbf{Q}, \mathbf{K}, \mathbf{X})$$
The differences between Eq (6) and standard self-attention are evident as follows:
1. **No Calculation of V**: In Eq (6), we do not compute $\mathbf{V}$; instead, we use $\mathbf{X}$ in the attention calculation. This approach is crucial for achieving good performance in our optimization task, whereas standard self-attention struggles to adapt to our problem scenario.
2. **Different Processing of Input Information**: We introduce the Tanh activation function and a bias term, which helps enhance the performance of POM. This is a significant departure from the standard self-attention mechanism.
3. **Attention Matrix Mapping and Masking**: For the computed attention matrix, we apply the Tanh function to map it to the range $[-1,1]$. Additionally, we use a random mask operation, setting elements to zero with a probability of $r_{mask}$.
In summary, there are several key differences between our approach and standard self-attention. These differences are crucial for the performance of POM, and it is essential to detail them thoroughly. This is not a shortcoming but rather a necessary distinction.
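To make the three differences concrete, here is a small numpy sketch of the modified attention. The activations on $\mathbf{Q}$/$\mathbf{K}$, the Tanh-squashed score matrix, the random mask, and the use of $\mathbf{X}$ itself as values follow the equations above; the tensor dimensions and mask rate are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def lmm_attention(H, X, W1, b1, W2, b2, W3, b3, r_mask=0.2, rng=None):
    """Sketch of the modified attention of Eq. (6): Tanh-activated
    projections with biases, a Tanh-squashed score matrix, a random
    mask, and the raw population X (not a learned V) as values."""
    rng = np.random.default_rng() if rng is None else rng
    H_hat = np.tanh(H @ W1 + b1)
    Q = np.tanh(H_hat @ W2 + b2)
    K = np.tanh(H_hat @ W3 + b3)
    d_k = Q.shape[-1]
    scores = np.tanh(Q @ K.T / np.sqrt(d_k))          # squashed to [-1, 1]
    A = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax
    A = A * (rng.random(A.shape) >= r_mask)           # zero entries w.p. r_mask
    return A @ X                                       # values are X itself
```

Note that, unlike standard self-attention, no $\mathbf{V}$ projection is computed: the output mixes the population members $\mathbf{X}$ directly.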
### The difference between Eq (10) and the standard feed-forward layer
**A. Feed-forward**
$$\text{Output} = \mathbf{W}_2 \cdot \text{ReLU}(\mathbf{W}_1 \cdot \mathbf{X} + \mathbf{b}_1) + \mathbf{b}_2$$
**B. Eq (10)**
$$\mathbf{cr}^t = \text{sigmoid}\left(\text{layernorm}\left(\tanh\left(\mathbf{Z}^t \times \mathbf{W_{c1}} + \mathbf{b_{c1}}\right) | \mathbf{\tau}\right) \times \mathbf{W_{c2}} + \mathbf{b_{c2}}\right)$$
It is worth noting that, although the two are very similar in form, Eq (10) provides a detailed implementation scheme. Any modifications to Eq (10), such as replacing the sigmoid function with ReLU, result in a significant decline in POM's performance. This is because we rely on the sigmoid function to map the output to the range $[0,1]$, representing probability values. Changes to other activation functions or hyperparameters in Eq (10) also lead to performance degradation.
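A direct transcription of Eq. (10) in numpy might look as follows. We treat $\text{layernorm}(\cdot \mid \tau)$ as a standard per-row layer normalization with gain $\tau$, which is an assumption about the exact implementation; the feature and hidden dimensions are likewise illustrative.

```python
import numpy as np

def lcm_crossover_rates(Z, Wc1, bc1, Wc2, bc2, tau):
    """Sketch of Eq. (10): map per-individual features Z to crossover
    probabilities cr in [0, 1]. layernorm(. | tau) is assumed to be
    standard layer normalization with gain tau."""
    h = np.tanh(Z @ Wc1 + bc1)
    mu = h.mean(axis=-1, keepdims=True)
    sigma = h.std(axis=-1, keepdims=True) + 1e-8
    h = tau * (h - mu) / sigma                      # layernorm with gain tau
    return 1.0 / (1.0 + np.exp(-(h @ Wc2 + bc2)))   # sigmoid -> probabilities
```

The final sigmoid is the load-bearing choice: it guarantees the output is a valid crossover probability, which is why replacing it with ReLU degrades performance as noted above.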
## Q3: Regarding the baselines, TNP, ExPT, and LICO can all be used for zero-shot optimization because they do not require further fine-tuning after pretraining. However, they do require the new objective to lie in the same dimension/domain as the pretraining tasks.
It is clear that TNP, ExPT, and LICO require retraining in different dimensional scenarios when faced with tasks of varying dimensions. This does not align with the definition of Zero-shot Optimization, which is "an optimizer that can be directly applied to solve a continuous black-box optimization problem $f$ without any tuning."
***"Definition 1 Zero-shot Optimization**: Zero-shot optimization refers to an optimizer that is applied directly to solve a continuous black-box optimization problem $f$ without any tuning. This means that the optimizer does not require any contextual information about $f$ and can be directly used to handle problems of any dimensionality."*
Title: Reply to new discussion (2/3)
---
Rebuttal 5:
Comment: ## Q4: Can the authors explain why POM does not have the conflicting issue I mentioned in the original review...
Conflicts between optimization objectives do not necessarily lead to conflicts in optimization strategies. POM focuses on learning the mapping from the ranking information of individuals within a population to the search strategies, rather than adopting a fixed optimization strategy to suit all tasks.
Specifically, the LMM module adaptively generates the strategy matrix $\mathbf{S}^t$ from the population's ranking information $\hat{r}^t_i$ and the relative-advantage information $\hat{f}_i^t$ (the normalization of $f(\mathbf{x}_i^t)$ for individual $i$; note that any problem can be converted to a minimization problem by negating the objective).
The LCM module uses the population's ranking information ($\hat{r}^t_i$), the relative advantage information of individuals within the population $\hat{f}_i^t$, and the similarity information between parent and offspring $sim_i^t$ to adaptively generate the crossover strategy.
This means that POM is not fitting a specific problem $f$, nor is it searching for an optimal $\mathbf{x}^* \in \mathbb{R}^d$ in its solution space. Instead, it searches for optimal POM strategy parameters $\mathbf{\theta}^*$ in the strategy space through these training tasks. Therefore, even if there are conflicting objective functions, it does not affect the performance of POM. Furthermore, the training algorithm proposed by POM, MetaGBT, integrates gradient information from a set of tasks, making the training process more stable and easier.
Let's simulate the situation you mentioned. Suppose there is a function $f(\mathbf{x})=\sum_i{x_i}$, with task one being $f_1=\mathop{\arg\max}\limits_{\mathbf{x}}f(\mathbf{x})$ and task two being $f_2=\mathop{\arg\min}\limits_{\mathbf{x}}f(\mathbf{x})$. Clearly, these two tasks are in direct conflict.
Assuming that at the current moment, the populations for both tasks are the same, they are in the following state:
$$
\mathbf{X}=
\begin{bmatrix}
1 & 2 & 3 \\\\
4 & 5 & 6 \\\\
7 & 8 & 9 \\\\
1 & 3 & 4 \\\\
\end{bmatrix}
\mathbf{X}_1=
\begin{bmatrix}
7 & 8 & 9 \\\\
4 & 5 & 6 \\\\
1 & 3 & 4 \\\\
1 & 2 & 3 \\\\
\end{bmatrix}
\mathbf{X}_2=
\begin{bmatrix}
1 & 2 & 3 \\\\
1 & 3 & 4 \\\\
4 & 5 & 6 \\\\
7 & 8 & 9 \\\\
\end{bmatrix}
$$
$$
\mathbf{U}_1=\mathbf{CR}\cdot\mathbf{V}_1+(1-\mathbf{CR})\cdot\mathbf{X}_1=
\left[
\begin{matrix}
6.96 & 7.96 & 8.96 \\\\
4.54 & 5.55 & 6.55 \\\\
3.25 & 4.79 & 5.79 \\\\
4.51 & 5.65 & 6.64 \\\\
\end{matrix}
\right]
$$
$$
\mathbf{U}_2=\mathbf{CR}\cdot\mathbf{V}_2+(1-\mathbf{CR})\cdot\mathbf{X}_2=
\left[
\begin{matrix}
1.02 & 2.03 & 3.03 \\\\
1.14 & 2.89 & 3.89 \\\\
2.68 & 3.83 & 4.83 \\\\
2.27 & 3.54 & 4.54 \\\\
\end{matrix}
\right]
$$
Finally, after performing selection between the parents ($\mathbf{X}_1$, $\mathbf{X}_2$) and the trial populations ($\mathbf{U}_1$, $\mathbf{U}_2$), the resulting populations are:
$$
\mathbf{X}_1'=
\left[
\begin{matrix}
7.00 & 8.00 & 9.00 \\\\
4.54 & 5.55 & 6.55 \\\\
3.25 & 4.79 & 5.79 \\\\
4.51 & 5.65 & 6.64 \\\\
\end{matrix}
\right] \mathbf{X}_2'=
\left[
\begin{matrix}
1.00 & 2.00 & 3.00 \\\\
1.14 & 2.89 & 3.89 \\\\
2.68 & 3.83 & 4.83 \\\\
2.27 & 3.54 & 4.54 \\\\
\end{matrix}
\right]
$$
**The average fitness for Task 1 improved from 13.25 to 17.8175, demonstrating an enhancement in the performance of Task 1. Conversely, the average fitness for the population of Task 2 decreased from 13.25 to 8.9025, which also indicates an improvement in the performance of Task 2.**
**This shows that even when objective functions conflict, their optimization strategies do not necessarily conflict. It demonstrates that POM learns a universal optimization strategy, and the conflict you describe does not arise.**
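The average-fitness figures quoted above can be reproduced directly from the matrices given, using the stated fitness $f(\mathbf{x})=\sum_i x_i$:

```python
import numpy as np

# populations from the worked example above
X = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 3, 4]], dtype=float)
X1_new = np.array([[7.00, 8.00, 9.00], [4.54, 5.55, 6.55],
                   [3.25, 4.79, 5.79], [4.51, 5.65, 6.64]])
X2_new = np.array([[1.00, 2.00, 3.00], [1.14, 2.89, 3.89],
                   [2.68, 3.83, 4.83], [2.27, 3.54, 4.54]])

def mean_fitness(P):
    """Average of f(x) = sum_i x_i over the population."""
    return P.sum(axis=1).mean()

# reproduces the 13.25, 17.8175 and 8.9025 quoted above (up to float rounding)
print(mean_fitness(X), mean_fitness(X1_new), mean_fitness(X2_new))
```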
Title: Reply to new discussion (3/3)
---
Rebuttal Comment 5.1:
Comment: I thank the authors for the response. Your clarification has addressed my major concern about the conflicting issue. The detail that helped resolve my confusion was that LCM and LMM use the population's ranking and relative performance, which contain information about the task/objective (the same x can be ranked 1st for objective 1 but ranked last for objective 2). I increased my score from 3 to 6.
With regard to writing, I was aware of the differences in some design choices of the proposed method with standard layers, but I still believe these details should be deferred to later sections so that readers can appreciate the main idea.
---
Reply to Comment 5.1.1:
Comment: I wish to express my sincere appreciation for the invaluable assistance and dedication you have provided during the review process of my paper. Your insightful comments and thoughtful suggestions have significantly contributed to the development and refinement of my work. Your efforts have been of immense help in enhancing the quality and clarity of my research. Thank you for your generous support. It is greatly appreciated. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
PrefPaint: Aligning Image Inpainting Diffusion Model with Human Preference | Accept (poster) | Summary: This paper tries to align diffusion models for image inpainting with human aesthetic standards via a reinforcement learning framework.
Strengths: 1. This paper proposes aligning diffusion models for image inpainting with human preferences by integrating human feedback through reinforcement learning, which improves the quality of generated images.
2. This paper presents a dataset containing 51,000 inpainted images annotated with human preferences, addressing the lack of evaluation datasets for image inpainting.
3. The proposed method can be applied to various applications, such as image extension and novel view synthesis, providing visually impressive results.
Weaknesses: 1. There has been a lot of work introducing human preferences into diffusion models, such as Human Preference Score (ICCV 2023), ImageReward (NeurIPS 2023), DPOK (NeurIPS 2023), and D3PO (CVPR 2024).
a. This paper does not fully elaborate on the work related to diffusion models with human preferences.
b. It is necessary to supplement the comparison with these methods, including differences in methodology and advantages in experimental results.
2. A user study is necessary to evaluate the quality of the generated results.
Technical Quality: 3
Clarity: 2
Questions for Authors: See Weaknesses.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors outlined potential future directions in the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Reviewer G8Cx
### Q1. Differences from Other Diffusion Alignment
**Our work is significantly different from existing diffusion alignment work**. While some related work exists on text-to-image tasks involving human preference, ours is **the first to align diffusion-based image inpainting with human preference through reinforcement learning (RL)**. First, our proposed **human-preference inpainting dataset** makes this task feasible at all: without a dataset with high-quality labels based on human preference, undertaking inpainting alignment would be **impossible**.
Technically, we propose a reward accuracy-aware weight strategy into the RL process to further accelerate the training process and boost performance, which has also not been investigated in the mentioned works. We will add further illustrations about the works related to diffusion model alignment with human preference.
### Q2. Experimental Comparisons with your Mentioned Methods
Moreover, to further resolve your concern, we have experimentally compared our method with the methods you mentioned. We briefly summarize the implementation of each method below.
(1) Human Preference Score (ICCV 2023)[1] learns a negative prompt to map the diffusion process to low-quality samples. Then, in the inference process, the negative sample is utilized in the classifier-free guidance~(CFG) to push the generation trajectory away from low-quality samples.
(2) ImageReward (NeurIPS 2023) [2] trains a reward model and then applies it as a loss metric to optimize the diffusion model end-to-end, accompanied by a reconstruction loss. We also conduct an ablation study on the reward training strategy in Table 3 of our paper: our method employs a regression-driven training strategy, while ImageReward (NeurIPS 2023) [2] uses a classification-driven one.
(3) DPOK (NeurIPS 2023) [3] simultaneously optimizes the whole trajectory of the reverse diffusion process and uses a KL-divergence penalty as regularization, avoiding a large distribution shift.
(4) D3PO (CVPR 2024) [4] adopts the RL strategy of direct preference optimization (DPO) [5], directly optimizing the model on reward-labeled data to decrease the probability of low-quality samples and increase the probability of high-quality samples.
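As an aside, the regression-driven versus classification-driven reward objectives contrasted in (2) can be sketched as follows; this is an illustrative numpy simplification, not either paper's exact loss:

```python
import numpy as np

def regression_loss(pred, score):
    """Regression-driven: fit the annotated reward score directly (MSE)."""
    return float(np.mean((pred - score) ** 2))

def pairwise_loss(pred_win, pred_lose):
    """Classification-driven (Bradley-Terry style): only require the
    preferred sample to score higher than the rejected one."""
    margin = pred_win - pred_lose
    return float(np.mean(np.log1p(np.exp(-margin))))  # -log sigmoid(margin)
```

A regression objective exploits the absolute scores that annotators provide, whereas the pairwise objective only uses their relative order.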
**The experimental results shown in the table below further validate the advantage of the proposed method.**
| Methods (All metrics the larger the better) | WinRate | T2I | Reward | CLIP | BLIP | CA |
| ------------------------------------------- | ----------- | --------- | -------- | -------- | -------- | -------- |
| (1) Human Preference Score | 58.03\% | -16.67 | 0.26 | 0.20 | 0.47 | 0.40 |
| (2) ImageReward | 65.10\% | 13.12 | 0.29 | 0.22 | 0.48 | 0.44 |
| (3) DPOK (KL weight=0.1) | 64.59\% | 11.43 | 0.32 | 0.21 | 0.48 | 0.43 |
| (3) DPOK (KL weight=1.0) | 62.67\% | 9.36 | 0.30 | 0.21 | 0.48 | 0.43 |
| (4) D3PO | 59.74\% | -19.20 | 0.26 | 0.21 | 0.46 | 0.41 |
| **PrefPaint(Ours)** | **71.27\%** | **21.53** | **0.37** | **0.23** | **0.49** | **0.45** |
We will provide further discussion in the final version of the paper.
### User Study
First, we clarify that the labeling of our dataset was provided by a **professional data labeling company**, with all annotators being professionals trained on similar tasks. The reward scoring is highly accurate and capable of assessing inpainting results under criteria based on human preference. We also provide some examples scored by our reward model in Fig. **S4** of the uploaded one-page PDF file; the assessment of inpainted images is closely aligned with human preference. Moreover, to alleviate your concerns, we carried out a user study to evaluate our superiority: we randomly selected about 130 groups of results and involved 10 users, as detailed below. The WinRate map and a diagram of the user-study platform can be found in Fig. **S3** and Fig. **S5** of the uploaded one-page PDF file.
| Methods | Kandinsky | Palette | SDv2.1 | Runway (BaseModel) | **PrefPaint (Ours)** |
| ------- | --------- | ------- | ------ | ------------------ | -------------------- |
| Rank | 4.74 | 3.79 | 3.11 | 2.26 | **1.10** |
| Var | 0.35 | 0.49 | 0.60 | 0.48 | **0.24** |
---
Rebuttal 2:
Title: The authors are looking forward to your feedback. Let's discuss.
Comment: Dear Reviewer **G8Cx**,
We sincerely appreciate the time and effort you have dedicated to reviewing our manuscript. We understand that you may be reviewing multiple papers and have a busy schedule. In our previous response, we made sure to address your remaining concerns directly and thoroughly. We eagerly await your further feedback on our responses.
Best regards,
The Authors | Summary: This paper makes the first attempt to align diffusion models for image inpainting with human preferences by integrating human feedback through reinforcement learning. The authors theoretically deduce the accuracy bound of the reward model, modulating the refinement process of the diffusion model to robustly improve both efficacy and efficiency. Additionally, they construct a dataset containing 51,000 inpainted images annotated with human preferences.
Strengths: 1. This paper provides theoretical insights into the upper bound of the error of the reward model, ensuring the reliability and accuracy of the reinforcement learning process.
2. This paper presents a dataset for image inpainting tasks that incorporates human aesthetic preferences. This dataset will facilitate further research into the evaluation of image inpainting models and aid in generating high-quality images that better align with human aesthetics.
Weaknesses: 1. Since reward models and feedback learning trained based on human preferences have been introduced in Text-to-Image Generation (T2I), it seems they are simply applying the same process from T2I to Image Inpainting. This work should clarify the distinctions from T2I to highlight its innovation.
2. The core contribution of this paper is Equation 11; however, its rationale, implementation details, and effectiveness lack explanation and validation. First, the selection of hyperparameters $k$ and $b$ is not addressed, despite their importance. Second, there is a lack of detailed explanation from Equation 11 to its implementation in the model. Third, the choice of using $e$ instead of other functions needs to be validated. Finally, the description of the ablation study on $k$ in Table 3 is unclear, making it difficult to assess its effectiveness. There is also a lack of ablation analysis for the $b$ and $e$.
3. The weights assigned to the three scores in the manuscript are [0.15, 0.15, 0.7]. Why are these scores combined into a weighted score instead of being used individually? Furthermore, how were these weights determined? The basis for this weighting is unclear.
4. What does the "Rank" metric in Table 2 signify? It lacks explanation.
5. There are details missing regarding the dataset annotation. Such as the number of annotators employed, whether training was required for them, and the amount of time each individual spent on the annotation task.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to the weaknesses.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors provide the future work in the last section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Reviewer Hftn
### Q1. Differences from T2I Methods
We confirm that our method is **NOT** simply applying the text-to-image alignment scheme to image inpainting. The **technical novelty** of the proposed method primarily lies in modeling reward accuracy and adaptively controlling the regularization reward strength, which has not been investigated by T2I methods before. Moreover, the ablation studies presented in the right part of Table 3 in the main paper validate the superiority of our design, which achieves a **+106\%** speed acceleration while maintaining high performance (even increasing WinRate by up to **1.7\%**). We will add further discussion comparing our method with current T2I methods. **More importantly**, we note that this is the **first** work to explore the human preference alignment problem for diffusion models on the task of image inpainting. To make this task feasible, we built the **first dataset** in which inpainted images are labeled with human preferences by a professional data annotation company. Without such a dataset, it would be almost **impossible** to conduct this work, and we believe it will advance the field. Our dataset holds significant research value and has application scenarios in several related areas, including image quality assessment and other tasks involving human preference. We also provide some examples scored by our reward model in Fig. **S4** of the uploaded one-page PDF file; the assessment of inpainted images closely aligns with human preference.
### Q2. More Explanations of Eq. (11)
**Selection of Hyper-parameters**. The critical factor in this parameterization is the range of the weight factor $\gamma$. In rows (e) and (f) of the left part of Table 3 in our manuscript, we experimentally validate the two settings $k=0.05, b=0.7$ and $k=0.065, b=0.9$. The experimental results show that our selection is better than the other settings. Moreover, please refer to our response to the third question, where we investigate the different hyper-parameters and weighting functions in detail.
**Implementation details**. We first prepare the matrix $\mathbf{V}^{-1}$ by calculating $\mathbf{V} = \mathbf{Z}^T\mathbf{Z} + \lambda \mathbf{I}$, where $\mathbf{Z}$ represents the concatenation of feature embeddings before the last MLP layer of the reward model, encompassing all training dataset samples. During the diffusion training process, for each sample, we obtain a feature $z$ from the reward model. We then calculate $\gamma$, which serves as the weight factor for the final RL loss, adaptively adjusting the magnitude of the gradient. Thanks for the comments; we will include the corresponding content in the final version.
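As an illustration of the implementation details described above, here is a minimal NumPy sketch; the feature dimension, sample count, and $\lambda$ are placeholder assumptions, since the rebuttal does not specify them (only $k$ and $b$ follow the stated values):

```python
import numpy as np

# Placeholder sizes and regularizer; only k and b follow the rebuttal.
rng = np.random.default_rng(0)
n_train, d = 1000, 64
lam, k, b = 1e-3, 0.05, 0.7

# Z: concatenated feature embeddings (before the last MLP layer of the
# reward model) for all training-dataset samples.
Z = rng.standard_normal((n_train, d))
V = Z.T @ Z + lam * np.eye(d)      # V = Z^T Z + lambda * I, prepared once
V_inv = np.linalg.inv(V)

# During diffusion training: embed one sample, compute ||z||_{V^{-1}},
# and turn it into the adaptive weight factor gamma for the RL loss.
z = rng.standard_normal(d)
norm_v = np.sqrt(z @ V_inv @ z)
gamma = np.exp(-k * norm_v + b)    # exponential form gamma = e^{-kx + b}
assert gamma > 0.0
```

The exponential form here corresponds to $\gamma = e^{-kx+b}$ with $k=0.05$, $b=0.7$; the subsequent response compares it against linear and constant alternatives.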
**Other Functions \& Ablation Study**. To further resolve your concern, we have applied another, linear function to parameterize $\gamma$, as shown in the table below; specifically, $\gamma = -1.9\,\| \boldsymbol{z}\|_{\mathbf{V}^{-1}} + 0.06$. **The experimental results indicate that the exponential function provides the best regularization effect. In contrast, the linear function and a static constant do not fully exploit the regularization effect of the reward upper bound.**
| Function ($x = \| \boldsymbol{z}\|_{\mathbf{V}^{-1}}$)| Range |k | b | WinRate | Reward |
|----|----|----| ---- | ----| ---- |
| $\gamma = e^{- kx + b}$(Ours) | [1.00, 1.87] | 0.05| 0.7| **71.27\%** |**0.37**|
| $\gamma = e^{- kx + b}$ | [1.00, 2.23] | 0.065 | 0.9| 70.47\% | 0.36|
| $\gamma = e^{- kx} + b_1/x + b_2$ | [1.10, 1.78] |0.10| [0.1, 0.85]| 70.07\% | 0.37|
| $\gamma = e^{- kx} + b_1/x + b_2$ | [1.10, 2.22] |0.12| [0.8, 0.85]| 69.95\% | 0.36|
| $\gamma = -kx + b$|[1.10, 1.81] |1.9 | 0.06| 60.28\% | 0.28 |
| $\gamma = b$ |--|--|1.43|65.95\%|0.34|
### Q3. Weight Range
Since RL requires a reward value to guide the training direction, we must combine the scores to decide whether to push the diffusion model away from the reconstruction sample or bring them closer together. We determine these weights based on our scoring scheme. Specifically, the first two scores focus on partial aspects, namely structure and texture, respectively, while the third score reflects the overall impression, which is more comprehensive. Therefore, it is reasonable to assign lower weights to the first two metrics and a higher weight to the final one. Additionally, to further address your concern, we asked the labeling supplier company to directly rank these samples in order to validate the consistency between our weighted combination of scores and the human-labeled ranking. Specifically, several reconstructions were ranked both by human experts and by the weighted scoring schemes listed below, and we then calculated the consistency between the human ranking and each weighted-score ranking. The experimental results, shown in the following table, validate the proposed weighting scheme through its high consistency with the human scoring. The scoring system is shown in Fig. **S6** of the uploaded one-page PDF file.
|Weighted Score|Rank1(\%)|Rank2(\%)|Rank3(\%)|All(\%)|
|-----|-----|----|-----|-----|
| **0.15, 0.15, 0.7 (Ours)** |**0.93**|**0.92**|**0.93** |**0.93** |
| **0.10, 0.10, 0.8**|**0.93**|**0.92**|**0.93**|**0.93** |
| 0.20, 0.20, 0.60| 0.92| 0.92|0.92|0.92|
| 0.30, 0.30, 0.40| 0.87|0.86|0.88|0.87|
| 0.50, 0.40, 0.10|0.85|0.87|0.84|0.85|
| 0.80, 0.10, 0.10|0.84|0.85|0.85|0.85|
### Q4. Meaning of the Metric "Rank"
“Rank” in Table 2 indicates the average rank of a method across all metrics (from best to worst); thus, a lower Rank indicates better performance.
### Q5. Dataset Details
Note that the labeling is provided by a **professional data annotation company**. All annotators are professionals trained in similar tasks. The annotation time for each pair (3 samples) averages about 2 minutes. A total of 24 annotators were employed. We will provide more details in the final version.
### Q6. Limitations
We have **indeed** discussed some limitations in the final paragraph of the conclusion section.
---
Rebuttal 2:
Title: The authors are looking forward to your feedback. Let's discuss.
Comment: Dear Reviewer **Hftn**,
We sincerely appreciate the time and effort you have dedicated to reviewing our manuscript and your favorable recommendation. We understand that you may be reviewing multiple papers and have a busy schedule. In our previous response, we made sure to address your remaining concerns directly and thoroughly. We eagerly await your further feedback on our responses.
Best regards,
The Authors
---
Rebuttal Comment 2.1:
Title: Thanks for the response.
Comment: By combining the other reviews and responses, I consider that this work has contributions on leveraging human preference to diffusion-based image inpainting through reinforcement learning, and the provided dataset also helps related research. Thus I would like to upgrade my rating.
---
Reply to Comment 2.1.1:
Title: Thanks for your recommendation.
Comment: The authors sincerely appreciate your feedback. | Summary: This paper is the first to use reinforcement learning in diffusion-based image synthesis. This significantly improves the quality since image synthesis is usually a one-to-many mapping, which may not be suitable for conventional learning methods. To generate reward functions for RL, this paper also gathers and releases a new dataset on image synthesis with human preference annotation. Additionally, the paper computes the theoretical upper bound on the error of the reward for more efficient RL training.
Strengths: 1. Release a new dataset for image inpainting and outpainting benchmarks with human preference annotation. The generation process of the dataset is well-documented.
2. First incorporate RL with diffusion-based method on image inpainting and outpainting.
3. The authors provide extensive experiments, supplementary materials with results on multiple datasets, and a project page to demonstrate their results.
Weaknesses: 1. Minor question on the motivation of amplification factor.
2. Possible missing references.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. In L152, how often do these "relatively large errors" occur? Will the relatively smaller errors balance out these large errors? Since the amplification $\gamma$ greatly improves the performance, does this suggest that the errors occur frequently so that the static $\gamma$ do not handle these very well? From Fig.2, it is shown that the range of $\| z \|_{v^{-1}}$ is [0, 1]. Does amplification $\gamma$ for $k=0.05$ and $b=0.7$ have a range of [1.9, 2.0] (which seems narrow and thus static)?
2. How does this paper compare to [1]? [1] leverage the BERT model for reward scoring. How do you justify the choice of human evaluation instead of foundation models for reward function? Do you have ablation experiments to backup the decision?
[1] Black, Kevin, et al. "Training diffusion models with reinforcement learning." arXiv preprint arXiv:2305.13301 (2023).
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Reviewer yfx6
### Q1. Error of Reward Model \& Amplification Factor
We computed statistics of the reward estimation errors; the results are shown in Fig. **S1** of the uploaded one-page PDF file, and the table below briefly summarizes them. Although the proportion of samples with very large errors is small, the performance gain of our method may lie in a more suitable choice of amplification function, as evidenced by the table in our response to Q2 from Reviewer **Hftn**.
We apologize for the confusion regarding the reward range. The x-axis of Fig. 2 in the manuscript is normalized; the actual range of $\| \boldsymbol{z} \|_{\mathbf{V}^{-1}}$ lies approximately between 1.5 and 14. Thus, the range of the amplification factor is approximately between $e^{0}=1$ and $e^{0.625}\approx 1.86$.
| Reward Error | [0, 0.25) | [0.25, 0.5) | [0.5, 0.75) | [1.0, 1.25) | [1.25, 1.5) | [1.5, 1.75) | [1.75, +inf) |
| ------------ | --------- | ----------- | ----------- | ----------- | ----------- | ----------- | ------------ |
| Percentage | 43.99% | 30.57% | 16.24% | 6.53% | 1.97% | 0.59% | 0.06% |
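For reference, the binning above can be reproduced with a short NumPy sketch; the error array here is synthetic stand-in data, not the actual reward estimation errors:

```python
import numpy as np

# Synthetic stand-in for |predicted reward - human label|; the real
# errors come from the reward model, which is not available here.
rng = np.random.default_rng(1)
errors = np.abs(rng.standard_normal(10_000)) * 0.4

# Right-open bins matching the table, with a final open-ended bin.
bins = [0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, np.inf]
counts, _ = np.histogram(errors, bins=bins)
percentages = 100.0 * counts / counts.sum()
assert np.isclose(percentages.sum(), 100.0)
```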
### Q2. Reward Model
We have experimentally validated the consistency between the proposed method and the BERT score [1]. As shown in Fig. **S2** of the uploaded one-page PDF file, where deeper color indicates a larger BERT-score error, plenty of samples have BERT scores that are **unrelated** to, or even contradictory to, the human labeling, as shown in the upper-left and lower-right regions. We also provide some examples scored by our reward model in Fig. **S4** of the uploaded one-page PDF file; the assessment of inpainted images closely aligns with human preference. Thus, our proposed human-labeled dataset **is necessary** for the alignment of inpainting tasks. Note that our dataset is labeled by a professional data annotation company. We will add more discussion of [1] in the final version. Thanks for the advice.
[1] Black, Kevin, et al. "Training diffusion models with reinforcement learning." arXiv preprint arXiv:2305.13301 (2023).
---
Rebuttal 2:
Title: The authors are looking forward to your feedback. Let's discuss.
Comment: Dear Reviewer **yfx6**,
We sincerely appreciate the time and effort you have dedicated to reviewing our manuscript and your favorable recommendation. We understand that you may be reviewing multiple papers and have a busy schedule. In our previous response, we made sure to address your remaining concerns directly and thoroughly. We eagerly await your further feedback on our responses.
Best regards,
The Authors
---
Rebuttal Comment 2.1:
Comment: The authors have addressed all of my concerns.
---
Reply to Comment 2.1.1:
Title: Thanks for your recommendation.
Comment: The authors sincerely appreciate your feedback. | Summary: This paper attempt to align diffusion models for image inpainting with human aesthetic standards through reinforcement learning framework. To train the model, this paper construct a dataset containing 51,000 inpainted images annotated with human preferences. Extensive experiments on inpainting comparison and downstream tasks, such as image extension and 3D reconstruction, is provided in this paper.
Strengths: 1. This paper is well presented and easy to read.
2. This paper provides detailed experiments to validate the effectiveness of the proposed method.
3. This paper has collected a small-scale dataset of human preferences for image inpainting results, which can be useful for the research field.
Weaknesses: 1. This paper seems to use a common reinforcement learning practice on text-to-image diffusion models. What is the novelty of the proposed method? Why would this practice be better than simply fine-tuning the model on high-quality (human-preferred) inpainting data?
2. There are missing citations and comparisons with the following methods:
- A Task is Worth One Word: Learning with Task Prompts for High-Quality Versatile Image Inpainting
- BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion
- Hd-painter: High-resolution and prompt-faithful text-guided image inpainting with diffusion models
3. Can you provide an ablation of how the training data quantity influence final results? The training dataset only contain 17,000 images and 51,000 inpainted samples, which is quite small in diffusion model training.
Technical Quality: 2
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have included limitations in the paper, but I would recommend that they discuss the limitations and negative societal impact in more depth.
Flag For Ethics Review: ['Ethics review needed: Data quality and representativeness', 'Ethics review needed: Discrimination, bias, and fairness', 'Ethics review needed: Human rights (including surveillance)']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Reviewer 85gD
### Q1. Difference between the Proposed Method, Common Reinforcement Learning, and Supervised Fine-tuning
In the following, we first clarify that our method is **NOT** a straightforward application of common reinforcement learning to diffusion model alignment. Then, we address your question about supervised fine-tuning (i.e., directly tuning on high-quality data) versus reinforcement learning-based tuning.
(1) **Not the same task as the text-alignment RL task.** While some related works exist in the area of text-to-image tasks involving human preference, our work is the first to introduce reinforcement learning (RL)-based alignment into the task of image inpainting. First, our proposed human-preference inpainting dataset is what makes this task feasible: without a dataset with high-quality human-preference labels, undertaking inpainting alignment would be impossible. Note that our dataset is annotated by a professional data annotation company. The **technical novelty** of the proposed method, compared to conventional RL, primarily lies in our modeling of reward accuracy and the adaptive control of reward regularization strength using the reward error upper bound, as detailed in Secs. 3.2 and 3.3 of our paper.
(2) **Supervised fine-tuning cannot address our task.** We first clarify that our baseline model (the pretrained diffusion model) has already been trained on plenty of GT samples for image inpainting, which are even better than our high-preference samples. However, the limited performance of the pretrained model, as shown by the comparison of Runway and Ours in Tables 1 and 2 of our paper, indicates that supervised fine-tuning struggles to accurately and effectively steer model generation toward preference alignment.
Moreover, to further address your concerns, we have experimentally validated the neural network performance using fine-tuning based methods.
| No. | Method|Train Dataset| Val Dataset|T2I| BLIP|CLIP|CA| Reward|
|----|----|----|----|----|----|----|----|----|
| (a) | Original (Best Samples) | -- | Training Prompt | 3.61 | **0.49** | 0.22 | 0.45 | 0.19 |
| (b) | FineTuning-Model | Training Prompt | Training Prompt | -4.89 | **0.49** | 0.22 | 0.44 | 0.12 |
| (c) | FineTuning-Model | Training Prompt | Val Prompt| -13.61 | 0.48 | 0.21 | 0.40 | 0.02 |
| (d) | RL-Model(Ours) | Training Prompt | Val Prompt | **14.90** | **0.49** | **0.23** | **0.45** | **0.37** |
where (a) denotes the high-quality subset generated from the original samples and selected by our reward labels; (b) represents the images reconstructed by the fine-tuned model; (c) corresponds to the fine-tuned model's performance on the validation dataset; and (d) indicates the samples generated by the RL-based model, evaluated on the validation dataset. We observe that the fine-tuned model does not perform well on the training set and performs even worse on the validation set. In contrast, the proposed method accurately aligns with human preferences.
### Q2. Comparisons with Additional Inpainting Methods
We have experimentally compared our method with the papers you mentioned. For all these methods, we assessed performance using their publicly released models, evaluated comprehensively on our test dataset. As shown in the table below, our method significantly outperforms all the compared methods.
| Metrics (the larger the better) | T2I | BLIP | CLIP | CA(Incep.) | Reward | \# Param(M) | Infer. Time(s) |
| ------- | ---- | ------ | ------ | ----- | ----- | ------- | -------- |
| PowerPaint(v-1) | -4.44 | 0.46 | 0.21 | 0.42 | -0.057 | 819.72 | **5** |
| PowerPaint(v-BrushNet) | -3.84 | 0.46 | 0.20 | 0.42 | -0.036 | 1409.88 | 16 |
| BrushNet(realistic-V15VAE) | 1.26 | 0.46 | 0.22 | 0.43 | 0.137 | 1409.86 | 16 |
| HdPaint(ds8) | -4.57 | 0.47 | 0.21 | 0.44 | -0.059 | **451.47** | 60 |
| **PrefPaint(Ours)** | **11.60** | **0.49** | **0.23** | **0.45** | **0.374** | 819.72 | **5** |
Moreover, we also tested the WinRate against our baseline model (Runway). Our method also greatly surpasses all compared methods, which validates the effectiveness of the proposed RL-based alignment scheme.
| WinRate (v.s. BaseModel) (the larger the better) | S=1 | S=2 | S=3 |
| ----- | ----| ------ | ----- |
| PowerPaint(v-1)[ECCV2024] | 27.06\% | 39.92\% | 47.38\% |
| PowerPaint(v-BrushNet)[ECCV2024] | 29.86\% | 43.12\% | 52.01\% |
| BrushNet(realistic-V15VAE)[ECCV2024] | 49.49\% | 62.83\% | 69.22\% |
| HdPaint(ds8)[Arxiv2023] | 33.37\% | 43.41\% | 49.03\% |
| **PrefPaint(Ours)** | **71.27\%** | **85.88\%** | **93.50\%** |
### Q3. Size of Training Dataset
We believe you have a **misunderstanding** here. We clarify that the dataset is **primarily used to train the reward model rather than the diffusion model**. With an accurate reward model, we can easily train diffusion models on scaled datasets by allocating more prompts. We empirically validate that the dataset size is sufficient for training a reward model. Specifically, as shown below, we train the reward model with 10K, 20K, 30K, and 50K data samples, respectively.
| Size | 10K | 20K | 30K | 50K |
| ----- | ----- | ----- | ----- | ---- |
| Reward Accuracy ↑ | 72.5\% | 74.0\% | 75.3\% | 75.9\% |
We observe that the accuracy of the reward model becomes saturated as the size of the training dataset increases. Therefore, we believe that 50K data samples are sufficient for training an accurate reward model. Additionally, the proposed method has already outperformed SOTA, validating its effectiveness.
---
Rebuttal 2:
Title: The authors are looking forward to your feedback. Let's discuss.
Comment: Dear Reviewer **85gD**,
We sincerely appreciate the time and effort you have dedicated to reviewing our manuscript. We understand that you may be reviewing multiple papers and have a busy schedule. In our previous response, we made sure to address your remaining concerns directly and thoroughly. We eagerly await your further feedback on our responses.
Best regards,
The Authors
---
Rebuttal Comment 2.1:
Title: Response to the author's rebuttal
Comment: I change my score to 5 since the authors' feedback addressed most of my concerns. I strongly recommend the authors add the new experiment results into their final version.
---
Reply to Comment 2.1.1:
Title: Thanks for your recommendation.
Comment: The authors appreciate your feedback. We promise that the comparisons with additional methods listed in the rebuttal will be incorporated into the final version or the supplementary materials.
Rebuttal: ## General Response
We thank all reviewers for their time and constructive comments. We sincerely thank Reviewer **yfx6** for affirming the motivation and novelty behind our task, as evidenced by comments such as "Release a new dataset..." and "First incorporate RL with image inpainting...", as well as for acknowledging the thoroughness of our experimental evaluations, as highlighted by comments like "The authors provide extensive experiments, ...".
Furthermore, we are thankful to Reviewer **Hftn** for acknowledging the theoretical contributions outlined in our paper, as illustrated by comments such as "This paper provides theoretical insights into ...", and for recognizing the significance of our dataset in addressing this emerging task, as indicated by the comment "This paper presents a dataset for image in...". We value the insightful feedback provided by all reviewers, which has greatly enriched our work.
We believe we have **clearly and directly** addressed all concerns. Here, we would like to summarize a few key clarifications regarding the contributions of our work.
(1) Our method makes the **first** exploration of the diffusion model human preference alignment problem on the task of image inpainting and proposes a novel benchmark with preference scores labeled by human experts. While many T2I models use RL to enhance the consistency between text meaning and image content, our method specifically focuses on the task of image inpainting. As far as we know, the alignment of diffusion models in this context has not been explored before.
(2) We propose a **human preference-aware inpainting dataset** that enables image inpainting alignment based on reinforcement learning. Without such a dataset with high-quality human-preference labels, undertaking inpainting alignment would be impossible. Note that our dataset is annotated by a professional data annotation company.
(3) The **technical novelty** of the proposed method mainly lies in modeling the upper bound of reward estimation error and using it to adaptively control the reinforcement learning regularization strength, an approach that, to the best of our knowledge, has not been previously investigated.
(4) The reason for using RL for diffusion alignment lies in the differences between the training and testing processes of diffusion models. Specifically, the diffusion model is trained on individual steps from a decoupled probabilistic flow, while its inference process involves running the entire trajectory and projecting random noise onto data samples. Since the score function (noise) for each step cannot be estimated exactly, the accumulated error over the reverse steps may cause the reconstructions to drift away from the targets. RL can optimize the entire trajectory (Markov chain) and account for the accumulated error of each step, as the reward is measured on the final reconstruction.
Thanks again for the time and effort. We appreciate any further questions and discussions.
**Last but not least, we will make the reviews and author discussion public regardless of the final decision. Besides, we will include the newly added experiments and analysis in the final manuscript/supplementary material.**
Pdf: /pdf/888eb857b7ba0a2fa639d18af7956aa76c9466df.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
TrajCLIP: Pedestrian trajectory prediction method using contrastive learning and idempotent networks | Accept (poster) | Summary: The paper proposes a trajectory prediction approach for multi-agent configurations. The historical and future trajectories are encoded in terms of spatio-temporal interaction features (STIF) using Agentformer [28] and scene-agent interaction features (SAIF) using a transformer based on Fourier transform. The historical and future trajectory encoders share the same architecture but have different parameters. The encoded features are then summed and passed to a generative model [21] to represent the trajectory space. Finally, a transformer predicts the next position. The encoded features for historical and future trajectories (STIF and SAIF) are trained using CLIP [16] to force the historical and future encoded features to share the same space and help achieve temporal continuity for the trajectory prediction task. The approach is evaluated on the ETH-UCY, SDD and SNU datasets and shows promising results compared to related methods.
Strengths: - The paper is well written and easy to follow. In addition, the approach is well presented and the evaluation is clearly described.
- The proposed approach achieves promising results on several standard benchmarks. The results are convincing.
- The paper provides a wealth of ablation studies.
- The proposed combination of coding and decoding is interesting and makes a solid contribution. In particular, the adaptation of CLIP from image-text pairs to the historical-future trajectory feature space is novel.
Weaknesses: - The paper makes various solid and small contributions, but it lacks a novel idea. This is a limitation, but it's not a major point.
- The related work would benefit from a more direct comparison of the proposed method with the related works.
- The motivation for relying on an idempotent generative network over other generative models, e.g. GANs, is not well explained.
- According to the literature, the article "Human Trajectory Prediction via Neural Social Physics" by Yue et al. (2022) achieves average ADE and FDE of 0.17 and 0.24 respectively on ETH, while 6.52 ADE and 10.61 FDE are achieved on SDD. The comparisons should include all state-of-the-art methods.
Technical Quality: 3
Clarity: 3
Questions for Authors: - For the Online Adaptation task, prediction times are another crucial factor to consider alongside performance metrics. Is there any reason not to report them?
- Could the authors provide details of the hyper-parameters used during training? Implementation details are missing
- There are significant discrepancies between the results you report in Table 1 for Traj++ and the results reported in other publications, such as Traj++ in "Human Trajectory Prediction via Neural Social Physics". Could the authors provide more details on the experimental setup?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper does not dedicate space to discuss the limitations. This is a clear missing point.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: The motivation for relying on an idempotent generative network over other generative models, e.g. GANs, is not well explained.
Further explanation of our motivation for choosing the idempotent generation framework: for the trajectory prediction task, the trajectory features of an agent should be consistently aligned in the feature space of both historical and future trajectories. However, GANs and CVAEs (e.g., SocialGAN) both have separate encoding-decoding structures, and diffusion inference samples the initial space for generation, which leads to a certain degree of inconsistency in the feature space. The idempotent generation framework is an affine transformation in the feature space, which can effectively bridge the gap in the feature space caused by the generation framework. In addition, the idempotent framework has the characteristic of global projection, which more effectively improves generalization ability, allowing our method to perform well on different tasks without fine-tuning. Our extensive quantitative and qualitative experiments fully demonstrate this point. Please refer to Reviewer AYPL Q3 for more experimental analysis.
> Q2: The article NSP-SFM [1] performs better; the comparisons should include all state-of-the-art methods.
Further explanation of the comparison with state-of-the-art methods; we will compare with and analyze the methods mentioned by the reviewer in the final version. (1) We acknowledge that the model in NSP-SFM [1] has better quantitative predictive performance than TrajCLIP. However, NSP-SFM first relies on a CNN-based network to extract goal information from the map, which is not feasible for many complex scene prediction tasks where goal information is unavailable. Second, the method's multimodal trajectory estimation is not achieved by a generative model and does not consider the diversity of pedestrian intentions during prediction; moreover, the network has a large number of parameters, more than 50 times that of other trajectory prediction networks. In contrast, TrajCLIP's predictions do not rely on goal information and can be made independently, without being limited by prediction anchor points, and with a smaller number of parameters. We focus on comparing with similar methods to demonstrate the model's performance and capabilities with respect to the prediction task's focus points.
We compared with four recent state-of-the-art trajectory prediction methods from CVPR 2024 in Table 4. Our method achieves the best ADE/FDE results on the ETH-UCY dataset.
> Q3: For the Online Adaptation task, prediction times are another crucial factor to consider alongside performance metrics. Is there any reason not to report them?
We have included implementation details in Table 1 in attached PDF. Our tiny model meets the requirements for a real-time prediction, as it can predict trajectories in 0.0615 seconds. Additionally, our lightweight model is only 3.45MB in size, and its computational complexity is relatively low compared to its model size, making it deployable on most hardware platforms.
> Q4: Could the authors provide details of the hyper-parameters used during training? Implementation details are missing.
For the training process, our batch size was set to 64, epochs to 100, and the learning rate to 0.01, which was halved every 25 epochs. We used the Adam optimizer. Our model was trained on an RTX 3090; the encoder and contrastive-learning pre-training required approximately 8 GPU-hours, and the full framework training took about 9.6 GPU-hours.
> Q5: There are significant discrepancies between the results you report in Table 1 for Trajectron++ and the results reported in other publications, such as Trajectron++ in "Human Trajectory Prediction via Neural Social Physics". Could the authors provide more details on the experimental setup?
Explanation of the reproduction details for Trajectron++. When reproducing Trajectron++ [3], there was a data leakage issue in the originally released code, leading to better metrics. The original author acknowledged this in GitHub issue #26: "The function used for differentiating is **derivative_of** and calls **np.gradient**. np.gradient approximates in different ways for boundary points and non-boundary points. For example, when calculating velocities, np.gradient will give $v_x[t] = (x[t+1]-x[t-1])/(2 \cdot dt)$ at time $t$." This method inadvertently includes future trajectory information in the last frame of historical data, which significantly affects the model's predictive performance. The author of Trajectron++ suggests using a causal differentiation method, such as **np.ediff1d**, to preprocess velocity and acceleration inputs, avoiding data leakage caused by using information beyond the last observed time step. (Due to conference policy restrictions, we cannot provide a link.) After resolving this data issue, we reproduced the experiment using the method suggested by the author, which explains the differences from the values cited in other papers. In addition, the recent CVPR 2024 paper SingularTrajectory [2] and many other works adopt the same practice when citing Trajectron++, and the metrics are consistent.
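The boundary behavior described in the quoted issue can be illustrated with a toy example (position values hypothetical, dt = 1):

```python
import numpy as np

# Toy 1-D positions sampled at dt = 1.
x = np.array([0.0, 1.0, 3.0, 6.0])

# np.gradient uses central differences at interior points:
# v[t] = (x[t+1] - x[t-1]) / (2*dt), so the velocity at the last
# observed history frame depends on the first *future* frame.
v_gradient = np.gradient(x)

# np.ediff1d is purely backward-looking: v[t] = x[t] - x[t-1].
v_causal = np.ediff1d(x, to_begin=x[1] - x[0])
```

Here `v_gradient[2]` uses `x[3]`; if frame 2 were the last observed frame, that value would leak future information, whereas `v_causal[2]` depends only on frames 1 and 2.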
[1] Yue, Jiangbei, Dinesh Manocha, and He Wang. "Human trajectory prediction via neural social physics." European conference on computer vision. Cham: Springer Nature Switzerland, 2022.
[2] Bae, Inhwan, et al. "SingularTrajectory: Universal Trajectory Predictor Using Diffusion Model." CVPR. 2024.
[3] Salzmann, Tim, et al. "Trajectron++: Dynamically-feasible trajectory forecasting with heterogeneous data." Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVIII 16. Springer International Publishing, 2020.
---
Rebuttal Comment 1.1:
Comment: The rebuttal does a very good job of addressing my concerns and most of the other reviewers' points. For this reason, I am moving to a positive rating (updated scores). | Summary: This paper proposes to utilize contrastive learning for pedestrian trajectory prediction. A STIF encoder is used to extract spatial-temporal features and is trained with data augmentation. A SAIF utilizes the Fast Fourier Transform to extract the interaction information among the agents. The authors incorporate the idempotent loss for training. Experiments on three datasets show the efficacy of the proposed method.
Strengths: 1. The idea of using contrastive learning and idempotent loss is interesting.
2. The performance is good. The proposed method achieves significantly better results than SOTA on SDD and others.
Weaknesses: 1. I have some doubts about the intuition of using CLIP between history and future trajectories. See questions below.
2. There is no speed or computational cost comparison. Since the method involves multiple stages, a comparison of training/inference costs is needed.
3. The proposed method does not consider visual scene information.
4. There is no error analysis. When does the method fail and why?
Minor comments:
Section 2.2 Generalization Framework -> Generative Framework. They are different things.
The text in Figure 2 is too small to read.
Line 263 “We conducted comparative ”, Line 270 “we compare our model”. Better to use a consistent present tense when describing your method.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. I have some doubts about the intuition of using CLIP learning between the historical feature and the future trajectory feature. The authors propose to align historical feature space with future feature space (Line 186). Does this lead to identical trajectory prediction, i.e., copying the historical trajectory as the future trajectory? Different from aligning images and text that have the same semantic meaning, the historical trajectory and future trajectory of the same agent may be different due to the difference in the spatial location (hence the surroundings are different).
2. Since the method involves multiple stages, how does the model compare to the baselines in terms of inference speed and training cost?
3. There is no error analysis. When does the method fail and why?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors did not discuss the limitations of the method and failure cases.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1:I have some doubts about the intuition of using CLIP between history and future trajectories.
Regarding further clarification on the unified encoding of historical and future trajectories using CLIP. In our method, the trajectory encoders are designed to capture motion characteristics rather than static spatial data, enabling synchronization of feature spaces between the future and historical trajectory encoders. Considering that inputs for pedestrian trajectory prediction consist of sequential relative coordinates, the encoders focus on the agent's trajectory, surrounding dynamics, and their interactions. The decoder then infers future locations from these features. For datasets like ETH\_UCY, where 20 frames equal 5 seconds, spatial consistency is maintained by analyzing the same two frames. The use of STIF and SAIF is for modeling trajectory and interaction features, respectively, aligning them in the feature space to prevent redundancy in trajectory predictions that could arise from spatial alignment.
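This is not the authors' exact implementation; a minimal NumPy sketch of a CLIP-style symmetric alignment loss between batch-paired historical and future embeddings (all names illustrative) could look like:

```python
import numpy as np

def _xent(logits, targets):
    # Row-wise softmax cross-entropy against integer targets.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

def clip_align_loss(z_hist, z_fut, temperature=0.07):
    # Symmetric contrastive loss: each agent's historical embedding is
    # pulled toward its own future embedding and pushed from the others.
    z_h = z_hist / np.linalg.norm(z_hist, axis=1, keepdims=True)
    z_f = z_fut / np.linalg.norm(z_fut, axis=1, keepdims=True)
    logits = (z_h @ z_f.T) / temperature
    targets = np.arange(len(z_h))
    return 0.5 * (_xent(logits, targets) + _xent(logits.T, targets))
```

Aligned pairs yield a near-zero loss, while permuting the future embeddings relative to their histories yields a large loss, which is the behavior the pre-training stage relies on.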
> Q2: There is no speed or computational cost comparison. Since the method involves multiple stages, a comparison of training/inference costs is needed.
For the training process, the batch size was set to 64, epochs to 100, and the learning rate to 0.01, halved every 25 epochs. We used the Adam optimizer. Our model was trained on an RTX 3090, with the encoder and contrastive learning pre-training requiring approximately 8 GPU hours, and the full framework training taking about 9.6 GPU hours.
For the inference process, as shown in Table 1, our tiny model meets the requirements of a real-time prediction task in terms of both inference speed and model size, as it can predict trajectories in 0.0615 seconds. Additionally, this lightweight model is only 3.45MB in size, and its computational complexity is relatively low compared to its model size, making it deployable on most hardware platforms.
> Q3:The proposed method does not consider visual scene information.
Thank you for the new perspective provided by the reviewer. Our focus lies in how to align the trajectory features and interaction features of agents in the historical and future spaces within trajectory prediction, and we are concerned with whether this method can be generalized to various subtasks at a low cost. Due to the high real-time requirements of autonomous driving, we chose not to introduce image information. Admittedly, using the features of visual information from map scenes to improve the performance of trajectory prediction is indeed another angle worth paying attention to. We will consider how to efficiently integrate visual information to further enhance the predictive effect of the model in our subsequent work.
> Q4: Minor comments: Section 2.2 Generalization Framework -> Generative Framework. They are different things. The text in Figure 2 is too small to read. Line 263 “We conducted comparative ”, Line 270 “we compare our model”. Better to use a consistent present tense when describing your method.
We will address these issues (e.g., typos, grammar mistakes) in the camera-ready version.
> Q5:There is no error analysis. When does the method fail and why?
We express our gratitude for the feedback. Due to space limitations, a summary of the limitations of our methodology will be presented in the camera-ready version. Our approach primarily focuses on the modeling of trajectory prediction tasks, in line with common practices that omit scene imagery as input. Therefore, empirical validation of the model's performance in practical scenarios is essential. As illustrated in Figure 1 in the attached PDF, our method falls short in ensuring collision avoidance in complex, high-density environments, necessitating further research.
The proposed model has the following main components:
1. Spatio-Temporal Interaction Feature (STIF) and Scene-Agent Interaction Feature (SAIF) based encoder
2. CLIP for past and future feature alignment
3. IGN for mapping from past to future in aligned feature space.
The authors also provide a theoretical analysis of IGN, CLIP and data augmentation, but some issues need to be addressed.
Strengths: 1. First-time usage of IGN in pedestrian trajectory prediction task, which is novel.
2. The paper is well-written and easy to follow.
3. Experiment on various kinds of pedestrian trajectory prediction tasks is sufficient and a universal model for those tasks is the main tendency.
Weaknesses: 1. This is not the first work to use contrastive learning in pedestrian trajectory prediction, since it was used in long-tailed pedestrian trajectory prediction for forming better latent representations [1, 2]. The authors should cite them and clarify the difference.
2. Missing citation and comparison for SingularTrajectory[3], which is also a universal model for pedestrian trajectory prediction. It defines a singular space that can well represent trajectory features for multiple tasks without alignment between past and future.
3. Some questions need to be addressed. See questions.
[1] Wang, Yuning, Pu Zhang, Lei Bai, and Jianru Xue. "Fend: A future enhanced distribution-aware contrastive learning framework for long-tail trajectory prediction." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1400-1409. 2023.
[2] Makansi, Osama, Özgün Cicek, Yassine Marrakchi, and Thomas Brox. "On exposing the challenging long tail in future prediction of traffic actors." In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 13147-13157. 2021.
[3] Bae, Inhwan, Young-Jae Park, and Hae-Gon Jeon. "SingularTrajectory: Universal Trajectory Predictor Using Diffusion Model." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17890-17901. 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Alignment is quite useful for IGN but not as much useful as IGN when applying to CVAE and diffusion according to ablative experiments. Why was that? This issue needs more analysis and clarification.
From the ablative experiments, when CLIP is not used, IGN is worse than CVAE and diffusion (from b, d, f in Tab.4). Meanwhile, CLIP cannot provide pronounced improvement as IGN when applied on CVAE and diffusion. The author just says combining them achieves the best results but motivation of combining IGN and CLIP is not very clear. Is it due to some intrinsic characters of IGN that requires feature alignment while other models do not drop as much as IGN when removing it? More detailed analyses of the phenomena is needed.
2. Is the affine transformation mentioned in Fig.1 same as the trainable matrix W in CLIP?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Not seen in main content.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: This is not the first work to use contrastive learning.
We thank the reviewers for providing a new perspective. The two mentioned works, long-tail analysis [1] and FEND [2], both utilize contrastive learning, but our work differs in the problems it addresses with contrastive learning. These two mostly use contrastive learning to capture features from historical trajectories. They aim to obtain better representations of historical trajectory encodings in latent space, which improves their ability to predict trajectories for the long-tail part of the dataset.
For example, long-tail analysis uses a Kalman filter to directly predict output trajectory errors as a basis, distinguishing difficult and easy samples. It then conducts contrastive learning on difficult (long-tail part) historical trajectories to enhance the feature representation capabilities of the long-tail part; FEND uses VAE to extract features from the entire trajectory and performs offline clustering on the features to obtain labels; during training, it uses offline labels for contrastive learning on historical trajectories to guide the feature generation of historical trajectories (making features extracted from more similar historical trajectories closer), enhancing the prediction effect of the long-tail part.
However, our work not only uses contrastive learning for better encoding of historical trajectories but also applies it to future trajectories. While ensuring historical and future motion consistency, it achieves a more unified latent-space representation of future and historical trajectories, thereby enhancing the model's predictive generalization capability.
> Q2: Missing citation and comparison for SingularTrajectory.
SingularTrajectory[3] introduces a diffusion model for multimodal pedestrian trajectory prediction, utilizing singular space rather than trajectory coordinates to convey motion information and capture the dynamics of trajectories. It also employs an adaptive anchor-based strategy for dynamically adjusting trajectory anchors. We acknowledge the merits of this technique, but our approach surpasses this work in terms of performance and adaptability. The summary is as follows:
(1) In the performance comparison on the ETH-UCY dataset as shown in Table 3, our model achieves a 14\% lower Average Displacement Error (ADE) than the SingularTrajectory model (0.18 for our model versus 0.21 for SingularTrajectory), with the difference in Final Displacement Error (FDE) being negligible.
(2) For transfer learning and few-shot learning tasks, given the differences in task settings and data processing between our method and SingularTrajectory, we conducted experiments following the experimental framework described in their paper. The results of these experiments are presented in Tables 2 and 3. Observations indicate that our model attains comparable ADE performance to SingularTrajectory in transfer learning tasks and slightly outperforms it in FDE. In few-shot learning tasks, our model demonstrates superior performance in both ADE and FDE.
(3) Regarding the task of online adaptability, the architecture of SingularTrajectory, which incorporates complex diffusion models and scene semantic segmentation techniques, results in slower inference speed, making online learning impractical. In contrast, our model consistently outperforms in this task.
> Q3:More detailed analyses for ablation experimental phenomena about CLIP and generative framework (IGN, CVAE, diffusion).
We further explain the analysis of the ablation study. The CVAE framework involves an encoder producing a feature and then randomly sampling a mean and variance, which are used as inputs for the decoder. The diffusion method starts from a randomly initialized input and gradually denoises it. This randomness makes alignment less effective (that is, even when the spaces are aligned, the presence of random sampling still makes the generated results uncontrollable). Table 4 in the main content reports the ablation of CLIP and the generative framework. The comparative experiments (c)(d) and (e)(f) show that the randomness of CVAE and diffusion makes the improvement from alignment insignificant: on the ETH\_UCY dataset, adding CLIP improved ADE/FDE by only 0.01/0.03 and 0.01/0.01, respectively. However, the network structure of IGN is an MLP, composed of affine transformations, which does not have this issue; hence alignment is significantly effective on IGN, as in our method. When CLIP is removed, as shown in experiments (b)(c)(e), the two feature spaces cannot be connected through IGN without alignment, and performance drops by 0.06/0.17 compared to when CLIP is added.
> Q4:Is the affine transformation mentioned in Fig.1 same as the trainable matrix W in CLIP?
In our approach, CLIP is only utilized to facilitate the alignment of feature spaces during the pre-training phase. The affine transformation of the idempotent generative network operates independently of the parameters of CLIP. However, using CLIP's trainable matrix W as an initialization for the network's parameters could enhance the generative framework. Since CLIP's trainable matrix W can project the feature space into a unified fusion space, it is possible to employ this matrix W as an initial parameter within the idempotent generative framework and subsequently further train the idempotent generative network based on it. This is a perspective worth exploring, and we will conduct further research on this attempt in the subsequent studies.
[1] Makansi, Osama, et al. "On exposing the challenging long tail in future prediction of traffic actors." ICCV. 2021.
[2] Wang, Yuning, et al. "Fend: A future enhanced distribution-aware contrastive learning framework for long-tail trajectory prediction." CVPR. 2023.
[3] Bae, Inhwan, et al. "SingularTrajectory: Universal Trajectory Predictor Using Diffusion Model." CVPR. 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for the author's response. The reply addressed most of my concerns, and I am inclined to maintain my original score. | Summary: This paper presents TrajCLIP, a novel method for pedestrian trajectory prediction that utilizes contrastive learning and idempotent neural networks. The authors propose an interesting approach to address some limitations of existing methods, particularly in terms of generalization and modeling complex trajectory distributions.
Strengths: 1 The paper introduces an innovative idea of using contrastive learning to align the feature spaces of historical and future trajectories. This approach has the potential to improve the model's ability to generalize across different scenarios.
2 The use of idempotent neural networks for global feature mapping is a creative solution to prevent overfitting to specific dataset distributions.
3 The combination of time-domain and frequency-domain features in the trajectory encoder is an interesting approach that could capture more comprehensive trajectory information.
Weaknesses: 1 The paper lacks a discussion on the computational complexity and runtime performance of the proposed method compared to existing approaches.
2 More analysis of the limitations of the proposed method and potential failure cases would strengthen the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1 Provide more detailed explanations of the idempotent neural network implementation and training process.
2 Add a section discussing the computational requirements and runtime performance of TrajCLIP.
3 Include a discussion on the limitations of the proposed method and potential scenarios where it might not perform well.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations discussed in the Experiments seem too easy and do not fully reflect the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: Add a section discussing the computational requirements and runtime performance of TrajCLIP.
We appreciate you bringing up the model performance experiment. As the table illustrates, we have contrasted our approach with alternative techniques in terms of model size, computational complexity, and inference speed. Our medium-sized model meets the requirements for a real-time prediction task in terms of both inference speed and model size, as it can predict trajectories in 0.0615 seconds. Additionally, our lightweight model is only 3.45MB in size, and its computational complexity is relatively low compared to its model size, making it deployable on most hardware platforms.
| | Model Size (MB) | Computational Complexity (GFlops) | Infer Speed (s) |
|----------------|:---------------:|:---------------------------------:|:---------------:|
| trajectron++ | 0.53 | 2.48 | 0.0223 |
| AgentFormer | 6.78 | 12.79 | 0.1874 |
| MID | 2.42 | 9.06 | 0.8802 |
| Y-net | 203.23 | 35.70 | 1.0716 |
| TUTR | 0.46 | 3.51 | 0.0577 |
| Ours-tiny | 3.45 | 5.26 | 0.0615 |
| Ours(TrajCLIP) | 14.94 | 18.96 | 0.2108 |
*Comparison of our method with other existing methods in terms of model size, computational complexity, and inference speed. Inference speed refers to the time required to input an 8-frame trajectory and predict the next 12 frames.*
> Q2:More analysis of the limitations of the proposed method and potential failure cases would strengthen the paper.
We express our gratitude for the feedback. Due to space limitations, a summary of the limitations of our methodology will be presented in the camera-ready version. Our approach primarily focuses on the modeling of trajectory prediction tasks, in line with common practices that omit scene imagery as input. Therefore, empirical validation of the model's performance in practical scenarios is essential. As illustrated in Figure 1 in the attached PDF, our method falls short in ensuring collision avoidance in complex, high-density environments, necessitating further research.
> Q3: Provide more detailed explanations of the idempotent neural network implementation and training process.
We will provide a detailed summary of the model's experimental details and training process in the camera-ready version and will also release the source code. The experimental details of the idempotent neural network can be summarised as follows: First, we froze the pre-trained historical trajectory encoder and trained the manifold predictor, that is, the idempotent neural network, as well as the manifold decoder. The manifold predictor was trained using reconstruction loss, idempotent loss, and tightness loss, while the manifold decoder was trained using L2 loss. Training with the idempotent and tightness losses requires gradient clipping. For clarity, we provide the training code below. Moreover, an RTX 3090 was used to train our model. Our training parameters included a batch size of 64, 100 epochs, and a learning rate of 0.01, halved every 25 epochs. The Adam optimizer was employed.
```python
def ign_train(f, f_copy, opt, Z_H, Z_F):
    # f, f_copy : MLP_θ (f_copy is a copy of f used to stop gradients)
    f_copy.load_state_dict(f.state_dict())
    predict_Z_F = f(Z_H)            # predicted future-trajectory feature
    f_z = predict_Z_F.detach()
    ff_z = f(f_z)                   # second application of f
    f_fz = f_copy(predict_Z_F)      # gradients flow through the inner f only
    # calculate losses
    loss_rec = (predict_Z_F - Z_F).pow(2).mean()    # reconstruction
    loss_idem = (f_fz - predict_Z_F).pow(2).mean()  # idempotence: f(f(z)) ≈ f(z)
    loss_tight = -(ff_z - f_z).pow(2).mean()        # tightness
    # optimize for losses
    loss = loss_rec + loss_idem + loss_tight * 0.1
    opt.zero_grad()
    loss.backward()
    opt.step()
```
[1] Shocher, Assaf, et al. "Idempotent Generative Network." The Twelfth International Conference on Learning Representations.
---
Rebuttal 2:
Comment: The author's response has resolved some of my questions, but I still have questions about the Computational Complexity and Inference Speed of the paper. Compared to the previous state-of-the-art papers, this paper has significantly increased the number of parameters, but the performance improvement is limited. So I will maintain my "Borderline accept" score for this paper. | Rebuttal 1:
Rebuttal: Thank you to all reviewers for your valuable comments and recognition of the novelty of our work. We have provided further elaboration and clarification in response to the reviewers' feedback, along with additional experiments to supplement the explanation. We will address each reviewer's comments and queries individually to assist in understanding this work. We will also release the source code and address all the reviewers' issues in the camera-ready version.
We have included more experiment results in the attached file to facilitate the reviewers' understanding of our responses.
Here, we list the additional experiments according to each reviewer's comments.
**[Xn2A-Q2, AYPL, NFeW-Q5, iNsW]** limitation and potential failure discussion: we have conducted supplemental experiments and discussed the limitation and failure cases of our method as shown in Figure 1.
**[Xn2A-Q1, NFeW-Q2]** model computational complexity discussion: We have conducted additional experiments and discussed the comparison of model size, computational complexity and inference speed between our TrajCLIP and other recent popular pedestrian trajectory prediction models, as shown in Table 1.
**[AYPL-Q2]** comparison for SingularTrajectory[1]: We compare our work with SingularTrajectory from transfer learning and few-shot learning, as shown in Table 2 and Table 3.
**[iNsW-Q3]** comparisons with all state-of-the-art methods: we compare our work with multiple SOTA methods[1-4], as shown in Table 4.
| | Model Size (MB) | Computational Complexity (GFlops) | Infer Speed (s) |
|:--------------:|:---------------:|:---------------------------------:|:---------------:|
| trajectron++ | 0.53 | 2.48 | 0.0223 |
| AgentFormer | 6.78 | 12.79 | 0.1874 |
| MID | 2.42 | 9.06 | 0.8802 |
| Y-net | 203.23 | 35.70 | 1.0716 |
| TUTR | 0.46 | 3.51 | 0.0577 |
| Ours-tiny | 3.45 | 5.26 | 0.0615 |
| Ours(TrajCLIP) | 14.94 | 18.96 | 0.2108 |
*Tab1. Comparison of our method with other existing methods in terms of model size, computational complexity, and inference speed. Inference speed refers to the time required to input an 8-frame trajectory and predict the next 12 frames.*
| ADE | A2B | A2C | A2D | A2E | AVG |
|:------------------:|:-----:|:-----:|:-----:|:-----:|:-----:|
| SingularTrajectory | 0.29 | 0.59 | 0.51 | 0.42 | 0.45 |
| Ours(TrajCLIP) | 0.30 | 0.59 | 0.49 | 0.43 | 0.45 |
| FDE | A2B | A2C | A2D | A2E | AVG |
|:------------------:|:-----:|:-----:|:-----:|:-----:|:-----:|
| SingularTrajectory | 0.57 | 1.19 | 1.08 | 0.81 | 0.91 |
| Ours(TrajCLIP) | 0.59 | 1.20 | 0.99 | 0.83 | 0.90 |
*Tab 2. Comparison of ADE/FDE for transfer learning on the ETH-UCY dataset between our method and SingularTrajectory. The ETH, HOTEL, UNIV, ZARA1, and ZARA2 scenes are denoted as A, B, C, D, and E, respectively.*
| Few-Shot | ETH | HOTEL | UNIV | ZARA1 | ZARA2 | AVG |
|:------------------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|
| SingularTrajectory | 0.35/0.46 | 0.14/0.21 | 0.26/0.44 | 0.21/0.36 | 0.18/0.31 | 0.23/0.35 |
| Ours(TrajCLIP) | 0.34/0.42 | 0.15/0.23 | 0.24/0.39 | 0.21/0.34 | 0.17/0.30 | 0.22/0.34 |
*Tab 3. Comparison of ADE/FDE for few-shot learning on ETH-UCY with SingularTrajectory.*
| | ETH | HOTEL | UNIV | ZARA1 | ZARA2 | AVG |
|:-----------------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|
| LMTraj-SUP | 0.41/0.51 | 0.12/0.16 | 0.22/0.34 | 0.20/0.32 | 0.17/0.27 | 0.22/0.32 |
| MSN-SC | 0.27/0.39 | 0.13/0.18 | 0.22/0.45 | 0.18/0.34 | 0.15/0.27 | 0.19/0.33 |
| HighGraph | 0.33/0.56 | 0.13/0.21 | 0.23/0.47 | 0.19/0.33 | 0.15/0.25 | 0.21/0.36 |
| SingularTrajectory| 0.35/0.42 | 0.13/0.19 | 0.25/0.44 | 0.19/0.32 | 0.15/0.25 | 0.21/0.32 |
| Ours(TrajCLIP) | 0.36/0.57 | 0.10/0.17 | 0.19/0.41 | 0.16/0.28 | 0.11/0.20 | 0.18/0.33 |
*Tab 4. Comparison of ADE/FDE for performance with SOTA methods on the ETH-UCY dataset.*
[1] Bae, Inhwan, Young-Jae Park, and Hae-Gon Jeon. "SingularTrajectory: Universal Trajectory Predictor Using Diffusion Model." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17890-17901. 2024.
[2] Bae, Inhwan, Junoh Lee, and Hae-Gon Jeon. "Can Language Beat Numerical Regression? Language-Based Multimodal Trajectory Prediction." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[3] Wong, Conghao, et al. "SocialCircle: Learning the Angle-based Social Interaction Representation for Pedestrian Trajectory Prediction." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[4] Kim, Sungjune, et al. "Higher-order Relational Reasoning for Pedestrian Trajectory Prediction." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
Pdf: /pdf/47af1b3014a20a7caa99921873f1d540a55ba873.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
ExID: Offline RL with Intuitive Expert Insights in Limited-Data Settings | Reject | Summary: The paper introduces ExID, an offline reinforcement learning algorithm that enhances learning performance in limited data scenarios by combining domain knowledge in the form of simple decision trees with agent experience replay data.
Strengths: * Domain Knowledge Utilization: ExID incorporates domain knowledge to guide decision-making in data-limited scenarios
* Teacher-Student Architecture: A teacher network, informed by domain knowledge, regularizes a student critic network to improve generalization.
* Regularization with Domain Knowledge: The algorithm uses a regularization term to align the critic's decisions with the teacher's advice for states covered by domain knowledge.
Weaknesses: * Discrete Action Space Limitation: The algorithm is currently limited to discrete action spaces, necessitating future extensions for continuous action domains.
* Hyperparameter Tuning Challenge: The need for precise hyperparameter tuning complicates the deployment of ExID in scenarios where extensive optimization is impractical.
* The paper does not have strong enough experimental comparisons. The method of the paper is related to offline RL methods such as SCQ[1], ReDS[2], A2PR[3], and CPED[4], but it lacks experimental comparisons with offline RL methods. I think adding some SOTA baseline methods will improve your paper. It is not required that experimental comparisons must be given, but at least add some discussion of these methods to the paper.
References:
[1] Shimizu, Yutaka, et al. "Strategically Conservative Q-Learning." arXiv preprint arXiv:2406.04534 (2024).
[2] Singh, Anikait, et al. "ReDS: offline reinforcement learning with heteroskedastic datasets via support constraints." Proceedings of the 37th International Conference on Neural Information Processing Systems. 2023.
[3] Liu, Tenglong, et al. "Adaptive Advantage-Guided Policy Regularization for Offline Reinforcement Learning." In International Conference on Machine Learning (ICML). PMLR, 2024.
[4] Zhang, Jing, et al. "Constrained policy optimization with explicit behavior density for offline reinforcement learning." Advances in Neural Information Processing Systems. 2023
Technical Quality: 3
Clarity: 2
Questions for Authors: * How does ExID perform when the true optimal policy deviates significantly from the provided domain knowledge? Can the algorithm recover from incorrect domain knowledge without additional corrective mechanisms?
* What measures are in place to ensure robustness when the domain knowledge contains errors or biases? How sensitive is the algorithm to such inaccuracies?
* The paper mentions a real sales promotion dataset, but how does ExID perform in other real-world scenarios? Are there any plans for more extensive real-world testing to validate the algorithm's practical utility?
* How should practitioners select the hyperparameters $\lambda$ and $k$ in the absence of extensive computational resources?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: * The paper only conducts experiments in several simulated environments and a real-world sales promotion dataset, which may not fully verify the effectiveness and applicability of the algorithm in more diverse and complex real-world scenarios.
* The performance of the ExID algorithm heavily relies on the quality of the domain knowledge. If the domain knowledge is incomplete, inaccurate, or biased, it may mislead the learning process and result in suboptimal policies. Moreover, obtaining high-quality domain knowledge can be challenging and time-consuming in practice.
* The proposed method mainly concentrates on discrete action spaces, and its performance and applicability in continuous action spaces are not clear. This limits the algorithm's utility in many real-world control tasks that involve continuous action spaces.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. Please find our response below:
**W1: Discrete Action Space Limitation**
- We conducted experiments on discrete action space domains because many important real-world problems use discrete action policies, for example navigation (move forward or turn) and financial trading (buy, hold, or sell). We have also mentioned the use of discrete policies in our limitations.
- However, we would like to highlight that the proposed methodology can be extended to any continuous-domain problem by using the regularization in Eq 4 during critic $(Q_s^\theta)$ training and using actions from the actor network $(\pi_s)$ for the cross-entropy loss in Eq 7.
- We empirically demonstrate this with additional experiments, training policies on a continuous version of the sales promotion task (action ranges: coupon 0-5, discount 0.6-1) and a continuous Type-1 diabetes basal-bolus control task (action range: basal and bolus, -1 to 1; results in Q3).
| Environment |CQL+$\mathcal{D}$ |SCQ |EXID|Performance gain|
|--|--|--|--|--|
| SP|679.25$\pm$ 35.02 | 708.44 $\pm$ 52.19 |827.76 $\pm$ 43.79 | 14.38% |
**W2 and Q4: Hyperparameter Tuning Challenge**
- We agree that hyperparameter tuning in the offline setting is challenging. In practice this can be done by using tuning methods like Bayesian optimization and observing the behavior of Q values, as proposed by [2].
- $k$ and $\lambda$ should be selected based on the peak Q value observed during training. We discuss the effect of different $k$ and $\lambda$ in Fig 11 and Fig 12 (supplement) and empirically show that $\lambda = 0.5$ and $k = 30$ work well in most settings. We also show the proposed methodology is better than $\lambda = 0$ and $k = 0$, demonstrating robustness to these hyperparameters.
- The Q-value-based tuning does not require additional computational resources.
[2] Kumar, A., Singh, A., Tian, S., Finn, C., & Levine, S. (2021). A workflow for offline model-free robotic reinforcement learning. arXiv preprint arXiv:2109.10813.
**W3: Strong experiment comparisons**
- The paper compares ExID to SOTA discrete offline RL methods. However, continuous-control algorithms like SCQ, ReDS, A2PR, and CPED also suffer from performance degradation for OOD states. We empirically compare ExID to SCQ on the continuous Sales Promotion task; please find the results in the table under W1.
- The suggested offline RL methods use action constraints to correct OOD actions for states seen in the dataset but have no mechanism for states unseen in the dataset. It has recently been established that generalization is the main bottleneck of offline RL [3].
- ExID addresses this generalization gap by using knowledge distillation and teacher network updates to correct actions for states not seen during training. We will add this discussion in our revised manuscript.
[3] Park, Seohong, et al. "Is Value Learning Really the Main Bottleneck in Offline RL?." _arXiv preprint arXiv:2406.09329_ (2024).
**Q1: Deviation and Recovery from incorrect Domain Knowledge**
- In practice it is never possible to obtain perfect domain knowledge. The algorithm therefore updates the initial teacher network ($\pi_t^\omega$), obtained from the domain knowledge, using Eq 5, 6 and 7 during training.
- Thus ExID uses a self-correction methodology to find the optimal policy even when there is significant deviation from the domain knowledge. Please refer to Pg 5 for details of the teacher update.
- The domain knowledge used for the experiments is also not optimal as reported in performance of behavior cloned teacher under the column $\mathcal{D}$ of Table 1 and 2.
**Q2: Robustness to errors and biases in domain knowledge.**
- The self-correction methodology provides robustness to errors and biases in domain knowledge. We discuss the effect of varying $\mathcal{D}$ quality in Sec 5.6. Fig 6a shows that using imperfect knowledge (Rule 3) and knowledge with high error (Rule 2) with ExID can bring substantial performance improvement over the baseline.
- We agree that using absolutely incorrect domain knowledge will cause performance degradation, as shown in Fig 6a and mentioned in the limitations. However, domain knowledge is available for practical problems in domains like business and healthcare.
**Q3: Other real world scenarios**
We conducted experiments on 6 benchmark, open-source datasets for reproducibility. The method can be applied to many real problems in business and healthcare where domain knowledge is available; however, such datasets are often proprietary. If the reviewer could kindly suggest any other open-source real-world dataset, we would be happy to conduct experiments. Additionally, we tested ExID on a Type-1 diabetes control task by administering basal and bolus insulin [2]. *This is a continuous control task.*
- The following basic basal bolus insulin control is known for diabetic patients
Domain knowledge :
1. basal = u2ss * BW / 6000 where
u2ss: The patient's or default steady-state insulin.
BW: The patient's or default body weight.
2. meal > 0 $\implies$ bolus = ((carbohydrate / carbohydrate_ratio) + (current_glucose - target_glucose) / correction_factor) / sample_time
The offline data is obtained from the open-access NeoRL2 GitHub repository.
| Environment |$\mathcal{D}$| CQL+$\mathcal{D}$ |EXID|Performance gain|
|--|--|--|--|--|
| SimGlucose |17.53 $\pm$ 3.02 | 21.79 $\pm$ 3.60 |30.82 $\pm$ 6.95 | 41.44% |
[2] Jinyu Xie. Simglucose v0.2.1 (2018) [Online].
**Q4: Hyperparameter selection**
Please refer to our response of weakness 1.
**Limitation : Dependence on high quality domain knowledge**
We would like to highlight that the domain knowledge used in this work is suboptimal (obtained from heuristics) and incomplete (covering only part of the state space). This can be observed in Tables 1 and 2 under $\mathcal{D}$, where the performance of the initial domain knowledge is not optimal. Heuristic domain knowledge is available for many real-world use cases in business, healthcare, etc.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer UWfE
Comment: Thanks for your clear explanation and careful answers, which have addressed my concern. I see you have added the comparison experiments with SCQ. I expect you to add more comparison experiments or discussions with the other SOTA offline RL methods ReDS, A2PR, and CPED mentioned earlier in your paper. Then, I am willing to raise my score.
---
Reply to Comment 1.1.1:
Title: Additional baselines
Comment: We thank the reviewer for acknowledging the rebuttal and for raising important questions that strengthen our paper. Please find the below comparison with mentioned offline RL methods:
**ReDS: offline reinforcement learning with heteroskedastic datasets via support constraints**
The main contribution of ReDS is to provide distribution constraints against a reweighted version of the behavior policy. This allows the policy to choose, state by state, how closely the learned policy should stay to the behavior policy for states seen in the dataset. This method is applicable to datasets with heteroskedastic distributions (different behavior actions for the same states), whereas our methodology employs action correction for OOD states not observed in the dataset. Since our dataset is not heteroskedastic, this methodology is not directly comparable with ExID.
**Adaptive Advantage-Guided Policy Regularization for Offline Reinforcement Learning**
A2PR trains a VAE, similar to SCQ, to identify high-advantage actions that differ from those present in the dataset. The VAE is trained with $\log p_{\psi}(a|s) \geq \mathbb{E}_{q_{\phi}(z|a,s)} \left[ \mathbb{1}\{f(A(s,a)) > \epsilon_A\} \log p_{\psi}(a|z,s) \right] - \text{KL}\left[q_{\phi}(z|a,s) \parallel p(z|s)\right]$ where $s \in \mathcal{B}_r$. This method does not estimate actions for $s \notin \mathcal{B}_r$, which ExID does via knowledge distillation.
**Constrained policy optimization with explicit behavior density for offline reinforcement learning**
CPED uses a flow-GAN model to explicitly estimate the density of the behavior policy. This facilitates choosing different actions that are safe for states in the dataset. The flow-GAN model is trained on the dataset generated by the behavior policy and does not account for states outside the dataset.
Please find the comparison of the baselines with ExID in the table below:
| Environment | CQL+$\mathcal{D}$ | SCQ | A2PR | CPED | EXID |
|-------------|------------------|-----|------|------|------|
| SP | 679.25$\pm$ 35.02 | 708.44 $\pm$ 52.19 | 712 $\pm$ 32.09 | 715 $\pm$ 47.31 | 827.76 $\pm$ 43.79 |
In summary, none of these methods employ an action-correction mechanism for OOD states outside the dataset, leading to performance degradation in the limited-data setting. As a result these algorithms perform similarly on the sales promotion dataset. ExID distills knowledge for OOD states from domain knowledge, leading to performance gains over the baseline methods. We will add this discussion to the revised manuscript.
If there are any further questions, we will be happy to answer them. Otherwise, we would be grateful if you could reconsider your score. | Summary: This paper studies offline RL when data is limited. The authors propose a domain knowledge-based regularization technique to learn from an initial teacher network and a limited data buffer. The experiments verify the effectiveness of the proposal, which outperforms classic offline RL baseline methods.
Strengths: 1. The proposed method is simple and technically reasonable.
2. The experimental results on the real sales promotion dataset show the proposal is a promising solution in real-world applications.
Weaknesses: 1. The technical novelty is limited. Despite the claimed use of expert knowledge, the method adopted by the paper is to directly train a policy from the knowledge, which assumes that the information provided by the domain knowledge is at the state-action level (a decision tree in this paper), which limits the feasibility of this method. Compared to the use of knowledge between latent concepts discussed in neuro-symbolic learning, I think it's more like traditional model distillation.
2. In practice, limited offline data may come from domain knowledge-based strategies, such as human-designed rules, thus I have great concerns about whether these two can promote each other. Empirical studies on more real-world datasets or rigorous theoretical analysis will provide support to this issue and further improve this work.
3. The introduction uses the sales task as an example, but the visualization is based on the Mountain Car dataset.
4. Definition 4.1 seems strange, why not directly define the offline dataset as a subset of the complete state spaces?
5. The $\eta$ in Proposition 4.2 is not well defined.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Could the “Intuitive Expert Insights” be formally defined?
2. The difference between the proposal and the traditional model distillation.
---
After rebuttal, my concerns have been addressed. I decide to raise the score to 6.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have provided a discussion about the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. Please find our response below:
**W1: Limited novelty and comparison with traditional knowledge distillation**
- The domain knowledge considered in our setting is imperfect and is updated using expected improvement of RL policy in a completely offline manner using Eq 6 and 7 which is different from traditional knowledge distillation methods. This update is essential as shown by the ablation study in Fig 5 c.
- The study of performance degradation of offline RL due to limited data is unexplored in the literature, as noted by Reviewer Cy8u. To the best of our knowledge, a knowledge-distillation-based regularization approach has not been studied for out-of-distribution (OOD) action correction for states unseen in the dataset (OOD states) in offline RL prior to this work.
- Please refer to discussion in Pg 18 Section E "Related Work: Knowledge Distillation" for comparison with prevalent knowledge distillation methods in RL.
**W2 : Offline data from domain knowledge, empirical studies on more real-world datasets or rigorous theoretical analysis**
- In practice, offline data generally comes from human demonstrations. Since it is costly to explore all possible states during data collection, the offline data distribution is often narrow. A policy learned on such demonstrations fails on encountering OOD states. Please refer to Pg 16, Fig 8 for a pictorial representation of how the domain knowledge complements the dataset.
- The policy performance is improved over these OOD states via incorporation of domain knowledge. Proposition 4.2 formalizes this notion with respect to state coverage and $\mathcal{D}$ quality, theoretically establishing that performance improvement is possible, as highlighted by reviewer Cy8u. Please refer to Pg 13, App A for the full proof.
- We conducted experiments on 6 standard datasets. The results for the Minigrid environment are in Table 3, Pg 19, due to the page limit in the main manuscript. Additionally, we tested ExID on a diabetes management task by administering basal and bolus insulin [2]. *This is a continuous control task.*
- The following basic basal bolus insulin control is known for diabetic patients
Domain knowledge :
1. The basal insulin is based on the insulin amount to keep the blood glucose in the steady state when there is no (meal) disturbance.
basal = u2ss (pmol/(L*kg)) * body_weight (kg) / 6000 (U/min)
2. The bolus amount is computed based on the current glucose level, the target glucose level, the patient's correction factor and the patient’s carbohydrate ratio.
bolus = ((carbohydrate / carbohydrate_ratio) + (current_glucose - target_glucose) / correction_factor)/ sample_time
The offline data is obtained from the open-access NeoRL2 GitHub repository.
| Environment |$\mathcal{D}$| CQL+$\mathcal{D}$ |EXID|Performance gain|
|--|--|--|--|--|
| SimGlucose |17.53 $\pm$ 3.02 | 21.79 $\pm$ 3.60 |30.82 $\pm$ 6.95 | 41.44% |
- It has recently been established that generalization is the main bottleneck of offline RL [1]. ExID addresses this generalization gap by using knowledge distillation and teacher network updates to correct actions for states not seen during training, as shown in Fig 4, Pg 8.
[1] Park, Seohong, et al. "Is Value Learning Really the Main Bottleneck in Offline RL?." _arXiv preprint arXiv:2406.09329_ (2024).
[2] Jinyu Xie. Simglucose v0.2.1 (2018) [Online].
**W3: Visualization on Mountain Car dataset**
As the state space in Mountain Car is only two-dimensional and the entire dataset is available, it is easy to visualize the performance degradation for OOD states. The state space of sales promotion is 40,000-dimensional (4 features for each of 10,000 users), making visualization of the SP task difficult. We show the performance improvement of Sales Promotion over the baseline in Fig 3c, Pg 7.
**Definition 4.1**
We define $\mathcal{B}_r$ with respect to the full dataset and not just the state space because $\mathcal{B}_r$ also contains actions, rewards and next states. A performance drop results from both conditions in Def 4.1; please refer to our analysis in Appendix B, Pg 16.
**Definition of $\eta$**
Due to page limitations we provide the definition of $\eta$ in Appendix A, Pg 13. For any deterministic policy $\pi$ the performance return is formulated as $\eta(\pi) = E_{\tau \sim \pi}[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)]$ following [3], where $\tau$ is a trajectory, $\gamma$ is the discount factor and $r$ is the reward function. We will move this definition to the main text in the revised manuscript.
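As a tiny numeric illustration of $\eta(\pi)$, the discounted return of one sampled trajectory can be computed as follows (a pure-Python sketch; the reward values in the test are made up for the example):

```python
# Minimal illustration of eta(pi): the discounted return of a single
# sampled trajectory, sum_t gamma^t * r_t over a finite reward sequence.

def discounted_return(rewards, gamma=0.99):
    """Discounted sum of rewards along one trajectory."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))
```

In practice $\eta(\pi)$ would be estimated by averaging this quantity over many trajectories sampled from $\pi$.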
[3] Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. In Proceedings of the Nineteenth International Conference on Machine Learning, pages 267-274, 2002.
**Definition of domain knowledge**
Domain knowledge $\mathcal{D}$ is formally defined as hierarchical decision nodes capturing an $S \to A$ mapping where $A$ is non-optimal, as represented by Eq. 2, Pg 3. This generalizes to any source of knowledge capturing an $S \to A$ mapping, for example a dataset without reward or next-state labels. General guidelines are typically available in practical domains like business, healthcare and autonomous driving, and these can be distilled into the teacher network. _ExID will work with any type of domain knowledge that can be represented as an $S \to A$ mapping._
**Difference between the proposal and the traditional model distillation**
Please refer to our response for W1.
---
Rebuttal Comment 1.1:
Title: Thank you for your feedback
Comment: Dear reviewer NyjZ,
Thank you for your insightful review comments and for asking important questions that helped us improve the work. As the discussion period approaches its end, kindly let us know if we have addressed your concerns or if there is further feedback for discussion. Otherwise we would be grateful if you could reconsider your score.
Thank you again for your efforts and we value your feedback deeply.
Regards,
Authors
---
Rebuttal 2:
Title: Please confirm you've read the author response
Comment: Dear reviewer,
Can you please confirm that you've read the author's responses?
Given that the other reviewers now both vote for acceptance, it's important for you to voice any remaining concerns if you still believe the paper should not be accepted.
Thank you! | Summary: The paper introduces ExID, a novel domain knowledge-based regularization method that adaptively refines initial domain knowledge to boost the performance of offline reinforcement learning (RL) in limited-data scenarios. The key insight is leveraging a teacher policy, trained with domain knowledge, to guide the learning process of the offline-optimized RL agent (student policy). This mitigates erroneous actions on sparse samples and unobserved states by having the domain knowledge-induced teacher network cover them, and the initial domain knowledge is improved when the student policy reaches better performance than the teacher policy. Empirical evaluations on standard discrete environment datasets demonstrate a substantial average performance increase compared to traditional offline RL algorithms operating on limited data.
Strengths: 1. Originality: The paper's originality lies in its integration of domain knowledge into offline RL through a teacher policy network. This approach addresses performance degradation in limited-data settings, which is a novel and underexplored area. The introduction of the domain knowledge-based regularization technique and adaptive refinement of initial domain knowledge are particularly innovative.
2. Quality: The quality of the work is evidenced by the solid theoretical analysis and the thorough empirical evaluations conducted on multiple standard datasets, including OpenAI Gym environments (Mountain Car, Cart-Pole, Lunar Lander) and MiniGrid environments, as well as a real-world sales promotion dataset. The results consistently show that ExID outperforms existing offline RL algorithms in these settings.
3. Clarity: The paper is well-structured, with clear explanations of the problem, methodology, and results. The use of diagrams and tables helps understand the motivation of the problem (figure 1), the proposed method (figure 2), illustrate the effectiveness of ExID (Table 1-2). Each section logically follows from the previous one, making the overall argument easy to follow.
4. Significance: By tackling the challenge of limited data in offline RL, the paper makes a significant contribution to the field. The proposed approach has practical implications for various real-world applications where data is scarce and expert knowledge is available, such as in business, healthcare, and robotics.
Weaknesses: 1. Generalization to Continuous Domains: The paper is limited to discrete action spaces, which restricts its applicability to a broader range of RL problems involving continuous action spaces. This limitation is acknowledged by the authors.
2. Scalability: The scalability of ExID to more complex environments that require a complex representation (e.g., a very large tree) of domain knowledge is not thoroughly explored. It would be beneficial to understand how the method performs in such settings and what challenges might arise, since the difficulty of updating domain knowledge held in a complex representation could hinder the learning of the student policy in ExID.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Comparative Analysis: Can the authors provide a more detailed comparison with other domain knowledge-based methods? Specifically, how does ExID differ in its approach to leveraging domain knowledge compared to methods like DKQ and CQL SE that are mentioned in Related Work?
2. Examples of Domain Knowledge Improvement: Including some examples of the improved domain knowledge would make the paper stronger and more convincing.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors acknowledge several limitations of their work, including the reliance on the quality of domain knowledge and the focus on discrete action spaces. While these limitations are well-addressed in the paper, it may be worth considering a broader evaluation:
* Conducting experiments on a wider variety of environments that have larger state and action spaces, would provide a more comprehensive evaluation of the method's applicability.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your appreciative and constructive feedback. Please find our response below:
**W1 Generalization to Continuous Domain**
- The proposed methodology can be extended to any continuous-domain problem by using the regularization in Eq 4: $\mathcal{L}(\theta) = \mathcal{L}_{cql}(\theta) + \lambda E_{s \sim \mathcal{B}_r \land s \models \mathcal{D}} [Q_s^\theta(s, a_s)-Q_s^\theta(s,a_t)]^2$ during critic $(Q_s^\theta)$ training and using actions from the actor network $(\pi_s)$ for the cross-entropy loss in Eq 7: $\mathcal{L}(\omega) = -\sum_{s \models D} \pi_t^\omega(s)\log(\pi_s(s))$.
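As a rough numerical sketch of these two losses (the Eq 4 squared Q-gap regularizer and the Eq 7 cross-entropy teacher update), using NumPy with illustrative names in place of the authors' training code:

```python
import numpy as np

# Hypothetical sketch, not the authors' implementation.
# q_s: student critic Q-values, shape (batch, n_actions);
# mask: 1.0 where the state satisfies the domain knowledge (s |= D), else 0.0;
# a_s / a_t: greedy actions of the student and teacher policies.

def q_gap_regularizer(q_s, a_s, a_t, mask, lam=0.5):
    """Eq 4's penalty: lam * mean of (Q(s, a_s) - Q(s, a_t))^2 over s |= D."""
    idx = np.arange(len(q_s))
    gap = q_s[idx, a_s] - q_s[idx, a_t]
    return lam * np.mean(mask * gap ** 2)

def teacher_distill_loss(pi_t, pi_s, mask):
    """Eq 7's cross-entropy teacher update, restricted to s |= D."""
    ce = -np.sum(pi_t * np.log(pi_s + 1e-8), axis=1)
    return np.sum(mask * ce)
```

In the full method the first term would be added to the base CQL critic loss and the second used to update the teacher network's parameters $\omega$.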
- We empirically demonstrate this with additional experiments, training policies on a continuous version of the sales promotion task (action ranges: coupon 0-5, discount 0.6-1) and a continuous Type-1 diabetes basal-bolus control task (action range: basal and bolus, -1 to 1) on the SimGlucose dataset. Please refer to the results in the table below:
| Environment |CQL+$\mathcal{D}$ |SCQ |EXID|Performance gain|
|--|--|--|--|--|
| SP|679.25$\pm$ 35.02 | 708.44 $\pm$ 52.19 |827.76 $\pm$ 43.79 | 14.38% |
| Environment |$\mathcal{D}$| CQL+$\mathcal{D}$ |EXID|Performance gain|
|--|--|--|--|--|
| SimGlucose |17.53 $\pm$ 3.02 | 21.79 $\pm$ 3.60 |30.82 $\pm$ 6.95 | 41.44% |
**W2 Scalability**
ExID can scale to complex domain-knowledge trees provided the teacher neural network is able to capture the $S \to A$ mapping of the tree. However, the effect of tree complexity on the teacher network is beyond the scope of this paper.
**Q1 Comparative Analysis**
- DKQ uses Q operator guided by domain knowledge using Eq: $\mathcal{T}{\mathcal{F}} Q(s, a) := r(s, a) + \gamma E{s' \sim P(s'|s, a)} \left[ \sum_{i=1}^{K} \alpha_i \max_{a' \in \text{supp}(f_i)} Q(s', a') \right]$
which requires the importance of all actions $a' \in \text{supp}(f_i)$ to be labelled from domain knowledge. The domain knowledge is not updated during the training process. This operator also only works on the states observed in the dataset. Contrary to this our method does not require action support labels for each action and incorporates the domain knowledge for unseen states through knowledge distillation and teacher update.
- CQL SE uses an uncertainty-weighted regularization of OOD actions using safety experts, represented by $Q(s, a) = r + \gamma \max_{a'} Q(s', a') - \underbrace{(1 - \text{conf}(s)) (a - \pi_T(s))^2}_{\text{uncertainty-weighted learning from the safety expert}}$. The safety expert is assumed optimal and is not updated during training. $\text{conf}(s)$ is calculated from states observed in the dataset, so this method does not account for unseen states.
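For concreteness, the uncertainty-weighted target in the CQL SE formula above can be sketched for a single scalar transition (illustrative names; this is not the CQL SE authors' code):

```python
# Scalar sketch of the CQL SE target: a standard TD target minus an
# uncertainty-weighted penalty for deviating from the safety expert's action.

def cql_se_target(r, gamma, q_next_max, conf_s, a, a_expert):
    """r + gamma * max_a' Q(s', a') - (1 - conf(s)) * (a - pi_T(s))^2."""
    return r + gamma * q_next_max - (1.0 - conf_s) * (a - a_expert) ** 2
```

Note how the penalty vanishes as `conf_s` approaches 1, i.e., for states the dataset covers confidently, which is exactly why the method has nothing to say about unseen states.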
**Q2 Examples of Domain Knowledge Improvement**
Since the domain knowledge is partial, direct improvement of $\pi_t^\omega$ is not always observed. We show domain knowledge improvement via the ablation study in Fig 5c, which shows that omitting the teacher update leads to a suboptimal policy. This is also visible in the baseline CQL+$\mathcal{D}$, where simply using domain knowledge without improving it does not lead to optimal performance.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response. My concerns have been addressed, and I would like to maintain my score.
---
Reply to Comment 1.1.1:
Title: Thank you for acknowledging the rebuttal
Comment: Dear Reviewer Cy8u,
Thanks for your kind support and for helping us improve the paper! We will incorporate the additional discussions in the revised version of the manuscript.
Thank you again for your valuable comments and guidance.
Best,
Authors | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their time and constructive feedback and for highlighting the following strengths.
**Strengths** – Novel and unexplored (Reviewer Cy8u), solid theoretical analysis (Reviewer Cy8u), simple and technically reasonable (Reviewer Cy8u, Nyzj), Practically applicable (Reviewer Cy8u, NyjZ), Domain Knowledge Utilization (Reviewer UWfE).
We acknowledge the areas of improvement the reviews have suggested and have made concerted efforts to address them. We would like to highlight our response to the major concerns and then respond to the individual reviews.
**Discrete Action Space Limitation and more real world experiments**
- We conducted experiments on discrete action space domains because many important real-world problems use discrete action policies, for example navigation (move forward or turn) and financial trading (buy, hold, or sell). We have also mentioned the use of discrete policies in our limitations.
- However, we would like to highlight that the proposed methodology can be extended to any continuous-domain problem by using the regularization in Eq 4: $\mathcal{L}(\theta) = \mathcal{L}_{cql}(\theta) + \lambda E_{s \sim \mathcal{B}_r \land s \models \mathcal{D}} [Q_s^\theta(s, a_s)-Q_s^\theta(s,a_t)]^2$ during critic $(Q_s^\theta)$ training and using actions from the actor network $(\pi_s)$ for the cross-entropy loss in Eq 7: $\mathcal{L}(\omega) = -\sum_{s \models D} \pi_t^\omega(s)\log(\pi_s(s))$.
- We empirically demonstrate this with additional experiments, training policies on a continuous version of the sales promotion task (action ranges: coupon 0-5, discount 0.6-1) and a continuous Type-1 diabetes basal-bolus control task (action range: basal and bolus, -1 to 1) on the SimGlucose [2] dataset. Please refer to the results in the table below:
| Environment |CQL+$\mathcal{D}$ |SCQ [1] |EXID|Performance gain|
|--|--|--|--|--|
| SP|679.25$\pm$ 35.02 | 708.44 $\pm$ 52.19 |827.76 $\pm$ 43.79 | 14.38% |
All experiment plots produced during the rebuttal are provided in the attached PDF.
| Environment |$\mathcal{D}$| CQL+$\mathcal{D}$ |EXID|Performance gain|
|--|--|--|--|--|
| SimGlucose |17.53 $\pm$ 3.02 | 21.79 $\pm$ 3.60 |30.82 $\pm$ 6.95 | 41.44% |
- In the manuscript we show empirical results on 6 open-source benchmark datasets and theoretically establish in Proposition 4.2 that the use of imperfect domain knowledge can lead to performance improvement in offline RL policies.
For the rebuttal phase we have conducted the following additional experiments, as suggested by the reviewers:
1. An empirical study on continuous sales promotion and Type-1 diabetes tasks to address the discrete-action-space limitation raised by reviewers UWfE and Cy8u.
2. An empirical comparison with the baseline SCQ [1] on the Sales Promotion task for stronger experimental comparison, as suggested by reviewer UWfE.
3. An experiment on the Type-1 diabetes basal-bolus control dataset as an additional real-world experiment, as suggested by reviewers UWfE and NyjZ.
[1] Shimizu, Yutaka, et al. "Strategically Conservative Q-Learning." arXiv preprint arXiv:2406.04534 (2024).
[2] Jinyu Xie. Simglucose v0.2.1 (2018) [Online].
Pdf: /pdf/9dc413ff97797bd9ffc321c4734a87498be8ab0f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Label Noise: Ignorance Is Bliss | Accept (poster) | Summary: This paper presents a theoretical framework for learning under multi-class, instance-dependent label noise. It introduces the novel concept of Relative Signal Strength (RSS) to measure the impact of noise and uses it to derive upper and lower bounds on excess risk. Notably, it proves the minimax optimality of Noise Ignorant Empirical Risk Minimization (NI-ERM) and provides conditions for immunity to label noise. Bridging theory and practice, the paper proposes a two-step 'feature extraction + NI-ERM' approach, achieving state-of-the-art performance on the CIFAR-N dataset.
Strengths: 1. Theoretical Depth:
This paper introduces the novel concept of Relative Signal Strength (RSS) to quantify the impact of noise in label learning. Using RSS, the authors mathematically derive precise upper and lower bounds for excess risk.
2. Combination of Theory and Practicality:
The research seamlessly bridges theory and practice by developing a two-step 'feature extraction + NI-ERM' approach based on their theoretical analysis. This method achieves high performance using simple linear classifiers instead of complex deep learning models, validating the theoretical predictions about the optimality of NI-ERM in practical applications. This successful translation of theoretical insights into effective practical methods is a significant strength of the paper.
3. Surprising Results:
One of the most striking aspects of this work is the mathematical proof that simple Noise Ignorant Empirical Risk Minimization (NI-ERM) is minimax optimal. The authors theoretically guarantee accurate learning even under high noise levels, such as up to 90% in 10-class problems. These surprising findings challenge conventional wisdom about the necessity of complex noise handling techniques and achieve state-of-the-art performance with a remarkably simple approach.
4. Experimental Validation:
This paper provides experimental validation of its theoretical claims by achieving top performance on real-world noisy datasets like CIFAR-10N and CIFAR-100N. They demonstrate that performance changes with increasing noise levels align closely with their theoretical predictions, providing empirical support for their theoretical framework.
5. Practicality:
The proposed method achieves high performance without relying on data augmentation or complex hyperparameter tuning.
Weaknesses: 1. Insufficient Introduction:
The introduction lacks a comprehensive overview of the paper's content and contributions. A more detailed exposition of the overall approach and key findings would better prepare readers and emphasize the paper's significance. This could include a clearer roadmap of the theoretical and practical aspects of the work.
2. Validity of Relative Signal Strength (RSS) Definition:
The definition of RSS is not intuitive and lacks a sufficient explanation of how it represents the signal content of the noisy distribution relative to the clean distribution. While examples are provided, they primarily demonstrate RSS calculation rather than explaining the fundamental reasoning behind its definition. Since all theoretical results in the paper are based on the RSS definition, it is crucial to establish whether RSS accurately represents the degree to which the noisy distribution contains signal compared to the clean distribution. A more thorough justification and explanation would strengthen the paper's theoretical foundation. I will raise a question about the definition of RSS in the question section to seek further clarification on this point.
3. Experimental Validity Concerns:
The two-step method employs pre-trained feature extractors. While this demonstrates the effectiveness of NI-ERM, it may not provide a fair comparison with other methods. To more convincingly demonstrate the superiority of NI-ERM, the paper could:
a) Compare NI-ERM using feature extractors from models trained with other SOTA noisy label methods.
b) Apply SOTA methods to the same high-quality feature extractors used for NI-ERM, training only the final linear layer for a fairer comparison.
These additional experiments would provide a more robust validation of the NI-ERM approach in practical scenarios.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Definition of RSS:
I am curious about how RSS measures certainty about the label. The definition of RSS does not seem intuitive. Generally, in deep learning, KL divergence is used to measure the difference between two probability distributions. The definition of RSS shows a different tendency compared to KL divergence. For example, let's consider three probability vectors: p1 = [0.05, 0.7, 0.25], p2 = [0.25, 0.7, 0.05], and p3 = [0.1, 0.6, 0.3]. The RSS between p1 and p2 is 1.44, and the RSS between p1 and p3 is 0.67. If we assume p1 is the clean distribution, it can be interpreted that p2 contains more label information. However, the KL divergence between p1 and p2 is 1.96, and between p1 and p3 is 1.05, indicating that p1 and p3 are closer in terms of distribution. Is there a justification for claiming that p2 contains more label information despite having a higher KL divergence?
2. Gap between Theory and Practice:
While the paper bridges the gap between theory and experiments by using pretrained feature extractors, as mentioned in line 254, generally training the feature extractor with NI-ERM does not perform well. This differs from the theory presented in the paper. What do you think is the reason for this discrepancy?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limited Application to Classification Problems:
While noisy label problems exist in various tasks, the theoretical analysis and experimental validation in this paper focus solely on classification problems, particularly image classification tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. The "strength" section of your review indeed summarizes our contributions. We also especially thank you for bringing up the point about the relation of RSS and KL divergence. In the next version, we will include the example you provided and explain the relations between the two concepts.
## Regarding "Weaknesses":
> 1. Insufficient Introduction: ... A more detailed exposition of the overall approach and key findings would better prepare readers and emphasize the paper's significance.
Thank you for this suggestion. We will certainly include a paragraph outlining the content and contributions in the next version.
> 2. Validity of Relative Signal Strength (RSS) Definition: The definition of RSS is **not intuitive and lacks a sufficient explanation** of how it represents the signal content of the noisy distribution relative to the clean distribution. While examples are provided, they primarily demonstrate RSS calculation rather than explaining the fundamental reasoning behind its definition.
Thank you again for this comment. Fortunately, there are a few lines in the appendix (lines 526-528), which we will move to the main text, that show precisely where the definition of RSS comes into play.
In short, the clean excess risk is
$$
R(f) - R^* = \int (\max_i [\eta(x)]_i - [\eta(x)]_f ) d P_X(x),
$$
where we see the denominator of RSS.
While the noisy excess risk is
$$
\widetilde{R}(f) - \widetilde{R}^* = \int (\max_i [\widetilde{\eta}(x)]_i - [\widetilde{\eta}(x)]_f ) d P_X(x),
$$
where we see the numerator of RSS.
RSS is the right definition because it is what **pops up naturally** when bounding the clean excess risk with the noisy one.
As for more intuition, see also our response to your first question, below.
> Experimental Validity Concerns... To more convincingly demonstrate the superiority of NI-ERM, the paper could: a) Compare NI-ERM using feature extractors from models trained with other SOTA noisy label methods. b) Apply SOTA methods to the same high-quality feature extractors used for NI-ERM, training only the final linear layer for a fairer comparison.
These are reasonable requests. For:
a) Although we have not had time to run these experiments, we already have some insight into what would happen: the performance relates to the "feature quality", assessed in terms of classification accuracy with no noise (see lines 279-282).
b) We have run additional experiments by fixing the "high-quality feature" and training the final linear layer with different robust losses [2-3] and robust training procedures [4], see Table 2 in the pdf attached to the global rebuttal. Our NI-ERM approach is highly competitive.
## Regarding "Questions":
> 1. Definition of RSS: I am curious about how RSS measures certainty about the label... The definition of RSS shows a different tendency compared to **KL divergence**. For example, let's consider three probability vectors: p1 = [0.05, 0.7, 0.25], p2 = [0.25, 0.7, 0.05], and p3 = [0.1, 0.6, 0.3]. The RSS between p1 and p2 is 1.44, and the RSS between p1 and p3 is 0.67. If we assume p1 is the clean distribution, it can be interpreted that p2 contains more label information. However, the KL divergence between p1 and p2 is 1.96, and between p1 and p3 is 1.05, indicating that p1 and p3 are closer in terms of distribution. Is there a justification for claiming that p2 contains more label information despite having a higher KL divergence?
Thanks for bringing up this excellent point.
The short answer is:
**KL divergence considers the similarity between two (whole) distributions, while the task of classification only focuses on predicting the $\arg \max$.**
In the example you mentioned, $p_1$ is the clean class distribution, and $p_2, p_3$ can be viewed as two noisy copies of it. $p_3$ is closer to $p_1$ in terms of KL divergence, but $p_2$ provides more information in terms of predicting the $\arg \max$ of $p_1$. There is no conflict, intuitively: the difference between $p_1$ and $p_2$ is that the probabilities of classes 1 and 3 are swapped, but the "margin", i.e., the gap between the largest and second-largest probabilities, is still $0.7 - 0.25 = 0.45$. In $p_3$, the "margin" is $0.6 - 0.3 = 0.3$, which is smaller than that of $p_2$.
To conclude, although $p_1$ and $p_3$ are "closer" in terms of distribution, $p_2$ provides more information regarding predicting the $\arg \max$ of $p_1$ than $p_3$ does.
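To make the margin comparison concrete, here is a small illustrative sketch (not part of the paper) that checks the numbers above:

```python
import numpy as np

def margin(p):
    """Gap between the largest and second-largest entries of a probability vector."""
    top_two = np.sort(p)[-2:]
    return top_two[1] - top_two[0]

p1 = np.array([0.05, 0.7, 0.25])  # clean distribution
p2 = np.array([0.25, 0.7, 0.05])  # noisy copy with classes 1 and 3 swapped
p3 = np.array([0.1, 0.6, 0.3])    # noisy copy closer to p1 in KL divergence

# All three vectors share the same argmax (index 1) ...
assert p1.argmax() == p2.argmax() == p3.argmax() == 1
# ... but p2 preserves a larger margin than p3
print(round(margin(p2), 2), round(margin(p3), 2))  # 0.45 0.3
```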
We would like to include your example in our paper and write a paragraph about the relation between KL divergence and RSS.
> 2. Gap between Theory and Practice: While the paper bridges the gap between theory and experiments by using pretrained feature extractors, as mentioned in line 254, generally training the feature extractor with NI-ERM does not perform well. This differs from the theory presented in the paper. What do you think is the reason for this discrepancy?
This is because the Natarajan dimension of a large neural net is too big, and therefore the upper bound in Thm 2 is vacuous (bigger than 1). With a lot more data, the bound would be meaningful and NI-ERM would probably work well for end-to-end training (although the amount of data and computational resources required would probably be prohibitive). For a linear classifier, the Natarajan dimension is upper bounded by the dimension of the self-supervised features ([1], Thm 29.7), thus providing much better control on the excess risk.
Reference:
[1] Shai Shalev-Shwartz and Shai Ben-David. Understanding machine learning: From theory to algorithms, 2014.
[2] Aritra Ghosh and Himanshu Kumar. Robust loss functions under label noise for deep neural networks. AAAI, 2017.
[3] Aritra Ghosh, et al. Making risk minimization tolerant to label noise. Neurocomputing, 2015
[4] Pierre Foret, et al. Sharpness-aware minimization for efficiently improving generalization. ICLR, 2021.
---
Rebuttal 2:
Title: Main Concerns Addressed by Authors' Response
Comment: Thank you for the author's response and new experiments. The author's explanation has alleviated many of my concerns. I now understand why the authors set the RSS in that way and how you tried to capture the signal of the clean posterior differently from KL divergence. While I'm still not fully convinced that RSS is the best definition for representing how much signal from the clean distribution is included, I believe it is sufficient for the proposed theories to have meaningful implications. I will raise my score to accept.
---
Rebuttal Comment 2.1:
Comment: Thank you for your response.
Your review is very helpful and will certainly help us frame the paper better.
We really appreciate that.
Thanks. | Summary: The work provides a new insight on how to deal with instance-dependent label noise under the context of multi-class classification problem. Under certain conditions, they prove that training a classifier as if there is no noisy labels is the best course of action.
This idea is presented in detail, supported by several theorems with finite sample analysis as well as by several experiments on synthetic data and a real dataset (CIFAR-10N).
Strengths: I enjoyed reading this paper. The work presents a surprisingly simple idea to deal with noisy labels, especially instance-dependent noise, and shows that it is the optimal learning strategy under some conditions.
- The authors introduce the Relative Signal Strength (RSS) to quantify how much noisy label distribution can reveal about the clean label distribution.
- Based on this measure, the authors suggest a class of noisy label learning problems that can be guaranteed to be solvable (Theorem 2) by just ignoring the existence of noisy labels. This is still somewhat intuitive because RSS is positive and the excess risk is measured based on the 0-1 loss.
- They also present a min-max analysis to show that for the same class of problems, the above learning strategy achieves an optimal rate. This is really a surprising and intriguing result.
- Lastly, they show how applicable this class of problems can be in practice (Theorem 4, 5): the label noise probability vector should have the same argmax as the clean probability vector.
Weaknesses: - The key condition for ignoring the existence of noisy labels to work is to have $\mathcal{A}_0 = \mathcal{X}$ (or $\mathcal{A}_0$ covers most of $\mathcal{X}$), which in turn requires $\text{argmax } \widetilde{\eta}(x) = \text{argmax } \eta(x)$. This condition would be violated if certain classes are tricky and easy to mistake for one another, for example, labeling leopard, lion, cheetah, tiger. Therefore, the condition that $\kappa > 0$ might be restrictive in this regard.
On the other hand, it would be intuitively feasible to still learn a classifier under this type of noise as long as one mistakes a class for another class in a consistent way. In that case, having permutation ambiguity is inevitable, but it is not very detrimental, as the cost of post-processing to disambiguate the permutation is relatively cheap.
- The experiment section is limited:
+ How does the performance of the proposal compare to baselines under various noise levels?
+ More baselines should be included, such as BLTM[1], MEIDTM[2], MaxMIG[3].
[1] Yang, Shuo, et al. "Estimating instance-dependent bayes-label transition matrix using a deep neural network." International Conference on Machine Learning. PMLR, 2022.
[2] Cheng, De, et al. "Instance-dependent label-noise learning with manifold-regularized transition matrix estimation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[3] Cao, Peng, et al. "Max-mig: an information theoretic approach for joint learning from crowds." arXiv preprint arXiv:1905.13436 (2019).
Technical Quality: 3
Clarity: 3
Questions for Authors: - Does the analysis depend on any particular loss used during training?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. We are especially glad that you enjoyed reading our paper.
> The key condition for ignoring the existence of noisy labels to work is to have A0=X (or A0 covers most of X), which in turn requires $\arg \max \widetilde{\eta}(x)= \arg \max \eta(x)$. This condition would be violated if certain classes are tricky and easy to mistake for one another, for example, labeling leopard, lion, cheetah, tiger. Therefore, the condition that $\kappa > 0$ might be restrictive in this regard. On the other hand, it would be intuitively feasible to still learn a classifier under this type of noise as long as one mistakes a class for another class in a consistent way. In that case, having permutation ambiguity is inevitable, but it is not very detrimental, as the cost of post-processing to disambiguate the permutation is relatively cheap.
That is an interesting scenario. It seems to us that "disambiguate a label permutation" requires additional information (e.g., human feedback), which is not a part of our problem statement.
If we allow further "post-processing", that would be an interesting research problem worth exploring.
> Does the analysis depend on any particular loss used during training?
The theoretical analysis is on zero-one loss. To incorporate surrogate losses, one can use the classification-calibration [1] argument.
As for practical performance, we have tried more than 10 different multi-class losses; they end up performing similarly. We could incorporate that in our next version.
As for now, Table 2 in the pdf file attached to the global rebuttal shows the comparison of cross entropy to several "noise robust losses" [2-3]; their results are comparable.
> The experiment section is limited... More baselines should be included, such as BLTM, MEIDTM, MaxMIG.
Thanks for mentioning these papers, we will include them in our next version.
At this point, due to time constraints, the additional experiments we have run are shown in the pdf attached to the global rebuttal. Feel free to take a look, thanks.
Reference:
[1] Peter Bartlett, Michael Jordan, and Jon McAuliffe. "Convexity, classification, and risk bounds." Journal of the American Statistical Association, 2006.
[2] Aritra Ghosh and Himanshu Kumar. Robust loss functions under label noise for deep neural networks. In Proceedings of the AAAI conference on artificial intelligence, 2017.
[3] Aritra Ghosh, et al. Making risk minimization tolerant to label noise. Neurocomputing, 2015
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thank you for addressing my comments. All my questions are answered and I have no further comments. I will keep my initial rating. | Summary: The author proposes a Label Noise Learning (LNL) method that assumes a noise transition matrix. The author introduces the concept of Relative Signal Strength (RSS), which is calculated as the ratio of the signal difference between the true prediction and the prediction under label noise. The author demonstrates that the set satisfying the condition of RSS being greater than zero is noise immune. Additionally, the author provides a method to define a noise transition matrix that upholds this condition.
Strengths: The author presents a novel perspective for solving the LNL problem. The interpretations of RSS proposed by the author are sufficiently insightful for peers in the LNL field. Moreover, the author's method is well-suited for use with pretrained feature extractors and aligns with current trends in the AI field.
Weaknesses: There are two major concerns. The first is the low reproducibility. Despite the robust interpretation of the proposed method, the lack of a specific algorithm makes it challenging to conceptualize a clear learning approach. Specifically, it would be beneficial to provide examples of e(x) in Theorem 5, as well as pseudocode for feature extraction and the derivation of the transition matrix. Secondly, it is challenging to interpret the advantages of the proposed method from the experiments. (1) The author conducted experiments solely on light synthetic data, making it difficult to ascertain the effectiveness of the theoretically-based method in practical noise scenarios. (2) The author employs a highly trained pretrained model, which introduces an unfair factor in comparisons with other methods. Notably, there is experimental evidence suggesting that self-supervised pre-training can enhance the performance of existing methods (https://arxiv.org/pdf/2103.13646v2). I recommend including the author's method applied to various pretrained models in Table 1, or adding the use of pretrained models for existing methods to provide a fair comparison.
Technical Quality: 3
Clarity: 4
Questions for Authors: I have given high marks to the author's novel perspective and the reasonable interpretations and proof methods presented. Despite the proposed method's low reproducibility and unproven practical performance, it has the potential to positively influence other research in the LNL field. Therefore, I have assigned a rating of "weak accept". If the author provides more detailed treatment of the proposed method and richer empirical interpretations, I believe the paper would be strong enough for acceptance.
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper.
> I have given high marks to the author's novel perspective and the reasonable interpretations and proof methods presented... Therefore, I have assigned a rating of "weak accept". If the author provides more detailed treatment of the proposed method and richer empirical interpretations, I believe the paper would be strong enough for acceptance.
We are glad that you find our paper helpful. **A small point: the current rating seems to correspond to "borderline accept" rather than "weak accept".** Thanks.
>There are two major concerns. The first is the low reproducibility. Despite the robust interpretation of the proposed method, the lack of a specific algorithm makes it challenging to conceptualize a clear learning approach. Specifically, it would be beneficial to provide examples of e(x) in Theorem 5,
Actually, $e(x)$ is not part of our algorithm; it is a quantity that describes a theoretical condition under which NI-ERM is consistent, not an input to the algorithm.
> it would be beneficial to provide ... pseudocode for feature extraction and the derivation of the transition matrix.
Actually, we do not propose a method for feature extraction. We simply use existing methods (e.g., pre-trained ResNet available in pytorch model zoo).
We also do not estimate a noise transition matrix. This is a mathematical object that we use to state theorems about the performance of NI-ERM, but it is not a quantity that is needed as input to an algorithm.
We have a description of our practical method, which is described in steps 1 and 2 in Section 6 (line 260-263). It is actually a meta-algorithm, allowing the user flexibility in choosing the feature extractor and empirical risk minimizer.
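As a rough illustration of this two-step meta-algorithm, the sketch below uses simulated features in place of a frozen pretrained extractor and fits a linear classifier by least squares on one-hot noisy labels (logistic regression would be the more common probing choice). All constants and the feature simulation are stand-ins, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (stand-in): in practice the features would come from a frozen,
# self-supervised pretrained extractor; here we simulate well-separated features.
n, d, k = 3000, 20, 3
y_clean = rng.integers(0, k, size=n)
centers = 3.0 * rng.normal(size=(k, d))
X = centers[y_clean] + rng.normal(size=(n, d))

# Inject 40% uniform label noise: with prob. 0.4 the label is resampled
# uniformly at random (the noisy argmax still matches the clean one).
flip = rng.random(n) < 0.4
y_noisy = np.where(flip, rng.integers(0, k, size=n), y_clean)

# Step 2: Noise-Ignorant ERM -- fit a linear classifier on the noisy labels
# as if they were clean.
Y = np.eye(k)[y_noisy]
Xb = np.hstack([X, np.ones((n, 1))])            # add a bias column
W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)

# Evaluate against the *clean* labels on fresh data.
y_test = rng.integers(0, k, size=500)
X_test = centers[y_test] + rng.normal(size=(500, d))
Xb_test = np.hstack([X_test, np.ones((500, 1))])
acc = ((Xb_test @ W).argmax(axis=1) == y_test).mean()
print(f"clean test accuracy under 40% label noise: {acc:.2f}")
```

With well-separated features, the linear probe recovers near-perfect clean accuracy despite heavy label noise, which is the behavior the two-step method relies on.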
> Secondly, it is challenging to interpret the advantages of the proposed method from the experiments. (1) The author conducted experiments solely on light synthetic data, making it difficult to ascertain the effectiveness of the theoretically-based method in practical noise scenarios.
In the paper, we have results for real data (MNIST, CIFAR), including the noisy CIFAR dataset, which has real-world (human-generated) noisy labels.
In response to your (and other reviewers') requests, we have also performed some additional experiments. See the results in the pdf file attached to the global response (and our responses to other reviewers); these demonstrate that our NI-ERM is hard to beat.
> (2) The author employs a highly trained pretrained model, which introduces an unfair factor in comparisons with other methods. Notably, there is experimental evidence suggesting that self-supervised pre-training can enhance the performance of existing methods (https://arxiv.org/pdf/2103.13646v2). I recommend including the author's method applied to various pretrained models in Table 1, or adding the use of pretrained models for existing methods to provide a fair comparison.
Thanks for bringing this up, we will include the paper in the reference.
In response to your request, we have run additional experiments, all based on the same pretrained model, see Table 2 in the pdf attached to the global response.
Notice:
the "noise rate of $90\\%$" in Table 1 of the referred paper (https://arxiv.org/pdf/2103.13646v2) corresponds to "actual noise rate $P(Y \neq \widetilde{Y}) = 0.90 \times (1-1/10) = 81 \\%$".
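The conversion can be written out as a small helper (illustrative, not from either paper): under uniform label noise, a nominal rate $\rho$ resamples the label uniformly over all $K$ classes, so the label actually changes with probability $\rho (K-1)/K$.

```python
def actual_flip_rate(rho: float, num_classes: int) -> float:
    """Probability the label actually changes when, with prob. rho, it is
    resampled uniformly over all num_classes classes (it may land on itself)."""
    return rho * (num_classes - 1) / num_classes

print(round(actual_flip_rate(0.90, 10), 4))  # 0.81
```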
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed review. I realized that I had misunderstood the paper, and through the rebuttal and the opinions of other reviewers, I now understand that the author was justifying the use of the linear probing approach. Thanks to this, my concerns about low reproducibility have been alleviated, but I still have other remaining concerns. As highlighted in the global rebuttal, the power of a well-trained feature extractor cannot be overlooked. I have an additional question at this point. Did the other methods also involve training only the classifier? I am curious whether the author’s theoretical interpretation is empirically valid: if the performance of other methods remains high with the well-trained feature extractor being frozen, the practical significance of the author’s findings would be greatly diminished.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply.
> I now understand that the author was justifying the use of the linear probing approach.
Practically, yes. Motivated by the theory, we are demonstrating (Noise-Ignorant) linear probing itself as a promising approach for label noise.
> Did the other methods also involve training only the classifier? I am curious whether the author’s theoretical interpretation is empirically valid: if the performance of other methods remains high with the well-trained feature extractor being frozen, the practical significance of the author’s findings would be greatly diminished.
As for Table 2 in the pdf file attached to the global rebuttal, yes, the feature extractor is frozen.
The simple method performs as well as (and sometimes better than) other, more complicated "robust" methods, which shall be good news for practitioners.
Thanks.
---
Rebuttal 2:
Comment: Thank you for your reply. As author continues to assert, one of the undeniable contributions of the author is demonstrating that LP is a simple yet competitive method using the RSS metric. However, the limited applicability of the proposed method may provide little assistance to colleagues researching LNL. Therefore, my concern has been partially addressed, and I will maintain my original evaluation: borderline accept. | Summary: In this work, the authors use a new theoretical framework for analyzing learning under label noise in multi-class classification.
The proposed framework is based on relative signal strength (RSS), which measures the noisiness of data points in the training set.
Based on RSS, the authors propose new upper and lower bounds on excess risk and identify when the classifier learned from label noise is consistent. Based on the theoretical results, a new simple learning framework, called Noise Ignorant Empirical Risk Minimisation (NI-ERM), is proposed, which basically performs standard ERM learning on noisy data. To practically apply NI-ERM, the authors propose a simple framework of learning linear classifiers on top of a feature extractor trained in an unsupervised/semi-supervised way. The effectiveness of this approach is validated in a few experiments where the method was tested on popular benchmarks under different noise distributions.
Strengths: - The paper is sound and, despite being very theoretical, is easy to read, as the authors explain the thinking process step by step.
- Newly obtained bounds are indeed simple in form.
- The effectiveness of the new two-step NI-ERM is confirmed by a few empirical experiments.
Weaknesses: - It seems to me that the analysis conducted under the Relative Signal Strength framework does not provide a lot of new surprising insights; the conclusions more or less confirm findings from previous works.
- Training feature extractors in an unsupervised/semi-supervised way might be more difficult in the case of some more specialized applications than, for example, general image classification used in the experiments.
- The considered analysis and proposed approach is limited to classifier accuracy, while in many applications other task losses/utilities are often considered.
Technical Quality: 3
Clarity: 3
Questions for Authors: Not really, but I will be happy to read the authors' comments on my points from the weakness section.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations were discussed. I see no negative social impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper.
We are especially glad that you find our paper easy to read. Below we would like to address some concerns.
> It seems to me that the analysis conducted under the Relative Signal Strength framework does not provide a lot of new surprising insights; the conclusions more or less confirm findings from previous works.
Would you be willing to provide specific references to prior work to help us frame our response? Thanks.
We agree that the empirical idea of ignoring the noise is not original (e.g., [1]).
Theoretically, however, to our knowledge no prior published articles have proved the optimality of the noise-ignorance approach. We are also unaware of any lower bound analysis on label noise that treats a setting as general as ours.
> Training feature extractors in an unsupervised/semi-supervised way might be more difficult in the case of some more specialized applications than, for example, general image classification used in the experiments.
Fortunately, foundation models are being developed for a rapidly expanding list of application domains, including audio, video, graphs, text and tabular data ([3], Section 4). Generalizable strategies for self-supervised learning are also advancing at a rapid pace, e.g., self-distillation, masked text/image modeling ([3], Section 2).
> The considered analysis and proposed approach is limited to classifier accuracy, while in many applications other task losses/utilities are often considered.
Our results can be extended to the balanced error, and we expect that they also extend naturally to cost-sensitive 0/1 loss. Extensions of our work to other performance measures would be an important research question.
Reference:
[1] Aritra Ghosh and Andrew Lan. Contrastive learning improves model robustness under label noise. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.
[2] Ruixuan Xiao, et al. Promix: Combating label noise via maximizing clean sample utility. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 2023.
[3] Jonas Geiping, Quentin Garrido, Pierre Fernandez, Amir Bar, Hamed Pirsiavash, Yann LeCun, and Micah Goldblum. A cookbook of self-supervised learning. arXiv preprint arXiv:2304.12210,2023. | Rebuttal 1:
Rebuttal: We thank all reviewers for your time and effort. Our paper will be substantially better as a result of your comments and questions. Here we provide responses to questions that seem most likely to be of interest to all reviewers.
## Theory
> KL divergence vs. Relative Signal Strength (RSS)
We thank Reviewer G7Jo for bringing up the question on the relation of KL divergence and our new concept RSS. We will include a new paragraph and a new example in our next version to illustrate this. This shall make the concept of RSS more intuitive.
The short answer is: KL divergence considers the similarity between two (whole) distributions, while the task of classification only focuses on predicting the $\arg \max$, making RSS the correct measure. See our response to G7Jo for more detail.
> uses "uniform label noise" in minimax lower bound proof?
No, we did not use uniform label noise in the proof. Our lower bound holds for general instance-dependent label noise. For details, see our response to Reviewer rXjd.
## Experiments
> include comparison to more methods, apply SOTA methods to the same high quality features, linear probing - then fine tuning (LP-FT), ...
These are reasonable requests. We have run additional experiments comparing our NI-ERM approach to previously proposed "noise robust" methods, see **attached pdf** file for details.
In Table 1, we report results for LP-FT (linear probing then fine tuning), and find that it underperforms our approach when noise is present.
In Table 2, we compare against two recently published robust training methods by giving those methods access to the same high-quality DINOv2 features that we used. Again we find that NI-ERM is superior. We also reran our own method using some robust losses (mean absolute error, sigmoid) and find that these are comparable to cross-entropy (which we used for the submission).
In Table 3, we ran experiments using a synthetically generated, non-uniform instance-dependent label noise. Again, NI-ERM is superior.
We will add all of these results to the next version of the paper.
We thank all reviewers again.
Pdf: /pdf/3f9b3bdaeb4a2108fb33e5e0cc7e45324ae1881d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper investigates multiclass classification under label noise, specifically instance-dependent label noise in which the noise can depend on the features as well (a.k.a. local noise). A minimax result is derived that lower-bounds the misclassification probability of a classifier. The paper finds a good empirical practice to be the procedure of training a linear classifier on some separately-learned features.
Strengths: The paper's analysis is correct, and the results relative to the CIFAR-N leaderboard are impressive. (I wish I were able to see the leaderboard, however; several browsers failed to load more than a blank screen.) The message of the paper -- in the presence of unknown noise, learning final-layer classifiers is more robust than learning a classifier from scratch -- is an old one, known empirically to be worth using. The paper's message is clear, and it is overall clearly written.
Weaknesses: The problem addressed in this paper is very common, and people use many heuristic practical strategies for it. The paper states that there are theoretical contributions leading to practical consequences. These practical consequences (recommending ignoring the noise) are not new, and the theory lends no understanding of when this might succeed and fail.
The practical consequences are to encourage linear probing (learning a linear classifier on top of a frozen feature layer) on features learned separately from possibly different data. This is possibly the most commonly used idea in practice, including in many situations where it's assumed that there is no noise by default, even when there is. This is not benchmarked thoroughly on different noise distributions. Many other papers in similar areas use Dirichlet distributions to precisely simulate different non-uniform noise, which would be a more thorough synthetic benchmark (e.g. Garg et al. "A Unified View of Label Shift Estimation"). On the real-data side, I would like to see different fine-tuning strategies also compared.
The theory is quite brittle in a couple of ways. It only takes into account the gap to the top class probability, making some of the definitions like A_0 brittle as well; for many classes of noise distributions A_0 could be quite small or even empty. The minimax lower bound construction again uses uniform label noise, meaning that it does not adapt to the actual structure of the instance-dependent label noise; the results showing NI-ERM matching this bound are therefore fairly weak.
Presentation-wise, the paper introduces many new terms without apparent reason; these are used correctly, but their combined effect is not to simplify and may be misleading as to the results. An example is "immunity" - it is easy to construct class-conditional noise distributions which would break the minimax results, and they tend to be non-uniform in interesting ways.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Why not simulate with instance-dependent label noise of some kind (e.g. simulate cluster-dependent label noise) instead of uniform label noise? Uniform label noise is easy to combat - simply being oblivious works.
- The main message of the paper is that linear probing deals with label noise better than full training / fine-tuning of the network. Are the authors aware of the two-step strategy of linear probing then fine-tuning the whole network, as popularized recently as a general strategy for encouraging robustness to distribution shift? How does this do? (ref. Kumar et al. '22, "Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution")
- How about estimating label shift and benchmarking similar methods, which all implicitly rely on the adjustments to the calibration of a classifier? The Garg et al. reference given above, and methods therein, are SOTA here, and I would think that if there is significant class-conditional noise in \tilde{Y}, then estimating this would be key and perform better than NI-ERM.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. We appreciate your feedback and constructive criticism. In response to your comments, we would like to address your concerns regarding the weaknesses of the theory and the experiments. We believe we can clarify many points.
## "Brittleness" of theory
> The theory is quite brittle in a couple of ways ... The minimax lower bound construction again uses **uniform** label noise, meaning that it does not adapt to the actual structure of the instance-dependent label noise
**No, we did not use uniform label noise in the proof.** See the construction of $\eta(x), \widetilde{\eta}(x)$ in line 437 Eqn. (2) - (4). To be specific, the label noise in that construction puts all probability mass into one specific class, while uniform label noise would spread probability mass into all classes instead.
The reviewer may be alluding to the fact that we "let $J \sim Uniform (1,2, ..., K)$" in line 454, or "let $B \sim Uniform (1,2 )^{V-1}$" in line 468. This is a technique called the "probabilistic method" that is commonly used in the minimax lower bound proofs (see, e.g., [7] Section 14).
We re-iterate that our lower bound proof did not use uniform label noise. The bound applies exactly to **instance-dependent label noise**.
> It only takes into account the gap to the top class probability, making some of the definitions like $A_0$ brittle as well; for many classes of noise distributions $A_0$ could be quite small to empty.
If $A_0$ is empty, then our theory still returns a meaningful result. In this case, the noisy and clean Bayes classifiers disagree almost everywhere, and therefore $\epsilon = 1$, leading to a (big) lower bound of $(1-1/K)$ on the clean excess risk. So we are not sure why you say our theory is brittle here, and we welcome clarification.
"the gap to the top class probability" is fundamental in learning theory, see for example the standard assumptions of Massart and Tsybakov [1-3].
## Practical performance
> The results relative to the CIFAR-N leaderboard are impressive. I wish I was able to see the leaderboard, however; several browsers failed to load more than a blank screen.
We have contacted the research team who maintains the CIFAR-N leaderboard. They have fixed it; you are welcome to try it again at <http://noisylabels.com/>.
Otherwise, the Wayback Machine records its status as of May 23rd; see <https://web.archive.org/web/20240523101740/http://noisylabels.com/>.
> These practical consequences (recommending ignoring the noise) are not new, and the theory lends no understanding on when this might succeed and fail.
We acknowledge that the practical idea of ignoring label noise is not new [4], but the full power of this approach has not been previously recognized. For example, prior work that has suggested ignoring the label noise usually augments this approach with additional heuristics such as fine-tuning with early stopping [5]. We will add a paragraph about previous works that practically suggest ignoring the noise.
Furthermore, our theory precisely describes when the method succeeds: It is nearly minimax optimal under the setting of Thm 2, and consistent under the settings discussed in Section 5.
> How about estimating label shift ... The Garg et al. reference given above, and methods therein, are SOTA here, and I would think that if there is significant class-conditional noise in $\tilde{Y}$, then estimating this would be key and perform better than NI-ERM.
**Label shift is a different problem (compared with label noise).** In label shift, the X-marginal distribution changes (from source to target) whereas with label noise, it does not. Furthermore, label shift methods assume access to an unlabeled test dataset, which is also not present in the label noise setting. Finally, label shift methods estimate the class prior for the test data (which is a $K$ dimensional vector), not class-conditional noise parameters in a class-conditional noise model (which is a $K \times K$ matrix).
Or, are you asking if "label shift-ignorant ERM" works in a label shift problem? It sounds like an interesting research question and worth exploring. Again we welcome clarification.
> Are the authors aware of the two-step strategy of linear probing then fine-tuning the whole network... How does this do?
Yes, we are. We have run additional experiments: **LP-FT is generally beneficial when there is no label noise, but becomes harmful as the noise rate increases**; see Table 1 in the pdf attached to the global rebuttal.
> Why not simulate with instance-dependent label noise of some kind (e.g. simulate cluster-dependent label noise) instead of uniform label noise?
We have run additional experiments: we simulated instance-dependent label noise using a commonly adopted approach from [8]. Our NI-ERM performs better than the approach proposed in [8]. See Table 3 in the pdf attached to the global rebuttal for details.
Reference:
[1] Alexander B Tsybakov. Optimal aggregation of classifiers in statistical learning. The Annals of Statistics, 2004.
[2] Pascal Massart and Élodie Nédélec. Risk bounds for statistical learning. The Annals of Statistics, 2006.
[3] Vu Dinh, et al. Learning from non-iid data: Fast rates for the one-vs-all multiclass plug-in classifiers, TAMC 2015.
[4] Aritra Ghosh and Andrew Lan. Contrastive learning improves model robustness under label noise. CVPR Workshop, 2021.
[5] Yihao Xue, et al. Investigating why contrastive learning benefits robustness against label noise. ICML, 2022.
[6] Ruixuan Xiao, et al. Promix: Combating label noise via maximizing clean sample utility. IJCAI, 2023.
[7] Luc Devroye, László Györfi, and Gabor Lugosi. A Probabilistic Theory of Pattern Recognition, 1996.
[8] Xiaobo Xia, et al. Part-dependent label noise: Towards instance-dependent label noise. Advances in Neural Information Processing Systems, 2020.
---
Rebuttal 2:
Title: Good rebuttal, concerns remain
Comment: Thanks to the authors for their detailed and precise comments. As I write below, I agree with the authors on many points (and have changed my review accordingly), but would leave much of my initial assessment intact.
In my initial review, I mis-spoke on a couple of points, which I will clarify. The lower bound construction does indeed use instance-dependent label noise on a discrete space, not uniform label noise. Label shift is indeed a different problem, and I do not mean to suggest this is the same setting; but the simulations/benchmarking of the papers in that area comes far closer to practical relevance here. Thanks to the authors for their experiments in the rebuttal, including on LP-FT.
The NI-ERM method itself is oblivious of the existence of any noise, and there is no new algorithm or loss function. Yet it is standard practice because it seems to sometimes work. So the paper's contribution is looking to justify where the standard procedure does and doesn't work - in which label noise situations would it be fine to use, and where should one look to do something else - where is ignorance bliss?
This is why the brittleness of the theory matters in this context. I am very familiar with the standard label noise literature (Tsybakov, Massart, et al.) and the authors are correct about the theory here also being in that spirit. However, that theory is most robust/successful (and originally developed) for binary classification, in which the problem reduces to one where the authors' RSS is a good pointwise signal-to-noise measure. As K > 2, the min over the K classes in the RSS starts to eliminate more and more information, and lower the RSS even when learning would appear to be intuitively possible. For e.g. Imagenet (K=1000) or many similar-scale problems, the classes themselves show significant structure, with some being easily confusable for each other, and others less so; the sets A_{\kappa} could be small or even null, in my practical opinion. Clearly this will not nontrivially happen for K = 2, and tends to happen less for low K.
Ideally a reasonable \kappa (to get a reasonable \epsilon) should scale nicely with K, which would make the upper bounds quite relevant. I don't have good intuition about whether this applies for noise distributions in the wild. (It would be great to see if this is correct by computing $A_{\kappa}$ for a real dataset, which can be done approximately with sampling. If $A_{\kappa}$ is nontrivial for reasonable $\kappa$, the inverse dependence of the bounds on $\kappa$ means that we do need some statistically nontrivial gap.)
This of course affects the model class over which the minimax result is proved. The result is strong (like any minimax result is) *over its posterior drift model class* - but \Pi may be nearly empty. The lower bound construction is certainly correct, but its tightness, and the logic of the proof, again rely on the model class \Pi.
All this means I am not really able to answer the question, as a practical matter, of when ignorance is bliss. The practical relevance of the theory here appears limited for higher K (though it could be very relevant for low K). There are of course many theories of learning under label noise that extend beyond the binary setting, but those often lead to a new algorithm, loss function, etc. Since this is not true here, the "practically interpretable" aspects of the theory are important in assessing the contribution.
---
Rebuttal 3:
Comment: We thank reviewer for the detailed response.
## Contribution of the paper
> ... so the paper's contribution is looking to justify where the [Noise ignorant] procedure does and does not work.
Actually, we feel that this is not our main focus (although the exploration of section 5 is concerned with this, e.g., Thm 4).
The main theoretical focus of the paper is **what is the fundamental limit (e.g., minimax risk) in learning with label noise**. In other words, if we **do not assume any further structure on the data**, what can we hope to get, and what works for this. It is in this regard that Section 4 (and the preceding definitions of Section 3) are strong: they identify that the minimax risk is captured directly by the RSS, and that the RSS is the 'right' object to think about when considering label noise.
Note that the RSS is a novel way of characterising the noisy labels problem. The framework not only admits tight bounds, suggesting that at least in a minimax sense RSS is the natural object to study, it also is flexible enough to capture structure (of label noise), one example is demonstrated through Thm 3.
## What if number of classes $K$ is big?
> As $K > 2$, the min over the K classes in the RSS starts to eliminate more and more information, and lower the RSS even when learning would appear to be intuitively possible. For e.g. Imagenet ($K=1000$) or many similar-scale problems, the classes themselves show significant structure, with some being easily confusable for each other, and others less so... Ideally a reasonable $\kappa$ (to get a reasonable $\epsilon$) should scale nicely with K
When $K$ becomes large, even the standard learning scenario becomes provably hard (e.g., literature on extreme classification).
We imagine the same also holds for label noise setup.
A solution for this, just as you have suggested, is to exploit 'structure'; formally speaking, assume **sparsity**: both $\eta$ and $\widetilde{\eta}$ are $C$-sparse, which corresponds to the scenario you described as "...some (classes) being easily confusable for each other, and others less so".
Now, let us examine the concept of RSS again:
$$
M(x; \eta, \widetilde{\eta}) = \min_{j \in \\{1, 2, \dots, K\\} } \frac{ \max_i [\widetilde{\eta}(x)]_i - [\widetilde{\eta}(x)]_j }{ \max_i [\eta(x)]_i - [\eta(x)]_j },
$$
For the "unrelated classes" $j$, where both $[\widetilde{\eta}(x)]_j$ and $[\eta(x)]_j$ are zero,
$$
\frac{ \max_i [\widetilde{\eta}(x)]_i - [\widetilde{\eta}(x)]_j }{ \max_i [\eta(x)]_i - [\eta(x)]_j } = \frac{ \max_i [\widetilde{\eta}(x)]_i }{ \max_i [\eta(x)]_i },
$$
thus it will not deflate the RSS.
Therefore, the minimum over $j \in \\{1, 2, \dots, K\\}$ effectively reduces to a minimum over the set of non-zero indices of $\eta$ and $\widetilde{\eta}$, which contains at most $2C$ elements (from the $C$-sparse assumption above).
To conclude, with an additional sparsity assumption that takes the noise structure into account, our notion of RSS "scales well with $K$".
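The argument above can be checked numerically. Below is a small illustrative sketch (ours, for illustration only; the function name is hypothetical, and the convention of excluding indices with a vanishing denominator, i.e., those attaining the clean Bayes maximum, is our own assumption):

```python
import numpy as np

def rss(eta, eta_tilde):
    """Relative signal strength M(x) at a single point x.
    Convention assumed here: indices j where the denominator vanishes
    (j attains the clean Bayes maximum) are excluded from the min."""
    num = eta_tilde.max() - eta_tilde
    den = eta.max() - eta
    mask = den > 0
    return (num[mask] / den[mask]).min()

# Sparse example with K = 6: the last three classes carry zero mass
# under both eta and eta_tilde ("unrelated classes").
eta       = np.array([0.7, 0.3, 0.0, 0.0, 0.0, 0.0])
eta_tilde = np.array([0.5, 0.4, 0.1, 0.0, 0.0, 0.0])

# For the zero-mass classes the ratio equals max(eta_tilde)/max(eta) = 0.5/0.7,
# so they do not deflate the RSS; the min is attained on the support:
# (0.5 - 0.4) / (0.7 - 0.3) = 0.25.
print(rss(eta, eta_tilde))  # ≈ 0.25 (up to float rounding)
```

As the comments indicate, the "unrelated" classes all contribute the same ratio $\max_i [\widetilde{\eta}]_i / \max_i [\eta]_i$, so only the (at most $2C$) supported classes can attain the minimum.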
## $A_0$ being null?
> For e.g. Imagenet (K=1000) ... with some (classes) being easily confusable for each other, and others less so; the sets $A_{\kappa}$ could be small or even null, in my practical opinion.
In order to claim "$A_{\kappa}$ could be small or even null", several things need to be taken into account:
- value of $\kappa$
- overall hardness of the classification task: $\eta(x)$
- quality of labeller: $\widetilde{\eta}(x)$
For an untrained labeler (e.g., ordinary people like you and us), images of some classes are "easily confusable", so the resulting $A_0$ can be quite small.
For the "ImageNet labeler(s)" who have gone through detailed instruction and careful quality control, the (final) label quality is much better, so $A_0$ should be quite large.
For a fixed unlabeled image dataset and a fixed value of $\kappa$, how big $A_{\kappa}$ is depends on (human) label quality, which should be analyzed case by case.
Towards this end, we thank you for bringing up the task of "calculating the RSS level of real-world noisy dataset". We could start from the CIFAR-N dataset. This is a big project, beyond the scope of the current paper, but would benefit from the framework constructed in this paper.
---
To conclude, the main point of the paper is not to "justify when NI-ERM does and does not work", but to develop a theoretical understanding of the general label noise problem (through the new concept of RSS). We thank you for bringing up the question on how the theoretical concept can be used to explain "label noise in the wild", we believe it is important and worth studying. Thank you again. | null | null | null | null | null | null |
Over-parameterized Student Model via Tensor Decomposition Boosted Knowledge Distillation | Accept (poster) | Summary: In this work, the authors focus on the knowledge distillation (KD) task, using tensor decomposition to enhance the performance of the student model. Leveraging the principle of overparameterization, the authors employ the Matrix Product Operator (MPO), also known as tensor train matrix, to reformulate the original weight matrix. Additionally, they propose a “distillation loss” to measure the distance between the student’s weights and the teacher’s weights. In the experiments, the proposed method is integrated with existing KD methods, and the results demonstrate a clear improvement in KD performance.
Strengths: The paper introduces an innovative application of tensor decomposition. Typically, tensor decomposition is used for dimension reduction, adhering to the low-rank principle. However, this paper utilizes tensor decomposition in a novel manner: employing MPO to construct overparameterized learning models. While I am not entirely convinced why the overparameterized MPO is superior to traditional matrix decomposition or other tensor networks, the numerical results indicate that this approach could be a promising new avenue for using tensor networks to solve more machine learning problems.
Weaknesses: The clarity of the paper needs improvement. For instance, Figure 1 is not fully comprehensible as the meaning of the arrows is not clearly explained. Additionally, the experiment settings do not clearly describe how the weights are reshaped into the higher-order tensor format.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Since the entire paper is based on the principle of overparameterization of the student model, it would be beneficial to explain in the preliminary section why the overparameterization principle is relevant to the KD problem. This addition would help non-experts follow the main idea of the paper more smoothly.
2. In lines 176-177, the phrase “losing its ability to think independently” is highlighted. I am confused by this statement. Could you provide more interpretation of this claim, supported with sufficient evidence?
3. Please offer more interpretation of the central and auxiliary tensors mentioned in lines 184-185, using formulas or figures, for example. I cannot clearly understand their differences from the current non-rigorous descriptions.
4. Why does using MPO provide better performance than SVD? Is it because the MPO forms more “linear layers” than SVD? Have you considered other tensor network formats such as tree tensor networks?
5. How does the selection of TT-ranks (e.g., d_k in Eq. 2) affect the performance?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper lacks a discussion on the limitations of the proposed method. I suggest that the authors address this aspect to provide a more balanced and comprehensive evaluation of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for the constructive comments and suggestions, which are very helpful in improving our paper. The following responses will be incorporated into the revised paper.
**Q1. The impact of over-parameterization on student model performance.**
**Reply:** Thanks for your excellent comment. Since the performance of a student model is typically limited by its number of parameters, increasing the number of model parameters can significantly enhance performance (*e.g.*, TinyViT-21M (91.9) *vs.* TinyViT-5M (87.6) in top-5 accuracy on ImageNet V2). Thus, increasing parameters is beneficial for enhancing the performance of the student model. We will include information about the impact of over-parameterization on the performance of the student model in the revised version of the preliminary section.
**Q2. The meaning of "think independently".**
**Reply:** Great remark! We will rephrase this statement to be "learn independently". In the OPDF, auxiliary tensor alignment allows the student model to learn task-relevant information. During the distillation process, learning through the central tensor not only enables the model to imitate the teacher model, but also equips it with the capability to learn independently from the original labels, detached from the teacher model. Consequently, this endows the model with the potential to surpass the teacher model.
Conversely, if the entire parameter matrix is directly aligned, then the student model can only emulate the teacher model. The effectiveness of this method is further evidenced by the experimental results. As shown in Table 1 in our paper, after incorporating OPDF, the student model outperforms the teacher model by 1.2% on the RTE dataset.
**Q3. The interpretation of central and auxiliary tensors.**
**Reply:** Thanks for your comment. The definition of central and auxiliary tensors is shown in lines 152-153 in our paper (please kindly refer to the Figure 1b). MPO allows for the decomposition of parameter matrices into a series of tensors:
$${MPO}~(\mathbf{M})= \prod_{k=1}^n \mathbf{T_{(k)}}[d_{k-1},i_k,j_k,d_k].\tag{1}$$
$$d_k = \min\bigg(\prod_{m=1}^k i_m\times j_m, \prod_{m=k+1}^n i_m\times j_m\bigg).\tag{2}$$
Following Refs. [1,2], the tensor right in the middle is termed the central tensor, and the rest are termed auxiliary tensors.
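To make Eqns. (1)-(2) concrete, here is a minimal NumPy sketch (ours, purely illustrative; the function names are hypothetical and not from our released code) that factors a matrix into such four-way tensors via sequential exact SVDs (no rank truncation) and contracts them back. With $n$ cores, the middle one plays the role of the central tensor and the others are auxiliary tensors:

```python
import numpy as np

def mpo_decompose(M, in_dims, out_dims):
    """Factor M (shape prod(in_dims) x prod(out_dims)) into n four-way
    tensors T_k[d_{k-1}, i_k, j_k, d_k] via sequential exact SVDs."""
    n = len(in_dims)
    # Reshape to (i_1, ..., i_n, j_1, ..., j_n), then interleave the indices
    # to (i_1, j_1, i_2, j_2, ...), the standard TT-matrix ordering.
    T = M.reshape(list(in_dims) + list(out_dims))
    T = T.transpose([p for k in range(n) for p in (k, n + k)])
    cores, d_prev = [], 1
    for k in range(n - 1):
        T = T.reshape(d_prev * in_dims[k] * out_dims[k], -1)
        U, S, Vt = np.linalg.svd(T, full_matrices=False)  # exact, no truncation
        cores.append(U.reshape(d_prev, in_dims[k], out_dims[k], len(S)))
        T, d_prev = np.diag(S) @ Vt, len(S)
    cores.append(T.reshape(d_prev, in_dims[-1], out_dims[-1], 1))
    return cores

def mpo_contract(cores, in_dims, out_dims):
    """Contract the cores back into the original matrix."""
    T = cores[0]
    for core in cores[1:]:
        T = np.tensordot(T, core, axes=1)  # contract the shared bond index
    T = T.reshape(T.shape[1:-1])           # drop the two dummy boundary bonds
    n = len(in_dims)
    T = T.transpose([2 * k for k in range(n)] + [2 * k + 1 for k in range(n)])
    return T.reshape(int(np.prod(in_dims)), int(np.prod(out_dims)))

# Round trip on a 4 x 6 matrix with i = (2, 2), j = (2, 3):
M = np.arange(24.0).reshape(4, 6)
cores = mpo_decompose(M, (2, 2), (2, 3))
print([c.shape for c in cores])  # [(1, 2, 2, 4), (4, 2, 3, 1)]
print(np.allclose(mpo_contract(cores, (2, 2), (2, 3)), M))  # True
```

Because no singular values are discarded, the contraction recovers the original matrix exactly, which is why over-parameterized training does not change the inference-time parameter count.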
*References:*
[1] P. Liu, et al. Enabling Lightweight Fine-tuning for Pre-trained Language Model Compression based on Matrix Product Operators. ACL, 2021.
[2] Z.F. Gao, et al. Parameter-Efficient Mixture-of-Experts Architecture for Pre-trained Language Models. COLING, 2022.
**Q4. Reasons why MPO outperforms SVD and the justification for not using MPO over other tensor networks.**
**Reply:** This is an insightful question. MPO generally surpasses SVD because adding singular-value dimensions in SVD improves the model capacity very little; due to the fixed degrees of freedom, expanding parameters through SVD has an upper limit. In contrast, MPO employs matrices of singular vectors without increasing the singular-value dimensions, enabling potentially unlimited growth in model parameters.
We have also considered other tensor network formats such as CP decomposition (CPD) and Tucker decomposition. Generally, the model capacity grows as the number of tensors $n$ increases. When $n > 3$, MPO has smaller time complexity than Tucker decomposition. Note that SVD can be considered a special case of MPO when $n = 2$, and CPD is a special case of Tucker when the core tensor is super-diagonal. The specifics are illustrated in **Table A**. We will investigate other tensor networks as part of our future work.
**Table A.** Inference time complexities of different low-rank approximation methods. Here, $n$ denotes the number of tensors, $m = \max(\\{i_k\\}_{k=1}^n)$ denotes the largest $i_k$ in the input dimension list, and $d = \max(\\{d_k^{'}\\}_{k=0}^n)$ denotes the largest dimension $d_k^{'}$ in the truncated dimension list.
| Category | Method| Inference Time |
| ----- | :-------------------: | :--------------: |
| **Tucker** | $Tucker_{(d=1)}$ (CP) | $O(nmd^2)$ |
| **Tucker** | $Tucker_{(d>1)}$ | $O(nmd + d^n)$ |
| **MPO** | $MPO_{(n=2)}$ (SVD) | $O(2md^3)$ |
| **MPO** | $MPO_{(n>2)}$ | $O(nmd^3)$ |
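As a rough numerical illustration of the over-parameterization effect (our own sketch; `mpo_param_count` is a hypothetical helper that simply counts the entries of the untruncated cores, using the bond dimensions of Eqn. (2)):

```python
from math import prod

def mpo_param_count(in_dims, out_dims):
    """Parameters in an exact MPO factorization T_k[d_{k-1}, i_k, j_k, d_k]
    of a prod(in_dims) x prod(out_dims) matrix, with the untruncated bond
    dimensions d_k = min(prod_{m<=k} i_m*j_m, prod_{m>k} i_m*j_m)."""
    n = len(in_dims)
    pairs = [i * j for i, j in zip(in_dims, out_dims)]
    d = [1] + [min(prod(pairs[:k + 1]), prod(pairs[k + 1:]))
               for k in range(n - 1)] + [1]
    return sum(d[k] * in_dims[k] * out_dims[k] * d[k + 1] for k in range(n))

# A 16 x 16 weight matrix has 256 entries; both factorizations exceed that
# during training, while inference still uses the contracted 256 entries:
print(mpo_param_count((4, 4), (4, 4)))              # n = 2 (SVD-like): 512
print(mpo_param_count((2, 2, 2, 2), (2, 2, 2, 2)))  # n = 4: 544
```

The count grows with $n$ in this toy example (544 vs. 512), consistent with the statement above that capacity grows with the number of tensors.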
**Q5. The impact of TT-Ranks on performance.**
**Reply:** Excellent comment! Indeed, we have conducted experiments regarding the impact of $d_k$ on the model performance (please kindly refer to Appendix Table S.4). We can observe that the performance of our approach consistently stabilizes around certain values, indicating that our method is *not sensitive* to the specific MPO techniques used. Therefore, when over-parameterizing, we should focus primarily on the decomposition scale rather than the MPO method employed.
**Q6. The limitations of OPDF.**
**Reply:** Great remark! As discussed in Section 5.3 of our paper, while OPDF can enhance the performance of the student model through over-parameterization, there are inherent limits to these benefits. Additionally, because it over-parameterizes the KD model, OPDF results in higher memory consumption and longer training time. The memory usage and training cost before and after using OPDF can be found in **Table 1** in the *one-page PDF rebuttal file*.
While OPDF increases the memory usage and training time, this impact diminishes as the dataset size grows, meaning that the ratio of additional memory and training time to the original requirements decreases. We will include an enhanced discussion on the limitations of OPDF in the revised paper.
**Concluding remark:** We sincerely thank you for putting forward excellent comments. We hope the above responses are helpful to clarify your questions. We look forward to addressing any additional questions. Your consideration of improving the rating of our paper will be much appreciated!
---
Rebuttal 2:
Title: Looking forward to your feedback
Comment: Dear Reviewer XV5e,
We're keen to know if there are any remaining concerns that require attention or if there are additional discussions that should take place. Your insights are greatly appreciated as they contribute to the refinement of the paper. Looking forward to your feedback. Thank you.
Best regards,
The Authors
---
Rebuttal 3:
Title: Request your feedback before the end of the discussion period
Comment: Dear Reviewer XV5e:
As the author-reviewer discussion period will end soon, we would appreciate it if you could kindly review our responses at your earliest convenience. If there are any further questions or comments, we will do our best to address them before the discussion period ends.
Thank you very much for your time and efforts!
Sincerely,
The Authors
---
Rebuttal 4:
Title: Kindly request your feedback before the end of the discussion period
Comment: Dear Reviewer XV5e:
As the author-reviewer discussion period is soon ending, we would appreciate it if you could review our responses and provide your feedback at your earliest convenience. If there are any further questions or comments, we will do our best to address them before the discussion period ends.
Thank you very much for your time and efforts!
Sincerely,
The Authors
---
Rebuttal Comment 4.1:
Comment: Sorry for the delayed reply. I appreciate the detailed and clear response from the authors. It answered most of my concerns in the review. I will adjust the recommendation score from 4 to 5.
---
Reply to Comment 4.1.1:
Title: Thank you for raising the score
Comment: Dear XV5e:
Thank you for your positive feedback. We will include the additional experiments and texts in the revised paper.
Thank you for your time and efforts!
Best regards,
The Authors | Summary: This paper introduces the Over-Parameterization Distillation Framework (OPDF), which addresses performance degradation in limited-parameter student networks after knowledge distillation (KD). OPDF proposes an overparameterized student model that utilizes the tensor-decomposition technique known as matrix product operator (MPO), allowing for a significant increase in parameters during student training time without imposing additional inference time burden. Experimental validation is performed across multiple KD tasks to assess the effectiveness of the proposed technique.
Strengths: + The proposed technique appears to be quite novel. The use of MPO to expand parameters in the student model during the KD process, along with the tensor alignment loss function to improve student model performance, introduces innovative approaches that could offer significant advantages, particularly on low-computational devices.
+ The proposed methodology is easy to understand, even though it includes some abstract concepts. The authors have effectively structured their methodology by first providing a high-level overview of their technique with an illustrative figure, followed by detailed explanations of their important components.
+ Extensive experiments are conducted across both NLP and CV tasks. Multiple knowledge distillation (KD) techniques are employed to demonstrate how their contribution is orthogonal to existing methods. Also, the parameters introduced in the training and inference processes are clearly shown.
+ The study extensively examines the impact of overparameterization scale, learning rate, and other components of ODPF through ablation experiments. This study helps to justify the effectiveness of their technique.
Weaknesses: - The paper does not provide information on the time required for the student network's overparameterization using MPO and the contraction of decomposed matrices into the original matrix. Having this information would be important for understanding the practical implications of the technique.
- The experiments are conducted on a relatively smaller model. I am curious about the feasibility of applying this technique to LLM/VLMs with billions of parameters. A significant concern is whether their approach, which involves decomposing the teacher network, can scale effectively to such large models. A deeper discussion on this topic would provide valuable insights.
Technical Quality: 4
Clarity: 4
Questions for Authors: Please refer to Weaknesses section
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for the positive feedback along with constructive comments and suggestions, which are very helpful in improving our paper. We are also grateful that you recognized the strengths and contributions of our paper. Moreover, the following responses will be incorporated into the revised paper.
**Q1. Time Consumption for decomposing and reconstructing the parameter matrix.**
**Reply:** This is a great remark. We list the time for over-parameterization using MPO and for the contraction of the decomposed matrices back into the original matrix in **Table A** below. It can be observed that the time required for decomposition and reconstruction is acceptable compared to the training duration (please also see Appendix C).
**Table A.** The spending time (s) of decomposition and reconstruction.
| Cases | RTE | MRPC | STS-B | CoLA | SST-2 | QNLI | QQP | MNLI |
| ------------------------------- | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |
| **BERT-of-Theseus** Decomposition | 397.4 | 308.5 | 400.2 | 154.6 | 797.7 | 671.6 | 584.7 | 137.5 |
| **BERT-of-Theseus** Reconstruction | 2.3 | 2.0 | 2.4 | 0.7 | 12.8 | 3.3 | 10.9 | 0.8 |
| **LGTM** Decomposition | 403.6 | 369.5 | 377.8 | 192.2 | 131.4 | 123.9 | 80.2 | 83.0 |
| **LGTM** Reconstruction | 8.6 | 6.8 | 7.2 | 2.6 | 1.6 | 0.9 | 1.0 | 1.5 |
| **DBKD** Decomposition | 117.5 | 189.1 | Na | 168.5 | 153.1 | 166.2 | 110.7 | 165.0 |
| **DBKD** Reconstruction | 0.9 | 1.0 | Na | 0.8 | 0.9 | 0.8 | 0.7 | 0.8 |
| **AD-KD** Decomposition | 291.7 | 313.0 | 232.6 | 235.5 | 119.0 | 148.0 | 148.1 | 171.6 |
| **AD-KD** Reconstruction | 1.6 | 1.8 | 1.2 | 1.4 | 0.9 | 1.0 | 2.5 | 1.2 |
**Q2. Applying OPDF to model with billions of parameters.**
**Reply:** This is an insightful question. We have implemented OPDF on the GPT-2-760M, OPT-6.7B, and LLAMA-7B models, with corresponding teacher models of GPT-2-1.5B, OPT-13B, and LLAMA-13B, respectively. The Rouge-L scores of these models on five instruction-following datasets are presented in **Table B** (see below). Our results indicate that OPDF significantly enhances the distillation efficiency for larger models across all datasets, demonstrating its efficacy even for models with billions of parameters.
**Table B.** Distillation results on larger models with OPDF.
| Model | #Params | Method | Dolly | SelfInst | Vicuna | S-NI | UnNI | Avg. | # Train Params | # Inference Params |
| --------- | :-------: | :-------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------------: | :------------------: |
| **GPT-2** | 1.5B | Teacher | 27.6 | 14.3 | 16.3 | 27.6 | 31.8 | 23.5 | 1.5B | 1.5B |
| **GPT-2** | 760M | w/o KD | 25.4 | 12.4 | 16.1 | 21.5 | 24.0 | 19.9 | 760M | 760M |
| **GPT-2** | 760M | KD | 25.9 | 13.4 | 16.9 | 25.3 | 28.0 | 21.9 | 760M | 760M |
| **GPT-2** | 760M | KD+OPDF | **26.1** | **14.1** | **17.5** | **25.7** | **28.6** | **22.4** | 1.3B | 760M |
| **OPT** | 13B | Teacher | 29.2 | 18.4 | 17.8 | 30.4 | 36.1 | 26.4 | 13B | 13B |
| **OPT** | 6.7B | w/o KD | 27.6 | 16.4 | 17.8 | 30.3 | 28.6 | 24.1 | 6.7B | 6.7B |
| **OPT** | 6.7B | KD | 28.3 | 17.0 | 17.5 | 30.7 | 26.7 | 24.0 | 6.7B | 6.7B |
| **OPT** | 6.7B | KD+OPDF | **28.5** | **17.7** | **17.9** | **31.4** | **29.8** | **25.1** | 14B | 6.7B |
| **LLaMA** | 13B | Teacher | 29.7 | 23.4 | 19.4 | 35.8 | 38.5 | 29.4 | 13B | 13B |
| **LLaMA** | 7B | w/o KD | 26.3 | 20.8 | 17.5 | 32.4 | 35.8 | 26.6 | 7B | 7B |
| **LLaMA** | 7B | KD | 27.4 | 20.2 | 18.4 | 33.7 | 37.9 | 27.5 | 7B | 7B |
| **LLaMA** | 7B | KD+OPDF | **27.5** | **21.6** | **19.7** | **34.8** | **40.0** | **28.7** | 10B | 7B |
---
Rebuttal Comment 1.1:
Title: Thanks for the Rebuttal
Comment: Dear Authors,
Thank you for your effort toward addressing my concerns. My concerns have been addressed and therefore, I would like to maintain my original score.
---
Rebuttal 2:
Title: Thank you for your feedback
Comment: Dear Reviewer Dajm,
Thank you for your positive feedback. We will include the new experiments and texts in our revised paper.
Thank you very much for your time and effort!
Best regards,
The Authors | Summary: The authors propose to start with an over-parameterised student model. This is realised using high-order tensors that can reconstruct the original parameter matrices. The idea is that this over-parameterised model will benefit more from knowledge distillation.
Strengths: The idea of over-parameterising the student during training while preserving the inference parameters (since the higher-order tensors can simply be contracted to reconstruct the original weights) is quite interesting/novel.
Weaknesses: TinyViT-5M⚗ achieves 80.7% top-1 with 5.4M parameters using just a distillation token and a logit distillation loss. Comparing this with TinyViT-5M+OPDF at 80.0% top-1 and 9.9M/5.4M inference parameters, it is hard to see the benefit. If anything, the additional SVD operations and reconstruction losses make OPDF much more difficult to adopt. What is worse is that this difference becomes even more significant when going to the larger TinyViT models.
It is understandable that transformers have been the main focus of this work; however, it would be good to see some experiments with CNNs. Matrix decomposition for the transformer linear layers is easy (with SVD), but when going to higher-order tensors it becomes NP-hard. All the theory presented in this paper generalises to arbitrary dimensions, yet all the experiments are done on 2-dimensional weights. The paper would be a lot easier to follow without the premature generalisation, i.e. showing equation (4) in 2 dimensions with SVD, and I am not sure the introduction of the MPO framework is needed.
L177: I am not sure what you mean by "think independantly?". Leaving aside the point that models don't think, if the student can match the teacher fully then that is perfectly fine. The only goal is to preserve the performance of the teacher in a much smaller model.
L595: 160 days to pre-train TinyViT is very significant! I understand this is because ImageNet-21K is very large, but why was this benchmark chosen over, for example, a more standard KD benchmark such as that in DeiT [1] or CRD [2]? Is 160 days an expected timescale for the number of GPUs used here?
[1] Training data-efficient image transformers & distillation through attention ICML 2021
[2] Contrastive Representation Distillation ICLR 2020
Small points/spelling:
Fig 1a "Batchs -> Batches"
Technical Quality: 3
Clarity: 2
Questions for Authors: What is the normalization step in Figure 1 and Alg. S.1 L9, and what is its importance? If it is just to preserve the scale and stop values exploding after reconstruction, this could be explained in the text a bit.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have adequately addressed these limitations in the checklist.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Reply to Reviewer fkms
We sincerely thank you for the constructive comments and suggestions. The following responses will be incorporated into the revised paper.
**Q1. Results on CV tasks differ from the original TinyViT paper. Compared to the distillation results of the original method, OPDF does not offer any advantages.**
**Reply:** Great comment! However, we would like to **clarify** that *the tasks (TinyViT-5M⚗ achieves 80.7\% top-1 with 5.4M parameters) in the original TinyViT paper differ from those in Table 2 of our paper*. In lines 230-231 (please see our paper), we report CV task results based on the performance of models tested directly **without fine-tuning**, whereas the original TinyViT paper presents results **after fine-tuning**.
Typically, the more parameters a pretrained model has, the greater the performance improvement after fine-tuning, which likely explains why larger models exhibit greater performance gaps. In our replication results, our method improved TinyViT's result from 77.4 to 80.0 under the same conditions, demonstrating OPDF's efficacy.
**Q2. Using OPDF in CNN distillation models.**
**Reply:** Thanks for your comment. Ref. [1] affirms MPO's ability to decompose CNNs, confirming its applicability. To show OPDF's effectiveness in CNN distillation, we conducted extra experiments using the methodology from Ref. [2]: distilling a WideResNet from a larger to a smaller version, incorporating OPDF into the OFD and classical knowledge distillation (KD) frameworks. Results and parameters are detailed in **Tables 2 and 3** of the *one-page PDF rebuttal file*.
Our results demonstrate that OPDF enhances distillation performance in classical KD and OFD through over-parameterization, underscoring its adaptability in CNN models. This finding will be included in the revised paper. Additionally, as shown in **Table 3** of the *one-page PDF rebuttal file*, OPDF does not increase inference parameters, thus preserving inference time.
**References:**
[1] Z.F. Gao et al. Compressing deep neural networks by matrix product operators. Physical Review Research, 2020, 2(2): 023300.
[2] Heo, B. et al. A comprehensive overhaul of feature distillation. ICCV2019.
**Q3. The difference between MPO and SVD. Necessity and Effectiveness of MPO.**
**Reply:** Thanks for your question. MPO and SVD are indeed different. Since enlarging the singular-value dimension in SVD does not increase the degrees of freedom, directly expanding the parameter count through SVD improves the model capacity *very little*. Moreover, over-parametrization via SVD is subject to an upper limit on the number of parameters. Conversely, MPO uses the matrices of eigenvectors without expanding the eigenvalue dimensions, potentially allowing unlimited growth in model parameters.
For example, for a matrix ${W} \in \mathcal{R}^{I\times J}$, the parameter limit via SVD is $I^2 + J^2$. In contrast, over-parameterizing ${W}$ using MPO with $n$ tensors allows more parameters, as given by the following equation, proving more effective than SVD.
$$N = \sum_{k=1}^{n} i_kj_kd_{k-1}d_k.$$
Besides allowing the decomposition of the parameter matrix into an arbitrary number of effective parameters, OPDF demonstrates more significant performance improvements compared to SVD, as detailed in Section 5.2 of our paper. According to Tables 1 and 2 in our paper, while incorporating SVD generally enhances the knowledge distillation (KD) model's performance across most datasets, it does not perform as well as OPDF.
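To make the parameter-count comparison concrete, the toy calculation below (shapes and bond dimensions are illustrative assumptions, not settings from the paper) evaluates the equation above and shows how the MPO count can exceed the SVD ceiling $I^2 + J^2$ once the bond dimensions grow:

```python
import numpy as np

def svd_param_limit(I, J):
    # Over-parameterizing W (I x J) as U S V^T is capped by the
    # sizes of the two square factors U (I x I) and V (J x J).
    return I * I + J * J

def mpo_param_count(i_dims, j_dims, bond_dims):
    # N = sum_k i_k * j_k * d_{k-1} * d_k for an MPO whose k-th core
    # has shape (d_{k-1}, i_k, j_k, d_k); bond_dims includes the
    # boundary bonds d_0 = d_n = 1.
    return sum(i * j * dl * dr
               for i, j, dl, dr in zip(i_dims, j_dims, bond_dims[:-1], bond_dims[1:]))

# Illustrative: factor a 768 x 768 matrix as (4*4*48) x (4*4*48).
I = J = 768
i_dims, j_dims = (4, 4, 48), (4, 4, 48)
assert np.prod(i_dims) == I and np.prod(j_dims) == J

small = mpo_param_count(i_dims, j_dims, (1, 100, 100, 1))    # 392,000 params
large = mpo_param_count(i_dims, j_dims, (1, 1000, 1000, 1))  # 18,320,000 params
assert small < svd_param_limit(I, J) < large
```

Growing the bond dimensions `d_k` raises `N` without bound, whereas the SVD-based expansion stays capped at 1,179,648 parameters for this matrix.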
**Q4. The meaning of "think independantly".**
**Reply:** Great remark! We will rephrase this statement as "learn independently". In OPDF, auxiliary tensor alignment allows the student model to learn task-relevant information. Through distillation with the central tensor, the student model can not only imitate the teacher model, but also learn independently from the original labels, detached from the teacher model. Consequently, this gives the model the potential to surpass the teacher model.
Conversely, if the parameter matrix is fully aligned, the student model merely emulates the teacher model. Experimental results validate this, with Table 1 showing that after adopting OPDF, the student model exceeds the teacher by 1.2% on the RTE dataset.
**Q5. The training time for TinyViT is excessively long.**
**Reply:** Excellent comment! One GPU day refers to *running a single GPU for one day*, so the actual wall-clock distillation time can be reduced through multi-GPU parallelism. By utilizing four parallel servers (each equipped with 8 NVIDIA A100 GPUs) for training, the actual training time amounts to 5 days. The original TinyViT paper stated a distillation time of 140 GPU days, which increased to 160 GPU days (a 14% rise) after using the OPDF method, while significantly enhancing effectiveness.
Table 1 of our paper shows the parameter changes after over-parameterization, e.g., DBKD (53M) *vs.* DBKD+OPDF (83M), with performance increasing from 74.4 to 77.6. It is evident that introducing additional training parameters incurs a certain increase in training cost, yet it notably boosts distillation performance.
**Q6. The meaning and significance of normalization.**
**Reply:** Great question! During MPO decomposition, normalization distributes information evenly across the tensors. As depicted in Algorithm 1 in our paper, after $n$ SVD iterations, all eigenvalue information accumulates in the final tensor, while the preceding $n-1$ tensors are derived from unitary matrix decompositions and do not contain significant information. Normalization effectively distributes information across each tensor.
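As a minimal analogue of this procedure, the sketch below splits a weight matrix into two tensor cores with one SVD and absorbs the square root of the singular values into *both* cores, so that neither core holds all of the eigenvalue information. The paper's Algorithm 1 works with $n$ cores and its own normalization scheme, so this is only an illustration:

```python
import numpy as np

def mpo_decompose_2core(W, i_dims, j_dims):
    # Split W (I x J, with I = i1*i2 and J = j1*j2) into two MPO
    # cores via one SVD, absorbing sqrt of the singular values into
    # BOTH cores so neither one carries all the eigenvalue information.
    i1, i2 = i_dims
    j1, j2 = j_dims
    T = W.reshape(i1, i2, j1, j2).transpose(0, 2, 1, 3).reshape(i1 * j1, i2 * j2)
    U, S, Vt = np.linalg.svd(T, full_matrices=False)
    r = len(S)
    core1 = (U * np.sqrt(S)).reshape(i1, j1, r)
    core2 = (np.sqrt(S)[:, None] * Vt).reshape(r, i2, j2)
    return core1, core2

def mpo_reconstruct(core1, core2):
    # Contract the cores back into the original matrix shape.
    i1, j1, r = core1.shape
    _, i2, j2 = core2.shape
    T = np.tensordot(core1, core2, axes=([2], [0]))  # (i1, j1, i2, j2)
    return T.transpose(0, 2, 1, 3).reshape(i1 * i2, j1 * j2)

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 6))
c1, c2 = mpo_decompose_2core(W, (2, 3), (2, 3))
W_rec = mpo_reconstruct(c1, c2)
assert np.allclose(W, W_rec)  # exact reconstruction when no bonds are truncated
```

The sqrt-split plays the role of normalization here: without it, `core2` would carry the full singular-value spectrum and `core1` would be a purely unitary factor.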
**Concluding remark:** We sincerely thank you for putting forward excellent comments. We hope the above responses are helpful to clarify your questions. We look forward to addressing any additional questions. Your consideration of improving the rating of our paper will be much appreciated!
---
Rebuttal 2:
Title: Looking forward to your feedback
Comment: Dear Reviewer fkms,
We're keen to know if there are any remaining concerns that require attention or if there are additional discussions that should take place. Your insights are greatly appreciated as they contribute to the refinement of the paper. Looking forward to your feedback. Thank you.
Best regards,
The Authors
---
Rebuttal 3:
Comment: Thank you for your thorough response and additional experiments. Most of my concerns have been addressed, and in light of the other reviewers' remarks I am happy to raise my score. However, I am still unsure about 2 main parts.
Firstly, although I do understand and appreciate that the authors only report the linear-probe performance, and thus a comparison to TinyViT, DeiT etc. is unfair, I would like to know why this is done? The authors don't seem to do this for the NLP-related tasks. Are there other KD works that train models and evaluate in this fashion (specifically for vision tasks)? Vision KD is a lot more mature than NLP KD, especially for feature distillation, and so I believe a comparison here is important.
Secondly, the experiments on CNN models are greatly appreciated and do strengthen this submission. It is important to show that the generalisation has practical utility rather than being there just for the sake of generality. The authors have addressed my concern over this point theoretically too. These CIFAR-100 results are fully fine-tuned, which is nice to see and partly addresses my previous concern. OFD is arguably quite an old paper. Is there any chance for a comparison to some other architecture pairs provided in the CRD [1] benchmark, using some more recent KD methods? There are some simple methods with code here [2].
Hope the authors can address my concerns.
[1] "Contrastive Representation Distillation" ICLR 2020
[2] https://paperswithcode.com/sota/knowledge-distillation-on-cifar-100
---
Rebuttal 4:
Title: Additional vision KD results (part 1)
Comment: We greatly appreciate your feedback along with two additional questions. Please see our reply to your *first question* as follows.
**Q1. The application of linear-probe performance metrics; distillation results after fine-tuning.**
**Reply:** Thanks for your question. In fact, Table 5 in the original TinyViT paper [1] already utilizes linear-probe performance to assess the efficacy of TinyViT. Hence, we opted to use it for comparison to effectively demonstrate the validity of OPDF.
Moreover, we have fine-tuned models previously distilled following the tasks set in TinyViT (where TinyViT-5M⚗ achieves 80.7% top-1 accuracy with 5.4M parameters), and the results are shown in **Table A** below to demonstrate the efficiency of our proposed OPDF.
The experimental results show that even when employing the same experimental setup as TinyViT, OPDF consistently yields a significant enhancement in the performance of TinyViT. We will include these results in the revised paper.
**Table A.** Models are pretrained on ImageNet-21k and then fine-tuned on ImageNet-1k, ImageNet-Real and ImageNet-V2 (top-1/top-5).
| Datasets | ImageNet-1k | ImageNet-Real | ImageNet-V2 | # Train Params (M) | # Inference Params (M) |
| --------- | :--------: | :--------:| :--------:| :--------: | :-----------: |
| TinyViT-5M | 80.7/95.6 | 87.5/97.8 | 68.3/89.7 | 5.4 | 5.4 |
| TinyViT-5M+OPDF |**81.8/96.9** | **87.9/98.4** | **69.5/90.4** | 9.9 | 5.4 |
| TinyViT-11M | 83.2/96.5 | 88.3/98.1 | 72.9/91.4 | 11 | 11 |
| TinyViT-11M+OPDF| **85.1/97.1** | **89.0/98.5** | **74.1/93.3** | 23 | 11 |
| TinyViT-21M | 84.8/97.3 | 88.9/98.5 | 75.1/93.5 | 21 | 21 |
| TinyViT-21M+OPDF| **86.5/97.9** | **89.7/98.9** | **76.2/94.7** | 38 | 21 |
In addition, the code applying OPDF to other recent CNN KD methods (e.g., CRD) is currently running. We will supplement the corresponding results as soon as model training is completed, and will then respond to your *second question*. Thank you for your patience.
**References:**
[1] Wu, et al. TinyViT: Fast Pretraining Distillation for Small Vision Transformers. ECCV 2022, pages 68–85.
---
Rebuttal Comment 4.1:
Comment: Thank you for your extensive effort in response to my questions. I am happy to raise my score to 5 BA, and I am also very keen to see what will come back with regards to using other recent CNN KD methods.
---
Reply to Comment 4.1.1:
Title: Additional vision KD results (part 2)
Comment: The following response reports the additional KD results for more recent vision models.
**Q2. The performance of OPDF on some architecture pairs provided in CRD and several recent CNN KD methods.**
**Reply:** Thanks for your great question. We have applied OPDF to several architecture pairs in CRD and recent KD methods to show the capability of our method in enhancing the performance of CNN distillation models. The new results are reported in **Table B**.
It can be seen that our proposed OPDF can enhance student model performance in several architecture pairs provided in the CRD benchmark using more recent KD methods. We will include these results in the revised paper.
**Table B.** Distillation results of various CNN architectures on CIFAR-100.
| Teacher to Student | ResNet110 to ResNet20 | ResNet110 to ResNet32 | ResNet32x4 to ResNet8x4 |
| --------- | :--------: | :--------:| :--------:|
| Teacher | 74.31 | 74.31 | 79.42 |
| CRD | 71.46 | 73.48 | 75.51 |
| CRD+OPDF | **72.35** | **74.57** | **75.83** |
| NST | 69.53 | 71.96 | 73.30 |
| NST+OPDF | **70.98** | **72.34** | **74.00** |
| ITRD [1] | 71.99 | 74.26 | 76.19 |
| ITRD+OPDF | **72.48** | **75.92** | **77.73** |
| KD+LSKD [2] | 71.99 | 74.26 | 76.19 |
| KD+LSKD+OPDF | **72.33** | **74.93** |**76.89** |
Since the discussion period is ending soon, this is the best result we could supply at this moment after working around the clock on the new experiments.
Thank you very much for your time and efforts. Your consideration of further improving the rating of our paper will be much appreciated!
**Reference:**
[1] Miles, et al. Information theoretic representation distillation. In British Machine Vision Conference (BMVC), 2022.
[2] Sun, et al. Logit standardization in knowledge distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15731–15740, 2024. | Summary: This paper proposes a novel over-parameterization framework designed to enhance the effectiveness of knowledge distillation. This framework employs MPO as a tensor decomposition technique to expand small models into larger ones to give the student model more capacity. Moreover, to enhance the effectiveness of knowledge distillation, a tensor constraint loss is introduced to align the teacher and student model. Extensive experiments verify the superiority of the method.
Strengths: 1. Enhancing the capacity of the student model through tensor decomposition is novel and it does not incur additional inference overhead.
2. The loss constraint for aligning the auxiliary tensors between the student and teacher models is also quite different from the conventional logit or feature matching, representing a new approach of matching in KD.
3. The experiments are comprehensive; the authors test the method on many benchmarks in both CV and NLP, proving the effectiveness of the method.
Weaknesses: 1. I would like to know how much additional distillation cost (memory and time cost) will be incurred by introducing such tensor decomposition technique and loss constraint for aligning the auxiliary tensors between the student and teacher models.
2. Especially in the era of LLMs, the high cost of distillation often limits its application. Methods like LoRA have been proposed to reduce the number of trainable model parameters. I am concerned whether this approach goes against current mainstream research directions.
3. Can the authors provide some distillation results on larger models to further validate the applicability of the approach on large models?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for the constructive comments and suggestions, which are very helpful for improving our paper. We are also grateful that you recognized the strengths and contributions of our paper. Moreover, the following responses will be incorporated into the revised paper.
**Q1. Additional distillation cost (memory and time) incurred by introducing OPDF.**
**Reply:** Great remark! We listed the distillation cost (memory and time cost) of the original model and the model after applying OPDF in **Table 1** in the *one-page PDF rebuttal file*. We can observe that as the number of parameters obtained from MPO decomposition increases, both the training time and memory cost increase. However, as the dataset size increases, the ratio of additional time and memory required for training by OPDF to the original training requirements generally exhibits a decreasing trend (e.g., 0.6/0.4 for RTE *vs* 0.3/0.1 for MNLI in BERT-of-Theseus model). Therefore, the additional time and memory introduced by our method become less of a critical bottleneck affecting the training speed as the dataset size increases. Hope this clarifies your question.
**Q2. OPDF may conflict with mainstream research directions aimed at reducing trainable model parameters (e.g., LoRA).**
**Reply:** Excellent comment! We would like to clarify that our approach has a different goal compared with LoRA. Whereas LoRA is designed to reduce the number of parameters during the lightweight fine-tuning process, OPDF aims to enhance the capabilities of existing knowledge distillation models through an over-parameterization procedure.
Nonetheless, we performed tests to assess the effect of over-parameterization through MPO on model performance during lightweight fine-tuning. The results are presented in **Table A**. We observe that MPO can augment the efficacy of BERT-base models without extending the inference duration or expanding the inference parameter volume. Conversely, the adoption of LoRA increases the parameter volume at inference, which consequently lengthens the inference time.
**Table A.** The result of lightweight fine-tuning on GLUE by using MPO and LoRA.
| Datasets | RTE | MRPC | STS-B | CoLA | SST-2 | QNLI | QQP | MNLI | Avg. | # Train Params | # Inference Params |
| --------- | :--------: | :--------: | :-----------: | :---------: | :----------: | :---------: | :--------: | :---------: | :--------: | :--------------: | :------------------: |
| BERT-base | 70.5 | 86.5 | 86.6 | 54.2 | 92.0 | 91.2 | 91.0 | 84.2 | 82.0 | 110M | 110M |
| LoRA | 71.5 | 89.8 | 88.6 | 58.3 | 91.5 | 90.3 | 91.7 | 83.3 | 83.1 | 295K | 110M+295K |
| +MPO | **72.3** | **90.0** | **89.0** | **60.6** | **92.5** | **91.5** | **92.3** | **85.1** | **83.7** | 341M | 110M |
**Q3. Distillation results on larger models.**
**Reply:** Great comment! We have implemented OPDF on the GPT-2-760M, OPT-6.7B and LLAMA-7B models, with the corresponding teacher models of GPT-2-1.5B, OPT-13B and LLAMA-13B, respectively. We have reported the Rouge-L scores of these models on 5 instruction-following datasets, with the results displayed in **Table B**. We can observe that, after implementing OPDF, the efficiency of distillation on large models is improved across all datasets. This demonstrates that our method is also highly effective on larger models.
**Table B.** Distillation results on larger models with OPDF.
| Model | #Params | Method | Dolly | SelfInst | Vicuna | S-NI | UnNI | Avg. | # Train Params | # Inference Params |
| --------- | :-------: | :-------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------------: | :------------------: |
| **GPT-2** | 1.5B | Teacher | 27.6 | 14.3 | 16.3 | 27.6 | 31.8 | 23.5 | 1.5B | 1.5B |
| **GPT-2** | 760M | w/o KD | 25.4 | 12.4 | 16.1 | 21.5 | 24.0 | 19.9 | 760M | 760M |
| **GPT-2** | 760M | KD | 25.9 | 13.4 | 16.9 | 25.3 | 28.0 | 21.9 | 760M | 760M |
| **GPT-2** | 760M | KD+OPDF | **26.1** | **14.1** | **17.5** | **25.7** | **28.6** | **22.4** | 1.3B | 760M |
| **OPT** | 13B | Teacher | 29.2 | 18.4 | 17.8 | 30.4 | 36.1 | 26.4 | 13B | 13B |
| **OPT** | 6.7B | w/o KD | 27.6 | 16.4 | 17.8 | 30.3 | 28.6 | 24.1 | 6.7B | 6.7B |
| **OPT** | 6.7B | KD | 28.3 | 17.0 | 17.5 | 30.7 | 26.7 | 24.0 | 6.7B | 6.7B |
| **OPT** | 6.7B | KD+OPDF | **28.5** | **17.7** | **17.9** | **31.4** | **29.8** | **25.1** | 14B | 6.7B |
| **LLaMA** | 13B | Teacher | 29.7 | 23.4 | 19.4 | 35.8 | 38.5 | 29.4 | 13B | 13B |
| **LLaMA** | 7B | w/o KD | 26.3 | 20.8 | 17.5 | 32.4 | 35.8 | 26.6 | 7B | 7B |
| **LLaMA** | 7B | KD | 27.4 | 20.2 | 18.4 | 33.7 | 37.9 | 27.5 | 7B | 7B |
| **LLaMA** | 7B | KD+OPDF | **27.5** | **21.6** | **19.7** | **34.8** | **40.0** | **28.7** | 10B | 7B |
**Concluding remark:** We sincerely thank you for reviewing our paper and putting forward thoughtful comments/suggestions. We hope the above responses are helpful to clarify your questions. We will be happy to hear your feedback and look forward to addressing any additional questions. Your consideration of improving the rating of our paper will be much appreciated!
---
Rebuttal Comment 1.1:
Comment: I am confused about why the authors claimed that adding LoRA will increase the inference parameters. What I mean is that performing KD on LLMs is expensive. Reducing the number of trainable parameters might be a way to lower the cost of performing KD.
However, since the authors have presented the time and memory cost of the proposed method and it seems to be acceptable, I would like to raise my score by 1.
---
Rebuttal 2:
Title: Looking forward to your feedback
Comment: Dear Reviewer Qtmi,
We're keen to know if there are any remaining concerns that require attention or if there are additional discussions that should take place. Your insights are greatly appreciated as they contribute to the refinement of the paper. Looking forward to your feedback. Thank you.
Best regards,
The Authors
---
Rebuttal 3:
Title: Thank you for raising the score
Comment: Thank you for your positive response. We would like to clarify that during inference with LoRA, the parameters that need to be computed are $W+\Delta W$, where $W$ represents the weights of the pre-trained language model (PLM) and $\Delta W$ the additional parameters introduced by LoRA. This is why we mentioned in our reply that applying the LoRA method during inference increases the number of parameters, thereby increasing the inference time.
Lightweight fine-tuning (LoRA) and model compression (KD) represent two different tracks for deploying large PLM. While LoRA is cost-effective, it slightly increases the inference time due to the added $\Delta W$ parameters. On the other hand, KD results in a compressed model that reduces both inference time and computational overhead while increasing the training cost.
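A small numeric sketch of the $W + \Delta W$ computation discussed here (all shapes below are illustrative assumptions): applied on the fly, the low-rank update keeps extra parameters alongside $W$ at inference, although the product $BA$ can equivalently be folded into $W$ once ahead of time:

```python
import numpy as np

d, k, r = 64, 64, 8  # illustrative layer size and LoRA rank
rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))  # frozen pre-trained weight
B = rng.standard_normal((d, r))  # LoRA factors: delta_W = B @ A
A = rng.standard_normal((r, k))
x = rng.standard_normal(k)

# On-the-fly application: W x plus the low-rank path B (A x),
# i.e. extra parameters (and compute) live alongside W at inference.
y_on_the_fly = W @ x + B @ (A @ x)

# Folding the update into W once gives a single dense mat-vec.
y_merged = (W + B @ A) @ x

assert np.allclose(y_on_the_fly, y_merged)
```

Either way, the result is identical; the difference is purely where the `B @ A` product is paid for, which is the crux of the inference-parameter discussion above.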
Moreover, compressed models obtained through KD may have diverse applications on the edge. For instance, they are particularly suitable for deployment in resource-constrained environments, such as mobile devices and embedded systems.
Again, thank you very much for increasing the rating of our paper! | Rebuttal 1:
Rebuttal: Global Response:
Dear Reviewers:
We would like to thank you for your constructive comments, which are very helpful in improving our paper. We have posted point-to-point replies to each question/comment you raised. We have also listed three additional tables in the *one-page PDF rebuttal file*: "Training time and Memory Cost for OPDF", "Distillation results on various CNN architectures with OPDF" and "Train Params and Inference Params of CNN model". Please do feel free to let us know if you have any further questions.
We are also pleased that the reviewers recognized the novelty and versatility of our work. In particular, we thank the reviewers for recognizing the *preservation of the inference parameters* (Qtmi and fkms), *adequacy of the experiment* (Dajm), and *novelty* (XV5e) of our method.
Thank you very much.
Best regards,
The Authors of the Paper
Pdf: /pdf/fcf3e8518b2937b428e6800f42afc0fb7bc5cbd5.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding | Accept (poster) | Summary: The paper addresses the lost-in-the-middle effect, observed in the past for some LLMs, with a method called Multi-scale Positional Encoding (Ms-PoE), where position is encoded using a different scale for each attention head. More precisely, a re-scaling ratio r (e.g., 1.5) substitutes the position m of a token with m/r, and a different value of r is used for each attention head. This multi-scale view is evaluated experimentally, and the authors claim that it mitigates the 'lost-in-the-middle' effect.
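The head-wise re-scaling described in the summary can be sketched as follows; the 1.2-1.8 ratio range comes from the authors' replies below, while the linear spacing of ratios across heads is an illustrative assumption:

```python
import numpy as np

def head_ratios(num_heads, r_min=1.2, r_max=1.8):
    # One re-scaling ratio per attention head; linear spacing between
    # r_min and r_max is an illustrative choice, while the 1.2-1.8
    # defaults follow the values reported in the authors' replies.
    return np.linspace(r_min, r_max, num_heads)

def rescaled_positions(seq_len, ratio):
    # Ms-PoE's core operation: replace token position m with m / r
    # before the positional encoding is applied.
    return np.arange(seq_len) / ratio

ratios = head_ratios(4)
pos = rescaled_positions(8, ratios[0])
assert np.isclose(ratios[0], 1.2) and np.isclose(ratios[-1], 1.8)
assert np.isclose(pos[6], 6 / 1.2)
```

Dividing positions by r > 1 effectively compresses the positional range each head sees, and doing so at different scales per head is what makes the encoding "multi-scale".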
Strengths: -a new approach for positional encoding Multi-scale Positional Encoding (Ms-PoE)
-approach is plug-and-play (no additional fine-tuning needed)
Weaknesses: -The "lost-in-the-middle" effect is often assumed to impact all LLMs, but recent research indicates that it does not uniformly affect all LLMs, though the effect was observed on models such as Vicuna-7B.
See for instance https://arxiv.org/abs/2403.20262, which shows that not all LLMs display the lost-in-the-middle effect on a long-context benchmark.
-For evaluation, the ZeroSCROLLS benchmark was used, but the authors could have considered incorporating other benchmarks designed for long-context scenarios, such as "needle in the haystack" tasks. There are also many other relevant benchmarks available, including NarrativeQA, LongEval, LongBench, LongBench-Chat, Loogle, ∞Bench and Long Range Arena (LRA), among others.
=> on that aspect, the paper could have been stronger experimentally by evaluating the approach on more long-context benchmarks
Technical Quality: 3
Clarity: 3
Questions for Authors: -eq(2): how are Rmin & Rmax fixed ?
-fig 5: why only vicuna model evaluated, does Llama-2-7B-Chat also display a lost-in-the-middle effect ?
-tab.3 are the differences between begin/middle/end results really significant ? A statistical test (for instance one-tailed Welch’s t-test) would have been welcome here
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No 'limitations' section was provided
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer LVQT for supporting our work and providing constructive suggestions. To address Reviewer LVQT’s concerns, we provide point-wise responses below.
**[Q1: No Lost-in-the-Middle Effect]** Thank you for the insightful question. We have observed findings similar to those in [1], particularly with **LLaMA-2-7B-Chat**, which does not exhibit the lost-in-the-middle phenomenon but rather shows performance loss at the beginning. The results are reported in Table R1. Additionally, after applying Ms-PoE, consistent improvements are achieved, by an average of 1.5 accuracy points. This further demonstrates the effectiveness of our approach for enhancing the context utilization of LLM inference. We have added these results to the updated draft.
[1] ELITR-Bench: A Meeting Assistant Benchmark for Long-Context Language Models.
Table R1. Comparison results of LLaMA-2-7B-Chat with and without Ms-PoE on the MDQA tasks.
|Key-Document-Index|Beginning|Middle|End|Average|
|---|---|---|---|---|
|w.o. Ms-PoE|52.6|53.2|64.0|56.6|
|w. Ms-PoE|56.0|55.6|64.6|58.7|
**[Q2: More Benchmark Results]** Thanks for the suggestions. We conducted additional experiments on all 13 tasks from the LongBench benchmark for further evaluation, and the results are presented in Table R2. Our findings show that Ms-PoE is consistently effective across all 13 tasks, with improvements reaching up to 6.67 points and an average of 2.2. It is important to note that we fixed $R_{min}=1.2$ and $R_{max}=1.8$ for all tasks without tuning on the test set.
Table R2. Comparison results of LLaMA-2-7B-Chat with and without Ms-PoE on the LongBench benchmark. We use the same scaling ratios without further tuning.
| Methods | MultiFieldQA-en | LCC | GovReport | HotpotQA | Passage Count | Qasper | MultiNews | SAMSum | TriviaQA | PassageRetrieval-en | RepoBench-P | TREC | 2WikiMQA | Average |
| ---------- | --------------- | ----- | --------- | -------- | ------------- | ------ | --------- | ------ | -------- | ------------------- | ----------- | ----- | -------- | -------- |
| w.o Ms-PoE | 33.51 | 59.77 | 27.97 | 30.10 | 3.74 | 19.27 | 24.36 | 39.45 | 82.81 | 10.00 | 49.22 | 57.33 | 28.14 | 35.82 |
| w. Ms-PoE | 37.33 | 62.03 | 29.87 | 34.08 | 4.60 | 20.96 | 24.69 | 39.79 | 85.28 | 16.67 | 50.11 | 58.67 | 30.19 | 38.02 |
**[Q3: Determination of $R_{min}$ and $R_{max}$]** Thanks for the question. We determined $R_{min}$ and $R_{max}$ via an ablation study on the MDQA task. Specifically, we randomly selected 500 samples from the MDQA tasks as a validation set and examined the effect of different scaling ratios; the results are reported in Table 4, where $R_{min}=1.2$ and $R_{max}=1.8$ demonstrate superior performance. We then apply the same ratios to other downstream tasks without further adjustment.
**[Q5: Welch’s t-test of Table 3]** Good suggestion. We conducted a further statistical analysis of the results presented in Table 3; the outcome is shown in Table R3. As illustrated in Table R3, the baseline method shows a significant difference in performance when critical documents are positioned in the middle versus the beginning of the inputs, as well as between the middle and the end. However, the difference between the beginning and the end is not significant (p-value > 0.05). In contrast, when applying our Ms-PoE method, there is no significant difference in performance between the beginning, middle, and end positions, and it also achieves better average performance. These findings further confirm the effectiveness of our approach, and we have included these results in the updated manuscript.
Table R3. Welch’s t-test of the results when different ordering metrics are applied. The null hypothesis states that there is no difference between the means of the two results.
| p-value | Begin v.s. Middle | Begin v.s. End | Middle v.s. End |
| :----------------: | ------------------ | -------------- | --------------- |
| Baseline | 0.033 | 0.792 | 0.016 |
| Random | 0.002 | 0.740 | 0.001 |
| Sequential | 0.055 | 0.519 | 0.202 |
| Entropy | 0.194 | 0.877 | 0.145 |
| Position-Awareness | 0.391 | 0.562 | 0.151 |
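For reference, the Welch statistic behind p-values like those above can be computed directly; the sketch below is an illustrative reconstruction with toy numbers, not the accuracy samples used for Table R3:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic and Welch-Satterthwaite degrees of freedom
    for two samples with possibly unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Toy samples (not the paper's accuracy runs):
t, df = welch_t([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
# A two-sided p-value then comes from the t-distribution with `df`
# degrees of freedom, e.g. scipy.stats.t.sf(abs(t), df) * 2.
```

Unlike Student's t-test, the degrees of freedom here depend on the per-sample variances, which is why it is the appropriate choice when the two conditions may have different variance.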
---
Rebuttal Comment 1.1:
Title: Response to Reviewer LVQT
Comment: Dear Reviewer LVQT,
We sincerely thank you for your time and effort in reviewing our work. Your constructive and insightful feedback has been invaluable in helping us improve the quality of our manuscript. We have carefully addressed each of your comments and hope that we have successfully resolved all of your concerns. We are open to further discussion if you have any additional questions and look forward to your response.
Best regards,
Authors
---
Reply to Comment 1.1.1:
Title: We are keen to discuss further with you
Comment: Dear Reviewer LVQT,
We sincerely appreciate your valuable time and constructive feedback. We have carefully addressed each of your concerns. As the discussion period deadline approaches, we would be grateful if you could inform us if there are any further questions. Thank you!
Best,
Authors | Summary: This paper proposes a plug-and-play method named Ms-PoE to mitigate the lost-in-middle challenge of LLMs. Specifically, Ms-PoE leverages multi-scale position embeddings to enhance information awareness in different parts of the context. Without fine-tuning the model, Ms-PoE achieves an average accuracy gain of up to 3.8 on the Zero-SCROLLS benchmark over the original LLMs.
Strengths: 1. MS-PoE is a plug-and-play, simple yet efficient method, which is training-free and can achieve good performance.
2. The observation of "position-aware" attention heads is insightful, and applying re-scaling ratio dynamically to each attention head is reasonable.
3. I strongly agree with the authors' opinion that "the persistent difficulty faced by most LLMs in identifying relevant information situated in the middle of the context has not been adequately tackled."
Weaknesses: 1. It seems that the performance improvement on the MDQA and ZeroSCROLLS benchmarks is small compared to the "Self-Extend" method, which can be seen as a special type of "Multi-Scale Positional Encoding", i.e., "Single-Scale Positional Encoding".
2. Although this paper suggests that Ms-PoE is suitable for "Long Contexts," I believe it is suitable for any context situation. What exactly defines a "Long Context"? Is it a context length above 12K? Additionally, I have not come across any analysis of the impact of context length and how Ms-PoE can help the model understand the middle of long contexts. It might be beneficial to conduct experiments to determine whether Ms-PoE can enhance the model's understanding of the intermediate content of the context. This could involve analyzing different context lengths, such as inserting a specific length of context and evaluating the model's predictions.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What's the performance of MS-PoE(with LLama-2-7B-Chat as the backbone model) on LongBench-EN benchmark ?
2. Why chose 1.2 and 1.8 for R_{min} and R_{max} in Equation 2 ?
3. See Weakness 2
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See Question and Weakness above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate Reviewer xsrg for acknowledging our method is “efficient”, and the observation of attention heads is “insightful”. To address Reviewer xsrg’s concerns, we provide pointwise responses in the following.
**[Q1: Limited Improvements]** We respectfully disagree that our performance improvement is limited compared to “Self-Extend.” As demonstrated in Table 2, “Self-Extend” achieves similar performance to direct positional interpolation, or “Single-Scale Positional Encoding,” with gains of 1.20 and 29.76 on the MDQA and KV Retrieval tasks, respectively. In contrast, our Multi-Scale Positional Encoding achieves significantly higher gains of 3.92 and 43.72 on the same tasks, respectively, clearly surpassing the improvements achieved by “Self-Extend.”
**[Q2: Ablation Studies of Context Lengths]** That’s a good suggestion. We conducted additional evaluations of our method across varying input lengths by changing the number of documents from 3 to 15. In these experiments, the key document is consistently positioned in the middle of the input to assess the LLMs' ability to capture middle context information. As depicted in Table R1, when the input length is short, our approach shows negligible improvement compared to the Baseline, with a 0.2 accuracy gain. However, as we increase the number of documents, thereby lengthening the input, significant improvements are achieved, with gains up to 6.4 in accuracy.
Table R1: Results of the effectiveness of our approach across different input lengths. The relevant key documents are located in the middle of the inputs, and experiments are conducted with Vicuna-7B.
|Number-of-Docs|3|5|7|10|15|
|---|---|---|---|---|---|
|Baseline|69.2|63.8|62.8|57.4|52.8|
|Ours|69.4|64.4|67.0|63.0|59.2|
|Improvements|0.2|0.6|4.2|5.6|6.4|
**[Q3: Results on LongBench-EN Benchmark]** Thanks for the question. We conducted additional experiments on all 13 tasks from the LongBench-EN benchmark. Results are reported in Table R3. We can observe that Ms-PoE achieves consistent performance improvements without any fine-tuning.
Table R3. Comparison results of LLaMA-2-7B-Chat with and without Ms-PoE on the LongBench benchmark. We use the same scaling ratios without further tuning.
| Methods | MultiFieldQA-en | LCC | GovReport | HotpotQA | Passage Count | Qasper | MultiNews | SAMSum | TriviaQA | PassageRetrieval-en | RepoBench-P | TREC | 2WikiMQA | Average |
| ---------- | --------------- | ----- | --------- | -------- | ------------- | ------ | --------- | ------ | -------- | ------------------- | ----------- | ----- | -------- | -------- |
| w.o Ms-PoE | 33.51 | 59.77 | 27.97 | 30.10 | 3.74 | 19.27 | 24.36 | 39.45 | 82.81 | 10.00 | 49.22 | 57.33 | 28.14 | 35.82 |
| w. Ms-PoE | 37.33 | 62.03 | 29.87 | 34.08 | 4.60 | 20.96 | 24.69 | 39.79 | 85.28 | 16.67 | 50.11 | 58.67 | 30.19 | 38.02 |
**[Q4: Determination of $R_{min}$ and $R_{max}$]** Thanks for the question. We determine $R_{min}$ and $R_{max}$ via an ablation study on the MDQA task. Specifically, we randomly selected 500 samples from the MDQA tasks as the validation set and examined the effect of different scaling ratios; the results are reported in Table 4. We then apply the same ratios to other downstream tasks without further adjustment.
---
Rebuttal Comment 1.1:
Title: Respond to authors
Comment: Thanks for your response.
Although Ms-PoE is an efficient and plug-and-play method, it may require a parameter-selection process, and the hyper-parameters may vary for different situations.
I want to understand the impact of R_min and R_max. Do these two values significantly affect the final results?
---
Reply to Comment 1.1.1:
Title: Respond to further questions
Comment: Thanks for the good question. We conducted an ablation study to examine the impact of $R_{min}$ and $R_{max}$ on the final results. The findings are presented in Table 4 and discussed in Section 4.3 A2. Our study indicates that there is a sweet spot for the scaling ratios that enhances performance ($R_{min} = 1.2$, $R_{max} = 1.8$). However, when the scaling ratios are either too small (e.g., 0.5) or too large (e.g., greater than 2), performance tends to degrade. Additionally, this sweet spot shows good generalization, as verified on downstream tasks such as key-value retrieval, LongBench, and ZeroSCROLLS. | Summary: This paper addresses the 'lost-in-the-middle' issue in large language models (LLMs) by introducing Multi-scale Positional Encoding (Ms-PoE). This approach enhances LLMs' ability to handle relevant information in the middle of the context without fine-tuning or added overhead. Ms-PoE uses position index rescaling and distinct scaling ratios for different attention heads to maintain essential knowledge. Experiments show Ms-PoE improves LLM accuracy, achieving up to a 3.8% gain on the Zero-SCROLLS benchmark.
Strengths: 1. This paper is well-written and very easy to follow.
2. The analysis of the position embedding sensitivity of different attention heads is very interesting and intriguing.
Weaknesses: 1. The improvement is actually limited, despite the method’s efficiency and ease of use.
2. The heuristic of the scaling ratio allocation process is a little arbitrary. There could be better solutions than just using an arithmetic sequence to achieve that. It is not strictly proved that a larger S_P means that the model should have a smaller rescaling ratio and vice versa. It is possible that different heads could use the same scaling ratio, or the other way around.
3. Moreover, there is no evidence that the rescaled position embedding could make the attention heads like the “Top in Figure 4” better at catching relevant information except for an empirical marginal task performance improvement and the comparison with other strategies in Section 4.3. An attention map of the attention heads after the rescaling could better strengthen this main claim of the paper. The authors could also partially show this by examining the S_P variation before and after the rescaling.
4. In Section 3.1, directly adjusting the hyperparameter and seeing the results on the test set is not very appropriate, since it means test-set information is leaked. The authors should analyze the scaling ratio on the validation set and see if it is consistent with that on the test set. Moreover, the selection of the hyperparameters R_min and R_max is also probably acquired from test-set performance, as reflected by the ablation experiments in Table 4.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. L212-215: “With a small scaling ratio LLMs tend to focus more on the most recent part of the context while with a large ratio, LLMs favor the beginning part” Are there any supporting results for this claim?
2. Did you choose the hyperparameters of R_min and R_max directly from their performance on the test set?
3. I’m happy to raise the scores if all the concerns are well addressed.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer qA6u for acknowledging our work as “interesting and intriguing”. We provide pointwise responses in the following.
**[Q1: Limited Improvements]** We respectfully disagree with the claim that our improvements are limited. Our method offers a plug-and-play solution to enhance current open-source LLMs without any fine-tuning. The enhancements are consistent across multiple benchmarks, including a 3.8-point average improvement on ZeroSCROLLS (Table 1), a 3.92-point increase in Multi-Document QA tasks, and a 43.72-point gain in KV Retrieval tasks (Table 2). Additionally, as shown in Table R1 in the uploaded PDF, our evaluations on the LongBench benchmark further demonstrate a consistent improvement of up to 6.67 points. Therefore, we believe our improvements are both consistent and significant.
**[Q2: Studies of Scaling Ratio Allocation]** Thank you for the insightful comments. In our experiments, we chose the heuristic linear (arithmetic sequence) rescaling ratio due to its simplicity and input-independent nature. We could also rescale each attention head using other strategies, such as directly via the position-aware score ($S_P$), but this introduces additional computational costs: with a score-based scaling ratio, each input would have distinct rescaling ratio values, which would necessitate recalculating the $\theta$ value in RoPE on the fly. Additionally, we can allow different heads to share the same scaling ratio.
To further address Reviewer qA6u’s concerns, we explored multiple scaling ratio allocation strategies and reported the results in Table R2. For the exponential and cosine strategies, we aimed to examine whether non-linear allocation strategies perform well. For the stepwise solution, we allowed different attention heads to share the same scaling ratio, while the position-aware strategy rescales each head directly based on its raw $S_P$ score. We observed that the position-aware strategy achieves similar results to the linear strategy but requires extra computational overhead. Similarly, allowing different attention heads to share the same scaling ratio yields comparable results. Therefore, based on the superior performance and computational efficiency, we chose the linear assignment method.
**Details**: We compare several strategies in our experiments, including (i) Exponential: $r_i = 1.8 - (1.8-1.2) \cdot 0.9^{i}$, where $r_i$ is the $i$-th scaling ratio and 0.9 is the multiplicative decay factor; (ii) Cosine: $r_i = 1.8 - (1.8-1.2)\cos(\frac{i}{n_h-1}\phi)$, where $n_h$ is the number of attention heads; (iii) Stepwise: $r_i = 1.2 + \frac{1.8-1.2}{3} \cdot \mathrm{round}(\frac{4i}{n_h})$; (iv) Position-Aware score: $r = 1.2 + \frac{(1.8-1.2)\cdot(S_P-\min(S_P))}{\max(S_P)-\min(S_P)}$; as well as the Linear strategy in our current implementation.
Table R2. Results of different scaling ratio strategies on the MDQA task with Vicuna-7B.
|Key-Document-Index|1|3|5|7|10|Average|
|---|---|---|---|---|---|---|
|Baseline|64.0|61.0|57.4|58.4|64.8|61.12|
|Ms-PoE (Linear)|65.6|64.2|63.0|65.2|67.2|65.04|
|Exponential|64.0|60.0|63.6 | 63.6 | 63.6 |62.96|
|Cosine|66.4| 63.0| 63.2 | 65.6 |66.4 |64.92|
|Stepwise| 62.6|63.6 |63.6 |67.6 |68.0 | 64.88|
|Position-Aware score|65.2|64.0|63.6 | 65.6 | 66.3 |64.94|
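As a concrete sketch of these allocation strategies, the following minimal Python reconstruction (ours, not the paper's code) generates per-head ratios from the formulas above; the defaults $R_{min}=1.2$ and $R_{max}=1.8$ and the head ordering by position-awareness score follow the paper's setup:

```python
# Illustrative reconstruction of head-wise scaling-ratio strategies.
# `i` indexes attention heads, assumed sorted by position-awareness score.

def linear_ratios(n_heads, r_min=1.2, r_max=1.8):
    """Linear (arithmetic-sequence) strategy used in the main setup."""
    step = (r_max - r_min) / (n_heads - 1)
    return [r_min + step * i for i in range(n_heads)]

def exponential_ratios(n_heads, r_min=1.2, r_max=1.8, decay=0.9):
    """Exponential strategy: r_i = r_max - (r_max - r_min) * decay**i."""
    return [r_max - (r_max - r_min) * decay ** i for i in range(n_heads)]

def stepwise_ratios(n_heads, r_min=1.2, r_max=1.8, n_groups=4):
    """Stepwise strategy: heads in the same group share one ratio.
    Note: Python's round() uses banker's rounding; the paper's may differ."""
    step = (r_max - r_min) / (n_groups - 1)
    return [r_min + step * round(i * n_groups / n_heads) for i in range(n_heads)]
```

Because the linear and stepwise schedules are input-independent, the RoPE $\theta$ values can be precomputed once per head, which is the efficiency argument made above.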
**[Q3: Visualization of Attention Pattern before and after Rescaling]** Thank you for the suggestion. We visualized the attention patterns of different attention heads and reported the variation of $S_P$ before and after rescaling. The results are demonstrated in Figure R1 in the uploaded PDF. We found that the rescaling step effectively enhances the context utilization of LLMs, making some non-"position-aware" heads focus on critical information. Quantitatively, $S_P$ consistently increases, with an average improvement from 1.79% to 1.93%.
**[Q4: Determination of $R_{min}$ and $R_{max}$]** Thanks for pointing it out. We’d like to clarify that we did not search for the best hyperparameters $R_{min}$ and $R_{max}$ on the test set. Instead, we randomly selected 500 samples from the MDQA tasks as the validation set and conducted ablation studies of different scaling ratios on this validation set. We found that selecting a ratio between 1.2 and 1.8 significantly boosts performance. Then, we apply the same ratios to all other tasks without further adjustment.
**[Q5: Scaling Ratio Affects the Favored Zone]** Thanks for the question. We report the raw accuracy results corresponding to Figure 3 in Table R3, where we position the key documents at the beginning, middle, or end of the sequences. From Table R3, we can observe that “changing the scaling ratio also affects the favored zone of LLMs. With a small scaling ratio (e.g., 0.5), LLMs tend to focus more on the most recent part of the context, while with a large ratio (e.g., 2.5), LLMs favor the beginning part”. We have included this table in the revised version for a clearer understanding of the content discussed in Lines 212-215.
Table R3: Accuracy results for the MDQA task when key documents are positioned at various locations within the sequences. Different rescaling ratios are applied, with all attention heads sharing the same rescaling ratio.
|Llama-2-7B-Chat | Beginning | Middle| End|
|---|---|---|---|
|0.5|36.6|41.8|59.4|
|1|52.6|53.2|64.0|
|1.5|59.0|58.4|59.8|
|2|59.4|59.4|59.8|
|2.5|62.4|55.6|56.0|
|Vicuna-7B | Beginning | Middle| End|
|---|---|---|---|
|0.5|56.0|51.0|68.0|
|1|64.0|57.4|64.8|
|1.5|65.2|60.0|64.0|
|2|61.5|59.0|62.5|
|2.5|59.5|57.5|57.0|
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses. They resolved most of my concerns, and I've raised my scores accordingly. Wishing you all the best!
---
Reply to Comment 1.1.1:
Title: Thanks for the responses
Comment: Thanks for the responses. We are glad that our responses resolved your concerns and we will include additional results in the updated draft. Also, many thanks for raising the score. | null | null | Rebuttal 1:
Rebuttal: We thank Reviewer qA6u, xsrg, and LVQT for their constructive suggestions and valuable questions. Additional supplementary materials are provided in the PDF, including:
- The attention patterns before and after rescaling **[Reviewer qA6u]**
- More results on LongBench. **[Reviewer qA6u, xsrg, LVQT]**
Pdf: /pdf/5bcd8f402bf726aa6ce8ebe9b3d1b5aa622c576f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention | Accept (spotlight) | Summary: This paper presents a sparse calculation technique for the attention mechanism in long-context large language models during the pre-filling stage. Specifically, the technique builds on the authors' observation of three patterns in the attention map: the A-shape pattern, the Vertical-Slash pattern, and the Block-Sparse pattern. In the proposed method, the authors first determine the optimal pattern for each attention head offline and then dynamically decide the hyperparameters for each pattern (e.g., sparse indices) on-the-fly. It is worth noting that the A-shape pattern is more regular in terms of the distribution of non-zero indices compared to the Vertical-Slash and Block-Sparse patterns. To provide a low-cost estimation of the non-zero indices in the Vertical-Slash and Block-Sparse patterns, the authors predict these indices using the matrix multiplication results between the last query vector and the key matrix and the matrix multiplication results between a mean-pooled query matrix and the key matrix, respectively. Results on three different pre-trained LLMs and four datasets indicate that the proposed method effectively maintains information from long prompts while reducing pre-filling latency compared to current state-of-the-art methods.
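To make the index-estimation idea in this summary concrete, here is a rough pure-Python sketch (our own illustrative reconstruction with toy dimensions and names, not the authors' kernels): vertical indices are scored with the last query vector, and candidate blocks are scored with mean-pooled queries against mean-pooled keys.

```python
# Illustrative reconstruction of the low-cost sparse-index estimation
# described above; function names, shapes, and data are ours.

def topk_vertical_indices(last_q, K, k):
    """Score every key position with the last query vector; the highest-
    scoring positions approximate the 'vertical' columns of the pattern."""
    scores = [sum(q * x for q, x in zip(last_q, key)) for key in K]
    order = sorted(range(len(K)), key=lambda i: scores[i], reverse=True)
    return sorted(order[:k])

def topk_blocks(Q, K, block, k):
    """Mean-pool queries and keys per block, score causal block pairs,
    and keep the top-k blocks for the block-sparse pattern."""
    def pool(M):
        return [[sum(col) / block for col in zip(*M[i:i + block])]
                for i in range(0, len(M), block)]
    Qp, Kp = pool(Q), pool(K)
    pairs = [(qi, ki) for qi in range(len(Qp)) for ki in range(len(Kp)) if ki <= qi]
    scored = sorted(pairs,
                    key=lambda p: sum(a * b for a, b in zip(Qp[p[0]], Kp[p[1]])),
                    reverse=True)
    return scored[:k]
```

In the actual method these estimates would drive custom sparse-attention kernels; the point of the sketch is only that both estimators cost far less than a full attention matrix.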
Strengths: 1. Impressive results: Based on the reported experimental results, the proposed method can reduce the pre-filling latency for a 1M context by ten times without compromising accuracy.
2. Well-motivated: I appreciate the analysis and high-quality figures in Section 2, which explicitly illustrate the key takeaways for readers and effectively motivate the proposed method.
3. Comprehensive summary in the related works section.
Weaknesses: 1. How to ensure that the observations on the three LLMs in this paper are generalizable to different LLMs? Since the entire method is built on the observation of the attention patterns in these three LLMs, the generalizability of this observation is crucial for the robustness of this work. For example, the block-sparse pattern is the most dynamic among the three. What if there are even more dynamic attention maps in other LLMs with specific inputs? How is the search space determined, and how general is it? The offline Kernel-Aware Optimal Sparse Pattern Search determines the pattern for each head offline, and the pattern does not change during inference regardless of the input. Is it always true that the attention pattern for a specific head remains nearly the same?
2. Lack of evaluation on larger models: For longer contexts, larger models seem to be a better option to ensure quality. However, the models evaluated in this paper are relatively small (<10B parameters). Thus, it is unclear whether the improvements still hold for larger models.
3. Lack of discussion on usage for the generation/decoding stage: Pre-filling is important because it determines the first token latency, but generation/decoding is also important because it determines the throughput. It would be beneficial to add some discussion on whether the observations and the proposed method are still applicable for the generation/decoding stage. For example, while the A-shape pattern might still exist, the sparse indices approximation in the vertical-slash pattern might not work for generation/decoding.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the dataset of input data used for the analysis in Section 2.2? How was it chosen?
2. In Figure 3(c), why does the block-sparse pattern perform so poorly in attention compared to the A-shape and Vertical-Slash patterns? Based on my understanding, the block-sparse pattern is a more general and dynamic case for the A-shape and Vertical-Slash patterns, as shown in Figure 4. Could the block size be an important factor causing this issue?
3. In Figure 3(c) and the corresponding description, I suggest explicitly mentioning which Top-K method is used. Based on my understanding, the Top-K method involves selecting the top-K tokens for all heads, while the other methods select the top-K tokens for each head. Presenting them together may confuse readers into thinking that Top-K means the top-K for each head.
4. What does "real FLOPs" mean in Line 137? FLOPs itself is a "conceptual estimation" compared to real measured latency. I believe the authors may want to refer to the FLOPs after discarding the zero values in the attention pattern. If so, please be more specific.
5. The LLaMA-3-8B-Instruct-262k is linked to LLaMA-3-70B-Instruct-262k in Line 176. Please clarify which model is used here.
6. There are minor consistency issues, such as StreamLLLM and StreamingLLM.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discussed the limitation and social impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. _"...generalizable... built on the observation of the attention patterns..."_
Thank you for your question. We address it from two angles: the generalization of MInference and the relative stability of dynamic sparse attention patterns across different examples.
For the generalization of MInference,
- To demonstrate the generalizability of MInference, we tested it on most open-source long-context LLMs, including LLaMA-3-8B/70B-1M, Yi-9B-200K, GLM-4-9B-1M, Phi-3-mini-128K, and Qwen2-7B-128K (**see general response PDF**). MInference consistently achieves good performance and acceleration across models with different pre-training data, training pipelines, RoPE structures, extended long-context methods, and sizes.
- Additionally, we observed that three sparse attention patterns, especially the vertical and slash patterns, are not only present in GPT-like LLMs but also in BERT[2], T5 (both Encoder and Decoder), and MLLM[3]. We also found that "induction heads" exhibit a pattern similar to the "vertical and slash" pattern (see [4]).
- By design, MInference accounts for different dynamic sparse attention patterns, which lends it a certain degree of generalization capability.
[2] SparseBERT: Rethinking the Importance Analysis in Self-Attention, ICML 2021.
[3] LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Inference, 2024.
[4] A Mathematical Framework for Transformer Circuits, 2021.
For the relative stability of dynamic sparse attention patterns across different examples,
- Visualization: Fig.3(a) shows the visualization of the sparse attention pattern distribution of the same head across different tasks and examples, demonstrating good consistency.
- Attention Recall: Fig.3(c) shows the average recall rate of different tasks and examples within the searched pattern as the compression rate changes. The searched pattern achieves a higher recall rate across different tasks.
- Downstream tasks performance: Using a configuration searched from a single example, we achieve performance close to full attention across different scenarios and benchmarks, further indicating the stability of dynamic sparse attention patterns across various tasks and examples.
2. _"What if there are even more dynamic attention maps in other LLMs with specific inputs"_
Given the extreme sparsity of attention in long-context scenarios, where the significant attention weights are concentrated in a few blocks, even if the graph differs from the vertical and slash pattern and the A-shape, it can still be covered by the block-sparse pattern.
Additionally, our experiments in Question 1 demonstrate the ubiquity of this dynamic sparse pattern in current long-context LLMs and the effectiveness of MInference.
3. _"How is the search space determined, and how general is it?"_
The search space is determined by a specific sparsity rate and adjusted according to the actual FLOPs required for different patterns in the kernel. Based on testing, the search space obtained is quite consistent across different tasks and examples.
4. _"Lack of evaluation on larger models"_
Thank you for your suggestion. We have included results for **LLaMA-3-70B-1M** in the supplementary PDF. MInference achieves performance close to or even better than full attention, especially compared to baselines.
|Methods | En.Sum | En.QA | En.MC | En.Dia | Zh.QA | Code.Debug | Math.Find | Retr.PassKey | Retr.Number | Retr.KV | Avg. |
|-|-|-|-|-|-|-|-|-|-|-|-|
|***LLaMA-3-70B-1M*** | 20.7 | 10.3 | 84.2 | 9.5 | 14.0 | 33.2 | 61.7 | 97.0 | 100.0 | 34.0 | 46.5 |
|StreamingLLM | 20.5 | 8.5 | 52.0 | **10.0** | 12.6 | 27.4 | 61.1 | 14.0 | 10.0 | 0.0 | 21.6|
|InfLLM | **24.1** | 8.1 | 57.0 | **10.0** | 12.9 | 27.4 | 52.3 | **100.0** | **100.0** | 0.0 | 39.2|
|**MInference** | 20.6 | **10.1** | **83.4** | **10.0** | **14.1** | **34.1** | **61.9** | **100.0** | **100.0** | **39.0** | **47.3** |
Table 1. Performance of different methods with different base models on InfiniteBench.
5. _"Decoding"_
Thank you for your suggestion. However, extending to decoding would require significant system work for CPU offloading, which we have earmarked for future work. Nonetheless, our preliminary experiments indicate that the extrapolation generality of these patterns is also very good, as shown below:
|Methods|En.Sum|En.QA|En.MC|Zh.QA|Code.Debug|Math.Find|Retr.Number|
|-|-|-|-|-|-|-|-|
|GLM-4-9B-1M|28.3|9.7|68.6|12.1|29.4|38.9|100.0|
|MInference|28.8|9.6|68.6|12.0|30.7|39.1|100.0|
|MInference in prefilling and decoding|28.0|9.4|67.3|11.9|31.7|39.1|100.0|
Table 3. Performance of different methods on InfiniteBench using GLM-4-9B-1M.
6. _"Dataset used in Sec 2.2"_
In Section 2.2, we used data from InfiniteBench to create Fig.3, including summarization, QA, Math, and retrieval tasks. Both (b) and (c) are results averaged after randomly selecting 10 examples from each subset.
7. _"Why does the block-sparse pattern perform so poorly?"_
This is mainly because different patterns use different block sizes in the kernel. For example, the Vertical and Slash pattern can use a 1x64 block to process vertical lines instead of 64x64 in block-sparse. If this head pattern is processed using block-sparse, it would result in significant computational waste, leading to a lower attention recall at the same sparsity rate.
8. _"I suggest explicitly mentioning which Top-K method is used"_
We apologize for any confusion caused by our writing. Yes, your understanding is correct; the Top-K mentioned in Fig.3(c) refers to token-level Top-K. We will add all corresponding granularities of TopK and rewrite Section 2.2 for better comprehension in the next version.
9. _"Real FLOPs"_
Real FLOPs refer to the FLOP counts in the kernel after excluding computations on zero values. Thank you for your suggestion; we will clarify this meaning in the next version.
10. _Error link and typos_
Thank you for your detailed review. We will correct these issues in the next version of our paper.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal!
Comment: Thank you to the authors for the detailed rebuttal!
I truly appreciate your efforts in putting everything together within such a short period.
I have raised my score to 7.
One minor issue:
For this statement: "This is mainly because different patterns use different block sizes in the kernel. For example, the Vertical and Slash pattern can use a 1x64 block to process vertical lines instead of 64x64 in block-sparse. If this head pattern is processed using block-sparse, it would result in significant computational waste, leading to a lower attention recall at the same sparsity rate."
--> I understand that the Slash pattern’s block size is smaller than that of block-sparse, i.e., 1x64 vs. 64x64. However, the logical connection between "significant computational waste" and "a lower attention recall at the same sparsity rate" is not entirely clear to me. Specifically, I believe that "significant computational waste" would lead to lower efficiency rather than lower recall.
---
Reply to Comment 1.1.1:
Title: Response by Authors
Comment: Thank you again for recognizing our work and providing very helpful comments.
1. _"...'significant computational waste' would lead to lower efficiency rather than lower recall..."_
Yes, you are correct. However, in Fig. 3(c), we defined the horizontal axis as Dense FLOPs / FLOPs in Kernel, which represents the sparsity rate within the kernel. We intended to convey that larger block sizes lead to computational waste, resulting in lower effective recall at the same sparsity rate. | Summary: A key challenge for LLM inference with processing long context lengths is time-to-first token for long prompts. This paper introduces a sparse attention method designed to accelerate prefill with long context lengths. They utilize a strategy that incorporates three different types of sparsity (A-shape, vertical-slash, and block-sparse). They calibrate for which sparsity pattern to apply to different heads, thereby adapting to the varying sparsity patterns across LLM heads. Their method is training-free and obtains significant latency gains in terms of time to first token.
Strengths: - Time to first token is a significant problem that is under-addressed in existing KV cache compression works, as many applications have long input prompts and short generation lengths
- Their approach is able to adapt dynamically to different inputs for certain attention patterns (which have different sparsity patterns), as well as to handle differing behaviors across attention heads
- They provide efficient online kernel implementations for each sparsity pattern (both for constructing the sparsity pattern for the two dynamic patterns as well as for sparse computation)
- They observe significant prefill speedups attained with minimal accuracy degradation
- They provide detailed evaluation across a range of long context tasks, as well as ablation for each type of sparsity pattern (and additional analysis showing the distribution of sparsity patterns across attention heads, etc.)
Weaknesses: - Their approach is heuristic-based (as it is based on fixed types of attention patterns), which may not generalize to new models that are released (some of these patterns could be specific to models that employ particular positional encodings or other model architecture features, which may limit generalizability)
- The offline kernel search assumes that each head will play the same role regardless of the target task (or alternatively that calibration data will always be available for the target task in order to pre-determine which general attention pattern to use and what budget to allocate to that head)
- The writing in some sections is poor (e.g., end of Section 2)
Technical Quality: 3
Clarity: 2
Questions for Authors: - Do you have any analysis that shows that the attention head patterns are the same regardless of the input task (or that their search converges to the same solution for the same head across different tasks)?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. _"...which may not generalize..."_
- To demonstrate the generalizability of MInference, we tested it on most open-source long-context LLMs, including LLaMA-3-8B/70B-1M, Yi-9B-200K, GLM-4-9B-1M, Phi-3-mini-128K, and Qwen2-7B-128K (**see general response PDF**). MInference consistently achieves good performance and acceleration across models with different pre-training data, training pipelines, RoPE structures, extended long-context methods, and sizes.
- Additionally, we observed that three sparse attention patterns, especially the vertical and slash patterns, are not only present in GPT-like LLMs but also in BERT[2], T5 (both Encoder and Decoder), and MLLM[3]. We also found that "induction heads" exhibit a pattern similar to the "vertical and slash" pattern (see [4]).
- By design, MInference accounts for different dynamic sparse attention patterns, which lends it a certain degree of generalization capability.
[2] SparseBERT: Rethinking the Importance Analysis in Self-Attention, ICML 2021.
[3] LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Inference, 2024.
[4] A Mathematical Framework for Transformer Circuits, 2021.
2. _"... assumes that each head will play the same role regardless of the target task..."_
Indeed, based on our experiments and observations, these dynamic sparse patterns are relatively fixed across different tasks and examples. This is corroborated by our experimental results, where a configuration searched from a single example can achieve performance close to full attention across various scenarios and benchmarks.
3. _"The writing in some sections is poor (e.g., end of Section 2)"_
We apologize for any confusion caused by our writing and will rewrite the relevant content, particularly Section 2.2, to improve readability.
4. _"...analysis that shows that the attention head patterns are the same regardless of the input task..."_
Thank you for your suggestion. We explain the relative stability of head-level dynamic sparse attention patterns from several perspectives:
- Visualization: Fig.3(a) shows the visualization of the sparse attention pattern distribution of the same head across different tasks and examples, demonstrating good consistency.
- Attention Recall: Fig.3(c) shows the average recall rate of different tasks and examples within the searched pattern as the compression rate changes. The searched pattern achieves a higher recall rate across different tasks.
- Downstream tasks performance: Using a configuration searched from a single example, we achieve performance close to full attention across different scenarios and benchmarks, further indicating the stability of dynamic sparse attention patterns across various tasks and examples.
We will rewrite the paper to highlight this point and thank you again for your helpful and insightful comments.
|**RULER** | Effective | 4K | 8K | 16K | 32K | 64K | 128K | Avg. |
|-|-|-|-|-|-|-|-|-|
|*GLM-4-9B-1M* | 64K | 93.8 | 91.6 | 89.3 | 87.4 | 85.2 | 80.8 | 88.0 |
|StreamingLLM | 4K | 93.8 | 66.9 | 58.5 | 51.4 | 45.9 | 39.1 | 59.3 |
|InfLLM | 8K | **94.7** | 89.5 | 76.4 | 66.5 | 56.8 | 53.5 | 72.9 |
|**MInference** | 64K | 94.6 | **93.1** | **91.0** | **89.6** | **85.5** | **84.0** | **89.6**|
Table 2(a). Performance of different methods on RULER.
---
Rebuttal Comment 1.1:
Title: Rebuttal Response
Comment: I appreciate the authors' detailed responses addressing my comments. Their response sufficiently justifies the claims that their approach generalizes to a range of LLMs, as well as that particular heads play a similar role regardless of the task. I will therefore keep my initial score. | Summary: Targeting the time-consuming prefilling stage for long contexts, this paper proposes an efficient sparse attention mechanism for the prefilling stage. It is based on observations of common attention patterns. The proposed method can be integrated into most existing LLMs, such as LLaMA-3 and Yi-9B. It achieves performance comparable to the original model while significantly reducing computation time.
Strengths: 1: The motivation is clear and the paper is easy to understand.
2: The proposed method seems simple yet effective.
3: The experiments are comprehensive.
Weaknesses: 1: It’s tricky to claim that “we found the three patterns”. Those patterns were found many years ago, dating back to when BERT / Transformer models were first proposed. Even the idea of leveraging such attention sparsity has been invented many times for different Transformer-based models.
Some other major concerns can be found in the "Question" section.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Is the slash-vertical pattern distinguishable from lambda-shaped patterns? By my understanding, lambda-shaped patterns are just a special case of the slash-vertical pattern. Perhaps if you replace all lambda-shaped patterns with the slash-vertical pattern, it would still work well.
2. Could you please do a comparison between the proposed MInference and H2O[1]? From my understanding, without considering the real efficiency, it’s hard for a method relying on attention scores/similarity to obtain significantly better performance than H2O.
3. From my own experience, when the tasks become more challenging, the performance of efficient attention methods might show quite different effectiveness. For example, many long context methods cannot pass the passkey retrieval test with a 100-digit passkey, while with a 5-digit passkey, most methods work perfectly. Could you please construct experiments on more challenging tasks? Especially on those most recently proposed ones, for example: [2][3].
4. Also, Llama-3 is much more robust to perturbation and error accumulation. Could you please construct experiments with some weaker LLMs, such as Mistralv0.2, Llama2, and gemma-1.1?
References:
[1] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models
[2] KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long Context Capable Approaches
[3] BABILong: Testing the Limits of LLMs with Long Context Reasoning-in-a-Haystack
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review, and we apologize for any confusion caused.
1. _"Claim that 'we found the three patterns'"_
Thank you for your critique. We will revise the wording accordingly. Indeed, we discussed the importance of sparse attention in related works and how it inspired our research. We want to emphasize our understanding of the dynamic nature of these sparse patterns, which led us to propose the training-free MInference method.
2. _"Lambda is part of vs?"_
Yes, the A-shape pattern is a specific type of "vertical and slash" pattern. However, the A-shape head exhibits a more fixed sparsity pattern across different tasks and examples. Due to its static nature, it does not require online sparse index building and allows for more extreme sparse kernel optimization, resulting in better inference acceleration. We differentiate these three patterns based on kernel speedup and spatial locality.
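The relationship described above can be sketched in a toy example (hypothetical helper functions for illustration only, not MInference's actual Triton/CUDA kernels): an A-shape mask is a vertical-and-slash mask whose vertical columns are fixed to the initial "sink" tokens and whose slashes are fixed to a local window, which is why it needs no online index building.

```python
def vertical_slash_mask(n, vertical_cols, slash_offsets):
    # Toy sketch: causal boolean mask keeping selected "vertical" columns
    # (attended at every query position) and "slash" diagonals (fixed
    # offsets behind each query position).
    return [[j <= i and (j in vertical_cols or (i - j) in slash_offsets)
             for j in range(n)] for i in range(n)]

def a_shape_mask(n, sink, window):
    # A-shape as a *static* special case: first `sink` columns plus a
    # local window of diagonals -- no per-input index construction.
    return vertical_slash_mask(n, set(range(sink)), set(range(window)))

# 8x8 example: 2 sink tokens, local window of 3 diagonals
m = a_shape_mask(8, 2, 3)
```

Because the index sets are known in advance for A-shape heads, the kernel can skip the dynamic sparse-index-building step entirely, which is the acceleration distinction the rebuttal draws.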
3. _"Compare with H2O"_
In fact, MInference and H2O optimize two different problems in long-context LLM inference: prefilling-stage latency and KV cache storage, respectively. These methods are orthogonal. Furthermore, we applied MInference with SnapKV in our paper (Table 5 in the paper), which is a SoTA KV cache compression method superior to H2O. MInference can be used in conjunction with KV cache compression methods to achieve even better performance.
4. _"More challenging tasks"_
Thank you for your suggestion. However, I would like to clarify that we have tested on very challenging long-context benchmarks, including RULER, InfiniteBench, Needle In A Haystack, and Language Model, with lengths ranging from 128k to 1M tokens.
- RULER includes subtasks such as Multi-Needle and Multi-hop Tracing and is one of the most challenging long-context benchmarks, sharing similar tasks with BABILong and showing a similar benchmark ranking.
- We have additionally conducted evaluations on the KV cache compression benchmark[5], including LongBench (refer to Table 2(b) below) and Needle In A Haystack (see Figure 8 in the paper and Figure 1 in the general response PDF), showing that MInference does not negatively impact performance.
- InfiniteBench includes many challenging long-context tasks. For example, KV retrieval task requires LLMs to recall a 36-bit value from several random 36-bit Key-Value pairs by retrieving the value corresponding to a random 36-bit key, which is not as simple as the 5-digit passkey task mentioned in the comments.
Overall, our tests have included similar or even more challenging long-context tasks, effectively reflecting the minimal impact of MInference on the capabilities of long-context LLMs.
|**LongBench** | SingleDoc | MultiDoc | Summ. | FewShot | Synth. | Code | Avg. |
|-|-|-|-|-|-|-|-|
|*LLaMA-3-8B-262K* | 33.5 | 28.3 | 29.4 | 66.9 | 43.0 | 42.4 | 40.6 |
|StreamingLLM | 29.7 | 23.7 | 28.8 | 65.8 | 19.3 | **42.8** | 35.0 |
|InfLLM | 31.6 | 25.8 | 29.1 | 66.3 | 36.3 | 41.8 | 38.5 |
|**MInference** | **34.0** | **28.0** | **29.7** | **66.8** | **42.8** | 42.2 | **40.6**|
Table 2(b). Performance of different methods on LongBench.
[5] KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long Context Capable Approaches.
5. _"Weaker models"_
Besides testing on the powerful LLaMA-3-8B-1M, we also conducted tests on the Yi-9B-200K model, which has an effective context size of only 8k in RULER, showing that MInference can achieve performance close to Full Attention without changing the effective context size. Moreover, we have supplemented our results with GLM-4-9B-1M, Phi-3-mini-128K, and Qwen2-7B-128K (**see general response PDF**). Notably, the latter two models have weaker long-context capabilities, with effective context sizes of 4k and 8k in RULER, respectively. They show a significant performance drop in the Needle In A Haystack task when exceeding 100K tokens, but using MInference, they can achieve comparable or even better performance. We will include these results in future versions. Additionally, the weak LLMs mentioned by the reviewer have context windows smaller than 32K and cannot be tested on benchmarks like InfiniteBench. We will adapt more long-context LLMs in the future.
---
Rebuttal Comment 1.1:
Comment: Thank you so much for the response. Most of my concerns are resolved. I raised my score to 6.
However, after carefully checking all your rebuttals, I have more questions about the effectiveness of MInference:
1: Most long-context tasks are simple evidence-QA tasks, which have much lower information density compared to standard short-context tasks. I'm curious whether MInference can still "maintain accuracy" on standard short-context tasks, such as GSM8K, MMLU, and GPQA.
2: About the slash pattern, I'm curious about the ratio of [computed tokens/blocks]/[not computed tokens/blocks] on some real tasks, for example KV-retrieval. In other words, how many slashes are there for a real task? Would you mind elaborating on this?
---
Reply to Comment 1.1.1:
Title: Response by Authors
Comment: Thank you for your recognition and detailed, insightful comments, which have been very helpful for our work.
1. _"MInference can still 'maintain accuracy' on standard short context tasks"_
Our current understanding is that this is primarily due to the following reasons:
- The sparsity exhibited in short-context attention is generally lower than in long-context attention, but they still exhibit some degree of sparsity.
- Attention in different heads exhibits stable sparse patterns. The difficulty or information density of tasks does not change the sparsity of the heads but only alters the specific sparse index positions. As described in "induction heads", multiple layers of attention have differentiated functions, contributing to the complex reasoning capabilities.
- In our experiments, the sparsity rate used for short context sizes was not high. For LongBench, with an average length of 32k, we used the search space from Tab. 6, equivalent to 1k + 4K A-shape FLOPs, to maintain consistency.
2. _"[computed token/blocks]/[not computed token/blocks]"_
Thank you for your question. In Appendix F, we discuss the actual block-level sparsity rates within the kernel for different tasks. The vertical and slash patterns exhibit an actual sparsity rate of 78%-90% around 100k and greater than 95% above 500k. While there are slight variations in actual sparsity rates across different tasks, the overall values fluctuate around these numbers.
PS: Thank you for raising the score. It seems it has not yet been reflected in the system. If possible, could you please assist with this?
Once again, thank you for your recognition and suggestions regarding our work.
---
Rebuttal 2:
Comment: I don’t want to seem too picky, but I believe the most important thing in research is to be sincere, and I am really annoyed by the current trend of over-claiming. Your method's contributions are already sufficient, even if it works well only on long texts. **No method is perfect**.
I have no idea why you keep trying to emphasize that *your method works well on short tasks; such sparsity can be used overall*. If you do want to show that MInference has minimal impact on all aspects of the model, then more rigorous testing on standard tasks should be conducted.
Back to your newly updated results. Thank you for the experiments. However, I must admit, I find your GSM8K setting very odd. I have never seen 9-shot; most GSM8K experiments are either 5-shot, like [1, 2], or 8-shot, like [3, 4]. And the performance of LLaMA-3 seems like it should be better. Using more shots increases redundancy.
Given the effectiveness on long texts, at this time, **I will not change my score**. But I am very much looking forward to more responses from the authors to address my concerns.
[1]QWEN2 TECHNICAL REPORT, https://arxiv.org/pdf/2407.10671
[2]Gemma 2: Improving Open Language Models at a Practical Size, https://arxiv.org/pdf/2408.00118
[3]https://llama.meta.com
[4]DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model, https://arxiv.org/pdf/2405.04434
---
Rebuttal 3:
Title: Response by Authors
Comment: We are very grateful for your critique and corrections. We fully agree with and acknowledge your exposition of MInference's limitations in short-context scenarios. In fact, we have never claimed that MInference is suitable for use in short-context scenarios, and we discussed these limitations in Appendix A. We will rewrite this section in the next version of the paper to highlight the discussion of limitations in short-context scenarios.
Here we provide some details to support this content:
1. MInference cannot achieve any acceleration in short-context scenarios (prompts < 10k tokens), and due to the dynamic index building, it can be 30%-50% slower than FlashAttention-2. Moreover, since the latency proportion of Attention is not high in short-context, even if the single Attention module has a speedup, the end-to-end speedup is minimal.
2. The sparsity rate of Attention in short-context scenarios is significantly lower than in long-context scenarios. For example, controlling for the same sparsity rate (96.8%) as in Fig.2(b) of the paper, the Attention recall drops from 96.4% at 128K to 89.8% at 4K.
3. If the sparsity ratio from long-context scenarios is used in short-context, MInference causes a noticeable performance degradation. For example, as shown in Table 5, the accuracy of qasper (one of single-document QA tasks in LongBench) drops from 29.64% to 8.04%.
|Methods|qasper in LongBench|
|-|-|
|LLaMA-3-8B-262K|29.24|
|MInference using 55% sparsity rate|29.64|
|MInference using 85% sparsity rate|8.04|
Table 6. Performance of different sparsity rates on qasper in LongBench using LLaMA-3-8B-262K.
_"...gsm8k setting being very odd..."_
We appreciate your critique and correction. We did not attempt any cherry-picking; we followed the experimental setup for GSM8K as per [1,2]. We will review the relevant experimental settings, align the 8-shot results, and update them in the next version of the paper.
[1] Complexity-Based Prompting for Multi-Step Reasoning, ICLR 2023.
[2] Chain-of-Thought Hub: Measuring LLMs' Reasoning Performance, Github.
Thank you once again for your efforts in reviewing our work. Your input is very helpful for the further improvement of our research, and we will update the relevant content in the next version of our paper.
---
Rebuttal Comment 3.1:
Comment: Thanks for the response.
I'm just trying to figure out what is the optimal sparsity MInference can achieve with standard tasks. It's a more fundamental problem related to transformers' internal mechanism. I totally understand that it might be too much for the rebuttal period. Looking forward to more detailed results in the next version.
---
Reply to Comment 3.1.1:
Comment: Thank you for your patience. After debugging, we found that the performance drop on GSM8K is primarily due to the extended-context version LLaMA-3-8B-262K [1], which loses significant performance on GSM8K (dropping from 78.9 to 63.8). We have now aligned the 8-shot prompt and evaluation script with lm_eval [3] using the original LLaMA-3-8B [2], with an average prompt length of 800 tokens.
We conducted experiments using the same FLOPs as an A-shape pattern with a 32-token local window and 128 global tokens. The actual sparsity rate in the kernel was approximately 65%. We observed that most methods experienced significant performance losses, with MInference showing a 10.5 accuracy drop compared to Full Attention. StreamingLLM and InfLLM exhibited even greater performance drops, with declines of 14.6 and 22.3, respectively. Additionally, we further increased the sparsity rate and found that MInference suffered from severe hallucinations, losing its reasoning capability.
|Methods|GSM8K|
|-|-|
|LLaMA-3-8B|78.9|
|StreamingLLM|64.3|
|InfLLM|56.6|
|**MInference**|68.4|
Table 5. Performance of different methods on GSM8K with 8-shot ICL using LLaMA-3-8B.
In summary, the current effective methods show considerable performance degradation in short-context scenarios, and further optimization is needed in future work. We will include this analysis in the next version.
Thank you once again for your insightful suggestions and the effort you put into reviewing our paper! Your feedback has been incredibly helpful to us.
[1] https://huggingface.co/gradientai/Llama-3-8B-Instruct-262k
[2] https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
[3] https://github.com/EleutherAI/lm-evaluation-harness | Summary: This paper proposes MInference, a method to accelerate the pre-filling stage for long-context LLM generation. The key method leveraged by MInference is dynamic sparsification, which consists of three sparse patterns observed in attention matrices: the A-shape pattern, the Vertical-Slash pattern, and the Block-Sparse pattern. A sparse pattern search method is developed to minimize the sparsification error for each attention. MInference has been tested across various LLMs and benchmarks, demonstrating that it significantly accelerates LLM generation in long-context settings with no performance drop.
Strengths: - The paper is well-written and well-motivated.
- Studying how to accelerate LLM generation for long-context settings is of great practical importance.
- The experimental results of the proposed MInference method are promising.
Weaknesses: - The experimented model scales are relatively small (e.g., LLaMA-3-8B and Yi-9B) and are only constrained to dense LLMs (i.e., not MoE).
- It is not clear how general the observed sparse patterns are across LLMs of various scales.
- The implementation details on how the proposed MInference interacts with existing CUDA kernels (e.g., flashattention) are not very clear.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does MInference scale up to LLMs with larger sizes, e.g., 70B? And how does MInference perform for MoE-based LLMs, e.g., Mixtral 8x7B?
- How general are the observed sparse patterns across LLMs?
- Is MInference compatible with other inference engines, e.g., vLLM?
- Are the Vertical-Slash Head and Block-Sparse Head part of CUDA kernels in MInference, or does MInference leverage other implementations of those sparse head kernels?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations of the proposed method have been thoroughly discussed. And there is no potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. _"For larger models and MoE"_
Thank you for your suggestion. We have added experimental results for **LLaMA-3-70B-1M**, as shown in Table 1, where MInference maintains excellent performance in larger models, significantly surpassing StreamingLLM and InfLLM[1], and nearly matching full attention. Due to resource constraints, we could not provide MoE results during the rebuttal period, but based on our experience, modifications to FFN do not affect the sparse attention patterns. We will include MoE results in future versions.
|Methods | En.Sum | En.QA | En.MC | En.Dia | Zh.QA | Code.Debug | Math.Find | Retr.PassKey | Retr.Number | Retr.KV | Avg. |
|-|-|-|-|-|-|-|-|-|-|-|-|
|***LLaMA-3-70B-1M*** | 20.7 | 10.3 | 84.2 | 9.5 | 14.0 | 33.2 | 61.7 | 97.0 | 100.0 | 34.0 | 46.5 |
|StreamingLLM | 20.5 | 8.5 | 52.0 | **10.0** | 12.6 | 27.4 | 61.1 | 14.0 | 10.0 | 0.0 | 21.6|
|InfLLM | **24.1** | 8.1 | 57.0 | **10.0** | 12.9 | 27.4 | 52.3 | **100.0** | **100.0** | 0.0 | 39.2|
|**MInference** | 20.6 | **10.1** | **83.4** | **10.0** | **14.1** | **34.1** | **61.9** | **100.0** | **100.0** | **39.0** | **47.3** |
Table 1. Performance of different methods with different base models on InfiniteBench.
[1] InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Memory, 2024.
2. _"Generality ability across models"_
We tested MInference on a variety of long-context LLMs, including LLaMA-3-8B-1M, Yi-9B-200K, GLM-4-9B-1M, Phi-3-mini-128K, and Qwen2-7B-128K, all of which maintained performance close to full attention (**see general response PDF**).
Additionally, we observed that three sparse attention patterns, especially the vertical and slash patterns, are present in BERT[2], T5 (both Encoder and Decoder), and MLLM[3]. We also found that "induction heads" exhibit a pattern similar to the "vertical and slash" pattern (see [4]).
In conclusion, we believe MInference is highly generalizable across different structures, sizes, pre-training data, training pipelines, RoPE structures, and extended long-context methods.
[2] SparseBERT: Rethinking the Importance Analysis in Self-Attention, ICML 2021.
[3] LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Inference, 2024.
[4] A Mathematical Framework for Transformer Circuits, 2021.
3. _"Compatible with other inference engines, e.g., vLLM"_
Yes, we have implemented MInference within vLLM. Our method only requires replacing the corresponding FlashAttention kernel and does not affect the scheduling of vLLM PageAttention or similar features. We will open-source the implementation after the paper review.
4. _"CUDA Kernel"_
We have utilized Dynamic Sparse Compiler PIT, Triton, and FlashAttention to implement CUDA Kernels for Vertical and Slash Head and Block-sparse Head, ensuring transferability and support for long-context LLMs inference across different GPUs and CUDA versions. Details can be found in Appendix C.4. We have submitted the relevant code with our submission and will open-source the implementation after the review. | Rebuttal 1:
Rebuttal: We are grateful for the diligent efforts and insightful comments from the reviewers. Your suggestions have been incredibly valuable to our work. We will address and resolve these issues in our responses and in subsequent versions of our paper. Here we respond to some common questions and have included additional experimental results (**see attached PDF**), specifically:
1. _"For larger models"_
We have supplemented the results for **LLaMA-3-70B-1M** on InfiniteBench, demonstrating that MInference continues to perform exceptionally well in larger models, outperforming SoTA baselines such as StreamingLLM and InfLLM[1], and matching the performance of full attention.
|Methods | En.Sum | En.QA | En.MC | En.Dia | Zh.QA | Code.Debug | Math.Find | Retr.PassKey | Retr.Number | Retr.KV | Avg. |
|-|-|-|-|-|-|-|-|-|-|-|-|
|*GLM-4-9B-1M* | 28.3 | 9.7 | 68.6 | 39.5 | 12.1 | 29.4 | 38.9 | 100.0 | 100.0 | 41.0 | 46.7 |
|StreamingLLM | 27.7 | 6.4 | 40.2 | 12.5 | 10.8 | 27.7 | 21.1 | 97.1 | 25.6 | 0.6 | 27.0|
|InfLLM | 28.0 | 7.3 | 45.0 | 14.0 | 10.7 | 27.9 | **39.4** | 98.0 | **100.0** | 2.6 | 37.3|
|**MInference** | **28.8** | **9.6** | **68.6** | **38.5** | **12.0** | **30.7** | 39.1 | **100.0** | **100.0** | **43.0** | **47.0** |
|***LLaMA-3-70B-1M*** | 20.7 | 10.3 | 84.2 | 9.5 | 14.0 | 33.2 | 61.7 | 97.0 | 100.0 | 34.0 | 46.5 |
|StreamingLLM | 20.5 | 8.5 | 52.0 | **10.0** | 12.6 | 27.4 | 61.1 | 14.0 | 10.0 | 0.0 | 21.6|
|InfLLM | **24.1** | 8.1 | 57.0 | **10.0** | 12.9 | 27.4 | 52.3 | **100.0** | **100.0** | 0.0 | 39.2|
|**MInference** | 20.6 | **10.1** | **83.4** | **10.0** | **14.1** | **34.1** | **61.9** | **100.0** | **100.0** | **39.0** | **47.3** |
Table 1. Performance of different methods with different base models on InfiniteBench.
[1] InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Memory, 2024.
2. _"Generality ability across models"_
- To demonstrate the generality of MInference, we tested it on various open-source long-context LLMs, including but not limited to LLaMA-3-8B-1M, Yi-9B-200K, GLM-4-9B-1M, Phi-3-mini-128K, and Qwen2-7B-128K, compared with the additional baseline InfLLM. Our method consistently achieves good performance and acceleration across models with different pre-training data, training pipelines, RoPE structures, extended long-context methods, and sizes.
- We have also included results from LongBench to further substantiate the generality of MInference in scenarios around 32K tokens.
- Moreover, we observed that three sparse attention patterns, particularly the vertical and slash patterns, are not only present in GPT-like LLMs but also in BERT[2], T5 (both Encoder and Decoder), and MLLM[3]. We also found that "induction heads" exhibit a pattern similar to the "vertical and slash" pattern (see [4]).
- In summary, we believe MInference exhibits strong generality across various model architectures.
[2] SparseBERT: Rethinking the Importance Analysis in Self-Attention, ICML 2021.
[3] LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Inference, 2024.
[4] A Mathematical Framework for Transformer Circuits, 2021.
|**RULER** | Effective | 4K | 8K | 16K | 32K | 64K | 128K | Avg. |
|-|-|-|-|-|-|-|-|-|
|*GLM-4-9B-1M* | 64K | 93.8 | 91.6 | 89.3 | 87.4 | 85.2 | 80.8 | 88.0 |
|StreamingLLM | 4K | 93.8 | 66.9 | 58.5 | 51.4 | 45.9 | 39.1 | 59.3 |
|InfLLM | 8K | **94.7** | 89.5 | 76.4 | 66.5 | 56.8 | 53.5 | 72.9 |
|**MInference** | 64K | 94.6 | **93.1** | **91.0** | **89.6** | **85.5** | **84.0** | **89.6**|
Table 2(a). Performance of different methods on RULER.
|**LongBench** | SingleDoc | MultiDoc | Summ. | FewShot | Synth. | Code | Avg. |
|-|-|-|-|-|-|-|-|
|*LLaMA-3-8B-262K* | 33.5 | 28.3 | 29.4 | 66.9 | 43.0 | 42.4 | 40.6 |
|StreamingLLM | 29.7 | 23.7 | 28.8 | 65.8 | 19.3 | **42.8** | 35.0 |
|InfLLM | 31.6 | 25.8 | 29.1 | 66.3 | 36.3 | 41.8 | 38.5 |
|**MInference** | **34.0** | **28.0** | **29.7** | **66.8** | **42.8** | 42.2 | **40.6**|
Table 2(b). Performance of different methods on LongBench.
Pdf: /pdf/9d85b31fedaf6c8a220abfb6472df2846fbf511c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Compositional Generalization Across Distributional Shifts with Sparse Tree Operations | Accept (spotlight) | Summary: Authors propose a new representation that they call Sparse Coordinate Trees. When applied to Differentiable Tree Machines, they make computation much more parameter and memory efficient. Due to clever design, the SCTs allow for much more efficient tree operations by bit-shifting, indexing, and addition. Because the tree will naturally become very dense, they apply pruning to make it more sparse. They also propose how to adapt it for sequential inputs and outputs, rather than in tree-form.
In the experimentation section, they provide results on IID, zero-shot, one-shot, structural / length, and template tasks to test generalization, showing that in some ways these methods outperform previous work.
Strengths: A) Professional and clear writing
B) The number of parameters is clearly reduced from the original DTM
C) The memory usage is reasonably reduced, and very reduced for the pruned version
D) Operations are quite a bit more efficient
Weaknesses: A) It would be clearer if there was a better description of what the left, right, and cons functions are intended to accomplish, as this is quite central to the methods
B) There is a lot of extra space in the graphs, and with 5 runs, adding standard deviations would be nice
C) In Table 1, I am not fully convinced this dataset presents a fitting challenge. The IID are already at 1.0 for almost every method, providing no meaningful distinction between them (although on its own this is maybe fine, as the OOD tasks are the focus). However, both OOD sets go from primarily 0% in the previous works to 100% in this work--not only does it make it seem like the task is very easy once attempted, but it also makes it very difficult once again to differentiate between methods.
D) The only comparison to DTM is in Table 1--aside from lowering resource consumption, it is not clear if the methods have any performance difference (and as pointed out in C, it is not clear if they are truly equivalent or the dataset is just too easy). DTM should really be included in the other experiments. That way, it is clear if sDTM has added performance or just implementation efficiency. If it is just efficiency, then more experiments showing time, memory, parameters, etc would be more fitting than many separate results.
E) Overall, the space is not well used (lots of white space, especially in experiments). It would be better if this were used to showcase more in the paper.
F) Method consistently shows bad performance on MCD (worst of all 4 methods in Table 3, very bad in Table 4)
G) Same as F, but for length experiment
H) It is inconsistent where / when the different tasks are presented. e.g. sDTM seems to be good at 0-shot lexical and structural, but structural is only shown in Table 1. If this is where it is good, it would be much more interesting to see more of that task, than to see MCD in two places (Table 3 + 4), even though sDTM is consistently bad at these tasks.
My primary concerns are
1) I'm not fully convinced of the novelty because, as I understand it, it is mostly a more efficient version of DTM, but the experiments focusing on this are very limited and it is also not compared much to DTM in terms of performance (see C, D, E)
2) the results are not great (see C, F, G). In some tasks the method underperforms across the board (length, MCD), and in others where it is good, there is a shortage of results (see H)
Technical Quality: 3
Clarity: 4
Questions for Authors: A) in line 306, you say the variants score 0.03%, however in Table 2, they have .03 -- do you mean 3% or .0003? Or are the values in the table truly in percent, and the method is just getting 1%?
B) Do I understand correctly that in Table 3, sDTM performs the worst of all metrics on all 4 tasks?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Reviewer mbkf
Thank you for the time you spent reviewing our paper. Addressing your feedback will make our submission much stronger.
## Weaknesses
A. We will make sure to improve the motivation for using left, right, and cons in the camera-ready version. The design of DTM was motivated by the fact that the vast majority of classic symbolic AI was carried out in Lisp, using binary trees as the universal data structure, and using left, right, cons as the complete set of universal structural operators (literally Turing-universal when deployed with the control operation of conditional branching on testing for symbolic equality, as noted in the paper on lines 141-144). This generality means DTM and sDTM are potentially applicable to the huge range of AI tasks previously solved with Lisp.
B. We acknowledge that Table 1 and Table 3 both have whitespace. We will use the whitespace in the tables to include summary statistics about the five runs in the camera-ready version. Can you please let us know where you see excessive whitespace in other figures and tables? Additionally, we experimented with a vertical version of Figure 2 to try and reduce whitespace. The vertical version is available in the PDF attached to our global rebuttal. We would be grateful to know whether you think that this design is a better use of space.
C. We are a bit confused by this comment. Highlighting that some models go from 100\% IID performance to 0\% OOD performance is a signature metric in compositional generalization. In addition to Table 1, we find a similar pattern on FOR2LAM (Table 2) and SCAN (Table 4). This pattern is also present in other compositional generalization tasks such as COGS, PCFG, and CFQ. If this observation does not address the issue, we would appreciate a clarification of the problem we should address.
D. This point was also mentioned by other reviewers, and it is clear to us that we did not provide adequate explanation for why we did not test DTM on FOR2LAM, GeoQuery, and SCAN. Please see our response to this in the **sDTM vs DTM** section of the Global Rebuttal.
We appreciate your point that we should emphasise the efficiency gains of sDTM. In addition to parameter and memory efficiency, sDTM is also almost twice as fast as DTM, a point that we did not highlight in the submission. We will correct this in the camera-ready version so that the efficiency improvements of sDTM are better explained.
E. If you care to let us know where you find there is excessive whitespace, we would appreciate it, and would take steps to eliminate it. In order to fit within the NeurIPS style guidelines, there is only so much we can do about whitespace around Tables. We have much more control about whitespace in Figures and are happy to update any figure that you mention as poorly using space.
F. MCD distribution shift is one where we did not find benefits over Transformer and RU-Transformer. We address these limitations in the Conclusion on lines 344-347, where we also highlight that despite the poor performance on MCD shifts, our model performs well in comparison to baselines across the widest variety of distributional shifts.
G. We grouped structural and length generalization together to highlight that generalizing to novel linear positions can also be viewed as generalizing to novel structural positions in a latent parse tree. As shown in Figure 1, our model has the best overall performance on length/structural distributional shifts. One potential source of this confusion is the performance of sDTM on SCAN, where the model performs significantly better when the output strings are parsed in a meaningful way.
H. Please see our response to this in the **Layout of the experimental results** section of the Global Rebuttal.
One of the goals of our submission is to highlight distributional shifts and how different models perform differently across them. With this goal in mind, we thought that it was important to include results for sDTM on MCD shifts even though sDTM's performance is disappointing.
### Primary Concerns
1. We failed to explain in our submission that DTM is unable to train on FOR2LAM and GeoQuery. We hope that our explanation in the **sDTM vs DTM** section of the Global Rebuttal adequately highlights the extent to which sDTM provides drastic efficiency improvements. Our response to Weakness H also explains how each experiment is associated with a novel contribution from our work.
2. We would like to reiterate that our results illuminate how different models, even ones designed for compositional generalization, perform differently across different datasets and distributional shifts. While NQG has good 1-shot lexical generalization accuracy, we found that it has no 0-shot generalization accuracy. Additionally, while NQG has perfect length generalization on SCAN, it does not outperform a Transformer on the GeoQuery length split and has 0\% accuracy on the Active$\leftrightarrow$Logical structural split. We hope that sDTM’s performance when considered across all of the datasets and splits in our submission highlights our contribution.
## Questions
A. Thank you for catching this typo, line 306 should read 3\% and not .03\%. Our model scores 100\% IID and 61\% on 0-shot lexical generalization.
B. Yes, we included results of GeoQuery to show how sDTM and the baselines all struggle on this task. Please see our response to this in the **Performance concerns** section of the Global Rebuttal.
## General
Please let us know if we did not adequately address any of your questions or concerns. We are committed to engaging with you during the discussion period to continue improving our paper. If you find that our changes in response to your feedback improved our submission, please consider increasing your rating for our paper.$\newline\newline$
Thank you again for your time and consideration. We look forward to hearing back from you.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you very much for your detailed and thoughtful response, you have addressed many of my concerns. There are just a few points I would like to continue discussing (the points omitted here I am happy with, and would like to thank you for addressing accordingly).
WB/E) The updated figures in the vertical format look nice, and look like they should already reduce a bit of space. In terms of additional space-saving suggestions, because it requires some playing to see the actual impacts, I can't be 100% certain if these would work, but some things I could think of to try:
- for the updated version of Figure 2, you could also abbreviate Tree positional index (e.g. Tree pos. ind., or TPI with an explanation the caption, or something else fitting), which would allow you to remove quite a bit of white-space on either side of the indices. Then you could either reduce the font size in the top part of the figure or move some of it closer together (maybe The.101 and NP.101, as fun.101 and person.111 are already close)
- for Figure 4, the boxes around Agent/Interpreter/Memory are quite high--a bit of space could be saved vertically by shrinking them a bit. This might also look a bit cleaner
- All of the tables have a lot of white space between the columns, especially where the titles are long (e.g. 1-shot lexical). If you can shorten/abbreviate them fittingly (an easy example being Length as Len., or with explanations in the captions), you can make them shorter across and either put two on one line or make them in-line with the text, like you did for Table 2. Alternatively, you can use the space to report other information (e.g. higher-level stats, as discussed in WB). Also with Table 3, because it does not take up the full width, a lot of space is wasted to the right and left of the table--you could find another table to put side-by side (e.g. table 2, with a bit of shrinking described above) or also put it in-line with the text.
Of course, this space use does not affect the rating directly, but there is some opportunity cost for the information that could have been otherwise presented with this space.
WC) You are correct, that the 1.0 IID to 0.0 0-shot lexical is typical / understandable. I'm a bit more concerned with how 0-shot lexical progresses across models--for Transformer, RU-Transformer, and NQG it is 0.0 (again, probably understandable), but for DTM and sDTM it is 1.0. If the first work that addresses the 0-shot lexical case immediately goes from 0% to 100%, is the dataset not perhaps too easy? And it makes it impossible to compare between the different methods, as they all score 100%. In the FOR2LAM dataset, for the 0-shot lexical split this accuracy goes from 0.03 in previous work to 0.61--not only is the dataset sufficiently hard to show that the problem is still unsolved, but if there were other competitors here (e.g. DTM), it is more likely there would be something more meaningful to compare between methods. As a whole, I'm concerned the bar is set too low here in the 0-shot lexical setting for in-depth method comparison.
WF) This is fair enough--unfortunate, but you're right that it's better to report the method fairly.
Primary concern 1: this makes sense, then I agree that simply being able to apply a DTM-like algorithm in these new scenarios is its own novelty
Primary concern 2: with your response in mind, I'll concede that the results are fair. My primary take-aways: exp. 1 shows improved results in 0-shot lexical and structural (as discussed in C, I'm not 100% convinced on the dataset, but either way it is clearly an improvement over previous work); FOR2LAM results are clearly improved; SCAN shows that the 0-shot lexical and template improves over NQG, although the other splits are worse, which is just a trade-off between methods; and GeoQuery results continue to be disappointing showing that the dataset is difficult (but also that sDTM is particularly affected by the difficulties). At the very least, there is a strong case for using sDTM in the 0-shot lexical case. TLDR: I agree there are some clearly compelling use-cases here, even if it is not better across the board.
Q2) How you addressed this in the general rebuttal is very nice--if you can turn this section from implying that sDTM doesn't work well (as it currently does) to an analysis of how sDTM works / what information it leverages / in what scenarios it is most effective (similar to what you wrote in the general rebuttal), it could turn the section from a weakness into a strength. Saving some white-space in the figures as discussed in WB/E might give you some extra space to go more in-depth into this.
---
Reply to Comment 1.1.1:
Comment: B/E. Thank you for clarifying your concerns with the whitespace and proposing solutions. We now understand what you are asking for in the other figures and tables and will implement your suggestions. The updated version will reflect this, and we agree that the best use of the extra space this affords us is to provide "an analysis of how sDTM works / what information it leverages / in what scenarios it is most effective".
C. We understand your concern about Section 5.2 and believe that this may be addressed by further clarifying the purpose of this experiment. While we think your concern about the simplicity of the Active$\leftrightarrow$Logical task is well founded, it is one of the main datasets used in the paper introducing the original DTM. As a result, we felt it was important to include it here in order to show that sDTM with 70x fewer parameters still retains the performance of the original DTM. To make the purpose of the experiment in Section 5.3 clearer, we proposed changing the section header to **Performance Regression (Active$\leftrightarrow$Logical)** in the general rebuttal, and will add a couple of sentences at the start of the section to this effect. As for raising lexical OOD from 0 to 1, this is one of the goals behind making the operations in the programs learned by sDTM blind to lexical identity: the tree operations exploit the factorization of structure and content to purely manipulate structure, carrying along whatever content (symbols, familiar or novel) may be contained in the structural positions.
However, your question about the capability differences between DTM and sDTM still stands. Since both DTM and sDTM achieve ceiling performance on Active$\leftrightarrow$Logical, it is impossible to tell what the capability differences are from this experiment. We think adding DTM results to SCAN will help illuminate the differences between DTM and sDTM, especially since sDTM does not have ceiling performance across all of the splits.
WF/Primary Concern 1/Primary Concern 2: We are glad that our responses to Weakness F, Primary Concern 1, and Primary Concern 2 were helpful. With regard to the first experiment results discussed in Primary Concern 2, we hope that our discussion of Weakness C above further alleviates your concerns. Please let us know if there is anything that you wish to continue discussing in those areas.
Q2: Thank you for your kind words, your feedback was essential in encouraging us to put down our reasoning into words. We will certainly update our discussion of GeoQuery results to include the content in our general rebuttal. | Summary: The paper proposes a novel way of representing sparse trees where nodes have vector attributes in a denser, tensorised format which they call Sparse Coordinate Trees (SCT). Essentially, the crucial component for SCTs is to represent the indices of the nodes according to their topological ordering, allowing for all nodes to be represented by a vector of indices and a tensor of attributes. Additionally, the authors also show how some traditional operations on trees, such as taking left/right subtrees or constructing a new tree from left/right branches, can be efficiently implemented with simple indexing or bitshifts when working on the binary representation of the node indices. Moreover, these operations can be parametrised in a differentiable way using modern machine learning models, such as transformers, opening the door to learning the structure of a SCT
The next contribution is then to use SCT to extend an existing neurosymbolic [1, 2] framework called Differentiable Tree Machines (DTM) to be able to work with sequence data (seq2tree and seq2seq tasks) instead of just tree data (tree2tree tasks). While maintaining the semantics of DTM, the use of SCT as an inference engine is also shown to be more memory and parameter efficient. Finally, the theoretical claims of the paper are supported by a strong suite of four benchmarks.
[1] Garcez, A. D. A., & Lamb, L. C. (2023). Neurosymbolic AI: The 3 rd wave. Artificial Intelligence Review, 56(11), 12387-12406.
[2] Marra, G., Dumančić, S., Manhaeve, R., & De Raedt, L. (2024). From statistical relational to neurosymbolic artificial intelligence: A survey. Artificial Intelligence, 104062.
Strengths: 1. The paper is very well written and easy to follow. The motivation for the ideas in the paper and their explanations are clear.
2. The suite of experiments is quite extensive, covering the three kinds of tasks discussed in the paper (tree2tree, seq2tree, seq2seq) on recognised datasets. Apart from the number of experiments, the advantages of sDTM compared to the chosen baselines are also clear in most cases.
3. The idea of the paper is simple, but elegant and it is easy to see why it can give substantial improvements in terms of efficiency. It also nicely allows for the incorporation of modern machine learning models like transformers. While I am unsure about the overall impact of the work as it seems there are many questions left to answer, the questions and answers about generalising beyond mere i.i.d. training and test cases are crucial and tie in with the current rise of neurosymbolic AI.
Weaknesses: While I overall enjoyed reading this interesting paper and appreciate the provided insights, I do have some comments and questions:
1. A lot of related work is properly and extensively discussed in Section 2, yet I do believe a series of references might be missing. There are many more general neurosymbolic frameworks that use neural nets to parametrise symbolic components. Some are based on fuzzy logic [1], while others use probabilistic logic [2, 3], in contrast to being based on tree structures. For the special case of sequences, a system based on stochastic grammars also exists [4]. Moreover, some of these systems go further in allowing neural embeddings to be present within the logical system [5, 6], similarly to how nodes in the tree are composed of their vectors of attributes. It is true that many of these systems have not focused on structure learning and do assume some prior knowledge, which is not a prerequisite for the proposed method, although this area of "structure learning" (or learning a *program*, as line 157 puts it) is an active area of research [7, 8].
2. In section 3.2 it is shown that the operations *left*, *right* and *cons* can be implemented very efficiently as tensor operations. However, not much is said about the additional operations of *conditional branching* and *equality-testing*. It is mentioned that the five operations together are Turing complete, but only the first three seem to be used and nothing is said about an implementation of the last two for SCT.
3. sDTM extends DTM to tasks different from just tree2tree tasks, but it is not completely clear how much of this is due to the use of SCT. For example, to allow even sDTM to deal with seq2seq tasks, the authors do need a hardcoded translation from output trees to sequences. I wonder if a similar hardcoding could not have been used for input sequences to trees, allowing vanilla DTM to deal with the same coverage of tasks, albeit with a less flexible input encoding.
4. Most of the experimental results are promising, but some results did raise some questions (see below in the questions section). It would also be nice to see some examples of some of the datasets, even if only in the appendix, to make it more tangible what the input and output is of the experimental tasks.
5. As briefly mentioned previously, the true impact of this work remains hard to guess. There is surely a lot of promise in neurosymbolic methods in general and the proposed SCTs and sDTM do show improved generalisation performance by learning both neural and symbolic components *from scratch* and *from data*. However, the use of *only* tree structures could prove limiting for applications with more intricate dependencies.
Smaller concerns:
1. Section 4.3 talks about how pruning can be used by keeping the top-$k$ nodes. However, it is unclear whether this can be done during training, since the top-$k$ operation is not differentiable.
2. On line 197 it is stated that attention is permutation invariant, yet this is not completely correct. Attention is invariant to permutations of the keys and values and only *equivariant* to permutations of the queries. It would be good to make the distinction clear to avoid confusion.
[1] Badreddine, S., Garcez, A. D. A., Serafini, L., & Spranger, M. (2022). Logic tensor networks. Artificial Intelligence, 303, 103649.
[2] Yang, Z., Ishay, A., & Lee, J. (2020, July). NeurASP: Embracing Neural Networks into Answer Set Programming. In 29th International Joint Conference on Artificial Intelligence (IJCAI 2020).
[3] De Smet, L., Dos Martires, P. Z., Manhaeve, R., Marra, G., Kimmig, A., & De Raedt, L. (2023, July). Neural probabilistic logic programming in discrete-continuous domains. In Uncertainty in Artificial Intelligence (pp. 529-538). PMLR.
[4] Winters, T., Marra, G., Manhaeve, R., & De Raedt, L. (2022, June). Deepstochlog: Neural stochastic logic programming. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 36, No. 9, pp. 10090-10100).
[5] Rocktäschel, T., & Riedel, S. (2016, June). Learning knowledge base inference with neural theorem provers. In Proceedings of the 5th workshop on automated knowledge base construction (pp. 45-50).
[6] Maene, J., & De Raedt, L. (2024). Soft-unification in deep probabilistic logic. Advances in Neural Information Processing Systems, 36.
[7] Shindo, H., Nishino, M., & Yamamoto, A. (2021, May). Differentiable inductive logic programming for structured examples. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 6, pp. 5034-5041).
[8] Muggleton, S. (1991). Inductive logic programming. New generation computing, 8, 295-318.
Technical Quality: 3
Clarity: 4
Questions for Authors: Apart from the concerns raised in the previous section, here are a couple more specific questions:
1. What is the intuition behind only using a single learnable parameter for the query vector? (lines 205-206)
2. While it is nice to see that sDTM generally does perform better than transformers in terms of OOD generalisation, I am left wondering if it also can not be prone to the same pitfalls as transformers as SCT and hence sDTM do utilise transformers internally to construct trees. Could you elaborate on this as it could be an important limitation of this paper? As experimental support for this limitation, the lacking performance in the experiment of Section 5.4 could be evidence. Additionally, the imperfect score of 0.61 in Table 2 can also be seen as evidence for this, given the rather simple nature of "replacing the name of a variable".
3. Experimental questions:
+ Why is the original DTM only present in Table 1 and not Table 2? Both relate to tree2tree tasks where DTM should also be applicable if I understand correctly.
+ Lines 283-285: How much of the 20% memory reduction is due to the use of pooling by attention and how much is due to using SCT?
+ Table 2: sDTM gets a score of 0.61 on the 0-shot test set where one variable name is consistently changed to another. Do all test set occurrences contain the variable x that is changed to z?
+ In general, the evaluation metrics should be explained a bit more in detail. For example, what does it mean for a FOR2LAM translation to be correct? Must the translated AST be exactly the same as the target, or only equivalent in some way?
+ Lines 321-322: the small dataset is used as argument for the lacking performance of sDTM compared to other methods, but do those other methods not also suffer from the small dataset? Transformers are known to be rather data-hungry, so I would still have expected sDTM to outperform them at least in this task. Do you have some deeper intuitions as to why this is not the case?
+ In general, why the choice of the best performance out of 5 runs? While means and standard deviations are certainly not always ideal, aggregate and variability metrics are still more insightful to gauge the consistency of the tested methods. If one does not want to use means and standard deviations/errors because of their distributional assumptions, medians and quantiles are a good solution.
+ It seems like there are more neurosymbolic methods that could be applied to the discussed tasks, such as those mentioned in the related work in lines 97-98. Why the choice for only NQG?
4. I am curious about the overall training times for all methods, to see if sDTM requires substantially more time to train or not. Can you comment on this please?
In general, I do give a more positive rating to this paper as its presentation is excellent, its contribution is interesting and its experimental evidence is quite convincing. I will gladly further increase my score if the authors can answer my concerns.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: I believe limitations are sufficiently addressed, as the conclusion specifically mentions that sDTM still struggles with some OOD generalisation tasks. However, some potentially limiting factors, such as training times, are not immediately clear.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Reviewer Hmrt
Thank you for the extensive feedback you provided on our submission! Addressing your questions and the weaknesses you identified will significantly strengthen our work. We are pleased that you found our work “very well written”, “the suite of experiments is quite extensive”, and “the idea of the paper is simple, but elegant”. Below we will address the weaknesses and questions that you identified.
## Weaknesses
1. We are grateful for the additional references that you provided. We will amend the Related Work section in the camera-ready version to include a subsection on Neurosymbolic Computation with the references you provided, as well as additional citations.
2. Thank you for catching this oversight. Conditional branching and equality-testing are the control flow mechanisms for how to sequence the structural operations as well as argument selection. In this regard, conditional branching and equality-testing are implicitly parameterized by the Agent using a Transformer. In numerous approaches to neural program synthesis, neural networks are used to parameterize program control flow (e.g. Nye et al (2020), where networks generate symbolic rule sequences). Unlike most of that work, however, in our approach the synthesized programs are themselves differentiable. We will update the camera-ready version to make this contribution of our work clear.
3. Please see our response to this in the **sDTM vs DTM** section of the Global Rebuttal.
4. Thank you for this suggestion. We included an example sample from SCAN in Figure 5 and will add additional samples across the datasets to the Appendix in the camera-ready version.
5. Extending data structures in superposition to graphs is an active area of research. We focused on trees given their importance in language processing, but we would like to generalize the techniques in our paper to graphs as well. Many data structures such as lists, stacks, and queues can be implemented using left, right, and cons, so it is possible for sDTM to succeed on tasks that would be better modeled by these data structures. This is an interesting point that we intend to investigate in follow-up work.
Smaller Concern 1: top-k selection is also done during training. Top-k selection can be viewed similarly to deterministic channel-wise dropout. As such, when a node is dropped, it does not contribute to the output and thus receives a gradient of 0. We will include additional text in the camera-ready version to make sure that it is clear how top-k selection works with regard to training.
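The dropout analogy in this reply can be sketched as follows; this is our reading of the rebuttal, not the authors' code, and `prune_topk`, `node_vecs`, and `scores` are hypothetical names:

```python
import numpy as np

def prune_topk(node_vecs, scores, k):
    """Keep the k highest-scoring nodes; mask the rest to zero.

    Because pruned rows are multiplied by 0, they contribute nothing to any
    downstream computation, so backpropagation would assign them a zero
    gradient -- analogous to a deterministic, score-driven channel dropout.
    """
    keep = np.argsort(scores)[-k:]  # indices of the k largest scores
    mask = np.zeros_like(scores)
    mask[keep] = 1.0
    return node_vecs * mask[:, None], mask
```

Here the masking is hard but still well defined under autodiff: the kept nodes get ordinary gradients, the dropped nodes get exactly zero, matching the "deterministic channel-wise dropout" view above.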
Smaller concern 2: this is a good point, thank you for your attention to detail! We will update the text to make it clear that attention is permutation equivariant.
## Questions
1. The sentence that you pointed out is written incorrectly. We have n_head query vectors each of dimension d_key. We will correct this in the camera-ready version. Thank you for spotting this typo.
2. Good point, and this ties in with point 2 you raised in the Weaknesses section. As long as the Transformer can make the correct control flow predictions, i.e., which operations to perform on which trees in memory, the transformation will have good generalization. However, sDTM can still fail to generalize correctly when the underlying Transformer that powers the agent does not make correct control flow predictions. We address some concerns about model performance in the **Performance concerns** section of the Global Rebuttal.
3. Experimental questions
1. Please see our response to this in the **sDTM vs DTM** section of the Global Rebuttal.
2. This is a good question, and we don't have a definitive answer just yet. SCT only provides memory savings when nodes are empty. DTM with more layers will eventually lead to fully dense trees, in which case all of the memory savings will be due to pooling by attention. However, in this specific example, the early sparse trees will account for some of the memory savings.
3. Yes, every test set sample contains a variable x that is changed to z in the 0-shot test set.
4. The evaluation metric that we use is Exact Match Accuracy. We will update the paper to clarify this.
5. Please see our response to this in the **Performance concerns** section of the Global Rebuttal
6. We find that sDTM is prone to getting stuck in local optima and followed previous papers in reporting the best run, as described on lines 982-986 in the Appendix. We acknowledge that summary statistics are still helpful. We want to continue reporting the best performance since this highlights the theoretical capabilities of our model, but we will also include summary statistics in the updated version.
7. We picked NQG as our signature hybrid neurosymbolic model since it was applied to different distributional shifts and datasets. Most other hybrid neurosymbolic models are heavily tied to a specific dataset or distribution shift, and we did not have the resources to adapt models for each task and distribution shift.
4. Thank you for pointing out this oversight, we should have included information about the training time for each architecture and will rectify this. On Active$\leftrightarrow$Logical, 20k steps took DTM 11 hours, sDTM 6 hours, and our Transformer 2 hours. While the forward and backward speed of Transformer is still much faster than sDTM, it uses highly optimized CUDA kernels, whereas sDTM has a lot of room for improvement.
## General
Please let us know if you have any additional questions or concerns. If we addressed your questions and concerns, please consider increasing your score. We truly appreciate the time that it must have taken to provide such a careful analysis of our submission. Thank you!
---
Rebuttal Comment 1.1:
Title: Acknowledgement of author rebuttal
Comment: Thank you for the extensive answers to my questions and concerns! I sincerely appreciate the honesty in your answers, for example mentioning that sDTM can be prone to getting stuck in local optima. I hope to find this observation together with the other remarks (such as training times and the impracticality of DTM on the FOR2LAM and GeoQuery tasks) in the camera-ready version of the paper. Additionally, I am also looking forward to the additional comparison between DTM and sDTM on the SCAN task.
I believe the clarification of using transformers as the overall control flow mechanism, leading to differentiable synthesised programs, will further improve the overall exposition as it shows which parts are symbolic and which parts are neurally parametrised. Especially since readers can then easily identify why sDTM might struggle with OOD generalisation. For example, in cases where the burden of generalisation falls on the neural component.
With respect to my first smaller concern, I now see that top-k in the context of the paper indeed can be seen as a differentiable deterministic dropout. It is only when probabilistic semantics are attached to the predictions, e.g. the predicted values are to be interpreted as probabilities, that top-k introduces complications for gradients. Specifically, the same deterministic dropout interpretation would be a biased estimate of the true gradient in case of probabilistic semantics. However, since no probabilistic semantics are being claimed, I agree with the provided answer.
I consider my concerns addressed and up my score to a full accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for your very comprehensive analysis of our submission! Addressing your concerns and questions greatly improved our paper, and the camera-ready version will benefit greatly from these changes. | Summary: This work addresses the problem of compositional generalization in the domain of natural language processing. The authors highlight that incorporating tree structures into a model's representation space is important for achieving compositional generalization. To this end, the authors build upon a recent method for incorporating such structure by extending the method such that it is 1. significantly more parameter/memory efficient and 2. able to handle seq2seq tasks as opposed to just tree-to-tree tasks. The authors test their method on various natural language tasks and show superior compositional generalization performance across several metrics relative to baseline methods, as well as improved efficiency relative to the method they build upon.
Strengths: * This paper addresses an important problem; namely, closing the gap between humans' and machines' ability to generalize compositionally in natural language tasks.


* The paper is very well written, well structured, and easy to understand.


* Section 2 provides a solid review of prior work and does a good job contextualizing the authors' contribution relative to it.


* The authors' method yields promising empirical results both in terms of memory efficiency and performance relative to existing baselines, e.g., Transformers.


* The authors are upfront and transparent when their method underperforms in Section 5.4 and aim to provide potential explanations for why this may be occurring.
Weaknesses: __1.__ I found the experiments section to be a bit unclear in its focus. As I understand, one of the core points of sDTM, and thus the paper, is that sDTM is significantly more memory/compute efficient than DTM. While the authors compare these two methods in terms of efficiency and performance in 5.2, such an experiment does not exist for the tree2tree task in 5.3. Consequently, I am a bit confused about the purpose of the experiments in 5.3 given the main message of this work.


Specifically, the current point seems to be to show that sDTM outperforms baseline methods on FOR2LAM. Given that sDTM is an extension of DTM, the scores for sDTM in isolation do not seem particularly meaningful without a relative comparison to DTM as was done in 5.2. Please let me know, however, if there is something I am missing here.


For the same reasons, I think the experiments in 5.4 and 5.5 would also benefit from reporting scores for DTM alongside sDTM. I suppose the original DTM method cannot be directly applied since these inputs involve sequences; however, if I understand correctly, it seems the same technique used to deal with sequence tasks in sDTM can be applied to DTM.


$\newline$


__2.__ It would be important to understand the compositional generalization benefits obtained by sDTM over, e.g., Transformers on larger-scale models/datasets; however, I also recognize that such a study could be out of the scope of this work.


$\newline$


__3.__ There are some cases in which sDTM does not offer benefits over existing baselines, as reported in Section 5.4; however, the authors are upfront about this in the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: * What is the primary purpose of the experiments in Sections 5.3-5.5?


* Do the authors have intuition for how well sDTM could scale to more complex models/datasets? In particular, given its compute efficiency over DTM?


* Do the authors envision that sDTM could be applied to tasks outside of natural language, e.g., visual reasoning or planning tasks?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This paper does not contain an explicit limitations section, however, the authors provide a transparent discussion about some limitations of sDTM in Section 5 and in the Conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Reviewer bgti
Thank you for the time you spent to understand our submission and provide valuable feedback! Addressing your concerns will make our paper substantially stronger. We are pleased that you found that our paper “addresses an important problem”, is “very well written”, contextualizes our contribution in relation to prior work, and “yields promising empirical results”. Below we will address the weaknesses and questions that you highlighted.
## Weakness
1. Our original submission did not adequately explain the significance of each experiment, and we failed to explain when it is possible to compare DTM and sDTM. As you and other reviewers point out, we only directly compare sDTM and DTM in Section 5.2. The reason that we did not include DTM in Section 5.3, and this is a point we will be sure to highlight in the camera-ready version, is that DTM cannot fit even a batch size of 1 into memory on FOR2LAM. Theoretically, DTM should be able to solve FOR2LAM, but its inefficiency, with memory growing exponentially with tree depth (lines 111-115), makes this impossible in practice. The purpose of Section 5.3 is to show that sDTM, which has the same theoretical guarantees as DTM but is practically more efficient, can scale up and solve FOR2LAM.
\
\
Your understanding of the separation between our contribution of seq2tree and seq2seq and architectural improvements in sDTM is correct. The same technique to process sequences that we introduce alongside sDTM can also be applied to DTM. GeoQuery has deeper trees than FOR2LAM, which means that it is also not feasible to test DTM on GeoQuery. However, it is practical to train both sDTM and DTM on SCAN. We will run the experiment to test DTM on SCAN and will update the camera-ready version with these results.
\
\
In order to make the purpose of each experiment clear, we propose to change the subheaders in the Results section of the camera-ready version to better reflect the significance of each experiment, with the associated dataset in parentheses. **Section 5.2: Performance Regression (Active$\leftrightarrow$Logical)** confirms that sDTM does not worsen DTM's original performance. **Section 5.3: Scalability (FOR2LAM)** investigates a tree2tree transformation task that DTM cannot handle, as explained in the previous section of this rebuttal. **Section 5.4: Seq2Tree (GeoQuery)** introduces the change of processing a sequence as input instead of a tree, and **Section 5.5: Seq2Seq (SCAN)** adds the additional modification of sequence outputs.
2. We hope that our more efficient implementation of DTM will allow us in future work to compare the compositional generalization benefits of sDTM and Transformers on larger datasets and models. In this work, we focused on making it practical to test DTM on a wider variety of tasks by making it much more efficient and capable of processing sequence inputs and outputs. As mentioned in the previous section, the original DTM could not be tested on FOR2LAM and GeoQuery because it was too inefficient to fit a batch size of 1 into the available memory. sDTM provides a base for us to approach larger datasets in future work.
3. While we would be excited if sDTM was the best model across every split and every dataset, Figure 1 shows that sDTM's improvement over baselines comes when results are considered as a whole. As we note in our general response to reviewers, we aim to present results across a breadth of distributional shifts to help shed light on what kinds of generalization different architectures excel at.
## Questions
1. Please see our response to Weakness #1 above.
2. Please see our response to Weakness #2 above.
3. With regard to other modalities, we hope that sDTM can be applied to any task that can benefit from hierarchical representation and processing. As you mention, many types of visual reasoning and task planning, such as scene understanding and goal decomposition into subgoals, can be formulated as hierarchical problems. The challenge here will be going from raw input, such as an image, to a tree-structured representation, as well as managing this encoding in a way that generalizes.
## General
Please let us know if you think that our submission would benefit from an explicit Limitations section.
\
\
We hope that we have addressed your questions and concerns. If you have any additional questions, or we did not address any of your original points, please let us know. We are committed to continuously improving our paper and appreciate your feedback. If you find that our changes in response to your feedback improved our submission, please consider increasing your support for our paper.
\
\
Thank you again for your time and consideration. We look forward to hearing back from you.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal
Comment: I thank the authors for addressing my points in detail. I have decided to increase my score to accept.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your feedback! We really appreciate the time you spent to help us make our submission stronger. | null | null | Rebuttal 1:
Rebuttal: # Global Rebuttal
We thank all of our reviewers for their careful analysis of our work. In this global response, we highlight points shared by multiple reviewers.
First, we are excited by the kind comments that reviewers provided concerning our paper. All three reviewers found our paper to be clear, easy to understand, and well written. They also highlighted the empirical performance and computing efficiency benefits of our work compared to the original DTM, especially in terms of parameter count and memory usage. We realized that in addition to the benefits that the reviewers highlighted, we failed to describe the speed benefits of sDTM over DTM. Due to the much smaller number of parameters and activation dimensions, sDTM is also much faster than DTM. On Active$\leftrightarrow$Logical, DTM takes 11 hours to train for 20k steps, whereas sDTM takes 6 hours. We will update the camera-ready version of our paper to include empirical results showcasing the speed improvements of sDTM vs. DTM.
The primary feedback across reviewers was the need for greater clarity about the relationship between the original DTM and our proposed sDTM including a more thorough comparison of the architectures. Reviewers also asked for clarification with regard to the importance of each experiment in our Results section. In the sections below, we address these two issues and explain the changes we will make to our work for the camera-ready version informed by your feedback.
## sDTM vs DTM
We agree with reviewers that since sDTM is introduced as a more efficient version of DTM, full results comparing these two techniques are essential. It was an oversight not to include more direct comparisons of sDTM and DTM, and to explain why such a comparison is not possible on certain datasets due to the limitations of the original DTM.
In addition to the representational change (Sparse Coordinate Trees Section 3) and architectural change (pooling by attention Section 4.2) to go from DTM to sDTM, we also contributed an orthogonal technique to process sequence inputs and outputs (Section 4.5). We did not adequately isolate these orthogonal contributions by comparing sDTM and DTM on sequence inputs and outputs. As the reviewers pointed out, only Section 5.2 (Active$\leftrightarrow$Logical) contains a comparison between the original DTM and our proposed sDTM. In the camera-ready version, we will also report results for the original DTM on the seq2seq task SCAN (Section 5.5).
We did not test the original DTM in Section 5.3 (FOR2LAM) and 5.4 (GeoQuery) as the original DTM is so inefficient as to be impossible to run on our available hardware; a batch of only a single sample from these datasets causes an out-of-memory error on a 16GB V100 GPU. To put this in perspective, sDTM is able to process batches of 64 samples on the same GPUs, which exemplifies the dramatic efficiency benefits of sDTM. DTM cannot practically be run on FOR2LAM and GeoQuery because of the maximum tree depth of samples in these datasets. As explained on lines 111-115, DTM's memory requirement grows exponentially with the maximum tree depth and quickly runs into performance issues. The maximum tree depths for Active$\leftrightarrow$Logical, SCAN, FOR2LAM, and GeoQuery are 10, 8, 14, and 16, respectively. We will update the camera-ready version of our paper to explain the absence of DTM results on FOR2LAM and GeoQuery, as well as to include a table in the Appendix with tree depth statistics for all the datasets.
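The exponential growth with tree depth can be illustrated with a toy back-of-the-envelope calculation. This sketch assumes a dense encoding that reserves a slot for every possible node position in a full binary tree (the branching factor of 2 is our illustrative assumption, not a figure from the paper):

```python
# Toy model: a dense tree encoding reserves a slot for every possible node
# position, so the slot count grows exponentially with the maximum depth.
def dense_tree_slots(max_depth: int, branching: int = 2) -> int:
    """Number of node slots in a full tree of the given depth (root = depth 0)."""
    return sum(branching ** d for d in range(max_depth + 1))

# Maximum tree depths per dataset as reported in this rebuttal.
for name, depth in [("SCAN", 8), ("Active<->Logical", 10),
                    ("FOR2LAM", 14), ("GeoQuery", 16)]:
    print(f"{name}: depth {depth} -> {dense_tree_slots(depth):,} slots")
```

Under this toy model, going from depth 10 to depth 16 multiplies the slot count by roughly 64, which is consistent with FOR2LAM and GeoQuery exhausting memory where Active$\leftrightarrow$Logical does not.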
## Layout of the experimental results
Reviewers also sought clarity on how each individual dataset and experiment contributes to our overall contributions. To make the relationship between each experiment and our contributions clearer, we will change the subheaders in the Results section of the camera-ready version to better reflect the significance of each experiment, with the associated dataset in parentheses. **Section 5.2: Performance Regression (Active$\leftrightarrow$Logical)** confirms that sDTM does not perform worse than DTM. **Section 5.3: Scalability (FOR2LAM)** investigates a tree2tree transformation task that DTM cannot handle, as explained in the previous section of this rebuttal. **Section 5.4: Seq2Tree (GeoQuery)** introduces the change of processing a sequence as input instead of a tree, and **Section 5.5: Seq2Seq (SCAN)** adds the additional modification of sequence outputs.
## Performance concerns
Multiple reviewers pointed out that sDTM does not achieve state-of-the-art performance across all tasks, with relatively weak performance on GeoQuery (Section 5.4). While we would be excited if sDTM was the best model across every split and every dataset, we want to remind reviewers that the results should be considered as a whole, as exemplified by Figure 1.
It is worth noting that there is substantial room for improvement across every model on GeoQuery. (s)DTM is proposed to complement generic similarity-based generalization (already offered by Transformers) with compositional-structure-based generalization. In tasks lacking sufficient opportunities for compositional generalization, DTM will have limited value to augment generic transformer-style generalization. It appears that GeoQuery is such a task, because the strongly compositional symbolic methods of NQG fail. It is possible that with sufficient data, GeoQuery's latent compositional structure could be identified by NQG and DTM, but the released GeoQuery dataset has only on the order of 500 training examples. Given all methods perform well below ceiling on GeoQuery (including on the IID split), we refrain from drawing substantive conclusions based on minor differences in accuracy on this single task in isolation from the rest of our results. We will update the text to clarify DTM's performance on GeoQuery.
We thank the reviewers again for the time that they dedicated to improving our submission. By responding to their feedback, we feel that our paper is much stronger and more understandable.
Pdf: /pdf/0b648af681cd1ad026cd4853625c79bff7c9ec06.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SimVG: A Simple Framework for Visual Grounding with Decoupled Multi-modal Fusion | Accept (poster) | Summary: This paper presents SimVG, which decouples multimodal understanding from downstream tasks and uses a pretrained model to perform feature extraction and multi-modal fusion. A dynamic weight-balance distillation (DWBD) module is proposed to enhance the token branch's ability. A text-guided query generation module is developed to integrate text information into queries. Performance results validate the effectiveness of the proposed method.
Strengths: 1. Directly using a multi-modality encoder for multi-modal feature extraction and fusion avoids the need for redesigning a multi-modal fusion module. This simplifies the model structure.
2. The proposed DWBD module improves the model's inference efficiency through distillation.
3. Experiments on different benchmarks show better performance than previous SOTA methods.
Weaknesses: There is no significant weakness in this paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Will the multi-modality encoder be trained or are the parameters frozen?
2. Visualization of the feature map could be provided.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No significant limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** Will the multi-modality encoder be trained or are the parameters frozen?
**A1:** The multi-modality encoder (MME) weights are trainable throughout the training process, but the learning rate of the MME is set to 0.1 times that of the other parameters.
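The learning-rate split described above can be sketched as framework-agnostic optimizer parameter groups; the base learning rate and parameter names below are placeholders for illustration, not values from SimVG's actual code:

```python
# Sketch of the described setup: the multi-modality encoder (MME) trains at
# 0.1x the base learning rate, while all other parameters use the base rate.
# Names and the base_lr value are hypothetical placeholders.
base_lr = 1e-4
mme_params = ["mme.layer1.weight", "mme.layer2.weight"]      # placeholder names
other_params = ["head.query_proj.weight", "head.mlp.weight"]  # placeholder names

param_groups = [
    {"params": mme_params, "lr": base_lr * 0.1},  # MME: reduced rate
    {"params": other_params, "lr": base_lr},      # everything else: base rate
]
print([group["lr"] for group in param_groups])
```

Most deep learning frameworks accept such a list of per-group settings directly when constructing an optimizer.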
**Q2:** Visualization of the feature map could be provided.
**A2:** Thank you for your suggestion. In **Figure 14 of the rebuttal PDF**, we add feature map visualizations. This includes GradCAM-based feature heatmap visualization in the MME and attention map visualization in the Decoder.
---
Rebuttal Comment 1.1:
Title: Official comment by Reviewer MFGp
Comment: I thank the authors for their efforts and detailed rebuttal. I have read through all the other reviewers' comments and the authors' responses. I have decided to raise my score.
Strengths: 1. The DWBD method enhances the performance of the lightweight branch by balancing the learning process, thereby improving the overall efficiency and accuracy of the model.
2. SimVG achieves competitive results across multiple visual grounding datasets, demonstrating the robustness and effectiveness of the approach.
Weaknesses: 1. The technical contribution of the proposed method appears insufficient. The approach primarily builds upon BEiT-3 by adding an object token and a fast MLP head with a distillation loss, which may seem more like an application of BEiT-3 rather than a novel contribution.
2. Although the paper aims to simplify the structure and improve reasoning speed, the overall architecture and the introduction of multiple new components (e.g., DWBD, TQG) add complexity during training. The process involves two-stage pretraining and fine-tuning steps, which can be cumbersome and resource-intensive.
3. Could you clarify whether the model was pretrained from scratch or if existing BEiT-3 weights were used?
4. I recommend thorough proofreading to enhance clarity and correctness, improving readability and quality.
Technical Quality: 2
Clarity: 2
Questions for Authors: See weakness.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** The technical contribution of the proposed method appears insufficient. The approach primarily builds upon BEiT-3 by adding an object token and a fast MLP head with a distillation loss, which may seem more like an application of BEiT-3 rather than a novel contribution.
**A1:** Please refer to **A1 in Response to Reviewer TBfU** for comparisons with BEiT-3.
**Further Explanation:**
As shown in **Figure 13(b) of the rebuttal pdf**, previous methods (VGTR, TransVG, MDETR, SeqTR) utilize Visual/Text Encoders pretrained on their respective modalities or use alignment-based pretraining models like CLIP (e.g., Dynamic MDETR). However, these methods do not integrate multimodal fusion during the pretraining process. Later methods embed text encoding into visual encoding (e.g., QRNet, VG-LAW). However, they still rely on fitting multimodal fusion representations on small-scale downstream data. These methods can be represented in **Figure 13(b)(1) of the rebuttal pdf**.
Our approach diverges from these methods by leveraging upstream fusion multimodal models like ViLT and BEiT-3. We move the multimodal fusion representation to the pretraining phase using a large-scale dataset of image-text pairs. Our architecture can be represented by **Figure 13(b)(3)**. One of the key innovations of this paper is the exploration of the importance of transferring multimodal fusion representation from downstream to upstream. **Figure 13(a)** illustrates that our method exhibits superior understanding of multimodal content, including complex details such as relative positional relationships, physical materials, and colors.
We hope this addresses the concerns and highlights the unique contributions and benefits of our approach. Thank you for your valuable feedback.
**Q2:** Although the paper aims to simplify the structure and improve reasoning speed, the overall architecture and the introduction of multiple new components (e.g., DWBD, TQG) add complexity during training. The process involves two-stage pretraining and fine-tuning steps, which can be cumbersome and resource-intensive.
**A2:** Thank you for raising this point.
**About Training Resource Consumption:**
As shown in **Table 9 of the rebuttal pdf**, the parameter count of the SimVG head (which contains DWBD and TQG) is significantly lower compared to RefTR, SeqTR, and MDETR. Additionally, **Table 8 of the rebuttal pdf** demonstrates that the number of training epochs and the total training time of SimVG are notably lower than those of other methods. With a single RTX 4090, training SimVG on the RefCOCO+ dataset takes less than 6 hours.
**About Distillation Complexity:**
Table 6 in the original paper presents two distillation modes. The **one-stage** mode involves synchronous learning and distillation, where the teacher model is trained and knowledge is distilled to the student model in a single training session. **This mode does not require additional pre-training and does not incur extra overhead**. The **two-stage** mode aims to further enhance distillation performance by first training the teacher model and then synchronously training the student branch. While the **two-stage mode does increase training complexity**, it only requires less than 10 hours on a single RTX 4090 to complete one two-stage training session on RefCOCO, which is still significantly more resource-efficient than most of the existing methods. Additionally, we will release our source code that supports both one-stage and two-stage distillation via GitHub.
Method | val | testA | testB | Training Time
---------|--------|-------|-------| --
One-stage| 86.57 | 87.80 | 82.71 | ~5.5h
Two-stage| 86.96 | 88.22 | 83.16 | ~9h
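As a rough illustration of the one-stage (synchronous) idea, the teacher and student branches can share a single training session, with the student's supervision gradually shifting from ground truth toward imitating the increasingly reliable teacher. This is a hypothetical sketch only; the linear schedule and all names below are our illustrative choices, not SimVG's actual DWBD formulation:

```python
# Hypothetical sketch of one-stage synchronous distillation: the teacher
# branch learns from ground truth while the student branch mixes ground-truth
# supervision with a distillation term, weighted by a dynamic schedule.
def dynamic_weight(step: int, total_steps: int) -> float:
    """Linearly shift the student's loss from ground truth to distillation."""
    return min(step / total_steps, 1.0)

def one_stage_loss(teacher_loss: float, student_gt_loss: float,
                   student_distill_loss: float, step: int, total_steps: int) -> float:
    w = dynamic_weight(step, total_steps)
    # Early in training w ~ 0 (trust ground truth); later w ~ 1 (trust teacher).
    return teacher_loss + (1 - w) * student_gt_loss + w * student_distill_loss
```

Because both branches are optimized in the same session, no separate teacher pre-training pass is needed, matching the "no extra overhead" property of the one-stage mode described above.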
**Q3:** Could you clarify whether the model was pretrained from scratch or if existing BEiT-3 weights were used?
**A3:** The existing pre-trained weights of BEiT-3 are used, and all weights are trainable (no frozen) during the SimVG training process.
**Q4:** I recommend thorough proofreading to enhance clarity and correctness, improving readability and quality.
**A4:** We thank the reviewer for this valuable suggestion. We will do our best to improve the quality of the manuscript. We will also ask a native English speaker to proofread and polish the manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. After reviewing other reviews and responses, I have decided to increase my rating to 5.
I hope you can carefully integrate the rebuttals into the revised version and thoroughly proofread the entire manuscript.
While the technical aspects are borderline acceptable, my main concern remains the writing quality of this manuscript. Thank you. | Summary: Visual grounding is a typical task in the vision-and-language domain. Existing methods only use limited downstream data to fit multimodal feature fusion, leading to significant performance degradation on complex texts. Therefore, it is necessary to decouple visual-language feature fusion from downstream tasks to promote deep integration between downstream tasks and pre-training tasks. In this paper, the authors propose the SimVG model framework. They introduce a dynamic distillation method and a query generation module. Experimental results on several datasets demonstrate the effectiveness of the model.
Strengths: - The design of the distillation method is innovative, and TQG enables the model to be extended to GREC, broadening its application scope.
- The model performs well in experiments on several datasets, achieving state-of-the-art levels on multiple benchmarks with relatively few parameters.
Weaknesses: - The writing needs improvement; for example, the motivation is not clearly and concisely described.
- The inference process of the model should be explained in the main text.
- In Table 3, why is there no comparison with PolyFormer-L, OFA-L, LGR-NET, and m-PLUG? For a fair comparison, at least these should be listed.
- There are some typos, such as:
1. Line 156, "an caption" should be "a caption".
2. Line 165, "R^{H/32 * W/32 * C}".
Technical Quality: 3
Clarity: 2
Questions for Authors: In Table 3, why is there no comparison with PolyFormer-L, OFA-L, and m-PLUG? For a fair comparison, at least these should be listed.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The limitations section is provided in Appendix D4.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** The writing needs improvement; for example, the motivation is not clearly and concisely described.
**A1:** We thank the reviewer for pinpointing this issue. We will try our best to improve the writing of the manuscript in the final version. We also outline the main **motivations**, **innovations**, and **advantages** of SimVG in the **Common concerns**. Last, **Figure 13 in the rebuttal pdf** further illustrates our motivation:
1. **Insufficient multimodal understanding:** Current approaches that use a small amount of downstream data to fit multimodal representations are insufficient. They perform poorly in scenarios involving complex relative positional relationships, physical characteristics, and detailed color descriptions, as shown in **Figure 13(a) of the rebuttal pdf**.
2. **Simple inference design principle:** Our model design is centered around making inference simpler, including the use of MME (eliminating the text encoder like BERT) and DWBD distillation (requiring only a simple MLP in the head during inference).
Due to the page limitation, we have placed the more critical model structure in Figure 1 of the original paper. However, our motivation is expressed in the second and third sentences of the abstract, as well as in Figure 2 and its related discussion of the original paper. We will incorporate the motivation introduction from the rebuttal PDF into the final version of the manuscript.
**Q2:** The inference process of the model should be explained in the main text.
**A2:** The inference process is quite similar to the training process. We will include a more detailed description of the inference process in the revised version. The last sentence of the caption of Figure 2 briefly summarizes that the reasoning stage can be accelerated by using only the Token Branch. For the phrase grounding and REC tasks, since there is only one target, the number of queries is set to 1, and no additional post-processing is required. For the GREC task, there are cases where there is no target or multiple targets. We set the number of queries to 10, threshold to 0.7, and the post-processing is consistent with the original GREC method.
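The GREC inference settings described above (10 queries, confidence threshold 0.7) amount to simple per-query confidence filtering. A minimal hypothetical sketch (function and variable names are ours, not SimVG's actual post-processing code):

```python
# Illustrative GREC-style post-processing: with multiple queries, keep every
# predicted box whose confidence exceeds the threshold.  An empty result
# corresponds to the "no target" case for the referring expression.
def filter_grec_predictions(boxes, scores, threshold: float = 0.7):
    return [box for box, score in zip(boxes, scores) if score > threshold]

boxes = [(10, 10, 50, 50), (20, 30, 80, 90), (0, 0, 5, 5)]   # (x1, y1, x2, y2)
scores = [0.95, 0.72, 0.10]
print(filter_grec_predictions(boxes, scores))  # keeps the two confident boxes
```

For phrase grounding and REC, a single query with no thresholding suffices, since exactly one target is assumed.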
**Q3:** In Table 3, why is there no comparison with PolyFormer-L, OFA-L, LGR-NET, and m-PLUG? For a fair comparison, at least these should be listed.
**A3:** Comparisons are shown in **Table 10 of the rebuttal PDF**. We will add references and comparisons to these works in the revised version. The original decision not to compare with the Swin-Large model was made to ensure a fair comparison based on FLOPs. According to the data from the official PyTorch website, under the same 224x224 input conditions, Swin-B requires 15.42 GFLOPs, while ViT-Large/32 requires 15.38 GFLOPs.
**Q4:** There are some typos, such as: Line 156, "an caption" should be "a caption". Line 165, "R^{H/32 * W/32 * C}".
**A4:** We apologize for the typos. We will do our best to fix them in the final version. We will also ask a native English speaker to proofread and polish the manuscript.
Experiments on six widely used visual grounding datasets show that the proposed SimVG framework not only achieves state-of-the-art performance but also brings considerable improvements in efficiency and convergence speed.
Strengths: * This paper successfully adapts a recent multimodal pretraining framework (BEiT-3) to visual grounding, and proposed a few model architecture improvements, making the end-to-end framework more efficient (1x faster) and more accurate (2~3% improvements in prediction accuracy).
* The basic idea of the paper is presented clearly. The experiments performed in this paper is convincing to show the effectiveness of each of the proposed modules.
Weaknesses: * The novelty of the paper is a bit limited to me since it basically borrows and applies the unified multimodal pretraining framework introduced in BEiT-3 to the visual grounding task. A clearer comparison between the proposed method and the BEiT-3 model is desired.
* The title and module names in Fig 3 are a bit confusing to me (at first sight w/o reading the whole paper) since many abbreviations are used. It would be helpful to make them clearer to the readers.
* I noticed some grammar mistakes, e.g. in L167-168 "to interact with A with B".
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the points I mentioned in the weakness section. Besides, I am wondering why the proposed synchronous distillation process is needed and its advantages over traditional model distillation process. Does traditional model distillation process work poorly?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No obvious limitations noticed by me.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** The novelty of the paper is a bit limited to me since it basically borrows and applies the unified multimodal pretraining framework introduced in BEiT-3 to the visual grounding task. A clearer comparison between the proposed method and the BEiT-3 model is desired.
**A1:** **Firstly**, BEiT-3 is a pre-training architecture designed for global multimodal representation. BEiT-3 itself does not directly have the capability for visual grounding (See comparisons in **Figure 13 (b) of the rebuttal PDF**).
**Most importantly**, **equipping BEiT-3 with existing structures**, such as the standard head in SeqTR, yields **even worse** results in downstream performance compared to SeqTR (see Table below).
**Notably**, with our proposed method, we extend BEiT-3, giving it the capability for downstream detection, and achieve an overall improvement over existing SOTA. Experiments are conducted with ViT-B/32 on the RefCOCO dataset.
**Therefore**, our contribution involves delving into the potential knowledge embedded in BEiT-3 and developing architectures to utilize this knowledge efficiently through the additional object tokens, some adaptive designs and token distillation.
Method | val | testA | testB
--------------------|------ |------ |------
SeqTR | 83.72 | 86.51 | 81.24
BEiT-3 + SeqTR-head | 80.92 | 83.63 | 74.75
SimVG (w BEiT-3) | 87.07 | 89.04 | 83.57
This paper leverages BEiT-3 while highlighting the importance of decoupling multimodal representation from downstream tasks to upstream pre-training, particularly for understanding complex text. To the best of our knowledge, this is the first exploration and experimental validation of this issue. The discussion in **Table 4 and Figure 2 of the original paper** provides the rationale behind our adoption of BEiT-3, as well as the core insights that this paper aims to convey. More details about our motivation, contributions, and advantages can be found in the **Common concerns** section.
**Q2:** The title and module names in Fig 3 are a bit confusing to me (at first sight w/o reading the whole paper) since many abbreviations are used.
**A2:** We thank the reviewer for pinpointing this issue; we will make them clearer by adding complete module names and brief descriptions in the revised version.
**Q3:** I noticed some grammar mistakes, e.g. in L167-168 "to interact with A with B".
**A3:** We apologize for the typos. We will do our best to fix them in the final version. We will also ask a native English speaker to proofread and polish the manuscript.
**Q4:** I am wondering why the proposed synchronous distillation process is needed and its advantages over traditional model distillation process. Does traditional model distillation process work poorly?
**A4:** We thank the reviewer for this insightful question. There are several reasons for adopting the synchronous distillation method:
1. **Alignment with the "Simple" design principle of this paper:** Synchronous distillation eliminates the need for a two-stage process as it does not require pre-preparation of a teacher model. Instead, both the teacher and student models are trained simultaneously in a single training run.
2. **Inheriting the strong representation of the teacher model:** Traditional distillation methods necessitate two independent models. In contrast, the synchronous distillation method shares the feature extraction components between the teacher and student models, differentiating only at the head. This approach allows the student model to inherit the superior representational capacity of the teacher model. The downside is that it can only reduce the model size of the head, so it cannot distill a smaller overall model.
3. **Experimental validation:** We present a set of experimental data with variables including one-stage vs. two-stage training, whether the teacher model is frozen in the two-stage process, and traditional distillation with two independent models. The comparison between DWBD (w/o synchronous) and DWBD (two-stage, DB frozen) demonstrates that the synchronous distillation mode, where the teacher and student models share the MME component, provides performance improvements. Further refining the decoder branch parameters during the two-stage process can enhance the student model's performance even more.
Method | val | testA | testB | Training Time
---|---|---|---|---
baseline | 85.47 | 86.75 | 81.66 | ~5h
DWBD (w/o synchronous) | 85.76 | 87.01 | 81.97 | ~11h
DWBD (one-stage) | 86.57 | 87.80 | 82.71 | ~5.5h
DWBD (two-stage, DB frozen) | 86.72 | 87.99 | 82.85 | ~8.5h
DWBD (two-stage) | 86.96 | 88.22 | 83.16 | ~9h | Rebuttal 1:
Rebuttal: First of all, we would like to thank all the reviewers for your positive comments and valuable suggestions!
This rebuttal has two parts. First, please find our responses to some common concerns below. Then, we provide the response to each reviewer.
# Common concerns
## 1. Motivation
### 1.1. Insufficient multimodal understanding
**Figure 13(a) of rebuttal pdf** shows that existing methods fail to adequately comprehend complex relative spatial relationships, physical materials, and detailed color descriptions. Due to the rich semantics and diversity of text, fitting multimodal fusion representations based on a small amount of downstream data is insufficient. Figure 13(b) highlights the differences between previous methods and our approach, which improves the issue of insufficient multimodal understanding by decoupling multimodal representation to upstream pre-training.
### 1.2. "_Simple_" architecture
We adopt the multi-modality encoder (MME) structure, which eliminates the need for an additional text encoder like BERT. By using the dynamic weight-balance distillation (DWBD) method, we enable synchronous learning of the teacher and student models with a single training run. Consequently, the decoder only requires a single MLP to accomplish phrase grounding, REC, and GREC tasks.
## 2. Novelty
### 2.1. Decoupling multimodal fusion to upstream pre-training
- To the best of our knowledge, this paper is the first that emphasizes and explores the importance of decoupling multimodal fusion representation from downstream to the upstream pre-training.
- **Figure 13 (b) in rebuttal PDF** specifically describes the differences between the previous framework and ours. Given the rich semantics and diversity of text, using a small amount of downstream data to fit fusion representations is certainly insufficient. Our method is the first study that explores the importance of decoupling multimodal fusion to upstream pre-training and validates this through experiments (refer to **Table 4 in Section 4.4.1** and **Figure 2 of the original paper**).
### 2.2. Focusing on "_Simple_"
- The MME architecture eliminates the need for additional text encoding, such as BERT.
- The synchronous distillation approach allows for training the teacher and student models in a single training run, unlike traditional distillation methods that require pre-training the teacher model before distilling the student model.
- In the inference phase, a simple MLP suffices to perform the phrase grounding, REC, and GREC tasks effectively, with performance close to or exceeding that of more complex decoder branches.
## 3. Main advantages of SimVG
- **Simplified inference structure:** The MME component eliminates the overhead of a text encoder.
The decoder component only requires a simple MLP to accomplish the phrase grounding, REC, and GREC tasks.
- **Faster convergence:** Noticeable acceleration in convergence speed (30 epochs vs. 60+ epochs).
- **High performance:** Maintains high inference speed and accuracy (compared to GroundingDINO: latency: **101ms** vs. 120ms / accuracy: **89.55** vs. 84.92).
- **Reduced training Data:** Requires significantly less training data (28K vs. 174K+).
- **Lower resource consumption:** Training can be completed within 12 hours using a single NVIDIA 3090 GPU.
Pdf: /pdf/8861f74840fd972ef17f195299c3a9bd3009b749.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Unmet Promise of Synthetic Training Images: Using Retrieved Real Images Performs Better | Accept (poster) | Summary: This paper studies whether training on synthetic images from generative model can **truly** surpass the baseline of training on the retrieved real images that are used to train the generative model. It provides several key insights: 1) retrieved real images are significantly superior to synthetic images across a wide range of recognition tasks, 2) both data sources (retrieved and synthetic) are beneficial to original training images, and 3) adding synthetic images to retrieved images will ruin the gain achieved by the latter. It also analyzes two factors that may cause the inferiority of synthetic images.
Strengths: 1. The paper is very clearly written.
2. The key insight (synthetic images are indeed inferior to the naive baseline of retrieved real images) is very interesting and timely. I believe it was previously totally overlooked by our community. There have been many works trying to utilize synthetic images for representation learning recently, but they mostly utilize synthetic images *blindly* and fail to dig into the critical question asked by this paper.
3. The analysis for the poorer performance of synthetic images is convincing, especially the ablation study on the synthetically perturbed real images, which is quite inspiring.
4. The scope and position of this paper are properly defined. It does not aim to uncover the uselessness of synthetic images, but to present a necessary baseline for future works to compare with. I believe this simple yet strong baseline will motivate future works to construct and leverage synthetic images more effectively. Besides, the authors also consider and discuss the scenarios where synthetic images are indispensable, e.g., scenarios with privacy concerns.
Weaknesses: I do not think there are critical weaknesses in this paper. I only have one minor concern. See below.
Technical Quality: 4
Clarity: 4
Questions for Authors: I love this paper, only with one minor question or suggestion. Indeed, it is somewhat expected that synthetic images may not be as good as retrieved real images in standard benchmarks with widely existing class concepts. Could the authors provide more insights about the potential advantages of synthetic images in scenarios with rare or very complex concepts?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have clearly discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback! We are honored to hear that you enjoyed our paper, and are grateful you found our research question “timely” and “totally overlooked” by our community.
Your question—whether synthetic images can offer unique advantages for tasks with rare or complex concepts—is very exciting. Such concepts are precisely the concepts for which collecting labeled real data is difficult, and thus exactly where we have the greatest need for synthetic data. These directions are exciting topics for future work that we will discuss further in our final paper. For example, synthesizing very complex concepts might be made feasible by leveraging the compositional generalization abilities of generative models, which recent research promisingly suggests may be a unique boon of generative models compared to other models trained on the upstream data [1, 2] (references below).
Just for curiosity’s sake, we also performed a preliminary study to try and relate the concept-wise performance of models trained on synthetic or retrieved data with the rarity of each concept. Specifically, given a concept denoted via a text string $c$ (e.g., “Airbus A320”), we approximate that concept’s frequency in LAION-2B by counting the number of image-text pairs whose text contains $c$ as a (case-insensitive) substring. We plotted a scatter of the concepts with their frequency on the $x$-axis and model accuracy on the $y$-axis. Unfortunately, we were not able to find any clean trends — with existing methods, the performance gap between synthetic and retrieved data does not appear to systematically decrease on rarer concepts. To speculate, this may be because generative models are trained on the same data pool that we retrieve from, and thus may also have difficulty learning rare concepts during general pretraining as they are seen less frequently. Recent work [3] corroborates this conjecture; however, we note there are caveats to our preliminary study, as we did not extensively validate the robustness of our concept frequency metric. Nonetheless, if off-the-shelf pretrained generators are indeed less effective at generating rare concepts, a potential future direction would be to resolve this limitation by taking advantage of a small set of examples to adapt the generative model in a sample efficient manner (e.g., perhaps through textual inversion). Afterwards, we may be able to compose the concept with the generator’s existing knowledge to sample new data variations.
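For illustration, the substring-based frequency metric described above could be sketched as follows (the function name and sample captions are hypothetical, not our actual LAION-2B pipeline):

```python
def concept_frequency(concept: str, captions: list[str]) -> int:
    """Count image-text pairs whose caption contains the concept
    as a case-insensitive substring."""
    needle = concept.lower()
    return sum(needle in caption.lower() for caption in captions)

# Toy example with made-up captions
captions = [
    "An Airbus A320 taking off at dawn",
    "airbus a320 cabin interior",
    "A Boeing 747 on the runway",
]
print(concept_frequency("Airbus A320", captions))  # -> 2
```

In practice this count would run over the ~2B LAION alt-text strings, which is why we caution that the metric's robustness was not extensively validated.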
Thank you again for your time, and thank you for drawing our attention to these exciting future directions that may lead to gains beyond our retrieval baseline. We will discuss them thoroughly in the next version of the paper!
* [1] Your diffusion model is secretly a zero-shot classifier. Alexander C Li, Mihir Prabhudesai, Shivam Duggal, Ellis Brown, and Deepak Pathak.
* [2] Text-to-image diffusion models are zero shot classifiers. Kevin Clark and Priyank Jaini.
* [3] No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance. Vishaal Udandarao, Ameya Prabhu, Adhiraj Ghosh, Yash Sharma, Philip H.S. Torr, Adel Bibi, Samuel Albanie, Matthias Bethge.
---
Rebuttal Comment 1.1:
Title: Thank you for response
Comment: I go through other reviews and the authors' responses. I will keep my original score of 8. I believe this paper is worth presenting to our community since it highlights a competitive baseline in learning from synthetic images. This baseline will motivate subsequent works to more effectively explore the role of synthetic images in this new era. I think this big advantage has outweighed other minor disadvantages. The authors are also recommended to follow other reviews' advice to further polish the details of this paper.
---
Rebuttal 2:
Title: Thank you for your feedback!
Comment: Thank you very much for your time and effort in this process! We appreciate your insightful comments, and will be sure to incorporate all other reviewer's valuable feedback in the next version of our paper. Thank you for helping make our work stronger! | Summary: There is a growing interest in training vision models using synthetic data. This paper explores the effectiveness of synthetic data compared to real images directly retrieved from image generator's training sets like LAION-2B. The experimental results indicate that, while synthetic data can be beneficial for some tasks, real data often matches or surpasses its performance. The paper suggests that these results are due to high-frequency generator artifacts and inaccuracies in task-relevant visual details present in synthetic data.
Strengths: * This paper proposes a new baseline for training with synthetic data, which is novel and interesting.
* This paper presents extensive experiments, such as different datasets and data scales.
* The experiment of synthetically perturbed images is novel.
Weaknesses: The proposed baseline using retrieved images is novel and inspiring. However, some experimental settings may limit the results of training on synthetic images, leading to unfair comparisons and potentially misleading conclusions.
- Settings regarding image quality:
- Data filtering: The paper keeps images with top CLIP similarity as the training set (Section 3.3). Are synthetic and retrieved training data ranked and selected separately? What is the distribution of CLIP similarity in the final synthetic and retrieved training datasets?
- Synthetic images with generation artifacts: As shown in Figure 3, synthetic 'flute' images have obvious generation artifacts. These images should be filtered for a fair comparison. What is the CLIP similarity of these images? What are the model results when training on a 'clean' dataset without these images?
- Settings regarding data distribution and variation:
- Prompts: The current prompts for generating images may provide less information and variation than retrieved real data, influencing accuracy. How about prompting the generation model with captions from retrieved data or other real images, as suggested in [1]?
- Data scaling: Although synthetic images can be scaled efficiently, they tend to be similar when prompts are fixed. Do retrieved images have more variation with larger data scales than synthetic images? How do the results change if synthetic images in the scaling experiment are sourced using captions from retrieved images?
[1] Image Captions are Natural Prompts for Text-to-Image Models
Technical Quality: 3
Clarity: 3
Questions for Authors: Please address the weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations have been discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback! We are excited that you found our proposed baseline for measuring the utility of synthetic data to be “novel and inspiring.”
Your feedback prompted us to perform additional experiments to further validate our paper’s findings, and we believe the new results corroborate our paper’s main finding that state-of-the-art synthetic data training methods lag our proposed retrieval baseline. Thank you for helping us strengthen our paper! We will include all new results in the next version of our paper, and describe the details below to address each of your points.
**Q1 (CLIP score distributions post-filtering):** Do the distributions of CLIP similarity scores in the final synthetic and retrieved training datasets significantly differ? Are retrieved and synthetic images ranked and filtered separately, or together?
**A1:** Great question! Significant differences in CLIP similarity could potentially explain gaps in downstream training performance and is worth studying. In our paper, **we ranked and filtered synthetic and retrieved training data separately** (i.e., to keep the experimental design as clean as possible, we do not use any information from the retrieved images to inform the selection of synthetic data, and vice versa). We use per-class score thresholds for filtering, which we found empirically beneficial for both synthetic and retrieved data. We histogram the CLIP similarity score distributions of the resulting filtered training data in Figure R2 of the rebuttal PDF. Overall, despite setting the filter score threshold for synthetic and retrieved data independently, we find that the distribution of post-filtering synthetic image CLIP scores is right-shifted compared to the distribution of post-filtering retrieved image CLIP scores. In other words, **synthetic images have comparable or even higher CLIP scores than retrieved images on average**; CLIP judges synthetic data to be higher quality on average. Thus, CLIP score differences alone do not explain the lagging training performance of synthetic data.
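As a rough sketch, the per-class score-threshold filtering we describe could be expressed as follows (the function name and toy scores are illustrative; the real pipeline operates on CLIP image-text similarity scores):

```python
import numpy as np

def filter_top_fraction(scores, labels, keep_frac=0.30):
    """Keep, within each class, the examples whose similarity score
    falls in roughly the top `keep_frac` fraction for that class."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    keep = np.zeros(len(scores), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        # per-class threshold: keep_frac=0.30 keeps roughly the top 30%
        thresh = np.quantile(scores[idx], 1.0 - keep_frac)
        keep[idx] = scores[idx] >= thresh
    return keep

scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
labels = ["flute"] * 10
print(int(filter_top_fraction(scores, labels).sum()))  # -> 3
```

Because the threshold is computed per class, common and rare classes are filtered at the same rate rather than rare classes being filtered out entirely by a single global threshold.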
**Q2 (Filtering out images with artifacts):** As shown in Figure 3, synthetic ‘flute’ images have obvious generation artifacts. These images should be filtered for fair comparison.
**A2:** To clarify, the artifact-afflicted images shown in Figure 3 are *post-filtering*. Even though the ‘flute’ images of Figure 3 are obviously wrong to humans, CLIP assigns them relatively high scores of $0.285, 0.265$, and $0.263$. For reference, a CLIP score of $0.249$ reflects the top $30$% of all retrieved ‘flute’ images. Furthermore, we found obviously wrong and artifact-ridden synthetic ‘flute’ images that have even higher CLIP scores $> 0.3$, placing them in the top $5$% of all retrieved ‘flute’ data.
Our paper adopts CLIP filtering since it is the current best synthetic image filter [22] we are aware of; in fact, many recent works forgo data filtering altogether [3,21,51,57,58] despite its positive impact on training performance [22] (our study corroborates this gain; CLIP filtering synthetic data improved ImageNet zero-shot accuracy by 1.03% and LP by 0.91% over no filtering at 4M scale). We aim to ground our argument in the current state of synthetic training data research, so we do not explicitly innovate new data filtering methods beyond CLIP filtering. Nonetheless, these findings spurred by your feedback makes it apparent that CLIP filtering is limited. We will include a more detailed discussion of this point in our final paper to motivate future filtering methods.
**Q3 (Alternative prompts for synthetic data generation):** Synthetic images may suffer in diversity when the generation prompts are fixed. Other prompting strategies should be considered, such as using LAION alt-text or outputs from a captioning model.
**A3:** Thank you for the point! First, to clarify a potential miscommunication, the prompts used to generate synthetic images in our work are *not fixed* – rather, each image prompt is sampled from a probabilistic large language model (LLM) that is tasked to include additional relevant knowledge about the desired category (e.g., for the “dog” category, a sampled generation prompt might be “a photo of a dog playing with a ball in the park”). We will make this more clear in our paper.
We adopt this LLM-guided prompting strategy from SynCLR [57], which shows that it outperforms many alternative prompting strategies at the hundred-million scale, including prompting with LAION alt-text. We did not find any existing comparison between LLM-generated prompts and prompts from an image captioning model as proposed by the work you referenced [1], so we conducted this study ourselves.
Starting from the unfiltered retrieved training set, we used BLIP-2 to caption each image and construct prompts of the form “a photo of {classname}, {BLIP-2 image caption}” following [1]. We then performed top-30% CLIP filtering on the resulting synthetic images. We compare the performance of training with filtered synthetic images generated with three distinct prompting strategies to our filtered retrieved data in Figure R3 of the rebuttal PDF. Specifically, we compare training with filtered synthetic images generated from (1) our original LLM prompts (orange lines), (2) BLIP-2 captions (red lines), and (3) LAION alt-text from retrieved data (green lines). Overall, among the three generation strategies, our original LLM prompts perform best on ImageNet, and perform comparably to BLIP-2 captions and LAION alt-text on Aircraft. All three synthetic strategies lag the performance of retrieved data. Thus, our conclusion—that existing synthetic image training methods do not surpass the retrieval baseline—holds under these variations of image generation strategy.
We were not aware of [1] at the time of submission — thank you for pointing it out to us! We will include a reference to that work and the above experiments motivated by [1]’s captioning method in our final paper.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the rebuttal. Did I miss the response to my last question, quoted below?
> Data scaling: Although synthetic images can be scaled efficiently, they tend to be similar when prompts are fixed. Do retrieved images have more variation with larger data scales than synthetic images? How do the results change if synthetic images in the scaling experiment are sourced using captions from retrieved images?
---
Rebuttal 2:
Title: Thank you for your reply! We apologize for the confusion and are happy to discuss further.
Comment: Thank you for your time and continued interest in our work! We apologize for our confusing formatting — our initial response to your last question is folded into **Q3/A3** of the rebuttal text above. For your convenience, we summarize the relevant parts here:
* First, to clarify a potential miscommunication, our generation prompts are not fixed, but rather sampled from a probabilistic LLM that is tasked to generate image captions for each desired visual concept. We use these LLM captions as text-to-image prompts.
* We adopt the LLM-guided prompt method from SynCLR [57], a current SOTA work which shows that synthesizing images with LLM prompts can outperform alternative generation strategies (including synthesizing images from LAION alt-text) at hundred-million scale.
* We experimentally compared the training performance of filtered synthetic images generated using either (a) our original LLM prompts or (b) the LAION alt-text of our retrieved images. We report results of this new scaling experiment in Figure R3 of the rebuttal PDF. Synthetic images generated via LLM prompts outperform images generated with LAION alt-text prompts on ImageNet, and perform comparably on FGVC-Aircraft. Regardless of generation strategy, synthetic images lag the retrieval baseline.
We missed the part of your question about the diversity of retrieved versus synthetic images in our initial response. We sincerely apologize for the confusion! To clarify your question, we further compared the image variation in our final filtered Aircraft and ImageNet adaptation datasets, which consist of either (a) retrieved real images, (b) synthetic images generated with our original LLM prompts, or (c) synthetic images generated from the LAION alt-text of the retrieved images. We quantify image variation via the average pairwise cosine similarity of the CLIP image features for each dataset (i.e., lower average similarity → higher variation). To understand variation at large scale, we perform this analysis on the largest-sized version of each dataset. Results are as follows, with the specific scale size in parentheses:
| | ImageNet-1K | FGVC-Aircraft |
|---|----------|----------|
| **Retrieved Real Data** | 0.323 (2.5M images) | 0.506 (139K images) |
| **Synthetic Data (Original LLM Prompts)** | 0.369 (4M images) | 0.606 (500K images) |
| **Synthetic Data (LAION alt-text)** | 0.341 (2.5M images) | 0.527 (139K images) |
Overall, your intuition is correct: synthetic images generated with LLM prompts indeed exhibit higher average pairwise cosine similarity compared to retrieved images, suggesting decreased variation. This gap persists even though the scales of the LLM prompt synthetic datasets are significantly larger than the scales of the retrieved datasets (e.g., 4M vs. 2.5M for ImageNet, 500K vs. 139K for Aircraft). Moreover, generating synthetic images based on the LAION captions of the retrieved images does improve the measured variation of the resulting synthetic images.
However, interestingly, improvements in synthetic image diversity alone do not directly translate into significant improvements in downstream model performance. As shown in Figure R1, models trained on the less-diverse LLM prompt generated images we originally considered perform better or comparably to models trained on the more-diverse LAION alt-text generated images. Nonetheless, we are excited that the experiments you suggested have uncovered another axis along which synthetic and retrieved images differ. We will include these experiments and detailed discussion in the updated version of our paper to help motivate future synthetic data work.
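For reference, the variation metric reported in the table above (average pairwise cosine similarity) could be sketched generically as follows (the function name is illustrative; in our study the rows would be CLIP image embeddings):

```python
import numpy as np

def avg_pairwise_cosine(features: np.ndarray) -> float:
    """Average cosine similarity over all distinct pairs of rows;
    lower values indicate more variation in the image set."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T  # full pairwise cosine similarity matrix
    n = len(f)
    # sum all entries, subtract the diagonal of ones, average over n*(n-1) pairs
    return float((sim.sum() - n) / (n * (n - 1)))

# Mutually orthogonal features share no direction -> similarity 0.0
print(avg_pairwise_cosine(np.eye(3)))  # -> 0.0
```

At the dataset scales in the table, the full similarity matrix would be computed blockwise or estimated over sampled pairs rather than materialized at once.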
---
Rebuttal Comment 2.1:
Comment: Thanks for the further discussions. I will maintain my score. | Summary: The paper tries to answer the question of whether the progress of pretraining classification backbones with images obtained from generative models is due to the advances in generative image modeling or from the fact that these are implicitly sampled from huge image collections. To answer this question, the paper proposes a simple baseline consisting of querying the original databases on which the generative models are trained, finding nearest neighbors for the task at hand, and training on these neighbors instead of the generative samples. The paper shows that this simple baseline outperforms naively sampling from the image models.
Furthermore, the paper analyzes why the generated data underperforms real data, concluding that the degraded performance is due (at least in part) to a lack of fine-grained detail (e.g., in the case of the FGVC dataset) as well as artifacts in the generated images that introduce a domain gap with respect to real images.
Strengths: - The question the paper tries to answer is highly relevant to the vision community (and also for other communities such as language), and pinpoints a clear deficiency of the baselines of papers tackling synthetic data generation with pre-trained models. It is highly positive for the community that the paper raises this concern and checks the actual performance of the proposed baseline when compared against simple ways of training for classification using synthetic images.
- The analysis of what causes the models trained with synthetic images to fail is interesting.
Weaknesses: - The main weakness of the paper is that it points out a deficiency of other papers, but its technical contribution is limited (i.e., nearest-neighbor retrieval). Although the question studied is really relevant to the community and the baseline pinpointed should always be considered in papers, the technical contributions of the paper seem limited.
- Sentences 33-35 are (at minimum) a bit ambiguous due to wording (the "over" in L34 is ambiguous), or they are wrong. By the data processing inequality, the generated samples cannot contain any additional information about the images in LAION-2B that is not contained in the images of LAION-2B themselves. However, the generated images can contain additional information that is not present in the original LAION-2B images, and that is useful for the downstream classification task. In fact, the images generated by the generative neural network also contain information about the neural network architecture and the training algorithm. If this was not the case, the mutual information between the generated images and the training algorithm would be 0, and thus synthetic and real images would be indistinguishable.
The *promise* of using generated images is precisely that training on synthetic images adds crucial extra information to the information present in LAION-2B about regularities of the world that are embedded in the implicit biases of the neural network and its training procedure. The implicit biases of neural networks add extra information about the composability of concepts (e.g., "a cow on the beach" is the same as "a horse on the beach" with the horse replaced by a cow, although there are no cows on the beach during training), stability with respect to small perturbations in both text and image space ("a cow on the beach" and "a cow on an island" are roughly the same), and other useful regularities of the world. I strongly encourage the authors to rephrase this sentence and explain that the gains in [57] are well motivated by this.
- Although interesting, the analysis in Section 5 is limited, and other SOTA image models should be tested. Only using StableDiffusion is well motivated in the experimental section (as it is required to have access to the training dataset), but in the analysis section, stronger public or proprietary models should be analyzed to see if they also suffer from the same deficiencies.
- Similar baselines have already been used in the past, and they are not correctly cited. StableRep [57] implicitly has the baseline proposed in the paper, as it compares against using the complete set of real data used for training the generative model (i.e. all the neighbors), showing that it outperforms it. Similarly, the preliminary work of [1] also compares against using the entire training dataset, showing that in this preliminary case, the generative models do not outperform the real datasets.
- The comparison is just against a simple way to generate synthetic images, although several works have proposed more sophisticated sampling methods that target the shortcomings studied in Section 5. For example, [2] trains on a particular modality with scarce data, instead of using a large-scale pretrained model (which may not have fine-grained details about all the classes). As seen in Figure 2, training with generated samples underperforms directly using CLIP, while works like [57] outperform it.
- It would be good to study how close the retrieved samples are to the training splits of the datasets studied. Are the training samples of these datasets included in LAION-2B? If so, and these are properly retrieved, it is expected that the real images are a strict upper bound (as they are perfect samples of the real distribution, and there will be no distributional shift between training and testing).
Minor comments:
- L176 and Figure 2: The use of the term "zero-shot" performance to refer to the text-query classification performance is a bit confusing, since the tested models have been finetuned with N shots. If I understand correctly, this corresponds to applying CLIP's zero-shot classification technique after finetuning on N samples for each class, so the models are not truly zero-shot.
Technical Quality: 2
Clarity: 2
Questions for Authors: See weakness
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Title: Clarification request on references [1] and [2]
Comment: Thank you for your valuable review! We highly appreciate your feedback. To ensure that we can thoroughly address the review, could you please clarify what works [1] and [2] refer to? It seems that [1], [2] in the review text do not match references [1], [2] in our paper. Also, to double-check, [57] in our paper is a citation to SynCLR -- does "StableRep [57] implicitly..." in the review text refer to SynCLR, or does it refer to a different citation? Thank you very much for your time!
---
Rebuttal 2:
Title: References clarification
Comment: [1] Generative models as a data source for multiview representation learning. Ali Jahanian, Xavier Puig, Yonglong Tian, Phillip Isola" (GenRep)
[2] Using diffusion models to generate synthetic labeled data for medical image segmentation. Daniel G Saragih, Atsuhiro Hibi , Pascal N Tyrrell
[3] StableRep: Synthetic Images from Text-to-Image Models Make Strong Visual Representation Learners. Yonglong Tian, Lijie Fan, Phillip Isola, Huiwen Chang, Dilip Krishnan
[57] corresponds to the submission citation (SynCLR). All three GenRep [1], StableRep [3] and SynCLR [57] "implicitly have the baseline proposed in the paper", by using the entire dataset (i.e. all the neighbors). In the three papers, the networks are pretrained with generative samples, and the full datasets are used to train the generative models. [1] does not outperform the baseline, while [3] and [57] do.
---
Rebuttal 3:
Rebuttal: Thank you for your valuable review! We are glad you found our research question “highly relevant” for both vision and NLP.
Before answering your points directly, we’d like to clarify a possible miscommunication. A key point in the review is that our proposed retrieval baseline already implicitly exists in other work. For example, SynCLR [57] compares synthetic images against the generative model’s full training set (i.e. retrieve everything) and finds that synthetic data performs better.
While such results are exciting, this existing baseline—comparing synthetic data to the generator’s full train set—is distinct from our proposed baseline, and is insufficient for measuring the true added value from the generative model itself. When we sample synthetic training data, we often sample data that is targeted to the tasks we want our learner to perform well on. Works like [57] use this idea to generate synthetic data that beats the full real dataset for specific tasks. However, by comparing to the full real data, a task-agnostic distribution of images, [57] and other works [21] implicitly conflate the effects of training on synthetic versus real data with the effects of targeted data sampling. We aim to disentangle these factors and answer: are observed gains from synthetic data truly due to added information from the generator? Or are they because the synthetic data is implicitly sampled in a targeted manner from large upstream real datasets? To study this, we construct a retrieval baseline that explicitly controls for targeting (L39-45), thus overcoming a key limitation of prior work.
Overall, we are excited by the same possibility you highlighted—that synthetic data can transfer useful information from the generator's implicit biases. However, to ensure our field progresses toward this goal, we must carefully measure the current state of synthetic data methods. We believe our work is a crucial step in this direction.
**Q1:** “The main weakness of the paper is that… Although the question studied is really relevant to the community… the technical contributions of the paper seem limited.”
**A1:** Our goal is not to innovate new methods; rather, we seek to introduce a conceptual baseline that helps our field better understand the true utility of synthetic training images. Our proposed baseline can be implemented with existing image retrieval techniques; we aim to keep our baseline simple to facilitate comparison against it.
**Q2:** Unclear wording of data processing inequality sentence.
**A2:** Thanks! We will revise to discuss that SynCLR [57] is indeed motivated. We agree that synthetic images can contain useful information from the generator’s training process. However, it is unclear if [57]'s gains are truly due to this added information—[57]'s gains may also be from targeting the synthetic data to the evaluation tasks. Our retrieval baseline controls for the effect of targeting, allowing us to better measure whether the observed gains from synthetic data are due to the generator’s added information.
**Q3:** Why only analyze Stable Diffusion (SD), why not analyze closed-source models to see if they suffer similar limitations?
**A3:** We analyze SD so we can contextualize its analysis with its performance relative to our retrieval baseline, which is only possible with open data. In contrast, if we find some closed-source generative model G yields higher quality images than SD, it would be unclear whether these gains in image quality reflect improvements in the information added by G to the synthetic images, or whether G was simply trained on a higher quality dataset than SD. We also aim to keep consistency with recent work [21,51,57] that all use SD as the generator. We will revise to clarify these points.
**Q4:** The proposed baseline exists in prior works, which compare synthetic data against the generator’s full training set.
**A4:** Please see our discussion above for a detailed response. Briefly, SynCLR [57] does not control for the effects of data targeting, and GenRep [1] samples data from unconditional GANs, whereas modern synthetic data is sampled in a targeted manner. Our retrieval baseline which controls for targeting is now necessary to understand synthetic data gains in the modern regime. StableRep [3] only compares against a small random subset of the full train data. We will clarify in our final paper. Thank you!
**Q5:** The comparison is against a simple way to generate synthetic training images, but works like SynCLR [57] propose more advanced ways that can outperform CLIP.
**A5:** To clarify, our paper exactly adopts [57]’s synthetic data method. [57] only significantly beats CLIP on the Aircraft task, which our results corroborate. Sampling data from generators trained on data-scarce domains, as in the work you cited [2], is an exciting idea that we will discuss in related work; we focus our study on sampling data by prompting web-pretrained models like Stable Diffusion to be consistent with other work [21,51,57].
**Q6:** The high performance of retrieved data may be due to training set contamination.
**A6:** Good point! The generator is trained on the same data we retrieve from, so in theory synthetic data can also have benchmark train set contamination. We agree this contamination is more direct when we retrieve data; we thus further decontaminated all retrieved data for the train sets following [18]. We report the amount of removed data in Table R1 and plot the results of training on decontaminated retrieved data in Figure R1 of the rebuttal PDF. Overall, while some retrieved images were indeed similar to train set data (an average 1.9% of retrieved data was removed), discarding them minimally impacted performance.
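The decontamination idea in A6 (flagging retrieved images that nearly duplicate benchmark train-set images) can be illustrated with a small sketch. This is a toy under stated assumptions, not the actual procedure of [18]: the function name `decontaminate`, the 0.95 threshold, and the random stand-in embeddings are all made up for illustration.

```python
import numpy as np

def decontaminate(retrieved_emb, benchmark_emb, threshold=0.95):
    """Flag retrieved images whose nearest benchmark-train image is too
    similar (possible contamination). Embeddings are unit-normalized,
    so the dot product is cosine similarity. Returns a keep-mask."""
    sims = retrieved_emb @ benchmark_emb.T   # (N, M) pairwise similarities
    nearest = sims.max(axis=1)               # similarity to closest train image
    return nearest < threshold               # True = keep, False = drop

rng = np.random.default_rng(0)
def unit(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

benchmark = unit(rng.normal(size=(20, 16)))  # stand-in benchmark train set
retrieved = unit(rng.normal(size=(50, 16)))  # stand-in retrieved pool
retrieved[0] = benchmark[0]                  # plant an exact duplicate
keep = decontaminate(retrieved, benchmark)
print(keep[0], int(keep.sum()))              # the planted duplicate is dropped
```

In this toy run, only the planted duplicate exceeds the similarity threshold, mirroring the rebuttal's observation that a small fraction of retrieved images resembles benchmark train data.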
**Q7:** Confusing zero-shot terminology.
**A7:** Thanks! We will clarify in our paper: our zero-shot models are not trained on downstream benchmark data. We finetune CLIP on retrieved or synthetic data only and evaluate the resulting model as-is on the test set.
---
Rebuttal 4:
Title: Thanks for the rebuttal
Comment: Thanks for the rebuttal. Could authors give a bit more details about what they mean by targeted refinement and why [57] and [21] "implicitly conflate the effects of training on synthetic versus real data with the effects of targeted data sampling". I still struggle to see the difference between [57] and "targeted refinement", is it simply that [57] targets the full training dataset instead of a subset like the case studied? Why is this substantially different?
---
Rebuttal Comment 4.1:
Title: Thank you for your response! We are happy to clarify and discuss further.
Comment: Thank you for your reply! We truly appreciate your time and continued interest. To illustrate what we mean by prior work “conflating targeted data sampling with synthetic data,” we detail why SynCLR's experiments [57] cannot answer our research question.
Our work asks: does synthetic training data contain useful added information beyond the training dataset of its generator? We believe this is timely as recent works (e.g. [57]) have shown that training on a synthetic dataset $D_S$ can outperform training on the full real dataset $D_R$ used to pretrain its generator. However, this prior finding—that $D_S$ outperforms full $D_R$—does not answer our question, as the experimental setup confounds two distinct independent variables. To see the confounded variables, let's recap SynCLR's method for generating a synthetic training dataset $D_S$:
1. Manually define a set of visual concepts that we want our model to perform well on. SynCLR's concept list is largely based on the classes in the main downstream evaluation tasks (Tables 6,11 in [57]). For example, since we want to perform well on FGVC-Aircraft and ImageNet, the concept set contains ‘Airbus A320’, ‘flute’, etc.
2. Use an LLM to generate image captions for each concept.
3. Generate an image for each caption via Stable Diffusion, a text-to-image model trained on LAION-2B.
Thus, $D_S$ is collected with so-called “targeted data sampling:” the synthetic dataset $D_S$ SynCLR trains on isn’t sampled unconditionally and is not intended to cover the full LAION-2B distribution. Rather, it is carefully tailored to specific tasks based on the manually-fixed concepts in step 1. As such, the synthetic dataset $D_S$ differs from LAION-2B, the real training set of Stable Diffusion (i.e. $D_R$), along two axes simultaneously. First and most apparent, (a) LAION-2B is real while (b) $D_S$ is sampled from a generator. More subtly but just as critical, (i) $D_S$ is targeted to the downstream eval tasks, while (ii) LAION-2B is a broad, task-agnostic distribution of images. To illustrate, $D_S$ is constructed such that 5% of the dataset is images of FGVC-Aircraft classes (30M total images). In contrast, we estimated the proportion of FGVC-Aircraft relevant images in LAION-2B to be just 0.056% (only ~1.1M total images). Thus, while SynCLR shows that training on $D_S$ outperforms training on full LAION-2B for FGVC-Aircraft and performs comparably on other benchmarks, any gains $D_S$ exhibits over full LAION-2B may not be because $D_S$ is generated while LAION-2B is real. Rather, it could simply be because full LAION-2B is task-agnostic, while $D_S$ is task-targeted. Importantly, whether data is targeted is not a unique property of synthetic data—real data can be targeted too (e.g. via retrieval as discussed below). With existing baselines, we cannot attribute the gains from training on $D_S$ to Stable Diffusion adding unique value on top of its LAION-2B training set; we cannot resolve our research question.
The conflation of these two independent variables (data can be synthetic or real; data can be targeted or untargeted) is not unique to SynCLR. Modern synthetic data methods often rely on prompting text-to-image models; synthetic data is thus often targeted via the generation prompts. For example, SynthCLIP [21] derives prompts from the MetaCLIP concept list, thus targeting synthetic data to MetaCLIP concepts. SynthCLIP also compares this targeted synthetic data to general real data (e.g. Conceptual Captions).
The key innovation of our retrieval baseline is to enable disentangling these presently conflated variables, thus creating a more principled experimental setup. Through retrieval, we collect targeted real data from LAION-2B and compare targeted synthetic data $D_S$ head-to-head against this targeted real data. Our new setup now only varies one independent variable (i.e. synthetic versus real). Any gains $D_S$ exhibits over our retrieval baseline are thus properly attributable to $D_S$ having added information over the generator’s training set. However, we show that even SOTA synthetic data methods (we adopt SynCLR) lag the retrieval baseline. When we correctly control for synthetic data being targeted, previously observed gains vanish. In summary:
* Works like SynCLR compare targeted synthetic data to the generator's full untargeted real training set. This controls for the effect of seeing information from generator training data, but does not control for the equally critical effect of targeted data collection.
* We point out this gap and propose the retrieval baseline to further control for data targeting and resolve it.
* We show that gains from SOTA synthetic methods vanish against this principled baseline.
Our work contributes a way to measure if synthetic data has useful information beyond the generator's training set, thus yielding a clean target for future methods. We're eager to see the gains our field will find.
Thank you! We are happy to discuss further!
---
Rebuttal 5:
Title: Thanks for the clarification
Comment: Thanks for the detailed clarification, you are right! If I understand correctly, and as specified in Section 4.1, SynCLR does create a LAION-like set of captions, trains on all the captions once (using CLIP), and does not retrain for each downstream task. Although this is less severe than training specifically for each downstream application (i.e. only training on Airplanes when the downstream task is also Airplanes), it is true that during the process of creating captions they use all the captions from all downstream tasks (with many variations and augmentations), so indeed, their process conflates the effects of the synthetic sampling and the targeting of (all) the downstream tasks they study.
After the discussions, authors have addressed my concerns, and I'm raising my score to acceptance. Still, I expect the final version of the manuscript to include clarifications on the mentioned points.
---
Rebuttal Comment 5.1:
Title: Thank you for your time!
Comment: Thank you very much for your time and effort in this process! We enjoyed discussing our work with you, and are glad that we were able to address your concerns. We appreciate your score increase. We will clarify based on the discussed points in the next version of our paper. Thank you for helping us strengthen our work! | Summary: This paper evaluates the performance of training machine learning models on synthetic images generated by the Stable Diffusion generative model compared to using real images directly retrieved from the LAION-2B dataset, which was used to train the generative model. The authors argue that while synthetic images can benefit some downstream tasks, they are generally outperformed by directly using targeted real images. The study involves extensive empirical testing across several visual recognition tasks, showing that real data often provides better accuracy and fewer artifacts than synthetic alternatives.
Strengths: (1) This paper innovatively examines the impact of synthetic data and the corresponding training pairs on downstream tasks, providing a comprehensive analysis by comparing the performance of models fine-tuned on synthetic and real data across multiple tasks and scales. It recommends using real data from the same prompt domain as a baseline for downstream tasks.
(2) The experimental design includes a robust setup, featuring a variety of benchmarks that support the derived conclusions. The authors test their hypotheses across multiple data sizes and report the results with detailed statistical data.
(3) Discussions on the limitations of synthetic data and the potential for real data to yield better training outcomes have a direct impact on the development approaches of future artificial intelligence models, especially in fields where data quality is critical.
Weaknesses: (1) The author aims to emphasize the importance of the original training data as an evaluation baseline, but has not yet compared the downstream fine-tuning results of real-and-synthetic data from non-training data. It is also not clear what the advantage is of using upstream real data as the baseline rather than another dataset.
(2) There is a lack of quantitative indicators to evaluate the quality of the synthetic images; the authors only use CLIP for screening. For the generated image data, no more robust quantitative indicators are used to screen out images with low quality or obvious visual-semantic noise, even though in most downstream methods based on synthetic data, generated images are usually screened by extensive manual and computational methods to improve fine-tuning quality.
(3) Due to privacy restrictions, closed-source models, and other issues, it is often difficult to retrieve the training data to compare with synthetic data, which may limit the future application of this work.
Technical Quality: 3
Clarity: 4
Questions for Authors: My main confusion is whether the key distinction in this field is "retrieved data vs. synthetic data" or "high-quality data vs. low-quality data"; the contribution of using retrieved real data as a baseline should be made more clear and specific. The inspiration this work provides for future work also needs to be further clarified by the authors, which is the key to my rating.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: The conclusions and exploration presented in this work are solid, but the fact that the original training data is of higher quality than the data generated by the generator is not a valuable conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Title: Clarification request on weakness point (1)
Comment: Thank you very much for your time and effort in providing us feedback! We highly appreciate your review. Could you please clarify your point (1) under the weakness section? We would love to address your points as thoroughly as possible. Specifically, we were hoping to clarify the following:
*"... downstream fine-tuning results of real-and-synthetic data from non-training data. It is also not clear what the advantage is of using upstream real data as the baseline rather than another dataset."*
By "non-training data," do you mean that data outside the generative model's training dataset should also be used as the baseline? Does the main question under the Question section ("There is a main confuse...") refer to the same point? Thank you so much for your time!
---
Rebuttal Comment 1.1:
Title: Clarification on Weakness (1)
Comment: I am glad to rephrase my concern in Weakness (1). I am curious how text-image pairs from data outside the training set would serve as the real-data baseline, compared with the results of the text prompts and generated images. If the result still shows that the quality of the generated data is inferior to real data, it seems that other real data from outside the training dataset could also serve as a "baseline" for the claims in this article. In that case, what is the necessity of obtaining the training data, as declared in the article?
The question in the Question section revolves around whether the core factor behind the performance on downstream CV tasks discussed in the article is "training vs. generated data" or "high-quality vs. low-quality data".
Thanks, hope my clarification will help the authors' further response.
---
Rebuttal 2:
Rebuttal: Thank you for your valuable feedback! We are glad you found our experimental setup “robust” and our discussion to “have a direct impact on the development” of future models. We address each of your points below.
**Q1 (Why retrieval baseline?):** Why emphasize the importance of retrieving data from the generative model’s training data as an evaluation baseline for synthetic data? Why adopt the retrieval baseline over simply comparing synthetic data to high-quality data outside the generative model’s training set?
**A1:** Great question! Our research goal is not just to study whether synthetic training data can improve model performance (which as you noted, would not require the retrieval baseline). Rather, we seek to understand where performance gains from synthetic data come from, and to contribute a principled way of measuring whether model-generated synthetic data can surpass the real data it derives from (i.e., the data used to train the generator). To study this question empirically, we propose the retrieval baseline. This baseline enables us to disentangle whether gains from synthetic data are due to (a) the fact that synthetic images are implicitly subsampled from the generator’s huge real training set, or (b) due to the generator truly adding useful information beyond its training data. If we compare synthetic data to high-quality data outside the generator’s training set, then these two factors cannot be disentangled. For example, even if a synthetic data method outperforms high-quality outside data, the gains from synthetic data may simply be due to the generative model seeing higher-quality and larger-scale data during pretraining. In contrast, our baseline of retrieving real data from the generator’s training set controls for the effect of information from the upstream real data. Retrieval also controls for the effects of targeted data sampling (i.e., synthetic data is often targeted to specific tasks), which has been conflated with synthetic data in prior work [57].
We believe the research questions we pose—how to measure if/when synthetic data surpasses the real data it was trained on—are timely to tackle. There is surging interest in building SOTA vision models with synthetic data [3,21,51,57,58], and many works now show strong gains. We find that such gains—while exciting—still fall short of our retrieval baseline, suggesting that today’s synthetic data methods do not provide significant information beyond the generator’s training data. Hence, we hope our baseline will set a strong and simple target for the field going forward. We further discuss how our paper can impact future work in response to your **Q4** below. We will clarify all points in our final paper. Thank you for helping us strengthen our work!
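For concreteness, the core of the retrieval baseline described in A1 can be sketched in a few lines. This is a purely illustrative toy, not the paper's implementation: the function name `retrieve_targeted`, the random stand-in embeddings, and the brute-force search are assumptions; in practice, retrieval over a corpus like LAION-2B would use precomputed CLIP embeddings and an approximate nearest-neighbor index.

```python
import numpy as np

def retrieve_targeted(upstream_emb, query_emb, k):
    """Return indices of the k upstream images most similar to any query.

    upstream_emb: (N, d) unit-normalized embeddings of the generator's
                  real training set (stand-in for LAION-2B CLIP features).
    query_emb:    (Q, d) unit-normalized embeddings of downstream-task
                  prompts (the "targeting" signal).
    """
    sims = upstream_emb @ query_emb.T   # cosine similarity, shape (N, Q)
    best = sims.max(axis=1)             # best query match per image
    # The top-k most task-relevant real images form the retrieval baseline.
    return np.argsort(-best)[:k]

# Toy demo with random unit vectors standing in for CLIP embeddings.
rng = np.random.default_rng(0)
def unit(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

upstream = unit(rng.normal(size=(1000, 64)))  # "generator training set"
queries = unit(rng.normal(size=(5, 64)))      # "downstream class prompts"
idx = retrieve_targeted(upstream, queries, k=50)
print(len(idx))  # 50 retrieved "real" images
```

The point of the sketch is that targeting is a property of the *sampling procedure*, not of synthetic data: the same task-specific prompts that steer a generator can steer retrieval over the generator's real training set.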
**Q2 (Insufficient filtering):** The CLIP score used in the paper to filter synthetic images is not sufficiently sophisticated and does not reflect current best filtering practices.
**A2:** Thanks for the question! This point may be mistaken. While we agree that improving data filtering is an exciting future direction for improving the utility of synthetic training data, filtering based on CLIP score remains state-of-the-art [22]. In fact, many recent representative works do not apply any quality filters whatsoever to generated synthetic training images [3,21,51,57,58]. Our work uses CLIP filtering to optimize the performance of synthetic data, which [22] finds improves training performance over no filtering. Our study corroborates [22]’s finding; for example, applying CLIP filtering to synthetic training data improved ImageNet zero-shot and LP accuracy by 1.03 and 0.91 points (respectively) over no filtering at 4M data scale.
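For readers unfamiliar with CLIP-score filtering, the idea in A2 can be sketched as follows. This is an illustrative toy under stated assumptions: the function name `clip_score_filter` and the random stand-in embeddings are made up; a real pipeline would embed actual generated images and their prompts with a CLIP model and keep the highest-scoring pairs.

```python
import numpy as np

def clip_score_filter(img_emb, txt_emb, keep_frac=0.5):
    """Keep the top-scoring fraction of generated images by CLIP score.

    img_emb: (N, d) unit-normalized embeddings of synthetic images.
    txt_emb: (N, d) unit-normalized embeddings of their generation prompts.
    The CLIP score of image i is its cosine similarity to prompt i.
    """
    scores = np.sum(img_emb * txt_emb, axis=1)  # per-pair cosine similarity
    k = int(len(scores) * keep_frac)
    kept = np.argsort(-scores)[:k]              # indices of best-aligned pairs
    return kept, scores

rng = np.random.default_rng(1)
def unit(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

imgs = unit(rng.normal(size=(200, 32)))  # stand-in synthetic image embeddings
txts = unit(rng.normal(size=(200, 32)))  # stand-in prompt embeddings
kept, scores = clip_score_filter(imgs, txts, keep_frac=0.5)
print(len(kept))  # 100
```

Filtering by image-prompt alignment discards generations that drifted from their prompt, which is consistent with the rebuttal's report that CLIP filtering improved downstream accuracy over no filtering.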
**Q3 (Retrieval is impractical):** Retrieval from a generator’s training data is often not practical due to privacy concerns or closed-source data, which may limit the work’s applicability.
**A3:** Good point! We make a similar note and discuss its implications for future research in the introduction and discussion of our paper (L70-73, L312-315). While privacy concerns and closed data are practical reasons to use synthetic data over retrieved data, these application constraints are orthogonal to our research question. Specifically, our goal in proposing the retrieval baseline is not to prescribe a general method for maximizing downstream task accuracy (which, as you noted, would require consideration of downstream constraints), but rather to advance our understanding of what value synthetic data provides beyond the training set of its generator.
**Q4:** How can this research and the proposed retrieval baseline inspire future work?
**A4:** We believe our work can inspire future research in two primary ways. First, comparison against our retrieval baseline provides a principled target for future synthetic data research to aim for. If any synthetic data method outperforms training on data retrieved from the generative model’s training data, then that is strong evidence that the generative model has added (e.g., through its inductive bias, through its compositional abilities) additional useful information on top of the data that it was originally trained on. In contrast, as discussed in **A1**, if a synthetic data method outperforms some other source of real data outside of the generative model’s training set, then we cannot draw a similar conclusion.
Second, by demonstrating that retrieving real data from the generator’s training set currently surpasses synthetic data, we hope to sharpen the field’s intuition of when existing synthetic data methods can be useful in the present day. Many recent works position synthetic data as a potential drop-in replacement for real data, even when the retrieval baseline is possible. Our findings do not corroborate such positioning; instead, existing synthetic data methods are most promising when downstream constraints (e.g., privacy concerns) prevent the retrieval baseline from being realized.
---
Rebuttal Comment 2.1:
Title: Thanks for the Rebuttal
Comment: Thank you for your rebuttal. Your reply has addressed my concerns regarding Q2 and Q3.
Regarding your statement in A4, 'Many recent works position synthetic data as a potential drop-in replacement for real data, even when the retrieval baseline is possible,'. In my opinion, the main advantage of synthetic data lies in its ability to scale effectively. By leveraging larger datasets, it can enhance performance in other tasks. From this perspective, whether considering data scaling capabilities or privacy risks, synthetic data holds more practical value than retrieving real data.
Of course, the exploration of data quality is currently a major issue. The real focus of the field is how to make synthetic data superior to the real data retrieved from the training set, not just which kind of data to use as a baseline. I appreciate your writing and experiments; could this work give us deeper insights into how to make synthetic data better?
---
Reply to Comment 2.1.1:
Title: Thank you for your response! We are excited to discuss further.
Comment: Thank you for your time and continued interest in our work! We are glad that we were able to address parts of your concern and are very grateful to see that you’ve increased your score in favor of acceptance.
Overall, we are excited by the same possibility that you highlighted: that synthetic data can be effectively scaled, and that training on ever-larger synthetic datasets can improve our models. However, this goal remains highly non-trivial. Our work found that even when we scale up the amount of synthetic data generated from a current SOTA method [57] beyond the amount of retrieved data considered, synthetic data still lags retrieved data in terms of downstream training performance (**Figure 1, L199-220**). For example, training on a mere 30K retrieved images outperformed 500K synthetic images for FGVC-Aircraft. We further find that with current methods, scaling up synthetic data can often hurt performance, not help. Thus, we fully agree that scaling up synthetic data has exciting potential, but retrieved data is currently more performant than synthetic data when available. To realize the untapped potential of synthetic data, we need to find better methods. And to build better synthetic data methods, our field needs a principled target to aim for.
We believe that our retrieval baseline may be one such target. If any synthetic data method outperforms training on data retrieved from the generative model’s training set, then that strongly suggests that the gains come from improvements in our synthetic data methodology itself, as opposed to improvements in the generator’s real training data. In the second case, using the generator’s real training dataset directly would continue to be more practical when available; synthetic data would remain primarily useful under privacy constraints.
While privacy is already an exciting application domain, we are optimistic about future improvements in our synthetic data methods to make them useful more broadly (i.e. even when the generator’s real train set is available). To this end, we also performed analysis experiments (**Section 5.1, Figures 3,4**) to better understand why synthetic data currently lags retrieved data. We show that both visual artifacts from the generator and high-level semantic content differences contribute to the underperformance of synthetic data, thus highlighting these two axes as future directions for improving synthetic data. Moreover, by conceptualizing retrieval as a target to beat, another natural future step could be to design methods that generate image compositions which are explicitly absent from the generator’s upstream training set; synthesizing these “missing” images may offer unique value beyond the existing upstream real images. Such an approach leverages the compositional generalization abilities of generative models, which recent research promisingly suggests may be a unique boon of generative models compared to other models trained on the upstream data [1, 2] (references below). We will thoroughly discuss these other ways in which our work may motivate future synthetic data improvements in our updated paper. Thank you for your thoughtful questions!
Thank you again for your time and valuable feedback, which has improved our work. We share your excitement about the potential of synthetic data, and are optimistic that our work will inspire subsequent works to find gains beyond our retrieval baseline. We would be eager to discuss further!
* [1] Your diffusion model is secretly a zero-shot classifier. Alexander C Li, Mihir Prabhudesai, Shivam Duggal, Ellis Brown, and Deepak Pathak.
* [2] Text-to-image diffusion models are zero shot classifiers. Kevin Clark and Priyank Jaini. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their detailed feedback. We will incorporate all suggestions in the next version of our paper. Thank you all very much for your invaluable help in improving our work!
Overall, we are thrilled to see that all reviewers found the research question we posed interesting and relevant to the community. The two main concerns were raised by **reviewers ZHX8 and rgD8**, who wondered if the retrieval baseline proposed in our paper is a necessary contribution as opposed to simpler, existing alternatives.
For example, **reviewer ZHX8’s** primary concern questioned why we might want to compare synthetic data to data specifically retrieved from the generative model’s training set, as opposed to a simpler baseline of general high-quality real data that need not come from this specific source. We detail why this simpler baseline would not allow us to answer our research questions in response to the reviewer below. We will also clarify in our revised paper. To summarize, our goal is not just to understand whether training on synthetic data can improve model performance (in which case the simple baseline would suffice), but rather to better understand where any gains we observe derive from. Do gains from synthetic data come from the fact that we are implicitly subsampling relevant data from the generator’s huge real training set (which we can alternatively do via retrieval)? Or do gains come from the generative model truly adding some new information (e.g., through its inductive biases) that surpasses its training data? Comparison against the simple baseline cannot disentangle these two factors; any observed synthetic data gains versus the simple baseline may simply arise from improvements in the generative model’s training data quality. As interest in building SOTA vision models with synthetic data [3, 21, 51, 57, 58] is surging, we believe these are timely unanswered questions to tackle. Hence, we propose the retrieval baseline to enable studying them empirically.
**Reviewer rgD8** expressed concern that our retrieval baseline already implicitly exists in previous work like SynCLR [57], which has compared synthetic data to the *full* training set of the generative model. We detail why this baseline is also distinct from our baseline in response below. Mainly, the full dataset baseline does not control for the critical effect of data targeting, which has been implicitly conflated with synthetic data in past work [57] (L39-45). We show that by comparing synthetic data against our baseline, previous gains shown from synthetic data—while still exciting—largely go away. Thus, we believe our baseline is critical for understanding the true added value of synthetic training data.
**Reviewer rgD8 and reviewer Etje** both asked for additional empirical experiments to further validate the main findings of our paper. We have performed the requested experiments, and include all resulting figures and tables in the rebuttal PDF. We summarize the new experiments here:
* Train set decontamination (Figure R1, Table R1). **Reviewer rgD8** pointed out that retrieving data from LAION-2B may result in images from the benchmark training set (e.g., from ImageNet) being included in the retrieved data. Our retrieved sets are already decontaminated for the downstream test set; we thus performed additional train set decontamination and plotted the results of training on the train+test decontaminated data. Overall, train set decontamination has minimal impact on model performance and does not change our findings.
* CLIP score distributions (Figure R2). **Reviewer Etje** wondered whether there were significant discrepancies in the CLIP score of post-filtering synthetic and retrieved images. We find that if anything, CLIP judges synthetic data to be higher quality than retrieved data on average, despite its lagging training performance. We are optimistic that this new experiment will motivate future synthetic data filtering methods that do not rely on CLIP.
* Additional synthetic image prompt strategies (Figure R3). **Reviewer Etje** wondered whether generating synthetic images with other prompting strategies would yield higher training performance. We compare our original LLM-guided prompting strategy to prompts from the LAION alt-text or BLIP-2 captions of our retrieved data, and find that the original LLM-guided strategy performs better on ImageNet and similarly on FGVC-Aircraft compared to the two alternatives.
We believe the new experiments further corroborate our main findings, and we will include all of them in the next version of our paper. Once again, thanks to all the reviewers and chairs whose effort makes this process possible. We deeply appreciate your effort in helping improve our work! We are eager to further discuss with all reviewers during the discussion period.
Pdf: /pdf/a137fda6ec30ffb935cb0bada25189dcdce77a9a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Trajectory Flow Matching with Applications to Clinical Time Series Modelling | Accept (spotlight) | Summary: This paper proposes a trajectory flow matching method for time series data, by using flow matching at each time points. To preserve the coupling of time series, the vector fields are conditioned on history lengths (or even more general $c$). The method also provides ways for model stability, irregularly sampled trajectories and uncertainty prediction. After evaluating the model performance on simple harmonic oscillators, they further apply their method to three clinical time series datasets.
Strengths: The proposed method provides a generative model for time series data, and it takes the benefits of FM, in terms of flexibility, scalability and stability, for time series data modeling.
Weaknesses: The TFM is the model for discrete time series. It would potentially be more useful to model things in a continuous way.
Technical Quality: 4
Clarity: 3
Questions for Authors: The paper is clearly written. No major questions.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The same as weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive assessment of our paper. We appreciate the reviewer’s insight in the importance of continuous time series modeling. We are also interested in the fully continuous setting and would be interested in adapting ideas from functional flow matching [1] to this domain.
[1] Kerrigan, G., Migliorini, G., & Smyth, P. (2023). Functional Flow Matching (Version 2). arXiv. `https://doi.org/10.48550/ARXIV.2305.17209`
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their response; I will keep my initial rating. | Summary: This paper presents Trajectory Flow Matching, an extension of flow matching to time series. It can model irregular, sparsely sampled, and noisy time series. It trains in a simulation-free manner, bypassing backpropagation through the dynamics. The method is tested on ICU physiological time series, demonstrating SoTA performance and uncertainty prediction.
Strengths: 1) A strong research contribution, extending flow matching to time series.
2) Explicit modelling of uncertainty; handles noisy, irregular, and sparsely sampled data.
3) Application to a domain with significant societal benefit.
4) Clearly defined paper, easy to follow, and well-written.
Weaknesses: None of particular note.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1) It would be interesting to understand if this could be applied to periodic time series like ECG. Do you see ways to introduce this behaviour, and if so, could you call this out as a future research direction?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors clearly describe the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 10
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their extremely positive response to our work. We are thrilled that you found our TFM to be a strong research contribution. Additionally, we are grateful for your suggestion regarding the future application of our model to periodic time series. We have some preliminary ideas for handling this behavior, such as modeling in the Fourier domain; we are eager to explore them in future work and have added this direction to the future work section of our manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, particularly expanding future work and adding the README.md detailed in another review. | Summary: This paper presents Trajectory Flow Matching (TFM), a simulation-free training algorithm for neural differential equation models. This enables modeling of continuous physiologic processes using irregular, sparsely sampled, and noisy data, all with better scalability.
The authors provide theoretical proof that matching techniques can allow for simulation-free training, and empirical proof of the effectiveness of TFM in clinical settings. An ablation study is also provided to validate components of TFM.
Strengths: - Originality: Bridging the gap between flow matching and differential equation-based models is a novel idea that is carried out rigorously by the authors.
- Quality: Proofs are provided for the theoretical results; experiments are performed on three different real-world datasets; uncertainty prediction is provided by design and helps clinical usability. Responses to the NeurIPS checklist are also extensive.
- Clarity: The paper is easy to parse, as all ideas follow each other logically.
- Significance: SDEs are notoriously expensive to train, and this work promises to solve this issue while offering state-of-the-art performance by a notable margin.
Weaknesses: - The motivation for the paper is that SDEs are not scalable. While this is a known fact, the paper lacks a proper discussion of the impact of simulation-free training on the scalability of TFM.
- The provided code lacks instructions to reproduce the results (empty README).
- Explainability studies would have been welcome to help clinical usability.
Technical Quality: 4
Clarity: 3
Questions for Authors: - Did you compare TFM-ODE with other models that are able to process irregular clinical time series?
- What is your reasoning behind modeling specifically the heart rate + MAP pair ?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the technical limitations (memory not always suited depending on the dataset, no causal representations estimations) as well as societal impacts (false predictions).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback that helps us further improve our paper. We have added a README and an impact statement on the scalability of simulation-free training to our manuscript.
We agree having explainability is important for others who may apply our method. We thank the reviewer for bringing this point up as it is quite important in high-stake applications such as healthcare. As such, we have included this in our future work section.
We would like to address some of the reviewer's questions below:
> Did you compare TFM-ODE with other models that are able to process irregular clinical time series?
Yes, we compared TFM-ODE’s performance to NeuralODE [1], FM baseline ODE [2], and LatentODE RNN [3], as well as SDE-based models like TFM and NeuralSDE [4]. All these models are able to process irregularly sampled time series. (See Table 1)
> What is your reasoning behind modeling specifically the heart rate + MAP pair ?
We chose the heart rate + MAP pair since they are the most readily obtainable, non-invasive measurements from a patient and provide a clinically relevant representation of patient state trajectories in the management of clinical conditions where hemodynamic monitoring is important (e.g., sepsis and GI bleeding). We chose MAP since it reflects both cardiac output and peripheral vascular resistance; this is used for decisions regarding resuscitation and vasopressor treatment, which are indications for hospital-based treatment. Heart rate may be affected by either cardiac arrhythmias or clinical conditions related to but independent from cardiac output and peripheral vascular resistance. Therefore, we used a combination of the two to reflect clinical trajectories.
We hope we have addressed the questions of the reviewer and we would like to thank the reviewer once more for helping us improve the clarity of our responses and propose future directions.
[1] Chen, R. T. Q., Rubanova, Y., Bettencourt, J., & Duvenaud, D. (2018). Neural Ordinary Differential Equations (Version 5). arXiv. `https://doi.org/10.48550/ARXIV.1806.07366`
[2] Lipman, Y., Chen, R. T. Q., Ben-Hamu, H., Nickel, M., & Le, M. (2022). Flow Matching for Generative Modeling (Version 2). arXiv. `https://doi.org/10.48550/ARXIV.2210.02747`
[3] Rubanova, Y., Chen, R. T. Q., & Duvenaud, D. (2019). Latent ODEs for Irregularly-Sampled Time Series (Version 1). arXiv. `https://doi.org/10.48550/ARXIV.1907.03907`
[4] Liu, X., Xiao, T., Si, S., Cao, Q., Kumar, S., & Hsieh, C.-J. (2019). Neural SDE: Stabilizing Neural ODE Networks with Stochastic Noise (Version 1). arXiv. `https://doi.org/10.48550/ARXIV.1906.02355`
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and actions to improve your paper even further. I appreciate your clarification about the choice of clinical variables.
When I mentioned "other models that are able to process irregular clinical time series", I should have been clearer that I had in mind models not based on differential equations, such as STraTS [1], RAINDROP [2], or Warpformer [3], or even older models such as SAnD [4] or InterpNet [5].
Whether you compared your model to these or not, your work still impacts the ODE domain, and my review thus remains positive (Accept), as you have addressed my other concerns.
---
[1] S. Tipirneni and C. K. Reddy, “Self-Supervised Transformer for Sparse and Irregularly Sampled Multivariate Clinical Time-Series,” ACM Trans. Knowl. Discov. Data, vol. 16, no. 6, p. 105:1-105:17, Jul. 2022, doi: 10.1145/3516367.
[2] X. Zhang, M. Zeman, T. Tsiligkaridis, and M. Zitnik, “Graph-Guided Network for Irregularly Sampled Multivariate Time Series,” presented at the International Conference on Learning Representations, Oct. 2021.
[3] J. Zhang, S. Zheng, W. Cao, J. Bian, and J. Li, “Warpformer: A Multi-scale Modeling Approach for Irregular Clinical Time Series,” in Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Aug. 2023, pp. 3273–3285. doi: 10.1145/3580305.3599543.
[4] H. Song, D. Rajan, J. Thiagarajan, and A. Spanias, “Attend and Diagnose: Clinical Time Series Analysis Using Attention Models,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1, Art. no. 1, Apr. 2018, doi: 10.1609/aaai.v32i1.11635.
[5] S. N. Shukla and B. M. Marlin, “Interpolation-Prediction Networks for Irregularly Sampled Time Series,” ArXiv, Sep. 2019.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and for clarifying the methods you had in mind. At this time, we have not compared TFM to these specific methods, and we will look into adding comparisons for the revised version of our paper. | null | null | Rebuttal 1:
Rebuttal: # Global Response
We thank the reviewers for their time evaluating our paper. We are grateful for the thoughtful comments, insights, and potential directions for future work. We have addressed each of the points raised and provided clarifications where necessary. Based on suggestions from the reviewers we have made the following improvements:
* Included README.md with instructions on running the code as well as other code quality improvements. (reviewer TMfk)
* Expanded impact statement of simulation-free training and scalability of TFM to include discussion of scalability relative to simulation-based methods. (reviewer TMfk)
* Expanded discussion of future directions to include modeling periodic time series, functional flow matching for continuous time series, and explainability. (reviewer s476, 4EUb, TMfk)
We look forward to any further feedback and discussions and appreciate the opportunity to improve our work based on your valuable input. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Expected Probabilistic Hierarchies | Accept (poster) | Summary: This paper considers the problem of hierarchical clustering. This problem is typically handled by discrete optimization approaches, which define a hierarchical clustering quality score (e.g. Dasgupta and TSD) and optimize it on a discrete search space. More recent approaches consider this problem from a probabilistic perspective. They soften the hierarchical clustering scores and optimize them on a continuous space. In this paper, the authors propose an extension of the prior work on Flexible Probabilistic Hierarchy (FPH) [Zügner et al, 2021]. Specifically, they introduce Expected Probabilistic Hierarchies (EPH), a new probabilistic hierarchical clustering objective for continuous optimization. The consistency and advantages of EPH are supported by theoretical analysis. The experimental results on synthetic and real-world datasets show that EPH can outperform existing approaches.
Strengths: - Overall, this paper is well-organized and easy to follow.
- The contributions are clearly described and well-supported by theoretical analysis and experiments.
- Extensive experimental results and visualizations are provided.
Weaknesses: - While I appreciate the theoretical results for supporting the advantages of EPH over FPH, there is a lack of substantial technical improvements. Therefore, the novelty is limited.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The motivation behind adopting the Dasgupta and TSD is not clearly stated. While the authors provide definitions and references for them, it is still hard to understand why these two particular scores are used and how they are in contrast to each other. Could the authors elaborate on this point?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comprehensive feedback. In the following, we address their comments.
**Comment:** While I appreciate the theoretical results for supporting the advantages of EPH over FPH, there is a lack of substantial technical improvements. Therefore, the novelty is limited.
**Response:** We appreciate the reviewer's recognition of the theoretical results supporting the advantages of EPH over FPH. We want to emphasize that EPH introduces several significant advancements that contribute to its novelty and practical impact.
First, EPH demonstrates substantial improvements in performance, with gains of over 20% compared to the original FPH on certain datasets (see Table 10 in the appendix). These improvements are not isolated but consistent across a wide range of datasets, underscoring the effectiveness of our approach.
Moreover, our work introduces a novel *unbiased* subgraph sampling technique that significantly enhances the scalability of EPH, particularly on dense vector datasets. This scalability is a crucial advancement, as it allows EPH to be applied to larger, denser datasets that FPH cannot handle. EPH opens new possibilities for its application in various domains where data similarities are dense.
In addition to performance and scalability improvements, EPH provides theoretical justification for its framework. Finally, we are hopeful that future work will be able to adapt our algorithm to different use cases, as its only requirement is a differentiable metric.
**Comment:** The motivation behind adopting the Dasgupta and TSD is not clearly stated. While the authors provide definitions and references for them, it is still hard to understand why these two particular scores are used and how they are in contrast to each other. Could the authors elaborate on this point?
**Response:** We focused on the Dasgupta cost and TSD because they are well-studied clustering metrics with intuitive meanings.
The Dasgupta cost is a widely used metric that measures the quality of a hierarchical clustering by evaluating how close similar leaves are in the hierarchy. Specifically, it quantifies the expected number of leaves in the subtree rooted at the lowest common ancestor of a pair of leaves sampled proportionally to their similarity. A lower Dasgupta cost indicates that similar items are grouped together in the hierarchy, reflecting a good clustering structure.
The Tree-Sampling Divergence (TSD), on the other hand, quantifies how well a hierarchy can reconstruct its graph. More specifically, it is defined as the KL divergence between the node and the edge distributions. Its advantage over the Dasgupta cost is that it does not favor binary hierarchies.
Both metrics are unsupervised and differentiable, which is necessary for the gradient-based optimization in our approach. The unsupervised nature of these metrics allows us to evaluate the quality of the hierarchies without the need for labeled data, making them applicable to a wide range of applications.
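As a concrete illustration of the discrete Dasgupta cost described above, here is a hypothetical minimal sketch (our own toy example, not the paper's implementation): each leaf pair is charged its similarity weight times the number of leaves under the pair's lowest common ancestor, so grouping similar leaves under small subtrees lowers the cost.

```python
# Hypothetical sketch of the discrete Dasgupta cost:
#   cost(T) = sum over leaf pairs (i, j) of w(i, j) * |leaves under LCA(i, j)|
# Lower is better: similar pairs should meet low in the tree.

def ancestors(node, parent):
    """Path from `node` up to the root, inclusive."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def dasgupta_cost(parent, leaves, w):
    # Number of leaves in the subtree rooted at each node.
    n_leaves = {}
    for leaf in leaves:
        for a in ancestors(leaf, parent):
            n_leaves[a] = n_leaves.get(a, 0) + 1
    cost = 0.0
    for idx, i in enumerate(leaves):
        anc_i = set(ancestors(i, parent))
        for j in leaves[idx + 1:]:
            # Lowest common ancestor: first ancestor of j that is also above i.
            lca = next(a for a in ancestors(j, parent) if a in anc_i)
            cost += w.get((i, j), w.get((j, i), 0.0)) * n_leaves[lca]
    return cost

# Toy hierarchy: root r has children u (with leaves a, b) and leaf c.
parent = {"a": "u", "b": "u", "u": "r", "c": "r"}
leaves = ["a", "b", "c"]
w = {("a", "b"): 1.0, ("a", "c"): 0.1, ("b", "c"): 0.1}
cost = dasgupta_cost(parent, leaves, w)  # 1.0*2 + 0.1*3 + 0.1*3 = 2.6
```

Note how the highly similar pair (a, b) meets under the small subtree u (2 leaves), keeping the cost low; placing them under the root instead would raise it.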
We again thank the reviewer for their feedback and are happy to address any upcoming questions.
---
Rebuttal 2:
Comment: Thank you for the detailed response. After reading the rebuttal and other reviewers' comments, I am happy to keep my score unchanged (weak accept).
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for their response and appreciate any further suggestions. | Summary: This paper proposes Expected Probabilistic Hierarchies (EPH), a new hierarchical clustering method developed from the Flexible Probabilistic Hierarchy (FPH) method. Unlike FPH, which uses Soft-Das and Soft-TSD, EPH provides an unbiased estimate of two new objectives called Exp-Das and Exp-TSD. EPH addresses the alignment issue between continuous and discrete targets, which is supposed to be the reason why it outperforms other continuous methods. Soft-Das is proved to be a lower bound on Exp-Das (Proposition 4.2), which is the main theoretical contribution. A learnable approach based on differentiable hierarchical sampling is proposed, in which the tree sampling procedure and the Gumbel-Softmax estimator are the main techniques used to approximate the discrete choices of parent nodes. Experimental results verify the effectiveness of the new method.
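The Gumbel-Softmax estimator mentioned in the summary can be sketched as follows. This is an illustrative example of the general technique only (our sketch, not the paper's code): a hard categorical choice, such as picking a parent node from a set of candidates, is relaxed into a differentiable soft sample.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Soft sample from the categorical distribution defined by `logits`.

    As tau -> 0 the output approaches a one-hot vector (a hard choice);
    larger tau gives a smoother, lower-variance relaxation.
    """
    rng = rng if rng is not None else np.random.default_rng()
    u = rng.uniform(1e-12, 1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))                 # Gumbel(0, 1) noise
    y = (np.asarray(logits) + g) / tau
    y = np.exp(y - y.max())                 # numerically stable softmax
    return y / y.sum()

# Soft choice among three hypothetical candidate parent nodes.
probs = gumbel_softmax(np.log([0.7, 0.2, 0.1]), tau=0.5,
                       rng=np.random.default_rng(0))
```

Because the soft sample is a deterministic, differentiable function of the logits given the noise, gradients can flow through the sampled choice during optimization.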
Strengths: 1. The authors propose a new hierarchical clustering method based on a sampling-based differentiable metric.
2. The authors make comprehensive experiments to demonstrate that their method outperforms the baselines on both synthetic (HSBM) and real (including graph and vector) datasets.
Weaknesses: The technical contribution is limited since the framework of the algorithm and the main techniques almost entirely come from (Zügner et al., 2022). I do understand that EPH has its own scores rather than soft scores, but I don't think the novelty is strong enough for the NeurIPS criterion.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. It seems that the Dasgupta cost is robust against the numbers of hierarchies and internal nodes, whose variation may lead to quite different hierarchical structures. However, a real system is believed to have its own intrinsic structure in which these numbers are relatively fixed. Does this mean that EPH, or even the Das and TSD costs, are not suitable for finding the intrinsic hierarchical structure of real systems? I think that finding the intrinsic numbers is quite a difficult task, and treating them as hyper-parameters does not seem like a good approach.
2. How do you restrict the number of hierarchies when the number of internal nodes is given?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes, the authors have addressed the limitations in their discussions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and address their comments below.
**Comment:** The technical contribution is limited since the framework of the algorithm and main techniques come from (Zügner et al., 2022).
**Response:** While our work builds upon the probabilistic hierarchies introduced by Zügner et al. (2022), it addresses key limitations of their approach and introduces several significant contributions and advancements.
Firstly, we prove that the Soft-Das cost is a lower bound of the discrete Dasgupta cost (Sec. 4.2), and we provide a concrete example where it fails to identify the optimal hierarchy. Building on this insight, we propose the Exp-Das and Exp-TSD metrics (Sec. 4.1) and show that their optimal hierarchies align with the discrete counterparts. Furthermore, we introduce an *unbiased* subgraph sampling algorithm, allowing us to apply both FPH and EPH to vector datasets.
Our empirical evaluation further demonstrates these advancements, showing substantial performance improvements, with gains of over 20% on certain datasets compared to the original FPH (see Table 10 in the appendix). These improvements are consistently observed across graph and vector datasets, underscoring the practical impact and novelty of our contributions.
**Comment:** It seems that the Dasgupta cost is robust against the numbers of hierarchies and internal nodes, whose variation may lead to quite different hierarchical structures. However, a real system is believed to have its own intrinsic structure in which these numbers are relatively fixed. Does this mean that EPH, or even the Das and TSD costs, are not suitable for finding the intrinsic hierarchical structure of real systems? I think that finding the intrinsic numbers is quite a difficult task, and treating them as hyper-parameters does not seem like a good approach.
**Response:** The Dasgupta cost and TSD metrics tend to improve with an increasing number of internal nodes, but they eventually converge when the hierarchies become expressive enough to capture the intrinsic structure of the data. In an unsupervised setting, the true number of internal nodes is unknown. To address this, we conducted experiments to determine when most of the information is captured, which we found to occur at $n' = 512$ (see Figure 14 and Figure 15). While the choice of internal nodes can depend on the specific application, a higher number is generally preferable as it captures more information and can be pruned later. We do not expect significant structural differences in hierarchies of different sizes, as the metrics maintain consistent and intuitive meanings across configurations.
Moreover, we validated our approach on synthetic HSBMs, where the number of internal nodes is predefined (see Figure 4 and Table 4). The results show that the inferred hierarchies closely align with the ground truth, indicating that EPH, along with the Dasgupta and TSD costs, is suitable for identifying the intrinsic hierarchical structure when the exact number of internal nodes is used to condition the hierarchies.
**Comment:** How do you restrict the number of hierarchies when the number of internal nodes is given?
**Response:** The number of hierarchies is implicitly constrained by the sizes of $\mathbf{A}$ and $\mathbf{B}$. Specifically, selecting $n'$ effectively limits the number of internal nodes, thereby restricting the possible hierarchies that can be inferred.
We again thank the reviewer and are happy to address any remaining concerns.
---
Rebuttal Comment 1.1:
Comment: Thank the authors' response. My major concern is about the technical contributions. I keep my score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response. | Summary: This paper studies gradient-based methods for hierarchical clustering, presenting an interesting approach EPH which is both scalable and accurate. The approach uses a subgraph sampling approach for scalability and is interesting in its optimization of expected hierarchical clustering costs.
Strengths: This is a very well written paper with:
* Interesting methodological contributions
* Thorough empirical comparisons
* Clear writing and presentation of ideas
* Presentation of complexity
* Presentation of empirical comparisons on a wide variety of datasets
Weaknesses: Weaknesses of the paper include:
* Perhaps the methodological ideas are a bit on the simple side; however, this may be an impression given by a well written paper.
* it is a bit difficult for me to understand when a practitioner would choose this method over some algorithmic alternatives
Technical Quality: 3
Clarity: 3
Questions for Authors: Can you add / discuss more about the implications of binary vs non-binary hierarchies?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: adequately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable concerns. In the following, we address their comments.
**Comment:** It is a bit difficult for me to understand when a practitioner would choose this method over some algorithmic alternatives.
**Response:** EPH is particularly advantageous over alternatives in scenarios where the quality of the hierarchy is crucial and outweighs other factors. Unlike algorithmic alternatives, EPH has the unique ability to uncover hierarchies that deterministic methods are not able to reach, regardless of their computational resources. This makes EPH an ideal choice when the primary objective is to capture intricate data structures that are critical for the analysis.
Moreover, EPH offers flexibility through its adjustable hyperparameters, allowing practitioners to balance the trade-off between clustering precision and computational efficiency. This adaptability enables users to tailor the method to their specific needs, whether they require high-quality clustering or need to optimize for runtime.
**Comment:** Can you add / discuss more about the implications of binary vs non-binary hierarchies?
**Response:** Binary hierarchies offer fine-grained clustering and are often easier to analyze with certain theoretical models, e.g., to derive lower bounds for costs [3]. However, they can be computationally intensive. Non-binary hierarchies, in contrast, provide more flexibility by allowing nodes to have more than two children, which often aligns better with real-world clustering tasks. This flexibility can simplify the hierarchy and make it easier to interpret at higher levels. However, it is worth noting that binary hierarchies are required in certain tasks, such as jet physics and cancer genomics [2].
It is important to note that while the Dasgupta cost tends to favor binary branches [1], as seen in Figure 4, our approach is not restricted to binary hierarchies. EPH allows hierarchies to have any number of children, including binary hierarchies, if that structure is indeed optimal for the data.
We thank the reviewer again for their feedback and will discuss these points in the updated manuscript. We are happy to address any remaining concerns.
[1] **Dasgupta, Sanjoy.** "A cost function for similarity-based hierarchical clustering." In Proceedings of the forty-eighth annual ACM symposium on Theory of Computing, pp. 118-127. 2016.
[2] **Macaluso, Sebastian, Craig Greenberg, Nicholas Monath, Ji Ah Lee, Patrick Flaherty, Kyle Cranmer, Andrew McGregor, and Andrew McCallum.** "Cluster trellis: Data structures & algorithms for exact inference in hierarchical clustering." In International Conference on Artificial Intelligence and Statistics, pp. 2467-2475. PMLR, 2021.
[3] **Chami, Ines, Albert Gu, Vaggos Chatziafratis, and Christopher Ré.** "From trees to continuous embeddings and back: Hyperbolic hierarchical clustering." Advances in Neural Information Processing Systems 33 (2020): 15065-15076. | Summary: This paper proposes a method for hierarchical clustering by optimizing expected clustering scores (DAS/TSD) over the distribution of hierarchies. They show theoretically that their proposed optimization objective is consistent with their discrete counterparts: that is, the solution for their proposed optimization problem is equivalent to the solution of the intractable discrete optimization problem (this is in contrast with other continuous relaxations such as soft-DAS that do not have this property). They show that this problem can be solved well in practice in an end-to-end manner despite not being convex. The authors show that their method empirically outperforms other known methods.
Strengths: The paper is extremely well written and technically correct/solid. The problem background is explained well in a way that is accessible to a broader audience as well.
Weaknesses: The main issue with this paper is the significance of the contribution. Overall, the work seems like a marginal improvement over [1]. In my view, the differentiable sampling techniques used are very standard and the main novel contribution is their theoretical result about consistency of their optimization objective with that of the discrete objective. The experimental results also only show a marginal improvement over the work that it builds on [1]. Furthermore, based on the runtimes from Table 18, it seems that this marginal improvement comes at a great computational overhead compared to FPH [1]. Thus, in the context of the whole paper, while the theoretical result is certainly interesting, this seems like an incremental contribution.
[1] Zügner et al., End-to-End Learning of Probabilistic Hierarchies on Graphs.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1) In what cases would a practitioner prefer to use your method (EPH) over FPH? It seems that the former is much more computationally expensive. Are there applications that demand such precise clustering? (If I'm understanding correctly, because of the non-convexity, EPH cannot guarantee optimal solution despite consistency anyway).
2) Alternatively, are there reasons to believe future work can significantly speed up EPH to make it competitive (in the runtime sense) with other baselines?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: Yes, the authors addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback and address their concerns in the following.
**Comment:** The experimental results also only show a marginal improvement over the work that it builds on [1]. Based on the runtimes from Table 18, it seems that this marginal improvement comes at a great computational overhead compared to FPH [1]. [...]
**Response:** We appreciate the reviewer's feedback and would like to clarify the significance of our contribution. Our proposed method, EPH, introduces several crucial modifications to FPH, which result in notable improvements over FPH across multiple datasets, as shown in Table 10. For instance, on the Brain dataset, EPH achieves an over 20% improvement compared to the original FPH, and even after tuning FPH, EPH still provides up to a 12% improvement on the OpenFlight dataset. Furthermore, it is important to note that FPH can be applied to large vector datasets only thanks to our proposed subgraph sampling algorithm. EPH demonstrates superior scores to many baselines and a lower runtime than the other continuous method, HypHC. We believe these advancements elevate our contribution beyond incremental improvements and hope the reviewer recognizes the broader applicability and potential impact provided by EPH.
**Comment:** In what cases would a practitioner prefer to use your method (EPH) over FPH? [...] Are there applications that demand such precise clustering? EPH cannot guarantee optimal solutions despite consistency.
**Response:** While it is true that EPH cannot guarantee convergence to the global optimum, its consistency provides several advantages that make it preferable in specific scenarios. EPH ensures that any reduction in the objective function yields a probabilistic hierarchy whose encoded discrete hierarchies have consistent costs. Unlike FPH, which can diverge from an optimum, EPH's probabilistic approach ensures that the resulting optima contain discrete hierarchies with equivalent costs, allowing discrete hierarchies to be sampled easily.
EPH also provides a valuable trade-off between quality and runtime through adjustable hyperparameters, allowing practitioners to prioritize high-quality hierarchies when computational resources are secondary. In applications where precise clustering is crucial, EPH can uncover hierarchies that deterministic methods like FPH may miss, irrespective of the computational resources available.
A common example where precise clustering is crucial is predicting clinical prognosis from cancer genomics data [2, 3].
**Comment:** Alternatively, are there reasons to believe future work can significantly speed up EPH to make it competitive with other baselines?
**Response:** Our primary focus was on improving the quality of inferred hierarchies rather than optimizing for runtime. However, we see several opportunities for future work to significantly reduce the runtime of EPH. These include:
1. **Parallelizing computations** of different Gumbel samples, which in our current implementation are computed sequentially. By parallelizing, we can achieve a substantial reduction in runtime.
2. **Reducing the number of Gumbel samples** as a trade-off. EPH allows for flexibility in the number of samples needed to approximate the loss, with fewer samples leading to faster computations.
3. **Skipping validation steps** that are necessary for FPH but not for EPH, as EPH's loss is consistent with the discrete Dasgupta cost.
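To illustrate the sample-count trade-off in point 2 above, the expected loss can be approximated by averaging the cost over discrete samples drawn with the Gumbel-max trick. The following is a minimal NumPy sketch, not the authors' implementation; all function names are hypothetical:

```python
import numpy as np

def gumbel_max_sample(logits, rng):
    """Draw one discrete choice per row via the Gumbel-max trick."""
    gumbels = rng.gumbel(size=logits.shape)
    return np.argmax(logits + gumbels, axis=-1)

def estimate_expected_cost(logits, cost_fn, num_samples, seed=0):
    """Monte-Carlo estimate of the expected cost over the discrete
    distribution induced by the (unnormalized) logits."""
    rng = np.random.default_rng(seed)
    costs = [cost_fn(gumbel_max_sample(logits, rng))
             for _ in range(num_samples)]
    return float(np.mean(costs))
```

Fewer samples (`num_samples`) give a noisier but cheaper estimate, which is the lever EPH (minimized) pulls below.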
To demonstrate the feasibility of these improvements, we implemented two modified versions of EPH on the graph datasets: EPH (parallelized) and EPH (minimized). EPH (parallelized) maintains the same numerical results while reducing runtime by up to 80%. EPH (minimized), which reduces the number of Gumbel samples and skips validation steps, achieves a runtime reduction of up to 97.5%, even outperforming FPH in speed, as shown in the following table (runtimes in seconds).
| Method | Polblogs | Brain | Citeseer | Genes | Cora-ML | OpenFlight | WikiPhysics |
|-|-|-|-|-|-|-|-|
| FPH |452|547|345|373|644|592|667|
| EPH |3834|3402|2609|2848|3322|4419|6389|
| EPH (parallelized) | 1496 | 1066 | 749 | 521 | 1196 | 1654 | 2325 |
| EPH (minimized) | 111 | 100 | 72 | 79 | 85 | 116 | 157 |
While these modifications offer a significant speed improvement, they still achieve state-of-the-art results, as shown in the following table.
| Model | Polblogs | Brain | Citeseer | Genes | Cora-ML | OpenFlight | WikiPhysics |
|-|-|-|-|-|-|-|-|
| FPH | 238.65 | 425.70 | 76.03 | 182.91 | 257.42 | 355.61 | 482.40 |
| EPH (minimized) | **236.86** | **404.30** | **72.81** | **173.56** | **238.11** | **309.33** | **463.63** |
We are optimistic that future research, particularly in sampling approximation methods [4] and GPU-accelerated alias sampling [5], will continue to reduce EPH's runtime, allowing it to scale to even larger datasets. Finally, it is worth noting that EPH's flexibility allows it to be applied to any differentiable metric, enabling future adaptations and applications across a wide range of fields.
We again thank the reviewer for their feedback and hope that we have addressed their concerns satisfactorily.
[1] **Zügner et al.** "End-to-end learning of probabilistic hierarchies on graphs." In International Conference on Learning Representations. 2021.
[2] **Van't Veer et al.** "Gene expression profiling predicts clinical outcome of breast cancer." nature 415, no. 6871 (2002): 530-536.
[3] **Macaluso et al.** "Cluster trellis: Data structures & algorithms for exact inference in hierarchical clustering." In International Conference on Artificial Intelligence and Statistics, pp. 2467-2475. PMLR, 2021.
[4] **Paulus et al.** "Rao-blackwellizing the straight-through gumbel-softmax gradient estimator." arXiv preprint arXiv:2010.04838 (2020).
[5] **Wang et al.** "Skywalker: Efficient alias-method-based graph sampling and random walk on gpus." In 2021 30th International Conference on Parallel Architectures and Compilation Techniques (PACT), pp. 304-317. IEEE, 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for your time and effort in providing a detailed response! I really appreciate the new experiments that optimize the runtime of EPH to make it competitive with existing methods like FPH. It successfully demonstrates the practical applicability of the method.
I have modified my score to recommend acceptance.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response and for adjusting the score. | Rebuttal 1:
Rebuttal: We want to thank the reviewers for their valuable feedback and for acknowledging the clear writing (xKdT, AAtX, f9v5, YnBf), our extensive experiments (xKdT, f9v5, KKzQ, YnBf), and our methodological contribution including our theoretical analysis (xKdT, f9v5, YnBf).
We have addressed all comments and concerns in the individual responses provided under each review. In this general response, we want to summarize the new results and experiments we have conducted.
**Additional Visualizations**
In response to reviewer xKdT’s suggestion, we have included additional visualizations by applying the t-SNE algorithm to the vector datasets. These visualizations display the ground-truth labels, clusters inferred from the hierarchies, and their corresponding dendrograms. To enhance the clarity, we aligned the inferred clusters with the ground-truth labels using the Hungarian algorithm, allowing for an intuitive comparison. These visualizations are available in the supplementary PDF.
The additional qualitative evaluation demonstrates a strong alignment between the ground-truth labels and the clusters inferred by EPH. Furthermore, we observe that similarities in the t-SNE space are well-preserved within the dendrograms, indicating the effectiveness of our method.
**Runtime Improvements**
Following the recommendation of reviewer AAtX, we improved the runtime of EPH by parallelizing the computations of the different Gumbel samples. This optimization does not alter the results but substantially reduces the runtime. The following table shows the improved runtime (in seconds) on the graph datasets:
| Method | Polblogs | Brain | Citeseer | Genes | Cora-ML | OpenFlight | WikiPhysics |
|-|-|-|-|-|-|-|-|
| EPH | 3834 | 3402 | 2609 | 2848 | 3322 | 4419 | 6389 |
| EPH (parallelized) | 1496 | 1066 | 749 | 521 | 1196 | 1654 | 2325 |
We again thank all the reviewers and are confident that the feedback will help improve the revised manuscript. We are happy to address any remaining or upcoming concerns.
Pdf: /pdf/9b69206ce0ca08ce6ffdf2d7baa7e13dce2598bd.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper builds on a previous work [40] for probabilistic hierarchical clustering. Instead of using the “soft” version of Dasgupta cost and Tree-Sampling Divergence in the objective function, the paper proposes to use the expected value of the two cost functions and develops a sampling method for calculating the expected value of the cost functions. The proposed cost functions have been shown to have some interesting theoretical properties and better empirical clustering performance compared to the previous work.
Strengths: * The paper includes theoretical analysis of the proposed Exp-Das and Exp-TSD. It shows the optimal value of those cost functions for the probabilistic hierarchy is equivalent to the optimal value of their counter-parts (i.e. Das and TSD) for the discrete hierarchy. It further shows that the Soft-Exp is only a lower bound of Exp-Das and thus less ideal for use to find the hierarchy with optimal Das.
* The experiments include reasonable number of baseline methods and datasets. The proposed method EPH is shown to be empirically better than the FPH proposed in the previous work [40].
* The presentation is satisfactory and the paper is easy to follow.
* The code of the proposed method is/will be publicly available.
Weaknesses: * The presentation of the paper and the proposed hierarchical clustering solutions closely resemble a previous work [40] that proposes the generalisation of probabilistic hierarchy and the cost functions Soft-Das and Soft-TSD. The current paper proposes Exp-Das and Exp-TSD to replace the Soft-Das and Soft-TSD. The contribution looks incremental in this sense, even with some interesting and nicer theoretical properties over the previous work.
* The proposed method EPH looks only slightly better than the previous method FPH in the experiments. The improvement of EPH over FPH is not as significant as that over other baseline methods.
* The proposed solution may not converge to global optimum of Exp-Das nor Exp-TSD. This makes the theoretical results less relevant as it is not guaranteed to reach global optimum anyway. This may also explain why the empirical performance is not markedly better than the previous work.
* Only the best score of the hierarchies resulting from five random initialisation is reported in the experiments. It is hard to know how much variance there would be for the performance due to the random initialisation and whether the proposed method EPH is significantly better than the previous method FPH.
* The qualitative evaluation does not provide very meaningful insights into the performance. I suppose similar results could be obtained by other clustering methods provided that suitable features are available to compute the similarity between images. I think it would be more interesting if the hierarchy of clusters could be visualised instead.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Can we find the discrete hierarchy that gives the optimal Das or TSD after getting the probabilistic hierarchy with the optimal Exp-Das or Exp-TSD?
* Section 4.3 says that "we can use a closed-form expression", but then it says "no known solution exists". Do you have a closed-form expression of Exp-Das and Exp-TSD? If so, why don't you use it instead of the sampling method?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comprehensive feedback. In the following, we address their questions and remarks.
**Comment:** The contribution looks incremental in this sense, even with some interesting and nicer theoretical properties over the previous work.
**Response:** While our work builds upon the probabilistic hierarchies introduced in [1], it makes several contributions. First, we present new theoretical insights into the limitations of Soft-Das, providing a concrete example where FPH fails to find the optimal hierarchy. To address this issue, we propose expected scores, along with a framework for optimizing these scores. We further justify our approach by demonstrating that their optimal hierarchies align closely with their discrete counterparts. Additionally, we introduce an *unbiased* subgraph sampling algorithm, extending the applicability of both FPH and EPH to vector datasets. Finally, through a thorough empirical evaluation, we demonstrate that our method consistently outperforms existing baselines across both graph and vector datasets, highlighting the practical impact of our contributions.
**Comment:** The improvement of EPH over FPH is not as significant as that over other baseline methods.
**Response:** We would like to emphasize that we made various modifications to FPH, improving its results, as shown in Table 10 in the appendix. For example, EPH has a >20\% improvement on the Brain dataset over the original FPH. Even after tuning FPH, EPH reaches improvements of up to >12\% (OpenFlight).
**Comment:** The proposed solution may not converge to the global optimum of Exp-Das nor Exp-TSD.
**Response:** While EPH, like many complex optimization algorithms, may not always converge to the global optimum, its consistency ensures that any probabilistic hierarchy it finds encodes corresponding discrete hierarchies. The theoretical results remain relevant because they demonstrate that even non-optimal probabilistic hierarchies correspond to discrete hierarchies with consistent scores. This consistency is a valuable property not guaranteed by FPH or other baselines.
**Comment:** Only the best score of the hierarchies resulting from five random initialisation is reported in the experiments.
**Response:** The reason for reporting the best score is that non-deterministic methods can improve their results across multiple runs. To address the concern about variance, we have reported the standard deviations in Tables 16 and 17, which are less than 2% and are substantially lower than the standard deviations of other methods. Additionally, we provide the following table comparing the mean performance of EPH with FPH, demonstrating that even the average performance of EPH is superior:
Dasgupta costs:
Model | Polblogs | Brain | Citeseer | Genes | Cora-ML | OpenFlight | WikiPhysics | DBLP | Zoo | Iris | Glass |
|-|-|-|-|-|-|-|-|-|-|-|-|
FPH |238.65 | 425.70 | 76.03 | 182.91 | 257.42 | 355.61 | 482.40 | 31,687 | 56.13 | 69.13 | 122.00 |
EPH mean | **235.89** | **402.30** | **74.11** | **179.64** | **239.21** | **315.10**| **459.51** | **30,637** | **55.80** | **69.10** | **120.97** |
TSD scores:
Model | Polblogs | Brain | Citeseer | Genes | Cora-ML | OpenFlight | WikiPhysics | DBLP |
|-|-|-|-|-|-|-|-|-|
FPH |31.37|32.75|**69.38**|**67.78**|**59.55**|57.58|49.87|41.62|
| EPH mean | **31.60** | **33.87** | 69.34 | 67.67 | 59.38 | **57.63** | **50.04** | **42.69** |
These results further demonstrate the effectiveness of EPH and its robustness against randomness.
**Comment:** It would be interesting to visualise the hierarchy of the clusters.
**Response:** We thank the reviewer for their suggestion. We would like to highlight that we have already visualized hierarchies for the HSBMs in Figure 4 of the main paper and for the OpenFlight dataset in Figure 12 of the appendix.
In addition to these, we have now also visualized the hierarchies for the vector datasets, as described in the general comment.
**Comment:** Can we find the discrete hierarchy that gives the optimal Das or TSD after getting the probabilistic hierarchy with the optimal Exp-Das or Exp-TSD?
**Response:** Yes, the optimal discrete hierarchy can be easily obtained from the optimal probabilistic hierarchy. Proposition 4.1 in our paper proves that the optimal expected scores and discrete scores align. Since the expected cost can be expressed as a convex combination of discrete scores, any sampled discrete hierarchy from the optimal probabilistic hierarchy will be optimal. This means that once we obtain the optimal probabilistic hierarchy, we can sample any discrete hierarchy from it, knowing that it will have the optimal Das or TSD score.
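The convex-combination argument above can be illustrated with a toy numeric check (purely illustrative; the probabilities and costs below are made up):

```python
import numpy as np

# Toy example: a probabilistic hierarchy assigns probabilities to three
# discrete hierarchies with these made-up Dasgupta costs.
probs = np.array([0.6, 0.4, 0.0])
costs = np.array([5.0, 5.0, 9.0])

# The expected cost is a convex combination of the discrete costs...
expected_cost = float(probs @ costs)

# ...so it can only equal the minimum discrete cost if every hierarchy
# in the support of the distribution is itself optimal.
assert np.isclose(expected_cost, costs.min())
assert np.allclose(costs[probs > 0], costs.min())
```

Hence, once the optimal probabilistic hierarchy is found, any hierarchy sampled from its support attains the optimal discrete score.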
**Comment:** Section 4.3 says that "we can use a closed-form expression", but then it says "no known solution exists".
**Response:** We apologize for the confusion caused by the wording. The intention was to emphasize that while closed-form expressions might exist in theory, tractable solutions have yet to be discovered. The phrase "we can use a closed-form expression" should have highlighted the theoretical possibility rather than the practical availability. Currently, there are no known closed-form solutions for Exp-Das and Exp-TSD. Consequently, we use the sampling method.
We again thank the reviewer for their feedback and hope that we have satisfactorily addressed their concerns. If there are any remaining remarks, we are happy to address them.
[1] **Zügner et al.** "End-to-end learning of probabilistic hierarchies on graphs." In International Conference on Learning Representations. 2021.
---
Rebuttal 2:
Comment: Thank you for your response. I concur with some of the other reviewers that the contribution appears incremental, given its similarity to the original method [1]. While the novelty of the method (though incremental) might be sufficient for NeurIPS, I believe the paper needs to better articulate the strengths of its method before it can be accepted. Therefore, I am maintaining my score as a borderline accept.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for their response and are happy to address any remaining concerns. | null | null | null | null | null | null |
R$^2$-Gaussian: Rectifying Radiative Gaussian Splatting for Tomographic Reconstruction | Accept (poster) | Summary: The paper introduces R2-Gaussian, a framework for tomographic reconstruction using 3D Gaussian splatting (3DGS). This framework aims to address the limitations of traditional 3DGS in volumetric reconstruction, specifically for tasks like X-ray computed tomography (CT).
Strengths: - Identifying and addressing the integration bias in standard 3DGS formulation for volumetric reconstruction
- The proposed R2-Gaussian framework is developed with tailored Gaussian kernels, rectified projection techniques, and a CUDA-based differentiable voxelizer.
- The paper provides simulated X-ray validation, comparing the proposed method against state-of-the-art techniques
Weaknesses: - The proposed method shows results in experimental settings. However, its performance in real-world clinical or industrial scenarios is not thoroughly examined. For instance, real X-ray images are given to reconstruct the CT scan.
- I am not sure if 75, 50, and 25 views of X-rays are considered sparse-view reconstruction and if it is practical to have 75, 50, or 25 views of X-rays for CT reconstruction.
- In the proposed method, the kernel formulation removes view-dependent color. However, it cannot model the scattering effect in X-ray.
- In Fig. 7, it is unclear how to get X-3DGS slices and what "queried from three views" means in detail. Why not implement the voxelization on X-3DGS (or vanilla 3DGS) for a fair comparison?
Technical Quality: 2
Clarity: 2
Questions for Authors: - After the voxelization from Gaussians, can the final density volume be compatible with traditional CT scans and be viewed in CT software?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed review.
> Q4.1: The performance in real-world clinical or industrial scenarios is not thoroughly examined. For instance, real X-ray images are given to reconstruct the CT scan.
We further evaluate our method on real-world data. We use FIPS [b], a public dataset providing real **2D** X-ray projections. FIPS includes three objects (pine, seashell, and walnut). Each case has 721 projections in the range of $0^{\circ}\sim 360^{\circ}$ captured by Hamamatsu Photonics C7942CA-22. Since ground truth volumes are unavailable, we use FDK to create pseudo-ground truth CT volumes with all views and then subsample 75/50/25 views for sparse-view experiments. We report the quantitative and qualitative results in Tab. 1 and Fig. 1 in the attached PDF file. Our method outperforms baseline methods by a large margin in 75- and 50-view scenarios. In 25-view, our method slightly underperforms IntraTomo but is 11$\times$ faster. Overall, our method shows superior performance and efficiency in the presence of real-world noise and scattering effects. We will include these results in the revised manuscript.
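One plausible way to subsample the 721 FIPS projections down to 75/50/25 views is to pick indices at a constant angular interval. The paper does not specify the exact scheme, so the helper below is an assumption for illustration:

```python
import numpy as np

def subsample_views(num_total, num_keep):
    """Pick `num_keep` view indices evenly spread over `num_total` views."""
    return np.linspace(0, num_total - 1, num_keep).round().astype(int)

# e.g., 25 of the 721 projections, evenly spaced over 0-360 degrees
indices = subsample_views(721, 25)
```

Even spacing keeps the sparse-view scans consistent with the full angular range used for the pseudo-ground truth.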
> Q4.2: I am not sure if 75, 50, and 25 views of X-rays are considered sparse-view reconstruction and if it is practical to have 75, 50, or 25 views of X-rays for CT reconstruction.
* Practicality: As mentioned by Reviewer sKx6, in industrial and medical applications, CT machines typically take hundreds to thousands of X-ray projections for high-quality details [a]. Therefore, configuring existing CT machines to capture 75/50/25 views is practical and convenient: one simply sets a different scanning interval.
* Rationale: The community considers fewer than 100 projections as sparse-view CT (SVCT). We list the numbers of projections used in some published papers, as shown in Tab. D. These papers use 20-180 views. Accordingly, we set our projections to 75, 50, and 25 views. Additionally, there are works investigating extremely sparse-view CT (ESVCT), which uses 2-10 views [58,30,10]. ESVCT is a severely ill-posed problem that cannot be solved without fine-grained prior knowledge, such as pretraining with external datasets. Therefore, ESVCT is out of the scope of our study, as we only use projections (completely self-supervised). Overall, our experimental setting aligns with previous work and should be considered sparse-view.
*Table D. Number of projections used in published papers.*
| Paper | Publisher | Number of projections |
| -------------- | --------- | --------------------- |
| DD-NET [b] | TMI'20 | 60-180 |
| IntraTomo [61] | ICCV'21 | 20 |
| NEAT [c] | TOG'22 | 25-50 |
| NAF [62] | MICCAI'22 | 50 |
| SAX-NeRF [7] | CVPR'24 | 50 |
> Q4.3: It cannot model the scattering effect in X-ray.
We follow most CT reconstruction work [13,2,50,61,62,7], assuming that the target radiodensity field is isotropic and treating scattering as a noise source on the 2D detector.
Although we do not explicitly model scattering effects, we take them into consideration in the experiments. When generating X-ray projections for synthetic datasets, we model scattering noise with a Poisson distribution. We also evaluate our method on real-world data that contains scattering effects (Q4.1). All results demonstrate our method's superior performance and robustness to scattering noise.
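Such Poisson noise on synthetic projections is commonly simulated via Beer-Lambert attenuation followed by photon-counting noise. The sketch below is illustrative only, not the paper's exact pipeline; the incident photon count is an assumption:

```python
import numpy as np

def noisy_projection(line_integrals, incident_photons=1e6, seed=0):
    """Simulate a noisy X-ray projection: Beer-Lambert attenuation
    followed by Poisson photon-counting noise on the detector."""
    rng = np.random.default_rng(seed)
    expected_counts = incident_photons * np.exp(-line_integrals)
    counts = np.maximum(rng.poisson(expected_counts), 1)  # avoid log(0)
    return -np.log(counts / incident_photons)  # noisy line integrals
```

Lowering `incident_photons` increases the relative noise, mimicking stronger scattering/quantum noise on the detector.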
> Q4.4: It is unclear how to get X-3DGS slices and what "queried from three views" means in detail.
We will improve Fig. 7 captions and relevant descriptions for better understanding.
* X-3DGS slice: After recovering the 3D density of each Gaussian (L251-252), we use the same voxelizer (Sec. 4.2.2) as R$^2$-Gaussian to extract CT volumes. We then show slices of these volumes in Fig. 7 to demonstrate the reconstruction quality.
* "Queried from three views" means that we show reconstruction results from three different views to demonstrate the view inconsistency in X-3DGS.
> Q4.5: Why not implement the voxelization on X-3DGS (or vanilla 3DGS) for a fair comparison?
For X-3DGS, we implement the same voxelizer (Sec. 4.2.2) in R$^2$-Gaussian to extract a CT volume. We will clarify it in the revised manuscript.
> Q4.6: After the voxelization from Gaussians, can the final density volume be compatible with traditional CT scans and be viewed in CT software?
Yes, the reconstructed volume is compatible with traditional CT scan viewers. We show a screenshot of inspecting reconstructed volumes with the Weasis DICOM medical viewer in Fig. 4 (PDF).
> Q4.7: Ethics review.
All data used in our experiments are from open-source datasets. We have properly cited the relevant references (Appx. B) and adhered to the respective data licenses.
---
**Reference**
*[a] Villarraga-Gómez, Herminso, and Stuart T. Smith. "Effect of the number of projections on dimensional measurements with X-ray computed tomography." Precision Engineering 66 (2020): 445-456.*
*[b] Zhang, Zhicheng, et al. "A sparse-view CT reconstruction method based on combination of DenseNet and deconvolution." _IEEE transactions on medical imaging_ 37.6 (2018): 1407-1417.*
*[c] Rückert, Darius, et al. "Neat: Neural adaptive tomography." _ACM Transactions on Graphics (TOG)_ 41.4 (2022): 1-13.*
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. The authors have addressed my concerns. I will raise my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable feedback and the time you took to review our paper. We’re pleased that our response addressed your concerns. We will ensure these clarifications are incorporated into the final version of the paper. | Summary: The paper aims to achieve high tomographic reconstruction performance with a limited number of views in a time-efficient manner.
To this end, the paper modifies 3DGS for X-ray projection by adjusting the rendering equation, correcting 2D projection errors, and using voxelizers for regularization.
The experiments demonstrate the efficacy of the proposed method.
Strengths: First of all, the paper is well written.
The problem in the current tomographic representation is clearly stated, and the authors' objectives are sufficiently addressed.
The overall structure is easy to understand, and the ablation study covers most of the arising questions.
Weaknesses: One critical weakness of this paper is the existence of prior work using 3DGS for X-ray projection.
Although the paper seems structurally well-written, the paper seems less novel due to the presence of X-Gaussian.
X-Gaussian, which has been accepted to ECCV 2024, employs a similar method.
More importantly, X-Gaussian achieved a PSNR of 43 in Human Organ reconstructions, whereas this paper (R2 Gaussian) achieved a PSNR of 36, despite slightly different experimental settings.
If the authors provide persuasive explanation on the novelty of this paper, I am willing to raise my score.
Additionally, it would be helpful to augment the related works and baseline models.
For example, C^2RV (CVPR 2024) is another recent tomographic representation model.
Technical Quality: 4
Clarity: 4
Questions for Authors: In the SAX-NeRF paper, the performance gap between SAX-NeRF and NAF is significant.
However, in the R^2 Gaussian paper, the performance gap between them is not significant.
Could you elaborate on why this happens?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The paper clearly states the limitations and the potential societal impact of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed review. The comments and suggestions are helpful in improving our paper.
> Q3.1: Novelty comparison w.r.t. X-Gaussian.
Our method demonstrates considerable novelty compared to the concurrent work X-Gaussian for the following reasons:
* **Broader task scope**: Our R$^2$-Gaussian is designed for both X-ray view synthesis and direct 3D CT reconstruction, whereas X-Gaussian only supports 2D X-ray novel view synthesis.
* **Theory-supported model design**: Our R$^2$-Gaussian has successfully extended 3DGS to 3D CT reconstruction with a theoretically sound approach, including new Gaussian kernels, new splatting equations, and a voxelization strategy. All of these are grounded with careful theoretical derivations. In contrast, X-Gaussian empirically modifies and extends 3DGS for X-ray view synthesis applications, with limited novel theoretical contribution.
* **Theory contribution**: Our method provides novel and original theoretical results, including the new derivation of X-ray rasterization and the identification (and the remedy) of the previously overlooked integration bias in the standard 3DGS technique. X-Gaussian, on the other hand, does not offer theoretical contributions in this regard, despite that it indeed represents the first successful yet empirical adaptation and application of 3DGS to X-ray view synthesis.
* **Efficient CT reconstruction**: Our method can directly output 3D CT volumes. In contrast, X-Gaussian augments novel-view projections first, and then relies on other existing CT algorithms for CT reconstruction.
In summary, our method offers notable technical and theoretical contributions compared to X-Gaussian. The methodological comparison between the two methods has been discussed in L89-94. We will further highlight our novelty and contributions w.r.t. X-Gaussian in the revised manuscript.
> Q3.2: X-Gaussian achieved a PSNR of 43, whereas this paper achieved a PSNR of 36.
Actually, the reported 43 (in Tab. 1 of the X-Gaussian paper) is the **2D PSNR** of novel-view image rendering quality, rather than the **3D PSNR** of the CT volume reconstruction quality. As a matter of fact, X-Gaussian only reported their 3D PSNR as 30.56 dB (from 5+95 view, as shown in Tab. 2 in the X-Gaussian paper), which is significantly lower than our 3D PSNR of 36 (or 36.89 dB, for human organ, in Tab. 3 of our paper).
> Q3.3: It would be helpful to augment the related works and baseline models, such as C$^2$-RV (CVPR'24).
Thank you for this suggestion. We will include these SOTA related works in our revised manuscript. These will add value and authority to our current paper.
Regarding baseline selection, we did not include supervised learning methods like C$^2$-RV because they require external datasets for pre-training. Our primary focus is on evaluating the method's representation capability for arbitrary objects without pre-training. For a fair comparison, we choose self-supervised learning methods that require only X-ray projections of objects (L483-484). To our knowledge, SAX-NeRF [7] (CVPR'24) is the latest SOTA work, and we have included it in our baseline methods. Experiments show that our method also outperforms SAX-NeRF, with a 0.93 PSNR increase and 78$\times$ faster training speed.
Please also note that C$^2$-RV has not released their code and models (empty GitHub repository). We tried to contact the authors but received no response. Due to the limited time available for rebuttal, it is unfortunate that we could not reproduce their method and compare it with our method experimentally.
> Q3.4: Could you elaborate on the performance gap between SAX-NeRF papers and your paper?
We use the official code of SAX-NeRF [7] and NAF [62] to perform experiments without changing the network or hyperparameters. We show human organ results from the SAX-NeRF paper [7] and our paper, both of which use the same data source (Tab. C). SAX-NeRF performs consistently in both papers, while NAF in our paper has a higher PSNR. Nevertheless, both the SAX-NeRF paper and our paper conclude that SAX-NeRF achieves better results than NAF. Therefore, the slight performance inconsistency does not harm the arguments made in our paper.
*Table C. PSNR values in SAX-NeRF paper [7] and our paper.*
| | NAF in [7] | SAX-NeRF in [7] | NAF in ours | SAX-NeRF in ours |
| ------- | ---------- | --------------- | ----------- | ---------------- |
| Jaw | 34.14 | 35.47 | 35.01 | 35.37 |
| Foot | 31.63 | 32.25 | 31.65 | 31.90 |
| Head | 36.46 | 39.70 | 38.90 | 39.51 |
| Chest | 33.05 | 34.38 | 33.99 | 34.45 |
| Average | 33.71 | 35.44 | 34.85 | 35.29 |
---
Rebuttal 2:
Comment: I appreciate the authors for their detailed rebuttal. It resolved most of the critical concerns. Therefore, as long as the authors incorporate the explanations from the rebuttal into the final version, I will raise my score accordingly.
---
Rebuttal Comment 2.1:
Comment: Thank you for your positive feedback and for the effort you put into reviewing our rebuttal. We are glad that our explanations have addressed most of your concerns. We will ensure that these clarifications are fully integrated into the final version of our paper. | Summary: ### Motivation
- The authors propose to adapt 3D Gaussian Splatting (3DGS) to sparse-view tomographic reconstruction, i.e., to recover a radiodensity 3D volume from a small set of X-ray images and corresponding sensor information. This is relevant for various clinical and industrial applications.
### Contributions
- Their _R2-Gaussian_ model iterates over existing 3DGS solutions tuned for XR/CT imaging, proposing a 3DGS initialization scheme better suited for tomographic reconstruction.
- The authors also correct an integration bias in 3DGS (meant for faster image inference but causing ambiguities in volumetric reconstruction) and provide the corresponding CUDA patch.
- The proposed _R2-Gaussian_ includes other adaptations (custom densification parameters, voxel-based regularization, simplified isotropic kernels), resulting in an end-to-end CT reconstruction system.
### Results
- The authors provide extensive qualitative evaluation and quantitatively compare to other NeRF-based and traditional CT reconstruction methods, showing that their method provides a better trade-off between volume accuracy and reconstruction time.
### Relevance
- Effort in applying 3DGS to XR/CT imaging has grown over the past year [6, 27, 39], as 3DGS appears to be a well-suited representation for such applications (due to its compactness, fast convergence, etc.). This work is a meaningful iteration and could benefit the community.
Strengths: _(somewhat ordered from most to least important)_
### S1. Convincing Comparative Evaluation and Qualitative Results
- The quantitative comparison to state-of-the-art CT reconstruction methods appears convincing, with the proposed solution demonstrating a better trade-off between volume accuracy and reconstruction time.
- The authors provide many meaningful qualitative results to illustrate theoretical contributions, to highlight the benefits of their method, but also to showcase its limitations. This makes reading the paper all the more interesting.
- An ablation study w.r.t. some of the key contributions and w.r.t. some hyperparameters is also provided.
- Though limited in number (15 volumes), the authors evaluate on different categories (animal, vegetal, and synthetic targets).
### S2. Iterative yet Meaningful Contributions Towards 3DGS for XR/CT
- The discussion w.r.t. the integration bias in vanilla 3DGS is interesting, and the technical solution brought by the authors appears valuable to the community. As mentioned in the paper, their corrected CUDA implementation could benefit other 3DGS works targeting volumetric reconstruction.
- The authors propose an initialization scheme dedicated to volumetric tomography, as usual 3DGS initialization techniques (e.g., SfM) are not applicable here. This is a relevant contribution, properly described and evaluated (qualitative + quantitative evaluation).
- The proposed system, tackling xrays-to-CT reconstruction in an end-to-end differentiable manner, is indeed novel. Existing 3DGS models [6, 27, 39] for XR/CT imaging focus on digitally-reconstructed-radiograph (DRR) novel-view synthesis (NVS) rather than CT reconstruction.
### S3. Sound Theory and Reproducibility
- Background theory is well described by the authors, and the scientific/technical insight of the authors w.r.t. the identified integration bias could benefit the community.
- The authors provide their model implementation, which appears sound and well-structured (note that I did not try to run the code, but had a look at key files).
### S4. Detailed Discussion of Limitations and Contributions
- The authors put significant effort into discussing and illustrating some of their method's limitations (needle-like artifacts inherent to 3DGS, varying convergence time, limited extrapolation ability, etc.), as well as summarizing its impact (possible clinical/industrial applications, benefit of their CUDA code to the CV community, etc.) in Appendices G and H.
### S5. Well-Written and Illustrated Paper
- Overall, the paper is nicely structured, written, and illustrated. E.g., Figures 2-4 are helpful to understand the methodology at a glance.
Weaknesses: _(somewhat ordered from most to least important)_
### W1. Lack of Consideration for Real-World Noise and Anisotropic Effects
- The authors claim that "X-ray attenuation depends only on isotropic density" [L140] to justify their radiodensity-based model, but this is not entirely correct. While most models generating digitally reconstructed radiographs (DRRs) from CT volumes indeed consider x-ray attenuation as an isotropic phenomenon, this is an approximation. Some x-ray transport effects, such as Compton scattering, are actually anisotropic (but because CT volumes do not inherently contain the material information necessary to accurately simulate Compton scattering, DRR models ignore the anisotropy part) [a].
However, the authors do claim that they trained their model on DRRs generated using TIGRE [5] configured to simulate Compton scattering [L217-224]. Does it mean that the authors preprocessed the CT volumes to replace the attenuation values by material information (e.g., mapping HU values to a set of predefined materials)? More information on the data generation process would be helpful here.
If indeed the model was trained on DRRs containing anisotropic residual noise (c.f. Compton effect's residual impact on XR imaging), but the proposed algorithm itself only considers isotropic attenuation, how does it impact the results? E.g., it could be interesting to generate 2 sets of input DRRs (one generated with the approximated/simplified attenuation model and one more realistic) and compare the final accuracy of the reconstructed CT volumes.
- The fact that the method is only applied to synthetic inputs (DRRs generated by TIGRE) rather than real, usually noisier, X-ray images is also problematic. The paper would benefit from a real-world evaluation, or at least a discussion on why it was not performed.
### W2. Lack of References/Comparisons to SOTA on XR-3DGS
- The authors mention some of the existing 3DGS solutions applied to CT/XR imaging (e.g., X-Gaussian [6], GaSpCT [39], Li et al.'s model [27]) [L89-94] but do not perform any form of comparison with those. The lack of qualitative/quantitative comparison is fair (the authors argue that these models "cannot generate CT models" [L91-92], which is somewhat true; though a comparison on the XR-NVS task could have made this paper stronger). But I would argue that the authors should have better referenced some of these works in the Methodology. E.g., even if less formalized, radiative Gaussians are already presented in X-Gaussian [6]; and even though it is performed in the pixel domain rather than the voxel one, GaSpCT [39] already proposes a total variation loss to regularize their XR-3DGS.
While I still believe that the proposed work is a valuable iteration over these works (by better formalizing and adapting the Gaussian properties and rasterization to CT data), I think the authors should be more transparent w.r.t. the SOTA.
- References w.r.t. total variation (TV) theory are also missing, making it hard to contextualize the scope of the authors' contribution w.r.t. the objective function.
### W3. Somewhat Overstated Contributions
- The CUDA implementation of the radiodensity voxelization seems to be more of a technical feat than a scientific contribution. While possibly valuable to the community, I do not see any novelty in this module (maybe not as GPU-optimized, but differentiable point-cloud-to-voxel-grid tools already exist, e.g., in PyTorch3D).
- Changes to the adaptive control are unclear/minor, according to the authors' descriptions [L210-214] (i.e., changing the size threshold w.r.t. pruning large Gaussians, editing the density of cloned/split Gaussians).
- The positive impact of the TV regularization does not appear that statistically significant (+0.32dB for PSNR, +0.009 for SSIM, +1m33s for convergence). Maybe some qualitative results could help grasp its contribution?
### W4. Limited Dataset Size
- The quantitative evaluation is performed on only 15 samples, even though varied. Larger CT datasets are available, e.g. CTPelvic1K [b].
### W5. Limited Impact of Integration Bias Correction (?)
- The authors claim that "this integration bias, though having a negligible impact on imaging rendering, leads to significant inconsistency in density retrieval" [L184-186], but they only provide qualitative imaging results (Fig. 6) to justify their un-biasing contribution. Additional results could better contextualize the corresponding claims.
----
### Additional Reference:
[a] Gao, Zhongpai, et al. "DDGS-CT: Direction-Disentangled Gaussian Splatting for Realistic Volume Rendering." arXiv preprint arXiv:2406.02518 (2024).
[b] Liu, Pengbo, et al. "Deep learning to segment pelvic bones: large-scale CT datasets and baseline models." International Journal of Computer Assisted Radiology and Surgery 16 (2021): 749-756.
Technical Quality: 3
Clarity: 4
Questions for Authors: _see **Weaknesses** for key questions/suggestions._
### Q1. Typo?
- [L223] Do the authors mean "Compton scatter" rather than "ponton scatter"?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Some limitations and societal impacts are discussed in detail (see **S4** above).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your positive review and valuable feedback.
> Q2.1: Did the authors preprocess the CT volumes?
Yes, we convert raw volumes from HU to attenuation coefficients. Following [62, 7, 27], we then normalize voxel values to [0, 1] for balanced evaluation across modalities. We will add more details in the revised manuscript.
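For intuition, this preprocessing can be sketched as follows (our illustration, not the authors' pipeline; `mu_water` is an assumed placeholder whose real value depends on beam energy). It applies the standard inverse of the Hounsfield definition, then min-max normalization to [0, 1]:

```python
def hu_to_mu(hu, mu_water=0.02):
    # Inverse of HU = 1000 * (mu - mu_water) / mu_water.
    # mu_water = 0.02 mm^-1 is an illustrative placeholder, not a calibrated value.
    return mu_water * (1.0 + hu / 1000.0)

def normalize(values):
    # Min-max normalization of voxel values to [0, 1].
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

mus = [hu_to_mu(h) for h in (-1000.0, 0.0, 1000.0)]  # air, water, denser material
print(normalize(mus))  # -> [0.0, 0.5, 1.0]
```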
> Q2.2: How does anisotropic residual noise impact the results?
>
> Q2.3: The paper would benefit from a real-world evaluation.
We address these two questions together since they both relate to scattering effects. We agree that anisotropic effects, such as Compton scattering, occur in real-world X-ray imaging. Following most CT reconstruction work [13,2,50,61,62,7], we assume the target radiodensity field to be isotropic and treat scattering as a noise source on the detector.
We do consider scattering in the experiments.
* When preparing X-ray projections, we follow conventions [a] and model scattering noise with a Poisson distribution.
* As requested, we further evaluate our method on real-world data containing scattering effects. We use FIPS [b], a public dataset providing real **2D** X-ray projections. FIPS includes three objects (pine, seashell, and walnut). Each case has 721 projections in the range of $0^{\circ}\sim 360^{\circ}$. Since ground truth volumes are not available, we use FDK to create pseudo-ground truth with all views and then subsample 75/50/25 views for sparse-view experiments. We report the quantitative and qualitative results in Tab. 1 (PDF) and Fig. 1 (PDF). Our method outperforms baseline methods in 75- and 50-view scenarios. In 25-view, our method slightly underperforms IntraTomo but is 11$\times$ faster. Overall, our method shows superior performance and efficiency in the presence of real-world noise and scattering effects.
> Q2.4: The authors should better reference existing X-ray 3DGS works.
We will enhance the comparison with existing X-ray 3DGS methods by adding more details in the related work section. Please note that all X-ray 3DGS works were still preprints/under review at the NeurIPS submission deadline.
We would also like to clarify the following points regarding the comments:
* Radiative Gaussian: Although X-Gaussian and our method coincidentally use the same term, the motivations and formulations are quite different.
* X-Gaussian replaces view-dependent spherical harmonics with a view-independent feature vector based on the isotropic assumption. It retains color and opacity, which do not physically represent the radiodensity field. Besides, it uses alpha-blending, which contradicts the unordered nature of X-ray imaging.
* We define the Gaussian kernel as a local radiodensity field and derive new rendering equations (Eq. 7), demonstrating that summation should be used instead of alpha-blending (L176-178). See Q3.1 (Reviewer P8A3) for more details.
* Total variation (TV): Our work does not claim contributions to TV regularization. Instead, we use TV to demonstrate the possibility of applying voxel-based supervision to Gaussians, thanks to the differentiable voxelizer. To our knowledge, we are the first to do so.
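The ordering argument above can be made concrete with a toy sketch (ours, with made-up opacities and values): front-to-back alpha compositing depends on the order of primitives along a ray, whereas the plain summation used for X-ray rasterization does not.

```python
def alpha_blend(prims):
    # Front-to-back alpha compositing; prims is a list of (opacity, value) pairs.
    out, transmittance = 0.0, 1.0
    for alpha, value in prims:
        out += transmittance * alpha * value
        transmittance *= 1.0 - alpha
    return out

def xray_accumulate(densities):
    # Summation-style accumulation along a ray: order-invariant by construction.
    return sum(densities)

prims = [(0.8, 1.0), (0.3, 2.0), (0.5, 3.0)]
print(alpha_blend(prims), alpha_blend(prims[::-1]))  # results differ
print(xray_accumulate([1.0, 2.0, 3.0]) == xray_accumulate([3.0, 2.0, 1.0]))  # True
```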
> Q2.5: References w.r.t. TV are missing.
We will add reference [c] w.r.t. TV.
> Q2.6: I do not see novelty in the radiodensity voxelization.
We summarize the technical novelty of our voxelizer as being the first differentiable, CUDA-accelerated voxelizer for 3D Gaussians. The voxelizer offers opportunities to apply other voxel-based losses (such as SDF supervision) to 3D Gaussians, which can benefit the community.
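For intuition only (not the paper's CUDA implementation; function and parameter names are our own), the core query such a voxelizer performs can be sketched as evaluating, at each voxel center, a sum of isotropic Gaussian densities:

```python
import math

def voxelize(gaussians, voxel_centers):
    """Evaluate a density volume as a sum of isotropic 3D Gaussian kernels.

    gaussians: list of (center, sigma, density) with center a 3-tuple.
    voxel_centers: list of 3-tuples at which the field is queried.
    """
    volume = []
    for v in voxel_centers:
        value = 0.0
        for center, sigma, density in gaussians:
            r2 = sum((vi - ci) ** 2 for vi, ci in zip(v, center))
            value += density * math.exp(-r2 / (2.0 * sigma ** 2))
        volume.append(value)
    return volume

# One Gaussian of peak density 2.0 at the origin, queried at two voxel centers:
grid = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
print(voxelize([((0.0, 0.0, 0.0), 1.0, 2.0)], grid))
```

In a real system each step here would be differentiable w.r.t. the Gaussian parameters (e.g., via autograd), which is what allows voxel-domain losses such as TV to flow back to the Gaussians.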
> Q2.7: Changes to the adaptive control are unclear/minor.
We made minor modifications to adaptive control to suit X-ray imaging. We do not intend to claim a contribution regarding adaptive control. We will clarify it in the revised manuscript.
> Q2.8: Some qualitative results could help grasp the contribution of TV regularization.
We show qualitative results with and without TV in Fig. 12. It is clear that adding TV promotes smoothness and homogeneity. However, needle-like artifacts still persist, consistent with the general acknowledgment that high-level priors such as TV do not significantly improve performance. As mentioned in Q2.6, we do not claim contributions to TV but rather to a novel strategy for applying 3D losses.
> Q2.9: Larger CT datasets are available.
While there are larger CT datasets, such as CTPelvic1K, they only cover human organs with similar structures and materials. We focus on the method's representation capability for arbitrary objects. Therefore, we chose data across various modalities, aiming at diversity rather than quantity. Besides, NeRF-based baseline methods typically require hours of training (SAX-NeRF needs 13 hours); we unfortunately do not have sufficient resources to support experiments at the thousand-sample scale.
Compared with previous work, our dataset has the same size as SAX-NeRF (15), and is larger than NAF (5) and X-Gaussian (5).
> Q2.10 Additional results could better contextualize the claims of integration bias correction.
We show more quantitative and qualitative results regarding integration bias in Tab. 2 (PDF) and Fig. 2 (PDF). Our method achieves better results than X-3DGS in both 2D and 3D. This suggests that correcting integration bias improves both image rendering and volume reconstruction in CT. Note that this conclusion slightly differs from L253-254, and we will update it in the revised manuscript.
> Q2.11: Typo: "Compton scatter" or "ponton scatter"?
We use Poisson to model photon statistics on the detector, which also includes Compton scattering. We will use "photon statistics" or "Compton scattering" for clarity.
---
**Reference**
*[a] Zhu, Lei, et al. "Noise suppression in scatter correction for cone‐beam CT." Medical physics (2009).*
*[b] Siltanen, Samuli, et al., "FIPS: Open X-ray Tomographic Datasets.", Zenodo (2022)*
*[c] Rudin, Leonid I., et al. "Nonlinear total variation based noise removal algorithms." Physica D: nonlinear phenomena (1992).*
---
Rebuttal 2:
Comment: I thank the authors for their thorough response, as well as my fellow reviewers for their insightful comments. I appreciate the author's effort to address my (overall minor) concerns and questions, and I lean towards maintaining my current score (_accept_).
I do hope that, were the paper accepted, the authors would account for the reviewers' remarks, as summarized by the authors in their global response, e.g.:
- **Including results on real-world data** c.f. _DGKr_ and _w7Ci_ (me). These new experiments gathered by the authors demonstrate the real-world applicability of their work (c.f. Tab. 1 and Fig. 1 of rebuttal PDF).
- **Better discussing/referencing existing XR-GS works** c.f. _P8A3_ and _w7Ci_ (me). The results and discussion provided in the authors' response would benefit the readers, by better contextualizing their work.
I would also suggest:
- **Clarifying the isotropic simplification at the core of some claims/contributions**. E.g., the authors' claims that "[X-Gaussian] _uses alpha-blending, which contradicts the unordered nature of X-ray imaging_" [response] and that "_we can individually integrate each 3D Gaussian to rasterize an X-ray projection_" [L162 + Equation 5] are only correct in the context of the isotropic simplification of X-ray imaging. I.e., if actual physics effects, such as Compton scattering, were to be considered, then the ordering of the Gaussians would matter (see preprint [i] for contributions to XR-GS orthogonal to R$^2$-Gaussian's, as well as [ii, iii] w.r.t. why ordering matters in GS-based scattering simulation). I do agree with the authors that most CT reconstruction models adopt the isotropic simplification; and, therefore, that their rasterization simplification is legitimate. However, readers should be more explicitly made aware of the basis of the authors' claims (isotropic approximation [L100, L140, L162]).
- **Clarifying the benefits of proposed GS voxelizer** compared to existing solutions, e.g., PyTorch3D PC-to-voxel solution, which is also CUDA-based and differentiable but may require a few tweaks to work on Gaussians.
- **Including discussed references**, e.g. [c] (TV).
-----
#### Reference:
[i] Gao, Zhongpai, et al. "DDGS-CT: Direction-Disentangled Gaussian Splatting for Realistic Volume Rendering." arXiv preprint arXiv:2406.02518 (2024).
[ii] Zhou, Yang, Songyin Wu, and Ling-Qi Yan. "Unified Gaussian Primitives for Scene Representation and Rendering." arXiv preprint arXiv:2406.09733 (2024).
[iii] Condor, Jorge, et al. "Volumetric Primitives for Modeling and Rendering Scattering and Emissive Media." arXiv preprint arXiv:2405.15425 (2024).
---
Rebuttal Comment 2.1:
Comment: Thank you for your careful review of our rebuttal. Your detailed comments are crucially helpful in improving our paper, and we sincerely appreciate your support. We will incorporate the reviewers' remarks into the final version. We will also thoroughly address the points you raised, especially clarifying the isotropic simplification and the benefits of our proposed voxelizer. | Summary: This paper presents a 3D reconstruction method for sparse-view computed tomography using 3D Gaussian Splatting. The core contribution is the reformulation of the volumetric rendering equation to include view-independent central density estimation. Additionally, the paper introduces a differentiable voxelizer that converts a set of 3D Gaussians into a voxel grid of densities, proving effective in computed tomography tasks.
Strengths: - The paper reveals the view-dependent integration bias in 3D Gaussian Splatting (3DGS), which, to my knowledge, has not previously been reported in the community. This discovery may have a high impact on computer vision.
- While this is not the first paper to apply 3DGS to computed tomography, it is the first to accurately reconstruct 3D volumes with an image formation tailored specifically for this task.
- The proposed method is well evaluated and compared against other baseline methods.
- Exposition is clear. The paper really reads well.
Weaknesses: I don’t see any particular weakness of the paper.
Technical Quality: 4
Clarity: 4
Questions for Authors: I have a couple of questions.
The paper focuses on sparse-view computed tomography. I am curious about how the method would perform in dense-view scenarios. In industrial CT scanning, capturing a few thousand projections for high-quality microscale geometric details is not uncommon. With a sufficient number of projections, the FDK algorithm typically performs well. How does the proposed method compare to FDK when a large number of projections are used? At what point might it start to underperform, if at all? Would it still outperform in dense-view scenarios?
Additionally, what are the implications of correcting the integration bias in image-based 3D reconstruction tasks? Would this correction lead to improved 3D reconstructions as well? If not, why?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The only limitation I can think of is the scope of this work; computed tomography represents a relatively niche area in the fields of machine learning and computer vision.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your recognition of our work and your valuable feedback.
> Q1.1: How does the proposed method compare to FDK when a large number of projections are used?
We further evaluate FDK and our method with 500 to 2000 views. Results in Tab. A show that our method outperforms FDK by a large margin in all settings. Additionally, our method achieves a peak PSNR of around 39.85 dB, while FDK plateaus at approximately 37 dB.
*Table A. Quantitative results of FDK and our method under dense-view scenarios.*
| No. views | PSNR (FDK)$\uparrow$ | SSIM (FDK)$\uparrow$ | PSNR (Ours)$\uparrow$ | SSIM (Ours)$\uparrow$ | Time (Ours) |
| -------------- | -------------------- | -------------------- | --------------------- | --------------------- | ----------- |
| 50 (reference) | 26.5 | 0.422 | **37.98** | **0.952** | 8m14s |
| 500 | 34.04 | 0.755 | **39.73** | **0.964** | 8m33s |
| 1000 | 36.67 | 0.899 | **39.84** | **0.963** | 9m5s |
| 1500 | 36.89 | 0.913 | **39.84** | **0.963** | 8m49s |
| 2000 | 37.00 | 0.919 | **39.85** | **0.963** | 9m22s |
> Q1.2: What are the implications of correcting the integration bias in image-based 3D reconstruction tasks?
While RGB-based 3D reconstruction is out of our scope, we share some preliminary findings. We compare vanilla 3DGS and rectified one (R-3DGS) on NeRF synthetic dataset. We define the geometry field as the sum of Gaussian opacities, the same as SUGAR (CVPR'24). For vanilla 3DGS, we compute the mean of recovered 3D opacities of all training views. We then use our voxelizer (Sec. 4.2.2) to query opacity volumes and extract meshes using marching cubes (MC). Note that because the actual iso-value of the surface is unknown, we report chamfer distances (CD) with three MC thresholds.
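As an aside, the (symmetric, squared-distance) Chamfer distance between extracted and ground-truth surfaces can be sketched in a few lines (our illustration, not the evaluation code; real evaluations use accelerated nearest-neighbor search):

```python
def chamfer(A, B):
    """Symmetric Chamfer distance between two point sets (squared-distance form)."""
    def mean_nn_sq(P, Q):
        # Mean over P of the squared distance to the nearest point in Q.
        total = 0.0
        for p in P:
            total += min(sum((pi - qi) ** 2 for pi, qi in zip(p, q)) for q in Q)
        return total / len(P)
    return mean_nn_sq(A, B) + mean_nn_sq(B, A)

A = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
print(chamfer(A, A))                                   # identical sets -> 0.0
print(chamfer([(0.0, 0.0, 0.0)], [(1.0, 0.0, 0.0)]))   # -> 2.0
```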
Results in Tab. B and Fig. 3 (PDF) show that correcting the integration bias does not harm 2D rendering. Furthermore, it improves 3D reconstruction, though less significantly than in volumetric CT reconstruction. We suspect three reasons. First, for opaque objects, Gaussians are trained to be flat, so integration values ($\mu$ in L182) do not change significantly in front views. Second, Gaussians are close to the surface, allowing for reasonable surface extraction using only positions. Third, the splatting technique involves many simplifications in rendering equations, which may have more impact than integration bias on 3D reconstruction.
Since these findings are preliminary, we do not include them in this paper. We will make efforts to develop a bias-free 3DGS for RGB-based reconstruction in future research.
*Table B. Quantitative results of vanilla 3DGS and rectified one (R-3DGS) on NeRF-synthetic dataset.*
| | Vanilla 3DGS | R-3DGS |
| ------------------------ | ------------ | ---------- |
| 2D PSNR$\uparrow$ | **31.46** | 31.28 |
| 2D SSIM$\uparrow$ | 0.966 | **0.967** |
| No. Gaussians | 285k | 345k |
| CD (MC=5.0)$\downarrow$ | **0.0182** | 0.0202 |
| CD (MC=10.0)$\downarrow$ | 0.0179 | **0.0147** |
| CD (MC=20.0)$\downarrow$ | 0.0172 | **0.0141** | | Rebuttal 1:
Rebuttal: Dear Reviewers,
Thank you for your insightful comments and constructive suggestions. We appreciate Reviewers sKx6 and w7Ci for recognizing our paper's solid technical contribution, high impact on related areas, excellent evaluation, and good writing. We are grateful for the positive feedback from all reviewers on our novel CT reconstruction framework and the discovery of integration bias.
Based on these valuable suggestions, we have added more experimental analysis and clarified important concepts. Please note that we add some figures and tables in the **PDF** file. Here is a summary of the changes:
* **Experiments on real-world data (Reviewer w7Ci and P8A3)**. We further evaluate our method on the real-world dataset FIPS [a]. The results in Tab. 1 (PDF) and Fig. 1 (PDF) show that our method outperforms baseline methods in the presence of real-world noise and scattering effects.
* **Clarification of our contribution w.r.t. existing X-ray 3DGS works (Reviewers w7Ci and DGKr)**. We summarize our primary contribution as the first theory-supported 3DGS framework for direct CT reconstruction, and the discovery and remedy of previously overlooked integration bias. We provide a detailed comparison between our method and X-Gaussian in Q3.1 (Reviewer P8A3).
* **Demonstration of integration bias (Reviewers w7Ci and DGKr)**. We have added more quantitative and qualitative results (Tab. 2 and Fig. 2 in PDF) to demonstrate the necessity of correcting integration bias.
We hope our response has addressed the initial concerns. Please let us know if you have any other questions.
Kind Regards,
Authors
---
**Reference**
*[a] Siltanen, Samuli, et al., "FIPS: Open X-ray Tomographic Datasets.", Zenodo (2022)*
Pdf: /pdf/954cb93e143122eff2ac519988f8001801856dbe.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Rethinking the Diffusion Models for Missing Data Imputation: A Gradient Flow Perspective | Accept (poster) | Summary: This paper addresses two primary issues in Missing Data Imputation (MDI) using Diffusion Models (DMs): inaccurate imputation due to sample diversification and difficult training caused by the complexity of designing the mask matrix. The authors propose a novel approach, Kernelized Negative Entropy-regularized Wasserstein Gradient Flow Imputation (KnewImp), which aims to resolve these issues for numerical tabular datasets.
Strengths: - The paper addresses two issues in the domain of DM-based MDI—sample diversification leading to inaccurate imputation and the complex training process due to mask matrix design. By identifying and explicitly addressing these issues, the paper provides a fresh perspective on improving MDI techniques. The introduction of the negative entropy-regularized cost functional is a creative and innovative approach to discourage diversification, aligning the generative model’s objectives more closely with the needs of MDI tasks. The integration of the WGF framework with RKHS to derive an imputation procedure is an original and elegant solution.
- The paper provides thorough theoretical analyses and proofs, ensuring the soundness of the proposed approach. The extensive experiments conducted on multiple real-world datasets from the UCI repository underscore the robustness and effectiveness of the KnewImp approach. The detailed ablation studies and sensitivity analyses enhance the quality of the research by thoroughly examining the contributions of different components and the impact of key hyperparameters.
- The paper is well-organized, with a clear delineation of problems, proposed solutions, theoretical foundations, experimental setup, and results. Each section logically flows into the next, making it easy to follow the authors’ arguments.
Weaknesses: - The choice of kernel in the Reproducing Kernel Hilbert Space (RKHS) can significantly impact the performance of the method. The paper primarily uses the radial basis function (RBF) kernel but does not explore the effects of using different types of kernels or the rationale behind choosing the RBF kernel. A detailed analysis of how different kernel choices affect the imputation quality and computational efficiency would strengthen the technical rigor of the study.
- The theoretical foundations of KnewImp rely on certain assumptions about the data distribution, such as the smoothness of the underlying density functions. The robustness of the method to violations of these assumptions is not thoroughly investigated. Including experiments that test the method’s performance on datasets with varying statistical properties (e.g., heavy-tailed distributions, multimodal distributions) would provide a more comprehensive assessment of its robustness.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How robust is KnewImp to variations in data distributions, such as heavy-tailed, multimodal, or skewed distributions?
2. Why was the radial basis function (RBF) kernel chosen for the RKHS, and how does the choice of kernel affect the performance of KnewImp?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes. Discussed in Appendix F.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments; our point-by-point rebuttal is as follows:
> W1 & Q2: Why we use the RBF kernel function:
* The selection of the RBF kernel function was strategically driven by the need to satisfy the following condition: $\int{\nabla_{\boldsymbol{X}^{(miss)}}[u(\boldsymbol{X}^{(miss)} ,\tau) r(\boldsymbol{X}^{(miss)})]\mathrm{d}\boldsymbol{X}^{(miss)}}=\underbrace{\int{r(\boldsymbol{X}^{(miss)})\nabla\_{\boldsymbol{X}^{(miss)}}[u(\boldsymbol{X}^{(miss)} ,\tau) ]\mathrm{d}\boldsymbol{X}^{(miss)}}}\_{\mathbb{E}\_{r(\boldsymbol{X}^{(miss)})}[ \nabla\_{\boldsymbol{X}^{(miss)}}[u(\boldsymbol{X}^{(miss)} ,\tau) ] ]} + \int{u(\boldsymbol{X}^{(miss)} ,\tau)^\top\nabla_{\boldsymbol{X}^{(miss)}}[r(\boldsymbol{X}^{(miss)}) ]\mathrm{d}\boldsymbol{X}^{(miss)}} =0$. This condition is pivotal as it allows us to circumvent the direct, explicit estimation of $r(\boldsymbol{X}^{(miss)})$ during the imputation procedure.
* A sufficient condition for $\int{\nabla_{\boldsymbol{X}^{(miss)}}[u(\boldsymbol{X}^{(miss)} ,\tau) r(\boldsymbol{X}^{(miss)})]\mathrm{d}\boldsymbol{X}^{(miss)}}=0$ is that "$r(\boldsymbol{X}^{(miss)})$ is bounded and $\lim_{\Vert \boldsymbol{X}^{(miss)} \Vert\rightarrow \infty}u(\boldsymbol{X}^{(miss)} ,\tau)=0$".
* Consequently, the key to choosing a kernel function is to verify the following condition: for $x$, the kernel function $\mathcal{K}(x,x')$ should satisfy the boundary condition $\lim_{\Vert x \Vert\rightarrow \infty}\mathcal{K}(x,x')=0$.
* Conventional kernel functions, such as the linear, sine, polynomial, and cosine-similarity kernels, cannot satisfy this condition. We therefore choose the RBF kernel in our manuscript, similar to previous work [1].
* Furthermore, we provide the following experimental results at a 0.3 missing rate (to be consistent with the table in the main text; results for the MNAR scenario are omitted due to space limits):
|Scenario|Kernel|BT-MAE|BT-Wass|BCD-MAE|BCD-Wass|CC-MAE|CC-Wass|CBV-MAE|CBV-Wass|IS-MAE|IS-Wass|PK-MAE|PK-Wass|QB-MAE|QB-Wass|WQW-MAE|WQW-Wass|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|**MAR**|linear|0.77|0.73|0.83*|3.36*|0.85*|0.77*|0.83*|0.97*|0.72*|2.42*|0.8*|2.5*|0.7*|4.58*|0.76*|1.02*|
||polynomial|1.12*|1.4*|1.44*|8.36*|1.06*|1.16*|1.08*|1.55*|1.07*|5.93*|1.33*|5.93*|1.28*|13.63*|1.02*|1.76*|
||sigmoid|0.89*|1.02*|1.44*|10.8*|0.82|0.74*|0.81*|0.93*|1.17*|12.57|1.3*|9.87*|1.04*|7.42*|0.77*|1.04*|
||cosine similarity|0.68|0.53|0.8*|3.14*|0.82*|0.73*|0.81*|0.92*|0.71*|2.37*|0.78*|2.37*|0.74|4.56|0.74*|0.98*|
||sine|8.62*|95.45|8.74*|290.79*|14.83*|281.68*|17.93*|492.97*|10.56*|456.27*|10.25*|306.76*|8*|313.9*|15.14*|386.73*|
||**RBF**|0.52|0.38|0.34|0.82|0.35|0.25|0.31|0.2|0.39|1.31|0.44|1.21|0.45|3.5|0.46|0.55|
|**MCAR**|linear|0.71*|0.45*|0.87*|5.87*|0.83*|0.84*|0.81*|1.28*|0.81*|5.65*|0.83*|4.14*|0.62*|6.31*|0.76*|1.25*|
||polynomial|0.94*|0.88*|1.31*|12.19*|0.98*|1.21*|0.99*|1.85*|1.11*|9.94*|1.27*|8.64*|1.11*|18.71*|0.92*|1.95*|
||sigmoid|0.74*|0.48*|0.97*|8*|0.81*|0.82*|0.8*|1.23*|0.92*|7.36*|0.94*|6.54*|0.77*|7.74*|0.77*|1.28*|
||cosine similarity|0.7*|0.42*|0.84*|5.51*|0.81*|0.81*|0.8*|1.24*|0.8*|5.48*|0.81*|3.96*|0.63*|6.01*|0.74*|1.2*|
||sine|9.76*|95*|7.77*|434.72*|13.53*|331.59*|11.78*|352.67*|8.27*|542.27*|8.62*|384.97*|7.36*|468.07*|10.61*|289.41*|
||**RBF**|0.48|0.18|0.25|0.8|0.47|0.34|0.42|0.44|0.44|3.05|0.32|1.01|0.34|3.66|0.53|0.76|
From the results presented in the table, it is evident that **the RBF kernel outperforms the kernels that do not meet the boundary condition. This observation empirically substantiates the effectiveness of the RBF kernel, thereby validating the justification for using it**.
* We will add the above content to our revised manuscript to explain why we mainly consider the RBF kernel.
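The boundary-condition argument above can also be checked numerically. A minimal sketch (the kernel definitions and bandwidth below are our own illustrative choices, not the manuscript's implementation):

```python
import numpy as np

# Check which kernels satisfy lim_{||x||->inf} K(x, x') = 0
# for a fixed reference point x'.
x_prime = np.array([1.0, -0.5])

def rbf(x, xp, gamma=0.5):
    # RBF kernel decays to zero as ||x - x'|| grows
    return float(np.exp(-gamma * np.sum((x - xp) ** 2)))

def linear(x, xp):
    # linear kernel grows with ||x||, violating the condition
    return float(x @ xp)

def polynomial(x, xp, c=1.0, d=3):
    # polynomial kernel also diverges as ||x|| grows
    return float((x @ xp + c) ** d)

for r in [1.0, 10.0, 100.0]:
    x = r * np.array([1.0, 1.0]) / np.sqrt(2.0)  # a point with norm r
    print(f"||x||={r:6.1f}  rbf={rbf(x, x_prime):.2e}  "
          f"linear={linear(x, x_prime):.2e}  poly={polynomial(x, x_prime):.2e}")
```

Only the RBF value vanishes at large norms, which is the sufficient condition used above to drop the boundary term in the integration by parts.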
> W2 & Q1: Data Distribution Property
Thank you for your comments, our responses are listed as follows:
* We have expanded the experimental results in the attached PDF to include KnewImp performance across various data distributions, such as heavy-tailed, multimodal, and skewed distributions. Please refer to **Section 2.2 in the supplementary PDF provided in the common rebuttal chat window**.
* The additional results demonstrate that **KnewImp maintains consistent performance when transitioning from Gaussian to Skewed-Gaussian, Student's-t, and Gaussian Mixture distributions**. This underscores the **robustness of KnewImp across diverse data distributions**.
* We will incorporate these findings into the revised manuscript to enhance its comprehensiveness and clarity.
---
Thank you for reading our response, **we hope the above discussion fully addressed your concerns about our work, and we would really appreciate it if you could be generous in raising your score.**
---
Refs
[1]. "Stein variational gradient descent: A general purpose bayesian inference algorithm." NeurIPS'16.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for your detailed response. I have read through most of the comments by other reviewers, as well as the rebuttal. I decide to increase my score to support this paper to be accepted.
---
Rebuttal 2:
Title: Thank you for supporting our paper to be accepted!
Comment: Dear Reviewer [9XXU],
Thank you very much for your supportive feedback and for taking the time to thoroughly review both our manuscript and the additional materials provided during the rebuttal process. We are grateful for your decision to increase your score and appreciate your endorsement of our paper.
Warm regards,
Authors | Summary: This paper presents KnewImp, a kernelized negative entropy-regularized Wasserstein gradient flow imputation approach to numerical tabular data imputation. The authors argue that existing missing data imputation frameworks based on diffusion models suffer from two major limitations. Firstly, diffusion models primarily focus on sample diversification rather than accuracy, which results in discrepancy between the training objective of the diffusion models and the aim of tabular data imputation. Secondly, existing approaches are trained by masking parts of the observed data and then predicting the masked entries. This results in training difficulty due to the need of designing complex mask matrix selection mechanisms. To address the limitations of existing models, the authors propose a Wasserstein gradient flow based framework, which employs a novel cost functional with diversification-discouraging negative entropy as regularization. KnewImp is derived within the Wasserstein gradient flow framework, reproducing the kernel Hilbert space. To bypass the need of the mask matrix and to make the model easier to train, the authors further develop a novel cost functional based on joint distribution. Experimental results on a variety of real-world datasets show that KnewImp achieves state-of-the-art, outperforming a number of state-of-the-art baseline alternatives.
Strengths: - KnewImp is theoretically sound and achieves state-of-the-art results on a number of real-world datasets.
- The paper is in general well-written and easy-to-follow.
Weaknesses: - My main concern is that, the authors only presented results in terms of MAE and Wass, but did not show results on downstream tasks. It is therefore unclear whether KnewImp would be effective in real-world scenarios. I would consider raising my score if the authors can provide the result of KnewImp on downstream tasks. The authors can refer to Figure 5 of the TDM paper in terms of downstream results.
- From the ablation study, I think the majority of the performance improvement comes from modeling the joint distribution directly. The diversification-discouraging negative entropy regularization only marginally improves the performance. The main novelty of KnewImp is from the NER objective. However, ablation study suggests that it does not add much to the final performance.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In line 150, the authors claim that "This diversification is fundamentally at odds with the precision required in MDI tasks." I understand that the need to discourage diversification in favor of precision is the main motivation of KnewImp. However, I wonder if the authors can provide a more intuitive explanation of this claim, or whether the discrepancy between accuracy and diversification can somehow be quantified.
- Why did the authors primarily focus on MAR and MCAR rather than MNAR? Is this because of the setting of KnewImp is not particularly suitable for MNAR?
- In Table 2, not all results in bold are best results.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work, and have provided sufficient justification.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments, our rebuttal is posted as follows:
> W1: Downstream Tasks
According to your suggestions, we added the performance on downstream classification task similar to Fig. 5 in the TDM paper [1]. **Please see Table 1 located in Section 2.1 from the pdf attached in common rebuttal chat window**.
> W2: Ablation Study
Thank you for your feedback on our ablation study.
* There seems to be some misunderstanding about the contributions of KnewImp. Our method primarily introduces WGF to analyze and enhance DM-based MDI, which leads to two main innovations: NER and joint distribution modeling.
* **The function of the NER term is that it introduces a functionally effective modification to existing DM-based MDI: optimizing the NER functional aligns well with optimizing the joint/conditional log-likelihood objective**.
* Based on this, **the ablation study aimed to show that including NER maintains an effective lower bound for MDI tasks—meaning it does not degrade model performance in general**.
* Besides, we can also turn to Table E.4: when we add the NER term to the model, **the standard deviation of KnewImp with NER is generally smaller than that without the NER term**. These results further confirm that NER, although providing only marginal performance gains, yields a smaller standard deviation and is crucial for the model's theoretical robustness.
> Q1: Motivation
* To clarify the trade-off between diversification and accuracy, which is a central theme of our method KnewImp, let's consider a practical example. Suppose we denote the true value by $x$ and the imputed value by $\tilde{x}$. The goal in terms of accuracy is to minimize the discrepancy $Dis(x, \tilde{x})$, where $Dis$ is a discrepancy metric.
* Diversification tends to increase either the variance $\text{Var}(\tilde{x})$ or the entropy $\mathbb{H}(\tilde{x})$. These measures do not directly involve ground truth $x$.
* In DMs, **where entropy is used as a term to encourage diversification (our proposition 3.1), the outcome may converge towards a uniform distribution [2]. In such a distribution, every potential value within the support is equally probable as the imputed value, which is undesirable for MDI.**
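As a toy illustration of this point (our own example, not from the manuscript): over distributions on a fixed finite support, entropy is maximized by the uniform distribution, so an entropy-encouraging objective pulls mass toward "every value equally likely" rather than toward the single true value.

```python
import numpy as np

def entropy(p):
    # discrete Shannon entropy, ignoring zero-probability entries
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

uniform = np.ones(4) / 4                      # maximally "diverse" imputation
peaked = np.array([0.97, 0.01, 0.01, 0.01])   # concentrated near the true value

print(entropy(uniform), entropy(peaked))  # uniform has the larger entropy
```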
> Q2: Scenario Restriction
Thank you for your question regarding our focus on MAR and MCAR rather than MNAR.
* We prioritize MAR and MCAR because **MNAR involves complexities due to its dependency on unobserved data, requiring in-depth knowledge of the missingness mechanism which is often challenging to determine [3,4]**.
* For instance, in privacy-sensitive studies such as those on diabetes medication usage, non-response is directly linked to the privacy concerns of the participants, which is typical of MNAR scenarios.
* Given these challenges, we focus on MAR and MCAR for their more straightforward assumptions and applicability.
* Nevertheless, to ensure a comprehensive analysis, **we include findings related to MNAR in Tables E.1 and E.3 of our manuscript**, demonstrating our approach's applicability under these conditions as well.
> Q3: Marking Mistake
Thank you for pointing out our problems, we will revise this table in our revised manuscript.
---
Thank you for reading our rebuttal, **we hope our response addresses your concerns, and we would really appreciate it if you could be generous in raising your score.**
---
Refs
[1]. "Transformed distribution matching for missing value imputation." ICML'23.
[2]. "Nonlinear Stein Variational Gradient Descent for Learning Diversified Mixture Models" ICML'19
[3]. "StableDR: Stabilized doubly robust learning for recommendation on data missing not at random." ICLR'23
[4]. "not-MIWAE: Deep generative modelling with missing not at random data" ICLR'21
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the detailed response and for providing additional experiment results. The downstream classification results suggest that the imputations generated by KnewImp can potentially lead to better downstream performance. I have therefore raised my score accordingly.
I am also curious whether the authors have tried applying KnewImp to downstream regression tasks. In addition, what is its performance on real-world datasets with missing values, rather than datasets on which you synthesize missing values yourselves? Although the authors have experimented with three different scenarios, I think there still exists a discrepancy from real-world datasets with inherent missing values.
---
Rebuttal 2:
Title: Appreciation for Reviewer [fS2D]'s Support and Constructive Feedback
Comment: Dear Reviewer [fS2D]:
Thank you for your encouraging feedback and for raising your score, which greatly supports our work.
Regarding your additional inquiries:
- Application to Regression Tasks:
    - Currently, our focus has been primarily on classification tasks, following the framework outlined in Figure 5 of the TDM paper [1], due to time and resource constraints.
    - However, we recognize the importance of exploring the utility of KnewImp in regression scenarios. We aim to include preliminary results on this aspect in another comment within the discussion period set by the NeurIPS'24 committee.
- Additionally, we will ensure to incorporate a detailed evaluation on downstream regression tasks in the revised version of our manuscript.
- Performance on Real-World Datasets with Inherent Missing Values:
    - Currently, due to platform constraints, we have not yet implemented our algorithm in real-world industrial settings such as recommender systems [2], where metrics focused on business value, such as area under the curve and return on investment [3], are typically employed.
- However, we are actively seeking opportunities to apply our methodology in industrial scenarios and plan to explore this in future work.
- We will outline these plans and the potential for real-world applications in the future research directions section of our revised manuscript.
We appreciate your insightful questions, which guide our ongoing and future research efforts.
---
Refs
[1]. "Transformed distribution matching for missing value imputation." ICML'23.
[2]. "StableDR: Stabilized doubly robust learning for recommendation on data missing not at random." ICLR'23
[3]. "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" NeurIPS'20
---
Sincerely,
Authors
---
Rebuttal 3:
Title: Additional Results for Downstream Regression Task
Comment: Dear Reviewer [fS2D]:
Thank you for your insightful suggestions regarding the downstream regression task. In response to your feedback, we have conducted additional experiments on this task. Following reference [1], we use Mean Square Error (MSE) and Mean Absolute Error (MAE) as evaluation metrics on the CC dataset, which is designed for regression analyses. The results at a 0.3 missing rate are posted as follows:
|Dataset-Scenario|CC-MAR|CC-MAR|CC-MCAR|CC-MCAR|CC-MNAR|CC-MNAR|
|-|-|-|-|-|-|-|
|**Model/Metric**|**MSE**|**MAE**|**MSE**|**MAE**|**MSE**|**MAE**|
|CSDI_T|3.07E+02*|1.41E+01*|3.07E+02*|1.41E+01*|3.07E+02*|1.41E+01*|
|MissDiff|2.31E+02*|1.25E+01*|2.44E+02*|1.27E+01*|2.45E+02*|1.27E+01*|
|GAIN|2.24E+02|1.23E+01|2.40E+02*|1.26E+01*|2.38E+02|1.68E+01*|
|MIRACLE|3.51E+02*|1.59E+01*|3.71E+02*|1.60E+01*|3.80E+02*|1.64E+01*|
|MIWAE|2.23E+02*|1.23E+01|2.43E+02*|1.27E+01*|2.42E+02|1.27E+01|
|Sink|3.48E+02*|1.58E+01*|3.85E+02*|1.65E+01*|3.87E+02*|1.66E+01*|
|TDM|2.19E+02|1.58E+01|2.38E+02*|1.26E+01*|2.37E+02|1.66E+01|
|ReMasker|3.44E+02*|1.58E+01*|3.69E+02*|1.62E+01*|3.72E+02*|1.64E+01*|
|KnewImp|2.20E+02|1.22E+01|2.33E+02|1.24E+01|2.37E+02|1.26E+01|
|Ground Truth|1.57E+02|1.01E+01|1.55E+02|9.83E+00|1.68E+02|1.04E+01|
---
Refs:
[1]. "Deep Time Series Models: A Comprehensive Survey and Benchmark"
---
Thank you for reading our comments, **we hope our response answers your questions. Given your busy schedule, please do not feel obliged to respond to this message.**
Sincerely,
Authors
---
Rebuttal Comment 3.1:
Title: Looking forward to your further engagement with our added experiments on downstream regression tasks!
Comment: Dear Reviewer fS2D,
Once again, we are grateful for your time and effort in reviewing our paper!
Since the discussion period will end in a few hours, we are very eager to get your feedback on our response. We understand that you are very busy, but we would highly appreciate it if you could take into account our response when updating your final rating and having a discussion with AC and other reviewers.
Thanks for your time,
Authors of Submission 1850 | Summary: This paper considered tackling the Missing Data Imputation (MDI) problem via diffusion models, which treats MDI as an generative problem. As DM-based methods focus on sample diversification rather than accuracy, which is the primary evaluation metric for MDI, the authors proposed one cost functional to discourage diversification in sample generation based on the Wasserstein Gradient Flow framework. Moreover, given that the true values of the missing data are unknown, the authors proposed to replace the joint distribution with the conditional distribution throughout the learning procedure.
Strengths: This paper focuses on two important questions faced by the diffusion model based solver of the missing data imputation problem. Extensive numerical experiments are provided to validate the effectiveness of the proposed methodology.
Weaknesses: The main issue here is that incorporating Wasserstein Gradient Flow (WGF) with generative modeling doesn't seem to be a new idea. However, it seems that the authors didn't include any related references in section 2.3, which provides a brief review of WGF. For instance, it might be necessary to cite and discuss the following articles [1-6].
Technical Quality: 2
Clarity: 2
Questions for Authors: It seems to the reviewer that the authors have made a strong assumption throughout this paper. Specifically, the assumption in Proposition 3.4 says that the joint distribution $r(X^{\text{joint}})$ can be factorized as $r(X^{(\text{miss})})r(X^{\text{obs}})$, where $r$ denotes the probability density function. Would it be possible for the authors to discuss whether such assumption is realistic or not? Intuitively, it seems that the missing and observed entries can't be utterly uncorrelated.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: It seems to the reviewer that there are too many typos in the current version of this paper, especially for the theoretical derivation part in the appendix. For instance, derivation of the equations from line 583 to line 584 seem to have lots of issues - the first step should contain some dot product?
References:
[1] Ansari, A. F., Ang, M. L., & Soh, H. (2020). Refining deep generative models via discriminator gradient flow. arXiv preprint arXiv:2012.00780.
[2] Cheng, X., Lu, J., Tan, Y., & Xie, Y. (2024). Convergence of flow-based generative models via proximal gradient descent in Wasserstein space. IEEE Transactions on Information Theory.
[3] Choi, J., Choi, J., & Kang, M. (2024). Scalable Wasserstein Gradient Flow for Generative Modeling through Unbalanced Optimal Transport. arXiv preprint arXiv:2402.05443.
[4] Gao, Y., Jiao, Y., Wang, Y., Wang, Y., Yang, C., & Zhang, S. (2019, May). Deep generative learning via variational gradient flow. In International Conference on Machine Learning (pp. 2093-2101). PMLR.
[5] Heng, A., Ansari, A. F., & Soh, H. Deep generative Wasserstein gradient flows.
[6] Xu, C., Cheng, X., & Xie, Y. (2024). Normalizing flow neural networks by JKO scheme. Advances in Neural Information Processing Systems, 36.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments.
> W1: Contributions & Novelty of This Paper
* Our primary focus is on analyzing Diffusion Model (DM)-based Missing Data Imputation (MDI) using Wasserstein Gradient Flow (WGF, initially designed for functional optimization), **not merely integrating WGF into a generative model**.
* The central contribution of this work is leveraging WGF as a tool to analyze and demonstrate the limitations of Diffusion Model-based MDI. This led us to redesign a novel functional and develop a unique computational approach for MDI.
* To the best of our knowledge, Proposition 3.1 has not been documented in previous literature. Specifically, for the VP-SDE model, the functional of the Ornstein–Uhlenbeck process incorporates a variance term $\mathbb{E}_{r}[(\boldsymbol{X}^{(miss)})^\top\boldsymbol{X}^{(miss)}]$ to promote diversification. This feature is notably absent in traditional gradient field-based (GF-based) generative models as referenced in [1-6].
* In our manuscript, we want to convey the concept that: MDI should not be treated as a generative problem (in our manuscript, we want to discourage diversification, but the generative models will encourage entropy function as per reference [1]: `The differential entropy term` $\mathbb{H}(r)$ `improves diversity and expressiveness when the gradient flow is simulated for finite time-steps.` Similar entropy terms can be found in Eq. (15) in [1], Eq. (16) in [2], Eq. (3) in [3], Eq (1) in [4], Eq. (4) in [5], and Eq. (5) in [6]).
* Hence, the references [1-6] you recommended on using GF to improve generative models were not considered during the preparation of our manuscript; the related-work subsections in Section 5 are chiefly organized from the perspective of DMs' application to MDI and of WGF works that model the conditional distribution via the joint distribution.
* We plan to add another subsection to discuss the application of WGF in improving generative models to demonstrate the applicability of GF to improve generative model-related tasks and cite these references.
> W2: Justification of Decomposition:
* As for the decomposition, it is not a strong assumption for MDI.
* **As we mentioned in our manuscript and common rebuttal chat window, our task is to find the missing value $\boldsymbol{X}^{{(miss)}}$ and the $\boldsymbol{X}^{{(obs)}}$ will not be changed ($p(\boldsymbol{X}^{{(obs)}})$ is a constant measure).**
* Based on this, when we want to sample $\boldsymbol{X}^{{(miss)}}$ from the distribution $r({\boldsymbol{X}^{{(miss)}}})$, the results are the same as sampling from $r({\boldsymbol{X}^{{(joint)}}})$, since $r(\boldsymbol{X}^{{(obs)}})=p(\boldsymbol{X}^{{(obs)}})$ is a constant measure that remains unchanged, following references [7,8].
* **The key is that $p({\boldsymbol{X}^{{(joint)}}})\neq p({\boldsymbol{X}^{{(miss)}}}) p({\boldsymbol{X}^{{(obs)}}})$, but $r({\boldsymbol{X}^{{(joint)}}})= r({\boldsymbol{X}^{{(miss)}}}) p({\boldsymbol{X}^{{(obs)}}})$ is justified**.
* Furthermore, similar assumption on the $r$, the 'ansatz'/variational distribution/approximate distribution/proposal distribution, can be found in mean-filed variational inference represented by reference [9].
> L1: Typos on Eq. (B.2):
We regret any oversight **regarding the omission of the $\nabla\cdot$ operator (divergence operator), and velocity term $v_\tau$ in our description of the continuity equation**.
* This equation is given as $\frac{\partial r(\boldsymbol{X}^{(miss)})}{\partial \tau}=-\nabla\cdot[v_{\tau}(\boldsymbol{X}^{(miss)})r(\boldsymbol{X}^{(miss)})]$; the second equality can be obtained analogously to the derivation of the Fokker-Planck-Kolmogorov equation, where $v_{\tau}(\boldsymbol{X}^{(miss)})=-\nabla{\log{p(\boldsymbol{X}^{(miss)}\vert \boldsymbol{X}^{(obs)})}}-\lambda\nabla{\log{r(\boldsymbol{X}^{(miss)})}}$ and $\nabla\cdot[(\nabla{\log{r(\boldsymbol{X}^{(miss)})}})r(\boldsymbol{X}^{(miss)})] =\nabla\cdot[\frac{\nabla r(\boldsymbol{X}^{(miss)})}{r(\boldsymbol{X}^{(miss)})}r(\boldsymbol{X}^{(miss)})] =\nabla\cdot\nabla r(\boldsymbol{X}^{(miss)})$.
* We commit to revising our manuscript to rectify any ambiguities and ensure clearer presentation.
---
Thank you for reading our rebuttal. **Given the above information, we hope these points can be kindly considered in the evaluation of our work, and we would really appreciate it if you could be generous in raising your score.**
---
Refs
[1]. "Refining deep generative models via discriminator gradient flow." ICLR'21
[2]. "Convergence of flow-based generative models via proximal gradient descent in Wasserstein space." IEEE TIT
[3]. "Scalable Wasserstein Gradient Flow for Generative Modeling through Unbalanced Optimal Transport." ICML'24
[4]. "Deep generative learning via variational gradient flow". ICML'19
[5]. "Deep generative Wasserstein gradient flows"
[6]. "Normalizing flow neural networks by JKO scheme" NeurIPS'23
[7]. "Posterior Sampling Based on Gradient Flows of the MMD with Negative Distance Kernel." ICLR'24.
[8]. "Nonparametric generative modeling with conditional sliced-Wasserstein flows." ICML'23.
[9]. "Variational algorithms for approximate Bayesian inference", Doctoral Thesis'03.
---
Rebuttal 2:
Title: Response to authors' rebuttal
Comment: Dear authors,
Thank you so much for your detailed rebuttal and global response. However, regarding the assumption on the distribution r, the reviewer still finds the explanation offered by the authors to be a bit untransparent. Would it be possible for the authors to write the derivations in terms of conditional probability and posterior distributions? Alternatively, would it be fine for the authors to specify which lemmas/theorems in the papers [1,2,3] lead to the desired claims?
Thanks in advance!
References:
[1] Hagemann, P., Hertrich, J., Altekrüger, F., Beinert, R., Chemseddine, J., & Steidl, G. (2023). Posterior sampling based on gradient flows of the MMD with negative distance kernel. arXiv preprint arXiv:2310.03054.
[2] Du, C., Li, T., Pang, T., Yan, S., & Lin, M. (2023). Nonparametric generative modeling with conditional sliced-Wasserstein flows. arXiv preprint arXiv:2305.02164.
[3] Beal, M. J. (2003). Variational algorithms for approximate Bayesian inference. University of London, University College London (United Kingdom).
Best regards,
Reviewer 833r
---
Rebuttal 3:
Comment: Dear Reviewer [833r]:
Thank you for your response and further inquiries. Here is a refined version of our rebuttal addressing the concerns raised:
1. **Mean-Field Assumption:** *The mean-field approximation described on page 52 of reference [1], titled `The Mean Field Approximation`*, involves a conditionally independent factorization of the approximating distribution, updated iteratively. This approach validates our factorization of $r$, aligning with established variational methods where each factor is updated independently. This operation is also covered in the textbook [2], pages 464 to 465, for variational inference. (Again, $r(\boldsymbol{X}^{(obs)})$ is a constant, since $\boldsymbol{X}^{(obs)}$ remains unchanged.)
2. **Modeling Strategy Justification:** We must emphasize that we have not used a `posterior distribution` in our manuscript. Thus, we believe your question concerns how we can simulate `conditional` distribution modeling by `joint` distribution modeling for the distribution $p$ (not $r$):
- Reference [3] provides empirical validation in Appendix C of this strategy.
- Discrepancy measurements between conditional and joint distributions are thoroughly examined in reference [4] including $f$-divergence, where supplementary material includes detailed derivations supporting our approach.
- Reference [5] further substantiates that our modeling strategy of conditional-by-joint can be effectively applied within the WGF framework, as detailed in Remark 7 and Theorem 11 of the paper.
   - We elaborate on these validations in Section 5.2 of our manuscript (Lines 319 to 320) and *provide detailed proofs on Lines 658 to 688*. More specifically, our approach utilizes a discrepancy metric akin to the Kullback-Leibler (KL) divergence, expressed as $-\int r(x) \log \frac{r(x)}{p(x)} \mathrm{d}x$. Unlike the traditional use of the KL divergence, which incorporates the diversification-encouraging positive entropy $\mathbb{H}[r(x)]$, our study employs the diversification-discouraging negative entropy $-\mathbb{H}[r(x)]$. We believe our manuscript further extends this modeling strategy theoretically.
3. **Extra Example why $r$'s factorization is justified:** To further clarify, let's consider an example from SVGD, which may help elucidate why, in WGF-based approaches that approximate $p(z|x)$ with $q(z)$ (represented by $r$ in our study), the process does not explicitly involve the input $x$ or the posterior $p(z|x)$.
   - Refer to Figure 4 in reference [6] for an illustrative example (it approximates the ground-truth posterior without explicitly computing the posterior). In SVGD, $q(z)$ is represented as a group of particles. These particles are not directly influenced by the input $x$ or the explicit form of the posterior $p(z|x)$; instead, they are guided by the velocity field determined by the evidence $p(x|z)$ and the prior $p(z)$, with $p(x)$ vanishing under the gradient operator.
   - This setup allows $q(z)$ to approximate $p(z|x)$ effectively with the help of the velocity field $v$. Similarly, in our case with $r$, there is no necessity to treat it as a function of $\boldsymbol{X}^{(obs)}$.
   - Instead, $r$ is guided by the velocity field determined by $p(\boldsymbol{X}^{(miss)}, \boldsymbol{X}^{(obs)})$, which inherently includes observation information (recall $\frac{\partial r}{\partial \tau}=-\nabla\cdot(rv)$, where $r(\boldsymbol{X}^{(miss)})$ is shaped by the velocity field $v$, $r(\boldsymbol{X}^{(obs)})$ remains unchanged, and the velocity term for $r(\boldsymbol{X}^{(obs)})$ is zero. Notably, $v=-\nabla_{\boldsymbol{X}^{(miss)}}\frac{\delta \mathcal{F}\_{joint-NER}}{\delta r(\boldsymbol{X}^{(miss)})}$, and the term $\mathcal{F}\_{joint-NER}$ contains information about $\boldsymbol{X}^{(obs)}$).
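A minimal 1-D SVGD sketch may make this concrete (our own illustrative example with a standard-normal target, not the paper's implementation): the particles never evaluate a posterior explicitly and are moved only by the kernelized velocity field.

```python
import numpy as np

def svgd_step(x, grad_logp, h=0.5, eps=0.1):
    # one SVGD update: kernel-weighted attraction along the score
    # plus a repulsive term from the kernel gradient
    diff = x[:, None] - x[None, :]           # diff[j, i] = x_j - x_i
    k = np.exp(-diff ** 2 / (2 * h))         # RBF kernel matrix
    grad_k = -diff / h * k                   # d k(x_j, x_i) / d x_j
    phi = (k @ grad_logp(x) + grad_k.sum(axis=0)) / len(x)
    return x + eps * phi

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=0.5, size=100)  # particles start far from target
for _ in range(500):
    x = svgd_step(x, lambda z: -z)            # score of N(0, 1) is -z
print(x.mean(), x.std())                      # particles settle near N(0, 1)
```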
---
References:
[1] "Variational algorithms for approximate Bayesian inference", Doctoral Thesis '03.
[2] "Pattern Recognition and Machine Learning", Text Book'06
[3] "Nonparametric generative modeling with conditional sliced-Wasserstein flows", ICML '23.
[4] "Conditional Wasserstein Generator", IEEE TPAMI '23.
[5] "Posterior sampling based on gradient flows of the MMD with negative distance kernel", ICLR '24.
[6] "VAE learning via Stein variational gradient descent", NeurIPS'17.
We appreciate your detailed feedback and look forward to further discussions.
Best regards,
Authors
Title: Response to [833r]'s additional question about $r$'s factorization
---
Rebuttal 4:
Title: Theoretical Justification for Mean-Field Factorization of $r$ within the WGF Framework
Comment: Before reading our response, we should agree on the following points, which are the settings of the MDI task:
* Throughout the imputation procedure, $\boldsymbol{X}^{({obs})}$ remains invariant regardless of any modifications to $\boldsymbol{X}^{(miss)}$.
* Given this invariance, it is accurate to state that $r(\boldsymbol{X}^{({obs})})$ is constant, and consequently, $r(\boldsymbol{X}^{(obs)}|\boldsymbol{X}^{(miss)}) = r(\boldsymbol{X}^{(obs)})$, reflecting the independence of $\boldsymbol{X}^{(obs)}$ from $\boldsymbol{X}^{(miss)}$.
Based on this, **according to the requirement of Reviewers [833r] and [EhUg], we should factorize the $r$ as $r(\boldsymbol{X}^{(joint)})=r(\boldsymbol{X}^{(obs)},\boldsymbol{X}^{(miss)})=r(\boldsymbol{X}^{(miss)}|\boldsymbol{X}^{(obs)}) r(\boldsymbol{X}^{(obs)})$**. Now let's analyze the left-hand-side of continuity equation: $\frac{\partial r(\boldsymbol{X}^{(obs)},\boldsymbol{X}^{(miss)})}{\partial \tau}$.
- First, we can get: $\frac{\partial r(\boldsymbol{X}^{(obs)},\boldsymbol{X}^{(miss)})}{\partial \tau} =\frac{\partial r(\boldsymbol{X}^{(miss)}|\boldsymbol{X}^{(obs)}) r(\boldsymbol{X}^{(obs)}) }{\partial \tau} $,
- Then, expand the product on the right-hand-side: $ \underbrace{r(\boldsymbol{X}^{(obs)})\frac{\partial r(\boldsymbol{X}^{(miss)}|\boldsymbol{X}^{(obs)}) }{\partial \tau} }\_{ r(\boldsymbol{X}^{(miss)}|\boldsymbol{X}^{(obs)}) = \frac{r(\boldsymbol{X}^{(obs)}|\boldsymbol{X}^{(miss)})r(\boldsymbol{X}^{(miss)})}{r(\boldsymbol{X}^{(obs)})} }+ \underbrace{r(\boldsymbol{X}^{(miss)}|\boldsymbol{X}^{(obs)}) \frac{\partial r(\boldsymbol{X}^{(obs)}) }{\partial \tau} }\_{0} $, where the first underbrace is the Bayesian formula, the second underbrace indicates that $r(\boldsymbol{X}^{(obs)})$ remains unchanged.
- Now, expanding the first underbrace, we get $\underbrace{\frac{r(\boldsymbol{X}^{(obs)})}{r(\boldsymbol{X}^{(obs)})}\frac{\partial r(\boldsymbol{X}^{(obs)}|\boldsymbol{X}^{(miss)})r(\boldsymbol{X}^{(miss)}) }{\partial \tau}}_{r(\boldsymbol{X}^{(obs)}|\boldsymbol{X}^{(miss)}) = r(\boldsymbol{X}^{(obs)}) } $. The underbrace here is based on the abovementioned agreement.
- Finally, we get $ r(\boldsymbol{X}^{(obs)}) \frac{\partial r(\boldsymbol{X}^{(miss)})}{\partial \tau}$, i.e., $ \frac{\partial r(\boldsymbol{X}^{(obs)},\boldsymbol{X}^{(miss)})}{\partial \tau} = r(\boldsymbol{X}^{(obs)})\frac{\partial r(\boldsymbol{X}^{(miss)})}{\partial \tau}$. Notably, the factorization $ r(\boldsymbol{X}^{(obs)},\boldsymbol{X}^{(miss)})=r(\boldsymbol{X}^{(miss)})r(\boldsymbol{X}^{(obs)})$ yields the same result, given that $r(\boldsymbol{X}^{(obs)})$ is a constant according to the agreement.
In summary, we can see that **within WGF, the factorization $ r(\boldsymbol{X}^{(obs)},\boldsymbol{X}^{(miss)})=r(\boldsymbol{X}^{(miss)})r(\boldsymbol{X}^{(obs)})$ is justified for the MDI task**. The influence of $\boldsymbol{X}^{(obs)}$ is hidden in the velocity field $v$, where the continuity equation $\frac{\partial r(\boldsymbol{X}^{(miss)})}{\partial \tau}=-\nabla\cdot[ r(\boldsymbol{X}^{(miss)})v]$ **shapes the "actor" $r(\boldsymbol{X}^{(miss)})$'s performance through the "comments" $v$ (the velocity field) given by the "critic" $p(\boldsymbol{X}^{(miss)}|\boldsymbol{X}^{(obs)})$**.
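The derivation above can also be checked symbolically. The following minimal sketch (using `sympy`; `r_obs` and `r_miss` are placeholder symbols for the two density factors) verifies that, with $r(\boldsymbol{X}^{(obs)})$ constant in $\tau$, the product rule reduces $\frac{\partial r(\boldsymbol{X}^{(obs)},\boldsymbol{X}^{(miss)})}{\partial \tau}$ to $r(\boldsymbol{X}^{(obs)})\frac{\partial r(\boldsymbol{X}^{(miss)})}{\partial \tau}$:

```python
import sympy as sp

tau = sp.symbols('tau')
# r_obs is constant along the imputation time tau (the observed part is fixed)
r_obs = sp.symbols('r_obs', positive=True)
# r_miss evolves along the flow time tau
r_miss = sp.Function('r_miss')(tau)

joint = r_obs * r_miss                 # factorized joint density
lhs = sp.diff(joint, tau)              # d r_joint / d tau
rhs = r_obs * sp.diff(r_miss, tau)     # claimed reduction

assert sp.simplify(lhs - rhs) == 0
print("verified: d r_joint/d tau = r_obs * d r_miss/d tau")
```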
We plan to add the abovementioned derivation to our revised manuscript to uphold its rigor.
---
We **hope the above discussion will fully address your concerns about our work, and we would really appreciate it if these responses could meet with your approval.** We look forward to your insightful and constructive responses to improve this work. Thank you very much!
---
Rebuttal 5:
Comment: Dear authors,
Thank you for the detailed response. I will take a further look.
Best regards,
Reviewer 833r
---
Rebuttal 6:
Comment: Dear Reviewer [833r]:
Thank you for your constructive comments, which have significantly contributed to enhancing our manuscript. In response to your concerns, we outline and summarize below how we have addressed each point in detail:
> **Universality of the Mean-Field Assumption on $r$'s Decomposition** `would it be fine for the authors to specify which lemmas/theorems in the papers [1,2,3] lead to the desired claims?`
* Following your query, we have referenced specific pages in the literature to demonstrate how this assumption is **commonly applied in general mean-field variational inference approaches**.
* In light of your request, we have provided detailed explanations of the specific pages on the evolution of strategies that model the joint distribution using conditional distributions, **where they treat the velocity field of the observation part as "vanishing", which supports the justification of this assumption within the WGF framework**.
> **Theoretical Justification of $r$'s Decomposition within WGF framework** `Would it be possible for the authors to write the derivations in terms of conditional probability and posterior distributions?`
* We have included **a comprehensive, step-by-step derivation to substantiate the mean-field assumption of $r$ within the WGF framework, as prompted by your insightful query**.
Finally, we would like to conclude with a metaphor to further illustrate the plausibility of this factorization $r(\boldsymbol{X}^{(joint)})=r(\boldsymbol{X}^{(miss)})r(\boldsymbol{X}^{(obs)})$:
- Consider $r$ as an actor in a play, capable of being molded and shaped. Initially, the actor may not fully embody the role, akin to $r(\boldsymbol{X}^{(miss)})$ not containing information about $\boldsymbol{X}^{(obs)}$.
- However, just as a director shapes an actor's performance through guidance and rehearsal, all we need to do is ensure that $r$ is appropriately molded by the directorial guidance (mirrors the continuity equation $\frac{\partial r}{\partial \tau}=-\nabla\cdot(vr)$) of the velocity field $v$ and the script provided by the critic $p(\boldsymbol{X}^{(obs)} | \boldsymbol{X}^{(miss)})$/$p(\boldsymbol{X}^{(obs)} , \boldsymbol{X}^{(miss)})$.
- As long as $r$ can adapt based on this feedback (akin to the WGF framework), it can overcome the limitations of its initial portrayal (akin to $r(\boldsymbol{X}^{(joint)})=r(\boldsymbol{X}^{(miss)})r(\boldsymbol{X}^{(obs)})$).
**We remain committed to providing rigorous revisions following your suggestions irrespective of what decision you make. Given your busy schedule, please do not feel obliged to respond to this message.**
Warm regards,
Authors
Title: Summary of the Response to Reviewer [833r]
---
Rebuttal 7:
Comment: I would like to thank the authors for their detailed response, which has resolved almost all questions. I now think it is fine to support the paper to be published, so I will be increasing my score from 3 to 5. However, please make sure to correct all typos you can find in the appendix to make the whole manuscript more readable.
---
Rebuttal 8:
Title: Thank you for raising your score and supporting our paper to be published.
Comment: Dear Reviewer [833r],
Thank you very much for your encouraging feedback and for acknowledging the clarifications provided in our response. We are grateful for your decision to increase the score and support the publication of our paper.
We also appreciate your attention to detail and your advice regarding the typos in the appendix. We assure you that we will rigorously review the manuscript again to correct all typographical errors and enhance its readability.
Thank you once again for your insightful contributions to the refinement of our work.
Best regards,
Authors | Summary: The paper proposes a new algorithm for data imputation. The idea is to estimate the score function corresponding to the posterior p(x_miss | x_obs) using DSM and then infer the missing values using a WGF equivalence argued in this paper itself. These alternating steps are repeated until convergence. Simulations on benchmark datasets illustrate the efficacy of the proposal.
Strengths: 1. In empirical comparisons, the proposal seems to beat existing baselines.
Weaknesses: 1. There are some technical concerns I have raised in the next section
2. I found the description in Sec. 3.4 very cryptic. It would have been nice if important portions of the related appendix were moved to the main section. Also, will the proposed alternating style algorithm converge? Do we know any properties of this converged solution? Some discussion around the final algorithm would have helped in understanding the methodology better.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. I am not sure how the equivalence in (2) follows. It seems the objective in the RHS is independent of X_miss as it is integrated out; the objective in the RHS is a function of r, p, X_obs, whereas the objective in the LHS is a function of X_miss and does not involve r. It will be helpful if this equivalence is clarified. With respect to the optimization variable, the objective in the RHS seems to be a constant. Is there any notation that I am missing? Since it is a basic step and subsequent derivations depend on it critically, I was unable to check the correctness of some of the steps.
2. Moreover, (4) seems to clearly show that the regularizer is not a function of X_miss. Then I am not sure how this regularizer, which is essentially a constant, matters.
3. Reading line 158 gives an impression that r is the unknown. However, r does not seem to appear in the LHS of (2). If it does, then what is the relation with p in the objective of (2)?
4. Is the assumption of Prop. 3.4 meaningful? If X_obs and X_miss are independent, then what information will the observations provide for imputation? Will the imputation problem remain meaningful? Please clarify this.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: na
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. Before reading our response, we think we should come to the following agreements:
1. Optimizing the instances $x$ from distribution $r(x)$ is optimizing this distribution $r(x)$, which is the basis of particle-based variational inference like Stein Variational Gradient Descent [1].
2. The velocity field $v$, which drives the optimization of the cost functional (a 'function of functions'), is, according to Eq. (A.10), changing and rarely zero until we reach the equilibrium point (nearly impossible at the beginning of the imputation procedure).
3. **During imputation procedure, $\int{r(\boldsymbol{X}^{(miss)})\mathrm{d}\boldsymbol{X}^{(miss)}}=1$ is a constant (normalization constraint), but $r(\boldsymbol{X}^{(miss)})$ and $\mathbb{H}[r(\boldsymbol{X}^{(miss)})]\coloneqq-\int{r(\boldsymbol{X}^{(miss)})\log{r(\boldsymbol{X}^{(miss)})}\mathrm{d}\boldsymbol{X}^{(miss)}}$ are not constant, since $\boldsymbol{X}^{(miss)}$ is changing**
4. Furthermore, $\log{p(\boldsymbol{X}^{(miss)}\vert \boldsymbol{X}^{(obs)} )}$ is not a constant when we change $\boldsymbol{X}^{(miss)}$ unless it is a uniform distribution, which is also nearly impossible in practice.
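Agreement 1, that moving the particles is the same as moving their distribution, can be illustrated with a toy Langevin-style particle update; everything below (the initial $r$, the target $p$, the step size) is an illustrative placeholder rather than our actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
# Particles drawn from an initial proposal r = N(5, 1)
particles = rng.normal(loc=5.0, scale=1.0, size=2000)

def grad_log_p(x):
    """Score of a toy target p = N(0, 1): d/dx log p(x) = -x."""
    return -x

step = 0.05
for _ in range(500):
    # Langevin-style update: drift along the score plus diffusion noise
    particles += step * grad_log_p(particles) \
                 + np.sqrt(2.0 * step) * rng.normal(size=particles.shape)

# The empirical distribution r of the particles has moved toward p = N(0, 1):
# the sample mean is now near 0 and the sample std near 1
print(particles.mean(), particles.std())
```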
## Weakness:
> W1: Technical Concerns
Please see next part for detailed information.
> W2: Convergence
* Our convergence discussions have already been listed in Appendix E.3 theoretically and empirically.
* For the final solution, all we can know is that it may reach the equilibrium point, where $\nabla_{\boldsymbol{X}^{(miss)}}\frac{\delta \mathcal{F}}{\delta r(X^{(miss)})}=0$ holds.
## Questions:
> Q1: Understanding of Eq. (2) `It seems the objective in RHS is independent of X_miss ... of r, p, X_obs. Whereas objective in LHS ... X_miss, does not involve r.`
The LHS is the content at the left side of $\Rightarrow$, and RHS is the content at the right side of $\Rightarrow$. Based on this, let's start to investigate Eq. (2) based on Agreement 1:
* The LHS indicates that we are finding some $\boldsymbol{X}^{{(miss)}}$ from some unknown distribution $r(\boldsymbol{X}^{{(miss)}})$ ($\boldsymbol{X}^{{(miss)}}\sim r(\boldsymbol{X}^{{(miss)}})$), such that we maximize the likelihood function $\log{p(\boldsymbol{X}^{{(miss)}}\vert \boldsymbol{X}^{{(obs)}})}$.
* To the best of our knowledge, optimizers, whether GAMS solvers like Gurobi or neural-network optimizers like Adam, can only handle scalar learning objectives.
* Consequently, we can convert it to $\frac{1}{M} \sum_{i=1}^{M}{\log{p(\boldsymbol{X}^{{(miss)}}\_i|\boldsymbol{X}^{{(obs)}}\_i)}}$, where $M$ is the missing-value size. This is the **Monte Carlo (MC) estimation** of the term $\mathbb{E}_{r}[\log{p(\boldsymbol{X}^{{(miss)}}\vert \boldsymbol{X}^{{(obs)}})}]$ (to our understanding, the `integrated out` you mentioned is MC integration), which is the RHS.
* The conversion in Eq. (2) is widely used in Quantum MC [2], where one samples instances from an optimizable distribution and optimizes the concerning functional based on these instances.
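The MC estimation step above can be made concrete with a toy example (the choice of $r$ and of the integrand is purely illustrative): for $r = \mathcal{N}(0,1)$ and $f(x)=x^2$, the expectation $\mathbb{E}_r[f(X)]$ has the closed form 1, and the sample average $\frac{1}{M}\sum_i f(X_i)$ recovers it:

```python
import numpy as np

rng = np.random.default_rng(42)

# Draw M samples X_i ~ r, with r = N(0, 1)
M = 100_000
samples = rng.normal(size=M)

# MC estimate (1/M) * sum_i f(X_i) of E_r[f(X)], with f(x) = x^2
mc_estimate = np.mean(samples ** 2)

print(mc_estimate)  # close to the exact value E_r[X^2] = 1.0
```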
> Q1: Constant Question `to the optimization variable the objective in RHS seems to be a constant.`
The functional $\mathbb{H}[r(\boldsymbol{X}^{{(miss)}})]$ is changing, not a constant: as mentioned in Eq. (A.10), $\frac{\partial r}{\partial \tau}=-\nabla\cdot(r v)$ with velocity $v$, so it changes unless $v=0$, which is nearly impossible based on Agreement 2.
> Q2: r and $\boldsymbol{X}^{{(miss)}}$
Based on Agreement 1, this question is answered.
> Q2: Is NER a constant?
Based on Agreements 2 to 4, this question is answered.
> Q3: $r$ is missed at the LHS of Eq. (2)
The exact expression of $r$ is not our concern; the vital component is $\boldsymbol{X}^{(miss)}$, the imputed value (Agreement 1). Investigating functionals related to $r$ and optimizing the concerning functional (**the $\boldsymbol{X}^{{(miss)}}\sim r(\boldsymbol{X}^{{(miss)}})$ in the LHS indicates that $r$ occurs in the LHS of Eq. (2)**), analyzing why DM-based MDI approaches do not take effect, and proposing improvements constitute the novelty of this manuscript.
> Q3: $r$ is unknown:
It is hard to compute $p$ (existence of the normalization constant) and $r$, **but we can still progressively increase the value of $\mathbb{E}_{r(\boldsymbol{X}^{(miss)})}[\log{p(\boldsymbol{X}^{(miss)}\vert \boldsymbol{X}^{(obs)} )}] -\lambda \mathbb{H}[r(\boldsymbol{X}^{(miss)})]$ and realize the MDI task (with input $\boldsymbol{X}^{(miss)}$) through WGF; that's what we did in Sections 3.2 and 3.3**.
> Q3: Relationship between $r$ and $p$
* Based on Agreement 1, we are representing $\boldsymbol{X}^{(miss)}$ by $r$.
* $r$ is 'some' proposal distribution, and $p$ is the 'evaluator', which 'evaluates whether $r(\boldsymbol{X}^{(miss)})$/$\boldsymbol{X}^{(miss)}$ is suitable' and 'reshapes $r$ (realized by reshaping $\boldsymbol{X}^{(miss)}$) to make it appropriate' based on the cost functional $\mathbb{E}_r[\log{p}]-\lambda \mathbb{H}(r)$.
* $r$ is akin to an actor, and $p$ is akin to a critic. The imputation procedure is akin to improving the actors ($r$/$\boldsymbol{X}^{(miss)}$) with guidance (WGF) from the critics ($p$).
> Q4: Assumption $r({\boldsymbol{X}^{{(joint)}}})\coloneqq r({\boldsymbol{X}^{{(miss)}}}) p({\boldsymbol{X}^{{(obs)}}})$
Please see point 3) in the common rebuttal chat window, where this decomposition is widely applied in mean-field variational inference [3].
---
Thank you for reading our rebuttal! **We hope the above discussion will fully address your concerns about our work, and we would really appreciate it if you could be generous in raising your score.**
---
Refs
[1]. "Stein variational gradient descent: A general purpose bayesian inference algorithm." NeurIPS 16.
[2]. "Ab initio solution of the many-electron Schrödinger equation with deep neural networks." Physical Review Research
[3]. "Variational algorithms for approximate Bayesian inference", Doctoral Thesis'03.
---
Rebuttal Comment 1.1:
Title: Additional Justification for the factorization of $r$
Comment: During our interaction with Reviewer [833r], we further elaborated on the theoretical justification for our factorization of $r(\boldsymbol{X}^{(joint)}) = p(\boldsymbol{X}^{(obs)})r(\boldsymbol{X}^{(miss)})$. For a detailed explanation, please refer to our comments in the comment chat window addressed to Reviewer [833r], titled `Theoretical Justification for Mean-Field Factorization of $r$ within the WGF Framework`. We hope this clarification also addresses your concerns effectively.
---
Rebuttal 2:
Title: Summary of Response to Reviewer [EhUg]
Comment: Dear Reviewer [EhUg],
Thank you for your constructive feedback, which has significantly enhanced our manuscript. **Below, we address your concerns point by point with detailed explanations**:
> Q1: `It would have been nice if important portions of related appendix were moved to the main section.`
- Due to *page constraints*, we cannot incorporate basic knowledge sections into the main content. However, we aim to provide a comprehensive background for understanding.
- To this end, we have included detailed explanations of WGF, the MDI task, and the derivation of concerning proofs in the appendix.
> Q1: `Also, will the proposed alternating style algorithm converge? Do we know anything about the properties of this converged solution? Some discussion around the final algorithm would have helped in understanding the methodology better.`
- We have addressed the convergence properties of our proposed KnewImp approach both theoretically and empirically in Appendix E.3.
- Detailed information about the convergence behavior and properties is provided in our individual rebuttal to you, where we may reach the equilibrium point $\nabla_{\boldsymbol{X}^{(miss)}}\frac{\delta \mathcal{F}}{\delta r} = 0$.
> W1: `I am not sure how the equivalence in (2) follows.`
* **Equivalence of $r(\boldsymbol{X}^{(miss)})$ and $\boldsymbol{X}^{(miss)}$ in (2)**: Our analysis of the DM-based MDI task through the WGF framework explains that optimizing the sampled instances $x$ is essentially optimizing the sample distribution $r(x)$.
* An **example for DMs**:
- Consider the initial DMs, which are used for data generation at the beginning. Starting with a group of samples drawn from white noise, we aim to progressively refine these samples until they closely approximate a target data distribution, such as an image of a dog.
- Throughout this transformation, the data distribution evolves from random noise to the "true distribution" represented by the DM, by optimizing the sample points.
- Consequently, optimizing individual instances from a sample $x$ can be understood as refining the sample's probability density function $r(x)$, which encapsulates the distribution of $x$. For DMs' inference, the goal is to align $r(x)$ as closely as possible with the true data distribution.
- Our transformation is supported by Monte Carlo estimation, where $\mathbb{E}\_{r(x)}[f(x)] \approx \frac{1}{M} \sum_{i=1}^{M}[f(x_i)],x_i\sim r(x)$. Applied from left to right, it is used in particle variational inference [1] and Quantum MC [2]; for theoretical analysis, it is applied from right to left (the scenario in our manuscript).
> W2: `Then I am not sure how this regularizer, which is essentially a constant, matters.`
- Please note that **the constant regularization will not affect the optimized results, since $\nabla_{\boldsymbol{X}^{(miss)}}{\text{Constant}}=0$**.
- Theoretically, in our model, the entropy regularization $\mathbb{H}[r]$ is dynamic and not constant during the optimization process, **unless the velocity field $v$ is zero** or $r$ becomes a uniform distribution.
- Practically, **evidence in Section 1.2 of our attached PDF demonstrates that changing the regularization strength from negative to positive significantly alters the optimal values, confirming that entropy regularization is not a constant and does influence the optimized results**.
> W3: `Reading line 158 gives an impression that $r$ is the unknown. However, $r$ does not seem to appear in LHS of (2). If it does, then what is the relationship with $p$ in the objective of (2)?`
- **$r$ is represented by the particles/samples $\boldsymbol{X}^{(miss)}$**, and representing $r$ by $\boldsymbol{X}^{(miss)}$ is the quintessence of particle variational inference approaches [1].
- Obtaining the samples, rather than the detailed expression of $r$, is what matters.
- For a detailed **explanation of the relationship between $r$ and $p$, please refer to the metaphor in our rebuttal chat with Reviewer [833r], titled `Summary of the Response to Reviewer [833r].`**
> W4: `Is the assumptions of prop 3.4 meaningful?`
- **Detailed justification for the mean-field factorization of $r$ within the WGF framework is provided in our discussions with Reviewer [833r]**, including literature reviews and theoretical derivations in the rebuttal chat entitled `Response to [833r]'s additional question about $r$'s factorization` and `Theoretical Justification for Mean-field Factorization of $r$ within WGF framework.`
---
Refs:
[1]. "Stein variational gradient descent: A general purpose bayesian inference algorithm.".
[2]. "Ab initio solution of the many-electron Schrödinger equation with deep neural networks."
---
Thank you for taking the time to read our rebuttal. **We are committed to implementing thorough revisions based on your suggestions. Considering your busy schedule, please feel no obligation to respond to this message.**
Best regards,
Authors
---
Rebuttal Comment 2.1:
Title: Convergence Analysis of the "Alternating Style Algorithm" (Perhaps Algorithm 4)
Comment: In addition, we think your inquiry, `Also, will the proposed alternating style algorithm converge?`, may concern Algorithm 4's convergence.
To begin, we would like to thank you for your inquiry regarding the convergence of the proposed alternating style algorithm, particularly in relation to Algorithm 4. In response, we have expanded Appendix E.3, which initially focused on the convergence of the "impute" part of Algorithm 4. In this comment chat window, we present the proof of convergence for the "estimate" part of Algorithm 4, specifically focusing on the loss function $\mathcal{L}\_{\text{DSM}}$, parameterized by the neural-network parameters $\theta$. We divide our proof into two main parts:
* Monotonic Decreasing of $\mathcal{L}\_{\text{DSM}}$:
- Analyzing the evolution of $\mathcal{L}\_{\text{DSM}}$ over time $\tau$, we consider the differential equation: $\frac{\mathrm{d}\mathcal{L}\_{\text{DSM}}}{\mathrm{d}\tau} = \langle \nabla_{\theta}\mathcal{L}\_{\text{DSM}}, \frac{\mathrm{d}\theta}{\mathrm{d}\tau} \rangle$, where $\langle \cdot, \cdot \rangle$ denotes the inner product.
- The parameter $\theta$ is updated via a gradient descent-like algorithm: $\theta_{t+1} = \theta_t - lr \nabla_{\theta}\mathcal{L}\_{\text{DSM}}$. Taking the limit as $lr$ approaches zero, we obtain: $\lim_{lr \rightarrow 0} \frac{\theta_{t+1} - \theta_t}{lr} = \frac{\mathrm{d}\theta}{\mathrm{d}\tau} = -\nabla_{\theta}\mathcal{L}\_{\text{DSM}}$.
- Substituting $\frac{\mathrm{d}\theta}{\mathrm{d}\tau} = -\nabla_{\theta}\mathcal{L}\_{\text{DSM}}$ into the differential equation $\frac{\mathrm{d}\mathcal{L}\_{\text{DSM}}}{\mathrm{d}\tau} = \langle \nabla_{\theta}\mathcal{L}\_{\text{DSM}}, \frac{\mathrm{d}\theta}{\mathrm{d}\tau} \rangle$ provides: $\frac{\mathrm{d}\mathcal{L}\_{\text{DSM}}}{\mathrm{d}\tau} = -\langle \nabla_{\theta}\mathcal{L}\_{\text{DSM}}, \nabla_{\theta}\mathcal{L}\_{\text{DSM}} \rangle \leq 0$, indicating that $\mathcal{L}\_{\text{DSM}}$ monotonically decreases over time.
- In summary, the evolution of $\mathcal{L}\_{\text{DSM}}$ is monotonic decreasing along time $\tau$.
* Lower-Bounded Property of $\mathcal{L}\_{\text{DSM}}$:
Reflecting on the definition of $\mathcal{L}\_{\text{DSM}}$, which is: $\mathcal{L} \_{\text{DSM}} \coloneqq\frac{1}{2}\mathbb{E}\_{q_{\sigma}(\hat{\boldsymbol{X}}^{(joint)}\vert {\boldsymbol{X}}^{(joint)})}[\Vert \nabla_{{\hat{\boldsymbol{X}}}^{(joint)}}\log\hat{p}({\boldsymbol{X}}^{(joint)}) - \nabla_{\hat{\boldsymbol{X}}^{(joint)}} \log{q_{\sigma}(\hat{\boldsymbol{X}}^{(joint)}\vert {\boldsymbol{X}}^{{(joint)}})} \Vert^2] $, we confirm that $\mathcal{L}\_{\text{DSM}} \geq 0$.
In conclusion, similar to Proposition E.1 in Section E.3 of the appendix, when the learning rate $lr$ is sufficiently small, the "estimate" part may converge. Furthermore, the optimal parameter may reach an equilibrium point where $\nabla_{\theta}\mathcal{L}\_{\text{DSM}} = 0$.
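The two-part argument above (monotonic decrease plus a lower bound) can be checked numerically on a stand-in loss; the quadratic below is only a placeholder for $\mathcal{L}_{\text{DSM}}$, and the names are illustrative:

```python
import numpy as np

def loss(theta):
    """Stand-in for L_DSM: smooth and lower-bounded by 0."""
    return 0.5 * float(np.sum(theta ** 2))

def grad(theta):
    return theta  # gradient of the quadratic stand-in

theta = np.array([3.0, -2.0])
lr = 0.1  # a small learning rate, mirroring the lr -> 0 limit in the proof
history = [loss(theta)]
for _ in range(100):
    theta = theta - lr * grad(theta)   # theta_{t+1} = theta_t - lr * grad
    history.append(loss(theta))

# Monotonic decrease plus the lower bound 0 imply convergence of the loss
assert all(a >= b for a, b in zip(history, history[1:]))
assert history[-1] >= 0.0
print(history[0], history[-1])
```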
---
We will add the abovementioned contents to our revised manuscript. Thank you once again for your comments; we hope this derivation further addresses your concern about convergence.
---
Reply to Comment 2.1.1:
Comment: Dear reviewer EhUg,
Since the discussion period will end in a few hours, we will be online waiting for your feedback on our rebuttal, which we believe has fully addressed your concerns.
We would highly appreciate it if you could take into account our response when updating the rating and having discussions with AC and other reviewers.
Thank you so much for your time and efforts. Sorry for our repetitive messages, but we're eager to ensure everything is addressed.
Authors of Submission 1850 | Rebuttal 1:
Rebuttal: ## Overall Response
We are encouraged by the reviewers' acknowledgment of the strengths in our paper, such as its robust performance [Gtbe] [EhUg] [833r] [fs2D] [9XXU], comprehensive experimentation [Gtbe] [fs2D] [9XXU], and clear, concise presentation [Gtbe] [fs2D] [9XXU]. However, we also recognize that there are common concerns raised by some reviewers regarding the 1). motivation for using Gradient Flow (GF) to discourage diversification [833r] [fs2D], 2). the assumptions related to the decomposition $r(\boldsymbol{X}^{(joint)})\coloneqq r(\boldsymbol{X}^{(miss)})p(\boldsymbol{X}^{(obs)})$ [EhUg] [833r], and 3). the adequacy of our experimental validation [Gtbe] [fs2D][9XXU]. To address these concerns, we offer the following clarifications:
## Motivations & Contributions:
The motivation behind KnewImp is to address specific challenges in diffusion model (DM)-based missing data imputation (MDI) tasks:
* **Inconsistency between DM's goals and MDI objectives:** As generative models, DMs inherently aim to diversify data with implicit regularization terms, which intuitively conflicts with MDI requirements that often demand precise values.
* **Design of Mask Matrix for Model Training:** DMs require a mask matrix to formulate conditional distributions [1], and the design of mask matrix is crucial to the imputation accuracy.
Based on this, our major contributions are summarized as follows:
* **Introduction of GF in DM-based MDI:** We conceptualize MDI as an optimization problem and employ GF, initially designed for functional optimization [2], to elucidate the shortcomings of DMs in MDI, particularly how DMs inadvertently promote diversification through terms like entropy and variance (Section 3.1).
* **Novel and Effective Cost Functional:** We introduce an effective cost functional that incorporates the negative regularization term, with a rigorously derived implementation strategy (Section 3.2).
* **Sidestepping Mask Matrix Design through Joint Distribution Modeling:** We demonstrate that within the GF framework, it is possible to circumvent the traditional mask matrix design and instead utilize a joint distribution modeling approach (Section 3.3).
## Weaknesses & Questions Response:
### 1). Diversification Discouraging [833r] [fs2D]:
* In Section 1.1 of our attached PDF, we analyze two distributions: uniform and normal. The uniform distribution exhibits higher entropy, aligning with the generative models' goal where **each value within the support is equally probable** (**maximum entropy may result in a uniform distribution [3]**). This characteristic, however, **does not align with the objectives of MDI**, where specific values are often required.
* Building on this analysis, Section 1.2 of our PDF compares KnewImp's performance when optimizing a cost functional related to a specified Dirichlet distribution. By gradually adjusting the weight of the negative entropy term ($\lambda$) of KnewImp from negative to positive, we demonstrate that increasing accuracy in MDI tasks may require a reduction in diversification.
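The entropy comparison in the first bullet can be reproduced with closed-form differential entropies; the support $[0,1]$ for the uniform and $\sigma=0.1$ for a narrow normal inside the same range are illustrative choices:

```python
import math

# Differential entropies in nats
h_uniform = math.log(1.0)  # uniform on [a, b]: log(b - a); here log(1) = 0

sigma = 0.1
# Normal: 0.5 * log(2 * pi * e * sigma^2), roughly -0.88 for sigma = 0.1
h_normal = 0.5 * math.log(2.0 * math.pi * math.e * sigma ** 2)

print(h_uniform, h_normal)
assert h_uniform > h_normal  # the flatter distribution has higher entropy
```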
### 2). Assumption of $r(X^{(joint)}) \coloneqq r(X^{(miss)})p(X^{(obs)})$ [EhUg] [833r]:
* We treat $r$ as a proposal distribution **akin to the approximation distribution within the variational inference context**, which is an ansatz meant to be optimized based on a functional related to the distribution $p(X^{(joint)})$.
* Thus, **it is crucial to verify the application of $p(X^{(joint)}) \stackrel{?}{=} p(X^{(miss)}) p(X^{(obs)})$, rather than $r(X^{(joint)}) \stackrel{?}{=} r(X^{(miss)}) r(X^{(obs)})$**. Fortunately, **KnewImp does not assume $p(X^{(joint)}) = p(X^{(miss)}) p(X^{(obs)})$ (i.e. $p(X^{(joint)}) \neq p(X^{(miss)}) p(X^{(obs)})$)**. Moreover, the mean-field assumption in variational inference, where $r(X^{(joint)}) = r(X^{(miss)}) r(X^{(obs)})$, is practical according to references represented by [4].
* Finally, **$X^{(obs)}$ remains unchanged in MDI, which indicates that $r(X^{(obs)})$ and $p(X^{(obs)})$ are identical and constant**; thus we can replace $r(X^{(obs)})$ with $p(X^{(obs)})$.
### 3). Extra Experiments [Gtbe] [fs2D][9XXU]:
* We added results from ReMasker model [5] according to reviewer [Gtbe]:
|Dataset & Metric| BT MAE| BT Wass|BCD MAE|BCD Wass| CC MAE|CC Wass|CBV MAE|CBV Wass|IS MAE|IS Wass|PK MAE|PK Wass|QB MAE| QB Wass |WQW MAE|WQW Wass|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
| MAR | 0.55| 0.43*| 0.50*|1.53* | 0.60*| 0.43*| 0.50*|0.40*|0.58*| 2.02*|0.53|1.31| 0.64*| 3.75*|0.53|0.59|
| MCAR| 0.44| 0.15| 0.37*|1.56*| 0.55*| 0.37 | 0.56*| 0.63*|0.55*| 4.10*|0.47*|1.51| 0.47*| 4.14*|0.56*|0.78|
| MNAR| 0.53| 0.26 | 0.42*|2.08*| 0.54*| 0.39*| 0.58*| 0.66*|0.50*| 3.57* | 0.56*| 2.59 |0.50| 5.53*|0.58|0.82|
* We have included extra experimental results in Section 2.1 of our attached PDF as suggested by reviewer [fs2D], focusing on a downstream classification task similar to the experiments depicted in Fig. 5 of the TDM paper [6], using the `cross_val_score` function from the sklearn package (the rest of the settings are same as TDM paper).
* We have incorporated additional experimental results concerning the performance of KnewImp across various data distributions in Section 2.2 of the attached PDF, as suggested by reviewer [9XXU].
---
In all, thank you for considering our responses, and we look forward to any further feedback that might help refine our work.
---
Refs
[1]. "CSDI: Conditional score-based diffusion models for probabilistic time series imputation." NeurIPS'21
[2]. "{Euclidean, metric, and Wasserstein} gradient flows: an overview." Bulletin of Mathematical Sciences
[3]. "Nonlinear Stein variational gradient descent for learning diversified mixture models" ICML'19
[4]. "Variational algorithms for approximate Bayesian inference", Doctoral Thesis'03.
[5]. "ReMasker: Imputing Tabular Data with Masked Autoencoding." ICLR'24.
[6]. "Transformed distribution matching for missing value imputation." ICML'23.
Pdf: /pdf/c98a89151cca5000433babb539ef59f8edc32eba.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper introduces Kernelized Negative Entropy-regularized Wasserstein Gradient Flow Imputation (KnewImp), a novel approach for imputing missing data in numerical tabular datasets. The proposed method addresses two significant challenges in diffusion model-based missing data imputation (MDI): inaccurate imputation and difficult training. KnewImp integrates the Wasserstein gradient flow (WGF) framework with a negative entropy-regularized (NER) cost functional to enhance imputation accuracy and simplify the training process by eliminating the need for complex mask matrix designs. The method's efficacy is demonstrated through extensive experiments, showing superior performance compared to state-of-the-art imputation techniques.
Strengths: 1. The paper presents a unique integration of diffusion models with the Wasserstein gradient flow framework, incorporating a novel negative entropy regularization to address specific challenges in missing data imputation.
2. The work is grounded in solid theoretical foundations, providing clear proofs and propositions that establish the effectiveness and validity of the proposed approach.
3. Extensive experiments on real-world datasets validate the method's superiority, with significant improvements in both mean absolute error (MAE) and Wasserstein distance (Wass) metrics.
Weaknesses: 1. The theoretical concepts and mathematical formulations presented are quite dense and may be challenging for readers not well-versed in advanced optimization and diffusion models. Simplifying these explanations or providing more intuitive descriptions could improve accessibility.
2. While the method is compared to several models, including a wider range of baseline methods, particularly more recent advancements in diffusion-based MDI (e.g., [1,2]), would provide a more comprehensive evaluation.
3. The paper could benefit from a more detailed discussion on the convergence properties of the proposed algorithm, including potential limitations and scenarios where the method might struggle.
[1] Du, Tianyu, Luca Melis, and Ting Wang. "ReMasker: Imputing Tabular Data with Masked Autoencoding." International Conference on Learning Representations, 2024.
[2] Zheng, Shuhan, and Nontawat Charoenphakdee. "Diffusion models for missing value imputation in tabular data." NeurIPS 2022 First Table Representation Workshop.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Can the authors elaborate on how the proposed method could be adapted or extended to handle other types of data, such as categorical or mixed-type datasets?
2. What are the computational requirements for implementing KnewImp in practice, and how does it scale with larger datasets?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful advice and valuable questions; we will respond to your concerns point by point.
## Weaknesses
> W1: Dense Mathematical Formulations
* We acknowledge that the theoretical concepts and mathematical formulations in our manuscript could be challenging for readers not extensively familiar with advanced optimization and diffusion models.
* To enhance the accessibility and readability of our work, we will include a new subsection in the appendix of our revised manuscript. This subsection will detail the derivation and meaning of each equation, ensuring that the mathematical underpinnings are more comprehensible.
> W2: Extra Baselines
* **For CSDI_T [1]**:
- Its primary contribution involves the introduction of one-hot encoding, analog bits encoding, and feature tokenization to manage categorical variables in MDI tasks on tabular data. Our research, however, **specifically focuses on numerical tabular data and assumes the absence of categorical data**. Due to this distinction, we did not consider CSDI_T as a relevant baseline for our study.
- Nevertheless, the categorical feature extraction module in CSDI_T, once removed, essentially transforms it into the CSDI model, which we have indeed utilized and referenced in our manuscript.
* **For ReMasker [2]**:
- We have included an additional baseline model named ReMasker in our comparisons. Please see the common rebuttal chat window. We will add the results of this baseline model in the revised manuscript.
> W3: Convergence
* We have included discussions on convergence **in Appendix E.3, with theoretical proofs and empirical validations** on all datasets in our manuscript.
* We will add a footnote in our revised manuscript to demonstrate this point.
## Questions:
>Q1: Mixed Type Data:
* The key to applying our approach to such datasets involves **decomposing the missing data distribution, $ r(\boldsymbol{X}^{({miss})}) $**, into a product of distributions for the dense and categorical components under a mean-field assumption common in variational inference [3]. Specifically, we express it as **$ r(\boldsymbol{X}^{({miss})}) = r(\boldsymbol{X}^{({miss, dense})})r(\boldsymbol{X}^{({miss, cate})}) $**.
* For the dense data component, we continue to employ our KnewImp method.
* For the categorical part, we can initially model it using a Dirichlet distribution, which naturally supports simplex spaces; the steps are summarized as follows.
- First, we can implement **mirror descent [4] with the operator $ \nabla_x\psi(x) = \log{x} $**, which maps the distribution’s support from the simplex $ \Delta^{\text{C}-1} $ (where $ \text{C} $ represents the number of categories) onto $\mathbb{R}^{\text{C}}$.
- Subsequently, we apply KnewImp in this transformed space ($\mathbb{R}^{\text{C}}$).
- After that, we can revert the distribution back to the simplex **using the inverse operator $[\nabla_x\psi(x)]^{-1} = \text{Softmax}(x)$.**
* Besides, our example in Section 1.2 of the attached PDF realizes this scheme, where the variables are constrained to a three-dimensional standard simplex.
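To make the round trip concrete, here is a minimal NumPy sketch of this mirror-descent scheme on the simplex; the function names, toy objective, step size, and iteration count are our own illustrative assumptions, not the implementation from the paper:

```python
import numpy as np

def softmax(y):
    """Inverse mirror map [grad psi]^{-1}: R^C -> simplex."""
    e = np.exp(y - y.max())
    return e / e.sum()

def mirror_step(x, grad, lr=0.5):
    """One mirror-descent step on the probability simplex.

    Maps x to R^C via the mirror map log(x), applies an ordinary
    Euclidean update there (where a method like KnewImp could run),
    then maps back to the simplex with softmax.
    """
    y = np.log(x) - lr * grad  # forward map + Euclidean update in R^C
    return softmax(y)          # back onto the simplex

# Toy usage: drive x toward a target distribution on the 3-simplex
# by following the gradient of f(x) = ||x - target||^2.
target = np.array([0.7, 0.2, 0.1])
x = np.ones(3) / 3
for _ in range(200):
    x = mirror_step(x, 2 * (x - target))
```

Here the Euclidean update is a plain gradient step purely for illustration; in the scheme sketched above, the update in $\mathbb{R}^{\text{C}}$ would instead come from KnewImp itself.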
>Q2: Requirement
* **Our computational requirements are given in lines 743 to 746**: `all experiments are conducted on a workstation equipped with an Intel Xeon E5 processor with four cores, eight Nvidia GTX 1080 GPUs, and 128 GB of RAM`, and we found that this configuration is sufficient for the WQW dataset with 4898 items.
* In addition, **we further provide a detailed time complexity analysis in Appendix E.2, both theoretically and empirically**, which reflects its scalability to larger datasets.
---
Thank you for reading our rebuttal! **We hope the above discussion will fully address your concerns about our work, and we would really appreciate it if you could be generous in raising your score.**
---
Refs
[1] "Diffusion models for missing value imputation in tabular data." NeurIPS'22 First Table Representation Workshop.
[2] "ReMasker: Imputing Tabular Data with Masked Autoencoding." ICLR'24.
[3] "Variational algorithms for approximate Bayesian inference", Doctoral Thesis'03.
[4] "Sampling with Mirrored Stein Operators" ICLR'22
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns. Since my issues have been resolved, I will maintain my positive score.
---
Rebuttal 2:
Comment: Dear Reviewer [Gtbe],
Thank you for your valuable feedback and encouraging comments, which have greatly motivated us throughout the rebuttal process. **Out of respect for your review efforts, we would like to summarize how we have addressed your inquiries**:
> Clarifying the Paper:
* In response to insights gained from discussions with other reviewers, we recognize that some of our mathematical formulations may appear dense. We plan to include metaphors to better illustrate concepts like our $r$ and $p$ dynamics [EhUg, 833r], explain transformations such as Monte Carlo estimation [EhUg], and discuss the selection of RKHS [9XXU] to **enhance the manuscript's readability**.
> Incorporating Additional Baseline Models:
Following your recommendation, we have added the baseline model ReMasker. **We will detail its integration and relevance to the DMs used in our study, tailored to our specific scenarios** (numerical tabular data).
> Enhanced Discussion on Convergence, Limitations, and Scenarios:
- We will **augment our manuscript with a detailed proof of the convergence** for the "estimate" part, furthering our discussions with Reviewer [EhUg].
- **Additional experiments and analyses on toy case datasets will be included**, especially considering data properties like multi-modality, heavy tails, and skewed distributions as suggested by Reviewer [9XXU].
> Handling Mixed-Type Data
We plan to **outline strategies for managing mixed-type data in our future research directions**, building on our discussions with you.
> Computational Resources
We will clearly indicate the computational resources used in our study in the revised manuscript to ensure transparency according to your guidance.
---
Thank you once again for your supportive and constructive review. **We are grateful for your continued positive assessment. Given your busy schedule, please do not feel obliged to respond to this message**.
Sincerely,
Authors
Title: We are happy that your concerns are all addressed! Thank you for maintaining your positive score. | null | null | null | null | null | null |
MaVEn: An Effective Multi-granularity Hybrid Visual Encoding Framework for Multimodal Large Language Model | Accept (poster) | Summary: The paper presents a novel approach, termed MaVEn, which aims to enhance the performance of MLLMs in multi-image scenario by integrating discrete visual symbol sequences with traditional continuous representation sequences. This dual strategy is designed to bridge the semantic discrepancies between visual and textual information. The approach also incorporates a dynamic reduction mechanism for long-sequence continuous features, aiming to boost processing efficiency in scenarios involving multiple images.
Strengths: 1) The manuscript makes a significant contribution by proposing a hybrid model that combines both discrete and continuous data representations. This is a promising approach to mitigate the issues of semantic gaps in multimodal learning. The dynamic reduction mechanism for handling long visual sequences is also an innovative solution that could have broad applications in the field.
2) The experiments conducted are robust and comprehensive, as highlighted in section 4.4, is crucial in demonstrating the effectiveness of the visual hybrid encoding. This section effectively showcases how MaVEn performs under different scenarios, providing empirical evidence of its versatility and reliability.
3) The paper is generally well-written and organized. The methodology section is well-articulated and provides a clear explanation of how MaVEn operates.
Weaknesses: To solidify the claims regarding the efficacy of the discrete visual symbol sequences used in MaVEn, it would be recommended to conduct experiments comparing the performance of these different discrete representation techniques, such as VQGAN[1] or VQVAE[2].
[1]Taming Transformers for High-Resolution Image Synthesis.
[2] Neural Discrete Representation Learning.
Technical Quality: 3
Clarity: 3
Questions for Authors: The manuscript presents a well-structured and insightful study. I currently have no further questions.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude for your insightful comments and the recognition of our work. We have carefully considered your suggestion and provide a detailed response below.
### 1. it would be recommended to conduct experiments comparing the performance of these different discrete representation techniques, such as VQGAN or VQVAE.
Thank you very much for your insightful suggestion. We fully acknowledge the importance of validating the efficacy of the discrete visual symbol sequences utilized in MaVEn by comparing them with other discrete representation techniques, such as VQGAN and VQVAE. These methods are indeed prominent in the field and have demonstrated significant success in various applications.
In response to your recommendation, we have conducted additional experiments to compare the performance of MaVEn using different discrete representation techniques, including VQGAN and VQVAE. Additionally, we explored the potential of combining these techniques to further expand the vocabulary of the large language model (LLM).
Below are the results of our comparative experiments:
| Visual Discrete Representation | Code Book Size | DemonBench Ave Score | SEED-Bench Acc (video) | VQA | MME | MMBench Test |
|---|---|---|---|---|---|---|
| SEED | 8192 | 39.0 | 42.1 | 79.1 | 1530 | 65.2 |
| VQGAN | 1024 | 37.1 | 39.2 | 77.3 | 1441 | 61.3 |
| VQVAE | 1024 | 36.6 | 38.4 | 76.3 | 1380 | 60.2 |
| SEED+VQGAN | 9216 | 39.7 | 42.8 | 79.5 | 1521 | 65.8 |
Our conclusions are as follows:
1. Using SEED as the discrete visual token yields better performance compared to VQGAN and VQVAE.
2. Combining different discrete tokenizers can enhance the model's performance. We believe this improvement is due to the different visual semantic information encoded by the distinct codebooks. By integrating multiple codebooks, we achieve a richer and more comprehensive visual semantic representation, which in turn helps improve the model's overall performance.
We appreciate your valuable feedback, which has significantly contributed to enhancing the robustness of our findings. We look forward to any further comments or suggestions you may have.
Thank you for your thoughtful consideration.
---
Rebuttal Comment 1.1:
Title: Thank you very much for your reply.
Comment: My question has been resolved very well, and I am very satisfied with the experiments you added. In particular, the added experiments demonstrate that using multiple different discrete visual encodings simultaneously can further improve model performance, which is a very important discovery. I would like to follow suit in my own work.
Additionally, I have read the comments from the other reviewers as well as your reply, which has raised a new question for me. I noticed that in your reply, you mentioned that you believe the higher semantic granularity of discrete visual tokens helps the MLLM better understand the semantics of images. But I also noticed that you used an autoregressive generation task for discrete image tokens in the second stage of training. Is the use of this task, which focuses on discrete image tokens, the key to improving model performance? I would be very grateful if you could address this question and help me further improve my score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer HJUS
Comment: We sincerely appreciate your valuable suggestions and are honored by your recognition of our experiments. We will incorporate the aforementioned experiments and conclusions into the revised manuscript. Regarding the second issue you raised in your comments, we address your concerns from two perspectives:
1. **The semantic distribution of visual discrete tokens is closer to the text semantic distribution, aiding the alignment of image-text semantics by MLLM, which is the core reason for the model's performance improvement**: We encourage you to refer to Figure 6 in our paper and Figure 3 in the PDF provided in this rebuttal. We have conducted a visual analysis of the attention weight distribution of all features during the decoding phase of the LLM. Additionally, we clustered the discrete visual tokens. From the quantified experiments mentioned above, we observed that the semantics of visual discrete tokens are very close to the text semantics. Recent works [1][2][3] suggest that the most crucial step for multimodal large models is the alignment of image-text semantics, and visual representations based on discrete visual tokens are evidently easier to align with textual representations. Furthermore, the ablation study presented in Table 4 of our paper also supports this point where we found that the introduction of discrete tokens improved the model's performance on DemonBench from 30.66 to 39.51.
2. **We only used autoregressive image generation in the second phase of model training. The purpose of autoregressive image generation is solely to better aid the LLM in learning the embedding representation of visual discrete tokens, which is an essential step in introducing visual discrete representations**. During our training process, we only applied autoregressive image generation in the second phase of model training. Please refer to Figure 3 in our paper, where we show that only the embedding layer of the LLM was trained in this phase. Therefore, in our work, the use of autoregressive image generation is solely to better aid the LLM in learning the embedding representation of visual discrete tokens, helping the LLM understand visual discrete tokens. This is an essential step in introducing visual discrete representations. In fact, in previous experiments where we attempted to learn visual discrete tokens without autoregressive image generation, our results were as follows:
| w/ Visual Discrete Tokens | w/ Image Regressive Generation | DemonBench Performance |
|---------------------------|-------------------------------|------------------------|
| ✓ | ✓ | 39.0 |
| ✓ | × | 32.14 |
| × | × | 30.66 |
We found that without autoregressive image generation, the embedding layer of the MLLM model struggled to effectively learn the embedding representation of visual discrete tokens, thereby limiting performance improvement on multiple images. Only by incorporating autoregressive image learning can the MLLM embedding layer better learn the embedding representation of discrete tokens and enhance performance.
We hope our response adequately addresses your concerns.
[1] Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning. NIPS 2022
[2] InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks. CVPR2024
[3] Visual Instruction Tuning. NIPS 2023 | Summary: This paper presents a Multi-granularity Visual Encoding framework (MaVEn) for better multi-image reasoning. MaVEn combines discrete visual symbols and continuous representation sequences, as well as designing a dynamic reduction mechanism to efficiently and effectively process and interpret information from multiple images. Experimental results demonstrate its effectiveness in various multi-image benchmarks.
Strengths: 1. The paper is well-structured and clear in its presentation.
2. I am highly impressed by the author's methodological design, particularly the Multi-Granularity Hybrid Encoding component, which I believe makes valuable and insightful contributions to the research community.
3. The thorough experiments and visualizations presented in the paper effectively demonstrate the efficacy of the proposed method.
Weaknesses: 1. In Stage 3, would the adjustments to the Visual Projector affect the performance of the Patch Selector, since that component has been frozen?
2. The steps of training the patch selector using Grounded SAM annotated data seem a bit redundant. Directly selecting patches based on the similarity between the patch and the discrete token may be a simpler and more effective approach.
3. The role of the continuous tokens has not been well validated. In Figure 6, the attention seems to barely focus on the continuous tokens. Does this suggest that the continuous tokens have little impact on the performance, and they could potentially be discarded? Is it possible that the current evaluation design is unable to fully reflect the role of the continuous tokens?
4. In line 202, '567' should be '576'.
Technical Quality: 2
Clarity: 3
Questions for Authors: Overall, I appreciate this paper, but some of the concerns raised in the 'Weaknesses' part, especially Q3, have prevented me from giving a higher score.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### 1. Would the adjustments to the Visual Projector affect the performance of the Patch Selector?
The adjustments made to the Visual Projector in Stage 3 will not affect the performance of the Patch Selector. As shown in Figure 2(b) in our paper, the process is designed such that **we first use the Patch Selector to select the important patch tokens. Only after this selection has been made, we then input the selected patch tokens into the Visual Projector.** This sequential process ensures that the modifications made to the Visual Projector do not influence the operation of the Patch Selector.
### 2. Directly selecting patches based on the similarity between the patch and the discrete token may be a simpler and more effective approach.
Thank you for your insightful suggestion.
1. We have indeed experimented with a similar approach, where we directly select patches based on the attention weights between the <EOS> token in the discrete visual token sequence and all other continuous visual tokens. However, as shown below, the results were not satisfactory. This indicates that directly using attention weights may not be an effective way to measure the similarity between discrete tokens and continuous tokens.
| Token Selection Method| DemonBench Ave Score| VQA | MMbench |
| --------|----------|-------|-----|
| Selection based on attention weights | 33.8 | 74.5 | 62.1 |
| Selection with Patch Selector | 39.0 | 79.1 | 65.2 |
2. On the other hand, Grounded SAM is designed to locate relevant visual regions based on textual semantics, and given the fact (as shown in Figure 3 in the rebuttal PDF) that, for paired image-text samples, the semantics encoded by the text captions are similar to the semantics encoded by the visual discrete tokens, we decided to use Grounded SAM to generate pseudo-labels. This allows the patch selector to learn the similarity between visual discrete tokens and patches more effectively.
Overall, we appreciate your suggestion and believe that the use of Grounded SAM to train the patch selector, although seemingly redundant, is a necessary step to ensure the effectiveness of our model.
### 3. The role of the continuous tokens has not been well validated.
Your comments are greatly appreciated, and we will address your concerns in two parts:
1. **Clarification of Figure 6 and the importance of discrete tokens**: We apologize for any confusion caused by Figure 6. In fact, the text question in the case presented in Figure 6 is "What is the similarity between Figure 1 and Figure 2?" This case was chosen to highlight the challenges faced by recent MLLMs, particularly in multi-image contexts, **where the semantic granularities of the continuous visual representations and text tokens differ significantly.** In this specific case, the American flag represents a higher-dimensional, coarse-grained semantic entity that continuous visual tokens struggle to effectively encode alone. This results in the MLLMs paying less attention to continuous visual tokens during the decoding phase. However, upon introducing discrete visual tokens, which align more closely in semantic granularity with text tokens, the MLLM was able to better grasp the high-level semantics from the discrete tokens, thereby focusing more attention on these tokens and enhancing its ability to establish semantic associations across multiple images.
- To further verify the semantics of the discrete visual tokens that the MLLM focuses on, as illustrated in the figure 3 within the rebuttal PDF, we collected images that contain the discrete visual tokens targeted by the LLM. We discovered that these images consistently feature the American flag.
2. **The Role of Continuous Tokens: Encoding Fine-Grained Visual Details**: We are not suggesting that continuous visual tokens are useless. In fact, continuous visual tokens and discrete tokens are complementary and indispensable. Continuous tokens encode a large amount of fine-grained semantic information in the image, so when the model faces scenarios that require understanding fine-grained details in the image (where discrete visual tokens often do not contain this information), we actually need continuous visual tokens. To better validate this point, we conducted the following experiment:
- We randomly collected 500 images from the COCO dataset and tasked a GPT-4 model with generating questions about detailed object information in each image (e.g., asking about the shape, color, number, and size of certain objects), along with four different options and the correct answer. We then tested the performance on this dataset using MaVEn with only discrete visual tokens, only continuous visual tokens, and both. As shown in the table below, the performance of the model using only discrete visual tokens was very poor, while the performance of the model using only continuous tokens was very close to that of the model using both, which demonstrates the importance of continuous visual tokens for understanding detailed image information.
- As shown in Figure 1 in the PDF submitted with this rebuttal, we have also visualized the attention distribution of MaVEn over both discrete and continuous visual tokens when the model is asked about details of the image (e.g., "what is the color of the dog in this image?"). We found that MLLMs also pay attention to continuous visual tokens, which suggests that the continuous visual tokens provide fine-grained detail information for our model.
| Model | w/ Continuous token | w/ Discrete token | Accuracy |
|-------|---------------------|-------------------|----------|
| MaVEN | × | ✓ | 35.2 |
| MaVEN | ✓ | × | 69.3 |
| MaVEN | ✓ | ✓ | 70.5 |
### 4. In line 202, '567' should be '576'.
We appreciate your attention to detail and have corrected this error in the revised version of the paper.
---
Rebuttal 2:
Comment: Dear Reviewer TdkD,
First and foremost, please allow me to extend our deepest appreciation for the time and effort you have devoted to reviewing our paper.
Moving forward, as the discussion phase is approaching its end, we are confident that we have comprehensively addressed the concerns you raised. We would greatly appreciate it if you could take a moment to review our responses. Your insights are important to us, and we are eager to hear your thoughts on the revisions we have made.
Thank you once again for your attention and assistance.
---
Rebuttal Comment 2.1:
Comment: Thanks for your response, which resolved most of my concerns. I would increase the score.
---
Reply to Comment 2.1.1:
Comment: We are truly grateful for the time and effort you invested in reviewing our paper and for your thoughtful feedback. Your insights have been invaluable, and we are glad that our clarifications effectively addressed your concerns.
Best regard. | Summary: This paper introduces MaVEn, a Multi-granularity Visual Encoding framework that enhances Multimodal Large Language Models (MLLMs) in multi-image reasoning by combining discrete visual symbol sequences with traditional continuous representation sequences. Experimental results show that MaVEn significantly improves MLLMs' understanding in complex multi-image scenarios and boosts performance in single-image contexts.
Strengths: 1. The paper is well-organized, from problem, motivation, approach and experimental validation.
2. The proposed innovative multi-granularity approach includes 1) hybrid visual encoding and 2) dynamic reduction mechanism. The hybrid visual encoding captures both coarse-grained semantic concepts and fine-grained features, effectively bridging the semantic gap between visual and textual data.To enhance processing efficiency, a dynamic reduction mechanism is proposed, which selectively reduces long-sequence continuous features. This approach maintains essential information while reducing computational overhead.
3. The paper validates MaVEn’s effectiveness using several benchmarks, including DEMONBench and SEED-Bench, which encompass multi-image reasoning and video understanding tasks. MaVEn achieves superior performance compared to state-of-the-art models like LLaVA1.5, Otter, and others.Besides multi-image tasks, MaVEn also performs well in single-image benchmarks such as Visual Question Answering (VQA) and MMBench, showcasing its versatility.
Weaknesses: see questions.
Technical Quality: 2
Clarity: 2
Questions for Authors: I am not an expert in this domain. But I do have two questions for authors:
1. Why not directly operate on continuous features to reduce feature redundancy since the approach doesn't learn from coarse to fine, like gating?
2. It's not quite clear how discrete tokens help non-single image understanding. The obvious advantage of using discrete tokens is to improve efficiency only?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your recognition of our work and for your insightful questions.
### 1. Why not directly operate on continuous features to reduce feature redundancy ?
This is an excellent question. Directly reducing redundancy in continuous features can be approached in two main ways:
1. **Token Selection with global semantic:** This could be done by using the attention weights from the image's global semantic token (e.g., the <EOS> token) to select important continuous visual tokens [1], [2].
2. **Merging Visual Tokens:** This can be done using latent queries (e.g., InstructBLIP[3] used q-former to extract the fixed length visual continuous sequence) or convolutional network (e.g., QwenVL[4]) to merge long sequences of continuous visual tokens.
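As a concrete illustration of approach (1), attention-based token selection reduces to keeping the top-k continuous tokens ranked by their attention weight from a global token. The following NumPy sketch is hypothetical (the function name, shapes, and keep ratio are our assumptions), not the actual implementation of the cited methods:

```python
import numpy as np

def select_tokens_by_global_attention(tokens, attn_weights, keep_ratio=0.25):
    """Keep the continuous visual tokens that receive the highest
    attention weight from a global semantic token (e.g., <EOS>).

    tokens:       (N, D) continuous visual token features
    attn_weights: (N,) attention weights from the global token
    """
    k = max(1, int(len(tokens) * keep_ratio))
    keep = np.argsort(attn_weights)[-k:]  # indices of the top-k tokens
    return tokens[np.sort(keep)]          # preserve the original order

# Toy usage with random features and attention weights.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4))
weights = rng.random(8)
kept = select_tokens_by_global_attention(feats, weights)
```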
We have experimented with these methods as per your suggestion, and the results are as follows:
| Model | Method | DemonBench Ave Score| VQA | MMbench |
|-------|--------|----------|-------|-----|
|MaVEn | Token Selection with global semantic| 33.8 | 74.5 | 62.1 |
| MaVEn | Token Merging based on Q-former | 29.4 | 71.1 | 54.2 |
| MaVEn | Our Method | 39.0 | 79.1 | 65.2 |
We found that these methods were less effective than our proposed approach, which combines discrete visual tokens with continuous tokens. From these results, we have the following inferences:
1. Compared with the first type of methods, our method is similar to them but is more efficient and accurate because our method utilizes coarse-grained discrete visual features and fine-grained continuous features to encode complementary information. In our method, discrete visual features capture high-level, coarse-grained information (e.g., "snowman," "American flag" as shown in Figure 6 in our paper), while continuous features capture fine-grained details of the image. In contrast, traditional methods based on global image semantic representations for token selection lack high-dimensional semantic guidance, resulting in lower accuracy in token selection.
2. The second type of method, which merges visual information, compresses the data and loses important information, thereby impairing the MLLM's understanding of the image.
To further illustrate the superiority of our method over traditional token selection methods, we conducted an experiment. We randomly selected 500 images from the COCO dataset and used Grounded SAM to segment relevant regions, based on the textual semantics of the image captions, as ground truth. We then measured, for both approaches, the accuracy with which the selected 20%, 40%, 60%, and 80% of tokens hit the ground-truth region. As shown in the table below, our method significantly outperformed the global-visual-semantic-based token selection method, demonstrating its effectiveness.
| Token Selection Method | 20% | 40% | 60% | 80% |
|------------------------|-----|-----|-----|-----|
| Token Selection with global semantic | 74.8 | 66.3 | 61.1 | 52.3 |
| Our Proposed Method | 93.4 | 90.5 | 84.4 | 79.4 |
### 2. It's not quite clear how discrete tokens help non-single image understanding. The obvious advantage of using discrete tokens is to improve efficiency only?
Thank you for your insightful question.
1. To better address your concern, we would like to draw your attention to Figure 6 of our paper, which visualizes the Average Attention Weights with Only Continuous Visual Tokens and the Average Attention Weights with Multi-granularity Hybrid Visual Encoding. The text question in the case presented in Figure 6 is, "What is the similarity between Figure 1 and Figure 2?" This case highlights the challenges faced by recent MLLMs, particularly in multi-image contexts, where **the semantic granularities of continuous visual representations and text tokens differ significantly, making it difficult for MLLMs to understand and capture high-dimensional semantic information from images.** In this specific case, the American flag represents a higher-dimensional, coarse-grained semantic entity that continuous visual tokens alone struggle to effectively encode. As shown in the visualization of Average Attention Weights with Only Continuous Visual Tokens in Figure 6, this results in the MLLMs paying less attention to continuous visual tokens during the decoding phase.
2. **Upon introducing discrete visual tokens, which align more closely in semantic granularity with text tokens, the MLLM was able to better grasp the high-level semantics from the discrete tokens.** This alignment allowed the model to focus more attention on these tokens, thereby enhancing its ability to establish semantic associations across multiple images.
3. Moreover, to further verify the semantics of the discrete visual tokens that the MLLM focuses on, as illustrated in the figure within the rebuttal PDF, we collected images that contain the discrete visual tokens targeted by the LLM. We discovered that these images consistently feature the American flag. Therefore, it can be inferred that the semantics of the corresponding discrete visual tokens are associated with the American flag.
**In summary, while discrete tokens do improve efficiency, their primary advantage lies in their ability to bridge the semantic gap between multi-image visual representations and text representations.** This alignment enhances the model's ability to understand high-level semantics and establish meaningful associations across multiple images, ultimately improving multi-image understanding.
### Reference
[1] *An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models.*
[2] *Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations.*
[3] *InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning*
[4] *Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities*
---
Rebuttal 2:
Comment: Dear Reviewer od3A,
Firstly, we would like to extend our sincere gratitude for the time and effort you have dedicated to reviewing our manuscript. Your insights and feedback are invaluable to us. As the discussion phase is nearing its conclusion, we believe we have addressed the concerns you raised in your review. We would be grateful if you could review our latest responses at your earliest convenience. Your further comments would be immensely helpful in refining our paper and moving forward in the review process.
Thank you once again for your attention and assistance.
---
Rebuttal Comment 2.1:
Comment: thanks for the rebuttal! I am now leaning towards weak accept.
---
Reply to Comment 2.1.1:
Comment: Thank you for taking the time to read and consider our rebuttal. We greatly appreciate your positive feedback and are pleased to learn that you are now leaning towards a weak accept. We value your expertise and the insights you have provided throughout the review process. Your constructive comments have helped us identify areas for improvement and have contributed to enhancing the quality of our work. | Summary: The paper introduces MaVEn, a framework designed to improve Multimodal Large Language Models (MLLMs) in understanding and reasoning across multiple images. Unlike current MLLMs, which are mainly focused on single-image interpretation, MaVEn integrates both coarse-grained semantic concepts and fine-grained details. This combination bridges the gap between visual and textual data, enhancing the model's ability to process multiple images. The framework also includes a mechanism for efficiently handling long sequences of features. Experiments show that MaVEn boosts performance in multi-image scenarios and also provides benefits for single-image tasks.
Strengths: 1. The concept is logical, and the paper is straightforward to read.
2. The concept of using both discrete and continuous visual tokens is intriguing.
3. I appreciate the author's use of figures 2 and 3, which help clarify the overall framework and training process.
Weaknesses: 1. In Tables 1, 2, and 3, the author does not compare some of the latest methods, such as mini-Gemini, MiniCPM, XComposer, and InternVL.
2. The paper primarily claims its main advantage is in multi-image tasks; however, the author only tests on DEMON and SEED benchmarks. Testing on additional multi-image benchmarks, such as MMBench-Video and MME-Video, would be more compelling.
3. Figure 6 is unclear. In the first figure, I see only two vertical lines on discrete tokens. Does this mean that only discrete tokens have attention weight?
4. The paper does not report the computational complexity compared to other methods.
5. Figure 5 shows that the selected tokens are mostly related to objects. Would it be beneficial to directly use Grounding Sam for token selection? A comparison might be interesting to see.
6. The training pipeline is complex and involves four stages. What is the training cost?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weakness section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### 1. Not compare some of the latest methods
Thank you for your valuable comments. We have reproduced the results of mini-Gemini, MiniCPM-V, and InternVL and compared MaVEn with them. It is important to emphasize that our experiments were based on LLaVA 1.5. To validate our method's effectiveness on the most current MLLMs, we further retrained MaVEn on the latest SOTA single-image MLLM, LLaVA Next, which we named MaVEn-NEXT. The results are as follows:
|Model|LLM|DemonBench Ave score|SEED Bench acc-video|VQA|MME|MMbench test|
|-|-|-|-|-|-|-|
|miniGemini|Vicuna 7b| 31.4 | 38.6| 65.2| 1523 |69.3 |
|InternVL-Chat-v1| Vicuna 7b | 30.3| 40.3| 79.3| 1525 |74.7 |
|MiniCPM-V v2.5| Llama3 8b |33.9 | 40.5| 80.3 |**1916** | **77.2**|
|LLaVA 1.5| Vicuna 7b |30.6 | 37.3 |78.5 | 1511| 64.3 |
|LLaVA Next| Llama3 8B|31.3|39.1 | 79.3 | 1591| 72.6 |
|MaVEn| Vicuna 7b| 39.0| 42.1|79.1 |1530 |65.2 |
|MaVEn-Next| Llama3 8B| **41.2** | **44.3** | **80.7** | 1623 | 75.5|
We observed that MaVEn retains superior multi-image performance, which demonstrates the effectiveness of our method. Moreover, MaVEn-NEXT shows improved single-image performance, reaching SOTA or comparable results.
### 2. Testing on additional multi-image benchmarks
Thank you for your suggestion.
Indeed, MMBench-Video and MME-Video were released after the NeurIPS submission deadline. Nevertheless, following your suggestion, we evaluated our model on MMBench-Video and MME-Video during the rebuttal period. During the evaluation, we extracted 8 frames from each video as input:
|Model |CP | FP-S | FP-C| HL| LR| AR | RR| CSR| TR|
|-|-|-|-|-|-|-|-|-|-|
|InternVL-Chat-v1.5-[8f] |1.26 |**1.51** |1.22|**1.01**|**1.25**| **0.88** | 1.40 | **1.48**| 1.28 |
|mPLUG-Owl2-[8f] |1.15| 1.34| 1.18 |0.99 |1.15| 0.63|1.33| 1.30| 1.03|
|Qwen-VL-Chat-[8f] |0.52| 0.44| 0.62|0.33| 0.53| 0.45 |0.59| 0.50| 0.36|
|Idefics2-8B-[8f]| 1.10| 1.23| 1.07| 0.89 |1.06 |0.77| 1.27| 1.41|1.11|
|MaVEn-[8f] | **1.32** |1.32 |**1.24**| 0.96|1.18| 0.83 | **1.45** |1.44| **1.31**|
We found that MaVEn performs better than MLLM models such as mPLUG-owl2, Qwen-VL, Idefics2-8B, and its performance is comparable to that of InternVL-Chat-1.5.
For the MME-Video results, please refer to Table 1 in the rebuttal PDF, where MaVEn also achieves competitive performance.
### 3. Figure 6 is unclear.
1. Clarification of Figure 6: We apologize for any confusion caused by Figure 6. In fact, the text question in the case is "What is the similarity between Figure 1 and Figure 2?" . **In this case, the American flag represents a coarse-grained semantic entity that continuous visual tokens struggle to effectively encode alone.** This results in the MLLMs paying less attention to continuous visual tokens during the decoding phase.
2. Importance of continuous visual tokens: When the model faces scenarios that require understanding of fine-grained details in the image, MLLMs actually need continuous visual tokens. To better validate this point, as shown in Figure 1 of the PDF submitted with this rebuttal, we have visualized the attention weight distribution of MaVEn when the model is asked about details of the image. We found that MLLMs also pay attention to continuous visual tokens.
3. Moreover, we conducted the following experiment: we randomly collected 500 images from the COCO dataset and tasked GPT-4o with generating questions about detailed object information in each image (e.g., asking about the shape, color, number, and size of objects). We then tested accuracy on this dataset using MaVEn with only discrete visual tokens, only continuous visual tokens, and both, as shown in the table below. We found that the performance of the model using only discrete visual tokens was very poor.
| Model | w/ Continuous token | w/ Discrete token | Accuracy |
|-------|---------------------|-------------------|----------|
| MaVEn | × | ✓ | 35.2 |
| MaVEn | ✓ | × | 69.3 |
| MaVEn | ✓ | ✓ | 70.5 |
### 4. Report the computational complexity compared to other methods.
In response, we have conducted a thorough evaluation:
1. Experiment Setting: Specifically, we set the input image size to 336x336 and the number of text input tokens to 24. We then measured the throughput and FLOPs of 10 inference steps for different numbers of images (ranging from 2 to 8) across several models: LLaVA 1.5, MaVEn, QwenVL, and InternVL. The inference batch size was 1, and inference was performed on a single 80G A100 GPU.
2. Experiment Results: As shown in Figure 2 of the submitted pdf file, we observed that as the number of images increases, MaVEn exhibits higher efficiency. This is primarily because MaVEn encodes a lower number of continuous visual tokens, which reduces the computational burden.
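The throughput measurement described above can be reproduced with a simple harness like the one below. This is a generic sketch, not the authors' benchmark code: `run_step` stands in for one model forward pass, and on GPU one would additionally synchronize the device (e.g., `torch.cuda.synchronize()`) before reading the clock.

```python
import time

def measure_latency(run_step, n_warmup=3, n_steps=10):
    """Average wall-clock seconds per inference step.

    `run_step` is any zero-argument callable performing one forward pass
    (a hypothetical stand-in for a real model call). Warmup iterations
    are excluded so cache/JIT effects are not measured.
    """
    for _ in range(n_warmup):
        run_step()
    start = time.perf_counter()
    for _ in range(n_steps):
        run_step()
    return (time.perf_counter() - start) / n_steps

# Toy stand-in for a model forward pass.
latency = measure_latency(lambda: sum(i * i for i in range(10_000)))
print(f"{latency * 1e3:.3f} ms/step")
```

FLOPs, by contrast, are usually obtained analytically or with a profiler rather than from wall-clock timing.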
### 5. Directly use Grounding SAM for token selection?
Very insightful question!
1. First, we want to highlight that Grounded SAM is engineered to segment image regions based on their corresponding textual information. However, if user instructions fail to provide explicit textual semantic cues for image segmentation (such as "What are the differences between image one and image two?"), it may lead to inaccurate region segmentation by Grounded SAM.
2. To fully address your concern, we tested the direct use of Grounded SAM for selecting image patches in both multi-image and single-image benchmarks. The results are as follows:
| Method | DemonBench Ave Score| VQA | MMbench |
|-------------------------|-|--------|---------|
| MaVEn-Patch Selector | 39.0 | 79.1 | 65.2 |
| MaVEn-Grounding SAM | 27.3 | 71.2 | 52.5 |
The results indicate a significant decline when using Grounded SAM to select image patches, highlighting the effectiveness of our patch selector.
### 6. Training cost?
We used 8×80G A100 GPUs for training, which took about 122 hours overall. We give more details in Table 2 of the rebuttal PDF.
---
Rebuttal 2:
Comment: Dear Reviewer 9o1r,
We would like to sincerely thank you for the time and effort you have dedicated to reviewing our paper. Your insights and feedback have been invaluable in helping us improve our work.
As the discussion phase is nearing its conclusion, we believe we have addressed your concerns in our recent replies and would greatly appreciate it if you could take a moment to review our responses. Your insights are important to us, and we are eager to hear your thoughts on the revisions we have made. Thank you once again for your time and consideration. | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude for your thorough and insightful reviews of our manuscript. We greatly appreciate the time and effort you have invested in providing valuable feedback and suggestions, which will undoubtedly help us improve the quality and clarity of our work.
We are pleased to note that the reviewers have acknowledged several strengths of our paper. Reviewer **9o1r** found *"The concept of using both discrete and continuous visual tokens is intriguing."* Reviewer **od3A** commended *"the innovative multi-granularity approach"* of our paper. Reviewer **TdkD** and Reviewer **HJUS** commended *"our work makes a valuable and insightful contribution to the research community."* Reviewers **od3A, TdkD** and **HJUS** think *our experiments effectively validate the effectiveness of the proposed method.*
We are encouraged by these positive comments and will strive to address the concerns and suggestions raised by the reviewers to further enhance our manuscript.
Moreover, we would like to kindly remind you that we have included the visualized experimental data and certain tabular data within the PDF document submitted for this rebuttal phase. We encourage you to review these materials. Your attention to these details is greatly appreciated.
Pdf: /pdf/5e55b09b159a85f883f7df1f5e984cfb6c9f1c86.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper proposes MaVEn, a novel multi-granularity hybrid visual encoding framework for multimodal large language models (MLLMs). MaVEn aims to improve MLLMs' capabilities in multi-image reasoning by combining discrete and continuous visual representations. The authors design a dynamic reduction mechanism to reduce the computational overhead of long continuous sequences. Experimental results demonstrate that MaVEn significantly improves performance on both multi-image and single-image benchmarks.
Strengths: This paper proposed a novel approach to combine the strength of both discrete and continuous visual representation as well as dynamic reduction mechanism.
In addition, the authors conduct comprehensive experiments on both multi-image and single-image benchmarks to demonstrate the effectiveness of MaVEn.
Weaknesses: While the MaVEn proposed a novel approach to combine the advantage of discrete and continuous visual info, the system can be a bit over complex for serving / maintenance in real applications.
In addition, the paper didn't discuss the computation complexity of MaVEn and compare it with other existing models.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. how does MaVEn compare with other multi-granularity approaches?
2. Could you elaborate more on the computation complexity and efficiency?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Need more discussion on the model's limitation and computational complexity
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Comment: ## 1. The system can be a bit over complex for serving & compare computation complexity with other existing models.
Thank you for your insightful comments. We appreciate your concern and would like to address it comprehensively.
We acknowledge that the training process of our model might appear complex. However, we have conducted a thorough analysis comparing our model to recent SOTA models, including QwenVL [1], InternVL [2], and LLaVA 1.5 [3], in terms of training data size, training GPU count, inference FLOPs, and latency.
To evaluate FLOPs and latency, we set the input image size to 336x336 with 24 text input tokens. We measured throughput and FLOPs over 10 inference steps using 4 input images, with an inference batch size of 1, on a single 80G A100 GPU.
*Table 1. Comparison with different MLLMs on training data size, training GPU count, FLOPs, latency, and benchmark performance.*
| Model | Training Data Size | Training GPU Count | Inference FLOPs | Inference Latency | DemonBench ave Score (Multi-image) | VQAv2 dev (Single Image) |
|-------|--------------------|-----------|-----------------|-------------------|--------------------------|------------------------|
| QwenVL 7B |1.5B | 640×A100(80G) | 212 | 2.22 |29.9 | 78.2|
|InternVL 7B | 6B+ | Unknown| 340 | 1.53 | 30.3 |**79.3** |
| LLaVA 1.5 7B |**1.2M** | **8×A100(80G)** | 193 | 2.46 | 30.6|78.5 |
| MaVEn 7B | 7M | **8×A100(80G)** | **163** | **2.64** | **39.0**|79.1 |
As shown in Tables 2 and 3 below, we also report trends in FLOPs and latency across different numbers of image inputs (from 2 to 8).
*Table 2. FLOPs across different numbers of image inputs for various models.*
| Image Nums | LLaVA | MaVEn | QwenVL | InternVL |
|------------|-------|-------|--------|----------|
| 2 | 104.44| 130.32| 120.33 | 210.33 |
| 4 | 193.6 | 163.21| 212.04 | 340.53 |
| 6 | 290.1 | 209.24| 324.23 | 480.2 |
| 8 | 402.4 | 268.42| 450.22 | 670.21 |
*Table 3. Latency across different numbers of image inputs for various models.*
| Image Nums | LLaVA | MaVEn | QwenVL | InternVL |
|------------|-------|-------|--------|----------|
| 2 | 3.24 | 2.94 | 3.05 | 2.34 |
| 4 | 2.46 | 2.64 | 2.22 | 1.53 |
| 6 | 1.75 | 2.48 | 1.45 | 0.89 |
| 8 | 1.21 | 2.22 | 0.88 | 0.33 |
We have following conclusion:
1. The results in Table 1 demonstrate that our approach actually incurs lower overhead in both training and inference stages compared to these models. **This suggests that, contrary to initial impressions, our method is indeed practical and efficient for deployment and maintenance in real-world scenarios.**
2. From the results in Tables 2 & 3, we observe that as the number of images increases, MaVEn exhibits higher efficiency. This is primarily because **MaVEn encodes a lower number of continuous visual tokens, which reduces the computational burden.**
We hope this detailed analysis alleviates your concerns about the complexity of our model in practical applications.
## 2. How does MaVEn compare with other multi-granularity approaches?
In response to your recommendation, we have conducted additional experiments to compare the performance of MaVEn using different multi-granularity techniques, where we utilize VQGAN and VQVAE to replace the SEED tokenizer. Additionally, we explored the potential of combining these techniques.
Below are the results of our comparative experiments:
| Visual Discrete Representation | Code Book Size | DemonBench Ave score | SEED Bench acc-video | VQA | MME | MMbench test |
|-------------------------------|----------------|----------------------|----------------------|------|------|--------------|
| SEED | 8192 | 39.0 | 42.1 | 79.1 | 1530 | 65.2 |
| VQGAN | 1024 | 37.1 | 39.2 | 77.3 | 1441 | 61.3 |
| VQVAE | 1024 | 36.6 | 38.4 | 76.3 | 1380 | 60.2 |
| SEED+VQGAN | 9216 | 39.7 | 42.8 | 79.5 | 1521 | 65.8 |
Our conclusions are as follows:
1. Using SEED as the discrete visual token yields better performance compared to VQGAN and VQVAE.
2. Combining different discrete tokenizers can enhance the model's performance. We believe this improvement is due to the different visual semantic information encoded by the distinct codebooks. By integrating multiple codebooks, we achieve a richer and more comprehensive visual semantic representation, which in turn helps improve the model's overall performance.
[1] Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities.
[2] InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks.
[3] Improved Baselines with Visual Instruction Tuning.
---
Rebuttal 2:
Comment: Dear Reviewer Rxew,
Thank you very much for taking the time and effort to review our paper. We sincerely appreciate your valuable feedback and insights.
We wanted to inform you that we have provided our responses in the comments section. We would greatly appreciate it if you could take a moment to review our replies.
Thank you once again for your time and consideration.
---
Rebuttal 3:
Comment: Thanks for the rebuttal! The comparison looks nice, I changed the rating to 7 (Accept)
---
Rebuttal Comment 3.1:
Comment: We sincerely appreciate you taking the time to read our rebuttal and for your positive feedback. We are thrilled to learn that our clarifications and the additional comparison have addressed your concerns effectively. Thank you for your thorough evaluation and for your support. We deeply value your expertise and the insights you have provided throughout the review process. | null | null | null | null | null | null |
AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models | Accept (poster) | Summary: This paper introduces AlphaPruning, a novel framework for unstructured LLM pruning. The framework leverages HT-SR theory that utilizes the heavy-tailed shape of ESDs in layer-weight matrices to allocate layer-wise sparsity more effectively. By focusing on shape metrics rather than scale metrics, AlphaPruning demonstrates superior performance in maintaining model accuracy and reducing complexity. The method has been empirically validated across various architectures, showing significant improvements in performance and efficiency compared to existing methods, with strong generalizability to other compression techniques and architectures.
Strengths: + This paper introduces a novel sparsity allocation method that leverages the heavy-tailed shape of ESDs in layer weight matrices, a concept previously unexplored in the literature.
+ The method is extensively validated with a range of LLM architectures, demonstrating significant improvements over state-of-the-art methods in terms of reducing perplexity, increasing accuracy, and achieving computational efficiency.
+ AlphaPruning exhibits remarkable adaptability, integrating well with various other model compression techniques and extending beyond LLMs to include large vision model architectures, proving its versatility and broad applicability.
Weaknesses: + The novelty of the method for allocating sparsity based on layer quality (Section 3.2) is incremental, as a similar idea has previously been proposed in [1].
[1] Zhou, Yefan, et al. Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training.
Technical Quality: 4
Clarity: 3
Questions for Authors: + Section 3.2 proposes a linear mapping to get sparsity from layer quality. Would other mappings, such as first compute the logarithmic of the m and then perform linear mapping, yield better outcomes?
+ I would like to see results on using AlphaPruning to determine layerwise sparsity for other structured pruning methods, such as OSSCAR [1].
[1] Meng, X., Ibrahim, S., Behdin, K., Hazimeh, et al. OSSCAR: One-Shot Structured Pruning in Vision and Language Models with Combinatorial Optimization.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weakness
We present the differences between AlphaPruning and [1], as detailed below:
- **Different research focus.** Our study investigates post-training LLM pruning, whereas [1] studies model training.
- **The underlying principles of the two works are different.** [1] aims to balance layer quality (or make layers equally well-trained) by tuning learning rates, aiming to improve generalization performance. This work focuses on minimizing pruning damage to the high-quality layers by making them less sparse, thereby reducing performance loss. While both studies use heavy-tailed metrics from HT-SR theory to estimate layer quality, this shared aspect does not make our work a trivial extension or incremental idea of [1]. This work is the first to explore how HT-SR theory can be applied to allocate sparsity, a concept previously unexplored in the literature.
- **Important technical difference: transformer-block-wise measurement instead of matrix-wise measurement.** [1] measures the PL_Alpha_Hill metric for each weight matrix of the model individually. However, as we demonstrate in Appendix F, this approach provides suboptimal results for sparsity allocation. We improved upon this by averaging the metric scores across matrices within a transformer block and using the block-wise average score for sparsity allocation, which yielded significantly better results.
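The transformer-block-wise averaging described in the last point can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code; the `blocks.<idx>.<matrix>` naming convention and the example scores are hypothetical:

```python
from collections import defaultdict
from statistics import mean

def blockwise_average(matrix_alphas):
    """Average per-matrix PL_Alpha_Hill scores within each transformer block.

    `matrix_alphas` maps a matrix name such as "blocks.3.attn.q_proj" to its
    alpha score; the block index is assumed to be the second dot-separated
    field (a hypothetical naming convention, for illustration only).
    """
    per_block = defaultdict(list)
    for name, alpha in matrix_alphas.items():
        block_id = int(name.split(".")[1])
        per_block[block_id].append(alpha)
    # One averaged score per transformer block, used for block-wise allocation.
    return {b: mean(v) for b, v in per_block.items()}

scores = {
    "blocks.0.attn.q_proj": 3.0, "blocks.0.mlp.fc1": 3.5,
    "blocks.1.attn.q_proj": 4.0, "blocks.1.mlp.fc1": 4.5,
}
print(blockwise_average(scores))  # {0: 3.25, 1: 4.25}
```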
## Question 1
We thank the reviewer for suggesting a new mapping function method. We implemented the proposed method and compared it with the linear mapping function used in our current approach, as shown in Figure 12 of the rebuttal PDF. The results show that both methods perform similarly when combined with Wanda, but linear mapping slightly outperforms the proposed logarithmic method when combined with SparseGPT.
Here are the experimental setups for Figure 12. We pruned the LLaMA-V1-7B model to different sparsity levels using two allocation methods and reported the perplexity on the WikiText validation set. The hyperparameter search and setup are consistent with the original settings in Appendix G.
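For concreteness, the linear mapping from layer-quality scores to sparsities might look like the sketch below. The range hyperparameter `tau` and the exact normalization are illustrative assumptions, not the paper's precise parameterization; the logarithmic variant raised in the question simply replaces the raw scores with their logarithms before normalizing.

```python
import numpy as np

def allocate_sparsity(alphas, target=0.7, tau=0.2, log_map=False):
    """Linearly map per-block alpha scores to layer-wise sparsities.

    Under HT-SR theory, a smaller alpha indicates a heavier-tailed ESD and
    a better-trained block, so it receives lower sparsity. Sparsities are
    spread over [target - tau, target + tau] and re-centered so the mean
    matches the global target. `tau` is an illustrative choice.
    """
    a = np.asarray(alphas, dtype=float)
    if log_map:
        a = np.log(a)                            # logarithmic variant
    norm = (a - a.min()) / (a.max() - a.min())   # 0 = best block, 1 = worst
    s = (target - tau) + 2.0 * tau * norm
    s += target - s.mean()                       # keep the global ratio exact
    return s

s = allocate_sparsity([2.5, 3.0, 4.0, 6.0], target=0.7)
print(s.round(3))  # lowest-alpha (best) block receives the lowest sparsity
```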
## Question 2
We thank the reviewer for the new reference. We integrated AlphaPruning with OSSCAR and provided the updated results in Figure 13 of the rebuttal PDF. OSSCAR prunes only the linear sublayer of multi-head attention and the second sublayer of the feed-forward network, applying uniform pruning across each transformer block. By incorporating AlphaPruning's layer-wise sparsity allocation, we achieved non-uniform block-wise pruning ratios while keeping the global pruning ratio the same. The results show that integrating AlphaPruning with OSSCAR can reduce perplexity at different sparsities. We will include this experiment in the updated draft and cite the corresponding work.
---
Rebuttal Comment 1.1:
Comment: I sincerely appreciate the authors' response, which addressed all my previous concerns. Especially in the response to Question 2, the authors demonstrate that combining AlphaPruning with other structured pruning methods can also yield results far superior to the original uniform sparsity. I believe this result is of great value.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: We thank the reviewer for the positive feedback. We will make sure to include the new results in the updated draft. | Summary: This work presents AlphaPruning, which prunes weight matrices of LLM models with different layer-wise sparsity levels based on the Heavy-tailed self-regularization theory.
Compared to pruning with uniform sparsity among layers, AlphaPruning alleviates performance degeneration when the sparsity level is high and fine-tuning is not applied.
AlphaPruning is composable with existing pruning methods.
Experiments with various LLM models, such as Llama and Llama2, demonstrate the effectiveness.
Strengths: - The presented method can be composed with other pruning methods.
- The authors conducted experiments on various LLM architectures, such as Llama and Llama 2, with different baselines. In addition, they performed pruning of image classifiers, such as ConvNext and ViT.
- Inference on CPU accelerates by the proposed pruning up to 3 times, depending on the sparsity.
Weaknesses: - Although they say that "AlphaPruning is theoretically driven" in L83, I could not find any theoretical justification of the method in the text.
- While "Here, we provide a brief overview of HT-SR theory" (L100), it is not described, making it challenging to understand the proposed method.
- Although not an expert of this domain, I think using stronger baselines would make this work better.
For example, [Gale+19] demonstrated that even simple magnitude pruning retains the performance of ResNet-50 and Transformer even when the sparsity level is set to 80%.
[Gale+19] Trevor Gale, Erich Elsen, Sara Hooker "The State of Sparsity in Deep Neural Networks" arXiv 2019.
Technical Quality: 2
Clarity: 2
Questions for Authors: - Why is $\mu_{\mathbf{x}_i}$ introduced in Equation (1)? It is not used in the text except at L125.
- What is the actual advantage of unstructured pruning as AlphaPruning over other acceleration techniques, such as quantization?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: - As written in Weaknesses, the baselines are quite limited.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weakness 1 and 2
AlphaPruning is grounded in heavy-tail self-regularization (HT-SR) theory, which we use to quantify the training quality of each layer and determine layer-wise sparsity. Here we provide a detailed overview of this theory and explain how AlphaPruning is built based on it. We will include these explanations in the updated draft.
- HT-SR theory originated as a semi-empirical theory, with early empirical work [1-2] examining the empirical spectral density (ESD) of weight matrices, specifically the eigenspectrum of the correlation matrix $W^\top W$. This research found that the heavy-tailed structure of the ESD strongly correlates with training quality. These findings are rooted in heavy-tailed random matrix theory and statistical physics, as detailed in Table 1 of [2].
- Recent theoretical work studies how heavy-tails in ESD emerge and why it correlates with training quality. It is well-known [3-4] that spikes in ESD represent "signals", while the bulk represents noise, which follows the Marchenko-Pastur law. In the theoretical setting of [3], the signal or the spike aligns with ground-truth features from the teacher model, and that corresponds to increased correlations in weight elements [1-2]. Furthermore, [5] shows that heavy tails in ESD originate from the interaction between spikes and bulk, which can be quantified precisely using recent advances in the free-probability theory [6], and that is the "bulk-decay" phase in the five-plus-one phase model in [2]. These studies indicate that layers with more heavy-tailed ESDs have extracted more useful signals during training, indicating better training quality.
- This insight motivates our sparsity assignment method: layers with more heavy-tailed ESD contain more learned signals and are assigned lower sparsity by our method, while layers with less heavy-tailed ESD retain fewer signals and are assigned higher sparsity. In practice, the heavy-tailed structure is measured by fitting a power-law distribution to the ESD, and extracting the power-law exponent $\alpha$ as the indicator. This is why our method is named "AlphaPruning".
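As a concrete illustration of the metric itself, the power-law exponent can be estimated from a layer's ESD with a Hill-type estimator. The tail fraction `k_frac` is an illustrative choice, and this is a simplified sketch of a PL_Alpha_Hill-style computation, not the paper's exact implementation:

```python
import numpy as np

def pl_alpha_hill(W, k_frac=0.5):
    """Estimate the power-law exponent alpha of a weight matrix's ESD.

    The ESD is the eigenvalue distribution of W^T W (i.e., the squared
    singular values of W). A smaller alpha means a heavier tail, which
    under HT-SR theory indicates a better-trained layer. Uses the Hill
    estimator on the top `k_frac` fraction of eigenvalues (illustrative).
    """
    svals = np.linalg.svd(np.asarray(W, dtype=float), compute_uv=False)
    eigs = np.sort(svals ** 2)              # ascending eigenvalues of W^T W
    k = max(2, int(k_frac * len(eigs)))     # size of the tail
    tail = eigs[-k:]
    # Hill estimator: alpha = 1 + k / sum_i log(lambda_i / lambda_min)
    return 1.0 + k / np.sum(np.log(tail / tail[0]))

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256)) / np.sqrt(256)  # roughly Marchenko-Pastur ESD
print(pl_alpha_hill(W))
```

A layer whose tail eigenvalues decay quickly yields a large alpha (light tail, closer to random), while dominant "signal" spikes pull alpha down.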
**References**
[1] Martin et al. Predicting trends in the quality of state-of-the-art neural networks without access to training or testing data.
[2] Martin et al. Traditional and Heavy-Tailed Self Regularization in Neural Network Models
[3] Wang et al. Spectral Evolution and Invariance in Linear-width Neural Networks
[4] Couillet and Liao et al. Random Matrix Methods for Machine Learning
[5] Kothapalli et al. Crafting Heavy-Tails in Weight Matrix Spectrum without Gradient Noise
[6] Landau et al. Singular vectors of sums of rectangular random matrices and optimal estimation of high-rank signals: The extensive spike model
## Weakness 3
We thank the reviewer for providing the new reference. The method used in [Gale+19] that performs best for Transformer models is **uniform magnitude pruning** with a **gradual pruning schedule**. The **gradual pruning schedule** refers to gradually increasing the sparsity of the network while training the semi-pruned model. This schedule is computationally intensive for LLMs due to the iterative model training and has not been adopted in the LLM pruning literature. Our study focuses on pruning the LLM in a one-shot way without training, which aligns with previous studies [7-9] to ensure a fair comparison. We have included one-shot **uniform magnitude pruning** as a baseline, as shown in Tables 2 and 3 of the submitted paper. Our results demonstrate that our approach outperforms this baseline.
We clarify that in the submitted paper, we have compared with the most relevant and competitive baseline, OWL [7], which is the current SOTA non-uniform sparsity allocation method. We have also compared with other recent LLM pruning baselines, such as Wanda [8], and SparseGPT [9], as well as other baselines from CV pruning literature such as global, ER [10], rank-selection [11], and layer-wise error thresholding [12].
**References**
[7] Yin et al. 2024
[8] Sun et al. 2024
[9] Frantar et al. 2023
[10] Mocanu et al 2018
[11] Kuzmin et al. 2019
[12] Ye et al. 2020
## Question 1
We thank the reviewer for pointing out this writing issue. We will remove the $\mu_{\mathbf{x}_i}$ in the updated draft.
## Question 2
Both unstructured pruning and quantization are effective methods for improving inference speed and reducing memory footprints. The comparison between these two approaches in terms of efficiency can be nuanced depending on hardware deployment and the algorithms used. For example, unstructured sparsity has limited acceleration support in GPUs, compared to quantization, but it holds the great value of speedups on other hardware such as CPU, cerebras' chip, IPU, etc.
**Recent studies indicate that unstructured pruning can slightly outperform quantization in inference speedup.** A SOTA approach SqueezeLLM ([13]) demonstrates that quantizing LLMs to 3 bits can achieve a 2.1× speedup. Meanwhile, recent advances in sparse CPU kernels (DeepSparse) have better support in accelerating unstructured pruning, leading to 3.35$\times$ speedup in CPU runtimes, as shown in Table 3 of [14], as well as Table 4 of our submitted paper. This is partly because pruning can better maintain and recover performance through fine-tuning compared to quantization, as noted in [14].
**Pruning and quantization are compatible and complementary, and combining both approaches further enhances efficiency.** Table 3 of [14] shows that combining unstructured pruning with quantization (INT8) can achieve up to 9.08× speedup, significantly higher than using either method alone.
**References**
[13] Kim et al. 2024
[14] Kurtic et al. Sparse Fine-tuning for Inference Acceleration of Large Language Models, 2023
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the rebuttal.
### Weaknesses 1 and 2
I appreciate the authors' explanation of the HT-SR theory.
I understand that it is essentially equivalent to applying low-rank approximation to $W$ based on its singular value distribution, which I think is quite natural.
### Weakness 3 and Question 2
I thank the authors for the clarification.
---
Reply to Comment 1.1.1:
Title: Authors' further response to Reviewer's comments
Comment: We thank the reviewer for their response. We provide our understanding of the low-rank approximation (LRA) and our AlphaPruning method.
1. The principle behind AlphaPruning, which determines layer-wise sparsity, is distinct from the method used to determine layer-wise rank in LRA. While both involve measuring the eigenspectrum of the weights, our AlphaPruning method was not motivated by LRA. AlphaPruning aims to make the model more deterministic and less random, similar to how decision trees choose branches to maximally reduce entropy. It does this by preserving heavy-tailed layers, which contain more signal, and removing light-tailed layers. Higher sparsity is assigned to light-tailed layers, which, according to HT-SR theory, are closer to a random distribution and have higher rank. In contrast, LRA [1-3] applies more compression (analogous to higher sparsity) to low-rank matrices in which the largest eigenvalues dominate, so that removing small eigenvalues has minimal impact on the reconstruction loss. Therefore, at first glance, the two methods do opposite things.
2. We also note that minimizing reconstruction loss is not equivalent to minimizing the performance loss caused by compression methods. AlphaPruning outperforms baseline methods that assign sparsity based on stable rank, as shown in Table 1 of the submitted paper. This suggests that the heavy-tailed metric may be more relevant to model performance and better at determining layer-wise sparsity, while stable rank may be more relevant to matrix approximation.
3. We believe the distinction between AlphaPruning and LRA merits further study. We will add a section to the revised manuscript that discusses this distinction in depth.
**Reference**
[1] Zhang et al. Accelerating very deep convolutional networks for classification and detection
[2] Wen et al. Coordinating filters for faster deep neural networks
[3] Xu et al. Trained rank pruning for efficient deep neural networks | Summary: This paper introduces Alpha Pruning, a novel approach for pruning large language models based on Heavy-Tailed Self-Regularization theory. Instead of applying a uniform pruning ratio across layers, Alpha Pruning utilizes PL_Alpha_Hill, derived from empirical spectral densities (ESDs), to assess how well-trained each layer is. It then assigns a lower pruning ratio to well-trained layers to preserve model performance. Alpha Pruning is evaluated across various LLM architectures and datasets, demonstrating superior performance compared to baseline uniform pruning and SOTA methods like OWL.
Strengths: - Alpha Pruning leverages HT-SR theory to provide a principled method for guiding layer-wise pruning decisions, contrasting with heuristic-based approaches
- This pruning method has been evaluated on various large language models and exhibits robust performance, showcasing its effectiveness and generalizability.
- By evaluating the importance of each layer and assigning non-uniform pruning ratios, Alpha Pruning can complement existing pruning techniques such as magnitude-based pruning and Wanda. It is also compatible with other model acceleration techniques, such as structured pruning and quantization.
- Despite the challenges of unstructured pruning in achieving significant speedups compared to structured methods, Alpha Pruning still achieves noticeable efficiency gains with high pruning ratios
Weaknesses: - The paper’s explanation, particularly concerning HT-SR theory and the terms used in the method section, may be challenging to grasp for readers unfamiliar with these concepts. Figure 1a could benefit from clearer explanations.
- While Alpha Pruning is compared against uniform pruning and OWL across multiple architectures, other layer-wise pruning methods prevalent in the computer vision community [1] could provide additional comparative insights.
[1] Lee, Jaeho, et al. "Layer-adaptive sparsity for the magnitude-based pruning." arXiv preprint arXiv:2010.07611 (2020).
Technical Quality: 3
Clarity: 2
Questions for Authors: - How does the PL_Alpha_Hill metric evolve during fine-tuning? Does its value typically decrease as a block becomes better trained?
- What might be the reason why a block is better trained than others? Is it possible to include PL_Alpha_Hill in the training process to train each block equally and accelerate the training?
- What factors contribute to the non-linear relationship between sparsity and speedups shown in Table 4? Is the maintenance of early layers a significant factor in this observation?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: The limitation of this work is well discussed in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weakness 1
We provide a detailed explanation of the parts the reviewer suggested, including HT-SR theory, terms in the method, and Figure 1a. We will include these in the updated draft.
- **More details of HT-SR theory:** HT-SR theory [1-2] examines the empirical spectral density (ESD) of weight matrices, specifically the eigenspectrum of the correlation matrix $W^\top W$. This research found that the heavy-tailed structure in ESD is strongly correlated with training quality. Recent theoretical work [3-4] finds that such a structure is a result of feature learning, a process of extracting various useful correlations (or features) from data during optimization. These studies indicate that layers with more heavy-tailed ESDs have extracted more useful signals during training, indicating better training quality. In practice, the heavy-tailed structure is measured by fitting a power-law distribution to the ESD, and extracting the power-law exponent $\alpha$ as the indicator. These studies motivate our work, and we allocate the layer-wise sparsity based on the $\alpha$ metric. This is why our method is named "AlphaPruning".
- **Terms used in method:** In our method section, $\lambda$ represents the eigenvalues of the weight matrices' correlation matrix. The interval ($\lambda_{\min }$, $\lambda_{\max }$) defines the range of eigenvalues considered as the tail part of ESDs.
- **Figure 1a:** The blue histograms depict the empirical spectral density. The $x$-axis represents eigenvalue magnitudes, and the $y$-axis represents the density, both on a logarithmic scale. The solid red curves depict the empirical distribution of the ESD tail, while the dashed red curves represent the fitted PL distribution. The PL_Alpha_Hill metric in the title is the fitted PL exponent.
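For concreteness, the fitting pipeline described above (eigenvalues of $W^\top W$, then a power-law fit on the tail) can be illustrated with a short NumPy sketch. The Hill estimator below is one standard tail-exponent estimator; the exact estimator and tail cutoff used in the paper may differ.

```python
import numpy as np

def esd(W):
    """Empirical spectral density: eigenvalues of the correlation matrix W^T W."""
    # squared singular values of W == eigenvalues of W^T W
    return np.linalg.svd(W, compute_uv=False) ** 2

def hill_alpha(eigs, k):
    """Hill estimator of the power-law density exponent alpha of the ESD tail,
    computed from the k largest eigenvalues."""
    lam = np.sort(eigs)[::-1]                 # descending order
    gamma = np.mean(np.log(lam[:k] / lam[k])) # Hill tail-index estimate
    return 1.0 + 1.0 / gamma                  # density exponent: p(x) ~ x^{-alpha}

rng = np.random.default_rng(0)
# synthetic "eigenvalues" drawn from a Pareto law with density exponent 2.5
samples = rng.pareto(1.5, size=200_000) + 1.0
alpha = hill_alpha(samples, k=2000)
print(round(alpha, 2))  # close to 2.5
```

A more heavy-tailed spectrum yields a smaller fitted alpha, which is what the PL_Alpha_Hill metric reports per layer.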
## Weakness 2
We thank the reviewer for suggesting the new baseline method LAMP; we provide new experiments comparing our method with LAMP. The results are shown in Table 25 of the rebuttal PDF. We implement the original LAMP method, which allocates a different sparsity to each matrix, and a variant called LAMP (per-block), which allocates the same sparsity to all matrices within a transformer block. This adaptation is based on our ablation study comparing per-matrix and per-block strategies in Appendix F. The results show that LAMP (per-block) outperforms the original LAMP and performs better than the uniform pruning baseline when both are combined with SparseGPT. However, our method AlphaPruning still outperforms this baseline.
Here are the experimental settings. We pruned the LLaMA-V1-7B model to three sparsity levels (60%, 70%, 80%) using different methods and reported the perplexity on the WikiText validation set. The hyperparameter search and setup are consistent with the original settings in Appendix G.
## Question 1
In Table 26 of the rebuttal PDF, we present the PL_Alpha_Hill and performance metrics (perplexity, zero-shot task accuracy) before and after fine-tuning. We can see that the PL_Alpha_Hill metric decreases while the two performance metrics improve. Here is the experimental setup: we fine-tune the pruned LLaMA-V1-7B on 30K tokens from the C4 dataset, with the model pruned to 70% sparsity. Perplexity is evaluated on WikiText, and accuracy is evaluated on the 7 zero-shot datasets listed in Section 4.1.
We select the initial stage of fine-tuning to demonstrate how the metric value evolves. Figure 14 of the rebuttal PDF shows that the PL_Alpha_Hill metric indeed continues decreasing during the fine-tuning.
## Question 2
The phenomenon that layers/blocks are not equally well-trained is well-documented in [5-7]. For example, [5] shows that the layers or blocks of a model can have imbalanced training quality during training, and that balancing the training speed of layers/blocks improves performance. As another example, [6-7] show that some LLM layers are less useful than others, and that removing these layers has negligible impact on performance. However, the underlying reason why some layers are less well-trained than others remains an open question, which we will address in future studies.
Incorporating PL_Alpha_Hill into the training process is possible. Recent work [5] has used this metric to dynamically adjust the learning rate of each layer during training and demonstrated that doing so makes the layers more equally well-trained and improves generalization performance.
## Question 3
**The non-linear relationship is due to the low-level hardware operations of sparsity structures rather than the maintenance of early layers.** As shown in Figure 4 of [8], for low sparsity, the computational performance (red lines) increases slowly due to overheads in storing sparse structures and controlling sparse computations. As sparsity increases to moderate and high levels, we see sustained growth of performance until it usually levels off at extremely high sparsities where storage and control overheads dominate. This explains the slow speedup in our low-sparsity regime (e.g., less than 70% sparsity).
**A new ablation study shows that early layer maintenance is irrelevant to the non-linear relationship.** In Figure 11 of the rebuttal PDF, we compare the inference speedup of non-uniform pruning (AlphaPruning) and uniform pruning. The negligible speedup differences between the two methods indicate that non-uniform distribution is unrelated to the observed non-linearity. This is because our method allocates sparsity at the transformer block level, not at the individual layer level, and in transformer architectures, all blocks use identical computational resources.
**References**
[1] Ref [10] in submission
[2] Ref [13] in submission
[3] Ref [58] in submission
[4] Kothapalli et al. 2024
[5] Zhou et al. 2023 Temperature Balancing
[6] Gromov et al. 2024 The Unreasonable Ineffectiveness of the Deeper Layers
[7] Men et al. 2024 ShortGPT
[8] Hoefler et al. 2021 Sparsity in Deep Learning | Summary: The paper introduces AlphaPruning, a novel method for pruning large language models (LLMs) using Heavy-Tailed Self-Regularization (HT-SR) Theory. AlphaPruning uses ESDs of weight matrices to determine layerwise pruning ratios. The method demonstrates the ability to prune LLaMA-7B to 80% sparsity while maintaining reasonable perplexity.
Strengths: 1. The proposed method, AlphaPruning, achieved SOTA performance across various tasks compared to OWL.
2. The experiments conducted were comprehensive and diverse.
Weaknesses: 1. The motivation is not explicitly explained. It appears to apply a theoretical concept to the pruning area without providing quantitative or qualitative proof.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Can you explain why the property of $W^\top W$ can decide the sparsity of the layer and can be combined with different pruning metrics?
2. To my knowledge, large language models (LLMs) are generally well-trained. The article concludes that PL_Alpha_Hill can indicate whether a layer is well-trained. Are there other methods or indicators that demonstrate if these layers are not well-trained?
3. Have you compared the sparsity allocation of different layers with OWL? If so, is the sparsity allocation similar to that of OWL?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: This article includes a limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weakness 1 and Question 1
**Motivation of our method and why property of $W^\top W$ can decide layer sparsity**. Our method is grounded in heavy-tail self-regularization (HT-SR) theory, which we use to quantify the training quality of each layer and determine layer-wise sparsity. The rationale is as follows:
- HT-SR theory originated as a semi-empirical theory, with early empirical work [1-2] examining the empirical spectral density (ESD) of weight matrices, specifically the eigenspectrum of the correlation matrix $W^\top W$. This research found that the heavy-tailed structure of the ESD strongly correlates with training quality. These findings are rooted in heavy-tailed random matrix theory and statistical physics, as detailed in Table 1 of [2].
- Recent theoretical work studies how heavy tails in the ESD emerge and why they correlate with training quality. It is well known [3,5] that spikes in the ESD represent "signals", while the bulk represents noise, which follows the Marchenko-Pastur law. In the theoretical setting of [3], the signal (the spike) aligns with ground-truth features from the teacher model, and this corresponds to increased correlations among weight elements [1-2]. Furthermore, [4] shows that heavy tails in the ESD originate from the interaction between spikes and bulk, which can be quantified precisely using recent advances in free-probability theory [6]; this is the "bulk-decay" phase in the five-plus-one phase model of [2]. These studies indicate that layers with more heavy-tailed ESDs have extracted more useful signals during training, indicating better training quality.
- This insight motivates our sparsity assignment method: layers with more heavy-tailed ESD contain more learned signals and are assigned lower sparsity by our method, while layers with less heavy-tailed ESD retain fewer signals and are assigned higher sparsity. In practice, the heavy-tailed structure is measured by fitting a power-law distribution to the ESD, and extracting the power-law exponent $\alpha$ as the indicator. This is why our method is named "AlphaPruning".
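The sparsity-assignment principle above can be sketched in a few lines. The linear alpha-to-sparsity mapping below is a hypothetical illustration, not the paper's actual allocation rule: it only captures the stated monotonicity (lower $\alpha$, i.e., a heavier-tailed ESD, gets lower sparsity) while keeping the average sparsity at a target level.

```python
import numpy as np

def allocate_sparsity(alphas, target=0.7, spread=0.2):
    """Map per-layer power-law exponents to per-layer sparsities.

    Layers with smaller alpha (more heavy-tailed ESD, better trained)
    receive lower sparsity; the mean sparsity equals `target`.
    NOTE: hypothetical linear rule, for illustration only.
    """
    a = np.asarray(alphas, dtype=float)
    # normalize alphas to [-0.5, 0.5] based on their position in the range
    z = (a - a.min()) / (a.max() - a.min()) - 0.5
    s = target + spread * z
    s += target - s.mean()          # re-center so the average hits the target
    return np.clip(s, 0.0, 1.0)

alphas = [2.1, 3.5, 2.8, 6.0]       # heavier-tailed layers have smaller alpha
print(allocate_sparsity(alphas))    # sparsity increases with alpha
```

Any monotone mapping with the same ordering would express the same principle; the linear form is just the simplest choice.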
**Why our method can be combined with other pruning metrics?** Our layer-wise sparsity assignment is complementary to other pruning metrics, such as Wanda and SparseGPT. Our method determines the training quality of layers, while other pruning metrics identify the importance of components within each layer (or weight matrix). Thus, our approach decides how much to prune for each layer, while other metrics determine which components to prune within each layer.
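This division of labor can be sketched concretely; the magnitude score below is a hypothetical stand-in for metrics such as Wanda or SparseGPT, and `prune_layer` is an illustrative helper, not the paper's implementation. The layer-wise allocator decides each layer's pruning ratio, and the per-weight score decides which entries inside that layer are removed:

```python
import numpy as np

def prune_layer(W, sparsity, score=None):
    """Zero out the `sparsity` fraction of W with the lowest scores.

    `score` defaults to weight magnitude; a Wanda/SparseGPT-style score
    could be plugged in instead (hypothetical stand-in, for illustration).
    """
    s = np.abs(W) if score is None else score
    k = int(round(sparsity * W.size))
    if k == 0:
        return W.copy()
    # threshold at the k-th smallest score; keep strictly larger entries
    thresh = np.partition(s.ravel(), k - 1)[k - 1]
    return W * (s > thresh)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
pruned = prune_layer(W, sparsity=0.75)   # layer-wise ratio from the allocator
print((pruned == 0).mean())              # fraction of removed entries
```

In this picture, AlphaPruning supplies the `sparsity` argument per layer, while the pruning metric supplies `score`.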
**References**
[1] Martin et al. Predicting trends in the quality of state-of-the-art neural networks without access to training or testing data.
[2] Martin et al. Traditional and Heavy-Tailed Self Regularization in Neural Network Models
[3] Wang et al. Spectral Evolution and Invariance in Linear-width Neural Networks
[4] Kothapalli et al. Crafting Heavy-Tails in Weight Matrix Spectrum without Gradient Noise
[5] Couillet and Liao et al. Random Matrix Methods for Machine Learning
[6] Landau et al. Singular vectors of sums of rectangular random matrices and optimal estimation of high-rank signals: The extensive spike model
## Question 2
In addition to the PL_Alpha_Hill metric proposed in our work, [7-8] also investigated methods for measuring whether a layer is well-trained, demonstrating that LLM layers are not equally well-trained. [7] developed a method that assesses the similarity between the representations at different layers, defined as the angular distance between feature vectors. They found that deeper layers are more similar to their neighboring layers than shallow layers are, suggesting that LLMs may not fully utilize the parameters in these deeper layers, i.e., that these layers are not well-trained. Similarly, [8] introduced a metric called Block Influence, which measures the impact of each transformer block on the hidden states to gauge layer significance. Their findings showed varying degrees of ineffectiveness/redundancy across layers, again suggesting that some layers are not well-trained.
**References**
[7] Gromov et al. The Unreasonable Ineffectiveness of the Deeper Layers
[8] Men et al. ShortGPT: Layers in Large Language Models are More Redundant Than You Expect
## Question 3
In Figure 10 of the rebuttal PDF, we compare the sparsity allocation of the two methods. We show that the general trends of sparsity distribution generated by the two methods are similar, with lower sparsities allocated to earlier layers and higher sparsities allocated to deeper layers. However, our method produces a more granular distribution with clearer distinctions between consecutive deep layers, resulting in improved pruning performance. | Rebuttal 1:
Rebuttal: We want to thank all the reviewers for the constructive feedback, which helps us improve our paper. Please refer to the attached PDF for our new experiments and see below for our responses to each comment.
Pdf: /pdf/1e7846f8aa2419be5a3924703a9cb73bc9d2a574.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
How Transformers Utilize Multi-Head Attention in In-Context Learning? A Case Study on Sparse Linear Regression | Accept (poster) | Summary: This paper empirically studies how the different heads works across different layers of the transformer: while the first layer uses all heads, the later layers mainly relies on a single head. In addition, the authors also propose a preprocess-then-optimize algorithm.
Strengths: This paper is easy to understand and provides interesting theoretical results.
Weaknesses: The main flow of Sections 4 and 5 is that the paper (1) constructs a first layer that can perform the exact preprocessing considered in the proposed algorithm, (2) studies the generalization of the preprocessing, (3) constructs the later layers that can perform gradient descent, and finally (4) studies the generalization of the optimization. The paper uses a linear attention-only transformer.
Given the above, there are several main weaknesses regarding the contributions in this paper:
(1) Missing theory or theoretical intuition to explain why multi-head attention works in the way observed in this paper. Given the series of in-context learning studies that focus on aligning multi-layer transformers with gradient descent, the contribution is limited.
(2) The linear attention-only transformer can be improved to better align with the practical LLMs. In the recent year, various studies consider single/multi-head attention with softmax attention [1,2,3]. The authors are encouraged to add softmax attention in the paper. In particular, compared to the linear attention result in [4], the softmax result in [1,2,3] shows that softmax attention works differently from linear attention. The authors are also encouraged to study how MLP works in the system. These elements in the transformer may result in different behavior compared to the linear attention-only transformer.
(3) The paper constructs a transformer that aligns with the preprocess-then-optimize algorithm, rather than showing that a trained multi-layer transformer indeed works in the way designed in this paper. The authors need to provide more evidence for this, e.g., [5]. The authors may also want to highlight their differences from [5]. A convergence analysis similar to [1] and [2] would be even better.
(4) The generalization properties are built upon a linear model, whose analysis is supposed to be routine. The authors may also highlight the challenges when performing the analysis.
Given the above limitation, the theoretical contributions are not sufficient enough. There is no critical difference between existing theories in literature and the study in this paper.
References:
[1] Huang, Yu, Yuan Cheng, and Yingbin Liang. "In-context convergence of transformers." arXiv preprint arXiv:2310.05249 (2023).
[2] Chen, Siyu, et al. "Training dynamics of multi-head softmax attention for in-context learning: Emergence, convergence, and optimality." arXiv preprint arXiv:2402.19442 (2024).
[3] Cui, Yingqian, et al. "Superiority of multi-head attention in in-context linear regression." arXiv preprint arXiv:2401.17426 (2024).
[4] Zhang, Ruiqi, Spencer Frei, and Peter L. Bartlett. "Trained transformers learn linear models in-context." arXiv preprint arXiv:2306.09927 (2023).
[5] Ahn, Kwangjun, et al. "Transformers learn to implement preconditioned gradient descent for in-context learning." Advances in Neural Information Processing Systems 36 (2024).
Technical Quality: 3
Clarity: 3
Questions for Authors: I would like to raise my score if the authors could provide an extra detailed theoretical result for any one of the above weaknesses. The more the better. Thanks.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the time and effort you've invested in reviewing our work. We've addressed your questions and concerns as follows:
> 1. *The difference between our work and other works towards aligning multi-layer transformers with gradient descent, like [3,4,5].*
Thank you for your insightful reference paper, we will add more discussion about them in our revised version. We want to highlight that our work is not just focused on the expressive power of transformers (such as their ability to implement novel optimization algorithms). Instead, we aim to answer the question: "How do trained transformers tend to use multi-head attention for in-context learning?" We first gain insights from trained transformers (in Section 3) and then propose an algorithm to explain our observations (Sections 4 and 5). While [4,5] focus on single-head attention and [3] explores the expressive power of single-layer transformers, our work seeks to understand the hidden mechanisms of trained multi-layer transformers through a novel perspective. We believe our findings are both interesting and extensible to more real-world tasks and complex models, warranting further investigation in future work.
> 2. *Theoretical intuition for why multi-head attention works in the way observed in this paper, and the link between softmax attention and linear attention*.
An empirical intuition for the benefit of multi-head attention in sparse linear regression is that it enables the transformer to process different subspaces of the input data using separate attention heads, as discussed in Section 4.1. Here we can provide a more theoretical explanation for the working mechanism of multi-head attention in sparse linear regression, building on [2]. Consider I = d/k, representing I different tasks, each focusing on a sparse linear regression problem with k assigned dimensions, and $g_i$ is the same for different tasks i (i.e. the task can be seen as a combination of I sparse linear regression problem). Based on the results for ms-attn in [2] (eq 4.3 & theorem 4.2), the optimal solution involves each attention head processing a specific subspace while zeroing out other dimensions, aligning with our Proposition 4.1.
Assuming sufficiently large $d_i$, $d$, and $L$ (Assumptions 2.1 and 2.2 in [2]), the parameter $w_{qk}^h$ is scaled by $1/\sqrt{d_i}$, allowing softmax attention to converge to linear attention:
$$\hat{x}_{ij} \approx \frac{\sqrt{d_i}}{N} \sum_{k=1}^{N} \left(\frac{\langle x_{kj}, x_{ij}\rangle}{\sqrt{d_i}} + 1\right) y_{k} = \Big\langle \frac{1}{N} \sum_{k=1}^{N} x_{kj} y_k,\ x_{ij}\Big\rangle + \frac{\sqrt{d_i}}{N} \sum_{k=1}^{N} y_{k}.$$
Here, we omit the term $\frac{1}{1 + e\, d_i \phi_i L^{-1}}$ for $u_i$ (eq. 4.3 in [2]), as it is constant across $i$ in our setting. The first term $\frac{1}{N} \sum_{k} x_{kj} y_k$ can be interpreted as our preprocessing coefficient $\hat{r}_i$ (defined in eq. 4.1 of our paper), while $\frac{\sqrt{d_i}}{N} \sum_{k} y_k$ remains constant across different tasks $i$.
While our results for the first layer's multi-head attention resemble those in [2], we emphasize two key differences: 1) we elucidate the role of multi-head attention in multi-layer transformers through a new preprocessing perspective, and 2) we provide both theoretical and empirical explanations for the advantages of multi-head attention, demonstrating a potential integration mechanism between the first layer and subsequent layers. Although the convergence behavior of deeper layers remains an open question, our theoretical intuition above suggests a potential approach: first assume that subsequent layers optimize over the data through gradient descent, then prove that the first layer converges to utilizing multi-head attention for data preprocessing. We believe this warrants further investigation in future research.
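As a toy illustration of the preprocessing coefficient in isolation (not the paper's full algorithm): for features with identity covariance, $\hat{r} = \frac{1}{N}\sum_k x_k y_k$ concentrates around the ground-truth weight vector, so its large entries flag the support of a sparse signal. The synthetic setup below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 20, 500
w_star = np.zeros(d)
w_star[[2, 7, 11]] = [1.0, -1.5, 2.0]   # sparse ground-truth weights

X = rng.normal(size=(N, d))             # identity-covariance features
y = X @ w_star + 0.1 * rng.normal(size=N)

# preprocessing coefficient: (1/N) * sum_k x_k y_k, approx. E[x y] = w_star
r_hat = X.T @ y / N
support = np.argsort(-np.abs(r_hat))[:3]
print(sorted(support.tolist()))  # recovers the true support {2, 7, 11}
```

Subsequent gradient-descent layers would then optimize on the coordinates reweighted by $\hat{r}$, per eq. 4.1 of the paper.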
> 3. The role of MLP in our setting.
In our linear setting, learning from context with linear attention is widely adopted in theoretical analyses. While linear attention incorporating MLP layers can implement complex algorithms, it's challenging for a trained transformer to adopt the exact solution we've constructed. In the attached PDF, we compare the performance of linear attention-only transformers with and without MLP layers, demonstrating that the difference between these models is not significant, thus justifying our simplification.
> 4. Challenges in our analysis on the linear model
Although we analyze a linear model, the convergence analysis of Preprocessing+GD encounters non-trivial challenges. We highlight two main ones: 1) In standard linear models, data are assumed to be independent, but in our setting the preprocessing uses all training data points, making the preprocessed data non-independent. This raises a great challenge in proving the concentration-related results. 2) In the analysis of the preprocessed data, the preprocessing matrix does not commute with either the empirical or the population data covariance matrix, so many prior techniques in the literature cannot be applied and we need to develop new techniques accordingly. This also raises many challenges in obtaining reasonable risk bounds in our analysis.
---
Finally, we want to emphasize that large language models are intricate systems, and the theoretical understanding of transformers remains in its early stages. Similar to early physics investigations, our approach of gaining insights from carefully designed experiments and then explaining our observations is a reasonable path toward this goal. While we acknowledge numerous areas for future research, we believe our observations and results are both interesting and meaningful, contributing to our understanding of the mechanisms behind large language models, both empirically and theoretically.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors in addressing my comments. I have raised my score from 4 to 5. | Summary: This work presents an analysis of how Transformers perform in-context learning by experimenting with a sparse linear regression problem setup. The author’s analysis combines both empirical and theoretical analysis. They first examine the properties of real models using pruning and probing methods. From this, they identify a key property of the Transformers, that multiple heads are needed in the first layer but in the remaining layers only a single attention head is necessary. From this, they present theoretical analysis and propose a preprocess-then-optimize learning approach to explain the observed behavior in transformers. The authors also present results showing the effectiveness of this learning approach.
Strengths: While this work is not in my area of expertise, I believe the work is well organized and clearly presented. I appreciate the authors combination of empirical and theoretical analysis, to demonstrate that the proposed learning method may actually be employed by real transformers in real tasks and are not only achievable through hand-crafted weights. The authors results are mathematically rigorous as well in supporting the proposed preprocess-then-optimize method.
Weaknesses: The main weakness I see in the work pertains to the simplifications made for the analysis. The authors also acknowledge these limitations in section 7. In summary, the analysis focuses on a simplified problem (linear regression) and uses a simplified version of a transformer, which is also done by other similar works. As a result, it is hard to say if a similar learning approach is learned by transformers in more practical real-world problems. It is also impossible to say how the removed layers may alter the learning process in a full transformer. Overall, I believe this work warrants an accept rating, though I will defer to the other reviewers if they have significant issues with the work.
Technical Quality: 3
Clarity: 4
Questions for Authors: Line 108: should this instead read "to perform a tractable theoretical investigation"?
Line 209: typo "proprocessing" instead of "preprocessing"
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have discussed the limitations I mentioned in the section above. I do not see any clear potential negative societal impact for this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments! We sincerely appreciate the time and effort you've dedicated to providing thoughtful reviews. We've addressed your concerns as follows:
> 1. *The theoretical analysis is based on a simplified transformer maybe hard to apply the results to more practical real-world problems*.
While our analysis uses a simplified transformer model, we believe this widely adopted simplification captures the essence of the model and provides valuable insights applicable to real-world transformers. In the attached PDF, we use additional experiments to demonstrate that MLP layers are less critical in our sparse linear regression setting and that our theoretical and experimental results extend to various data distributions. Moreover, recent studies of the parameter distributions of real-world transformers suggest that our results can be extended to more practical settings: for example, [1] and [2] demonstrate through experiments that parameters in deeper layers are less critical than those in shallower layers, and [3] shows that multi-head attention sublayers exhibit low-rank structure in large language models. We believe our findings are not limited to the simple sparse linear regression setting but can be extended to more complex ICL and other real-world tasks; this phenomenon is worth further investigation in future work.
\[1] Gromov, A., Tirumala, K., Shapourian, H., Glorioso, P., & Roberts, D. A. (2024). The unreasonable ineffectiveness of the deeper layers. arXiv preprint arXiv:2403.17887.
\[2] Men, X., Xu, M., Zhang, Q., Wang, B., Lin, H., Lu, Y., ... & Chen, W. (2024). ShortGPT: Layers in large language models are more redundant than you expect. arXiv preprint arXiv:2403.03853.
\[3] Li, G., Tang, Y., & Zhang, W. (2024). LoRAP: Transformer Sub-Layers Deserve Differentiated Structured Compression for Large Language Models. ICML 2024.
> 2. *Line 108: should this instead read "to perform a tractable theoretical investigation"? Line 209: typo "proprocessing" instead of "preprocessing".*
Thank you for pointing out our typos, and sorry for any confusion they may have caused. The "intractable" in Line 108 should be "tractable", and "proprocessing" should be "preprocessing". We will correct these typos in our revised version.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their responses and additional experiments. I will increase my rating from 6 to 7 as I support accepting this work. | Summary: This paper studies the mechanism of transformers under the in-context sparse linear regression problem. The authors reveal that the transformer pre-trained for this task has the first layer preprocessing the data, and the remaining layers implement gradient descent. More intriguingly, only one head in the second to last layers is dominantly utilized.
Strengths: 1. The paper is a nice extension of the recent line of work on ICL formulated by regression problems. The study of multi-head also provides an important direction.
2. The theory is solid and clear.
3. The authors have found many interesting phenomena in their setting. I especially find their study on multi-head very interesting and beneficial for LLM interpretability.
Weaknesses: 1. The P-probing lacks a controlled experiment. Should you also try regressing on the hidden states before the first layer? I understand that $h=1$ might be a controlled experiment, but it may still have some preprocessing on the $x$.
2. Are Theorems 5.1 and 5.2 a fair comparison? Should the $\tilde{w}^t_{gd}$ in Theorem 5.1 also include the parameters in the first preprocessing layer? My understanding is that we want to compare the excess risk of "preprocessing+gd" vs. "gd". Now we are comparing "gd with preprocessed data" and "gd".
3. The orthogonal design is an over-simplified setting.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. I find the observation that only one head dominates in subsequent layers very interesting. Does this also occur in other settings of ICL tasks?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The orthogonal design is an over-simplified setting, which the authors do not properly address.
1. It is necessary to include ablation studies with non-orthogonal design. It's fine to get different results without the orthogonality, but it would be important to report the findings.
2. If the MLP layer is added, an alternative lasso method should be available that uses the closed-form solution under orthogonal design. Can the authors present the experiment results?
Overall, I think there should be more ablation studies in different settings. Although a simplified setting is fine for theory, I want to see experimental results in more realistic settings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the time and effort you spent on thoughtful reviews and comments. We address your comments below:
> **Q1**: *The P-probing lacks a controlled experiment. Should you also try regressing on the hidden states before the first layer? I understand that $h=1$ might be a controlled experiment, but it may still have some preprocessing on the $x$.*
Thank you for your insightful suggestions. In our preprocessing algorithm, the transformer employs multiple attention heads ($h>1$ ) to process each subspace differently, thereby enhancing the optimization steps for subsequent layers in our sparse linear regression task. However, a single-head transformer ($h=1$ ) can only process the entire space uniformly, lacking the ability to differentiate between subspaces. Consequently, single-head transformers have very limited "preprocessing ability," so we believe using $h=1$ serves as a reasonable "no preprocessing" comparison.
We appreciate your suggestion to use hidden states without attention module for preprocessing as another point of comparison. We have provided additional results in the attached PDF.
> **Q2**: *Are Theorems 5.1 and 5.2 a fair comparison? Should the $\tilde{w}^t_{gd}$ in Theorem 5.1 also include the parameters in the first preprocessing layer? My understanding is that we want to compare the excess risk of "preprocessing+gd" vs. "gd". Now we are comparing "gd with preprocessed data" and "gd".*
Sorry for the misunderstanding. We would like to clarify that the comparison between Theorems 5.1 and 5.2 is made in terms of two "mechanisms" (corresponding to the multi-head transformer and the single-head transformer), which we believe is fair, and the model parameters do not need to be included. In particular, based on Propositions 4.1 and 4.2, we conjecture that the working mechanism of the multi-head transformer is "preprocess-then-optimize", i.e., performing GD on the preprocessed data, while the single-head transformer is conjectured to perform the "purely-optimize" mechanism on the original data [1]. When applying our results to $L$-layer transformers, $t$-step GD with preprocessed data ($L = t+1$) requires just one more layer than $t$-step GD ($L = t$). However, the factor $L$ only appears in the log term of our bounds (Theorems 5.1 and 5.2), hence it does not affect our results (which are interpreted in orders).
To this end, it is fair to compare GD with preprocessed data and GD to demonstrate the superiority of our preprocess-then-optimize mechanism. We will make this clear in the revised version.
[1] Ahn, K., Cheng, X., Daneshmand, H., & Sra, S. (2023). Transformers learn to implement preconditioned gradient descent for in-context learning. NeurIPS 2023.
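As a purely illustrative aside (a toy numpy sketch, not the construction analyzed in the paper), the gap between the two mechanisms can be seen on a small least-squares problem, where a whitening step stands in for the conjectured preprocessing: the same GD budget achieves far lower risk after preprocessing.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, t = 64, 8, 20
# ill-conditioned features make plain GD slow
X = rng.normal(size=(n, d)) * np.array([5.0] * 4 + [0.5] * 4)
y = X @ rng.normal(size=d)

def gd_risk(A, t, lr):
    """Empirical risk after t steps of GD on the least-squares loss for (A, y)."""
    w = np.zeros(A.shape[1])
    for _ in range(t):
        w -= lr * A.T @ (A @ w - y) / len(y)
    return float(np.mean((A @ w - y) ** 2))

C = X.T @ X / n
raw = gd_risk(X, t, lr=1.0 / np.linalg.eigvalsh(C).max())  # "purely optimize"
L = np.linalg.cholesky(np.linalg.inv(C))                   # toy whitening "preprocessor"
pre = gd_risk(X @ L, t, lr=1.0)                            # preprocess, then optimize
print(raw, pre)
```

Here the whitening matrix is only a stand-in for whatever the first multi-head layer learns; the point is merely that both mechanisms are run on the same data with the same GD budget.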
> **Q3**: *I find the observation that only one head dominates in subsequent layers very interesting. Does this also occur in other settings of ICL tasks?*
Thank you for your interest. We believe a similar phenomenon occurs in other settings of ICL tasks as well, and transformers may utilize similar preprocessing-then-optimize algorithms across different layers. While this is somewhat beyond the scope of our paper, we can gain insights from experimental results of other works. For instance, recent studies by [2] and [3] demonstrate through experiments that parameters in deeper layers are less critical compared to those in shallower layers. Additionally, [4] shows that multi-head attention sublayers exhibit low-rank structure in large language models. We believe our findings are not limited to simple sparse linear regression settings but can be extended to more complex ICL and other real-world tasks, and such phenomenon is worth further investigation in future works.
[2] Gromov, A., Tirumala, K., Shapourian, H., Glorioso, P., & Roberts, D. A. (2024). The unreasonable ineffectiveness of the deeper layers. arXiv preprint arXiv:2403.17887.
[3] Men, X., Xu, M., Zhang, Q., Wang, B., Lin, H., Lu, Y., ... & Chen, W. (2024). ShortGPT: Layers in large language models are more redundant than you expect. arXiv preprint arXiv:2403.03853.
[4] Li, G., Tang, Y., & Zhang, W. (2024). LoRAP: Transformer Sub-Layers Deserve Differentiated Structured Compression for Large Language Models. ICML 2024.
> **Q4** *The orthogonal design is an over-simplified setting, which the authors do not properly address.*
> > **Q4.1**: *It is necessary to include ablation studies with non-orthogonal design. It's fine to get different results without the orthogonality, but it would be important to report the findings.*
> > **Q4.2**: *If the MLP layer is added, an alternative lasso method should be available that uses the closed-form solution under orthogonal design. Can the authors present the experiment results?*
[**A4.1**]: Thank you for your suggestions. Our experimental and theoretical results can indeed be extended to non-orthogonal designs. To validate this, we conducted additional experiments by modifying the distribution of $x$ to $x \sim N(0, \Sigma)$, where $\Sigma = I + \zeta S$ and $S$ is the all-ones matrix. We varied $\zeta$ across $[0, 0.1, 0.2, 0.4]$ to further verify our findings. The results, which are consistent with those presented in Sections 3 and 6, can be found in the attached PDF. We will incorporate a more detailed discussion of these non-orthogonal design experiments in our revised manuscript.
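For concreteness, the non-orthogonal design above can be sampled as follows (a hypothetical sketch with a placeholder dimension d = 8, not our actual experiment code):

```python
import numpy as np

d = 8                                     # placeholder dimension
S = np.ones((d, d))                       # the all-ones matrix S
rng = np.random.default_rng(0)
for zeta in [0.0, 0.1, 0.2, 0.4]:
    Sigma = np.eye(d) + zeta * S          # Sigma = I + zeta * S
    # valid covariance: eigenvalues are 1 (multiplicity d-1) and 1 + zeta * d
    assert np.all(np.linalg.eigvalsh(Sigma) > 0)
    x = rng.multivariate_normal(np.zeros(d), Sigma, size=1000)
    print(zeta, x.shape)
```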
[**A4.2**]: While it is true that there exist specially designed transformers capable of solving the lasso problem using a closed-form solution, we find that trained transformers are more likely to use other algorithms. We compare the performance of linear attention transformers with and without MLP layers in the attached PDF, which shows that the inclusion of MLP layers does not significantly impact the results for this particular problem. Although different initializations may affect the results, we believe these results are sufficient to demonstrate that our choice of a simplified attention-only transformer for theoretical analysis is reasonable and can provide valuable insights for real-world models.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' responses. I would like to raise my score to 7. | Summary: This work seeks to provide a deeper exploration of the use of multiple heads, at different layers in a Transformer, to perform in-context learning tasks. More specifically, the goal of the paper is to experimentally discover additional insights on the interactions of multi-headed attention across layers. Subsequently, the authors find that multiple heads are primarily used in the first layer, whilst the remaining layers of the transformer typically leverage a single head. Furthermore, the authors provide a hypothesis for their observation before building a preprocess-then-optimize algorithm.
Strengths: Strengths:
- This work provides a timely analysis of the underlying mechanism of multi-headed attention in Transformer models.
- The authors provide strong empirical and theoretical evidence to justify the insights derived on in-context learning.
- Additionally, the authors provide a novel Preprocess-then-optimize Algorithm for training Transformers.
Weaknesses: Weaknesses:
- As discussed by the authors, the reviewer's primary concern with this work is the lack of accounting for the remainder of the Transformer architecture. In particular, the role of feed-forward layers is sidestepped when considering the effects of the Preprocess-then-optimize Algorithm.
Technical Quality: 3
Clarity: 3
Questions for Authors: The reviewer would like to better understand how the authors think about feed-forward layers with respect to the observations made by the authors regarding the necessity of multi-headed attention in the first few layers. In particular, would alterations to the structure of the first few feedforward layers provide a beneficial impact for transformers?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have clearly stated any limitations with their current work. Additionally, the reviewer does not foresee any potential negative societal impact from this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the time and effort you spent on thoughtful reviews. We address your comments below:
> **Q1**: *The reviewer would like to better understand how the authors think about feed-forward layers with respect to the observations made by the authors regarding the necessity of multi-headed attention in the first few layers. In particular, would alterations to the structure of the first few feedforward layers provide a beneficial impact for transformers?*
Thank you for your insightful questions about the feed-forward layers. In transformers (ignoring the layer norm), the attention mechanism is the only module capable of linking tokens across different positions, while the FFN layer performs a token-wise (nonlinear) mapping without considering contextual information. For learning from context, especially in our sparse linear regression setting where contextual information is crucial for identifying the appropriate subspaces, an additional (nonlinear) mapping for each token is less important. In our theoretical analysis, we therefore focus on the attention-only transformer without the FFN layer, concentrating on the role of multi-head attention; such simplification is widely adopted in theoretical analyses, as seen in [1], [2], and [3]. To demonstrate the limited role of the FFN layer in our specific setting, we provide comparative results of the linear attention model with and without MLP layers in the attached PDF.
While the FFN layer's role is limited in our setting, we acknowledge its potential beneficial impact when combined with the attention layer for more complex nonlinear or real-world tasks. Transformers may utilize similar preprocessing-then-optimize algorithms across different layers. Although this is somewhat beyond the scope of our paper, insights from other works provide valuable perspectives. For instance, [4] demonstrates that transformers tend to first learn appropriate representations using MLP layers before learning from context through the attention layer. Additionally, [5] and [6] use experiments to show that parameters in deeper layers are less critical compared to those in shallower layers. We believe our findings are both interesting and warrant further investigation in future works.
\[1] Von Oswald, J., Niklasson, E., Randazzo, E., Sacramento, J., Mordvintsev, A., Zhmoginov, A., & Vladymyrov, M. (2023). Transformers learn in-context by gradient descent. ICML 2023.
\[2] Ahn, K., Cheng, X., Song, M., Yun, C., Jadbabaie, A., & Sra, S. (2023). Linear attention is (maybe) all you need (to understand transformer optimization). ICLR 2024.
\[3] Ahn, K., Cheng, X., Daneshmand, H., & Sra, S. (2023). Transformers learn to implement preconditioned gradient descent for in-context learning. NeurIPS 2023.
\[4] Guo, T., Hu, W., Mei, S., Wang, H., Xiong, C., Savarese, S., & Bai, Y. (2023). How do transformers learn in-context beyond simple functions? A case study on learning with representations. ICLR 2024.
\[5]Gromov, A., Tirumala, K., Shapourian, H., Glorioso, P., & Roberts, D. A. (2024). The unreasonable ineffectiveness of the deeper layers. arXiv preprint arXiv:2403.17887.
\[6]Men, X., Xu, M., Zhang, Q., Wang, B., Lin, H., Lu, Y., ... & Chen, W. (2024). Shortgpt: Layers in large language models are more redundant than you expect. arXiv preprint arXiv:2403.03853. | Rebuttal 1:
Rebuttal: We sincerely appreciate the thoughtful reviews and comments provided by all reviewers. Below, we address the main points raised, details can be found in corresponding blocks for each reviewer:
- Reviewer fd5H focused on the role of MLP layers in our setting and their potential benefits for other tasks. In our sparse linear regression setting, nonlinear token-wise operations are less critical. While recent works suggest that MLP layers may benefit other nonlinear and real-world tasks, here we provide additional experiments comparing linear transformers with and without MLP layers to demonstrate that our theoretical analysis without MLP layers is reasonable.
- Reviewer uBeS primarily questioned the experimental details and results when extending our setting to different distributions. We clarify that our experimental and theoretical results can be extended to various data distributions and provide additional experiments to support this claim.
- Reviewer Swt3 inquired about the extensibility of our experimental findings and theoretical results to other settings. We reference recent experimental results on large language models to demonstrate the potential for our findings to be applied to more realistic scenarios.
- Reviewer Bazq acknowledged our experimental findings but expressed interest in extending our theoretical analysis to other aspects, such as training dynamics and more complex problem settings like softmax attention. We highlight that our focus is on how trained transformers utilize multi-head attention for in-context learning, and these aspects are beyond the scope of this paper. To address the reviewer's concern, we provide a theoretical explanation showing that the working mechanism we propose for the multi-head first layer is similar to the theoretical analysis for softmax multi-head attention. We also highlight the differences between our paper and other works. We believe our findings are both interesting and extensible to more real-world tasks and complex models, warranting further investigation in future work.
Additional experiments in the attached pdf:
Fig 1 compares the performance of linear attention transformers with and without MLP layers, where the MLP is initialized with Xavier initialization. We report the performance with 10 context examples. Although there exist some delicately designed transformers that can implement more complicated algorithms for lasso, the trained transformer tends to use other solutions. The difference between models with and without MLP layers is minimal in our linear setting, justifying our simplification in the theoretical analysis of the sparse linear regression problem.
Fig 2 addresses Reviewer uBeS's interest in experimental results with different data distributions. We conducted additional experiments by modifying the distribution of $x$ to $x \sim N(0, \Sigma)$, where $\Sigma = I + \zeta S$ and $S$ is the all-ones matrix. The results show that under different data distributions, our experimental findings remain consistent with the orthogonal settings (Fig. 2a). Our pre-gd algorithm still outperforms gd, and our theoretical analysis can be extended to different data distributions (Fig. 2b, 2c).
Fig 3 responds to Reviewer uBeS's interest in p-probing results using H₀. While we believe our experimental choice of H₁ for different heads is reasonable, using H₀ could also be another option. Note that under the transformation by the first layer, the data distribution is shifted. Consequently, it is expected that the result using H₀ differs from that using H₁ with h = 1. Nevertheless, the results still demonstrate that multi-head attention can achieve lower risk, justifying our preprocessing algorithm for multi-head attention.
Pdf: /pdf/e50190bdbc362cdacf46fc65916af3ac3f24f6f2.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
TARP-VP: Towards Evaluation of Transferred Adversarial Robustness and Privacy on Label Mapping Visual Prompting Models | Accept (poster) | Summary: This paper investigates the adversarial robustness and privacy aspects of models trained using the Label Mapping Visual Prompting (LM-VP) technique, which has not been done before. The results suggest that LM-VP models trained with transfer AT have advantages in AI security.
Strengths: - This paper connects two larger topics: privacy and prompt learning. The novelty is high, because the reviewer has not seen any comparable work before.
- The authors also contribute to transferred adversarial training, which is very important for domain adaptation and connected to label mapping.
- The researchers demonstrated that applying transferred adversarial training (AT) to Label Mapping Visual Prompting (LM-VP) models yields superior trade-offs between adversarial robustness and privacy protection. This improvement was consistently observed across the diverse range of pre-trained models examined in the study.
- The authors ablate two types of prompt generation, which shows that they take a deeper look at this topic.
Weaknesses: - Ln267-268: The discussion section states that the "MIA success rate [is] near 50%". A success rate of 50% corresponds to random guessing. It is unclear whether this constitutes an improvement, and whether a baseline exists that could be even lower. The paper should briefly explain how to rank this result among MIA defenses in general.
Comments on writing:
- Ln83: AutoAttack [1] is not mentioned here, although it appears later in Ln219.
- Eq1: A dot instead of a comma at the end of line.
- Eq3: The multiplication signs look like a convolution.
[1] https://robustbench.github.io/#div_imagenet_Linf_heading
[2] https://ml.cs.tsinghua.edu.cn/ares-bench/#/leaderboard
Technical Quality: 3
Clarity: 3
Questions for Authors: - In the experimental setup, it is not clear to the reviewer, which MIA attack did you use? In Ln102 the authors state two threshold-based MIAs. Is it one of these? Or both?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - Adversarial attack setup and limitations: An ablation study on different epsilon sizes would give more insights on the limitations. Ln212: The attack epsilon size of 8/255 (for CIFAR-10) is larger than the standard size of 4/255 [1,2] used for adversarial training on ImageNet.
In light of the current results, these limitations could simply be interesting additional insights for the appendix and do not have to appear in the main sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**. Generally, in existing work, the MIA success rate is typically larger than 50%. A value close to 50% indicates that the attack is invalid, as it amounts to random guessing. Thus, a successful defense against MIA results in an MIA success rate close to 50%, as seen in Table 4 of [1], Table 3 of [2], and Tables 2 and 3 of [3]. In these studies, some defenses reduce the MIA success rate from 60%-80% to around 50%, but not below 50%.
**W2**. We appreciate your pointing out these writing errors; we will correct them in our revised manuscript.
**Q1**. The MIA we use is based on Yeom et al. [4]; Song et al. [3] also evaluate MIA using the attack based on Yeom et al. [4]. We grouped the two references in Ln102, which may have caused ambiguity, and we will clarify this in our manuscript.
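A minimal sketch of this loss-threshold attack (the synthetic losses and threshold choice below are hypothetical illustrations, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic per-example losses: members (training data) tend to have lower loss
member_loss = rng.gamma(shape=1.0, scale=0.5, size=5000)
nonmember_loss = rng.gamma(shape=1.0, scale=1.5, size=5000)

# Yeom-style attack: predict "member" whenever the loss falls below a threshold
tau = member_loss.mean()
pred_member = np.concatenate([member_loss, nonmember_loss]) < tau
truth = np.concatenate([np.ones(5000), np.zeros(5000)]).astype(bool)

success = float((pred_member == truth).mean())  # 50% would mean random guessing
print(f"MIA success rate: {success:.2%}")
```

When a defense closes the loss gap between members and non-members, the same attack degrades toward the 50% baseline discussed above.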
**L1**. We use epsilon size 8/255 as it has been a commonly used setting for ResNet18 or WideResNet threat models on CIFAR-10 since [5]. As per the reviewer's suggestion, we selected two pre-trained models in the LM-VP models and conducted experiments under epsilon size 4/255; although the results differ slightly from those at 8/255, we observe similar trends, and we will complete the full ablation experiments in our manuscript.
| Pre-trained models |ST-Natural|ST-PGD|ST-MIA|Transfer AT-Natural|Transfer AT-PGD|Transfer AT-MIA|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| ResNet50 | 84.90 | 33.19 | 73.99 | 67.96 | 65.10 | 51.91 |
| ConvNext | 97.72 | 86.86 | 79.84 | 97.68 | 89.77 | 51.20 |
[1] Milad Nasr, Reza Shokri, and Amir Houmansadr. Machine learning with membership privacy using adversarial regularization. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pages 634–646, 2018.
[2] Shokri R, Stronati M, Song C, et al. Membership inference attacks against machine learning models[C]//2017 IEEE symposium on security and privacy (SP). IEEE, 2017: 3-18.
[3] Liwei Song, Reza Shokri, and Prateek Mittal. Privacy risks of securing machine learning models against adversarial examples. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pages 241–257, 2019.
[4] Samuel Yeom, Irene Giacomelli, Matt Fredrikson, and Somesh Jha. Privacy risk in machine learning: Analyzing the connection to overfitting. In 2018 IEEE 31st computer security foundations symposium (CSF), pages 268–282. IEEE, 2018.
[5] Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan.Theoretically principled trade-off between robustness and accuracy. In International conference on machine learning, pages 7472–7482. PMLR, 2019
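To make the epsilon budgets concrete, here is a toy l_inf PGD sketch (a linear score stands in for a network's backpropagated gradient; all constants are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3 * 32 * 32                          # a flattened CIFAR-10-shaped input
x = rng.uniform(0.0, 1.0, size=d)        # one image with pixels in [0, 1]
y, w = 1.0, rng.normal(size=d)           # label and a toy linear score <w, x>

def pgd_linf(x, y, w, eps, steps=10):
    """l_inf PGD: signed-gradient steps, projected onto the eps-ball and [0, 1]."""
    alpha = 2.5 * eps / steps
    delta = np.zeros_like(x)
    for _ in range(steps):
        grad = -y * w                             # gradient of the negative-margin loss
        delta = delta + alpha * np.sign(grad)
        delta = np.clip(delta, -eps, eps)         # project back into the eps-ball
        delta = np.clip(x + delta, 0.0, 1.0) - x  # keep pixels valid
    return x + delta

for eps in (4 / 255, 8 / 255):                    # the two budgets discussed above
    x_adv = pgd_linf(x, y, w, eps)
    assert np.abs(x_adv - x).max() <= eps + 1e-12
```

The attack is identical across budgets; only the size of the allowed perturbation ball changes, which is why an epsilon ablation is informative.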
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough response. I appreciate the time you took to address each of the points I raised.
Note that you could also have taken 1/255 or 16/255 as epsilon sizes for the proposed ablation study.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply and suggestions. We will add more epsilon size options in our updated version to explore the ablation effects of this parameter. | Summary: This paper explores the trade-offs between adversarial robustness and privacy in deep learning models, highlighting that while AT improves robustness, it increases vulnerability to MIA. The authors introduce an ANF-based graph structure and CryptoANFNet, a neural network model for cryptographic problem-solving, demonstrating that their approach achieves a good balance between robustness and privacy.
Strengths: This paper considers both robustness and privacy issues, which are very valuable topics in the DNN training area.
Weaknesses: NA
Technical Quality: 3
Clarity: 3
Questions for Authors: No
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewers' recognition of our work. Our main contribution is that we first introduce a method that simultaneously enhances transfer adversarial robustness and privacy. As a new research prospect, we are happy to discuss any questions you may have. Additionally, we will also fully release our code to enhance the contribution of this work to the AI security community. | Summary: The works shows that LM-VP models can achieve the great adversarial robustness and privacy at the same time, different from full model adversarial training. Across different pre-trained models, the proposed transferred adversarial training achieves good classification accuracy and low MIA success rates.
Strengths: 1. Different pre-trained models are tried, across ResNet50, ViT, Swin, and ConvNext, etc.
2. The motivation is clear: AT has a bad trade-off between robustness and privacy, while LM-VP might be a possible solution.
Weaknesses: 1. Considering the efficiency of LM-VP adaptation, why not try different datasets other than CIFAR10? The current results are restricted to CIFAR10, which is a 10-class classification problem and low resolution (which can perform OK in LM-VP setting). But what about a higher resolution dataset with more classes?
2. The main concerns come from the effectiveness of adversarial attacks against LM-VP, which also raises the concern about the high adversarial robustness mentioned in the paper. The high robust accuracy for ConvNext and EVA might be due to the low transfer attack success rates from ResNet. Can you also show me the success rates of transfer attacks generated on ResNet-18 to attack ConvNext and EVA? If this is very low, it is not surprising that so-called robust accuracy is similar to standard accuracy.
3. It is still not clear to me why standard white-box adversarial attacks cannot be applied to LM-VP models. From my perspective, both LM and VP should be treated as parameters, and adversarial attacks can be applied accordingly.
4. In Section 3.3, it is stated that the trainable parameters are noise parameters. However, the LM, as an FC layer, also has parameters.
Technical Quality: 1
Clarity: 2
Questions for Authors: See Weakness.
Confidence: 5
Soundness: 1
Presentation: 2
Contribution: 1
Limitations: The lack of theoretical work is acknowledged as a limitation, which the authors claim will be addressed in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**. As per the reviewer's suggestion, we conducted experiments on Tiny-ImageNet, which has a resolution of 64x64 and contains 200 classes. We selected different pre-trained models, and the results indicate that the LM-VP model with transfer AT improves transfer adversarial robustness by 3%-24% and mitigates the MIA success rate by 3%-12% compared to the LM-VP model with standard training. On both CIFAR-10 and Tiny-ImageNet, transfer AT on the LM-VP model demonstrates a better robustness-privacy trade-off, showing good generalization performance.
|Pre-trained models|ST-Natural|ST-PGD|ST-MIA|Transfer AT-Natural|Transfer AT-PGD|Transfer AT-MIA|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| ResNet50|62.74| 10.26 | 57.46| 50.42|34.60| 50.90|
| ResNet152|65.00| 20.53| 62.14| 57.36| 38.81| 50.85|
| WRN-50-2|70.12| 16.59| 53.50| 50.50| 30.59| 50.89|
| VIT| 80.97| 37.77| 54.00| 72.02| 50.22| 51.45|
| Swin| 79.93| 41.81| 56.95|75.08| 55.81| 51.35|
| ConvNext| **89.01**| 73.47| 58.47| 87.60| **76.61**| 52.04|
**W2**. We thank the reviewer for pointing out this concern. The reviewer might think that the transfer adversarial attacks generated by ResNet-18 are weak. However, as Table 1 in our manuscript shows, for pre-trained models other than ConvNext and EVA, the transfer adversarial attacks pose a significant threat, evidenced by a substantial gap between natural accuracy and adversarial robustness. The choice of pre-trained model is important in LM-VP models; ConvNext and EVA have 197.96M and 304.14M parameters, respectively, and are fully pre-trained on ImageNet. We have provided the adversarial robustness against ResNet-18 transfer attacks on the LM-VP models using ConvNext and EVA (Tables 1 and 2 in our manuscript, which indicate the attack success rate, i.e., 1 − adversarial robustness); their high adversarial robustness shows their good synergy with LM-VP models.
We speculate that the reviewer wants to see ResNet-18 transfer adversarial attacks on ConvNext and EVA when transferred to CIFAR-10 via traditional fine-tuning. After a full fine-tune, the standard accuracy reaches nearly 100%, and the **attack success rate is 12.50% for ConvNext and 18.75% for EVA, with MIA success rates of 62.1% and 76.80%, respectively**. These fine-tuning results are similar to the LM-VP results, indicating that ConvNext and EVA are indeed robust to such transfer attacks on CIFAR-10 after transfer learning, but remain vulnerable to MIA.
Lastly, we want to emphasize that our work does not treat transfer adversarial robustness as the only metric. Considering this metric alone, it is evident that it depends heavily on the performance of the pre-trained model (Table 1 in our manuscript), e.g., large pre-trained models achieve better transfer adversarial robustness. However, we also address privacy concerns associated with the LM-VP model: ConvNext and EVA achieve high adversarial robustness, but they still have much room for improvement in terms of privacy (Table 3 in our manuscript: standard-trained LM-VP models). Applying transfer AT to the LM-VP models not only further enhances adversarial robustness but also mitigates privacy issues, which is the primary contribution we claim.
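The transfer-attack protocol above (craft on a fixed surrogate model, evaluate on a separately trained target) can be caricatured with toy linear models; everything below is hypothetical and only illustrates why examples crafted on one model can transfer to another:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true)

# stand-ins for two independently trained models: a surrogate (playing the
# role of ResNet-18) and the target (playing the role of the LM-VP model)
w_sur = w_true + 0.3 * rng.normal(size=d)
w_tgt = w_true + 0.3 * rng.normal(size=d)

# FGSM-style transfer attack: perturbations are crafted on the surrogate only
eps = 0.5
X_adv = X - eps * y[:, None] * np.sign(w_sur)[None, :]

acc = lambda A: float(np.mean(np.sign(A @ w_tgt) == y))
clean_acc, adv_acc = acc(X), acc(X_adv)
print(clean_acc, adv_acc)
```

Because the two models learn similar decision directions, the surrogate's gradient direction also hurts the target, which is the premise behind fixing ResNet-18 as the transfer attack model.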
**W3**. We thank the reviewer for raising this question and prompting us to rethink white-box attacks on LM-VP models. According to our new attempt, white-box adversarial attacks can indeed be applied to LM-VP models. However, as shown in the table below, the adversarial robustness of the LM-VP model varies greatly across pre-trained models, making it difficult to draw a consistent conclusion. Unlike in a general model, the pre-trained model plays an important role in the LM-VP model, but its parameters are fixed and do not participate in training; VP plays a limited role in defending against white-box adversarial attacks, so the observed robustness may largely reflect the pre-trained model's inherent adversarial robustness when transferred to the target dataset. Here we also study the standard form of AT in the LM-VP model, which differs from the transfer AT in our manuscript.
|Pre-trained models|ST-Natural|ST-PGD|AT-Natural|AT-PGD|
|:-:|:-:|:-:|:-:|:-:|
|ResNet50|80.52|8.33|23.10|0.8|
|ResNet152|84.76|57.09|14.24|0|
|WideResNet|80.91|40.29|12.15|0|
|ViT|91.50|19.28|27.78|0|
|Swin|92.00|0|34.65|0|
|ConvNext|97.97|43.22|40.69|0|
Unlike transfer AT, which can improve transfer adversarial robustness, standard AT can be regarded as ineffective in the LM-VP model: it yields very poor performance in both natural accuracy and standard adversarial robustness. The reasons may lie in:
(1) The adversarial examples generated by LM-VP models rely heavily on pre-trained models. The information of the source dataset may lead to unsatisfactory results of training with those adversarial examples on the target dataset.
(2) The adversarial perturbation directly affects VP, which further damages VP performance during AT.
We choose different pre-trained models to illustrate the above situation regarding white-box adversarial robustness. However, our work focuses on studying the transfer adversarial robustness and privacy of the LM-VP model and demonstrates the effectiveness of transfer AT. The same transfer attack model (ResNet18) is used to ensure the consistency of the experimental environment. For a white-box adversarial attack on the LM-VP model, its essence may be to attack the fixed pre-trained model. How to study or improve its adversarial robustness is an interesting topic. We will revise the corresponding part of our manuscript and add the analysis and experiments of the LM-VP model under white-box adversarial attacks.
**W4**. We thank the reviewer for pointing out this mistake. LM, as an FC layer, also has parameters; in Section 3.1.3 we mention the parameters w2 of the FC layer. This is a writing error in Section 3.3, and we will correct it.
---
Rebuttal 2:
Comment: I still think that the so-called transferred adversarial robustness is far away from the adversarial robustness.
1. In the paper, "adversarial robustness" is used multiple times, Line 156, Line 254, Line 272. However, it is "transferred adversarial robustness" according to the author's response.
2. In the second table of your rebuttal, ResNet50 standard test accuracy is 80.52% and in Table 1 of your submission, ResNet50 standard test accuracy is 86.3%. I am confused about your setup.
3. In the second table of your rebuttal, ResNet152 standard PGD accuracy is 57.09% and is even higher than 35.99% in your submission, as transfer attack PGD20 accuracy. I am confused why the white-box attacks are even worse.
4. I need authors to double check the adversarial attack setting in both the original submission and the rebuttal.
---
Rebuttal 3:
Comment: We agree that transferred adversarial robustness is not exactly the same as adversarial robustness; to clarify this, we will use the term "transferred adversarial robustness" in the updated version to make our work as precise as possible, and we appreciate the reviewer pointing this out. We also agree that the white-box standard adversarial robustness of LM-VP models is a topic worth studying. However, we want to raise some concerns about why evaluating the standard adversarial robustness of LM-VP models can be challenging. In contrast, the black-box transfer adversarial robustness of LM-VP models does not suffer from the issues described below.
- **The standard adversarial robustness of LM-VP models may reflect more about the inherent properties of a fixed pretrained model than an effective evaluation of the LM and VP components**. In Table 2 of our rebuttal, there are significant differences in the best adversarial robustness of LM-VP models with different pretrained models. Despite using the same VP and LM components, their best adversarial robustness varies greatly and lacks a clear pattern; e.g., larger pretrained models do not necessarily have better (or worse) adversarial robustness. This indicates that the standard adversarial robustness may be significantly influenced by inherent features of the fixed pretrained model rather than by the whole LM-VP model, whereas our goal is to evaluate the whole LM-VP model rather than just the pretrained component.
- **Whether the standard adversarial robustness makes sense for LM-VP models or not**: Adversarial examples generated by the LM-VP model are significantly influenced by the fixed pretrained model trained on the **source dataset**. The validity of using these samples to assess the LM-VP model's adversarial robustness on the **target dataset** requires further consideration.
**Q1**. Since our manuscript focuses on the transferred adversarial robustness of LM-VP models, we frequently use the terms "transfer" or "transferred" throughout the paper; e.g., we mention that we enhance the transferred adversarial robustness in the contributions in Ln 57. The original manuscript does not consider standard adversarial robustness, thus in some parts we use "adversarial robustness" to mean transferred adversarial robustness. Since we rethought the standard adversarial robustness of LM-VP during the rebuttal, we will check and correct the relevant statements in the revised manuscript and add an analysis of standard adversarial robustness.
**Q2**. The two columns on the left of Table 2 in the rebuttal are the results after standard training on LM-VP models. **“Best performance” refers to the performance under the epoch of the best (standard or transferred) adversarial robustness**, i.e., in Table 2 of rebuttal, when the standard PGD-20 adversarial robustness is best at 8.33%, the corresponding natural accuracy is 80.52% under that epoch; In contrast, in the manuscript, when the ResNet18 transfer PGD-20 adversarial robustness is best at 35.61%, it has a natural accuracy of 86.3%. We will add the description of the term “best performance” in the revised manuscript to clarify this, as it can indeed be easily misunderstood.
**Q3**. Table 2 of the rebuttal reflects the standard adversarial robustness of LM-VP models. Based on these two values alone (57.09% and 35.99%), we can only infer that, at a specific stage of LM-VP model training, the adversarial examples generated by ResNet18 are more aggressive than those generated by the LM-VP model based on pretrained ResNet152. For a generally trained model, white-box attacks are usually believed to be stronger than black-box transfer attacks. However, the LM-VP model is influenced by pretrained models, which are trained on source datasets; it is conceivable that the adversarial examples under some pretrained models are strong and under others weak. For example, in Table 2 of the rebuttal, pretrained Swin shows an extremely strong white-box adversarial attack while pretrained ResNet152 is relatively weak. In contrast, the strength of the ResNet18 transfer adversarial attack remains consistent.
We would also like to mention that 57.09% of the standard adversarial robustness occurs only in the first epoch of training. This early-stage standard adversarial robustness likely reflects the inherent properties of the pretrained ResNet152, but its standard adversarial robustness decreases along with training, e.g., only 19.81% remains at epoch 9. Conversely, the ResNet18 transfer adversarial robustness maintains a relatively stable level, fluctuating between 32% and 36% for pretrained ResNet152.
**Q4**. Regarding the ResNet18 and WRN-34-10 transfer adversarial attacks and the standard adversarial attack, our settings are consistent: PGD with epsilon=8/255, num_steps=10, step_size=2/255 for training, and only num_steps is changed to 20 for testing. These settings are the same in the manuscript and the rebuttal.
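For concreteness, the L-infinity PGD configuration stated here can be sketched as follows. This is an illustrative numpy re-implementation under the stated hyperparameters, not the authors' code; the toy linear loss and its constant gradient `w` are our own assumptions for demonstration.

```python
import numpy as np

def pgd_linf(grad_fn, x, eps=8/255, step_size=2/255, num_steps=10):
    """Illustrative L-infinity PGD: ascend the loss gradient, project
    back into the eps-ball around x, and clip to the valid [0, 1] range."""
    x_adv = x.copy()
    for _ in range(num_steps):
        x_adv = x_adv + step_size * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to the L-inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid image range
    return x_adv

# Toy differentiable loss L(x) = w . x, so the gradient is the constant w.
w = np.array([1.0, -1.0, 0.5])
x = np.array([0.5, 0.5, 0.5])
x_adv = pgd_linf(lambda z: w, x, eps=8/255, step_size=2/255, num_steps=10)
```

With step_size=2/255 and 10 steps, each coordinate would move by 20/255 if unconstrained, so the projection step is what enforces the final perturbation budget of 8/255.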
---
Rebuttal Comment 3.1:
Comment: First, thank you for your patient rebuttal and response. If I understand correctly:
1. You are studying transferred adversarial robustness for LM-VP models trained with transferred AT;
2. You did not study adversarial robustness under white-box attacks because it is not consistent when evaluating LM-VP with different pretrained models, as shown in the first two columns of Table 2;
Then, I want to finalize my opinion to both you and **AC**:
1. the adversarial robustness should be the worst case (the strongest) attack when evaluating a model, which means, at least the stronger one between the transferred attack you have tried by PGD against ResNet18 and the PGD you tried during the rebuttal.
2. If you find that the PGD-20 against the victim model is even weaker than PGD-20 against another substitute model (ResNet-18 or WRN-34-10), PGD is not a good way to evaluate adversarial robustness in your experimental setup. Therefore, your so-called transferred adversarial robustness cannot reflect the adversarial robustness. Then your so-called better trade-off between adversarial robustness and privacy might not be convincing.
3. The main assumption by the paper is wrong: "Notably, the LM-VP models are incapable of generating standard forms of AEs like general models due to the input transformation"; "LM-VP models do not inherently have the traditional adversarial robustness property". This has been admitted by the authors in the rebuttal as well by their PGD-20 experiments on Swin. You can argue with **AC** about this.
---
Reply to Comment 3.1.1:
Comment: We once again thank the reviewer for the valuable feedback.
In this response, we aim to further clarify the experimental results we provided in our previous response regarding PGD-20 attacks. Additionally, as we clarified in our response, this paper does not intend to claim that transfer adversarial attacks should replace white-box attacks for evaluating adversarial robustness. Rather, the main purpose of this paper is to systematically evaluate both standard training and transfer AT across various pre-trained models on LM-VPs, **focusing on the trade-off between transferred adversarial robustness and privacy**. Below are our responses:
1. Yes, we study transfer adversarial robustness and its relationship with MIA-based privacy for LM-VP models.
2. We did not study white-box adversarial robustness in this work. During the rebuttal, in response to a comment, we provided additional results on white-box PGD-20 adversarial robustness. The results in Table 2 represent the **best (highest) adversarial robustness**, which was observed only in the early stages of training. Considering the characteristics of LM-VP, in the early stages of training PGD-20 has not fully exploited the information of the victim model, and those values reflect more of the inherent robustness of the different fixed pretrained models, thus showing no consistent pattern. However, **as training progresses, for all pretrained models, the adversarial robustness continues to decline until it stabilizes**. To support this, we provide the trend of adversarial robustness over 10 epochs as supplemental results to Table 2 (see the table below).
We can also refer to [1] and [2] for how pretrained models impact downstream adversarial robustness. However, their research is limited to ResNet50 or Swin, and Yamada et al. conclude that **"network architecture is a strong source of robustness when we consider transfer learning"** [2]. If different pretrained models yield different results, it becomes challenging to draw a consistent conclusion on the adversarial robustness of LM-VP, making it even more difficult to study its trade-off with privacy.
**Response to the final points of the reviewer**:
1. We understand and agree with this statement. Our rebuttal Table 2 is based on PGD-20 attacks but only includes the best adversarial robustness. From the table below, after a certain number of iterations, adversarial robustness tends to stabilize and potentially reflects the worst-case performance for LM-VP models. In contrast, transfer adversarial robustness remains stable during training; the worst-case adversarial robustness is definitely lower than the transferred adversarial robustness.
2. "If you find ... in your experimental setup." In our previous rebuttal, the best-adversarial-robustness results in Table 2 led to this misunderstanding. Our additional results on the **epoch-wise changes in adversarial robustness** for that experiment show that white-box attacks are stronger than transfer attacks (see the table below).
We agree with the statement that "transferred adversarial robustness cannot reflect the overall adversarial robustness." However, the focus of this paper is to study the trade-off between transfer adversarial robustness and privacy. We believe that transferred adversarial robustness is a very important attribute to study, especially **when different pretrained models are used**, where transferred adversarial attacks can provide more consistent evaluation results.
In the paper, we state "a better trade-off between adversarial robustness and privacy." Here, adversarial robustness specifically refers to transfer adversarial robustness. Please refer to our response to Question 1, we only consider transfer adversarial robustness in our manuscript. As mentioned, in the next version of the manuscript, we will **add "transferable" or "transferred" in places where it was omitted** in the current version.
3. As we clarified in our rebuttal, we did not consider white-box adversarial robustness in this work, since the main purpose of this paper is to explore the trade-off between transfer adversarial robustness and privacy for LM-VP models, especially where different pre-trained models are employed, and we demonstrate that transferred AT can improve both simultaneously.
|Epoch|1|2|3|4|5|6|7|8|9|10|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|ResNet50|7.37|**8.33**|5.28|3.13|1.80|0.78|0.33|0.29|0.16|0.25|
|ResNet152|**57.09**|56.12|50.49|42.13|37.09|30.12|22.83|20.97|19.81|20.77|
|WideResNet|**40.29**|39.26|30.28|26.65|21.34|17.88|15.08|12.24|12.33|10.09|
|ViT|**19.28**|17.09|13.09|9.88|6.70|5.27|5.00|3.28|1.90|2.01|
|Swin|0|0|0|0|0|0|0|0|0|0|
[1] Vaishnavi P, et al. A Study of the Effects of Transfer Learning on Adversarial Robustness. TMLR.
[2] Yamada Y, Otani M. Does robustness on ImageNet transfer to downstream tasks? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. | Summary: Adversarial robustness and privacy are important considerations in AI security, particularly in deep learning models. Adversarial training (AT) is effective in enhancing robustness against attacks, but it increases vulnerability to membership inference attacks (MIAs), compromising privacy. This trade-off between robustness and privacy highlights the need for evaluation. Visual prompting, a model reprogramming technique, shows promise in vision tasks, but its performance under attacks and MIAs requires further assessment. This study evaluates the joint adversarial robustness and privacy of label-mapping-based visual prompting (LM-VP) models, combined with transferred AT, demonstrating a favorable trade-off between the two.
Strengths: 1. The article provides a comprehensive evaluation of the security aspects, specifically adversarial robustness and privacy, of Label Mapping Visual Prompting (LM-VP) models, contributing valuable insights to the field of deep learning security.
2. It introduces the concept of transferred adversarial training (AT) for LM-VP models, offering a novel approach to enhancing adversarial robustness while maintaining privacy, which can have significant implications for improving the security of deep learning models.
Weaknesses: 1. The article lacks theoretical support and interpretability for the analysis of LM-VP models, which may limit the depth of understanding of the security implications of these models.
2. The evaluation primarily relies on empirical findings, which may not fully capture the theoretical underpinnings of the observed relationships between adversarial robustness and privacy in LM-VP models.
Technical Quality: 3
Clarity: 2
Questions for Authors: Can the authors provide more detailed insights into the theoretical foundations and assumptions underpinning the empirical findings on the trade-off between adversarial robustness and privacy in Label Mapping Visual Prompting (LM-VP) models?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The article does not extensively discuss the theoretical assumptions and proofs underlying the empirical findings, potentially limiting the generalizability and robustness of the conclusions drawn. While the paper addresses the trade-off between adversarial robustness and privacy in LM-VP models, it may not fully explore the broader societal impacts and ethical considerations of deploying such models in real-world applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1 and W2**. The main novelty of this work does not lie in the theoretical aspect. The main contribution of our work is to introduce a novel method to jointly improve the transfer adversarial robustness and privacy of LM-VP models. This issue has not been fully explored before, and we are the first to study the robustness-privacy trade-off of the LM-VP model. In this paper, we introduce transfer AT for LM-VP models and conduct comprehensive experiments varying the pre-trained models to validate the effectiveness of transfer AT compared with standard training on LM-VP models, i.e., jointly improving the transfer adversarial robustness and training data privacy.
On the other hand, we admit that a theoretical understanding of the interaction between adversarial robustness and privacy for LM-VP models is a significant research problem. Yet, it is still an open challenge in the community. Song et al. [1] tried to analyze this interaction on general machine learning models. Through extensive experiments, they concluded that **larger generalization errors and larger training data sensitivity make the model more susceptible to MIA** but did not provide any principled theoretical analysis. From Tables 1 and 3 in our manuscript, the generalization error between train and test accuracy may not be a key factor influencing training data privacy for LM-VP models: no pre-trained model shows a significant generalization error, yet their MIA values vary.
**Q1 and L1**. We think some possible insights can be explored to tackle the theoretical challenges, for example: (1) During transfer AT, the original training examples are perturbed before being fed into the model, which means these data are not exposed to the trained model. This may be one factor through which transfer AT helps mitigate the MIA issue, because LM-VP models do not suffer from large generalization error or increased training data sensitivity (Tables 1 and 2 and Figure 3 in our manuscript), and transfer AT does not train on the original training examples.
(2) Differential privacy (DP) is a method to guarantee privacy. Both adversarial examples and VP introduce noise into the original images, which may resemble some operations of DP, thus mitigating the MIA issue. Other aspects, such as dataset complexity, training data size, pre-trained model architecture, and different adversarial training strategies, will also be important considerations for future work to build the theory.
[1] Liwei Song, Reza Shokri, and Prateek Mittal. Privacy risks of securing machine learning models against adversarial examples. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pages 241–257, 2019.
---
Rebuttal Comment 1.1:
Title: About the novelty
Comment: Thanks for considering my concerns.
However,the author doesn't seem to understand what I'm trying to say. I know, in this article, the main contribution is to evaluate the adversarial robustness and privacy of the LM-VP models, using the method of Transferable AT. I want to ask why Transferable AT can be used for evaluation the boundary of adversarial robustness and privacy, and how is it modeled and quantified? In my opinion, this paper only uses an existing method to evaluate a complex problem. Maybe, further discussions about the motivation and original contributions rather than more experiments should be highlighted.
---
Rebuttal 2:
Comment: We sincerely thank the reviewer for the valuable feedback.
The idea of this paper, i.e., studying the relation between adversarial robustness and privacy within the LM-VP model, stems from similar concerns observed in generally trained models. In the context of generally trained models, a key consideration is the boundary relationship between standard adversarial robustness and privacy. However, existing research [1] highlights a conflict between these boundaries when employing white-box adversarial attacks. As a result, in this paper we use transferred adversarial attacks to study robustness instead of white-box attacks, which from our point of view is more sensible; the specific reasons are listed below:
**1**. A crucial distinction between the LM-VP model and a general model lies in the presence of a pre-trained model that does not participate in training [2]. Evaluating the LM-VP model using white-box adversarial robustness metrics can be heavily influenced by the choice of pre-trained model, as shown in Table 2 of the PDF; there is no clear pattern in their best adversarial robustness, potentially leading to different boundary relationships between adversarial robustness and privacy. Moreover, since we train the LM-VP model on the target dataset while the generation of adversarial examples relies on the fixed pre-trained model from the source-dataset domain, different pre-trained models may result in varying boundary performance; in this sense, using white-box adversarial attacks for the evaluation would make it difficult to draw a consistent conclusion.
**2**. In contrast, when considering the transferred adversarial robustness of LM-VP models, the intensity of the transfer attack remains constant once the attack model is selected. This consistency holds regardless of the chosen pre-trained model, ensuring that the transfer adversarial training process consistently optimizes in the same direction. This inherent consistency thus is more helpful for exploring and establishing a sensible boundary relationship between transferred adversarial robustness and privacy within LM-VP models. Therefore, utilizing transferred adversarial attacks serves as a more reliable and insightful evaluation method in this context.
**Modeling Approach**:
Within the framework of transfer AT, the LM-VP model, comprising VP (Visual Prompt), a pre-trained model, and LM (label mapping), is treated as a unified black-box system. A fixed-parameter attack model, excluded from the training process, is employed to conduct transferred adversarial attacks. Under this setup, this paper systematically evaluates both standard training and transfer AT across LM-VP models that are based on various pre-trained models, with a focus on their impact on the trade-off between transfer adversarial robustness and privacy.
**Quantification and Analysis**:
To quantify the boundary relationship between robustness and privacy, we leverage numerical metrics of transfer adversarial robustness and the success rate of membership inference attacks. By systematically comparing these metrics under both standard training and transfer AT across a range of pre-trained models, we aim to arrive at a consistent and generalizable conclusion. This analysis seeks to demonstrate that transfer AT effectively achieves a superior balance between transfer adversarial robustness and privacy boundaries, ultimately establishing it as a more secure training approach for LM-VP models.
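To illustrate how an MIA success rate of this kind can be quantified, a common loss-thresholding baseline predicts "member" when a sample's loss is low and reports the best balanced attack accuracy over thresholds. The sketch below is our illustrative reading, not necessarily the exact attack used in the paper.

```python
import numpy as np

def mia_success_rate(member_loss, nonmember_loss):
    """Loss-thresholding membership inference baseline: flag a sample as a
    training member when its loss falls below a threshold, and report the
    best balanced attack accuracy over all candidate thresholds."""
    losses = np.concatenate([member_loss, nonmember_loss])
    best = 0.0
    for t in losses:
        tpr = np.mean(member_loss <= t)    # members correctly flagged
        tnr = np.mean(nonmember_loss > t)  # non-members correctly rejected
        best = max(best, 0.5 * (tpr + tnr))
    return best

# Perfectly separable losses give an attack success rate of 1.0.
member_loss = np.array([0.1, 0.2, 0.3])
nonmember_loss = np.array([0.9, 1.0, 1.1])
rate = mia_success_rate(member_loss, nonmember_loss)
```

A rate near 0.5 means the attacker does no better than random guessing, which is the direction transfer AT pushes the metric in the paper's framing.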
[1] Liwei Song, Reza Shokri, and Prateek Mittal. Privacy risks of securing machine learning models against adversarial examples. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pages 241–257, 2019.
[2] Bahng H, Jahanian A, Sankaranarayanan S, et al. Exploring visual prompts for adapting large-scale models[J]. arXiv preprint arXiv:2203.17274, 2022.
---
Rebuttal Comment 2.1:
Title: Good clarification
Comment: Authors provide a clear clarification for my concerns in this round, I hope you can add these rebuttal texts into the revised version if it is accepted. Blackbox evaluation is indeed a bigger challenge than whitebox.
---
Reply to Comment 2.1.1:
Comment: Thank you for your suggestion and for raising the score. We will add the analysis of white-box adversarial robustness during the rebuttal into our revised manuscript. | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable comments on our work. We are very grateful to the reviewers for their recognition of our research topic and for their suggestions to improve our work. We give specific responses to each of the reviewers' comments. If there are further questions, we are happy to communicate with the reviewers further.
In short, our main responses are as follows:
1. **Robustness-privacy trade-off in LM-VP models**: We analyze the differences between the LM-VP model and general models in the robustness and privacy trade-off and provide some insights to explain why transfer AT enables the LM-VP model to achieve joint improvements on transferred adversarial robustness and privacy. Please refer to our formal response to Reviewer Jw71.
2. **Additional datasets**: According to the suggestion of Reviewer J3te, we add the experimental results of the LM-VP model on TinyImageNet. The conclusion is consistent with that on CIFAR-10, results can be seen in Table 1 in the PDF.
3. **ConvNext and EVA performance**: Although ConvNext and EVA achieve high transfer adversarial robustness after transfer learning using LM-VP or fine-tuning, they are vulnerable to MIA, and transfer AT improves their MIA resistance. Please refer to our formal response to Reviewer J3te.
4. **White-box adversarial robustness of LM-VP models**: Based on the question raised by Reviewer J3te, we analyze the performance of LM-VP models under white-box adversarial attacks using standard training and standard AT. The experimental results are shown in Table 2 in the PDF. We further explain why standard AT is not suitable for the LM-VP model. Please refer to our formal response to Reviewer J3te. (We use standard AT here to distinguish the transfer AT in our manuscript)
5. **Ablation experiment on epsilon size 4/255**: According to the suggestion of Reviewer RUkH, we complete preliminary experiments on epsilon size 4/255 and will complete the experiments of other pre-trained models in our manuscript, results are shown in Table 3 in the PDF.
We provide the main experimental results during the rebuttal phase in the attached PDF file. These contents will be further improved and incorporated into the revised version of our manuscript.
Pdf: /pdf/a4418374e5e3e64f91dde566fd6564c9a9a5eee9.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Attractor Memory for Long-Term Time Series Forecasting: A Chaos Perspective | Accept (poster) | Summary: This paper introduces Attraos, a new model for long-term time series forecasting (LTSF) that incorporates chaos theory and views time series data as observations from high-dimensional chaotic dynamic systems. Attraos utilizes attractor invariance, non-parametric Phase Space Reconstruction, and a multi-scale dynamic memory unit to effectively capture historical dynamics and forecast future states with substantially fewer parameters than existing models such as PatchTST. Empirical evidence demonstrates Attraos' superior performance across various LTSF and chaotic datasets, providing a new perspective on the underlying dynamics of time series data.
Strengths: - The paper presents comprehensive experiment results and provide rigorous theoretical results to support their claims.
- Leveraging results from chaotic theories for LTSF tasks is a novel and interesting idea.
- The method proposed in the paper needs much fewer parameters than previous methods.
Weaknesses: - Most of the descriptions for problem setup (section 2), methods, and theoretical analysis (section 3) are very confusing and hard to follow. There are always notations suddenly coming out without much information. I suspect there are also several typos that cause difficulties in understanding and assessments. See **Questions** for details.
- If I understood correctly, the title is a little misleading since seemingly the key insight/perspective comes from the embedding theorem by Takens, a counterpart of Whitney theorem for attractors. Are there any other insights related to chaotic systems? If not, I would suggest concretizing this point in the title, abstract, introduction, and conclusion.
- Section A.1 in appendix is entirely copied from Wikipedia, https://en.wikipedia.org/wiki/Takens%27s_theorem , except for some changes on notations. However the notations in this section are not coherent, e.g. $k$ and $N$ in L414.
- Similar to the first point, the paper does not provide enough background introduction.
If the authors worry that including all the necessary details would exceed the page limit, I would suggest presenting a detailed version of the problem setup, background, analysis, and methods in the appendix.
Technical Quality: 3
Clarity: 1
Questions for Authors: 1. L90 Is it a weighted integral $\mu(ds)$ instead of $ds$? For polynomials $\phi$, the integral with Lebesgue measure does not converge.
2. Is $\mu$ same as $\omega$ in L94?
3. Isn’t L94 giving the formula of projection to polynomial subspace and isn’t $K_n$ the kernel? How are they connected to $e^{tA}B$, where $A, B$ only appear in the induced dynamics of $x(t)$?
4. Does $\mathcal{A}_i$ refer that there are multiple attractors in the dynamics? $\mathcal{A}$ are sets, then what does transpose in L158 stand for?
5. L112, 120, if $A$ is a (D,N) tensor, $B$ is a (B,L,N) tensor and $u$ is a $(B,L,D)$ tensor, how is $Ax(t)+Bu(t)$ in (2a) defined?
6. Could the authors give more interpretation on what they intend to do $\Delta$ and $\theta$? I have no idea what the authors mean by ‘$\Delta$ is similar to an attention mechanism’ and how this is related to the previous context.
7. Again, given the shape of $B$ and $\Delta$, it is confusing what $\Delta B$ refers to.
8. How is $H$ in L144 related to previous text?
Confidence: 2
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for appreciating the technical novelty and efficiency achieved by our method. We apologize for omitting several details, most of which concern specifics of the state space model. Please allow us to provide a detailed response to questions 1-8.
* **Q1: $\mu(ds)$ or $ds$?** Typically, describing two orthogonal bases involves the measure ($\mu(ds)$). However, we have opted for the simplified representation seen in prior state space model literature, omitting the measure when the integral context is evident. In the revised version of the paper, we will incorporate the complete $\mu(ds)$ and provide an explanation. Your observation is appreciated.
* **Q2: The difference between $\mu$ and $\omega$:** In general, $\omega$ denotes the weight function, and $\mu(ds)= \omega(s) ds$. Expressing it in this manner within state space models enables the adaptable manipulation of basis and weight functions, facilitating the derivation of various state space model variants.
* **Q3: The connection between $K(t,s)$ and $e^{tA}B$?** Equations 2a-2c (line 86) represent the standard equations for state space models [1-6]. These formulas utilize the kernel $K$ to signify the integral transformation. For a differential equation of $x^{\prime}(t)=Ax(t)+Bu(t)$, its general solution is:$$x(t)=e^{A (t-t_0)} x (t_0)+\int_{t_0}^t e^{A(t-s)}Bu(s) \mathrm{d}s.$$ When we assume the initial value is 0, i.e., $x(t_0)=0$, we can express $K(t,s)$ as $e^{tA}B$ based on the definition of matrix exponentiation.
* **Q4: Details about $A$ and $\nabla$**: You are correct that a dynamical system can have multiple attractors, so in our paper we represent them as sets. The symbol $\nabla$ at line 158 signifies the distinctions between these attractor patterns. As per Theorem 3, it is crucial for $\nabla$ to be sufficiently large to maintain the stability of these attractor patterns during training (invariance). This necessity leads us to establish multiple orthogonal subspaces to store each $A_i$.
* **Q5: The computation of $Ax+Bu$**: We have adhered to the tensor shape representation of the state space model (Mamba), and **in our open-source code, we have meticulously annotated the tensor shapes at each step of the model**. Specifically, Equation 2a represents the expression for continuous states. Given $A, B, \Delta$, we first obtain the discrete versions for actual computation using Equation 8 as $\overline{A}=e^{\Delta A}$ (B, L, D, N) and $\overline{B}=\Delta B$ (B, L, D, N). We then proceed with the calculations outlined in Formula 9 (also $x[k]=\overline{A}[k]* x[k-1]+ \overline{B}[k]*u[k], x[0]=0$ where the index is in L dimension).
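As an illustrative sketch only (not the authors' released code), the discretization $\overline{A}=e^{\Delta A}$, $\overline{B}=\Delta B$ and the recurrence $x[k]=\overline{A}[k]\,x[k-1]+\overline{B}[k]\,u[k]$ can be written in NumPy with hypothetical toy shapes for (B, L, D, N):

```python
import numpy as np

# Hypothetical toy shapes: batch B=2, length L=4, channels D=3, state N=5.
Bsz, L, D, N = 2, 4, 3, 5
rng = np.random.default_rng(0)

A = -np.abs(rng.standard_normal((D, N)))           # (D, N), negative for stability
B_mat = rng.standard_normal((Bsz, L, N))           # (B, L, N)
Delta = np.abs(rng.standard_normal((Bsz, L, D)))   # (B, L, D), per-step step size
u = rng.standard_normal((Bsz, L, D))               # (B, L, D) input signal

# Discretization (Equation 8): A_bar = exp(Delta * A), B_bar = Delta * B.
A_bar = np.exp(Delta[..., None] * A)               # (B, L, D, N)
B_bar = Delta[..., None] * B_mat[:, :, None, :]    # (B, L, D, N)

# Recurrence (Formula 9): x[k] = A_bar[k] * x[k-1] + B_bar[k] * u[k], x[0] = 0,
# where the index k runs over the L dimension.
x = np.zeros((Bsz, D, N))
states = []
for k in range(L):
    x = A_bar[:, k] * x + B_bar[:, k] * u[:, k, :, None]
    states.append(x)
states = np.stack(states, axis=1)                  # (B, L, D, N)
print(states.shape)  # (2, 4, 3, 5)
```

A production implementation would replace the Python loop with a parallel scan, as the rebuttal's reference to the Blelloch algorithm suggests.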
* **Q6: Details about $\Delta$ and $\theta$**: In the field of state space models, early studies such as LMU (NIPS 2019) and HiPPO (NIPS 2020) introduced the parameter $\theta$ to represent the size of the measurement window, mainly in theoretical contexts. However, in practical coding, the $\theta$ term was eventually removed through formula adjustments. Starting from S4 (ICLR 2022), state space models began incorporating a trainable discrete step size $\Delta$ as a dynamic measurement window $\theta$, with additional insights provided in Figure 3 (a). **In Remark 3.4, we describe $\Delta$ as an attention mechanism.** This terminology is also used in HTTYH (ICLR 2023) and Mamba (ICML 2024). By structuring $\Delta$ as (B, L, D), where each time step of the input signal has a unique discrete step size, the model can adaptively assign different weights to information at each step, revealing important sequence elements. This behavior resembles an attention mechanism (a gating mechanism) where attention scores are computed not through pairwise products but via a linear layer applied directly to the data itself (Line 120).
* **Q7: Details about $\Delta B$**: Given $B$ (B,L,N), $\Delta$ (B,L,D), $\Delta B$ is in shape (B,L,D,N). The corresponding code is **DeltaB = Delta.unsqueeze(-1) * B.unsqueeze(2)**, which means (B,L,D,1)*(B,L,1,N).
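For illustration, the broadcast can be checked with a NumPy equivalent of the torch `unsqueeze` calls (hypothetical toy shapes, not the authors' code):

```python
import numpy as np

# Hypothetical shapes matching the reply: B=2, L=4, D=3, N=5.
Bsz, L, D, N = 2, 4, 3, 5
Delta = np.ones((Bsz, L, D))   # (B, L, D)
B_mat = np.ones((Bsz, L, N))   # (B, L, N)

# np.expand_dims mirrors torch's unsqueeze: (B,L,D,1) * (B,L,1,N) -> (B,L,D,N)
DeltaB = np.expand_dims(Delta, -1) * np.expand_dims(B_mat, 2)
print(DeltaB.shape)  # (2, 4, 3, 5)
```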
* **Q8: Details about $H$**: $H$ denotes the hierarchical projection matrix we have defined. As detailed in line 136, the state space model allows us to estimate the dynamical trajectory using a series of $\theta/\Delta$. However, attractor structures exhibit diverse forms (points, loops, surfaces), necessitating multiple scales of $\theta$ (e.g., $2\theta$, $4\theta$, $8\theta$) for accurate approximation. To address this, as illustrated in Figure 3 (b), we iteratively combine $\theta$ in powers of 2 (repeatedly folding the sequence to acquire representations at varied scales). The matrix $H$ signifies the projection from the prior scale to the subsequent scale. Based on the distinctions between the left and right $\theta$ intervals, it can be further segmented into $H^{\theta_1}$ and $H^{\theta_2}$. These concepts encompass piecewise polynomial approximation and wavelet transforms (line 466).
* **W2: About Chaos System:** Chaos, a characteristic of nonlinear dynamical systems, has been a focal point of extensive study. The Takens and Whitney theorems serve as valuable tools for exploring chaos. Following your recommendation, we will change **Chaos perspective** to **Dynamic systems perspective** within the document.
* **W1, W3, W4: Paper Presentation**: We trust that the responses to **questions 1-8** have helped clarify any uncertainties related to symbol interpretations. In the revised manuscript, we will rephrase sections related to the Takens theorem and integrate a more thorough background on state space models to ensure accessibility for readers across various disciplines.
Considering that reviewers zUbZ/zqBV all agreed our paper has merits such as **satisfactory novelty**, **good evaluation**, and **efficiency**, we believe our research findings are worth sharing with the research community. If your queries have been addressed, we kindly request an increase in the rating and confidence level.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification. Hope the authors will polish their writing in the final version.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback and validation of our paper! We greatly appreciate your help in improving our scores! Wishing you all the best! | Summary: The paper introduces chaos theory into a long-term time series forecasting (LTSF) model called Attraos (a play on the words "attractor" and "chaos"). They propose a Multi-resolution Dynamic Memory Unit (MDMU) which is inspired by (and looks a lot like) the State Space Models (SSMs) used in the Mamba family of models. However, unlike SSMs, Attraos assumes additional structure on top of time-series signals by utilizing attractor behavior from chaos theory in the phase space. Attraos outperforms many existing architectures (e.g., Mamba, RWKV-TS, PatchTST) on time-series tasks at a fraction of the computational complexity and cost.
Strengths: - S1: **Outperforms existing architectures at a fraction of the cost**. Attraos is able to outperform models like Mamba, RWKV, and Transformers on many different time series tasks at a fraction of the computational cost (in terms of training time and parameter count).
- S2: **Theoretically complete**. The paper provides rigorous definitions, theorems, and proofs throughout the main paper and appendix to justify their method. (Note: before this paper I was quite unfamiliar with chaos theory, though I have some experience in SSMs and time-series. I am unable to verify much of the math.)
- S3: **Comprehensive empirical results**. The paper compares Attraos to many alternative architectures for time-series data (e.g., RWKV, Mamba, Transformers, MLPs, etc.) and shows that on many tasks Attraos outperforms alternative methods at a fraction of the computational cost. Additionally, the paper includes an ablation study for the different components of Attraos.
Weaknesses: - W1: **Presentation of results could be clearer**. I was often confused by the choice of coloring in the tables. See the following for my suggestions:
1. [Table 1,2,4]. The "Red/blue" colorscheme is unfortunate, because most brains (including mine) associate the color "red" with "poor performance". I suggest using **bold** for best, <u>underline</u> for second best, as is often done in AI papers.
2. [Table 3,5]. For "improvement/decline" in performance, I suggest one of two options: (1) show "better performance" with blue, and "worse performance" with bold and red to emphasize decreased performance from an ablation, or (2) showing only the "deltas" with a "stock ticker arrow" (🔺 or 🔻, colored green for improvement, red for decline).
- W2: **Complete architectural description is missing**. Most of the paper focuses on describing how to implement the special components of Attraos, but the complete picture of the architecture is missing. Additionally, there is no section describing the hyperparameters for the training setup for each experiment. See Q1 for specific questions.
- W3: **Missing error bars**. Table 1 represents average results of long-term forecasting across a swath of model classes, but these averages are not accompanied by reports of standard deviation. Thus, it is difficult to tell whether improved average performance is actually significant. This could be improved by including error bars for all numerical results, which may require re-running some experiments under different random seeds.
- W4: **Grandiose, non-academic language**. The paper starts with "In the intricate dance of time" [L15] 😂 and makes statements like "we can transcend the limitations of deterministic dynamical systems" [L34]. I suggest reworking these lines to preserve the professionalism of the rest of the work.
These weaknesses are admittedly small and easily fixable by the authors during the review process. My overall attitude towards the paper is that it is of high quality and should be accepted. However, a lot of the theory was beyond my ability to evaluate, and I am willing to adjust my score positively/negatively as I become more familiar with this work throughout the review process.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Q1: The complete architecture for Attraos is not described. Do you just take Mamba and modify the SSM component? What is the complete set of learnable parameters? Where does the Hopfield Network come into play and how are its memories trained? What are the training hyperparameters for each experiment?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are adequately described in the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for appreciating the theoretical design of our model and its efficiency. We apologize for missing several details and would like to clarify as follows:
* **W1: Unclear presentation of results:** Your comment is really helpful. We have followed your comments to indicate the best/worst performance in our revision. Thank you.
* **W2: Lack of several architectural details:** Thank you for pointing out this issue. Please see our reply in Q1.
* **W3: Missing error bars:** Thank you for pointing this out. We ran experiments with 3 random seeds. Due to time constraints, we have provided the standard deviation results (MSE) for select datasets. The comprehensive experimental error data and associated error bars will be included in the Appendix.
|Horizon|ETTh1|ETTm2|Weather|
|---|---|---|---|
|96|0.002|0.003|0.003|
|192|0.003|0.003|0.001|
|336|0.001|0.002|0.001|
|720|0.005|0.003|0.002|
* **W4: Several non-academic words**: We have revised this part based on your advice. Thanks for raising this concern.
* **Q1.1: The complete architecture of Attraos**: We have streamlined the architecture by removing redundant modules from Mamba (such as gated multiplication and local convolution), keeping only the fundamental SSM kernel $K(t,s)=e^{tA}B$. This refinement enhances interpretability, and recent studies suggest that redundant modules can introduce side effects in time series prediction tasks (https://arxiv.org/pdf/2406.04320, https://arxiv.org/pdf/2405.16312). To facilitate a deeper grasp of the Attraos framework, we present pseudocode below, and a model pipeline structure diagram will be added in the appendix:
| Input | Discrete historical data {z} in batch, phase space reconstruction hyperparameters {m, τ} |
| --- | --- |
|Initialize State Space Model| Obtain $B$ (B,L,N), $\Delta$ (B,L,D), $H$ (B,L,N) through three linear layers, with special initialization of $A$ (D,N)|
|Discretize State Space Model| $\overline{A} = e^{\Delta A}$ (B,L,D,N), $\overline{B} = \Delta B$ (B,L,D,N) |
|1|Reconstruct dynamical trajectories in phase space (Equation 5) and obtain patch representation $u$ (B,L,D) (Equation 6)|
|2| Improved Blelloch scan algorithm, obtaining dynamical system representation $x$ (B,L,D,N) through $u$, $H$, $\overline{A}$, $\overline{B}$ (line 205) |
|3| Frequency domain evolution: $\mathcal{F}\circ\mathcal{K}\circ x \circ\mathcal{F}^{-1}$|
|4| Flatten Tokens, obtaining prediction results through observation function $W_{h}$|
| Output | Discrete future prediction data |
* **Q1.2: Details about Hopfield Evolution**:
The Hopfield network evolution strategy is one of the three mainstream attractor evolution methods mentioned in the paper (line 178). It is positioned alongside frequency domain evolution and direct evolution strategies, used for system evolution on the dynamic representation built on multi-resolution dynamic memory units (MDMU). A detailed introduction to the Hopfield network is provided in **Appendix A2 on page 13**, and the modern version of the Hopfield network (https://arxiv.org/pdf/2008.02217) is essentially a cross-attention mechanism. By employing predefined trainable tokens as attractor memory banks (Key and Value in attention), and using data as queries, the overall energy function of the system is minimized during model training to achieve stability. The trainable tokens in the stable state can be considered as the system's attractor memory.
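As a hedged sketch of the cross-attention view described above (hypothetical names and shapes; the actual Attraos implementation may differ), one modern-Hopfield retrieval step toward the stored memory tokens can be written as:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d, n_mem, n_q = 8, 6, 4
memory = rng.standard_normal((n_mem, d))   # trainable attractor memory bank (keys = values)
queries = rng.standard_normal((n_q, d))    # data points acting as queries
beta = 1.0                                 # inverse temperature

# One Hopfield update step = cross-attention retrieval of stored patterns:
# each query is pulled toward a softmax-weighted combination of memory tokens.
retrieved = softmax(beta * queries @ memory.T) @ memory   # (n_q, d)
print(retrieved.shape)  # (4, 8)
```

Iterating this update drives queries toward stable fixed points, which is the sense in which the stable memory tokens act as attractor memories.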
* **Q1.3: Experimental hyperparameters**:
The experimental hyperparameters can be located in **Appendix D3 on page 21**. For further details concerning SSM and mathematical aspects, please refer to my response to the third reviewer (qpQ4).
In the revision, we have carefully incorporated your comments. Considering that reviewers zUbZ/qpQ4 all agreed our paper has merits such as **satisfactory novelty**, **good evaluation**, and **efficiency**, we believe our research findings are worth sharing with the research community.
---
Rebuttal Comment 1.1:
Comment: In this year's review cycle I am unable to see your paper's revisions, but I trust the authors that my suggested changes have been made. Similarly, I thank the authors for answering my questions. After reading the other reviews and responses, I see no reason to decrease my score and keep my vote to accept this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response! We greatly appreciate your support in recommending our paper! Wishing you all the best! | Summary: The paper introduces a novel approach, named Attraos, for Long-term Time Series Forecasting (LTSF) based on treating the observed time series as high dimensional chaotic dynamical system. The model first estimates an embedding of the data through Phase Space Reconstruction (Takens embedding) and then utilizes a memory unit through state space model that represents the dynamical system through polynomials which allows to evolve the dynamics into future steps and forecast the data. The model is evaluated on multiple LTSF datasets and additional experiments such as ablations and robustness under noise are performed.
Strengths: S1. The approach is proposing to forecast data evolution through learning the dynamical system that can generate such time series vs. to predict directly from data.
S2. Novel approach for polynomial estimation through a state space model is proposed.
S3. Fundamental properties of the approach are rigorously shown.
S4. Incorporation of Belloch algorithm for speed up is used.
S5. Experiments show that the approach performs better in many cases than existing approaches and ablation experiments are performed.
Weaknesses: W1. There seems to be a detachment between the theorems and properties proved and the proposed system. There seems to be no discussion about how these properties lead to the particular setup in the paper. For example why this particular SSM was used? How the hyperparameters were chosen in light of the propositions? What are the cases that the model is limited and not expected to be effective? Also interpretation of the results is lacking.
W2. For non toy experimental results exposition and interpretation of the resulting dynamical system is missing. What is the dynamical system/s that are being obtained for the benchmark and what is the distribution of polynomials?
Technical Quality: 3
Clarity: 2
Questions for Authors: See questions in W1 and W2 and also:
Q1. The Phase Space Reduction approach seems to be the well known Takens embedding and authors mention Takens thm, however call the method differently. Is there something different that I'm missing in PSR that is not in the classical computation of Takens embedding?
Q2. With local evolution I assumed that authors mean single step "autoregressive" forecasting. Is that the case? How would that change for multi-step?
Q3. It would be informative if the benefit/limitation/motivation of frequency evol vs. direct evol be more elaborately explained.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See W1-W2 and Q1-Q3. More extended discussion of limitations would contribute to better evaluation of the work vs generalization.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for appreciating our technical novelty and the SOTA performance achieved by our method. We are really sorry for missing several details. Here we endeavor to address your questions.
* **W1: Lack of discussion on model settings**:
**[why using this particular SSM]:** The paper opts for an $A$ matrix structured as {-1, -1, -1...}, considering it a rough approximation of HiPPO-Leg, which relies on finite-measure window approximations utilizing Legendre polynomials (Line 478), aligning with our objective of approximating dynamical systems using windows $[t, t+\theta]$.
**[how the hyperparams are chosen]:** In the realm of SSM-related investigations, diverse $A$ matrices exhibit varying characteristics; for example, a diagonal matrix like {-1, -2, -3...} might emphasize past information, while a step matrix could function as a localized attention mechanism, treating specific time steps collectively. Table 3 illustrates the performance stability enhancement of our designated $A$ matrix compared to randomly initialized $A$ matrices. The revised article will encompass a more extensive variant analysis.
**[Limitation]:** Irrespective of the chosen $A$ matrix, it is crucial that its diagonal elements are negative (following the left-half-plane stability criterion from control theory) to avert gradient explosions. Moreover, the diagonal state space may limit the expressive power of the model. This paper partially addresses this issue by introducing multiple orthogonal subspaces (MDMU block).
* **W2: Missing additional analysis.**:
**[non-toy experiment]:** In Appendix E, we have delved into Chaos Modulation (E4: rectifying trajectories with real values throughout model evolution), Chaos Reconstruction (E5: visualizing the constructed model's dynamical system alongside the actual dynamical system), and Chaos Representation (E6: exploring the impacts of truncating and non-truncating trajectories). If you require additional detailed analysis, please let us know and we will promptly address your queries.
**[the dynamics of other baselines]** As this paper is the first to restore the underlying dynamic structure in a time-series prediction benchmark using PSR technology and innovatively utilize polynomial estimation, whereas other baseline models are purely data-driven, it is challenging to access the dynamic systems built by other models. Detailed explanations regarding the real dynamical systems behind the datasets are provided in Appendix E1 and E2.
**[Polynomial distribution]** Because the state-space model is initialized from specific polynomials and then adapted through unconstrained gradient updates of its matrices, it is challenging to access the specific polynomial distributions behind the model. In recent work, the state-space model has been defined to adaptively update, through gradient updates, to the polynomial space most suitable for modeling the dynamics of the time series (https://arxiv.org/pdf/2405.16312).
* **Q1: Notation of PSR**: Takens embedding often refers to phase space reconstruction, while the Takens theorem is broader, ensuring attractor structures can be recovered even in dimensions greater than twice the original sequence. These are roughly equivalent terms without a unified label in the dynamical-systems literature.
* **Q2: Local Evolution**: Local evolution is not actually a single-step autoregression. In lines 81-83, the adjacent dynamic trajectory points belonging to the same attractor will undergo evolution using the same evolution operator $K_i$ (for example, time steps {1, 7, 8, 11} use the $K_1$ operator, and time steps {2, 5, 67, 68} use the $K_2$ operator). In the direct evolution strategy, we employ the KNN clustering algorithm to partition the points belonging to each attractor based on the Euclidean distance in phase space. In the frequency domain evolution strategy, we directly use dominant modes in the frequency domain to represent attractors.
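The local-evolution idea (nearest-attractor assignment by Euclidean distance in phase space, then a per-attractor evolution operator $K_i$) can be sketched as follows; this is an illustrative toy with hypothetical shapes and random operators rather than the paper's trained ones:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_pts, n_attr = 2, 10, 2

points = rng.standard_normal((n_pts, d))        # trajectory points in phase space
centroids = rng.standard_normal((n_attr, d))    # hypothetical attractor centers
K_ops = [rng.standard_normal((d, d)) for _ in range(n_attr)]  # per-attractor operators

# Assign each point to its nearest attractor, then evolve it with that
# attractor's operator K_i (the "local evolution" described in the reply).
dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=-1)
labels = dists.argmin(axis=1)                   # (n_pts,)
evolved = np.stack([K_ops[i] @ p for p, i in zip(points, labels)])
print(evolved.shape)  # (10, 2)
```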
* **Q3: Frequency Evolution**: **[Motivation]:** We chose the frequency domain evolution strategy inspired by articles in the field of neuroscience, where attractors amplify in the frequency domain (lines 54-56).
**[Limitations]:** Regarding interpretability, the frequency domain evolution strategy is less effective than direct evolution (dividing attractors based on Euclidean distance clustering).
**[Advantages]:** Direct clustering can lead to incorrect attractor divisions due to the significant noise present in real-time series data. Frequency domain evolution, on the other hand, filters out some high-frequency noise, enhancing the model's robustness and consequently improving its performance.
In the revision, we have carefully incorporated your comments. Considering that reviewers zqBV/qpQ4 all agreed our paper has merits such as **satisfactory novelty**, **good evaluation**, and **efficiency**, we believe our research findings are worth sharing with the research community. We sincerely hope that a revision is still considered.
---
Rebuttal Comment 1.1:
Title: Kindly Request for Reviewer's Feedback
Comment: Dear Reviewer zUbZ,
Since the End of author/reviewer discussions is coming in one day, may we know if our response addresses your main concerns? If so, we kindly ask for your reconsideration of the score. Should you have any further advice on the paper and/or our rebuttal, please let us know and we will be more than happy to engage in more discussion and paper improvements.
Thank you so much for devoting time to improving our paper!
---
Rebuttal Comment 1.2:
Title: Authors Rebuttal Response
Comment: I would like to thank the authors for their concise yet informative rebuttal that have clarified most of my questions. After reading authors rebuttal to my review and other reviewers I remain positive and supportive of the work and novelty of the approach.
One issue that I am still wondering about and would like to know if authors are able to further clarify is the detachment of the theory and the approach choices. Could the authors summarize the outcomes of the properties developed in the work and how they led/constrained model choices?
---
Reply to Comment 1.2.1:
Comment: Thank you for your response and support! We would be delighted to provide you with further clarification on the theory and approach choices.
* **Motivation and Significance**: Machine learning has historically been bifurcated into physics-driven approaches (such as PDE numerical solvers, PINN, etc.) and data-driven methods (like large models, various neural networks, neural operators, etc.). While data-driven techniques have excelled in time series modeling (as well as in computer vision, natural language processing, etc.), their black-box nature poses challenges for sustained advancement within the machine learning community. Recent publications in esteemed journals like Nature, Science, PNAS, etc., have showcased a fusion of physics-driven and data-driven methodologies, hinting at the emergence of a prominent research trend involving the integration of physics priors into deep learning models. **Our research marks the inaugural incorporation of PDE dynamical system knowledge into the realm of time series prediction, promising to facilitate interdisciplinary amalgamation and the creation of interpretable deep time series models.**
* **Dynamic System**: also known as PDE dynamical systems. Let the domain $S$ be an open subset of $\mathbb{R}^d$ and set an integer $k \geq 1$. Define the system state as $\boldsymbol{x}: S \mapsto \mathbb{R}^m$ where $\boldsymbol{x}=\left(x^1, \ldots, x^m\right)$. Then, an expression of the form:$\mathcal{F}\left(D^k \boldsymbol{x}(s), D^{k-1} \boldsymbol{x}(s), \ldots, D \boldsymbol{x}(s), \boldsymbol{x}(s), s\right)=0$ is called a $k^{\text {th }}$-order system of partial differential equation, where $\mathcal{F}: \mathbb{R}^{m d^k} \times \mathbb{R}^{m d^{k-1}} \times \ldots \times \mathbb{R}^{m d} \times \mathbb{R}^m \times S \mapsto \mathbb{R}^m$ and $s \in S$.
Real time series are observed values obtained from the system through an observation function $h$. By starting from dynamical systems instead of the raw values, it becomes possible to gain a better understanding of the fundamental temporal behavior. Moreover, this approach offers the advantages of interpretability and visualization.
* **Chaos Theory**: The study of chaos can be considered a branch of dynamical systems, focusing on the fact that both linear and nonlinear dynamical systems tend to exhibit certain fixed shapes in their trajectories, known as attractors. What may appear as irregular behavior in the time domain often reveals stable structures in the dynamical trajectories. The concept of attractors can readily be correlated with current deep-learning pattern recognition technologies.
* **Why SSM?**: The reason for choosing SSM to encode dynamical structures is that polynomials are widely recognized as universal approximators in dynamical systems research. The mathematical interpretation of SSM conveniently aligns with polynomial projection, making SSM well-suited for encoding dynamical structures. And we have introduced a novel SSM evolution matrix {-1, -1, -1, ...} to describe finite window approximations.
* **Why SSM + Multiple Orthogonal Subspaces**: Building upon SSM, we drew inspiration from chaos-related research (**Theorem 3 Attractor evolution error, line 163**) and proposed an enhanced version of SSM that utilizes distinct orthogonal subspaces to store various attractor structures. This innovation was validated through experimental results.
* **Why Frequency Evolution**: In the field of neuroscience research (where EEG and ECG are recognized as chaos datasets), it has been established that attractors are amplified in the frequency domain. Therefore, this paper adopts a frequency-domain evolution strategy. Besides, by discarding high-frequency components, noise effects can be mitigated, reducing computational complexity. Additionally, we explored the effects of Hopfield evolution and KNN clustering evolution, and through experimentation, **we found that these two methods were not as effective as frequency-domain evolution. We provided explanations for these findings in line 257**.
* **Follow-up Work**: In our recent research, we have discovered that **incorporating dynamical system priors leads to performance enhancements across all four types of time-series tasks (prediction, classification, interpolation, anomaly detection) and all four neural network architectures (convolutional, attention-based, linear, SSM). These improvements include a tenfold reduction in parameter count, as well as more stable gradients.** This work is set to be released soon—stay tuned for more updates. | null | null | Rebuttal 1:
Rebuttal: We commence by thanking the reviewers for their insightful comments. We are pleased to see that all the reviewers agree with some strengths of our paper, such as technical novelty (**Reviewer zUbZ, zqBV, qpQ4**), comprehensive evaluation (**Reviewer zUbZ, zqBV, qpQ4**), and efficiency (**Reviewer zUbZ, zqBV, qpQ4**). Your expertise also helps us to strengthen our paper significantly.
We apologize for any inconvenience caused by the omission of certain details in the article and endeavor to respond to each comment. We sincerely hope that the responses can release the reviewers' concerns. For reference, we present a brief introduction of the response as follows.
### **In response to reviewer zUbZ:**
* We extensively examined the **SSM setting**, including the reason for choosing the {-1, -1, -1...} form of the A matrix, its appropriate applications, and practical constraints.
* We provided a further explanation on **local evolution**.
* We extensively examined the reasons, advantages, and limitations of the **frequency evolution strategy**.
### **In response to reviewer zqBV:**
* We provided the standard deviation (**error bars**) of the experimental results on several datasets.
* We presented a **pseudocode** for the model to aid in a better understanding of the architecture.
* We offered a further explanation on how the **Hopfield network stores attractor memories** during the training process.
### **In response to reviewer qpQ4:**
* We offered detailed explanations on 8 specific questions regarding the **SSM kernel and computational details**.
Given that **all reviewers have rated soundness and contribution as 3 (good)**, we will strive to enhance the presentation of this paper. We firmly believe that our novel approach/initiative can offer the community fresh perspectives and technical contributions. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Metric Flow Matching for Smooth Interpolations on the Data Manifold | Accept (poster) | Summary: This work proposes a metric flow matching algorithm, where interpolants are approximate geodesics learned by minimizing the kinetic energy of a data-induced Riemannian metric. This targets the trajectory inference problem, such as single-cell trajectory prediction.
Strengths: * This paper clearly addresses the motivation for solving the trajectory inference problem by proposing a solution that naturally connects the recently introduced flow matching method with a data-induced Riemannian metric. The paper is well-written, with preliminaries explained compactly and effectively.
* Additionally, it thoroughly discusses the differences with Riemannian flow matching. The experiments are well compared with recent studies.
Weaknesses: *Major comments*
I have two main questions and comments, which I expect to be addressed in the revised version.
* I believe the trajectory inference problem with unpaired cross-sectional data has been well addressed in "Riemannian Metric Learning via Optimal Transport (ICLR 2023)." In this work, the authors learn the Riemannian metric without introducing an auxiliary objective, but by alternatively minimizing the optimal transport objective. Your approach and this work seem to have a fundamental difference, as the objectives used for learning the Riemannian metric are different. This difference should be clearly noted, emphasizing the advantages of your approach. I hope this is included in the Introduction section with some emphasis.
* You have used L2 distance for \( c(x,y) \) in (12) due to computational issues. While this is understandable, I believe it could significantly impact the results. The two main factors affecting the resulting trajectory via flow matching are the Riemannian metric and the choice of coupling \( q \). In the original flow matching algorithm, the quality of the resulting generation was more important than the intermediate trajectory. However, in this study, the trajectory itself is the objective, making it crucial to carefully choose these two factors.
In the current version of the paper, this issue seems to be mentioned very briefly and almost as if it were insignificant. It appears necessary to include a more detailed discussion and address this point more thoroughly.
*Minor comments*
I appreciate the comprehensive review in section 6. There are a few missing references, and I suggest including them.
* Geometry-aware generative models:
- Learning geometry-preserving latent spaces: Regularized Autoencoders for Isometric Representation Learning (ICLR 2022) and Geometric Autoencoders--What You See is What You Decode (ICML 2023),
- Learning data manifolds and latent spaces by exploiting ambient space metrics: A statistical manifold framework for point cloud data (ICML 2022) and Geometrically regularized autoencoders for non-Euclidean data (ICLR 2023)
- geometry-based regularization for learning accurate manifolds: Neighborhood Reconstructing Autoencoders (NeurIPS 2021) and On explicit curvature regularization in deep generative models (TAG-ML 2023).
Technical Quality: 3
Clarity: 3
Questions for Authors: * Both $g_{RBF}$ and $g_{LAND}$ capture data density and ensure that the interpolants lie within the data support. In principle, this algorithm appears to be designed to ensure that inferred trajectories lie within the data support. Are there other trajectory inference approaches where algorithms essentially do the same thing (i.e., identify trajectories that mostly lie within the data support and connect two data distributions)? If so, is the advantage of your method that it can provide directional information when using a general Riemannian metric? Can a 'good' Riemannian metric be defined for trajectory inference that includes direction? How does this compare to "Riemannian Metric Learning via Optimal Transport" (ICLR 2023)?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their time and thoughtful review of our work, which gave us an opportunity to improve our work significantly. We are glad to hear that the reviewer found our work “well-written” and that it “proposes a solution that naturally connects the recently introduced flow matching method with a data-induced Riemannian metric” with “experiments that are well compared with recent studies”. We first refer to the **general rebuttal**, where we have provided additional experiments and visualizations (see the 1 page pdf). Below, we address the key clarification points and questions raised in the review one by one.
### Weaknesses
> _I believe the trajectory inference problem with unpaired cross-sectional data has been well addressed in "Riemannian Metric Learning via Optimal Transport (ICLR 2023)." (..) Your approach and this work seem to have a fundamental difference, as the objectives used for learning the Riemannian metric are different. This difference should be clearly noted, emphasizing the advantages of your approach._
We thank the reviewer for pointing out this highly relevant work. We have already added a citation to the work of Scarvelis & Solomon (2023) in the revised version of the paper. We will further comment extensively on the key differences: (i) the objectives being learned, and (ii) the fact that we first optimize the paths and then match the vector field, rather than directly regularizing the vector field (as in their Eq. (7)). Crucially, since MFM relies on flow matching, our framework does not require simulations during training when tackling the trajectory inference problem, which is an advantage compared to the work of Scarvelis and Solomon.
> _You have used L2 distance for ( c(x,y) ) in (12) due to computational issues. While this is understandable, I believe it could significantly impact the results. The two main factors affecting the resulting trajectory via flow matching are the Riemannian metric and the choice of coupling ( q ). (...) It appears necessary to include a more detailed discussion and address this point more thoroughly._
We appreciate the reviewer's concern. It is correct that the quality of the matching is determined both by the choice of a coupling $q$—which is used to sample the boundary points $x_0$ and $x_1$—and by the choice of the interpolants connecting those boundary points. First, we note that choosing a non-Euclidean cost for the OT objective is an interesting and non-trivial problem in its own right. For arbitrary Riemannian metrics one cannot in general hope for a closed-form expression for the metric-induced distance, and hence for the OT cost. As a result, computing $c(x,y)$ would require simulating geodesics for every candidate pair, which is computationally prohibitive from a practical standpoint. Furthermore, we argue that an advantage of our framework is showing that even when we fix the cost to be Euclidean for the OT coupling, learning data-dependent paths allows us to improve upon the Euclidean baseline.
To further decouple the impact of the interpolants from the choice of the coupling, we conducted further experiments—found in our 1 pg global response PDF—comparing both CFM and MFM on the arch dataset and the single-cell ones, where we chose the coupling q to simply be the independent one. We agree with the reviewer that aspects of this discussion need further clarifications and we will include a larger section to explicitly outline our motivation and claims, along with the addition of the new experiments using independent coupling.
### Minor comments
Thanks for sharing these works, we will add relevant citations in the revised version of our paper.
### Questions
> _Both $g\_{\rm RBF}$ and $g\_{\rm LAND}$ capture data density and ensure that the interpolants lie within the data support. In principle, this algorithm appears to be designed to ensure that inferred trajectories lie within the data support. Are there other trajectory inference approaches where algorithms essentially do the same thing (i.e., identify trajectories that mostly lie within the data support and connect two data distributions)?_
Regarding the choice of metrics, we adopted those that seemed the easiest and most efficient, in order to show the versatility of our framework while keeping computational overhead over the Euclidean FM baseline minimal. Crucially though, we also wanted to highlight how our framework can, in principle, be applied beyond trajectory inference (as in the image interpolation problem), which is why we did not specialize the choice of metric to trajectories. We nonetheless believe this to be an interesting direction, i.e. how to choose task-dependent metrics to further improve the MFM framework, and we hope it can be addressed in future work by us and the community more generally.
### Final comment
We hope that our responses here, in conjunction with the general rebuttal and the additional experiments, help answer the great questions raised by the reviewer. We politely encourage the reviewer to ask any further questions or, if possible, to consider a fresh evaluation of our paper with a potential score upgrade.
---
Rebuttal Comment 1.1:
Title: Responses
Comment: Thank you for addressing all the points that I raised. I believe the clarity of the paper will increase, and I am therefore inclined to raise the score to accept.
---
Reply to Comment 1.1.1:
Title: Re: Response
Comment: We thank the reviewer for their time and for engaging with us in the rebuttal. We are glad that the reviewer found our rebuttal to increase the clarity of the paper, and we hope the reviewer can also upgrade their score as mentioned in their response. We are also more than happy to answer any lingering questions the reviewer has; please let us know!
Strengths: The paper is very well written, with great attention to detail. Everything is easy to follow, including the more technical details. The authors address an interesting problem (how to get more interpretable and useful flow matching trajectories) which has received surprisingly little attention given the current popularity of flow matching. They find a suitable application in single-cell trajectory prediction and it may improve the quality of image translation models (it's hard to be confident since the image experiments are limited). To my knowledge the work is an original contribution and a useful addition to the generative modelling community.
Weaknesses: I have some minor complaints:
1. It seems that the dataset $\mathcal{D}$ is the concatenation of samples from both $p_0$ and $p_1$ but this is not clearly stated
2. There is no justification of the use of diagonal metrics except that it reduces computation cost. Surely there must be some downside? Is it possible to compare the performance to non-diagonal metrics in the 2d example?
3. It's not clear how the OT is implemented. Is it a batchwise scheme? Does this introduce a bias? If not batchwise, is it scalable?
4. In Tables 3 and 4, the MFM results are bolded. While the mean value is indeed the lowest, we cannot be confident that the true value is lower than some of the other values (e.g., WLF-UOT) given the stated error bounds. While I understand that this is common practice, I personally find it unscientific to report MFM values in bold and not others which could plausibly be lower than the MFM values
5. Line 289: I don't agree that the interpolations are better semantically. If there is a difference, it is too small to be able to make such a subjective claim
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Line 125: "regular" means smooth in some way? Can you be more specific?
2. Why is it preferable to use LAND for lower dimensions?
3. Am I correct that the LIDAR data is just an example of a 2d manifold embedded in 3d? I found it confusing at first since from fig 2 I thought it was encouraged to follow low-altitude trajectories and that maybe the height was being used to inform the metric. Perhaps you can choose a different visualization/update the description to avoid this confusion
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The only limitation given is that the data must be embedded in Euclidean space. I am sure the authors can think of others, such as using a diagonal metric, and whatever trade-offs are involved in the OT scheme they use (see Weaknesses above)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and positive appraisal of our work. We are thrilled that the reviewer viewed our work to be “well-written, with great attention to detail” and that we “address an interesting problem” and is an “original contribution” that is a “useful addition to the generative modelling community”. We now provide responses to the main questions raised by the reviewer.
### Weaknesses
> _1. It seems that the dataset $\mathcal{D}$ is the concatenation of samples from both $𝑝_0$ and $p_1$ but this is not clearly stated_
Yes, that is correct—we will make this more explicit. Since we work in a “generalized” trajectory inference setup, we think of samples from $p\_0$ and $p\_1$ as samples from the same dynamical system evaluated at different times, so that the manifold we refer to is indeed the one spanned by the trajectories.
> _2. There is no justification of the use of diagonal metrics except that it reduces computation cost. Surely there must be some downside? Is it possible to compare the performance to non-diagonal metrics in the 2d example?_
That's a great question! Working with diagonal metrics may limit expressive power in high dimensions, specifically in settings where capturing a precise underlying metric is crucial—note that in the applications considered in our submission it is hard to identify a ground-truth metric. In general though, there are theoretical reasons why our choice is not particularly limiting; for example, in the 2D case any metric is actually locally conformally flat (this follows from the existence of isothermal coordinates). Accordingly, for 2D manifolds, one would not lose expressive power by taking the matrix $\mathbf{G}$ to be a multiple of the identity pointwise. Nonetheless, our framework is not constrained to diagonal metrics and more general cases are indeed possible. To further support this point, we refer to the additional experiments shared in the 1 page pdf and **general rebuttal**, where we showed that intermediate samples generated using MFM are much closer to the underlying lower-dimensional manifold (a sphere) than the ones generated using CFM (see Table 4 and Figures 1, 2, 3). Crucially, MFM only leverages the diagonal LAND metric defined in the ambient space based on samples and is never given information about the sphere.
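To make this concrete, below is a minimal, purely illustrative sketch of a diagonal LAND-style metric (the constants, bandwidth, and function names are our simplification, not the paper's exact construction): diagonal entries of $\mathbf{G}$ are small near the data and large far from it, so velocity costs pull paths toward the data support.

```python
import numpy as np

def land_diag_metric(x, data, sigma=0.5, rho=1e-3):
    """Diagonal of a LAND-style metric G(x) built from samples.

    Each diagonal entry is the inverse of a kernel-weighted spread of the
    data around x (plus a small rho), so G grows to roughly 1/rho far from
    the data, making motion there expensive.
    """
    w = np.exp(-((data - x) ** 2).sum(-1) / (2.0 * sigma ** 2))  # (N,) kernel weights
    inv_diag = (w[:, None] * (data - x) ** 2).sum(0) + rho       # (D,)
    return 1.0 / inv_diag                                        # diagonal of G(x)

def metric_norm_sq(v, x, data, **kw):
    """Squared Riemannian norm ||v||_{G(x)}^2 for a diagonal metric."""
    return float((land_diag_metric(x, data, **kw) * v ** 2).sum())

# Samples lying near a 1-D curve (the x-axis) embedded in 2-D.
data = np.stack([np.linspace(-1.0, 1.0, 20), np.zeros(20)], axis=1)
```

Moving along the data support is then much cheaper than moving far away from it, which is the behavior the approximate geodesics exploit.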
> _3. It's not clear how the OT is implemented. Is it a batchwise scheme? Does this introduce a bias? If not batchwise, is it scalable?_
Our method relies on mini-batch OT, which has been a standard tool in the generative modeling / machine learning literature (see for example Tong et al., 2023b,c). In this case we can easily trade off the cost of OT against the batch size, and in practice this does not create severe overhead in training our models. We find that OT helps improve training stability and leads to better empirical performance. We note, however, that MFM with independent coupling still surpasses CFM with independent coupling, as reported in our global 1 pg response where we compare CFM and MFM without OT (Tables 1, 2, 3).
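As a concrete illustration of the mini-batch scheme (our own sketch, not the authors' code): with equal-size, uniformly weighted batches and a squared-Euclidean cost, the optimal transport plan is a permutation, so exact linear assignment on the pairwise cost matrix suffices.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def minibatch_ot_pairs(x0, x1):
    """Pair the samples of two equal-size mini-batches by exact OT.

    With uniform weights, the OT plan for a squared-Euclidean cost is a
    permutation matrix, recovered here by solving the linear assignment
    problem on the pairwise cost matrix.
    """
    cost = ((x0[:, None, :] - x1[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return x0[rows], x1[cols]  # matched (x0, x1) pairs for training

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 2))
b = rng.normal(loc=3.0, size=(8, 2))
a_m, b_m = minibatch_ot_pairs(a, b)
```

Larger batches approximate the population coupling more tightly, which is exactly the cost/batch-size trade-off mentioned above.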
> _4. In tables 3 and 4, the MFM results are bolded. (..) I personally find it unscientific to report MFM values in bold and not others which could plausibly be lower than the MFM values_
Thank you for your feedback. We agree with your observation and will remove bold numbers from Tables 3 and 4.
> _5. Line 289: I don't agree that the interpolations are better semantically. If there is a difference, it is too small to be able to make such a subjective claim_
We again agree that this may be entering somewhat subjective territory, so we will refine our statement and claims.
### Questions
> _1. Line 125: "regular" means smooth in some way? Can you be more specific?_
Yes, it does mean smooth in a suitable sense. To improve clarity, when citing the paper of Ambrosio et al. we now refer to their explicit statement, where the assumptions are stated clearly.
> _2. Why is it preferable to use LAND for lower dimensions?_
That's an excellent question. We empirically found LAND to be marginally better in lower dimensions, which we believe to be reasonable given that in this setting we are building the metric using all samples directly without any clustering being involved beforehand.
> _3. Am I correct that the LIDAR data is just an example of a 2d manifold embedded in 3d? (...)_
Yes that is correct, as in we do not provide height information for the metric since we wanted to test MFM in a setting where the metric was built agnostic of the downstream application. To address your concerns, in the additional 1 page pdf we have provided more views which we will add in the appendix of the revised version to improve clarity.
> _The only limitation given is that the data must be embedded in Euclidean space. I am sure the authors can think of others, such as using a diagonal metric, and whatever trade-offs are involved in the OT scheme they use (see Weaknesses above)_
Thank you for the feedback. We have now also added a line mentioning the trade-offs associated with the OT scheme. In light of our previous comment, we do not see any serious weakness with using diagonal metrics and in general the framework does not require one.
### Final comment
We hope we have addressed the main concerns of the reviewer, and that the addition of experiments of MFM without OT and MFM on a sphere have further improved the strength of our submission. We hope that our answers enable the reviewer to continue endorsing our paper and potentially even upgrading the score if the reviewer deems it. We are also more than happy to engage in any further discussion.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed rebuttal. I believe you've answered all my questions. Since most of my comments were minor details, my overall judgement of the paper has not changed, so I will leave my score as is.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We thank the reviewer for engaging with us, for the detailed and valuable feedback, and finally for endorsing our submission. | Summary: The authors introduce a method for trajectory inference based on conditional flow matching (CFM) that takes a formulation of the trajectories using Riemannian geometry. The Riemannian metric is built following the manifold hypothesis and prior work, resulting in a flow matching with a data-dependent metric encouraging trajectories to stay close to the so-called data manifold. Furthermore, the sampling distribution used in the optimisation of the objective function is improved with optimal transport concepts, properly matching the joint distribution of interpolant endpoints to the main task of interest: single-cell dynamics.
Strengths: The paper is very well written and easy to follow. Its main originality lies in the use of a data-dependent Riemannian metric with conditional flow matching to learn trajectories between distributions which are discouraged to stray from the data manifold. The elegant incorporation of a data-dependent metric means the method works well without excessively requiring expert knowledge about the problem domain and an appropriate metric to be chosen.
Weaknesses: The technical contributions allow CFMs to be employed with a Riemannian metric and without deep understanding of an appropriate metric to be used for the task of interest. However, most of the work to get there was already proposed before. In particular, beyond Eq. (4) followed by then adapting all distances to be Riemannian (as required by the problem setting), I see little "on top" of the work by Arvanitidis et al. (2016, 2021) and Tong et al. (2023b).
Given the technical contribution builds little on top of existing work, and taking into account question 2 below regarding the image translation experiment, I believe the paper showed limited applicability beyond single-cell dynamics. I expected a stronger experimental setting showing applicability in more problems where adopting a Riemannian metric is the advantageous choice.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Taking into account the weakness mentioned above regarding novelty: could the authors clarify their contributions beyond Eq. (4)?
2. I understand that in other domains, e.g. single-cell data, meaningfulness of trajectories might be more obvious to define, but I do not understand the claim about meaningfulness of trajectories between pictures of cats and dogs. Especially after looking at additional results in the appendix, I believe the setting is fundamentally flawed to evaluate the proposed model, with both models being similarly bad in many cases. What is the expected "good" case? Could the authors clarify their motivations for using this specific example? (By the way, the 6th row of Fig. 4 in the Appendix does not seem to be the same interpolants in both cases).
3. Can a comparison with RFM be made in a setting where the underlying "true" appropriate metric is known? The Arch data set or some variant in hyperbolic geometry seem to be interesting cases for me.
Minor comments:
- Why are there different citations for the manifold hypothesis? At first (introduction), a set of papers are cited, but then later in Section 3.1, another paper is cited.
- [l. 160] "boundary conditions": at this point I was defensively looking for some boundary conditions I might have missed, but later noticed the authors likely meant the endpoints $x_0$ and $x_1$. Is that the case? Could this be clarified at this point when first mentioning "boundary conditions"?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have sufficiently addressed the limitations in their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed feedback and constructive comments, which gave us an opportunity to improve our work significantly. We are pleased to see that the reviewer found our work “very well written and easy to follow” and that our idea is “elegant” and that “the method works well without excessively requiring expert knowledge”. We now address key clarification questions raised by the reviewer.
### Technical novelty
> _Beyond Eq. (4)(...) I see little "on top" of the work by Arvanitidis et al. (2016, 2021) and Tong et al. (2023b)._
We value the reviewer’s comments regarding the technical contribution of MFM over prior works. We kindly refer to the **general rebuttal** for a detailed discussion on the main technical contributions of MFM, while here we provide a short summary.
First, we hope that drawing connections from metric learning—such as the works of Arvanitidis et al—and flow matching, is in itself an important contribution that can foster new works in this space bridging different communities further. We believe that showing the role played by the data geometry (not the ambient geometry) for generative modeling is an important research direction.
More importantly, in terms of technical contributions, we believe that the novelty lies not just in the parameterization adopted in (4) but, crucially, in the optimization in Eq. (6). In fact, the objective in Eq. (6) is based on evaluating a *data-dependent metric Dirichlet energy* over the paths, once we sample the boundary points $x\_0,x\_1$ according to the coupling $q$. A key contribution of our work is that we can learn approximate geodesics using a simulation-free learning objective, which we find quite useful for downstream generative modeling applications like trajectory inference. Moreover, we believe that optimizing using a velocity-induced regularization is an essential part of our framework, along with aligning the geodesic sampling with the joint distribution $q$. Crucially though, our framework is not bound to a specific metric; we simply adopted existing ones to showcase flexibility and ease of use. We hope this addresses the reviewer's concerns and clarifies the technical novelty of our paper.
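As a toy illustration of this kind of objective (entirely ours: a single learnable offset vector stands in for the interpolant network, and the scalar metric is hand-picked rather than LAND/RBF), one can minimize a discretized metric energy over paths with fixed endpoints:

```python
import numpy as np
from scipy.optimize import minimize

def discrete_energy(c, a, b, metric, n=50):
    """Discretized metric (Dirichlet) energy of the path
    x_t = (1 - t) a + t b + t (1 - t) c.
    The t (1 - t) factor makes the correction vanish at t = 0, 1, so the
    boundary conditions x_0 = a, x_1 = b hold by construction."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    path = (1.0 - t) * a + t * b + t * (1.0 - t) * c
    diffs = np.diff(path, axis=0)            # finite-difference velocities
    g = metric(path[:-1])                    # scalar (conformal) metric along path
    return float((g * (diffs ** 2).sum(-1)).sum() * (n - 1))

# Toy metric: a neighborhood of the origin is expensive to traverse,
# mimicking a region off the data support.
metric = lambda x: 1.0 + 100.0 * np.exp(-(x ** 2).sum(-1) / 0.1)

a, b = np.array([-1.0, 0.0]), np.array([1.0, 0.0])
res = minimize(lambda c: discrete_energy(c, a, b, metric),
               x0=np.array([0.0, 0.5]))      # symmetry-broken initialization
```

The optimized correction bends the path around the high-cost region, which is the qualitative behavior the learned interpolant is trained to produce when the metric is built from data.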
### Questions
> _Taking into account the weakness mentioned above regarding novelty: could the authors clarify their contributions beyond Eq. (4)?_
We have addressed this point above when it comes to technical contributions.
> _I understand that in other domains, e.g. single-cell data, meaningfulness of trajectories might be more obvious to define, but I do not understand the claim about meaningfulness of trajectories between pictures of cats and dogs. (...)_
We agree that meaningfulness of trajectories can be better evaluated on actual dynamical systems, such as single-cell data. The experiments on unpaired image translation were meant to showcase that MFM can be on par with, or in fact better than, the Euclidean baseline even in settings it was not primarily designed for. In particular, we sought to attach a visual correspondence to the learned interpolation from MFM. In contrast, one can imagine a more exotic interpolant which traverses other classes—or leaves the image manifold entirely—in going from cats to dogs. Note that to further assess the meaningfulness of interpolations we adopted the LPIPS metric.
> _Can a comparison with RFM be made in a setting where the underlying "true" appropriate metric is known? The Arch data set or some variant in hyperbolic geometry seem to be interesting cases for me._
This is a great suggestion! We address this with new experiments in our 1 pg PDF. We have extended the arch task to the 2D sphere embedded in $\mathbb{R}^3$. We have found that MFM not only improves significantly over the Euclidean baseline CFM (see Table 5 in the 1 page pdf), but crucially that the samples generated by MFM at intermediate times are much closer to the underlying sphere than the Euclidean counterpart (see Table 4 and Figures 1, 2, 3 in the 1 page pdf and comments in our general rebuttal). We emphasize that we manage to attain this **without** explicitly parameterizing the lower-dimensional space, simply by relying on the LAND metric. Finally, we note that RFM uses the ground-truth geodesics of the *standard* metric on the sphere, meaning that in general how well RFM is able to solve trajectory inference problems on the manifold depends on how well the geodesics provided by the data-agnostic metric resemble trajectories of the underlying system. Conversely, MFM directly learns from data and can approximately recover a curved, lower-dimensional manifold from the samples.
**Minor comments**
> _Why are there different citations for the manifold hypothesis?_
This is just because we wanted to cite key papers in the main body of the paper (beyond the related work section) and the distinction (i.e. not citing all of them in both cases) is due to space constraints and formatting.
> _[l. 160] "boundary conditions": at this point I was defensively looking for some boundary conditions I might have missed, but later noticed the authors likely meant the endpoints $x_0$ and $x_1$. Is that the case? Could this be clarified at this point when first mentioning "boundary conditions"?_
Yes, by boundary conditions we mean that the paths recover $x\_0$ and $x\_1$ at times 0 and 1, respectively. We will add an extra sentence here to improve clarity.
### Final comment
We thank the reviewer again for their review and detailed comments that helped strengthen the paper—particularly through the addition of MFM experiments on the sphere. We believe we have answered to the best of our ability all the questions here and in the general rebuttal. We hope this allows the reviewer to consider upgrading their score if they see fit. We are also more than happy to answer any further questions.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
I am in general positive with the rebuttal. Here are a few additional comments.
> We emphasize that we manage to attain this without explicitly parameterizing the lower-dimensional space but simply relying on the LAND metric.
Yes, that is indeed interesting. We must however be a bit skeptical and say that in general it is hard to assume that the data is sampled so nicely along the underlying manifold. I wonder which amount of jitter or lack of proper sampling would make the LAND metric unreliable with real data. As such, one would then put the RFM requirement of having the exact metric versus the LAND approach of relying on the data as both **desirable**, but not guaranteed to be available/satisfied. I originally meant the experiment to compare that exact thing to see how much we "sacrifice" compared to ground truth by relying purely on data, but the example provided is already showing something positive in my opinion.
After reading the other reviews and their rebuttals, I am inclined to increase my score to Accept.
---
Reply to Comment 1.1.1:
Title: Response to official comment by Reviewer mgNS
Comment: We appreciate the reviewer taking the time to engage with us further during this rebuttal.
Our goal with the new ARCH on a sphere experiment was to test whether trajectories with MFM with the LAND metric lie on the sphere *without actually parametrizing the sphere*. Given this idealized setting, we see that MFM matches our intuitions and generates samples that lie on the sphere.
If the spherical inductive bias (i.e. the exact parametrization of the sphere) were known to us a priori, then RFM and MFM may coincide if the sampling of data demands it. However, note that if data is irregularly sampled on the sphere, then trajectories can themselves bend on the sphere and not obey "shortest paths" on the sphere, which is a setting RFM cannot model but MFM handles.
In general, we agree with the reviewer's point that in many practical settings data may not be sampled homogeneously on an underlying manifold, and in these cases, the LAND metric may not be optimal. We would like to note however that our MFM framework can be instantiated with any choice of metric, including a parametrized/learned one such as RBF which we found works well for higher dimensions. The key point we highlight is that any metric used in MFM is informed by the sampling of data (training set) which alleviates the need for an exact parametrization of the manifold.
We thank the reviewer again for allowing us to clarify these technical aspects of MFM. We would be happy to answer any further questions the reviewer has, otherwise we politely encourage the reviewer to consider increasing their score as they originally suggested they are inclined to. | Summary: This paper proposes an instantiation of flow matching where the interpolants are learned by minimizing the kinetic energy defined by a nonparametric metric defined over a set of data points (a weighted L2 norm of the velocity field). This metric is defined through another weighted normal distribution, with learnable weights that ensure the metric is at the right scale. The interpolants are defined using another neural network and optimized to reduce the kinetic energy.
Strengths: Learning a metric over a low-dimensional manifold could potentially be very interesting for high-dimensional machine learning applications.
Weaknesses: ### Clarity
This paper lacks clarity in its exposition and is not sufficiently careful in stating its claims.
- The paper introduces its framework by discussing the "manifold hypothesis": that high-dim data lies on a low-dim manifold. However, the choice of metric does not induce a lower-dimensional manifold, as it is induced by a simple Gaussian mixture model. For some of the cellular experiments, the manifold seems to actually be found by PCA, taking the first few components to define the space of the manifold.
- The paper uses very vague terminology. For instance, there is repeated mention of a "more meaningful matching" being learned. But what is a "matching" and how do you compare between two? I believe the authors are referring to the learned time-dependent probability density at intermediate times?
### Novelty
Some of the claims about novelty are overly strong and could more clearly and precisely describe this paper's contribution of using a data-dependent metric (which is an interesting direction; I just feel the paper unnecessarily sugarcoats its contribution instead of stating it objectively).
- The proposed algorithm is essentially the same as Wasserstein Lagrangian Flows and GSBM, except these two existing works actually take a further step and learn the optimal transport coupling between x0 and x1 induced by the choice of metric. In contrast, I believe the main contribution of this work lies in its choice of metric defined over a finite data set. The paper repeatedly states that it is a generalization of the existing works but in fact I think of this work as an instantiation of a subset of the existing frameworks with a particular choice of metric.
- One claim is that "MFM is simulation-free and stays relevant when geodesics lack closed form" when compared to Riemannian flow matching. I think this is contrasting with the case of Riemannian flow matching where the interpolants are solved with an ODE solver. However, here the geodesics are also not known in closed form and instead are solved by a neural network. I think of the proposed approach just as a time-parallel method for approximating the geodesic, and feel that the above claim is too strong. There is in fact a non-trivial optimization problem happening because MFM does not have closed-form geodesics, and the reliance on a neural network further suggests it is not as simple a procedure as this claim makes it sound.
### Empirical validation
I feel the experimental results are messy and do not provide a coherent analysis.
For instance, part of the motivation of a data-dependent metric is to impose a more useful probability path p_t. However, the probability path p_t is determined by both the coupling q(x0, x1) and the interpolant. The use of an optimal coupling should have a very strong influence on the resulting p_t, and the choice of interpolant is not independently studied with an independent coupling.
Going through each experiment section, there are concerns regarding the setup or results:
1) The LiDAR scan.
- It is unclear what the aim of this experiment is. In the paper it is mentioned that this uses a different V_t than GSBM. [Suggestion:] However, one direct comparison that should be made is to take the same V_t as GSBM but simply replace their kinetic energy with your metric-based kinetic energy. This can help answer whether the data-dependent metric is useful for this setting.
- It is hard to tell from the visualization, but do the samples from p_t stay close to the LiDAR data points or are they "floating"? [Suggestion:] i.e., is the condition || x_t - nn(x_t) || < max_i,j || x_i - x_j || satisfied? Here x_t is sampled from the learned p_t, nn returns the nearest neighbor of x_t in the LiDAR dataset, and the right-hand-side here is the maximum distance between points in the LiDAR dataset.
2) The AFHQ experiment.
- Here the FID values are extremely high, which indicates that the model is poorly fit. Looking at values reported by GSBM and the original StarGAN v2 where this dataset was introduced, it seems reasonable to expect FID values in the range of 10-20? [Suggestion:] use the same code as existing works to reproduce their results and perform a direct comparison.
- Qualitatively looking at the samples, it doesn't seem clear which method has the better samples at t=1/2. I feel this setting doesn't showcase a non-trivial p_t since it is done in the latent space of a VAE, so we know that the samples follow a normal distribution. Given this, I'm not particularly convinced that a different interpolant provides something more interpretable. [Suggestion:] Perhaps it'd be good to visualize a 2D PCA of the trajectories?
- Again, here I feel the role of the interpolant is heavily diminished compared to the role of the optimal coupling. I believe LPIPS is computed on pairs of (x0, x1), and the fact that the values are extremely close suggests the following: the choice of interpolant has little or no influence on the learned coupling. Since the goal of unpaired translation is to learn how to translate, it seems one should not be using the optimal coupling for training. [Suggestion:] test different interpolants while using the independent coupling.
3) The single-cell experiment.
- Here the setup is given K time points, while one time point is missing. However, we know that by construction, the probability density at this missing time point only depends on the two closest time points. [Suggestion:] Does training on all K-1 time points actually offer an improvement over training just over the nearest time points?
- Furthermore, there is the issue with the choice of coupling influencing the results. [Suggestion:] Testing different interpolants while using the independent coupling would be good to have here.
- When the optimal coupling is given, is there even a need to learn the flow? It seems this experiment can be solved by sampling pairs (x0, x1) from the coupling q and using the interpolant x_t. [Suggestion:] Does learning a flow actually provide any improvements on top of this baseline?
- I am a bit confused regarding the different dimensions (number of components used from PCA), since the initial motivation of this work is to learn the low-dimensional manifold. I feel both this and the high-dimensional image setting could be set up where the metric directly influences the dimension of the flow model, which would significantly help support the initial justification.
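The nearest-neighbor condition suggested under the LiDAR experiment (point 1 above) could be checked with a short script like the following (a sketch; the function and argument names are hypothetical):

```python
import numpy as np

def is_floating(x_t, data):
    """Flag samples x_t from the learned p_t that 'float' away from the dataset.

    Implements the check || x_t - nn(x_t) || < max_{i,j} || x_i - x_j ||:
    a sample passes if its nearest-neighbor distance to `data` is below the
    dataset diameter, and is flagged as floating otherwise.
    x_t: (m, d) array of samples; data: (n, d) array of LiDAR points.
    """
    # distance from each sample to its nearest neighbor in the data
    d_nn = np.min(np.linalg.norm(x_t[:, None, :] - data[None, :, :], axis=-1), axis=1)
    # dataset diameter: maximum pairwise distance within the data
    diam = np.max(np.linalg.norm(data[:, None, :] - data[None, :, :], axis=-1))
    return d_nn >= diam  # True where the condition is violated
```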
Technical Quality: 2
Clarity: 2
Questions for Authors: - How effective is the interpolant when no optimal coupling is used, especially for multimodal data distributions?
- Does learning a full flow matching model outperform just using the L2 optimal coupling and using the interpolant to create x_t samples?
- How does the framework work when you have multiple time snapshots? Is the metric time-dependent (by depending on different snapshots given time) or do you fit it to a single fixed-time snapshot?
- This part confuses me: to justify the metric, it is described that p_t "stays close" to a reference dataset D, however, the evaluation settings use K separate data distributions (at K different time values). How do you determine what D is?
- Given two disjoint distributions separated apart, does the metric still learn some non-trivial p_t between them (or does the metric degenerate to the Euclidean one when far away from data)?
- Can this framework be adapted to learn a low dimensional manifold (rather than just a metric)?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: There is a computational limitation when using larger training set sizes, as the majority of machine learning datasets are now in the order of millions and billions. It does not seem feasible to use a nonparametric method such as the metric proposed here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their detailed feedback and constructive comments, which gave us an opportunity to improve our work significantly. We are glad to hear that the main thrust of our work could be very “interesting” for higher dimensional ML applications. We kindly point the reviewer to our **general rebuttal** and 1 page pdf, which includes several new ablations and additional results. In this rebuttal, we address the main concerns of the reviewer: (i) Clarity on metric learning vs manifold learning; (ii) Novelty; (iii) Experiments using independent coupling.
In the comments that follow this rebuttal, we address each remaining point from the review.
### Metric learning vs Manifold learning
> _The paper introduces their framework by discussing the "manifold hypothesis": that high-dim data lies in a low-dim manifold. However, the choice of metric does not induce a lower dimensional manifold._
The reviewer makes a pertinent observation that the choice of metric in MFM does not induce a lower dimensional manifold. To clarify why this is **intended**: given our data points, we are interested in defining a notion of path between samples $x\_0$ and $x\_1$ that preserves the underlying geometry. There are two equivalent ways to proceed: (i) find explicit coordinate representations of the underlying manifold and build the path through these coordinates (*manifold learning*); (ii) define the path in the ambient space but change the geometry of the ambient space so that regions away from the samples (and hence the manifold) are highly penalized (*metric learning*). In practice both achieve the same result: paths that bend according to where the samples lie. By leveraging the metric learning approach (ii), we do not need to prescribe a specific lower dimension or find a coordinate representation.
To support our point further, we have followed the suggestion of reviewer mgNS and run MFM on the arch task when the samples belong to the 2D sphere in $\mathbb{R}^3$. We have found that MFM not only improves significantly over the Euclidean baseline CFM, but crucially that the samples generated by MFM at intermediate times are much closer to the underlying sphere than the Euclidean counterpart (see Table 4 in the 1 page pdf and our **general rebuttal**). We emphasize that we manage to attain this **without** explicitly parameterizing the lower-dimensional space but simply relying on the LAND metric. We hope this provides you with strong empirical evidence as to why the metric approach can help us design flows that stay close to a lower-dimensional manifold even when we do not know it.
### Novelty
> _The proposed algorithm is essentially the same as Wasserstein Lagrangian Flows and GSBM (…)_
We value the reviewer's opinion about the similarities between our proposed MFM and WLF and GSBM. We would like to gently push back against the reviewer's assertion that these methods take a further step than MFM. In particular, we disagree with the assertion that WLF and GSBM learn an OT coupling **induced by the choice of the metric**, since no Riemannian metric (of any kind) is proposed or studied in those works. As such, we argue WLF and GSBM cannot be considered generalizations of MFM despite their similarities.
We kindly refer to the **general rebuttal** where we have provided details on technical differences between MFM and WLF and GSBM.
> _One claim is that "MFM is simulation-free and stays relevant when geodesics lack closed form" when compared to Riemannian flow matching. (...)._
We appreciate the reviewer's comment regarding the simulation-free nature of the approximate geodesics learned in MFM. At present, there is no computational method that can find exact geodesics without simulation, and we certainly do not claim that MFM solves them exactly. Rather, we believe that the strength of MFM lies in the fact that the problem of trajectory inference benefits from approximate geodesics that can easily be plugged into a conditional flow matching framework in a computationally efficient manner—i.e. they are simulation-free. We further highlight that such approximate geodesics can be found thanks to our proposed objective Eq. (6), i.e.
$$
\mathcal{L}\_g(\eta) = \mathbb{E}\_{(x\_0,x\_1)\sim q,t} \left[(\dot{x}\_{t,\eta})^\top \mathbf{G}(x\_{t,\eta};\mathcal{D})\dot{x}\_{t,\eta}\right]
$$
where we minimize the Dirichlet energy of the path—whose optimum is in fact a geodesic. We point to the general rebuttal and to the 1 page pdf, where we showed how MFM manages to generate a flow whose intermediate samples are close to the lower-dimensional manifold by learning approximate geodesics of LAND through Eq. (6). We hope the reviewer may now agree that our empirical evidence enables us to claim both technical and empirical novelty of using learned (simulation-free) geodesics for trajectory inference.
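As a toy illustration of Eq. (6), a minimal numerical estimate of the Dirichlet energy of an interpolant could look as follows (a sketch; `interp` and `metric` are hypothetical stand-ins for the learned interpolant $x\_{t,\eta}$ and the metric $\mathbf{G}(\cdot;\mathcal{D})$, with the time derivative approximated by finite differences):

```python
import numpy as np

def dirichlet_energy(interp, metric, x0, x1, n_t=100, h=1e-4):
    """Estimate E_t[ (dx_t/dt)^T G(x_t) (dx_t/dt) ] on a uniform time grid.

    interp(x0, x1, t) -> path point x_t of shape (d,);
    metric(x) -> (d, d) matrix G(x).
    The velocity dx_t/dt is approximated by central finite differences.
    """
    energy = 0.0
    ts = np.linspace(h, 1.0 - h, n_t)  # stay inside (0, 1) for central differences
    for t in ts:
        x_t = interp(x0, x1, t)
        v_t = (interp(x0, x1, t + h) - interp(x0, x1, t - h)) / (2.0 * h)
        energy += v_t @ metric(x_t) @ v_t
    return energy / n_t
```

For the straight line $x\_t = x\_0 + t(x\_1 - x\_0)$ under the Euclidean metric $G = I$, this recovers $\|x\_1 - x\_0\|^2$, the minimal energy; a data-dependent $G$ instead rewards paths that bend toward the data.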
### Experiments using independent coupling
> _(...) The choice of interpolant is not independently studied with an independent coupling._
We thank the reviewer for this suggestion. In the **general rebuttal** we have provided additional experiments comparing CFM with independent coupling, i.e. I-CFM, and MFM with independent coupling, i.e. I-MFM, on the arch task and the single cell RNA datasets (in both 5D and 100D). We briefly summarize here the main takeaways:
- I-MFM generally surpasses I-CFM.
- We further stress that, differently from CFM, MFM also uses the coupling q in the first stage where it optimizes paths based on the metric, which justifies why using Optimal Transport for the coupling is even more beneficial for MFM than for CFM.
We hope that in light of the new experiments, the reviewer can see that the benefits of MFM are not just due to the choice of the coupling, since I-MFM and OT-MFM both surpass their respective Euclidean counterparts I-CFM and OT-CFM.
---
Rebuttal 2:
Title: Rebuttal (2/3)
Comment: In these comments we now address any standing question/concern.
> _For some of the cellular experiments, the manifold seems to actually be found by PCA_
In single-cell data, current best practices suggest performing non-linear dimensionality reduction on the top 10-100 principal components. We follow this standard approach, learning a non-linear metric on the top 5/100 PCs; please see [2] and [3] below.
> _The paper uses very vague terminology. For instance, there is repeated mention of a "more meaningful matching"_
We clarify that the term matching (introduced in Score Matching and Flow Matching [Song et al. 2019, Lipman et al. 2023]) refers to approximating a target distribution $p_1$ with a learned one $p_{\theta}$ via a regression objective over vector fields—i.e. $|| v\_{\theta}(x\_t, t) - u\_t(x\_t | x\_0, x\_1)||\_2$. For trajectory inference tasks, “more meaningful matching” refers to learning a matching that better respects the task description, meaning that the reconstructed trajectories replicate those generated by the underlying physical system.
We hope that our answer here addresses the reviewer's comment and we will update the main text to include these clarifications.
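Concretely, one Monte Carlo term of this regression objective can be written as follows (a sketch under the standard straight-line interpolant; `v_theta` is a hypothetical stand-in for the learned vector field):

```python
import numpy as np

def fm_loss_term(v_theta, x0, x1, t):
    """One sample of the flow matching loss || v_theta(x_t, t) - u_t ||^2.

    For the straight-line interpolant x_t = (1 - t) x0 + t x1, the conditional
    target velocity is u_t(x_t | x0, x1) = x1 - x0.
    """
    x_t = (1.0 - t) * x0 + t * x1
    u_t = x1 - x0
    diff = v_theta(x_t, t) - u_t
    return float(np.sum(diff ** 2))
```

In training, this term is averaged over t ~ U[0, 1] and pairs (x0, x1) drawn from the chosen coupling q.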
**1. Lidar scan**
> _It is unclear what the aim of this experiment is (...)_
We thank the reviewer for their thoughtful suggestions. The primary purpose of our Lidar experiments is to simply contrast MFM to the Euclidean baseline (OT-CFM) and visually illustrate how the metric can affect the learned paths. While using more task-specific potentials—e.g. that account for the height information—as done in GSBM, is possible within our framework, we believe that this goes beyond the goal of this experiment which serves to highlight the benefit of incorporating data geometry **without** providing additional details from the downstream task.
To address the reviewer's concern about the faithfulness of $p\_t$ to the LiDAR data, in our **general rebuttal** and additional 1 page pdf, we also provide more views of the learned paths from OT-CFM and OT-MFM. The new visualizations clearly indicate that MFM paths 1.) do not intersect the manifold and 2.) bend closely around the manifold, which highlights the geometric inductive bias.
**2. The AFHQ experiment**
> _Here the FID values are extremely high (...)_
We acknowledge the reviewer's healthy skepticism regarding our reported FID values. We wish to highlight that other papers, e.g. [4], have empirically reported FID values for interpolation of cats→dogs to be around 70. As such we believe that our values are acceptable.
> _Qualitatively looking at the samples, it doesn't seem clear which method has the better samples at t=1/2.(...)_
We agree with the reviewer’s observation. Note however that in this application the goal is not so much about having better samples at intermediate times, but about having samples at $t=1$ that are more similar to those at $t=0$ as measured by the LPIPS metric, which we have reported in Table 2. In general though, the main goal of this experiment was to showcase how MFM can be used along with or instead of CFM, even for settings outside the trajectory inference task, which is what it was mainly designed for.
> _Again, here I feel the role of the interpolant is heavily diminished when compared to the role of the optimal coupling.(...)_
We thank the reviewer for their suggestions. As noted above and detailed in the **general rebuttal**, we have conducted experiments comparing CFM and MFM using independent coupling for both the Arch task and single cell RNA sequencing to validate that I-MFM surpasses I-CFM.
**3. The single-cell experiment**
> _Here the setup is given K time points, while one time point is missing. However, we know that by construction, the probability density at this missing time point only depends on the two closest time points(...)_
This is an interesting question. In our framework, the metric $\mathbf{G}$ is constructed using two consecutive dataset marginals—based on a dataset $D\_{i,j}$, which is a concatenation of samples from these marginals (excluding $p\_{\text{out}}$). For instance, considering the scRNA dataset with densities $p\_0$, $p\_1$, $p\_2$, and $p\_3$, and excluding $p\_1$ as the left-out density, we use separate metrics $\mathbf{G}\_{0,2}$ and $\mathbf{G}\_{2,3}$ for the pairs $\{p\_0, p\_2\}$ and $\{p\_2, p\_3\}$. In this regard, the definition of the metric already follows the procedure you suggested.
In contrast, a *single* time-dependent interpolant network $\varphi_{t,\eta}$ and a *single* vector field network $v\_{t, \theta}$ are used for all time-steps ($t\_0$, $t\_2$ and $t\_3$) to ensure continuity of trajectories across different times. This also ensures that we follow standard procedures adopted by the baselines reported in Tables 3 and 4.
> _Issue with the choice of coupling (...)_
We have addressed this point above and in the **general rebuttal**. Once again, we highlight that I-MFM is better than I-CFM.
---
Rebuttal 3:
Title: Rebuttal (3/3)
Comment: > _When the optimal coupling is given, is there even a need to learn the flow?_
That's an interesting suggestion! Unfortunately, single cell RNA is known to have a destructive generative process which means that different time marginals can contain varying numbers of *unpaired* observations. Operationally, this means there is no 1-1 correspondence between populations at different times. Consequently, evolving the particles along a path $x\_t$ may be an ill-posed problem and instead, we must assess the probability path $p\_t$. We achieve that by taking the pushforward of $p_0$ using the flow generated by the vector field $v_\theta$ learned using flow matching. We understand how this technical point may not have been clear given the complex nature of single cell data and now hope the reviewer agrees that their suggestion, while interesting, cannot be employed in this particular experiment.
> _I am a bit confused regarding the different dimensions (number of components used from PCA), since the initial motivation of this work is to learn the low-dimensional manifold. (...)_
We appreciate the reviewer's concern. We politely point out that trajectory inference using PCA of single-cell data is not an innovation of our work but is a large subfield in its own right (see [2],[3]). Practitioners often resort to PCA as actual single cell data is both very high dimensional and noisy which provides a computationally intractable domain for learning trajectories prior to simulation free generative modeling. In this context we argue our reported benchmarks are standard in the literature. We also point again to our new experiments on the sphere (see **general rebuttal**), highlighting the ability of MFM to learn trajectories that stay closer to an unknown, underlying manifold.
### Questions
> _How effective is the interpolant when no optimal coupling is used, especially for multimodal data distributions?_
Please see our response/additional experiments on using independent coupling.
> _Does learning a full flow matching model outperform just using the L2 optimal coupling and using the interpolant to create x_t samples?_
Please see our response above on why using just the interpolants for single cell RNA can be reductive due to the lack of a 1-1 correspondence between samples and the need to evaluate the time-evolution of the **population density**.
> _How does the framework work when you have multiple time snapshots? Is the metric time-dependent (by depending on different snapshots given time) or do you fit it to a single fixed-time snapshot?_
Please see the reply to the first point of single cell experiment setup.
> _This part confuses me: to justify the metric, it is described that $p\_t$ "stays close" to a reference dataset D, however, the evaluation settings use K separate data distributions (at K different time values). How do you determine what D is?_
Please see the reply to the first point of single cell experiment setup.
> _Given two disjoint distributions separated apart, does the metric still learn some non-trivial p_t between them (or does the metric degenerate to the Euclidean one when far away from data)?_
This is an interesting question, and we believe that in the setting you described, if the supports of the marginal distributions are fully separated, then there is no meaningful signal for the nonlinear correction $\varphi\_{t,\eta}$ in Eq. (4), and hence the path should remain linear, perhaps after adding a penalization on the norm of the weights of the MLP $\varphi\_{t,\eta}$.
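To illustrate this behavior, a toy diagonal RBF-style data-dependent metric can be sketched as follows (an assumption-laden simplification, not the paper's exact LAND/RBF construction): near the data the kernel sum is large and motion is cheap, while far from all data points the metric saturates at a constant multiple of the identity, so geodesics there reduce to straight lines.

```python
import numpy as np

def rbf_metric_diag(x, data, sigma=1.0, eps=1e-2):
    """Toy diagonal data-dependent metric G(x) = I / (sum_i k(x, x_i) + eps).

    Near the data the Gaussian kernel sum is O(1), so metric entries are small;
    far from all data points the sum vanishes and entries saturate at 1/eps,
    heavily penalizing off-manifold motion (but constant, i.e. Euclidean up
    to scale, once all data is far away).
    """
    k = np.sum(np.exp(-np.sum((data - x) ** 2, axis=1) / (2.0 * sigma ** 2)))
    return np.eye(len(x)) / (k + eps)
```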
> _Can this framework be adapted to learn a low dimensional manifold (rather than just a metric)?_
Please see our response above on why MFM is not designed to learn a low dimensional manifold and why this is not an issue in practice (we point again to the new experiments of MFM on the sphere).
### Final comment
We thank the reviewer again for their valuable feedback and great questions that enabled us to include new results that have strengthened our paper. We hope that our rebuttal addresses their questions and concerns—particularly in regard to how MFM does not need to prescribe a lower-dimensional space, as shown by the new experiments on the sphere, and the comparison with CFM using independent couplings. We kindly ask the reviewer to consider upgrading their score if the reviewer is satisfied with our responses. We are also more than happy to answer any further questions that arise.
### References
[1]: “A geometric take on metric learning”, Hauberg et al., Advances in Neural Information Processing Systems, 2012.
[2]: “Best practices for single-cell analysis across modalities.”, Heumos, L., Schaar, A.C., Lance, C. et al. Nat Rev Genet (2023). https://doi.org/10.1038/s41576-023-00586-w
[3]: https://www.sc-best-practices.org/preprocessing_visualization/dimensionality_reduction.html
[4]: “Contrastive Learning for Unpaired Image-to-Image Translation” Park et al., Computer Vision-ECCV (2020)
---
Rebuttal Comment 3.1:
Title: Kindly awaiting more feedback
Comment: We thank the reviewer again for their time and feedback that allowed us to strengthen the paper with new experiments and clarifications during this important rebuttal period. As the end of the rebuttal period is fast approaching, we were wondering if our answers in the rebuttal were sufficient to address the important concerns raised regarding 1.) clarity of our claims, 2.) the technical novelty that distinguishes our proposed approach MFM from Wasserstein Lagrangian Flows/GSBM, and 3.) the experimental evaluation. We highlight that our global response includes new ablations on the use of couplings following the great suggestions by the reviewer.
We would be happy to engage in any further discussion that the reviewer finds pertinent, please let us know! Finally, we are very appreciative of your time and effort in this rebuttal period and hope our answers are detailed enough for the reviewer to consider a fresh evaluation of our work, with a potential score upgrade if merited.
---
Rebuttal 4:
Comment: Unfortunately, the authors did not seem to understand my concerns about clarity and the paper's claims. I want to clarify that the only concern that was addressed by the rebuttal was that the choice of metric has advantages regardless of the choice of coupling used during training.
**Metric:**
"to the best of our knowledge, existing works do not use parametric metrics" --> To be clear, by parametric (in contrast to non-parametric) I simply meant that the metrics do not depend on a large data set. But I believe here the authors are using parametric as a synonym for learned; however, even then, **this claim is certainly false**. There have been many works on learning parametric metrics over data sets / point clouds [1,2], and many works on using latent spaces for transport problems where the metric is induced by the autoencoder, such as the DSBM and GSBM methods [3,4]. **Relatedly, it may be worth noting that the authors did not address my concern on reproducing FID: DSBM reports 14.16 and GSBM reports 12.39 FID**. Both are significantly better than the values reported by this paper (37-41), and this is especially concerning because existing works have open source implementations available. Instead, it seems the authors have deliberately chosen not to compare against *the different choices of potentials* in these works.
"in favor of potentials (that are generally not defined)" --> I am not sure what is meant by not defined here, as I understand these prior works work for any potential. **Since the potentials can be arbitrarily defined, it fully includes the Lagrangian point of view of Riemannian geodesics.** As mentioned in Appendix C.1, one can always convert from the cost defined using a Riemannian metric to the regular Euclidean metric plus a potential.
"do not adopt a Riemannian formalism to assess the velocity of the path" --> I believe the authors are describing how to extract a Riemannian manifold from a data set. There is significant work in this area (see the literature on constructing Riemannian metrics from point clouds e.g. [1, 2]), which is often used for LiDAR. Actually, it is worth highlighting that **my concern regarding the LiDAR experiment has also not been addressed**: compared to the GSBM experimental result, which shows two modes of trajectories [4; Figure 4], the results from this paper only show one mode: why is this? For me, I am not convinced that this LAND metric is as good as the choices of potential compared to existing works.
**Cost and Complexity:** Here is a simple question: Given the LAND or RBF metric definition, do you have access to all geodesics in closed form? If so, then I agree the method would be simulation-free and very computationally efficient; however, from my understanding, **the answer is no. And the proposed workaround is to actually fit a parametric approximation to the geodesics.** This, as the authors also agree, has significant computational overhead and can roughly double the cost relative to CFM due to this extra step of needing to approximate geodesics with neural networks.
**Manifold hypothesis:** Taking wikipedia as a proper "definition", the manifold hypothesis, if it were to be true, would imply at least two things:
1) "Machine learning models only have to fit relatively simple, low-dimensional, highly structured subspaces within their potential input space (latent manifolds)."
This, as also reflected by the rebuttal, does not hold for the proposed method. The proposed method works in the original data space.
2) "Within one of these manifolds, it’s always possible to interpolate between two inputs, that is to say, morph one into another via a continuous path along which all points fall on the manifold."
As noted in the review, and also agreed upon by the rebuttal response, the intermediate samples (see Figure 3 for images, and the rebuttal pdf for sphere and LiDAR) certainly do not seem to lie on the data manifold. *This is a subtle point that I raised which I believe the authors have not understood.* **If one were to actually define a manifold (along with a tangent plane and a corresponding metric on that plane), then interpolations will never leave the manifold, no matter how poorly the model/distribution is fit.** This is the key idea behind the literature on constructing Riemannian manifolds as described above, where we can fully construct a proper manifold and not just a metric in the ambient space. However, the proposed approach does not learn a subspace and due to this, we see that the points are some distance away from the manifold (shown for both sphere and LiDAR visualizations in the rebuttal pdf).
[1] "Neural FIM for learning Fisher information metrics from point cloud data" Fasina et al. 2023.
[2] "Approximating the Riemannian Metric from Point Clouds via Manifold Moving Least Squares" Sober et al 2020.
[3] "Diffusion Schrödinger Bridge Matching" Shi et al 2023.
[4] "Generalized Schrödinger Bridge Matching" Liu et al 2023.
---
Rebuttal Comment 4.1:
Title: Thank you for the additional comments
Comment: We thank the reviewer for taking the time to continue engaging with us. We believe certain aspects of our response may have added confusion rather than clarity which we apologize for. We now attempt to answer the points raised by the reviewer’s latest response.
> "to the best of our knowledge, existing works do not use parametric metrics" … But I believe here the authors are using parametric as a synonym for learned;
Yes, “parametric” here refers to a metric that is learned via gradient based optimization. In contrast, we use non-parametric to refer to a metric that does not require learning via gradient based optimization.
>however, even then, this claim is certainly false. There has been many works on learning parametric metrics (...) and many works on using latent spaces for transport problems where the metric is induced by the autoencoder
We also agree with the reviewer that our work is not the first to learn a data-dependent metric. Our principal claim, which we clarify, is that MFM is the first method that employs a data-dependent (parametric or non-parametric) Riemannian metric within the context of flow matching. While it is certainly true that DSBM and GSBM have experiments that use the latent space of an auto-encoder, they do not invoke the Riemannian formalism, which would have required the use of the metric to define distances, vector fields, and interpolants. As a result, we view the path-biasing techniques of GSBM as complementary to the approach presented in this paper. Indeed, there are similarities between the goals of our work and GSBM—which will be included in a larger discussion in the main paper—but crucially the key difference lies in the objective and mechanism used to bias paths. We understand how these techniques may not have been initially clear, but we will use this response as a guide to further increase the clarity and transparency of our novelty claims and the differences between methods.
> the authors did not address my concern on reproducing FID: DSBM reports 14.16 and GSBM reports 12.39 FID.
Please note we did attempt to answer this in our original rebuttal, but we now include additional detail that we believe strengthens the original response. We first highlight the 3 main reasons why our experimental setup differs from GSBM, which lead to discrepancies between the FID numbers of GSBM and MFM:
- The chief reason is that in our experiment we use the latent space of a pre-trained VAE from StableDiffusionV1, as opposed to the shallower VAE used in GSBM. As a result, GSBM and MFM in our current draft perform generative modeling in two very different latent spaces.
- The GSBM codebase resizes each input image to 64 x 64 while we resized it to 128 x 128 to use our pretrained autoencoder.
- GSBM and DSBM operate directly on the ambient space and do not actually generate via the decoder of the VAE. In MFM, as we heavily use the Riemannian metric for latent flow matching, we generate using the decoder of the VAE.
We also took this time to investigate the reviewer's great suggestion and attempted to compute an FID using the *saved checkpoints of the UNET and VAE from GSBM* with their provided sampling notebook. Unfortunately, we were unable to reproduce the FID of 14.16 and instead obtained an FID of 29.5, which is higher than their reported value and closer to the one we report.
> Instead, it seems the authors have deliberately chosen not to compare against the different choices of potentials in these works.
The purpose of the image translation experiments was to compare CFM with MFM. In particular, we wanted to show how the **same** metric, i.e. RBF, can be fit to different applications and does not require adjustments specific to the task. We hope the reviewer can see the merit in that. Regarding potential comparisons with GSBM, for example, the setups are actually entirely different, so simply comparing reported FID would not be accurate (see our previous point for more details).
> "in favor of potentials (that are generally not defined)" --> I am not sure what is meant by not defined here
We apologize for the vague terminology. By “not defined”, we mean as in the case of GSBM that the potential has to be defined by the modeler. In our setting instead, we wanted to propose a choice of metrics e.g. LAND and RBF that could be readily applied to different downstream tasks, without explicit encoding of the task outside of using data samples.
> Actually, it is worth highlighting that my concern regarding the LiDAR experiment has also not been addressed (...)
We appreciate the reviewer's comments. We note that LAND in itself is not a contribution of this work, but rather its use in flow matching is. The main purpose of the LiDAR experiment is to visually demonstrate the impact of using a data dependent metric (in contrast to CFM) on the trajectory in flow matching. We argue that the current experiment adequately serves this goal and does not gain additional benefit from comparing to GSBM.
---
Reply to Comment 4.1.1:
Title: Thank you for additional comments (part 2)
Comment: > Cost and Complexity: Here is a simple question: Given the LAND or RBF metric definition, do you have access to all geodesics in closed form? If so, then I agree the method would be simulation-free and very computationally efficient;
We politely disagree with the reviewer. Simulation-free training means that we are able to jump to any time $t$ of the interpolant's vector field $u\_t (x|z)$ without simulating the path up to $<t$. **An interpolant does not need to be a geodesic for simulation-free training to hold**. Our work never sought to claim that MFM is able to extract exact geodesics in a simulation-free manner, but rather that learning the interpolants for estimating the metric-induced velocity does not require solving a differential equation (and/or backpropagating through it) and requires only pointwise evaluations.
> The proposed workaround is to actually fit a parametric approximation to the geodesics …
Certainly, if our network successfully finds the global optimum by minimising the Dirichlet energy, then we do recover a geodesic of the data manifold. The step of approximating geodesics instead of simulating the Euler-Lagrange equations is a key contribution of our work: it shows that one can leverage the benefits of metrics via simulation-free training, leading to measurable benefits for trajectory inference tasks.
We disagree with the reviewer’s assessment that this cost is expensive, as it can be thought of as a one-time pre-processing step, which is notably no more expensive than the original trajectory inference problem itself. We argue that the performance gains of MFM in domains such as single-cell justify this pre-processing overhead.
### Regarding the manifold hypothesis
> The proposed method works in the original data space.
In general, the reviewer is correct in identifying that if we care about *explicitly* parameterizing a lower-dimensional space, then one needs coordinate maps. For the sphere, the fact that the points do not lie *exactly* on the surface is expected, considering that we assume we do not know the underlying manifold. In practice, for the application of trajectory inference one does not need the explicit manifold, but simply cares about reconstructing trajectories whose intermediate points are closer to the underlying manifold. Empirically, we have shown the impact of our metric with the single-cell experiments, where it is even hard to define what the ground-truth manifold is, and adopting the metric approach is favorable to finding an explicit parameterization of the lower-dimensional manifold, since it only requires the notion of a norm in the ambient space.
We emphasize that “just a metric in the ambient space” can actually induce a non-trivial Riemann tensor and hence a “proper” Riemannian manifold. More generally, the idea of learning a distance in the ambient space rather than parameterising a lower dimensional space **is not a new approach and we are certainly not the first ones arguing for it**, but plenty of previous works (referenced in our manuscript) have adopted this angle (referred to as **metric learning**). As such, we believe our work is using this perspective in the context of generative modeling and has been substantially motivated by previous literature. | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful feedback and constructive questions that have helped us improve the submission significantly. We are glad to see that reviewers found our work “well-written, with great attention to detail” (R q2sv), that our proposed framework is “elegant” (R mgNS), “naturally connects the recently introduced flow matching method with a data-induced Riemannian metric” (R CNRE), and “could be very interesting for higher dimensional ML applications” (R hBAX). In this general response we address two points: (1) an overview of the new experiments we ran to address shared questions raised in the reviews; (2) the technical novelty and contributions of this work.
We discuss the additional experiments we conducted during the rebuttal period. *We will refer to the attached 1 page document*.
### Additional experiments & Ablations
**Independent coupling on arch and single cell (R hBAX, R CNRE)**
To assess the impact of metric learning in MFM even without using Optimal Transport for the coupling $q$, we tested MFM with independent coupling on the Arch task and the single cell datasets (on both 5D and 100D). The results are reported in **Tables 1,2,3** in the attached pdf, and highlight two key takeaway messages:
- I-MFM generally surpasses I-CFM
- Differently from CFM, MFM also uses the coupling $q$ in the first stage, where it optimizes paths based on the metric; this justifies why using Optimal Transport for the coupling is even more beneficial for MFM than for CFM.
We hope that in light of these experiments, reviewers can see that the benefits of MFM are not just due to the choice of the coupling, since I-MFM and OT-MFM both surpass their respective Euclidean counterparts I-CFM and OT-CFM.
**Arch experiment on an explicit manifold (R hBAX, R mgNS, R q2sv)**
To assess the ability of MFM to learn trajectories that stay close to an unknown, underlying manifold, we followed the great suggestion of R mgNS and ran MFM on the arch task defined on a specific lower-dimensional space, i.e. a 2D sphere embedded in $\mathbb{R}^3$. We see that MFM not only improves significantly over the Euclidean baseline CFM (see Table 5 in the attached pdf), but crucially that the samples generated by MFM at intermediate times are much closer to the underlying sphere than the Euclidean counterpart (see Table 4 and Figures 1,2,3 in the 1-page pdf). We emphasize that we managed to attain this **without** explicitly parameterizing the lower-dimensional space, simply relying on the LAND metric. We hope this provides the reviewers with stronger empirical evidence of why the metric approach can help us design flows that stay close to a lower-dimensional manifold even when we do not know it.
**LIDAR visualizations (R hBAX, R q2sv)**
To address the reviewers' concerns about the faithfulness of $p_t$ to the LiDAR data, in our additional 1-page pdf we have provided more views of the learned paths from OT-CFM and OT-MFM (see Figure 4). The new visualizations clearly indicate that MFM paths (1) do not intersect the manifold and (2) bend closely around the manifold, which highlights the geometric inductive bias.
### Technical novelty and contributions (R hBAX, R mgNS)
In light of some concerns about the technical contributions of our submission, we reiterate and clarify what we believe to be the key novelties of our work.
**Connecting metric learning to flow matching**
To the best of our knowledge, this is the first work that links metric learning to recent state-of-the-art generative frameworks such as flow matching and emphasizes the role played by the data geometry (*not* the ambient geometry) for generative modeling.
**Learning interpolants that approximate geodesics**
The key technical contribution of our work consists in learning stochastic interpolants that approximate geodesics of a data-dependent metric in a simulation-free manner. At present, there is no computational method that can find exact geodesics without simulations. Instead, MFM proposes to find approximate geodesics by minimizing the **metric Dirichlet energy of the path** in Eq. 6, whose minimizer is the geodesic, i.e.
$$
\mathcal{L}\_g(\eta) = \mathbb{E}\_{(x\_0,x\_1)\sim q,t} \left[(\dot{x}\_{t,\eta})^\top \mathbf{G}(x\_{t,\eta};\mathcal{D})\dot{x}\_{t,\eta}\right].
$$
To our knowledge, this is a novel objective and, importantly, can be fully decoupled from the matching objective used to learn the vector field generating the flow. Finally, our framework is general and is not bound to use a specific metric.
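To illustrate why this objective is simulation-free, here is a minimal NumPy sketch (not the authors' implementation; `eta_fn` and `metric_fn` are illustrative names, and velocities are taken by central finite differences rather than autodiff) of a Monte-Carlo estimate of the metric Dirichlet energy of a parametric interpolant, using only pointwise evaluations:

```python
import numpy as np

def dirichlet_energy(x0, x1, eta_fn, metric_fn, ts, eps=1e-4):
    """Monte-Carlo estimate of E_t[ xdot^T G(x_t) xdot ] along the
    interpolant x_t = eta_fn(x0, x1, t); no ODE solving is required."""
    total = 0.0
    for t in ts:
        x = eta_fn(x0, x1, t)
        # central-difference velocity, a pointwise evaluation of the path
        xdot = (eta_fn(x0, x1, t + eps) - eta_fn(x0, x1, t - eps)) / (2 * eps)
        total += xdot @ metric_fn(x) @ xdot
    return total / len(ts)
```

With the identity metric and the linear interpolant $x_t = x_0 + t(x_1 - x_0)$, the estimate reduces to $\|x_1 - x_0\|^2$, i.e. the Euclidean (CFM) case; a learned metric $\mathbf{G}$ instead penalizes paths that leave the data support.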
**Comparisons with GSBM/WLF**
In light of reviewer hBAX’s comments about similarities between MFM and WLF/GSBM—which we discuss in Section 4.2 of our paper—we comment further on **technical differences** between these frameworks. Since GSBM is an explicit matching counterpart of WLF, we focus on comparing MFM to WLF.
- (i) In contrast to WLF or GSBM, our method, MFM, separates the optimization of the paths from the matching objective. We argue this is beneficial since it avoids introducing additional challenges when learning the vector field $v\_\theta$.
- (ii) None of the Lagrangians considered in WLF account for the data distribution and/or the geometry induced by it. As such, the choice of Lagrangian always needs to be given by the _user based on overall considerations, e.g. unbalanced OT, or one needs to define a potential V_—that only depends on positions and not velocities, as discussed in 4.2. In MFM instead, we explicitly account for the empirical samples to learn the paths. If we adopt the same procedure for optimization (i.e. 2-stage learning, as clarified in (i)), then our approach can be interpreted as learning paths by minimizing a Lagrangian whose potential depends on both positions and velocities and is further dependent on the whole data distribution through the metric.
Pdf: /pdf/c3903bf7547b0786efa80c06c087e92de7807572.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Non-Stationary Learning of Neural Networks with Automatic Soft Parameter Reset | Accept (poster) | Summary: The paper focuses on non-stationary learning of neural networks. The paper proposes a method that automatically adjusts a parameter that influences the stochastic gradient descent (SGD) update to account for non-stationarity.
Strengths: The problem tackled is relevant and the related work is clearly discussed. The method is well-interpreted and relevant for the community. The paper is also very well presented.
Weaknesses: In my view, the paper has some weaknesses, especially regarding the experimental validation:
- The experimental validation does not clearly validate the benefits of the method, possibly due to lack of clarity. In Figure 3, it is not clear to me what method this paper proposes. I believe the paper should propose a method and validate it, instead of comparing all its possible variants. Nevertheless, I can see that most (if not all) of these variants outperform the chosen baselines. However, only two of the baselines referred to in the related work were chosen. The choice of benchmark problems is also not motivated, in my view.
- The reinforcement learning part is not clear to me. Is there an explanation for why, on-policy, the method is outperformed by the simple baseline? Is not on-policy the most non-stationary setting? Why? And why does the proposed method stop earlier than the competitors in the top-right figure?
- It is not clear to me what the authors intend to show with Figure 2, and the perfect soft-resets regime. It would be nice to have more explanation here, and possibly move this validation to the end of the experimental section, as appearing before the main validation of the method is confusing.
Finally, even though the method is well motivated, it is not theoretically analyzed. We have no theoretical proof that the method will outperform traditional SGD in terms of efficiency, preventing plasticity loss, catastrophic forgetting, or any other dimension.
Minor: in page 6, Equation (15), what is the parameter $\tilde{\lambda}$? Was it introduced before?
Technical Quality: 2
Clarity: 3
Questions for Authors: I have the following questions:
- Could the authors make some clarifications with respect to the experimental validation? Specifically, on the choice of baselines, problems, which method or methods the authors in fact propose to the community.
- Can the authors also clarify my concern regarding the reinforcement learning setting and the performance in on-policy settings?
- Can the authors clarify what is the intended take-away from the analysis in Figure 2?
- Can the authors clarify the Monte-Carlo samples in Equation (7)? Specifically, what is sampled, for instance in the reinforcement learning setting, and if the sampling requires access to a free simulator of the environment.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer DfC5 for their feedback. Please find our response below.
> Figure 3, hard to read
Thanks to your feedback, we will modify the way we present the results, using the extended page limit for the camera-ready version. First, we will present **Soft Reset** method and compare it to baselines. Second, we will present other variants of the method and compare it to the main method.
> baselines and benchmarks motivation
The benchmarks are motivated by the plasticity loss literature [1-4]. In this setting, neural networks lose the ability to learn (lose plasticity) when exposed to certain forms of non-stationarity. The random-label MNIST and CIFAR10 benchmarks are the most common ones. Moreover, in this setting, resetting is highly beneficial (see [1]).
We used 3 baselines: Shrink&Perturb [5], L2 Init [1] and Hard Resets. Shrink&Perturb [5] is used to prevent plasticity loss by perturbing parameters according to $\theta_{t+1}=\lambda\theta_t+\sigma \epsilon$ where $\epsilon\sim\mathcal{N}(0;I)$. The L2 Init approach adds $||\theta - \theta_0||^2$ to the loss and was shown to outperform many other approaches in preventing plasticity loss. Both baselines are highly related to our drift model and are discussed in Related Work. We hope this clarifies the motivations.
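For concreteness, both baseline mechanisms can be sketched in a few lines (the hyperparameter values below are illustrative, not those used in our experiments):

```python
import numpy as np

def shrink_and_perturb(theta, lam=0.8, sigma=0.01, rng=None):
    """Shrink&Perturb: shrink parameters toward zero and add Gaussian noise,
    theta_{t+1} = lam * theta_t + sigma * eps, eps ~ N(0, I)."""
    rng = rng or np.random.default_rng()
    return lam * theta + sigma * rng.standard_normal(theta.shape)

def l2_init_penalty(theta, theta0, reg=1e-2):
    """L2 Init: regularizer added to the loss, pulling parameters back
    toward their initialization theta0."""
    return reg * float(np.sum((theta - theta0) ** 2))
```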
> RL is not clear...
Please see our response to Reviewer Hmj6.
> Why in on-policy, simple method performs better than ours, given that it is more non-stationary setting?
In RL, non-stationarity arises from the changing input data distribution (since we change the policy) as well as from changing learning targets (when learning value functions). In off-policy RL, we expect a high impact of the changing-target non-stationarity. It was observed [4,6-7] that in this setting we can exhibit loss of plasticity. The effect becomes more pronounced as we increase the replay ratio (the number of updates per collected data point). Under a high replay ratio, since we perform many updates on the replay buffer, we can overfit to it, and when we start collecting new data the local minima of the NN can switch drastically. This is the regime that motivated our method (see Figure 1 and the beginning of Section 3). In this setting, hard resets [4] are effective, but our method proves to be even more effective.
When we operate in the on-policy setting, the situation described above is less likely to happen. If the data distribution changes quickly, then we are less likely to overfit to it due to the noise in the update. Using the language of Figure 1, the uncertainty of the Bayesian posterior will not shrink if the data changes quickly. This, however, depends on the environment. The Hopper environment (Figure 4, bottom) is a relatively simple task to solve, meaning that the agent will progress fast and the corresponding input data distribution will change fast. The Humanoid environment (Figure 4, top) is much more challenging, and a simple agent might struggle initially. This implies that even an on-policy algorithm can see a lot of similar data and start to overfit to it, putting it closer to the off-policy setting. This explains why our method is more effective in the Humanoid rather than the Hopper environment when the replay ratio is 1.
> why stops earlier?
The experiment was not fully finished at the time of submission. See attached pdf for final version of the Figure.
> Figure 2 ?
In Figure 2, we demonstrate that the drift model in eq.4, coupled with the update rule in eq.16, is a good strategy for resetting parameters when task boundaries are known. For appropriately chosen $\gamma \neq 0$, we can achieve significantly better performance than a hard reset. In Figure 2, left, we use a constant learning rate $\alpha_t$, whereas in Figure 2, right, we use $\alpha_t(\gamma_t)$ from eq.17, which is more beneficial. We will add more clarification in the text about the nature of this experiment.
> Theoretical analysis
Unfortunately, the phenomenon of plasticity loss in neural networks has not been theoretically analyzed (there is no model for it); it is an empirical phenomenon [1-4]. We would require such a theoretical framework for plasticity loss to be developed in order to analyze our method. Our future plans involve analyzing the method in the context of online convex optimization.
> MC samples in eq.7?
The MC samples do not come from a simulator but from the Gaussian distribution in eq.5 induced by the drift model in eq.4. MC samples are needed to approximate the integral in eq.7, and they correspond to NN parameters.
Overall, we hope our answer clarifies the points which you raised during your review. If we have addressed your questions, we would be grateful if you would consider increasing your score. We would be happy to answer any further questions you might have.
**References**:
[1] Maintaining Plasticity in Continual Learning via Regenerative Regularization, Saurabh Kumar, Henrik Marklund, Benjamin Van Roy, 2023
[2] Understanding plasticity in neural networks, Clare Lyle, Zeyu Zheng, Evgenii Nikishin, Bernardo Avila Pires, Razvan Pascanu, Will Dabney, 2023
[3] Disentangling the Causes of Plasticity Loss in Neural Networks, Clare Lyle, Zeyu Zheng, Khimya Khetarpal, Hado van Hasselt, Razvan Pascanu, James Martens, Will Dabney, 2024
[4] Understanding and Preventing Capacity Loss in Reinforcement Learning, Clare Lyle, Mark Rowland, Will Dabney, 2022
[5] On Warm-Starting Neural Network Training, Jordan T. Ash, Ryan P. Adams, 2020
[6] The Primacy Bias in Deep Reinforcement Learning, Evgenii Nikishin, Max Schwarzer, Pierluca D'Oro, Pierre-Luc Bacon, Aaron Courville, 2022
[7] The Dormant Neuron Phenomenon in Deep Reinforcement Learning, Ghada Sokar, Rishabh Agarwal, Pablo Samuel Castro, Utku Evci, 2023
---
Rebuttal Comment 1.1:
Title: Response to authors' rebuttal
Comment: I thank the authors for the response.
I had some points clarified. However, I am inclined to maintain my score, since the experimental validation is somehow confusing. Regarding the RL results, I can not see clear benefits from the approach. Regarding Figure 3, the authors replied that their method is Soft Reset, but I see it being outperformed by several of its variants. Then why do the authors choose as their proposed method Soft Reset and not one of the variants that outperform it?
Thank you.
---
Reply to Comment 1.1.1:
Title: Response to a response
Comment: Dear Reviewer DfC5, thank you for your response.
> Regarding Figure 3, the authors replied that their method is Soft Reset, but I see it being outperformed by several of its variants. Then why do the authors choose as their proposed method Soft Reset and not one of the variants that outperform it?
It is true that it is outperformed by other variants. Soft Reset is the cheapest method among all the variants -- it requires only one update on $\gamma_t$ and one update on the parameters $\theta_{t+1}$. All the other variants -- Soft Reset (more compute), Soft Reset proximal, and Bayesian Soft Reset proximal -- require significantly more compute. Figure 3 suggests that we can in fact leverage more compute to achieve better empirical performance.
This is why we chose to present Soft Reset as the main method because it is relatively cheap and already achieves good performance compared to external baselines. However, when more compute is available, other methods could also be used to further improve performance.
Hope this clarifies the confusion. | Summary: - The authors study a learning algorithm that can handle the non-stationarity of the data distribution.
- They propose a parameter drift model based on the Ornstein-Uhlenbeck process, which models a form of “soft parameter reset” adaptive to the data stream. The drift model has an adaptive parameter $\gamma_t$ which a gradient-based optimizer can learn online.
- They illustrate the update rule of the main parameters of a neural network incorporating the learned drift model. The update rule is first proposed under the Bayesian neural network framework and then later adapted to a non-Bayesian neural network.
- They numerically corroborate the efficacy of their method on plasticity benchmarks and reinforcement learning experiments.
Strengths: - S1. They propose a novel way to learn online the amount of resetting the parameters.
- S2. The method is general enough to be applied broadly to continual learning problems and reinforcement learning problems.
Weaknesses: - W1. Modeling parameter drift is not well-motivated.
- The learnable drift model $p(\theta_{t+1} \mid \theta_t, \gamma_t)$ is the main contribution of this work. In my opinion, however, it is unclear why we should care about the drift of “currently learning” neural network parameters.
- According to the beginning of Section 3, the local minima of the objective function may change over time. This is of course acceptable. However, I cannot understand what is relevant between the change of the local optima $\theta^{\star}_t$ and the current parameter $\theta_t$. To be more specific, the two sentences in lines 94-96 are not connected well.
- W2. The paper is NOT self-contained overall.
- The authors claim that they chose the Ornstein-Uhlenbeck (OU) process as a drift model. However, it is unclear how a continuous stochastic differential equation (OU process) can be converted to a discrete Markovian chain defined in Equation (4). The paper does not even introduce the actual form of the OU process. Moreover, it is not very clear why the authors chose the particular model defined in Equation (4) because it is not known (at least in the paper) that the model is a unique option to choose.
- There are a lot of equations whose derivation is omitted. Let me list them: Equations (9), (10), (11), (14), (15). In particular, I am suspicious of both the validity and usefulness of Equation (10).
- For these reasons, the paper is not so easy to follow.
- W3. The soft reset method seems computationally heavy.
- I am worried about the computational cost. Although it is good to learn the forgetting parameter $\gamma_t$ online, it increases the computational cost almost twice. It might be more than just twice because learning the $\gamma_t$ requires Monte-Carlo (MC) sampling.
- In addition, there seem too many hyperparameters to tune.
- W4. Minor comments on typos/mistakes
- Do not capitalize the word “neural network”. (e.g., line 35-36)
- The term “plasticity loss” may be read as a type of loss function for some people (new to this field). I recommend using the term “loss of plasticity”.
- Line 84: “non-stationary” → “non-stationarity”
- Line 109: “mportant” → “important”
- Line 127: “wrt” → “with respect to”. Do not use abbreviations.
- Line 137: “property” → “properties”
- Line 154: what is “s” next to a parenthesis?
- Line 168: “$\theta = (\theta_i, \ldots, \theta_D)$” → “$\theta = (\theta_1, \ldots, \theta_D)$”
- Line 169: “$\mathbb{R}^D$” is a more standard notation than “$\mathcal{R}^D$”.
- Line 222: In “$\lambda\_i = \hat{\lambda}\_i \sigma^2_{t,i}$", I think $\hat{\lambda}$ must not have a subscript.
- Line 225: “sinc” → “since”
- Equation (14): What is “$\tilde{\mathcal{F}}$”? (Partial) gradient of “$\mathcal{F}$”?
- Line 232: What does it mean by “we assume that $\mu_0 = \theta_0, \sigma_0^2.$”?
- Line 236: “$\theta_{t,i}(\gamma_{t,i}) = \theta_t$” → “$\theta_{t,i}(\gamma_{t,i}) = \theta_{t,i}$”
- Line 240: “linearisng” → “linearizing” or “linearising”
- Line 292: duplicate “See”
- Line 354: “modelling” → “modeling”
- Lines 492-493: I guess “*data efficient*” and “*memorization*” are swapped.
Technical Quality: 2
Clarity: 1
Questions for Authors: Q1. As far as I know, the Hard Reset method typically resamples the model parameter every time it resets the parameter. With this in mind, what if we slightly modify the drift model as: $p(\theta \mid \theta_t, \gamma_t) = \mathcal{N} (\theta; \gamma_t \theta_t + (1-\gamma_t) \theta’_0 ; (1-\gamma_t^2)\sigma_0^2)$, where we re-sample $\theta’_0 \sim p_0(\theta_0)$ every time $t$?
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: The paper discusses its limitations in the experiment section. I think it might be better to mention them explicitly in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer 4koq for their response. Please find our detailed answer below.
> why drift model...
In Section 3 and in Appendix D, we presented the reasons for using a drift model together with learning NN parameters. Figure 1 illustrates the high-level intuition in the case of online Bayesian estimation. In SGD language, assuming the data is stationary up to time $T$, the SGD estimate $\theta_{T}$ tends towards a local optimum $\theta^*$. If the data stays stationary at time $T+1$, $\theta_{T+1}$ will move closer to $\theta^*$. However, if at time $T+1$ the data distribution changes, and with it the set of local optima, SGD might struggle to move towards this new set starting from $\theta_T$. The drift model allows the learning algorithm to make larger moves towards this new set of local optima. Such an idea was also used in the context of online convex optimization; see [2-3]. The form in eq.4 encourages parameters to return towards the initialization over time by shrinking the mean and increasing the variance (the learning rate in SGD), allowing bigger steps towards new local minima.
We will change the wording of the beginning of Section 3 to better reflect this explanation.
> form of OU process
The OU process defines an SDE that can be solved explicitly and written as a time-continuous Gaussian Markov process with transition density $p(x_t|x_s)=\mathcal{N}(x_s e^{-(t-s)},(1-e^{-2(t-s)})\sigma_0^2I)$ for any pair of times $t>s$. Based on this, as a drift model for the parameters $\theta_t$ (so $\theta_t$ plays the role of the state $x_t$) we use the conditional density $p(\theta_{t+1}|\theta_t)=\mathcal{N}(\theta_t\gamma_t,(1-\gamma_t^2)\sigma_0^2I)$ where $\gamma_t=e^{-\delta_t}$ and $\delta_t\geq0$ corresponds to the learnable discretization time step. In other words, by learning $\gamma_t$ online we equivalently learn the amount of continuous “time shift” $\delta_t$ between two consecutive states of the OU process. This essentially models parameter drift since, e.g., if $\gamma_t=1$, then $\delta_t=0$ and there is no “time shift”, implying $\theta_{t+1}=\theta_t$.
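This transition kernel can be sketched in a few lines (a minimal illustration with $\mu_0 = 0$ as in the density above; the function name is ours):

```python
import numpy as np

def ou_step(theta_t, delta_t, sigma0, rng):
    """Sample theta_{t+1} ~ N(gamma * theta_t, (1 - gamma^2) * sigma0^2 * I)
    with gamma = exp(-delta_t), the learned time shift of the OU process."""
    gamma = np.exp(-delta_t)
    noise = rng.standard_normal(theta_t.shape)
    return gamma * theta_t + np.sqrt(1.0 - gamma**2) * sigma0 * noise
```

A time shift of $\delta_t = 0$ gives $\gamma_t = 1$ and leaves the parameters unchanged (no reset), while a large $\delta_t$ gives $\gamma_t \approx 0$ and a fresh draw from the initialization distribution $\mathcal{N}(0, \sigma_0^2 I)$ (a full soft reset).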
> ...why eq.4
In Section 3, we argued for choosing eq.4 because it pushes the learning towards the initialization as well as towards the previous parameters, allows gradient-based methods to estimate the drift parameters, and keeps a positive finite variance (even when learning over an infinite amount of time), which avoids the degenerate cases of $0$ or $\infty$ variance. Moreover, it couples the mean and the variance via $\gamma$, making $\gamma$ easier to learn and less likely to overfit. Other potential choices for the drift model:
* $\theta_{t+1}=\gamma\theta_t+(1-\gamma)\mu_0+\beta\epsilon$, with $\epsilon\sim\mathcal{N}(0,I)$
* Fixed $\beta$ was explored in our experiment in Figure 2, left, where we used constant l.r. $\alpha_t$ for parameters update. We found that it performed worse than using rescaled l.r. $\alpha(\gamma_t)$ from eq.17 which is derived using our model, see Figure 2, right. For $\mu_0=0$, we recover Shrink&Perturb [4]
* Learning $\gamma$ and $\beta$ will likely overfit
* Mixture $p(\theta_{t+1}|\theta_t,\gamma_t)=\gamma_tp(\theta_{t+1}|\theta_t)+(1-\gamma_t)p_0(\theta_{t+1})$, where $p(\theta_{t+1}|\theta_t)=\mathcal{N}(\theta_t;\sigma^2)(\theta_{t+1})$ and $p_0(\theta_{t+1})=\mathcal{N}(\mu_0;\sigma_0^2)(\theta_{t+1})$, which is a Gaussian version of Spike&Slab [5]. This encourages **hard** resets instead of **soft** ones. Moreover, using mixtures is problematic since the KL in eq.11 cannot be computed exactly and needs to be approximated.
We will add the discussion of drift model choices in the appendix.
> derivations...
Please see our reply to Reviewer 6GY1 about eq.9 and eq.11. Eq.10 is obtained from eq.9 by finding fixed points. In eq.10, we have a typo: the term $\lambda\gamma_t^0$ in the denominator should be replaced by $\lambda$. Eq.14 is a gradient descent rule applied to eq.11 (eq.12), where $\tilde{F}$ is the derivative of $F$ with respect to $\mu_t$ and $\sigma_t$. Eq.15 is the maximum a-posteriori (MAP) update for $\theta$ from the posterior $p(y_{t+1}|x_{t+1},\theta)q^{t+1}_{t}(\theta |\gamma_t)$, where we use a temperature in the prior (see [3]).
We will add all the derivations and explanations in the appendix.
> $\theta'$ every time
While a possible modification of the drift model, it could be problematic to use, since resampling $\theta_{0}$ induces a higher variance than the OU process. The variance of this model is $\gamma_t^2\sigma^2_t+(1-\gamma_t)^2\sigma^2_0+(1-\gamma^2_t)\sigma^2_0$, which equals $2\sigma^2_0$ for $\gamma_t=0$, i.e. twice the variance of the initialization. This implies that such a model inflates the variance over time: e.g., if $\gamma_t=0.5$ for all time steps $t$, then $\sigma_{t+1}^2=\gamma_t^2 \sigma^2_t+2(1-\gamma_t)\sigma^2_0=0.5^2\sigma^2_t+\sigma^2_0$, whose fixed point $\frac{4}{3}\sigma_0^2$ exceeds the initialization variance.
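The variance recursion for $\gamma_t = 0.5$ can be verified numerically; a quick sketch:

```python
sigma0_sq = 1.0
gamma = 0.5
var = sigma0_sq  # start at the initialization variance
for _ in range(100):
    # recursion: sigma_{t+1}^2 = gamma^2 * sigma_t^2 + 2 * (1 - gamma) * sigma_0^2
    var = gamma**2 * var + 2 * (1 - gamma) * sigma0_sq
# fixed point: 2 * (1 - gamma) * sigma_0^2 / (1 - gamma^2) = 2 * sigma_0^2 / (1 + gamma)
print(var)  # ~ 4/3, i.e. above sigma_0^2 for gamma = 0.5
```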
Thus, we believe that our current drift model more accurately implements the mechanism of parameter resets since it always brings us back to the exact initialization distribution.
We hope we have been able to address your concerns and we are happy to answer further questions on the above subjects. If we were able to address your concerns, we would be grateful if you would consider increasing your review score.
**References**:
[1] Dynamical Models and Tracking Regret in Online Convex Programming, Eric C. Hall, Rebecca M. Willett, 2013
[2] Adaptive Gradient-Based Meta-Learning Methods, Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar, 2019
[3] How Good is the Bayes Posterior in Deep Neural Networks Really?, Florian Wenzel, Kevin Roth, Bastiaan S. Veeling, Jakub Świątkowski, Linh Tran, Stephan Mandt, Jasper Snoek, Tim Salimans, Rodolphe Jenatton, Sebastian Nowozin, 2020
[4] On Warm-Starting Neural Network Training, Jordan T. Ash, Ryan P. Adams, 2020
[5] Spike and slab variable selection: Frequentist and Bayesian strategies, Hemant Ishwaran, J. Sunil Rao, 2005
---
Rebuttal 2:
Title: The response is not satisfactory yet
Comment: I have read the author’s rebuttal, but I still have remaining concerns and questions. Let me leave my comments/questions about the author’s responses one at a time.
1. I don’t think this answer really responds to my question in W1. Here, I am not asking about the motivation of the drift model (although I did it in W2). My question is: why do we care about the conditional distribution of $\theta_{t+1}$, a consequence of a single step of training from $\theta_t$, even though we want to estimate or utilize the dynamics of the moving local optima $\theta^{\ast}\_t \mapsto \theta^{\ast}\_{t+1}$?
- After pondering this issue and staring at Figure 1, I suddenly realized that the drift model is actually modeling the dynamics of $\theta^{\ast}\_{t+1}$ given $\theta_t$. So in my understanding, the $\theta_{t+1}$ in “$p(\theta_{t+1} \mid \theta_t, \gamma_t)$” actually means $\theta^{\ast}_{t+1}$, the local optima at time $t+1$, rather than the parameter we will have by updating the parameter from $\theta_t$ using a single step of learning algorithm. (Or, at least, we want to find a local optimum $\theta^{\ast}\_{t+1}$ close to the initialization & $\theta_t$.) Is my understanding correct? If it is, I think this should have been explained in the paper in more detail; also, the term “parameter drift” is a bit misleading in this sense.
- By the way, I guess the citation ([2-3]) has a typo, right? It seems that it should include [1].
2. Thank you for the explanation of the relationship between the OU process, the Gaussian Markov process, and the choice of the drift model. Will the authors add this explanation to their paper? Or do they just ignore it?
3. Thank you for the discussion on the other options for drift models and for considering putting the discussion to the paper.
4. Thank you for the derivations of the equations in more detail. However, I think the derivations could be presented in a more reader-friendly way than they currently are.
- Eq 9: Please state to which parameter the linearization is taken (e.g., By linearizing log… around $\theta=\mu_t$ (cf. Line 198))
- Eq. 11 & 12: Are these just the usual objective functions in BNN training?
5. Thank you for the interesting response based on the drift model and the variance. But what if we change the **variance scheduling**: what if we suitably change the drift model into $p(\theta \mid \theta_t, \gamma_t) = \mathcal{N} (\theta; \gamma_t \theta_t + (1-\gamma_t) \theta’_0(t); 2\gamma_t(1-\gamma_t)\sigma_0^2)$ where $\theta’_0(t) \sim p_0(\theta_0)$? I guess this model does not suffer from the same problem of variance inflation: If $\sigma_t = \sigma_0$, then ${\rm Var}(\theta) = \gamma_t^2 \sigma_t^2 + (1-\gamma_t)^2 \sigma_0^2 + 2\gamma_t (1-\gamma_t) \sigma_0^2 = \\{\gamma_t^2 + (1-\gamma_t)^2 + 2\gamma_t (1-\gamma_t)\\}\sigma_0^2 = \sigma_0^2$...! The motivation behind this is the concern about the dependency on the particular initialization “$\theta_0$” which seems to be fixed throughout the training: “What if the initialization distribution is only the matter?” Please let me know if there are other problems in this drift model, or I would be very happy if you could test this drift model empirically (but I understand if it is impossible due to the time limit)!
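As a quick sanity check (not part of the original review), the variance algebra in the comment above can be verified numerically: with $\sigma_t = \sigma_0$, the combined variance coefficient $\gamma^2 + (1-\gamma)^2 + 2\gamma(1-\gamma)$ equals $1$ for every $\gamma$.

```python
# Check of the reviewer's variance identity: with sigma_t = sigma_0, the
# combined variance gamma^2 + (1 - gamma)^2 + 2*gamma*(1 - gamma) (times
# sigma_0^2) is exactly sigma_0^2 for every gamma.
def total_variance(gamma, sigma0_sq):
    return (gamma**2 + (1.0 - gamma)**2 + 2.0 * gamma * (1.0 - gamma)) * sigma0_sq

for gamma in (0.0, 0.25, 0.5, 0.9, 1.0):
    assert abs(total_variance(gamma, 2.0) - 2.0) < 1e-12
```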
Also, there is a question that is not answered yet. Let me bring it here:
- Line 232: What does it mean by “we assume that $\mu_0 = \theta_0, \sigma_0^2.$”?
Although I appreciate the time and effort invested in the rebuttal and am happy with the author's general response, overall, I feel like the author’s response is not really satisfying yet. Even though the authors requested to reconsider my assessment, I cannot do so before I get a satisfying further response. If the further response is not satisfactory as well, sadly and unfortunately, I am **ready** to decrease my score to 3 (but **not yet**) because, in my view, there is still a huge room for improvement in writing and presentation.
---
Rebuttal 3:
Title: Response to a response: Part 1
Comment: Dear 4koq, thank you for your feedback and please find our answer to your concerns below.
> I don’t think this answer really responds to my question in W1. Here, I am not asking about the motivation of the drift model (although I did it in W2). My question is: why do we care about the conditional distribution of
> $\theta_{t+1}$, a consequence of a single step of training from $\theta_t$, even though we want to estimate or utilize the dynamics of the moving local optima $\theta^*_t \mapsto \theta^*_{t+1}$?
> After pondering this issue and staring at Figure 1, I suddenly realized that the drift model is actually modeling the dynamics of $\theta^*_{t+1}$ given $\theta_t$. So in my understanding, the $\theta_{t+1}$ in “$p(\theta_{t+1} \mid \theta_t, \gamma_t)$” actually means $\theta^*_{t+1}$, the local optima at time $t+1$, rather than the parameter we will have by updating the parameter from $\theta_t$ using a single step of learning algorithm. (Or, at least, we want to find a local optimum $\theta^*_{t+1}$ close to the initialization & $\theta_t$.) Is my understanding correct? If it is, I think this should have been explained in the paper in more detail; also, the term “parameter drift” is a bit misleading in this sense.
We apologize for the confusion, but under the Bayesian perspective depicted in Figure 1 $\theta^*_{t}$ and $\theta^*_{t+1}$ denote **fixed/deterministic** values and therefore are not random variables, while the corresponding random variables which are assigned probability distributions are $\theta_{t}$ and $\theta_{t+1}$. More specifically, we learn the dynamical model distribution $p(\theta_{t+1}|\theta_{t},\gamma_t)$, by learning $\gamma_t$, so that the specific value $\theta^*_{t+1}$ can become more likely or “explainable” under the distribution $p(\theta_{t+1}|\theta_{t},\gamma_t)$. In other words we hope that $p(\theta_{t+1}|\theta_{t},\gamma_t)$ will place a considerable probability mass around the fixed value $\theta^*_{t+1}$. We are going to update the paper to make the above clear and remove the confusion.
To further explain this using Figure 1, consider the stationary case depicted in Figure 1a. There the posterior concentrates around an optimal value (which is the fixed value $\theta^*$); in other words, $q_t(\theta)$ will gradually converge to a delta mass around the value $\theta^*$. In a non-stationary case, the optimal value can suddenly change over time, e.g. from the value $\theta^*_{t}$ at time $t$ to $\theta^*_{t+1}$. Without a dynamical model (Figure 1b), the new posterior $q_{t+1}(\theta)$ after observing the new data at time $t+1$ has a small radius/variance (blue dashed circle) and cannot concentrate fast enough towards the new optimum. The use of the dynamical or drift model (Figure 1c) introduces an “intermediate step” that constructs an “intermediate prior distribution” $p_t(\theta) = \int q_t(\theta_t) p(\theta | \theta_t, \gamma_t) d \theta_t$, which can have increased variance and a shifted mean (green dashed circle). Then, once we fully incorporate the new data point at time $t+1$, the updated posterior $q_{t+1}(\theta) \propto p(y_{t+1}|\theta) p_t(\theta)$ can better concentrate around $\theta_{t+1}^*$. We will update the paper to clarify the above.
> citation [2-3] is a typo
Thank you, it should be [1-2].
> Q2
Yes we will add this to the paper (we forgot to explicitly write it).
> Q3
Thank you.
> Eq.11&12 -- usual BNN objective?
Yes, except for the temperature defined per-parameter. In BNN, temperature $\lambda$ is either 1, or fixed to a constant for all the parameters.
> Line 232: What does it mean by “we assume that $\mu_0=\theta_0,\sigma^2_0.$?
It means that the mean of the prior $p_0(\theta)=\mathcal{N}(\theta;\mu_0,\sigma^2_0)$ is fixed to $\theta_0$, i.e., the initialization, and $\sigma^2_0$ is a constant. Normally, NNs are initialized from $p_0(\theta)=\mathcal{N}(\theta;0,\sigma^2_0)$, but we make the prior initialization-dependent. We will rewrite this sentence to explicitly highlight this.
We hope our answer clarifies the questions and concerns you have raised, and we thank you for raising interesting points for discussion!
---
Rebuttal Comment 3.1:
Title: Part 2
Comment: > Eq.9, linearisation
As we stated above in the rebuttal, the response to Reviewer 6GY1 contains the partial derivation (due to space constraints). The linearisation is $\log p(y_{t+1}|x_{t+1},\theta)\approx\log p(y_{t+1}|x_{t+1},\mu_t)-g_{t+1}^T(\theta-\mu_t)$, where $g_{t+1}=\nabla_{\theta}\left[-\log p(y_{t+1}|x_{t+1},\theta)\right]\big|_{\theta=\mu_t}$. Then, $p(y_{t+1}|x_{t+1},\theta) \approx p(y_{t+1}| x_{t+1},\mu_t)\exp^{g_{t+1}^T\mu_t}\exp^{-g_{t+1}^T\theta}$. Since $q_t^{t+1}(\theta | S_{t}, \Gamma_{t-1}, \gamma_t)$ is Gaussian, we compute the integral in eq.6 in closed form and keep only the terms depending on $\gamma_t$. We provide the derivation for the scalar case here and the full derivation in the appendix. In eq.6, we have
$\log\int p(y_{t+1}|x_{t+1},\theta)\exp^{-\frac{(\theta -\mu_t(\gamma_t))^2}{2\sigma^2(\gamma_t)}}d\theta =\log \frac{1}{\sqrt{2\pi\sigma^2(\gamma_t)}}\int p(y_{t+1}|x_{t+1},\mu_t)\exp^{-g_{t+1}(\theta-\mu_t)} \exp^{-\frac{(\theta-\mu_t(\gamma_t))^2}{2\sigma^2(\gamma_t)}}d\theta$
which becomes
$\log p(y_{t+1}|x_{t+1},\mu_t)+\log \int \frac{1}{\sqrt{2\pi\sigma^2(\gamma_t)}}\exp^{-g_{t+1}(\theta-\mu_t)}\exp^{-\frac{(\theta-\mu_t(\gamma_t))^2}{2\sigma^2(\gamma_t)}}d\theta$
Consider only the exponent term in the integral
$\frac{-1}{2\sigma^2(\gamma_t)}(2\sigma^2(\gamma_t)g_{t+1}(\theta-\mu_t)+\theta^2-2\theta \mu_t(\gamma_t)+\mu_t(\gamma_t)^2)$
then
$\frac{-1}{2\sigma^2(\gamma_t)}\left(\theta^2-2\theta \left[\mu_t(\gamma_t)-\sigma^2(\gamma_t)g_{t+1}\right]+\mu_t(\gamma_t)^2-2\sigma^2(\gamma_t)g_{t+1}\mu_t\right)$
then
$\frac{-1}{2\sigma^2(\gamma_t)}\left[ \left(\theta-(\mu_t(\gamma_t)-\sigma^2(\gamma_t)g_{t+1}) \right)^2+2\mu_t(\gamma_t) \sigma^2(\gamma_t)g_{t+1}-\sigma^4(\gamma_t)g_{t+1}^2-2\sigma^2(\gamma_t)g_{t+1}\mu_t\right]$
then
$\frac{-1}{2\sigma^2(\gamma_t)} \left(\theta-(\mu_t(\gamma_t)-\sigma^2(\gamma_t)g_{t+1})\right)^2-\mu_t(\gamma_t)g_{t+1}+0.5\sigma^2(\gamma_t)g^2_{t+1}+g_{t+1}\mu_t$
Now, get back to the integral
$\log\int\frac{1}{\sqrt{2\pi\sigma^2(\gamma_t)}}\exp^{\frac{-1}{2\sigma^2(\gamma_t)}\left(\theta-(\mu_t(\gamma_t)-\sigma^2(\gamma_t)g_{t+1}) \right)^2-\mu_t(\gamma_t)g_{t+1}+0.5\sigma^2(\gamma_t)g^2_{t+1}+g_{t+1}\mu_t}d\theta$
Since the normalized Gaussian integrates to $1$ (so its log contributes $0$), this equals
$-\mu_t(\gamma_t)g_{t+1}+0.5 \sigma^2(\gamma_t)g^2_{t+1}+g_{t+1}\mu_t$
We keep only the terms which depend on $\gamma_t$ and get
$G(\gamma_t)=-\mu_t(\gamma_t)g_{t+1}+0.5 \sigma^2(\gamma_t)g^2_{t+1} $
We want to maximize $G(\gamma_t)$ or minimize $F(\gamma_t)=-G(\gamma_t)$ which is exactly the loss we defined in eq.9.
We will add the full derivation in the appendix.
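As a numerical sanity check of the closed form above (with arbitrary illustrative scalar values), one can brute-force the integral of $\exp^{-g_{t+1}(\theta-\mu_t)}$ against the Gaussian and compare its log to $-\mu_t(\gamma_t)g_{t+1}+0.5\sigma^2(\gamma_t)g^2_{t+1}+g_{t+1}\mu_t$:

```python
import numpy as np

# Illustrative scalar values standing in for mu_t, mu_t(gamma_t),
# sigma^2(gamma_t) and g_{t+1} in the derivation above.
mu_t, m_g, var_g, g = 0.3, 0.5, 0.7, 1.2

# Brute-force the log of the integral of exp(-g*(theta - mu_t)) against the
# Gaussian N(theta; m_g, var_g) on a wide, fine grid.
h = 1e-4
theta = np.arange(-20.0, 20.0, h)
gauss = np.exp(-(theta - m_g) ** 2 / (2.0 * var_g)) / np.sqrt(2.0 * np.pi * var_g)
lhs = np.log(np.sum(np.exp(-g * (theta - mu_t)) * gauss) * h)

# Closed form from the derivation (the gamma_t-dependent terms plus g*mu_t).
rhs = -m_g * g + 0.5 * var_g * g**2 + g * mu_t

assert abs(lhs - rhs) < 1e-6
```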
> Different drift model
Assuming that $\theta'\sim\mathcal{N}(\theta;\mu_0;\sigma^2_0)$, the model you wrote will have the mean $\gamma_t\theta_t+(1-\gamma_t)\mu_0$ and the variance $\gamma^2_t \sigma^2_t+(1-\gamma^2_t)\sigma^2_0$, which are the same as for our OU model, see eq.4 and Line 173 after eq.5, therefore they are mathematically equivalent.
In order to discuss it in detail, let $\mu_0$ be the prior mean for eq.4 and $\mu'_0$ be the prior mean in your variant, i.e., $\theta'\sim\mathcal{N}(\theta;\mu'_0,\sigma^2_0)$. When we incorporate the Gaussian model drift in the update, we are only concerned with its mean and variance, since these affect the SGD updates (see eq.16).
We have the following three cases:
* $\mu_0=\mu_0'=0$: the models have the same mean, equal to $0$. They are mathematically equivalent to the model $\theta_{t+1}=\gamma_t\theta_t+\sqrt{1-\gamma^2_t}\theta_0$, where $\theta_0\sim p_0(\theta)=\mathcal{N}(\theta;0,\sigma^2_0)$ -- the models are the same. They have the mean $\gamma_t\mu_t$ and the variance $\gamma^2_t\sigma^2_t+(1-\gamma^2_t)\sigma^2_0$.
* $\mu_0=\mu'_0=\theta_0\neq0$: the models have the same mean, given by the one specific initialization of the NN parameters. They are mathematically equivalent, with mean $\gamma_t\mu_t+(1-\gamma_t)\theta_0$ and the same variance as above.
* $\mu_0=\theta_0$ and $\mu'_0 \neq \theta_0$. These two models have the same variance, $\gamma_t^2 \sigma^2_t + (1-\gamma^2_t)\sigma^2_0$, but have different means: $\gamma_t\theta_t+(1-\gamma_t)\theta_0$ versus $\gamma_t\theta_t+(1-\gamma_t)\mu'_0$.
To implement variant 3, the mean $\mu'_0$ is fixed at the beginning of the training, because otherwise we would be under-counting the variance in the corresponding drift model. It is unclear how to interpret this model or how it would affect performance.
Thus, in order to answer your question about the impact of the particular initialization, we believe variant 1 is the right answer. We conducted an experiment using Soft Reset variants 1 and 2 on random label MNIST (data efficient), where we swept over hyperparameters. We found that these variants had similar performance on this benchmark. We will add this experiment to the paper in the Appendix.
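The claimed equivalence of the means and variances (case 2, with $\mu_0=\mu'_0=\theta_0$) can also be checked by simulation; the sketch below (all numeric values are illustrative) draws samples from the reviewer's variant and compares their moments to the closed forms quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
gamma, theta0, sigma0, mu_t, sigma_t = 0.7, 0.4, 1.0, -0.2, 0.5  # illustrative values

theta_t = rng.normal(mu_t, sigma_t, n)         # current posterior samples
theta0_prime = rng.normal(theta0, sigma0, n)   # freshly drawn "initialization"
noise = rng.normal(0.0, np.sqrt(2 * gamma * (1 - gamma)) * sigma0, n)
samples = gamma * theta_t + (1 - gamma) * theta0_prime + noise

mean_closed = gamma * mu_t + (1 - gamma) * theta0
var_closed = gamma**2 * sigma_t**2 + (1 - gamma**2) * sigma0**2

assert abs(samples.mean() - mean_closed) < 1e-2
assert abs(samples.var() - var_closed) < 1e-2
```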
We hope our answer clarifies the questions and concerns you have raised, and we thank you for raising interesting points for discussion!
Strengths: To take into account the non-stationarity in the data - the methodology in the paper modifies the SGD algorithm by modifying the regularization strength and the regularization target of the SGD algorithm - a significant contribution of the work is to do this modification in a principled manner using a drift parameter that is updated online from the stream of data.
Overall, the paper is well written. The need and novelty of the methodology are clearly presented, and the illustration of the use of methodology on realistic instances clearly demonstrates how it works and improves on the existing methodologies for learning in non-stationary data regime.
Weaknesses: Some parts can be improved -
1) Section 2 - Simplification of equation (6) to (9) requires some explanation [page -5].
2) Section 3 - Some intuition behind the objective (11) [page - 6] would be good to understand it better (especially the second term in that objective)
3) Some discussion on how does the methodology perform relative to the degree of non-stationarity in the data?
4) Some minor typos in line 109, 154, 190
Technical Quality: 4
Clarity: 4
Questions for Authors: How does the methodology perform relative to the degree of non-stationarity in the data? Specifically, to what extent does this methodology remain effective under varying levels of non-stationarity?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer 6GY1 for their positive feedback. Please find our answer below.
> Section 2 - Simplification of equation (6) to (9) requires some explanation [page -5]
Equation 9 is obtained from equation 6 as follows. First, we linearise the log-likelihood function $\log p(y_{t+1} | x_{t+1}, \theta)$ around $\mu_t$, which gives $\log p(y_{t+1} | x_{t+1}, \theta) \sim \log p(y_{t+1} | x_{t+1}, \mu_t) + g_{t+1}^T (\theta - \mu_t)$. Then, we notice that $p(y_{t+1} | x_{t+1}, \theta) = \exp^{\log p(y_{t+1} | x_{t+1}, \theta)}$, which means that $p(y_{t+1} | x_{t+1}, \theta) \sim p(y_{t+1} | x_{t+1}, \mu_t) \exp^{-g_{t+1}^T \mu_t} \exp^{g_{t+1}^T \theta}$. Then, since $q_{t}^{t+1}(\theta|S_t,\Gamma_{t-1},\gamma_t)$ is a Gaussian, we can compute the integral in a closed form. After that, we keep only the terms which depend on $\gamma_{t}$, since we are only interested in the optimization of $\gamma_t$.
We will add a full derivation of this expression in the Appendix.
> Section 3 - Some intuition behind the objective (11) [page - 6] would be good to understand it better (especially the second term in that objective)
The objective in equation 11 is the negative Evidence Lower Bound (ELBO) on the approximate predictive log-likelihood of eq. 6, if all $\lambda_i = 1$. It can be derived by considering the integral on the right-hand side of eq. 6, dividing and multiplying it by $q(\theta)$, and applying Jensen's inequality. Introducing $\lambda_i \neq 1$ amounts to introducing a temperature parameter (see [1]) on the prior for each dimension $i$. It was shown empirically [1] that using a temperature in the prior leads to better empirical results. We will add a more explicit explanation of how to derive objective 11 and will include the full derivation in the Appendix.
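A minimal sketch of what a per-dimension temperature on the Gaussian-prior term of such an objective could look like (an assumed form for illustration only; the paper's exact eq. 11 may differ):

```python
import numpy as np

def tempered_prior_term(theta, mu, sigma2, lam):
    # Per-dimension tempered Gaussian prior penalty; lam_i = 1 recovers the
    # ordinary (untempered) term of the negative ELBO.
    return float(np.sum(lam * (theta - mu) ** 2 / (2.0 * sigma2)))

theta = np.array([0.5, -0.5])
mu = np.zeros(2)
sigma2 = np.ones(2)
untempered = tempered_prior_term(theta, mu, sigma2, np.ones(2))
cooled = tempered_prior_term(theta, mu, sigma2, np.full(2, 0.5))
assert abs(untempered - 0.25) < 1e-12 and abs(cooled - 0.125) < 1e-12
```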
> How does the methodology perform relative to the degree of non-stationarity in the data? Specifically, to what extent does this methodology remain effective under varying levels of non-stationarity?
Our experiments in Figure 3 and Figure 4 partially answer this question. The difference between the Data Efficient and Memorization regimes in Figure 3 is the number of epochs given to a task – 70 epochs for Data Efficient and 400 for Memorization. There is more non-stationarity in the Data-Efficient setting since it changes more frequently, and our methodology remains more effective than the baselines.
In general, we expect our method to be very helpful in scenarios where there are relatively long stationary phases succeeded by large changes of the optimal parameters. Moreover, given that we can define $\gamma_t$ per parameter or per layer, we can have a combination of these scenarios -- some parameters stay stationary whereas others change significantly. Such a scenario would occur when we have stationary segments which are followed by non-stationary ones. In the case when the non-stationarity is very strong and the data distribution changes at every step, we think it is less likely that our method will bring an additional benefit over SGD, since in this setting SGD will be constantly refreshed by the noise from the data. SGD, however, will struggle when there is a large change after a long stationary phase. Compared to reset-type algorithms, our method is much more adaptive, and relearns the parameters only when it "has to" (i.e., when the data changes sufficiently).
We see a partial support for this claim in our RL experiments in Figure 4, where as we become more off-policy, we see much more benefit of our method over standard learning approach. Moreover, we also see a benefit of our method over baseline in a more on-policy regime in Figure 4, left, top on Humanoid environment. Since this environment is hard to solve, the baseline agent will see a lot of similar data initially and therefore the corresponding RL algorithm could overfit. Constantly refreshing the parameters is beneficial here. On the other hand, if we study Figure 4, left, bottom which shows performance in Hopper environment, we can see that Soft Resets is less effective. In this setting, since the problem is easy, the baseline method sees a lot of different data constantly and is less prone to overfit.
> Some minor typos in line 109, 154, 190
Thank you for the catch, we will make the adjustments.
**References**:
[1] How Good is the Bayes Posterior in Deep Neural Networks Really?, Florian Wenzel, Kevin Roth, Bastiaan S. Veeling, Jakub Świątkowski, Linh Tran, Stephan Mandt, Jasper Snoek, Tim Salimans, Rodolphe Jenatton, Sebastian Nowozin, 2020 | Summary: This work focuses on the problem of plasticity loss in non-stationary problems. This work proposes a new solution which adaptively drifts the parameters toward the initial distribution. The proposed solution is a form of soft resetting and can be seen as a meta version of L2-init where the degree of drift towards initialization is learned from data.
Strengths: The proposed solution is novel. Current soft resetting methods reset the parameters of the model at a constant rate, irrespective of the degree of non-stationarity. The proposed solution introduces adaptive soft resetting, where the degree of resetting is calculated based on the degree of non-stationarity. This adaptiveness allows the solution to adapt faster and retain more prior knowledge. The experimental results show the effectiveness of the proposed solution.
Weaknesses: Although the proposed solution is novel and effective, the paper has many weaknesses:
1. Hyperparameter sensitivity. Given that the proposed solutions introduce many new hyperparameters and the authors say in line 207 that one of the solutions is sensitive to the choice of hyperparameter, the paper needs to contain hyperparameter sensitivity curves for all new hyperparameters.
2. Computational cost. The proposed solutions seem computationally expensive. It would be good to include the wall-clock time of each algorithm beside one of the figures, and Figure 3 might be best.
3. Limited performance improvement. The primary purpose of high replay-ratio is to improve the sample efficiency of algorithms. However, results in Figure 4 show that the proposed solution does not utilize a high replay-ratio. Its sample efficiency at replay ratio 1 is the same as its sample efficiency at replay ratio 128.
Technical Quality: 3
Clarity: 2
Questions for Authors: See Weaknesses
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Computational cost and hyperparameter sensitivity need to be discussed in the conclusion of the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer Hmj6 for their feedback. Please find our detailed answer below.
> Hyperparameter sensitivity. Given that the proposed solutions introduce many new hyperparameters and the authors say in line 207 that one of the solutions is sensitive to the choice of hyperparameter, the paper needs to contain hyperparameter sensitivity curves for all new hyperparameters.
We have provided the answer about hyperparameters sensitivity above in the common answer for all the reviewers.
> Computational cost. The proposed solutions seem computationally expensive. It would be good to include the wall of each algorithm beside one of the figures, and Figure 3 might be best.
We have provided the answer about computational complexity above in the common answer for all the reviewers.
> Limited performance improvement. The primary purpose of high replay-ratio is to improve the sample efficiency of algorithms. However, results in Figure 4 show that the proposed solution does not utilize a high replay-ratio. Its sample efficiency at replay ratio 1 is the same as its sample efficiency at replay ratio 128.
Our choice of Reinforcement Learning experiment is motivated by the Primacy Bias [1] paper, where a significant plasticity loss occurs and where the benefits of parameter resets were observed. In this setting, it was shown that as replay ratio increases, the performance of the algorithm (Soft Actor Critic) could significantly degrade and hard-resetting parameters once in a while, can significantly improve its performance. With this in mind, our Figure 4 demonstrates that Soft Reset can in fact be an even more effective strategy when off-policy ratio increases – it can learn when to reset parameters and by what amount, compared to Hard Resets which require setting these parameters manually.
We are happy to take further questions on any of the above matters, and we hope that we have been able to address your concerns. If we have done so, we would be grateful if you would consider increasing your review score.
**References**:
[1] The Primacy Bias in Deep Reinforcement Learning, Evgenii Nikishin, Max Schwarzer, Pierluca D'Oro, Pierre-Luc Bacon, Aaron Courville, 2022 | Rebuttal 1:
Rebuttal: Dear reviewers, thank you for your feedback. In this section we provide answers to recurrent points from some of you.
# Computational Complexity
The following tables will be added to the Appendix together with a reference to this appendix at the end of Section 3.
**Notations**:
* P - number of parameters of Neural Network (NN)
* L - number of layers.
* $O(S)$ - cost of backwards pass of SGD
* $K$ - number of Monte Carlo (MC) samples in eq.7
* $J$ - number of iterations in eq. 7
* $I$ - number of parameter updates for proximal methods in eq.14 and eq.18
* $M$ - number of MC samples for Bayesian method in eq. 12
| Method | Comp. cost. | Memory |
|----------------------------------------------|--------------|------------------|
| SGD | O(S) | O(P) |
| Soft resets $\gamma$ p. layer | O(JKS + S) | O(L+(K+1)P) |
| Soft resets $\gamma$ p. param. | O(JKS + S) | O(P+(K+1)P) |
| Soft resets $\gamma$ p. layer + proximal (I iters) | O(JKS + I S) | O(L+(K+1)P) |
| Soft resets $\gamma$ p. param. + proximal (I iters) | O(JKS + I S) | O(P+(K+1)P) |
| Bayesian Soft Reset Proximal (I Iter) $\gamma$ p.layer | O(JKS + 2M I S)| O(L+(K+2)P) |
| Bayesian Soft Reset Proximal (I Iter) $\gamma$ p.param | O(JKS + 2M I S)| O(P+(K+2)P) |
The table above denotes the theoretical cost of each of the methods. In practice, we use $J=1$ for Soft Resets and $J=10$ for proximal methods. The table below quantifies the exact cost for methods from Figure 3.
| Method | Comp. cost. | Memory |
|----------------------------------------------|--------------|------------------|
| SGD | O(S) | O(P) |
| Soft resets | O(2S) | O(L+2P) |
| Soft resets more compute | O(11S) | O(L+2P) |
| Soft resets proximal | O(10S + 10S) | O(L+2P) |
| Bayesian Soft Reset Proximal | O(10S + 20S) | O(L+3P) |
| Bayesian Soft Reset Proximal $\gamma$ p.param | O(10S + 20S) | O(P+3P) |
From Figure 3, we see that it is beneficial to spend more compute on optimizing $\gamma$ and NN parameters. However, even the cheapest Soft Resets leads to a good performance.
In the Reinforcement Learning (RL) experiment, we do **one** update on $\gamma$ after each new chunk of fresh data. We do $G$ updates on the NN parameters with a cost of $O(S)$ each. The table below denotes the complexities.
| Method | Complexity | Memory |
|---------------|-------------|--------------|
| SAC | O(G S) | O(P) |
| Soft Reset | O(S + G S) | O(L + 2P) |
Soft Reset is marginally more expensive than SAC in RL but leads to a better performance (see Figure 4) in a highly off-policy regime.
# Hyperparameters sensitivity
We study the sensitivity of **Soft Resets** where $\gamma$ is defined per layer.
**Fixed parameters:**
* Number of MC samples $K=1$ and $M=1$
* Learning rate for parameters was tuned prior to that and equals to $\alpha=0.1$.
**Sensitivity parameters** (see Algorithm 1 for precise definitions):
* $\eta_{\gamma}$ - learning rate for the drift model
* $f$ - initial prior standard deviation rescaling, i.e., $\sigma^l_0 = f \frac{1}{\sqrt{H}}$ where $H$ is the width of the layer $l$
* $s$ - posterior scaling, i.e., $\sigma^l_t = s * \sigma^l_0$.
We provide the plots for the sensitivity analysis of Soft Reset on MNIST (data efficient) in the attached pdf. On top of that, we conduct a sensitivity analysis of the L2 Init [1] and Shrink&Perturb [2] methods. The X-axis of each plot denotes one of the studied hyperparameters, whereas the Y-axis is the average performance across all the tasks (see the Experiments section for the task definitions). The standard deviation is reported over 3 random seeds. A color indicates a second studied hyperparameter, if available. In the title of each plot, we write the hyperparameters which are fixed.
**Takeaways**
The most important parameter is the learning rate $\eta_{\gamma}$ of the drift model. For each method, there exists a good value of this parameter and performance is sensitive to it. This makes sense since this parameter directly impacts how we learn the drift model.
The performance of Soft Resets is robust with respect to the posterior standard deviation scaling parameter $s$ as long as $s\geq0.5$. For $s<0.5$, the performance degrades. This parameter is defined from $\sigma_{posterior} = s \sigma_{prior}$ and affects the relative increase in learning rate (see eq. 13 and eq. 17). This increase is given by $1/(\gamma^2 + (1-\gamma^2)/s^2)$, which could be ill-behaved for small $s$.
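The factor $1/(\gamma^2 + (1-\gamma^2)/s^2)$ quoted above can be tabulated directly; the snippet below (with an illustrative $\gamma = 0.9$) shows it equals $1$ at $s = 1$ and collapses toward zero as $s$ shrinks, which is one way to see the ill-behaviour for small $s$:

```python
# Tabulate the learning-rate factor 1 / (gamma^2 + (1 - gamma^2) / s^2) for a
# fixed illustrative gamma; it equals 1 at s = 1 and collapses toward zero
# as s -> 0.
def lr_factor(gamma, s):
    return 1.0 / (gamma**2 + (1.0 - gamma**2) / s**2)

factors = {s: lr_factor(0.9, s) for s in (1.0, 0.5, 0.1, 0.01)}
assert abs(factors[1.0] - 1.0) < 1e-12
assert factors[0.01] < factors[0.1] < factors[0.5] < factors[1.0]
```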
We also study the sensitivity of the baseline methods. We find that L2 Init is very sensitive to the parameter $\lambda$, which is a penalty on the $||\theta-\theta_0||^2$ term. In fact, Figure 2, left shows that there is only one good value of this parameter which works. Shrink\&Perturb is very sensitive to the shrink parameter ($\lambda$). Similar to L2 Init, there is only one value which works, $0.9999$, while the values $0.999$ and $0.99999$ lead to bad performance. This method, however, is not very sensitive to the perturb parameter $\sigma$, provided that $\sigma \leq 0.001$.
Compared to the baselines, our method is more robust to the hyperparameters choice.
We also conduct sensitivity analysis for other variants of the method, but due to space constraints, will only include the results in the camera ready version. The take-aways are similar.
**References:**
[1] Maintaining Plasticity in Continual Learning via Regenerative Regularization, Saurabh Kumar, Henrik Marklund, Benjamin Van Roy, 2023
[2] On Warm-Starting Neural Network Training, Jordan T. Ash, Ryan P. Adams, 2020
Pdf: /pdf/60b1fb511b9ddb0086d900e3f70318a6a16b855d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
LaKD: Length-agnostic Knowledge Distillation for Trajectory Prediction with Any Length Observations | Accept (poster) | Summary: This paper presents the LaKD method to improve trajectory predictions for variable input observation lengths.
LaKD incorporates two key ideas. The first idea is dynamic length-agnostic knowledge distillation. During training time, for each training sample, they augment it by randomly masking the input observation with different lengths. Then, they calculate the prediction errors when using each observation length. They select the prediction with the lowest error as the teacher prediction, and they have the other predictions distill information from the teacher prediction through a KL divergence loss on their feature embeddings. This way, the network is trained to predict good trajectories when given any observation length.
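A toy sketch of this teacher-selection step (illustrative only — the names, the softmax normalization of features, and the error inputs are assumptions, not the authors' implementation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def kl_div(p, q, eps=1e-12):
    # KL(p || q) with light smoothing for numerical safety.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def select_teacher_and_losses(features, errors):
    """Pick the lowest-error observation length as teacher and compute a KL
    distillation loss pulling the other lengths' features toward it."""
    teacher = int(np.argmin(errors))
    p_teacher = softmax(features[teacher])
    losses = [0.0 if i == teacher else kl_div(p_teacher, softmax(f))
              for i, f in enumerate(features)]
    return teacher, losses

# Three feature embeddings, one per masked observation length, plus errors.
feats = [np.array([1.0, 2.0, 0.5]), np.array([1.1, 1.9, 0.4]), np.array([0.2, 0.1, 3.0])]
teacher, losses = select_teacher_and_losses(feats, errors=[0.8, 0.3, 1.5])
assert teacher == 1 and losses[teacher] == 0.0 and all(l >= 0.0 for l in losses)
```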
In order to prevent the knowledge distillation process from affecting the performance of the teacher prediction, they propose a second idea: apply gradient weights based on the importance of the neurons to the teacher prediction. For neurons that have high importance to the teacher prediction, they multiply the corresponding gradients by a smaller weight.
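This second idea can be illustrated with a hypothetical importance-to-weight rule (the paper's actual weighting scheme may differ; the `1/(1+importance)` rule below is an assumption for illustration):

```python
import numpy as np

def weighted_gradients(grads, importance):
    # Hypothetical rule: the weight shrinks as a neuron's importance to the
    # teacher prediction grows, so distillation perturbs important neurons less.
    return grads / (1.0 + importance)

grads = np.array([1.0, 1.0, 1.0])
importance = np.array([0.0, 1.0, 9.0])   # neuron 2 is most important
weighted = weighted_gradients(grads, importance)
assert weighted[0] == 1.0 and weighted[1] == 0.5 and abs(weighted[2] - 0.1) < 1e-12
```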
This method is a plug-and-play that can be applied to almost any trajectory prediction model. The authors applied the method to the popular HiVT model and QCNet model. They performed evaluations on Argoverse 1, Argoverse 2, and nuScenes datasets. They compared against a few standard baselines such as Original and Random Masking, as well as a similar work FlexiLength Network (FLN). The result shows their LaKD method achieves better performance than the baselines. The authors also performed ablation studies to demonstrate the contributions from components.
Strengths: * The evaluation result is very thorough. The authors applied their method to two popular trajectory prediction models and performed evaluation on three public datasets.
* The proposed method achieves better performance than the baselines, including a very recent work from CVPR 2024.
* The paper is well-written and easy to follow.
Weaknesses: * The performance improvement compared to the naive Random Masking baseline is not very significant. I doubt whether it's worth the complexity to use this method in practice.
Technical Quality: 3
Clarity: 4
Questions for Authors: N/A
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and efforts on evaluating our work. Here are my responses to your comments:
> **Comment 1**: The performance improvement compared to the naive Random Masking baseline is not very significant. I doubt whether it's worth the complexity to use this method in practice.
Thanks for your comment. Firstly, during testing, our model disables bi-directional self-distillation, and remains fully consistent with the backbone structure without requiring any additional inference time or resources. Additionally, our method enables existing models to effectively handle observed trajectories of varying lengths. Secondly, as shown in Table 1, LaKD outperforms Random Masking with relative improvements ranging from 3% to 11%. Additionally, Figures 4, 5, 6, and 7 in the appendix demonstrate that LaKD yields better results than Random Masking across various lengths of observed trajectories. For example, on the Argoverse1 dataset using HiVT as the backbone, LaKD achieves relative improvements of 21.2%/19.2%/10.0% at K=1 and 15.3%/18.8%/28.9% at K=6 over Random Masking in terms of minADE/minFDE/MR with 2-frame observed trajectories.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Thank you for your response. I will keep my rating.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Thank you for your decision to maintain the current rating. I appreciate your time and feedback. | Summary: To tackle the length-agnostic trajectory prediction problem, the authors are motivated to utilize knowledge learned from both longer and shorter trajectories. They propose a plug-and-play self-distillation framework for trajectory prediction, which can be integrated with many different off-the-shelf trajectory prediction models. Specifically, for a given long trajectory, the authors first randomly mask the trajectory into several different lengths, maintaining the same prediction horizon but with different history lengths. They then determine the direction of knowledge distillation based on the performance of the model with the longer or shorter history. The better-performing trajectory length is used to distill knowledge into the intermediate features of the other lengths. Additionally, the authors introduce a soft masking method at the neuron level, updating the distillation of important neurons slowly and vice versa. They test their framework on the Argoverse 1, nuScenes, and Argoverse 2 datasets, using the off-the-shelf models HiVT and QCNet, and demonstrate performance gains.
Strengths: S1. The paper is generally well-written and easy to follow.
S2. The proposed plug-and-play method can work well with many different off-the-shelf model architectures, making it flexible. The proposed method also shows performance gains with several different recent trajectory prediction models.
Weaknesses: W1. One constraint in most knowledge distillation scenarios is that the inputs to the distilling model and to the model being distilled should carry equivalent, or nearly equivalent, information. I'm a bit confused about why the overall framework of bidirectional self-distillation works from an information theory / flow perspective. For my detailed confusion, please refer to my question section.
W2. For the qualitative analysis in Fig. 3, I could hardly understand why this shows that the proposed LRKD method is better than the other methods. In the above scenario, there doesn't seem to be a significant difference between the three predictions. In the below scenario, the proposed method is closer to the ground truth, but I do not fully understand why the ground truth behaves in that manner and how this is inferred by the model.
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1. Could the authors explain further what they mean by “Knowledge Collision”?
Q2. May I ask how could a trivial degenerate solution, or a mode collapse, be avoided when conducting bi-directional self-distillation? I am aware that the authors try to classify the neurons in Section 3.4, but it seems to me that by the metrics as in Equation 4, the important neurons will get even more important length-agnostically, and therefore get distilled more both for long and for short trajectories.
Q3. A question about the overall framework: when we distill the representation of a long history into a short history, the short history lacks information about the longer past. For example, suppose trajectories A and B share the same last several time steps but differ in that one turns left earlier while the other turns right; the information intrinsic to these trajectories is different. Let us refer to the shared last time steps of A and B as a shorter trajectory C. The encoder given the longer history A can capture this distinguishing information, but when it is distilled to the encoder given the shorter history C, this information is not seen. Yet the proposed framework enforces distilling this uncertain knowledge of A into the short history C, even though in a potential test situation the longer trajectory may have turned in the opposite direction in the past, like B, compared to what was seen in the training set. May I ask how the proposed framework remains valid in such a situation?
Q4. For Ablation Table 3, may I ask why the performance degrades when M is larger than 3? Intuitively, the performance should improve with a larger mask number?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Briefly discussed in Conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review. The following are our responses to the points you have raised.
> **Comment 1**: I'm a bit confused about why the overall framework of bidirectional self-distillation works from an information theory / flow perspective.
Sorry for any confusion caused by our unclear presentation. Traditional knowledge distillation involves transferring knowledge from a deeper teacher model to a shallower student model given the same input, which is a form of model capability transfer. However, the bidirectional self-distillation in our LaKD first randomly masks the same trajectory to obtain trajectories of different lengths. Then, knowledge is transferred from the features of good trajectories to those of bad trajectories extracted by the same model, which is a form of data-level information transfer. We expect that this approach will enable trajectories of varying lengths to exhibit strong features for future trajectory prediction.
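The direction selection described above (the better-performing masked length teaches the others) can be sketched as follows; this is an illustrative reconstruction, not the authors' actual code, and the error metric, feature shapes, and function names are assumptions:

```python
import numpy as np

def choose_distill_direction(errors):
    """errors maps each history length to its prediction error
    (e.g. minFDE on the current sample); the smallest-error
    length becomes the teacher, the rest become students."""
    teacher = min(errors, key=errors.get)
    students = [length for length in errors if length != teacher]
    return teacher, students

def feature_distill_loss(teacher_feat, student_feat):
    # Data-level information transfer: L2 distance between the
    # intermediate features of two masked lengths of one trajectory.
    return float(np.mean((teacher_feat - student_feat) ** 2))

errors = {50: 0.42, 20: 0.61, 5: 0.95}  # per-length prediction error
teacher, students = choose_distill_direction(errors)
```

The key point the rebuttal makes is that both "models" here are the same shared encoder, so this loss moves features of the weaker lengths toward those of the stronger length rather than transferring capacity between two networks.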
> **Question 1**: Could the authors explain further what they mean by “Knowledge Collision”?
As mentioned above, since our framework uses a single shared encoder to extract feature representations for trajectories of different lengths and then performs data-level information transfer through this encoder, the feature representation of good trajectories can be disrupted when knowledge is transferred to bad trajectories through updates to the shared encoder. We refer to this problem as "Knowledge Collision". We will add more details in the final version.
> **Comment 2**: For the qualitative analysis in Fig. 3, I could hardly understand why this shows that the proposed LRKD method is better than the other methods.
Thanks for your comment. Although it seems that there is no significant difference among the three predictions in the above scenario of Fig. 3, their results are 3.5891/6.0794/1, 1.4298/3.0295/1 and 0.6915/1.0974/0 in terms of minADE/minFDE/MR, respectively, indicating that our LaKD provides more accurate predictions. To more intuitively demonstrate the effectiveness of our LaKD method, we have included additional cases in Figure 1 of the attached PDF to show that our method succeeds where others fail, as provided in the "global" response.
Regarding the behavior of the ground truth in the below scenario of Fig. 3, we infer that the vehicle stops at the road ramp. We think this can be inferred from the features of the vehicle, e.g., speed, acceleration, etc.
> **Question 2**: May I ask how could a trivial degenerate solution, or a mode collapse, be avoided when conducting bi-directional self-distillation?
We apologize for the confusion caused by our unclear presentation. In our method, we design a dynamic soft-masking strategy to effectively conduct bi-directional self-distillation. During the training process, we set different soft-masking weights based on the importance of the units. This protects the more important units from significant damage that could affect the model's performance on trajectories of different lengths. However, these important units are still updated slowly, and their importance does not necessarily increase, as shown in Equation 7. Additionally, our dynamic soft-masking mechanism recalculates the importance of units whenever a new batch of data is encountered, avoiding excessive protection of any single unit. Furthermore, when the model's performance is poor, highly important units also need to be trained. Therefore, we lower their soft-masking weights at the early stages of training to prevent mode collapse, as shown in Equation 8. We will add more details in the final version.
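As a rough illustration of this idea (not the paper's Equations 7-8; the weighting function, its `tau` parameter, and the early-training relaxation rule are assumptions), importance-dependent soft-masking weights could look like:

```python
import numpy as np

def soft_mask(importance, perf_ok=True, tau=1.0):
    """Per-unit soft-masking weights: highly important units get
    small weights (protected, updated slowly), while protection is
    relaxed when overall performance is still poor so that even
    important units keep training. In the described mechanism this
    would be recomputed for every new batch."""
    imp = importance / (importance.max() + 1e-8)  # normalise to [0, 1]
    mask = np.exp(-tau * imp)                     # more important -> smaller weight
    if not perf_ok:
        mask = np.maximum(mask, 0.5)              # early-training relaxation
    return mask

weights = soft_mask(np.array([0.1, 0.5, 1.0]))
```

Because the weights are recomputed per batch from fresh importance scores, no unit is permanently frozen, which is the property the rebuttal relies on to argue against mode collapse.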
> **Question 3**: A question about the overall framework is that when we distill the representation of a long history into a short history, the short history lacks information about the longer past.
Thank you for your insightful question. During the training process, we can perform our LaKD to distill knowledge from the longer trajectory A to the shorter trajectory C (Note that A and C belong to the same trajectory), allowing C to capture the intrinsic information of A and achieve similar prediction performance. By this means, we expect the shared encoder to extract feature representations for trajectories of different lengths as precisely as possible. During testing, we disable bi-directional self-distillation and only use the backbone module. When a new trajectory B (i.e., B and A belong to different trajectories) comes, we directly extract its features using the shared encoder for future trajectory prediction, which can capture the distinguishing information of B.
> **Question 4**: For Ablation Table 3, may I ask why the performance degrades when M is larger than 3? Intuitively, the performance should improve with a larger mask number?
Thanks for your comment. Since our method involves randomly masking historical trajectories M times in each training iteration and continues for a sufficient number of epochs, observation trajectories of all different lengths are seen during training, regardless of the value of M. Consequently, the model's performance does not fluctuate significantly as M changes, indicating that the model is not sensitive to M. This makes M easy to set in real-world scenarios.
Regarding the performance degradation when M is larger than 3, this occurs because, despite our use of a dynamic soft-masking mechanism to prevent knowledge collision, the feature representation of good trajectories may be compromised when knowledge is transferred to bad trajectories through updates to the shared encoder. This compromise becomes more severe as M increases, resulting in reduced performance. We will add these in the final version.
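The masking step under discussion (masking each observed history M times per training iteration) could be sketched as follows; the function, its defaults, and the data layout are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def random_length_masks(traj, m, min_len=2, rng=None):
    """Mask one observed trajectory into m random history lengths,
    keeping only the last `length` frames each time (the prediction
    horizon is untouched). Illustrative of training on several
    masked lengths of the same trajectory per iteration."""
    rng = rng or np.random.default_rng(0)
    total = len(traj)
    lengths = rng.integers(min_len, total + 1, size=m)
    return [traj[total - length:] for length in lengths]

history = np.arange(20).reshape(10, 2)   # 10 past frames of (x, y)
masked = random_length_masks(history, m=3)
```

Since the lengths are re-drawn every iteration, all history lengths are eventually covered for any M, which matches the rebuttal's point that the choice of M mainly trades off how many masked copies interact per iteration rather than which lengths are seen.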
---
Rebuttal Comment 1.1:
Title: Raised my rating from 5 to 6
Comment: Thank you so much for answering my questions! Most of my questions are well answered. It would be great if you could provide a more detailed explanation on question 3.
---
Reply to Comment 1.1.1:
Title: A more detailed explanation on question 3.
Comment: Thank you very much for recognizing our work. I will provide a more detailed explanation of Question 3, as follows:
During the training process, we can perform our LaKD to distill knowledge from the longer trajectory A to the shorter trajectory C (Note that A and C belong to the same trajectory), allowing C to capture the intrinsic information of A and achieve similar prediction performance. By this means, we expect the shared encoder to extract feature representations for trajectories of different lengths as precisely as possible. Our Length-agnostic Knowledge Distillation framework will only enhance the model's ability to extract temporal features of trajectories, improving the model's capacity to capture every temporal change in the observed trajectory, without reducing the model's generalization ability.
During testing, we disable bi-directional self-distillation and only use the backbone module. When a new unseen trajectory B (i.e., B and A belong to different trajectories) comes, the model does not predict based on the previously seen trajectories A and C, but instead directly extracts its features using the shared encoder for future trajectory prediction. The model can capture the distinguishing information of B, recognize B's turning behavior, and make predictions based solely on B's observed trajectory, without being influenced by A and C. | Summary: The paper presents a length-agnostic knowledge distillation framework, motivated by knowledge transfer among trajectories of different lengths. The authors address knowledge conflicts during distillation with a dynamic soft-masking mechanism. The evaluation is conducted using the Argoverse 1, nuScenes, and Argoverse 2 datasets.
Strengths: - The problem is clearly stated and the motivation is logical.
- Dynamic knowledge transfer between trajectories of varying lengths sounds interesting.
- It seems that the soft-masking strategy reasonably addresses the issue of knowledge collision as proposed.
Weaknesses: - Lacks proper qualitative results.
- Limited choice of backbone models.
See below for details.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The qualitative results do not clearly demonstrate the scenarios by which the authors were motivated. As shown in Figure 3, there is no difference between (b) and (c), which makes me doubt the contribution of this work. Rather than merely being closer to the ground truth, it is strongly suggested to show scenarios where the proposed method works while others fail.
- Different backbone models other than HiVT and QCNet would be needed to claim its generality.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper does not provide any failure cases or insights into the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We truly appreciate the reviewer's constructive feedback. In light of these insightful comments, we would like to address them with the following clarifications.
> **Question 1**: The qualitative results do not clearly demonstrate the scenarios by which the authors were motivated. As shown in Figure 3, there is no difference between (b) and (c), which makes me doubt the contribution of this work. Rather than merely being closer to the ground truth, it is strongly suggested to show scenarios where the proposed method works while others fail.
Thanks for raising this concern, which helps us clarify the contribution of our work. Due to the limited time during the submission phase, we simply visualized the results to demonstrate that our method can more accurately predict future trajectories. Although it seems that there is no significant difference among the three predictions in the above scenario of Fig. 3, their results are 3.5891/6.0794/1, 1.4298/3.0295/1 and 0.6915/1.0974/0 in terms of minADE/minFDE/MR, respectively, indicating that our LaKD provides more accurate predictions. Following your suggestion, we have added more results to demonstrate that our method works while others fail, as shown in Figure 1 of the attached PDF in the "global" response. Thanks again for the insightful comments, and we will add these results in the final version.
> **Question 2**: Different backbone models other than HiVT and QCNet would be needed to claim its generality.
Thanks for the comment. We chose to use HiVT and QCNet as our backbones in our paper because they are the most advanced models in the field of trajectory prediction and have recently ranked highly on the Argoverse dataset leaderboard. Based on your helpful suggestion, we conducted experiments using two other typical trajectory prediction models, TNT [1] and VectorNet [2], as our backbones to show the generality of our method. The results are as follows:
|Dataset|Method|$\mathrm{min\overline{ADE}}$ (K=1)|$\mathrm{min\overline{FDE}}$ (K=1)|$\mathrm{\overline{MR}}$ (K=1)|$\mathrm{min\overline{ADE}}$ (K=6)|$\mathrm{min\overline{FDE}}$ (K=6)|$\mathrm{\overline{MR}}$ (K=6)|
|-|-|-|-|-|-|-|-|
|Argoverse 1|VectorNet-Orig|3.5335|7.8259|0.8267|2.1173|3.6945|0.6751|
||VectorNet-RM|1.7815|3.8589|0.6412|1.1016|1.9880|0.3531|
||VectorNet-DTO|1.7523|3.8065|0.6409|1.0126|1.8096|0.3189|
||VectorNet-FLN|1.7334|3.7027|0.6244|1.0088|1.7994|0.3132|
||VectorNet-LaKD|1.6003|3.4628|0.5917|0.9933|1.7546|0.3016|
|Argoverse 1|TNT-Orig|3.7318|7.7174|0.8600|1.8255|2.9818|0.4081|
||TNT-RM|2.8629|6.3561|0.7925|1.1946|2.1685|0.3116|
||TNT-DTO|2.7280|6.1935|0.7858|1.1242|2.0630|0.3084|
||TNT-FLN|2.4241|5.5038|0.7593|1.0692|1.9485|0.2714|
||TNT-LaKD|2.2174|5.0279|0.6974|1.0172|1.8571|0.2456|
The experimental results show that our method still achieves the best performance when using TNT and VectorNet as our backbones. We will add these results in the final version.
> **Limitations**: The paper does not provide any failure cases or insights into the limitations.
Thanks for your comment. Due to the paper's length limitations, we have analyzed our algorithm's limitations in the appendix as follows:
In this work, we aim to distill knowledge from 'good' trajectories to 'bad' trajectories to improve prediction performance for observations of any length. However, how to determine whether a trajectory is 'good' or 'bad' is an open problem. Currently, we adopt a heuristic strategy utilizing the distance between the predicted trajectory and the ground-truth trajectory. More complex strategies, such as reinforcement learning, are worth further exploration and investigation.
[1] Zhao H, et al. "Tnt: Target-driven trajectory prediction" PMLR 2021.
[2] Gao J, et al. "Vectornet: Encoding hd maps and agent dynamics from vectorized representation" CVPR 2020.
---
Rebuttal Comment 1.1:
Comment: Thanks for providing the response. They are clear now. Please add new figure and results to the final version.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Thank you very much for the score improvement and your constructive feedback. We will further polish the paper in the final revision. Thank you! | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for their insightful and constructive feedback. We really appreciate that the reviewers thought our work to be "well-motivated" (Rudm, 4u3J, G7Yk), "easy to follow" (Rudm, 4u3J, G7Yk), "well-written" (Rudm, 4u3J, G7Yk), "novel" (Rudm, 4u3J, G7Yk), "a promising/good topic" (Rudm, 4u3J, G7Yk), "effective in experiments" (Rudm, 4u3J, G7Yk), "sufficient experiments" (4u3J, G7Yk), "tackles an important problem" (Rudm, 4u3J, G7Yk), "insightful and sound" (Rudm, 4u3J, G7Yk), "with solid theoretical part" (Rudm, G7Yk).
We have made point-to-point responses to the comments of each reviewer. Additionally, we provide further qualitative results in the attached PDF to address the reviewers' concerns. Once again, we thank all reviewers for their insightful comments, which are very helpful for improving the quality of our paper.
Pdf: /pdf/5e6e051416426072fcccda70ca605c6ddb48cb77.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A PID Controller Approach for Adaptive Probability-dependent Gradient Decay in Model Calibration | Accept (poster) | Summary: The submitted paper proposes a PID-based controller to ensure the consistent optimisation of model accuracy and model calibration. The controller, together with the proposed Relative Calibration Error (RCE), dynamically adjusts the gradient decay rate to "control" model confidence. By applying a learning rate compensation mechanism, the side effects of the dynamic gradient decay rate, such as fluctuations in gradient amplitude, can be mitigated.
Strengths: 1, The proposed Relative Calibration Error (RCE) is a simple but highly efficient way to exhibit over-confidence and under-confidence, which can be exploited by the PID-based controller. \
2, A very clear illustration (Figure 3) is used to connect PID controller and model optimisation.\
3, Empirical experiments show good results for the proposed method.\
4, The potential application (dynamic calibration) to online machine learning.
Weaknesses: 1, This work lacks theoretical analysis as authors mentioned in the appendix.\
2, Please refer to the "Questions" part.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1, Have you thoroughly investigated related research on PID for model calibration, since there is no related work section in this paper? The PID controller is a classical and historical method. There are several studies relating PID to model training, but I'm not sure whether there is research similar to this work. Additionally, in "$\textit{Line 68-69}$ [29]", I guess you may want to highlight the success of PID controllers in the machine learning field (an interdisciplinary field) to enhance the feasibility of this work.\
2, $\textit{Line 102}$ : The temperature coefficient $\tau$ is introduced as a hyper-parameter. However, it seems this hyper-parameter is fixed to 1 ($\textit{Line 115}$) without any further discussion. Have you tried any different values or any methods to optimise this hyper-parameter?\
3, Have you tried any different optimisers, especially adaptive optimisers like Adam, since you want to mitigate gradient fluctuation? Or is only SGD with the proposed adaptive learning rate strategy applied, as you mentioned in $\textit{Algorithm 1}$ (Page 7)?\
4, What is the strategy for selecting/tuning the value of P/I/D terms?\
5, Could you please explain in more detail the statement (threshold-related) in $\textit{Line 178-179}$\
6, How did you fine-tune the comparative methods mentioned in $Table$ 1? Did you choose the best results of the other methods in their experiments to compare with yours?\
$\textbf{(Possible) Minor issues:}$\
1, $\textit{Abstract}$ : you mentioned "During model optimization, the expected calibration error tends to overfit earlier than classification accuracy, indicating distinct optimization objectives for classification error and calibration error", which indicates that accuracy and calibration are two distinct but related notions. Maybe you can briefly describe the difference between them in the Introduction section for a more general audience.\
2, $\textit{Line 50}$ : If I understand correctly, "These methods" represents the "Training-based model calibration methods". If so, maybe you can restate the subject instead of using a pronoun, since you use the singular form in the previous sentences (Lines 48 and 49).\
3, $\textit{Line 96}$ : "$D_b$ contains all samples with $\hat{p} \in [\frac{b}{M},\frac{b+1}{M})$", but you also mentioned that $b$ runs from $1$ to $M$. It seems the first bin $[0,\frac{1}{M}]$ and the right boundary of the last bin $[..., 1]$ are missing. One simple way is to change it to $\hat{p} \in [\frac{b-1}{M},\frac{b}{M})$ and additionally make the last bin closed, if I understand correctly.\
4, $\textit{Line 98}$ : "ECE and MCE". Better to use the full term when you first mention it in the main text. For example, "Expected Calibration Error (ECE) and Maximum Calibration Error (MCE)".\
5, Changing the label of the y-axis "\% of samples" in $Figure$ 2 to "$\times100$\% of samples" may be more precise?\
6, The label of the x-axis in $Figure$ 5 is missing. Is that "Epoch"?\
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Please see the weaknesses part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Answers to Questions**
* To the best of our knowledge, no existing work addresses model calibration using PID control. Most prior approaches apply PID concepts to optimization problems rather than model calibration. Our work, however, establishes a connection between model calibration and the gradient decay rate by introducing a probability-dependent gradient decay coefficient, and implements a model calibration strategy by controlling this decay rate with a PID method. To the best of our knowledge, this is the first discussion of a probability-dependent gradient decay rate in relation to model calibration.
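To make the control loop concrete, here is a minimal textbook discrete PID sketch driving a gradient decay coefficient $\beta$ from a calibration-error signal; the gains, the sign convention, and the example RCE values are placeholders, not the paper's actual settings:

```python
class PID:
    """Standard discrete PID controller (illustrative only)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err             # I term accumulates past error
        deriv = err - self.prev_err      # D term reacts to error change
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=0.5, ki=0.05, kd=0.1)
beta = 0.1  # probability-dependent gradient decay coefficient
# Here a positive error stands for over-confidence, nudging beta up
# (stronger decay); this sign convention is an assumption.
for rce in [0.3, 0.2, 0.1]:
    beta = max(1e-3, beta + pid.step(rce))
```

The point of the P/I/D split is that the integral term removes persistent miscalibration while the derivative term damps oscillation of $\beta$, which is consistent with the robustness to gain settings reported in the ablations.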
* The temperature coefficient and the gradient decay rate have distinct effects on model optimization. As the temperature coefficient increases, the variability in confidence levels among different samples decreases, causing their confidence levels to converge. Conversely, a smaller temperature coefficient results in greater variation in confidence levels among samples. Temperature has been intensively studied in areas such as model calibration and knowledge distillation, and there are many well-established optimization methods. Our method does not address the optimization of the temperature coefficient. We appreciate the reviewer for highlighting this important issue. We will jointly consider the effects of the temperature and the probability-dependent decay rate on model calibration in future work.
* We thank the reviewer for the insightful question. We have tested the Adam optimizer to assess its ability to provide a relatively stable gradient for model parameter optimization. Below is a demo experiment. Our experiments were conducted on CIFAR-10 and CIFAR-100 using ResNet and VGG networks. The results indicate that, while Adam offers a stable gradient, its accuracy is lower compared to the SGD optimizer with dynamic gradient decay coefficients and our proposed gradient compensation approach. For example, Adam achieved only 63.5% accuracy on CIFAR-100 with ResNet35, which is significantly lower than the baseline accuracy of 73.8%.
| SGD | Adam | PID Controller Approach | Gradient Compensation | Accuracy | ECE | AdaECE |
| --------- | --------- | ----------------------- | --------------------- | --------| ----- | ------ |
| ✓ | - | - | -| 73.8% | 0.172 | 0.172 |
| ✓ |- | ✓| - | 72.5% | 0.022 | 0.023 |
| - | ✓| ✓| - | 63.5% | 0.023 | 0.024 |
| ✓ | - | ✓| ✓| 74.7% | 0.012 | 0.013 |
A key difference arises in the baseline case handled by Adam. In our proposed PID controller method, which adjusts the hyperparameter $\beta$ during model calibration, the loss function is dynamic. While Adam retains previous gradient information, this can conflict with the current optimization direction. In contrast, our compensation method only modifies the learning rate and retains gradient information pertinent to the current loss function. This may explain why the Adam optimizer does not yield better results.
* The current PID settings are based on a trial-and-error approach. However, it is important to note that varying the PID setting does not significantly affect model accuracy or calibration, as demonstrated by the ablation experiments with different P/I/D settings shown in Figure 5. Different PID settings provide effective calibration results and maintain model accuracy. In summary, our method is robust to variations in P/I/D settings.
* The hyperparameter $\beta$ is designed to correlate with the overall confidence level of the samples, as illustrated in Figure 2, Figures 6-9 in the Appendix, and Tables 4-6. Additionally, Equation (5) shows that the cross-entropy loss function in conjunction with Softmax can be approximated by a max function, where $\beta$ serves as a threshold.
**Confidence Distribution of Samples with Different Gradient Decay of Three-Layer FCNN on MNIST**
The `#` indicates the number of samples that belong to the confidence interval.
| Gradient Decay Factor |1|0.5|0.1|0.01|0.001|
|-----------------------|-----|-----|-----|------|-------|
| #${p_c}\le 0.2$|903|828|1105|1325|2375|
| #$0.2<{p_c}\le0.4$|454|206|119|91|142|
| #$0.4<{p_c}\le0.6$|528|245|132|92|116 |
| #$0.6<{p_c}\le0.8$|1291|484|191|100|193|
| #$0.8<{p_c}\le1$|56824|58237|58453|58362|57147|
We will also give an example to illustrate this point. We report the confidence distribution of the MNIST training set in a fully connected network under varying $\beta$. Our experiments reveal that a smaller $\beta$ results in a higher overall confidence level in the distribution, e.g., $\beta=0.5$. However, when $\beta$ exceeds a certain threshold ($\beta \geq 0.1$), the number of high-confidence samples decreases. This phenomenon occurs because a small decay rate creates a curriculum learning sequence, where the confidence of low-confidence samples only increases once the confidence of high-confidence samples surpasses a soft threshold. If this threshold is too high, the confidence of high-confidence samples continues to increase, while low-confidence samples fail to achieve a higher confidence level.
We also provide another chart in the attached PDF response, which gives more details about this statement.
* We fine-tuned the hyperparameters for all the compared methods, including the learning rate for post-processing techniques, and selected the best results from these experiments.
* We thank the reviewer for the careful review and for these minor questions and suggestions! We will address all minor issues raised and revise the presentation accordingly.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed rebuttal, which addresses my concern. I have decided to raise my score.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for your time and valuable feedback. We are glad that our responses addressed your questions. | Summary: The paper presents an approach to ensure consistent optimisation of both model accuracy and calibration. The authors used a PID-based controller for the task. The PID-based controller adjusts the gradient decay rate, which ultimately optimises the neural network by gradient descent. Further ablation studies have shown that the PID controller was effective in controlling the accuracy and ECE of the model.
Strengths: 1. The paper is written in a very good format, covering every aspect.
2. Proper equations and graphs are provided for a better understanding of the paper.
3. The method is compared against other post-processing calibration methods, and it shows better results than them.
4. The ablation study further provides a deeper understanding of how varying the controller's parameters affects performance.
Weaknesses: 1. The combination of PID controllers with gradient decay for model calibration appears to be an incremental improvement rather than a groundbreaking innovation. The paper does not sufficiently differentiate its contribution from existing methods using PID controllers in optimization tasks.
2. The paper claims to address both over-confidence and under-confidence in model predictions. However, the analysis of how well the proposed method balances these two aspects is not thoroughly presented. The paper should include more detailed experiments and discussions on this balance.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. How sensitive is the proposed method to the choice of PID controller parameters? Could you provide a sensitivity analysis or guidelines for selecting these parameters?
2. Why were CIFAR-10/100 and Tiny-ImageNet chosen for the experiments? Have you considered testing on other datasets, especially those from different domains, to demonstrate the generalizability of your method?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: 1. The study does not address the scalability of the proposed method to larger and more complex datasets or models. The computational overhead introduced by the PID controller and adaptive learning rate adjustments is not discussed.
2. The paper suggests that the proposed method can prevent overconfidence, but it does not adequately address how it handles overfitting to the training data. A more detailed analysis of overfitting prevention mechanisms is needed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Answers to Questions**
* In the experiments detailed in Sections 4.1 and 4.2, the hyperparameters of the PID controllers were determined through trial-and-error. In Section 4.3, we present ablation experiments that explore various P/I/D hyperparameters in the PID controller. As shown in Figure 5, the results from these ablation experiments demonstrate that the calibration results and model accuracy are insensitive to hyperparameter settings. Different hyperparameter settings in the PID controller yield better model calibration results and produce different outcomes compared to models without PID controller. In summary, our method is robust to variations in P/I/D settings as shown in Section 4.3.
* In the original submission, we validated our approach on image classification tasks using CIFAR-10, CIFAR-100, SVHN, 102 Flowers, and Tiny-ImageNet. These datasets are widely used in experiments and their results are representative. As suggested by the reviewer, we applied our proposed PID controller approach to object detection tasks on VisDrone and COCO, with YOLOv3 as the object detection model. The confidence of the object classification was calibrated, and the experimental results validate the effectiveness of our approach.
|Dataset|Model|Metric|Uncalibrated|Hist. Bin.|Temp. Scaling|TS-AvUC|Ours|
|:----------|:------:|:------:|-------------:|-----------:|--------------:|--------:|------:|
|VisDrone|YOLOv3|ECE|0.101|0.084|0.086|0.075|0.043|
|||MCE|0.231 | 0.365 |0.180|0.165|0.142|
|||AdaECE |0.100|-|0.089|0.079|0.046|
|COCO|YOLOv3|ECE|0.121|0.101|0.093|0.091|0.081|
|||MCE|0.236|0.169|0.165|0.154|0.184|
|||AdaECE|0.126|-|0.096|0.092|0.082|
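For reference, the ECE values reported in the table above follow the standard binned definition of expected calibration error; the sketch below is illustrative only, not the exact evaluation code used in the experiments:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """ECE: bin predictions by confidence, then average |accuracy - mean
    confidence| over the bins, weighted by the fraction of samples per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()       # empirical accuracy in the bin
            conf = confidences[mask].mean()  # average confidence in the bin
            ece += (mask.sum() / n) * abs(acc - conf)
    return ece
```

AdaECE replaces the equal-width bins with equal-mass (adaptive) bins, and MCE takes the maximum rather than the weighted average of the per-bin gaps.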
**Rebuttal to Weakness**
**_The motivation and novelty of our work:_** In response to the reviewer's comments in the "Weaknesses" section, we would like to further explain the motivation and novelty of our approach and clarify how it differs from other PID-based optimization studies.
Most previous PID methods are applied to model optimization, such as gradient optimization. Moreover, many optimization techniques, like momentum and Adam, share similarities with PID control concepts. However, the problem we aim to address is not only related to optimization or expediting hyperparameter tuning but rather to ensuring model calibration during the optimization. To the best of our knowledge, no existing work has employed a PID controller approach for model calibration, making our work distinct from prior PID-based optimization strategies. We are tackling a different problem by focusing on model calibration.
In addition to utilizing a PID control method, the primary novelty of our work is the introduction of a probability-dependent gradient decay coefficient. In this paper, we have verified the relationship between this coefficient and model calibration both deductively and empirically, as demonstrated in Figures 1-2, Equations (4-9), and the experiments detailed in the Appendix. This approach provides a new perspective on model calibration, shedding light on the calibration issues prevalent in modern neural networks.
Overall, this work introduces a novel approach to calibration that significantly differs from previous methods. As discussed in the paper, the connection between the gradient decay rate and model calibration offers a new explanation for the overconfidence of modern models. To the best of our knowledge, this is the first discussion of the probability-dependent gradient decay rate in relation to model calibration.
**_How is calibration implemented using a PID controller approach?_** We introduce a probability-dependent gradient decay coefficient into the loss function. This coefficient regulates the rate at which the gradient of a sample decays as the confidence level increases, as illustrated in Figure 1 and Equations (4-9). Figures 2, 6-9 in the Appendix, and Tables 4-6 demonstrate that the rate of gradient decay negatively correlates with the confidence distribution of sample pairs passing through the model. The experiments described above clarify the different phenomena of overconfidence and underconfidence exhibited by model confidence when various rates of gradient decay are chosen. Please refer to Figure 2 and Figures 6-9. This observation motivated the use of a PID control method to manage the confidence levels of samples.
During model optimization, we monitor the average confidence level of the dataset in real-time, thereby ensuring confidence calibration through PID control. Our work calibrates model by managing a single probability-dependent gradient decay rate and presents an innovative approach to this problem.
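The control loop described above can be sketched with a textbook discrete PID update; the variable names are illustrative and the exact update rule used in the paper may differ:

```python
class PIDController:
    """Discrete PID controller. In the calibration setting described above,
    `error` would be the relative calibration error measured on a validation
    set, and the output would nudge the gradient decay coefficient beta."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        self.integral += error  # accumulated (integral) term
        derivative = 0.0 if self.prev_error is None else error - self.prev_error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Each validation pass one would then apply something like `beta += pid.update(relative_calibration_error)` before continuing optimization.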
**Rebuttal to Limitation**
* **_Computational overhead of our approach:_** In our approach, there is just a single hyperparameter that needs to be tuned. This adjustment comes from the computation of the relative calibration error with respect to the validation set. Therefore, compared to the baseline optimization strategy, the additional computation is very small, which is negligible for the whole optimization process.
* **_The scalability of the proposed method:_** In our original experiments, we evaluated our method on CIFAR-10, CIFAR-100, SVHN, FLOWER102, and Tiny-ImageNet datasets across four models and nine different methods. The publicly available datasets described above are representative. In response to the reviewer's suggestions and concerns, we applied our method to object detection tasks and demonstrated its effectiveness from different domains. See "Answers to Questions."
* **_Overfitting to the training data:_** Our paper focuses on the model calibration problem; overfitting to the training set is outside its scope. However, we will examine the impact of our proposed dynamic adjustment of the gradient decay rate on overfitting in future work. We thank the reviewer for the suggestion.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Thank you for your detailed and thoughtful responses to my comments and questions. I appreciate the additional experiments and explanations you provided to address my concerns. Below, I offer some final thoughts on your rebuttal:
Sensitivity to PID Controller Parameters:
I appreciate the thorough explanation and the ablation studies provided in Section 4.3. It's reassuring to know that the method demonstrates robustness to variations in the PID controller parameters. Including these details in the final paper will be valuable for readers who may want to implement your approach in various contexts.
Choice of Datasets:
Thank you for conducting additional experiments on VisDrone and COCO datasets. These results further strengthen your paper by demonstrating the generalizability of your approach across different domains. Including these results in the final submission will undoubtedly enhance the paper's impact.
Motivation and Novelty:
Your clarification on the novelty of your work, especially the focus on model calibration rather than mere optimization, is well received. The distinction you draw between your approach and previous PID-based methods is clear, and I believe this will help in positioning your work as a significant contribution to the field. Highlighting these points more prominently in the paper, particularly in the introduction and conclusion, will help readers better understand the uniqueness of your contribution.
Computational Overhead and Scalability:
Your explanation that the computational overhead introduced by your method is minimal and that it scales well with larger datasets addresses my concern effectively. Including a brief discussion on this in the final paper, particularly in the limitations or methodology section, would be beneficial for readers concerned with the practical implementation of your approach.
Overfitting Concerns:
I understand that your focus was primarily on model calibration, and I appreciate your acknowledgment of the potential impact on overfitting. I encourage you to explore this aspect in future work, as understanding how your method interacts with overfitting dynamics could further solidify its utility in practical applications.
In conclusion, I believe your paper has made significant strides in addressing my initial concerns, and the additional work you've done to clarify and expand on your methodology is commendable. I look forward to seeing these updates reflected in the final version of your paper, and I am raising my score accordingly.
---
Rebuttal Comment 1.2:
Title: Thank you
Comment: Thank you for your time and valuable feedback. We are glad that our responses addressed your questions. | Summary: The authors propose a method for improving the calibration of neural networks, which are known to be overconfident in their predictions. Their method is based on modifying the softmax function to include a tunable hyperparameter, which they call the gradient decay coefficient, that is controlled throughout optimization by assessing the model's calibration on a validation set and tuning the coefficient with a PID controller. They also propose an adaptive learning rate scheduler to ensure that changing the gradient decay coefficient doesn't result in vanishing gradients. The authors conduct empirical experiments to validate the effectiveness of their PID-controller-based approach compared to other calibration methods.
Strengths: The issue of calibration and the mitigation of overconfident predictions by neural networks is clearly important, and the authors have proposed what strikes me as a sensible method to tackle it. PID controllers are widely used in industry, but remain underutilized in ML applications, so their use by the authors is refreshing. The empirical experiments conducted by the authors seem adequate to demonstrate their claims.
Weaknesses: The definition of the gradient decay coefficient is not well motivated: it is not clear to me why this particular modification of the softmax is to be preferred over others. It is likewise not clear to me what motivates the specific form of the learning rate scheduler in formula (15). The authors acknowledge that their method does not currently have theoretical justification, which is understandable given the general lack of solid theory for deep learning, but given this situation it is hard to assess whether the proposed method is likely to succeed in more general settings.
Technical Quality: 3
Clarity: 2
Questions for Authors: Did the authors try different optimizers for the model itself, other than SGD (e.g Adam)? Perhaps using a different optimizer would make the learning rate scheduler unnecessary?
"However, post-processing calibration methods rely on an optimized independent output-probability mapping, which doesn’t alter the optimization process of the original model itself. Consequently, these methods can solely refine the probability distribution of the model output."- Why is this a disadvantage?
What are the computational requirements of the proposed method compared to other calibration methods- in particular does the tuning of the gradient decay coefficient greatly slow down optimization?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Answers to Questions**
* We appreciate the reviewer's technical comment. We had previously considered using the Adam optimizer instead of SGD to achieve a stable gradient. In response to your question, we show additional ablation studies to evaluate the optimization performance when Adam replaces SGD. Our experimental results indicate that Adam can indeed provide a more stable gradient and calibration performance, particularly in conjunction with our PID controller approach for model calibration. However, it is notable that Adam results in reduced accuracy, achieving only 63.5% on CIFAR-100 with ResNet35, significantly lower than the baseline accuracy of 73.8%.
| SGD | Adam | PID Controller Approach | Gradient Compensation | Accuracy | ECE | AdaECE |
| --------- | --------- | ----------------------- | --------------------- | --------| ----- | ------ |
| ✓ | - | - | - | 73.8% | 0.172 | 0.172 |
| ✓ | - | ✓ | - | 72.5% | 0.022 | 0.023 |
| - | ✓ | ✓ | - | 63.5% | 0.023 | 0.024 |
| ✓ | - | ✓ | ✓ | 74.7% | 0.012 | 0.013 |
A key difference arises in how Adam handles our dynamic objective. In our proposed PID controller method, which adjusts the hyperparameter $\beta$ during model calibration, the loss function is dynamic. While Adam retains previous gradient information, that information can conflict with the current gradient direction because the optimization objective changes over time. In contrast, our compensation method only modifies the learning rate and retains the gradient direction pertinent to the current loss function. This may explain why the Adam optimizer does not yield better results.
* Post-processing calibration methods necessitate the creation of an additional output-probability mapping $z \to p$. Although this approach does not alter the decisions made in the classification task, it increases the overall complexity beyond the base model $x \to z$. The resulting input-probability mapping $x \to p$ in post-processing calibration approaches is more complex than that of training-based calibration methods.
* Our proposed method does not require significantly more computational resources compared to the baseline optimization strategy. It only involves adjusting the $\beta$ parameter in the loss function during the optimization process, and this adjustment is controlled by the PID of the RCE based on the validation set. Besides, through extensive empirical experiments, we find that the dynamic tuning mechanism of the gradient decay coefficient does not significantly impact the optimization of accuracy in the model. In other words, the dynamic gradient decay rate through PID control method for model calibration does not adversely affect model convergence regarding accuracy with proposed gradient compensation. Moreover, the PID parameter settings are not sensitive to the results, as shown in Figure 5.
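As a concrete instance of the post-processing mapping $z \to p$ discussed above, temperature scaling divides the logits by a single scalar $T$ fit on a validation set; a minimal sketch of this standard method (shown for illustration, not part of our approach):

```python
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    """Map logits z to probabilities p = softmax(z / T). The argmax, and
    hence the classifier's decision, is unchanged for any T > 0; only the
    sharpness of the distribution (i.e., the confidence) changes."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()
```

With $T > 1$ the distribution is softened (lower confidence), with $T < 1$ it is sharpened; $T$ is typically chosen to minimize negative log-likelihood on the validation set.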
**Motivation for gradient decay coefficient**
We would like to provide an additional explanation to clarify for the reviewer the motivation for the gradient decay coefficient we have proposed.
The probability-dependent gradient decay coefficient indicates the rate at which a sample's gradient magnitude decreases as the confidence of the softmax function increases (see Figure 1). A large gradient decay coefficient means that as the model's confidence increases, its gradient decreases more rapidly. Conversely, a small gradient decay coefficient implies that the gradient's magnitude decreases more slowly as confidence increases. This allows the sample to reach higher confidence levels at smaller gradient decay rates. As shown in Figure 2, Figures 6-9 in the Appendix, and Tables 4-6, the gradient decay rate $\beta$ exhibits a negative correlation with the confidence level for pairs of samples passing through the model. Higher gradient decay rates correlate with lower average confidence levels, motivating the use of a PID control method to manage sample confidence levels.
Additionally, Equation (5) shows that the hyperparameter $\beta$ can be regarded as a soft margin in the cross-entropy loss with softmax, allowing optimization of the samples toward a soft confidence threshold. The approximation of the max function in Equation (5) demonstrates how this threshold compares across the different class outputs. There is a compelling relationship between this hyperparameter $\beta$ and the model's confidence distribution. While this connection cannot be proven theoretically, it is supported by empirical experiments and logical reasoning. To the best of our knowledge, this is the first discussion of the probability-dependent gradient decay rate in relation to model calibration.
Based on the above results, we adopt this probability-dependent gradient decay rate as the controlled variable of the controller for model calibration.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed response. I have decided to leave my score unchanged. | Rebuttal 1:
Rebuttal: Some supplementary Figures and Tables.
Pdf: /pdf/b8e49d92a74d10b3f103a6bbe1602c702055fe92.pdf | NeurIPS_2024_submissions_huggingface | 2024 |
AdjointDEIS: Efficient Gradients for Diffusion Models | Accept (poster) | Summary: This paper presents a way to optimize the initial seed or control parameters of a diffusion network so as to minimize a differentiable loss applied to the final samples drawn from a discretization of the diffusion SDE / probability flow ODE. Put differently, the idea is to combine the white-box integration trick from the DPM/DPM++ papers with the method from the neural ODE paper for differentiating through an ODE solver.
In addition, the paper presents methods for backpropagating through an SDE.
In terms of experiments the method is demonstrated on a face morphing application.
Strengths: The paper is well written and the derivation is mostly clear. I did not go through section 4 for the derivation of the SDE Adjoint Diffusion SDE too closely since I am not very familiar with the Stratonovich stochastic integral.
The paper is also fairly comprehensive since it presents the derivations for both the probability flow formulation as well as the SDE formulation.
Overall I think the problem tackled in the paper is interesting and the solutions are elegant.
Weaknesses: The experimental results seem to be weak.
This paper presents a way to control the final outcome of a diffusion sampler using a differentiable loss, but the only application demonstrated in the paper is face morphing, where the NFE are not controlled across the baselines. Some of the baselines have fewer NFE, and it is not possible to say conclusively whether they underperform because they are worse methods or simply because they had a smaller computational budget.
AdjointDEIS-2M is not even implemented or compared.
Also, far more compelling demonstrations of these new algorithms could be shown. For example, a number of ControlNet-type applications could have been tried, such as creating an image that matches a sketch of the desired output.
Technical Quality: 4
Clarity: 3
Questions for Authors: Mainly I would like to understand a little bit more about the rationale for selecting the face morph application selection, and whether the authors think the empirical comparisons in Table 1 are apples-to-apples given the different NFE for the methods.
Some more minor comments
- eq (3.1) has a factor of 2 missing.
- around eq(2.5) it might be good to explicitly clarify that while VP typically uses alpha^2 + sigma^2 = 1, you are not making that assumption here.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes limitations have been addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer 1FE9 for the helpful questions and interest in our work. We are happy the reviewer found the paper clear and comprehensive tackling an interesting problem. We address the questions raised by the reviewer below. We hope our responses help address the questions and are happy to answer any additional questions.
> Mainly I would like to understand a little bit more about the rationale for selecting the face morph application selection, and whether the authors think the empirical comparisons in Table 1 are apples-to-apples given the different NFE for the methods.
This work actually originated from research we were conducting on the face morphing problem. A popular technique in that area is to use identity guidance for GAN-based models (Zhang et al., "MIPGAN—Generating Strong and High Quality Morphing Attacks Using Identity Prior Driven GAN," in IEEE Transactions on Biometrics, Behavior, and Identity Science, 2021) wherein the optimal latent vector $\mathbf{z}^*$ is found by minimizing the identity loss defined between the generated morphed face $g(\mathbf{z})$ and the original bona fide faces $\mathbf{x}^{(a)}, \mathbf{x}^{(b)}$ such that
$$\mathbf{z}^* = \mathop{\rm arg min}_{\mathbf{z} \in \mathcal{Z}} \mathcal{L}(g(\mathbf{z}), \mathbf{x}^{(a)}, \mathbf{x}^{(b)})$$
This is done using a gradient descent algorithm and thus requires $\frac{\partial \mathcal{L}(g(\mathbf{z}), \mathbf{x}^{(a)}, \mathbf{x}^{(b)}))}{\partial \mathbf{z}}$ which can be found through simple automatic differentiation tools.
We were seeking to perform a similar guided generation procedure, but with diffusion models.
It is during this process that we developed the AdjointDEIS algorithm.
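The identity-guided latent optimization described above can be sketched as a plain gradient-descent loop; the generator and loss below are toy stand-ins (a linear map with a quadratic loss), not the actual face-morphing pipeline:

```python
import numpy as np

def optimize_latent(g_matrix, target, z0, lr=0.1, steps=200):
    """Minimize L(g(z)) = ||g(z) - target||^2 over the latent z by gradient
    descent, where g(z) = g_matrix @ z is a toy stand-in for the generator."""
    z = z0.copy()
    for _ in range(steps):
        residual = g_matrix @ z - target    # g(z) - target
        grad = 2.0 * g_matrix.T @ residual  # analytic dL/dz for this toy loss
        z -= lr * grad
    return z
```

In practice the gradient $\partial \mathcal{L} / \partial \mathbf{z}$ comes from differentiating through the full generator, which is exactly the step that AdjointDEIS makes tractable for diffusion models.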
Beyond the AdjointDEIS algorithm developed out of this work on guided generation for the face morphing problem, we are also interested in an experimental illustration of AdjointDEIS, comparing it against the original **Di**ffusion **M**orphs (**DiM**) and a recent identity-guided extension called Morph-PIPE.
We believe the comparisons in Table 1 are fair despite the different NFE numbers. We first present a brief summary of the different DiM algorithms with details on the NFE calculation:
* **DiM-A** The original DiM algorithm, the model uses 250 NFEs to encode the bona fide images $\mathbf{x}_0^{(a)}, \mathbf{x}_0^{(b)}$ into the initial noise $\mathbf{x}_T^{(a)}, \mathbf{x}_T^{(b)}$ and conditional information $\mathbf{z}\_{a}, \mathbf{z}\_{b}$. The model then takes 100 NFEs to generated the morphed image from the morphed noise and conditionals $\mathbf{x}\_T^{(ab)}$ and $\mathbf{z}\_{ab}$.
* **Fast-DiM** An improvement on DiM-A that uses higher-order ODE solvers to reduce the NFE. Fast-DiM uses 250 NFEs for encoding and 50 NFEs for sampling.
* **Morph-PIPE** An improvement on DiM-A that generates 21 possible morphs using a blend of 21 different interpolations between $\mathbf{x}_T^{(a)}$ and $\mathbf{x}_T^{(b)}$, likewise with the conditional information. Due to the 21 candidate morphs, the NFE for sampling is $21 \cdot 100 = 2100$. The morph which minimizes the identity loss is then chosen.
* **DiM + AdjointDEIS** We apply AdjointDEIS-1 to the original DiM algorithm. We used 250 NFEs for encoding. Due to the optimization procedure, we were able to reduce the sampling NFE during each iteration of the optimization procedure to 20. We used 50 optimization steps, resulting in a total sampling NFE of $20 \cdot 50 = 1000$.
* **DiM + SDE-AdjointDEIS** We apply the SDE-AdjointDEIS-1 solver to the original DiM algorithm. We used 250 NFEs for encoding. As prior work has noted, due to the intricacies of discretizing the diffusion SDE, numerical SDE solvers often take more steps than numerical ODE solvers to achieve good performance. We used 50 sampling steps and 10 optimization steps, for 500 total sampling NFEs.
As discussed in the Fast-DiM paper, increasing the number of sampling/encoding steps, and thereby NFE, for DiM-A, Fast-DiM, or Morph-PIPE would not meaningfully increase the effectiveness of the morphing attack in terms of MMPMR. While the higher NFE of the AdjointDEIS methods does indicate an increased computation cost over that of DiM or Fast-DiM, simply increasing the NFE for DiM or Fast-DiM would not give them equivalent performance to AdjointDEIS. Our higher NFE is due to the computation of the adjoint diffusion ODE/SDE.
We would like to emphasize that compared to Morph-PIPE, the other identity guided method, our approaches achieve *superior performance with fewer NFE*.
> Some more minor comments
> * eq (3.1) has a factor of 2 missing.
> * around eq(2.5) it might be good to explicitly clarify that while VP typically uses alpha^2 + sigma^2 = 1 ibut you are not making that assumption here.
If accepted, both of these changes will be incorporated into the camera-ready version. | Summary: In this paper, the authors proposed an accelerated method for differentials of pretrained diffusion models with respect to its latent valuables or parameters by making use of
1. the Taylor expansion of the log-SNR parameter, and
2. the exact integral formula of the derivatives related to the probability flow ODE, which they call Adjoint Diffusion ODE.
They applied this method to morphing of two generated images. As they summarized in Table 1, their results achieved good performance.
For more details:
- The setting
- Data diffusion SDE $d\mathbf{x}_t = f(t) \mathbf{x}_t \, dt + g(t) \, d\mathbf{w}_t$
- Its integral $q(\mathbf{x}_t|\mathbf{x}_0) = \mathcal{N}(\mathbf{x}_t|\alpha_t \mathbf{x}_0, \sigma_t^2 \mathbf{I})$ where $f(t), g(t)$ are related to $\alpha_t, \sigma_t$ by Eq.(2.5).
- A pretrained diffusion model $\epsilon_\theta(\mathbf{x},\mathbf{z}, t)$
- Its generating process: the probability flow ODE $\frac{d\mathbf{x}\_t}{dt} = f(t)\mathbf{x}\_t + \frac{g(t)^2}{2 \sigma_t}\epsilon_\theta(\mathbf{x}, \mathbf{z}, t)$
- They name the velocity (the RHS of the ODE) as $\mathbf{h}_\theta(\mathbf{x}, \mathbf{z}, t)$
- The purpose
- Calculation of gradients of $\mathcal{L}(\mathbf{x}_0(\mathbf{x}_T, \mathbf{z}, \theta))$ with respect to $\mathbf{z}$ or $\theta$
- which are defined by ODEs in Eq.(3.3) and (3.4).
- The method
- One can recover the derivatives based on a vector $\hat{\mathbf{a}}_t = \frac{\partial \mathcal{L}}{\partial \mathbf{y}_t}$ where $\mathbf{y}_t = \alpha_t^{-1}\mathbf{x}_t$.
- For $t<s$, one can get $\hat{\mathbf{a}}_s = \hat{\mathbf{a}}_t + \int\_{\lambda_t}^{\lambda_s} ... d\lambda$ (Proposition 3.1).
- Applying the Taylor expansion of the integrand in $\lambda$ around $\lambda_t$ and dropping the $O((\lambda_s - \lambda_t)^{k+1})$ term, we get a concrete relation between $\hat{\mathbf{a}}_s$ and $\hat{\mathbf{a}}_t$ (Eq.(3.11)),
- which provides explicit (approximated) formulas for the derivatives.
- The empirical result
- They apply the proposed method to morphing (Section 5 Experiments).
Strengths: - **quality(+)** The explanations are well written. The authors provide concrete proofs on their theorems and experiments with real data.
Weaknesses: - **significance(-)** It is difficult to understand in which respect the proposed method is superior.
- **clarity(-)** I am almost convinced by the explanations, however there is a concern as explained in Questions.
Technical Quality: 2
Clarity: 2
Questions for Authors: - for significance improvements:
- **Q1(significance)** In the theoretical sections, the authors omit the higher-order terms of the Taylor expansion, so the resulting method is only an approximation. Why does it nonetheless achieve good performance in Table 1 compared to other methods?
- for clarity improvements:
- **Q1(clarity)** In the Adjoint Empirical Diffusion SDE case, the integral results in Eq.(4.12) and (4.13) look very similar to the ODE cases in Eq.(3.13) and (3.14). The only difference seems to be the (doubled) 2nd term. Why does this similarity arise?
- **Q2(clarity)** In the beginning of Section 4, the authors wrote that their motivation for analyzing the SDE case is that it may improve the results; however, it turns out that the SDE version works worse than the ODE case. Why does this happen?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: They leave comments on
1. Lack of conditional generation in numerical experiments
2. No application of parameter differentiation in numerical experiments
in the final page.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer oW3N for the detailed questions and interest in our work. We are glad that the reviewer thought our work was well written. We address the questions raised by the reviewer below. We would be happy to provide additional clarification or to answer further questions about our work.
> * for significance improvements:
> * **Q1(significance)** In the theoretical sections, the authors apply omitting the higher order terms of the Taylor expansion, so the resultant method should be just an approximation. Why is it good performance in Table 1 compared to other methods nonetheless the method is just an approximation?
The reviewer is correct that the AdjointDEIS-1 and SDE-AdjointDEIS-1 solvers are numerical approximations of the true adjoint diffusion ODE/SDE. However, we believe these approximations are still quite useful even with large step sizes. The local truncation error for Eq. (3.10) is $\mathcal{O}(h^{k+1})$; for the first-order solver this error is $\mathcal{O}(h^2)$. Intuitively, as the number of discretization steps $N \to \infty$, the step size $h \to 0$ and the error of the first-order solver vanishes. In practice we found that $N = 20$ steps for the adjoint ODE and $N = 50$ steps for the adjoint SDE worked well. As other works have noted, numerical SDE solvers take more steps due to the difficulty of discretizing the stochastic integral.
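The relationship between the step count $N$ (hence step size $h$) and the error of a first-order method can be illustrated with forward Euler on a scalar ODE; this is a generic illustration, not the AdjointDEIS solver itself:

```python
import math

def euler_solve(f, x0, t0, t1, n_steps):
    """Forward Euler on dx/dt = f(t, x). The global error of this
    first-order method is O(h), so doubling n_steps roughly halves it."""
    h = (t1 - t0) / n_steps
    t, x = t0, x0
    for _ in range(n_steps):
        x = x + h * f(t, x)
        t += h
    return x

# Error at t=1 for dx/dt = -x, x(0) = 1, whose exact solution is e^{-t}.
def err(n_steps):
    return abs(euler_solve(lambda t, x: -x, 1.0, 0.0, 1.0, n_steps) - math.exp(-1.0))
```

Doubling the number of steps roughly halves the error, consistent with the first-order convergence claimed above; a $k$-th order solver would shrink it by a factor of $2^k$ instead.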
The improvement compared to previous works of DiM, Fast-DiM, and Morph-PIPE in Table 1 can be attributed to the identity guided nature of DiM + AdjointDEIS.
While there are discretization errors, we posit that these errors are small enough that the estimated gradient is still useful for identity guidance.
One of the strengths of our proposed work is that the adjoint diffusion ODE solver is decoupled from the sampling procedure for the diffusion ODE, that means we can choose different numerical precision levels for sampling and estimating the gradients.
We note that while Morph-PIPE does identity guidance, it does so through a brute force search over a space of 21 possible morphs whereas we perform identity guidance through estimated gradients.
> * for clarity improvements:
> * **Q1(clarity)** In the Adjoint Empirical Diffusion SDE cases, the integral results in Eq.(4.12) and (4.13) look very similar to ODE cases in Eq.(3.13) and (3.14). The only difference seem to be (doubled) 2nd term. Why such similarity happens?
This similarity is actually one of our key insights! We observed that the adjoint diffusion SDE actually simplifies to an ODE.
The factor of 2 is present because of the difference between the PF ODE
$$ \frac{\mathrm d \mathbf{x}\_t}{\mathrm dt} = f(t)\mathbf{x}\_t + \frac{g^2(t)}{2\sigma\_t}\boldsymbol\epsilon\_\theta(\mathbf{x}\_t, \mathbf{z}, t) $$
and the diffusion SDE
$$ \mathrm d \mathbf{x}\_t= \bigg [f(t)\mathbf{x}\_t + \frac{g^2(t)}{\sigma\_t}\boldsymbol\epsilon\_\theta(\mathbf{x}\_t, \mathbf{z}, t) \bigg ]\mathrm dt + g(t) \circ \mathrm d \tilde{\mathbf{w}}\_t $$
where that factor 2 differs between the ODE and drift term of the SDE.
This means that we can use the *exact* same solvers for the adjoint ODE as the adjoint SDE with the *only* exception being the factor of 2!
The only caveat being the underlying state $\mathbf{x}\_t$ still evolves with the backwards flow, Eq. (4.6), and uses a different solver.
> **Q2(clarity)** In the beginning of section 4, the authors wrote their motivation to analyze SDE cases as it possibly improve the result, however, it turns to be that the SDE version works worse than the ODE cases. Why does it happens?
While recent work like Nie et al., "The Blessing of Randomness: SDE Beats ODE in General Diffusion-based Image Editing" (ICLR 2024) shows the promise of SDEs in image editing, the face morphing problem is unique.
The goal is to fool a Face Recognition (FR) system into accepting one morphed face image as belonging to *two* identities.
Recent work into the face morphing problem has shown that visual fidelity and morphing performance are not necessarily correlated, see the DiM and Fast-DiM paper.
Upon visual inspection, the morphs produced using the adjoint diffusion SDE solver look more realistic and less smoothed to our eyes.
In terms of morphing performance, measured by MMPMR, we find it to be about the same as the adjoint diffusion ODE morphs.
As such we would argue that the SDE morphs are actually *slightly* better than ODE morphs as they have comparable MMPMR performance and better visual fidelity.
If accepted, we plan on making these contributions more clear in the camera-ready version.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clear explanations.
On **Q1(significance)**, I understand a little about the reason for good performance that is related to the capability of direct gradient estimates based on proposed methods in this paper. I think it makes sense.
On **Q1(clarity)**, I understood the reason for the appearance of the 2. It seems to stem from the same source as the 2 in the denominator of the probability flow ODE.
On **Q2(clarity)**, I think making the contributions clearer, as the authors proposed, would help, and I believe this will be improved in the camera-ready version.
Overall, I think my concerns are solved by the response, and I would like to raise my score a little. | Summary: The paper proposes an adjoint sensitivity method -- AdjointDEIS -- for efficiently calculating gradients of diffusion SDE models. Current methods for naive backpropagation rely on discrete adjoints which are memory intensive. The authors introduce an approach based on the stochastic adjoint sensitivity method to solve the gradients with respect to initial noise, conditional information, and model parameters. The authors develop custom solvers for the adjoint problems along with the proposed sensitivity method. The methods are validated on a face morphing problem.
Strengths: * **Great Flow/Presentation**: The general presentation of the paper is well done, especially starting with the more straightforward setting of ODEs and then extending it to SDEs.
* **Specialized Solver**: Diffusion models are quite widespread and developing specialized solvers leveraging the structure of the problem demonstrates a practical application.
Weaknesses: * **Limited Experimental Scope**: While the overall analysis and formulations are sound, the overall experimental validation for the method is limited to just a single experiment.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Point 2 in contributions: Aren't the continuous adjoint sensitivity methods used in Neural SDEs already general-purpose enough to handle diffusion problems?
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer jxa4 for the interest in our work and the insightful comments. We are encouraged that the reviewer found the extension from diffusion ODEs to diffusion SDEs to be a strength of the paper. Below we respond to the question raised by the reviewer. We hope this helps address the question and are happy to respond to any further questions.
> Point 2 in contributions: Aren't the continuous adjoint sensitivity methods used in Neural SDEs already general-purpose enough to handle diffusion problems?
We agree with the reviewer that the continuous adjoint sensitivity methods used in Neural SDEs are general-purpose enough to be applied to diffusion SDEs; however, we'd like to emphasize that one of the contributions we listed in the paper is "To the best of our knowledge, AdjointDEIS is the first general back-propagation technique **for** diffusion models that use an SDE solver for sampling".
That is, to the best of our knowledge we are the first group to explore using the method of adjoint sensitivity *explicitly for* diffusion SDEs.
A significant insight in our work comes from exploiting the structure of diffusion ODEs/SDEs, where, in the VP case---this can also be shown in the Variance Exploding (VE) case---the adjoint diffusion SDE simplifies to an ODE!
This insight would not follow straightforwardly from simply applying the Neural SDE methods to diffusion SDEs without inspecting the diffusion term of the Stratonovich SDE.
Another contribution of our work is using the specific structure of the adjoint diffusion ODEs/SDEs to transform the ODE/SDE via exponential integrators into a much simpler formulation, removing the discretization error from the linear term.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing this.
While the overall paper is well written and theoretically sound, I agree with reviewer vPC5 that the experimental scope of this paper is quite limited (just a single experiment). As such, I will keep my recommendation of Accept (score 7). | Summary: AdjointDEIS uses the method of adjoint sensitivity to compute gradients of diffusion models, which is more efficient, less memory intensive, and robust to the injected noise. This work proposes efficient solvers for both the adjoint probability flow ODE and the adjoint diffusion SDE. Experiments demonstrate the efficacy of the solvers on guided generation for a face morphing problem.
Strengths: - AdjointDEIS reduces the adjoint of the PF ODE to the problem of exponential integrators, which enables use of the vast amount of literature in the areas.
- Obtaining accurate gradients wrt initial noise x_T and the latent z can open the door to possibly novel applications of diffusion models.
Weaknesses: - The number of experimental applications of the method seems lacking (just one application, to face morphing attacks, is included). I’d encourage the authors to think about applications to data assimilation or inverse problems, for instance.
- Since the primary contribution of the paper is the solver, having a proof of convergence rates or empirical studies of convergence rate would be a valuable addition to the paper.
Technical Quality: 3
Clarity: 2
Questions for Authors: - It would be interesting to visually plot example forward and backward trajectories of the diffusion processes. What is the typical magnitude of the numerical errors that accumulate during the backward ODE solve?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer vPC5 for the helpful comments and interest in our work. We agree with the reviewer that AdjointDEIS can be applied to more applications like inverse problems. Below we respond to the questions and concerns raised by the reviewer. We hope this addresses the questions and are happy to answer any further questions.
> Since the primary contribution of the paper is solver, having a proof of convergence rates or empirical studies of convergence rate would be valuable additions to the paper.
We fully agree with the reviewer. If accepted, we will include a proof that AdjointDEIS-$k$ is a $k$-th order solver under some mild assumptions ensuring that the vector-Jacobian product is Lipschitz w.r.t. $\mathbf{a}_t$ and that the step size is not significantly large, i.e., $h\_{max} = \max\_{1 \leq j \leq M} h_j = \mathcal{O}(1 / M)$. We provide an excerpt from the proof below. The proof roughly follows the structure of Appendix B.3 of Lu et al., "DPM-Solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps" (NeurIPS, 2022).
*Proof.* The AdjointDEIS-1 solver with higher-order error terms is given by
\begin{equation}
\def\bfa{{\mathbf{a}}}
\def\bfx{{\mathbf{x}}}
\def\bfz{{\mathbf{z}}}
\bfa_{t_{i+1}}=\frac{\alpha_{t_i}}{\alpha_{t_{i+1}}}\bfa_{t_i} + \alpha_{t_i}^2\sigma_{t_{i+1}}(e^{h_i} - 1)\bfa_{t_i}^\top \frac{\partial \boldsymbol\epsilon_\theta(\bfx_{t_i}, \bfz, t_i)}{\partial \bfx_{t_i}} + \mathcal{O}(h_i^2)
\end{equation}
where $k=1, t_i = t, t_{i+1} = s, h_i = \lambda_s - \lambda_t$. Let $\\{\tilde{\mathbf{a}}\_{t\_i}\\}\_{i=1}^M$ denote the sequence computed by AdjointDEIS-$k$.
Since the vector Jacobian product is Lipschitz w.r.t. $\mathbf{a}\_t$, we can write
\begin{equation}
\tilde\bfa_{t_i}^\top \frac{\partial \boldsymbol\epsilon_\theta(\tilde\bfx_{t_i}, \bfz, t_i)}{\partial \tilde\bfx_{t_i}}=\bfa_{t_i}^\top \frac{\partial \boldsymbol\epsilon_\theta(\bfx_{t_i}, \bfz, t_i)}{\partial \bfx_{t_i}} + \mathcal{O}(\tilde\bfa_{t_i} - \bfa_{t_i})
\end{equation}
then
\begin{align}
\def\bfa{{\mathbf{a}}}
\def\bfx{{\mathbf{x}}}
\def\bfz{{\mathbf{z}}}
\tilde\bfa_{t_{i+1}} &= \frac{\alpha_{t_i}}{\alpha_{t_{i+1}}}\tilde\bfa_{t_i} + \alpha_{t_i}^2\sigma_{t_{i+1}}(e^{h_i} - 1)\tilde\bfa_{t_i}^\top \frac{\partial \boldsymbol\epsilon_\theta(\tilde\bfx_{t_i}, \bfz, t_i)}{\partial \tilde\bfx_{t_i}}\\\\
&= \frac{\alpha_{t_i}}{\alpha_{t_{i+1}}}\tilde\bfa_{t_i} + \alpha_{t_i}^2\sigma_{t_{i+1}}(e^{h_i} - 1)\bigg(\bfa_{t_i}^\top \frac{\partial \boldsymbol\epsilon_\theta(\bfx_{t_i}, \bfz, t_i)}{\partial \bfx_{t_i}} + \mathcal{O}(\tilde\bfa_{t_{i}} - \bfa_{t_{i}})\bigg)\\\\
&= \frac{\alpha_{t_i}}{\alpha_{t_{i+1}}}\tilde\bfa_{t_i} + \alpha_{t_i}^2\sigma_{t_{i+1}}(e^{h_i} - 1)\bfa_{t_i}^\top \frac{\partial \boldsymbol\epsilon_\theta(\bfx_{t_i}, \bfz, t_i)}{\partial \bfx_{t_i}} + \mathcal{O}(\tilde\bfa_{t_{i}} - \bfa_{t_{i}})\\\\
&= \bfa_{t_i} + \mathcal{O}(h_{max}^2) + \mathcal{O}(\tilde\bfa_{t_{i}} - \bfa_{t_{i}})
\end{align}
Repeating this argument inductively from the initial condition $\tilde{\mathbf{a}}\_{t\_0} = \mathbf{a}\_{t\_0}$, we obtain
\begin{equation}
\def\bfa{{\mathbf{a}}}
\def\bfx{{\mathbf{x}}}
\def\bfz{{\mathbf{z}}}
\tilde\bfa_{t_M} = \bfa_T + \mathcal{O}(Mh_{max}^2) = \bfa_T + \mathcal{O}(h_{max})
\end{equation}
Thus the global truncation error is $\mathcal{O}(h_{max})$, which completes the proof. Q.E.D.
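As a complementary empirical check of this first-order rate, the sketch below applies an exponential-Euler step (the same exponential-integrator idea, but on a toy scalar ODE with a known closed-form solution rather than our actual adjoint equation; all names are illustrative) and verifies that halving the step size roughly halves the global error:

```python
import math

def exp_euler_solve(f_nonlin, lam, a0, t0, t1, n_steps):
    """First-order exponential-Euler solve of a'(t) = lam*a + f(t, a).

    The linear part is integrated exactly, as with exponential integrators;
    the remaining term is frozen over each step, giving O(h) global error.
    """
    h = (t1 - t0) / n_steps
    a, t = a0, t0
    phi = (math.exp(lam * h) - 1.0) / lam  # exact weight for the frozen term
    for _ in range(n_steps):
        a = math.exp(lam * h) * a + phi * f_nonlin(t, a)
        t += h
    return a

# Toy problem a' = -a + cos(t), a(0) = 0, with closed-form solution.
lam = -1.0
forcing = lambda t, a: math.cos(t)
exact = lambda t: 0.5 * (math.sin(t) + math.cos(t) - math.exp(-t))

err_coarse = abs(exp_euler_solve(forcing, lam, 0.0, 0.0, 2.0, 50) - exact(2.0))
err_fine = abs(exp_euler_solve(forcing, lam, 0.0, 0.0, 2.0, 100) - exact(2.0))
ratio = err_coarse / err_fine  # approaches 2 for a first-order method
```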
> It would be interesting to visually plot example forward and backward trajectories of the diffusion processes. What is the typical magnitude of the numerical errors that accumulate during the backward ODE solve?
The forward and backward trajectories look identical, just with a reversal in time. The sampling trajectory of the probability flow ODE starts with white noise and slowly adds information back to form a clean image. Likewise, the backwards ODE solver for the probability flow ODE takes the clean image and adds noise to it. The typical magnitude is numerically quite small, as the gradients themselves have a small magnitude. From what we recall, the magnitude of the errors was about $10^{-5}$.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for addressing my comment. I have updated my score. I still encourage the authors to do more extensive evaluation for the method in the future. | Rebuttal 1:
Rebuttal: # General Response
We thank all the reviewers for all of their time and feedback on our submitted manuscript.
---
We are delighted to see that the reviewers appreciated the practical significance of our method, highlighting that it "can open the door to possibly novel applications of diffusion models." (**reviewer vPC5**) and "demonstrates a practical application" (**reviewer jxa4**).
Additionally, we are glad to see that the reviewers appreciated that we "start[ed] off with a more straightforward setting of an ODE and extending them to SDEs" (**reviewer jxa4**) and thought that "the problem tackled in the paper is interesting and the solutions are elegant" (**reviewer 1FE9**).
Lastly, we are pleased that the reviewers found the paper to be "well written" (**reviewers oW3N and 1FE9**) and "fairly comprehensive" (**reviewer 1FE9**).
---
We primarily address the concerns and questions raised by the reviewers in our individual responses; however, we address some common concerns below.
**Significance of Contribution.** In light of the feedback we received, we make the following improvements:
* Emphasizing the significance that the adjoint diffusion SDE simplifies to an ODE and is **identical** to the adjoint probability flow ODE with the exception of a factor of 2 on the vector Jacobian term.
* Prove that the AdjointDEIS-$k$ solvers are in fact $k$-th order solvers obtaining a global truncation error of $\mathcal{O}(h^k)$. For more details, please refer to the response to reviewer vPC5.
* Highlight that the calculation of the adjoint probability flow ODE is decoupled from the probability flow ODE, i.e., separate numerical solvers can be used with **different** step sizes! This means we could use **fewer** steps to obtain a working estimate of the gradient and still perform guided diffusion. The step size could even be scheduled: as the loss decreases with each optimization iteration using the adjoint state, the step size for the adjoint ODE solver can increase to reduce computation.
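As a purely hypothetical illustration of such a schedule (the function and its linear annealing are our own sketch, not something implemented in the paper):

```python
def adjoint_steps_schedule(iteration, max_iters, n_max=20, n_min=5):
    """Linearly anneal the number of adjoint ODE solver steps from n_max
    down to n_min over the guided-generation optimization, so that later
    iterations take larger step sizes (fewer steps) and cost less compute.
    """
    frac = iteration / max(max_iters - 1, 1)
    return round(n_max + frac * (n_min - n_max))
```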
**Experimental Section.** We acknowledge that additional experimental results would help strengthen our manuscript. We believe that the example of face morphing provides a good illustration of the utility of AdjointDEIS in guided generation problems. Our hope is that the compelling motivation of AdjointDEIS, coupled with the new theoretical results, will provide a springboard for further research into guided generation with gradients from AdjointDEIS.
We hope that our responses to the reviewers convincingly address the reviewers' concerns and are happy to answer any further questions.
Sincerely,
The Authors | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Tactile DreamFusion: Exploiting Tactile Sensing for 3D Generation | Accept (poster) | Summary: This paper proposes to use the tactile sensing signal (height map and normal map) to improve the 3D generation quality, especially the geometric details. The authors use a 3D mesh generation guided by a normal-conditioned ControlNet to ensure the consistency between the visual textures and the tactile textures. They also develop a multi-part editing pipeline to generate objects with different texture parts. Experiments and ablation studies demonstrate the effectiveness of the proposed method.
Strengths: The originality of the paper is good. To my knowledge, this is the first work to use tactile sensing for 3D generation. The generation quality is satisfying. The generated meshes include good geometric details, which align well with the input tactile signals. The paper is well written and structured. The visualizations clearly demonstrate the quality of the generated meshes.
Weaknesses: The connection between tactile sensing and 3D generation is not strong or critical. The tactile signals are only used to generate the normal maps of certain textures. They might be replaced by simpler alternatives such as texture retrieval from a normal map database. Further elaboration is needed on why this combination is necessary.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. What is the generation time of the multi-part generation?
2. Can the authors add an experiment of real image to 3D mesh with real corresponding tactile signals?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Limitations have been addressed in the paper; they include complex geometry generation and slight seams.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time and valuable comments. We address the questions below:
### **Using texture retrieval from a normal map database**
We agree with the reviewer that our work can potentially be extended to other high-resolution geometric texture data, such as data from a normal map database. In this work, we focus on data acquired from tactile sensors for several reasons. First, a normal map database could be limited in terms of texture variation, e.g., across different avocados, as it often contains a single or very few normal maps per object. Second, normal map data is sometimes hand-crafted rather than captured from real data, making it less realistic. Third, the user may have a particular texture in mind to synthesize, e.g., customizing a plush toy with the fabric of their favorite sweater. In these regards, we believe high-resolution tactile sensors provide a quick and scalable way to capture the precise textures of real objects and will therefore become a common way to capture surface textures in the near future.
Moreover, in the future, we can extend this work to use tactile feedback to model physical properties, such as the hardness of the material, as part of the object rendering. We thank the reviewer for pointing this out and will add this discussion in the revision.
### **Generation time of the multi-part generation**
As mentioned in the supplementary material, we train all models on A6000 GPUs, and each experiment takes about 10 mins and 20G of VRAM to run. To be specific, a single-part experiment takes about 6 mins and a multi-part experiment takes about 8 mins to run.
### **Experiment of using real Image and real corresponding tactile as input**
We show one example result of using a real image and corresponding captured tactile signals as input in Figure 3 of the attached PDF. Our method can recover the color and geometric details reasonably well.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. The explanation is reasonable to me. | Summary: This paper proposes a lightweight 3D texture field that ensures consistency between visual and tactile textures while preserving photorealism. Quantitative and qualitative experiments demonstrate good generation quality.
Strengths: 1. The authors pioneered the use of tactile sensing to enhance geometric details.
2. The exposition is good. The paper is easy to understand.
3. They created a TouchTexture dataset, comprising 16 everyday objects, contributing a new dataset to the community.
Weaknesses: 1. This paper suggests that existing methods struggle to achieve fine-grained geometric details. However, methods like Neuralangelo and PermutoSDF can recover highly detailed geometric information.
2. The TouchTexture dataset presented in Figure 3 and the supporting materials seemingly do not capture the "local geometric intricacies."
3. Only a very limited number of objects are shown in the main paper and supplementary material. Are those objects cherry-picked? It would be great if more results on in-the-wild objects could be provided to show the generalisation ability of the model.
4. The ablation study is an important part of a paper; it would be more convincing with both quantitative and qualitative experiments, and it cannot simply be summarized in a few sentences. If there are figures or tables in the article, please indicate the specific table or figure number in the analysis and analyze according to the specific visualization results.
- [Neuralangelo: High-Fidelity Neural Surface Reconstruction. CVPR2023]
- [PermutoSDF: Fast Multi-View Reconstruction with Implicit Surfaces using Permutohedral Lattices. CVPR2023]
Technical Quality: 2
Clarity: 2
Questions for Authors: See my previous sections.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: I'm not confident about voting for accepting this paper because of the potential similarity to existing methods and the limited novelty.
There is more engineering effort than novelty; the novelty might be limited.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your questions and comments. We would like to clarify that in this work, our main contribution is leveraging tactile sensing to enhance geometric details for 3D generation tasks. (1) We are the first to leverage tactile sensing to synthesize high-fidelity geometric details for 3D generation. (2) We present a new technique to learn high-quality 3D texture fields with aligned visual and geometry textures, with a multi-part extension that allows us to synthesize multiple textures across various regions. (3) Our experiments show that our method outperforms existing text-to-3D and image-to-3D methods.
### **Comparison with the different setup of reconstruction pipeline**
We agree that methods like Neuralangelo (Li, et al., 2023) and PermutoSDF (Rosu, et al., 2023) can recover highly detailed geometric information. However, these methods focus on 3D reconstruction tasks and require sufficient data to achieve high-resolution reconstruction, e.g., “50 images” for an object and “~200-1000 images” for a larger scene. In our setup, we focus on the different task of text-driven 3D generation. We make use of a similar 3D representation, i.e., a multi-resolution 3D hash grid, as a texture field and strive to generate highly detailed 3D assets with minimal input: a text prompt or single-view image and a single tactile patch.
### **Local geometric intricacies captured by Tactile data**
The GelSight mini sensor we use to collect the TouchTexture dataset has a sensing area of 21mm x 25mm. The normal maps and height maps in Figure 3 of the main paper and the supporting materials show the tactile data for a single touch, cropped to 18mm x 24mm in physical scale and 240 x 320 in pixels, which corresponds to a tiny patch as shown in Figure 2 of the main paper. We refer to the “submillimeter scale” geometry captured by the sensor as “local geometric intricacies”.
### **More Diverse Results**
We show more complex and diverse results in Figure 4 and Figure 5 of the attached PDF. Our method is compatible with most pipelines that generate colored meshes, and can enhance the geometric details of their output. With different backbones such as RichDreamer (Qiu et al., 2024) and InstantMesh (Xu et al., 2024), our method is able to generate diverse, in-the-wild objects. These results showcase the generalizability of our method. We will include more examples in the revised draft.
### **Ablation studies**
In the main paper, Figures 9,10, and 11 illustrate the ablation studies and the analysis is provided in Line 238-250. Directly applying a normal texture map to a base mesh without joint optimization may introduce unnatural appearance, as the additional normal texture could be conflicting with the original albedo map, as shown in the misaligned strawberry seeds in Figure 9. Figure 10 shows that using the raw tactile data without proposed preprocessing produces much more flattened textures. This is because low-frequency deformation of the gel pad would dominate the tactile signal, reduce the signal-to-noise ratio, and degrade the synthesized geometric details. We also ablate the tactile input, and the example results are shown in Figure 11. Removing tactile input produces overly smooth meshes, as a text prompt could not provide sufficient guidance to geometric details at the millimeter scale. Figure 2 in the attached PDF also shows the ablation of tactile loss on a part of the object, demonstrating that the diffusion prior can infer some geometric variation to a certain level but is not capable of generating high-fidelity regular and consistent textures. We are happy to conduct additional ablation studies if there are specific aspects the reviewers would like us to investigate.
---
Rebuttal Comment 1.1:
Comment: Thank you for taking time and preparing a rebuttal which I read carefully. | Summary: This submission addresses the long-standing challenge of enhancing geometric details in results produced by text-to-3D and image-to-3D pipelines. The approach introduces a novel method that leverages tactile normal modality to synthesize high-fidelity geometric details. Additionally, it employs attention maps during the diffusion process to segment input images based on text prompts, allowing for the synthesis of multiple textures across various regions. The results demonstrate that this method effectively recovers geometric details and ensures alignment between geometry and color.
Strengths: 1. The approach is innovative in utilizing tactile normal modality to enhance geometric details.
2. The introduction of a newly collected tactile dataset, TouchTexture, is beneficial to the research community.
Weaknesses: 1. The paper claims compatibility of the proposed method with both text-to-3D and image-to-3D pipelines. However, DreamCraft3D, selected as a text-to-3D pipeline, requires both an image and a text caption as inputs. Although DreamCraft3D can be considered as a 'text-to-image, text & image-to-3D' process, it differs from a purely text-to-3D approach. Thus, the compatibility between the proposed method and purely text-to-3D pipelines remains questionable.
2. The pipeline overview in Figure 4 indicates the need for a reference image and tactile input. The paper does not address how to select an appropriate tactile input for the reference image, nor how to ensure the tactile details are compatible with the object in the image.
3. The paper presents an intriguing text-guided segmentation strategy that leverages attention maps during the diffusion process based on text prompts. However, lacking expertise in diffusion-based segmentation, I am unfamiliar with the efficiency and success rate of this method, but I am positively impressed by it.
4. The generalization and diversity of the proposed method are also of concern. The objects presented in the results are not very complex, and the dataset comprises 16 popular categories. It remains unclear how well the method would perform on more complex objects, such as those in sci-fi or fantasy genres.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. I am curious about your criteria for selecting 3D generation baselines. For image-to-3D tasks, to my knowledge, more advanced baselines such as 'InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models' and 'TripoSR: Fast 3D Object Reconstruction from a Single Image' offer superior quality and may yield stronger results. For text-to-3D tasks, 'RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D' is a noteworthy pure text-to-mesh method.
2. The proposed method in the paper first refines normals and then optimizes colors based on the refined normals. Does this imply that the geometric details introduced by your method are not derived from the original colors (i.e. not very necessary to original colors) of the objects? Additionally, could the color optimization process potentially disrupt the alignment between the original colors and the refined colors, potentially failing to meet the initial requirements?
3. I am curious about the generalization and diversity capabilities of the proposed method. Can it handle more complex objects? Furthermore, with more powerful baselines, can this method achieve higher quality results on more complex cases?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors acknowledge their limitations in the main paper and address potential social impacts in the supplementary materials. Regarding the first limitation, the implementation of new 3D generative models, such as 'Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image' and 'Era3D: High-Resolution Multiview Diffusion using Efficient Row-wise Attention', is recommended. For the second limitation, utilizing a more powerful computer graphics tool could be beneficial. Concerning the social impact, issues related to deepfakes and the potential for misinformation are noteworthy. The authors assert that humans can currently distinguish their synthesized objects from real ones, a claim with which I concur. Although it is of low priority, it would be preferable for the authors to include a comparison between a generated object and a real one; a single case would suffice.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your encouraging comments and feedback. We answer each question below:
### **Is the proposed method compatible with a purely text-to-3D pipeline?**
Yes, our method is compatible with purely text-to-3D backbones, such as RichDreamer (Qiu, et al., 2024). In our method, we generate our base mesh by first generating an image from the input text prompt using SDXL (Podell, et al., 2023) and then running Wonder3D (Long, et al., 2023), which is a ‘text-to-image-to-3D’ pipeline. However, our method is also compatible with most pipelines that output colored meshes. As suggested, we used a state-of-the-art text-to-3D method RichDreamer, and integrated it into our method. Specifically, we generate the base mesh using RichDreamer, and finetune the albedo and normal UV map using our proposed tactile matching loss, diffusion-based refinement loss, and regularization loss. Example results of both RichDreamer and our full method are shown in Figure 4 in the attached PDF. As shown in Figure 4, our method is able to generate 3D objects corresponding to the text prompt while adding high-fidelity geometric details from tactile inputs.
### **More advanced image-to-3D baseline and more complex objects**
Similarly, we can integrate InstantMesh (Xu, et al., 2024), a more recent image-to-3D method, into our method. Figure 4 and Figure 5 in the attached PDF demonstrate the results for more complex objects using the new backbones of RichDreamer and InstantMesh. Our method enhances the geometric details since the tactile information provides guidance of finer scale than the resolution of the base mesh generated by the backbones. These results highlight the compatibility of our method with different backbones, and its generalizability to complex objects. We will include them in the revision.
### **How to select tactile input and how to ensure the tactile details are compatible with the object?**
The tactile input can be selected either for realism or for creativity. For generating realistic outputs, we would choose a tactile input similar to the object, e.g., using a tactile patch collected from a real strawberry to generate a strawberry mesh. Otherwise, if we aim for creativity, we can choose any tactile texture we want, as shown by the various coffee cups in Figure 6 of the main paper. Since the tactile details are of millimeter scale, they are added upon the coarse geometry as a normal UV map, which is compatible with the base mesh. To further ensure alignment between the color and tactile details, we jointly optimize the albedo and normal maps using the refinement loss.
### **What are the efficiency and success rate of the attention-based segmentation?**
Since we leverage the same diffusion model to compute the attention maps for segmentation, it is more memory efficient than using additional components such as off-the-shelf segmentors like SAM. In terms of running time, a single-part experiment takes about 6 mins and a multi-part experiment takes about 8 mins.
To quantitatively evaluate our segmentor, we manually segment 3 meshes to obtain ground-truth segmentation masks. We then run our segmentor on 100 renderings for each mesh. In terms of the metric, we calculate the IoU between our predicted masks and the labeled masks and also compute the accuracy of predicted labels. Our diffusion-based segmentor reaches an average IoU of 0.588 and an accuracy of 83.7%, which provides sufficient segmentation for multi-part optimization. Note that our multi-part optimization can partially resolve inaccurate and inconsistent segmentation by aggregating the gradients from different views, so our 2D segmentation does not need to be perfect. We are happy to expand this evaluation in the revision.
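For reference, the two metrics above can be computed as in the following minimal sketch (flattened boolean masks in plain Python; illustrative only, not our actual evaluation script):

```python
def mask_iou(pred, gt):
    """IoU between two flattened boolean segmentation masks."""
    inter = sum(1 for p, g in zip(pred, gt) if p and g)
    union = sum(1 for p, g in zip(pred, gt) if p or g)
    return inter / union if union else 1.0

def label_accuracy(pred, gt):
    """Fraction of pixels whose predicted part label matches the ground truth."""
    return sum(1 for p, g in zip(pred, gt) if p == g) / len(gt)

# Tiny 2x3 mask flattened row-major (1 = pixel assigned to the part).
pred = [1, 1, 0, 0, 1, 0]
gt = [1, 0, 0, 0, 1, 1]
iou = mask_iou(pred, gt)        # 2 overlapping pixels / 4 in the union
acc = label_accuracy(pred, gt)  # 4 of the 6 pixels agree
```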
### **Clarification about the color and geometry optimization**
The coarse geometry is fixed after base mesh generation, while the local geometric details, defined by normal UV maps, are further refined using tactile information and thus are not derived from the original colors. The color is also optimized simultaneously with the normals using a diffusion-based refinement loss. The optimization could introduce changes in color to align the visual and tactile modalities, but we add the regularization term so that the refined color remains consistent with the reference color on a larger scale. We balance the loss weights, and a single set of parameters works for all our experiments.
### **Comparison with real objects**
We show one example result of using a real image and corresponding captured tactile signals as input in Figure 3 of the attached PDF. Although our method can recover the color and geometric details reasonably well, we think humans can still distinguish between the real and generated objects by comparing them with the input real image.
---
Rebuttal Comment 1.1:
Comment: After reading the rebuttal, I find that most of my concerns are addressed. Hence, I have decided to keep my original score and am leaning toward accepting this paper. | Summary: This paper proposes a method for generating 3D assets with detailed geometry through inputs from a tactile sensor. More specifically, given a bump-map as input from a tactile sensor (just a small patch is enough), the method uses it as regularization while maximizing the likelihood using a normals-conditioned stable diffusion model. The albedo is also optimized alongside the normals, though limited results are shown. Additionally, using diffusion-based segmentation maps, different parts of the image can be given different textures. While each component of the method itself is not too novel, the sum total is, and the results are pretty good.
Strengths: 1) This is a novel and unexplored task and the proposed method provides the community with a good baseline to build upon.
2) The results of the method are quite convincing. The ability to edit only certain parts of the image with desired textures is particularly nice.
3) The paper is well written with each component explained pretty well.
Weaknesses: 1) There are no results on the albedo provided, it would be great if the authors could explain why they were omitted. I strongly urge them to include it in the rebuttal.
2) I may be mistaken, but it seems this method works only for repetitive textures (though the diffusion model is able to change it through optimization). It would be great if the authors could provide the results of an experiment where the object has two very different textures but those textures are only learnt through the diffusion prior. For example, in the cactus pot case, this would correspond to optimizing L_{tactile} only for the pot and let the diffusion prior decide what the normals for the cactus must look like (or vice versa).
Technical Quality: 3
Clarity: 3
Questions for Authors: Just reiterating what I already mentioned in the weaknesses: It would be great if the authors could provide the results of an experiment where the object has two very different textures but those textures are only learnt through the diffusion prior. For example, in the cactus pot case, this would correspond to optimizing L_{tactile} only for the pot and let the diffusion prior decide what the normals for the cactus must look like (or vice versa).
I believe this experiment would give us insights on how the diffusion prior itself would perform if a part of the image already has detailed normals (via L_{tactile})
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your encouraging and insightful comments. We’re pleased that you recognize our setup as “a novel and unexplored task.” Below, we address the individual comments.
### **Add albedo rendering results**
Due to the space limit, we omitted the albedo renderings from the main paper. Thank you for pointing this out; we agree that adding albedo renderings demonstrates the quality more clearly. In Figure 1 of the attached PDF, we show two examples, and we will add the full results in the revision. Notably, our albedo rendering looks natural and contains minimal baked-in lighting effects or geometric details.
### **Show experiments of textures learned from the diffusion priors, i.e., optimizing $L_{tactile}$ only for the pot and let the diffusion prior decide what the normals for the cactus must look like (or vice versa)**
Thanks for your insightful suggestions. We add our experimental results in Figure 2 of the attached PDF for the example “cactus in the pot”, keeping the text prompt and tactile input unchanged. Figure 2(a) has no $L_{tactile}$ for either part; Figure 2(b) and (c) have $L_{tactile}$ for one part, “pot” and “cactus” respectively; Figure 2(d) has $L_{tactile}$ for both parts.
As shown in Figure 2, (b) and (c) contain clear texture for the part with $L_{tactile}$ and synthesize more details for the other parts compared to (a). However, without clear reference, the inferred texture lacks detailed patterns compared to the full results in (d). In short, the texture generated using only diffusion priors appears plausible but tends to look flatter and less detailed.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the rebuttal, my concerns were well addressed. Looking at other reviews and the rebuttal, I have decided to raise my score. | Rebuttal 1:
Rebuttal: We thank all reviewers for their efforts and feedback. The reviewers note that we solve “a novel and unexplored task” (zBVq) with an “innovative approach in utilizing tactile normal modality to enhance geometric details” (MUQX), provide “convincing” (zBVq) and “satisfying” (a9oZ) results, release a TouchTexture dataset that is “beneficial to the research community” (MUQX, ecYi), and deliver a “structured, well-written” (a9oZ) and “easy to understand” (ecYi) exposition.
The reviewers have suggested additional experiments that will better highlight the strengths and capabilities of our method. **We are happy to report that we have conducted most of the experiments, with favorable results, and we will include them in the revision.** Please see the attached PDF for visual results. Here, we first summarize the new experiments, then provide a more detailed analysis and address other comments and questions in separate threads.
* **Visualizations of albedo renderings (Figure 1)**: As suggested by Reviewer zBVq, we visualize albedo renderings together with the normal and the full-color renderings to present our results. Notably, our albedo rendering looks natural and contains minimal baked-in lighting effects or geometric details.
* **Ablation study of tactile matching loss $L_{tactile}$ on different parts of an object (Figure 2)**: Following Reviewer zBVq’s advice, we show the “cactus in the pot” example where only a part of the object has $L_{tactile}$. The texture learnt purely from diffusion prior tends to be flatter and less detailed compared to those learned with $L_{tactile}$.
* **Generating mesh using real image and corresponding tactile signal (Figure 3)**: As suggested by Reviewer MUQX and a9oZ, we generate a mesh from real data input, and our method achieves reasonable reconstruction with color and geometric details.
* **Integrating our method with more recent baselines (Figure 4 and Figure 5)**: As suggested by Reviewers MUQX and ecYi, we integrate RichDreamer (Figure 4), a purely text-to-3D baseline, and InstantMesh (Figure 5), a recent image-to-3D baseline, into our method, demonstrating our method’s compatibility with various pipelines. Our method, when integrated with these baselines, generalizes well to complex objects across different genres and achieves better results compared to using the baselines alone.
* **Quantitative Evaluation of our diffusion-based segmentation method**: To answer Reviewer MUQX’s question, we quantitatively evaluate our segmentation method by manually annotating a test set of meshes. Our method achieves an accuracy of 83.7%, demonstrating its capability to provide sufficient segmentation for multi-part optimization.
Pdf: /pdf/9ddeb8367e65a672cc5ce061692e77956e0b0672.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Adaptive Important Region Selection with Reinforced Hierarchical Search for Dense Object Detection | Accept (poster) | Summary: The paper presents a novel RL-driven object detector guided by Evidential Q-learning. The main contributions are:
1. an adaptive hierarchical object detection paradigm supported by an RL agent to mimic human visual attention that performs searching in the top-down fashion;
2. an evidential Q-learning method driven by a unique reward function, covering both potentially positive and highly uncertain patches
3. theoretical guarantee on the fast convergence of the proposed evidential Q-learning algorithm.
Experimental results show the effectiveness of their method.
Strengths: 1. The motivation and the proposed method are interesting and technically sound.
2. Most part of the manuscript is clearly written and easy to understand.
3. Enough ablation study experiments to show the effectiveness of the proposed method.
Weaknesses: 1. This paper does not include comparison with the latest works in dense object detection , as the proposed modules are simply tested using a few weakness baselines. I am wondering if those techniques can be used to improve accuracy of state-of-the- art models? For example, [1] [2] .
[1] Xu D, Deng J, Li W. Revisiting ap loss for dense object detection: Adaptive ranking pair selection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 14187-14196.
[2] Hou X, Liu M, Zhang S, et al. Salience DETR: Enhancing Detection Transformer with Hierarchical Salience Filtering Refinement[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 17574-17583.
2) My concern is whether the RL baseline comparisons should also be included in the main experiments (Table 1 and Table 2).
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discussed limitations thoroughly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Performance comparison with [1] [2]**
Thank you for providing the references for these baselines. We would like to clarify that we focus primarily on improving dense object detection performance by effectively discovering all objects (including the smaller ones) through the FPN structure and by filtering out false positives using our novel evidence-guided exploration-exploitation strategy coupled with adaptive hierarchical search. The first baseline uses a novel Adaptive Pairwise Error (APE) loss that focuses on ranking pairs among both positive and negative samples. The second is more of a refinement built upon the DETR model: it proposes hierarchical salience filtering refinement, which performs transformer encoding only on filtered discriminative queries, for a better trade-off between computational efficiency and precision. As such, neither method leverages the FPN structure to solicit enough positive anchors as the candidate pool, and both may suffer in dense scenarios where many smaller objects coexist with, overlap with, or are contained in large objects. As shown in the table below, both methods' AP performance is worse than that of our proposed technique, especially for small objects ($AP^{S}$). We conduct the comparison on the MS COCO dataset and use the same ResNet-50-FPN backbone and number of epochs as the first baseline for a fair comparison. For the second baseline, which is orthogonal to ours, we re-run their pre-trained model to obtain the reported performance.
| **Model** | **AP** | **$AP^{S}$** | **$AP^{M}$** | **$AP^{L}$** | **$AP^{CH}$** |
|-----------------|------------|-----------|-----------|-----------|-----------|
| Adaptive Pairwise Error | 41.5 | 23.5 | 45.7 | 52.6 | 22.4 |
| Salience DETR | 46.5 | 15.4 | 46.8 | 53.5 | 15.5 |
| AIRS | 47.6 | 31.0 | 48.5 | 54.3 | 30.2 |
**Q2: RL Baseline Comparisons in Table 1 and Table 2**
Thank you for the suggestion! We will add selected RL baselines as part of Tables 1 and 2 in the revised paper.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer A3eZ,
Thank you again for your constructive comments and insightful questions! In the rebuttal, we have provided additional experimental results on the mentioned baselines [1, 2]. We would also like to acknowledge that we will include RL baselines as part of Tables 1 and 2 in the revised paper. By addressing your comments, we believe that the quality of the paper has been improved, and we appreciate the reviewer's support on that. We hope the reviewer finds our answers satisfactory! We are more than happy to answer any additional questions you may have. | Summary: The paper presents an innovative framework for dense object detection, called Adaptive Important Region Selection (AIRS). It introduces a method guided by Evidential Q-learning, which strategically identifies important regions within an image in a hierarchical manner. The method aims to reduce false positives commonly produced by current dense object detection techniques by dynamically balancing exploration and exploitation during the model training phase.
Strengths: 1. Introducing Evidential Q-learning into the hierarchical selection process for object detection is novel.
2. The paper is well-written, with a clear motivation and a well-defined methodology. The theoretical analysis is comprehensive.
3. Extensive experimental validation across multiple datasets shows the framework's effectiveness against state-of-the-art techniques.
Weaknesses: 1. It would be beneficial to visualize and analyze some intermediate results, such as the RL masks in the test phase.
2. The proposed AIRS involves searching for region masks before making predictions, aligning more closely with two-stage approaches. Conversely, the DETR series of models directly predict bounding boxes in an end-to-end manner, eliminating the need to obtain candidate regions beforehand. This characteristic classifies them as one-stage methods.
3. The citation for DINO seems incorrect; reference [21] actually cites DN-DETR.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does AIRS handle extremely cluttered scenes where objects are not only dense but also partially occluded?
2. Can you explain why AIRS significantly underperforms in detecting large objects in the COCO dataset compared to DINO, yet shows better performance on the other two datasets?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately discussed the limitations in terms of the potential negative impacts of low-quality uncertainty quantification and the challenges of extending to a transformer-based backbone.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Intermediate results of RL masks during testing phase**
Thank you for the suggestion. Figure 1 in the attached PDF shows the RL masks projected onto the false positive bounding boxes that are removed from the detection result. For a more detailed description of the masking process, please refer to the answer to **Q2** in the General Response.
**Q2: The proposed AIRS involves searching for region masks before making predictions, aligning more closely with two-stage approaches. Conversely, the DETR series of models directly predict bounding boxes in an end-to-end manner, eliminating the need to obtain candidate regions beforehand. This characteristic classifies them as one-stage methods.**
Thank you for the insightful comment. First of all, we would like to clarify that we do not claim our approach to be a one-stage detector. Our technique leverages the FPN, which is indeed a one-stage detector, so our technique is built on a one-stage detector. We agree with the reviewer that our technique can be interpreted as a two-stage detector, as we perform the RL training and inference on top of a one-stage FPN pre-trained network.
Regarding the DEtection TRansformer (DETR), we believe that it leverages a set-based global loss that forces unique predictions via bipartite matching, using a transformer encoder-decoder architecture. Given a fixed small set of learned object queries, DETR reasons about the relations of the objects and the global image context in order to directly output the final set of predictions in parallel. As such, our understanding is that it differs from existing two-stage detectors (e.g., RPN) as well as one-stage detectors (FPN), since it does not rely on anchor generation, selective search, or post-processing. However, considering its nature, we regard it as more closely related to one-stage than to two-stage detectors. We will make this clear in the revised paper.
**Q3: DINO Citation.**
Thanks for pointing out this typo. We will correct the citation to DN-DETR in our revised paper.
**Q4: Handling partially occluded objects in extremely cluttered scenes**
Please refer to the answer to **Q4** in the General Response.
**Q5: AIRS's inferior performance on large objects in MS COCO**
Please refer to the answer to **Q1** in the General response.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer Q21q,
Thank you again for your constructive comments and insightful questions! In the rebuttal, we have
- provided intermediate results of RL masks in the testing phase (Please refer to Figure 1 in the attached PDF),
- further compared the difference between AIRS and DETR,
- explained the relatively lower performance of AIRS on MS COCO large objects,
- justified how our exploration-exploitation strategy effectively removes the smaller partially occluded objects in extremely cluttered scenes (Please refer to the answer to Q4 in the General Response).
By addressing your comments, we believe that the quality of the paper has been improved, and we appreciate the reviewer's support on that. We hope the reviewer finds our answers satisfactory! We are more than happy to answer any additional questions you may have. | Summary: This article presents the method AIRS (Adaptive Important Region Selection), based on the reinforcement learning paradigm, to improve the performance of dense object detection in images.
It is highlighted that the best SOTA object detectors either produce too many false positive detections in complex scenes or fail to propose positive candidates to detect all small objects.
The proposed method aims at searching for image patches containing detections in a top-down hierarchical (multi-scale) fashion. To this aim, it balances between exploration of unknown patches (by using evidential Q-learning to encode epistemic uncertainty) and exploitation of patches containing detections.
A theoretical analysis of fast convergence of the proposed algorithm is given. The method is evaluated on 3 object detection datasets, compared with SOTA models. An ablation study about backbones and loss choice is given.
Strengths: - The limitation of current object detectors in presence of possibly small objects in dense and complex scenes is a problem of utmost importance in the computer vision domain. Solutions to the problem would be very useful for many applications.
- The method seems original to me, with a good idea of using RL to explore and exploit the different regions of the image for potential presence of objects.
- A theoretical analysis is provided with the proposed RL method.
- The results outperform SOTA methods.
- In general, the paper is well written and clear.
Weaknesses: - My main concern is about the general claim of better managing small objects and dense complex scenes. It is not clearly demonstrated.
The datasets used are quite general to evaluate object detection. The most complex is OpenImages with about 8.4 objects per image, which is not so dense.
To demonstrate the effect of the proposed method, it would be better to use other datasets like aerial/satellite datasets with a big amount of small objects in each image.
Even if the method seems interesting and overall well-performing, it is not clear that it is more efficient for dense scenes or small objects.
- Besides, the subset for the $AP_{CH}$ computation does not separate the most challenging images considering these aspects. Evaluation should be done on other subsets of challenging cases.
- Some poor results on large objects are unexpected. Why does the exploration/exploitation mechanism decrease the performance in detecting such objects? Does the method focus too much on objects of smaller scales?
- The role and effect of hyper-parameter $\lambda$ is not studied. However, it seems to be important.
- The ablation study for choosing DIoU instead of GIoU is not convincing. Conclusions should have been the opposite.
- Some quality and clarity check should improve the paper, as detailed in the following section.
Technical Quality: 2
Clarity: 3
Questions for Authors: - As dense object detection and, specifically, small object detection are the goals of the proposed method, why only general object detection datasets were used for evaluation?
PascalVOC mostly has images with a single big object, MSCOCO images have somewhat more objects per image, and even OpenImages images have only 8 objects/image on average.
To draw conclusions about the capacity of AIRS to detect smaller objects and to better detect all objects in images with a high density of objects, other datasets should be used for evaluation (e.g. the many aerial image datasets like DOTA, xView...).
- Regarding the performance results on MS COCO (Table1), how do you explain AIRS has poor performance in detecting large objects (54.3% vs 62.5% for best SOTA) and only outperforming FasterRCNN?
For medium object, it is not so bad but still does not outperform SOTA (48.5% vs 50.4%).
For challenging subset, it is slightly better than SOTA (+0.4pp).
For small objects, it outperforms SOTA by +2pp.
Thus, it seems AIRS has specialized in smaller objects but is less effective for bigger objects in MS COCO. This is rather unexpected if we compare with the other datasets, VOC and OpenImages.
What would be your explanation of this unexpected poor performance?
- Is there a link between the top-down hierarchical strategy of AIRS and the cases of small object boxes that are fully contained in other bigger ones?
How does the top-down method handle these cases? Is it observable on a challenging subset dedicated to "included boxes"?
- Hyper-parameter $\lambda$ (line 156) seems to be important for a good balance. How to choose it? Do you have any sensitivity study about it? It would be interesting to see how $\lambda$ actually affects the final performance on the different types of objects through the exploration-exploitation trade-off.
- The definition of the challenging subset (line 273) is surprising. Why use only the images with a ratio of large and medium over small objects ranging from 1 to 1/2? Why not simply <1, to have complex scenes with both large and tiny objects?
Criterion (b) is not clear. What is the measure and threshold of overlap used to select the images?
Criterion (c) is not clear. What is the measure and threshold of object inclusion used to select the images?
It is not clear if the union or intersection of these criteria is used to define the subset.
It would be interesting to separate in several subsets in order to draw more precise conclusions about the ability of the proposed detector in managing each case better than the SOTA methods.
- "DIoU is the most effective" (l.355-356). The results from Tab.4 show exactly the contrary. DIoU is the less effective. According to Tab.4, GIoU is the best choice. So, logically, I would expect AIRS combine GIoU and Uncertainty. Why didn't you use this combination? This result should be added in the ablation study.
Here are some other comments, typos or lack of clarity in some phrasing that should be corrected or clarified:
- All references to section/equation/... should have the word Section/Equation before the number. (e.g. lines 43, 62, 64, 115, 117...). Please check them all.
- l.22: "The diverse nature of images, such as shadow/occlusion" sounds awkward. Please rephrase it.
- l.29: "number of candidate object". typo.
- l.36: "inconsistency in localization quality estimation between training and testing". The idea behind this statement is not clear.
- Fig.1: The same image should be used for the qualitative comparison of the 4 methods.
- Fig.1: The name of the method GFocal+LQE in the subcaption c) should match with the name in the caption.
- l.42: "generating too many false positive predictions on small objects" It is not clear in this sentence whether the problem is about duplicated predictions on small objects (then, some post-processing, like NMS, can leverage this issue) or about false positive predictions on small areas (which are not objects).
- l.96: "leverages the latest Feature Pyramid Network structure" Please clarify the meaning of 'latest' (FPN was proposed at CVPR'2017).
- l.110: "each of the key component". typo.
- l.121: Please define the acronym NIG.
- Fig.2: Please add the layer scale (0 to L-1) on the diagram for better link with the main text.
- l.135-l.143: Please avoid using the same variable d for two different usages.
- l.140-160: Please define all variables (e.g. $\alpha, \beta, \gamma, \nu$)
- Eq.7: Please define $\gamma$.
- Eq.8, l.201: Typo $n_{epcoh}$
- l.242: "leveraging the FPN structure of the pre-trained backbones." Which type of pre-training was used? Is it the same for all compared methods? What is its influence on the performance results?
- l.251: "As can be seen, the training is more efficient comparing to other RL based methods, and the inference speed is also competitive w.r.t. the latest baselines (see Appendix D.5)." Please be more specific than "more" (adding some figures illustrating these statements).
- l.266: "It contains 20 categories partitioned into three subsets" Please rephrase as it appears the categories are partitioned.
- l.268: typo
- l.272: CH subset should be made available for comparison with future work in the community.
- l.274: add unit
- l.275: typo
- l.286: Detail which supervision and dataset were used for the pre-trained models, and if it is identical to compared methods.
- l.289: Please detail the bounds of search.
- l.292: "We gradually shrink λ" Please clarify if it is done for the hyper-parameter search only or during the training also? Please be more specific on the shrinkage process for reproducibility purpose.
- l.294: Which criteria were exactly used to stop the training?
- l.296: The SOTA methods are not so recent, except Co-DETR and EVA (2023). Have you checked more recent detectors?
- Tab.1: Add % for AP results in all tables.
- I was curious about the training time (then found it in the appendix). It would be beneficial to add a synthetic sentence about it in the main paper.
- Table2: Please specify which scale version of DINO was used.
- Table2: Please add the resnet variants in the same table for comparison.
- l.309: Please specify which pre-training was used.
- l.312: A short sentence summing up the results on the YOLO series should be added, even if all the details are in appendix.
- l.322: Please use pp (percentage points) instead of %.
- l.340-341: Please rephrase.
- Fig.3: a)b) Please correct the vertical axis name (number of...)
- Fig.3: Please add the statistics of the ground truth for a complete comparison.
- Section4.5: It is awkward to use MSCOCO for the ablation study, as AIRS gives unexpected results for medium/large objects.
- l.364: Please rephrase.
- l.370-371: Please define the acronym EL (only EU was defined).
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: As detailed before, evaluation on relevant datasets are missing to support the main claim of dense and small object detection capacity.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Use other datasets with a big amount of small objects.**
Thank you for the great suggestion! First, we would like to clarify that COCO, Pascal VOC, and Open Images V4 are commonly used benchmark datasets to evaluate dense object detection models such as GFocal, DINO, FCOS, etc. Therefore, we chose the same set of datasets in our evaluation. To explicitly show the effectiveness of our technique, we further create a challenging subset, where large, medium, and small objects are mixed and embedded within each other. This mixing strategy makes detection highly challenging because such scenarios require a good balance of exploitation and exploration in RL training to achieve high precision and recall for large, medium, and small objects alike. Second, following the reviewer's suggestion to use datasets with more small objects, we redefine our criteria to select subsets containing those images where the ratio of large and medium objects (area $\geq 32^2$) to small objects (area $< 32^2$) is $\leq 1/2$. We additionally conduct experiments on an aerial parking lot dataset with a large number of small objects in each image. The quantitative results on the new challenging subset and the aerial dataset are summarized in Table 1 in the attached PDF. We also provide detection visualizations for these two new challenging datasets in Figure 1 of the attached PDF.
**Q2: Poor performance on large objects and impact of exploration/exploitation**
Please refer to the answer to **Q1** in the General Response. In fact, the lower performance in the large-object setting of MS COCO primarily comes from one-stage detectors and the nature of the MS COCO dataset, not from our exploration-exploitation technique. Thanks to the exploration/exploitation mechanism, our approach effectively discovers small objects in addition to larger ones. If our approach focused too much on small objects, we would consistently see lower large-object performance across all datasets. The comparable or even better performance on the other datasets further justifies the effectiveness of the exploration/exploitation mechanism.
**Q3: Handle small objects embedded within large objects cases and challenging subset dedicated to "included boxes"?**
For this first part of the question, please refer to our answer to **Q4** in the General Response. Regarding the second part, we would like to clarify that our challenging dataset measured by $AP^{CH}$ already covers these situations through criterion c, which is also illustrated in Figure 9 of Appendix D.6.
**Q4: Hyper-parameter $\lambda$**
$\lambda$ is changed dynamically. In the early stage, it is set high ($\lambda = 1$), so the focus is on exploring unknown patches. As training progresses, it decreases as $\lambda = \left(1-\frac{N_c}{N_{epoch}}\right)$, where $N_c$ is the current epoch. The exact exploration-exploitation balance also depends on the complexity of the dataset. For instance, on an easy dataset the model may quickly focus on exploitation, as the epistemic uncertainty may reduce quickly, whereas on a difficult dataset the model may explore the patches longer. We also conduct an additional experiment to test the sensitivity to $\lambda$. As shown in the table below, the performance is relatively robust across different fixed $\lambda$ values. However, the adaptive $\lambda$ achieves better performance.
| **Hyper-parameter $\lambda$** | **COCO AP** |
|-----------------|-----------------------|
| 1 | 46.8 |
| 0.8 | 46.7 |
| 0.6 | 46.9 |
| 0.4 | 46.5 |
| 0.2 | 46.1 |
| **AIRS** | **48.3** |
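The adaptive schedule described above can be sketched in a few lines; this is an illustrative reading of the stated formula $\lambda = 1 - N_c/N_{epoch}$, with the function name and clipping being our own additions, not the authors' code.

```python
# Illustrative sketch of the adaptive exploration weight described above:
# lambda starts at 1 and decays linearly with the current training epoch.
def exploration_weight(current_epoch: int, total_epochs: int) -> float:
    """lambda = 1 - N_c / N_epoch, clipped to [0, 1] for safety."""
    lam = 1.0 - current_epoch / total_epochs
    return max(0.0, min(1.0, lam))
```

Under this schedule, early epochs weight the epistemic-uncertainty (exploration) term heavily, and late epochs shift the reward toward exploitation.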
**Q5: The definition of challenging subset.**
In the case of criterion (a), we consider images where the ratio of large/medium objects (area $\geq 32^2$) to small objects (area $< 32^2$) ranges from 1 to 1/2, to ensure that objects of all sizes coexist. This mixing strategy makes detection truly challenging because such scenarios require a good balance to achieve high precision and recall for all objects. We did not consider ratios less than 1/2 because the selected objects would be very small ones that are hard even for humans to detect. Following the suggestion, we conducted an additional experiment considering objects where the ratio is less than 1/2; please refer to our answer to **Q1** for the results. In the case of (b), we consider images where multiple objects overlap each other. In the case of (c), we consider images where multiple small objects are embedded in a bigger one. In both cases, we use a threshold of 0.4. This means that, for (b), if the IoU between overlapping objects is more than 0.4, we include those samples; for (c), if the IoU of smaller objects with the bigger ones is higher than 0.4, we include those samples. Apart from the IoU threshold, in both cases, for a given image to qualify for (b) or (c), the number of small objects must be at least 3.
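Criterion (a) above can be made concrete with a short filter; this is a hypothetical sketch using the COCO convention that small objects have area below $32^2$ pixels, and all names are illustrative rather than taken from the released code.

```python
# Hypothetical sketch of criterion (a): keep an image if the ratio of
# large/medium boxes (area >= 32^2) to small boxes (area < 32^2) lies in
# [1/2, 1]. Boxes are (x1, y1, x2, y2) in pixels.
SMALL_AREA = 32 ** 2  # COCO small-object threshold

def box_area(box):
    x1, y1, x2, y2 = box
    return (x2 - x1) * (y2 - y1)

def passes_size_ratio(boxes, lo=0.5, hi=1.0):
    """True if the large+medium to small object ratio falls in [lo, hi]."""
    small = sum(box_area(b) < SMALL_AREA for b in boxes)
    big = len(boxes) - small
    if small == 0:
        return False  # no small objects: not a challenging image
    return lo <= big / small <= hi
```

Criteria (b) and (c) would then add the pairwise IoU threshold of 0.4 and the minimum of 3 small objects on top of this size filter.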
**Q6: DIoU vs. GIoU**
Thanks for pointing out this inconsistency. We re-checked our experiment results and found that the row numbers were shifted in the paper. Below, we include the GIoU-with-uncertainty performance and will update the table accordingly in the revised paper.
| **Model Design Choice** | **AP** | **$AP^{S}$** | **$AP^{M}$** | **$AP^{L}$** | **$AP^{CH}$** |
|-----------------|------------|-----------|-----------|-----------|-----------|
| GIoU | 44.3 | 28.5 | 45.6 | 51.5 | 25.4 |
| GIoU+Uncertainty | 46.7 | 30.2 | 47.5 | 53.4 | 28.1 |
| DIoU | 45.4 | 29.5 | 46.8 | 52.4 | 26.7 |
| DIoU+Uncertainty | 47.6 | 31.0 | 48.5 | 54.3 | 28.9 |
**Q7. Typos or lack of clarity**
Thanks for carefully checking our paper! We highly appreciate your suggestions and will follow them to fix the issues and improve the presentation of the revised paper.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer mhHL,
Thank you again for your constructive comments and insightful questions! In the rebuttal, we have
- conducted additional experiments using other datasets with more small objects,
- clarified the relatively low performance of AIRS on the MS COCO large objects,
- discussed how our approach handles smaller objects embedded within large ones,
- analyzed the sensitivity of the performance with respect to hyperparameter $\lambda$.
We believe that by addressing your suggestions, our paper has been significantly strengthened, and we appreciate the reviewer's support on that. We hope that the reviewer finds our answers satisfactory and considers updating the score accordingly! We are more than happy to answer any additional questions that you may have. | Summary: Current state-of-the-art dense object detection techniques often generate numerous false positive detections in complex scenes, as they prioritize high recall. This study tackles this problem by introducing an Adaptive Important Region Selection (AIRS) framework. This framework builds on a pre-trained FPN-based detector and uses evidential Q-learning to identify the most informative patches from the top layer to the bottom layers during training and testing. To enhance performance, the authors propose a uniquely designed reward function based on diverse detection metrics. Experiments on three standard detection benchmarks demonstrate the effectiveness of the proposed AIRS. Additionally, the authors provide a theoretical analysis of AIRS, which helps readers better understand the method.
Strengths: 1. The paper is generally easy to follow.
2. The proposed method makes interesting use of Q-learning to select important regions to retrain an object detector.
3. The experiments cover a wide range of different datasets.
4. The method displays good performance across most evaluation metrics compared to state-of-the-art methods.
5. The authors consider a number of different ablations to better understand the proposed method.
Weaknesses: 1. One of my most important concerns is that AIRS must be used along with a well-trained detector. In this study, the authors use a detector equipped with FPN and pre-trained by GFocal. As a result, it is not an easy-to-use framework, and it is also unfair to compare the training time with other end-to-end methods in D.5. Can AIRS be used for end-to-end training of a dense object detector? How much time would end-to-end training cost?
2. My second concern is that AIRS is similar to the region proposal network, but uses a different network (RNN vs. MLP) and training strategy (RL vs. standard training) to select important regions for detection. However, RPN can be integrated into a detector to form a two-stage detector. In fact, the masked region step can also be treated as a method of proposing regions. Therefore, this study should compare AIRS and RPN in depth, including presentation, experiments, etc. As far as I can see, changing several hyper-parameters in RPN can also produce the high recall that is the main claim of AIRS.
3. I'm also wondering what kind of insights this work could bring to the community. I would be more interested in seeing which kinds of patches (locations) should be used for training and testing. Are the selected patches similar across different detectors (different training strategies, losses, backbones, etc.)? Answering these questions would help the paper go beyond engineering success on standard detection benchmarks.
4. No limitations section.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: There are no discernible negative societal impacts related to this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: It is not an easy-to-use framework. Can AIRS be used for end-to-end training of dense detector and how much is the training time.**
Thank you for the insightful question. We would like to clarify that our goal is to remove false positive (FP) bounding boxes, which is orthogonal to any FPN-based pre-trained detector. As these pre-trained detectors are off-the-shelf and using them has become common practice in the community, integrating our proposed technique with them is straightforward, making our framework relatively easy to use. Since our method works on top of pre-trained object detectors to eliminate false positive cases, we agree with the reviewer that the AIRS training time is not directly comparable to that of the underlying detectors. Instead, we can train them in an iterative way by further leveraging AIRS to fine-tune the detectors, which makes the whole training process end-to-end. The added training time should be similar to that of well-established frameworks, since we only add the RL filtering masks during training. We would also like to clarify that the training time comparison shown in Table 9 is fair, as all RL techniques start from the same pre-trained FPN environment.
**Q2: Comparison with RPN**
Please refer to the answer to **Q3** in General Response for the detailed discussion on the difference from RPN.
**Q3: Changing several hyper-parameters in RPN can produce high recall**
We would like to clarify that we have compared with a good number of two-stage detectors that leverage RPN (see the Two-stage section of Table 1). For each of these baselines, we consider the optimal set of hyperparameters, i.e., the one resulting in the highest AP. Therefore, changing hyperparameters may not be able to further enhance the recall.
**Q4: What kind of insights this work could bring to the community? Which kinds of patches (locations) should be used for training and testing? Similarity of RL selected patches for training and testing among different detectors (different training strategies, losses, backbones, etc.)**
Thank you for these comments. The design of our adaptive important region selection with reinforced hierarchical search is inspired by human visual attention, which usually conducts object search in a top-down hierarchical fashion. Such a mechanism makes the search very efficient, as the RL agent moves down to a fine-grained level in the hierarchy only when it is likely to contain an object of interest. Furthermore, the epistemic-uncertainty-guided exploration plays a central role in ensuring that the RL agent does not miss any important regions (i.e., those with high uncertainty) while avoiding visits to unnecessary ones (i.e., those with low uncertainty). Both our theoretical analysis and empirical results confirm the effectiveness of the proposed design, which shows its potential to benefit similar dense object detection scenarios.
Regarding which kinds of patches should be used, there is no explicit control over the selection of patches during the training and testing phases. As AIRS is based on FPN coupled with an adaptive hierarchical search strategy guided by evidential Q-learning, it looks for informative patches that are likely to contain objects at different granularities. As such, during training, our RL agent learns to effectively find patches containing objects of different sizes and types, and this capability is transferred to the inference process. For the same image, RL-selected patches display similar patterns across different backbone detectors. As shown in Table 8 in Appendix D.4, different pre-trained detectors (RetinaNet-Trans, FCOS-Trans, ATSS-Trans) with the same GFocal-generated RL mask, and different RL-agent-generated masks (RetinaNet, RetinaNet-RL, RetinaNet-Trans) with the same backbone detector (RetinaNet), both display similar Average Precision (AP) on COCO at inference, which is clear evidence that the RL-selected patches generated by different agents and applied across different detectors are similar and transferable to each other to some extent. Furthermore, we have provided additional evidence of similar patches across different agents and backbone detectors in Figure 2 of the attached PDF.
**Q5: No limitation part**
We have included a discussion of limitations in Appendix E.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer 8nSu,
Thank you again for your constructive comments and insightful questions! In the rebuttal, we have
- discussed how AIRS can be conveniently used and the training time if end-to-end training is performed along with the detectors,
- compared with RPN by clarifying that RPN is less effective to detect small objects and why changing the hyper-parameters may not be able to further improve the performance,
- discussed the insights that this work can bring to the community and how AIRS adaptively selects patches through uncertainty-aware exploration-exploitation coupled with the adaptive top-down hierarchical search strategy.
By addressing your comments, we believe that the quality of the paper has been improved, and we appreciate the reviewer's support on that. We hope that the reviewer finds our answers satisfactory and considers updating the score accordingly! We are more than happy to answer any additional questions that you may have. | Rebuttal 1:
Rebuttal: **General Response**
We would like to thank all the reviewers for their constructive suggestions and comments. Here, we summarize our responses to some common questions raised by multiple reviewers:
**Q1: Poor performance of AIRS in MS COCO Large Objects (Reviewers mhHL and Q21q)**
In this work, we aim to improve detection performance by achieving a good balance between objects of different sizes, and the $AP$ metric is designed to assess the overall effectiveness of detecting objects at all granularities. Compared to competitive baselines, AIRS is superior on all datasets. We agree that placing more focus on smaller and more difficult objects lowers the performance of AIRS on $AP^{L}$ and $AP^{M}$ in MS COCO. However, this is expected behavior: most objects in MS COCO are very large, and therefore the cost of missing smaller objects in the existing two-stage detectors seems to be very low. As such, many two-stage detectors have superior performance (see Table 1 of the paper). In contrast, as our technique leverages a one-stage detector to better cover dense objects, it is relatively less effective at detecting very large objects (as evidenced by the lower performance of all one-stage detectors in Table 1). It is worth mentioning that on the other datasets, AIRS outperforms all baselines even on large objects. Pascal VOC 2012 is relatively easy and does not contain very large objects; as such, one-stage detectors perform comparably to or even better than two-stage detectors. As for Open Images V4, despite being challenging, it contains a good amount of training samples with larger objects, which provides enough supervision for models to detect these large objects. As such, all one-stage detectors, including our technique, perform comparably to or even better than two-stage detectors.
**Q2: Clarification on use of masks to generate final bounding boxes (Reviewers 4PiS and Q21q)**
We run the trained RL agent on the test image's FPN to generate RL masks. Based on the masked evidential Q-value estimate, the agent selects the next action, which is either a downward or an upward movement. The agent then moves to the next patch and continues the process until it performs an upward movement in layer $L$ or reaches the maximum time step, i.e., $T$. After this hierarchical searching process, we obtain a binary RL mask by recording which patches were visited by the RL agent through its actions (denoted as 1 in the RL mask) and which were not (denoted as 0). Given the RL mask, which is a three-level binary mask covering the feature pyramid network (FPN), each pixel in the FPN is assigned a confidence score in the quality evaluation branch to decide whether it is a positive anchor (by comparing with a threshold). Pixels covered by zero RL mask values have their confidence scores reset to 0, while the other pixels keep their original confidence scores. In this way, the RL mask serves as an additional filter that further eliminates "false positive" predicted bounding boxes. We show an illustration of RL masks in Figure 1 of the attached PDF.
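The mask-then-threshold step described above can be sketched in a few lines. This is a minimal illustration; the function name, the array representation, and the 0.5 threshold are our assumptions, not the paper's implementation:

```python
import numpy as np

def apply_rl_mask(conf_scores, rl_mask, threshold=0.5):
    """Reset the confidence score of every FPN position not visited by the
    RL agent (mask value 0) to zero, then keep only anchors whose masked
    score clears the positive threshold."""
    masked = conf_scores * rl_mask      # unvisited patches -> score 0
    return masked >= threshold          # boolean positive-anchor map

scores = np.array([0.9, 0.7, 0.2, 0.8])
mask = np.array([1, 0, 1, 1])           # 1 = patch visited by the agent
print(apply_rl_mask(scores, mask).tolist())  # [True, False, False, True]
```

Note how the second anchor is suppressed despite its high score (0.7): the mask overrides confidence, which is what lets the filter remove confident false positives.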
**Q3: Comparison with two stage detectors like RPN (Reviewer 8nSu)**
There are key differences between two stage detectors and AIRS.
The former usually relies on a Region Proposal Network (RPN), which is less effective at capturing all targeted objects, especially in dense scenarios. This is because RPN selects anchors from its candidate proposals based on the confidence score, which results in missing many true positive object anchors with low confidence. In contrast, the FPN in AIRS is based on multi-scale feature representations; the number of selected anchors across all layers is far larger than the number proposed by RPN, thereby avoiding the loss of important object anchors. To tackle the many false positive anchors in FPN-based approaches, we propose a novel hierarchical search coupled with an effective exploration-exploitation strategy leveraging evidential Q-learning. As a result, AIRS effectively removes false positive bounding boxes without removing less confident true positive objects. This is also demonstrated in Table 1 of the paper, where two-stage detectors yield lower performance than AIRS in dense object detection.
**Q4: How does the binary RL mask help reduce unnecessary bounding boxes in inference step (Reviewers 4Pis, mhHL, and Q21q)**
We handle two different types of false positive bounding boxes. The first category involves bounding boxes that capture only background with no targeted object. To remove those, our novel exploration-exploitation strategy plays a major role. Specifically, during the adaptive top-down hierarchical search, exploration of the higher layer quickly reveals that there is no object at the lower-level granularity: both the Q-value and the epistemic uncertainty (see Eq. 4) remain low, leading to the removal of bounding boxes on background. The second category involves bounding boxes that cover only part of a given object and are embedded in the larger bounding box covering the whole object. As our approach works in a top-down fashion, once the RL agent has explored the bigger bounding box covering the full object, the model assigns a very low epistemic uncertainty to the partially covering bounding boxes. As such, the model avoids moving downward to a lower-level granularity in the action space, resulting in the removal of unnecessary partially covering bounding boxes.
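The pruning logic described above can be caricatured as a single descent test. This is a hedged sketch inspired by the description, not the paper's actual rule; the thresholds and the function signature are illustrative assumptions:

```python
def should_descend(q_value, epistemic_uncertainty,
                   q_thresh=0.0, u_thresh=0.1):
    """Descend to a finer-grained patch only if the estimated value suggests
    an object may be present, or the patch is still poorly explored
    (high epistemic uncertainty)."""
    return q_value > q_thresh or epistemic_uncertainty > u_thresh

# Background patch, or a partial box after the full object has been
# explored: low value AND low uncertainty -> pruned (no descent).
print(should_descend(-0.5, 0.02))  # False
# Unexplored patch: high uncertainty keeps it from being missed.
print(should_descend(-0.5, 0.40))  # True
```

The two `print` cases mirror the two false positive categories: both are skipped only when value and uncertainty are simultaneously low, so uncertain regions are never pruned prematurely.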
Pdf: /pdf/71a745bf6aa6ddc48a9e1e5044ba289b8499c63c.pdf | NeurIPS_2024_submissions_huggingface | 2024 | Summary: The paper presents an adaptive hierarchical object detection framework for dense object detection using evidential Q-learning with a specially designed reward function, searching through an FPN-based hierarchy in a top-down fashion. Theoretical analysis proves an upper bound on the action-value error, and extensive experiments on various datasets illustrate the superiority of the proposed framework over other SOTA models.
Strengths: 1. The paper proposes a novel evidential Q-learning method in an adaptive hierarchical top-down searching framework for dense object detection.
2. The theoretical analysis of the upper bound on the action-value error is thorough and guarantees fast convergence of the Q-learning.
3. The experimental evaluations are extensive and thorough; the AP scores of the proposed AIRS on different backbones are the highest in most categories (except M&L in MS COCO) on all evaluated datasets, demonstrating the effectiveness of the proposed model.
Weaknesses: 1. The main intention of the proposed AIRS framework is to reduce false positive cases in complex scenes. I would hope the authors give a clearer definition/description of what kind of "false positive" cases they want to avoid and provide visual examples to justify the claim. I see some ambiguity based on the current visual illustrations in the paper. Using Fig 1(a), (c) and Fig 9(k) as examples, if I understand the proposed method correctly, the mask values for the smaller boxes are still 1, because they are still part of the elephant, banana and flower respectively, but it seems the authors view them as "false positive".
2. For the experiments, especially the qualitative comparisons, I would like to see more challenging examples. In the current ones, most smaller unnecessary bounding boxes are fully within another bigger bounding box. In this case, simple post-processing is enough to remove these redundant small bounding boxes from any existing model.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Fig. 2 shows the overview of the AIRS framework, and the RL inference step outputs the binary RL mask, but how is the mask used to generate the final detected bounding boxes? This is not so clear to me; I hope the authors can explain more about the inference step.
2. How does the binary RL mask help reduce the number of unnecessary bounding boxes?
Confidence: 2
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes, the authors have addressed the limitations adequately in Appendix E.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Clearer definition/description on what kind of "false positive" cases authors want to avoid and provide challenging visual examples to justify the claim.**
Thank you for the insightful question. A "false positive" detection covers two cases: i) a bounding box covering only part of an object, and ii) a bounding box covering irrelevant information (e.g., background treated as an elephant, as shown in the right center of Figure 1(a)). In object detection, we ideally want to predict bounding boxes that completely cover all relevant objects. To this end, the training data contains ground truth bounding boxes that cover complete objects rather than partial ones. A bounding box capturing only part of an object can be misleading: for instance, in Figure 1(a), one of the bounding boxes captures the ear of the elephant, which is imprecise if interpreted as the whole elephant. This is why false positive cases also include partial bounding boxes. To more clearly demonstrate the effectiveness of our technique in avoiding irrelevant information (i.e., case ii), we have performed an additional qualitative analysis. As shown in Figure 1 of the attached PDF, our AIRS model successfully removes false positive cases that capture irrelevant background information.
**Q2: A simple post-processing should be enough to remove redundant small bounding boxes.**
Thank you for your thoughtful comment! We would like to clarify that the smaller bounding boxes are usually associated with a high confidence score. As such, standard post-processing algorithms such as Non-Maximum Suppression (NMS) cannot properly remove them. This is demonstrated in Figure 1 of the paper, where GFocal (which leverages NMS) is unable to remove the redundant bounding boxes. Further evidence is shown in our experimental results in Table 1 of the paper. Through our novel hierarchical searching strategy coupled with epistemic uncertainty, during inference we can effectively mask out these false positive predicted bounding boxes even if they have high confidence. We note that the redundant small bounding box phenomenon is a common problem in dense object detection that challenges most existing works (e.g., Li et al., Generalized focal loss, NeurIPS 2020).
**Q3: Clarification on use of masks to generate final bounding boxes**
Please refer to the answer to **Q2** in the General Response.
**Q4: How does the binary RL mask help reduce unnecessary bounding boxes in inference step**
Please refer to the answer to **Q4** in the General Response.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer 4PiS,
Thank you again for your constructive comments and insightful questions! In the rebuttal, we have provided
- a clear definition of false positive cases with additional examples (please refer to Figure 1 in the attached PDF),
- justification on why simple post-processing may not be sufficient to remove redundant bounding boxes,
- clarification on use of masks to generate final bounding boxes (please refer to the answer to Q2 in the General Response),
- justification on how the binary RL mask reduces unnecessary bounding boxes (please refer to the answer to Q4 in the General Response).
By addressing your comments, we believe that quality of the paper has been improved and we appreciate the reviewer's support on that. We are more than happy to answer any additional questions that you may have. | null | null | null | null | null | null |
Boundary Decomposition for Nadir Objective Vector Estimation | Accept (poster) | Summary: This paper proposes a nadir point estimation method for evolutionary multi-objective optimization, named BDNE. BDNE decomposes the MOP into boundary subproblems and uses a bilevel architecture to optimize them. Theoretical analysis and empirical results demonstrate the effectiveness of the proposed method.
Strengths: This paper studies a problem: nadir point specification. A method based on decomposition is proposed, which is different from the past approaches. Some mathematical properties of the proposed decomposition method are demonstrated. An empirical study is conducted to highlight the effectiveness of the proposed method.
Weaknesses: Generally, this paper is not well written, and the empirical study is not well organized.
## Presentation
I believe that interested readers can easily follow the authors' reasoning from the Abstract to Section 2, and the research background of this paper is also explained very clearly. However, starting from Section 3, it becomes very confusing. In Section 3.1, the authors introduce the boundary subproblem (Eq. 3) without providing any motivation for doing so. Although the authors title Section 2 as Motivation, I feel it actually serves as Related Work because this part only summarizes some previous work and does not touch on how the authors are thinking about this problem from a new perspective. After Eq. 3, the authors propose a new dominance relation and its three properties. I am very curious why the authors emphasize these three properties, as they do not seem to be used later in the paper. Subsequently, the authors present some theorems but do not sufficiently explain the relationship between these theorems and the new method proposed in this paper. In Section 3.2, the authors use a new boundary subproblem (Eq. 9) instead of Eq. 3 discussed in Section 3.1, which makes it unclear whether these mathematical properties in Section 3.1 are related to the proposed BDNE algorithm. Additionally, these properties only hold under very ideal assumptions (L128-L130).
Moreover, I suggest that the authors use some illustrations when introducing these new concepts, such as the boundary subproblem and cone-domination, to better help readers understand these concepts. Additionally, it would be worthwhile to explain the motivation behind them. Another area for improvement is the tables in the paper. The authors have recorded the experimental data of the baseline and BDNE in separate tables (Table 3 and Table 4), which makes it difficult for readers to compare the numbers. The authors could merge these two tables into one, making the data comparison more intuitive. Tables 7-9 have the same issue.
## Experiment
The empirical study of this paper does not adequately demonstrate the significance of BDNE in MOO. The experiments only show that BDNE can determine the nadir point more accurately than the baseline. However, I believe that many readers are more interested in the significance of an accurate nadir point for MOAs, such as whether it can improve the algorithm's convergence ability or enhance the diversity of the solution set. Currently, most MOAs do not have an independent mechanism to determine the nadir point, so there may not yet be a consensus in the MOO field on the necessity and significance of an independent algorithm designed solely to obtain the nadir point. Therefore, this paper is expected to demonstrate this clearly. For instance, the authors mention that the nadir point can be used for normalization, but normalization may not require a very accurate estimate of the nadir point. A rough estimation usually works as well. Many algorithms do not even have a normalization process. In practical applications, the scales of different objectives can be adjusted manually. In summary, I believe that the contribution of this paper to the field of MOO is limited, and the authors did not adequately demonstrate its practical value.
Here are some minor remarks for the experiment part:
* What criteria were used to select these test problems? These test problems seem to be chosen arbitrarily or selected based on benchmark results after the fact.
* In the experiments, 180,000 FEs were used just to find the nadir point. Isn't this too many? Current SOTA MOAs generally require less than 50,000 FEs to obtain a well-converged and uniformly distributed solution set for these test problems.
* Why do some test problems have 3, 5, and 8 objectives, while some only have 3 objectives, e.g., both MaF2 and DTLZ5 can be configured with any number of objective functions, but they only have 3 objectives in the experiment.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please refer to *Weaknesses*.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: No concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your time and effort in reviewing our work. We are glad to know that you find our method is novel and has theoretical guarantees.
We address your concerns as follows.
## W1. Presentation.
**Transition from Section 2 to Section 3.**
Thank you for raising this concern. It would be better to title Section 2 "Related Work". We will also include transitional statements at the beginning of Section 3.
**Role of properties.**
The three properties imply that cone domination defines a strict partial order. These properties are not utilized subsequently. We apologize for the implicit statement and will simplify the description based on your response.
**Relationship between theorems and the new method.**
We believe the relationship is clearly stated. Each theorem is associated with a corresponding explanation or utilized in a specific context. We may improve clarity by providing a summary.
Theorems 1 and 2 are primary theoretical foundations for employing BDNE to determine the nadir objective vector. The user-friendly parameter $\mu$ in BDNE is proposed based on Theorem 3. Theorem 5 is instrumental in some strategies of BDNE (e.g., the coping strategy for flat fitness). Theorem 4 extends Theorem 1. While it is not directly related to BDNE, it is crucial for proving Theorem 5.
**Illustrations of new concepts.**
Thank you for the suggestion. We will show the level surfaces of boundary subproblems and their optimal solutions on an example $PF$ (see Figure 1 of the one-page PDF). Cone domination and proper Pareto optimality are not new concepts. We have cited relevant studies in our paper.
**Motivation behind mentioning concepts.**
Cone domination has a strong connection with the boundary subproblem as reported in Theorem 2, which is the motivation for mentioning this concept. Proper Pareto optimality is introduced to replace the obscure parameter (i.e., $\alpha$) with an easy-to-understand one (i.e., $\mu$). We will improve the clarity in the revised paper.
**Tables.**
Thank you for the suggestion. We will present the results in one table. An example is Table 1 of the one-page PDF.
## W2. Significance of the accurate nadir objective vector.
We agree with you that normalization may not require an exact nadir objective vector. However, an accurate nadir objective vector is important in many scenarios and many exact methods are proposed in [1,2]. We list some consequences of inaccurate estimation of the nadir objective vector as follows:
* Inaccurate nadir objective vectors **degrade** the performance of exact algorithms [3], evolutionary algorithms [4], and multi-objective learning [5].
* An accurate nadir objective vector is often the **assumption** of interactive algorithms [6,7]. Inaccurate nadir objective vectors may cause biased decisions.
Existing exact methods have limitations in applicability (e.g., continuous problems). Heuristic methods only perform well on manually designed benchmark problems since they often have simple feasible objective regions [4,8,9]. Real-world problems usually have irregular feasible objective regions, making their nadir objective vectors difficult to estimate [10,11]. Therefore, we propose BDNE, which is **the first method with theoretical guarantees and general applicability** in multi-objective optimization. Our method can easily adopt various solvers for different tasks and achieve a trade-off between runtime and accuracy.
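To illustrate the normalization role discussed in this response, the sketch below uses a generic min-max normalization with ideal and nadir vectors (a standard MOO practice, not BDNE's own formulation; all values are illustrative). It shows how an underestimated nadir component inflates the corresponding normalized objective:

```python
import numpy as np

def normalize(f, ideal, nadir):
    """Min-max normalization of an objective vector to roughly [0, 1]:
    f_bar = (f - z_ideal) / (z_nadir - z_ideal)."""
    return (f - ideal) / (nadir - ideal)

f = np.array([5.0, 50.0])              # objectives on very different scales
ideal = np.array([0.0, 0.0])
true_nadir = np.array([10.0, 100.0])
bad_nadir = np.array([10.0, 60.0])     # second component underestimated

print(normalize(f, ideal, true_nadir).tolist())  # [0.5, 0.5] -> balanced
print(normalize(f, ideal, bad_nadir).tolist())   # second objective inflated (~0.83)
```

With the inaccurate nadir, the second objective appears much worse than it is, so any algorithm or decision rule consuming the normalized values is biased toward it; this is the kind of degradation the cited works report.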
## Q1. Eq. (3) versus Eq. (9).
The mathematical properties of Eq. (3) remain valid for Eq. (9). The solutions associated with the critical points are invariant after the normalization (i.e., scaling and translation), which is straightforward. We further explain the reasonability behind this conversion in Lines 185-188.
## Q2. Ideal assumptions in Lines 128-130.
The statements in Lines 128-130 refer to an ideal method, **not** ideal assumptions. This ideal method aims to demonstrate the preliminary feasibility of boundary decomposition in identifying the nadir objective vector. It motivates us to propose the BLOP, which is a practical optimization task.
## Q3. Criteria used to select test problems.
Please refer to Common Concern 3.
## Q4. Too many function evaluations.
Our method can achieve a trade-off between runtime and accuracy. We aim to ensure the algorithm converges sufficiently, validating our theoretical claims empirically. Consequently, we allocate a large number of function evaluations for each algorithm.
## Q5. Results on scalable problems with 3, 5, and 8 objectives.
Our theoretical results hold for any number of objectives. You can refer to Common Concern 3 for an explanation and the supplementary experiments.
## References
[1] Computing the nadir point for multiobjective discrete optimization problems. Journal of Global Optimization, 2015.
[2] On nadir points of multiobjective integer programming problems. Journal of Global Optimization, 2017.
[3] New $\epsilon$-constraint methods for multi-objective integer linear programming: A Pareto front representation approach. EJOR, 2023.
[4] A new two-stage evolutionary algorithm for many-objective optimization. TEVC, 2019.
[5] Hypervolume maximization: A geometric view of Pareto set learning. NeurIPS, 2023.
[6] Multiobjective optimization: Interactive and evolutionary approaches. 2008.
[7] A mini-review on preference modeling and articulation in multi-objective optimization: current status and challenges. Complex \& Intelligent Systems, 2017.
[8] Regular Pareto front shape is not realistic. CEC, 2019.
[9] A survey of normalization methods in multiobjective evolutionary algorithms. TEVC, 2021.
[10] An easy-to-use real-world multi-objective optimization problem suite. Applied Soft Computing, 2020.
[11] A benchmark-suite of real-world constrained multi-objective optimization problems and some baseline results. Swarm and Evolutionary Computation, 2021.
---
Rebuttal 2:
Comment: Thank you for your response. My primary concern is that the authors have not adequately addressed the practical significance of an exact nadir point. I would like to emphasize that most existing multi-objective optimization algorithms estimate the nadir point based on the current solution set. While I acknowledge that such estimates are not accurate, they do not constitute a bottleneck for these algorithms. I recognize that the exact calculation of the nadir point is an interesting problem, but I believe that the contribution of this paper to the field of multi-objective optimization is very limited. The empirical study in this paper only demonstrates that the proposed method could accurately calculate the nadir point, but not how the proposed method could improve the performance of a multi-objective optimization algorithm.
---
Rebuttal 3:
Title: Thank you very much for your comment [1/2].
Comment: Thanks for your further comments. Below is our point-by-point response.
``Practical significance of an exact nadir point.``
Firstly, we must clarify that our method is not merely an exact algorithm: its accuracy and runtime can be balanced via a controllable parameter. Secondly, we do not claim that an accurate nadir objective vector is necessary for **all** multi-objective optimization scenarios. What we emphasize is that we should not ignore the cases where an accurate nadir objective vector is essential. Many existing works are devoted to accurate nadir objective vector estimation [1-13]. We provide the following three examples:
* The nadir objective vector frequently serves as the reference point of scalarization methods. Some applications require specifying an accurate nadir objective vector beforehand [10,11]. An underestimated nadir objective vector makes some Pareto-optimal solutions unattainable, whereas an overestimated one results in poor uniformity of obtained objective vectors.
* The nadir objective vector determines the search space for some exact methods, such as AUGM-2. Exact methods without an accurate nadir objective vector may skip some Pareto-optimal solutions or exhibit longer runtimes. A recent experimental analysis can be found in [12].
* Many interactive methods, such as the NIMBUS method, require accurate nadir values to present the promising region of the objective space to decision-makers [13]. If inaccurate nadir values are given, more interactions are required, or a biased decision may be made.
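The first bullet can be illustrated with a small sketch (our own toy example, not code from any cited work): normalizing objectives with an underestimated nadir vector maps some Pareto-optimal points outside the unit box, so reference-point-based scalarization methods may never attain them.

```python
import numpy as np

def normalize(F, z_ideal, z_nad):
    """Standard objective normalization used by scalarization methods."""
    return (F - z_ideal) / (z_nad - z_ideal)

# Toy Pareto front (minimization) with ideal point (0, 0) and true nadir (2, 2).
F = np.array([[0.0, 2.0], [1.0, 1.0], [2.0, 0.0]])
z_ideal = np.array([0.0, 0.0])

exact = normalize(F, z_ideal, np.array([2.0, 2.0]))  # accurate nadir
under = normalize(F, z_ideal, np.array([1.0, 1.0]))  # underestimated nadir

# With the accurate nadir the front fills [0, 1]^2; with the underestimate
# the extreme points map outside it and become unattainable.
print(exact.max(), under.max())  # -> 1.0 2.0
```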
[1] Ehrgott et al. Computation of ideal and nadir values and implications for their use in MCDM methods. EJOR, 2003.
[2] Klamroth et al. Integrating Approximation and Interactive Decision Making in Multicriteria Optimization. OR, 2007.
[3] Dhaenens et al. K-PPM A new exact method to solve multi-objective combinatorial optimization problems. EJOR, 2010.
[4] Florios et al. Generation of the exact Pareto set in Multi-Objective Traveling Salesman and Set Covering Problems. Applied Mathematics and Computation, 2014.
[5] Kirlik et al. Computing the nadir point for multiobjective discrete optimization problems. Journal of Global Optimization, 2015.
[6] Köksalan et al. Finding nadir points in multi-objective integer programs. Journal of Global Optimization, 2015.
[7] Boland et al. A new method for optimizing a linear function over the efficient set of a multiobjective integer program. EJOR, 2017.
[8] Özpeynirci et al. On nadir points of multiobjective integer programming problems. Journal of Global Optimization, 2017.
[9] Altamiranda et al. A New Exact Algorithm to Optimize a Linear Function over the Set of Efficient Solutions for Biobjective Mixed Integer Linear Programs. INFORMS Journal on Computing, 2020.
[10] Zhang et al. Hypervolume maximization: A geometric view of Pareto set learning. NeurIPS, 2023.
[11] Mena et al. Multi-objective two-stage stochastic unit commitment model for wind-integrated power systems: A compromise programming approach. International Journal of Electrical Power & Energy Systems, 2023.
[12] Mesquita-Cunha et al. New $\epsilon$-constraint methods for multi-objective integer linear programming: A Pareto front representation approach. EJOR, 2023.
[13] Branke et al. Multiobjective optimization: Interactive and evolutionary approaches. 2008.
``Most multi-objective optimization algorithms estimate the nadir point based on the current solution set.``
We respectfully disagree with this statement. As surveyed in [1], it is mainly multi-objective evolutionary algorithms that estimate the nadir objective vector from the current solution set. Other widely adopted nadir objective vector estimation methods include the pay-off table method [2], extreme-point-based methods [3], subset-selection-based methods [4], and exact methods [5].
[1] He et al. A survey of normalization methods in multiobjective evolutionary algorithms. TEVC, 2021.
[2] Reeves et al. Minimum values over the efficient set in multiple objective decision making. EJOR, 1988.
[3] Deb et al. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints. TEVC, 2014.
[4] Mallipeddi et al. A twin-archive guided decomposition based multi/many-objective evolutionary algorithm. Swarm and Evolutionary Computation, 2022.
[5] Özpeynirci et al. On nadir points of multiobjective integer programming problems. Journal of Global Optimization, 2017.
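For concreteness, the first two estimation styles mentioned above (current-solution-set estimate and pay-off table) can be sketched in a few lines; this is our own toy illustration on a small minimization front, not code from any cited work.

```python
import numpy as np

def nondominated(F):
    """Return the rows of F (minimization) not dominated by any other row."""
    keep = []
    for i, f in enumerate(F):
        dominated = any(
            np.all(g <= f) and np.any(g < f)
            for j, g in enumerate(F) if j != i
        )
        if not dominated:
            keep.append(i)
    return F[keep]

def nadir_from_solution_set(F):
    """Estimate the nadir point as the componentwise max over the
    current non-dominated set (common in evolutionary algorithms)."""
    return nondominated(F).max(axis=0)

def nadir_from_payoff_table(F):
    """Pay-off table estimate: for each objective, take the row that
    minimizes it, then take the componentwise max over those rows."""
    rows = F[np.argmin(F, axis=0)]
    return rows.max(axis=0)

F = np.array([[0.0, 1.0], [1.0, 0.0], [0.4, 0.4], [0.9, 0.9]])  # toy objective vectors
print(nadir_from_solution_set(F))   # -> [1. 1.]
print(nadir_from_payoff_table(F))   # -> [1. 1.]
```

On this toy front both estimates coincide with the true nadir; with an incomplete solution set or weakly efficient extreme points, both styles can under- or overestimate it, which is the failure mode discussed above.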
---
Rebuttal 4:
Title: Thank you very much for your comment [2/2].
Comment: ``Inaccurate estimates do not constitute a bottleneck for multi-objective optimization algorithms.``
We also respectfully disagree with this statement. Experimental results on existing benchmarks cannot reflect the algorithm's real capability, as these benchmarks are manually designed with special features and can only be used to evaluate the algorithm's specific properties [1]. We provide the following experiment to demonstrate that inaccurate nadir objective vector estimation can have a significant impact on algorithm performance. We consider MOEA/D and NSGA-III. We replace their nadir objective vector estimation methods with ours and denote the variant as "V1". The mean and standard deviation of the hypervolume metric values are reported in the table. We can find that both MOEA/D-V1 and NSGA-III-V1 significantly outperform their original versions, respectively. Additionally, MOEA/D-V1 exhibits clearly smaller standard deviations compared with its original versions (e.g., 0.01 versus 0.003), indicating more stable performance. NSGA-III-V1 also yields stable performance.
|Problem|$m$|MOEA/D|MOEA/D-V1|
|--|--|--|--|
|TN1|3|0.3891±0.01062(2)-|0.3986±0.002891(1)|
|TN2|3|0.2166±0.01537(2)-|0.2271±0.002339(1)|
|TN3|3|0.389±0.01352(2)-|0.3987±0.004753(1)|
|TN4|3|0.2135±0.02198(2)-|0.2269±0.002335(1)|
|Problem|$m$|NSGA-III|NSGA-III-V1|
|--|--|--|--|
|TN1|3|0.3361±0.02089(2)-|0.3769±0.003307(1)|
|TN2|3|0.2114±0.01662(1)=|0.2071±0.002607(2)|
|TN3|3|0.3383±0.02713(2)-|0.3782±0.003549(1)|
|TN4|3|0.1442±0.02976(2)-|0.2096±0.003082(1)|
We hope the above experiment can help you better understand the significance of an accurate nadir objective vector.
[1] Ishibuchi et al. Performance of Decomposition-Based Many-Objective Algorithms Strongly Depends on Pareto Front Shapes. TEVC, 2017.
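For readers less familiar with the hypervolume metric reported in the tables above, a minimal two-objective sketch follows; this is our own illustration for the bi-objective case, not the many-objective implementation used in the experiments.

```python
def hypervolume_2d(points, ref):
    """Hypervolume of a 2-objective minimization front w.r.t. a
    reference point `ref`: the area dominated by the front and
    bounded above by `ref` (larger is better)."""
    # keep points that strictly dominate the reference point, sorted by f1
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                       # skip dominated points
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
print(hypervolume_2d(front, ref=(1.1, 1.1)))  # ≈ 0.46
```

Because the metric is computed relative to a reference point derived from the (estimated) nadir vector, an inaccurate nadir estimate shifts the dominated area and distorts the comparison, which is why the tables above report it.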
``Limited contribution to multi-objective optimization.``
We also cannot agree with this point. Nadir point estimation has long been a fundamental and important research direction in multi-objective optimization, and many papers have been published on this topic. A Google Scholar search for *("nadir point" OR "nadir objective vector") AND ("estimate" OR "find" OR "compute")* returns over 8,000 results. Compared with existing methods, ours is the first with theoretical guarantees and general applicability. We would like to emphasize that our method is not only able to accurately estimate the nadir objective vector but can also trade accuracy for faster runtime. It can therefore cope with scenarios that require an accurate nadir objective vector while remaining broadly applicable to cases that do not.
``How the proposed method could improve the performance of an algorithm?``
We further improve NSGA-III-V1 by focusing the search within the promising region defined by the nadir objective vector; the improved version is termed NSGA-III-V2. The statistical results of the hypervolume metric are presented in the following table. According to the results, our method shows great potential to improve the performance of an algorithm. Moreover, NSGA-III-V2 retains more solutions within the promising region than NSGA-III-V1 (30 versus 39 on TN1, 25 versus 32 on TN2, 32 versus 38 on TN3, and 25 versus 33 on TN4). This explains the performance improvement and demonstrates the effectiveness of the scheme.
|Problem|$m$|NSGA-III|NSGA-III-V1|NSGA-III-V2|
|--|--|--|--|--|
|TN1|3|0.3361±0.02089(3)-|0.3769±0.003307(2)-|0.3849±0.003166(1)|
|TN2|3|0.2114±0.01662(2)-|0.2071±0.002607(3)-|0.2125±0.002711(1)|
|TN3|3|0.3383±0.02713(3)-|0.3782±0.003549(2)-|0.3821±0.003803(1)|
|TN4|3|0.1442±0.02976(3)-|0.2096±0.003082(2)-|0.213±0.00347(1)|
---
Rebuttal Comment 4.1:
Comment: Thank you for your response. This new experiment is very meaningful. Can you report the FE budget for this experiment?
Do MOEA/D and MOEA/D-V1 use the same budget (including those used in the nadir point estimation)?
---
Reply to Comment 4.1.1:
Comment: Thank you very much for your prompt reply. The number of function evaluations is the same as in our original manuscript, i.e., 180,000. We confirm that both MOEA/D and MOEA/D-V1 use the same FE budget. MOEA/D-V1 consumes 12 FEs for nadir objective vector estimation in each iteration and therefore uses fewer iterations (about 230 fewer than MOEA/D).
---
Reply to Comment 4.1.2:
Comment: We notice that your rating remains unchanged. Could you please let us know if your previous concerns have been addressed?
If you have any new concerns, please let us know as well.
---
Rebuttal 5:
Comment: Thank you for your response. Now most of my major concerns have been satisfactorily addressed, so I increased my rating. I know it is very difficult to prepare the additional results during the rebuttal period, so I do appreciate your efforts. However, I strongly suggest the authors incorporate a more comprehensive and solid empirical study in the camera-ready version (if accepted) including a comparison with other strong baselines on more test problems and an ablation study to explore how BDNE helps improve the performance of MOAs in practical scenarios. I think a wide range of audiences may be interested in these results.
---
Rebuttal Comment 5.1:
Comment: We sincerely thank you for raising your score and are glad to know your major concerns have been satisfactorily addressed. We will include a more comprehensive empirical comparison in our final version. Thank you very much for your time and valuable suggestions, which will greatly help us improve our manuscript.
---
Review:
Summary: The authors model the task of computing the nadir objective vector as several bilevel optimization problems. A corresponding algorithm named BDNE is designed to estimate the nadir objective vector for black-box multi-objective optimization problems. BDNE scalarizes a multiobjective optimization problem into a set of boundary subproblems. By utilizing bilevel optimization on the boundary subproblems, the nadir objective vector is identified. To demonstrate the performance, the new approach is applied to some benchmark problems and real-world problems.
Strengths: The authors introduce bilevel optimization problems, theoretically ensuring that the nadir objective vector is solved. Although the corresponding algorithm is an approximation algorithm, experimental results show that it can significantly outperform existing heuristic methods. Moreover, the article is reasonably well organized. The examples are detailed clearly, as is the new approach.
Weaknesses: 1. The nadir objective vector estimation is just part of the process of solving the original multi-objective optimization problem. The development of BDNE may not be highly significant.
2. The $m$ boundary weight vectors remain unchanged throughout, which may waste computational resources.
3. Solving the original problem involves creating a new optimization problem, potentially increasing complexity. BDNE also requires a longer runtime than the compared algorithms.
4. The illustration of how to set $\mu$ is a bit preliminary.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why are the $m$ boundary weight vectors fixed as unit vectors? Can they change after some iterations?
2. The setting of $\mu$ deserves further discussion.
3. What increases the computational cost of BDNE? The computational complexity of BDNE should be analyzed.
4. What advantages does this method have compared to existing exact algorithms when applied to discrete problems?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The longer runtime of the proposed algorithm might also be a limitation that is not thoroughly discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thank you very much for your time and effort in reviewing our work. We are glad to know that you find this work well organized, the analysis of existing methods detailed and clear, and our method demonstrated both theoretically and experimentally.
We address your concerns as follows.
## W1. Significance of BDNE.
The nadir objective vector is an essential component of several fundamental operations (e.g., normalization, interaction, and guided search) in multi-objective optimization. In many engineering problems, the nadir objective vector has already been utilized to facilitate optimization [1,2,3].
Despite the introduction of many exact and heuristic methods, a unified paradigm for estimating the nadir objective vector remains absent. Ours is **the first method with theoretical guarantees and general applicability** in multi-objective optimization. It can be applied to various scenarios by employing appropriate solvers. Furthermore, the desired trade-off can easily be specified by decision-makers via a single parameter.
## W2/Q1. Unchanged boundary subproblems waste computational resources. Can they change after some iterations?
These unchanged boundary subproblems use the unit vectors as their weight vectors. They drive the search toward the ideal objective vector, which serves as $\mathbf{z}^{r1}$. Besides, they enhance population diversity when the adjustable boundary subproblems become similar. In Table 1 of the one-page PDF, BDNE-V2 denotes BDNE with the unit-vector boundary subproblems removed. BDNE outperforms BDNE-V2 across most instances, which demonstrates the effectiveness of these subproblems empirically.
Changing these subproblems after some iterations is an interesting idea. Further investigations will be conducted to explore this.
## W3/Q3. Longer runtime.
The runtime of BDNE strongly depends on the implementation. The specific analysis is as follows. The reproduction procedure has a time complexity of $O(N)$, where $N$ is the population size. The calculation of $\mathbf{r}$ governs the complexity of the selection procedure, which takes $O\left(N^2\log N\right)$. The adjustment of boundary weight vectors, with complexity $O\left(N\log\left(\frac{N}{m}\right)\right)$, is executed only periodically. Therefore, **the selection procedure governs the overall complexity**, and the main loop of our implementation has a time complexity of $O\left(N^2\log N\right)$. We will include this computational complexity analysis in the revised version.
We adopt the effective selection procedure described in [4], which facilitates the application of BDNE to various problems. The complexity can be significantly reduced by using a **simpler selection procedure** or choosing an **alternative solver**, which deserves further investigation.
## W4/Q2. Criteria of setting $\mu$.
$\mu>0$ is a user-defined parameter, indicating a bounded trade-off preferred by decision-makers. Once $\mu$ is determined, $\alpha$ in Eq. (3) can be calculated (see Lines 155-162). If the preferences of decision-makers are not available, $\mu$ can be set to a reasonably large value. In the experimental studies, we demonstrate that BDNE with $\mu=100$ can have an overall good performance. We will improve the clarity in the revised paper.
## Q4. BDNE versus exact methods on discrete problems.
A comparison of our method with existing exact methods is summarized in the following table. Existing exact methods involve more complicated optimization tasks, which include more single-objective optimization problems or additional slack variables. Some of them require the variables to be discrete. Moreover, none of them accommodates a user-defined trade-off.
| Method | Number of single-objective optimization problems | Number of slack variables | Discrete variable | User-defined trade-off |
|:-----------:|:------------------------------------------------:|:-------------------------:|:-----------------:|:---------------------------------------:|
| KS [5] | 2 | Null | Unnecessary | Incompatible |
| KL [6] | 2 | Increasing | Necessary | Incompatible |
| FD&IS [7] | 2 | Increasing | Necessary | Incompatible |
| BDNE (ours) | 1 | Null | Unnecessary | Compatible |
We choose an exact solver for BDNE to validate its effectiveness empirically. BDNE has a leading performance in terms of runtime (please refer to Common Concern 2).
## References
[1] Weerasuriy et al. Performance evaluation of population-based metaheuristic algorithms and decision-making for multi-objective optimization of building design. Building and Environment, 2021.
[2] Mena et al. Multi-objective two-stage stochastic unit commitment model for wind-integrated power systems: A compromise programming approach. International Journal of Electrical Power \& Energy Systems, 2023.
[3] Ekhtiari et al. Optimizing the dam site selection problem considering sustainability indicators and uncertainty: An integrated decision-making approach. Journal of Cleaner Production, 2023.
[4] Zheng et al. A generalized scalarization method for evolutionary multi-objective optimization. AAAI, 2023.
[5] Kirlik et al. Computing the nadir point for multiobjective discrete optimization problems. Journal of Global Optimization, 2015.
[6] Köksalan et al. Finding nadir points in multi-objective integer programs. Journal of Global Optimization, 2015.
[7] Özpeynirci et al. On nadir points of multiobjective integer programming problems. Journal of Global Optimization, 2017.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply and the additional experiments on discrete problems, which make the significance of the proposed BDNE clearer to me. However, I am still confused about the "trade-off" mentioned in W1 and W4/Q2. If the authors could provide more explanations or illustrations, I would consider raising my rating.
---
Reply to Comment 1.1.1:
Comment: Thanks for your further comments. We apologize for the unclear expression. The two trade-offs are different.
``Trade-off in W1.``
This refers to the trade-off between the accuracy of the estimated nadir objective vector and the runtime of BDNE. This trade-off can be controlled by the parameter $\tau_u$. A larger value of $\tau_u$ results in higher accuracy, while a smaller value of $\tau_u$ leads to a shorter runtime. We provide the following experiments to show the effects of $\tau_u$. The first and second tables present statistical results of errors and runtimes, respectively. We can find that the runtime is getting shorter as the value of $\tau_u$ decreases; the error is getting smaller (or the accuracy is getting higher) as the value of $\tau_u$ increases.
|Problem|$m$|$\tau_u=2$|$\tau_u=6$|$\tau_u=10$|$\tau_u=14$|$\tau_u=18$|
|--------------|-----|--------------------|----------------------|------------------------|-----------------------|------------------------|
|TN4|3|0.01654±0.01637(5)|0.001661±0.001799(4)|0.0001939±0.0002971(3)|0.000134±0.0004145(2)|5.646e-05±0.0002382(1)|
||5|0.08632±0.05066(5)|0.01003±0.01915(4)|0.009246±0.02555(2)|0.003051±0.007012(1)|0.009329±0.02633(3)|
||8|0.08955±0.09165(5)|0.03703±0.05067(3)|0.04032±0.08866(4)|0.03051±0.04605(2)|0.0255±0.02799(1)|
|Average rank||5(5)|3.6667(4)|3(3)|1.6667(1)|1.6667(1)|
|Problem|$m$|$\tau_u=2$|$\tau_u=6$|$\tau_u=10$|$\tau_u=14$|$\tau_u=18$|
|--------------|-----|-----------------|-----------------|-----------------|----------------|----------------|
|TN4|3|4.551±0.1162(1)|12.52±0.3078(2)|20.04±0.4353(3)|27.44±0.488(4)|34.89±0.645(5)|
||5|8.587±0.2431(1)|23.2±0.3991(2)|37.45±0.5952(3)|51.6±0.8106(4)|65.96±1.079(5)|
||8|16.22±0.2524(1)|44.8±0.6788(2)|72.78±1.144(3)|100.9±1.64(4)|129.1±1.898(5)|
|Average rank||1(1)|2(2)|3(3)|4(4)|5(5)|
``Trade-off in W4/Q2.``
This refers to the trade-off between objectives, controlled by the parameter $\mu$ and determined by the decision-maker. Specifically, $\mu$ is the increment in the value of one objective function that the decision-maker is willing to tolerate **in exchange for a one-unit decrement in another objective function** (in the context of a minimization problem). Each Pareto-optimal solution corresponds to a particular value of $M$ (see Definition 10 in our original manuscript). If $M \leq \mu$, the Pareto-optimal solution aligns with the decision-maker's preference; otherwise, it should be discarded. We refer to the region where the objective vectors dominate the nadir objective vector estimated by BDNE as the promising region; solutions outside this region can be ignored. Without a preference for $\mu$, it is set to a reasonably large value and BDNE finds the exact nadir objective vector. When this preference is available, the nadir objective vector estimated by BDNE may further narrow down the promising region.
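The promising-region filtering described above can be sketched as follows; this is our own illustration with hypothetical numbers (for minimization, a vector lies in the promising region iff it dominates the estimated nadir vector).

```python
import numpy as np

def in_promising_region(F, z_nad):
    """Boolean mask: which objective vectors (minimization) dominate
    the estimated nadir vector z_nad, i.e. lie in the promising region."""
    F, z_nad = np.asarray(F), np.asarray(z_nad)
    return np.all(F <= z_nad, axis=1) & np.any(F < z_nad, axis=1)

F = np.array([[0.2, 0.3], [0.6, 0.9], [0.9, 1.2]])  # toy objective vectors
z_nad = np.array([0.8, 1.0])  # hypothetical estimate under a small mu
print(in_promising_region(F, z_nad))  # -> [ True  True False]
```

A smaller $\mu$ shrinks `z_nad` componentwise, so more vectors fall outside the mask and can be ignored by the downstream search.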
We conduct experiments to show the effects of $\mu$. The estimation errors are summarized in the following table. We can find that a smaller value of $\mu$ yields a larger error. We also examine the position of the estimated nadir objective vector: the estimated one is dominated by the exact one on every instance. These findings indicate that a smaller value of $\mu$ yields a more refined promising region.
|Problem|$m$|$\mu=1$|$\mu=5$|$\mu=10$|$\mu=50$|$\mu=100$|
|---------|---|---------------------|---------------------|---------------------|----------------------|-----------------------|
|mDTLZ2|3|0.5211±0.0006548(5)|0.1548±0.001253(4)|0.0793±0.002107(3)|0.01325±0.001883(2)|0.006579±0.001046(1)|
||5|0.3927±0.0108(5)|0.08951±0.008372(4)|0.0471±0.003567(3)|0.01099±0.00667(2)|0.004697±0.0004873(1)|
||8|0.2884±0.01397(5)|0.06433±0.004405(4)|0.03188±0.002174(3)|0.005223±0.001297(2)|0.003585±0.004993(1)|
---
Review:
Summary: This paper looks at the problem of finding the nadir objective vector in multi-objective optimization problems, which is important for both optimization and decision-making. This work first analyzes the drawbacks of existing techniques, and then proposes a new method for nadir objective vector estimation using bilevel optimization and a multi-objective evolutionary approach. Experimental results show that the proposed method can achieve good performance in several test examples.
Strengths: This paper is very well written. As far as I can tell, the problem discussed in the paper is important for multi-objective optimization.
The analysis of existing methods is convincing. Particularly, the paper gives a clear example of poor estimation of the vector by existing heuristic techniques. The proposed method has theoretical guarantees and can be applied to a wide range of problems. The effectiveness of this method is also verified experimentally.
Weaknesses: I do not find any obvious weakness, but I have the following questions:
1. Two different evolutionary algorithms are used to solve the upper- and lower-level problems respectively. Why is CMA-ES used for the optimization of upper-level problems but not for lower-level problems?
2. How should one select a value for the parameter mu in BDNE?
3. Suppose I select a relatively small value of mu and obtain a point consisting of the optimal values of bilevel optimization problems. Does this mean that in the area where the objective vectors do not dominate the point, there are no Pareto optimal objective vectors with a minimum M value less than mu?
4. I notice that the error on CRE5-3-1 is large (about 20%). Please explain this.
5. Why are algorithms only tested on continuous problems?
Technical Quality: 3
Clarity: 4
Questions for Authors: Refer to the above comments.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The discussion in the conclusion section is comprehensive enough.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your time and effort in reviewing our work. We are glad to know that you find this work important and well-written, the analysis of existing methods convincing, and our method widely applicable.
We address your concerns as follows.
## Q1. Why is CMA-ES used for the optimization of upper-level problems but not for lower-level problems?
We utilize CMA-ES as the optimizer for the ULOP for two reasons. First, CMA-ES is a state-of-the-art algorithm for single-objective black-box optimization. Second, optimal solutions to the LLOPs are not always available; instead, approximate solutions are often obtained, so the function values observed at the upper level may be noisy. CMA-ES is suitable for this task as it demonstrates strong robustness in optimizing noisy functions [1].
CMA-ES could also be used to solve each LLOP, since the LLOP is a single-objective optimization problem. However, we utilize an MOEA to solve the LLOPs cooperatively, because the superiority of MOEAs in solving multi-objective black-box optimization problems has been demonstrated both empirically and theoretically [2,3].
## Q2. Criteria of setting $\mu$.
$\mu>0$ is a user-defined parameter, indicating a bounded trade-off preferred by decision-makers. Once $\mu$ is determined, $\alpha$ in Eq. (3) can be calculated (see Lines 155-162). If the preferences of decision-makers are not available, $\mu$ can be set to a reasonably large value. In the experimental studies, we demonstrate that BDNE with $\mu=100$ can have an overall good performance.
## Q3. A small value of $\mu$: Suppose I select a relatively small value of $\mu$ and obtain a point consisting of the optimal values of bilevel optimization problems. Does this mean that in the area where the objective vectors do not dominate the point, there are no Pareto optimal objective vectors with a minimum $M$ value less than $\mu$?
Yes. That is, a Pareto-optimal solution does not address the user-defined trade-off $\mu$ if its objective vector does not dominate the point constructed from the optimal values of the BLOPs. The optimal solution of the $i$-th BLOP has the largest value of the $i$-th objective function among all the solutions addressing the trade-off. This conclusion can be deduced from Theorems 2 and 3. A satisfactory pseudo-nadir objective vector can thus be obtained if the decision-makers prefer a relatively small value of $\mu$. This preference-compatible property gives our method greater application potential. We will add the new claims and their proofs to the revised version.
## Q4. Large error on CRE5-3-1.
Thank you for the careful review. The $PF$ of CRE5-3-1 is unknown and is approximated by the non-dominated objective vector set provided in [4]. This approximate $PF$ may be the reason for the large error, since the results of DNPE and BDNE are similar.
## Q5. Why are algorithms only tested on continuous problems?
We have also tested BDNE on discrete problems using an exact solver. Compared with exact methods, BDNE still shows superior performance in terms of runtime (please refer to Common Concern 2).
## References
[1] Nikolaus Hansen. The CMA evolution strategy: A tutorial. arXiv preprint, 2016.
[2] Zhang et al. MOEA/D: A multiobjective evolutionary algorithm based on decomposition. TEVC, 2007.
[3] Dang et al. Crossover can guarantee exponential speed-ups in evolutionary multi-objective optimisation. AIJ, 2024.
[4] Tanabe et al. An easy-to-use real-world multi-objective optimization problem suite. Applied Soft Computing, 2020.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. My concerns have been resolved, so I will maintain the positive score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your time and effort in reviewing our work, and we are glad to know your concerns have been resolved.
---
Review:
Summary: This work proposes bilevel optimization problems whose optimal values align with the nadir point. Some schemes are suggested to address potential flat fitness in the upper-level optimization. An algorithm based on evolutionary computation is then proposed for black-box cases.
Strengths: 1. This work demonstrates both theoretically and experimentally that the nadir point can be obtained or approximated via newly proposed optimization problems.
2. Compared to existing optimization problems for finding the nadir point, the proposed problem is more concise and applicable to a broader range of scenarios.
3. Several schemes are used to cope with the issue of flat fitness in the upper-level problem.
Weaknesses: 1. The algorithm proposed in this paper is an approximation algorithm rather than an exact one.
2. The algorithm is general. Its performance may be unsatisfactory for some specific problems.
3. The lower-level optimization problem consists of a series of single-objective subproblems, limiting the applicability of some multi-objective optimization algorithms.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is there evidence of really complicated feasible objective spaces from industrial applications?
2. The algorithm for the lower-level optimization problem appears to be quite complicated. Can we consider changing it to a different one?
3. Can evolutionary algorithms be used in the optimization problems proposed by Gokhan et al. (2015)?
4. How are the benchmarks chosen in the experiment?
5. What is the performance if coping strategies for flat fitness are eliminated?
6. I also wonder about the effects of removing the subproblems with unit vectors.
7. I recommend providing a more detailed analysis of the deficiencies of the heuristic methods.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This work mentions one limitation: the lack of fine-tuning to a specific problem. I agree that this is an interesting and important topic for future research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your time and effort in reviewing our work. We are glad to know that you find our method concise and broadly applicable, with its effectiveness demonstrated both theoretically and experimentally.
We address your concerns as follows.
## W1/W2. BDNE is a general approximate algorithm.
BDNE can **accurately** determine the nadir objective vector as long as an exact solver is available, and can achieve **satisfactory** performance on a specific task as long as a suitable solver is utilized. This paper proposes a general framework applicable to various scenarios, such as multi-objective integer linear programming with available exact solvers and multi-objective learning utilizing gradient information.
To demonstrate that BDNE can be a promising exact algorithm, we apply it to discrete problems with an exact solver. We test BDNE on the multi-objective assignment problem (MOAP) and the multi-objective knapsack problem (MOKP); 3 instances are randomly generated for each problem. The following table shows the mean and standard deviation of the error metric values. Our method is **accurate**, while the heuristic methods miss the nadir objective vector. The superiority in runtime is also demonstrated, as shown in Common Concern 2.
|Problem|$m$|$n$|ECR-NSGA-II|DNPE|BDNE|
|:-------:|:---:|:------------:|:------------:|:--------------:|:----:|
|MOAP|3|20$\times$20|0.261±0.103|0.392±0.0828|0|
||||0.202±0.064|0.419±0.055|0|
||||0.197±0.0945|0.582±0.0767|0|
||4|10$\times$10|0.147±0.0602|0.498±0.0405|0|
||||0.252±0.0917|1.16±4.52e-16|0|
||||0.142±0.0659|0.536±2.26e-16|0|
|MOKP|3|200|0.218±0.0861|3.84±0.0284|0|
||||0.229±0.0804|4.62±0.0315|0|
||||0.208±0.084|5.48±0.0346|0|
||4|50|0.244±0.0957|3.43±0.0586|0|
||||0.27±0.0743|4.25±0.0587|0|
||||0.262±0.107|5.08±0.0709|0|
## W3. Applicability of other multi-objective optimization algorithms for lower-level optimization.
We would like to clarify that the LLOPs constitute a set of single-objective optimization problems (SOPs) **rather than** an MOP. Therefore, not all multi-objective optimization algorithms are suitable for the lower-level optimization. Nevertheless, these SOPs can be solved sequentially or in parallel. We employ a decomposition-based multi-objective optimization algorithm to collaboratively address these SOPs, thereby enhancing optimization efficiency.
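To illustrate the point above (a sketch only, not the paper's implementation): because each LLOP is an independent single-objective problem, any single-objective solver can be applied to each one; here a hypothetical brute-force search over a candidate set stands in for the solver.

```python
def solve_lower_level(subproblems, candidates):
    # Each lower-level problem is an independent single-objective problem,
    # so a plain single-objective solver handles each one; a brute-force
    # search over a shared candidate set stands in for the solver here.
    return [min(candidates, key=f) for f in subproblems]

# Two illustrative scalar subproblems and a coarse candidate grid.
subs = [lambda x: (x - 1) ** 2, lambda x: (x + 2) ** 2]
cands = [-3, -2, -1, 0, 1, 2, 3]
print(solve_lower_level(subs, cands))  # -> [1, -2]
```

In practice the list comprehension could be replaced by a parallel map or, as in the rebuttal's description, by a decomposition-based algorithm that shares information across subproblems.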
## Q1. Evidence of really complicated feasible objective regions.
Real-world problems usually have many constraints that can result in irregular feasible objective regions. A comprehensive review of these industrial cases is provided in [1,2].
## Q2. Can the complicated lower-level optimization algorithm be changed?
Yes. We can change the selection strategy in this algorithm to reduce the complexity, because the efficient selection strategy described in [3] governs the overall complexity. Moreover, the optimizer for the LLOPs can be changed to any suitable one according to the task's properties: other decomposition-based evolutionary algorithm frameworks [4] can be used, and local search (or gradient information, if available) can be employed to facilitate convergence [5,6]. Since each LLOP is a single-objective optimization problem, single-objective optimizers such as GUROBI and BARON can also be applied.
## Q3. Can the method proposed in (Gokhan et al. 2015) adopt the evolutionary algorithm as the solver?
This method relies heavily on an exact solver and becomes **time-consuming** and **unreliable** when an evolutionary algorithm is adopted. Specifically, its lower-level optimization problem, consisting of two sequential single-objective optimization problems, incurs high computational costs. Furthermore, uncertain optimality gaps arise when an evolutionary algorithm is adopted. As a result, the first single-objective optimization problem misleads the subsequent one, introducing significant unreliability into the lower-level optimization process. The pay-off table, which is a key step, also cannot be accurately obtained.
In contrast, our method does not require the payoff table, and the lower-level optimization problem is an independent single-objective optimization problem.
## Q4. Criteria used to select benchmarks.
Please refer to Common Concern 3.
## Q5 \& Q6. Ablation studies.
The experimental results are presented in Table 1 of the one-page PDF. BDNE-V1 denotes BDNE without the coping strategy for flat fitness. BDNE-V2 denotes BDNE removing boundary subproblems with unit vectors. The effectiveness of the two strategies is validated.
## Q7. More detailed analysis of deficiencies of the heuristic methods.
We aim to point out that heuristic methods do not guarantee obtaining exact objective vectors. We construct an example $PF$ and show that none of the heuristic methods can identify its nadir objective vector. We believe this example is clear and intuitive, and additional analysis is not necessary to support the claim.
## References
[1] Tanabe et al. An easy-to-use real-world multi-objective optimization problem suite. Applied Soft Computing, 2020.
[2] Kumar et al. A benchmark-suite of real-world constrained multi-objective optimization problems and some baseline results. Swarm and Evolutionary Computation, 2021.
[3] Zheng et al. A generalized scalarization method for evolutionary multi-objective optimization. AAAI, 2023.
[4] Ke Li. A survey of decomposition-based evolutionary multi-objective optimization: Part I-past and future. arXiv preprint, 2024.
[5] Lara et al. HCS: A new local search strategy for memetic multiobjective evolutionary algorithms. TEVC, 2010.
[6] Lapucci et al. A memetic procedure for global multi-objective optimization. MPC, 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I am glad to see that the proposed method can also work as an exact algorithm and achieve a shorter running time than other existing exact algorithms. The controllable trade-off between running time and accuracy is important and will greatly benefit the development of more advanced multi-objective optimization algorithms. My other concerns have been addressed as well. I will increase my score to 7 and support its acceptance.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your support and are delighted to hear that all your concerns have been addressed. | Rebuttal 1:
Rebuttal: Dear AC and Reviewers,
We would like to thank you for the insightful comments and feedback on our paper. Overall, the reviewers agree that this work is important (NNkD) and well-written (NNkD, wp71), that the analysis of existing methods is convincing (NNkD) and clear (wp71), and that the proposed method is concise (3RPX), novel (2guY), and has broad applicability (3RPX, NNkD).
We address some common concerns raised by different reviewers in this response.
## C1. Significance (wp71, 2guY).
**Nadir objective vector.**
The nadir objective vector is a fundamental concept in multi-objective optimization, widely used in the operations of many multi-objective optimization methods (not only heuristic but also exact methods), such as normalization, interaction, and guided search. Many studies demonstrate the importance of accurately estimating the nadir objective vector [1,2,3] and its effectiveness in facilitating the optimization of real-world problems [4,5,6].
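As a concrete illustration (not taken from the paper): for a minimization problem, the nadir objective vector takes the component-wise worst value over the Pareto-optimal set. A naive sketch computes it by first filtering out dominated points:

```python
def pareto_front(points):
    # Keep points not dominated by any other point (minimization).
    def dominates(q, p):
        return (all(qi <= pi for qi, pi in zip(q, p))
                and any(qi < pi for qi, pi in zip(q, p)))
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def nadir(points):
    # Component-wise worst (maximum) value over the Pareto-optimal set.
    front = pareto_front(points)
    return tuple(max(col) for col in zip(*front))

# Four objective vectors; (5.0, 5.0) is dominated and must be excluded.
objs = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (5.0, 5.0)]
print(nadir(objs))  # -> (4.0, 4.0)
```

Enumerating the Pareto-optimal set like this is exactly what is infeasible for hard problems, which is why dedicated estimation methods such as those discussed here exist.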
**BDNE.**
Heuristic methods do not have theoretical guarantees, while exact methods have limitations in applicability. Therefore, we propose BDNE, which is **the first method with theoretical guarantees and general applicability** in multi-objective optimization. Our method can easily adopt various solvers for different tasks and achieve a trade-off between runtime and accuracy. Furthermore, the trade-off can be easily determined by decision-makers via setting a parameter.
## C2. BDNE adopts an exact solver for discrete problems (3RPX, NNkD, wp71).
To demonstrate that BDNE can be competitive on other problems, we apply BDNE to the discrete problem and choose an exact solver. We test BDNE on the multi-objective assignment problem (MOAP) and the multi-objective knapsack problem (MOKP). 3 instances are randomly generated for each problem. The runtimes in seconds are shown in the following table. "TO" represents timeout. Compared with existing exact methods, BDNE still demonstrates superior performance in terms of runtime.
|Problem|$m$|$n$|KS [7]|KL [8]|FD&IS [9]|BDNE|
|:-------:|:---:|:------------:|:----:|:----:|:-----:|:-------:|
|MOAP|3|20$\times$20|340|4375|687|**222**|
||||405|1249|1471|**183**|
||||394|1214|915|**182**|
||4|10$\times$10|4279|575|TO|**212**|
||||8028|1189|TO|**273**|
||||4724|431|TO|**160**|
|MOKP|3|200|1703|270|826|**166**|
||||1976|77|238|**62**|
||||1500|131|283|**92**|
||4|50|1562|28|10211|**24**|
||||1106|53|14256|**36**|
||||412|21|3013|**15**|
## C3. Criteria used to select benchmarks (3RPX, 2guY).
Since some benchmark problems are designed in an overly specialized way (e.g., triangle-like $PF$s), they cannot reflect the difficulty of estimating the nadir objective vector of real-world problems. Therefore, problems with complicated feasible objective regions are used in our experimental study. To compare the algorithms comprehensively and avoid showing similar results, we select problems with different kinds of feasible objective regions from several test suites. Specifically, we employ 4 new and 6 existing test problems (28 instances in total), including instances with many objectives (e.g., 5- and 8-objective cases), weakly Pareto-optimal boundaries (e.g., TN1-TN4 and mDTLZ3), linear $PF$s (e.g., TN1 and TN3), convex $PF$s (e.g., mDTLZ3), concave $PF$s (e.g., TN2, TN4, and DTLZ3), and irregular $PF$s (e.g., TN3, TN4, DTLZ5, IMOP4, and IMOP6).
The following table summarizes the complete results on all the test problems by showing "Total +/=/-". "+", "=", or "-" denotes that the performance of the corresponding algorithm is statistically better than, similar to, or worse than that of BDNE based on Wilcoxon's rank-sum test at the 0.05 significance level. DTLZ, MP-DMP, and ML-DMP are not used in our experimental study, as the MaF test suite already covers them. Our method still outperforms the other algorithms on most instances.
|Problem|# instances|EC-NSGA-II|DNPE|BDNE|
|:-------------:|:-----------:|:----------:|:--------:|:---------:|
|MaF1-MaF7|21|1/0/20|5/2/14| \ |
|MaF8-MaF9|6|0/0/6|1/0/5| \ |
|MaF10-MaF13|12|5/0/7|0/1/11| \ |
|mDTLZ1-mDTLZ4|12|0/0/12|1/2/9|\ |
|IMOP4-IMOP8|5|0/0/5|0/1/4| \ |
|Average rank||2.5893(3)|2.125(2)|1.2857(1)|
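The +/=/- entries above come from pairwise Wilcoxon rank-sum tests. A rough, self-contained sketch of such a test (normal approximation, assuming no ties; the paper's exact implementation is not shown, so this is illustrative only):

```python
import math

def rank_sum_test(a, b):
    # Wilcoxon rank-sum test via the normal approximation, assuming no ties.
    n1, n2 = len(a), len(b)
    pooled = sorted(a + b)
    r1 = sum(pooled.index(v) + 1 for v in a)           # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2                        # mean rank sum under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)    # std of rank sum under H0
    z = (r1 - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    return z, p

# Hypothetical error-metric samples for two algorithms on one instance.
z, p = rank_sum_test([0.10, 0.20, 0.30], [0.90, 1.00, 1.10])
```

A negative z with p below the chosen significance level would mark the first algorithm "+" (better), mirroring the table's convention.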
Best Regards,
Paper13080 Authors
## References
[1] Mesquita-Cunha et al. New $\epsilon$-constraint methods for multi-objective integer linear programming: A Pareto front representation approach. EJOR, 2023.
[2] Zhang et al. Hypervolume maximization: A geometric view of Pareto set learning. NeurIPS, 2023.
[3] Branke et al. Multiobjective optimization: Interactive and evolutionary approaches. 2008.
[4] Weerasuriy et al. Performance evaluation of population-based metaheuristic algorithms and decision-making for multi-objective optimization of building design. Building and Environment, 2021.
[5] Mena et al. Multi-objective two-stage stochastic unit commitment model for wind-integrated power systems: A compromise programming approach. International Journal of Electrical Power \& Energy Systems, 2023.
[6] Ekhtiari et al. Optimizing the dam site selection problem considering sustainability indicators and uncertainty: An integrated decision-making approach. Journal of Cleaner Production, 2023.
[7] Kirlik et al. Computing the nadir point for multiobjective discrete optimization problems. Journal of Global Optimization, 2015.
[8] Köksalan et al. Finding nadir points in multi-objective integer programs. Journal of Global Optimization, 2015.
[9] Özpeynirci et al. On nadir points of multiobjective integer programming problems. Journal of Global Optimization, 2017.
Pdf: /pdf/7c210220a378143bfd05352308fe5d92bdb800ee.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Factorization Curse: Which Tokens You Predict Underlie the Reversal Curse and More | Accept (poster) | Summary: This paper focuses on the "reversal curse", where models struggle to recall information when probed in a different order than encountered during training. The authors propose reframing this issue as the "factorization curse", which is a failure of models to learn the same joint distribution under different factorizations. They support this with experimental evidence, also providing a new dataset, WikiReversal, to evaluate the problem and explore solutions. The authors suggest using factorization-agnostic training objectives, which can significantly mitigate these issues.
Strengths: 1. The authors introduce the novel concept of the "factorization curse", broadening the understanding of why models fail in information retrieval.
2. The development of WikiReversal, based on Wikipedia knowledge graphs, is a valuable contribution, offering a more realistic benchmark for evaluating model performance in information retrieval tasks.
3. The authors provide extensive experiments, showing that the "factorization curse" exists both in toy examples and in real cases.
4. They suggest a factorization-agnostic training objective that performs well in mitigating such issues while preserving satisfactory quality on forward queries.
Weaknesses: 1. While the paper presents a variety of experiments with different training objectives, the models used are small (100M parameters). It is not clear how well this approach would scale to bigger models.
2. The authors didn't clearly analyze whether the proposed training objective hurts quality on standard benchmarks. For example, it would be interesting to see whether the quality of open-ended generation drops with the new training objective.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Does the quality of generation drop if we use the proposed method to train the model?
2. Will you provide open access to WikiReversal?
3. What are the computational costs associated with factorization-agnostic training, and how do they compare to traditional training objectives?
Also some typos:
Line 83: "are" is missing in "Note that there many"
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: 1. The experiments are primarily based on the WikiReversal testbed, which, while realistic, is still limited to the structure and content of Wikipedia knowledge graphs.
2. Limited model scales: the training was done only on small models, which could be justified for retrieval itself but needs to be addressed for the scalability of the approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for providing valuable feedback to improve our work. We appreciate you finding the factorization curse a novel concept to explain model information retrieval failures. We are happy you appreciate the development of WikiReversal and our extensive experiments. Based on your suggestions we ran an additional experiment to measure the computation costs and convergence tradeoffs of MLM-U. We outline these new experiments and answer your remaining questions below.
### Scale and performance on standard benchmarks
Given that our objective is not to train a general purpose language model, we do not benchmark standard language benchmarks as here we isolate only information retrieval. This helps us control the experiments. Albeit limited by relatively small context windows in our knowledge retrieval tasks, we have confirmed that generated text is grammatically coherent and looks like proper English.
Overall in our experiments, we focus on whether alternative learning objectives, namely factorization-order agnostic objectives can overcome information retrieval failures such as the reversal curse.
For all information retrieval benchmarks considered, we found a modestly sized model was able to perform quite well. Even for the more challenging WikiReversal a much smaller model outperformed Mistral-7B finetuned on the same data—demonstrating the advantage of factorization-agnostic learning objectives such as MLM-U. Of course, we agree scaling such models further would be interesting and we are exploring this direction for future work.
### Access to WikiReversal
We agree it is important for the research community to build on the WikiReversal benchmark. Fortunately, the underlying data is already openly accessible via GenWiki https://github.com/zhijing-jin/genwiki with an openly accessible (Creative Commons) license. While WikiReversal should be reproducible from the details outlined in Section E (specifically Alg. 1), we do plan to include the exact scripts we use for researchers to download and parse the dataset to form WikiReversal in order for the community to study the reversal curse on more realistic natural text.
### Computational Costs and Convergence
Thank you for raising this point. We agree comparing the computational costs for each training approach is important. Based on this suggestion, we analyzed the runtime and convergence rates for both MLM-U and standard autoregressive (AR) training. We benchmarked two parameter matched models on the retrieval task dataset from Section 3.1. We found MLM-U and AR exhibited comparable computational costs per training step: 558.80 minutes versus 559.45 (AR) on 8 V100 GPUs after 1k epochs of training.
To better understand the convergence tradeoffs, we also include a comparison of the forward and backward accuracy curves for MLM-U versus AR in Figure 2 of the supplementary rebuttal PDF. As expected, the forward loss for AR converged more quickly, saturating after 1600 epochs versus MLM-U, which saturated after 3999 epochs of training, although we observed a much smoother convergence for MLM-U even in the forward direction. Of course, the AR model, despite its faster forward convergence, was not able to improve the backward accuracy. Based on your suggestion, we plan to include a more thorough discussion of the computational cost and convergence in the revised draft. We’ve included both forward and backward convergence plots in Figure 3 of the rebuttal PDF for your reference as well. See also Appendix F, which discusses the delayed generalization speed for the backward direction.
We’d like to thank you for the thoughtful questions and suggestions. We hope the additional experiments and clarifications have addressed your questions. We remain available for any further discussion or questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for your quick and very detailed reply. I believe the score is already high enough, but you have fully answered all the questions.
Best wishes to your paper, and let me know if I can be of any help.
---
Reply to Comment 1.1.1:
Comment: Thank you very much! | Summary: This paper addresses the reversal curse, where models trained on a relation in one direction (e.g. "A is the capital of B") cannot answer questions about the relation worded in the reverse order (e.g. "B's capital is [?]"). The authors frame this as a subcase of a broader problem, where models trained with a causal learning objective do not learn to assign equal probability to different factorizations of the same sequence, which the paper terms the factorization curse. To address the factorization curse, the authors propose using MLM-U, a modified MLM that varies both the location and size of the masked segments. The authors show that models trained with this modified objective do not exhibit the same reversal curse and analyze the behavior of models trained with causal, MLM, and MLM-U objectives.
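A minimal sketch of the kind of masking the summary above attributes to MLM-U, where both the size and location of the masked segment vary across training steps. The contiguous-span choice and the exact rate distribution here are illustrative assumptions, not the paper's specification:

```python
import random

def mlm_u_mask(tokens, mask_token="[MASK]"):
    # Sample the masking rate uniformly, so both the number of masked
    # tokens and their position vary from step to step.
    rate = random.uniform(0.15, 0.85)
    span = max(1, int(rate * len(tokens)))
    start = random.randrange(len(tokens) - span + 1)
    masked = tokens[:start] + [mask_token] * span + tokens[start + span:]
    targets = tokens[start:start + span]
    return masked, targets

inp, tgt = mlm_u_mask("paris is the capital of france".split())
```

In contrast, fixed-rate MLM always masks the same fraction of tokens, and causal AR training always predicts left to right, which is the difference the review highlights.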
Strengths: S1. The paper tackles a well-defined problem from an interesting angle, providing both theoretical and empirical evidence for its claims. The paper is well-written.
S2. The connection between PLM and discrete state diffusion here is an interesting insight and well-explained.
S3. WikiReversal is an interesting idea for a dataset and looks like it will be a great resource for the community, especially for work aiming to study this reversal phenomenon. It is documented well.
Weaknesses: W1. In a two-token setting, you can (at a high but not intractable computational cost) compute the right-to-left factorization of a causal model through enumeration. It would be interesting to compute the left-to-right and right-to-left factorizations for some two-token sequences at test time for the models presented here. Do the causal models clearly suffer from a discrepancy of joint probabilities in the two-token case? Does the MLM-U training mitigate the two-token factorization curse? It seems likely the answer is yes, but it would strengthen the paper to confirm this empirically. (And in a 2-token setting, it seems that MLM would also mitigate the factorization curse-- does this happen in practice?)
W2. Comparing BERT-like MLM to encoder-decoder MLM-U or GPT-like AR seems to be a bit of a strange comparison. It would seem more natural to use a BART-like encoder-decoder for the MLM example (though I recognize that the causal attention in the decoder may be slightly confounding, so I think both settings have merit; still, it would be good to *see* both to be sure).
W3. The architecture details for MLM-U, while not the key argument of the paper, are only sparsely explored. It is not clear how much of the benefit comes from using the new architecture design versus using the MLM-U objective. And since Appendix G suggests that this architecture can handle both MLM-X and AR training, why not just train all models using this architecture and vary only masking strategy?
Technical Quality: 3
Clarity: 4
Questions for Authors: Q1. In appendix B, you state that the PLM and absorbing state diffusion objectives are “the same” but “practically speaking […] may have very different implications”. Can you elaborate on this point?
Q2. Why use BERT rather than BART (or another encoder-decoder) as the choice for MLM?
Q3. You argue that at least some of the benefits of MLM-U over MLM arise from the ability to fully mask out multi-token entities. Could you demonstrate that, given a task where the entities are all single-token, the benefit of MLM-U is marginal or non-existent? I’m thinking of a version of the retrieval task in 3.1 where the key-value sequences are all of length 1.
typos/small notes:
* line 79: GPT-style citation should probably be GPT-2 or GPT, not GPT-4.
* line 83: “Note that there *are* many factorizations”
* Table 1: very small point, but I would have expected the entry with the asterisk to be the entry *with* delimiter tokens.
* I think the factorization curse section could emphasize even more that the ordering of the tokens in the input is not altered when calculating the different factorizations. This is stated in the text (lines 83-85 and 92), but given that the section begins by discussing causal models (and, like a lot of us, I work almost exclusively with causal models these days!), it still required a second read to parse exactly what was going on.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limitations seem appropriate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for carefully considering our work. We are glad you found the paper well written and that we tackle a well-defined problem with both theoretical and empirical insights. We are especially happy you appreciated the effort we put into crafting a more realistic benchmark for the reversal curse with WikiReversal. We appreciate the suggestions you provide to improve the work and have performed additional experiments based on these suggestions, which we address below.
### Two-Token (W1)
Thank you for the suggestion to experiment with a two-token setting to assess whether MLM-U training mitigates the two-token factorization curse. We ran a new experiment comparing standard left-to-right AR training (labeled GPT in the legends) and MLM-U. Specifically, we train both MLM-U and AR in a two-token variant of the retrieval task from Section 3.1 trained for 2k epochs. We find MLM-U reaches 100% forward and backward whereas AR struggles to learn the backwards setting, reaching only 12% accuracy after 2k epochs of training. We’ve attached plots demonstrating the forward and backward accuracy of each model throughout training in Figure 1 of the supplementary rebuttal PDF. Please let us know if this answers the question you had in mind.
### Encoder-decoder architecture comparisons (W1, W2, Q2)
While we agree that training all models with encoder-decoder and then varying only the masking strategy is one valid strategy, we wanted to give each masking strategy its “best shot”. We argue that it is most fair to have each masking strategy compete with a corresponding architecture that is known to work well with it. Otherwise one could, for instance, claim that the encoder-decoder is not well suited (or not properly optimized) for AR.
(FYI, earlier on in our research investigations we experimented with both encoder-decoder and BERT-style encoder only architectures for MLM. There we found BERT-style encoder architectures to perform better with MLM-X objectives.)
### MLM versus MLM-U (Q3)
We would like to clarify that both MLM and MLM-U are capable of masking and predicting multi-token entities (unless of course the entity is longer than the masking ratio). The main advantage of MLM-U as shown in Figure 2 stems from its ability to handle context-dependence of variable lengths.
### Practical differences between PLM and MLM-U (Q1)
While the objectives are theoretically equivalent, the implementation of XLNet is not fully factorization agnostic in practice. As described in Section 2.3 of XLNet https://arxiv.org/abs/1906.08237 for "practical reasons they end up training with a permutation on the last few tokens only." This results in a model that is not fully factorization-agnostic.
Additionally, in practice, we do not average over all permutations or all masking rates, but only a randomly chosen subset. This might induce more practical differences.
Thank you for the proposed clarifications and language suggestions. We’re very glad you point out “the input is not altered when calculating the different factorizations.” We agree we’ll emphasize this key point much more prominently in the writing. We remain available for further questions and thank you again for all the effort you put into this review.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed reply and additional experiment! The two-token setting you ran is quite interesting, though not what I originally had in mind-- I was thinking of a demonstration of the issue for causal models. In lines 86-95, you describe the two-token case in detail; for a causal model, you would generally only be able to compute $p(x_2|x_1)p(x_1)$. However, in a 2-token case, it's possible to fix a value for $x_2$ and enumerate all possible pairs $x_1 x_2, x_1 \in V$, so that you can compute $p(x_1|x_2)$, allowing you to directly compute the "backwards" factorization $p(x_1|x_2)p(x_2)$. I think it would be a nice empirical demonstration to compute "forwards" and "backwards" factorization of a few sequences under the causal, MLM, and MLM-U models, to show that the causal model has a much higher difference in probability between these two factorizations.
However, this is not necessary to the paper-- more of a demonstration-- and I appreciate the effort that went into the rebuttal as a whole, so I will raise my score 6 -> 7. I think this is a good paper!
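For reference, the enumeration the comment describes might look like this (a toy sketch; dictionary lookups stand in for querying a trained causal model's token probabilities). Note that for an exact joint the two factorizations agree by construction, so any discrepancy measured this way would reflect the model's learned probabilities rather than the algebra:

```python
def backward_factorization(p_x1, p_x2_given_x1, x1, x2, vocab):
    # A causal model only exposes p(x1) and p(x2|x1).  In the two-token
    # case, p(x1|x2) * p(x2) can be recovered by enumerating the joint
    # p(v, x2) over every possible first token v in the vocabulary.
    joint = {v: p_x1(v) * p_x2_given_x1(x2, v) for v in vocab}
    p_x2 = sum(joint.values())        # marginal of the second token
    return (joint[x1] / p_x2) * p_x2  # p(x1|x2) * p(x2), via Bayes' rule

# Toy stand-ins for a model's probabilities; P2 is keyed (x1, x2).
P1 = {"a": 0.6, "b": 0.4}
P2 = {("a", "a"): 0.1, ("a", "b"): 0.9, ("b", "a"): 0.7, ("b", "b"): 0.3}
fwd = P1["a"] * P2[("a", "b")]  # forward factorization p(x1) p(x2|x1)
bwd = backward_factorization(lambda v: P1[v],
                             lambda x2, x1: P2[(x1, x2)],
                             "a", "b", list(P1))
```

The cost is one forward query per vocabulary item, which is the "high but not intractable" enumeration mentioned in the original review.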
---
Reply to Comment 1.1.1:
Comment: Oh, sorry for the confusion about the experiment!
In any case, thank you very much for your evaluation!
---
Reply to Comment 1.1.2:
Comment: if we computed p(x1|x2) through p(x1,x2) and the marginals, wouldn't we circumvent the issue? I don't think we have time to discuss anymore, but we shall think about this experiment for a possible camera ready version. Thank you for the suggestion! | Summary: The paper extends the idea of the "Reversal Curse" from prior work and proposes ways to mitigate it by finetuning LLMs with different objectives. To recall, the reversal curse is formulated roughly as follows: a model, when **finetuned** (i.e. not prompted) on an "A is B" statement, does not automatically generalize to "B is A". The authors generalize this, from a probabilistic point of view, into the inability to generalize between different factorization orders of the joint text distribution. The authors then hypothesize that this can be alleviated by what they call "permutation-agnostic training" - training techniques popular among LLMs that require predicting tokens in a varying order. The authors consider a number of such techniques based on prior work on encoder pretraining and find that MLM-U offers a consistent solution to the reversal/factorization curse, while many other intuitive solutions don't. The authors experiment using two relatively small language models (GPT-2 and Mistral 7B) on tasks like simple retrieval, understanding non-reciprocal relationships (between sets/statements), and WikiReversal. The paper also analyzes the representations learned after the model is fine-tuned with the proposed approach.
Strengths: 1. The paper proposes (seemingly) a solution that covers one of the reasons why LLMs hallucinate. Since LLM hallucination is one of the main roadblocks to their greater adoption, this is an impactful problem to address.
2. The experiments appear sound: authors evaluate on a diverse set of tasks, using two different LLMs. This eliminates the possibility of a false positive. However, there is still a direction of scaling to larger models (e.g. 70B and above) and checking against possible changes in the efficacy of the proposed solution.
3. The proposed method relies on fine-tuning, not training a model from scratch, and thus can be broadly applied to existing models
4. The paper is generally well written, has a clearly stated hypothesis and is overall easy to follow, if a bit unconventionally structured. Minor typos and clarification requests (below) do not affect the overall presentation.
Also, a minor but pleasant advantage of this paper is that the authors openly declare that their solution is a clever reuse of an existing method. Many works I read in the past instead choose to slightly modify the approach and declare it a newly proposed method. I count your choice as a minor advantage because it reduces the research debt, i.e. how many methods a newcomer needs to learn to meaningfully contribute to this area.
Weaknesses: ### Side-effects can be explored better
My main concern with the paper is that, while there is great attention to how fine-tuning combats the reversal curse, the side-effects of such fine-tuning are arguably not explored enough.
**In other words, does fine-tuning Mistral affect its accuracy on other, unrelated tasks? If yes, what trade-off points are there?**
To test that, one can use LM Eval Harness for a selection of such tasks ( https://github.com/EleutherAI/lm-evaluation-harness ). If not familiar with the tasks, please use the ones commonly reported upon popular LLM releases (e.g. see Table 20 in the Llama 2 paper https://arxiv.org/pdf/2307.09288 ) or choose your own tasks. Another direction would be to evaluate the overall quality of your model's generated text, whether with human evaluation (local/MTurk/Toloka/...) or, as an Ersatz, with another LLM (as in https://arxiv.org/pdf/2304.03277 )
This can make the difference between a "free" solution to the curse and one that comes at too great a cost for most practitioners -- or something between the two. As such, the paper would greatly benefit from understanding these trade-offs, or knowing that they don't exist.
### XLNet the Baseline
You (rightfully) spend a lot of Section 2.2 on describing XLNET (alongside MLM-U), but then never compare against that as a baseline.
You offer some algorithmic reasons MLM-U could be better, but to dismiss a relevant algorithm as a baseline, one usually needs more evidence, e.g. showing that it is infeasible to use or proving that it is guaranteed to be worse. I could not find such an argument in the paper. If you have it, please direct me to it. If not, the paper could be improved by actually comparing to XLNet, at least as a side-experiment on a subset of tasks, so that the reader better understands your choice of MLM-U.
Technical Quality: 3
Clarity: 3
Questions for Authors: ### Does this scale?
In your work, you test the proposed solution on GPT-2 and Mistral 7B. While the latter is undoubtedly an LLM, it is still curious if your approach generalizes to more capable LLMs. Note that I am not asking you to run all the experiments with a larger model, but even a selection of such experiments would improve the paper, particularly for practitioners.
If your main concern is GPU size, it should be possible to fine-tune relatively larger models using QLoRA ( https://github.com/artidoro/qlora ) or ReLoRA ( https://arxiv.org/abs/2307.05695 ). The former can fine-tune a 65B model on a 48GB GPU, or on multiple consumer GPUs with sharding. The latter amounts to running the former, then merging the adapters into the LLM parameters and running again, for several such loops. Note that there may be a potential confounder of LoRA adapters vs. regular fine-tuning. If you care to disentangle these effects, one possible way is to first check whether LoRA fine-tuning can lift the curse for Mistral 7B, and if it does, try larger models and compare against that.
> (L43) to learn learn logical implications,
Possibly an extra “learn” (typo)
> (L174-175)
There is a paragraph missing line numbers, likely due to excessive negative vertical space. In that paragraph, and in the unnamed equation below it, you refer to tokens as $t_1, t_2$, etc. In turn, your text up to this point refers to tokens as $x_0, x_1$, etc. Moreover, $t$ is explicitly reserved for the token index (L80). Unless there is a specific reason for this, the presentation would be improved if you untangled the notation around $x$ and $t$.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: To the best of my knowledge, authors have sufficiently addressed limitations of their work
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad you regard reliable knowledge retrieval as an impactful problem and view MLM-U as a consistent solution to the factorization curse. We are happy that you found the paper to be well-written, and we thank you for the several useful pieces of feedback you suggested. We address each below:
### Finetuning
First, we’d like to clarify that MLM-U models in our experiments are trained from scratch—not finetuned. We found training from scratch directly on the downstream dataset (Tokens, BIOS, Wikigraph in Sections 3.1 and 3.2) was sufficient for competitive performance or to solve the benchmark entirely, without the need for pretraining. Given we’re training from scratch, we do not measure side-effects on standard benchmarks (though we do carefully check forward standard accuracy as well). However, we absolutely agree exploring MLM-U for finetuning would be a valuable and interesting direction we hope to tackle in an upcoming publication. In the finetuning setting, side-effects on standard benchmarks would then become paramount for understanding the tradeoffs of finetuning for reliability.
### XLNet Baseline
We absolutely agree XLNet is an important baseline to consider among factorization-agnostic approaches. We present XLNet in Section 2.1 under the heading Permutation Language Modelling (PLM), where we describe the method as a baseline. In addition, we also draw a connection between permutation language modeling (such as XLNet) and discrete state diffusion in Section 2.2. We agree the heading and presentation can be made clearer to indicate we are in fact referring to XLNet in this section. The primary reason we chose to focus on MLM-U in the main tables is that the implementation of XLNet is not fully factorization agnostic: as described in Section 2.3 of the XLNet paper ( https://arxiv.org/abs/1906.08237 ), for "practical reasons they end up training with a permutation on the last few tokens only." Nevertheless, we do provide additional comparisons to XLNet in Appendix Tables 5, 6, and 7, studying BIOS, QA relations, and synthetic tokens. Based on your suggestions, we’ll rework the writing to ensure the XLNet results are presented more clearly.
### Scaling MLM-U
For fair comparisons to prior work, and to ensure our gains were the result of the proposed learning objective (and not mere differences in model scale), we chose to perform experiments using model scales from prior work (retrieval task from Goloveneva et al. 2024 and BioS from Allen-Zhu et al. 2023). We compare the effect of adjusting the learning objective only, keeping the model architecture fixed, to prior explorations that modify the training data (see Table 2, AR w/ reverse, for example).
To push the experimental setting further towards larger realistic graphs with known entities we also develop a benchmark based on natural text from Wikipedia and naturally occurring entity relations. In this setting, we found even relatively small models trained with MLM-U performed remarkably well at resolving the reversal curse gap between forward and backward accuracy. Specifically, we found training from scratch only on the WikiGraph data with the MLM-U objective outperformed Mistral-7B, a model that’s 70x larger, after finetuning on the same data.
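As an aside, the forward/backward accuracy comparison discussed above can be summarized as a simple gap metric. A minimal sketch in Python (function names and toy data are purely illustrative, not our evaluation code):

```python
# Hypothetical sketch of the forward/backward ("reversal curse") gap metric.
# All names and data are illustrative, not the paper's evaluation pipeline.

def accuracy(predictions, targets):
    """Fraction of exact-match predictions."""
    assert len(predictions) == len(targets)
    correct = sum(p == t for p, t in zip(predictions, targets))
    return correct / len(targets)

def reversal_gap(forward_acc, backward_acc):
    """Gap between forward ("A is B") and backward ("B is A") accuracy."""
    return forward_acc - backward_acc

# Toy example: a model that answers forward queries perfectly
# but fails on half the reversed queries.
forward_preds  = ["Paris", "Berlin", "Rome", "Madrid"]
forward_gold   = ["Paris", "Berlin", "Rome", "Madrid"]
backward_preds = ["France", "Poland", "Italy", "Portugal"]
backward_gold  = ["France", "Germany", "Italy", "Spain"]

fwd = accuracy(forward_preds, forward_gold)    # 1.0
bwd = accuracy(backward_preds, backward_gold)  # 0.5
print(reversal_gap(fwd, bwd))                  # 0.5
```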
Finally, we very much appreciate your attention to the typos, spacing, and notation suggestions. We’ve addressed each to sharpen the presentation of the work.
---
Rebuttal Comment 1.1:
Title: On Author Response
Comment: I thank the authors for answering my questions and clarifying some of my concerns. I still recommend that the paper should be accepted, and I am increasing my score by a notch.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your evaluation! | null | null | Rebuttal 1:
Rebuttal: We’d like to thank the reviewers for their high-quality feedback and thoughtful suggestions for our work. We very much appreciate that reviewers recognized the importance of reliable knowledge retrieval in language models, noting “LLM hallucination is one of the main roadblocks to their greater adoption, this is an impactful problem to address”—Z95k. We’re glad reviewers appreciated our insight into the role of factorization in the reversal curse, noting we “introduce novel concept of ‘factorization curse’, broadening the understanding of why models fail in information retrieval”—kVoQ. We’re also glad reviewers found our experimental setup “sound” (Z95k), “realistic”, and “extensive” (kVoQ), with several reviewers noting the value of WikiReversal to the research community. Finally, we’re glad several reviewers found our paper to be “well-written” (Z95k, bZEp) and our solution, MLM-U, to offer “a consistent solution to the reversal/factorization curse, while many other intuitive solutions don't”—Z95k.
Reviewers provided useful feedback on the scalability of the method, standard benchmark performance, computational tradeoffs, and suggestions to better isolate the effect of architecture as well as a new interesting two-token setting. We’ve made a considerable effort to incorporate this feedback with clarifications and three new experiments based on reviewers’ suggestions for which we’ve attached results in the supplementary rebuttal PDF. In summary, we have
- **Clarified scalability comparisons and benchmarks**: We ensured model scales were comparable to those used in prior work for tasks in Section 3.1, such as BIOS. We also clarified that for the WikiReversal experiments, the MLM-U objective trained from scratch outperformed Mistral-7B, a model that’s 70x larger, after finetuning on the same data. Given our objective is not to train a general-purpose language model, we do not evaluate on standard language benchmarks, as here we isolate information retrieval when models are trained from scratch using MLM-U.
- **Compared in new experiments MLM-U versus AR training in the two-token setting proposed by reviewer bZEp (Figure 1 in PDF)**: We find MLM-U reaches 100% forward and backward accuracy whereas AR struggles to learn the backwards association in the two-token setting.
- **Measured the Computational Costs and Convergence of MLM-U (Figure 2 in PDF)**: We analyzed the runtime and convergence rates for both MLM-U and standard autoregressive (AR) training. We benchmarked two parameter matched models on the retrieval task dataset from Section 3.1. We found MLM-U and AR exhibited comparable computational costs. While AR forward accuracy converges faster, MLM-U exhibits smoother convergence for the forward accuracy and is able to learn both backward and forward associations, whereas AR struggles to learn the backwards association.
After incorporating these new experimental results and suggestions, thanks to reviewers’ feedback, we believe the quality of our submission has improved. We hope the factorization curse illustrates the importance of factorization-agnostic learning objectives for reliable knowledge retrieval. We believe together with our realistic WikiReversal benchmark, the factorization curse and proposed solution would be a valuable contribution to the research community for advancing the reliability of knowledge retrieval.
Pdf: /pdf/1c4a222a1fb78e270bfd26862389ad538c29f1ef.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Self-Healing Machine Learning: A Framework for Autonomous Adaptation in Real-World Environments | Accept (poster) | Summary: The paper titled "Self-Healing Machine Learning: A Framework for Autonomous Adaptation in Real-World Environments" introduces the concept of self-healing machine learning (SHML). This framework aims to address performance degradation in machine learning models due to distributional shifts. Unlike traditional concept drift adaptation methods, SHML focuses on diagnosing the reasons for model degradation and proposing targeted corrective actions. The paper presents a theoretical foundation for SHML, an algorithm (H-LLM) that uses large language models for diagnosis and adaptation, and empirical evaluations demonstrating the effectiveness of the approach.
Strengths: 1. The SHML framework is a novel approach to handling model degradation by diagnosing and addressing the root causes, rather than using reason-agnostic methods.
2. The paper provides a solid theoretical foundation for SHML and demonstrates its practical viability through empirical evaluations.
3. The concepts are well-explained, and the structure of the paper is logical and easy to follow. Figures and tables enhance the understanding of the proposed methods.
4. SHML has significant potential in high-stakes applications where maintaining optimal model performance is critical, such as healthcare and finance.
Weaknesses: 1. While the paper demonstrates promising results, the empirical evaluation is limited to a simulated diabetes prediction task. Additional experiments in diverse real-world environments would strengthen the claims.
2. The comparison with existing concept drift adaptation methods is not exhaustive. A broader set of baseline comparisons would provide a clearer picture of the advantages and limitations of SHML.
3. The paper lacks detailed information on the implementation of H-LLM, especially regarding the practical challenges of deploying such a system in real-world scenarios.
4. Although the authors mention the availability of the code upon acceptance, more details on the experimental setup and data used would help in reproducing the results.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can you provide more detailed results and analysis of SHML in a wider range of real-world environments beyond the simulated diabetes prediction task?
2. How does SHML compare to other state-of-the-art concept drift adaptation methods in terms of computational overhead and sample efficiency?
3. Can you elaborate on the specific contributions of each component of the SHML framework to the overall performance improvement?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors acknowledge the limitations of SHML, including the challenges in accurately diagnosing the root causes of performance degradation and the potential computational overhead of the approach. They also discuss the need for sufficient parallel processing capacity to handle the increased demands of multiple diagnostic and adaptation actions. The broader societal impact of SHML, including potential misuse in high-stakes applications, is also briefly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer xkmy,
Thank you for taking the time to review our paper. We're happy you think that SHML is a novel approach with a solid theoretical foundation and that has significant potential in high-stakes applications.
To address your concerns, we've expanded our evaluation with **five new datasets** and an **additional SHML effectiveness evaluation**, added a study with **six new benchmarks**, evaluated SHML across **10 different ML models**, and performed **a qualitative ablation study**. We hope this information provides reasons to consider increasing your score. Our responses (A-H) follow:
---
# (A) More datasets
While our paper's main contribution is the SHML theoretical framework, with viability studies as proof-of-concept, we agree more real-world examples improve the paper. **We've added five new datasets** common in concept drift adaptation, varying corruption levels at test time to evaluate adaptation methods. Please find this in **Table 1 in the response pdf**.
**Takeaway**: $\mathcal{H}$-LLM improves upon reason-agnostic baselines in five real-world datasets when the reason for performance drop can be diagnosed, even in test-time scenarios. This highlights the potential of self-healing ML.
We'll include these results with more experimental details in the updated paper.
---
# (B) Greater empirical analysis of self-healing ML
To address your concern about limited empirical analysis, **we've conducted an additional viability** study across the five datasets. We vary problem difficulty (corruption level) at test time and evaluate self-healing benefits (**Fig. 1 in the pdf**).
**Takeaway**: $\mathcal{H}$-LLM improves performance relative to baselines across five real-world datasets, with greater self-healing effects at higher corruption levels.
We'll include these results with more experimental details in the updated paper.
---
# \(C) Limited benchmarks
We emphasize that our viability studies aim to show that any reason-agnostic strategy will perform poorly whenever it is important to understand the reason for performance degradation. This is because such strategies do not directly address the root cause.
That said, we've extended our evaluation benchmarks. **We've run an additional viability study with five benchmarks and four adaptive algorithms** common in concept drift adaptation (**Table 2 in the response pdf**).
**Takeaway**: $\mathcal{H}$-LLM consistently outperforms adaptation methods and adaptive algorithms that fail to address test-time corruption, demonstrating self-healing ML's feasibility in high-stakes environments where understanding model degradation reasons is important.
Additionally, we run one more viability study to evaluate whether these gains are consistent across different ML models. We show that $\mathcal{H}$-LLM consistently outperforms other adaptation methods across 10 different ML models (shown in **Table 4 in the response pdf**), highlighting the model-agnostic nature of the framework.
---
# (D) Implementation of $\mathcal{H}$-LLM
We're glad you're interested in $\mathcal{H}$-LLM's practical implementation. Besides the main text, implementation details are in the appendix: components (**Appendix B.1.**), prompt templates (**Appendix B.2**), outputs (**Appendix B.3**) and viability studies (**Appendix C.2.**). That said, we acknowledge it might be difficult to find this information.
**Actions taken**: Move input/output examples and prompt structures from appendix to main text.
---
# (E) Practical challenges of deploying self-healing systems
We're excited that you're interested in the practical deployment of self-healing systems. We answer this exact concern in **Section 5.1.** titled "unique challenges of building self-healing systems". We agree that the naming of this section could be improved.
**Actions taken**: We will rename section 5.1. to *Practical challenges of deploying self-healing systems* to improve clarity. We will also expand the discussion in the appendix.
---
# (F) More details on experimental setup and data used
In the camera-ready version, we'll use the additional page to move some of the experimental details and dataset information described in **Appendix C** to the main paper.
---
# (G) SHML comparison to other concept drift adaptation methods
**Computational overhead**. SHML methods have larger overhead than reason-agnostic approaches due to the self-healing system (LLM pipeline) identifying model failure reasons. Practically, it takes 20-40 seconds to implement a full pipeline and correct a model upon drift detection. This overhead is negligible for real-world systems given the benefits. Overhead may vary across systems.
**Sample efficiency**. No differences exist as failure detection doesn't depend on sample size, but on self-healing pipeline complexity.
**Actions taken**: Add an appendix discussing computational details for SHML systems.
---
# (H) Contribution of each component
**We've performed an additional qualitative ablation study**, running a self-healing ML system and iteratively removing one component, inspecting its output (**Table 3 in supplementary pdf**).
**Takeaway**: This study illustrates that all components are required for the self-healing ML system to work. Having said this, we see the robustness of self-healing ML as a big research agenda for future work and hope this spurs research in this area.
---
# Thank you
We believe these changes should greatly enhance the paper's contribution and improve its clarity. In addition to the clarifications which will be included in the camera-ready version, we have added **five new datasets**, **additional SHML effectiveness evaluation**, evaluation across **10 ML models**, a study with **six new benchmarks**, and **a qualitative ablation study**.
Thank you for your help.
If we addressed your concerns, we hope you consider significantly revising your assessment of the paper's impact and the corresponding evaluation for NeurIPS. ☺️
---
Rebuttal 2:
Comment: Dear reviewer xkmy,
We thank you once again for the effort put into reviewing our paper. As there are only a couple working days left in the discussion period, we would like to ask if our response has satisfied your concerns. If so, we hope you consider revising your assessment of the paper's impact and the corresponding evaluation for NeurIPS. If any concerns remain, we are happy to discuss them further here.
---
Rebuttal Comment 2.1:
Comment: Hello,
Sorry for my late response. I am satisfied with your answers; therefore, I am increasing my assessment.
Regards,
Shadab
---
Reply to Comment 2.1.1:
Comment: Dear reviewer xkmy,
We are glad our response was helpful and would like to thank you for raising your score! Thanks again for your time and suggestions which have helped us to improve the paper.
Regards
Paper Authors | Summary: This paper proposes a new concept of self-healing machine learning, or SHML. The idea is based on understanding and addressing the reasons of performance drops in ML systems, thereby going beyond most common approaches that are labelled as reason-agnostic. The approach is based on a pipeline well illustrated in Fig 2 that includes four steps: monitoring, diagnosis, adaptation and testing. Formal definitions are provided. Experimental support is provided to demonstrate some aspects of the new idea and its potential advantages.
Strengths: I have identified the following strength:
- The paper starts from an interesting assumption that the reasons for performance drop should be considered and investigated.
- The paper is well organised and easy to read.
- The method is described with precise formalism.
Weaknesses: I have identified the following weaknesses:
1. While the main assumption is to move on from reason-agnostic methods to a system that attempts to understand the reasons, the paper does not offer a good classification of such reasons, mainly because it takes a meta-level approach in which reasons are to be learned. This could be less effective than pre-defining monitoring and diagnosis methods according to known causes.
2. As a follow-up from the previous point, the diagnostic part that generates hypotheses via a LLM may or may not be correct and the adaptation may or may not be effective. I find it hard to get a feeling of how well the system would work on a large set of benchmarks and problems from the evidence and experiments presented in the paper.
3. Most of the interesting details of the paper are actually in the appendix.
4. I disagree that this is the first non-reason-agnostic method, as the authors overview and list other approaches that do focus on the causes of performance drop. I agree that this may be the first approach to tackle the issue comprehensively. Nevertheless, the novelty, knowledge gap, and improvements of the approach with respect to existing approaches are not well summarised in the introduction/related work, which is rather short and refers to the appendix for further detail.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is it possible to define a medium to large set of conditions in known benchmarks that could be use to extensively test the SHML?
2. One challenge in a system with multiple sequential steps is that the failure of one component affects all the following ones. E.g., a failure in diagnosis will affect adaptation and testing. How can we assess how well the system works when failure in one component compromises the correct functioning of others?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I find that the limitation part (before the conclusion and in 5.2) is too short. Yes, I agree that the correct identification of the root causes of performance drop is challenging in real-world scenarios, but the issue needs to be expanded and discussed more. For example, why is it difficult? Are some reasons more difficult to identify than others? Monitoring could also be challenging, as could adaptation.
The impact is sufficiently covered.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear R-vweT,
We're glad you think our paper is important and easy to read.
---
# (A) Classification of reasons for performance drops
You're right that we don't provide an exhaustive classification of reasons for performance drops. We aim SHML to be a meta-level approach which flexibly adapts to various scenarios.
A meta-level approach offers two main advantages: 1) It allows the system to adapt to unforeseen scenarios and new types of model degradation not captured by pre-defined lists. 2) It flexibly generates multiple hypotheses to explain the same model degradation, as different explanations may account for the same issue (illustrated in **Table 2** of the main paper). Also note that SHML could incorporate pre-defined causes into the framework, as these represent deterministic if-else diagnosis functions and would result in a less complex policy function $\pi$ than explored in the paper.
**Actions taken**: Include the classification of performance drops in the discussion.
---
# (B) How well the system works on large set of benchmarks
We introduce self-healing ML as a framework for understanding model degradation. Our goal is to demonstrate its viability rather than provide extensive benchmark comparisons.
That said, to address your concern directly, we've conducted additional viability studies:
- We compare $\mathcal{H}$-LLM, with baseline approaches on **five real-world datasets** by introducing corruption at test time (**Table 1 in supplementary pdf**). **Takeaway**: $\mathcal{H}$-LLM improves upon reason-agnostic baselines when the reason for performance drop can be diagnosed.
- We show how the importance of self-healing ML varies with the problem difficulty (**Fig. 1** in the response pdf). **Takeaway**: The effects of self-healing are greater when the data corruption levels are higher.
**Actions taken**: We will include a summary of this discussion in the main paper.
---
# \(C) The effectiveness of each component
We emphasize that we see robust studies of each component as non-trivial research directions, as our paper primarily aims to introduce SHML. That said, we address your concern by performing a qualitative ablation study. **Setup**: We run $\mathcal{H}$-LLM and iteratively remove one component from the system, inspecting its output. This is presented in **Table 3 in the response pdf**.
**Takeaway**: We illustrate that *all components are required for the SHML system to work*.
---
# (D) Moving items from the appendix to the main text
In the camera-ready version, we'll use the additional page to expand the viability section in the main text. We'll include more details on the prompts and input/output examples of $\mathcal{H}$-LLM.
---
# (E) Is self-healing the first reason-agnostic method?
You're correct that there have been previous papers attempting to identify reasons for model degradation, discussed in **Section 2** (L83 - 90 and L105-120). To clarify our contribution, we've created a comparison table highlighting the key differences between SHML and existing approaches:
| Approach | Diagnosis | Adaptation Action |
|-|-|-|
| Concept drift adaptation [1,2] | n/a | Fixed |
| Specialized drift handling [3,4] | n/a | Fixed |
| Distribution change attribution [5] | Fixed | n/a |
| Model failure attribution [6] | Fixed | n/a |
| Dynamic classifier selection [7] | n/a | Fixed |
| SHML (Our approach) | Variable | Variable |
[1] Gama et al. (2004), [2] Lu et al. (2018), [3] Goncalves et al. (2013), [4] Alippi et al. (2013),
[5] Budhathoki et al. (2021), [6] Zhang et al. (2022), [7] Cruz et al. (2018)
**Actions taken:** We will highlight that SHML is the first framework where the diagnosis and adaptation are not fixed in advance and where the diagnosis informs the adaptation.
---
# (F) Defining conditions in known benchmarks that could be used to extensively test SHML
We agree that having an extensive set of known conditions could be extremely useful for evaluating SHML systems that would likely gain a lot of traction in the ML community. We think there are multiple conditions that could be used to simulate real-world conditions in existing benchmarks (such as data corruption, systematically changing the DGP, introducing external shocks such as covid-19) which would require significant testing and validation. We see this as promising future work.
---
# (G) Failures in sequential systems and how that affects self-healing ML
You're right that failure in one component can affect the entire system's performance.
**a) Component sensitivity analysis**. To provide initial insights, we refer to the qualitative ablation study presented earlier in our rebuttal. The study showcased that each component working is required for the system's overall performance.
**b) Built-in safeguards**. SHML includes mechanisms to mitigate cascading failures with the testing component (step 4). Actions that do not improve performance over baseline are discarded.
We see designing fail-safe self-healing systems as a promising research direction.
---
# (H) Expanding limitations
In the camera-ready version, we'll expand the discussion of the limitation sections.
1. We'll add a paragraph discussing specific challenges with root cause identification (expanding on and moving items from Section 5.1, lines 208-212)
2. We'll add a paragraph discussing the challenges with choosing an appropriate action (expanding on and moving items from section 5.1 lines 213-216)
3. We'll add a challenge explaining when understanding the root cause is easy/difficult based on our experience.
---
# Thank you
You have helped us improve our paper. Given these changes, we hope you consider revising your assessment of the paper's impact and the corresponding evaluation for NeurIPS. ☺️
---
Rebuttal Comment 1.1:
Comment: I appreciate the careful response and effort to address my concern. I believe the paper has improved as a consequence, and I'm happy to increase my assessment.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer vweT,
We are glad our response was helpful and would like to thank you for raising your score! Thanks again for your time and suggestions which have helped us to improve the paper.
Regards
Paper Authors | Summary: The paper presents a self-healing framework for machine learning models called Self-Healing Machine Learning (SHML). Unlike previous methods, SHML autonomously diagnoses the causes of model degradation and suggests corrective actions based on these diagnoses. The authors formalize SHML as an optimization problem, aiming to minimize expected risk by adapting to changes in the data-generating process (DGP).
A theoretical framework for self-healing systems is introduced, exemplified by H-LLM, which leverages large language models for self-diagnosis and self-adaptation. Empirical analyses of SHML's components demonstrate its potential and effectiveness.
The paper underscores the importance of ensuring optimal performance in algorithms used in high-stakes applications. By enabling systems to autonomously adapt to new environments, SHML aims to advance self-healing systems, benefiting both the machine learning community and society. The theoretical framework lays the groundwork for developing optimal adaptation and diagnosis methods. The authors hope this work will stimulate further theoretical developments and encourage the adoption of self-healing systems in critical fields such as medicine and finance.
Strengths: From the perspective of originality, this paper lowers the barrier for others to implement adaptation actions in machine learning models. Its importance lies in addressing the growing challenge of maintaining machine learning models, especially given their increasing usage. A framework capable of diagnosing drift or degradation and autonomously solving these issues is crucial. This paper effectively addresses this need, outperforming existing approaches in both presentation and results.
The organization of the paper is good, with a well-structured presentation. The detailed inclusion of examples, definitions, assumptions, theoretical explanations, and results in the main text and appendix is thorough and effective (but the appendix is probably required to really understand)
Weaknesses: The main weakness of this paper is its tendency to use overly long sentences and complex phrases, which can hinder readability and clarity. Another significant weakness lies in the viability section; many crucial details have been relegated to the appendix. This makes it difficult to fully understand and trust the framework based on the main text alone. Additionally, while the concept of "self-healing machine learning" is compelling, the section dedicated to it could be more concise and focused. Overall, improving the clarity of these sections would greatly enhance the paper's impact.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Is the diagnostic step continuously operational during the entire period of model usage?
2. How does the performance of the framework compare to benchmarks when applied to large-scale models?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Authors mention a limitation in section "SHML’s success relies on accurate root cause identification and finding effective adaptation policies which could pose challenges in some complex, real-world settings" (Sec. 5.1).
This could be further explored.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer 4TYN,
Thank you for carefully reading our paper. We're glad that you think we address the problem of maintaining machine learning models autonomously and that this lowers the barriers for others. We respond to each point below.
---
# (A) Moving information from the Appendix to the main text
In the camera-ready version, we'll use the additional page to expand the viability section in the main text. We'll include more details on the prompts and input/output examples of $\mathcal{H}$-LLM. We will also highlight which design choices are most important to consider such that adopters can more easily replicate and build their own self-healing systems.
---
# (B) Improving the clarity of Section 3 on self-healing ML
We agree with you that the main section could be more concise and focused. To improve clarity, we will make the following changes:
a) We'll start with a concise overview of SHML:
> Self-healing machine learning is a framework for autonomously detecting, diagnosing, and correcting performance degradation in deployed ML models. It aims to maintain model performance in changing environments without constant human intervention.
b) In the box "Self-Healing Machine Learning in a nutshell.", we will add a more intuitive explanation at the beginning.
> SHML contains four components: (a) **monitoring**: continuous assessment of model performance; (b) **diagnosis**: identification of root causes for performance degradation; (c) **adaptation**: suggesting possible corrective actions to take in response to degradation; and (d) **testing**: empirically evaluating the effect of actions on the model's performance. After these steps, the best action is implemented on the ML model. This is illustrated in Fig. 2.
c) We will contrast SHML with traditional approaches at the end of Sec. 3.3 (lines 152-154):
> The primary insight of SHML is that the best action to take in response to model degradation depends on the reason for that degradation. This contrasts with standard approaches, which often assume the best approach is independent of the degradation reason. For example, a standard drift adaptation method might continuously retrain the model, which could be suboptimal if the new dataset is corrupt.
We will also make other smaller changes, such as adding a clear definition of $f$ at the beginning, clarifying the relationship between $f$ and the policy $\pi$, and including an illustrative example.
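As a purely illustrative reading aid, the monitor/diagnose/adapt/test cycle described in this section could be sketched as follows; every name and the toy numbers are hypothetical, not taken from the paper's $\mathcal{H}$-LLM implementation:

```python
def self_heal(model, data, diagnose, propose_actions, evaluate):
    """One hypothetical SHML cycle: (b) diagnose the degradation,
    (c) propose candidate adaptation actions, (d) test them empirically,
    then implement the best-scoring action on the model."""
    root_cause = diagnose(model, data)
    candidates = propose_actions(root_cause)
    best = max(candidates, key=lambda action: evaluate(action(model), data))
    return best(model)

# Toy instantiation: the "model" is a number that actions nudge toward a target.
TARGET = 10
actions = [lambda m: m + 1, lambda m: m + 5, lambda m: m - 3]
healed = self_heal(
    model=2,
    data=None,
    diagnose=lambda m, d: "too_low",
    propose_actions=lambda cause: actions,
    evaluate=lambda m, d: -abs(m - TARGET),  # higher score is better
)
```

In this toy run the `+5` action scores best, so it is the one applied.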
---
# (C) Question: Is the diagnostic step continuously operational?
The diagnostic step is not continuously operational. It's triggered only when the monitoring component detects performance degradation, as shown in Figures 2 and 3. This design choice is due to the computational intensity of the diagnostic step. In our implementation of $\mathcal{H}$-LLM, we perform multiple language model calls to hypothesize possible model failures, propose actions and implement them. In practice, we find this loop takes about 20 seconds to a minute to finish.
Continuous diagnostics might be feasible in two scenarios:
a) Batch prediction settings: If your model makes predictions in large batches (e.g., every 15 minutes) rather than continuously (e.g., every second), a diagnosis step could potentially run for each batch, even without detected degradation.
b) Future research could explore novel architectures where LLM-based diagnostic systems run concurrently and continuously alongside monitoring. However, to the best of our knowledge, no such systems currently exist.
**Actions taken**: We will state more clearly in the main paper when the diagnosis is triggered, and expand on when it might be continuously operational.
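The trigger behavior described in this section could be sketched as follows; `maybe_diagnose`, the threshold, and the toy values are invented for illustration and are not the paper's implementation:

```python
def maybe_diagnose(perf_history, threshold, diagnose):
    """Run the (expensive, LLM-based) diagnosis step only when the
    monitored performance drops below a threshold; otherwise skip it."""
    if perf_history and perf_history[-1] < threshold:
        return diagnose()
    return None

# No degradation detected: diagnosis is skipped entirely.
result_ok = maybe_diagnose([0.92, 0.91], 0.80, lambda: "concept_drift")
# Degradation detected: the diagnosis loop runs (~20s-1min per the rebuttal).
result_drop = maybe_diagnose([0.92, 0.55], 0.80, lambda: "concept_drift")
```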
---
# (D) Question: performance with large-scale models
The SHML framework is agnostic to the model $f$ (L139-141 in the main text). Therefore, self-healing ML shows superiority over benchmarks whenever understanding the reason for degradation is important, *regardless of the learner*. We can intuitively explain this with an example: suppose that a new batch of data is corrupt. Using a reason-agnostic approach, such as retraining your model on new data, is suboptimal regardless of the model used. This is because the optimal action requires removing the corrupted data and then retraining the model on the de-corrupted data.
That said, **we've conducted an additional viability study** comparing SHML's performance against other adaptation methods **for 10 popular ML models** commonly used in practice. We include models used for large-scale industry applications, e.g. XGBoost or Random Forest. We simulate real-world degradation by corrupting data at test time and evaluating the performance of each adaptation approach. The results are included in **Table 4** in the response pdf. **Takeaway**: We show that $\mathcal{H}$-LLM outperforms other adaptation methods. This showcases that self-healing ML can benefit any downstream ML model whenever understanding the reason for degradation is important (such as test time corruption).
---
# (E) Limitations
In the camera-ready version, we will expand our limitations.
- We will move some challenges of building self-healing systems from **Section 5.1** (L206-216) to the discussion to highlight limitations.
- We will expand on the challenges in diagnosis (L208-212) by giving an illustrative example.
- We will expand on the challenges in adaptation (L213-216) by explaining when root cause identification might be easy/difficult.
- We will outline research directions that can help overcome said challenges.
---
# Thank you
You have helped us improve our paper. Given these changes, we hope you consider revising your assessment of the paper's impact and the corresponding evaluation for NeurIPS. ☺️
---
Rebuttal Comment 1.1:
Comment: Dear reviewer 4TYN,
We thank you once again for the effort put into reviewing our paper. As there are only a couple working days left in the discussion period, we would like to ask if our response has satisfied your concerns. If so, we hope you consider revising your assessment of the paper's impact and the corresponding evaluation for NeurIPS. If any concerns remain, we are happy to discuss them further here.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer 4TYN,
Thank you for your thorough review of our work. As the discussion period is coming to a close, we'd like to follow up on your prior concerns.
Have our responses and clarifications addressed your main questions and concerns? If so, we kindly ask that you consider increasing your score to better reflect this. If not, please let us know what specific issues remain, and we'll promptly provide additional information to resolve them.
We appreciate your reconsideration and look forward to your feedback. | Summary: Model performance degradation on unseen data is a classic problem. Existing approaches solve the problem through a deterministic strategy: change model, retraining, etc. This paper proposes an adaptive way to decide the action after model degradation automatically and introduces a self-healing framework. The evaluation thoroughly analyzes the intuition of SHML and its limitations.
Strengths: 1. Autonomous healing for model performance degradation is an important problem.
2. The automatic adaptation idea is novel and interesting.
3. The evaluation is strong and thorough, which covers the details to help readers further understand the scope and the logic of the proposed method.
Weaknesses: 1. The writing is hard to follow in section 3. What is the practical meaning of f? How is it related to the policy?
2. The assumption of the requirement for optimal adaptation actions is strong. Is there any practical case to support this assumption?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are well-discussed in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer LnCp,
Thank you for your thoughtful feedback on our work on self-healing machine learning. We appreciate your recognition of the importance and novelty of our approach. We'll address your concerns in two parts, corresponding to both weaknesses.
---
# (A) Clarity of Section 3
The goal of Section 3 is to describe how a deployed ML model ($f$), such as a logistic regression classifier, can be "healed" with a self-healing system ($\mathcal{H}$). In this case, *healing* refers to restoring the performance of $f$ after it drops.
A natural question is: how do we know what actions we should take to heal $f$ once it drops in performance? We say that $f$ should be healed by the healing-system $\mathcal{H}$ which follows a policy $\pi$. The policy $\pi$ outputs actions $a$ (such as $a_1$: retrain a model or $a_2$: remove corrupted features) which are then implemented *onto* $f$. Therefore, $\mathcal{H}$ follows a policy $\pi$ which helps to determine optimal actions $a$ that change/modulate the deployed ML model $f$.
We will improve the paper's writing in the following ways:
a) We will add a clear definition of $f$ at the beginning:
> $f$ represents the deployed machine learning model that we aim to heal. It is the function that makes the predictions on input data and whose performance we're trying to maintain and improve.
In our viability studies, a logistic regression model represents $f$.
b) We'll clarify the relationship between $f$ and $\pi$.
> While $f$ is the model making predictions, $\pi$ is the adaptation policy --- a function that determines what actions to take to modify $f$ based on the diagnosed reasons for its performance degradation.
c) We will include an illustrative example.
> For instance, if $f$ is a diabetes prediction model and $\pi$ diagnoses that $f$'s performance has degraded due to concept drift, $\pi$ might suggest an action to retrain f with more recent data or to adjust feature weights.
d) We will link the theory of the policy $\pi$ to the experiments.
> In our viability studies with $\mathcal{H}$-LLM, the policy $\pi$ is instantiated with an LLM (GPT-4) which uses the diagnosed reasons for model failures (also achieved with an LLM) to propose concrete actions.
e) We will explain how the policy impacts SHML:
> Self-healing ML is formalized as "an optimization problem over a space of adaptation actions." This means we aim to find the optimal actions to take each time the model $f$ degrades. These actions are chosen by the policy $\pi$ of the self-healing system $\mathcal{H}$ (Fig. 3). For instance, two different policies $\pi_1$ and $\pi_2$ might propose different actions to take in response to $f$'s performance drop.
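The distinction between $f$ and $\pi$ drawn above can be made concrete with a toy sketch; the threshold classifier and the action names are invented for illustration:

```python
# f: the deployed predictor whose performance we maintain
# (a trivial threshold classifier standing in for, e.g., logistic regression).
def f(x, weight=1.0):
    return int(weight * x > 0.5)

# pi: the adaptation policy of the healing system H -- it maps a diagnosed
# reason for degradation to an action that modifies f (it never predicts).
def pi(diagnosis):
    playbook = {
        "concept_drift": "retrain_on_recent_data",
        "corrupted_features": "remove_corrupted_features_then_retrain",
    }
    return playbook.get(diagnosis, "no_op")

action = pi("concept_drift")
```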
---
# (B) Requirement for optimal adaptation actions
You're correct that an assumption of optimal adaptation actions would be very strong: it would mean we always pick the *optimal* action in any situation. However, this is unrealistic --- we rarely know which action is truly optimal for improving a model's performance, because many possible reasons could have led to the decrease in performance. So, achieving model optimality is often impossible.
However, we believe there may have been a misunderstanding. Our framework doesn't require optimal actions. Rather, we aim to make more informed decisions to try to *approximate* optimal actions (which are never known in practice). We illustrate this with the model degradation example in **Sec. 3.2**.
To improve the clarity, we will make the following changes:
a) We will add a paragraph (lines 154-156) explaining that SHML approximates an ideal rather than achieving or assuming perfect optimality.
> In real-world ML problems, it is often impossible to determine the optimal action due to the complexity of the problem. SHML attempts to approximate an ideal optimal adaptation strategy rather than achieving perfect optimality. It does so by selecting actions based on diagnosis information, instead of relying on fixed, reason-agnostic actions (such as model retraining).
b) In Section 5.2, we'll add this practical example.
> Consider a scenario where both data corruption and concept drift occur simultaneously. A traditional method might simply retrain the model on new data (potentially incorporating corrupted values). In contrast, SHML would diagnose both issues and suggest a two-step adaptation strategy: (a) clean the corrupted data and (b) retrain the model on the drift-adjusted datasets. In our experiments, this improved model accuracy by about 18%, despite not necessarily being the theoretically optimal action (Sec. 6.1).
c) We'll expand the discussion section with:
> While SHML attempts to find optimal adaptations, it does not theoretically guarantee that the adaptations are indeed optimal. While we see this as a substantial improvement over reason-agnostic methods, future research could explore how to obtain theoretical guarantees of optimality within self-healing machine learning.
---
# Thank you
You have helped us improve our paper. Given these changes, we hope you consider revising your assessment of the paper's impact and the corresponding evaluation for NeurIPS. ☺️
---
Rebuttal Comment 1.1:
Comment: Dear reviewer LnCp,
We thank you once again for the effort put into reviewing our paper. As there are only a couple working days left in the discussion period, we would like to ask if our response has satisfied your concerns. If so, we hope you consider revising your assessment of the paper's impact and the corresponding evaluation for NeurIPS. If any concerns remain, we are happy to discuss them further here.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer LnCp,
Thank you for your thorough review of our work. As the discussion period is coming to a close, we'd like to follow up on your prior concerns.
Have our responses and clarifications addressed your main questions and concerns? If so, we kindly ask that you consider increasing your score to better reflect this. If not, please let us know what specific issues remain, and we'll promptly provide additional information to resolve them.
We appreciate your reconsideration and look forward to your feedback. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful and positive feedback!
We are encouraged by the unanimous recognition of our self-healing ML framework's importance and novelty. The reviewers consistently described our work as "important" (**R-LnCp**, **R-4TYN**, **R-xkmy**) with a "novel and interesting" approach (**R-LnCp**). Our paper has a "well-structured presentation" (**R-4TYN**) and was considered "well organised and easy to read" (**R-vweT**), with a structure that is "logical and easy to follow" (**R-xkmy**). Reviewers mentioned that our paper is "outperforming existing approaches in both presentation and results" (**R-4TYN**). We would like to note the consensus on two key themes we saw across the reviews:
- **Innovative and impactful approach**. Reviewers noted our work "lowers the barriers for others to implement adaptation actions in machine learning models" (**R-4TYN**) and has "significant potential in high-stakes applications where maintaining optimal model performance is critical, such as healthcare and finance" (**R-xkmy**). "Rather than using reason-agnostic methods" (**R-xkmy**), we show that "the reasons for performance drop should be considered and investigated" (**R-vweT**). In this way, our paper addresses "the growing challenge of maintaining machine learning models, especially given their increasing usage" (**R-4TYN**).
- **Robust theoretical foundation**. The paper provides a "solid theoretical foundation" (**R-xkmy**) and the method "is described with precise formalism" (**R-vweT**). This is complemented by empirical evaluations demonstrating its practical viability (**R-xkmy**). Reviewers saw our evaluation as "strong and thorough, which covers the details to help readers further understand the scope and the logic of the proposed method" (**R-LnCp**). Reviewers appreciated the "detailed inclusion of examples, definitions, assumptions, theoretical explanations, and results in the main text and appendix" which are "thorough and effective" (**R-4TYN**).
---
# Information in the supplementary pdf
We provide five new viability studies in total. That said, we would like to note that we see the primary contribution of our paper as formalizing self-healing machine learning, which researchers and practitioners can use to build their own self-healing systems.
We provide the following information in the supplementary pdf.
- **Table 1.** Additional viability studies involving **five real-world datasets**. The datasets cover a wide variety of setups: Airlines (Bifet et al., 2010), Poker (Cattral et al., 2007), Weather (Elwell & Polikar, 2011), Electricity (Zliobaite, 2013), Forest Type (Blackard, 1998). We simulate real-world unexpected degradations by corrupting features at test time and evaluating models for different numbers of corrupted features and corruption values. **Takeaway**: $\mathcal{H}$-LLM better adapts at test time to issues that require reasoning about the structure of the data generating process across five datasets. This showcases the need for using self-healing systems in real-world environments.
- **Figure 1**. **Empirical insights** into the effect of self-healing on downstream accuracy for each of the **five datasets**. We systematically vary the corruption value and the number of corrupted columns and quantify the accuracy with and without triggering a self-healing mechanism. **Takeaway**: We find that the effect of applying a healing mechanism is largest when corruption levels in the dataset are highest. Furthermore, we find that applying a healing mechanism consistently improves downstream accuracy in the presence of data corruption. This showcases the importance of healing mechanisms in test-time environments.
- **Table 2**. Additional viability study involving **more benchmarks** on the original diabetes prediction task. We benchmark against an additional adaptation method and four other adaptive algorithms. Results are shown on the original diabetes setup described in the paper. We do not vary corruption values for space reasons. **Takeaway**: The additional benchmarks are unable to cope with adaptations that require reasoning about the structure of the data generating process.
- **Table 3**. **Qualitative ablation study** results for $\mathcal{H}$-LLM. We systematically remove one component of the system and inspect its outputs. **Takeaway**: All four components are required for self-healing to work. Removing any component results in poorer adaptations.
- **Table 4**. Additional viability study **evaluating each adaptation action across 10 different ML models**. We follow the same setup as in Table 1 by corrupting values at test time and varying the underlying model used. **Takeaway**: We show that self-healing is agnostic to the kind of model used. Self-healing ML can benefit any downstream ML model whenever understanding the reason for degradation is important (such as test time corruption).
---
# Thank you
The review has been extremely productive. We thank everyone for their help in shaping the paper to be in better form.
Pdf: /pdf/ab6a7581fb6abf78d77eb5216f4b8fbdf5bf7d30.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Foundation Inference Models for Markov Jump Processes | Accept (poster) | Summary: The paper explores the possibility of using transformers to directly infer the parameters of a Markov jump process (MJPs) from a noisy time data set obtained from a time-dependent process. They first train the machine to correctly predict the model parameters by using synthetic datasets containing data obtained with different realizations of MJPs defined with different parameter sets. Later, they show that the models trained in this way are nevertheless able to derive the parameters of a real simulated or experimental process, even though they were not trained on this data. They also show that one-shot inference is as good as state-of-the-art methods when trained on the target datasets.
Strengths: The authors present a very simple yet powerful and practical idea of using neural networks to extract information from a time-dependent process. In contrast to traditional unsupervised approaches, whose goal is to learn the model parameters to generate samples that are as similar as possible to the original, their goal is a fully supervised one to correctly predict the model parameters in inverse synthetic experiments. They show that the neural network is not only powerful enough to infer all the parameters occurring in the master equation in controlled experiments, but also that they can later successfully apply these trained models to unseen real-world datasets.
Weaknesses: Despite the originality of the work, there are some aspects that remain unclear to me. I find that important details about how to apply the trained models to real data in practice are missing. I also think the limitations are not properly explored.
Technical Quality: 3
Clarity: 3
Questions for Authors: Do they make their predictions on the basis of coarse-grained data? If so, how are these representations obtained? If not, how is the data standardized so that they can transfer the models?
I did not understand if the adjacency matrix is learned or not, and I am a bit confused about the information given to the machine in real datasets if not.
I find that the limitations of this approach are not properly explored or discussed in the text. Are the processes used to generate the synthetic datasets realistic in a general setup? Are they able to describe complex processes like the data generated with the trap model? or just Gaussian processes such as a Brownian motion?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I discussed my doubts about the limitations above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the helpful comments and questions, which we know will improve the presentation of our work. Below we address each of them.
**@W1** (On how to apply the model): We kindly refer the reader to our general response above. We hope it addresses the reviewer’s comment on the lack of details on how to apply FIM to any dataset. It will be included into our manuscript (see summary above).
**@W2** (On limitations): Please see our response to your Q3 below.
**@Q1** (Do they make their predictions on the basis of coarse-grained (CGR) data): Yes, FIM takes coarse-grained data as input.
As we explained in our introduction (see e.g. lines 49-54 and lines 116-117), our goal is to tackle the classical MJP inference problem on *coarse-grained (i.e. discrete) space*.
In practice, one can obtain these CGRs by e.g. leveraging pretrained models specifically tailored to performing CGR inference in specific datasets or, more naively, by simply using clustering algorithms.
In our experiments we used the CGR inferred by NeuralMJP, for the sake of fairness when comparing against it. We also used naive CGRs obtained via Gaussian mixture models (see e.g. Section 4.2, lines 283-287) and the KMeans clustering algorithm (see e.g. Section 4.3, lines 305-307 and Appendix E.2, lines 674-683). Table 3 in the main text and Tables 12, 15, 17 and 18 in the Appendix contain our estimated observables wrt. the different CGRs we employed for each of our datasets.
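As a concrete (and deliberately naive) illustration of obtaining a CGR by clustering, here is a minimal 1-D k-means sketch in pure Python. This toy code is not what was used in the experiments above (those relied on NeuralMJP's CGR, Gaussian mixture models, and KMeans); it only shows how a continuous trajectory can be mapped to discrete states:

```python
def kmeans_1d(values, k, iters=50):
    """Lloyd's algorithm for scalar observations: map each value to one
    of k discrete states, i.e. a naive coarse-grained representation.
    Assumes k >= 2 and len(values) >= k."""
    sv = sorted(values)
    # Spread the initial centers across the observed range.
    centers = [sv[round(i * (len(sv) - 1) / (k - 1))] for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assign each observation to its nearest center.
        labels = [min(range(k), key=lambda j: abs(v - centers[j])) for v in values]
        # Recompute each center as the mean of its members.
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:  # keep the old center if a cluster empties out
                centers[j] = sum(members) / len(members)
    return labels, centers

# Toy trajectory hovering around two metastable levels (~0 and ~1).
traj = [0.05, -0.02, 0.97, 1.03, 0.01, 0.99, 1.02, -0.04]
states, centers = kmeans_1d(traj, k=2)
```

The resulting `states` sequence (together with the observation times) is then the only input FIM would need.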
Let us finish this response with the following remark. The inference of CGRs should be understood as a problem distinct from MJP inference. Accordingly, it is to be considered outside the scope of the present work (see e.g. our discussion in lines 102-117 in the related work section). Including such an inference in the FIM methodology would amount to adding an emission probability model to the generative model (Eq. 2) of our synthetic training distribution. Effectively, this would be equivalent to replacing our noise distributions with more complex models. We do not pursue this avenue, but the present work serves as a basis for such an extension.
**@Q2** (On the adjacency matrix): The adjacency matrix $\mathbf{A}$ only appears in our model as part of our synthetic training distribution, and it simply defines the level of sparsity of the embedded Markov chain of the MJPs within the training dataset. Accordingly, the adjacency matrix $\textbf{A}$ is *not needed* as input to the model. One only requires the time series of the observation times and the corresponding values (states) in CG space (see also the general reply above for details).
Finally, note that since FIM returns the $C \times C$ inferred rate matrix $\mathbf{\hat F}$ (see e.g. Eq. 5), the target adjacency matrix can be understood as being implicitly inferred by FIM (that is, it can be extracted from the inferred $\mathbf{\hat F}$ matrix).
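Extracting the implied adjacency from an inferred rate matrix, as described above, is a simple thresholding step. The 3-state matrix and the cutoff below are invented for illustration, not taken from the paper:

```python
def adjacency_from_rates(F, cutoff=1e-3):
    """Read off the embedded-chain adjacency from a C x C rate matrix:
    an edge i -> j exists when the off-diagonal rate exceeds a cutoff."""
    C = len(F)
    return [[1 if i != j and F[i][j] > cutoff else 0 for j in range(C)]
            for i in range(C)]

# Hypothetical 3-state inferred rate matrix (rows sum to zero).
F_hat = [[-0.5, 0.5, 0.0],
         [0.2, -0.3, 0.1],
         [0.0, 0.4, -0.4]]
A_hat = adjacency_from_rates(F_hat)
```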
**@Q3**. (On limitations): The question is twofold. Let us start by answering whether our synthetic datasets are realistic enough to be useful in a general setup. If we interpret this question as whether our synthetic distributions are wide enough to cover empirical path distributions of real-world problems, our answer is *yes*. Indeed, we leveraged one and the same pretrained FIM to infer hidden MJPs from 5 very different target datasets. In particular, the switching ion dataset is a real-world, experimental dataset recorded from a viral ion channel, and the ADP dataset is an all-atom simulation of the alanine dipeptide molecule. Both these datasets correspond to very complex dynamical systems. Yet FIM, trained on our heuristically constructed synthetic dataset, is able to make the same inference as the baselines, without the need for any fine-tuning.
The second question concerns the applicability of our pretrained FIM to diffusion processes and to trajectories sampled from the trap model. Traditionally, MJPs and diffusion processes are treated (or better, defined) as different types of (continuous-time) stochastic process, as they have different supports. Indeed, MJPs are defined as processes which take values on discrete sets (like e.g. the integers), whereas diffusion processes are defined as processes which take values in continuous sets (like e.g. the real numbers). Our methodology is designed to tackle the inference of MJP only, and hence cannot be used to infer diffusion processes.
Nevertheless, let us remark, for the sake of completeness, that there are certain limits in which diffusion process can be asymptotically described in terms of MJPs, as first studied by van Kampen and Kubo (see e.g. Gardiner, 2009, chapter 11, or the very recent work of Winkler et al. 2024). These limits involve state spaces that are typically unbounded, and hence are out of the scope of the present work, which focuses, as we stressed in our abstract, on the inference of MJP on bounded state spaces.
Similarly, trap-like models, understood as phenomenological models for relaxation times (as in the mean-field ferromagnetic model of Griffiths et al. 1966, or the model for aging in disordered systems of Bouchaud 1992), can be understood as MJPs on unbounded state spaces (i.e. space of infinitely many metastable states). Our methodology targets the inference of MJP on bounded state spaces and hence cannot be used to model trajectories from trap-like models.
Let us finish this section by kindly referring the reviewer to our response to weakness W2b of Reviewer sVKK, where we discuss the limitations of our methodology, especially with respect to the synthetic training distribution.
*Links to references*:
- Gardiner (2009): https://link.springer.com/book/9783540707127
- Winkler et al. (2024): https://arxiv.org/pdf/2405.03549
- Griffiths et al. (1966): https://journals.aps.org/pr/abstract/10.1103/PhysRev.149.301
- Bouchaud (1992): https://jp1.journaldephysique.org/articles/jp1/abs/1992/09/jp1v2p1705/jp1v2p1705.html
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed explanations. My question Q3 was more about the complexity of the free energy landscapes of inference problems that can be addressed. I apologise for my abuse of language. Is the method still reliable when it comes to glassy dynamics or rugged landscapes?
---
Reply to Comment 1.1.1:
Comment: Let's now finish with some remarks.
- Let us note that our methodology is suitable for any choice of the coarse graining procedure. The latter is left to the practitioner, for different regions of the energy landscape can be better suited for the state definition, depending on the application at hand.
- The general applicability of our pretrained FIM was demonstrated by inferring MJPs from synthetic, simulated and experimental datasets of very different nature. However, albeit general enough to cover these cases, our **proposed training distribution should be understood as a first example**. Our main goal was to demonstrate that designing broad synthetic training distributions can be used to train neural network models to perform (amortized) zero-shot inference of MJP in different situations.
- Future research can involve, for example, extending our dataset to include power-law distributed rates into our datasets, and hence extend the applicability of the newly trained FIM to the case discussed above. One can also extend the pretraining dataset to include MJP of larger space state sizes, featuring connecting networks with different distributions.
---
Rebuttal 2:
Comment: As we understand it, an MJP description of glassy dynamics, as done by e.g. Bouchaud (1992), identifies the metastable states of the disordered system in question (i.e. the set of energy minima characterizing the energy landscape) with the states of the MJP. Within this description, the MJP dynamics depends on the shape of the energy landscape, inasmuch as the transition rates between states $i$ and $j$ ($f_{ij}$ in our notation) are characterized by the depth of the *trap* (that is, the height of the barrier between them).
In equations, practitioners typically write $f_{ij} = \exp(-E_j/T)$, where $E_j$ is the trap depth and $T$ is the temperature.
What matters when applying FIM to such systems is that
1. the number of metastable states in the system **is smaller than or equal to** $C$ (the size of the largest state space in our dataset, which we set to 6 in our experiments); and that
2. the distribution of energy barriers in the system is such that the corresponding transition rate distribution **is within our training dataset**.
Just to give an example, we can assume an exponential energy barrier distribution and find, by using the expression for $f_{ij}$ above, a rate distribution of the form
$p(f) = A T f^{T-1}$, for some constant $A$,
which is a power law. However, our prior rate distribution $p_{FIM}(f)$ is a (mixture of) Beta distribution(s). See e.g. line 150 or lines 473-475. Therefore, we expect FIM *trained on our current dataset*, with Beta priors, to perform poorly for glassy systems whose energy landscapes feature exponentially distributed trap depths.
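For completeness, the power law quoted above follows from a one-line change of variables; here we assume a unit-rate exponential barrier distribution $p(E) = e^{-E}$ (so that the constant $A = 1$):

```latex
p(f) = p(E)\left|\frac{\mathrm{d}E}{\mathrm{d}f}\right|,
\qquad E = -T \ln f,
\qquad \left|\frac{\mathrm{d}E}{\mathrm{d}f}\right| = \frac{T}{f},
\qquad \text{so} \quad
p(f) = e^{T \ln f} \cdot \frac{T}{f} = f^{T} \cdot \frac{T}{f} = T\, f^{T-1}.
```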
We truly hope that the above answers the reviewer's question.
---
Rebuttal Comment 2.1:
Comment: I want to thank again the authors for the detailed explanation. That's what I expected. I think the issue should be discussed in the limitations section. Trap like dynamics are relevant in the context of disordered proteins, as the random energy model presents trap dynamics.
I will update my score to accept.
---
Reply to Comment 2.1.1:
Comment: We would like to thank the reviewer again for their comment and respond that we will include the discussion above into the Limitations section, as suggested. | Summary: The authors propose a foundation model for Markov jump processes (MJPs) that is trained on sequences drawn from synthetic MJPs to predict the corresponding rate matrix and initial state distribution. High-level arguments are presented for the justification of model trained in such a way to be able to generalize to unseen sequences drawn from different processes. Extensive experimental results are shown demonstrating the model's performance after initial training on synthetic data and without any further fine-tuning.
Strengths: The paper is very clearly written and the approach is motivated well. The lack of finetuning in the experiments showcase well the generalizability of the proposed foundation model. Many experiments are showcased that do a good job of demonstrating the intended strengths of the model.
Weaknesses: As pointed out in the paper, the model naturally has weaknesses generalizing to data that lies outside of the distribution it was exposed to in training, such as on sequences with very high rates or with states that lie outside the initial support.
Outside of this, I could not find any other weaknesses of the paper; I found it to be very good in general.
Technical Quality: 4
Clarity: 4
Questions for Authors: I personally believe the success demonstrated by the foundation model, in how well it generalizes, is due to the simplicity (not a bad thing!) of the data, i.e., it is single dimensional across time and is Markov. Would you agree with this or am I missing something?
Additionally, I was wondering if "zero-shot" is really appropriate to use here. Typically, zero-shot is used when trying to produce predictions when there is no labeled data to learn from, i.e., predicting "dogs" vs. "cats" with an image model that has seen neither during training. This is commonly achieved by providing some side-information, such as a description of a class. Here the model does indeed not see any "labeled" rate matrices; however, it does see many different sequences realized from this matrix. I see it as less zero-shot and more so that the inference problem has been amortized. By this, I mean that other methods directly learn an MJP whereas this predicts the parameters to one given a batch of sequences. I am not saying any of this to detract from the paper, but rather just to dial down how to actually think about what it is doing and the language used to describe it. Any thoughts on this would be great to hear.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors adequately describe the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for both the detailed review and the kind words about our work. Below we address each of the comments and suggestions.
**@Q1** (FIM success due to simplicity of the data): We completely agree with the reviewer. Indeed, we argued in Section 3, lines 119-127 of our paper, that the main assumption underlying our FIM methodology is that *the space of realizable MJPs, which take values on bounded state spaces that are not too large, is simple enough to be covered by a heuristically constructed synthetic distribution over noisy and discretely observed MJPs*.
That being said, we are currently working on extending our methodology to other families of Markov processes, like e.g. single MJPs on unbounded state spaces, or multidimensional (i.e. coupled) MJPs, for which, despite their significantly larger state space, we think we can define, via information theoretic arguments, synthetic distributions that cover well their realizable trajectories.
**@Q2** (On the zero-shot terminology): We understand the point raised by the reviewer. We use the zero-shot terminology in the spirit of the work in Larochelle et al. (2008). That is, for us, zero-shot learning aims to recognize objects (e.g. stochastic processes) whose instances (e.g. samples or paths) may not have been seen during training.
As the reviewer writes, FIM indeed does not see any labels (i.e. rate matrices) during inference. That is one of our first motivations to use the zero-shot terminology. We understand that what we propose is, perhaps, an extension of this terminology to the case of inference of stochastic processes, which we intended to go hand in hand with our foundation (inference) model terminology (which is also an extension of the NLP terminology).
Nevertheless, we do not want to create any confusion as to how FIM works. We completely agree with the reviewer in that our methodology can be fully understood as an amortized inference procedure. In fact, reviewer EGs8 also drew our attention to the work of Paige&Wood (2018), who described a procedure to train a recognition model offline via amortized inference.
What we therefore propose is to
- first, add to the related work section a paragraph on amortized inference and offline learning, where we reference Paige&Wood (2018), together with other representative works in this direction, and connect them with our approach; and
- second, explain in Section 3 what our motivation for the zero-shot terminology is, and how we understand it as an amortized inference procedure.
Does the reviewer think that these modifications will help avoid misunderstandings with our terminology?
*Links to references:*
- Larochelle et al. (2008): https://cdn.aaai.org/AAAI/2008/AAAI08-103.pdf
- Paige&Wood (2018): https://arxiv.org/pdf/1602.06701
---
Rebuttal Comment 1.1:
Comment: Thank you for the in-depth responses to my questions! I do in fact believe this extended discussion of the zero-shot terminology would go a long way towards better communicating this concept to the reader.
I maintain my original score. | Summary: This study describes a foundation model for a specific stochastic process, the Markov jump process (MJP). The foundation model, called FIM, is trained with a large number of synthetically generated MJP data sets. It is shown that the pre-trained FIM can make zero-shot inferences. The capability of FIM is demonstrated using a few data sets.
Strengths: The paper is well written. It clearly formulates the problem setup and the proposed approach, which makes it read well. While the proposed method is exploratory, it demonstrates some interesting capability. The numerical experiments are limited, but well documented.
Weaknesses: The numerical experiments are not strong enough. Numerical experiments should either 1) compare the performance of the proposed model against a cohort of state-of-the-art methods, or 2) systematically demonstrate the capability and limitations of the proposed model and investigate its behavior under a range of important conditions. However, the manuscript does not achieve either. Based on the Appendix, I believe that the authors have performed an extensive number of studies. A more detailed analysis of the model's behavior with respect to the range of parameters would be appreciated.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. It is not clearly explained how the data are scaled. Since the training data is generated from equations, it should be fine. But for inference, the input data should be correctly scaled. How is the inference data scaled? Are there any effects of the scaling on the inference result?
2. Similar to question 1, how is the time scale determined? The model predicts "rate", which depends on the time scale of the data. What method do you use for the inference time? Is the time-scaling method general enough to be applied to any real-life MJP?
3. A more detailed explanation about the number of states is required. The model is trained with the maximum number of states of six. What happens when the number of states is less than 6? What do $A$ and $F$ look like? Do the redundant elements of the matrices become exactly zero? If not, how do you normalize the transition probability?
4. Similar to 3, it is interesting to see how the model behaves if the number of states is larger than the maximum number of states in the training, as in real-life problems it is difficult to know or restrict the number of states.
5. During the training, when the number of states is less than six, how do you compute $\mathrm{Var}\,\hat{f}_{ij}$? Note that $\mathrm{Var}\,\hat{f}_{ij}$ is used in the denominator and in a log.
6. How accurate is the prediction of the initial state, $\pi_0$?
7. The prediction requires a prior distribution for $A$. What are the effects of $A$ on the prediction? What happens if the data is generated from a MJP with a different distribution of $A$?
8. The inference of FIM requires a fairly large amount of input data. A conventional approach is to fit a state-space model with the same amount of data to learn the MJP parameters and make an inference. How does FIM compare against the standard ML approach?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Let us thank the reviewer for the detailed review, as well as for the questions raised and the weaknesses pointed out. However, let us remark here that some of the stated weaknesses are somewhat vague. Accordingly, below we ask the reviewer to be more specific in (some of) their remarks.
**@W1** (On comparisons against state-of-the-art): We tackle the problem of MJP inference from noisy data for processes defined on bounded state spaces. A prominent application of these processes is the modeling of conformational dynamics in proteins and molecules, and in our paper we considered two such examples: the conformational switching of ion channels and the molecular dynamics of a 22-atom molecule. We compared FIM against
- The VampNets model of Mardt et al. (2017): the go-to, state-of-the-art deep-learning approach for modeling molecular dynamics via discrete-time Markov chains;
- The SDiff model of Koehns et al. (2021): a recent variational model that has been applied to the modeling of switching ion channels; and
- The NeuralMJP model of Seifner & Sanchez (2023): to the best of our knowledge, the most recent, state-of-the-art neural network model available online, which leverages neural variational inference and neural ODEs to infer hidden MJPs.
We therefore believe that we have compared our methodology against a set of state-of-the-art models. Nevertheless, we would be very happy if the reviewer could point out any state-of-the-art model we did not include into our related work.
**@W2a** (On the systematic demonstration of the capabilities of FIM): We have empirically demonstrated that one and the same pretrained FIM can be used to estimate stationary distributions, relaxation times, mean first-passage times, time-dependent moments and thermodynamic quantities (i.e. the entropy production) from five very different, noisy and discretely observed MJPs, taking values in state spaces of different dimensionalities, without the need of any fine-tuning. What is more, FIM was shown to perform on par with all the baselines.
We believe that these tests cover well the capabilities of FIM. We therefore would very much appreciate if the reviewer could be more specific as to what other demonstrations they are referring to.
**@W2b** (On the systematic demonstration of the limitations of FIM): As we explained in the limitations section of our paper, the main limitations of FIM are related to the synthetic training distribution. We expect that if one leverages FIM to infer the rate matrix from an empirical process whose path distribution lies well outside our synthetic training distribution, one will obtain poor rate estimates. Figure 4 represents such an example.
Beyond this, we have also explored how other features of our synthetic training distribution affect FIM. Indeed, as we explained in lines 199-207 and lines 212-217 of the main text, FIM is expected to perform best for the context number $c(300, 100)$, a number which is specified by the training distribution. We studied how FIM handles context numbers that are different from the optimum in both Appendix D.1 and D.4, and we refer the reviewer to them for details.
Similarly, we have also studied the effect of training FIM on a synthetic dataset which contains only six-state MJPs in Appendix D.3.
Again, we believe all these experiments cover well the limitations and features of FIM, especially as regards the synthetic training distribution, and we would be glad if the reviewer could be more specific as to which type of experiments they refer to.
**@Q1, Q2, Q3**: We kindly refer the reviewer to our general comment above.
**@Q4**: As explained in the general comment above, FIM always returns a $C\times C$ matrix. It is therefore not possible for it to predict intensity matrices of dimension higher than C.
**@Q5**: As can be read from equation 6, during training we ask FIM to predict zeros for those redundant matrix elements. Indeed, the first two terms in equation 6 are both multiplied by $a_{ij}$, which is zero when the target rate is zero. Similarly, the last two terms are multiplied by $(1-a_{ij})$, so that only these terms are active when the target rate is zero.
**@Q6**: Let us answer this question by reporting our predictions for the DFR dataset with $V=1$. The ground-truth initial condition is given by the distribution
$\pi_0 = [0.301, 0.137, 0.062, 0.200, 0.159, 0.141]$.
FIM predicts
$\hat \pi_0 = [0.224, 0.147, 0.112, 0.189, 0.155, 0.173]$.
Our estimates are therefore reasonably accurate. For completeness we will report both our estimated $\hat \pi_0$ and their empirical counterpart for all target datasets in the Appendix.
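For context, one simple way to quantify "reasonably accurate" here is the total-variation distance between the two reported distributions; the quick check below (our own computation on the numbers above, not from the paper) gives roughly 0.09:

```python
# Total-variation distance between the reported ground-truth and
# predicted initial distributions for the DFR dataset (values copied
# from above; the TV computation is our own illustrative check).
pi0 = [0.301, 0.137, 0.062, 0.200, 0.159, 0.141]
pi0_hat = [0.224, 0.147, 0.112, 0.189, 0.155, 0.173]

tv = 0.5 * sum(abs(p - q) for p, q in zip(pi0, pi0_hat))
print(f"TV(pi0, pi0_hat) = {tv:.3f}")
```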
**@Q7**: The adjacency matrix $\textbf{A}$ only appears as part of our training distribution, and it simply defines the level of sparsity of the embedded Markov chain of the MJPs within the training dataset. Thus, no information about the adjacency matrix is needed during inference.
As explained in Appendix B.1, lines 479-484, the adjacency matrix is sampled from an Erdős–Rényi model, with edge probability 0.5, and is rejected if the corresponding graph is not connected. This prior on $A$ is, just as all other priors in Eq. 2, part of our definition of the FIM synthetic training distribution. Our experiments corroborate that these heuristic priors, which we define in Appendix B.1, are expressive enough for our trained FIM model to perform well in our 5 target datasets.
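The rejection-sampling step just described is easy to reproduce; below is a minimal sketch (our own code, with hypothetical function names) of sampling a connected Erdős–Rényi adjacency matrix with edge probability 0.5:

```python
import random
from collections import deque

def sample_connected_adjacency(c, p=0.5, rng=None):
    """Rejection-sample a symmetric adjacency matrix of an
    Erdos-Renyi graph G(c, p), keeping only connected graphs
    (a sketch of the prior described in the rebuttal)."""
    rng = rng or random.Random()
    while True:
        A = [[0] * c for _ in range(c)]
        for i in range(c):
            for j in range(i + 1, c):
                if rng.random() < p:
                    A[i][j] = A[j][i] = 1
        if is_connected(A):
            return A

def is_connected(A):
    """Breadth-first search from node 0; connected iff all nodes are reached."""
    c = len(A)
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for v in range(c):
            if A[u][v] and v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == c
```

For $C=6$ and $p=0.5$ most draws are already connected, so the rejection loop is typically cheap.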
**@Q8**: In our experiments we make use of as many data points as the baselines in all experiments, for fairness in comparison. We give the sizes of each of these datasets in each subsection of section 4 in the paper.
We however do not fully understand what the reviewer means by the last question: *How does FIM compare against the standard ML approach?* All our baselines are different instances of the “standard ML approach”. Indeed, the goal of our paper is to show that FIM provides a good alternative to the standard paradigm.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the clarifications. My last comment is a question regarding a model trained for the target distribution versus a foundation model trained with a large number of data sets. In the conventional way, you train an MJP model for a particular data set and make an inference for the data set with the same underlying distribution. In FIM, you train FIM using a large collection of data sets and make an inference on a new data set. It would be interesting to compare the accuracy of these two different approaches. It will be surprising if FIM outperforms an MJP model tailored for the particular data-generating distribution. But it will be interesting if FIM can show comparable accuracy.
Re-iterating my first review, the study is not perfect and still exploratory. But it is interesting and certainly has a merit. I will update my score from 5 to 6.
---
Rebuttal 2:
Comment: *Links to references*:
- Mardt et al.(2017): https://www.nature.com/articles/s41467-017-02388-1
- Koehns et al. (2021): https://proceedings.neurips.cc/paper/2021/hash/abec16f483abb4f1810ca029aadf8446-Abstract.html
- Seifner & Sanchez (2023): https://proceedings.mlr.press/v202/seifner23a.html
- Gazzarrini et al. (2006): https://www.pnas.org/doi/full/10.1073/pnas.0600848103
---
Rebuttal 3:
Comment: We would like to thank the reviewer for their comment and respond that what the reviewer asks is precisely what we did.
Our three baselines (NeuralMJP, SDiff, and VampNets) were trained in the conventional way. That is, our three baseline models were trained on the target datasets. We mention this in our abstract and again on line 82. See also the discussion on lines 234, 237, and 238.
We realize, however, that we may not have been that clear about this. Therefore, to make this point more explicit, we will add a line in the Experiments section to clarify that all baselines are trained on the target datasets. | Summary: This work presents a framework for amortizing inference on Markov Jump Processes by learning a foundational model in a supervised fashion from synthetic data. Once learned, the "foundation model" is shown to be successful at zero-shot inference in MJP across a range of domains, out-performing SOTA models that are fine-tuned to the target datasets.
Strengths: Clear exposition of why the problem is important and the motivation behind the approach. Clear exposition of the method and results.
Weaknesses: Discussion of prior literature could be improved. Some relevant work is on amortized inference in other types of Bayesian models. E.g. "Inference Networks for Sequential Monte Carlo in Graphical Models" learns a supervised model on synthetic data for zero shot inference on Bayesian regression coefficients.
Technical Quality: 3
Clarity: 3
Questions for Authors: You mention a main limitation is that generalization is poor outside of the family of models assumed in the synthetic training data. What happens to generalization as you scale up the NN model and widen the family of MJPs in the training data (like for LLMs)?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First of all, we would like to thank the reviewer for taking the time to review our work and for the helpful comments and questions.
**@W1**: We thank the reviewer for pointing this reference out. We will include it into our related work section, together with other representative references on amortized inference and offline learning.
**@Q1**: Indeed, we expect that if one leverages FIM to infer the rate matrix from an empirical process whose path distribution lies well outside our synthetic training distribution, one will obtain poor rate estimates. As we commented in the limitation section, Figure 4 represents such an example.
As implied by the reviewer, there are two paths one can follow to improve the performance of FIM in such cases:
(i) scaling the parameter count of FIM, and
(ii) widening the MJP training distribution.
*Regarding parameter scalings*, we have empirically observed that, for our current 45K MJP synthetic dataset, increasing the parameter count does not necessarily improve the performance of FIM. This can be seen, for example, in Figure 7 of Appendix D.2, in which, broadly speaking, the performance of different FIM architectures, with respect to our synthetic distribution, is statistically similar. We invite the reviewer to read our discussion in lines 592-606 for details.
*Regarding the MJP training distribution*, we do expect that widening our synthetic training distribution will improve the performance of FIM in e.g. the cases reported in Fig. 4. Future work will explore how to define wider synthetic distributions that contain rare or exceptional processes, and how scaling of the FIM parameter count in such cases affects their encoding.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I thank the authors for their thoughtful response to my review and am glad to keep my score as accept. I look forward to future work in this area building upon this paper. | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their valuable comments and questions. We have carefully read and addressed all of them, for each reviewer separately, in the discussions below. We have however noticed a couple of common questions among (some of) the reviewers, namely
1. How is the input data of FIM scaled, so that FIM can be applied to real-world data?
2. How does FIM handle MJPs whose state space size is smaller than the maximum size $C$ in the training ensemble?
Let us address each of these here, and summarise the main updates we will make to the manuscript, which we believe answer all questions and concerns of the reviewers.
**1. On the scaling of the input data**: As we explained in Appendix C.2, lines 549-556, FIM takes as input:
- the normalized sequence of observation times, which lie on the interval $[0, 1]$;
- the values of the process at those observation times, which are sequences of integers taken from the set $\{0, 1, …, C-1\}$, where $C$ is the size of the largest state space in our training dataset;
- the number of observations per simulation; and
- the time scale $T^*$, which we define as the observation time of the last event in the (unnormalized) input time series.
(*On the scaling of the observation values*). The observation values do not need to be scaled since they are nothing but sequences of integers from the set $\{0, 1, 2, … C-1 \}$. Indeed, as we explained in our introduction (see e.g. lines 49-54 and lines 116-117), our aim is to tackle the classical MJP inference problem on coarse-grained (i.e. discrete) space.
(*On the scaling of the observation times*). In order to be able to process arbitrary input data, FIM requires the observation times to lie on the interval $[0, 1]$. Indeed, as we explained in Appendix B.1, lines 485-490 and lines 505-507, our target rate matrices take, by construction, values between 0 and 1 only. We use these matrices to simulate MJPs on the time interval [0, 10], and then map our random observation times to the interval [0, 1]. We do this mapping by normalizing all observation times with respect to the maximum observation time in the set of paths we simulate per MJP. That is, we define the *time scale* of the simulated paths to be the time $T^*< 10$ of the last event in the path set of a given MJP. Naturally, this normalization affects the target rates (which have units of one over time) only by the factor $T^*$. *FIM is trained to predict the normalized rates*.
The true (unnormalized) rates can then simply be computed by multiplying the matrix $\mathbf{\hat F}$ returned by FIM with $1/T^*$. In practice, this normalization happens inside FIM, which is why the model requires $T^*$ as input.
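The normalization just described is simple to state in code; a sketch under our own naming (not the authors' implementation):

```python
def normalize_times(paths):
    """Map raw observation times onto [0, 1]: T* is the time of the
    last observation across the path set, and every time stamp is
    divided by it (a sketch of the rescaling described above)."""
    t_star = max(max(path) for path in paths)
    return [[t / t_star for t in path] for path in paths], t_star

def unnormalize_rates(f_hat, t_star):
    """Rates have units of 1/time, so the model's normalized rates are
    recovered in the original units by multiplying with 1/T*."""
    return [[f / t_star for f in row] for row in f_hat]
```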
Note that, as we empirically demonstrated in the paper, this rescaling procedure allows us to work with real-world MJPs of arbitrary time scales. For example, the time scales for the switching ion channel dataset were more than 500 times smaller than the time scales in our training dataset.
**2. FIM and MJP with different state space sizes**. As we mentioned above, FIM takes as input integer sequences (i.e. the observations) on the discrete set $\{0, 1, 2, … C-1 \}$. FIM always returns the $C(C-1)$ off-diagonal elements of the $C \times C$ inferred rate matrix $\mathbf{\hat F}$, whose diagonal elements are then computed as $\hat F_{ii} = -\sum_{j \neq i} \hat F_{ij}$.
In our experiments we trained FIM on a synthetic dataset with $C=6$.
Now, we arrange all the target rate matrices $\mathbf{F}$ within our training dataset for MJPs with state spaces of size $c < C=6$ to be the upper-left $c \times c$ block within a $6\times 6$ matrix of zeros, so that the redundant matrix elements are always zero. As can be read from equation 6, and as we explained in lines 196-198, we train FIM to predict zeros for those redundant matrix elements.
In practice, however, our trained FIM does not exactly predict zeros for those redundant matrix elements. In our experiments the user knows a priori the number of states of the hidden process, so we explicitly set the redundant matrix elements to zero, and only then compute the corrected diagonal (i.e. the normalization) of the output rate matrix. Note that this assumption is typically made by the baselines (see e.g. Seifner & Sanchez, 2023).
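The zeroing-and-renormalization step can be sketched as follows (our own illustrative code, not the authors' pipeline; the diagonal is set so that each row of the intensity matrix sums to zero):

```python
import numpy as np

def postprocess_rates(f_hat, c):
    """Keep only the upper-left c x c block of the C x C predicted
    off-diagonal rates, zero the redundant entries, and recompute the
    diagonal so each row of the intensity matrix sums to zero
    (the number of states c is assumed known a priori)."""
    F = np.array(f_hat, dtype=float)
    F[c:, :] = 0.0
    F[:, c:] = 0.0
    np.fill_diagonal(F, 0.0)
    np.fill_diagonal(F, -F.sum(axis=1))  # F_ii = -sum_{j != i} F_ij
    return F
```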
To illustrate that FIM nevertheless predicts very small values for the entries of the rate matrix which are supposed to be zero, we considered again the 3-state switching ion channel dataset, and computed again the stationary distribution from the complete $6\times 6$ output matrix, without zeroing out the elements outside the upper-left $3\times 3$ block. We obtained the following:
$\hat p^* = (0.154, 0.226, 0.590, 0.010, 0.008, 0.011).$
Note that, as expected, the last three entries have very small probabilities. Also note that the first three entries agree well with the results reported in Table 3.
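For completeness, the stationary distribution of an intensity matrix $\mathbf{F}$ (rows summing to zero) solves $\hat p^* \mathbf{F} = 0$ with $\sum_i \hat p^*_i = 1$; a generic sketch of that computation (ours, not necessarily the authors' exact procedure):

```python
import numpy as np

def stationary_distribution(F):
    """Solve p* F = 0 subject to sum(p*) = 1 for an intensity matrix F
    whose rows sum to zero, by appending the normalization constraint
    to the balance equations and solving in the least-squares sense."""
    C = F.shape[0]
    A = np.vstack([F.T, np.ones((1, C))])
    b = np.zeros(C + 1)
    b[-1] = 1.0
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```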
**SUMMARY OF MAIN MODIFICATIONS**:
- We will add to the related work section a paragraph on amortized inference and offline learning, where we reference the work of Paige&Wood (2018), together with other representative works in this direction (Weakness 1, Reviewer EGs8);
- We will explain in Section 3 what our motivation for the zero-shot terminology is, and how we understand it as an amortized inference procedure (Question 2, Reviewer 5km2);
- We will move the subsection “How to use the model” of Appendix C.2 into a new Appendix section. This new Appendix will gather and extend what we wrote above **on the scaling of the input data** (Question 1 and 2, Reviewer sVKK and Weakness 1, Reviewer pfZc);
- We will include a new Appendix which will contain our notes **on FIM and MJP with different state space sizes** which we wrote above (Question 3, Reviewer sVKK).
- We will report both our estimated initial conditions ($\hat \pi_0$) and their empirical counterpart for all datasets in the Appendix (Question 6, Reviewer sVKK). | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Rethinking Transformer for Long Contextual Histopathology Whole Slide Image Analysis | Accept (poster) | Summary: The authors provide LongMIL, a hierarchical and hybrid of local and global attention mechanism, to address the inherent low-rank bottleneck of MIL problems in computational pathology. Through extensive evaluations across feature encoders and subtyping/survival tasks, the authors indeed demonstrate the superior performance as well as computational efficiency of LongMIL.
Strengths: Despite the recipe for LongMIL being simple (masking of attention), I find the problem well-motivated and the solution intuitive enough to be implemented in future MIL studies. It has long been well known that TransMIL, although it was the first self-attention mechanism in MIL, has not demonstrated good performance and was in need of alternative implementations.
I also appreciate the fact that the authors performed extensive ablation studies over several different feature encoder choices as well as tasks of different nature (subtyping and survival), to truly show that LongMIL can be a meaningful contribution to the field.
Weaknesses: There are several weaknesses of the studies that I think the authors need to address for this to be a meaningful contribution to the field.
**Novelty**: Although the authors tried hard to distance from HIPT, I still consider the LongMIL solution to be very similar to HIPT - For HIPT, if the first patch-level stage ViT is replaced with the pretrained feature extractors, wouldn't this be the same solution as LongMIL, with the difference being how ROI regions are masked? Can authors expand on this point?
**Motivation**: While I agree that the low-rank nature of MIL problem is problematic due to n >> d, I am not entirely convinced that "the representation ability of self-attention is limited by the low-rank bottleneck, thus vanilla Transformer based model in WSI analysis suffers sub-optimal performance" is always the case.
1. There exists lots of morphological redundancy in WSI, so the effective number of distinct features might be much lower than actual n [1], [2]. Therefore low-rank might not always be the issue? Can authors expand on this point?
2. To concretely show n>>d is indeed the issue, the authors should also evaluate their algorithm on tissue biopsies (not tissue resections), where the number of patches would be way lower (few thousand patches) or TMAs (few hundred patches). On these datasets, the gap between LongMIL and other frameworks should decrease, since this is not a low-rank setting.
**Presentation**: Although the paper was not hard to follow, there are several items that need improvement. There are lots of typos in the paper that need to be ironed out (e.g., line 157 "are got", line 172 "is design", line 298 "which may because"). I am not sure if Figure 2 contributes meaningfully to the paper (also it's impossible to read the axis labels). My suggestion would be to make it smaller and use the extra space for the experiments section. Same goes for Equation 12 - I think this can be moved to the supplemental section.
The "pre-training backbones" section (line 263~276) was hard to follow. Perhaps make it as a table?
**Experimentation**: To follow up on the Motivation section, I think the authors could run few more ablation studies to demonstrate the severity of low rank issue, by trying to reduce the gap between n and d. This could include larger patch size (256->512) or random sampling patches, both of which have been used in literature and results in lower number of patches.
I think the authors should emphasize the survival experimentation over the subtyping (BRACS), since prognosis is known to depend on context [3], [4].
The authors might also consider using latest pathology foundation models (all of these have not been trained on TCGA) - PLIP, UNI, GigaPath - for future studies (It will be too much to do in the given time)
References
[1] Song, Andrew H., et al. "Morphological prototyping for unsupervised slide representation learning in computational pathology." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[2] Vu, Quoc Dang, et al. "Handcrafted Histological Transformer (H2T): Unsupervised representation of whole slide images." Medical image analysis 85 (2023): 102743.
[3] Lee, Yongju, et al. "Derivation of prognostic contextual histopathological features from whole-slide images of tumours via graph deep learning." Nature Biomedical Engineering (2022): 1-15.
[4] Jaume, Guillaume, Andrew H. Song, and Faisal Mahmood. "Integrating context for superior cancer prognosis." Nature Biomedical Engineering 6.12 (2022): 1323-1325.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Since n >> d is MIL model-agnostic, would other approaches such as AB-MIL also suffer from low rank nature?
- It seems LongMIL supersedes existing positional encodings (or am I understanding this correctly?) - Can they be combined?
- Equation 11, p -> p_{i,j}. Otherwise, readers might confuse it as being uniform probability.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Please see Weakness & Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer UesU,
We appreciate your time and valuable feedback. We are glad that you found that our method is well-motivated and intuitive. Below, please find our point-to-point response to your comments:
> **W1: Novelty: Although the authors tried hard to distance from HIPT, I still consider LongMIL solution to be very similar to HIPT ...**
For the comparison to HIPT, please find the A.4 and Figure 6 of paper manuscript for illustration. Here we provide some details:
* HIPT first slices the whole image into regions (4096x4096, ~50 regions per WSI), then each region *r_4096* is sliced into patches (256x256, 256 patches per region). Each patch *p_256* is encoded by a ViT into a feature; this patch-level operation is the same as in our method and others.
* However, their self-attention on extracted patch features only operates within each region *r_4096* with a Transformer layer. As pointed out in our paper (lines 56-57, 180-183), **adjacent patches may be separated into two regions, and the interactions between them are ignored in HIPT**. The features of each region *r_4096* are processed by further pooling, and a higher Transformer layer focuses globally at the slide level.
* Conversely, **our method does not need 4096 region slicing but uses a 2-d attention mask to treat all patches equally, so that all patches interact with their adjacent patches**.
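To make the contrast concrete, the kind of 2-d locality mask meant here can be sketched as follows (illustrative only; the radius, coordinate convention, and names are our assumptions, not LongMIL's exact mask): each patch attends to all patches within a Chebyshev radius on the patch grid, so spatial neighbours interact regardless of any region boundary.

```python
import numpy as np

def local_attention_mask(coords, radius=1):
    """Boolean n x n mask: patch i may attend to patch j iff their 2-D
    grid coordinates are within `radius` in Chebyshev distance.
    A sketch of a 2-d locality mask, not LongMIL's implementation."""
    coords = np.asarray(coords)  # (n, 2) integer grid positions
    d = np.abs(coords[:, None, :] - coords[None, :, :]).max(-1)
    return d <= radius
```

A global component (per the hybrid local-global attention described in the review summary) would then be combined with this local mask.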
> **W2.1: Motivation: .... There exists lots of morphological redundancy in WSI, so the effective number of distinct features might be much lower than actual n. Therefore low-rank might not always be the issue ...**
We find this question very constructive for our paper. Though at the patch level the morphology and feature semantics are quite similar, at the region level similar patches with different spatial combinations may form different tumor types (carcinoma in situ vs. invasive cancer). Some previous methods, e.g. DSMIL [1] [2], try multi-scale information to solve this problem, which indicates its importance. Our method can reach a similar goal as multi-scale modeling through adaptive local-global interactions in a unified transformer, which we believe is more elegant.
>**W2.2 + W4.1: To concretely show n>>d is indeed the issue... evaluate their algorithm on tissue biopsies (not tissue resections), where the number of patches would be way lower ... On these datasets, the gap between LongMIL and other frameworks should decrease, since this is not a low-rank setting. .... the authors could run few more ablation studies to demonstrate the severity of low rank issue, by trying to reduce the gap between n and d ...**
We ran experiments with a larger patch size of 448 on BRACS (originally 224), where the largest patch number n is less than 2k, close to the feature size of UNI (given the limited time, we will further run biopsies and TMAs in the next version). The results are shown in the following table, with the main findings that:
1. Simple attention methods (AB-MIL, CLAM, which have no pairwise interactions) gain improvement; we speculate that the larger patch size models the local context better.
2. DTFD splits the whole bag into 3 sub-bags, but a smaller bag size may result in larger label noise in the sub-bags, which may explain its performance drop.
3. The gap between LongMIL and TransMIL decreases given closer n and d. Full attention and LongMIL show small drops, since fewer interactions can be modelled with fewer patches.
4. LongMIL still outperforms full attention. We speculate that local attention also works better when dealing with shape-varying WSIs even with smaller n.
* *BRACS, 224x224 VS 448x448*
| Patch Encoder | UNI-224 | UNI-224 | UNI-448 | UNI-448 |
|:---|:---|:---|:---|:---|
| Method\Metric | F1 | AUC | F1 | AUC |
| AB-MIL | 0.692±0.03 | 0.875±0.02 | 0.695±0.01 | 0.875±0.01 |
| CLAM-SB | 0.640±0.06 | 0.844±0.03 | 0.654±0.03 | 0.851±0.02 |
| DTFD-MIL | 0.655±0.03 | 0.878±0.02 | 0.625±0.03 | 0.839±0.01 |
| TransMIL | 0.592±0.04 | 0.859±0.02 | 0.646±0.07 | 0.855±0.02 |
| Full Attention | 0.715±0.04 | 0.884±0.02 | 0.700±0.04 | 0.874±0.02 |
| LongMIL (ours) | 0.728±0.05 | 0.887±0.01 | 0.722±0.03 | 0.883±0.01 |
>**W3 + Q3: Presentation: ...**
We appreciate your advice and will definitely focus on refining the paper's presentation for the next version to enhance clarity.
>**W4.2: ... consider using latest pathology foundation models ...**
We have finished part of these experiments at this stage; please refer to the general response.
>**Q1: Since n >> d is MIL model-agnostic, would other approaches such as AB-MIL also suffer from low rank nature?**
For AB-MIL, CLAM, etc., their attentions are in fact adaptive weighted averages ($1 \times n$) over all patches, with no pair-wise interaction as in self-attention ($n \times n$ attention matrix). It is hard to speak of the rank of a $1 \times n$ array, but we find that these weights are sparse, much like what doctors do: determine the lesion level of a slide based on a few specific lesion regions. However, this process requires semantics highly aligned with doctors'. Conversely, our method with self-attention can be understood as converting the feature representation (learned by patch-level self-supervised learning) to align better with doctors' diagnostic level by introducing important context. After layers of attention, a cls-token or average pooling on these features does the same thing as AB-MIL.
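The contrast between the two attention forms can be sketched in a few lines of numpy (shapes and weights are illustrative placeholders, not the actual learned modules): AB-MIL-style pooling produces a single $1 \times n$ weight vector over patches, while self-attention produces an $n \times n$ matrix of pairwise interactions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4                      # n patch features of size d (toy sizes)
H = rng.standard_normal((n, d))  # patch embeddings from a stage-1 encoder

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# AB-MIL-style pooling: one 1 x n weight vector over patches, no
# patch-to-patch interaction (the random projection stands in for the
# learned gated-attention scorer).
w = softmax(H @ rng.standard_normal(d))   # shape (n,)
bag_feature = w @ H                       # shape (d,) slide representation

# Self-attention: an n x n matrix of pairwise scores, so every patch
# representation is updated by its relations to all other patches.
A = softmax(H @ H.T / np.sqrt(d))         # shape (n, n)
updated = A @ H                           # shape (n, d) context-aware features
```

The first form directly scores each patch for pooling; the second injects context into every patch feature before any pooling happens, which is the mechanism the low-rank discussion in the paper concerns.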
>**Q2: It seems LongMIL supersedes existing positional encodings (Or am I understanding this correctly?) - Can they be combined?**
We have combined the rotary positional embedding, since we still need to encode position information within the local areas (about a size of $20 \times 20$). We are sorry for omitting this implementation detail and will add it in the next version.
### Ref:
[1]. Dual-stream multiple instance learning network for whole slide image classification with self-supervised contrastive learning.
[2]. Cross-scale multi-instance learning for pathological image diagnosis.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for addressing my concerns - I trust the authors in the revised version will do their best to address the clarity issue I raised. I increase my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We appreciate the constructive feedback aimed at enhancing the clarity of our paper. We agree with the points and suggestions raised and will address them in the revised manuscript. | Summary: This paper focuses on the issue of attention computation for long sequences in WSI (Whole Slide Image) images. The authors first analyze how the low-rank nature of the long-sequence attention matrix constrains the representation ability of WSI modeling. They then propose a method using local attention masks to compute attention within local regions, followed by the computation of global attention. Experimental results demonstrate that the combination of local and global attention computations outperforms full attention.
Strengths: 1. The computation of attention for ultra-long sequences is a significant challenge in WSI slide-level feature learning, and addressing this issue is highly valuable.
2. The paper provides a detailed analysis of the low-rank and sparsity problems in the attention matrix of long sequences, based on which the use of local attention is proposed.
3. The authors propose the longMIL method, which achieves better results than the baseline in both subtyping and prognostic tasks.
Weaknesses: 1. Although the paper identifies and analyzes the bottleneck issues in long-sequence attention and proposes the use of local attention based on this analysis, there is a lack of innovation in using local attention. Other methods, such as LongViT, LongNet, and Prov-GigaPath, have also used local attention more elegantly. Additionally, the paper does not discuss the potential for directly transferring numerous attention optimization methods from the fields of computer vision (CV) and natural language processing (NLP).
2. The paper lacks a review of relevant literature, such as the aforementioned works.
3. In the experimental results, for instance, the classification results in Table 1 and the prognostic results in Table 2 show only a 0.01 improvement over full attention, which is not significant.
4. Although the paper claims to reduce computational costs, which is evident, it should provide corresponding comparisons to substantiate this claim.
Technical Quality: 2
Clarity: 2
Questions for Authors: see weakness
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: no
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Ghwr,
We appreciate your time and constructive feedback. We are glad that you found our analysis and method valuable. Below, please find our point-by-point response to your comments:
> **W1 & W2: Although the paper identifies and analyzes the bottleneck issues in long-sequence attention and proposes the use of local attention based on this analysis, there is a lack of innovation in using local attention. Other methods, such as LongViT, LongNet, and Prov-GigaPath, have also used local attention more elegantly. Additionally, the paper does not discuss the potential for directly transferring numerous attention optimization methods from the fields of computer vision (CV) and natural language processing (NLP).
The paper lacks a review of relevant literature, such as the aforementioned works.**
We have great respect for the significant contributions made by Prov-GigaPath, LongViT, and LongNet, as well as the innovative dilated-attention method that underpins them. These advancements have brought substantial progress to the field of Computational Pathology. We regret that our initial submission did not adequately acknowledge these important works. The Prov-GigaPath paper, published in Nature on May 22, 2024, coincided with the NeurIPS submission deadline, while LongViT was available as an arXiv preprint formatted as a technical report and titled a 'Case Study', so we missed it. We sincerely apologize for this oversight and have now included extensive discussions of these works in our rebuttal. We are committed to providing a more thorough analysis in the future version of our paper.
For the detailed comparison of our method and Prov-GigaPath, please refer to the general response, where we make a systematic analysis including:
d_1. Method: their receptive field weighs more on the x-axis than the y-axis, whereas our method, with 2-d locality, treats x and y equally. Please also check the figure illustration in the rebuttal PDF.
d_2. Contribution: we focus more on analyzing why previous transformers failed and then deriving our method, while they empirically scale up to big data based on dilated attention.
d_3. We find that when their patch feature is not the best in some tasks, their heavily pretrained WSI head, with the problem in 'd_1', shows only sub-optimal performance.
We also provide some quick experiments comparing their WSI architecture and our method.
>**W3: In the experimental results, for instance, the classification results in Table 1 and the prognostic results in Table 2 show only a 0.01 improvement over full attention, which is not significant.**
It is not easy to beat full attention outright in performance, since our main goal is to achieve results comparable to full attention with less computational complexity. Even in NLP today, e.g. LLMs, vanilla full attention is the most widely adopted choice [1][2][3] when equipped with the best hardware, although many methods [4][5][6], including LongNet, try to replace it to solve the long-sequence problem. However, in digital pathology, where computational resources are more limited, inference speed is quite important. Moreover, we have tested training and inference speed when dealing with larger-resolution WSIs or higher magnification such as 40x with 0.25 mpp; the latency is unbearable even with flash attention, whereas our method can greatly alleviate the speed problem without a performance drop (even with an improvement) when we need more detailed features at 40x.
>**W4: Although the paper claims to reduce computational costs, which is evident, it should provide corresponding comparisons to substantiate this claim.**
Please check A.6.5 and Figure 8 of our main manuscript, where we tested the speed (time consumption on the deployed GPU) and showed the comparisons at the submission stage.
The theoretical complexity is also discussed in *lines 232-236* of our main manuscript.
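As a back-of-envelope illustration of that complexity gap (the window size of 20 echoes the local-area size mentioned earlier in this thread; the counts are rough, ignoring constants and the feature dimension d): full attention scores all $n^2$ patch pairs, while a local window scores only about $n \cdot w^2$.

```python
# Rough counts of scored patch pairs; illustrative only, not profiled numbers.
def full_attention_pairs(n: int) -> int:
    return n * n                      # every patch attends to every patch

def local_attention_pairs(n: int, window: int = 20) -> int:
    return n * window * window        # each patch attends to <= w*w neighbours

n = 40_000                            # e.g. a 40x-magnification WSI
ratio = full_attention_pairs(n) / local_attention_pairs(n)   # = n / w^2
```

At n = 40k the pair-count ratio is 100; measured wall-clock speedups are naturally smaller, since runtime includes more than attention scoring, but this is the source of the gap that grows with n.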
If you have additional questions, we’re happy to engage further.
### Ref:
*[1]. Touvron H, Martin L, Stone K, et al. Llama 2: Open foundation and fine-tuned chat models[J]. arXiv preprint arXiv:2307.09288, 2023.*
*[2]. Jiang A Q, Sablayrolles A, Roux A, et al. Mixtral of experts[J]. arXiv preprint arXiv:2401.04088, 2024.*
*[3]. Bai J, Bai S, Chu Y, et al. Qwen technical report[J]. arXiv preprint arXiv:2309.16609, 2023.*
*[4]. Gu A, Dao T. Mamba: Linear-time sequence modeling with selective state spaces[J]. arXiv preprint arXiv:2312.00752, 2023.*
*[5]. Sun Y, Dong L, Huang S, et al. Retentive network: A successor to transformer for large language models[J]. arXiv preprint arXiv:2307.08621, 2023.*
*[6]. Peng B, Alcaide E, Anthony Q, et al. Rwkv: Reinventing rnns for the transformer era[J]. arXiv preprint arXiv:2305.13048, 2023.*
---
Rebuttal Comment 1.1:
Title: response
Comment: Since a two-stage method is adopted in this paper, the first stage uses a pre-trained model to extract features offline, and the second-stage feature dimension is smaller. Usually the number of effective patches (at size 256) for a WSI is between a few thousand and ten thousand, and the second-stage network is small; in the experiments, the second stage usually needs only a few simple transformer layers, which can be trained in a few to ten minutes. In view of this, if the local attention proposed in this paper is not meant to solve the performance bottleneck and other effects of full attention caused by too many tokens, it cannot exceed full attention (I think too many tokens lead to inefficient learning, so full attention is not necessarily better than local attention in performance), and I have doubts about the actual value in terms of computational efficiency.
---
Reply to Comment 1.1.1:
Comment: > the number of effective patches (256) for a WSI image is between a few thousand and ten thousand.
This is not always the case. There are various settings with larger patch numbers that need computational efficiency:
1. 40x magnification, with about n=40k~70k patches (also shown in Fig. 8): we have tested the training speed of full attention, which is about 25x to 35x slower than ours. We discuss this in the caption of Fig. 8 and also show experimental performance results in Table 5 of A.6.4. Though 40x is currently not the mainstream, we speculate this is because 20x captures context better and thus performs better under the AB-MIL paradigm (which has no context-modelling ability). Moreover, one study [1] shows better performance at 40x.
2. Overlapped patching, which results in 2~4x more patches at an overlap ratio of 0.25~0.5. This helps alleviate the edge effect of image modelling by either CNN or ViT.
3. Some slides contain >=2 histology tissues, and survival prediction needs multiple slides for one patient.
> if the local attention proposed in this paper is not meant to solve the performance bottleneck and other effects of full attention caused by too many tokens... I have doubts about the actual value in terms of solving computational efficiency.
We understand that you are questioning the motivation of the whole paper, but our paper shares the same motivation as TransMIL, HIPT, and even Prov-GigaPath (to the best of our knowledge, Prov-GigaPath also uses patch size 256 and extracts patch features in stage 1, then performs stage-2 slide-level 'dilated attention encoder' fine-tuning). An interesting point is that very few papers use full attention, even famous works like UNI and Prov-GigaPath, which we believe is due to its unacceptable complexity.
This complexity has impeded it from being widely used or scaled to larger data settings, including:
1. 40x magnification, overlapped patching, etc., as we mentioned above.
2. pretraining on over 100k slides, as Prov-GigaPath does.
3. slow speed in clinical or deployed settings, where the GPU hardware is not as good as for training.
4. using a larger feature embedding d (e.g. carrying both high-level and low-level features), as we mention in lines 207~208 of our paper: 'An intuitive modification to handle the low-rank problem is to set a larger embedding size d, but this makes computational complexity O(n^2 d) more severe...'
5. more transformer layers, as you said.
These prospective directions for future work, which may improve WSI diagnosis or prognosis, can, we believe, be better implemented via our method. Though in this paper we cannot cover all the above topics, we are actively working to enhance these factors so that the model becomes a more valuable resource for the community.
### Ref:
[1]. Yu, Jin-Gang, et al. "Prototypical multiple instance learning for predicting lymph node metastasis of breast cancer from whole-slide pathological images." Medical Image Analysis 85 (2023): 102748. | Summary: The authors point out that MIL often has insufficient ability to offer accurate slide-level classifications. There is a (now) long history of attempts to better consider sub-slide-level context in the aggregation function. TransMIL and GNN-based methods have all provided attempts to this end.
The authors argue that transformer-based aggregation functions have limited ability to consider both local and global context for tile-attention rankings. They provide theoretical arguments for why a "low-rank bottleneck" exists because the total number of patch embeddings is much greater than the embedding size.
The method addresses this limitation by calculating local self-attention followed by a pooling function before doing a "global" self-attention function.
In the main paper, experiments are done with the BRACS dataset using two different encoders for feature extraction. The proposed method is benchmarked against context-aware and non-context-aware aggregation functions. A similar experiment is done with survival prediction. Extensive additional supporting experiments are in the supplement.
Strengths: The paper is well written and thorough. The authors are very meticulous about addressing the problem proposed with traditional transformer based aggregators. The claims are backed by the results. The supplement provides extensive figures and ablation studies as well as additional details (including memory efficiency studies) on the methodologies.
Weaknesses: A recent preprint (https://arxiv.org/abs/2407.07841) shows that context-aware aggregation functions offer less of a performance boost over ABMIL when you have a strong feature-extraction encoder. In this work the encoders used are less robust than the now publicly available encoders (UNI, GigaPath and Virchow). All of these are very recently released, so it is understandable that they are not part of this submission.
Going forward these encoders should be used for any aggregation-function assessment. The supplemental table shows that the boost of this method is much stronger with an ImageNet-pretrained CNN.
Technical Quality: 3
Clarity: 4
Questions for Authors: In final version of paper, can you please improve orientation of figure 3. It is hard to zoom in sufficiently to understand second panel. It would also benefit from having sub labels (eg. a, b, c) to improve the legend description of the sub panels.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: I concur with listed limitations. I have pointed out other limitations in the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer GgQd,
We would like to express our sincere gratitude for your thoughtful review and insightful feedback on our manuscript. We appreciate your recognition of the thoroughness and approach taken in addressing the limitations of traditional transformer-based aggregators. Your positive evaluation of our work's soundness, presentation, and contribution is greatly encouraging.
Below, we provide responses to your comments and questions:
>**W1: A recent preprint (https://arxiv.org/abs/2407.07841) shows that context aware aggregations functions offer less performance boost over ABMIL when you have a high feature extraction encoder. In this work the encoders used are less robust than the now publicly available encoder (UNI, Gigapath and Virchow). All of these are very recently released so is understandable that they are not part of this submission. Going forward these encoders should be used for any aggregation function assessment. The supplemental table showing that the boost of this method is much stronger with an image net pretrained cnn.**
We find this question very constructive and insightful for the completeness of our paper.
On the one hand, we have performed experiments with stronger patch encoders, including UNI and GigaPath, at this stage (see the general response), with the main findings:
1. Our method continues to outperform previous work in complicated tasks such as BRACS and survival prediction, demonstrating a notable performance boost in consistency compared to other methods, with the exception of TransMIL.
2. The results in the TCGA-BRCA tumor subtyping task are similar across almost all the methods. We speculate that this task may have reached an upper limit when using a strong patch encoder. In future versions, we plan to validate our method with robust encoders on larger datasets to further explore the potential for improvement.
On the other hand, motivated by this suggestion, we realize that strong patch encoders with high-level semantics may lose important low-level spatial context or fine-grained details. Thus, in future work we may aggregate more features from different layer depths of a ViT or from different pretrained ViTs. This may not only include more detailed spatial context, but also help increase the feature size $d$.
>**Q1: In final version of paper, can you please improve orientation of figure 3. It is hard to zoom in sufficiently to understand second panel. It would also benefit from having sub labels (eg. a, b, c) to improve the legend description of the sub panels.**
Thank you for your detailed feedback on Figure 3. We understand that the current orientation and labeling make it difficult to interpret, particularly the second panel. To address this issue, we will take actions to improve the clarity and readability of the figure in the final version of our manuscript. | Summary: This work examines the problem of extrapolating Transformer attention to long sequences in WSI representation learning. The main technical contribution is in examining the low-rank bottleneck problem of Transformer attention for WSIs, and proposing LongMIL which introduces modifications via local attention masking + 2D ALIBI in order to improve the rank of the attention matrix and enable extrapolation capabilities.
Strengths: - Core contribution of this work is novel and would have a lot of interest in the CPath community. Extrapolating to long contexts is an exciting problem that has had little investigation (outside of Prov-GigaPath). I believe this work can be re-organized to present a more systematic understanding of the key components needed to extrapolate to long contexts.
- Supplement includes some interesting ideas and ablation experiments. A.4 discusses similarities and differences between HIPT, adding local mask attention, and 2D alibi. A.6.1 includes a comparison with the ViT-S in HIPT for equivalent comparisons. A.6.3 examines different hyper-parameters during Transformer training. A.6.4 looks at the difference between 20X and 40X magnification. A.6.6. ablates other straightforward extensions of Transformer attention with subquadratic complexity (including Mamba and V-Mamba).
- Figures are illustrative (in both main text and Supplement).
Weaknesses: - Main limitation of this work is that a comparison with Prov-GigaPath [1] (a concurrent work that appeared at the time of NeurIPS submission) is warranted. Prov-GigaPath also presents overlapping contributions in solving this problem, though I think there is room for more than 1 study examining this problem.
- The writing feels a bit rushed and informal. I was often scrolling back and forth to understand the different comparisons being made. Ideally, there should be one table for results that compare MIL architectures and one table for ablating pretrained encoders. Other areas where the writing / presentation of figures and results could be significantly polished:
- - "We omit the ResNet-50 embedding for survival prediction since it get quite low and unacceptable results." Informal and non-scientific.
- - Section 4.3 should be able to summarize the findings in the Supplement in a more clear and descriptive manner.
- - One of the main issues I see in this work is that the authors are juggling many different encoders for different tasks. A.6.1 compares ViT-S Lunit and HIPT for TCGA-BRCA subtyping (evaluating multiple MIL models across multiple encoders on 1 task), but A.6.2. shows Resnet-50 features for BRACS and TCGA-BRCA subtyping (evaluating multiple MIL models across multiple tasks with the same encoder). Many of these issues can be drastically simplified if the authors were to use an encoder not pretrained on TCGA such as Prov-GigaPath [1], UNI [2], or PLIP [3] - with the main emphasis on comparing LongMIL with other competing works. Ref [4] provides an example on how the findings of this work can be better organized.
References
1. Xu, H., Usuyama, N., Bagga, J., Zhang, S., Rao, R., Naumann, T., Wong, C., Gero, Z., González, J., Gu, Y. and Xu, Y., 2024. A whole-slide foundation model for digital pathology from real-world data. Nature, pp.1-8.
2. Chen, R.J., Ding, T., Lu, M.Y., Williamson, D.F., Jaume, G., Song, A.H., Chen, B., Zhang, A., Shao, D., Shaban, M. and Williams, M., 2024. Towards a general-purpose foundation model for computational pathology. Nature Medicine, 30(3), pp.850-862.
3. Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T.J. and Zou, J., 2023. A visual–language foundation model for pathology image analysis using medical twitter. Nature medicine, 29(9), pp.2307-2316.
4. Park, N. and Kim, S., How Do Vision Transformers Work?. In International Conference on Learning Representations.
Technical Quality: 3
Clarity: 2
Questions for Authors: Would the authors be able to address my concern in updating the results of this work to using a different encoder (to simplify the presentation of results)? Choice of pretrained encoder should not matter significantly (as the main focus is in fairly comparing LongMIL), but having 1-2 comparisons would be nice in a Supplemental Figure.
Rating at the time of reviewing this work is slightly negative, but am enthusiastic of this work and would raise my rating to borderline / weak accept if my concerns were addressed.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer UUFY,
We appreciate your time and valuable feedback. We are glad that you found the formulation and analysis novel and sound, and the figures and ablations illustrative. Below, please find our point-by-point response to your comments:
> **W1: Main limitation of this work is that a comparison with Prov-GigaPath (a concurrent work that appeared at the time of NeurIPS submission) is warranted. Prov-GigaPath also presents overlapping contributions in solving this problem, though I think there is room for more than 1 study examining this problem.**
Since reviewer Ghwr also raised this issue, we post the detailed comparisons in the general response, where we make systematic comparisons to Prov-GigaPath including:
d_1. Method: their receptive field weighs more on the x-axis than the y-axis, whereas our method, with 2-d locality, treats x and y equally.
d_2. Contribution: we focus more on analyzing why previous transformers failed and then deriving our method, while they empirically scale up to big data based on dilated attention.
d_3. We find that when their patch feature is not the best in some tasks, their heavily pretrained WSI head, with the problem in 'd_1', shows only sub-optimal performance.
We provide some quick experiments to compare their WSI-architecture and our method.
>**W2: The writing feels a bit rushed and informal. I was often scrolling back and forth to understand the different comparisons being made. Ideally, there should be one table for results that compare MIL architectures and one table for ablating pretrained encoders. Other areas where the writing / presentation of figures and results could be significantly polished ...**
We thank you for your suggestions on the presentation of our paper; we will polish it for a better reading experience in the next version.
>**W2.3 + Q1: Would the authors be able to address my concern in updating the results of this work to using a different encoder (to simplify the presentation of results)? Choice of pretrained encoder should not matter significantly (as the main focus is in fairly comparing LongMIL) but having 1-2 comparisons would be nice in a Supplemental Figure.**
We agree that more and stronger patch encoders pretrained without TCGA should be validated, which was also highlighted by almost all the other reviewers. So we also add this to the general response, covering both UNI and GigaPath pretrained patch encoders, with the main findings:
1. Our method still outperforms previous work on complicated tasks like BRACS and survival prediction.
2. The results on TCGA-BRCA tumor subtyping are similar for almost all the WSI methods. We speculate that this binary classification task might have reached some sort of upper limit when equipped with a strong patch encoder. We will evaluate our method on more difficult tasks in a future version.
---
Rebuttal Comment 1.1:
Comment: Thank you again for your review.
Considering that it is the last day of the discussion period, we would like to confirm whether our rebuttal has adequately addressed the concerns you raised.
We continue to welcome any supplementary observations or clarification to bolster our work. | Rebuttal 1:
Rebuttal: Thanks to all the reviewers for your time and effort during the review process. We appreciate that you found our work insightful and solid.
We have responded to each reviewer individually, uploaded a rebuttal PDF, and collected the below response to general concerns. If you find our answers responsive to your concerns, we would be grateful if you considered increasing your score, and if you have additional questions, we’re happy to engage further.
> **All reviewers suggest evaluating on better pre-trained patch encoders**
Here, we list the experimental results of tumor subtyping on BRACS and TCGA-BRCA.
We find that our method still outperforms previous methods on BRACS with UNI and GigaPath. But for TCGA-BRCA tumor subtyping, almost all methods show similar results, which may be because binary classification is too simple and this task has reached its upper bound given such strong patch feature encoders. We will experiment on more data with higher complexity in future work.
* *BRACS, tumor subtyping*
| Patch Encoder | UNI | UNI | GigaPath | GigaPath |
|:---|:---|:---|:---|:---|
| Slide Method\Metric | F1 | AUC | F1 | AUC |
| AB-MIL | 0.692±0.033 | 0.875±0.020 | 0.640±0.022 | 0.837±0.010 |
| CLAM-SB | 0.640±0.057 | 0.844±0.025 | 0.624±0.023 | 0.826±0.014 |
| DTFD-MIL | 0.655±0.031 | 0.878±0.022 | 0.610±0.032 | 0.843±0.017 |
| TransMIL | 0.592±0.036 | 0.859±0.023 | 0.599±0.058 | 0.838±0.048 |
| Full Attention | 0.715±0.043 | 0.884±0.017 | 0.663±0.023 | 0.850±0.018 |
| LongMIL (ours) | 0.728±0.045 | 0.887±0.008 | 0.673±0.023 | 0.856±0.015 |
* *TCGA-BRCA, tumor subtyping*
| Patch Encoder | UNI | UNI | GigaPath | GigaPath |
|:---|:---|:---|:---|:---|
| Slide Method\Metric | F1 | AUC | F1 | AUC |
| AB-MIL | 0.865±0.039 | 0.945±0.018 | 0.872±0.038 | 0.946±0.021 |
| CLAM-SB | 0.862±0.031 | 0.943±0.020 | 0.864±0.049 | 0.937±0.027 |
| DTFD-MIL | 0.867±0.034 | 0.941±0.024 | 0.870±0.035 | 0.937±0.034 |
| TransMIL | 0.853±0.049 | 0.949±0.019 | 0.830±0.048 | 0.934±0.020 |
| Full Attention | 0.849±0.043 | 0.942±0.017 | 0.860±0.041 | 0.946±0.023 |
| LongMIL (ours) | 0.863±0.033 | 0.945±0.008 | 0.871±0.030 | 0.947±0.022 |
Due to limited time and the large model architectures of UNI and GigaPath, we only ran survival prediction on TCGA-BRCA, since its features were already extracted for tumor subtyping.
* *TCGA-BRCA, survival prediction*
| Patch Encoder | UNI | GigaPath |
|:---|:---|:---|
| Slide Method\Metric | c-index | c-index|
| AB-MIL | 0.630±0.054 | 0.635±0.033 |
| AMISL | 0.627±0.080 | 0.620±0.040 |
| DS-MIL | 0.616±0.034 | 0.612±0.086 |
| TransMIL | 0.598±0.059 | 0.599±0.064 |
| Full Attention | 0.638±0.056 | 0.617±0.069 |
| LongMIL (ours) | 0.656±0.061 | 0.645±0.055 |
> **Reviewers UUFY and Ghwr's concerns about the comparison to the Prov-GigaPath / LongViT WSI architecture**
Although reviewers UUFY and Ghwr point out that both our paper and Prov-GigaPath use a similar local attention mechanism for efficient slide-level transformer modelling, we find that there are some important differences between them.
1. Motivation / contribution: our paper not only proposes an efficient self-attention mechanism for WSIs, but also analyzes, from a low-rank perspective, why some previous works such as RoFormer and TransMIL fail on WSIs, which we believe to be insightful for the digital pathology community. In contrast, both Prov-GigaPath and LongViT focus on scaling up pre-training to large-scale data, which is more empirical. We believe our analysis may also apply to Prov-GigaPath and could be one potential explanation of why it succeeds and how to improve it further.
2. Method details: Prov-GigaPath does not treat interactions along the x-axis and y-axis equally, even though a 2-d positional embedding is applied. By flattening all patches into a 1-d sequence in a 'z-scan' manner like ViT, their 1-d local attention focuses more on the x-axis and less on the y-axis, as depicted in Fig. 1 of our rebuttal PDF. Although this can be alleviated by their higher-level dilated attention, the x-y inequality still exists. In contrast, our local attention is designed in 2-d (based on 2-d Euclidean distance) and thus treats both axes equally.
3. The pre-trained Prov-GigaPath WSI head seems to rely heavily on their own pre-trained patch encoder, which may be a barrier to wide usage; e.g., there are still cases where GigaPath patch features are weaker than UNI or CONCH, as reported in the GitHub repo of UNI. The WSI pre-training is indeed the key to their superior performance, and it masks their problem of x-y spatial inequality. On BRACS, as shown in the following table, our method (and even AB-MIL) with the better UNI features can outperform their combination of weaker patch features with a stronger pre-trained slide encoder.
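The x-y asymmetry in point 2 can be illustrated with a small sketch (the grid size, window, and radius below are illustrative choices, not the settings of either model): a 1-d local window on the z-scan flattening is dominated by same-row neighbors, while a 2-d Euclidean-distance window treats the two axes symmetrically.

```python
# Toy comparison of 1-d (raster) versus 2-d (Euclidean) local attention
# neighborhoods on a small W x W patch grid. Illustrative parameters only.
W, window, radius = 8, 5, 2.0

def neighbors_1d(x, y):
    """Patches reached by a 1-d local window on the z-scan (row-major) flattening."""
    i = y * W + x
    return {(q % W, q // W) for q in range(max(0, i - window), min(W * W, i + window + 1))}

def neighbors_2d(x, y):
    """Patches within Euclidean distance `radius` on the 2-d grid."""
    return {(px, py) for px in range(W) for py in range(W)
            if (px - x) ** 2 + (py - y) ** 2 <= radius ** 2}

def same_row_fraction(nbrs, y):
    # Fraction of neighbors that share the query's row (i.e., x-axis interactions).
    return sum(1 for _, py in nbrs if py == y) / len(nbrs)

n1, n2 = neighbors_1d(3, 3), neighbors_2d(3, 3)
# The raster window is dominated by same-row neighbors; the Euclidean
# window is isotropic in x and y.
print(same_row_fraction(n1, 3), same_row_fraction(n2, 3))
```

For the query patch at (3, 3), the 1-d window's neighbor set is mostly same-row patches, while the 2-d window spreads its neighbors evenly above and below.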
Experimentally, we evaluate the two WSI-level architectures here. Since the WSI parameters of Prov-GigaPath are pre-trained, we also evaluate it with random initialization for a fair comparison. To handle the dimension mismatch between the UNI patch encoder and the GigaPath WSI head, we add an nn.Linear layer as a feature projector. We find that pre-training plays a key role in the success of the Prov-GigaPath WSI head, since transformers are much more over-parameterized than earlier simple attention-based MIL models. Given the limited time, we will validate on more tasks in a future version.
* *LongMIL vs. Prov-GigaPath slide-level architectures on BRACS, tumor subtyping*
| Patch Encoder | UNI | UNI | GigaPath | GigaPath |
|:---|:---|:---|:---|:---|
| Slide Method\Metric | F1 | AUC | F1 | AUC |
| AB-MIL | 0.692±0.033 | 0.875±0.020 | 0.640±0.022 | 0.837±0.010 |
| TransMIL | 0.592±0.036 | 0.859±0.023 | 0.599±0.058 | 0.838±0.048 |
| GigaPath (random init) | 0.648±0.041 | 0.837±0.033 | 0.627±0.038 | 0.808±0.038 |
| GigaPath (pre-trained) | 0.668±0.026 | 0.861±0.030 | 0.677±0.033 | 0.862±0.034 |
| LongMIL (ours) | 0.728±0.045 | 0.887±0.008 | 0.673±0.023 | 0.856±0.015 |
Pdf: /pdf/a642b2b2338d7e7630b5042798b18205e5b25044.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
How JEPA Avoids Noisy Features: The Implicit Bias of Deep Linear Self Distillation Networks | Accept (poster) | Summary: This paper presents a theoretical analysis of the JEPA and MAE SSL objectives for deep linear networks. Under a somewhat restrictive diagonal covariance assumption, the authors demonstrate that the critical time for learning a feature is dependent only on the input variance for MAE, while JEPA prioritizes learning features with high regression coefficients, which are predictive yet have minimal noise as measured by their variance across the training set (for large depth encoders).
The authors provide various numerical simulations to validate their theory and show that their findings are robust with respect to the network initialization scheme, even when using a more restrictive initialization necessary for the theory.
Overall, this paper offers valuable insights into the behavior of JEPA and MAE SSL objectives in deep linear networks, and provides a solid foundation for future research in this area.
Strengths: - This work is the first to provide theoretical insights into the empirical observation that JEPA-based approaches tend to learn 'abstract' features more efficiently than MAE. Although the theory is limited to a restrictive case of diagonal linear networks, it calls for further research to generalize these findings.
- The numerical simulations and Section 3 effectively support the theory while also highlighting the qualitative differences between MAE and JEPA.
Weaknesses: -While the paper is generally clear, the presentation of the theoretical results could be improved. Specifically, it would be helpful to expand on Theorems 4.4 and 4.6 in the main paper, as their validity is not immediately clear without referring to the appendix.
-In addition to the diagonal assumption, the authors focus on the case where the predictor/decoder is linear. However, it is important to use a deep predictor/decoder for both JEPA and MAE methods in practice.
-It would be useful to provide more intuition on what covariance_{x,x} and covariance_{x,y} represent for usual pretraining tasks. This would help readers better understand the underlying concepts and appreciate the significance of the theoretical results.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Would the main theoretical results hold for deep predictor/decoder?
- Given that MAE predicts a corrupted input in the same input space, what does it mean for covariance_{x,y} to differ from covariance_{x,x}, and how does the corruption affect this difference?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q.** Would the main theoretical results hold for deep predictor/decoder?
**A.** Our analysis can be directly extended to deep linear decoders/predictors without changing the results in any qualitative way. Nonlinear predictors however are non-tractable, and therefore beyond the scope of this work. We opted to use a shallow rather than a deep linear predictor in our analyzed model due to 1) a deeper predictor would add to an already notation heavy presentation, without adding or changing the results in any meaningful way. 2) JEPA models typically use a lightweight predictor relative to the encoder, and even linear predictors (see [1]). And 3) Shallow linear predictors have been analyzed theoretically in [1] and [2] and have been shown to work even in practice (though not necessarily producing SOTA results).
**Q.** Given that MAE predicts a corrupted input in the same input space, what does it mean for covariance_{x,y} to differ from covariance_{x,x}, and how does the corruption affect this difference?
**A.** The difference comes from the masking or mapping applied in the self-supervised model. In the case of random masking, one place the difference occurs is in the diagonal elements: because the same element is always zero in either x_i or y_i, the diagonal elements of C_{xy} are zero, which is not true of C_{xx}. For a full derivation and the full impact, please see Appendix C.1. One intuitive consequence of this difference is that increasing i.i.d. pixel noise will not affect the diagonal elements of C_{xy} but will affect the diagonal elements of C_{xx}, leading to a suppression of the regression coefficients.
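This masking effect can be checked with a quick numerical sketch (a hypothetical illustration, not the derivation in Appendix C.1): with complementary random masks, each coordinate of the underlying vector appears in either x or y but never both, so every per-sample product x_i * y_i vanishes and the empirical diagonal of C_{xy} is exactly zero.

```python
import random

random.seed(0)

def sample_pair(d, mask_ratio=0.5):
    # One (x, y) pair under complementary random masking: each coordinate
    # of the underlying (centered) vector z is zeroed in exactly one of x, y.
    z = [random.gauss(0, 1) for _ in range(d)]
    m = [1 if random.random() < mask_ratio else 0 for _ in range(d)]
    x = [mi * zi for mi, zi in zip(m, z)]
    y = [(1 - mi) * zi for mi, zi in zip(m, z)]
    return x, y

def diag_cross_cov(pairs, d):
    # Empirical diagonal of C_{xy}; data assumed centered for simplicity.
    n = len(pairs)
    return [sum(x[i] * y[i] for x, y in pairs) / n for i in range(d)]

d = 4
pairs = [sample_pair(d) for _ in range(2000)]
print(diag_cross_cov(pairs, d))  # every entry is exactly 0.0: x_i * y_i == 0 per sample
```

The diagonal of C_{xx} computed on the same masked samples stays positive, so i.i.d. pixel noise inflates C_{xx} but not C_{xy}, matching the suppression described above.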
We thank the reviewer for their time and effort in reviewing our paper. If we have sufficiently addressed the reviewers concerns, we kindly ask them to consider raising their score.
[1] Tian et al.: "Understanding Self-Supervised Learning Dynamics Without Contrastive Pairs"
[2] Richemond et al.: "The Edge of Orthogonality: A Simple View of What Makes BYOL Tick"
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal.
Comment: The paper highlights a fundamental difference between two main SSL paradigms (MAE and JEPA). By presenting empirical evidence and theoretical insights, it suggests that JEPA may be more effective for learning semantic features in images, but faces challenges with the low-level features required for fine-grained tasks. The authors' rebuttal successfully addressed my initial concerns and provided additional supporting evidence from ImageNet, which strengthened the paper's contributions and led me to revise my score.
I believe this work will be highly relevant and of interest to the SSL community. Thank you for your contribution! | Summary: The paper investigates the implicit bias of predictive self-supervised learning methods, specifically focusing on the Joint-Embedding Predictive Architecture (JEPA) and comparing it with the Masked Autoencoders (MAE). The study presents a theoretical analysis of the learning dynamics of these methods, revealing how different objectives lead to varied implicit biases in feature learning. It also includes numerical experiments with linear generative models to illustrate the theoretical findings.
Strengths: - The paper provides a rigorous theoretical framework for understanding the implicit bias of JEPA and MAE. This is valuable for the community as it offers insights into why these methods might prioritize certain features over others during training.
- By comparing JEPA and MAE, the paper helps delineate the strengths and weaknesses of each method. This comparative approach can guide practitioners in choosing the appropriate method for their specific tasks.
Weaknesses: -The paper would benefit from experiments on more diverse datasets, including real-world data. This would demonstrate the practical implications of the theoretical findings and validate their robustness in more varied scenarios.
- The numerical experiments are limited in scope. They primarily focus on linear generative models, which may not fully capture the complexities of real-world data. Expanding the experiments to include more diverse datasets and model architectures would strengthen the findings.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How well do the theoretical results generalize to non-linear models and more complex data distributions? The paper's findings are based on linear generative models, which may not fully represent the behavior of JEPA and MAE in practical settings.
- What are the practical implications of the implicit bias observed in JEPA and MAE? How should practitioners account for these biases when applying these methods to real-world tasks?
- The paper suggests that JEPA may be more efficient in learning semantic features. Can this efficiency be quantified in practical scenarios, and how does it impact downstream tasks such as classification and object detection?
- Are there strategies to mitigate the negative effects of the implicit bias observed in JEPA and MAE? For instance, can architectural modifications or additional regularization techniques help balance the feature learning dynamics?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper offers valuable theoretical insights into the implicit bias of JEPA and MAE in predictive self-supervised learning. However, it falls short in terms of related work, novelty, and experimental scope. Addressing these weaknesses and answering the important questions raised would significantly enhance the impact and applicability of the research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q.** “How well do the theoretical results generalize to non-linear models and more complex data distributions?”
**A.** We have conducted additional experiments on ImageNet (in attached pdf), which are consistent with aspects of our theoretical predictions. (Please see the pdf for details of the setup).
Regarding our focus on linear models: The question of how to quantify the efficiency of learning semantic features in practical scenarios may require different metrics than the ones used in this paper. Beyond analytical tractability, concentrating on deep linear networks allows us to separate the question of which features are learned by MAE and JEPA models from the timing/order with which these features are learned, the later being the main focus of this work. Because there is a leap in going between the linear and non-linear setting, we focused on deriving exact results and theoretical guarantees in this paper. Given the long and prolific literature on deep linear models, we view this contribution as self contained, and leave for future work the construction of metrics (of which there may be many) that allow us to analyze the non-linear case where both the features and order are different between the two models.
**Q.** what are the practical implications of the implicit bias observed in JEPA and MAE?
**A.** The main qualitative insight from our results is that noisy (or high variance) features are learned more slowly and with lower amplitude when using the JEPA loss, since a large feature variance across the dataset (denoted as \sigma_i in the paper) reduces the regression coefficient for a fixed cross covariance \lambda_i. A direct prediction that follows from this is that JEPA will tend to focus on the lower subspace of the data variance (PCA space) where most of the perceptual features reside in natural images, as claimed in [1] (see lines 59 - 62 and 74 - 76 in the intro of the paper).
We have conducted additional experiments on ImageNet providing evidence for this claim (see attached pdf). As additional evidence in the literature, we would like to point the reviewer to [1], which shows how, unlike JEPA, reconstruction losses tend to focus on the upper part of the PCA space, and to [2], which shows that JEPA tends to learn "slow features" (low variance). Our work can be seen as a first principled analysis of these claims in a toy setting. Additionally, our results provide an intuition for why JEPA objectives are perhaps inefficient for learning features suited to fine-grained pixel-level tasks, as those features tend to be noisy (features that would correspond to a low regression coefficient in the linear setting). Finally, our results point to a fundamental limit on the efficiency of the MAE objective in learning semantic features, since depth does not meaningfully change its feature learning dynamics (see Theorem 4.7 and Figure 4 in the paper), unlike for the JEPA objective. Questions such as how practitioners should account for these insights and limitations in practice we consider out of scope and leave to future work.
[1] Balestriero and LeCun: "Learning by Reconstruction Produces Uninformative Features for Perception"
[2] Sobal et al.: "Joint Embedding Predictive Architectures Focus on Slow Features"
We thank the reviewer for their time and effort in reviewing our paper. If we have sufficiently addressed the reviewers concerns, we kindly ask them to consider raising their score.
---
Rebuttal Comment 1.1:
Title: response by Reviewer 5fn3
Comment: I have carefully reviewed the feedback from other reviewers, considered the author’s rebuttal, and followed the ensuing discussion. I appreciate the authors' thorough responses, particularly their additional experimental results (on W1) and answering my questions.
Assuming that the insights from these discussions will be included in the final paper, I recommend the paper for acceptance, as it provides interesting insights and has the potential to contribute to the ML community, and I will raise my score from 5 to 7. | Summary: Analyze learning in two self-supervised paradigms, JEPA and MAE, through the lens of learning dynamics in deep linear networks. Report on a qualitative difference in the order in which features are learned, thus demonstrating their different implicit bias.
Strengths: * Originality: the analysis of deep linear networks has been a prolific line of research in terms of deep learning theory, but the application to the self-supervised setting is novel and promising. Related work is adequately cited, as far as I can see.
* Quality: the submission is very solid technically, with the assumptions of the theory clearly written and interesting results are derived, demonstrating a clear distinction between the two paradigms.
* Clarity: the manuscript is clearly written and well organized, introducing all the relevant background in a concise manner.
* Significance: theory provides here a unique view, where the implications of design choices made in development of different algorithms are made visible and thus can educate development of better algorithms. This premise is not fully fulfilled here, see weaknesses.
Weaknesses: * Quality: this is a theory paper with very limited experimental support (beyond the numerical evaluation of the theory). The random masking and temporal model presented in section 5.1 are very briefly presented, and it is unclear how the simulation results relate to the theoretical predictions. I would expect a clear distillation of an experiment designed to highlight the difference between JEPA and MAE, the qualitative theoretical prediction and how the experimental results support it (or not).
* Significance: the implications of the results beyond the theory community are unclear. I would expect the authors to be able to offer (i) some characterization of JEPA and MAE, which is useful for practitioners, or (ii) some classification of possible behaviours of self-supervised algorithms, which can inspire the development of new algorithms with different properties than JEPA and MAE.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Can you offer a qualitative prediction from the theory for real-world systems implementing JEPA or MAE? If the same deep architecture were trained on both the JEPA and MAE losses, what would you expect to see differently in terms of the learned features?
2. What is the spectrum of self-supervised algorithms through the lens of deep linear network dynamics?
3. What is the result presented in Figure 3? How does it demonstrate a correspondence between theory and the experimental results? Can you offer a similar plot for the random masking task?
4. Can you demonstrate the effect of the number of layers is different under JEPA and MAE, as the theory predicts?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors adequately addressed the limitations of their approach, namely the assumptions about a common diagonalization of the covariance matrices' diagonal dynamics in the low-variance initialization. It would be great if the authors provided experimental support for the applicability of their theory to real-world systems implementing JEPA and MAE or even just provided qualitative predictions derived from the theory.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q.** “Can you offer qualitative prediction from the theory to real-world systems implementing JEPA or MAE? What would be such prediction if the same deep architecture was trained on both JEPA and MAE loss, what would you expect to see differently in terms of the learned features?”
**A.** (Reproduced from our General Response to All Reviewers): The main qualitative insight from the toy model is that noisy (or high variance) features are learned more slowly and with lower amplitude when using the JEPA loss. A direct prediction that follows from our theory is that JEPA will tend to focus on the lower subspace of the data variance (PCA space) where most of the perceptual features reside in natural images (see lines 59 - 62 and 74 - 76 in the intro of the paper). We have conducted additional experiments on realistic models/data providing evidence for this claim (see attached pdf). As additional evidence for this in the literature we would like to point the reviewer to [1] which shows how reconstruction losses tend to focus on the upper part of the PCA space, and [2] which shows that JEPA tends to learn “slow features” (low variance). Our work can be seen as a first principled analysis of these claims in toy settings. Notably, the toy linear setting not only allows for tractable training dynamics but also has the added benefit that both setups learn the same features. This allows us to focus entirely on comparing the schedule according to which the features are learned for the two setups.
**Q.** What is the spectrum of self-supervised algorithms through the lens of deep linear network dynamics?
**A.** We are not sure we understood the question; could you please clarify what you meant by “spectrum of SSL algorithms”? (Do you mean the linear-algebraic notion of “spectrum”, or are you asking about how the variety of different SSL algorithms manifest in the deep linear setting, beyond MAE and JEPA?)
**Q.** What is the result presented in Figure 3? How does it demonstrate a correspondence between theory and the experimental results? Can you offer a similar plot for the random masking task?
**A.** In Figure 3, we demonstrate that the data distribution considered in our paper (along with its assumed constraints) can in principle be realized; that is, our assumptions are not vacuous. Moreover, it provides a simple data-generative model to help guide intuition about how JEPA and MAE learn differently in deep linear networks. We included two candidate generative processes: one for static data and one for time-varying data. Fig. 3 focuses on the time-varying ("video-like") variant, whose generative process is given by eq. (17). To summarize, the $v^a$s are static images, while the $u^a = u^a(t)$ are stochastic processes characterized by the autocorrelation coefficients $\gamma^a$. Fig. 3a depicts a sample set of $v$s. Fig. 3b shows two sample realizations of the stochastic functions $u^a$ for two different autocorrelation values. Figs. 3c and 3d show the resulting diagonal and off-diagonal parts of the covariance matrix generated by the process in eq. (17). Fig. 3e demonstrates how rapidly one converges to the expected values of $\rho$ and $\lambda$ in eq. (18), derived in the Appendix, as a function of the log of the "mixing time". The last plot on the right shows that the diagonality condition is satisfied. Together, these plots illustrate that the process in eq. (17) generates a data distribution satisfying our Assumptions 4.1.1, and show how the key parameters, the correlation coefficient ($\rho$) and the covariance ($\lambda$), vary with the noise and the autocorrelation coefficient $\gamma$ in each mode. We do include theory for the random masking task (Appendix C.1), but not a similar plot. We put this discussion in the appendix to keep the formulation in the main paper simple and avoid confusing readers, but a similar formulation can be constructed for the random masking case under some additional assumptions.
**Q.** Can you demonstrate the effect of the number of layers is different under JEPA and MAE, as the theory predicts?
**A.** Outside of figure 4 which verifies the effect in the linear setting, we have not done comprehensive experiments studying the effect of depth for non-linear networks.
We thank the reviewer for their time and effort in reviewing our paper. If we have sufficiently addressed the reviewers concerns, we kindly ask them to consider raising their score.
[1] Balestriero and LeCun: "Learning by Reconstruction Produces Uninformative Features for Perception"
[2] Sobal et al.: "Joint Embedding Predictive Architectures Focus on Slow Features"
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: It seems that I had some technical difficulty submitting a response to your rebuttal, and my detailed answer is now lost.
In that beautifully written response, I said I was satisfied with your answer, especially with the empirical demonstration of the qualitative prediction in non-linear self-supervised learning, and that I would raise my score (which I did).
Also, I tried to explain better what I meant in my puzzling comment on the "spectrum of self-supervised algorithms", but this explanation was not very important just a few hours before the discussion deadline. | Summary: This paper aims to understand the implicit bias of two paradigms of self-supervised learning, Joint Embedding Predictive Architectures (JEPAs) and Masked Auto-Encoders (MAE). The authors introduce a tractable setting of deep diagonal linear networks and characterize the learning dynamics of the two objectives on this toy problem. Through theoretical analysis, the authors show different behaviors of the two objectives: JEPA prioritizes "influential features" while MAE prioritizes highly-covarying features. These observations are supported by numerical experiments on the toy model and on linear generative models.
Strengths: 1. This paper is well written and the presentation is nice. The symbols and math formulations are clear. The demonstration in Section 3 is helpful for understanding the paper.
2. The analysis is comprehensive, including both the learning dynamics and the critical time.
3. The difference between the behaviors of JEPA and MAE observed in this paper is interesting.
4. The theoretical results are supported by the numerical experiments.
Weaknesses: Presentation on the setting (Section 2) could be improved. More explanations on $\rho_i, \lambda_i, \sigma_i$ would be helpful to understand the toy setting.
In Section 3, more explanation of the settings is needed, e.g., what is the motivation for choosing these two distributions? What scenarios do the distributions represent?
In Section 4, line 165, the subscript $i$ is dropped. This should be stated at the beginning of the section to avoid misunderstanding.
The conclusion needs to be elaborated more clearly. The authors say that JEPA prioritizes "influential features", whereas MAE prioritizes "highly-covarying features". The connection between "influential/covarying features" and the parameters $\rho_i, \lambda_i, \sigma_i$ is not clear to me.
The study is limited to diagonal linear networks. The analysis of the toy setting provides interesting insights into the learning dynamics of JEPA and MAE, but it is unclear whether the conclusions apply to more complex scenarios. I believe it would be helpful to provide some examples of non-linear cases.
Figure 3 is too small and not annotated.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the "Weakness".
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q.** “More explanations would be helpful to understand the toy setting.”
**A.** Thank you for pointing this out, we plan to add more elaboration in the rebuttal to make it more accessible.
**Q.** “what's the motivation of choosing these two distributions?”
**A.** The motivation for choosing these distributions is to illustrate how changing the structure of a distribution affects the learning speed of different features, as predicted by our theory. Note that, for a linear model, the only relevant aspects of the training data population statistics are its first and second moments. Hence, when it comes to characterizing data features, \lambda_i and \rho_i are the only relevant quantities to consider (assuming a centered distribution with independent components). In other words, the time it takes to learn the i'th feature has to depend on the feature parameters \lambda_i,\rho_i. The distributions in section 3 are meant to illustrate how varying these feature parameters affects the order of learning for each objective. We will add a note to help clarify this in section 2.
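As a schematic illustration of how such feature parameters set a learning timescale (a single-mode, depth-1 sketch of a reconstruction-style objective with made-up constants, not the paper's deep-network dynamics): gradient descent on a one-dimensional regression mode with cross-covariance \lambda and variance \sigma^2 converges toward the fixed point \rho = \lambda / \sigma^2 at a rate set by \sigma^2.

```python
def steps_to_half(lam, sigma2, lr=0.01):
    # Discrete gradient descent on the per-mode quadratic loss E[(y - w x)^2]/2,
    # whose gradient with respect to w is -(lam - sigma2 * w).
    # Counts steps until w reaches half its fixed point rho = lam / sigma2.
    w, target, steps = 0.0, lam / sigma2, 0
    while w < 0.5 * target:
        w += lr * (lam - sigma2 * w)
        steps += 1
    return steps

# Higher feature variance -> faster convergence under this objective,
# even though the fixed point (the regression coefficient) is smaller.
print(steps_to_half(1.0, 2.0), steps_to_half(1.0, 0.5))
```

The high-variance mode reaches half its fixed point in far fewer steps than the low-variance one, mirroring the claim that a reconstruction objective orders feature learning by input variance.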
**Q.** “On Section 4, line 165, the subscript is dropped.”
**A.** Thank you for noticing this.
**Q.** “The connection between "influential/covarying features" and the parameter is not clear to me.”
**A.** By “influential features” we simply mean features with a high regression coefficient \rho. In the linear regression literature these features are sometimes referred to as influential/predictive/significant features. By highly covarying features we simply mean features with a high covariance parameter \lambda. Is this explanation clear to the reviewer? We will make this clearer in the intro section, and will include references to this usage in prior work.
**Q.** “...I believe it'd be helpful to provide some examples on non-linear cases.”
**A.** We generally agree with this statement; hence we have conducted the following experiment: one prediction that directly follows from our theory is that the JEPA objective allows the model to focus on the bottom subspace of the observed data variance, where, at least for natural images, most of the perceptual features tend to reside [1] (see lines 59-62 and 74-76 in the intro of the paper). We therefore designed an experiment that directly tests this hypothesis (see figures in the added pdf). Additionally, we would like to highlight that, beyond analytical tractability, linear models are especially appealing as a testbed in our case because both the MAE and JEPA objectives eventually learn the same features (perhaps with different amplitudes; see Corollary 4.3 in the paper) if we train the models for sufficiently long. This allows us to make fair comparisons of the timescale on which these features are learned under each objective.
**Q.** “Figure 3 is too small and not annotated”
**A.** Thanks for pointing this out, we will fix it for the final version.
We thank the reviewer for their time and effort in reviewing our paper. If we have sufficiently addressed the reviewers concerns, we kindly ask them to consider raising their score.
[1] Balestriero and LeCun: "Learning by Reconstruction Produces Uninformative Features for Perception"
---
Rebuttal Comment 1.1:
Title: Acknowledgement
Comment: Thank you for the response. My concerns are addressed and I believe this paper would be valuable for the SSL community. Thus I'd raise my score to 6. | Rebuttal 1:
Rebuttal: **General Response to all Reviewers**
We would like to thank all reviewers for their time and dedication in reviewing our paper, and for their support.
In response to several reviewers, we have conducted additional ImageNet experiments demonstrating phenomena consistent with our theory (described in the attached pdf).
We would also like to address a common question raised by the reviewers in the following general response.
**On the implications of the results for practice:** The main qualitative insight from our results is that noisy (or high-variance) features are learned more slowly and with lower amplitude when using the JEPA loss, since a large feature variance across the dataset (denoted \sigma_i in the paper) reduces the regression coefficient for a fixed cross-covariance \lambda_i. A direct prediction that follows from this is that JEPA will tend to focus on the lower subspace of the data variance (PCA space), where most of the perceptual features reside in natural images, as claimed in [1] (see lines 59-62 and 74-76 in the intro of the paper). We have conducted additional experiments on realistic models/data providing evidence for this claim (see attached pdf). As additional evidence in the literature, we would like to point the reviewers to [1], which shows how, unlike JEPA, reconstruction losses tend to focus on the upper part of the PCA space, and to [2], which shows that JEPA tends to learn "slow features" (low variance). Our work can be seen as a first principled analysis of these claims in a toy setting. Additionally, our results provide an intuition for why JEPA objectives are perhaps inefficient for learning features suited to fine-grained pixel-level tasks, as those features tend to be noisy (features that would correspond to a low regression coefficient in the linear setting). Finally, our results point to a fundamental limit on the efficiency of the MAE objective in learning semantic features, since depth does not meaningfully change its feature learning dynamics (see Theorem 4.7 and Figure 4 in the paper), unlike for the JEPA objective. Questions such as how practitioners should account for these insights and limitations in practice we consider out of scope and leave to future work.
[1]: Balestriero and LeCun: “Learning by Reconstruction Produces Uninformative Features for Perception“
[2] Sobal et al: “Joint Embedding Predictive Architectures Focus On Slow Features”
Pdf: /pdf/929407217daaa61bfa6575ca3cdbbf79ae80f5b5.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The authors study a characteristic of two common approaches towards visual modeling in SSL--particularly MAE and JEPA. Previous works have demonstrated or identified empirically that JEPA architectures are more prone towards lower-variance features whereas MAEs optimize towards higher-variance features. This work focuses on a theoretical understanding of why this occurs through deep linear encoders. The authors also verify their framework empirically, with both linear and nonlinear encoders (I believe they verify with nonlinear encoders, please see questions for more details).
Strengths: * The theoretical contribution is very important, as the question as to whether raw pixel or feature reconstruction works better is widely debated. Thus, the theory can be used to justify specific architectures in different circumstances.
* The theoretical results are insightful, and the experimental results strongly support the theoretical results.
* The authors experimented with relaxed assumptions, and had the same or similar empirical results, providing evidence that the theoretical results, while derived under unrealistic assumptions, may still hold true.
Weaknesses: * There are very strong assumptions that will almost never be realistic
* I am not convinced that the magnitude of encoders is a proper metric for the training dynamics. There were not many details or justification for this. Please see the questions section for more details.
Technical Quality: 4
Clarity: 3
Questions for Authors: * There was no mention of the moving average commonly used in JEPA architectures to update the teacher encoder (the encoder with the stop grad operator). It would be nice to know how different approaches to train/get the teacher encoder affect the training dynamics.
* I am a bit confused by the figures involving the encoder magnitudes, particularly about the values of encoder magnitudes and how they were obtained for each feature. For the first layer, or if the encoder had a single layer, this metric makes sense. However, as linear layers are fully connected, feature i in weight layer 1 does not necessarily correspond strongly with feature i in weight layer 3. Thus, if the average magnitude across each layer for feature i is being used, that seems like an invalid metric.
* Is the MLP used in line 478 nonlinear?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q.** “There was no mention of the moving average commonly used in JEPA...”
**A.** Indeed EMA (exponential moving average) is often used in practice to boost performance in a variety of SSL methods that employ self distillation, however we argue that the stop gradient operator is the crucial design choice preventing encoder collapse, rather than EMA, which mainly contributes to increased training stability. Indeed, these claims were argued and verified in the SimSiam [1] paper, showing that EMA can be removed altogether without sacrificing performance. Having said that, we agree EMA should be mentioned, and will add a short discussion on it in the paper.
**Q.** “I am a bit confused by the figures involving the encoder magnitudes...”
**A.** Let us clarify. We do not compute the average magnitude across layers. Rather, for each feature i (feature i is the i’th component in the input), we measure the magnitude of the projection of the full encoder on e_i (i’th standard basis), which simply corresponds to the norm of the i’th column of the encoder (see line 127 in the paper). Since we analyze deep linear models, the full encoder is a matrix given by the product of all the layers belonging to the encoder (see \bar{W} in eq 3 in the paper). This makes sense as a metric since the amplitude of the projection of e_i on the encoder is the amplitude of the response to a unit input in feature direction i, which corresponds exactly to how sensitive the encoder latent (output) is to feature i. A zero projection would indicate, for example, invariance to the feature, making it a useless feature for any downstream task that uses the encoder latent.
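A minimal numerical sketch of this metric (toy 2-layer weights invented for illustration; not the paper's actual code):

```python
import math

def matmul(A, B):
    # Multiply two matrices given as lists of rows.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Toy deep linear encoder: the full encoder \bar{W} is the product of all layers.
W1 = [[1.0, 0.0, 2.0],
      [0.0, 3.0, 0.0]]      # layer 1: 3 input features -> 2 hidden
W2 = [[0.5, 0.0],
      [0.0, 1.0]]           # layer 2: 2 hidden -> 2 latent
W_bar = matmul(W2, W1)      # full encoder \bar{W} = W2 @ W1

# Magnitude for feature i = norm of the i-th column of \bar{W},
# i.e. the amplitude of the response to a unit input e_i.
magnitudes = [math.sqrt(sum(row[i] ** 2 for row in W_bar))
              for i in range(len(W_bar[0]))]
```

A zero entry in `magnitudes` would indicate the latent representation is invariant to that input feature.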
**Q.** “Is the MLP used in line 478 nonlinear?”
**A.** No, it is a deep linear MLP.
We thank the reviewer for their time and effort in reviewing our paper. If we have sufficiently addressed the reviewer's concerns, we kindly ask them to consider raising their score.
[1] Chen et al: “Exploring Simple Siamese Representation Learning”
---
Rebuttal Comment 1.1:
Comment: I have read the reviewer's responses and am satisfied with their rebuttal. They answered all the questions I had.
I am keeping my score as a 7 as I believe the paper will have a high impact in the field of Self-Supervised Learning (SSL), helping to theoretically explain the learning phenomena observed in MAE vs. JEPA. While the paper is impactful in systematically explaining this phenomenon, it's not revolutionary in that it was not the first paper to observe and explain this trend. Regardless, having a theoretical backbone is important, and can be used to motivate future research directions within SSL. | null | null | null | null | null | null |
Bayesian Optimisation with Unknown Hyperparameters: Regret Bounds Logarithmically Closer to Optimal | Accept (poster) | Summary: This paper considers the Bayesian optimization (BO) problem under an unknown length scale and upper bound of the RKHS norm.
The proposed algorithm LB-GP-UCB is designed to adaptively select the length scale
from a certain candidate set. Furthermore, the algorithm eliminates some candidate length scales if certain conditions are met. The validity of the proposed algorithm is shown both empirically and theoretically. Specifically, the proposed algorithm has
a favorable property compared with the existing A-GP-UCB algorithm in the sense of regret optimality.
Strengths: - The motivation is well-discussed; specifically, I agree with the issue of
the A-GP-UCB algorithm described in the Introduction.
- From my view, the comparison with A-GP-UCB in the sense of regret optimality is interesting and novel.
- Enough numerical evaluations are given, and the author also provides the anonymized codes with an easily reproducible style.
Weaknesses: - In contrast to the existing literature (Berkenkamp, 2019), the analysis is limited to the case where the length scale parameters are the same across coordinates. This limits the practical applicability, and I think that the comparison with A-GP-UCB should be evaluated with this limitation in mind.
- The method for calculating the quantity $R^{\theta}$ (Line 3) is ambiguous: it contains the MIG, for which we only
know the dependence of its upper bound on $T$ and $\theta$.
As far as I can see from the proof of Lemma C.1 and the statement of Lemma 5.2 of Pacchiano et al.,
$R^{\theta}$ should be specified as the exact upper bound of the regret; knowledge
of only the order of the regret upper bound is insufficient to specify $R^{\theta}$.
If we cannot calculate the exact value or upper bound of $R^{\theta}$, this part can become a limitation,
and the authors should at least add a discussion of it. As far as I can see from the anonymized code, the numerical experiments are done by
approximating the upper bound of the MIG with the numerically optimized version of the MIG (Hong et al, 2023).
Even if we follow Hong et al, 2023, I believe that the exact version of the upper bound of the MIG can be computed only when the input space $\mathcal{X}$ is finite and small.
- To my understanding, Ziomek et al.'s paper is closely related to your setting, and it seems
that some parts of your paper (e.g., Lemma A.2) borrow their ideas; however, the relation
to and comparison with their paper are not described in Related Works.
Ziomek, Juliusz, Masaki Adachi, and Michael A. Osborne. "Beyond Lengthscales: No-regret Bayesian Optimisation With Unknown Hyperparameters Of Any Type." arXiv preprint arXiv:2402.01632 (2024).
[Minor]
- L108 $x^{\ast} = \max_{x \in \mathcal{X}}$ -> $x^{\ast} = \max_{x \in \mathcal{X}} f(x)$.
- L110 ... value $\theta \in \mathbb{R}$ we denote -> ... value $\theta \in \mathbb{R}$. We denote
- L117 ... them below:: -> ... them below:
- L119 $\theta \in \\{0, \theta_0\\}$ -> $\theta \in (0, \theta_0]$.
- L129 $\gamma(k^{\theta})\_{t-1}$ -> $\gamma_{t-1}(k^{\theta})$
- L129 $\frac{1}{2} \ln |I + \sigma_N^2 K_T^{\theta^{\ast}}|$ -> $\frac{1}{2} \ln |I + \sigma_N^{-2} K_T^{\theta}|$.
- Footnote 1 and Lemma 4: This statement does not hold for any stationary kernel (see Assumption 2 in Bull [11]).
I recommend the author explicitly assume RBF or Mat\'ern kernel in these statements.
- Liu et al., 2023 is related to your setting. Although the unknown hyperparameter they consider is different from the setting
of your paper, I recommend adding their paper as related work.
- Liu, Yusha, and Aarti Singh. "Adaptation to Misspecified Kernel Regularity in Kernelised Bandits." International Conference on Artificial Intelligence and Statistics. PMLR, 2023.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Is there a possibility of extending the algorithm to a case where length scale parameters are different among each coordinate?
- In Proposition 2.3, can the authors update Berkenkamp's MIG upper bound for the Mat\'ern kernel to $\tilde{O}(T^{\frac{d}{2\nu + d}})$?
The recent result (Vakili et al., 2021) shows that the maximum information gain of Mat\'ern kernel increases
with $\tilde{O}(T^{\frac{d}{2\nu + d}})$. Their results do not provide an explicit dependence
on $\theta$; however, by combining Theorem 3 of Vakili et al. and the eigendecay rate of the kernel with explicit dependence on $\theta$,
I guess that $\gamma_T = \tilde{O}(\theta^{-2\nu + d} T^{\frac{d}{2\nu + d}})$ can be obtained.
- Vakili, Sattar, Kia Khezeli, and Victor Picheny. "On information gain and regret bounds in gaussian process bandits." International Conference on Artificial Intelligence and Statistics. PMLR, 2021.
- The exact upper bound of the MIG is obtained by relying on the uncertainty sampling as described in Section 5.1 of Srinivas's paper.
Does any problem occur with using Srinivas's results in the author's analysis?
- Srinivas, Niranjan, et al. "Gaussian process optimization in the bandit setting: No regret and experimental design." arXiv preprint arXiv:0912.3995 (2009).
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: - The analysis is limited to the case where the length scale parameters are the same among each coordinate.
- No potential negative societal impact is seen.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for reading our paper and for mentioning relevant related work. We address each question/concern below.
**Case of different length scales**
The method of Berkenkamp (2019) cannot really handle kernels with differing length scales across coordinates. While technically A-GP-UCB can be applied to non-isotropic kernels, it decreases the lengthscale value for each coordinate at the same pace (e.g. see eq. 9 in (Berkenkamp, 2019)). As such, unless the initial length scales $\theta_0$ differ for each coordinate (which would imply some strong prior information on the relative importance of coordinates), A-GP-UCB utilises the same length scale value for each coordinate at each timestep, making it no better than an algorithm utilising the isotropic kernel. As such, our algorithm is not inferior to A-GP-UCB in that regard.
As we mention in Section 7, to the best of our knowledge, there have been no results in the literature deriving the MIG bounds for the non-isotropic kernels. If one were to obtain such bounds, we believe we could very easily extend our algorithm LB-GP-UCB to handle multiple unknown length scales, in the same way as we extended it to the case of simultaneously handling unknown norm and length scale. Such an algorithm would truly handle the non-isotropic case and would be able to potentially choose to fit a GP model with different length scale values, unlike A-GP-UCB.
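For concreteness, the isotropic vs. non-isotropic distinction can be sketched with a standard ARD-style RBF kernel (illustrative code, not A-GP-UCB's or LB-GP-UCB's implementation):

```python
import math

def rbf(x, y, lengthscales):
    # ARD RBF kernel: each coordinate d has its own lengthscale theta_d.
    # The isotropic case is simply lengthscales = [theta] * dim.
    return math.exp(-0.5 * sum(((a - b) / th) ** 2
                               for a, b, th in zip(x, y, lengthscales)))

# Same pair of points under an isotropic and a per-coordinate lengthscale.
k_iso = rbf([0.0, 0.0], [1.0, 1.0], [1.0, 1.0])   # exp(-1)
k_ard = rbf([0.0, 0.0], [1.0, 1.0], [1.0, 0.5])   # second coordinate matters more
```

With a shorter lengthscale on one coordinate, the kernel decays faster along that direction, which is exactly the per-coordinate flexibility that a single shared lengthscale cannot express.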
**Updating MIG bound**
We would like to thank the reviewer for mentioning that the improved MIG bounds exist, as we were not aware of that fact. **We will replace the MIG bounds with the improved ones in our Theorems.**
**Relation to Ziomek et al, 2024**
The problem setting considered in (Ziomek et al, 2024) differs from ours, as they assumed there are a number of candidate hyperparameter values given to us at the start of the problem, making the problem much easier. Their proposed algorithm also scales with the MIG of the worst candidate and as such could be arbitrarily far from the optimal. **We will add this discussion to the Related Work section.**
**Knowledge of Regret Bounds**
One does not need the exact form of the regret bounds---knowledge up to a constant is sufficient. The fact that the algorithm selects hyperparameters by the rule in line 3 is only used in two places in the proof of Theorem 4.1. The first place is Lemma C.1, where the constants $C$ in the regret bounds cancel out. The second place is in lines 472 - 474 on page 14, where we assume that for the regret bounds we must have $R(t+1) < R(t) + 2B$, since $2B$ is the highest instantaneous regret we can possibly suffer. If the order dependence is known, one can always find a constant $C$ such that this constraint is respected.
While the lack of need for exact knowledge of $C$ might seem surprising at first glance, it is a direct result of the fact the regret bound for LB-GP-UCB is also derived up to a constant factor. Changing $C$ in the suspected regret bound does not affect the order dependence of the regret bound of LB-GP-UCB. This is a very interesting point and we are grateful to the reviewer for mentioning it. **We will add a short discussion in the paper to clarify this.**
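The balancing logic discussed here can be sketched as follows (a heavily simplified illustration with invented names and an invented bound form, not the actual LB-GP-UCB pseudocode):

```python
import math

def select_hyperparameter(active, suspected_bound, t):
    # Regret-balancing selection: play the candidate whose suspected
    # regret bound is currently smallest.  A constant factor C inside
    # suspected_bound cancels in these comparisons, which is why only
    # the order of the bound needs to be known.
    return min(active, key=lambda theta: suspected_bound(theta, t))

def eliminate(active, suspected_bound, observed, t):
    # Drop candidates whose observed regret exceeds their suspected
    # bound, i.e. candidates that are provably misspecified.
    return [th for th in active if observed[th] <= suspected_bound(th, t)]

# Toy candidates with a made-up suspected bound R^theta(t) = sqrt(t) / theta.
bound = lambda theta, t: math.sqrt(t) / theta
active = [0.25, 0.5, 1.0]
chosen = select_hyperparameter(active, bound, t=100)
survivors = eliminate(active, bound,
                      observed={0.25: 50.0, 0.5: 10.0, 1.0: 5.0}, t=100)
```

Here the candidate `0.25` is eliminated because its observed regret (50) exceeds its suspected bound (40), while the others survive.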
**Applicability of the MIG bounds from (Srinivas et al, 2009)**
The results from Srinivas et al (2009) rely on uncertainty sampling to derive a bound on the sum of predictive variances for **any** strategy (the MIG bound). In fact, Srinivas et al (2009) use this MIG bound to derive a regret bound for the GP-UCB algorithm. If each of the GP-UCB base algorithms were run in isolation, these bounds would clearly hold. These bounds must therefore also hold in the case where datapoints are shared between algorithms, as the variance is a non-increasing function of the number of conditioning datapoints.
**Minor points**
We would like to thank the reviewer for pointing out the typos and that Lemma 4 of [11] requires an additional assumption. **We will correct the typos and explicitly restrict the statement to RBF or Matern kernel.** We would also like to thank for pointing out more relevant, related work, **we will add it to the related work section**.
---
Rebuttal Comment 1.1:
Comment: I thank the reviewer for responding to my questions and correcting some of my misunderstandings. I have raised my score to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for taking the time to review our paper and respond to us. We are glad to know you are satisfied with our rebuttal. | Summary: This paper proposes a novel Bayesian optimization algorithm for the setting with unknown kernel lengthscale. The proposed approach improves upon prior work by running a logarithmic array of algorithms on exponentially decreasing lengthscales in combination with a regret balancing scheme. The paper proves a regret guarantee with an exponential improvement compared to prior work in the ratio of the regret compared to the oracle algorithm that knows the optimal lengthscale. The experimental evaluation also shows substantial improvements compared to the prior work.
Strengths: Originality: This paper introduces a clever regret balancing scheme to obtain an improved regret bound for Bayesian optimization with unknown kernel lengthscale. While the individual ideas already appear in prior work (e.g. regret balancing, and decreasing the lengthscale), the combination in this setting is novel.
Quality & Clarity: The paper is overall well written but could benefit from careful proofreading (some suggestions below). The main idea is easy to follow and there is an extensive comparison to the prior work. The regret bound is a significant improvement over prior work. The experimental evaluation shows that the proposed approach outperforms the prior work.
Significance: The findings are of high interest to the Bayesian optimization community (and users of Bayesian optimization) as selecting hyperparameter remains a major challenge. In particular, classical schemes (like marginal likelihood) tend to fail because there is too little data initially.
Weaknesses: The paper could use some polishing for English and punctuation (e.g. many sentences are quite long). In addition, the authors could provide more intuition (e.g. why regret balancing works, and the elimination scheme) and try to make the paper more accessible to readers not familiar with the tools used.
Minors:
* line 110: punctuation
* Algorithm 1, line 7: Is there a simpler notation for the set $S_t^\theta$ ?
* 209: Long sentence, unclear what "their" refers to.
* 229: Wording/missing word "highest ? as possible"
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The main limitation is that the approach so far only works for a scalar parameter that induces a natural nesting of Hilbert spaces.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for reading our submission, and pointing out the issues with writing. **We will correct the writing errors pointed out by the reviewer and try to break long sentences into shorter ones.** We agree that the algorithm is quite complex and it would be good to give more intuition to the readers. **We plan to use the additional page in the camera-ready version to add a longer, more intuitive explanation of the regret balancing scheme with some easily-understandable figures.**
When it comes to the definition of the set $S_t^\theta$ in the pseudocode, it would be possible to drop the iteration subscript $t$ and just redefine the set at each iteration. However, we explicitly wanted to index this set by the current timestep $t$ and the hyperparameter value $\theta$, as this notation is very convenient later in the proofs.
---
Rebuttal Comment 1.1:
Comment: I'd like to thank the authors for the response and clarifications. I read the reviews and rebuttal, and my evaluation remains positive.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for taking the time to review our paper and respond to us. We are glad to know you are satisfied with our clarifications. | Summary: This paper introduces LB-GP-UCB (Lengthscale Balancing GP-UCB), a Bayesian Optimization (BO) algorithm that proposes a new tuning of the covariance function hyperparameters. A regret bound is derived, with logarithmic improvement over A-GP-UCB, the most similar solution in the literature. Some numerical experiments also show improvements in practice.
Strengths: LB-GP-UCB is a significant improvement over A-GP-UCB, and constitutes an interesting step forward in the field of no-regret BO with unknown hyperparameters.
Weaknesses: My concerns are mostly related to the experimental part of the paper.
**Missing Baselines**. In Section 6, the authors seem to discard many solutions addressing the problem of BO with unknown hyperparameters because they do not provide any theoretical analysis for their algorithms (see [13, 20, 25, 28] in the main paper). Although I understand that LB-GP-UCB comes with an additional, reassuring theoretical guarantee, its empirical performance should still be compared against some empirical algorithms at least.
**Missing Benchmarks**. Only four benchmarks were considered, I think that is not enough for a comprehensive study of LB-GP-UCB's empirical behavior. I believe additional experiments should be run.
**Impact of the Dimension $d$**. The dimensionality of the objective function may have an important impact on the performance of LB-GP-UCB. However, the dimensionality of the problems was not specified in the main text nor in the Appendix E entitled "Experimental Details".
**Wall-Clock Time Comparison**. In Appendix E, it is mentioned that all experiments for every method (except MCMC) took up to 4 minutes to run, but I would be interested in the precise wall-clock time for each solution and each experiment. This is important as online estimation of the hyperparameters brings a computational overhead to the BO algorithm.
Technical Quality: 2
Clarity: 1
Questions for Authors: Here are some questions to spark the discussion with the authors.
(1) Have you compared LB-GP-UCB to any of the algorithms presented in [13, 20, 25, 28]? If not, on what ground have you discarded them for your empirical evaluation?
(2) Why have you chosen these 4 benchmarks? I know that the rebuttal period is very short, but I believe more experimental results on a variety of problems (e.g., different smoothness of the objective function, different dimensionality...) should be considered to strengthen Section 5.
(3) What were the dimensionality for the considered benchmarks? Do you have any insight on how LB-GP-UCB would react to higher-dimensional problems?
(4) Do you have the precise wall-clock times for each solution and each experiment?
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 3
Limitations: I believe the authors have properly addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for reading our submission and providing feedback on our paper. We address each question/concern below:
(1) The methods used by [13] and [28] are equivalent to the MCMC baseline we compared against, where the hyperparameters are marginalised from the acquisition function using Monte Carlo samples (as we mention at the beginning of Section 5). [25] is an empirical study of robustness to misspecification of the prior on hyperparameters that does not provide any novel method of hyperparameter estimation. The codebase of [20] has been taken down from the web, and currently no publicly available implementation of their algorithm exists.
(2) We chose benchmarks with differing smoothness throughout the domain (Berkenkamp, Michalewicz) or those that exhibit "needle in a haystack" behaviour (such as the Material Design problems), as typically used methods such as maximum likelihood tend to struggle on these kinds of problems. Our benchmarks were chosen to showcase how our method can be used to tackle problems with which existing methods struggle. Of course, if one were to choose less challenging benchmarks with constant smoothness throughout the domain, it is entirely possible the improvement delivered by LB-GP-UCB over MLE and the other baselines would be smaller. We do not claim that our method would provide a universal improvement across all sorts of benchmarks, but rather that MLE and MCMC can perform poorly on a certain class of problems on which our algorithm performs well. We will add this comment to the limitations section.
We would also like to emphasise that the paper proposing A-GP-UCB (published in JMLR, 2019) included only two empirical benchmarks, so we would argue that our experimental evaluation on four benchmarks meets the standard for papers whose main contribution is theoretical. In fact, Reviewer YCMG mentions "Enough numerical evaluations are given (...)".
(3) The Berkenkamp function is a 1-dimensional synthetic function. We used a 5-dimensional version of the Michalewicz function. AGNP and CrossedBarrel are 5-dimensional and 4-dimensional real-world problems, respectively. We will amend the manuscript to include the dimensionality of each of the problems.
As we prove in the paper, LB-GP-UCB can recover performance "close" to the performance of a GP-UCB optimiser with oracle knowledge of the optimal length scale value. However, it is well known that even the regret bound of this oracle optimiser would grow with the dimensionality of the problem, and the same is true for LB-GP-UCB. To remedy that, one could enhance the LB-GP-UCB optimiser in the same way as standard BO optimisers are enhanced to perform well in high-dimensional spaces (e.g. by adding a Trust Region or decomposing the input space). These enhancements are orthogonal to our method.
(4) Yes, we do have wallclock times for our experiments. We provide them below and **will update the manuscript to include the exact wallclock times in the Appendix**. All times below are in seconds; values after +/- are standard errors across the seeds.
**Berkenkamp Function:**
MLE : 438 +/- 0.66
A-GP-UCB : 443 +/- 1.51
LB-GP-UCB : 442 +/- 1.68
MCMC : 1653 +/- 25.99
**Michalewicz Function:**
MLE : 237 +/- 2.41
A-GP-UCB : 167 +/- 0.88
LB-GP-UCB : 181 +/- 0.47
MCMC : 3388 +/- 369.38
**Crossed Barrel Materials Experiment:**
MLE : 55 +/- 0.1
A-GP-UCB : 48 +/- 0.4
LB-GP-UCB : 48 +/- 0.5
MCMC : 471 +/- 25.2
**AGNP Materials Experiment:**
MLE : 53 +/- 0.05
A-GP-UCB : 49 +/- 0.18
LB-GP-UCB : 49 +/- 0.16
MCMC : 246 +/- 3.56
As such, LB-GP-UCB is faster than MLE on all benchmarks except for Berkenkamp Function, where it is 4 seconds ($\approx 1$ \%) slower. MCMC is significantly slower than any other baseline.
---
Rebuttal Comment 1.1:
Title: Rebuttal Ack
Comment: Thank you for the clarifications.
I am now positive about the paper and I have increased my score to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review the paper and to respond to us. We are glad to know that our rebuttal addressed your concerns. | Summary: This paper proposes an approach to deal with unknown hyper-parameters in Gaussian process upper confidence bound (GP-UCB) algorithms, a popular Bayesian optimisation (BO) strategy. The objective function is assumed to be a member of a reproducing kernel Hilbert space (RKHS) associated with a translation-invariant kernel class whose length-scale is unknown. Algorithms are proposed to adaptively estimate the unknown kernel length-scales and an upper bound on the RKHS norm of the objective function, which are common hyper-parameters for GP-UCB algorithms. Theoretical guarantees on the regret for the proposed algorithms are provided, which show that the ratio of the algorithm's cumulative regret to the regret of an algorithm with knowledge of the exact hyper-parameters is only logarithmic, in contrast to previous approaches. Experimental results are presented comparing the regret of the proposed algorithms to typical hyper-parameter estimation strategies in the BO literature.
Strengths: * The paper builds well on existing theoretical results and a novel rigorous analysis.
* Experimental results show improvements against existing popular hyper-parameter estimation strategies, bringing new insights.
* Existing relevant literature seems well covered by related work section.
* The text is well organised following a mostly clear structure.
Weaknesses: * It is unclear how close the estimated length-scale and RKHS norm are to their true values at each iteration.
* There are no (theoretical) convergence results on the algorithm’s estimates, only the regret bounds.
* The proposed algorithms are only compared against other GP-UCB strategies. It’d be interesting to see how they compare to other methods which do not require explicitly knowledge of the RKHS norm bound, for example, such as expected improvement algorithms. There are also no comparisons against meta-learning BO strategies. Even though they require prior data, it’d be interesting to see how close (or better) the performance of the proposed algorithms can get to them.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Another important hyper-parameter that might affect GP-UCB algorithms' performance is the sub-Gaussian noise parameter upper bound. However, the paper presents no discussion about the noise parameter. I was wondering if the authors have considered estimation strategies for that hyper-parameter as well.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Discussions on the main limitations of the theoretical analysis are presented, but there are no discussions on scalability issues, such as problems involving high-dimensional data or large datasets, which often require low-rank GP approximations. There are also no discussions on noise hyper-parameters (i.e., the sub-Gaussian noise parameter), another important hyper-parameter that might be unknown for some applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for reading our submission and for their appreciation for our theoretical analysis and empirical evaluation. We address each question/concern below.
**Convergence of length-scales estimates**
In general, we consider the setting in which the function can arbitrarily change its smoothness throughout the domain. As such, it is impossible to guarantee the correctness of the length scale estimation unless we cover the input space with an infinitely dense grid---however, doing so would defeat the purpose of conducting sample-efficient optimisation. Our algorithm side-steps the need for accurate length scale estimation by appropriately balancing the regret. As such, we do not consider it a weakness that we do not solve the impossible task of precisely estimating length scales---instead, we propose an alternative approach that solves a practically relevant problem.
Also, as shown in Figures 2 and 4, in practice, the length scale values typically selected by our algorithm are close to the estimates of the optimal length scale value. As such, while we cannot guarantee the convergence of that estimator (which, as explained before, would be impossible in the considered setting), we observe reasonable convergence empirically.
**Comparison to Expected Improvement**
While it is true that EI does not require knowledge of the RKHS norm to compute the acquisition function, in practice, EI still requires the specification of the output scale value $c$ for the kernel $c k(x,x^\prime)$. We have that $||f||_{ck} = \frac{||f||_{k}}{\sqrt{c}}$, so if we do not know the RKHS norm, we still have one more parameter to find. Additionally, EI still requires knowledge of the length scale. Our algorithm can be used to remedy these problems also in the case of EI, where each of the base learners could be an instance of GP-EI instead of GP-UCB.
**Comparison to Meta-Learning strategies**
The only previous work of which we are aware that solves the problem of unknown hyperparameters via meta-learning is the work of , which we cite in the related work section. They assume the training and target functions were sampled from the same Gaussian Process prior, which is a different setting from ours, where we do not impose any prior on the black-box function.
We did not compare against meta-learning baselines, as such approaches are only applicable where one can easily find functions that are highly "similar" to the target functions. For the benchmarks we considered, it is not clear how to find such similar functions.
**Case of unknown noise**
The problem of simultaneously estimating the noise magnitude and length scale value is ill-posed, as pointed out, for example, by [Karvonen and Oates, 2023]. An intuitive way to see that this problem is ill-posed is by observing that if the function values we observe change rapidly, we can never know for sure whether the change is caused by the true function $f(\cdot)$ changing rapidly (implying a short length scale value) or by the magnitude of the noise being large and the function changing slowly (implying a long length scale value). As such, simultaneous estimation of the noise magnitude and kernel hyperparameters is likely impossible. However, we agree that we should comment on this when discussing limitations. **We will update the paper to reflect that.**
**Scalability Limitations**
Scalability to high-dimensional spaces and large datasets is a limitation of virtually all optimisation methods based on standard Gaussian Process models. These limitations are not particularly tied to the method we propose. **However, for clarity, we will update the manuscript to clearly mention those issues** as a limitation of the standard Gaussian Process model and, by extension, a limitation of our algorithm, which relies on that model.
**References**
Karvonen, Toni, and Chris J. Oates. "Maximum likelihood estimation in Gaussian process regression is ill-posed." Journal of Machine Learning Research 24.120 (2023): 1-47.
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my concerns. I keep my vote for this paper to be accepted, as it brings an important contribution to the BO community.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for taking the time to review our paper and respond to us. We are glad to know our rebuttal addressed your concerns. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for taking the time to read our submission and provide insightful feedback, as well as for asking interesting questions. We respond to each of the reviewers individually below. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Grid4D: 4D Decomposed Hash Encoding for High-Fidelity Dynamic Gaussian Splatting | Accept (poster) | Summary: This paper proposes using Hash Encoding to model the Deformation Field for dynamic scenes. The authors first decompose 4D encoding into four 3D encodings to avoid the losses caused by the low-rank tensor assumption. They also introduce an attention module to decouple spatial and temporal features. Since explicit modeling can result in insufficiently smooth delta predictions for the deformation field, the authors incorporate a smooth loss to strongly regularize the outputs of Hash Encoding. Experiments on both synthetic and real-world datasets show substantial quality improvements compared to prior state-of-the-art methods. In addition, Grid4D shows a significant FPS improvement compared to Deformable-GS (without lightweight).
Strengths: I think the main strength of this paper is the authors' deep understanding of Deformable-based Gaussian splatting. To be compatible with the densification of vanilla Gaussian splatting, the deformation field needs to output delta (x or rotation). The key to the success of outputting delta lies in the sufficiently smooth output of the deformation field. This is why Deformable-GS [1] can outperform D-4DGS [2] significantly on datasets with accurate camera poses, since Hexplane is not as smooth as MLP. In addition, Hexplane not satisfying the low-rank tensor assumption in dynamic scenes is also a **potential reason**. The authors insightfully addressed the issue of MLP being overly smooth and explicit methods being insufficiently smooth. Therefore, they used Hash Encoding, which does not require the low-rank tensor assumption, to ensure high-frequency details, while employing smooth loss to ensure the foundation of the deformable-based Gaussian splatting.
The design of the deformation network and the experiments with strong baselines are compelling. Ablations are also appreciated.
[1] Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin. Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction. arXiv preprint arXiv:2309.13101,2023.
[2] Guanjun Wu, Taoran Yi, Jiemin Fang, Lingxi Xie, Xiaopeng Zhang, Wei Wei, Wenyu Liu, Qi Tian, and Xinggang Wang. 4d gaussian splatting for real-time dynamic scene rendering. arXiv preprint arXiv:2310.08528, 2023.
Weaknesses: 1. low-rank assumption -> **low-rank tensor assumption** is more accurate.
2. I think the following papers should also be cited:
- Spacetime Gaussian Feature Splatting for Real-Time Dynamic View Synthesis by Zhan Li et al.
- 4D Gaussian Splatting: Towards Efficient Novel View Synthesis for Dynamic Scenes by Yuanxing Duan et al.
3. Datasets:
- D-NeRF: I think `Lego` (wrong scene with inconsistent training and test set) should not appear in the table, or it should use the Deformable-GS [1] setup with the validation set as the test data. This can prevent misleading subsequent research.
- HyperNeRF: I do not find the results on HyperNeRF convincing because the camera poses in the HyperNeRF dataset are inaccurate. For example, in the `3D Printer` (Fig. 5), Grid4D is visibly clearer than TiNeuVox, but the metrics do not reflect this. Therefore, I suggest that the authors adopt `NeRF-DS` [3] dataset with more accurate camera poses for real-world scenes comparison.
4. More difficult motion: the motions in the datasets used for experiments are mostly rigid or easy to explain. It would be interesting to see if the method is able to handle more difficult deformations, like non-rigid or large motion. Based on my understanding of the Deformation Field, it is challenging for it to handle large motions effectively.
5. More comparisons: I wonder if the authors have considered SC-GS [4]. It could be better if the authors could compare their method with SC-GS. I believe the approach of SC-GS could be applied to Grid4D to achieve higher rendering quality and FPS.
[3] Zhiwen Yan, Chen Li, and Gim Hee Lee. Nerf-ds: Neural radiance fields for dynamic specular objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8285–8295, 2023.
[4] Yi-Hua Huang, Yang-Tian Sun, Ziyi Yang, Xiaoyang Lyu, Yan-Pei Cao, and Xiaojuan Qi. Sc-gs: Sparse controlled gaussian splatting for editable dynamic scenes. arXiv preprint arXiv:2312.14937, 2023.
Technical Quality: 3
Clarity: 4
Questions for Authors: A minor question: Lines 227-231 state that different scenes use different hyperparameters. I would like to know how significantly these different hyperparameters affect the scenes, as this greatly impacts the method's versatility.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Please refer to the Weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the constructive feedback. We hope that our response below will address your concerns.
**Q1: Citation of the two related works and the writing issue.**
A1: We will add the citations of the mentioned related works and revise the wording in the final version.
**Q2: Should the metrics of the wrong scene 'Lego' be removed?**
A2: We will remove the metrics of the 'Lego' scene in the final version.
**Q3: Use the NeRF-DS dataset with more accurate camera poses than the HyperNeRF dataset?**
A3: According to the original paper, the NeRF-DS dataset is designed for dynamic scenes with specular objects, which makes it less suitable for general dynamic scene rendering. Therefore, to further demonstrate the improvements of Grid4D, we also choose the Neu3D dataset. Neu3D is captured by 21 cameras with fixed poses and has more accurate camera poses than HyperNeRF. Additionally, the scenes in the Neu3D dataset are larger and more challenging.
We report the comparison results with PSNR on the Neu3D[1] dataset in the following table, and our model has better performance than the state-of-the-art models. The qualitative results can be found in Figure 1 of the top comment PDF. We will add these experiments to our paper in the final version.
| Model | Coffee Martini | Cook Spinach | Cut Beef | Flame Salmon | Flame Steak | Sear Steak | Mean |
| :----------: | :------------: | :----------: | :-------: | :----------: | :---------: | :--------: | :-------: |
| 4D-GS | 27.34 | 32.50 | 32.26 | 27.99 | 32.54 | **33.44** | 31.01 |
| Grid4D(Ours) | **28.30** | **32.58** | **33.22** | **29.12** | **32.56** | 33.16 | **31.49** |
**Q4: How does Grid4D perform when handling difficult motions.**
A4: The 'Hand', 'Peel Banana', 'Chocolate', 'Broom', and 'Teapot' scenes in the HyperNeRF dataset have large and complex motions. As listed in Table 5, Figure 9, and Figure 10 of our paper, our method achieves much better rendering quality than the state-of-the-art models due to the improved discriminability of the explicit features. However, Grid4D still shows several artifacts in the 'Broom' and 'Teapot' scenes, which might need further research.
**Q5: Comparison to SC-GS.**
A5: We conduct the comparison with SC-GS on the D-NeRF dataset. The comparison results with PSNR can be found in the following table, and the qualitative results can be found in Figure 2 of the top comment PDF. Notably, the original SC-GS rendering resolution is $400\times 400$, lower than ours, so we changed the rendering resolution of SC-GS to $800\times 800$ for fairness. Additionally, we remove the 'Lego' scene because of the incorrect test ground truth. We will add these experiments in the final version.
| Model | Bouncing Balls | Hell Warrior | Hook | Jumping Jacks | Mutant | Standup | Trex | Mean |
| :----------: | :------------: | :----------: | :-------: | :-----------: | :-------: | :-------: | :-------: | --------- |
| SC-GS | 41.59 | 42.19 | 38.79 | 39.34 | 43.43 | **46.72** | 39.53 | 41.65 |
| Grid4D(Ours) | **42.62** | **42.85** | **38.89** | **39.37** | **43.94** | 46.28 | **40.01** | **42.00** |
According to the results, Grid4D has better performance than SC-GS on average. The improvement of Grid4D might stem from the performance gap between implicit deformation fields (which SC-GS is based on) and Grid4D. SC-GS uses a fully MLP-based implicit model to predict deformations, which is inherently over-smooth. Our proposed 4D decomposed hash encoder generates explicit features with high discriminability to represent the deformations more accurately.
We consider SC-GS an excellent work mainly focused on dynamic scene editing and rendering. Therefore, combining the advantages of SC-GS and Grid4D, or applying Grid4D to SC-GS, are promising directions for future work that might achieve better rendering quality.
**Q6: What hyperparameters are set in different scenes?**
A6: In our experiments, the hyperparameters that need to change mainly include the resolution of the time dimension, the weight of smooth regularization term $\lambda_r$, and the perturbation range $\epsilon=(\epsilon_x, \epsilon_y, \epsilon_z, \epsilon_t)$.
The resolutions of the time dimension are set according to the frame number, usually between a half and a quarter of the frame number according to the Nyquist-Shannon Sampling Theorem. The resolution can be set automatically by the algorithm.
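As an illustration of this rule, the selection could be sketched as follows (the helper name and the `factor` default are our own illustration, not from the paper):

```python
def temporal_resolution(num_frames: int, factor: float = 0.5) -> int:
    """Pick the time-axis grid resolution from the frame count.

    Following the Nyquist-Shannon sampling intuition, a resolution of
    roughly half (factor=0.5) to a quarter (factor=0.25) of the number
    of frames is used; at least 2 grid cells are always kept.
    """
    return max(2, int(num_frames * factor))
```

For example, a 150-frame sequence would get a time resolution of 75 (half) or 37 (quarter).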
The weight of the smooth regularization term is $\lambda_r=0.5$ for most scenes and $\lambda_r=1.0$ for the 'vrig' part of the HyperNeRF dataset. In most cases, setting $\lambda_r=0.5$ yields reliable results. For better performance, increasing $\lambda_r$ helps Grid4D render better when the scene primarily consists of rigid objects and simple motions. Similarly, decreasing $\lambda_r$ helps Grid4D model complex motions and non-rigid objects.
For the D-NeRF dataset and the 'vrig' part of the HyperNeRF dataset, we set the perturbation range to $\epsilon\in [-10^{-2}, 10^{-2}]$. For the 'interp' part of the HyperNeRF dataset, we set it to $\epsilon\in [-10^{-3}, 10^{-3}]$. In most cases, setting the range to $\epsilon\in [-10^{-3}, 10^{-3}]$ yields reliable results.
[1] Li, Tianye, et al. "Neural 3d video synthesis from multi-view video." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2022.
---
Rebuttal Comment 1.1:
Title: Feedback from reviewer T422
Comment: Thanks for the authors' response and the additional experimental results based on the review. I will not dwell too much on the authors' measurements of SC-GS rendering metrics. Overall, the rendering quality of SC-GS and Grid4D is similar (please refer to the homepage of MaGS: Reconstructing and Simulating Dynamic 3D Objects with Mesh-adsorbed Gaussian Splatting).
However, the authors have underestimated the FPS and training time of SC-GS. For instance, the FPS measurement of SC-GS should fix the KNN, rather than querying it in every iteration, achieving nearly 300 FPS on D-NeRF at 400x400 resolution. The training time is about 40-50 minutes, far less than 75 minutes. Moreover, I believe 4DGS (FDU, real-time 4DGS) should be compared on the Neu3D dataset.
Considering my personal concerns about the future of deformation-based Gaussian splatting, the per-scene optimized hyperparameters by the authors, and the incremental (or lack of) improvement compared to SC-GS, I have decided to lower my score to `weak accept`.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for the constructive feedback. We hope that our response below will address your concerns.
**Q1: Further Comparison to SC-GS.**
Although our model shows no obvious improvement in several scenes on the D-NeRF dataset in comparison to SC-GS, it still significantly outperforms the state-of-the-art model on the HyperNeRF dataset, as shown in Table 3 of our paper. According to Section 7 in the paper of SC-GS, SC-GS might fail to reconstruct dynamic scenes with imprecise camera poses, whereas our model is more robust to this problem.
We apply SC-GS to the real-world HyperNeRF dataset. We set the resolution to $960\times 536$. On the HyperNeRF scenes that SC-GS reconstructs successfully, SC-GS obtains 25.58 PSNR in the 'Chicken(interp)' scene while Grid4D(Ours) obtains 27.31 PSNR; SC-GS obtains 24.00 in the 'Slice Banana' scene while Grid4D(Ours) obtains 25.79 PSNR. The results demonstrate the robustness of our model when facing imprecise camera poses.
We think that the reconstruction failures and degradation of SC-GS might be due to the inherently over-smooth property of implicit MLP-based models, as mentioned in Section D of the supplementary of DeformGS.
**Q2: Are the FPS and training time of SC-GS underestimated?**
In all our experiments on the D-NeRF dataset, including training and rendering, we set the resolution to $800\times 800$ for fairness, which is higher than the $400\times 400$ of the original SC-GS. Therefore, the measured FPS is lower and the training time is longer.
We think that only caching the KNN results of SC-GS from the rendering of the first frame might be a little unfair to Grid4D, because we can perform a similar caching operation in our model by caching the directional attention scores. With the caching operations, we obtain 234 FPS for SC-GS and 200 FPS for our model.
---
Rebuttal 2:
Title: A kind comment
Comment: Dear authors and reviewer T422, I would like to join the discussion and kindly defend the deformation-based GS:
Recent works, such as "Shape of Motion" (Wang et al., ArXiv24) and "MoSCA" (Lei et al., ArXiv24), seem to excel in monocular setups through explicit deformation models. I believe these methods could handle HyperNerf's setups. Therefore, I don't think 4DGS (Yang et al.) and Spacetime GS are similar methods. However, as the authors mentioned, the implicit global deformation network also has its advantages. Generalizability could be another considerable potential advantage.
Exploring implicit representations is an important research topic in 3D vision. NeRF has its own advantages compared to 3D-GS, even though most researchers currently prefer the latter, and we must remember that CNNs eventually replaced handcrafted convolutional kernels.
Strengths: 1. The paper proposes a deformable representation for dynamic scene rendering and without relying on low-rank assumption.
2. The attention module looks reasonable for feature aggregation.
3. A smooth training strategy is employed to ensure the smoothness of Grid4D.
Weaknesses: 1. Why don’t you compare with [13]? SC-GS now preserves the state-of-the-art (SOTA) rendering quality in D-NeRF.
2. If you choose 4D-GS as your baseline, please provide the complete results on the Neu3D dataset and all other metrics (average training time, average rendering speed, average storage consumption).
3. Only selecting some figures and reporting PSNR/LPIPS/SSIM for comparison is not quite enough in 3D vision. The reviewer recommends that the authors submit more comparison videos as supplementary material.
4. If the authors want to propose a method that can replace the deform-based dynamic Gaussians (4D-GS, SC-GS, deform-GS) and become a new baseline, they should provide strong, comprehensive results in the experiment section.
5. Additionally, the D-NeRF dataset is just a toy. I don't think much effort deserves to be spent on this dataset after the publication of SC-GS (PSNR already up to 40). If the authors are still interested in dynamic scene novel view rendering, I recommend focusing on monocular setups (DyCheck, NeurIPS 2022) or multi-view (Neu3D, CVPR 2022).
6. The figure looks unattractive.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. I want the authors to demonstrate that Grid4D is better than 4D-GS, Deformable-GS, and SC-GS in most evaluation metrics. Please use average results instead of the range 0.3~1.5 in Table 7, as that range is too large. If some metrics are worse, please provide more discussions.
2. Solving a low-rank problem to achieve better rendering quality in the D-NeRF/NeRF-DS/HyperNeRF dataset is indeed a contribution. However, the reviewer believes that the core challenges of novel view synthesis in this dataset have already been solved. There are many important topics in dynamic scene novel view rendering, such as large motion and monocular novel view synthesis.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: As mentioned in weakness and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the constructive feedback. We hope that our response below will address your concerns.
**Q1: Comparison to SC-GS.**
A1: We conduct the comparison with SC-GS on the D-NeRF dataset. The comparison results with PSNR can be found in the following table, and the qualitative results can be found in Figure 2 of the top comment PDF. Notably, the original SC-GS rendering resolution is $400\times 400$, lower than our model, so we changed the rendering resolution of SC-GS to $800\times 800$ for fairness. Additionally, we remove the 'Lego' scene because of the incorrect test ground truth. We will add these experiments in the final version.
| Model | Bouncing Balls | Hell Warrior | Hook | Jumping Jacks | Mutant | Standup | Trex | Mean |
| :----------: | :------------: | :----------: | :-------: | :-----------: | :-------: | :-------: | :-------: | --------- |
| SC-GS | 41.59 | 42.19 | 38.79 | 39.34 | 43.43 | **46.72** | 39.53 | 41.65 |
| Grid4D(Ours) | **42.62** | **42.85** | **38.89** | **39.37** | **43.94** | 46.28 | **40.01** | **42.00** |
According to the results, Grid4D has better performance than SC-GS on average. The improvement of Grid4D might stem from the performance gap between implicit deformation fields (which SC-GS is based on) and Grid4D. SC-GS uses a fully MLP-based implicit model to predict deformations, which is inherently over-smooth. Our proposed 4D decomposed hash encoder generates explicit features with high discriminability to represent the deformations more accurately.
We think that SC-GS is an excellent work mainly focused on dynamic scene editing and rendering. Therefore, combining the advantages of SC-GS and Grid4D, or applying Grid4D to SC-GS, are promising directions for future work that might achieve better rendering quality.
**Q2: Results on Neu3D dataset.**
A2: We conduct the experiments on the Neu3D dataset and report PSNR in the following table. From this table, one can see that our model has better performance than the state-of-the-art model. The qualitative results can be found in Figure 1 of the top comment PDF. We will add these experiments to our paper in the final version.
| Model | Coffee Martini | Cook Spinach | Cut Beef | Flame Salmon | Flame Steak | Sear Steak | Mean |
| :----------: | :------------: | :----------: | :-------: | :----------: | :---------: | :--------: | :-------: |
| 4D-GS | 27.34 | 32.50 | 32.26 | 27.99 | 32.54 | **33.44** | 31.01 |
| Grid4D(Ours) | **28.30** | **32.58** | **33.22** | **29.12** | **32.56** | 33.16 | **31.49** |
**Q3: Can the authors provide comparison videos?**
A3: To demonstrate the effectiveness of our model, we provide additional rendering results in Figure 12, Figure 11, and Figure 9 of our supplementary. To demonstrate the temporal coherence of our model, we provide some videos through the included images in Figure 3 of the top comment PDF. We uniformly interpolate the time and randomly select a camera pose to render the scene reconstructed by Grid4D, and display every 5th frame (out of 150 frames). We will provide more videos on our GitHub repository in the final version.
**Q4: Experimental results on other datasets such as Neu3D since D-NeRF is a toy dataset?**
A4: To further evaluate our model, we also conduct experimental comparisons on the real-world HyperNeRF and Neu3D datasets. The results demonstrate the effectiveness of our model on the real-world datasets.
**Q5: Use average results of the time and space consumption instead of a range.**
A5: We provide the detailed average metrics in the following table. We will add the table to our paper in the final version.
| Model | Training Time | GPU Memory | FPS | PSNR |
| :----------: | :-----------: | :--------: | :--: | :---: |
| 4D-GS | 20min | 1GB | 160 | 34.11 |
| DeformGS | 33min | 4.5GB | 62 | 38.26 |
| SC-GS | 75min | 3.1GB | 179 | 39.56 |
| Grid4D(Ours) | 55min | 4.0GB | 153 | 39.86 |
---
Rebuttal Comment 1.1:
Comment: Thank you for your comprehensive answer. Most of my problems have been solved. I would like to update my final score to borderline acceptance. | Summary: The paper presents a grid-based method to compute deformed gaussians to render dynamic scenes from input images. It proposes to perform a 3D decomposition of the 4D input, using multiresolution hash encoding to access spatial (static) and temporal (dynamic) feature vectors. The static features are fed to an MLP responsible of computing a directional attention score, which is composed with temporal features coming from another MLP, resulting in the deformation features. Those features pass through a multi-head decoder, which infer the deformation parameters. Those parameters are used to deform the set of canonical gaussians, which are rasterized to generate the final image.
Strengths: (1) Contextualization is very good, cited works are sufficient. The Introduction and Related Work sections may be successfully used for familiarization with the fields needed for proper understanding of the paper.
(2) Every part of the method is intuitively presented and the Math holds.
(3) Mathematical notation is very clean.
(4) I liked the introduction of an attention mechanism in this context and the idea behind the smooth regularization.
Weaknesses: (1) I am not convinced about the temporal smoothness and coherence of the proposed method. The interpolations needed for grid-based methods are known to introduce discontinuities even in static cases (NGLOD, Instant-NGP). The paper does not present a supplementary video nor a temporal coherence metric (such as FID-VID or FVD). Moreover, all presented experiments are based on specific frames of the datasets. It is not currently possible to qualitatively or quantitatively evaluate this aspect of the proposed approach, which is crucial for a 4D method. This is the core reason behind my current rating.
Since the authors are not allowed to add a supplementary video nor include a link in the rebuttal (as far as I know), I believe a way to answer this critic with data would be to show a table with temporal coherence metrics. However, I encourage the authors to present other arguments if they find creative ways to be convincing. I liked the paper and I am willing to increase my rating in case the authors are convincing in this subject.
Technical Quality: 2
Clarity: 3
Questions for Authors: (1) Small presentation advice: at the introduction, it would be better to talk about Figure 1 before Figure 2, because Figure 1 is very good as a primary description of how the 4D encoding is decomposed. Starting with Figure 2 is a little bit more confusing.
(2) I think the Preliminaries section should include a small introduction to multiresolution hash encoding too. It would help the reader to have better context for the remaining of the method section.
(3) I would like to know the values of $\lambda_c$ and $\lambda_r$.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes, the limitations are presented in the supplementary.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the constructive feedback. We hope that our response below will address your concerns.
**Q1: How to demonstrate the temporal coherence of the rendering results?**
A1: To demonstrate the temporal coherence of our model, we provide some videos through the included images in Figure 3 of the top comment PDF. We uniformly interpolate the time and randomly select a camera pose to render the scene reconstructed by Grid4D, and display every 5th frame (out of 150 frames). We will provide more videos on our GitHub repository in the final version.
In the test stage of the dynamic scene rendering task, the camera poses and timestamps are usually not selected from a video sequence. To our knowledge, there is no existing metric to evaluate the temporal coherence of a video without the ground truth. To further demonstrate the temporal coherence of the rendering results, we attempt to design a simple metric to evaluate the coherence. We use the optical flow estimation model RAFT[1] to predict the forward and backward flow between two neighbouring frames of the rendered video. Similar to the Chamfer Distance, we calculate the mean of the total pixel displacements, as in the following formulas:
$F^{forward} = \frac{1}{NHW}\sum_{i, h, w}||f_{ihw}^{forward}||_2$
$F^{backward} = \frac{1}{NHW}\sum_{i, h, w}||f_{ihw}^{backward}||_2$
$F = \frac{1}{2}(F^{forward}+F^{backward})$
where $N, H, W$ denote the number of frames, the height, and the width of the video, and $f$ denotes the corresponding flow. The unit of $F$ is pixels. A lower value of $F$ indicates better temporal coherence in the video.
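The metric above can be sketched as follows (a minimal NumPy sketch; the function name and the $(N, H, W, 2)$ array layout are our own assumptions, with the flows assumed precomputed, e.g. by RAFT):

```python
import numpy as np

def mean_flow_displacement(forward_flow: np.ndarray,
                           backward_flow: np.ndarray) -> float:
    """Mean per-pixel displacement F between neighbouring frames.

    Both inputs are assumed to have shape (N, H, W, 2), holding the
    (dx, dy) optical flow for each pixel of each frame pair; the result
    averages the forward and backward mean flow magnitudes.
    """
    f_forward = np.linalg.norm(forward_flow, axis=-1).mean()
    f_backward = np.linalg.norm(backward_flow, axis=-1).mean()
    return 0.5 * (f_forward + f_backward)
```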
We use the official pretrained RAFT[1] model from PyTorch to estimate the flow. We uniformly interpolate 150 timestamps along the timeline and randomly select a camera pose for our model to render the video. The results on the D-NeRF dataset are listed in the following table. For reference, we randomly select a video from the Kinetics-400[2] dataset and calculate the same metric $F$.
| Bouncing Balls | Hell Warrior | Hook | Jumping Jacks | Lego | Mutant | Standup | Trex | *Reference* |
| :------------: | :----------: | :--: | :-----------: | :---: | :----: | :-----: | :---: | :--: |
| 0.248 | 0.673 | 1.014 | 1.799 | 0.220 | 0.324 | 1.331 | 0.576 | *2.137* |
According to the results, the average pixel displacements of the rendered videos are all below 2.0 pixels, which demonstrates the temporal coherence of our rendering results. However, the metric mentioned above is not well explored and might need further research.
**Q2: What are the values of $\lambda_c$ and $\lambda_r$?**
A2: In our experiments, $\lambda_c$ and $\lambda_r$ are set as follows: $\lambda_c=0.2$ for all scenes. $\lambda_r=0.5$ for most scenes, $\lambda_r=1.0$ for the 'vrig' part of HyperNeRF dataset.
In most cases, setting $\lambda_r=0.5$ yields reliable results. For better performance, increasing $\lambda_r$ helps Grid4D render better when the scene primarily consists of rigid objects and simple motions. Similarly, decreasing $\lambda_r$ helps Grid4D model complex motions and non-rigid objects.
**Q3: Problems of writing.**
A3: We will add the introduction of Multi-resolution Hash Encoding and the description of Figure 1 in the final version.
[1] Zachary Teed and Jia Deng. Raft: Recurrent all-pairs field transforms for optical flow. In Computer Vision–ECCV 2020:16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II 16, pages 402–419. Springer, 2020.
[2] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950,2017.
---
Rebuttal 2:
Comment: Thank you for considering the review and the efforts for answering it. I have no further questions. I will reflect a little bit about the presented data about temporal smoothness and coherence and will update my rating to reflect the conclusions. | Summary: The paper introduces Grid4D, a novel dynamic scene rendering model that leverages hash encoding for 4D input decomposition, enabling high-quality and speedy rendering of dynamic scenes.
Unlike traditional plane-based methods that suffer from excessive feature overlap due to low-rank assumptions, Grid4D uses a tri-axial 4D decomposition into spatial and temporal 3D hash encodings.
This approach, combined with a novel directional attention mechanism, improves the discrimination and aggregation of deformation features, enhancing both the visual quality and rendering speed of dynamic scenes.
Strengths: Grid4D's use of 4D decomposed hash encoding addresses the limitations of plane-based methods by effectively reducing feature overlap, which enhances feature discriminability and rendering accuracy. The novel attention mechanism aligns well with the diverse deformations across different scene components, allowing for more accurate deformation prediction and rendering. It is also good to demonstrate superiority over SoTA models.
Weaknesses: 1. Despite its advancements in rendering performance, Grid4D does not significantly improve the training speed compared to existing models.
2. The sophisticated architecture involving multiple hash encodings and the directional attention mechanism might complicate the implementation and tuning of the model.
3. The model's complexity and the specific tuning required for different datasets might limit its generalizability or lead to overfitting on particular types of dynamic scenes.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. How does Grid4D handle scenes with highly non-uniform motion patterns?
2. Could the directional attention mechanism be adapted for use in other types of neural network architectures beyond scene rendering?
3. What are the specific challenges faced when applying Grid4D to real-world datasets compared to synthetic ones?
4. How does the smooth regularization impact the model's performance in terms of real-time rendering capabilities?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: 1. Grid4D shows limited improvement in training speed.
2. It does not fully address challenges related to extremely dynamic or unpredictable environments.
3. The effectiveness of the model heavily relies on the novel encoding and attention mechanisms, which might not translate as effectively to different rendering tasks or simpler scene compositions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the constructive feedback. We hope that our response below will address your concerns.
**Q1: Grid4D does not significantly improve the training speed in comparison to existing models.**
A1: Although our model does not improve training speed, it achieves a good balance between time consumption and performance. Additionally, the GPU memory consumed by our model is comparable to DeformGS and SC-GS. We report the average training time, GPU memory, FPS, and PSNR on the D-NeRF dataset in the following table.
| Model | Training Time | GPU Memory | FPS | PSNR |
| :----------: | :-----------: | :--------: | :--: | :---: |
| 4D-GS | 20min | 1GB | 160 | 34.11 |
| DeformGS | 33min | 4.5GB | 62 | 38.26 |
| SC-GS | 75min | 3.1GB | 179 | 39.56 |
| Grid4D(Ours) | 55min | 4.0GB | 153 | 39.86 |
**Q2: Will the complexity and specific tuning of Grid4D limit its generalizability?**
A2: Although our model has multiple hash encodings, it has fewer reconstruction failures than DeformGS on the real-world HyperNeRF dataset, as shown in 'Teapot' (last row of Figure 9 in our paper) and 'Hand' scenes (middle row of Figure 11 in our paper).
To demonstrate the versatility of our model, we also apply Grid4D to the Neu3D [1] dataset, which has larger scenes and more complex motions. We report PSNR in the following table. The qualitative results can be found in Figure 1 of the top comment PDF. From this table, one can see that although our model has multiple hash encodings, it can still yield better performance. We will add these experiments to our paper in the final version.
| Model | Coffee Martini | Cook Spinach | Cut Beef | Flame Salmon | Flame Steak | Sear Steak | Mean |
| :----------: | :------------: | :----------: | :-------: | :----------: | :---------: | :--------: | :-------: |
| 4D-GS | 27.34 | 32.50 | 32.26 | 27.99 | 32.54 | **33.44** | 31.01 |
| Grid4D(Ours) | **28.30** | **32.58** | **33.22** | **29.12** | **32.56** | 33.16 | **31.49** |
The main tuning of our model is the resolution of the temporal hash encoder because different scenes have different numbers of frames. Additionally, according to the Nyquist-Shannon Sampling Theorem, the temporal resolution can be set to half or a quarter of the frame number automatically by the algorithm.
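A minimal sketch of this tuning rule (a hypothetical helper for illustration, not code from our implementation):

```python
# Illustrative only: derive the temporal hash-grid resolution from the
# number of frames, per the Nyquist-Shannon argument above. divisor=2
# corresponds to half the frame number, divisor=4 to a quarter.
def temporal_resolution(num_frames: int, divisor: int = 2) -> int:
    return max(1, num_frames // divisor)
```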
**Q3: How does Grid4D handle scenes with highly non-uniform motions?**
A3: Firstly, explicit representation methods are much more flexible than implicit ones, making Grid4D better suited for modeling highly non-uniform motions. Secondly, our hash encoder decomposes 4D coordinates into four 3D coordinates. When encoding heavily overlapping 4D inputs, the resulting 3D coordinates share fewer identical components. Therefore, the encoded explicit features overlap less, resulting in a more accurate representation of the corresponding non-uniform motions.
In our experiments, the 'Hand', 'Peel Banana', and 'Chocolate' scenes in the HyperNeRF dataset contain highly non-uniform motions. As listed in Table 5, Figure 9, and Figure 10 of our paper, our method achieves significantly better rendering quality than the state-of-the-art models.
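A minimal sketch of the decomposition (our illustration, not the released implementation): a 4D coordinate (x, y, z, t) is split into its four 3D sub-coordinates, each of which would be encoded by its own 3D hash grid.

```python
import itertools

# Illustrative only: split a 4D coordinate into its four 3D sub-coordinates.
# Two heavily overlapping 4D inputs share fewer of these 3D projections,
# so their encoded explicit features overlap less.
def decompose_4d(coord):
    x, y, z, t = coord
    # all ordered 3-element subsets of (x, y, z, t)
    return list(itertools.combinations((x, y, z, t), 3))
```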
**Q4: Can the directional attention mechanism be adapted for other uses?**
A4: We consider that directional attention could also be used in 4D generation models for temporal feature selection. For example, we could generate the directional attention score from the features unrelated to the timestamp, and apply the score to the other features.
**Q5: What are the specific challenges faced when applying Grid4D to real-world datasets in comparison to synthetic ones?**
A5: The main challenge is that real-world scenes usually have imprecise camera poses, which might lead to failure or degradation of rendering. Another challenge is that real-world scenes usually contain more complex motions.
For the imprecise camera poses, the smooth regularization term helps the model maintain consistency in neighboring regions. Such smoothness makes the model tend to fit the average of the results captured under imprecise camera poses, which improves its robustness. We can increase the weight of the smooth regularization term to handle more imprecise camera poses in real-world datasets.
To handle complex motions, our model improves the discriminability of the explicit features. Encoded by our 4D decomposed hash encoder, the explicit features have fewer overlapping parts, resulting in a more accurate representation of the corresponding non-uniform motions. We can increase the resolution of the 4D decomposed hash encoder to model more complex motions in real-world datasets.
**Q6: How does the smooth regularization impact the real-time rendering capabilities of the model?**
A6: The smooth regularization does not affect real-time rendering capabilities. During rendering, the smooth regularization term and other losses are not computed by the model. Smooth regularization is mainly used in the training process and improves the rendering quality.
[1] Li, Tianye, et al. "Neural 3d video synthesis from multi-view video." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2022. | Rebuttal 1:
Rebuttal: We thank all four reviewers for their constructive comments on how to improve our paper. We will provide individual responses below. The qualitative results on the real-world Neu3D dataset, the qualitative comparison to SC-GS, and a simple video exhibition can be found in the following PDF.
Pdf: /pdf/2e1b1dffb0f5f156b9cd614179dbba0dd71e5a96.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A scalable generative model for dynamical system reconstruction from neuroimaging data | Accept (poster) | Summary: This paper proposes an SSM-based DSR algorithm, the convolution SSM model (convSSM), to recover the underlying systems. To prevent exploding gradients, the model is trained by SGD with generalized teacher forcing (GTF), for both (pseudo-)invertible and signal-convolved decoders. After validating and studying the proposed method in simulations (Lorenz63 and ALN), the authors apply it to an experimental fMRI dataset (LEMON). Overall, the paper provides a scalable model for dynamical system reconstruction, which can be helpful for both inference and prediction tasks.
Strengths: 1. The proposed method (convSSM trained via SGD+GTF) scales efficiently with model size and convolution length
2. The model selection strategy based on short empirical time series is practically useful.
3. The proposed models can reliably extract key dynamical system features.
Weaknesses: From the modeling and inference perspective, the main contribution of the paper is training the convSSM with GTF for a signal-convolution observation model by exploiting the linearity of Wiener deconvolution. The proposed method mainly addresses the difficulty that the current observation depends on the whole history of latent states, which is common in real applications, especially in neuroscience.
However, the evolution of latent states described by the system equation can already capture some correlation with previous states, i.e., the current latent state depends on the history of previous states. Modeling the observation equation as a function of the whole latent history further captures the remaining correlation structure, which is missed by the AR(1) assumption in the system equation. But doing this makes inference harder. It would be great to show the necessity of this more complicated observation model, rather than putting all correlation modeling into the latent space (such as using a GP, which is clearer and usually easier for inference), based on some evaluation criteria.
Technical Quality: 4
Clarity: 3
Questions for Authors: The paper is clearly written. But the major question is the same as the weakness: why is it necessary to put correlation modeling into the observation equation, thereby making inference more difficult?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Using AR(1) for the system equation while simultaneously making the current observation explicitly depend on the whole latent history may make inference difficult.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Summary**
We thank the reviewer for taking the time to comment on and read our manuscript in detail, as well as for this supportive and positive evaluation. We provide new results and figures in an uploaded PDF, and replies to the questions and comments below. A full list of references is listed in the general rebuttal to all reviewers.
**Weaknesses**
[“*W1*”]
Thank you for your detailed review. We would like to clarify a few points regarding our model, which may not have been fully articulated in the paper. First, the AR(1) component, representing the Markov property in our latent model, is crucial because it ensures that the model constitutes a true dynamical system. If the dynamics in state space were not Markovian, it would mean the state space is not complete (lacking dimensions/variables) and trajectories would not be uniquely resolved [13, 15]. For a true state space, the future must be determined solely by the current position, and the same state cannot be associated with different futures. This is where the delay embedding theorems come into play for partially observed systems [16].
An AR(1) process is therefore essential to recover the underlying dynamical system and to analyze its dynamical properties, such as the maximum Lyapunov exponent, unambiguously (e.g., Fig. R2).
Second, our model is designed to reconstruct the generative dynamical system underlying neural signals measured through functional magnetic resonance imaging (fMRI). These neural signals are not directly accessible due to the delay in the delivery of nutrients and oxygen to cells, which results in a convolution with the hemodynamic response function (HRF). To address this, we convolve the underlying neural state with the HRF, a standard practice in the fMRI research community. This approach allows us to accurately model the neural dynamics by separating them from the neurovascular response.
By incorporating the HRF into our observation model, we disentangle the neural state and its dynamics from the neurovascular mechanics, which are not the focus of interest. We will further clarify this aspect in the updated version of our paper.
**Questions**
[“*Q1*”]
The necessity of having an observation model that relates a history of states to the observed measurement stems from the actual biophysical properties of the blood-oxygenation-level-dependent (BOLD) signal measured using magnetic resonance imaging. As alluded to in response to *W1* above, when neurons in the brain are active they require oxygen. The fMRI signal measures changes in blood oxygenation in response to neural activity (the BOLD signal). To provide oxygen and nutrients, there is therefore a change in blood flow, however, it comes at a delay and with a characteristic shape, accounted for by the HRF. So the reason we have to convolve with the HRF is simply due to the biological properties that give rise to the data. We therefore agree that it makes inference more difficult, however, it is necessary to capture the underlying neural process in an unbiased manner (as now also illustrated in Figs. R1 & R2).
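A minimal sketch of this observation model (an assumed toy kernel for illustration, not our actual HRF parameterization or code):

```python
import numpy as np

# Illustrative only: the observed BOLD signal is the latent neural state
# convolved with a hemodynamic response function (HRF), so x_t depends on
# the history z_t, ..., z_{t-len(hrf)+1} rather than on z_t alone.
def bold_observation(latent, hrf):
    T = len(latent)
    full = np.convolve(latent, hrf)  # 'full' convolution, length T + len(hrf) - 1
    return full[:T]                  # truncate to the observed horizon (causal)

# toy example: an impulse in the latent state is smeared out by the HRF
z = np.zeros(10); z[2] = 1.0
hrf = np.array([0.2, 0.5, 0.3])     # assumed toy kernel, not a real HRF
x = bold_observation(z, hrf)
```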
**Limitations**
[“*L1*”]
As mentioned earlier, while incorporating additional lags into the latent model might simplify inference from a function approximation perspective, it would not result in a true dynamical system (see also response to *W1*). Our primary interest lies in accurately recovering the underlying system dynamics, which requires preserving the Markov property inherent to a true dynamical system. This focus allows us to explore the system's fundamental dynamics, as highlighted in [6]. Our approach ensures that we maintain the integrity of the system recovery, which is crucial for our analysis and objectives.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their detailed explanations. Most of my concerns are resolved, and hence I raised my rating by one point.
---
Reply to Comment 1.1.1:
Title: Response to comment
Comment: We thank the reviewer for engaging with our rebuttal, and are happy to hear we could address the open points! | Summary: The authors introduce a novel algorithm for dynamic system reconstruction (DSR) suited for systems where current observables depend on an entire history of previous states, which notably includes fMRI signals (BOLD signals) and calcium imaging, as both signals are filtered with a response function. The algorithm extends a previous class of methods (teacher forcing - TF) and employs a deconvolution pre-training step where the deconvolved signal is recovered via Wiener deconvolution, which is then used as in the standard TF paradigm. The pre-training step scales linearly with system size, enabling efficient application to large datasets. Results on synthetic datasets show better performance of the novel algorithm as opposed to standard TF.
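For concreteness, the Wiener deconvolution step can be sketched as follows (a generic frequency-domain Wiener filter with an assumed flat SNR; illustrative only, not the paper's exact implementation, which estimates noise spectra via VISUSHRINK):

```python
import numpy as np

def wiener_deconvolve(x, h, snr=100.0):
    """Estimate the latent signal z from x = h (*) z + noise (circular conv.).

    The Wiener filter in frequency space is G = conj(H) / (|H|^2 + 1/SNR);
    a flat SNR is assumed here purely for illustration.
    """
    n = len(x)
    H = np.fft.rfft(h, n)
    X = np.fft.rfft(x, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.fft.irfft(G * X, n)
```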
Strengths: - *Originality & Significance:* The paper presents a novel algorithm (convSSM) that extends the applicability of DSR to systems where the observables depend on the history of the latent variables. This is particularly significant as this class encompasses relevant experimental settings in neuroscience such as BOLD signals in fMRI or calcium imaging. Moreover the presented algorithm shows improved performance as compared to the previous (sota) techniques.
- *Quality & Clarity:* The paper is nicely crafted and written, with clearly organized sections. The experiments are designed with careful control to ensure meaningful comparisons.
Weaknesses: - In the application of the DSR technique to the empirical LEMON dataset the comparison with the competing standard SSM technique is missing. It would have been helpful to identify whether the two techniques differed significantly in a real-world application and whether the novel algorithm offered better insights.
- Minor point: In Figure 2B/C is not clear whether the plots of $x_1$ and $x_3$ belong to panel B or not as panel C sits in between. Panel labels have generally exaggerated font size.
Technical Quality: 4
Clarity: 4
Questions for Authors: - In Figure 2D, leftmost panel, the conv model does not seem to improve on the quality of the observables. Does this have implications for the generative-mode of the inferred models, i.e. the conv-SSM do not offer better generative performances?
- How does the standard SSM technique perform on the LEMON dataset? Does the convSSM method provide better insights?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have adequately addressed the limitations of their work
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Summary**
We thank the reviewer for taking the time to comment on and read our manuscript in detail, as well as for this supportive and positive evaluation. We provide new results and figures in an uploaded PDF.
**Weaknesses**
[“*W1*”]
We apologize for the error in the table labeling. The standard is indeed found in Table 1, where it is referred to as LinSSM (for linear SSM). We will correct this.
Overall, the convSSM outperforms the standard SSM on the Lorenz63 benchmark and on the ALN data set in latent space. It performs comparably to the standard SSM on the LEMON data, where the latent space is inaccessible. We therefore dissected the performance contributions of the convSSM in detail by adding an additional (simple-to-visualize, 2D) benchmark, the Van der Pol nonlinear oscillator (VdP), which we could specifically adjust to produce oscillations in a frequency range consistent with the empirical fMRI data. We show that the convSSM outperforms the standard SSM in the latent space (in which the VdP lives; Fig. R1), even if we deconvolve the standard SSM in latent space. This demonstrates that the standard SSM, while a very powerful tool for dynamical system reconstruction when trained by GTF, is not able to reproduce the true underlying process without bias. We also show that this can result in biased estimates of maximum Lyapunov exponents (as now illustrated on the Lorenz system where the max. Lyapunov exponent is known, Fig. R2).
In a sense, compared to the standard SSM trained with GTF, the convSSM implements a biological prior, which enables it to often capture strongly low-pass filtered (convolved) processes more efficiently, and to provide a more accurate description of the underlying latent dynamics. In that sense, yes, we believe the novel algorithm offers better mechanistic insights; the main finding with respect to the empirical data is that we predominantly find chaotic systems.
[“*W2/Minor point*”]
Thank you for pointing this out. We will adjust Figure 2B/C to make it more clear, and reduce the font size in the panel labels.
**Questions**
[“*Q1*”]
As pointed out in the response to *W1*, the latent dynamics is what we want to analyze in terms of computational mechanisms, and therefore care about reproducing accurately. The standard SSM trained by GTF produces biased estimates of this system (as now further illustrated in Figs. R1& R2 in the uploaded PDF). Therefore, while yes, the prediction performance is comparable, the analysis of the generative mechanisms will be flawed.
[“*Q2*”]
Please see our response to *W1*, where we address these questions in detail.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
thank you once more for your time and effort in reviewing our work. We would kindly like to ask if our rebuttal adequately addressed your questions and concerns, and whether there are any remaining questions we can clarify. | Summary: The paper proposes a teacher forcing (TF) mechanism for a latent variable model where the dynamics evolve according to a (deterministic) piecewise linear RNN model and the observations are linear projections of the latent space convolved with a filter (in the case of BOLD signals, the form of that filter is known and it’s called hemodynamic response). In the presence of nuisance parameters (such as residual motion, etc.) a linear term is added to the observation to control for their effect and isolate them from dynamics. TF proves useful for robust training of the model and does a better job in recovering long-term and topological features of the chaotic attractors. Two examples in low-d and high-d synthetic data and one example in a real fMRI dataset (called LEMON) are shown.
Strengths: The literature on using control theoretic ideas for model fitting and training has been expanding. This work takes a natural step towards extending TF to latent variable models with deterministic dynamics.
The writing quality is good, which makes it easy to follow the arguments and contributions.
Most time series models focus on short-term prediction and use MSE to quantify the errors. Although I'm not fully convinced (see the questions and weaknesses), developing metrics that assess the topological properties of attractors is an interesting idea.
Weaknesses: The work has several weaknesses and limitations detailed below. I’m open to discussion about any of them.
- As the title and narrative of the paper suggest, the work is mostly useful for the fMRI data where a well-established convolution filter exists. Extending this to arbitrary (and learnable) observation filters or nonlinear transformations sounds non-trivial.
- The model does not incorporate noise in the dynamics space. Given the main motivation for developing the work is to analyze real-world fMRI data the common assumption of the existence of noisy dynamics is a sensible assumption to make and lack of noise in the dynamics can limit the expressivity of the model. Some control theoretic ideas for noisy dynamics are proposed before \[2,3\], a discussion on this would be helpful to orient readers.
- I didn’t quite understand why the authors used a PLRNN in the latent space. To me, it sounds like any generic RNN should be able to take advantage of teacher forcing in the proposed framework. If this is true, please include examples with other generic RNNs (with tanh or other nonlinearities). Otherwise please explain why it’s critical to use PLRNN as the dynamics model.
- The introduction could be improved. Some relevant models are not introduced or discussed. In my understanding, some of the mean-field theory is developed not to fit the data, but to gain theoretical insight into the population dynamics. In contrast, many latent variable models are developed in the field that are not discussed in the introduction (such as nonlinear LDS, switching linear LDS and its variants, gaussian process latent variable models, data-constrained RNNs, etc.).
- Some of the results go against the main motivation and claims of the paper. Detailed comments are included below.
**Unknown observation filter**
- For the effect of _hrf_, deconvolution methods have been proposed previously. In your framework, you still run deconvolution (through using convolution in the generative model). It’s unclear whether the history-dependence still holds after controlling for the effect of _hrf_. This statistical dependency needs to be shown to motivate your work.
- _“hrf with alternative functions if we want to account for filtering in the original signal”_
Related to this, what about the cases in which the _hrf_ is not known and an a priori observation model does not exist (e.g. a neural network maps the latents to the observations)?
- In addition, the noise spectra are another unknown, which is estimated using the VISUSHRINK algorithm (as discussed in the appendix). How robust are these choices to misspecification? This robustness analysis is important given that it's almost never the case that we can capture the true noise spectra or _hrf_.
**Results against the claims/motivation**
- _“\[40\] proved that for chaotic systems gradient-based training techniques for RNNs will inevitably lead to diverging loss gradients.”_
This is true under some assumptions, but for specific architectures, this might not hold. In order to motivate your work, it’s crucial to empirically show that the gradients diverge without TF. In fact, your results show that the model without TF does a pretty good job in learning the task and it even outperforms the models with TF on short-term performance measures. Given that the main motivation for the paper is avoiding exploding and vanishing gradients through teacher forcing, it's important to first show that it’s indeed a problem for this specific model (and datasets).
- On the LEMON dataset the latent dimension is set to be equal to the observation dimension. Some parts of the motivation of the paper come from the low dimensionality of the latent space. If the best-performing models on real data are the cases in which the latent and observation dimensions are equal, then the traditional teacher-forcing methods are equally applicable. What do the authors think about this?
**Fig. 2**
- Fig. 2E,F are labeled incorrectly?
- Unclear what’s shown in Fig. 2A.
- What are the takeaways from this figure?
- In Fig. 2C, can you include $\\lambda_{max}$ distributions from other models (specifically convNoGTF model)?
**Long-term performance measures**
- The precise definition of performance measures is in the appendix. Can you reference the appendix section in the main text for readers who want to learn more about the definitions of performance measures?
- It’s important to see how long-term and short-term performance measures are correlated. These results are in the appendix. Can the authors include references in the main text?
- In the figure, the authors show that short-term and long-term prediction errors are correlated. However, in the LEMON dataset, this trend does not seem to hold. How do the authors explain this?
- The 10-step prediction error is always better for the models without GTF. This is very surprising. First, it shows that GTF is not necessary for this model and the models still learn the task without suffering from vanishing or exploding gradients (which goes against the main motivation of the paper).
- It looks like estimating $D_{PSE}$ in high dimensions is associated with challenges. This is perhaps confounded with dynamics and noise itself too. How trustworthy are these estimates? In other words, how much should we read into these performance measures?
- On the LEMON dataset (based on Table 1) it looks like that model without GTF achieves lower prediction error but higher $D_{PSE}$ and $D_{stsp}$. What exactly does this mean? Given the correlations reported between long and short-term performance measures don’t we expect a successful model to outperform baseline models in both measures?
**More comments**
_“This is not automatically given for standard RNN”_
No model that I’m aware of has the capability of precisely predicting neural data if there’s no clear trial structure due to non-stationarity.
_“guiding the training process through optimally chosen control signals – also referred to as teacher forcing (TF) signals”_
At least the definition that I’m familiar with does not fully coincide with this. In my understanding, there’s no notion of optimality in the control signals and they’re usually driven by a teacher model or the output feedback.
_“Chaotic systems in particular, as typically encountered in neuroscience (e.g., \[53, 21, 32\]), pose a severe problem here”_
Another paper considered the theoretical aspects of training RNNs on chaotic data \[1\], please cite and discuss it.
_“but not if it depends on a history of states”_
The assumption of history dependence is a very common assumption made in most latent variable models.
\[1\] Engelken, Rainer, Fred Wolf, and Larry F. Abbott. "Lyapunov spectra of chaotic recurrent neural networks." Physical Review Research 5.4 (2023): 043044.
\[2\] Brenner, Manuel, Georgia Koppe, and Daniel Durstewitz. "Multimodal teacher forcing for reconstructing nonlinear dynamical systems." arXiv preprint arXiv:2212.07892 (2022).
\[3\] Schimel, Marine, et al. "iLQR-VAE: control-based learning of input-driven dynamics with applications to neural data." bioRxiv (2021): 2021-10.
Technical Quality: 3
Clarity: 3
Questions for Authors: The effect of nuisance variables is considered to be linear; why is this a good assumption?
The latent space for the Lorenz model is 3 dimensional, why do you need L=50 to model this dataset (or even larger in the next experiment)?
Comparisons with many other latent variable models are not shown. Specifically, the neuroscience community has developed a suite of latent variable models with linear, piecewise linear, or nonlinear dynamic models and linear or nonlinear observation models. How do those models compare to the datasets presented here?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Summary**
Thank you for your detailed review! All refs. are found in the general rebuttal to all revs.
**Weaknesses**
[“*W1*”]
The conv. filter could in principle also be learnable by parameterizing its length and weights. Here our focus lay on incorporating biological prior knowledge on the HRF to more accurately reconstruct the dynamics of the underlying system (Figs. R1&R2).
[“*W2*”]
We chose BPTT as it performs competitively even for highly noisy data [2], focusing on a strong and scalable approach. However, our framework can easily be adapted to variational inference [2-4]. The key is making the conv. filter amenable to generalized teacher forcing (GTF). Will discuss.
[“*W3*”]
Yes, as shown in [4], the architecture for dynamical systems reconstruction (DSR) is very flexible. Other act. func. also yield good performance (eg $D_{stsp}(ReLU)= 0.29 \pm 0.47$; $D_{stsp}(tanh)=0.61 \pm 1.61 $ on Lorenz). The PLRNN was chosen as it consistently yields best performance in low dims. [5], and is math. tractable in the sense that many of its topological properties can be determined semi-analytically [6,7].
[“*W4*”]
We will elaborate on neurosci. models like DCMs [8], SLDS [9], and gaussian process models [10], and their usages apart from DSR [6].
**Unknown observation filter (UF)**
[“*UF1*”]
We are unsure we fully understand the question. The biophys. assumption is that the obs. signal is generated from the latent process via conv. By incorporating this prior, we achieve more accurate reconstructions (Figs. R1&2). Should we demonstrate that before deconv. $p(x_t|z_t) \neq p(x_t|z_t…z_{t-\tau})$, while after deconv. $p(x_t|z_t) = p(x_t|z_t…z_{t-\tau})$? We are unaware of a model-independent method to show this?
[“*UF2*”]
See *W1*.
[“*UF3*”]
In Fig. R6, we now show noise level inference in VISUSHRINK for time series with varying noise levels and conv. filters. We also emphasize that the Lorenz exps. (sect. 3.2) demonstrate the model's robustness in inferring GT systems using *default* VISUSHRINK settings, indicating resilience to misspecification.
**Results against the claims (RC)**
[“*RC1*”]
The proof in [40] is not related to architecture, but follows from the properties of GD techniques and the chain rule (the same product series of Jacobians occurs in the def. of the max. Lyap. exponent as in the loss derivatives, causing the problem). So it is indeed quite general as long as GD-based training algos. are used. We now illustrate div. (Fig. R4A) and the problem of models that architecturally prevent div. (Fig. R4B).
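To make this connection concrete, consider a minimal toy example (the 1D logistic map, purely illustrative and not our model): the maximal Lyapunov exponent is the average log-magnitude of exactly the step-wise derivatives whose product also appears in the BPTT loss gradients, which is why chaos implies diverging gradients.

```python
import numpy as np

def max_lyapunov_logistic(r=4.0, z0=0.3, T=100_000):
    """Max Lyapunov exponent of z_{t+1} = r z_t (1 - z_t) as the running
    average of log|f'(z_t)|. The same product of step-wise derivatives
    (Jacobians) appears in gradient-based RNN training, so lambda_max > 0
    forces exploding loss gradients regardless of architecture."""
    z, lam = z0, 0.0
    for _ in range(T):
        lam += np.log(abs(r * (1.0 - 2.0 * z)))  # log|f'(z_t)|
        z = r * z * (1.0 - z)
    return lam / T
```

For r = 4 the known value is ln 2, which the running average approaches for long T.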
[“*RC2*”]
We assume by trad. TF the ref. means that during training the obs. are provided as inputs? GTF is designed to achieve a fine balance between obs.-inferred and forward-iterated latent states that optimally controls trajectory and gradient flows. Traditional ‘ad-hoc’ methods do not work here [5, 11]. Also, trad. TF for DSR requires a mechanism to ensure consistent interpret. of model inputs during training and runtime; directly using obs. as inputs necessitates replacing them with predicted obs., causing the latent DS to lose its Markov property due to conv. Finally, the optimal model dims. are not always of equal dim., but depend on dataset and no. of obs.
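As a rough sketch of this balance (our paraphrase of the GTF interpolation, with a hypothetical weight alpha; not the actual training loop):

```python
import numpy as np

def gtf_step(z_forward, z_inferred, alpha):
    """Convex combination of the forward-iterated latent state and the
    observation-inferred state. alpha = 0 recovers free-running generation,
    alpha = 1 fully clamps to the data-inferred state; GTF tunes alpha to
    control trajectory and gradient flows."""
    return alpha * np.asarray(z_inferred) + (1.0 - alpha) * np.asarray(z_forward)
```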
**Fig. 2**
Fig. 2E,F: Thanks, will be adapted.
Fig. 2A: Visualizes the reconstruct. performance to provide an intuition on the applied measures. Will clarify.
Fig. 2C: Yes, Fig. R3.
**Long-term performance measures (PM)**
[“*PM1,2*”]
We are happy to reference Appx. defs. and figs. in the main text.
[“*PM3*”]
We apologize for any confusion and assume the question pertains to Fig. 5 (Appx.). We compare predictions assessed on both short and longer data to show that our PMs can be accurately evaluated on short data. This is shown only on ALN data (since LEMON data is short). Fig. 5B bottom is most relevant for empirical eval, comparing PMs on 500 vs. 5000 data points. All 3 scores correlate with $r \geq 0.72$, which we find satisfactory for assessing DSR. Will clarify.
[“*PM4*”]
There may be a misunderstanding. Our goal with DSR is to achieve agreement in long-term behavior between recons. and true systems [6]. In chaotic systems, short-term forecasting measures like PEs *can be lower for worse reconstructions* [5-6,12], as poor models might capture mean trends or dominant osc. periods that *deviate less from the true signal on average* than accurate models that capture chaos but therefore cause exponential diverg. of trajectories [11] (Fig. R6). We now show that noGTF models frequently *inaccurately* converge to fixed points and osc. (Figs. R3&4).
[“*PM5*”]
For $D_{PSE}$, dimensionality is irrelevant since it is performed dimension-wise. For $D_{stsp}$, efficient high-dim. approximations using GMMs are available and used here (see [5,12] for eval). Thus the main question was whether these measures are reliable on short time series.
[“*PM6*”]
See *PM4*.
**More comments (MC)**
[“*MC1*”]
Agreed, the point here is not precise prediction, but rather capturing long-term temporal and geometric structure [6]. Will clarify.
[“*MC3*”]
Happy to cite and discuss.
[“*MC4*”]
Yes, true, but it still constitutes a problem for DSR. A proper DS model must be Markovian by def. to ensure the uniqueness of trajectories [13].
**Questions**
[“*Q1*”]
We followed the established practice of modeling fMRI data with linear nuisance vars. but emphasize that our approach can easily accommodate nonlinear effects.
[“*Q2*”]
The applied RNN is piecewise linear and more pieces are needed to approx. the nonlinearities of the true eqns.
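This is easy to see with a generic piecewise-linear approximation (an illustrative sketch, not specific to the PLRNN used in the paper): the maximum error of approximating a smooth nonlinearity with K linear pieces decays roughly like 1/K², so strongly nonlinear true equations require many pieces.

```python
# Generic illustration: error of piecewise-linear interpolation vs. number of pieces.
import math

def pl_interp_error(f, K, n_test=1000):
    # Max error of interpolating f on [0, 1] with K equal-width linear pieces.
    knots = [i / K for i in range(K + 1)]
    vals = [f(kn) for kn in knots]
    def interp(x):
        i = min(K - 1, int(x * K))
        t = (x - knots[i]) * K
        return vals[i] * (1 - t) + vals[i + 1] * t
    return max(abs(f(j / n_test) - interp(j / n_test)) for j in range(n_test + 1))

f = lambda x: math.sin(2 * math.pi * x)
errs = {K: pl_interp_error(f, K) for K in (4, 16, 64)}
# errs shrinks roughly quadratically in K (about 0.2, 0.02, 0.001 here).
```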
[“*Q3*”]
We specifically chose SOTA models for *DSR*. Linear latent models cannot produce DS properties like limit cycles and chaos. Most neurosci. models focus on inferring connectivity params., with only a few addressing DSR [eg 14]. While open to comparisons, prior experience makes us confident that these models are no match in DSR performance.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
thank you again for your time and effort in reviewing our manuscript. We would kindly like to ask if our rebuttal adequately addressed your major concerns, or if there are any remaining questions we can clarify.
Due to the strict character limit, our responses were partly brief. We are happy to elaborate on points that may yet be unclear.
---
Rebuttal Comment 1.2:
Title: Updated review
Comment: Thank you for replying to all questions and comments. In particular, the addition of new experiments and results are very helpful for a more in-depth understanding of the paper. I will slightly increase my score and would appreciate it if the authors include the following.
**W1-3,Q1)** Can you include these two in the discussion? Specifically, can you describe in the discussion part of the paper how to extend the method to learnable filters (W1), how to extend to variational schemes for models with noise in the latent space (W2), how to extend GTF to arbitrary choices of the architecture in addition to PLRNN (W3), and how to extend to nonlinear nuisance model (Q1)? This could open up new applications of the method and make the paper more accessible to a wider audience. Would you include the updated discussion here for a review?
**UF1)** A simple check is to look at the mutual information between x_t, x_{t-1:t-\tau} and compare it to the mutual information between z_t, z_{t-1:t-\tau}. If these two are largely different, it shows that deconvolution has successfully removed the history dependence.
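The mutual-information check suggested above could be sketched as follows (illustrative only; the bin count and the simple plug-in histogram estimator are arbitrary choices, not anything from the paper):

```python
# Sketch of the proposed diagnostic: compare how quickly lagged MI decays for
# the observed series x versus the deconvolved latents z.
import math
import random

def mi_hist(a, b, bins=8):
    # Plug-in mutual-information estimate (in nats) from a 2-D histogram.
    lo_a, hi_a, lo_b, hi_b = min(a), max(a), min(b), max(b)
    def idx(v, lo, hi):
        return min(bins - 1, int((v - lo) / (hi - lo + 1e-12) * bins))
    n, joint = len(a), {}
    for u, v in zip(a, b):
        key = (idx(u, lo_a, hi_a), idx(v, lo_b, hi_b))
        joint[key] = joint.get(key, 0) + 1
    pa, pb = {}, {}
    for (i, j), c in joint.items():
        pa[i] = pa.get(i, 0) + c
        pb[j] = pb.get(j, 0) + c
    return sum(c / n * math.log(c * n / (pa[i] * pb[j]))
               for (i, j), c in joint.items())

def lagged_mi(series, tau, bins=8):
    return mi_hist(series[tau:], series[:-tau], bins)

# Surrogate autocorrelated series standing in for a real recording; in practice
# one would plot lagged_mi over tau for both x and z and compare the decay.
rng = random.Random(0)
a, state = [], 0.0
for _ in range(2000):
    state = 0.9 * state + rng.gauss(0.0, 1.0)
    a.append(state)
m1, m50 = lagged_mi(a, 1), lagged_mi(a, 50)
```

For this autocorrelated toy series the MI at lag 1 is substantial while at lag 50 it has decayed towards the estimator's small positive bias.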
**UF3, RC1)** The new experiments address these two.
**RC2)** Can you add some details on model selection to the discussion? Specifically, how did you set the latent dimension for specific datasets? This would be important for practitioners who’d be interested in applying this model to other datasets.
**PM3)** Apologies for the unreferenced question. This question is mostly addressed by Fig. R3,4,6 but just to double-check; from what I understand after reading the rebuttal the authors are arguing against the use of PE (either 1-step or 10-step) as it doesn’t represent “long-term” behavior of the dynamical systems and instead suggest using D_{stsp} or D_{PSE}. Is the point of Fig. 2E to say that increasing the horizon of prediction doesn’t help with better quantification of DSR?
**Q2)** Previous switching linear approaches have shown successful reconstruction using 2 pieces. I guess my question is why do we need the number of states to be this large for a good reconstruction?
**Q3)** At least a comparison against one of the switching models (e.g. SLDS or rSLDS) and a nonlinear model (e.g. LFADS) would be very helpful just to make your point.
---
Reply to Comment 1.2.1:
Title: Response to updated review
Comment: W1-3,Q1) This is a very good idea, thank you for the great suggestions! We add here two paragraphs that we will integrate into the Discussion:
“We emphasize that the proposed framework is highly flexible due to its modular structure, and may be easily adapted to meet diverse requirements. First, the latent model can be replaced with any other differentiable and recursive dynamical model, such as e.g. LSTMs. The GTF training framework would remain unchanged as the control signal and the latent state update (eqn. (3)) are not affected by such modifications [5]. Likewise, the observation model can easily be adapted to account for nonlinear effects of nuisance covariates, e.g. through basis expansions in these variables, or through learnable but regularized MLPs. While our model was designed as a scalable method to integrate biological prior knowledge on convolution filters like the HRF, alternatively we can parameterize the filter weights within the observation model, making them learnable through BPTT, with filter length either as a hyperparameter, or by imposing a regularization that truncates filter length by driving coefficients to zero. To prevent conflicts between filter adjustment and latent model, a viable strategy may be stage-wise learning as suggested in [12]. Once the filter is adjusted, one may reduce the learning rate on the observation model, or even fix its parameters, to prioritize learning of the dynamics. Fixing the filter parameters after an initial stage would have the advantage that subsequent training would enjoy the same speed benefits as in our suggested method.
Finally, we would like to highlight that our framework could be adapted to accommodate noise in the latent process. For example, in Brenner et al. [4] the GTF procedure has been modified to work in the context of stochastic DSR models using variational inference. The key idea lies in introducing a (multimodal) variational autoencoder that takes the observed variables as input and maps them into the DSR model’s latent space, thereby generating the control signal required for GTF. In a similar fashion, we could replace the MVAE with the inversion in eqn. (10), thereby providing a TF signal to be used to steer a probabilistic latent DS model, i.e. its distributional mean, via eqn. (3), and use the reparameterization trick [17,18] for BP in the latent space. However, although probabilistic frameworks are appealing, ‘deterministic’ BPTT has previously been shown to be (at least) comparable in terms of DSR performance, even for noisy observations and/or processes [2], such that the benefits for DSR would need to be further examined.”
17) Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In Proceedings of the 31st International Conference on Machine Learning, 2014.
18) Kingma, D. P. and Welling, M. Auto-Encoding Variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations, 2014.
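The learnable-filter strategy described in the quoted Discussion paragraph above (learnable taps with a regularizer that truncates filter length by driving coefficients to zero) could look like this hypothetical sketch; all names, sizes, and the plain proximal-SGD optimizer are our inventions for illustration, not the paper's actual training setup:

```python
# Hypothetical sketch: L1 soft-thresholding drives superfluous filter taps
# to zero, effectively truncating a deliberately over-long learnable filter.
import random

rng = random.Random(0)
true_h = [1.0, 0.6, 0.2]                            # ground-truth filter (3 taps)
z = [rng.uniform(-1.0, 1.0) for _ in range(400)]    # surrogate latent signal
x = [sum(true_h[k] * z[t - k] for k in range(len(true_h)) if t - k >= 0)
     for t in range(len(z))]                        # convolved "observations"

K, lam, lr = 6, 0.01, 0.05                          # over-long filter, L1 weight, step size
h = [0.0] * K
for _ in range(200):
    grad = [0.0] * K
    for t in range(K, len(z)):
        err = sum(h[k] * z[t - k] for k in range(K)) - x[t]
        for k in range(K):
            grad[k] += 2.0 * err * z[t - k] / (len(z) - K)
    h = [hk - lr * g for hk, g in zip(h, grad)]
    # Proximal L1 step: shrink every tap toward zero (soft-thresholding).
    h = [(abs(hk) - lr * lam) * (1.0 if hk > 0 else -1.0)
         if abs(hk) > lr * lam else 0.0 for hk in h]
# The three true taps are recovered (slightly shrunk by the L1 penalty),
# while the superfluous taps h[3:] are driven to zero.
```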
UF1) In a deterministic DS observed within the full state space, the complete future of a trajectory is fully determined by its initial condition, so we would not expect the MI to decay to 0 with $\tau \rightarrow \infty$ (and in fact, we checked this and it does not), but we would expect that the previous state contains all information about the current state if the deconvolution worked as intended. So - in our minds - the crucial question would be whether we have $p(z_t|z_{t-1:t-\tau})=p(z_t|z_{t-1})$ in latent space, but $p(x_t|x_{t-1:t-\tau}) \neq p(x_t|x_{t-1})$ in observation space (i.e., confirming the Markov property in latent but not observation space). These high-dimensional multiple state probabilities are, however, hard to access. For the revision, we will look into different ways we can approximate them, to illustrate this property. | Summary: This paper introduces two techniques, pseudo-inverse and deconvolution, for dynamical system reconstruction. The two techniques are used to help the teacher forcing for the latent sequence $z_t$ so that the learning can be more efficient. Experimental results show the effectiveness of the proposed ConvSSM + GTF compared with alternative variants.
Strengths: * The two techniques seem effective for training an SSM model and for dynamical system reconstruction.
* Experimental results on both synthetic and real-world datasets show that ConvSSM with GTF is better than others.
* This work might have a broader impact on the field of computational neuroscience, since many models there are based on dynamical systems.
Weaknesses: * From my understanding, there is actually no new model but different ways of training a particular model, although they are called ConvSSM, LinSSM, etc.
* In most recent dynamical systems work, the latent RNN procedure is not a deterministic process; rather, the latent sequence $z_t$ is usually treated as a latent variable with some noise, such as Gaussian, at each time step. It is not clear whether this work can be generalized to them.
* The presentation should be improved. It is quite hard, at least for me, to get the main idea of the model until arriving at line 125. Many sentences in the abstract and introduction seem a bit verbose and distracting.
Technical Quality: 2
Clarity: 2
Questions for Authors: /
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: /
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Summary**
We thank the reviewer for taking the time to comment on and read our manuscript in detail. We provide new results and figures in an uploaded PDF. A complete list of references can be found in the general rebuttal to all reviewers.
**Weaknesses**
[“*W1*”]
We see our major contributions as follows:
1) We developed a first-of-its-kind approach that makes efficient large-scale inference of dynamical systems (DS) models from empirical systems observed through convolved signals feasible. This is achieved by reformulating an SSM such that a) it becomes amenable to recent, highly efficient control-theoretic training methods for DS reconstruction, and b) the deconvolution is performed in a computationally very effective way. This leads to an algorithm which efficiently scales up to large amounts of data. In our minds this is of huge practical relevance for the field, as it enables the construction of a large battery of single subject-level models in comparatively short time (and, after all, the tremendous success of some recent methods, e.g. structured SSMs like Mamba [1], lies less with the novelty of the ingredients per se than with their high computational efficiency).
2) Apart from methods development, we also, for the first time, explicitly demonstrate that training and DS assessment of the models is indeed valid even on such short time series as provided by fMRI, which we think is important for the community to know. We also show that the Lyapunov spectrum can be retrieved from the trained models, something that is not possible directly on the experimental data itself.
We believe that the novelty of our contribution should therefore be judged by how this whole framework and its validation could advance the field by opening up new possibilities, not just by how novel the underlying latent model is.
[“*W2*”]
The choice to use conventional (deterministic) backpropagation through time (BPTT) as the training algorithm is based on previous findings that, surprisingly, in the context of DS reconstruction BPTT outperforms probabilistic training algorithms like those based on variational inference (VI; [2]), even when the data are in fact highly noisy. Nonetheless, we can easily adapt the framework presented here to work within a VI training framework, as suggested in the works of [2-4]. The crucial step is to make the convolutional filter amenable to generalized teacher forcing (GTF), which we have done here.
[“*W3*”]
Thank you for pointing this out. We will rework the presentation to emphasize the crucial ideas behind the model.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
thank you once more for your time and effort in reviewing our work. We would kindly like to ask if we were able to address your concerns, and whether there are any remaining questions we can clarify. | Rebuttal 1:
Rebuttal: **General response**
We thank all reviewers for their positive and supportive feedback, for taking the time to review our work, and for providing many helpful comments and suggestions, which we address in detail below. We have prepared a PDF file with additional material and results.
Specifically, we provide additional analyses that demonstrate once more the superiority of the introduced convSSM model over the standard SSM, both trained with generalized teacher forcing (GTF). Our results now illustrate that the convSSM produces less bias in dynamical systems reconstruction, resulting in improved performance measures (Fig. R1), as well as more accurate estimation of dynamical systems phenomena (Fig. R2).
We also explicitly show that training without GTF leads to more frequent gradient explosions (Fig. R4A), or alternatively, when gradient explosions are avoided through architectural adjustments, to more bias in model estimates due to the inability to accurately reconstruct chaotic phenomena (Fig. R4B & Fig. R3). Additionally, we illustrate the robustness of the VISUSHRINK algorithm to misspecification (Fig. R5) and explain why prediction errors (e.g., $PE_{10}$) are not an appropriate performance measure for dynamical systems reconstruction (Fig. R6). Finally, we provide an update to Fig. 2C (Fig. R3).
We hope these additions address the reviewers' main questions and concerns. Due to the character limitations, we had to significantly condense our responses at times. We are happy to provide more detailed answers to any further questions upon request.
**References**
1) Gu & Dao. (2023). Mamba: Linear-time sequence modeling with selective state spaces. *arXiv preprint* arXiv:2312.00752
2) Brenner et al. Tractable dendritic RNNs for reconstructing nonlinear dynamical systems. In *Proc. 39th International Conference on Machine Learning* (eds. Chaudhuri, K. et al.) 2292–2320 (PMLR, 2022).
3) Kramer et al. Reconstructing nonlinear dynamical systems from multi-modal time series. In *Proc. 39th International Conference on Machine Learning* (eds Chaudhuri, K. et al.) 11613–11633 (PMLR, 2022).
4) Brenner et al. Multimodal teacher forcing for reconstructing nonlinear dynamical systems. In *Proc. 41st International Conference on Machine Learning* (2024).
5) Hess et al. Generalized teacher forcing for learning chaotic dynamics. In *Proc. 40th International Conference on Machine Learning* (eds Krause, A. et al.) 13017–13049 (PMLR, 2023).
6) Durstewitz et al. (2023). Reconstructing computational system dynamics from neural data with recurrent neural networks. *Nature Reviews Neuroscience, 24(11)*, 693-710.
7) Eisenmann et al. (2024). Bifurcations and loss jumps in RNN training. *Advances in Neural Information Processing Systems, 36*.
8) Friston et al. (2003). Dynamic causal modelling. *Neuroimage 19*, 1273–1302
9) Ghahramani & Hinton. (2000). Variational learning for switching state-space models. *Neural Computation 12*, 831–864
10) Yu et al. (2009). Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. *Journal of Neurophysiology 102*, 614–635
11) Mikhaeil et al. On the difficulty of learning chaotic dynamics with RNNs. In *Proc. 35th Conference on Neural Information Processing Systems* (eds. Koyejo, S. et al.) (Curran Associates, Inc., 2022).
12) Koppe et al. (2019). Identifying nonlinear dynamical systems via generative recurrent neural networks with applications to fMRI. *PLoS Computational Biology, 15(8)*, e1007263.
13) Perko, L (2013). Differential equations and dynamical systems (Vol. 7). Springer Science & Business Media.
14) Singh et al. (2020). Estimation and validation of individualized dynamic brain models with resting state fMRI. *Neuroimage, 221*:117046.
15) Strogatz, SH. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering (CRC, 2018).
16) Sauer et al. (1991). Embedology. *Journal of Statistical Physics 65*, 579–616
Pdf: /pdf/377a6d2a482644fa009350c8b36e98205fe9499b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Are We on the Right Way for Evaluating Large Vision-Language Models? | Accept (poster) | Summary: This paper investigates the evaluation of large vision-language models (LVLMs) and the currently used benchmarks. Within the paper two primary issues are identified: the lack of need for visual information, and data leakage. Based on these issues a new benchmark, MMStar, is proposed, comprising a set of human-validated multi-modal samples, and a variety of models are evaluated on this new benchmark.
Strengths: With the rapid progress of the field of LLM and LVLMs, it is crucial that we have reliable benchmarks for evaluation - this work explores existing benchmarks, identifies potential issues therein, and proposes a new benchmark which contributes to the quality of evaluation in the field. Additionally, the work proposes new metrics for evaluation and performs a benchmark evaluation of various models.
Weaknesses: 1) Random Chance
An issue in this work is that it insufficiently accounts for random guessing and data bias. As shown in [27], the ground-truth chance for it being answer A or B on MMBench is 26.4% (due to having questions with fewer than 4 options), and moreover, certain models may prefer certain answers. LLMs may thus get correct answers even for questions that require visual input. Given the 26.4% baseline for MMBench from [27], it is also surprising that the random choice value reported in this paper, for MMBench, is 0.0 in both Table 1 and Table 2. Moreover, an additional baseline based on majority class may also be necessary here.
For the results in Figure 2, which require 6 out of 8 models to get a hit, it is unlikely that random guessing has a big influence. However, for the results in Table 1 this is not the case. For instance, it appears that all MMBench results in Table 1 are below the 26.4% from [27] - which means all LLMs do worse than always answering A (or B). For other benchmarks there may similarly be data biases in which answer is more frequent.
Relatedly, it may be the case that such data biases about which options are more frequently chosen are more pronounced for multi-modal questions, i.e., when asked about colour the answer is more often grey across all datasets and settings - which means LLMs do not learn this data bias, as their training data does not include such questions, but LVLMs may learn it because the same bias is present in their training data. While this is unlikely to fully explain the phenomenon observed in Table 2, it may explain part of it and not be directly related to data leakage.
2) Not all benchmarks
The issues identified with existing benchmarks do not hold evenly across the benchmarks tested. In particular, MMBench and, to a slightly lesser extent, MathVista seem to do pretty well with respect to these issues. This also appears to be reflected in the construction of MMStar, where after Manual Review the proportion of questions from these two existing benchmarks jumps considerably - in the end making up almost half of the MMStar benchmark. This raises the question of whether there is any benefit to using MMStar versus simply using MathVista and MMBench.
3) Manual review
The manual review description is insufficiently clear. Of the three criteria applied, only the first one is somewhat clear - for the other two it is unclear how the 'experts' judged this. I would expect a description (which can be in the appendix) of agreement rates between these experts for these criteria, as well as a further description of what these criteria entail.
4) New metrics
The newly proposed metrics MG and ML are somewhat unclear in what they measure. If the 1500 questions in MMStar all require visual input as determined by the manual review, then what additional information does the MG metric give? Given the discussion above about chance, it appears the MG metric is more of a correction for random guessing. The ML metric similarly doesn't account for random guessing, or the aforementioned potential for data biases.
Technical Quality: 2
Clarity: 3
Questions for Authors: The paper raises a number of interesting questions and points out limitations of (some) existing benchmarks. Unfortunately, the data leakage issue appears to be not fully disentangled from the potential of data biases. I would appreciate input from the authors on this point.
Additionally, I would be interested in discussion on random guessing within the paper, and whether the ideas presented in [27] may address this - and then subsequently how this influences the findings in the paper. Also taking into account the majority class versus random guess.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: I believe question 9 in the paper checklist has been answered incorrectly - or at least the justification given is not valid. Given that the paper discusses a benchmark that may include images of people or copyrighted material it is crucial that the authors affirm whether the work has been done in accordance to the ethical guidelines. Even if the data is a compilation of existing datasets - by selecting and combining information from these the new resulting dataset may be biased in ways the original datasets were not (e.g., by selecting only those images containing users from a certain demographic).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable suggestions. We address your concerns point by point:
**Q1: Given the 26.4% baseline for MMbench from [27], it is also surprising that the random choice value reported in this paper, for MMbench, is 0.0 in both Table 1 and Table 2.**
**A1:** The 26.4% in MMBench represents the frequent choice probability. We use MMBench's circular evaluation method, where:
1. Options are shuffled and evaluated multiple times
2. All evaluations must be correct for the sample to be considered correct
Probability of random guessing:
- Two-choice: 1/4
- Three-choice: 1/27
- Four-choice: 1/256
MMBench has 77 two-choice, 221 three-choice, and 1462 four-choice questions, resulting in a theoretical accuracy of approximately 1.88%. Using VLMEvalKit, the files include options C and D for all samples, making the generalized random choice accuracy about 0.39%. Actual random selection shows an accuracy of 0.01%, rounded to 0.0% in Tables 1 and 2. This reflects the presence of samples that do not rely on visual content or have been leaked in the LLM training data, aligning with our findings.
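The arithmetic above can be checked in a few lines (question counts taken from this rebuttal): under circular evaluation an m-option question is evaluated m times with shuffled options, and all passes must be correct, so a uniform random guesser succeeds with probability (1/m)^m.

```python
# Back-of-the-envelope check of the quoted ~1.88% theoretical accuracy.
counts = {2: 77, 3: 221, 4: 1462}   # MMBench: number of options -> number of questions
expected = sum(n * (1.0 / m) ** m for m, n in counts.items())
acc = expected / sum(counts.values())
print(round(100 * acc, 2))  # prints 1.88 (%), matching the figure quoted above
```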
**Q2: An additional baseline based on majority class may also be necessary here.**
**A2:** Good point! We conducted a frequent choice evaluation on existing benchmarks and MMStar:
- MMMU|26.8
- MMB|0.0
- ScienceQA|36.0
- AI2D|26.9
- SEED|26.9
- MathVista|26.3
- MMStar|29.8
The result of 0.0% for MMBench is due to circular evaluation, too. We chose not to use the circular evaluation mechanism for MMStar, consistent with most other benchmarks. We'll include these results in the next version.
**Q3: For the results in Figure 2, which require 6 out of 8 models...which means all LLM do worse than always answering A (or B).**
**A3:** This misunderstanding is resolved by explaining the circular evaluation for MMBench. The 0.0% results for random choice and frequent choice are lower than the 13.8% average accuracy for 22 LLMs. This indicates some samples don't rely on visual content or have been leaked in LLM training data, aligning with our observations.
**Q4: The issues identified with existing benchmarks...whether there is any benefit of using MMStar versus simply using MathVista and MMBench.**
**A4:** On one hand, MMBench uses circular evaluation, so the 0.0% random choice accuracy is much lower than the 13.8% average for the 22 LLMs. On the other hand, MathVista's 17.9% random choice accuracy is also lower than the 22.5% average for the 22 LLMs. While MathVista focuses on mathematics, our MMStar covers six core competencies and 18 detailed axes, offering a more comprehensive evaluation of LVLMs' multimodal capabilities. Additionally, we manually verified that each MMStar sample is visually dependent, a guarantee that neither MMBench nor MathVista can provide.
**Q5: The manual review description is insufficiently clear...as well as further description of what these criteria entail.**
**A5:** Due to space constraints, we have detailed the manual review process and the agreement rates in the General Author Rebuttal.
**Q6: The newly proposed metrics MG and ML...account for random guessing, or the aforementioned potential for data biases.**
**A6:** The score of an LVLM on a multimodal benchmark can be split into three parts: leakage from LLMs, leakage during multimodal training, and genuine understanding from multimodal training. Multi-modal Leakage (ML) measures the second part, and Multi-modal Gain (MG) measures the third. These metrics complement each other and should not be considered separately.
These metrics are not exclusive to MMStar; they can analyze existing benchmarks that may not ensure visual dependency. Although MMStar ensures visual dependency, some samples might still leak into future LVLMs' training corpora. In such cases, MG and ML can assess multi-modal training leakage and performance gains.
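Our reading of this decomposition can be sketched as follows (a hedged sketch based on this rebuttal's description; the paper gives the authoritative definitions). Scores come from three evaluation strategies: the LLM backbone alone ("LLM"), the LVLM queried without images ("LVLM-text"), and the full LVLM with images ("LVLM").

```python
# Sketch of the two supplementary metrics as described in A6 above.

def multimodal_gain(score_lvlm, score_lvlm_text):
    # Part 3: genuine understanding gained from actually seeing the image.
    return score_lvlm - score_lvlm_text

def multimodal_leakage(score_lvlm_text, score_llm):
    # Part 2: what leaked in during *multimodal* training (clipped at zero).
    return max(0.0, score_lvlm_text - score_llm)

# Using LLaVA-1.5's MMBench scores from the circular-evaluation table in A8:
mg = multimodal_gain(65.0, 19.5)      # 45.5
ml = multimodal_leakage(19.5, 10.3)   # ~9.2
```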
**Q7: The paper raises a number of interesting questions...input from the authors on this point.**
**A7:** We have provided clarifications in our previous responses to Q1, Q2, and Q3, explaining that we adopt MMBench's native circular evaluation mechanism. Therefore, the 0.0% accuracy for both random choice and frequent choice is significantly lower than the average accuracy of 13.8% for the 22 LLMs, which indicates the presence of data leakage in the LLMs.
**Q8: Additionally, I would be interested...Also taking into account the majority class versus random guess.**
**A8:** We add circular evaluation results for MMBench, AI2D, and MMStar using two representative LVLMs. One can observe from the table that:
- Random choice and frequent choice results are close to 0% under circular evaluation
- Data leakage in LLMs observed (e.g., InternVL-Chat-v1.2's LLM achieves 47.3% on AI2D, surpassing LLaVA-1.5's performance with images)
- Multimodal training data leakage evident (e.g., LLaVA-1.5 improves MMBench and AI2D performance by 8.8% and 14.3% without image input)
- MMStar mitigates sample leakage in LLM and LVLM training corpora while maintaining the challenge level
| Model | Strategy | MMB | AI2D | MMStar |
|--------------------|-----------|:----:|:----:|:------:|
| Random Choice | - | 0.0 | 0.2 | 0.1 |
| Frequent Choice | - | 0.0 | 0.0 | 0.0 |
| LLaVA-1.5 | LLM | 10.3 | 18.3 | 1.7 |
| | LVLM-text | 19.5 | 32.6 | 6.7 |
| | LVLM | 65.0 | 41.7 | 19.0 |
| InternVL-Chat-v1.2 | LLM | 20.1 | 47.3 | 4.1 |
| | LVLM-text | 23.9 | 53.3 | 11.1 |
| | LVLM | 82.4 | 71.7 | 45.6 |
**Q9: I believe question 9 in the paper checklist has been answered incorrectly.**
**A9:** We have carefully reviewed the Code of Ethics of NeurIPS and cross-checked each requirement to ensure compliance. We will update this answer to [Yes] in the next version of the manuscript.
Please do not hesitate to contact us if you have any further questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed reply and clarifications, with this my concerns are addressed, I will update my score to a weak accept.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Dear Reviewer sNEM:
Thank you for your time and patience in reviewing our submission and rebuttal. Your appreciation for our work, along with the questions and suggestions you provided, has greatly helped us improve the quality of our work.
Best regards and thanks,
Paper 2491 Authors | Summary: The current benchmarks used to evaluate Vision Language Models (VLMs) contain several flaws. In particular, a lot of questions can be answered without looking at the image at all. These benchmarks still being hard, the best proprietary models without looking at the images can obtain better scores than strong VLM baselines (looking at the images). As a result, the authors create MMStar, a difficult benchmark aiming at evaluating the capability of the vision-language tasks. They manually review their benchmark, and do several ablations to confirm its importance.
Strengths: - The authors provide a benchmark that is hard to be good at without looking at the images. This is a problem for some questions in MMMU and MathVista currently.
- The authors manually reviewed the examples of the benchmark.
- The dataset is nicely divided into 6 subtasks, evaluating different aspects. The fact that there is exactly the same number of examples in each of these subtasks is appreciated.
- The fact that each question is a MCQ, instead of an open-ended question that would be difficult to evaluate due to the different output formats of the models, is also appreciated.
Weaknesses: - In the released dataset, the choices are directly integrated into the prompt. It would be good to also add a column with only the original question, and another column containing the list with the possible options, so that researchers could evaluate their models with the prompts they used during their fine-tuning.
- As the authors mentioned, it would be useful to also create a test set for this benchmark.
- It would have probably made more sense to publish this in the Datasets and Benchmarks track.
Technical Quality: 3
Clarity: 3
Questions for Authors: I personally noticed many hallucinations in the SEED benchmark, with some Q/A pairs that are simply false.
Since the Q/A pairs from SEED represent 28.3% of your dataset, I am worried that you would have such incorrect pairs in your benchmark.
Can you confirm that this was removed during the manual filtering?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your thorough review and appreciation of our work. Below, we address your concerns point by point:
**Q1: In the released dataset, the choices are directly integrated into the prompt. It would be good to also add a column with only the original question, and another column containing the list with the possible options, so that researchers could evaluate their models with the prompts they used during their fine-tuning.**
**A1:** Good point! We will revise the format as per your suggestions for the public release of the benchmark. Here is a description of the revised columns for the benchmark:
- **Index**: The sample number in the benchmark.
- **Question**: Contains only the question itself without options.
- **Image**: The content of the image.
- **Options**: Lists the content of all options.
- **Answer**: A single letter indicating the correct answer.
- **Category**: Indicates the core dimension to which the sample belongs.
- **L2 Category**: Indicates the detailed axis to which the sample belongs.
- **Meta Info**: Indicates the source of the sample from previous multimodal benchmarks.
We hope this format will facilitate evaluation within the LVLMs community.
**Q2: As the authors mentioned, it would be useful to also create a test set for this benchmark.**
**A2:** Thank you for your valuable feedback. We will strive to create a private test set in the future. We believe this test set can help the LVLMs community more comprehensively and fairly evaluate the actual multi-modal capabilities of existing models.
**Q3: It would have probably made more sense to publish this in the Datasets and Benchmarks track.**
**A3:** In fact, our work begins with two important and interesting observations regarding the evaluation of existing LVLMs. The first observation is that many samples in current benchmarks can be correctly answered without relying on visual content. The second observation is the phenomenon of data leakage hidden in the final evaluation scores of LVLMs. The second observation is derived from a detailed analysis of many carefully constructed experimental results, revealing the potential data leakage in LLMs and LVLMs that has not been adequately addressed in the current LVLM evaluation field.
Based on these two observations, we propose two solutions. The first is a meticulously constructed benchmark that ensures all samples are visually dependent and are, as far as possible, not leaked in the training corpus of LLMs. However, this benchmark cannot entirely prevent some samples from being present in the training data of LVLMs. Additionally, it cannot guarantee that the samples will remain unexposed to LLMs and LVLMs introduced after the benchmark's creation date. Therefore, we also innovatively propose two supplementary metrics: Multi-modal Gain and Multi-modal Leakage.
It is important to note that these two metrics are not exclusively tied to MMStar. They can be used to measure the extent of data leakage and the actual multimodal capability improvements gained from multimodal training in any benchmark. These metrics allow researchers to evaluate multimodal training leakage and benefit levels at any time, independent of the benchmark's creation date.
Therefore, the contributions of this work include two important and intuitive observations, a benchmark, and a set of multimodal evaluation methodologies. After careful discussion, we believe this work is more suitable for the main track.
**Q4: I personally noticed many hallucinations in the SEED benchmark, with some Q/A pairs that are simply false. Since the Q/A pairs from SEED represent 28.3% of your dataset, I am worried that you would have such incorrect pairs in your benchmark. Can you confirm that this was removed during the manual filtering?**
**A4:** Astute observation! The occurrence of hallucinations in samples seems inevitable for large-scale benchmarks. Compared to the initial candidate pool of 14,000 samples from SEED, we retained only around 400 samples in our final benchmark, significantly reducing the cost of manual review. Therefore, all samples in MMStar underwent cross-validation by three experts to minimize the issue of hallucinations as much as possible.
Your constructive comments and criticisms will greatly assist us in improving this work. Please do not hesitate to contact us if you have any further questions.
---
Rebuttal Comment 1.1:
Title: Answer to authors
Comment: Thank you for answering the questions.
---
Reply to Comment 1.1.1:
Title: Thanks for the appreciation
Comment: Thank you for your prompt response and appreciation. Your suggestions have indeed helped us improve the quality of this work. | Summary: In this paper, the authors examine current benchmarks for large vision-language models (LVLMs) and identify two main problems: 1) many samples do not require visual content, and 2) there is unintentional data leakage in LLM and LVLM training. To address these issues, they developed a multimodal benchmark called MMStar, consisting of 1,500 samples, and proposed two metrics to measure data leakage and performance gain in LVLMs’ multimodal training. They conducted empirical evaluations on 16 LVLMs to report their performance on MMStar.
Strengths: 1. The paper is well-organized and easy to follow.
2. The motivation behind the study is clear, and the empirical analysis is thorough.
3. Data curation for MMStar is comprehensively explained.
4. The proposed performance metrics are intuitive and effectively presented.
Weaknesses: 1. The authors only consider multiple-choice questions for the MMStar benchmark. Including a wider variety of well-curated questions without choices would be great.
2. Similar to Figure 2, the authors should provide the LLM Hit Rate for the MMStar benchmark.
3. What is the percentage distribution of the 1,500 samples across the four difficulty categories?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please respond to the points of weakness I mentioned above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are encouraged to see that you found our work intuitive, containing extensive experiments and empirical analysis, and well-written. We have endeavored to address your concerns as follows:
**Q1: The authors only consider multiple-choice questions for the MMStar benchmark. Including a wider variety of well-curated questions without choices would be great.**
**A1:** Thank you for your suggestion. In the current version of the MMStar benchmark, we choose to use a multiple-choice question format for two reasons:
1. Most mainstream multi-modal benchmarks use a multiple-choice format. We carefully select samples from these benchmarks to construct a comprehensive, fully visually dependent benchmark, so MMStar adopts the multiple-choice format as well.
2. The multiple-choice format allows for more objective evaluation, avoiding fluctuations caused by variations in LLM versions (differences in LLM capabilities). For instance, in FreeVA [1], it was observed that the results of some open-ended multimodal benchmarks were easily influenced by the version of the GPT API used.
However, if the questions and prompts given to the language model are appropriate, open-ended questions without options can indeed better assess the capabilities of LVLMs. We are open to exploring this approach in future work.
**Q2: Similar to Figure 2, the authors should provide the LLM Hit Rate for the MMStar benchmark.**
**A2:** Thanks for your valuable feedback. The LLM Hit Rate of MMStar is 0%, which is significantly lower than the lowest LLM Hit Rate of 10.3% observed in the previous 6 benchmarks. This exceptionally low LLM Hit Rate is a result of our meticulously designed benchmark construction pipeline. In the first step, we only select samples from 6 existing benchmarks that are hit 2 times or fewer by 8 advanced LLMs. We have detailed the statistics of the hit counts in MMStar in the table below. As shown, all samples in MMStar have hit counts far less than 6, resulting in an LLM Hit Rate of 0%. We will add them to the supplementary materials of the next version.
| Number of hits | Number of samples |
|----------------|-------------------|
| 0 | 848 |
| 1 | 392 |
| 2 | 260 |
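The 0% LLM Hit Rate can be read off directly from these counts. Below is a minimal sketch (ours, not from the paper); the hit threshold of 6 out of the 8 LLMs is our reading of the description above, not a stated definition:

```python
hits = {0: 848, 1: 392, 2: 260}  # hit count -> number of MMStar samples
total = sum(hits.values())
assert total == 1500  # matches the benchmark size

THRESHOLD = 6  # assumed: a sample is "hit" if >= 6 of the 8 LLMs answer it correctly
hit_rate = sum(n for h, n in hits.items() if h >= THRESHOLD) / total
print(f"LLM Hit Rate: {hit_rate:.1%}")  # LLM Hit Rate: 0.0%
```

Since every retained sample was answered correctly by at most 2 LLMs, the rate is 0% under any threshold above 2.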
**Q3: What is the percentage distribution of the 1,500 samples across the four difficulty categories?**
**A3:** Thank you for your thorough review and reminder. In the table below, we present the difficulty distribution of samples in MMStar. As shown, nearly 80% of the samples are answered correctly by at most half (8) of the LVLMs, with easy samples comprising less than 10% of the total. This highlights MMStar's focus on challenging samples that require advanced multimodal capabilities from LVLMs.
| Difficulty level | Number of samples |
|------------------|-------------------|
| Tough (0-3) | 532 |
| Hard (4-7) | 631 |
| Moderate (8-11) | 189 |
| Easy (12-16) | 148 |
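The percentages behind these claims can be reproduced directly from the table; a small sketch of ours:

```python
levels = {  # difficulty level -> number of samples
    "Tough (0-3)": 532,
    "Hard (4-7)": 631,
    "Moderate (8-11)": 189,
    "Easy (12-16)": 148,
}
total = sum(levels.values())
assert total == 1500

# Nearly 80% of samples are answered correctly by at most half (8) of the 16 LVLMs:
at_most_half = (levels["Tough (0-3)"] + levels["Hard (4-7)"]) / total
print(f"Tough + Hard: {at_most_half:.1%}")             # Tough + Hard: 77.5%
# Easy samples comprise less than 10% of the total:
print(f"Easy: {levels['Easy (12-16)'] / total:.1%}")   # Easy: 9.9%
```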
Your constructive comments and criticisms will greatly assist us in improving this work. Please do not hesitate to contact us if you have any further questions.
[1] FreeVA: Offline MLLM as Training-Free Video Assistant | Summary: The authors have identified two primary concerns with the benchmarks commonly used for large vision-language models (LVLMs). Firstly, many samples do not require visual content to answer the questions. Secondly, they noted unintentional data leakage during LVLM training. They assessed eight large language models (LLMs) across six widely-used multi-modal LLM benchmarks, demonstrating that LLMs can correctly answer a significant portion of questions without visual input. To more reliably evaluate LVLM performance, they developed a new benchmark by meticulously filtering data from six existing benchmarks with three requirements: 1) visual dependency, 2) minimal data leakage and 3) multi-modal capability for resolutions. Additionally, they designed two metrics: Multi-modal Gain, to quantify the improvement from multi-modal training, and Multi-modal Leakage, to assess the extent of potential data leakage. Using this new benchmark and the two metrics, they provide a comprehensive comparison of state-of-the-art LVLMs.
Strengths: - The paper is well-structured and clearly articulated, facilitating ease of comprehension.
- The evaluation process is meticulously designed, and the conclusions drawn from it are convincing.
- The findings presented in this paper meaningfully impact multi-modal large language model (LLM) research. Researchers have depended heavily on benchmarks without thoroughly examining their quality. The authors question the reliability of evaluations based on these benchmarks. Without a reliable benchmark, it is impossible to faithfully measure actual multi-modal gain. They developed a new benchmark, MMStar, which facilitates more reliable evaluations.
- Using the MMStar benchmark, the authors evaluated two closed-source and fourteen open-source large vision-language models (LVLMs), with the results presented in Table 3. As expected, GPT4 emerged as the top performer in five out of six tasks. Additionally, they underscored the efficacy of smaller-scale models by highlighting that TinyLLaVa, a model with 3 billion parameters, outperformed some larger competitors with 7 billion and 13 billion parameters, thereby emphasizing the potential of smaller-scale LVLMs.
Weaknesses: - The proposed metrics, Multi-modal Gain and Multi-modal Leakage, are dependent on the base LLM utilized in the large vision-language models. This dependency complicates the use of these metrics for directly comparing the multi-modal gain across different LVLMs.
- The manual review step aggressively reduces the MMStar benchmark from 11,607 samples to 1,500 samples. The explanation provided in Section 3.1 for this reduction is somewhat vague and lacks clear, objective criteria for filtering. I am curious about the rationale behind such an aggressive reduction by nearly tenfold. Is this reduction due to a scarcity of data meeting the three specified criteria mentioned between line 187 to 189, or are there other reasons for this decision?
Technical Quality: 3
Clarity: 3
Questions for Authors: - Please clarify the goal of using proposed metrics
- Please clarify the decision to aggressively reduce the size of final benchmark
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors properly discuss the limitation in the paper
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the positive comments on the novelty and meaningful impact of our findings and proposed benchmark. We detail your concerns and our corresponding responses below:
**Q1: The proposed metrics, Multi-modal Gain (MG) and Multi-modal Leakage (ML), are dependent on the base LLM utilized in the large vision-language models. This dependency complicates the use of these metrics for directly comparing the multi-modal gain across different LVLMs.**
**A1:** In fact, when developers want to evaluate the actual multi-modal capabilities of their LVLMs, they can directly choose our MMStar benchmark and perform inference with LVLM for quick evaluation. Furthermore, ML and MG can reflect the extent of data leakage during multimodal training and the actual improvement in the model's multimodal capabilities. We have already provided the performance of several popular LLM bases in our work, and we will continue to update with new LLM bases such as LLaMA3 and Gemma2 on MMStar. This will facilitate the community in quickly evaluating and comparing the multimodal gains of their models.
Additionally, the proposed MG and ML metrics can serve as probes for the training corpus of LVLMs. For instance, averaged over seven benchmarks, ShareGPT4V-7B significantly increases its MG compared to LLaVA-1.5-7B by incorporating high-quality image-caption data under the same architecture, without affecting ML, demonstrating the importance of high-quality image-caption data. Similarly, averaged over 16 models, MMMU exhibits the lowest average MG, indicating minimal overlap between the multimodal training data of LVLMs and the samples in MMMU.
**Q2: The manual review step aggressively reduces the MMStar benchmark from 11,607 samples to 1,500 samples. The explanation provided in Section 3.1 for this reduction is somewhat vague and lacks clear, objective criteria for filtering. I am curious about the rationale behind such an aggressive reduction by nearly tenfold. Is this reduction due to a scarcity of data meeting the three specified criteria mentioned between lines 187 to 189, or are there other reasons for this decision?**
**A2:** Thank you for pointing out the lack of clarity in our description of the manual review stage. We provide a detailed supplement on this stage here and will integrate these details into the main text. After roughly filtering the original data pool with 8 advanced LLMs, resulting in 11,607 candidate samples, we initiate a rigorous manual review phase.
First, we establish 6 core evaluation dimensions and 18 detailed axes by integrating the evaluation dimensions from existing benchmarks. Next, we use 16 LVLMs to infer and count the number of hits for each sample. Furthermore, we design a UI interface listing the current sample's image, options, answer, sample source, hit count, and the 18 detailed axes. The samples are arranged in ascending order based on the number of hits.
The formal manual selection and benchmark construction process is as follows:
1. Preliminary Classification: Three experts are each responsible for two core capability dimensions (i.e., 6 detailed axes). They need to review all candidate samples and select and correctly classify the samples belonging to their respective dimensions. The samples selected must retain their visual dependency.
2. Statistical Analysis: After the preliminary classification, we consider the numerical balance between dimensions and the difficulty level of the samples. Samples under the "coarse perception" dimension approach 4,000, while those under "logical reasoning" are fewer than 700. In terms of difficulty distribution, there are 4,555 easy (i.e., number of hits between 12 and 16) samples but only 2,758 tough (i.e., number of hits between 0 and 3) ones. Given these premises, many repetitive, simple samples, such as those merely asking for the color of an object in the image, are not what we desire.
3. Initial Benchmark: After considering both the numerical balance and difficulty level of the samples, we set the total sample number of the benchmark at 1,500, with each core capability dimension containing 250 samples. Then, we assign each expert two core capability dimensions, instructing them to prioritize sample difficulty when selecting 250 samples per dimension.
4. Cross-Validation: To minimize personal bias, we arrange for each expert to review the dimensions handled by the other two experts after the initial benchmark is constructed. Samples with issues are replaced by correct samples of the same difficulty level from the candidate pool.
By following this thorough process, we ensure a balanced and challenging benchmark set.
If you still have any concerns or aspects you would like to discuss further, please do not hesitate to contact us at any time. | Rebuttal 1:
Rebuttal: We sincerely appreciate all reviewers for your time and efforts in the review. All detailed questions of each reviewer are answered accordingly in each column below. We hope these responses can address the reviewers' concerns adequately. Additionally, we provide the implementation details of the manual review process used in constructing MMStar to supplement the missing details in the manuscript.
After roughly filtering the original data pool with 8 advanced LLMs, resulting in 11,607 candidate samples, we initiate a rigorous manual review phase. First, we establish 6 core evaluation dimensions and 18 detailed axes by integrating the evaluation dimensions from existing benchmarks. Next, we use 16 LVLMs to infer and count the number of hits for each sample. Furthermore, we design a UI interface listing the current sample's image, options, answer, sample source, hit count, and the 18 detailed axes. The samples are arranged in ascending order based on the number of hits.
The formal manual selection and benchmark construction process is as follows:
1. Preliminary Classification: Three experts are each responsible for two core capability dimensions (i.e., 6 detailed axes). They need to review all candidate samples and select and correctly classify the samples belonging to their respective dimensions. The samples selected must retain their visual dependency.
2. Statistical Analysis: After the preliminary classification, we consider the numerical balance between dimensions and the difficulty level of the samples. Samples under the "coarse perception" dimension approach 4,000, while those under "logical reasoning" are fewer than 700. In terms of difficulty distribution, there are 4,555 easy (i.e., number of hits between 12 and 16) samples but only 2,758 tough (i.e., number of hits between 0 and 3) ones. Given these premises, many repetitive, simple samples, such as those merely asking for the color of an object in the image, are not what we desire.
3. Initial Benchmark: After considering both the numerical balance and difficulty level of the samples, we set the total sample number of the benchmark at 1,500, with each core capability dimension containing 250 samples. Then, we assign each expert two core capability dimensions, instructing them to prioritize sample difficulty when selecting 250 samples per dimension.
4. Cross-Validation: To minimize personal bias, we arrange for each expert to review the dimensions handled by the other two experts after the initial benchmark is constructed. Samples with issues are replaced by correct samples of the same difficulty level from the candidate pool.
Moreover, we also provide the number of samples with consensus before and after the cross-validation step in the manual review process for MMStar in the table below. Only samples that all three experts unanimously agree upon are retained; otherwise, they are replaced with samples of the same difficulty level from the candidate pool.
| | Before | After |
|----------|--------|-------|
| Expert 1 | 472 | 500 |
| Expert 2 | 468 | 500 |
| Expert 3 | 483 | 500 |
By following this thorough process, we ensure a balanced and challenging benchmark set. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Leveraging Drift to Improve Sample Complexity of Variance Exploding Diffusion Models | Accept (poster) | Summary: Diffusion models are powerful tools in generative modeling. As the paper points out, very few theoretical works in the literature
consider variance exploding diffusion models. Among those works, a forward convergence rate of $1/\text{poly}(T)$ is achieved, compared to $\exp(-T)$ for variance preserving models. This paper proposes a drifted variance exploding diffusion model that allows a faster $\exp(-T)$ forward convergence rate. With this process, polynomial sample complexity is achieved for a series of variance exploding models under the manifold hypothesis. In addition to the reverse SDE, the probability flow ODE is a popular alternative in the literature. The paper considers a more general setting and proves a convergence guarantee with the probability flow ODE.
Strengths: (1) The paper proposes a so-called drifted variance exploding forward process and to the best of my knowledge this is new.
(2) When the data is supported on a compact set, the paper manages to derive convergence guarantees in total variation distance and 1-Wasserstein distance.
(3) The analysis seems to be rigorous.
Weaknesses: (1) The model seems to be an interplay between the VE-SDE model and VP-SDE model in the literature in the sense that when $\tau=1$, it recovers the VP-SDE model and when $\tau=\infty$, it recovers the VE-SDE model. To better compare the model with the VP-SDE and VE-SDE models in the literature, I am wondering whether it is better to assume $\tau\in[1,\infty]$ and, for a fixed $\tau$, to see what the convergence guarantees are and how they depend on $\tau$? If you do that, will you find the optimal range of $\tau$ to be $[T,T^{2}]$, which is what is proposed in the paper? My guess is that for a given $\beta_{t}$, the optimal choice of $\tau$ should depend on $\beta_{t}$ instead of being in the range $[T,T^{2}]$.
(2) Since the model proposed is a new model, it would be even more convincing if the paper could include some experiments beyond synthetic experiments.
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) You studied both total variation distance and 1-Wasserstein distance. Since your data is supported on a compact domain, would it be possible to study more general $p$-Wasserstein distance guarantees?
(2) After Assumption 3.1. you wrote that the choice $\tau\in[T,T^{2}]$ is used to guarantee the exploding variance of the forward process. This is not that accurate. I understand when $\tau$ is large, the mean-reverting effect of the forward SDE gets very weak, and the variance tends to grow. But that does not explain why you need $\tau\leq T^{2}$. I think you should also add some discussions on why you need $\tau\leq T^{2}$.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations have been discussed in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and suggestions. We provide our response to each question below.
**Weakness 1: The universal error analysis for general $\tau$ and $\beta_t$.**
In this part, we first provide a universal error bound for general $\tau \in [1,+\infty)$ and $\beta_t\in [1,t^2]$ under the reverse SDE, which covers current diffusion models. Then, we discuss the influence of different $\tau$ and $\beta_t$ in detail.
$$
\frac{\bar{D} \sqrt{m_T}}{\sigma_T}+\frac{R^2\sqrt{d}}{\sigma_\delta^4}\sqrt{\bar{\gamma}_K\beta_T\tau T}+\epsilon\_{\text{score}}\sqrt{\beta_T T},
$$
where
$$
m_t=\exp\left(-\int_0^t \frac{\beta_s}{\tau}\,\mathrm{d} s\right) \quad \text{and} \quad \sigma_t^2=\tau\left(1-m_t^2\right).
$$
(a) The general formula covers current diffusion models.
In this paragraph, we show that our general formula covers current models and provides their sample complexity. The key step is balancing the first two terms: the reverse-beginning term and the discretization term. When $\beta_t=1$ and $\tau=1$, the drifted VESDE becomes VPSDE and $m_T=\exp{(-T)}$, which leads to a logarithmic $T$ and does not heavily influence the discretization term. We then achieve $\tilde{O}(1/\epsilon\_{W_2}^8\epsilon\_{TV}^2)$, which has the same order as [1]. When $\beta_t=1$ and $\tau =T$, our formula is similar but slightly better (as shown in Figure 2 and our real-world experiments) than pure VESDE ($\sigma_t^2= t$), and we achieve $\tilde{O}(1/\epsilon\_{W_2}^8\epsilon\_{TV}^6)$ ([2] achieve a slightly better $1/\epsilon\_{W_2}^8\epsilon\_{TV}^4$ since they assume a strong LSI). When $\beta_t=t$ and $\tau =T^2$, the general formula is similar to the pure SOTA VESDE ($\sigma_t^2=t^2$) and achieves the first polynomial sample complexity $\tilde{O}(1/\epsilon\_{W_2}^8\epsilon\_{TV}^6)$ under the manifold hypothesis. We also note that the above results hold for pure VESDE with $\sigma_t^2= t$ and $t^2$.
(b) Given a fixed $\beta_t$, the optimal $\tau$ has the same order as $\beta_T$.
As shown in (a), pure VESDE has a worse $\epsilon_{TV}$ dependence than VPSDE, which comes from the large reverse-beginning term (the first term). For example, when $\beta_t=t$ and $\tau =T^2$, $m\_T= e^{-1/2}$ and $\sigma\_T^2=(1-e^{-1})T^2$, which leads to a polynomial dependence on $T$ and heavily influences the second (discretization) term. Hence, the optimal choice is $\tau=T$ instead of $T^2$. With $\tau = T$, $m\_T=\exp{(-T)}$ and the complexity is $\tilde{O}(1/\epsilon\_{W_2}^8\epsilon\_{TV}^2)$ (Thm. 5.2), the same as VPSDE (this result also shows that the choice made by VPSDE is optimal). For $\beta_t=t^2$, the optimal $\tau$ is $T^2$, which has the same order as $\beta_T$.
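To make the role of $\tau$ concrete, here is a minimal numerical sketch (ours, not from the paper) of the closed forms $m_t=\exp(-\int_0^t \beta_s/\tau\,\mathrm{d}s)$ and $\sigma_t^2=\tau(1-m_t^2)$; the function names and the choice $T=10$ are illustrative assumptions:

```python
import math

def m(t, beta, tau, n=10000):
    """m_t = exp(-(1/tau) * integral_0^t beta(s) ds), via trapezoidal integration."""
    h = t / n
    integral = sum((beta(i * h) + beta((i + 1) * h)) / 2 * h for i in range(n))
    return math.exp(-integral / tau)

def sigma2(t, beta, tau):
    """sigma_t^2 = tau * (1 - m_t^2)."""
    return tau * (1.0 - m(t, beta, tau) ** 2)

T = 10.0
# beta_t = 1, tau = 1 recovers VPSDE: the variance saturates at 1.
assert abs(sigma2(T, lambda s: 1.0, 1.0) - 1.0) < 1e-6
# beta_t = 1, tau = T: m_T = e^{-1}, so the variance grows linearly in T.
assert abs(sigma2(T, lambda s: 1.0, T) - T * (1 - math.exp(-2))) < 1e-6
# beta_t = t, tau = T^2: m_T = e^{-1/2}, the value used in the discussion above.
assert abs(m(T, lambda s: s, T ** 2) - math.exp(-0.5)) < 1e-6
```

The three cases mirror the VPSDE, drifted VESDE, and SOTA VESDE instantiations discussed in this response.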
We will make the above discussion clearer in the next version.
**Weakness 2: The real-world experiments on CelebA 256.**
We conduct experiments on the CelebA 256 dataset (a common face dataset) and show that our drifted VESDE can improve the results of pure VESDE **without training**, from both quantitative and qualitative perspectives. Please see the experiment details, discussion, and generated images in the **global rebuttal** part.
**Q1: The general $W_p$ distance.**
For the reverse SDE setting, similar to Corollary 5 of [1], using the projection technique, we can achieve a pure $W_2$ guarantee of $\tilde{O}(1/\epsilon\_{W\_2}^{12})$, which has the same order as [1].
For the reverse PFODE setting, similar to the tangent-based method for VPSDE [3], our results extend to $W_p$ for any $p\ge 1$. We will make this clearer in the next version.
**Q2: The choice of $\tau$.**
Thanks for the helpful comment on our general formula. As shown in Weakness 1, our general formula covers $\tau\in [1,+\infty)$ and achieves polynomial sample complexity under the reverse SDE. We also show that it recovers current VPSDE and pure VESDE models and goes beyond them with $\tau \in [1,T^2]$.
For the reverse PFODE, as shown in Lem. 6.3, if we consider $\tau =1$ (the VPSDE setting), there would be an additional $\exp{(T)}$ term. To avoid this term, we need the variance exploding property of VESDE. Hence, we choose $\tau\in [T,T^2]$, which covers two common VESDE choices under the reverse PFODE. (We note that Thm. 6.2 and Coro. 6.3 also hold for pure VESDE.)
We will make it clearer in the next version.
[1] Chen, S., Chewi, S., Li, J., Li, Y., Salim, A., & Zhang, A. R. (2022). Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. *arXiv preprint arXiv:2209.11215*.
[2] Lee, H., Lu, J., & Tan, Y. (2022). Convergence for score-based generative modeling with polynomial complexity. *Advances in Neural Information Processing Systems*, *35*, 22870-22882.
[3] De Bortoli, V. (2022). Convergence of denoising diffusion models under the manifold hypothesis. *arXiv preprint arXiv:2208.05314*.
---
Rebuttal Comment 1.1:
Comment: We thank you once again for your careful reading of our paper and your constructive comments and suggestions. As the discussion period approaches its end, we will appreciate it very much if you could let us know whether all your concerns are addressed. We are also more than happy to discuss our work and answer any further questions.
---
Rebuttal 2:
Comment: Thanks again for your insightful suggestions and comments! According to your helpful comments, we improve our work from the empirical and theoretical perspectives. From the empirical perspective, we do experiments on the real-world CelebA 256 dataset and show that our drifted VESDE is a plug-and-play method without training. More specifically, the images generated by our drifted VESDE are more detailed than those of the pure VESDE baseline (shown in the PDF of the global rebuttal). From the theoretical perspective, we show that our drifted VESDE covers common diffusion models (including VP and VESDE) and goes beyond. More details and discussions are shown in the rebuttal part. We will add the above discussion to our next version and are more than happy to answer any further questions. | Summary: The paper analyzes the Variance Exploding diffusion model under the manifold hypothesis. By a slight modification to the VESDE process, the authors propose a method whose convergence guarantees are better than prior best known rates in this regime.
Strengths: The rate obtained is state-of-the-art for Variance Exploding models, which are notorious for being difficult to analyze.
Although the paper borrows parts of its analytic framework from the work of Bortoli et al. (2022), the adaptation of the analysis in this context appears to be non-trivial.
The paper is clearly written and the exposition of the proof is overall quite clear.
Weaknesses: The experiments in the main text and appendix are not particularly thorough. This is acceptable since the primary claimed contribution by this article is theoretical, but some kind of practical benchmark might also be beneficial.
The dimension and diameter dependence are quite severe. Is there any sense of how close these parameters are to their optimal values?
Technical Quality: 3
Clarity: 2
Questions for Authors: The final rates are only given in total variation. Is it possible to improve the metric to KL or to state W2 bounds with similar complexities? What is the analytical challenge otherwise?
Theorems 6.1 and Corollary 6.2 could have their exposition simplified in the main text; the intuition provided in the subsequent section is much clearer.
Minor:
The font used for "KL" is not consistent in the equations in the appendix.
97: about data -> about the data
102: score function -> the score function
107: with strong LSI -> with a strong LSI
108: assume the Lipschitz score -> assume the score is Lipschitz
108: first work focus -> first work to focus
138: introduces -> introduce
187: "reversing" does not make sense here. Perhaps "initial distribution of the reverse process" is meant
216: "does reverse" -> "which reverse the process"
345: "support" -> "supports"
364: "support our", "show that" -> "supports our", "shows that"
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have already addressed all major limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and suggestions. We provide our response to each question below.
**W1: The real-world experiments on CelebA 256.**
We conduct experiments on the CelebA 256 dataset (a common face dataset) and show that our drifted VESDE can improve the results of pure VESDE **without training**, from both quantitative and qualitative perspectives. Please see the experiment details, discussion, and generated images in the **global rebuttal** part.
**W2: The discussion on the diameter.**
Since no existing work discusses the lower bound on the diameter dependence of VE-based models, we start from VP-based models and discuss the room for improvement for VE-based models to achieve results of the same order as VP-based models.
For the reverse SDE, VP-based models achieve an optimal $d$ dependence by using the stochastic localization technique and an exponential-decay stepsize [1]. As a first step, we use a uniform stepsize for VE-based models, which leads to a slightly worse dependence on $R$ and $d$. It is an interesting direction for future work to use a more refined, time-dependent stepsize to achieve improved results for VESDE and to show theoretically that VESDE performs better than VPSDE.
For the reverse PFODE, since the Wasserstein distance cannot use the data processing inequality, Thm. 6 has an exponential dependence on $R$. As discussed at the end of Sec. 6, it is possible to introduce a suitable corrector (such as the Underdamped Langevin process in [2]) to inject some small noise into the PFODE predictor, which allows the use of the data processing inequality and replaces the exponential $R$ with a polynomial one.
We will add a discussion paragraph on the diameter to make it clearer.
**Q1: The guarantee under stronger metric.**
(a) The pure $W_2$ guarantee.
Since we assume the manifold hypothesis, we first show how to obtain a pure $W_2$ guarantee in each setting. For the reverse SDE setting, similar to Corollary 5 of [3], using the projection technique, we can achieve a pure $W_2$ guarantee of $\tilde{O}(1/\epsilon\_{W\_2}^{12})$, which has the same order as [3]. We note that the slightly worse $\epsilon$ dependence of our work and [3] is due to the relationship between $W_2$ and $\mathrm{TV}$ [4]:
$$
W_2(R_K^{q_{\infty}^{\tau}},q_{\delta})\leq R\sqrt{\mathrm{TV}(R_K^{q_{\infty}^{\tau}},q_{\delta})}+R\exp{(-R)}.
$$
For the reverse PFODE setting, similar to the tangent-based method for VPSDE [5], our results extend to $W_p$ for any $p\ge 1$.
(b) The $\mathrm{KL}+W_2$ guarantee.
When considering the reverse SDE setting, similar to [6], we can use the chain rule of the $\mathrm{KL}$ divergence instead of the triangle inequality to obtain a $\mathrm{KL}+W_2$ guarantee of $\tilde{O}(1/\epsilon_{\mathrm{KL}}^2\epsilon\_{W\_2}^8)$.
We will discuss the guarantee under stronger metrics in detail.
**Q2 and Minor Question: the presentation.**
Thank you for your helpful comments on the presentation. For the tangent-based unified framework part, we will simplify the formulas of Thm. 6.1 and Coro. 6.2 and highlight the technical novelty. For the typos, we will polish our presentation according to your comments.
[1] Benton, J., De Bortoli, V., Doucet, A., & Deligiannidis, G. (2023). Linear convergence bounds for diffusion models via stochastic localization. *arXiv preprint arXiv:2308.03686*.
[2] Chen, S., Chewi, S., Lee, H., Li, Y., Lu, J., & Salim, A. (2024). The probability flow ode is provably fast. *Advances in Neural Information Processing Systems*, *36*.
[3] Chen, S., Chewi, S., Li, J., Li, Y., Salim, A., & Zhang, A. R. (2022). Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. *arXiv preprint arXiv:2209.11215*.
[4] Rolland, P. T. Y. (2022). *Predicting in uncertain environments: methods for robust machine learning* (No. 9118). EPFL.
[5] De Bortoli, V. (2022). Convergence of denoising diffusion models under the manifold hypothesis. *arXiv preprint arXiv:2208.05314*.
[6] Chen, H., Lee, H., & Lu, J. (2023, July). Improved analysis of score-based generative modeling: User-friendly bounds under minimal smoothness assumptions. In *International Conference on Machine Learning* (pp. 4735-4763). PMLR.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. In light of the numerous improvements shown both in your response to me and to the other reviewers, I will raise my score by 1.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback and support! We will add the discussion and polish our presentation according to your comments. In case you have any other questions, please don't hesitate to let us know. | Summary: This paper focuses on variance exploding (VE) based diffusion models and proposes a drifted VESDE forward process with an unbounded diffusion coefficient. This choice of coefficients allows an exponential-decay forward convergence rate, and the authors establish the first polynomial sample complexity for VE-based models with reverse PFODE. Moreover, the authors propose a tangent-based unified analysis framework with reverse SDE and PFODE and prove the first quantitative guarantee for SOTA VE-based models with reverse PFODE.
Strengths: In terms of originality, this paper proposes a new variance exploding (VE) based diffusion model and establishes the corresponding convergence guarantees. The theoretical results are solid. Moreover, this paper is well-organized and clearly written.
Weaknesses: 1. The convergence guarantees for VE-based models with reverse PFODE are relatively weak. For example, in Assumption 3.1, the choice of $\beta_t$ is more conservative for reverse PFODE; in Theorem 6.2 part (2), the last term is $\bar{D}/\tau$ instead of $\bar{D} e^{-T/2} / \sqrt{\tau}$ in part (1). Does this mean that the $e^{-T}$ forward convergence rate can only be achieved by the reverse SDE?
2. The numerical results only include synthetic experiments.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In Assumption 3.1, why are the choices of $\beta_t$ different for the reverse SDE and PFODE? Could the authors provide some intuitive explanation?
2. In Theorem 5.2, the sample complexity has the same dependence on $\epsilon_{W_2}$ and $\epsilon_{TV}$ as that in Chen et al. (2023c). However, the dependence on $d$ is worse. Is it a consequence of a more aggressive $\beta_t$?
3. Could the theoretical results reflect the superiority of VE-based diffusion models over VP-based diffusion models?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discuss future work and limitations in Section 8.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and suggestions. We provide our response to each question below.
**W1 & Q1: The different $\beta_t$ for reverse SDE and PFODE: the balance between different error terms.**
(a) We first recall the reverse beginning error term when considering the unified tangent-based framework:
$$
\begin{align}
W_1\left(Q_{t_K}^{q_{\infty}^{\tau}},Q_{t_K}^{q_0P_T}\right)\leq \frac{\sqrt{m_T}\bar{D}}{\sigma_T}\exp\left(\frac{R^2}{2\sigma_{T-t_K}^{2}}+ \frac{(1-\eta^2)}{2}\int_0^{t_K}\frac{\beta_{T-u}}{\tau}\mathrm{d}u\right)\,.
\end{align}
$$
We note that the forward and reverse processes determine the above bound simultaneously, where the exponential terms come from the bound of the tangent process (Lem. 6.3, reverse process), and the first part comes from Thm. 4.2 (forward process).
For the reverse SDE ($\eta=1$), the exponential term becomes $\exp\left(\frac{R^2}{2\sigma_{T-t_K}^{2}}\right)$, which is independent of $\beta\_t$. Hence, choosing an aggressive $\beta_t=t^2$ (here we use $\tau =T^2$ as an example) introduces $\frac{\bar{D}e^{-T/2}}{\tau} \exp\left(\frac{R^2}{2\sigma\_{T-t_K}^{2}}\right)$ without introducing additional terms.
For the reverse PFODE ($\eta=0$), we can also choose an aggressive $\beta_t=t^2$, which leads to a $\bar{D}e^{-T/2}/\tau$ factor in place of $\frac{\sqrt{m_T}\bar{D}}{\sigma_T}$. However, since $\eta = 0$, the $\exp{\left(\int_0^{t_K}\frac{\beta\_{T-u}}{\tau}\mathrm{d}u\right)}$ term introduces an additional $e^{\frac{T}{6}}$ factor into the bound of the tangent process. **We note that the key part of the convergence guarantee is to balance the discretization, reverse beginning, and early stopping error terms.** Though the reverse beginning error still enjoys an $e^{-T/4}\exp\left(\frac{R^2}{2\sigma\_{T-t_K}^{2}}\right)$ factor, the aggressive $\beta_t$ introduces an additional $e^{\frac{T}{6}}$ into the final result (since the tangent process also influences the discretization error term). Hence, a better choice for the reverse PFODE is a conservative $\beta_t$.
(b) An interesting future work: the PFODE predictor and suitable corrector.
The above discussion shows that since the reverse beginning error is determined by the forward and reverse processes at the same time, an aggressive $\beta_t$ cannot be used in the PFODE setting. The complex dependency arises because the data processing inequality does not hold for the Wasserstein distance. As a next step, we discuss how to improve the results of Thm. 6.1 with an aggressive $\beta_t$ (in the PFODE setting).
We first recall the data processing inequality: Consider a channel that produces $Y$ given $X$ based on the law $P_{Y \mid X}$. Let $P_Y$ be the distribution of $Y$ when $X$ is generated by $P_X$ and $Q_Y$ be the distribution of $Y$ when $X$ is generated by $Q_X$. Then we know that for any $f$-divergence $D_f(\cdot \| \cdot)$, $D_f\left(P_Y \| Q_Y\right) \leq D_f\left(P_X \| Q_X\right)$.
When choosing $f(x)=\frac{1}{2}|x-1|$, the $f$-divergence is the TV distance. Hence, by viewing $Q_{t_K}$ as the channel, the inequality $\operatorname{TV}\left(Q\_{t_K}^{q\_{\infty}^\tau}, Q\_{t_K}^{q_T^\tau}\right)\leq \operatorname{TV}\left(q\_T^\tau, q\_{\infty}^\tau\right)$ holds, which indicates that for the $\mathrm{TV}$ distance, the influence of the reverse process can be ignored when considering the reverse beginning error term. However, the data processing inequality does not hold for the Wasserstein distance. To overcome this problem, an interesting direction for future work is to introduce a suitable corrector (such as the underdamped Langevin process in [1]) to inject a small amount of noise into the PFODE predictor, which allows the use of the data processing inequality and achieves a polynomial sample complexity.
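As a concrete numerical sketch of the data processing inequality (our own toy example, not from the paper): pushing two discrete distributions through the same row-stochastic channel can only shrink their TV distance.

```python
def tv(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def push(p, channel):
    """Apply a row-stochastic channel matrix P(Y=j | X=i) to a distribution."""
    n_out = len(channel[0])
    return [sum(p[i] * channel[i][j] for i in range(len(p))) for j in range(n_out)]

p_x = [0.7, 0.2, 0.1]
q_x = [0.1, 0.3, 0.6]
# A noisy channel: each row is a conditional distribution P(Y | X=i).
channel = [[0.8, 0.1, 0.1],
           [0.1, 0.8, 0.1],
           [0.1, 0.1, 0.8]]

p_y, q_y = push(p_x, channel), push(q_x, channel)
# DPI for TV: TV(P_Y, Q_Y) <= TV(P_X, Q_X).
assert tv(p_y, q_y) <= tv(p_x, q_x) + 1e-12
```

This is exactly the mechanism that lets the reverse process be ignored in the TV-based reverse beginning error, and which fails for the Wasserstein distance.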
We will add a discussion to make it clearer.
**W2: The real-world experiments on CelebA 256.**
We conduct experiments on the CelebA 256 dataset (a common face dataset) and show that our drifted VESDE improves the results of pure VESDE **without training**, from both the quantitative and qualitative perspectives. Please see the experiment details, discussion, and generated images in the **global rebuttal** part.
**Q2: The dependence on $d$ and $R$.**
We recall that the result shown in [2] is $\tilde{O}\left(\frac{d^3 R^4(R \vee \sqrt{d})^4}{\varepsilon_{\mathrm{TV}}^2 \varepsilon_{W_2}^8}\right)$ and our result is $\tilde{O}\left(\frac{dR^4(d+R\sqrt{d})^4}{\epsilon\_{W_2}^{8}\epsilon_{\text{TV}}^2}\right)$. Since image datasets are usually normalized into $[-1,1]^d$, we have $R\leq \sqrt{d}$. Hence, our result has the same order as that of [2]. We will add a discussion of the dependence on $d$ and $R$.
**Q3: The superiority of VE-based models over VP-based models.**
When considering the reverse PFODE, we show the superiority of VE-based models over VP-based models. More specifically, Lem. 6.3 contains $\exp{(\int_{0}^{t_K}\beta\_{T-u}/\tau du)}$ for the reverse PFODE. For VPSDE ($\beta_t =1$ and $\tau =1$), this yields an additional $\exp{(T)}$. On the contrary, VE-based models exploit the variance-exploding property of VESDE, avoid this term (for example, the above term is a constant for $\beta\_t=t$ and $\tau = T^2$), and achieve a polynomial $T$ dependence in the final results. We note that our results also hold for VESDE ($\sigma\_t^2= t^2$), the SOTA model proposed by [3]. We will add a discussion part in the next version.
[1] Chen, S., Chewi, S., Lee, H., Li, Y., Lu, J., & Salim, A. (2024). The probability flow ode is provably fast. *Advances in Neural Information Processing Systems*, *36*.
[2] Chen, S., Chewi, S., Li, J., Li, Y., Salim, A., & Zhang, A. R. (2022). Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. *arXiv preprint arXiv:2209.11215*.
[3] Karras, T., Aittala, M., Aila, T., & Laine, S. (2022). Elucidating the design space of diffusion-based generative models. *Advances in neural information processing systems*, *35*, 26565-26577.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response.
I have an additional concern. It seems that Theorem 6.1 implies an exponential dependence of sample complexity on the inverse of the target accuracy. Is it correct to understand Section 6 as providing a more general analysis of VE-based models to encompass the reverse PFODE, whereas the analysis in Section 5 is more refined for the reverse SDE?
Additionally, could the authors elaborate on the dependence of the sample complexity on the target accuracy for the reverse PFODE under the extra assumption in Corollary 2?
---
Reply to Comment 1.1.1:
Comment: Thanks for your effort and time. We discuss each question in detail below.
(Question 1) The understanding in the response is correct. Section 6 provides a unified tangent-based framework for the VE-based models (including reverse SDE and PFODE). Hence, we achieve a slightly worse convergence guarantee compared to the results in Section 5 when considering the reverse SDE. In Section 5, we focus on the reverse SDE setting and give a more refined analysis. The more refined analysis shows that our drifted VESDE can balance different error terms and achieve better complexity results compared to pure VESDE. We will add the above discussion and polish our presentation according to your helpful comments.
(Question 2) In this part, we provide the sample complexity of Corollary 6.2. More specifically, by choosing $\delta \leq \epsilon_{W_1}^2/d$, $T\ge \frac{\bar{D}\exp{(\Gamma)}\beta^{\Gamma/2}}{\delta^{\Gamma}\epsilon\_{W_1}}$ and $\bar{\gamma}\_K\leq \frac{\epsilon\_{W_1}^2\delta^{2\Gamma}}{C_2^2(\tau)\kappa_2^4(\tau)T^2\exp{(2\Gamma)}\beta^{\Gamma}}$, we have $W_1\left(R_K^{q_{\infty}^\tau}, q_0\right)\leq \epsilon_{W_1}$ with the sample complexity
$$
K\leq \frac{\bar{D}\exp{(3\Gamma)}\beta^{3\Gamma/2}C_2^2(\tau)\kappa_2^4(\tau)T^2}{\delta^{3\Gamma}\epsilon_{W_1}^3}.
$$
We note that compared with Thm. 6.1, the above complexity replaces the exponential dependence on $\delta$ with a polynomial dependence on $\delta$ and an exponential dependence on $\Gamma$. Since $\delta$ is related to $\epsilon_{W_1}$ and $\Gamma$ is determined only by the data structure, this result improves Thm. 6.1. As discussed at the end of Section 6, an interesting direction for future work is to introduce a suitable corrector that injects a small amount of noise into the PFODE sampler and achieves a polynomial sample complexity (w.r.t. all problem parameters) under the manifold hypothesis. We will add the above result and discuss it in Section 6.
We hope the above discussion can address your concerns. We are more than happy to discuss our work in detail and answer any further questions in the rebuttal phase. | Summary: In this paper, the authors propose an analysis of the convergence of diffusion models under the manifold hypothesis in a similar setting as [1]. The main contribution is the extension of the analysis to the case of VESDE (Variance Exploding SDE) contrary to [1] which is limited to VPSDE (Variance Preserving). The rates obtained by the authors are better than the ones obtained in [1] (although in a different context). They also extend their analysis to ODE samplers which are notably more difficult to deal with than SDE samplers from a theoretical point of view. The decomposition of the error is the same as in [1] but with a more careful analysis of the tangent process (see Section 6 "The Tangent-based Analysis Framework"). Experiments in toy settings are presented.
[1] De Bortoli -- Convergence of denoising diffusion models under the manifold hypothesis
Strengths: * These results are the first results obtained for the convergence of diffusion models in the VESDE setting under the manifold hypothesis.
* The analysis of the tangent process represents an improvement over the results of [1]. This is an interesting development in itself.
* The introduction of the drifted VESDE (Equation 5) is interesting and represents a good avenue for future studies of the VESDE process.
[1] De Bortoli -- Convergence of denoising diffusion models under the manifold hypothesis
Weaknesses: * Experiments are only toyish. I actually don't think they benefit the paper. This is mostly theoretical work and I'm struggling to understand what point is made here. If this is to illustrate the validity of the samplers this is already well established. If the point of the experiment is to illustrate the benefit of drifted VESDE then I would have appreciated a more challenging setting (like CIFAR10 or Imagenet in image processing or larger models like Stable Diffusion). This does not require pretraining a large model since this is a modification of the sampler.
* There is actually not a lot of discussion on how the drifted VESDE relates to VESDE. Can one obtain convergence results for the classical VESDE based on the drifted VESDE?
* As of now, the analysis is limited to the drifted VESDE. It would be interesting to analyze whether the improved results can transfer to the VPSDE framework and improve on the results of [2].
* The paper is easy to follow and clearly presented.
[1] Karras et al. -- Elucidating the Design Space of Diffusion-Based Generative Models
[2] De Bortoli -- Convergence of denoising diffusion models under the manifold hypothesis
Technical Quality: 3
Clarity: 3
Questions for Authors: * l.34 "Furthermore, Karras et al. [2022] unify two processes and show that the optimal parameters of the general formula correspond to VESDE." --> Not clear what the authors are referring to here.
* l.48 "leads to a large reverse beginning error" --> I disagree as one could argue that most of the error in diffusion models arises from the approximation of the score. The "large reverse beginning error" is a strong statement here.
* One relevant work that is not discussed is [1]
* l.296 "Furthermore, we emphasize that our tangent-based unified framework is not a simple extension of Bortoli [2022]." --> Can the authors provide more details here?
* Since there is a one-to-one mapping between VESDE and VPSDE could the authors have leveraged this connection? There is also a connection with Stochastic Localization as pointed out by [2]. Below we explicit the connection between VESDE and VPSDE.
Assume that $(X_t)_{t \geq 0}$ satisfies a VESDE $\mathrm{d} X_t = g(t) \mathrm{d} B_t$; then $(Y_t)_{t \geq 0}$, given for any $t \geq 0$ by $Y_t = \exp[F(\phi(t))] X_{\phi(t)}$, satisfies $\mathrm{d} Y_t = F(\phi)'(t) Y_t \mathrm{d} t + \exp[F(\phi(t))] \phi'(t)^{1/2} \mathrm{d} B_t$.
[1] Conforti et al. -- "Score diffusion models without early stopping: finite Fisher information is all you need"
[2] Montanari -- "Sampling, Diffusions, and Stochastic Localization"
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are not really addressed in Section 8 ("Conclusion") in the paragraph "Future Work and Limitation". I think a more in depth discussion of the benefits of VESDE and VPSDE is needed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments and suggestions. We provide our response to each question below.
**W1: The real-world experiments.**
We conduct experiments on CelebA 256 and show that our drifted VESDE improves pure VESDE **without training**, from both the quantitative and qualitative perspectives. Please see the experiment details in the **global rebuttal**.
**W2: The link between drifted VESDE and VESDE, and the universal error analysis.**
(i) For reverse PFODE, our analysis holds for pure VESDE with $\sigma_t^2=t$ or $t^2$, which covers the SOTA VESDE.
(ii) For the reverse SDE, we extend our general drifted VESDE formula and provide a universal complexity bound for general $\tau \in [1,+\infty)$ and $\beta_t\in [1,t^2]$:
$$
\begin{align}
\frac{\bar{D} \sqrt{m_t}}{\sigma_T}+\frac{R^2\sqrt{d}}{\sigma_\delta^4}\sqrt{\bar{\gamma}_K\beta_T\tau T}+\epsilon\_{\text{score}}\sqrt{\beta_TT},
\end{align}
$$
where $m_t=\exp \left(-\int_0^t \beta_s/\tau \mathrm{~d} s\right)$ and $\sigma_t^2=\tau\left(1-m_t^2\right)$.
(a) The general formula covers current models (including VE and VPSDE).
When $\beta_t=1$ and $\tau=1$, the drifted VESDE becomes VP and $m_T=\exp{(-T)}$, which leads to a logarithmic $T$ and achieves $\tilde{O}(1/\epsilon\_{W_2}^8\epsilon\_{TV}^2)$ (the same as [1]). When $\beta_t=1$ and $\tau =T$, our formula is similar to but slightly better than pure VESDE ($\sigma_t^2= t$; see Fig. 2 and the real-world experiments) and achieves a $1/\epsilon\_{W_2}^8\epsilon\_{TV}^6$ result ([2] achieves $1/\epsilon\_{W_2}^8\epsilon\_{TV}^4$ since they assume a strong LSI holds). For $\beta_t=t$ and $\tau =T^2$, the formula is similar to the SOTA pure VESDE ($\sigma_t^2=t^2$) and achieves the first polynomial result $1/\epsilon\_{W_2}^8\epsilon\_{TV}^6$ under the manifold hypothesis. We also note that the above results hold for pure VESDE with $\sigma_t^2= t$ and $t^2$.
(b) Go beyond: given a $\beta_t$, the optimal $\tau$ has the same order as $\beta_T$ for the reverse SDE.
The key part of the analysis is balancing the reverse beginning and discretization errors (please see the approximated score in Q2). As in (a), pure VESDE has a worse $\epsilon_{TV}$ dependence than VP, which comes from the large reverse beginning term. For example, if $\beta_t=t$ and $\tau =T^2$, then $m\_T= e^{-1/2}$ and $\sigma\_T^2=(1-e^{-1})T^2$, which leads to a polynomial $T$ and heavily influences the discretization term. Hence, the optimal choice is $\tau=T$ instead of $T^2$, giving $m\_T=e^{-T/2}$. Then, we achieve the same guarantee as VPSDE [1]. Similarly, the optimal $\tau$ is $T^2$ for $\beta_t=t^2$ (Thm. 5.2).
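These choices can be checked directly from the definitions $m_t=\exp(-\int_0^t \beta_s/\tau\,\mathrm{d}s)$ and $\sigma_t^2=\tau(1-m_t^2)$ stated above. The following sketch (our own sanity check, not code from the paper) evaluates the integral with a midpoint Riemann sum:

```python
import math

def m(T, beta, tau):
    """m_T = exp(-∫_0^T β_s/τ ds), with β given as a callable."""
    n = 100_000
    h = T / n
    integral = sum(beta((i + 0.5) * h) for i in range(n)) * h / tau
    return math.exp(-integral)

T = 10.0
# VP-like: β_t = 1, τ = 1  ->  m_T = e^{-T} (exponential-decay forward rate).
assert abs(m(T, lambda t: 1.0, 1.0) - math.exp(-T)) < 1e-6
# Conservative drift: β_t = t, τ = T²  ->  m_T = e^{-1/2}, σ_T² = T²(1 - e^{-1}).
m_cons = m(T, lambda t: t, T**2)
assert abs(m_cons - math.exp(-0.5)) < 1e-6
assert abs(T**2 * (1 - m_cons**2) - T**2 * (1 - math.exp(-1))) < 1e-3
# More aggressive τ: β_t = t, τ = T  ->  m_T = e^{-T/2}, exponentially small.
assert abs(m(T, lambda t: t, T) - math.exp(-T / 2)) < 1e-6
```

The first choice recovers the VP exponential forward decay, the second keeps $m_T$ of constant order (polynomial $T$ in $\sigma_T^2$), and the third shows how shrinking $\tau$ restores the exponential decay, matching the balancing argument above.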
**W3: The improved results for [3].**
[3] considers VPSDE with the reverse SDE and achieves a guarantee with an exponential term $\exp{(1/\delta)}$. [1] achieves a pure $W_2$ guarantee $1/\epsilon_{W_2}^{12}$ by using a projection technique, which is a direct improvement of [2]. Our general formula can also recover this result (see W2), although the main point of our work is not VPSDE.
**Q1: Discussion on Karras et al.**
This work unifies the reverse PFODE of VPSDE and VESDE (Eq. 4 of their work) and proves that the ODE solution trajectory of VESDE ($\sigma_t^2=t^2$) is linear and points directly toward the data manifold (Fig. 3 of their work). On the contrary, the trajectories of VPSDE and VESDE ($\sigma_t^2=t$) are not linear in most regions, which makes the denoising process difficult.
**Q2: The error terms.**
The reverse beginning, discretization, and approximated score errors are all important. Since the sampling and learning processes are relatively independent, current works usually decouple these parts. For the sampling process, previous works assume an $L_2$-accurate score [1] [3]. For the learning process, some works analyze how to use a neural network to learn the score [4]. We will add a discussion about the error terms.
**Q3: The discussion of Conforti et al.**
This work considers VPSDE with the reverse SDE under a finite Fisher information assumption. Though this work relies heavily on properties of the OU process, it is interesting future work to analyze whether the connection in Q5 can be used to improve the results for VESDE. We also note that our work analyzes a broader setting (reverse SDE and PFODE). We will add a discussion.
**Q4: The novelty of our tangent-based method.**
The technical novelty is our tangent-based lemma. For the PFODE, Lem. 6.3 contains $\exp{(\int_{0}^{t_K}\beta_{T-u}/\tau du)}$. For VPSDE ($\beta_t =1$ and $\tau =1$), there is an additional $\exp{(T)}$, which indicates that the previous lemma cannot deal with the reverse PFODE even in the VPSDE setting. To avoid this term, we use the variance-exploding property of VESDE (for example, the above term is a constant for $\beta_t=t$ and $\tau = T^2$) and achieve a polynomial $T$ dependence in the final results.
**Q5: The connection between VP and VESDE.**
As shown in Montanari, VESDE and VPSDE are equivalent up to a change of time, which indicates that the discretization analysis of these models is similar under the reverse SDE. Our universal analysis (W2) also reflects this phenomenon. However, the other key point is the balance of the first two error terms, where pure VESDE performs badly. Hence, we further propose the general drifted VESDE formula, prove the optimal choice of $\tau$ for a given $\beta_t$, and provide better results. We will make the discussion of W2 and Q5 clearer.
**Limitation.**
Thanks for the comments on Limitation. We will discuss the benefit of our drifted VESDE formula (including VPSDE and pure VESDE) and its potential to achieve the SOTA performance.
[1] Chen et al. Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. ICLR 2023.
[2] Lee et al. Convergence for score-based generative modeling with polynomial complexity. NeurIPS 2022.
[3] De Bortoli, V. Convergence of denoising diffusion models under the manifold hypothesis. TMLR.
[4] Chen et al.. Score approximation, estimation and distribution recovery of diffusion models on low-dimensional data. ICML 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answer and the additional experiments. I would like to keep my score to 6.
I would also like to point out that I think there was a misunderstanding regarding W2.
When I am talking about VESDE (Brownian motion) I am talking about the following dynamics.
$$ \mathrm{d} \mathbf{X}_t = g(t) \mathrm{d} \mathbf{B}_t . $$
To the best of my knowledge this is not described by (3). I have no doubt that VPSDE can be recovered. Regarding VESDE I am less sure. For example, in the bound provided by the authors in the rebuttal, to get to VESDE (I insist on having a zero drift) I need $\tau \to +\infty$. Unless I am mistaken, this means that the bound provided by the authors blows up.
This is the discussion I was asking for.
I also think that the authors missed my point in Q2. My point was that improving on the discretization errors in SDE models might not be the right term to look at. Indeed, choosing VPSDE, VESDE (drifted or not), implies different sampling bounds (as illustrated by this work), but some of these choices also affect the learning of the score. It is hard to disentangle those parts.
Regarding Q1, I don't think that Karras et al. "prove" the optimality of VESDE (and a reference to a Figure 3 is not sufficient).
---
Reply to Comment 1.1.1:
Comment: Thanks again for your effort in reading our real-world experiments and rebuttal. We discuss each question in detail below.
(Question 1.) As shown at the end of W2 (a), we can obtain a polynomial sample complexity for pure VESDE (with zero drift). We use pure VESDE ($\sigma_t^2=t^2$) as an example:
$$
\mathrm{d} \mathbf{X}_t=\sqrt{2t} \mathrm{d} \mathbf{B}_t.
$$
In this case, the convergence guarantee is
$$
\begin{align}
\frac{\bar{D} }{T}+\frac{R^2\sqrt{d}}{\delta^4}\sqrt{\bar{\gamma}_KT^4}+\epsilon\_{\text{score}}\sqrt{T^2},
\end{align}
$$
whose bound does not blow up. However, the above bound still has difficulty in balancing the reverse beginning and discretization error term. Hence, we propose our drifted VESDE to balance these two terms.
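As a quick consistency check of the schedule above (our own sketch, not code from the paper): the pure VESDE $\mathrm{d}X_t=\sqrt{2t}\,\mathrm{d}B_t$ has marginal variance $\sigma_t^2=\int_0^t 2s\,\mathrm{d}s=t^2$, matching the $\sigma_t^2=t^2$ schedule.

```python
def variance(T, n=100_000):
    """sigma_T^2 = ∫_0^T g(s)^2 ds with g(s) = sqrt(2s), via midpoint rule."""
    h = T / n
    return sum(2.0 * (i + 0.5) * h for i in range(n)) * h

for T in (1.0, 3.0, 10.0):
    assert abs(variance(T) - T**2) < 1e-6  # matches sigma_T^2 = T^2
```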
For the drifted VESDE, as mentioned in your response, the formula cannot recover pure VESDE by setting $\tau \rightarrow +\infty$, and we need the above independent theorem to give a guarantee for pure VESDE. However, we also note that with a conservative $\beta_t$ (for example, $\beta_t=t$ when $\tau =T^2$), the performance of the conservative drifted VESDE is similar to but better than pure VESDE (the red and brown lines in Sec. 7.1, Fig. 2, and our real-world experiments). Hence, we present the sampler of the drifted VESDE with the reverse SDE for the sake of coherence. We will add the above result for pure VESDE as an independent theorem and make our presentation clearer according to your comments in the next version.
(Question 2.) The learning process of the score function is an important part of the analysis of diffusion models. However, when considering the sample complexity, most current theoretical works assume an $L_2$-accurate score function [1] [2] [3] [4] [5], and we follow this standard assumption in our work. In this work, we take the first step in analyzing the strong performance of VE-based models from the sample complexity perspective. As mentioned in your response, the choice of forward process also influences the score learning process. Hence, an end-to-end analysis (considering the sampling and learning processes simultaneously) for VE-based models is a very interesting direction for future work, and we will add a detailed discussion to our future work paragraph.
(Question 3.) When discussing Karras et al., we want to show that the linear solution trajectory of VESDE ($\sigma_t^2=t^2$) is friendlier than that of VPSDE in the sampling phase. This strong performance has been demonstrated in many areas, such as one-step consistency models (including follow-up works) [6] and video generation models [7]. Consistency models use the forward process proposed by Karras et al. because of its linear trajectory. Stable Video Diffusion also uses this noise schedule to obtain a pre-trained base model (Sec. 4.1 of their paper). We will improve our presentation according to your comments and add the above discussion to our introduction paragraph.
[1] Chen et al. Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. ICLR 2023.
[2] Chen, S., Chewi, S., Lee, H., Li, Y., Lu, J., & Salim, A. (2024). The probability flow ode is provably fast. *Advances in Neural Information Processing Systems*, *36*.
[3] De Bortoli, V. Convergence of denoising diffusion models under the manifold hypothesis. TMLR.
[4] Benton, J., Bortoli, V. D., Doucet, A., & Deligiannidis, G. (2024). Nearly d-linear convergence bounds for diffusion models via stochastic localization.
[5] Lee et al. Convergence for score-based generative modeling with polynomial complexity. NeurIPS 2022.
[6] Song, Y., Dhariwal, P., Chen, M., & Sutskever, I. (2023). Consistency models. *arXiv preprint arXiv:2303.01469*.
[7] Blattmann, A., Dockhorn, T., Kulal, S., Mendelevitch, D., Kilian, M., Lorenz, D., ... & Rombach, R. (2023). Stable video diffusion: Scaling latent video diffusion models to large datasets. *arXiv preprint arXiv:2311.15127*. | Rebuttal 1:
Rebuttal: # The Real-World Experiments and Discussion (CelebA 256)
Once again, we thank all reviewers for their valuable suggestions on real-world experiments. In this part, we show that our conservative drifted VESDE improves the quantitative results (IS (higher is better) and aesthetic score [1] (1-10, higher is better)) of pure VESDE **without training** on the CelebA 256 dataset (a human face dataset). From the qualitative perspective, similar to our synthetic experiments (Sec. 7.2), we observe that the drifted VESDE generates more details than pure VESDE.
(a) Setting. In this experiment, we adapt the well-known VESDE implementation [2] and run experiments on the CelebA dataset (size: 256\*256\*3). More specifically, we use the ve/celebahq_256_ncsnpp_continuous checkpoint provided by [2] and modify the sampling process strictly according to our drifted VESDE. To ensure a fair comparison, we fix the random seed and use the reverse PFODE process. Then, we generate 10000 face images to calculate the metrics. We note that when using this checkpoint and the pure VESDE pipeline provided by [2], the model generates almost pure noise with a certain probability. Hence, we use an aesthetic predictor [1] (aesthetic score >= 5.5) to filter the generated images and ensure that they are clear faces.
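The filtering step just described can be sketched as follows (our own minimal illustration; `aesthetic_score` is a hypothetical stand-in for the LAION aesthetic predictor [1], not its actual API):

```python
def filter_images(images, aesthetic_score, threshold=5.5):
    """Keep only generated samples whose aesthetic score passes the threshold."""
    return [img for img in images if aesthetic_score(img) >= threshold]

# Toy stand-in scores for three generated samples (placeholder values).
scores = {"img_a": 6.1, "img_b": 4.9, "img_c": 5.5}
kept = filter_images(sorted(scores), scores.__getitem__)
assert kept == ["img_a", "img_c"]
```

The same threshold (5.5) is applied identically to both samplers, so the filter does not bias the comparison between drifted and pure VESDE.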
(b) Discussion. From the qualitative perspective, as shown in the experiment results (**please open our PDF to see the generated images**), the images generated by our drifted VESDE show more detail (such as hair and beard details). On the contrary, since pure VESDE cannot deal with large variance, the images it generates appear blurry and unrealistic in these details. Quantitatively, our drifted VESDE achieves an aesthetic score of **5.813** and an IS of **4.174**, both better than the baseline pure VESDE (aesthetic score 5.807, IS 4.082).
In conclusion, the real-world experiments show the potential of our drifted VESDE, and we will make it clearer in the next version of the paper.
We note that the goal of these experiments is to show that our conservative drifted VESDE is plug-and-play without training, rather than to achieve SOTA performance. Hence, we focus on the relative improvement over the baseline [2]. There are two interesting empirical directions for future work. For the conservative drifted VESDE, we will run experiments on the SOTA pure VESDE models [3] and improve their results without training. For the aggressive drifted VESDE, since this process makes a larger modification than the conservative one, we need to train a new score function instead of directly using a pre-trained one to achieve better results. We will add these discussions to the future work paragraph.
[1] Christoph Schuhmann. Laion-aesthetics. 2022.
[2] Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., & Poole, B. (2020). Score-based generative modeling through stochastic differential equations. *arXiv preprint arXiv:2011.13456*.
[3] Karras, T., Aittala, M., Aila, T., & Laine, S. (2022). Elucidating the design space of diffusion-based generative models. *Advances in neural information processing systems*, *35*, 26565-26577.
Pdf: /pdf/d80636dffd3848f9c2c40ede96550ba4278e81b6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
In-Context Symmetries: Self-Supervised Learning through Contextual World Models | Accept (poster) | Summary: This paper proposes ContextSSL, a novel self-supervised learning framework designed to enhance the existing joint embedding architecture by incorporating task-specific context. The main idea is to dynamically adapt symmetries by leveraging context in SSL. Consequently, ContextSSL can adapt to varying task symmetries without requiring parameter updates. The authors demonstrate the efficacy of ContextSSL on 3DIEBench and CIFAR10, showing that ContextSSL can selectively learn invariance or equivariance to transformations while maintaining general representations.
Strengths: **[S1]** This paper suggests an interesting direction for SSL, proposing that self-supervised representation incorporating context can enable dynamic adaptation to varying task symmetries.
**[S2]** The overall writing is smooth and easy to follow.
Weaknesses: **[W1]** It seems possible for invariant-based approaches to achieve context lengths of 0 to 126 by training a linear classifier. Why are these results ignored? More shots could also improve the performance of SimCLR and VICReg.
**[W2]** Although ContextSSL performs well on the augmentation prediction task, it underperforms compared to other important baselines in linear classification, which is the most common task.
**[W3]** It seems that ContextSSL can be trained on a single augmentation type, while other Equivariant-based approaches benefit from multiple augmentations.
Technical Quality: 3
Clarity: 3
Questions for Authors: **[Q]** Regarding the [W2], can ContextSSL benefit from few-shot classification, e.g., ImageNet accuracy of models trained with 1% of labels [1]? I believe few-shot can serve as context, and in this setup, ContextSSL might outperform other baselines.
[1] Chen, Ting, et al. "A simple framework for contrastive learning of visual representations." International conference on machine learning. PMLR, 2020.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: They addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and feedback, which help us improve the work. We believe a few points of confusion may have led to the given rating. We have endeavored to address your concerns as concretely as possible and ask for your careful consideration of our clarifications. All of the discussions below will be added to the paper to improve its clarity.
---
> Reporting results for invariant baselines for context length 0 to 128
The invariant and equivariant baselines do not utilize context in their training, and as a result, **their performance remains constant regardless of context length**. The reported performance for these baselines in Table 1 corresponds to zero-shot evaluations. We recognize the confusion caused by the centered values in the table. To clarify further, we have included supporting figures and tables in the attachment pdf that will also be added to the revised manuscript.
---
> Lower performance of ContextSSL on linear probe compared to baselines
Core to ContextSSL is its ability to selectively enforce invariance or equivariance based on context. Thus, our experiments test if ContextSSL can do so without sacrificing performance on standard benchmarks like linear probe accuracy. Achieving high linear probe accuracy is not the primary goal of this work.
- On the 3DIEBench dataset, ContextSSL achieves higher $R^2$ scores for rotation (0.744) and color (0.986) compared to all baselines (the highest being 0.671 for rotation and 0.975 for color). It also matches or exceeds other equivariant models in linear probe performance, as shown in Table 1. Unlike equivariant baselines, ContextSSL does this *without* training separate models for each equivariance. Moreover, in 3DIEBench, augmentations like rotation or color are independent of classification labels, indicating that equivariant models generally aren't expected to outperform invariant ones. To clarify this further, we will add additional clarification in the revised manuscript.
- In datasets where equivariance to transformations is crucial and correlated with labels, ContextSSL achieves superior performance, with an 83% linear probe classification accuracy compared to 72% for baselines like SimCLR (as shown in Table 4).
- As shown in Table 3, ContextSSL achieves an $R^2$ of 0.608 on rotation and 0.925 on color, significantly surpassing SimCLR's 0.459 and 0.371. Further, ContextSSL achieves this while also achieving a linear probe accuracy of 88.5%, comparable to SimCLR's 89.1%.
- Additional compelling evidence of ContextSSL's strong performance over baselines is presented in Table 2, Table 3, Table 4, and Figure 5 of the original manuscript.
**Please refer to the detailed response about key observations from Table 1 in the consolidated review above.**
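For context, the $R^2$ equivariance scores cited above are typically obtained by regressing the augmentation parameters (e.g., rotation angles) from the learned embeddings: a high $R^2$ indicates equivariance, and a near-zero $R^2$ indicates invariance. A minimal numpy sketch of such a probe on synthetic features (illustrative; not the paper's actual evaluation code):

```python
import numpy as np

def r2_linear_probe(Z, a):
    """Fit a linear map Z -> a by least squares and report the R^2
    of the fit; high R^2 means the augmentation parameters a are
    decodable from Z (equivariance), near 0 means invariance."""
    Z1 = np.hstack([Z, np.ones((len(Z), 1))])  # append a bias column
    W, *_ = np.linalg.lstsq(Z1, a, rcond=None)
    ss_res = ((a - Z1 @ W) ** 2).sum()
    ss_tot = ((a - a.mean(axis=0)) ** 2).sum()
    return float(1.0 - ss_res / ss_tot)

rng = np.random.default_rng(0)
angles = rng.uniform(0.0, 2 * np.pi, size=(256, 1))
# One synthetic feature dimension encodes the angle; the other is noise.
Z_equiv = np.hstack([angles + 0.01 * rng.normal(size=(256, 1)),
                     rng.normal(size=(256, 1))])
Z_inv = rng.normal(size=(256, 2))  # carries no angle information

r2_eq = r2_linear_probe(Z_equiv, angles)   # close to 1.0
r2_in = r2_linear_probe(Z_inv, angles)     # close to 0.0
```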
---
> ContextSSL can be trained on a single augmentation type, while other Equivariant-based approaches benefit from multiple augmentations.
Similar to other invariant and equivariant baselines, ContextSSL is indeed trained with multiple augmentations by using multiple augmentations for generating positive samples in the context.
As shown in Table 1, all equivariant baselines are trained to be equivariant to either 1) rotation, 2) color, or 3) both rotation and color. This requires training a separate model for each setting. ContextSSL, by contrast, trains a single model using two contexts—one corresponding to rotation and the other to color—thus using multiple augmentations. Depending on which context is used, the model dynamically enforces either invariance/equivariance to rotation or color. To avoid this confusion around Table 1, we show a different version of Table 1 in the attached rebuttal document (through Table 1 and Figure 1) that is hopefully clearer; it separates context length from the comparison (the baseline methods are independent of context length).
**Please refer to the detailed response about key observations from this table in the consolidated review above.**
---
> ContextSSL benefits in few shot classification setting
We are afraid that there seems to be some misunderstanding. The linear probing metrics in Table 1 show zero-shot results for both ContextSSL and other methods, which, indeed, form a fair comparison. ContextSSL outperforms both the invariant and equivariant baselines in terms of quantitative equivariance measures such as $R^2$ in Table 1, Mean Reciprocal Rank (MRR), and Hit Rate in Table 2. While achieving the highest linear probe accuracy is not the goal of this work, ContextSSL still demonstrates competitive performance in Table 1 and surpasses other baselines in Table 4.
To further emphasize this, we compare the linear probe accuracy of the *predictor* of ContextSSL across different context lengths with that of SimCLR. As shown in Table 4 in the attached rebuttal pdf and Table 19 in the Appendix of our manuscript, ContextSSL outperforms SimCLR in linear probe accuracy across all context lengths. Note that the invariant and equivariant baselines do not operate on context, and, as a result, the performance of SimCLR in this table remains constant regardless of context length.
---
---
Rebuttal Comment 1.1:
Title: Gentle reminder to respond to our rebuttal
Comment: Dear reviewer iSU1,
As the discussion period is drawing to a close, we wanted to kindly request your feedback on our rebuttal. In our response, we have carefully tried to address all your concerns and also included additional experiments to further demonstrate the strengths of our work. We would greatly appreciate it if you could provide your feedback at your earliest convenience.
Thank you for your time.
Best regards,
Authors | Summary: This work proposes to employ context modules to learn general representations such that invariance and equivariance to specific augmentations do not bias the representations. The method utilizes a module to learn to be both invariant or equivariant based on the context of the input augmentations, thus producing highly generalized features from learnt symmetries that are capable of preserving or disregarding a variety of transformations for the downstream tasks. The resulting performance leads to improved downstream evaluations in both invariant and equivariant settings, excelling above state-of-the-art in some settings.
Strengths: ⁃ The paper is generally well presented and written, describing a clear, well-reasoned problem statement supported by appropriate examples.
⁃ The resulting framework is original and highly significant in the field of SSL. Notably, the addition of the context module could allow for a significant shift in the real-world application of SSL.
⁃ Empirical results show significant improvement in equivariant downstream tasks, further justifying this work's significance in the field. Additionally, an extensive ablation and sensitivity analysis is performed, guiding the reader to a greater understanding of the method's behavior and the rationale behind implementation decisions.
⁃ Extensive details to support replication are provided.
Weaknesses: ⁃ How does the method handle more complex augmentation strategies, the proof of concept in the setting of 3DIE and CIFAR demonstrate strong performance yet these transformations, especially equivariance are highly controlled and unique to these datasets. It therefore would have been good to see more generalized augmentations including multiple combinations in the downstream context that better adapt to the real world setting and thus support the generalization claim of the work.
⁃ Doesn't the choice of context length for extracting representations at inference require significant supervision? Such an implementation requires the practitioner to construct augmentations per context and then provide that information for the downstream task. Please correct my understanding if it is incorrect.
⁃ Following from the previous point, this method claims not to hard-code symmetries. However, from my understanding, the method still hard-codes symmetries to some extent, as the practitioner selects which augmentations to use. In this case, the context enables separability between representations belonging to each group symmetry. The symmetries are still hand-coded, just conditioned on the learnt context that determines whether to be invariant or equivariant.
Minor:
⁃ Figure 1 and 2 could perhaps be made clearer or positioned differently in the paper. It is not overly clear or useful in supporting the written explanation.
⁃ Context length should be headed in the tables.
Technical Quality: 3
Clarity: 3
Questions for Authors: ⁃ I’m intrigued to understand how "out of context" augmentations for downstream tasks impact performance. I assume, given the automatically learnt symmetries, that applying a full context would result in better performance in downstream cases where the context is vastly different from the training data.
⁃ See weaknesses
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The limitations are appropriately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's thorough and insightful review, along with their positive feedback regarding the significance of our work for the SSL community, extensive evaluations and ablations, and our writing. In response, we have endeavored to address the reviewer's concerns as concretely as possible.
> Extension to complex augmentation strategies
We thank all the reviewers for raising this excellent question. It is indeed critical to evaluate our approach beyond self-supervised datasets, which use synthetic augmentations to enforce priors on the representation. To address this, we show that ContextSSL extends to naturally occurring symmetries and sensitive features in fairness and physiological datasets such as the MIMIC III [1] and UCI Adult [2]. To demonstrate this, we train ContextSSL to be selectively equivariant or invariant to gender by merely attending to different contexts. This is crucial; for instance, equivariance is needed for gender-specific medical diagnoses where different medicine dosages are required, while invariance is essential for fairness in tasks such as predicting hospital stay duration or medical cost. We present these results in Table 2 and Table 3 of the attached rebuttal document, with details in the caption. From Table 2, we can observe that ContextSSL learns equivariance to gender in one context, improving gender and medical diagnosis prediction for MIMIC-III. In another context, ContextSSL achieves higher invariance to gender, resulting in superior performance on fairness metrics like equalized odds (EO) and equality of opportunity (EOPP) for hospital stay (LOS) prediction. We observe similar results for fairness of income prediction in the UCI Adult dataset, as shown in Table 3 of the attached document.
[1] Johnson, A., T. Pollard, and R. Mark III. "MIMIC-III Clinical Database (version 1.4). PhysioNet. 2016." (2016).
[2] Arthur Asuncion and David Newman. UCI machine learning repository, 2007.
---
> Does the choice of context length require supervision at inference?
This is indeed a critical question. As shown in Table 1 and Table 4 of the attached rebuttal document, ContextSSL is robust to varying context lengths and generalizes well to longer contexts, eliminating the need for explicit supervision during inference. During training, we use random masking and subsequently test without masking, which enhances the model's robustness to varying context lengths. For example, although trained with an average context length of 9 under 90% data masking, the model extrapolates well to context lengths up to 128 during testing, as demonstrated in Table 1.
Furthermore, depending on the useful priors of different downstream tasks, one only needs to construct the corresponding context and use the *maximum* context length. The degree of equivariance or invariance in ContextSSL increases with context length and is highest at the maximum context length, as observed empirically through all our experiments.
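The random-masking scheme described above can be sketched in a few lines; `keep_prob` and the list representation of the context are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def mask_context(context, keep_prob=0.1, rng=None):
    """Randomly drop context entries during training so the model is
    exposed to many effective context lengths; 90% masking corresponds
    to keep_prob = 0.1. Representing the context as a plain list of
    entries is an assumption made for illustration."""
    rng = rng or np.random.default_rng(0)
    keep = rng.random(len(context)) < keep_prob
    return [c for c, k in zip(context, keep) if k]

kept = mask_context(list(range(128)))  # roughly 13 of 128 entries survive
```

At test time, masking is disabled and the model sees the full context, which is the regime described in the rebuttal.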
---
> Does our work still hard-code symmetries to some extent?
Indeed, we still need to know the set of symmetries at training time. However, our work is the first to move beyond these fixed symmetries, training a representation that can dynamically adapt to be invariant or equivariant to a subset of these transformations. This enables learning a single representation that performs well across various downstream tasks, eliminating the need to retrain a new model for each task. So far, we have tested this on environments like rotation and color or crop and blur. We believe that it is an important stepping stone towards learning from diverse contexts. In practice, ContextSSL could be trained to handle a larger set of transformations, covering the entire set of commonly used augmentations.
To demonstrate how ContextSSL extends beyond these synthetic transformations to naturally occurring symmetries, we conduct experiments on the fairness and physiological datasets such as the MIMIC III [1] and UCI Adult [2]. We present these results in Table 2 and Table 3 of the attached rebuttal document, with details in the caption.
**Please refer to the detailed response about the extension to naturally occurring features and symmetries in the consolidated review above and to Table 2 and Table 3 in the attached document.**
[1] Johnson, A., T. Pollard, and R. Mark III. "MIMIC-III Clinical Database (version 1.4). PhysioNet. 2016." (2016).
[2] Arthur Asuncion and David Newman. UCI machine learning repository, 2007.
---
> Minor issues and typographical errors
We thank the reviewer for their attention to detail. We will make the following corrections in our revised manuscript:
- *Positioning of Figure 1*: We will add more discussion on Figure 1 to highlight our approach clearly.
- *Header for context length in Table 1*: We agree with the reviewer and have improved Table 1 with supporting plots and tables, as shown in the attached rebuttal document (Table 1 and Figure 1).
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Many thanks for the detailed response that addresses many of the weaknesses identified and questions raised. I appreciate the addition of naturally occurring symmetries, and the rebuttal results demonstrate the capability of the method to adapt to such settings. It would also be beneficial to see more visual benchmarks, given that these were the main focus of the paper; however, I understand that time restrictions do not permit this. For future revisions, comparisons against methods such as EquiMod, E-SSL, and CARE on benchmark datasets would strengthen the findings. Additionally, the clarification on the context length is a useful addition.
I emphasise that all clarifications made during this rebuttal should be made in any revised manuscript to improve clarity of the work.
Given my already positive review, I for now maintain my score.
---
Reply to Comment 1.1.1:
Title: Thanks to Reviewer y5eJ
Comment: Thank you for the prompt response and recognition of our new experiments and discussions. We will incorporate all additional experiments and clarifications into our revised manuscript. The important concerns raised by you have been very valuable in enhancing the clarity of our work. We fully concur with the suggestion to include more vision benchmarks and are currently testing our approach on them. We will ensure that they are included in the revised manuscript. | Summary: This paper focuses on the problem of symmetry discovery in self-supervised learning. In particular, the goal is to learn models that are either sensitive to certain features like rotations and lightning or invariant to them, depending on the task. The authors propose to learn a world model that models transformations of the input images as a sequence of state, action, next state tuples. The major contribution is learning to adapt the representation of the world model based on the provided context of the task.
Strengths: 1. The authors present a novel combination of in-context learning and symmetry discovery. Their method successfully adapts its representation to be equivariant or invariant to different transformations
2. The experimental evaluation uses a large number of strong baselines in self-supervised representation learning and learning of symmetric representations. The proposed method is superior in its ability to be sensitive to or invariant to certain features based on the context of the task.
3. The paper contains an extensive ablation study to justify all components of the method.
Weaknesses: 1. I am not sure if it is meaningful to call this property equivariance: “if H(A|Z) is relatively small, the representation Z is nearly equivariant to the augmentation A”. Equivariance is specifically defined as the transformation of the input to the model having a predictable effect on the transformation of the output. This is different from simply having features that are predictive of a particular features (hence low entropy H(A|Z)). Would it be better to call this property something like sensitivity to a transformation?
2. Section 4.1 does not make a strong case for ContextSSL outperforming the baselines. The results in Table 1 are somewhat mixed. Table 2 is not explained well.
3. The paper does not make a clear case for the application of the proposed method outside of synthetic tasks. The 3DIEBench is artificially created to test equivariance and invariance to specific properties and the CIFAR-10 experiments do not actually demonstrate an improvement in classification accuracy. It is unclear how this method could be applied to more general and practical visual pre-training settings, such as CLIP [1] or DINO [2] self-supervised pre-training.
References:
[1] https://arxiv.org/abs/2103.00020
[2] https://arxiv.org/abs/2304.07193
## Comments:
* Table 1 is difficult to read. It is not immediately clear why ContextSSL is missing from the Rotation + Color section and why the context length != 14 fields for other methods are empty. Moreover, depending on the place in the table, a high or a low R^2 score could be the best result. That is very non-intuitive.
* Clipped sentence: “We further test this at For all our equivariant baselines on 3DIEBench”.
* The text “Contextual Self-Supervised Learning” in Figure 1 should be rotated by 180 degrees.
Technical Quality: 3
Clarity: 2
Questions for Authors: What is the path towards making this method discover or adapt to naturally occurring symmetries?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Despite stating “Limitations of our work are discussed in Section 5” in the paper checklist, the discussion of the limitations in Section 5 is insufficient.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for the time they put in to review our work. We are glad to see that they recognize several strengths in our work, including the novelty of our approach, comprehensive empirical evaluation using many baselines, and conducting thorough ablations. Below, we share our thoughts on the questions asked.
> H(A|Z) as a definition of equivariance
We fully concur with the reviewer's observations. As noted in the footnote of Section 2.1, we use the term "equivariance" in a somewhat relaxed sense to denote that learned features are sensitive to data augmentations. However, it is common practice in equivariant self-supervised learning [1, 2, 3, 4] to use this definition to enforce equivariance.
[1] Dangovski, Rumen, et al. "Equivariant contrastive learning." arXiv preprint arXiv:2111.00899 (2021).
[2] Lee, Hankook, et al. "Improving transferability of representations via augmentation-aware self-supervision." NeurIPS (2021): 17710-17722.
[3] Xie, Yuyang, et al. "What should be equivariant in self-supervised learning." CVPR. 2022.
[4] Scherr, Franz, Qinghai Guo, and Timoleon Moraitis. "Self-supervised learning through efference copies." NeurIPS (2022): 4543-4557.
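To make the reviewer's distinction concrete: strict equivariance requires the representation to commute with the group action, f(g·x) = ρ(g)·f(x), whereas a low H(A|Z) only requires the transformation to be decodable from the features. A toy numpy check (the functions `f_equiv` and `f_sens` are hypothetical examples, not models from the paper):

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

x, theta = np.array([1.0, 0.5]), 0.7

# Strict equivariance: f commutes with the group action (here rho = rot).
f_equiv = lambda v: 2.0 * v
assert np.allclose(f_equiv(rot(theta) @ x), rot(theta) @ f_equiv(x))

# "Sensitivity": the angle is decodable from f(x) (so H(A|Z) is low),
# yet f does not commute with rotations, so it is not strictly equivariant.
f_sens = lambda v: np.array([np.arctan2(v[1], v[0]), 0.0])
assert not np.allclose(f_sens(rot(theta) @ x), rot(theta) @ f_sens(x))
```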
> Confusion regarding Table 1 and explanation about the benefits of ContextSSL
We acknowledge Table 1 may be confusing as presented in the paper, and will improve the presentation. Table 1 compares ContextSSL with baselines and shows the effect of context length. We believe the empirical success of ContextSSL is significant, and to show that, we show a different version of Table 1 in the attached rebuttal document (through Table 1 and Figure 1), that is hopefully clearer; it separates context length from the comparison (the baseline methods are independent of context length).
**Please refer to the detailed response about key observations from this table in the consolidated review above.**
- On the 3DIEBench dataset, ContextSSL achieves higher $R^2$ scores for rotation (0.744) and color (0.986) compared to all baselines (the highest being 0.671 for rotation and 0.975 for color). It also matches or exceeds other equivariant models in linear probe performance. Unlike equivariant baselines, ContextSSL does this *without* training separate models for each equivariance.
- ContextSSL seamlessly enforces equivariance or invariance to rotation or color by merely paying attention to different contexts, as shown in Figure 1 of the attached document. Thus *one* model can align the learned representation to priors that are beneficial for different downstream tasks.
- Additional compelling evidence of ContextSSL's strong performance over baselines is presented in Table 2, Table 3, Table 4, and Figure 5 of the original manuscript.
---
> Confusion regarding Table 2
We provide more details regarding Table 2 here and will add the corresponding discussion in the paper.
Table 2 shows that ContextSSL outperforms baseline approaches on two key metrics for equivariance: Mean Reciprocal Rank (MRR) and Hit Rate at k (H@k) [1]. ContextSSL's performance on these metrics consistently improves with increasing context length, demonstrating adaptation to rotation-specific features. To put these numbers into perspective, a H@1 score of 0.29 for ContextSSL signifies that the first nearest neighbor is the target embedding 29% of the time. In contrast, this occurs only 5% of the time for EquiMod and SEN, which is marginally better than the 2% expected by random chance. Notably, ContextSSL surpasses the baseline performances even with zero context, demonstrating its ability to learn equivariance without any contextual information.
[1] Garrido, Quentin, Laurent Najman, and Yann Lecun. "Self-supervised learning of split invariant equivariant representations." arXiv preprint arXiv:2302.10283 (2023).
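For readers unfamiliar with these metrics: MRR averages the reciprocal rank of the target embedding among each query's nearest neighbors, and H@k is the fraction of queries whose target appears in the top k. A minimal sketch, assuming the ranks have already been computed from a nearest-neighbor search (illustrative, not the paper's evaluation code):

```python
import numpy as np

def mrr_and_hit_at_k(ranks, k=1):
    """Retrieval metrics from 1-indexed ranks of the target
    (ground-truth transformed) embedding for each query."""
    ranks = np.asarray(ranks, dtype=float)
    mrr = float((1.0 / ranks).mean())
    hit_k = float((ranks <= k).mean())
    return mrr, hit_k

# Ranks of the target embedding for five hypothetical queries:
mrr, h1 = mrr_and_hit_at_k([1, 3, 1, 10, 2], k=1)
# mrr ≈ 0.587; h1 = 0.4 (the target is the first neighbor 40% of the time)
```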
---
> Extension beyond synthetic augmentations and towards adapting to naturally occurring symmetries
We thank the reviewer for raising this excellent question. It is indeed critical to evaluate our approach beyond self-supervised datasets, which use synthetic augmentations to enforce priors on the representation. To address this, we show that ContextSSL extends to naturally occurring symmetries and sensitive features in fairness and physiological datasets such as the MIMIC III and UCI Adult. To demonstrate this, we train ContextSSL to be selectively equivariant or invariant to gender by merely attending to different contexts. This is crucial; for instance, equivariance is needed for gender-specific medical diagnoses where different medicine dosages are required, while invariance is essential for fairness in tasks such as predicting hospital stay duration or medical cost. We present these results in Table 2 and Table 3 of the attached rebuttal document, with details in the caption.
**Please refer to the detailed response about the extension to naturally occurring features and symmetries in the consolidated review above and to Table 2 and Table 3 in the attached document.**
---
> Additional Limitations
We would like to highlight some additional limitations of our work.
- So far, ContextSSL has been evaluated on medium-sized datasets such as 3DIEBench and CIFAR. Its scaling law to massive datasets and more diverse environments is left to be explored in the future with more available compute.
- Using the transformer network to learn a contextual world model increases training and memory costs, though these are relatively small compared to the encoding process.
- So far, we have tested ContextSSL on pairs of transformations (rotation and color, or crop and blur). As future work, we aim to expand our testing to continuous environments, moving beyond the constraints of finite settings.
---
> Minor errors
We thank the reviewer for their attention to detail. We will make these corrections in our revised manuscript.
---
---
Rebuttal 2:
Title: Gentle reminder to respond to our rebuttal
Comment: Dear reviewer WVFD,
As the discussion period is drawing to a close, we wanted to kindly request your feedback on our rebuttal. In our response, we have carefully tried to address all your concerns and also included additional experiments to further demonstrate the strengths of our work. We would greatly appreciate it if you could provide your feedback at your earliest convenience.
Thank you for your time.
Best regards,
Authors
---
Rebuttal Comment 2.1:
Comment: Dear reviewer WVFD,
Understanding that you may be busy, and with the author-reviewer discussion period coming to a close, we would like to take this last opportunity to summarize the major updates we made during the rebuttal to address your concerns.
1. In response to your concern about the applicability of ContextSSL beyond synthetic augmentations, we demonstrated that ContextSSL extends to naturally occurring symmetries and sensitive features in fairness and physiological datasets such as MIMIC III [1] and UCI Adult [2].
2. Based on your concerns regarding Table 1, we replaced it with a version featuring clearer annotations and captions, highlighting key strengths of our approach, ContextSSL.
3. We clarified the definition of equivariance and its connection to H(A|Z).
4. We provided additional clarification around Table 2 and added other limitations and future directions of our work.
*We hope these revisions address your concerns and would be happy to answer any further questions. If you find our revisions satisfactory, we hope that you would kindly consider re-evaluating our work.*
Best,
Authors | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and expertise in evaluating our paper. Their perceptive remarks and constructive feedback have been valuable in improving our work. In response, we have made several key revisions to address their concerns and have conducted additional experiments to enhance the support for our claims. Below is a brief summary of the key revisions:
---
> Testing ContextSSL beyond synthetic augmentations and towards naturally occurring symmetries?
We thank all the reviewers for raising this excellent question. It is indeed critical to evaluate our approach beyond self-supervised datasets, which use synthetic augmentations to enforce priors on the representation. To address this, we show that ContextSSL extends to naturally occurring symmetries and sensitive features in fairness and physiological datasets such as the MIMIC III [1] and UCI Adult [2]. To demonstrate this, we train ContextSSL to be selectively equivariant or invariant to gender by merely attending to different contexts. This is crucial; for instance, equivariance is needed for gender-specific medical diagnoses where different medicine dosages are required, while invariance is essential for fairness in tasks such as predicting hospital stay duration or medical cost. We present these results in Table 2 and Table 3 of the attached rebuttal document, with details in the caption. From Table 2, we can observe that ContextSSL learns equivariance to gender in one context, improving gender and medical diagnosis prediction for MIMIC-III. In another context, ContextSSL achieves higher invariance to gender, resulting in superior performance on fairness metrics like equalized odds (EO) and equality of opportunity (EOPP) for hospital stay (LOS) prediction. We observe similar results for fairness of income prediction in the UCI Adult dataset, as shown in Table 3 of the attached document.
[1] Johnson, A., T. Pollard, and R. Mark III. "MIMIC-III Clinical Database (version 1.4). PhysioNet. 2016." (2016).
[2] Arthur Asuncion and David Newman. UCI machine learning repository, 2007.
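For clarity, the fairness metrics mentioned above compare error rates across groups: equality of opportunity (EOPP) measures the true-positive-rate gap, while equalized odds (EO) also accounts for the false-positive-rate gap. A minimal sketch under one common convention (definitions vary in the literature; this is illustrative, not the evaluation code used in the rebuttal experiments):

```python
import numpy as np

def fairness_gaps(y_true, y_pred, group):
    """EOPP and EO gaps between two groups for binary predictions."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    def rate(g, label):  # P(y_pred = 1 | y_true = label, group = g)
        mask = (group == g) & (y_true == label)
        return y_pred[mask].mean()
    tpr_gap = abs(rate(0, 1) - rate(1, 1))  # EOPP: TPR difference
    fpr_gap = abs(rate(0, 0) - rate(1, 0))
    return tpr_gap, max(tpr_gap, fpr_gap)   # EO: worst of the two gaps

# Hypothetical predictions for eight individuals in two groups:
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
eopp, eo = fairness_gaps(y_true, y_pred, group)  # both gaps are 0.5
```

Smaller gaps indicate a fairer classifier, which is what invariance to the sensitive feature aims to achieve.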
---
> Clarifications about key results of ContextSSL and Table 1 of the manuscript
We acknowledge that in the initial manuscript, there was some confusion surrounding Table 1, which may not have effectively communicated the strengths of our approach. To address this, we present an improved version of Table 1 in the attached rebuttal document (through Table 1 and Figure 1). The new table demonstrates the empirical success of ContextSSL over invariant and equivariant baselines, and the new figure highlights ContextSSL's dynamic adaptability by paying attention to different contexts. Key observations from the Table are as follows:
- With context corresponding to rotation and color, respectively, ContextSSL achieves higher $R^2$ scores for rotation (0.744) and color (0.986) compared to all baselines (the highest being 0.671 for rotation and 0.975 for color). This indicates that it enforces equivariance to rotation and color in their respective contexts. ContextSSL also matches or exceeds other equivariant models in linear probe performance. Unlike equivariant baselines, ContextSSL does this *without* training separate models for each augmentation group.
- With contexts of rotation and color, ContextSSL achieves invariance to the other transformation, i.e., color ($R^2$ of 0.023) and rotation ($R^2$ of 0.344), respectively, comparable to SIE's $R^2$ values of 0.011 for color and 0.304 for rotation. However, ContextSSL achieves higher linear probe classification accuracy while training a single model, unlike SIE and other equivariant baselines that require two trained models, one for rotation and one for color.
- ContextSSL seamlessly enforces equivariance or invariance to rotation or color by merely paying attention to different contexts, as shown in Figure 1 of the attached document. Thus *one* model can align the learned representation to priors that are beneficial for different downstream tasks.
- Additional compelling evidence of ContextSSL's strong performance over baselines is presented in Table 2, Table 3, Table 4, and Figure 5 of the original manuscript.
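For readers unfamiliar with the $R^2$ scores quoted above: they come from a probe that tries to recover the transformation parameter from the learned representation, so high $R^2$ indicates equivariance and near-zero $R^2$ indicates invariance. A toy 1-D sketch of that idea (synthetic features and names, not the paper's actual multi-dimensional probe):

```python
import random

rng = random.Random(0)

def r2_probe(z, t):
    # 1-D linear probe: fit t ~ a*z + b by least squares and report R^2
    n = len(z)
    mz, mt = sum(z) / n, sum(t) / n
    cov = sum((zi - mz) * (ti - mt) for zi, ti in zip(z, t))
    var_z = sum((zi - mz) ** 2 for zi in z)
    var_t = sum((ti - mt) ** 2 for ti in t)
    a = cov / var_z
    ss_res = sum((ti - (a * (zi - mz) + mt)) ** 2 for zi, ti in zip(z, t))
    return 1.0 - ss_res / var_t

t = [rng.uniform(0, 1) for _ in range(200)]            # e.g. rotation magnitudes
z_equivariant = t[:]                                   # feature tracks the transform
z_invariant = [rng.uniform(0, 1) for _ in range(200)]  # feature ignores the transform
```

An equivariant feature yields $R^2$ near 1 while an invariant one yields $R^2$ near 0, matching how the table contrasts the two contexts.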
---
Pdf: /pdf/ddc1790af2b63bb96d2d498bc50ade438ddbe4f0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Neur2BiLO: Neural Bilevel Optimization | Accept (poster) | Summary: This paper proposes two approximate methods for solving constrained, mixed-integer, non-linear bilevel optimization problems. The core idea is to convert the bilevel problem into a single-level problem by clever use of neural networks trained offline by solving single-level optimization problems. The upper-level approach trains a NN to predict, for a fixed leader decision, the leader's objective value assuming the follower acts optimally. A single-level optimization problem can then be solved by replacing the lower-level problem with the NN. The other approach uses the value-function reformulation for bilevel problems and aims to train a neural network to learn the optimal value of the lower-level problem, given a leader decision.
The performance/capability tradeoffs for each method across different classes of bilevel problems are thoroughly discussed, and some theoretical analysis is provided which depends on the approximation error of the true objective/value function. Importantly, the algorithm has a "post-processing" step where the approximate solutions from Neur2BiLO are refined to ensure feasibility of the original bilevel problem.
Neur2BiLO is thoroughly evaluated on several challenging benchmark problems to highlight the generality and effectiveness compared to exact baselines and other learning-based methods.
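As a rough illustration of the lower-level approximation described above: training data can be collected offline by sampling leader decisions and solving the follower problem to optimality for each. The sketch below uses a toy knapsack-interdiction-style follower solved by brute force (all names and instance data are invented for illustration; Neur2BiLO itself trains a neural network on such pairs and embeds it in a single-level program):

```python
import itertools, random

# Toy interdiction setting: leader interdicts items (x[i] = 1 removes item i),
# the follower then solves a small knapsack over the remaining items.
values  = [6, 5, 4, 3]
weights = [4, 3, 2, 1]
CAP = 5

def follower_value(x):
    # brute-force optimal follower response for a fixed leader decision x
    best = 0
    for y in itertools.product((0, 1), repeat=len(values)):
        if any(xi and yi for xi, yi in zip(x, y)):
            continue  # interdicted items are unavailable
        if sum(w * yi for w, yi in zip(weights, y)) <= CAP:
            best = max(best, sum(v * yi for v, yi in zip(values, y)))
    return best

rng = random.Random(0)
data = []
for _ in range(20):
    x = tuple(rng.randint(0, 1) for _ in values)
    data.append((x, follower_value(x)))  # (leader decision, follower value) pair
```

Each `(x, value)` pair is one training sample for the value-function surrogate; the surrogate then replaces the follower problem in the single-level reformulation.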
Strengths: This is a very strong submission overall and was enjoyable to review. In particular, the writing is of extremely high quality given the technical nature of the paper. The methodology is clearly written, but importantly, the discussion of related works, experimental setup, limitations and analysis of results are presented very cleanly.
Neur2BiLO is comprehensively evaluated against exact methods and other learning-based methods, both experimentally and in the discussion. The approach is relatively straightforward but seems to integrate machine learning into existing algorithms and results for bilevel optimization in a very elegant and principled way. In particular, the practicality of training the method and its effectiveness in practice, as shown in the experiments, are very appealing.
Weaknesses: This is a minor weakness, but none of the problems evaluated have coupled constraints. It would be great to see a case where Assumption 1 i) is satisfied but the constraints are coupled.
The theory around the approximation error to the true underlying functions is discussed in Sec. 3.1, which is nice, but it is unclear whether, for this class of problems, the optimal value function of the follower's problem is actually smooth. I think it is important to at least have some discussion of this to caveat the bounds, since the approximation error in (8) may not be attainable.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Lines 161-173: My understanding is that this is a sensible heuristic approach which refines the approximation to satisfy constraints 2b, 2c imposed in (1). Specifically, there is no guarantee that the solution is optimal, but it is probably a good solution which is feasible. Can you please confirm or correct my intuition? It was not completely clear from reading the paper whether or not this is the case.
- This relates to one of the weaknesses. Is it reasonable to assume in general that the value function of the lower level problem or the optimal objective function of the outer level function is smooth?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Limitations have been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review. Below, we provide responses to each of the weaknesses and questions.
**Weaknesses**:
- Indeed, none of the problems we study in our experiments contain coupling constraints. We have not found bilevel optimization benchmarks with coupling constraints in the literature. For example, the bilevel mixed-integer linear solver of Fischetti et al. [1] is evaluated on many problems, none of which have coupling as far as we could tell. We will discuss this in the limitations of the paper. Please also note that in the presence of coupling constraints the upper-level approximation algorithm we developed is not applicable since the coupling constraints cannot be modeled. However, the lower-level approximation can certainly be used.
- Regarding the smoothness of the optimal value function (OVF): unfortunately, even in the simplest possible case, namely when the follower problem is a linear optimization problem with continuous decision variables $y$, the OVF can be non-smooth. Indeed, the OVF of a linear problem is piecewise linear and convex in the right-hand-side parameters of the constraints, and piecewise linear and concave in the objective parameters; see [2]. Hence, if the leader variable $x$ appears either on the right-hand side of the constraints or in the objective function, the optimal value function can be piecewise linear and hence non-smooth. Even worse, if the leader variables appear in the constraint matrix of the follower, the optimal value function can be discontinuous. Certain problem structures exist where the optimal value function of the follower problem is smooth, but we do not see why this would be important for our method. In particular, the approximation error in (8) that you mention is not affected by the smoothness of the function. In fact, a neural network with ReLU activations is itself a piecewise linear, and hence non-smooth, function; at least theoretically, it could therefore achieve an approximation error of $\alpha=0$ in (8). Could you clarify your point about why the smoothness of the OVF would be important for our work?
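To make the non-smoothness point above concrete: a related mechanism already appears with integer follower variables, where the optimal value function is a step function of a leader-controlled right-hand side. A toy brute-force sketch (instance data invented for illustration, not from the paper):

```python
# Follower: max 3*y1 + 2*y2  s.t.  2*y1 + y2 <= b,  y in {0,1}^2.
# Its optimal value jumps as the right-hand side b grows, so the
# optimal value function is discontinuous (a step function) in b.
def follower_ovf(b):
    return max(3 * y1 + 2 * y2
               for y1 in (0, 1) for y2 in (0, 1)
               if 2 * y1 + y2 <= b)

steps = [follower_ovf(b) for b in (0, 1, 2, 3)]  # jumps at b = 1, 2, 3
```

A piecewise-linear ReLU network can still fit such targets well away from the jump points, which is consistent with the rebuttal's argument that smoothness is not needed for the bound in (8).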
**Questions**:
- Yes, you are completely right. There is no guarantee that the solutions returned by our methods are optimal, but in Lines 161-173 we discuss under which assumptions our methods can guarantee a feasible solution. Note that the only case which can lead to an infeasible solution $x^\star$ is when only Assumption 1(ii) is satisfied and we use the upper-level approximation method. In all other cases the procedure guarantees feasibility. Note also that for the lower-level approximation, feasibility is always ensured. We will make sure to clarify this point in the paper.
- Regarding smoothness, please see our answer above.
**References**:
- [1] Fischetti, M., Ljubić, I., Monaci, M., & Sinnl, M. (2017). A new general-purpose algorithm for mixed-integer bilevel linear programs. Operations Research, 65(6), 1615-1637.
- [2] Bertsimas, D., & Tsitsiklis, J. N. (1997). Introduction to linear optimization (Vol. 6, pp. 479-530). Belmont, MA: Athena Scientific.
---
Rebuttal Comment 1.1:
Title: Response to Authors' rebuttal
Comment: Thank you for your detailed response and for agreeing to address my concerns and limitations. I still think this is a very strong paper and would advocate for acceptance.
Re my concern about smoothness: My question was motivated by the universal approximation theorem and the general ability for a neural network to approximate a function which may not be continuous (the value function in this case). It would be nice to have a little bit of discussion around the approximation error (empirical insights or general discussion like in your response will suffice) for the other more general problems where discontinuous value functions may arise.
I appreciate that Theorem 3.1 is for a particular class of problems and does not require any assumption on the value function specifically.
---
Reply to Comment 1.1.1:
Title: Response to reviewer
Comment: Thank you for the clarification. We agree that including a discussion on this would be a great addition, and we will do so in the final version of the paper. | Summary: The paper tackles the bilevel optimization problem (BiLO) in general. BiLO can be seen as the problem of a leader computing a strategy (x) to commit to, such that the leader’s objective is optimized subject to the follower’s best response (y) to the committed strategy. The paper provides two ML-based approaches: one is the upper-level approximation, which learns to predict a mapping from x to the leader’s objective, and the other is the lower-level approximation, which learns to predict the utility of the follower given x. Both approaches reformulate the BiLO into a single-level mathematical program. The paper also provides approximation analyses for the lower-level approximation. The error term is an additive function of ML regression errors and a gap of f values due to discontinuity. In experiments, both approaches are evaluated on 4 different BiLO problems and compared against B&C, heuristics, and exact solvers.
Strengths: The paper presents two novel ML-based approaches to reduce the challenging problem of BiLO. The approaches are not complicated but new and interesting. Empirical results show that both methods are promising - they can find similar quality or better quality solutions than the baselines but with shorter runtime.
The paper is also easy to read.
Weaknesses: Note that I have reviewed this paper in the past and I am frankly surprised that it was rejected.
The authors have addressed most of my concerns last time.
These are not major weaknesses but still should be pointed out:
The effectiveness of this approach mainly depends on how closely ML can learn to approximate \Phi(x) or F(x,y*). In general, this looks like a very challenging task and it is the bottleneck of these approaches. In this case, it just happens that the regression task is easy for the four benchmarks.
Furthermore, the theoretical approximation guarantees are not that surprising given that the prediction error is assumed to be bounded.
Technical Quality: 3
Clarity: 3
Questions for Authors: I don't have any questions.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The applicability of this approach seems to be largely dependent on the ML regression errors. The authors have discussed limitations of their work in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review.
We acknowledge that the effectiveness certainly depends on how well the value functions can be approximated. Through our experiments, we demonstrate that this is relatively easy for the problems studied, and we note that these are already challenging problems within bilevel optimization. While optimization problems with a more complex structure may be harder to approximate, these would pose similar challenges for any method for bilevel optimization. Specifically, most methods require frequent evaluation of the upper- and/or lower-level problems, the addition of cutting planes, and branch-and-bound, all of which will likely suffer from similar issues as problem complexity increases. Furthermore, more challenging problems are even less likely to have well-defined problem-specific heuristics/algorithms.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I will keep my score. | Summary: The paper develops a neural method to solve bi-level optimization problems. It begins with a motivating application and then proposes NEUR2BILO, which utilizes two layers of neural networks to approximate solutions for the upper and lower levels.
Strengths: I found the work conducted to be substantial, supported by several proofs of components.
Additionally, the experiments evaluate performance across four scenarios: KIP, CNP, DRP, and DNDP, which adds further substance to the study.
Weaknesses: 1. The paper is not very friendly to readers who are not engaged in this specialized area. The problem addressed has many challenging issues, such as bi-level, non-linear, and mixed-variable components. I wonder how the authors address each of these challenges, as it is not very clear in the current presentation.
2. The paper introduces two models, NN^u and NN^l, designed to generate solutions for the upper and lower levels, respectively. Then, I would expect that the combination of the two models will solve the entire bi-level problem. However, in the experiments, NN^u and NN^l are compared in parallel, which is confusing.
3. I am struggling to identify the major contribution of this study to the learning to optimize area. The paper's organization makes it difficult to grasp the main points clearly.
4. The major body of the paper is not self-contained, requiring frequent switching between the main text and the appendix to obtain necessary information.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review. Below, we provide responses to each of the weaknesses.
1. For the first weakness, we will discuss two separate points.
- We acknowledge that this paper focuses on relatively specialized bilevel optimization problems/literature. We will add some basic references of the area in the introduction of the paper. However, it is a technical conference and bilevel optimization is indeed a technical topic, so the paper certainly does require some background in the area. Given the area of submission is Optimization (convex and non-convex, discrete, stochastic, robust) and all other reviewers commented that the clarity of the paper is high, we believe the presentation to be appropriate. However, if you have specific suggestions to improve the readability, please let us know and we can attempt to include them in the final revision.
- In terms of the non-linear and mixed-integer bilevel problems, our approach handles all of them similarly as the upper- and lower-level approximations proposed rely only on computing optimal objective values to the decomposed single-level problems for data collection, which can reliably be done with any mixed-integer solver. The trained models are then utilized directly as discussed in Section 3.1. For this reason, we view this as a strength of the paper, as standard algorithms for bilevel optimization typically require much stronger assumptions or are limited to specific classes of problems, whereas Neur2BiLO can be deployed quite generally.
2. $\text{NN}^u$ and $\text{NN}^l$ are separate approaches, based on the same principle of transforming the bilevel problem into a single-level problem. One does so by learning to approximate the upper-level objective and the other by learning to approximate the lower-level objective. Each of the two approaches stands on its own; they cannot be combined.
3. The major contribution of this paper is on the development of an efficient general learning-based algorithm for bilevel optimization, through the upper- and lower-level learning based approximations presented. In the final version of the paper, we will make the contributions more explicit. However, we provide a brief list of major contributions below.
- Generality: We propose a learning-based approach for bilevel problems particularly in the presence of integer variables or non-linear constraints/objectives. These are extremely challenging problems for classical optimization methods which require specialization to problem structure or significant computational effort.
- Efficiency & Efficacy: Neur2BiLO computes high-quality solutions on a variety of bilevel optimization problems, often within orders of magnitude less time than traditional methods for bilevel optimization. For larger, and more challenging problems, Neur2BiLO computes best-known solutions, in some cases by large margins, such as the 26% improvement over the state-of-the-art solutions for the donor-recipient problem.
- Theoretical Guarantees: We provide theoretical guarantees for solution quality in terms of an additive absolute optimality gap which mainly depends on the prediction accuracy of the regression model.
4. Given the page limit, we aimed to present the central aspects of our approach in the main body of the paper. We will aim to improve the readability and inclusion of material within the final version of the paper. If you have any suggestions on what material you believe would be most beneficial to move from the main paper to the appendix, we would be happy to take that more strongly into consideration. | Summary: The paper studied bilevel optimization problems with discrete decision variables. The proposed framework, Neur2BiLO, adopts a learning-based approach to solve such problems, which uses a trained neural network to approximate the leader's or follower's value function.
Strengths: The paper studied an important problem, and its overall structure is easy to follow. In recent years, research in bilevel optimization has become increasingly popular. However, most algorithms are dedicated to bilevel programs with continuous decision variables, like the continuous network design problem in transportation and the hyperparameter optimization problem in machine learning. Yet, bilevel optimization problems with discrete decision variables are equally important and have also found applications in various domains. Hence, the relevance and applicability of tackling such problems are well-justified.
Weaknesses: The proposed algorithm is very straightforward in its development and lacks established performance guarantees. This limitation typically necessitates robust numerical validation. Despite multiple experiments, the evidence provided does not convincingly demonstrate superiority over existing methods.
First, the experiments on the discrete network design problem (DNDP) are conducted on the relatively small Sioux-Falls network. I recommend testing on larger networks, such as the Chicago-Sketch [1], to better assess scalability.
[1] https://github.com/bstabler/TransportationNetworks
Second, a significant application, neural architecture search (NAS), is missing from the study. NAS and DNDP share conceptual similarities: one designs neural networks, and the other transportation networks. Traditionally, NAS often employs continuous relaxation for efficient architecture search using gradient descent. However, given that the proposed algorithm directly manages discrete bilevel optimization, it opens the possibility of exploring NAS without needing such relaxations.
[2] Liu, H., Simonyan, K., & Yang, Y. (2018, September). DARTS: Differentiable Architecture Search. In International Conference on Learning Representations.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can the proposed algorithm be extended to continuous network design problems? If not, please explain the reasons.
2. Can the proposed algorithm be applied to neural architecture search? If not, please explain the reasons.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please find my comments about the limitations in other parts of the review.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review. Below, we respond to each of the weaknesses and questions.
**Weaknesses**:
- Beyond the performance guarantees we provide in the paper, we believe that we have conducted a thorough numerical validation. In total, we test on 2250 instances with a range of instance sizes on a wide variety of benchmark problems (continuous/integer upper-/lower-level problems and non-linearity in both levels). For almost every set of larger (i.e., more difficult) instances, we achieve better solutions on average than exact/heuristic methods, within multiple orders of magnitude less time.
- For DNDP, our primary motivation for comparing the Sioux Falls instances is their use, as a publicly available benchmark set, in other bilevel DNDP papers (see [1] and corresponding GitHub repo). While there may be larger networks at the suggested GitHub repository, these do not have the bilevel instance parameters required. Namely, the travel speeds and capacities of candidate road segments that are to be considered for addition to the network. The author of [1] did this generation exercise using their domain knowledge for the Sioux Falls network, making it suitable for benchmarking.
- Regarding neural architecture search (NAS), Neur2BiLO can easily be adapted to this problem (see response to the below question for details). While this is certainly an important bilevel optimization problem within the machine learning community, this could also be argued for any bilevel optimization problem, such as the over 70 applications listed in [2]. Given our extensive numerical study already evaluates Neur2BiLO on four integer bilevel optimization problems of interest to the bilevel optimization community more broadly, we believe this alone to be a notable contribution and sufficient evaluation. In addition, we note that the generality and evaluation on four benchmarks with such variable structure is already more than the vast majority of bilevel algorithms evaluate on. Furthermore, most bilevel algorithms are not even as general as Neur2BiLO given they are not suitable for non-linear problems.
**Questions**:
- Neur2BiLO can be applied to purely continuous bilevel problems. However, these are generally well-solved by formulating the problem as a single-level problem using KKT-conditions or duality theory where applicable, or using first-order gradient methods as in the highly effective approach of BOME [3]. Generally, we focus on more challenging integer bilevel problems, wherein existing methods are intractable or have prohibitively long runtime.
- Yes, given a bilevel formulation of NAS, it can be applied to this problem. In this context, one can learn to approximate the value function of any training metric that needs to be optimized. Data collection can be done via sampling over architectures and training those specific architectures. As you mentioned, Neur2BiLO may be useful given the discrete nature of NAS. We would like to thank the reviewer for pointing this out as we believe studying Neur2BiLO and extensions for NAS would certainly be an interesting, and potentially high-impact, direction for future work and contribution. We believe a contribution such as this would likely warrant an independent paper given the large body of existing methods, and literature, much of which can be leveraged within the Neur2BiLO framework.
**References**
- [1] Rey, D. (2020). Computational benchmarking of exact methods for the bilevel discrete network design problem. Transportation Research Procedia, 47, 11-18.
- [2] Dempe, S. (2020). Bilevel optimization: theory, algorithms, applications and a bibliography. Bilevel optimization: advances and next challenges, 581-672.
- [3] Liu, B., Ye, M., Wright, S., Stone, P., & Liu, Q. (2022). Bome! bilevel optimization made easy: A simple first-order approach. Advances in neural information processing systems, 35, 17248-17262. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Overcoming Brittleness in Pareto-Optimal Learning Augmented Algorithms | Accept (poster) | Summary: In the burgeoning field of learning-augmented online algorithms, the ideal case is to design an algorithm that achieves a competitive ratio (CR) as a function of prediction errors, without knowing the prediction error in advance. To tackle this problem, many existing works focus on two extreme metrics: consistency (i.e., the CR when the prediction error is zero) and robustness (i.e., the worst CR over all prediction errors). These works aim to derive algorithms that achieve the Pareto-optimal trade-off between consistency and robustness.
This paper introduces and studies the concept of brittleness in Pareto-optimal learning-augmented algorithms, highlighting that the CR of Pareto-optimal algorithms may sharply degrade to the robustness guarantee under only a small prediction error. Specifically, the work formally defines and demonstrates the brittleness of the max-rate prediction in the one-way trading problem. To overcome this brittleness, it extends consistency to a notion of consistency by a profile F, which specifies the target CR across different prediction errors. A profile-based algorithm is proposed that checks whether a given profile F is feasible and, if so, finds an online algorithm that attains it.
Finally, the paper presents an adaptive algorithm that can improve the performance of Pareto-optimal algorithms for particular instances. However, this algorithm also suffers from brittleness.
Strengths: - This paper formalizes the brittleness issues in Pareto-optimal algorithms for the one-way trading problem. This helps to better understand the limitations of existing works in learning-augmented algorithms, which is particularly important for practical applications where perfect predictions are nearly impossible.
- The concept of consistency by profile is a natural yet useful extension of the classic consistency. The proposed algorithm, which can quickly identify feasibility and find a feasible online strategy, is an interesting extension of classic threshold algorithms.
Weaknesses: - The paper focuses solely on the brittleness issues for the one-way trading problem. It is unclear whether many other problems exhibit similar brittleness and whether profile-based algorithms can also be designed to address such brittleness. The paper mentions the contract scheduling problem in the appendix, which deserves more formal treatment. In contrast, the paper uses an entire section (Section 5) to propose an adaptive Pareto-optimal algorithm (which is still brittle) for one-way trading, which deviates from the central topic of overcoming brittleness in learning-augmented algorithms.
- The concept of consistency by profile is relatively easier to define for single-value predictions, raising the question of whether this concept would generalize when considering multiple predictions. Specifying the profile may also be challenging for users.
- The algorithmic techniques used to design profile-based algorithms seem similar to existing approaches, which involve designing thresholds to maintain the target consistency profile while being prepared for the worst-case scenario if exchange rates drop to 1. The complexity arises from maintaining consistency over the user-specified profile instead of a single perfect prediction point.
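For context on the threshold-based designs this point refers to: such algorithms maintain an allocation density $\Phi$ over the rate range and, whenever the running-maximum rate rises, sell the mass of $\Phi$ over the newly covered interval. A minimal sketch with a uniform density assumed purely for illustration (the actual algorithms derive $\Phi$ from the robustness/consistency targets; all names here are illustrative):

```python
M = 16.0  # assumed upper bound on the exchange rate

# Uniform allocation density over [1, M], for illustration only.
def phi(u):
    return 1.0 / (M - 1.0)

def integrate(f, a, b, n=1000):
    # midpoint-rule numerical integration
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def trade(rates):
    """Sell part of the unit budget whenever the running-max rate increases."""
    wealth, remaining, running_max = 0.0, 1.0, 1.0
    for r in rates:
        if r > running_max:
            amount = min(remaining, integrate(phi, running_max, r))
            wealth += amount * r
            remaining -= amount
            running_max = r
    return wealth + remaining * 1.0  # leftovers forced out at rate 1
```

The integral $\tilde{s}_i = \int \Phi(u)\,du$ that the review's last question touches on plays exactly the role of `amount` here: the cumulative budget committed as the observed maximum rate grows.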
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can you explain how Section 5 (adaptive brittle algorithm for one-way trading) is connected to the theme of the paper (overcoming brittleness in learning-augmented algorithms)? How significant are the results for contract scheduling in the appendix, and can these results be used to validate the generalizability of the proposed concepts and approaches?
- For a given user profile, there may exist multiple algorithms that can ensure such a profile; however, these algorithms may exhibit different instance-dependent performances. Can you make some formal statements on how to select the profile-based algorithm in practice? It seems the algorithm making $\omega_{l+1} = 1$ can be a good candidate algorithm.
- The definition of brittleness (in Definition 3.1) is specific to the maximum rate prediction of one-way trading. Can this be a more general definition for the brittleness of learning-augmented algorithms?
- in line 216 page 5, should the integral in $\tilde{s}_i = \int_1^{w_i}\Phi(u)du$ start from $0$?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. We comment first on “weaknesses”.
**1**. There are indeed other problems for which Pareto-optimal algorithms are brittle. Examples include 1-MAX search [19], [38], which is a much simpler version of one-way trading, online bidding [6], [23], and searching for a hidden target on the infinite line [4]. We believe there may be several other problems in this class; for example, given the connections between one-way trading and online knapsack demonstrated in [16], it is quite likely that knapsack also suffers from brittleness in the maximum-density prediction setting studied very recently in [A], but this remains to be proven. More broadly, we aim to bring attention to the fact that Pareto-optimality treats the prediction error in an “all-or-nothing” fashion (perfect vs adversarial predictions), which may not always yield an appropriate measure of performance. Concerning the adaptive algorithm please see our response to Question 1 below.
**2**. In regards to applying the model to settings beyond single-valued predictions, please refer to *Point 2* in the *global response*. As we explain, while the model is indeed more amenable to single predictions, it can be applied to more complex prediction settings. Specifying the user-based profile can indeed be challenging for certain problems. But we also believe that for problems such as one-way trading, and online financial optimization problems, more broadly, a user-based profile makes sense: e.g., a trader may be satisfied with a linear-like degradation of performance, based on historical data from stock exchanges.
**3**. Designing a threshold function for the profile setting is non-trivial, and does not follow straightforwardly from known approaches. Please refer to *Point 3* in our *global response*.
Below we respond to questions:
**1**. The adaptive setting of Section 5 stems from observing that the design and analysis of SOTA algorithms are heavily tied to worst-case sequences, yet a natural question is to ask: can we design Pareto-optimal algorithms that improve upon the SOTA if the input diverges from the worst-case one? We believe this is a natural question that falls into the *analysis beyond the worst-case*. It is also related to your observation about the analysis being tied to worst-case sequences: we show that it does not need to, and we can indeed go beyond. To solve this problem, we applied some techniques we developed for the profile setting: namely, when analyzing constraints $[\beta]$, we argue that replacing the inequality with an equality allows us to analytically solve the fundamental differential equation, which differs from previous analysis techniques of threshold algorithms. Thus, we believe this setting is not disconnected from the profile setting of Section 4. We also believe that one can combine the two approaches, and obtain an adaptive, profile-based algorithm, in order to circumvent the unavoidable brittleness. This is done as follows, in rough terms: For the decreasing part of the profile (rates smaller than the prediction), the algorithm behaves similarly to lines 6-9 of Alg. 2, by replacing $r$ with the individual ratios $t_i$. For the increasing part of the profile, we would like to exchange as much as possible at each rate, subject to the profile. This can be accomplished, at a high level, using an approach along the lines of Appendix C, but with some additional technical modifications due to the presence of multiple ratios $t_i$ instead of a single one.
In regards to contract scheduling, we demonstrate how to analytically find a schedule that simultaneously optimizes the robustness and the consistency according to a given linear profile. More precisely, we present a 4-robust schedule (best-possible) that also has optimal consistency according to this profile. This demonstrates that the model applies to other problems. Contract scheduling is an important problem in AI that has been studied in learning-augmented settings [7],[B]. Moreover, it has clear connections to other problems such as online-bidding [6,23] and searching on the line [4]. We are very confident that our approach will carry over to these problems, which, likewise, suffer from brittleness.
**2**. You are correct in that there may exist several algorithms that respect a given profile, say $F$, and one would like to define further criteria to choose a good one. One way to accomplish this is to use our offline algorithm so as to find the best-possible extension $G$ of $F$, as stated in Remark 4.1, and in the discussion at the end of Section 3, starting at line 185. Intuitively, this extension $G$ describes a profile that has the same “shape” as $F$, but defines much better performance ratios than $F$, for all rate values. This means that if $F$ is feasible, then we can obtain an algorithm that not only respects $F$, but also $G$ (hence will perform even better, and “optimally” in the sense of respecting the "lowest" profile that has the shape of $F$). One could impose other criteria, e.g., insist that $w_{l+1} =1$ as you suggest.
**3**. Yes, the definition can be extended as follows. Let $O$ be an online problem (say cost minimization). Let $\hat{p}$ denote a prediction, $\eta$ the metric that defines the prediction error, and let $r$ denote the robustness requirement. We say that $O$ is *brittle with respect to $\hat{p}$* if for every Pareto-optimal algorithm PO and every $\epsilon > 0$, there exists a sequence $\sigma$ such that $\eta(\sigma,\hat{p}) \leq \epsilon$, and ${cost}(PO,\sigma)\geq r \cdot cost(OPT,\sigma) -\delta$, where $\delta$ can be infinitesimally small.
**4**. Correct, thank you for catching the typo.
[A] M. Danashveramoli et al: Competitive Algorithms for Online Knapsack with Succinct Predictions, arXiv:2406.18752
[B] S. Angelopoulos et al. “Contract Scheduling with Distributional and Multiple Advice”, arXiv:2404.12485
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for your reply. I still find that Section 5 is disconnected from the core theme related to overcoming the brittleness of learning-augmented algorithms, unless this section can formally address the brittleness of the adaptive algorithm, as the authors believe it can.
After further consideration of the paper's potential impact, I believe it makes a valuable contribution by bringing the issue of brittleness in Pareto-optimal algorithms to the attention of the field. Therefore, I have increased my score from 5 to 6. | Summary: The authors consider learning-augmented algorithms for the one-way trading problem. In this problem, we are given a budget of 1 and a sequence of exchange rates between 1 and M that are revealed in an online manner. Whenever an exchange rate is revealed, we have to decide whether to exchange a fraction of our remaining budget at this rate or not. The goal is to maximize the overall profit. In the learning-augmented setting, we are additionally given a prediction on the maximum exchange rate, which is equivalent to predicting the optimum, as an optimal solution will exchange the complete budget at the maximum rate. Usually, learning-augmented algorithms are analyzed using consistency, the competitive ratio for a perfectly accurate prediction, and robustness, the worst-case competitive ratio for any input. For the one-way trading problem, a pareto-optimal algorithm w.r.t. consistency and robustness is already known from previous works.
The main point of the paper is to address two weaknesses of such pareto-optimal algorithms and analyses via consistency and robustness in general. The first weakness is called *brittleness* and describes prediction models where even a wrong prediction that is arbitrarily close to the correct value leads to any pareto-optimal algorithm having a competitive ratio matching the robustness. This means that only completely perfect predictions allow for an improved performance. The authors prove that maximum rate predictions for the one-way trading problem are brittle and address the brittleness by proposing an analysis via *profiles*, a generalization of consistency and robustness. A *profile* partitions the range of possible maximum exchange rates into intervals and, for each interval, defines a target competitive ratio. The interval that contains the predicted maximum exchange rate has the best target competitive ratio and the ratio degrades for intervals that are farther away. An algorithm respects the profile if it always achieves the target competitive ratio of the interval that contains the actual maximum exchange rate. As a main contribution, the paper gives a constructive algorithm that decides whether there exists an algorithm that respects a given profile.
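The profile notion described here lends itself to a compact sketch. The representation and all names below are our own illustration, not taken from the paper: a profile partitions the possible maximum rates $[1, M]$ into intervals, each mapped to a target competitive ratio.

```python
# Illustrative sketch of a "profile". Names and representation are
# hypothetical, not the paper's formal definition.

def make_profile(breakpoints, ratios):
    """breakpoints: [1, b1, ..., M]; ratios: one target ratio per interval."""
    assert len(breakpoints) == len(ratios) + 1
    return list(zip(breakpoints[:-1], breakpoints[1:], ratios))

def target_ratio(profile, p_star):
    """Target competitive ratio for the actual maximum rate p_star."""
    for lo, hi, r in profile:
        if lo <= p_star <= hi:
            return r
    raise ValueError("rate outside [1, M]")

def respects(profile, p_star, opt_profit, alg_profit):
    """An algorithm respects the profile if OPT/ALG is within the target."""
    return opt_profit / alg_profit <= target_ratio(profile, p_star)

# Prediction p_hat = 4 in [1, 16]: best ratio around the prediction,
# degrading for intervals farther away.
profile = make_profile([1, 3, 6, 16], [3.0, 1.5, 3.0])
print(target_ratio(profile, 4))  # 1.5
```

Boundary rates falling exactly on a breakpoint resolve here to the first matching interval; the paper's formal definition pins such details down precisely.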
The second weakness of pareto-optimal algorithms is that such algorithms are often tailored to worst-case instances w.r.t. the consistency and robustness tradeoff. To address this, the authors give a pareto-optimal algorithm that dominates all other pareto-optimal algorithms in the sense that it does not perform worse than any other pareto-optimal algorithm on instances where the actual maximum exchange rate is larger than the predicted optimal exchange rate.
Strengths: * The paper identifies and addresses two reasonable and realistic potential drawbacks of pareto-optimal learning-augmented algorithms. Since the framework of learning-augmented algorithms is ultimately a tool to analyze algorithms beyond the worst-case, the proposed generalizations that extend the ability to do just that are certainly of interest to the community.
* The authors give a proof of concept that profile-based algorithms are indeed possible by introducing such algorithms for the one-way trading problem (and contract scheduling in the appendix). It is a nice idea to consider the offline problem of deciding whether a given profile is feasible or not. I am not completely convinced that this idea also works for problems of a different flavor (see weaknesses), but at the very least it inspires future work to settle this question, which already is an important contribution.
* The paper is well-written, the problem statement is properly motivated, and the results are presented clearly and concisely.
Weaknesses: * The definition of profiles seems to be tailored to prediction models where only a single value is predicted. For more complex predictions, one could define profiles w.r.t. the prediction error. However, specifying a profile upfront would not always be possible as the range of the prediction error often depends on the unknown online input. Even for the maximum exchange rate predictions, the specification of a profile requires knowledge of the maximum exchange rate M.
* As one of the main technical contributions, the authors give an algorithm that decides whether a given profile is feasible. This algorithm heavily relies on the simple characterization of worst-case instances as given in Remark 2.1. For many other online problems, no such simple characterization of worst-case instances is known. This could limit the impact of the proposed analysis framework via profiles as deciding whether a profile is feasible is likely more difficult (or even not possible) for problems without such a simple worst-case characterization.
* I am not an expert regarding the literature on the one-way exchange problem. However, the paper does not seem to introduce too many new algorithmic ideas. Instead, the algorithmic contributions of the paper seem like a very natural extension of known ideas. In particular, Algorithm 1 seems to be the canonical way of extending threshold-based algorithms to profiles.
* Regarding the experiments, it would be interesting to also see results for different profiles. The used profile is quite similar to the consistency and robustness cases. While this makes the comparison to the pareto-optimal algorithm more fair, it would be interesting to see results for profiles that emulate a smooth error dependency.
* Minor comment: Line 86: I do not think that not having access to the prediction ahead of time is a novelty. In the context of online algorithms there are several examples where the predictions also are revealed over time. One example would be reference [9].
Technical Quality: 3
Clarity: 3
Questions for Authors: -
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: In my opinion, all limitations have been properly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Below we respond to “weaknesses”.
**1**. In regards to the prediction being tied to single values, please see *Point 2* in the *global response*. As we explain, while the model is indeed more amenable to single predictions, it can apply to more complex prediction settings.
In regards to the error, specifically, there is a variety of problems and prediction settings for which the worst-case prediction error is bounded, with or without any further assumptions. An example where no additional assumptions are needed includes online problems with frequency-type predictions, e.g., bin packing [8], or knapsack [A]. In our problem, the prediction error is bounded from above by $M$, by the assumption that $M$ is the maximum exchange rate. But this is a standard assumption in the context of trading problems, not only in the standard competitive analysis of one-way trading [19] (without this assumption, no algorithm has bounded competitive ratio), but also in the state-of-the-art learning-augmented algorithm [38]. The bound helps compare our algorithm to that of [38]; however, it is not strictly needed in the definition of the profile, since the profile can be defined even if the error is unbounded. Hence, having an upper bound on the prediction error may be helpful, but is not a requirement in our model.
[A] Im, Sungjin, et al. "Online knapsack with frequency predictions." NeurIPS (2021): 2733-2743.
**2**. In general, the analysis of profile-based algorithms need not rely on knowing the structure of worst-case sequences, in the same way that competitive analysis in the standard setting (without any predictions) need not rely on such knowledge. Knowing this structure may help the analysis, but is not a prerequisite. For instance, our analysis of contract scheduling (appendix) does not use “worst-case” instances, it applies instead to any given instance. In addition, we believe that one-way trading remains a challenging problem in our setting even when knowing the structure of worst-case instances, because there are several conflicting objectives in trade-off relation, which is reflected in the complexity of the algorithms and their analysis.
**3**. We address this issue in *Point 3* of the *global response*, which we also include below. Designing a threshold function for the profile setting is non-trivial, and does not follow straightforwardly from known approaches. For instance, if one tried a “myopic” approach that considered each interval individually (and obtained a threshold function for each such interval), then the overall function would fail. This is because when transitioning to a new interval, the algorithm would not have made enough profit to be competitive in this new interval. This adds complications which we address as explained in Section 4 (lines 245-254) and in lines 9-14 of Algorithm 1: informally, we need to “flatten” a portion of the threshold function of each interval appropriately. As a result, the obtained function is quite complex, and combines exponential functions, plateaus and discontinuities, as illustrated in Figure 3 (appendix).
**4**. For the experiments, we chose a relatively simple profile in order to be able to compare our algorithms to the known Pareto-optimal algorithms in a very clear and meaningful manner. However, we agree with your suggestion, and in *Point 1* of the *global response* and the accompanying PDF, we describe the performance of our algorithm on a more complex profile, which demonstrates that the algorithm indeed performs as predicted by the theoretical results.
**5**. Here, we meant that this is a novelty in regards to one-way trading, and online financial optimization problems, more broadly. We will clarify this point, and add references to [9] and other related works.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for your rebuttal on my review and the other reviews. My opinion on the paper and my score remain unchanged. | Summary: The paper considers the learning-augmented one-way trading problem. In the problem, we are given a starting budget equal to 1 and a sequence of exchange rates $p_1,...,p_n \in [1,M]$ arriving online. When each $p_i$ arrives, we need to decide the amount to be exchanged to the secondary currency. Our goal is to maximize the total profit under the given budget. In the learning-augmented setting, the algorithm can access an imperfect prediction $\hat{p}$ of the largest rate.
The authors first show that Pareto-optimality is very fragile for comparing online trading algorithms and then motivate a new metric called performance profile. This new metric incorporates the structural information of instances, rather than a simple comparison based on consistency-robustness values. They further develop an online algorithm that can satisfy the given performance profile (if it is feasible). The authors also discuss another generalization of Pareto-optimality and provide empirical evaluations of the proposed algorithms in the end.
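For intuition about the baseline being improved upon: the classical prediction-free policy for one-way trading with a known rate bound $M$ is a reservation price of $\sqrt{M}$, which is $\sqrt{M}$-competitive. A minimal sketch of that policy (our own illustration, not one of the paper's algorithms):

```python
import math

def reservation_price_trader(rates, M):
    """Exchange the full unit budget at the first rate >= sqrt(M); if no
    such rate ever arrives, exchange everything at the final rate.
    Rates are assumed to lie in [1, M]. Returns the rate obtained
    (= the profit, since the budget is 1)."""
    threshold = math.sqrt(M)
    for p in rates:
        if p >= threshold:
            return p
    return rates[-1]

# With M = 16 the reservation price is 4: the policy skips 2, trades at 5.
print(reservation_price_trader([2, 5, 3], M=16))  # 5
```

The $\sqrt{M}$ guarantee follows by case analysis: if the maximum rate stays below $\sqrt{M}$, the algorithm still earns at least 1 while OPT earns less than $\sqrt{M}$; otherwise the algorithm earns at least $\sqrt{M}$ while OPT earns at most $M$.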
Strengths: - The brittleness issue considered in the paper is well-motivated. Actually, this is a highly significant concern in the field of learning-augmented algorithms. Previous efforts mainly focused on achieving smoothness in ratios by defining new error metrics, while this paper takes a novel approach by introducing the concept of profiles to tackle this issue.
- Both theoretical analysis and experimental evaluation are provided in the paper.
Weaknesses: - The main weakness of this work is that the concept of performance profile may be hard to extend to other online problems, which makes this work less interesting. It would be better if the authors could demonstrate the applicability of this technique to a wider range of problems.
- In Line 131, I didn't see why $w_{A,i}=\sum_{j=1}^{i-1}w_{A,j}$. Is this a typo?
Technical Quality: 3
Clarity: 2
Questions for Authors: See the weakness above.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: I didn't see any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. In regards to weaknesses/questions, our responses are below.
**1**. We address this issue in *Point 2* of the *global response*, which we also include below for convenience.
The concept of a profile is inherently applicable, and at the very least, to the class of online problems with a single-valued prediction. In this work, we focused on two well-known problems from this class. The first and main problem is one-way trading, which was chosen because it is one of the main online financial optimization problems and an error-based analysis for such problems is obviously paramount (but missing in the state of the art). In addition, the problem has very close connections to other important online problems, notably online knapsack [16], [41]. The second application is contract scheduling, because it is a well-known problem from AI with close connections to other important problems such as online bidding [6], [23], as well as searching for a hidden target [4], both of which suffer from brittleness. The concepts and techniques we introduced should be readily applicable to such related problems, but due to space limitations and the complexity of approaches, we had to make a selection.
There are two additional points we would like to further emphasize. The first is that single-valued predictions constitute a very rich class of learning-augmented algorithms. Beyond the works cited above, many other studies fall in this class including ski rental and rent-or-buy problems [36], [39], scheduling [A], secretary problems [10] and bin packing [A], to mention only a few representative works. Such predictions are also very useful in the context of *succinct* predictions, e.g., as studied explicitly in the recent work [D].
The second point is that the prediction need not necessarily be single-valued for our profile-based analysis to be applicable. For example, the prediction may be a *vector* of values, as e.g., in scheduling [26] or bin packing [8], then the concept of profile still applies since the error is defined by a distance norm between the predicted and the actual vector. Our model can also be applicable in a *multiple* prediction setting, in which the algorithm is given a set of several predictions, and its consistency is evaluated at the best-possible prediction in this set. More concretely, we believe that one could combine our analysis of one-way trading with the multiple advice setting of [B], and our analysis of contract scheduling with the multiple advice setting of [C]. Of course, we expect any technical results to be more challenging.
Nevertheless, we agree that our profile model, as is, is not immediately applicable to all learning-augmented settings, e.g., when predictions appear dynamically. This is a topic of future work, and we will emphasize this in the introduction and the conclusions.
[A] K. Anand et al. "A regression approach to learning-augmented online algorithms." NeurIPS 34 (2021): 30504-30517.
[B] K. Anand et al. "Online algorithms with multiple predictions". ICML 2022, 582-598.
[C] S. Angelopoulos et al. “Contract Scheduling with Distributional and Multiple Advice”, arXiv:2404.12485
[D] M. Danashveramoli et al: "Competitive Algorithms for Online Knapsack with Succinct Predictions", arXiv:2406.18752
**2**. Yes this is a typo. The correct expression is $w_{A,i}=w_{A,i-1} + x_i$, where $x_i$ is the amount traded on the $i$-th rate. I.e., $w_{A,i}$ is the sum of the amounts exchanged up to and including the $i$-th request.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the comments. I will maintain my original score. | Summary: In the context of learning augmented algorithms, two widely used metrics are robustness (i.e: the performance when the prediction is adversarially chosen) and consistency (i.e: the performance when the prediction is perfect).
This work analyzes the interplay between these two metrics in the one-way trading problem with imperfect predictions. In particular, it starts by showing that Pareto-optimal algorithms, that is, algorithms that for a given robustness upper bound r achieve the smallest possible consistency c(r), are dramatically sensitive to small prediction errors. Specifically, the authors show that any prediction error guarantees the existence of an input sequence for which the performance ratio reaches its robustness, a behavior denominated ‘brittleness’. This implies that the competitive ratio of any Pareto-Optimal algorithm is either its consistency (if the prediction is indeed perfect) or its worst possible performance, suggesting that Pareto-Optimality might not be a sensible algorithm design criterion.
In light of this result, this work puts forward the notion of profiles, which map rate prediction intervals to desired competitive ratios, allowing for the performance of algorithms to degrade smoothly with respect to prediction errors. Viewing this, the authors present an algorithm that establishes the feasibility of a profile in an offline fashion and yields an online procedure that, in the feasible case, satisfies the enforced performance constraints.
Lastly, the authors propose an adaptive algorithm that is not designed to handle worst-case predictions but leverages the deviations from the predicted exchange rate to navigate the robustness and consistency tradeoff.
Strengths: - The paper is clearly written.
- The theoretical results (brittleness of Pareto-Optimality, feasibility determination of profile-following and correctness of online algorithm for profile following) are relevant and sound.
- The analysis technique used to determine the feasibility and solve the profile-based one-way trading problem seems novel. In particular, the constraints associated with a profile can be written as a set of linear differential equations whose solution yields a profile-following exchange strategy.
- The authors provide intuition regarding the PROFILE algorithm, in particular, on the behavior of the threshold function $\phi$ with respect to transitions in the desired performance ratio.
Weaknesses: - The experimental setting might be limited. Specifically, the profile used for evaluation is fairly simple, with only three intervals, two of which map to the worst possible ratio.
- Is there any practical use of determining feasibility if it is done in an offline fashion?
Technical Quality: 4
Clarity: 3
Questions for Authors: - Is there a way to characterize the likelihood of the sequences that severely degrade the performance of Pareto-Optimal algorithms? If those sequences are very unlikely, then Pareto-Optimality might not be that fragile a criterion.
- Why is the function $\phi$ increasing? That is, why do larger utilizations necessarily map to larger reservation rates? (Is it only under the assumption that rates are increasing and then drop to 1?)
Minor comment: Fix figure ratios.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: - N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Please allow us first to comment on the “weaknesses”.
**1**. For the experiments, we chose a relatively simple profile in order to be able to compare our algorithms to the known Pareto-optimal one, in a very clear and meaningful manner. Nevertheless, we agree with your suggestion, and in *Point 1* of the *global response* and the accompanying PDF, we describe the performance of our algorithm on a more complex profile, which demonstrates that the algorithm indeed performs as predicted by the theoretical results even on more complex profiles.
**2**. There are indeed practical uses for determining feasibility in an offline fashion. Specifically, let $F$ denote the given profile, then we can use the offline algorithm combined with binary search, so as to find the best-possible extension $G$ of $F$, as stated in Remark 4.1, and in the discussion at the end of Section 3, starting at line 185. Intuitively, this extension $G$ describes a profile that has the same overall “shape” as $F$, but defines much better performance ratios than $F$ for all rate values. This means that if $F$ is feasible, then we can obtain an online algorithm that not only respects $F$, but also $G$ (hence will perform even better).
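The search for the best-possible extension $G$ described above can be pictured as a binary search over a uniform lowering of the profile's target ratios, treating the offline feasibility algorithm as an oracle. The sketch below is illustrative only: `feasible` is a stand-in stub for the paper's feasibility test, and we assume feasibility is monotone under such uniform shifts.

```python
def best_extension(ratios, feasible, tol=1e-6):
    """Binary-search the largest uniform improvement delta such that
    lowering every target ratio by delta keeps the profile feasible.
    `feasible` maps a list of ratios to True/False; here it stands in
    for the offline feasibility algorithm."""
    lo, hi = 0.0, min(ratios) - 1.0  # competitive ratios must stay >= 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible([r - mid for r in ratios]):
            lo = mid
        else:
            hi = mid
    return [r - lo for r in ratios]

# Toy oracle: pretend a profile is feasible iff every ratio is >= 2.
improved = best_extension([4.0, 3.0, 5.0], lambda rs: all(r >= 2 for r in rs))
# The binding interval is the one with ratio 3.0, so delta converges to 1.
```

The result keeps the "shape" of the input profile while tightening every target ratio by the same amount, matching the intuition of an extension that is respected whenever the original profile is.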
Below, please see our response to the questions:
**1**. We are not aware of any such characterizations in the literature, not only for Pareto-optimal algorithms, but also for the standard competitive analysis of this problem. Our adaptive algorithm of Section 5 aims to address this concern: if the sequence is not pathological, then we show that we can perform much better than the known Pareto-optimal algorithms (while maintaining Pareto-optimality). But nothing formal is known about the “likelihood” that a sequence severely degrades the performance of such algorithms. An additional complication is that Pareto-optimality is a generalization of competitive analysis, and thus it is intrinsically bound to worst-case analysis.
**2**. This is due to Remark 2.1: Any optimal algorithm only trades at rates which are local maxima, hence the function $\phi$ must be increasing for the algorithm to be optimal. Furthermore, $\phi$ needs to be invertible, in order to determine utilization and thus the exchanges made at each rate, hence it needs to be increasing.
Thank you for the suggestion. It is not immediately clear to us which figure you refer to, or whether you mean that certain figure ratios should be made more legible, but we will fix the figure ratios in the revision.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my questions, and I have raised my score to 8. | Rebuttal 1:
Rebuttal: We respond to some points brought up in the reviews.
**1**. **Experimental evaluation on complex profiles**. For the experiments, we chose a relatively simple profile in order to be able to compare our algorithms to the known Pareto-optimal one, in a very clear and meaningful manner. Namely, the profile of Fig 2(a) in the submission captures the consistency/robustness tradeoff, the smoothness around the prediction, but also allows for an average-improvement evaluation as in Fig. 2(c), which becomes a much more subjective task for more complex profiles. Nevertheless, we agree with the suggestion of some reviewers, and in the accompanying PDF we consider a more complex profile, shown in Fig. 1. Fig. 2 in the PDF depicts the performance of PROFILE relative to the SOTA Pareto-Optimal algorithm PO. We observe again that PO has high brittleness if $p^*$ is close to, but smaller than $\hat{p}$, whereas PROFILE has a much smoother overall performance that respects the profile of Fig. 1 and again validates Theorem 3.1.
**2**. **Prediction types**. The concept of a profile is inherently applicable, and at the very least, to the class of online problems with a single-valued prediction. In this work, we focused on two well-known problems from this class. The first and main problem is one-way trading, which was chosen because it is one of the main online financial optimization problems and an error-based analysis for such problems is obviously paramount (but missing in the state of the art). In addition, the problem has very close connections to other important online problems, notably online knapsack [16], [41]. The second application is contract scheduling, because it is a well-known problem from AI with close connections to other important problems such as online bidding [6], [23], as well as searching for a hidden target [4], both of which suffer from brittleness. The concepts and techniques we introduced should be readily applicable to such related problems, but due to space limitations and the complexity of approaches, we had to make a selection.
There are two additional points we would like to further emphasize. The first is that single-valued predictions constitute a very rich class of learning-augmented algorithms. Beyond the works cited above, many other studies fall in this class including ski rental and rent-or-buy problems [36], [39], scheduling [A], secretary problems [10] and bin packing [A], to mention only a few representative works. Such predictions are also very useful in the context of *succinct* predictions, e.g., as studied explicitly in the recent work [D].
The second point is that the prediction need not necessarily be single-valued for our profile-based analysis to be applicable. For example, the prediction may be a *vector* of values, as e.g., in scheduling [26] or bin packing [8], then the concept of profile still applies since the error is defined by a distance norm between the predicted and the actual vector. Our model can also be applicable in a *multiple* prediction setting, in which the algorithm is given a set of several predictions, and its consistency is evaluated at the best-possible prediction in this set. More concretely, we believe that one could combine our analysis of one-way trading with a multiple advice setting, such as the one studied in [B] (with static predictions), and our analysis of contract scheduling with the multiple advice setting of [C]. Of course, we expect any technical results to be more challenging.
Nevertheless, we agree that our profile model, as is, is not immediately applicable to all learning-augmented settings, e.g., when predictions appear dynamically. This is a topic of future work, and we will emphasize this in the introduction and the conclusions.
[A] K. Anand et al. "A regression approach to learning-augmented online algorithms." NeurIPS 34 (2021): 30504-30517.
[B] K. Anand et al. "Online algorithms with multiple predictions". ICML 2022, 582-598.
[C] S. Angelopoulos et al. “Contract Scheduling with Distributional and Multiple Advice”, arXiv:2404.12485
[D] M. Danashveramoli et al: "Competitive Algorithms for Online Knapsack with Succinct Predictions", arXiv:2406.18752
**3**. **Challenges in the design and analysis**. Designing a threshold function for the profile setting is non-trivial, and does not follow straightforwardly from known approaches. For instance, if one tried a “myopic” approach that considered each interval individually (and obtained a threshold function for each such interval), then the overall function would fail. This is because when transitioning to a new interval, the algorithm would not have made enough profit to be competitive in this new interval. This adds complications which we address as explained in Section 4 (lines 245-254) and in lines 9-14 of Algorithm 1: informally, we need to “flatten” the threshold function of each interval appropriately. As a result, the obtained function is quite complex, and combines exponential functions, plateaus and discontinuities, as illustrated in Figure 3 (appendix).
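Purely as an illustration of the "flattening" idea (the paper's actual threshold construction combines exponential pieces, plateaus, and discontinuities and is not reproduced here): one way to ensure an overall threshold never decreases across interval transitions is to carry a running maximum, which is exactly what introduces plateaus.

```python
def flatten(piecewise):
    """piecewise: a list of per-interval threshold samples (one list per
    interval). Returns a single non-decreasing threshold sequence by
    carrying the running maximum across interval boundaries, which
    creates the plateaus mentioned above. Illustrative only; not the
    paper's construction."""
    out, cur = [], float("-inf")
    for segment in piecewise:
        for v in segment:
            cur = max(cur, v)
            out.append(cur)
    return out

# Three intervals whose naive thresholds would dip at each transition.
print(flatten([[1, 2, 4], [3, 5], [4, 6]]))  # [1, 2, 4, 4, 5, 5, 6]
```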
Pdf: /pdf/486341bcc906d6639659cf66eaa6cd1d705e9b6c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Simple Image Segmentation Framework via In-Context Examples | Accept (poster) | Summary: This paper proposes a generalist segmentation model, dubbed SINE, for a variety of segmentation tasks.
The general idea is to harness the in-context examples, and to alleviate the task ambiguity.
Specifically, an in-context interaction module, a matching Transformer, and the Hungarian algorithm are devised to realize the above objectives.
Extensive experiments on a variety of datasets and segmentation tasks show its effectiveness.
Strengths: + This paper is overall well-written and easy-to-follow.
+ The proposed method and module design are rational and effective.
+ The experiments and validation are extensive.
Weaknesses: - The motivation does not match the methodology design properly. Specifically, the authors claim that the proposed method focuses on alleviating the task ambiguity. Unfortunately, throughout the module design, the in-context fusion has little relevance to the task-level ambiguity. Besides, the two losses in the M-Former are still implemented on the object level.
- The proposed method clearly lacks theoretical insight on how the task ambiguity is modeled and handled in theory. The overall module designs are common in visual representation learning, which is less relevant to the task-specific guidance or the task-level ambiguity.
- The Transformer design along with the Hungarian algorithm is not uncommon in modern Transformer-based pipeline designs such as DETR.
- From the performance perspective, compared with the prior in-context segmentation methods, the proposed SINE does not show a clear improvement in many cases, for example:
(1) Table 1, few-shot segmentation, on three out of six experiments, SINE is weaker than either Painter, SegGPT or recent ICLR works.
(2) Table 4, video object segmentation, the proposed method is clearly inferior to the state-of-the-art by a large margin on two out of three datasets.
- The ablation study Table 5c is confusing, and more details or experiments need to be clarified. For example:
(1) In-context fusion is only a part of in-context interaction module, while in Table 5c the authors seem to only include the impact of fusion part.
(2) The specific components in M-Former also need further study, such as the specific loss functions, the prototype length, etc.
- Eq.3 Hungarian loss. Do other types of commonly-used loss functions also achieve similar performance? More discussion and comparison is needed.
- The visual results are not sufficient enough. Although the authors provide a lot of segmentation predictions in the supplementary material, only the results from the proposed method along with ground truth are provided. Please provide and compare the visual results from other state-of-the-art methods.
Technical Quality: 2
Clarity: 2
Questions for Authors: - Q1: The motivation does not match the methodology design properly. The in-context fusion has little relevance to the task-level ambiguity.
- Q2: The two losses in the M-Former are still implemented on the object level.
- Q3: The proposed method clearly lacks theoretical insight on how the task ambiguity is modeled and handled in theory. The overall module designs are common in visual representation learning, which is less relevant to the task-specific guidance or the task-level ambiguity.
- Q4: The Transformer design along with the Hungarian algorithm is not uncommon in modern Transformer-based pipeline designs such as DETR.
- Q5: Limited performance: Table 1, few-shot segmentation, on three out of six experiments, SINE is weaker than either Painter, SegGPT or recent ICLR works.
- Q6: Limited performance:
- Q7: Table 4, video object segmentation, the proposed method is clearly inferior to the state-of-the-art by a large margin on two out of three datasets.
- Q8: Clarify ablation study in Table 5c. (1) In-context fusion is only a part of in-context interaction module, while in Table 5c the authors seem to only include the impact of fusion part.
(2) The specific components in M-Former also need further study, such as the specific loss functions, the prototype length, etc.
- Q9: Eq.3 Hungarian loss. Do other types of commonly-used loss functions also achieve similar performance? More discussion and comparison is needed.
- Q10: The visual results are not sufficient enough. Please provide and compare the visual results from other state-of-the-art methods.
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Though the authors provide a discussion, the reviewer believes that the discussion is not adequate.
Some potential negative impacts such as job displacement brought by generalist model should be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >W1,Q1,Q2: Motivation does not match methodology design properly.
The motivation and methodology design are related and clear (supported by **sZmi,rhP3**).
- The goal of In-Context Fusion is to establish the correlations between reference and target (see Line 163-164), understanding the complex information within the context, which is crucial for the in-context segmentation task (supported by **sZmi**).
- The proposed M-Former is designed to address task ambiguity in prompts, with the challenges for M-Former regarding task ambiguity analyzed in Lines 181-185.
- This paper discusses the relationships between different tasks in detail (Line 112-117) and unifies them using instance segmentation, allowing different tasks to use the same model. Therefore, the loss is reasonable (supported by **nRMP**).
The paper clearly states the motivation of all designs, and their effectiveness is verified through experiments. This has also been recognized by other reviewers, aligning our method's design with our motivation.
>W2,Q3: Insights and relevance.
The contributions and insights are as follows (please see **General Response** for details):
1. Ambiguities in Visual Prompting
2. Investigating the Capabilities of VFMs
3. Lightweight and Effective Decoder
**We believe the module designs are relevant to our motivation.** We design M-Former to address task ambiguity. Information from different tasks can interfere, causing incorrect predictions (Lines 181-185).
*Mask2Former and DETR are not designed for in-context segmentation, making it difficult to address these challenges.*
M-Former's dual-path structure and shared SA enable effective information interaction and prevent task interference, balancing efficient in-context segmentation decoding and resolving task ambiguity in prompts. In Table 5c, compared to Mask2Former, M-Former effectively resolves task ambiguity, resulting in improvements across various tasks.
>W3,W6,Q4,Q9: Transformer along with Hungarian algorithm is not uncommon in modern Transformer based pipeline. Eq.3 Hungarian loss.
Hungarian Loss: **We do not claim the loss as a contribution.** Please refer to the General Response or Lines 65-71 of the paper for our contributions. The Hungarian algorithm has been widely used [A,B,C], and our use of it aligns with these works, making it reasonable and **not a weakness**. The loss with one-to-many matching used by traditional methods [D,E] introduces NMS, which is not suitable for Transformer-based methods.
Transformer design: We analyze task ambiguity challenges (Lines 181-185) and design M-Former to address them. The dual-path design and Shared SA ensure efficient decoding and prevent information confusion across tasks of different granularity. Unlike previous methods, SINE improves performance with fewer parameters, demonstrating our method's novelty. See our **response to Reviewer rhP3 for more details**.
>W4(1),Q5,Q6: SINE is weaker than other works on FSS.
Our experiments show that SINE achieves significant improvements across multiple tasks and benchmarks (supported by **sZmi and nRMP**) and demonstrates solid practical utility (supported by **rhP3**).
The table below shows the **average performance** of Table 1, SINE achieves the best performance. Therefore, SINE's improvements are significant.
||one-shot|few-shot|
|---|---|---|
|Painter|35.9|36.0|
|SegGPT|52.6|61.0|
|Matcher|42.9|50.4|
|SINE|60.4|62.6|
>W4(2),Q7: VOS.
See our **response to sZmi's W3**.
>W5(1),Q8: Impact of fusion part in Table 5c.
The Mask Pooling part does not contain any parameters. Learning of in-context information is demonstrated in In-Context Fusion (see Lines 170-171). Mask Pooling is a necessary operation for SINE and cannot be removed. Hence, the corresponding ablation was not performed in Table 5.
>W5(2),Q8: Studies of the specific loss and prototype length.
Prototype: The number of prototypes is determined by the prompts (reference images and masks) and is not a settable hyperparameter. The number equals the total count of different semantic categories in the in-context examples.
Loss: Hungarian loss has been widely validated in object detection and segmentation[A,B,C]. We use this loss to align with Mask2Former and facilitate comparison (see Table 5(c)). Loss with one-to-many matching[D,E] is not suitable. Other hyperparameters, e.g., query dimension, are also aligned with Mask2Former to eliminate their impact.
>W7,Q10: More visualization comparison results.
Fig.3 in attached PDF shows further visual comparisons. For video tasks, SINE reduces tracking failures from intersections, viewpoint changes, and occlusions. SINE addresses task ambiguity, preventing errors in semantic segmentation where SegGPT fails, as shown in the second set of comparison results in Fig.3(a). In real-world image segmentation, SINE exhibits better class generalization than SegGPT, matching the LVIS-92i comparison in Table 1.
___
>Limitation 1:The reviewer believes that the discussion is not proper.
**We believe the discussion is proper.** We discuss the connections and differences between SINE and related works (SegGPT and SAM), and the limitations of SINE (Lines 478-494). We believe these discussions are necessary. **If the reviewer finds any inaccuracies**, please specify them in detail so we can further address them.
>Limitation 2: Some potential negative impacts such as job displacement brought by generalist model should be discussed.
We do not foresee any obvious undesirable ethical or social impacts now. We believe that generalist models are powerful tools for efficient production and will create more job opportunities rather than causing job displacement.
[A] End-to-end object detection with transformers. ECCV2020.
[B] Per-Pixel Classification is Not All You Need for Semantic Segmentation. NIPS2021.
[C] Masked-attention mask transformer for universal image segmentation. CVPR2022.
[D] Mask r-cnn. ICCV2017.
[E] SOLO: Segmenting Objects by Locations. ECCV2020.
---
Rebuttal 2:
Title: The rebuttal does not address my substantial concerns
Comment: Thanks the authors for the rebuttal.
Unfortunately, the rebuttal keeps repeating the comments and views from other reviewers, instead of seriously addressing the specific concerns raised by me. For example:
- **Q1**: I am not convinced by the text from the submission mentioned by the authors. The text still considers object-level learning. No rigorous definition or formulation is given, especially at the task level, nor is its substantial relevance to task-level ambiguity established.
- **Q2**: *The two losses in the M-Former are still implemented on the object level.* How do they relate to the task level and the ambiguity? The rebuttal does not directly answer this question. Instead, it keeps highlighting the views of other reviewers and mentions some vague, lackluster general contributions.
- Besides, the authors in the rebuttal claim *Lightweight and Effective Decoder*. But the evidence in the main text supporting this aspect is not enough. The parameter comparison is only an ablation study, not with other types of decoders.
- **Q3**: The authors keep repeating the application value and insight in the rebuttal. However, how is the task ambiguity modeled, formulated and defined, and whether some theorical support can be found, are neither clarified.
Besides, the concern *The overall module designs are common in visual representation learning, which is less relevant to the task-specific guidance or the task-level ambiguity* is unaddressed.
- **Q4**: *We do not claim the loss as a contribution* does not necessarily mean *the devised representation learning pipeline is technically novel*. The authors do not address this aspect directly in the rebuttal. For example, would the authors acknowledge a paper's novelty and significance if it claims to use U-Net with minor modifications for another task/application, such as industrial segmentation, for the first time?
- **Q5 \& Q7**: I acknowledged that in some tables the proposed SINE shows state-of-the-art performance. However, my question also raises. *Table 1, few-shot segmentation, on three out of six experiments, SINE is weaker than either Painter, SegGPT or recent ICLR works.* The authors do not address this in the point-by-point response.
- I acknowledge from the rebuttal that the average performance is the best, but this does not contradict the fact that it shows significantly inferior performance in many of the settings I listed. In fact, it further raises concerns about whether it is stable or generalized enough.
- **Q8 \& Q9**: *Using the same loss as other works for fair evaluation* is not a proper excuse to not study the impact of some common loss types. This aspect is unaddressed.
- **Q10**: Only limited visual results on limited datasets from SegGPT is provided. My concerns on more visual results from state-of-the-art methods are not addressed.
- Minor issue: **Limitation**: Efficient production definitely leads to the loss of some old, traditional, off-the-shelf jobs. This does not contradict *create more job opportunities*. Why is it difficult for the authors to acknowledge this aspect?
To conclude, my substantial concerns are not properly addressed. I keep my original rating and recommend clear reject this paper.
---
Rebuttal 3:
Title: More detailed explanation
Comment: Thank you for your comments. Due to the rebuttal's character limit, we couldn't fully address your concerns. Below, we provide a detailed explanation.
>No rigorous definition or formulation is given, especially at the task level, nor its substantial relevance to the task-level ambiguity.
The formulation of SegGPT can be represented as:
$f(x_r, y_r, x_t) \rightarrow y_t,~~~ y_t \in \{task_1, task_2, task_3\}$
When the given prompt $(x_r, y_r)$ cannot precisely indicate a specific task, the prompt is ambiguous, as shown in Fig. 1 of the paper. When SegGPT performs a task (e.g., $task_1$) with an ambiguous prompt, it might incorrectly output the results of $task_2$ or $task_3$. **This is an important and unexplored problem.**
The formulation of SINE is as follows:
$f(x_r, y_r, x_t) \rightarrow \{ y_t^{task_1}, y_t^{task_2}, y_t^{task_3} \}$
By providing results for all tasks, SINE avoids incorrect predictions caused by prompt ambiguity. Fig. 4(a) in the paper presents a visual comparison.
>The text still considers the object level learning. The two losses in the M-Former are still implemented on the object level.
We address the ambiguity between ID, instance, and semantic segmentation. These tasks can be converted into instance segmentation. Using instance/object level segmentation allows for unified loss forms across different tasks. M-Former introduces a dual-path decoder and shared self-attention with a mask (Fig. 2, top right), enabling effective information interaction and preventing task interference. This balances in-context segmentation decoding and resolves prompt ambiguity.
Our design unifies training for different tasks and prevents task information contamination during decoding. During inference, the trained network can perform different tasks, which we believe is an advantage of our method.
>Lightweight and Effective Decoder
Compared to SegGPT's 300M training parameters, our model has only 19M. Without the encoder, Mask2Former has 21M parameters. Thus, our method is more lightweight.
> We do not claim the loss as a contribution does not necessarily mean the devised representation learning pipeline is techniqually novel. Would the authors acknowledge a paper's novelty and significance, if it claims to use U-Net with minor modifications for another task / application such as industrial segmentation for the first time?
We believe **the value of a paper lies in whether it provides academic insights to the research community.** For example, Marigold (CVPR 2024, Best Paper Award Candidate) first demonstrated stable diffusion's significant generalization in depth estimation. However, Marigold made no changes to U-Net's structure or training strategy. This does not change our view that Marigold is an outstanding and inspiring work.
In addition, our core claim is that we are the first to identify and explore task ambiguity in prompts, rather than making minor modifications for in-context segmentation. **We believe this will inspire other work.** The General Response summarizes our contributions and insights.
> Table 1, few-shot segmentation, on three out of six experiments, SINE is weaker than either Painter, SegGPT or recent ICLR works.
1. SINE outperforms Painter on all datasets.
2. SegGPT is trained on COCO and PASCAL and uses a Context Ensemble strategy for few-shot settings. SINE is not trained on PASCAL and is not designed for few-shot settings. However, SINE achieves the best performance on 1-shot in COCO-20i and PASCAL-5i, and is comparable to SegGPT in few-shot settings.
3. On LVIS-92i, SegGPT is only weaker than Matcher because Matcher uses SAM, and its larger training set better aligns with the LVIS.
> Eq.3 Hungarian loss. Using the same loss as other works for fair evaluation is not a proper excuse to not study the impact of some common loss types. This aspect is unaddressed.
In the rebuttal, we explained that the Hungarian loss is essential for detection/segmentation Transformer methods. These methods model the detection/segmentation task as a set prediction problem, introducing one-to-one Hungarian matching. Traditional methods use one-to-many matching for the loss, which requires NMS and is not applicable here. Could the reviewer specify which loss should be used for comparison?
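As an illustration of the one-to-one matching described above (not the paper's implementation; DETR-style pipelines typically use `scipy.optimize.linear_sum_assignment` on a weighted classification-plus-mask cost), here is a minimal brute-force sketch with a toy, hypothetical cost matrix:

```python
from itertools import permutations

def hungarian_match(cost):
    """Brute-force one-to-one matching that minimizes total assignment cost.

    cost[i][j] = cost of assigning prediction i to ground-truth j.
    Assumes n_pred >= n_gt (as in set-prediction methods, where the number
    of queries exceeds the number of objects). Exhaustive search is for
    illustration on tiny inputs only; real code uses the Hungarian algorithm.
    """
    n_pred, n_gt = len(cost), len(cost[0])
    best_total, best_perm = float("inf"), None
    # Try every assignment of ground truths to distinct predictions.
    for perm in permutations(range(n_pred), n_gt):
        total = sum(cost[p][g] for g, p in enumerate(perm))
        if total < best_total:
            best_total, best_perm = total, perm
    # (pred_idx, gt_idx) pairs; unmatched predictions become "no object".
    matches = [(p, g) for g, p in enumerate(best_perm)]
    return matches, best_total

# Toy example: 3 predicted masks vs 2 ground-truth masks.
cost = [
    [0.9, 0.1],
    [0.2, 0.8],
    [0.5, 0.5],
]
matches, total = hungarian_match(cost)
# Best assignment: prediction 1 -> GT 0, prediction 0 -> GT 1 (total 0.3);
# prediction 2 is left unmatched and supervised as "no object".
```

The one-to-one constraint is what removes the need for NMS: each ground truth supervises exactly one query, so duplicates are penalized during training rather than suppressed at inference.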
> Limited visual results.
The visual results in the rebuttal are limited to the **one-page** PDF requirement. We will provide more visual comparisons in the paper.
> Limitation: Efficient production definately leads to the loss of some old, tradtional and off-the-shelf jobs. This does not contradict create more job opportunities. Why is it difficult for the authors to acknowledge this aspect?
We will add this discussion in the paper. However, almost all AI model developments may cause such issues in the short term, but we cannot stop the advancement of AI because of this. We believe that, in the long run, the benefits of developing general models for human society outweigh the drawbacks.
---
Rebuttal Comment 3.1:
Title: Re: More detailed explanation
Comment: First of all, I would appreciate the authors' effort, so that we could start a constructive discussion on the weakness and its improvement.
Regarding the specific responses, I've spent some time on it, and conclude that there are still multiple issues need to be clarified / resolved before raising up to the accept threshold.
**No rigorous definition or formulation (part of continued Q1)**:
- The SegGPT formulation is fine, as it is still a one-to-one mapping. However, is it really true that SINE is a one-to-three mapping? From the reviewer's understanding, in the experimental tables, you still run inference on one task/dataset at a time, right? If so, this is not a one-to-three mapping, and the correctness remains in doubt.
- From my view, the proposed SINE can implicitly learn a rank between different types of tasks. Could the authors reconsider the formulation and try to model the score of each task? In this way, it would improve the clarity and also make sense of the distinction between different tasks.
- On top of this, perhaps the authors can find a way to model the ambiguity between different tasks, which can in turn help explain the proposed SINE from a more theoretical perspective.
I would appreciate progress on the above points, so as to improve both the clarity and the theoretical insight.
**Object level learning (Q2)**: There are still some remaining issues before resolving this question:
- In my view, *instance* in instance segmentation is typically object-level. Could the authors maybe justify more on *Using instance/object level segmentation allows for unified loss forms across different tasks*?
- *M-Former introduces a dual-path decoder and shared self-attention with a mask (Fig. 2, top right), enabling effective information interaction and preventing task interference*. This explanation does not convince me. The interaction is still between instance representations, right? It just exploits the long-range dependencies between different instances. How could it benefit the task level?
**Lightweight and Effective Decoder**: This concern has been well addressed, after having a comparison with existing paradigm.
**Representation learning novelty (Q3\&Q4)**: There is so far no strong argument to address this perspective. As the representation learning pipeline is very ordinary and common, and no theoretical insight can be demonstrated so far, I still believe this work is typically beyond the standard of top-tier conference.
**Limited performance (Part of Q5\&Q7)**: The explanation does not alter the fact of limited performance on three out of six experiments. Maybe the authors have to make significant revision on this aspect in the main text, to discuss what is the reason.
Besides, is there some way to make a fair evaluation between them?
**Regarding the loss design**: Can some other common mechanisms such as IoU based Non-Maximum Suppression, Greedy Algorithm, Positive Sample Mining be inspected in the loss design? Anyway, this is a minor issue, but I insist that the loss and overall idea is ordinary.
**Limited Visual Results**: Hope the authors can further address this part later, as it is still a minor weakness of this work.
**Limitation Discussion**: Thanks the authors for acknowledging this aspect and making the adapations accordingly.
---
Reply to Comment 3.1.1:
Comment: Thank you very much for your efforts and time in further discussing our work. Below, we provide a more detailed clarification regarding the remaining concerns.
>Q1 No rigorous definition or formulation (part of continued Q1):
Our goal is to address the ambiguity problem by using the following formulation, which outputs the results for all tasks simultaneously:
$f(x_r, y_r, x_t) \rightarrow \{ y_t^{ID}, y_t^{Ins}, y_t^{Sem} \}$
As the reviewer mentioned, since SINE’s learning process operates at the object level (including ID and instance), the formulation for SINE is:
$f(x_r, y_r, x_t) \rightarrow y_t \rightarrow \{y_t^{ID}, y_t^{Ins}\}$
The losses for instance and ID segmentation are clear and well-defined, as shown in Equations (3) and (4) of the paper. Therefore, the above formulation holds. We believe the primary concern here lies with semantic segmentation.
For semantic segmentation, SINE does not directly output $y_t^{Sem}$. Instead, during inference, we can obtain it by merging the instance segmentation predictions $y_t^{Ins}$. Specifically,
$y_t^{Ins} = \{ P_{mask}^{Ins} \in R^{S \times H \times W}, P_{class}^{Ins} \in R^{S \times M} \}$
$P_{mask}^{Ins}$ and $P_{class}^{Ins}$ represent the mask and class predictions for instance segmentation. Here, $S$ is the number of instance queries, and $M$ is the number of semantic prototypes, i.e., the number of candidate categories.
The semantic segmentation result $y_t^{Sem} \in R^{M \times H \times W}$ can be concisely expressed using the following matrix multiplication formula:
$y_t^{Sem} = (P_{class}^{Ins})^\top \times P_{mask}^{Ins}$
Where $(P_{class}^{Ins})^\top \in R^{M \times S}$ is the transpose of $P_{class}^{Ins}$, and $y_t^{Sem} \in R^{M \times H \times W}$ is the semantic segmentation result before applying argmax.
This formula indicates that the segmentation map for each category $m$, denoted as $y_t^{Sem}[m]$, is obtained by a weighted sum of all instance masks $P_{mask}^{Ins}[s]$ with the corresponding class probabilities $P_{class}^{Ins}[s, m]$.
Therefore, the SINE formulation can be expressed as:
$f(x_r, y_r, x_t) \rightarrow \{ y_t^{ID}, y_t^{Ins}, y_t^{Sem} \}$
**Through the above derivation, we hope to provide a clearer answer to the reviewer's question.**
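In code, the merge of instance predictions into semantic maps derived above is a single tensor contraction over the query dimension, $y_t^{Sem}[m] = \sum_s P_{class}^{Ins}[s,m]\, P_{mask}^{Ins}[s]$. A minimal pure-Python sketch (toy shapes and illustrative values, not the authors' implementation):

```python
def instances_to_semantic(p_mask, p_class):
    """Merge instance predictions into per-category semantic maps.

    p_mask : S x H x W nested lists of per-query mask probabilities.
    p_class: S x M nested lists of per-query class probabilities
             (M = number of semantic prototypes / candidate categories).
    Returns M x H x W maps: y[m] = sum_s p_class[s][m] * p_mask[s],
    i.e. the (P_class)^T x P_mask contraction before argmax.
    """
    S, M = len(p_class), len(p_class[0])
    H, W = len(p_mask[0]), len(p_mask[0][0])
    y_sem = [[[0.0] * W for _ in range(H)] for _ in range(M)]
    for m in range(M):
        for s in range(S):
            w = p_class[s][m]  # weight of query s for category m
            for i in range(H):
                for j in range(W):
                    y_sem[m][i][j] += w * p_mask[s][i][j]
    return y_sem

# Toy example: 2 instance queries on a 1x2 image, 2 candidate categories.
p_mask = [[[1.0, 0.0]], [[0.0, 1.0]]]   # query 0 covers pixel 0, query 1 pixel 1
p_class = [[1.0, 0.0], [0.0, 1.0]]      # query 0 -> category 0, query 1 -> category 1
y_sem = instances_to_semantic(p_mask, p_class)
# y_sem[0] == [[1.0, 0.0]], y_sem[1] == [[0.0, 1.0]]
```

In a real pipeline this would be one batched matrix multiplication (e.g. an einsum over the query axis) followed by an argmax over categories; the loops above only make the contraction explicit.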
>Is it really true that SINE is a one-to-three mapping?
Yes, during the inference stage, SINE can provide $ \{y_t^{ID}, y_t^{Ins},y_t^{Sem} \}$ for any input $(x_r, y_r, x_t)$.
>In the experimental table, you still do the inference on one task/dataset one time, right?
SINE performs inference on any dataset by predicting all three tasks simultaneously. We select the relevant output based on the specific task. For instance, Table 1 uses semantic segmentation $y_t^{Sem}$, Tables 2 and 3 use instance segmentation $y_t^{Ins}$, and Table 4 uses Object ID $y_t^{ID}$ on the VOS dataset. Thank you for pointing this out. We will clarify it further in the revised version.
>From my view, the proposed SINE can implicitly learn a rank between different types of tasks.
The above derivation shows that SINE provides results for all tasks simultaneously, rather than learning a rank between different types of tasks.
>On top of this, perhaps the authors can find a way to model the ambiguity between different tasks, which can in turn help explain the proposed SINE from a more theoretical perspective.
Thank you for the helpful suggestions in formulating SINE. We believe this makes SINE clearer and more interpretable from a theoretical perspective.
>Object level learning (Q2): There are still some remaining issues before resolving this question: In my view, instance in instance segmentation is typically object-level. Could the authors maybe justify more on Using instance/object level segmentation allows for unified loss forms across different tasks?
Beyond the previous clarification of SINE's formulation, we would like to further address the reviewer's concerns.
SINE targets three tasks: ID segmentation, instance segmentation, and semantic segmentation. During training, we focus on learning instance and ID segmentation, as defined by Equations (3) and (4). The difference between the two losses lies only in the matching strategy between predictions $y_t$ and the ground truth, while the loss functions themselves remain the same, making their form unified. Semantic segmentation is derived from instance segmentation results by combining the masks of instances within the same category.
---
Rebuttal 4:
Comment: We believe that our response has answered the reviewer's concern. In addition, **the reviewer's response at the last minute before the deadline is irresponsible**.
Strengths: - The motivation on task ambiguity of segmentation with in-context example is clear.
- The paper evaluates the framework on few-shot semantic segmentation, few-shot instance segmentation, and video object segmentation on multiple datasets.
Weaknesses: The novelty of the overall idea and network structure is limited. The idea to solve the ambiguity of prompts (in-context examples in this paper) by predicting multiple masks is from SAM. The network architectures are mainly based on DETR and Mask2Former. Though these choices are effective, they do not introduce innovations to the field. Overall, the paper shows solid practical utility, and the limited novelty in its core idea and network design prevents me from giving it a higher rating. A stronger emphasis on introducing novel concepts or architectural advancements would enhance the impact and recognition of the work.
Technical Quality: 2
Clarity: 3
Questions for Authors: N/A
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your comments and the approval of our motivation and practical utility of SINE. We address your concerns here.
___
>W1: The novelty of the overall idea and network structure is limited.
To address the reviewer's concerns, we first discuss the contributions and academic insights of our paper. Then, we discuss the differences between SINE and other methods (SAM, DETR, and Mask2Former). Finally, we outline some perspectives. We hope these responses adequately address the issues and concerns raised.
## Contribution and Insight
In-context segmentation is an important proxy task for unifying different segmentation tasks, and it has garnered significant attention from the research community. Our paper delves deeply into this task, and we summarize our contributions, novelty, and research insights as follows:
1. **Ambiguities in Visual Prompting**: This paper is the *first to explore the task ambiguity problem in prompts within in-context segmentation* (supported by **sZmi** and **nRMP**). This is an **important** and **novel** issue that has been **underexplored**. We investigate the conflicts among multiple tasks in in-context segmentation from the perspective of task ambiguity and *provide effective solutions*, offering valuable insights to the research community.
2. **Investigating the Capabilities of VFMs**: Utilizing Visual Foundation Models (VFMs) to address various tasks is becoming a research trend. Our paper explores how to efficiently transfer the visual representations of VFMs to in-context segmentation, a research paradigm that *currently lacks extensive exploration in the in-context segmentation field*.
3. **Lightweight and Effective Decoder**: We thoroughly analyze the challenges brought by task ambiguity (see Lines 181-185) and design the M-Former structure to address these issues. The novel dual-path design and Shared SA in M-Former ensure efficient decoding while avoiding information confusion across tasks of different granularity. Additionally, unlike previous methods, SINE achieves significant performance improvements with fewer trainable network parameters (19M). Thus, our method also demonstrates novelty.
## Differences between SINE and other methods
Based on the above contributions, SINE fundamentally differs from SAM, DETR, and Mask2Former:
**Comparison with SAM**
- SAM is a promptable segmentation model that takes a point as input and outputs the corresponding mask. The ambiguity lies in the uncertainty of the segmentation granularity represented by the point. SAM addresses this by introducing different queries in the decoder to obtain masks of different granularities.
- Resolving ambiguity in prompts (references) for in-context segmentation poses a more challenging task. In-context segmentation requires understanding the information in the prompt (e.g., category, position, shape) and learning the complex interaction relationships between the reference and target (supported by **sZmi**). Misunderstanding the prompt leads to incorrect outputs. Therefore, unlike SAM, SINE must learn the complex contextual relationships and avoid information confusion across different tasks.
**Comparison with DETR and Mask2Former**
- DETR and Mask2Former aim to train all model parameters on a specific dataset (e.g., COCO) to perform detection or segmentation on a limited number of categories.
- SINE aims to effectively perform in-context segmentation in the open world by using the off-the-shelf VFMs. The parameters of the encoder are frozen. We have analyzed the difficulties in addressing task ambiguity in in-context segmentation, challenges that do not exist in the general detection and segmentation task. *Mask2Former and DETR are not designed for in-context segmentation, making it difficult to address these challenges.*
The dual-path design and shared SA in M-Former are introduced specifically to address these challenges. As shown in Table 5(c), *compared to the Mask2Former decoder, M-Former effectively resolves task ambiguity, resulting in improvements across various tasks.*
## Perspective
Finally, we want to convey our perspective to the reviewer: In the era of large models, we cannot only focus on network architecture design. Different models with the same architecture can have vastly different characteristics (e.g., DINOv2 shows excellent patch-level matching ability, while CLIP excels in image-text retrieval). We should also focus more on how to fully leverage the potential of pre-trained models with fewer parameters and computations. From this perspective, SINE provides valuable insights to the academic community.
---
Rebuttal 2:
Comment: Thank you to the author for the rebuttal. My concerns have been partially addressed.
- "SINE must learn the complex contextual relationships and avoid information confusion across different tasks"
Compared with SAM, the contextual information seems heavily dependent on ground truth reference masks, whereas SAM can progressively segment objects in multiple turns. Other segmentation models, like those from co-segmentation, only require image groups as inputs. These examples make the claim confusing to me.
- "This paper is the first to explore the task ambiguity problem in prompts within in-context segmentation"
I agree with the author that we cannot solely focus on network architecture design in the era of large models. However, the insights presented appear to be only incrementally novel, as they mainly build on multiple previous works.
---
Rebuttal 3:
Comment: Thank you very much for your efforts and time in further discussing our work.
Below, we provide a more detailed clarification regarding the remaining concerns.
___
>Compared with SAM, the contextual information seems heavily dependent on ground truth reference masks, whereas SAM can progressively segment objects in multiple turns. Other segmentation models, like those from co-segmentation, only require image groups as inputs. These examples make the claim confusing to me.
We will address the reviewer's question from the following two points. If our explanation is unclear, we would welcome further discussion.
1. *SINE is more cost-effective and broadly applicable.*
When using SAM for processing a large number of images, it requires human interaction to segment each image, leading to significant labor costs. Co-segmentation requires ensuring that a group of images contains objects with the same semantic concept and cannot simultaneously segment multiple objects with different semantics in complex scenes. This limitation hinders its widespread application. In contrast, SINE only needs a single in-context example containing a reference image and mask to handle target images in different tasks without human intervention, making it more cost-effective and broadly applicable.
2. *Compared with SAM, why does SINE need contextual information?*
For each image, SAM segments objects based on human interaction without learning semantics. This allows SAM to progressively segment objects in multiple turns, but its drawback is the need for human input for every image, leading to significant labor costs. In contrast, SINE can batch-process images using just one image and its mask, a practical advantage that SAM lacks. To achieve this, SINE needs to understand the contextual relationship between the reference image and target images. In fact, **SAM and SINE represent two vertically developed directions** of segmentation foundation models and can complement each other. For instance, in auto-labeling, SAM could label objects (e.g., a dog) in the first image, and SINE could use that image as an in-context example to label subsequent images, reducing costs.
>I agree with the author that we cannot solely focus on network architecture design in the era of large models. However, the insights presented appear to be only incrementally novel, as they mainly build on multiple previous works.
We appreciate the reviewer's agreement with our perspective. Below, we explain why our insights are not incremental:
In widely used segmentation tasks such as semantic and instance segmentation, traditional methods require assigning a category label to the object, which becomes challenging when scaled to thousands of categories. *In-context segmentation overcomes these limitations by using visual concepts as annotations.* However, ambiguous prompts in in-context segmentation can lead to incorrect predictions, limiting its practical use. We are the first to consider and resolve task ambiguity in prompts for in-context segmentation, a significant contribution in both research and application, and we believe this challenge is inherently novel.
In exploring the potential of DINOv2 for in-context segmentation, the challenge lies in *how to maximize the reuse of the foundational model's capabilities across different tasks while still distinguishing the model's decisions for each task*. This is a technical issue that **cannot be solved by simply using methods like DETR**. For example, as shown in Fig. 1 of the paper, an ID query for Prof. Geoffrey Hinton might get confused with broader information about the "person" category when using the traditional DETR decoder to process ID queries, instance queries, and semantic prototypes simultaneously. **These technical challenges motivated us to propose the novel M-Former design**:
- Dual-Path Structure: M-Former's input includes queries and prototypes. Queries handle instance-level predictions, while prototypes, derived from the reference image, maintain semantic-level category information. SINE processes them separately, forming a dual-path structure to enable query learning and avoid interference from prototype's semantic-level information.
- Shared SA: Queries include both ID and instance queries, representing different levels of granularity. To prevent interaction between ID queries and coarse-grained instance queries or prototypes, we introduce a well-designed attention mask (Fig. 2, top right) in the shared SA to effectively address this issue.
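To make the shared-SA idea above concrete, here is a minimal sketch of how such a blocking attention mask can be laid out. The group sizes and slicing are our own toy choices, not the paper's; the actual mask is specified in the paper's Fig. 2 (top right).

```python
import numpy as np

# Hypothetical group sizes: ID queries, instance queries, semantic prototypes.
n_id, n_inst, n_proto = 4, 6, 3
n = n_id + n_inst + n_proto
mask = np.zeros((n, n), dtype=bool)  # True = attention blocked

id_idx = slice(0, n_id)
rest_idx = slice(n_id, n)            # instance queries + prototypes

# Block interaction between fine-grained ID queries and the coarser-grained
# instance queries / semantic prototypes, in both directions.
mask[id_idx, rest_idx] = True
mask[rest_idx, id_idx] = True

# Each group still attends freely within itself (diagonal blocks stay False).
assert not mask[:n_id, :n_id].any()
assert not mask[n_id:, n_id:].any()
```

A mask like this would be passed to the shared self-attention layer so that one set of weights serves all query types without cross-granularity interference.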
As shown in the table below, compared with a recent method that directly adopts the DETR structure without considering task ambiguity, SINE achieves significant performance improvements.
||DAVIS|YouTube-VOS|
|---|---|---|
|DINOv [A] | 73.3|60.9|
|SINE|77.0|66.2|
[A] Visual in-context prompting. CVPR 2024.
Based on the considerations mentioned above, our insights are not incremental but provide valuable contributions to the research community.
---
Rebuttal 4:
Comment: Thank you to the author for their detailed analysis and explanation. In my opinion, the insights and novelty presented in the paper are limited, leading me to consider it as a borderline-to-rejection paper. I appreciate the extensive effort put into evaluating the proposed method. As a result, recognizing that the method is demonstrated as a strong baseline, I have adjusted my rating to borderline.
---
Rebuttal Comment 4.1:
Comment: Thank you very much for your efforts and time again! This rebuttal and discussion will be helpful to improve our revised manuscript. | Summary: The paper proposes a generalist model for image segmentation named SINE, which unifies multiple image segmentation tasks into the common formulation of visual in-context learning. This work aims to identify and model the task of object reidentification to reduce ambiguities within the in-context examples. By incorporating the modeling of each specific segmentation task within the SINE architecture, it effectively improves upon existing generalist models and achieves strong performance across a wide range of segmentation tasks.
Strengths: 1. This paper offers a valuable review of related works in in-context segmentation, analyzing the problems with the recent SegGPT and clarifying the differences of task setting between them.
2. The authors discuss the relationship between different segmentation tasks, effectively unifying them as instance segmentation, which is well-motivated.
3. This paper points out and addresses the ambiguities in visual prompting, which is currently an open research problem.
4. The authors conduct comprehensive experiments across various segmentation tasks, demonstrating significant performance improvements over recent generalist and specialist models.
Weaknesses: 1. This paper only addresses the ambiguity between instance and semantic segmentation. However, there is a broader ambiguity in visual prompts, such as spatial position, category, color, etc. The authors need to discuss these aspects in more detail.
2. Compared to SegGPT, SINE introduces Objects365 as extra training data. Although this was explained, the comparison still seems somewhat unfair. Without using Objects365, can SINE achieve better performance than SegGPT, for example, by only using ADE20K, COCO, etc.? And what is the performance on COCO-20i and PASCAL-5i?
3. Although the authors conducted numerous experiments, the impact of different backbones is missing. Since the backbone is frozen, different models might bring significant performance differences.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The authors only conducted experiments on dinov2 vit-l. How do models of different sizes and different pre-training affect the results? For example, clip.
2. How does SINE perform few-shot learning? It seems that SINE can only accept a single reference image.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your comments and for recognizing our motivation and performance. We address your concerns here.
___
>W1: Discussion of more ambiguity in visual prompts.
Thanks for your helpful suggestions. SINE is the first work to highlight task ambiguity in the visual prompts of in-context segmentation, initially focusing on resolving ambiguities among ID, instance, and semantic segmentation tasks (as these are more important and commonly used). We believe addressing ambiguities at this level is meaningful. For more complex ambiguities, such as full objects and parts, spatial position, category, and color, these can be addressed by incorporating multimodal in-context examples (e.g., image and text). We will add more discussions on this in the paper.
>W2: Comparison of SINE and SegGPT by only using ADE20K, COCO.
The table below compares the one-shot semantic segmentation results of training SINE using only ADE20K and COCO with SegGPT. SINE outperforms SegGPT on three benchmarks. Notably, SINE achieves 10% higher mIoU than SegGPT on LVIS-92i, indicating stronger class generalization capability in real-world image segmentation compared to SegGPT. Additionally, we are the first to explore the use of Objects365 in in-context segmentation. Table 5(a) of the paper shows that SINE's effective design leads to further improvements in generalization capability with the inclusion of Objects365.
| | COCO-20i | PASCAL-5i | LVIS-92i |
| --- | --- | --- | --- |
| SegGPT | 56.1 | 83.2 | 18.6 |
| SINE | 67.1 | 86.3 | 28.8 |
>W3,Q1: Impact of different backbones.
We select DINOv2-S, DINOv2-B, DINOv2-L, and CLIP-L to explore the impact of different backbones. The conclusions are as follows:
1. DINOv2 Outperforms CLIP: DINOv2 achieves better performance than CLIP because it has general matching capabilities at both image and patch levels, allowing it to better understand complex contextual information between images. In contrast, CLIP captures image-text similarity, making it difficult to capture relationships between images, leading to poorer performance.
2. Larger DINOv2 Models Perform Better: Larger DINOv2 models have stronger representation capabilities, making it easier to capture contextual relationships, thus improving performance. This also indicates that SINE is scalable with the enhanced capabilities of the encoder.
| | COCO-20i | PASCAL-5i | LVIS-92i |
| --- | --- | --- | --- |
| SegGPT | 56.1 | 83.2 | 18.6 |
| SINE DINOv2-S | 56.8 | 81.4 | 26.7 |
| SINE DINOv2-B | 61.7 | 84.1 | 29.5 |
| SINE DINOv2-L | 64.5 | 85.4 | 31.2 |
| SINE CLIP-L | 34.8 | 57.3 | 16.1 |
>Q2: How does SINE perform few-shot learning?
Multiple reference image features and masks are concatenated in the spatial dimension. The resulting feature and mask can be treated as a single reference image, and the subsequent process remains the same.
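The few-shot mechanism described above can be sketched as follows. The shapes and variable names are ours for illustration, not the released code; the point is only that K reference feature maps and masks, once concatenated along the spatial axis, look like a single larger reference image to the rest of the pipeline.

```python
import numpy as np

# Toy sizes: K reference images, each with an HxW feature map (C channels)
# and a binary mask.
K, H, W, C = 3, 16, 16, 8
rng = np.random.default_rng(0)
ref_feats = [rng.random((H, W, C)) for _ in range(K)]
ref_masks = [rng.random((H, W)) > 0.5 for _ in range(K)]

# Flatten each map into (H*W, C) tokens, then concatenate over references,
# yielding one combined reference of K*H*W spatial tokens.
feat = np.concatenate([f.reshape(-1, C) for f in ref_feats], axis=0)  # (K*H*W, C)
mask = np.concatenate([m.reshape(-1) for m in ref_masks], axis=0)     # (K*H*W,)
```

After this step the few-shot case is processed identically to the one-shot case, since downstream modules only see one (longer) sequence of reference tokens.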
---
Rebuttal Comment 1.1:
Comment: The rebuttal has addressed my main concerns. Since the contribution of unifying segmentation tasks with in-context examples is clear, novel, well motivated, and well demonstrated, I would like to keep my original rating and recommend to accept this paper. I hope the authors can release their source code to benefit researchers in the same domain.
---
Reply to Comment 1.1.1:
Title: Reply
Comment: We thank the reviewer for recognizing our work, and we promise to open-source our code.
Strengths: [Experimental Results]. Extensive experiments across multiple benchmarks show significant improvements in multiple downstream tasks.
[Paper writing]. This paper is well-written and overall easy to follow.
[Interesting task formulation]. SINE was shown to effectively address the issue of task ambiguity, producing relevant segmentation masks that are more aligned with the semantic content of the images, which was not explored by the previous works in this domain.
Weaknesses: [Concerns on Generalizability: Complex Interaction Relationships Beyond Semantically Similar Objects] While this paper adeptly handles in-context instance or semantic segmentation, numerous open-vocabulary segmentation models already capably segment novel classes or objects. A crucial demonstration of an in-context segmentation model's comprehension of in-context samples should extend beyond merely identifying semantically similar objects provided by context information. An intriguing task would involve, for example, segmenting objects situated on a table across multiple images—such as a bottle, a plate, and a book in successive images—and understanding whether SINE can recognize and segment another object on a table in a subsequent image. This requires the model to understand the interaction, the relationships, etc, that goes beyond segmenting semantically similar objects.
[Challenges with Complex In-Context Information] Often, a single pair of images suffices to provide the necessary in-context information reflecting the user's intent. However, more complex scenarios might require integrating multiple pairs of images to fully capture intricate in-context information. I am concerned about SINE's limitations in handling such complexity.
[Performance Discrepancy in Video Segmentation Compared to SegGPT] While SINE excels beyond SegGPT in instance segmentation on MSCOCO, it underperforms in video instance segmentation across most evaluated benchmarks. This result is concerning, considering video instance segmentation's unique demand for the model to sustain consistent correspondence across multiple video frames, a task more complex than segmenting static images. I am interested in understanding the specific reasons behind this performance gap.
[Unable to Handle Many Tasks SegGPT Supports]. SegGPT can actually support a lot more tasks, such as hierarchical in-context segmentation. I am concerned if SINE is overfitted on the instance/semantic segmentation tasks.
Technical Quality: 2
Clarity: 2
Questions for Authors: Minor question: Why is the target image the same as the raw image in Figure 4a)?
Minor issues - typos: L307 "Results Table 7 compares ...." should be "Results Table 4 compares ...."? If it's not a typo, Table 4 was never discussed in the main paper.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your comments and for recognizing our task formulation. We address your concerns here.
___
>W1: Concerns on Generalizability.
Fig. 1 in the attached PDF shows SINE's capability in handling complex interaction relationships.
- In Fig. 1(a), the reference consists of multiple images, each containing different objects (box, cup, keyboard, mouse). When using these as in-context examples, SINE can segment one or more semantically different objects on a desk.
- In Fig. 1(b), with a reference containing only one object, the in-context example cannot represent complex interactions, and thus no segmentation result is provided.
- In Fig. 1(c), replacing multiple single-object images with a single image containing multiple objects yields the same effective results.
These experiments indicate that SINE has the potential to handle complex interaction relationships beyond semantically similar objects.
>W2: Challenges with Complex In-Context Information.
We believe SINE can handle complex in-context information because the contextual relationships between complex scenarios can be captured more efficiently in the representation space of DINOv2.
SINE leverages this characteristic of DINOv2 to accurately understand and fully capture intricate in-context information in multiple pairs of images. Extensive experiments show that SINE performs better in the few-shot setting than the one-shot setting, indicating its ability to effectively comprehend and capture in-context information from multiple samples. Figure 1(a) in the attached PDF also shows SINE's capability to understand and handle the complex relationships in in-context examples.
>W3: Performance Discrepancy in Video Segmentation Compared to SegGPT.
We think the reason behind this performance gap is that SegGPT trains all model parameters (300M), while SINE uses a **simpler in-context fusion module** and **fewer learnable parameters (19M)**. The detailed analysis is as follows:
SegGPT uses a broader dataset, concatenates references and targets spatially, and employs a ViT architecture. It trains all model parameters (300M), with self-attention effectively capturing relationships between video frames.
SINE aims to learn a general and lightweight decoder to efficiently transfer representations from the pre-trained DINOv2 encoder to in-context segmentation. The frozen DINOv2 is limited in capturing in-context information. For efficiency, SINE deploys only a simple in-context fusion module (1.58M) to learn in-context relationships, limiting its ability to handle inter-frame relations in complex videos.
Considering image tasks or simple videos (e.g., DAVIS), we believe that the overall performance of SINE is satisfactory. In particular, compared to recent generalist segmentation models SEEM[A] and DINOv[B] (which train all parameters), SINE's efficient design shows greater potential in video tasks (see the table below).
Although SINE currently has limitations in learning complex inter-frame relationships in videos, we believe that by designing a more suitable In-Context Interaction module, the current paradigm holds greater potential for solving in-context segmentation tasks. We will explore this further in future work. These discussions will be added to the paper.
||DAVIS|YouTube-VOS|
|---|---|---|
|SegGPT|75.6| 74.7|
|SEEM [A] |58.9| 50.0|
|DINOv [B] | 73.3|60.9|
|SINE|77.0|66.2|
[A] Segment everything everywhere all at once. NeurIPS 2023.
[B] Visual in-context prompting. CVPR 2024.
>W4: Unable to Handle Many Tasks SegGPT Supports.
SINE and SegGPT have different motivations.
- SegGPT aims to verify that visual in-context learning can unify different segmentation tasks by using a broader range of datasets, such as semantic segmentation (ADE20K), instance segmentation (COCO), and part segmentation (PACO). Hence SegGPT can perform hierarchical in-context segmentation.
- SINE aims to resolve the task ambiguity in prompts. As the first work to highlight this problem, we initially focus on resolving ambiguities among ID, instance, and semantic segmentation tasks (as these are more important and commonly used).
When PACO is included as training data, **SINE can perform part segmentation like SegGPT**, as shown in Fig.2 of the attached PDF. This shows that our method does not overfit instance/semantic segmentation. Additionally, the results in Table 1 of the paper (LVIS-92i) and Fig.3 of the attached PDF indicate that SINE has stronger class generalization capability compared to SegGPT in real-world image segmentation.
>Q1: Why is the target image the same as the raw image in Figure 4a ?
In Fig. 4(a), the giraffe demo is selected from two video frames with a large temporal gap. The reference is from an earlier frame with only one giraffe, while the subsequent frame gradually introduces another giraffe. This demo tests whether SINE can identify objects with the same ID without temporal information.
>Q2: typos: L307 "Results Table 7 compares ...."
Thanks for pointing this out. It should be Table 4. We will fix this error in the paper.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their responses. Most of my questions have been addressed. Therefore, I will maintain my current rating as borderline accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback and your great efforts. Any further questions/suggestions would be also appreciated. | Rebuttal 1:
Rebuttal: # **General Response**
We thank the reviewers for recognizing that our paper points out an open research problem, i.e., the ambiguities in visual prompting (**nRMP**). The motivation on task ambiguity is clear (**rhP3**), and we effectively address task ambiguity (**sZmi,nRMP**). Our method is rational and effective (**2YmX**), significantly improving performance across various segmentation tasks (**sZmi,nRMP**), and demonstrating solid practical utility (**rhP3**).
In addition, we will carefully address the issues and suggestions raised by the reviewers and will make further revisions and improvements to our paper. In every Official Review of the reviewers below, we have provided responses to the questions and suggestions made by the reviewers, and we hope these responses adequately address the issues and concerns raised. **If the reviewers have any further questions regarding our paper and responses, please let us know.**
___
In the General Response, we highlight the contributions and insights of this work.
In-context segmentation is an important proxy task for unifying different segmentation tasks, and it has garnered significant attention from the research community. Our paper delves deeply into this task, and we summarize our **contributions**, **novelty**, and academic **insights** as follows:
1. **Ambiguities in Visual Prompting**: This paper is the *first to explore the task ambiguity problem in prompts within in-context segmentation* (supported by **sZmi** and **nRMP**). This is an **important** and **novel** issue that has been **underexplored**. We investigate the conflicts among multiple tasks in in-context segmentation from the perspective of task ambiguity and *provide effective solutions*, offering valuable insights to the research community.
2. **Investigating the Capabilities of VFMs**: Utilizing Visual Foundation Models (VFMs) to address various tasks is becoming a research trend. Our paper explores how to efficiently transfer the visual representations of VFMs to in-context segmentation, a research paradigm that *currently lacks extensive exploration in the in-context segmentation field*.
3. **Lightweight and Effective Decoder**: We thoroughly analyze the challenges brought by task ambiguity (see Lines 181-185) and design the M-Former structure to address these issues. The novel dual-path design and Shared SA in M-Former ensure efficient decoding while avoiding information confusion across tasks of different granularity. Additionally, unlike previous methods, SINE achieves significant performance improvements with fewer trainable network parameters. Thus, our method also demonstrates novelty.
Taken together, these contributions represent novel insights to the academic community.
___
The attached PDF contains 3 additional results:
1. Generalizability of SINE beyond semantically similar objects. (**sZmi**)
2. Visualization of part segmentation. (**sZmi**)
3. Visualization comparisons between SINE and SegGPT on video and image tasks. (**2YmX**)
Pdf: /pdf/509ea74e7a0414232b1af2745523e011dafd7ab1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Mixture of Experts Meets Prompt-Based Continual Learning | Accept (poster) | Summary: The paper titled "Mixture of Experts Meets Prompt-Based Continual Learning" explores the integration of prompt-based continual learning methods with mixture of experts (MoE) architectures. The paper proposes a novel gating mechanism called Non-linear Residual Gates (NoRGa) to enhance the performance of prompt-based continual learning by leveraging theoretical insights and empirical evidence.
Strengths: 1. This paper offers a novel connection between prompt-based-tuning and mixture-of-experts, providing a fresh perspective on prompt-based continual learning approaches.
2. Introduction of NoRGa, which integrates non-linear activation and residual connections to enhance continual learning performance while maintaining parameter efficiency.
Weaknesses: 1. Comparing NoRGa with other state-of-the-art continual learning methods that do not use prompts would highlight the specific advantages of the proposed method.
2. It may be better to use an additional graph to represent the final method NoRGa of the paper.
3. It may be necessary to further clarify the differences between the proposed method and other prompt-based methods.
Technical Quality: 3
Clarity: 2
Questions for Authors: Which dataset is the ViT-B/16 pre-trained on?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: This paper has already discussed the limitations of the work in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback and insightful comments. Below, we provide a point-to-point response to these comments and summarize the corresponding revisions in final version.
__Q1: Comparing NoRGa with other state-of-the-art continual learning methods that do not use prompts would highlight the specific advantages of the proposed method.__
A1: Thank you for your valuable suggestion. As our work focuses on the continual learning of pretrained models, we have limited our comparison to pretrained model-based (PTM-based) continual learning methods. Previous works have also demonstrated that utilizing pretrained models offers great promise for continual learning, surpassing the performance upper bound of non-PTM-based methods. Specifically, we compared our method (NoRGa) against ADAM [1], RanPAC [2], and ESN [3], using ViT-B/16 with pretrained Sup-21K weights. Performance was evaluated using final average accuracy (FA) on Split CIFAR-100 and Split CUB-200. The results can be summarized as follows:
| Method | Split CIFAR-100 | Split CUB-200 |
|----------------|-----------------|------------------|
| ADAM + VPT-D [1] | 85.04 | 85.28 |
| ADAM + SSF [1] | 85.27 | 85.67 |
| ADAM + Adapter [1] | 87.29 | 85.84 |
| RanPAC [2] | 92.20 | 90.30 |
| ESN [3] | 86.34 | N/A |
| NoRGa (ours) | __94.48__ | __90.90__ |
As shown, NoRGa exhibits competitive performance on both datasets. For example, on Split CIFAR-100, NoRGa achieves an FA of 94.48%, surpassing the next best method by over 2%. On Split CUB-200, NoRGa also demonstrates competitive results compared to other baselines. This improvement underscores the effectiveness of our proposed method in mitigating catastrophic forgetting and preserving knowledge across multiple tasks. A more detailed comparison will be included in the final version.
__Q2: It may be better to use a additional graph to represent the final method NoRGa of the paper.__
A2: Thank you for your valuable suggestion. You can refer to **Q1** in General Response.
__Q3: It may be necessary to further clarify the differences between the proposed method and other prompt-based methods.__
A3: Thank you for your valuable suggestion. We compare the differences between L2P, DualPrompt, HiDe-Prompt, and NoRGa (ours) as follows:
- __L2P:__ Utilizes a shared prompt pool for all tasks. Each prompt is associated with a learnable prompt key. L2P then employs the query feature $q(\boldsymbol{x})$ to retrieve the top-K most similar prompts using cosine distance. Consequently, the most relevant keys and corresponding prompts are explicitly assigned to instances based on the query feature.
- __DualPrompt:__ Enhances L2P by using two complementary prompts during training: a general prompt (G-Prompt) and a task-specific expert prompt (E-Prompt) per task. The set of E-Prompts acts as an expanding pool of task-specific knowledge, similar to the L2P prompt pool, but with the key difference of growing incrementally with each new task. DualPrompt employs the same prompt selection mechanism as L2P for the E-Prompts. In contrast, the G-Prompt is shared among all tasks, requiring no prompt selection.
- __HiDe-Prompt:__ A recent SOTA prompt-based method that employs only task-specific E-Prompts. Prompts for each task are trained with the task's objective and a contrastive regularization that tries to push features of new tasks away from prototypes of old ones. Unlike L2P and DualPrompt, prompt selection is achieved by an additional MLP head placed atop the pre-trained ViT, which employs $q(\boldsymbol{x})$ to determine the suitable prompt.
- __NoRGa:__ Utilizes the same framework as HiDe-Prompt. Specifically, NoRGa only uses task-specific prompts (E-Prompts) with an additional MLP head for prompt selection. Recognizing that MSA layers embody a mixture of experts (MoE) architecture and applying prefix tuning is the process of introducing new experts into these models, NoRGa modifies the gating mechanism of prefix tuning, addresses statistical limitations of prefix tuning, and enhances continual learning performance.
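The MoE reading of prefix tuning sketched above can be illustrated with a toy single-head attention computation (dimensions and random weights are ours, purely for illustration; NoRGa's actual non-linear residual gate is defined in the paper). Attending over the concatenated keys makes the softmax act as a gating function over a mixture of value-vector "experts", and prepending prefix key/value pairs amounts to adding new experts to that mixture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy dimensions: one query token, n_tok input tokens, n_prefix prefix pairs.
d, n_tok, n_prefix = 8, 5, 2
rng = np.random.default_rng(0)
q = rng.standard_normal(d)                  # a single query vector
K_tok = rng.standard_normal((n_tok, d))     # keys of the input tokens
V_tok = rng.standard_normal((n_tok, d))     # values ("pre-trained experts")
K_pre = rng.standard_normal((n_prefix, d))  # prefix keys
V_pre = rng.standard_normal((n_prefix, d))  # prefix values ("new experts")

# Prefix tuning concatenates the prefix pairs with the token pairs.
K_all = np.concatenate([K_tok, K_pre], axis=0)
V_all = np.concatenate([V_tok, V_pre], axis=0)

# The softmax over all scores is exactly a gating function over the
# n_tok + n_prefix value "experts"; the head's output is their gated mix.
gates = softmax(K_all @ q / np.sqrt(d))
out = gates @ V_all
```

Under this view, modifying how the prefix scores enter the softmax (as NoRGa does with its non-linear residual gates) directly changes the expert gating, which is what the statistical analysis in the rebuttal refers to.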
We will add the above discussion in the final version.
__Q4: Which dataset is the ViT-B/16 pre-trained on?__
A4: Thanks for your question. The ViT-B/16 model is pre-trained on the ImageNet dataset [4] (ImageNet-1K and ImageNet-21K). We utilize several publicly available checkpoints to demonstrate the effectiveness and robustness of our proposed method under varying pretraining settings:
- Sup-21K: Supervised pretraining on ImageNet-21K
- iBOT-21K: Self-supervised pretraining on ImageNet-21K
- iBOT-1K, DINO-1K, MoCo-1K: Self-supervised pretraining on ImageNet-1K.
[1] Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need, arxiv 2023.
[2] Ranpac: Random projections and pre-trained models for continual learning, NeurIPS 2023.
[3] Isolation and Impartial Aggregation: A Paradigm of Incremental Learning without Interference, AAAI 2023.
[4] ImageNet: A large-scale hierarchical image database, CVPR 2009
---
Rebuttal Comment 1.1:
Comment: I have read the authors' response and my concerns have been addressed. I raise my rating to 6.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We would like to thank the reviewer for rating a positive score of 6. We are happy to discuss more if the reviewer still has questions.
Best regards,
Authors | Summary: The paper explores the theoretical underpinnings and practical implications of prompt-based methods in continual learning, aiming to enhance our understanding and optimize their effectiveness. It introduces a novel perspective by connecting prefix tuning with mixture of experts models, revealing insights into how self-attention mechanisms encode specialized architectures. The proposed Non-linear Residual Gates (NoRGa) further advances this by improving within-task prediction accuracy and overall continual learning performance while maintaining parameter efficiency.
Strengths: 1. This paper introduces a novel theoretical framework connecting self-attention mechanisms to mixture of experts models, significantly advancing the understanding of prompt-based approaches in continual learning.
2. The theoretical insights are well-supported and complemented by empirical experiments across diverse benchmarks, demonstrating robustness and reliability.
3. The concepts, although complex, are explained with clarity, aided by concrete examples and theoretical justifications.
4. Addressing the gap in theoretical understanding, the paper proposes a practical enhancement (NoRGa) that promises substantial improvements in model adaptation and efficiency.
Weaknesses: 1. The paper could benefit from clearer explanations regarding the core elements of prompt-based methods, particularly in how prompts are defined and utilized within their framework.
2. While NoRGa is presented as a novel gating mechanism, its explicit connection to prompts could be elaborated further. Is it reasonable to interpret NoRGa's gating mechanism as a variant or extension of prompts in a broader sense?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could NoRGa's gating mechanism be seen as a different interpretation or implementation of prompts, albeit operating at a different level or with distinct functional goals?
2. Are there instances of redundant theoretical explanations or definitions throughout the paper? Streamlining these could improve clarity and focus on the novel contributions without detracting from the foundational concepts.
3. Given the complexity of the concepts, would integrating more visual aids (e.g., diagrams illustrating the architecture of NoRGa or the relationship between prompts and gating mechanisms) enhance the paper's accessibility and readability?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper's reliance on dense theoretical explanations without adequate visual aids may limit its accessibility. Visual representations could significantly enhance comprehension of complex concepts like the relationship between prompts and NoRGa, potentially broadening the paper's audience and impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback and insightful comments. Below, we provide a point-by-point response to these comments and summarize the corresponding revisions in the final version.
__Q1: The paper could benefit from clearer explanations regarding the core elements of prompt-based methods, particularly in how prompts are defined and utilized within their framework.__
A1: Thank you for your valuable suggestion. As detailed in Section 2, within our framework, a prompt is a set of learnable parameters denoted by $\mathbf{P} = [ \mathbf{P}^K, \mathbf{P}^V ] \in \mathbb{R}^{L_p \times d}$. These parameters are utilized to fine-tune the multi-head self-attention (MSA) layer of the pretrained model, where $L_p$ is the prompt length and $d$ is the embedding dimension. Notably, the MSA layer incorporates a specialized architecture comprising multiple mixture of experts (MoE) models. Our approach leverages prompt parameters to refine these models by introducing new prefix experts. Specifically, $\mathbf{P}^V \in \mathbb{R}^{\frac{L_p}{2} \times d}$ encodes parameters for new prefix experts appended to the pretrained MoE models. Correspondingly, $\mathbf{P}^K\in \mathbb{R}^{\frac{L_p}{2} \times d}$ encodes parameters for the associated score functions of these new experts within the MoE models in the MSA layer. We adopt task-specific prompts for each task, ensuring that every task has its own set of experts and score functions. During inference, the task identity is inferred to select the appropriate experts and score functions. We will add the above discussion in the final version.
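To make the description above concrete, here is a minimal single-head sketch (our illustration, not code from the paper): $\mathbf{P}^K$ is prepended to the frozen key sequence and $\mathbf{P}^V$ to the frozen value sequence, so each attention row acts as a score function over both the pretrained experts and the new prefix experts.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def prefix_attention(X, W_q, W_k, W_v, P_k, P_v):
    """Single attention head with prefix tuning.

    X: (N, d) input tokens. P_k, P_v: (L_p/2, d) learnable prefix
    parameters, prepended to the frozen key/value sequences so that
    each row of the attention matrix scores both the pretrained
    experts (the tokens) and the new prefix experts.
    """
    Q = X @ W_q
    K = np.vstack([P_k, X @ W_k])   # (L_p/2 + N, d)
    V = np.vstack([P_v, X @ W_v])   # (L_p/2 + N, d)
    A = softmax(Q @ K.T / np.sqrt(X.shape[1]))
    return A @ V                    # (N, d)
```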
__Q2: Could NoRGa's gating mechanism be seen as a different interpretation or implementation of prompts, albeit operating at a different level or with distinct functional goals?__
A2: NoRGa's gating mechanism can be regarded as a distinct implementation of prompts. As demonstrated in Section 3, the MSA layer in pretrained models can be regarded as a specialized architecture comprising multiple MoE models. Prefix tuning finetunes these MoE models by introducing new prefix experts, utilizing prompts to encode the parameters of the new experts' components. The score functions for newly introduced experts via prefix tuning are linear functions of the input, resulting in suboptimal sample efficiency for parameter estimation as detailed in Appendix A. To mitigate this statistical limitation, NoRGa refines the score functions associated with the original prefix tuning's new experts by incorporating non-linear activation and residual connections, __substantially enhancing statistical efficiency (polynomial versus exponential) with theoretical guarantees__. We will add the above discussion in the final version.
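As an illustrative sketch only: the exact activation NoRGa uses is specified in the paper; here we assume a tanh non-linearity with the scalars $\alpha$ and $\tau$, applied with a residual connection to the prefix logits, and the function name is our own.

```python
import numpy as np

def norga_gate(s_prefix, s_pretrained, alpha=1.0, tau=1.0):
    """Apply non-linear residual gating to the prefix logits only.

    s_prefix:     pre-softmax attention logits against the new prefix
                  keys (the score functions introduced by prefix tuning).
    s_pretrained: logits against the frozen keys, left untouched.
    g(s) = s + alpha * tanh(tau * s) is an assumed concrete choice of
    activation; alpha = 0 recovers plain (linear-gating) prefix tuning.
    """
    gated = s_prefix + alpha * np.tanh(tau * s_prefix)
    return np.concatenate([gated, s_pretrained], axis=-1)
```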
__Q3: Are there instances of redundant theoretical explanations or definitions throughout the paper? Streamlining these could improve clarity and focus on the novel contributions without detracting from the foundational concepts__
A3: Thanks for your question. Upon careful examination, we found minimal redundancy in the theoretical explanations and definitions presented throughout the paper. Section 2 introduces the foundational concepts of MSA, prefix tuning, and mixture of experts (MoE). Building upon this foundation, Section 3 elucidates the connections among self-attention, prefix tuning, and MoE. Based on these insights, we propose a novel method termed NoRGa to enhance statistical efficiency. Our theoretical analysis considers a regression framework to demonstrate that the non-linear residual gating is more sample efficient than the linear gating in terms of estimating experts. The algebraic independence condition is employed to characterize compatible experts for the non-linear residual gating. Finally, we design a loss function based on Voronoi cells for the convergence analysis of expert estimation in MoE models. Our results indicate that under non-linear residual gating, MoE experts exhibit polynomial-order estimation rates, outperforming the $1/\log^{\tau}(n)$ rate observed in the original prefix tuning design (as detailed in Appendix A).
__Q4: Given the complexity of the concepts, would integrating more visual aids (e.g., diagrams illustrating the architecture of NoRGa or the relationship between prompts and gating mechanisms) enhance the paper's accessibility and readability?__
A4: Thank you for your valuable suggestion. You can refer to **Q1** in General Response.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer MKaE
Comment: Thank you for the authors' positive response.
Most of my doubts have been resolved, but in fact I share Reviewer dPHR's doubts about "the placement of this work as a Continual Learning contribution". I hope the authors will seriously consider whether the proposed method can truly solve core problems such as catastrophic forgetting from the perspective of continual learning. In addition, the experiments should also be compared against some popular PTM-based continual learning methods.
Nevertheless, I also appreciate that this work rethinks the relationship between MoE and prompt from a novel and meaningful perspective, so I still maintain the original score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for maintaining the positive score “6” of the paper. Regarding the question of whether our proposed method effectively addresses core challenges in CL, particularly catastrophic forgetting, we wish to emphasize that our paper is built on the theory that improved WTP is both a necessary and sufficient condition for enhanced CL performance. This theory, which is supported by prior CL research [1] and the original HiDe-Prompt paper, is fundamental to connecting our contribution to continual learning.
While exploring alternative Parameter-Efficient Fine-Tuning (PEFT) methods or incorporating more complex expert models might offer improvements in WTP, these approaches lack theoretical guarantees and could lead to an increased number of parameters. In contrast, our NoRGa method modifies the original score functions of prefix tuning to enhance WTP performance with theoretical rigor. Importantly, NoRGa maintains the same parameter count as HiDe-Prompt, which is crucial in CL due to memory constraints.
We will incorporate the above discussion in the final version. If you have any further questions, please let us know.
[1] A theoretical study on solving continual learning, NeurIPS 2022. | Summary: This paper introduces an extension to prefix tuning, by introducing non-linear residual gating - a simple extension over existing prefix tuning. This non-linear residual gating is supported with theoretical efficiency guarantees (polynomial versus exponential) to better estimate the optimal parameters. When applied to the problem of rehearsal-free continual learning with pretrained models (over four standard benchmarks) on top of the HiDe-Prompt method, the suggested gating provides consistent and partly significant improvements.
Strengths: * To the best of my knowledge in both the Continual Learning and the parameter-efficient finetuning domain, the non-linear residual gating prefix MoE is a novel extension over standard prefix tuning.
* This paper is generally well written, and except for the proofs, quite easy to follow.
* Assuming the regression estimator is a fair modeling choice, the statistical sample-efficiency improvement of the proposed NoRGa setup over normal gating is significant (polynomial versus exponential).
* The explicit improvements over HiDe-Prompt are consistent, and often significant.
Weaknesses: My biggest issue with this work is its placement as a Continual Learning contribution, and evaluating it as such.
__[1]__ Firstly, the proposed approach, while helping with WTP, is simply an extension on top of prefix tuning - and none of the contributions connect to the continual distribution-shift nature. The improved performance is much more likely tied to an improvement in the underlying PEFT approach, which in itself is simply sufficient on standard benchmarks (see Zhou et al. 2023). This is further supported by the fact that the authors freeze both alpha and tau after the first task (see supplementary D). Moreover, on such small benchmarks, introducing two additional tunable hyperparameters does add degrees of freedom to overfit to these benchmarks, and makes direct comparison to existing methods difficult.
Consequently, the question needs to be answered: How robust is NoRGa with respect to the additionally introduced alpha and temperature? These are strictly two hyperparameters more than HiDe-Prompt. As such, how much more compute went into hyperparameter optimization for NoRGa w.r.t. HiDe-Prompt? And given the same search budget, how does HiDe-Prompt compare? This is the most crucial aspect, as the other method comparisons are somewhat redundant, since HiDe-Prompt introduces orthogonal classifier-head alignment which can be applied to all other methods as well.
> It would be great if the authors could address both the placement as a continual learning contribution (as opposed to a parameter-efficient finetuning paper, which would need to be evaluated on other respective benchmark tasks), and the comparability to HiDe-Prompt.
__[2]__ This paper also only tackles prompt-based rehearsal-free continual learning, but misses discussions of recent works on first-task-adaptation and simple PEFT-style tuning for Continual Learning, such as: [1] Jason et al. 2022, “A simple baseline that questions the use of pretrained models for continual learning”, [2] Panos et al 2023, “First session adaptation: A strong replay-free baseline for class-incremental learning”, [3] Zhou et al. 2023, “Revisiting class-incremental learning with pretrained models: Generalizability and adaptivity are all you need”, [4] McDonnell et al. 2023: “RanPAC: Random projections and pretrained models for continual learning”.
> Given that these works show that existing prompt-style CL objectives are matched by simply training on the first task, and that performance is matchable with simple parameter-efficient finetuning without explicitly accounting for the continual nature of the problem, how should the insights in this paper be rated w.r.t. these works?
__[3]__ A vast part of this paper deals with the sample in-efficiency of standard prefix tuning mechanisms. However, the authors do not provide any efficiency tests (e.g. how does performance vary as a function of training iterations / samples seen), and only provide slightly improved performances on a few smaller-scale benchmarks. This is somewhat unfortunate, as it would be great to see how the efficiency estimates (e.g. Eq. 16) connect to practical convergence behaviour (e.g. L288 “polynomial number of data versus exponential number of data”).
__[4]__ Both statistical efficiency proofs also do not take into account the continual nature of the problem, but rather assume a single generative model and i.i.d. samples from said model, which are presented to the learner. This fully disconnects from the actual continual nature and sequential distribution shifts expected in such problems. Similarly, I'm having general trouble connecting the assumptions made throughout the proofs to the practical nature of the problem. E.g., why is Eq. 13 a reasonable assumption for the CL problem? Is the convergence behaviour of an LS regression estimator suitably connected to practical convergence behaviour? If so, why? If not, why not? Similarly, why would I want to encourage algebraic independence - what would this give within the actual continual learning scenario?
---
__Smaller issues:__
* I find it somewhat difficult to follow the proofs in 4.2 and Appendix A. Providing a proof-sketch as an overview would significantly help readability.
* It is a bit problematic to have the entire proof of convergence, without even a proof sketch, in the supplementary, as it motivates the entire Section 4 and the non-linear residual gating. It would be great if the authors could provide at the very least a proof sketch in the main paper.
* L139/140 is fairly hard to parse.
Technical Quality: 3
Clarity: 3
Questions for Authors: All relevant questions are incorporated in the previous section.
I am currently leaning towards rejection - not because of the proposed method and the theoretical support (both of which I believe are sensible and meaningful), but rather its placement and consequent evaluation as a continual learning method. Neither in the methodological nature NOR in the actually conducted proofs do the authors account for the continual nature of the problem.
Instead, I believe that this paper is much better suited as a contribution in the domain of parameter-efficient model finetuning. However, this in turn requires different benchmark evaluations.
Moreover, I am uncertain about the significance of the report results (see above).
Together, I would be happy to raise my score if the authors can address these particular concerns (as well as those listed above).
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors do not discuss limitations or societal impact explicitly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback and insightful comments. Below, we provide a point-by-point response and summarize the corresponding revisions in the final version.
__Q1: The placement as a continual learning contribution__
A1: Our contributions encompass the introduction of a novel connection between self-attention, prefix tuning, and MoE. __This offers a fresh perspective on the design of previous prompt-based continual learning methods__. For example, DualPrompt uses two complementary prompts during training: a general prompt (G-Prompt) shared across tasks and a task-specific expert prompt (E-Prompt) per task. As illustrated in our paper, each head in the MSA layer comprises multiple MoE models. We hypothesize that the G-Prompt aims to expand the set of pretrained experts in the MSA layer, which capture generalizable knowledge, with new experts encoded within it. However, unlike the pretrained experts, the experts in the G-Prompt are learnable across tasks, potentially leading to catastrophic forgetting. Conversely, the E-Prompt encodes task-specific experts capturing each task's knowledge.
Furthermore, we agree that NoRGa can have a broader impact on other domains, such as parameter-efficient model finetuning. However, our work builds on HiDe-Prompt, which uses task-specific prompts, i.e., each task has its own set of experts and score functions. NoRGa modifies the original score functions of prefix tuning, enhancing WTP performance with theoretical guarantees. We then use the theory that improved WTP performance is a necessary and sufficient condition for improved CL performance. This theory is also discussed in previous continual learning works and the original HiDe-Prompt paper; it is critical and connects our contribution to continual learning. Importantly, NoRGa maintains the same parameter count as HiDe-Prompt, which is crucial in continual learning due to memory constraints.
__Q2: The introduction of 2 hyperparameters $\alpha$ and $\tau$__
A2: In our framework, $\alpha$ and $\tau$ are learnable hyperparameters optimized through backpropagation by the objective of the first task, eliminating the need for manual tuning. __Moreover, our theory on NoRGa's statistical efficiency holds for any values of $\alpha$ and $\tau$, demonstrating the theoretical robustness__. We also experimented with fixed and learnable settings for $\alpha$ and $\tau$. For fixed hyperparameters, we set their values to 1. We report FA on Split CUB-200 and Split CIFAR-100 with Sup-21K weights. The results are summarized below:
|Method|Split CIFAR-100|Split CUB-200|
|----------------------------|-----------------|---------------|
|HiDe-Prompt|92.61|86.56|
|Learnable $\alpha$, $\tau = 1$|94.38|90.45|
|$\alpha = 1$, Learnable $\tau$|94.42|90.48|
|$\alpha = 1$, $\tau = 1$|94.29|90.32|
|NoRGa|__94.48__|__90.90__|
Although performance slightly decreased with fixed hyperparameters, it still outperforms HiDe-Prompt, indicating our method's empirical robustness. We will add this discussion in the final version.
__Q3: Discussions of recent works on first-task-adaptation and simple PEFT-style tuning for Continual Learning__
A3: Thank you for highlighting these excellent related works. We will add the following discussion to the final version:
Previous works have shown that first-task adaptation and simple PEFT-style tuning can achieve performance competitive with prompt-based methods [1,2,3,4]. For instance, [1] demonstrated that appending a nearest class mean (NCM) classifier to a ViT model's feature outputs can serve as a strong baseline. [2, 3] enhanced this strategy by adapting the pretrained model to the first task using three PEFT methods for transformer networks [3] and the FiLM method for CNNs [2]. Additionally, [4] improved NCM by incorporating second-order statistics (covariance and Gram matrices). However, these methods, which finetune only the backbone for the initial task, may not always ensure satisfactory separation of new tasks' features. Our work focuses on continually adapting the backbone, utilizing task-specific prompts to consistently capture emerging tasks' characteristics, and proposing a novel method to enhance the CL performance of previous prompting methods.
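For concreteness, the NCM baseline discussed above reduces to prototype matching in feature space; a minimal sketch (our illustration, not the cited authors' code, with hypothetical function names):

```python
import numpy as np

def ncm_fit(features, labels):
    """Nearest class mean: store one prototype (mean feature) per class."""
    classes = np.unique(labels)
    protos = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def ncm_predict(x, classes, protos):
    """Assign x to the class whose prototype is closest in feature space."""
    return classes[np.argmin(np.linalg.norm(protos - x, axis=1))]
```

Because prototypes are just running class means, this classifier extends to new classes without revisiting old data, which is why it is a natural rehearsal-free baseline.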
__Q4: Efficiency tests__
A4: Thank you for your suggestion. We have added the graph to compare Validation loss of NoRGa and HiDe-Prompt throughout the first task in the attached pdf of General Response.
__Q5: Statistical efficiency proofs__
A5: Thanks for your questions. We use the HiDe-Prompt framework with task-specific prompts, where each task has its own experts and score functions optimized with only data from that task. Thus, it is reasonable to assume that samples within a task are generated from a single model and are i.i.d.
In Section 3.2, we show that the output attention head with prefix tuning can be expressed as a linear gating prefix MoE model. Recent works [a, b] have shown that the performance of MoE models can be improved by using a more sample efficient gating function in terms of parameter and expert estimation. Motivated by this, we propose using the non-linear residual gating prefix MoE in Section 4 and conduct the convergence analysis to justify its sample efficiency.
Next, Eq. (13) is a common regression framework used to study the sample efficiency of the gating function in MoE models [c], rather than an assumption. Finally, the algebraic independence condition characterizes which experts are compatible with the non-linear residual gating. It indicates that under the non-linear residual gating MoE, experts would have estimation rates of polynomial order rather than of order $1/\log^{\tau}(n)$ when using the linear gating, as detailed in Appendix A.
[a] Is Temperature Sample Efficient for Softmax Gaussian Mixture of Experts?, ICML 2024
[b] A General Theory for Softmax Gating Multinomial Logistic Mixture of Experts, ICML 2024
[c] On Least Square Estimation in Softmax Gating Mixture of Experts, ICML 2024
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal
Comment: I thank the authors for their detailed feedback, which has helped clarify some confusion I had - particularly regarding the placement of the proposed approach in the continual learning literature. Some questions however still remain:
* If I understand the reply correctly, this paper directly builds on top of HiDe-Prompt, but does not modify any of the CL components (referring to elements that explicitly account for the continual nature, i.e. the task-specific prompts), and instead places an explicit focus on improving the WTP (within-task performance). But how is what the authors propose then not just simple parameter-efficient finetuning - simply applied to simplistic CL benchmarks on top of HiDe-Prompt?
* Consequently, following Zhou et al. 2023 [3]: What would happen if other PEFT approaches are used to optimize the WTP performance? Or more generally, what does the comparison to first task adaptation sanity checks look like (which are void of meaningful CL elements)? Or a comparison directly to e.g. Zhou et al? Just stating that _"However, these methods, which finetune only the backbone for the initial task, may not always ensure satisfactory separation of new tasks' features."_ may not be sufficient.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable and constructive comments. Here we explain your additional questions as below:
We agree that NoRGa can serve as a simple, parameter-efficient fine-tuning technique. However, our contributions extend beyond this by offering a novel perspective on the interplay between self-attention, prefix tuning, and the mixture of experts. This offers a fresh viewpoint on the design of previous prompt-based continual learning methods. Furthermore, this relationship enables us to theoretically substantiate the effectiveness of NoRGa.
While we acknowledge that other PEFT methods, such as LoRA and adapters, could be explored to optimize WTP performance, the relative advantages of different PEFT approaches remain an open question. In contrast, NoRGa not only demonstrates empirical superiority over prefix tuning but also offers theoretical guarantees of enhanced performance. Notably, our theoretical framework imposes no assumptions on the pretrained weights, rendering our method robust across various pretrained models. We have addressed a comparison of different PEFT methods in our response to Reviewer cCSa. Specifically, we employed the HiDe-Prompt framework with various PEFT techniques and Sup-21K weights, evaluating performance using FA on Split CIFAR-100 and Split CUB-200. The results are as follows:
|Method| Split CIFAR-100| Split CUB-200|
|--------------|-----------------|---------------|
|HiDe-Prompt|92.61|86.56|
|HiDe-LoRA|92.71|87.37|
|HiDe-Adapter|92.73|87.10|
|NoRGa|__94.48__|__90.90__|
These results highlight the effectiveness of our proposed method relative to other PEFT techniques, although further research is necessary to draw more definitive conclusions. Additionally, we compared our method with a first task-adaptation based method [3], with results summarized below:
| Method| Split CIFAR-100 | Split CUB-200 |
|----------------|-----------------|------------------|
| ADAM + VPT-D [3]| 85.04 | 85.28 |
| ADAM + SSF [3]| 85.27 | 85.67 |
| ADAM + Adapter [3]| 87.29 | 85.84 |
| NoRGa| __94.48__ | __90.90__ |
As illustrated, NoRGa achieves the highest performance across both datasets. For instance, on Split CIFAR-100, NoRGa attains an FA of 94.48%, surpassing the next best method by over 7%. This substantial improvement underscores the efficacy of our proposed method in addressing catastrophic forgetting and preserving knowledge across multiple tasks.
We thank you again for the valuable feedback. If you have any further questions, please let us know. | Summary: The topic of this paper is about the prompt-based continual learning. The authors give a theoretical analysis on these prompt-based continual learning methods, and utilize a Mixture-of-Expert (MoE) architecture characterized by linear experts and quadratic gating score functions. They develop a gating mechanism Non-linear Residual Gates (NoRGa) for MoE-based continual learning. The proposed method has been evaluated on several benchmarks.
Strengths: + The paper is well-written and easy to follow.
+ It is interesting to theoretically analyze the effectiveness of prompt-based continual learning.
+ Continual learning via MoE architecture is worth exploring.
Weaknesses: - Comparison to different Parameter-Efficient Fine-Tuning (PEFT) methods [b] (e.g. adapters) is needed. This paper mainly focuses on the theoretical analysis of prompt-based continual learning methods. Prompt-based continual learning belongs to the family of PEFT methods and also adds new parameters for new tasks. Can the authors further analyze the advantages of prompt-based methods theoretically?
- Parameter cost is usually a consideration in practical memory-constrained continual learning scenarios. Dynamic routing mechanisms can be employed for gating-based neural networks (e.g. [c]). To improve the parameter efficiency of the final model, how could this mechanism be integrated into the proposed method?
- Important MoE-related continual learning methods are not included in the related works [a]. The difference between the proposed method and other related works should be highlighted.
- Performance improvement on some datasets is limited. For example, FA metric (75.06%->75.40%) on Split ImageNet-R, CA metric (95.02%->95.11%) on Split-CIFAR-100 in Table 1. What are the running times of each experiment? Can the authors provide the performance variance of experimental results (e.g. the accuracy at the last incremental learning session)?
[a] Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters, CVPR 2024
[b] Continual Learning with Pre-Trained Models: A Survey, IJCAI 2024
[c] Harder Tasks Need More Experts: Dynamic Routing in MoE Models, arXiv 2024
Technical Quality: 2
Clarity: 3
Questions for Authors: My major concerns are included in the above weaknesses.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback and insightful comments. Below, we provide a point-by-point response to these comments and summarize the corresponding revisions in the final version.
__Q1: Comparison to Different Parameter-Efficient Fine-Tuning Methods and Theoretical Analysis__
A1: Thank you for your valuable suggestion. As the advantages of different PEFT methods remain an open question, we briefly describe them through our revealed connection between self-attention and MoE. Prefix tuning introduces additional parameters at the input of MSA layers to adapt the pretrained model representation, contrasting with adapters, which insert adaptive parameters between layers, often replacing MLP blocks. LoRA approximates weight updates with low-rank matrices and adds them to the backbone weights. Our work shows that the MSA layer in a pretrained model can be seen as a pretrained MoE architecture. Applying LoRA to the MSA layer refines both the pretrained experts and their corresponding score functions for downstream tasks. In contrast, **prefix tuning expands the pretrained MoE models by incorporating new experts while preserving the original components, rather than modifying the pretrained experts** like LoRA.
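The contrast drawn above can be sketched side by side in a minimal form (our illustration, with hypothetical function names; not from the paper or the cited libraries):

```python
import numpy as np

def lora_adapt(W, A, B, scale=1.0):
    # LoRA: refine the frozen weight itself with a low-rank update,
    # i.e. modify the pretrained experts and their score functions.
    # W: (d, d) frozen; A: (d, r), B: (r, d) with rank r << d.
    return W + scale * (A @ B)

def prefix_adapt(K, V, P_k, P_v):
    # Prefix tuning: keep the pretrained keys/values (experts) intact
    # and prepend new learnable prefix experts instead.
    return np.vstack([P_k, K]), np.vstack([P_v, V])
```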
For empirical comparison, we used the framework of HiDe-Prompt with different PEFT techniques and Sup-21K weights, evaluating performance using FA on Split CIFAR-100 and Split CUB-200. The results are summarized below:
|Method| Split CIFAR-100| Split CUB-200|
|--------------|-----------------|---------------|
|HiDe-Prompt|92.61|86.56|
|HiDe-LoRA|92.71|87.37|
|HiDe-Adapter|92.73|87.10|
|NoRGa|__94.48__|__90.90__|
The table shows that NoRGa consistently outperforms the other PEFT methods on both datasets, suggesting its effectiveness. However, further investigation with LoRA and adapters would be necessary to draw more definitive conclusions. We will add this discussion to the final version.
__Q2: Dynamic routing mechanism can be employed for gating-based neural networks (e.g. [c]). To improve the parameter efficiency of the final model, how to integrate this mechanism in the proposed method?__
A2: Thank you for pointing out these excellent related works. We will add the following discussion to the final version:
Each head in the MSA layers comprises $N$ MoE models, where $N$ is the length of the input sequence. This allows for a dynamic routing mechanism to enhance parameter efficiency. For instance, [c] proposed a dynamic routing strategy that adaptively adjusts the number of activated experts based on the input. The computation for any MoE model’s gating is directly correlated with the corresponding row in the attention matrix, which encapsulates the MoE model’s score functions. For example, selecting the top $k$ experts via Top-K routing in the $i$-th MoE model is equivalent to identifying the top $k$ largest values in the $i$-th row of the attention matrix. To implement [c], we first sort the elements in the $i$-th row from highest to lowest, then find the smallest set of experts whose cumulative probability exceeds the threshold. Consequently, unselected experts remain inactive, reducing the need to compute all elements of the value matrix within self-attention.
__Q3: Important MoE-related continual learning methods are not included in the related works [a]. The difference between the proposed method and other related works should be highlighted__
A3: Thank you for providing these excellent related works. We will add the following discussion to the final version:
Recently, the MoE model has been employed to mitigate catastrophic forgetting in continual learning (CL). For example, [a] focused on continual learning in vision-language models by adapting a pretrained vision-language model to new tasks through learning a mixture of specialized adapter modules. [a] introduced an MoE structure onto a frozen CLIP, utilizing a mixture of adapters to modify the MLP block after the MSA layer. In contrast, our work centers on general continual learning with pretrained models, leveraging the inherent MoE architecture of MSA layers. Consequently, our MoE model placement differs from that of [a]. By employing prefix tuning, we demonstrate that it is analogous to introducing new prefix experts to scale and adapt these pretrained MoE models to downstream tasks. Furthermore, while [a] utilizes task-specific routers, our approach employs task-specific prompts that encapsulate both task-specific router and expert parameters.
__Q4: Performance improvement on some datasets is limited.__
A4: While performance gains on certain metrics may be modest for some datasets, our method consistently outperforms the baseline, HiDe-Prompt, the current state-of-the-art in prompt-based continual learning, in terms of either FA or CA. For example, on the Split Imagenet-R dataset with Sup-21K weights, the improvement in FA is small (75.06%->75.40%), but the CA enhancement is significant (76.60%->79.52%). __This trend is consistent across various datasets and pretrained settings, underscoring our method's effectiveness and robustness.__
__Q5: Running Times and Performance Variance__
A5: We utilize a single A100 GPU for all experiments. The training times are summarized below:
| Method| Split CIFAR-100| Split ImageNet-R| Split CUB-200| 5-Datasets|
|-------------|-----------------|------------------|---------------|------------|
|HiDe-Prompt|2.80h|2.67h|1.04h|24.06h|
|NoRGa|2.85h|2.70h|1.10h|24.23h|
Each experiment was conducted 3 times. While NoRGa exhibits slightly longer training times compared to HiDe-Prompt, it consistently achieves significantly better performance as indicated in Table 1. This demonstrates the effectiveness of NoRGa while maintaining competitive training efficiency. We will add the above discussion in the final version. Regarding performance variance, we have already presented the standard deviation of the results in the main results, as displayed in Table 1 of the main text.
---
Rebuttal Comment 1.1:
Title: Rebuttal Response
Comment: Thank the authors for their response. Since the authors have addressed all of my concerns, I decided to increase my score.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We would like to thank the reviewer for rating a positive score of 6. We are happy to discuss more if the reviewer still has questions.
Best regards,
Authors | Rebuttal 1:
Rebuttal: **General Response:**
We thank all reviewers for their valuable feedback and suggestions, which have significantly contributed to the enhancement of our manuscript. We are encouraged by the endorsements that:
1. The relationship we reveal between self-attention, prefix tuning, and mixture of experts is novel, significantly advancing the understanding of prompt-based approaches in continual learning (reviewer MKaE, boEM, cCSa).
2. NoRGa, which integrates non-linear activation and residual connections, can enhance continual learning performance while maintaining parameter efficiency (reviewer boEM).
3. The statistical sample efficiency improvements of NoRGa over normal gating are significant, promising substantial gains in model adaptation and efficiency (reviewer dPHR, MKaE).
4. The theoretical insights are well-supported and complemented by empirical experiments across diverse benchmarks, demonstrating robustness and reliability (reviewer MKaE). The explicit improvements over HiDE-Prompt are consistent, and often significant (reviewer dPHR).
We address a common comment from Reviewers:
**Q1: Additional graphs/visual aids to represent NoRGa (Reviewer MKaE, boEM)**
**Answer**: We have added some visualizations in the attached PDF. In this file, you can find visual aids illustrating the relationships between self-attention, prefix tuning, and MoE, as well as the NoRGa implementation. We plan to include these in the final manuscript, and we hope they will enhance the paper's accessibility and readability.
We have addressed all the weaknesses and questions raised by all reviewers in the respective rebuttals. We believe that most of this input is valid and greatly improves our paper. We hope that our responses to the reviewer's questions and the additional experiments will help in the review process. Please let us know if you have any further questions.
Regards,
Authors
Pdf: /pdf/6b33e9ba25d3b1d702bac60f480f5a4377f7dd15.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Discovering Preference Optimization Algorithms with and for Large Language Models | Accept (poster) | Summary: This paper introduces a method of searching for offline RL objectives by using LLMs to generate and refine objectives. They demonstrate that several objectives discovered using this method are able to achieve higher evaluation scores than existing objectives (e.g. DPO) on a variety of benchmark tests. They provide a high level analysis of one discovered objective, hypothesizing that it may possess desirable properties that lead to these performance boosts.
Strengths: The paper is well written, organized, and easy to understand. While the use of LLMs to modify prompts and self-improve in other ways has been studied, using them to improve objective functions is a novel idea that the authors demonstrate holds potential. The authors demonstrate that the objectives discovered can be useful, showing improvements on a variety of generation and classification tasks.
Weaknesses: The main weakness of this paper in my opinion is that it isn't clear what results the paper is trying to present. While this is partially an issue of organization, it is also an issue with the amount of evidence given to the claims in the paper. If this were made clearer, I would be inclined to raise my score. I detail below:
1. The paper starts by introducing a new method for discovering new optimization objectives using LLMs, then transitions quite abruptly to describing the properties of DiscoPop. While both could be valuable contributions, I don't feel that either is given quite the attention required. It would help to clarify what the main focus of the paper is. If DiscoPOP is the focus, then claims about its properties should be better studied. If the discovery method is the focus, more emphasis should be placed on the method's performance and ability to produce novel and useful objectives.
2. The majority of insights about DiscoPOP are presented as hypotheses or intuitive insights (e.g. the non-convex portion, and how the model behaves at $\pm\infty$). This is not enough to support the claim made in the conclusion that they provide insights into what properties an optimal objective function should possess. While it \emph{may} be enough to say they provide intuition into why DiscoPOP is successful, further experimentation would be necessary to make optimality claims like this.
3. Some portions of the discovery method are vague as well. Though the authors do not observe the models simply regurgitating old objectives and cite a pattern of objective refinement for supervised learning, this is of course not guaranteed, and not well studied. Further experiments or details on how often they observe refinement vs. regurgitation would help to support the utility of the method.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How often are unit tests for the generated code failed?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations described cover most points, including the difficulties and uncertainties associated with using LLMs to improve upon objectives in a rigorous fashion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful and extremely helpful review. We’re glad that the reviewer finds our approach to be both novel and useful, though we agree that more analysis would improve the paper significantly.
> The majority of insights about DiscoPOP are presented as hypotheses or intuitive insights...further experimentation would be necessary
This is a great point and we’re grateful to the reviewer for bringing it up. We’ve performed further analysis that *considerably* improves the paper and our understanding of the loss function. Previously, we merely hypothesized that the local optimum of the DiscoPOP loss could catch noisy or incorrect data points. We now have empirical evidence for this and include the results in the rebuttal PDF.
In short, we identified which data points end up between the local optima after training, and found that 1.35% of points fall there (see Figure 1 in the PDF, where they are clearly visible). Although we use the binary preference labels from the IMDb dataset for training, the dataset also includes a reward score for each completion. When we analyze the data points that are between the local optima, we find that the positive and negative completions are *significantly* closer in absolute reward difference than the data points outside the local optimum (See Table 1 in PDF). This implies that the preference labels on those points are more difficult to distinguish and helps empirically validate our hypothesis. Thanks to the reviewers, we will be adding this analysis to the paper.
>Further experiments or details on...refinement vs regurgitation
This was another great suggestion. We’ve included an ablation on the CIFAR-10 results in which we *don’t* provide fitness feedback. Thus, the model is unable to perform refinement (as it does not know what performs well) and can only regurgitate ideas. This is a key baseline to compare against to validate our discovery method.
In the attached rebuttal PDF, you can see that, without the fitness score, the LLM is unable to refine its ideas and can only regurgitate, thus leading to fewer steps of improvement.
> On LLM Discovery vs. DiscoPOP as the main focus
Our main focus was on DiscoPOP, as that is where the majority of our resources and efforts were focused. We will adjust the writing to make this clearer.
We believe both parts are significant contributions of our paper, though we understand both could use more analysis. Thanks to the reviewer’s feedback, we’ve managed to significantly strengthen evidence for both.
> what properties an optimal objective function should possess
Thanks for bringing this up. This was imprecise language on our end. We agree, and have now removed any claims about what an "optimal" objective should possess, instead replacing the word optimal with the word "good" in the conclusion (line 277 and in line 110).
> How often are unit tests for the generated code failed?
For GPT-4 the unit tests failed ~5% of the time; however, the model usually fixes it upon the feedback. We briefly experimented with weaker models and found the failure rates to be significantly higher.
---
*We hope that most of the reviewer’s concerns have been addressed and, if so, they would consider updating their score. We’d be happy to engage in further discussions.*
---
Rebuttal Comment 1.1:
Comment: Apologies for my late reply and thank you for your thorough response and clarifications. After going over your main response and supplemental PDF, I'm going to raise my score. The additional experiments on DiscoPOP properties and the experiment regarding refinement have satisfied my biggest concerns. The CIFAR-10 experiment is a good demonstrative example; though it doesn't prove that this will always be the case for the method, I believe it is a sufficiently convincing example to support the point you were making in that section.
---
Rebuttal 2:
Comment: We would like to once again thank the reviewer for their time and feedback. We've incorporated it into our manuscript and we believe it has strengthened our paper.
We hope our rebuttal, which includes substantial additional results and analysis, has addressed the reviewer's concerns.
Seeing as the discussion period is coming to a close, could the reviewer please let us know if they have any further questions or concerns about our submission? | Summary: The paper proposes DiscoPOP, an algorithm for discovering preference optimization loss functions using Large Language Models (LLMs). The authors propose an LLM-driven objective discovery process by iterative prompting LLMs by previously evaluated performance metrics. Experiments on various benchmarks demonstrate its effectiveness.
Strengths: 1. This method is innovative. The idea of updating the loss function through automated exploration is both interesting and novel. The process being fully automated and end-to-end makes it particularly neat.
2. The results are impressive. DiscoPOP demonstrates strong performance across various benchmarks, showing its potential.
Weaknesses: I think the updating process might be sensitive to the prompts used for proposing new loss functions.
Technical Quality: 3
Clarity: 3
Questions for Authors: Refer to the weakness
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Refer to the weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their concise feedback. We’re happy that the reviewer finds the paper innovative and the results impressive. We understand the reviewer has concerns about prompt sensitivity, and we would like to point the reviewer to a few ablations we have run on this.
In Appendix D3 of the paper, we show that the discovery process is very robust to different sampling parameters, prompts, and techniques from the LLM. We find little change in performance or behavior when we change the sampling temperature (unless it is set unreasonably high), telling it to “think” first, or sorting the input context.
Notably, in our attached rebuttal PDF we show that providing the fitness in context is important to allow the LLM to refine its proposed ideas.
---
*We hope that most of the reviewer’s concerns have been addressed and, if so, they would consider updating their score. We’d be happy to engage in further discussions.*
---
Rebuttal Comment 1.1:
Comment: Hi,
I've read the rebuttal and I will keep my score as positive. Thanks. | Summary: The paper proposes an algorithm to discover preference objective functions using LLM for LLM preference optimization. Authors conduct experiments with the discovered objectives on multiple datasets and demonstrate that the discovered objective functions generally perform better than baselines. The authors also show interesting insight into the discovered objective function, LRML.
Strengths: 1. The paper introduces an interesting and new LLM-driven objective discovery algorithm to search for good objective functions using LLMs for the preference optimization of LLMs.
2. The method successfully finds an objective function that can generally offer a better performance on multiple benchmarks, indicating the effectiveness of the discovery method.
3. The authors show interesting insights into the best objective function found by the proposed method.
Weaknesses: 1. The objective function is discovered with a different placement of $\beta$ than the objective function used for evaluation, which causes a misalignment of the discovery and the evaluation process. It would be nice to keep them aligned.
2. The LLM in the discovery process is a clever generator for searching candidates. It is unclear if the LLM's capability used in the discovery process can change the duration for finding a good objective function or if it will cause the search process to fail.
3. Similar to the point above, it would be nice to test some non-LLM-based generators of the objective function and perform some traditional searching algorithms such as evolutionary search to demonstrate the effectiveness of the LLM.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weakness above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough and in-depth review. We are pleased that the reviewer finds our approach interesting and effective. We address the weaknesses outlined by the reviewer individually below.
> The objective function is discovered with a different placement of $\beta$
Thank you for highlighting this. We assume that the reviewer is referring to discrepancies between the code for the LLM sampler in Appendix E.6 and the formula presented in Equation (28), as explained in Appendix E. As a result of the reviewer’s comment, we found and have now corrected a typo in the LRML formula, which now reads:
$$ f_{lrml}(\beta\rho) = (1-\sigma(\beta\rho/\tau)) \cdot f_{dpo}(\beta\rho) + \sigma(\beta\rho/\tau) \cdot f_{exp}(-\beta\rho) $$
The error occurred in the written mathematical notation, and the implementation remains the same.
We introduce the $\tau$ parameter largely to retain the original semantic meaning of “beta” used in most papers, which is to adjust the KL-divergence regularisation. Since we use $\tau = 0.05$ they are equivalent.
We have now corrected the mathematical notation for DBAQL, AQL, AQFL, and PFL as well.
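As a numerical sanity check on the corrected formula above, here is a minimal sketch assuming the standard logistic (DPO) loss $f_{dpo}(x) = -\log\sigma(x)$ and exponential loss $f_{exp}(x) = e^{x}$; $\tau = 0.05$ follows the rebuttal, while the function names and the default $\beta$ are illustrative:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def f_dpo(x):
    # logistic (DPO) loss: -log sigmoid(x)
    return -math.log(sigmoid(x))

def f_exp(x):
    # exponential loss
    return math.exp(x)

def f_lrml(rho, beta=0.05, tau=0.05):
    """Log-ratio modulated loss: a sigmoid-gated blend of the
    logistic (DPO) and exponential losses, per the formula above."""
    x = beta * rho            # scaled log-ratio difference
    gate = sigmoid(x / tau)   # mixing weight between the two branches
    return (1.0 - gate) * f_dpo(x) + gate * f_exp(-x)
```

For large negative $\beta\rho$ the gate vanishes and the loss reduces to the DPO branch, while for large positive $\beta\rho$ it saturates to the decaying exponential branch, consistent with the limiting behavior discussed in the rebuttals.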
> It is unclear if the LLM's capability used in the discovery process can change the duration for finding a good objective function or if it will cause the search process to fail.
This is a good point. We briefly tried using LLama3 8b-instruct and found that it rarely responded with the correct format and working code, rendering it largely unusable. Should the reviewer request, we can do a more thorough sweep over base models. However, we do not expect this to fundamentally change our results.
> it would be nice to...perform some traditional searching algorithms
Unfortunately, this exact experiment would be very hard to implement, as we would need to design a domain-specific language to search over for the objective function or parameterize it in some way. Parameterizing the objective with a neural network is possible, but would require far too many inner loop training runs to be optimized. However, inspired by the reviewer’s feedback, we’ve designed an experiment that is similar in spirit to validate the effectiveness of the LLM-Driven Discovery method. In particular, we’ve added a version of the CIFAR-10 experiment where we do not return the fitness to the LLM. We show the results in Figure 2 of the attached PDF and confirm that, without the fitness, the LLM is unable to refine its ideas, as it does not have knowledge of which ones worked.
---
*We hope that most of the reviewer’s concerns have been addressed and, if so, they would consider updating their score. We’d be happy to engage in further discussions.* | Summary: This paper proposes a novel approach to improving LLMs by using an automated system to discover new optimization algorithms. Traditionally, enhancing LLMs has relied heavily on hand-designed loss functions, but this research employs an LLM to iteratively generate and refine these functions itself. This paper introduced DiscoPOP, which blends logistic and exponential losses to outshine existing methods. This algorithm was evaluated across a range of tasks, including multiturn dialogue, sentiment generation, and summarization, where it consistently delivered good results.
Strengths: - The automatic exploration and evaluation pipeline is promising and it can discover new algorithms without human intervention.
- By using the above pipeline, the new discovered algorithm in preference optimization achieved SOTA, proving its effectiveness.
- This can contribute to many other areas and let machines themselves discover and evaluate new efficient algorithms.
Weaknesses: Although this paper mainly focuses on the preference optimization task, the discovery method seems easy to adapt to other domains/tasks. A general concern with the paper is its reliance on a complex method that involves detailed tuning of hyperparameters. This complexity could limit the method's broader applicability unless the process can be generalized or adapted effectively to other contexts. However, the paper does not thoroughly address how to establish and optimize the discovery pipeline across different scenarios, which could hinder its practical utility. More guidance on adapting the methodology to a variety of use cases would significantly enhance its value and impact.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How well does the proposed LLM-driven discovery method generalize to other domains/tasks beyond preference optimization? Are there specific modifications needed to adapt this method to other fields?
2. The authors mention that DiscoPOP blends logistic and exponential losses. Could you elaborate on the theoretical justification for this choice? How does this combination affect the convergence properties of the algorithm?
3. Considering the non-convex nature of DiscoPOP, what strategies do you suggest to avoid local minima during optimization?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discussed the limitations of DiscoPOP.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback. We’re glad the reviewer finds the approach and results promising and effective.
> Does the proposed LLM-driven discovery method generalize to other domains?
In our submission, we demonstrated that it works well on CIFAR-10 in the small case study in Figure 2, where it discovers loss functions that outperform standard cross-entropy loss and transfers to other models and hyperparameters. We will adjust the writing to make this clearer.
There were no significant modifications needed to adapt this method to other fields, the only change needed is to adapt the text in the prompt (Appendix A) to contain information and examples for the new setting.
We chose preference optimization, as it is a new exciting area with great potential for novelty, though we agree that adapting our LLM-driven discovery method to other domains/tasks is an exciting future direction of work.
> reliance on a complex method that involves detailed tuning...could limit the method's applicability unless the process can be generalized
We’re not sure exactly which hyperparameters the reviewer is referring to, but Algorithm 1 does not contain any hyperparameters and is very easy to apply to other domains.
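To make this concrete, below is a minimal, hedged sketch of the kind of discovery loop described as Algorithm 1 in the paper; the three callables stand in for the LLM proposer (conditioned on the archive of past fitnesses), the unit-test check, and the inner-loop training/evaluation run, and all names and the toy stand-ins are illustrative, not the paper's implementation:

```python
def discover_objectives(propose, passes_unit_tests, evaluate, n_generations=5):
    """Sketch of LLM-driven objective discovery: keep an archive of
    (candidate, fitness) pairs, ask the proposer for a new candidate
    conditioned on that history, validate it, then score it."""
    archive = []
    for _ in range(n_generations):
        candidate = propose(archive)          # proposer sees past fitnesses -> can refine
        if not passes_unit_tests(candidate):  # skip malformed candidates
            continue
        archive.append((candidate, evaluate(candidate)))
    return max(archive, key=lambda pair: pair[1])  # best-performing objective

# Toy stand-ins: candidates are numbers and fitness is the number itself.
best = discover_objectives(
    propose=lambda archive: len(archive),  # "refines" by building on the history
    passes_unit_tests=lambda c: True,
    evaluate=lambda c: c,
)
```

Removing `archive` from the proposer's input mirrors the no-fitness ablation in the rebuttal PDF: without knowing which ideas worked, the proposer can only regurgitate rather than refine.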
>Theoretical justifications for DiscoPOP
To understand DiscoPOP more from a theoretical perspective, we can closely compare it to the standard DPO loss. At the limits (e.g., when the relative preferences $\rho$ are very high or low), it has similar behavior to DPO. However, it mostly differs in its non-convex region, which we analyze in the rebuttal PDF. Previously, we merely hypothesized that the local optimum of the DiscoPOP loss could catch noisy or incorrect data points. We now have empirical evidence for this.
In short, we identified which data points end up between the local optima after training, and found that 1.35% of points fall there (see Figure 1 in the PDF, where they are clearly visible). Although we use the binary preference labels from the IMDb dataset for training, the dataset also includes a reward score for each completion. When we analyze the data points that are between the local optima, we find that the positive and negative completions are *significantly* closer in absolute reward difference than the data points outside the local optimum (See Table 1 in PDF). This implies that the preference labels on those points are more difficult to distinguish and helps empirically validate our hypothesis. Thanks to the reviewers, we will be adding this analysis to the paper.
We would like to further emphasize that, in the current offline preference optimization paradigm, training and evaluation differ significantly. In most settings in machine learning, the loss usually corresponds directly with some desired metric, such as the accuracy. However, in our setting, while we train on an offline set of preference pairs, we ultimately evaluate the model using MT-Bench and GPT-4 as a judge. Thus, it’s not very clear how the loss function used necessarily corresponds to the downstream task. In fact, recent works [1, 2] show that optimizing the DPO loss function too much can lower the quality of the model. Thus, “theoretical” justifications are not as useful in this setting.
> what strategies do you suggest to avoid local minima during optimization?
We only found the local minima to be a problem when we set $\beta$ to be very low ($\leq 0.01$). Note that, while local minima are a problem when the loss function directly corresponds to the task, it’s not immediately clear if that’s the case in offline preference optimization (see our comment on this above).
[1] Feng, Duanyu, et al. "Towards analyzing and understanding the limitations of dpo: A theoretical perspective." arXiv preprint arXiv:2404.04626 (2024).
[2] Chen, Angelica, et al. "Preference Learning Algorithms Do Not Learn Preference Rankings." arXiv preprint arXiv:2405.19534 (2024).
---
*We hope that most of the reviewer’s concerns have been addressed and, if so, they would consider updating their score. We’d be happy to engage in further discussions.*
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. As you said, "*the only change needed is to adapt the text in the prompt*", I'm curious how to find the best prompt for a specific task. Are there any rules to follow?
---
Reply to Comment 1.1.1:
Comment: We didn't try a large number of prompts. We recommend just describing the setting in text (e.g., what the inputs and outputs of the function are) and adding a few examples of the code it should generate (along with their associated performance).
Rebuttal: We are grateful to the reviewers for their insightful feedback. There is broad consensus amongst the reviewers that our approach is novel and effective.
$\color{red} R1$ (ePhP): “the new discovered algorithm in preference optimization achieved SOTA, proving its effectiveness.”
$\color{green} R2$ (MbcT): “The method successfully finds an objective function that...offer a better performance...indicating the effectiveness of the discovery method.”
$\color{blue} R3$ (663E): “This method is innovative...The results are impressive”
$\color{magenta} R4$ (gRRA): “using [LLM’s] to improve objective functions is a novel idea that the authors demonstrate holds potential”
Reviewers understandably had concerns about the lack of in-depth analysis of the DiscoPOP loss ($\color{red} R1$, $\color{magenta} R4$) function and its theoretical implications ($\color{green} R2$, $\color{magenta} R4$).
Reviewers also had concerns about the effectiveness ($\color{green} R2$, $\color{magenta} R4$) of LLM-driven discovery and its sensitivity ($\color{blue} R3$) to the prompt.
We address these concerns below.
## Deeper Analysis of DiscoPOP
We would like to deeply thank the reviewers for bringing this up. We’ve performed further analysis that *considerably* improves the paper and our understanding of the loss function. Previously, we merely hypothesized that the local optimum of the DiscoPOP loss could catch noisy or incorrect data points. We now have some empirical evidence for this.
In short, we identified which data points end up between the local optima after training, and found that 1.35% of points fall there (see Figure 1 in the PDF, where they are clearly visible). Although we use the binary preference labels from the IMDb dataset for training, the dataset also includes a reward score for each completion. When we analyze the data points that are between the local optima, we find that the positive and negative completions are *significantly* closer in absolute reward difference than the data points outside the local optimum (See Table 1 in PDF). This implies that the preference labels on those points are more difficult to distinguish and helps empirically validate our hypothesis. Thanks to the reviewers, we will be adding this analysis to the paper.
### Misc:
Thanks to a comment from $\color{green} R2$, we discovered a typo in equations (4) and (5) for DiscoPOP. We have since fixed this.
## More Analysis of LLM-Driven Discovery
We’ve added a key baseline experiment to our results to validate the effectiveness of the LLM-Driven Discovery method. In particular, we’ve added a version of the CIFAR-10 experiment where we do not return the fitness to the LLM. We show the results in Figure 2 of the attached PDF and confirm that, without the fitness, the LLM is unable to refine its ideas, as it does not have knowledge of which ones worked.
Pdf: /pdf/217f90054e830553fca0c1370edfcf6f8114a0fb.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Frozen-DETR: Enhancing DETR with Image Understanding from Frozen Foundation Models | Accept (poster) | Summary: This paper proposes Frozen DETR, which leverages frozen foundation models as feature enhancers to improve the performance of the DETR object detection framework. By integrating feature maps from models like CLIP into the pyramid feature maps and feeding them into the encoder, Frozen DETR enriches the contextual information of objects within the original pyramid features, thereby enhancing DETR's performance.
Strengths: 1. The paper uses foundation models as plug-and-play modules, which are easy to apply to different detectors and significantly enhance their performance.
2. The paper conducts extensive experiments and comparisons with the SOTA methods on different datasets to show the effectiveness and robustness.
Weaknesses: 1. The use of foundation models increases inference costs. It would be beneficial to provide a detailed analysis of the computational cost introduced by these models.
2. The performance improvement appears to diminish when using more advanced models like Co-DETR, especially during extended training periods such as 24 or 36 epochs. An explanation for this phenomenon would be helpful.
3. The method seems applicable to other SOTA models, such as [1,2,3]. A comparison with these models would strengthen the evaluation of the proposed approach.
[1] Zhao, J., Wei, F., & Xu, C. (2024). Hybrid Proposal Refiner: Revisiting DETR Series from the Faster R-CNN Perspective. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 17416-17426).
[2] Zhao, C., Sun, Y., Wang, W., Chen, Q., Ding, E., Yang, Y., & Wang, J. (2024). MS-DETR: Efficient DETR Training with Mixed Supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 17027-17036).
[3] Wang, Y., Li, X., Weng, S., Zhang, G., Yue, H., Feng, H., ... & Ding, E. (2024). KD-DETR: Knowledge Distillation for Detection Transformer with Consistent Distillation Points Sampling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 16016-16025).
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Why is the performance (47.0 AP) of the Co DINO baseline in Tables 12 and 13 inconsistent with the main paper, which reports 52.1 AP?
2. Previous work [1] noted that features from CLIP's vision encoder can have artifact problems, potentially resulting in noisy feature maps. Does this issue influence the performance of detector? If so, would applying the methods from [1] help mitigate these artifacts and improve performance?
[1] Darcet, T., Oquab, M., Mairal, J., & Bojanowski, P. (2023). Vision transformers need registers. arXiv preprint arXiv:2309.16588.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: 1. As mentioned in the paper, the performance gains heavily depend on the quality and characteristics of the pre-trained foundation models. Variations in these models could lead to inconsistencies and affect the reliability of the results.
2. Using foundation models increases inference inefficiency. Solutions such as knowledge distillation might help mitigate this issue and improve efficiency.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer C9T7
**Q1: It would be beneficial to provide a detailed analysis of the computational cost.**
**RE**: We provide analyses of the computational cost in Table 3, Table 4, and Table 5 in the main text. Besides, more discussion can be found in the rebuttals for all reviewers above.
***
**Q2: The performance improvement appears to diminish when using more advanced models like Co-DETR, especially during extended training periods such as 24 or 36 epochs.**
**RE**: A significant advantage of most SOTA query-based detectors is that they converge extremely fast, within 12 epochs. As a result, the performance gain may be less obvious with a long training schedule. This phenomenon is also reported by Hybrid Proposal Refiner [a], where DDQ with HPR gains merely +0.1% AP (from 52.4% to 52.5%) as the training epochs increase from 12 to 24. Some training strategies are expected to further improve performance with longer training schedules, e.g., the large-scale jitter used in [a] and other data augmentations. We will add the related discussion in the revision.
[a] Hybrid Proposal Refiner: Revisiting DETR Series from the Faster R-CNN Perspective, In CVPR 2024.
***
**Q3: Apply Frozen-DETR to other SOTA models, such as MS-DETR, Hybrid Proposal Refiner and KD-DETR.**
**RE**: Thanks for your advice. However, we would like to highlight that Hybrid Proposal Refiner and KD-DETR were not publicly available when we submitted our work; these two papers only appeared on the CVPR 2024 open-access site after June 13th. Moreover, KD-DETR is not a new detector but a distillation method. We therefore provide experiments on MS-DETR and Hybrid Proposal Refiner below. As shown in the table, our method can be easily applied to recent SOTA models.
| Model | AP | AP$_{50}$ | AP$_{75}$ | AP$_{s}$ | AP$_m$ | AP$_l$ | GFLOPs | FPS |
|----------------------------|------|-----------|-----------|----------|--------|--------|--------|------|
| MS-DETR | 50.0 | 67.3 | 54.4 | 31.6 | 53.2 | 64.0 | 252 | 10.8 |
| Frozen-DETR (MS-DETR) | 53.0 | 71.5 | 57.8 | 35.1 | 55.8 | 70.8 | 452 | 6.9 |
| DDQ with HPR | 52.4 | 69.9 | 57.5 | 35.9 | 55.5 | 66.7 | 283 | 6.5 |
| Frozen-DETR (DDQ with HPR) | 55.7 | 73.9 | 61.3 | 38.4 | 58.8 | 72.3 | 467 | 5.2 |
***
**Q4: Why is the performance of the Co-DINO baseline in Tables 12 and 13 inconsistent with the main paper?**
**RE**: The baseline in Tables 12 and 13 is the same baseline as in Section 4.2, which uses four-scale feature maps and does not use co-heads. Due to space limitations of the main text, these two tables are presented in the appendix. We will add more experimental details.
***
**Q5: Previous work noted that features from CLIP's vision encoder can have artifact problems. Would applying the methods from it help mitigate these artifacts and improve performance?**
**RE**: Thanks for your advice. Since that work only releases the checkpoint for DINOv2-FM, the following experiments are conducted on DINOv2-FM and DINOv2-FM with registers (DINOv2-reg). In the table, we find that using DINOv2 can even achieve better results than using CLIP, which differs from Table 2. We hypothesize two reasons: first, DINOv2-FM uses both global-wise and token-wise pretext tasks in pre-training, so its patch tokens are more informative; second, the DINOv2-FM ViT-L is distilled from a ViT-giant, which is effectively equivalent to a larger foundation model. We find that DINOv2-reg can mitigate the artifacts in DINOv2-FM and further improve performance. We will add these results to the manuscript.
| Model | AP | AP$_{50}$ | AP$_{75}$ | AP$_{s}$ | AP$_m$ | AP$_l$ |
|------------------------------|------|-----------|-----------|----------|--------|--------|
| DINO-det-4scale | 49.0 | 66.6 | 53.5 | 32.0 | 52.3 | 63.0 |
| DINO-det-4scale + CLIP | 51.9 | 70.4 | 56.7 | 33.8 | 54.9 | 69.3 |
| DINO-det-4scale + DINOv2-FM | 53.3 | 71.8 | 58.1 | 35.2 | 56.2 | 71.9 |
| DINO-det-4scale + DINOv2-reg | 53.9 | 72.4 | 58.8 | 34.8 | 57.2 | 72.2 |
***
**Q6: The performance gains heavily depend on the quality and characteristics of the pre-trained foundation models.**
**RE**: Since different vision foundation models are pre-trained with different pretext tasks and Frozen-DETR does not train the vision foundation models, it is normal and foreseeable that the choice of foundation model affects performance, and not all foundation models are suitable for Frozen-DETR. We have tested and compared many representative foundation models with various pre-training methods in Table 2, along with some intuitive explanations. These empirical practices can serve as selection guidelines.
***
**Q7: Solutions such as knowledge distillation might help improve efficiency.**
**RE**: Thanks for your instructive advice. In recent distillation methods, the teacher and student models are typically trained on the same task with similar architectures. Since foundation models are not pre-trained for detection, how to transfer their knowledge to detectors remains an open question. We also note that some open-vocabulary methods distill knowledge from CLIP; however, they cannot improve base-class performance through distillation, or they pursue high novel-class performance at the price of low base-class performance [1,2,3]. In contrast, our method can be easily applied to many query-based detectors, boosting performance on all classes.
[1] Open-Vocabulary Object Detection via Vision and Language Knowledge Distillation. In ICLR, 2022.
[2] Distilling DETR with Visual-Linguistic Knowledge for Open-Vocabulary Object Detection. In ICCV, 2023.
[3] Aligning Bag of Regions for Open-Vocabulary Object Detection. In CVPR, 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. The new experiments and answers have mostly addressed my concerns and really show the effectiveness of Frozen-DETR. I hope to see a more complete version soon. Based on this, I've decided to increase my rating.
Strengths: 1. The paper is well written.
2. The major novelty is fusing features of the frozen foundation models with backbone output and the query vectors in the decoder.
3. The idea of using a reduced resolution for the foundation model is appealing as a way to reduce the overall compute.
Weaknesses: Despite the paper's merits, it also has weaknesses:
1. The idea of feature fusion seems incremental from the perspective of technical contributions. Looking at the design, the feature fusion of the frozen foundation model is the only principal claim of the paper. Apart from that, I could not see any other piece of substantial advancement.
2. Frozen-DETR is only tested with DINO, which has huge computational costs due to dense multi-scale feature utilization in the encoder. To demonstrate the utility of the presented idea, there should be results with affordable DETRs, e.g., DN-DETR, Conditional-DETR, DAB-DETR, Anchor-DETR, IMFA-DETR, Deformable-DETR, etc. These are the pivotal works in this area, and hence evaluation of the proposed Frozen-DETR pipeline on them is a must for this paper, given its limited technical contributions.
3. I am more concerned about the applicability of this method in the real world, considering its weak accuracy-improvement-vs-runtime trade-off. Looking at Tables 12 and 13, changing the image resolution does not aggressively improve the accuracy. For example, in Table 12, doubling the resolution of the foundation model halves the speed, where the original speed is already very low, while the accuracy improvement is only 1.8 AP. Similar effects are seen in Table 13. For this reason, the weakness in point 2 (see above) must be addressed.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Include the results of pivotal DETR methods instead of only one DINO, which is computationally heavy.
2. Provide detailed runtime of each model to assess better the work given limited technical contributions.
If this is resolved, I am open to adjusting my score.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please refer to the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer QbuY
**Q1: The idea of feature fusion seems incremental from the perspective of technical contributions. By looking at the design, the feature fusion of the frozen foundation model is the only principal claim of the paper.**
**RE**: We respectfully disagree with this point. We would like to highlight the core contribution of this work, which is a novel paradigm to integrate frozen vision foundation models with query-based detectors. We are the first to show that frozen foundation models can be a versatile feature enhancer to boost the performance of detectors **without any fine-tuning, even though they are not pre-trained for object detection**. Such a paradigm enjoys many advantages over using vision foundation models as the backbone. Please see the rebuttals for all reviewers above for more details.
***
**Q2: Frozen-DETR is only tested with DINO, which has huge computational costs due to dense multiscale feature utilization in the encoder. To demonstrate the utility of the presented idea, there should be results with affordable DETRs, e.g. DN-DETR, Conditional-DETR, DAB-DETR, Anchor-DETR, IMFA -DETR, Deformable-DETR etc.**
**RE**: We have applied the method to AdaMixer, DINO-det, and Co-DETR in the paper, which demonstrates its generalization ability. We have also applied Frozen-DETR to MS-DETR and HPR (see Reviewer C9T7, Q3). Due to limited time and computation resources, we select DN-DETR and DAB-DETR as representatives of single-scale query-based detectors. Both models are trained for 12 epochs. Experimental results show that Frozen-DETR also significantly enhances the performance of single-scale detectors (+4.0% AP on DAB-DETR and +2.7% AP on DN-DETR) with acceptable additional computation cost.
| Model | AP | AP$_{50}$ | AP$_{75}$ | AP$_{s}$ | AP$_m$ | AP$_l$ | GFLOPs | FPS |
|----------------------------|------|-----------|-----------|----------|--------|--------|--------|------|
| DAB-DETR-DC5 | 38.0 | 60.3 | 39.8 | 19.2 | 40.9 | 55.4 | 220 | 10.2 |
| Frozen-DETR (DAB-DETR-DC5) | 42.0 | 63.2 | 44.9 | 22.4 | 45.4 | 61.1 | 372 | 8.5 |
| DN-DETR-DC5 | 41.7 | 61.4 | 44.1 | 21.2 | 45.0 | 60.2 | 220 | 10.2 |
| Frozen-DETR (DN-DETR-DC5) | 44.4 | 64.8 | 47.7 | 23.8 | 47.7 | 64.6 | 372 | 8.5 |
We also note that the models can be further accelerated by dropping DC5, at the cost of around 2% AP. Without DC5, DN-DETR has 104 GFLOPs and runs at 21.0 FPS, while Frozen-DETR has 298 GFLOPs and runs at 14.1 FPS. Unfortunately, these single-scale models lag behind SOTA models by more than 10% AP and need 50 epochs to converge; thus, they cannot meet practical needs.
***
**Q3: I am more concerned about the applicability of this method in the real world, considering its weaker accuracy improvement tradeoffs vs runtime. By looking at Tables 12 and 13, changing the image resolution does not aggressively improve the accuracy. Similar effects are seen in Table 13.**
**RE**: We believe the accuracy improvement of Frozen-DETR is significant and all other reviewers agree with us on the promising performance:
- On the COCO dataset, we increase 2.9% AP for DINO-det (Table 6).
- On the challenging large vocabulary LVIS dataset, we increase 6.6% AP for DINO-det (Table 7).
- On the challenging long-tail scenario, we increase 8.7% APr and 7.7% APc for DINO-det (Table 7), showing the potential to alleviate the class imbalance problem.
- On the challenging open-vocabulary scenario, we increase 8.8% novel AP for DINO-det (Table 8), showing strong open-vocabulary ability.
- In the real world, input images always suffer from natural distribution shifts. We also find that Frozen-DETR inherits great domain generalization ability from frozen foundation models. We directly transfer the model trained on the COCO dataset to the COCO-O dataset [1] without fine-tuning; COCO-O has the same classes as COCO but different domains, such as sketch, weather, cartoon, painting, tattoo, and handmade. As shown in the table below, Frozen-DETR achieves almost the same performance on both datasets, while other detectors degrade considerably on COCO-O. The performance of Frozen-DETR on COCO-O is twice as high as the baselines' and even higher than that of detectors with strong backbones, showing its strong robustness. We will add these results to the appendix.
| Model | Backbone | COCO AP | COCO-O AP |
|-------------------------------------------|----------|---------|-----------|
| DINO-det | Swin-L | 58.5 | 42.1 |
| ViTDet | ViT-H | 58.7 | 34.3 |
| DETR | R50 | 42.0 | 17.1 |
| Deformable DETR | R50 | 44.5 | 18.5 |
| DINO-det | R50 | 49.0 | 22.5 |
| Frozen-DETR (DINO-det + CLIP) | R50 | 51.9 | 50.2 |
| Frozen-DETR (DINO-det + CLIP + DINOv2-FM) | R50 | 53.8 | 53.7 |
Further, in Table 12, we aim to validate the property that Frozen-DETR supports asymmetric input sizes. Using a small image resolution already achieves promising results, yielding a good performance-speed trade-off. In Table 13, we aim to demonstrate that a stronger foundation model obtains larger improvements. Please see the rebuttals for all reviewers above for more discussion of the performance-speed trade-off.
[1] COCO-O: A Benchmark for Object Detectors under Natural Distribution Shifts. In ICCV 2023.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: I thank the authors for providing additional experimentation.
I am still not convinced of the novelty. I agree that this paper is the first to fuse the features of a foundation model with DETR queries; however, this is not a very big technological contribution in itself.
Incorporating more information from foundation models is beneficial, and the paper demonstrates as much. I agree with the improved detection performance but not with the runtime performance, which is not convincing. The primary reason appears in the authors' own rebuttal: even single-scale DN-DETR has 104G FLOPs whereas Frozen-DETR has 298G FLOPs, nearly three times as much.
I strongly suspect that the FPS calculations were done on a fairly advanced GPU with a huge number of cores; hence the FPS gap is visible, but not as pronounced as the FLOPs difference. The difference of roughly 200 GFLOPs will become more prominent on a commodity GPU.
Moreover, I don't see FLOP values except in Table 4, which shows your model crosses 400G FLOPs, which is enormous.
In the main tables, such as Table 6, FLOPs are not mentioned. Also, as per line 219, the foundation model contributes significantly to the FLOPs.
Hence, I encourage the authors to provide the exact FLOPs of each method in Table 6, and the FPS of as many methods as possible, because this will be crucial for my final rating.
---
Rebuttal 2:
Title: Response to Reviewer QbuY
Comment: | Detector | Multi-scale? | Encoder? | # Epochs | AP | GFLOPs | FPS (V100) | FPS (1080Ti) |
|---|---|---|---|---|---|---|---|
| DETR | $\times$ | ✓ | 500 | 43.3 | 86 | 27.8 | 21.6 |
| Deformable DETR | ✓ | ✓ | 50 | 43.8 | 173 | 13.4 | 8.8 |
| Sparse R-CNN | ✓ | $\times$ | 36 | 45.0 | 174 | 17.8 | 13.6 |
| AdaMixer | ✓ | $\times$ | 36 | 47.0 | 132 | 16.6 | 11.8 |
| DDQ DETR 4scale | ✓ | ✓ | 24 | 52.0 | 249 | 8.6 | 5.9 |
| Group DETR (DINO 4scale) | ✓ | ✓ | 36 | 51.3 | 279 | 9.7 | 6.7 |
| H-Deformable-DETR | ✓ | ✓ | 36 | 50.0 | 268 | 11.0 | 5.6 |
| DAC-DETR | ✓ | ✓ | 24 | 51.2 | 279 | 9.7 | 6.7 |
| DAB-DETR-DC5 | $\times$ | ✓ | 12 | 38.0 | 220 | 10.2 | 5.0 |
| **Frozen-DETR (DAB-DETR-DC5)** | $\times$ | ✓ | 12 | 42.0 | 372 | 8.5 | 4.7 |
| DN-DETR-DC5 | $\times$ | ✓ | 12 | 41.7 | 220 | 10.2 | 5.0 |
| **Frozen-DETR (DN-DETR-DC5)** | $\times$ | ✓ | 12 | 44.4 | 372 | 8.5 | 4.7 |
| DINO 4scale | ✓ | ✓ | 12 | 49.0 | 279 | 9.7 | 6.7 |
| DINO 4scale | ✓ | ✓ | 24 | 50.4 | 279 | 9.7 | 6.7 |
| DINO 5scale | ✓ | ✓ | 24 | 51.3 | 860 | 4.4 | 2.4 |
| **Frozen-DETR (DINO 4scale)** | ✓ | ✓ | 12 | 51.9 | 400 | 6.5 | 4.3 |
| **Frozen-DETR (DINO 4scale)** | ✓ | ✓ | 24 | 53.2 | 400 | 6.5 | 4.3 |
| MS-DETR | ✓ | ✓ | 12 | 50.0 | 252 | 10.8 | 6.5 |
| **Frozen-DETR (MS-DETR)** | ✓ | ✓ | 12 | 53.0 | 452 | 6.9 | 4.3 |
| DDQ with HPR | ✓ | ✓ | 12 | 52.4 | 283 | 6.5 | 4.4 |
| **Frozen-DETR (DDQ with HPR)** | ✓ | ✓ | 12 | 55.7 | 467 | 5.2 | 3.3 |
| Co-DINO 5scale | ✓ | ✓ | 12 | 52.1 | 860 | 4.4 | 2.4 |
| **Frozen-DETR (Co-DINO 4scale)** | ✓ | ✓ | 12 | 52.8 | 400 | 6.5 | 4.3 |
| **Frozen-DETR (Co-DINO 4scale)** | ✓ | ✓ | 24 | 53.5 | 400 | 6.5 | 4.3 |
Thanks for your feedback and agreement on two key contributions of our work: 1) the first attempt to apply frozen foundation models in the downstream object detection task and 2) the improved detection performance.
For the technological contribution, we design a decoupled feature fusion method by **viewing each component of the foundation models as a special part of detectors**, such as class tokens as image queries and patch tokens as another level of feature pyramid. Such a feature fusion technique can effectively transfer the image understanding ability from foundation models to detectors with minimal modifications to detectors so that the method can be easily applied to various query-based detectors. Considering the various benefits of our Frozen-DETR compared to other ways to use foundation models in detection discussed before, we hope that the Frozen-DETR paradigm can arouse researchers' interest in introducing foundation models into detection.
For the computation cost concern, we provide the detailed FLOPs and FPS for all the models in the above table. We would like to highlight some key points:
- Our Frozen-DETR can be applied to various query-based detectors, including both single-scale and multi-scale detectors. The inference time is within 1.5× that of the baselines, which is acceptable considering the performance improvement. **Reviewer RksL agrees with us on this point**.
- Detectors without multi-scale feature maps or encoders run faster but **lag behind SOTA models by a clear margin** (for example, DAB-DETR-DC5 only reaches 38.0% AP after 12 training epochs). To get better performance, they would need a stronger backbone or other additional modules, which would also slow them down. In contrast, our Frozen-DETR achieves high performance, especially in the many challenging scenarios mentioned in the rebuttals above.
- As many papers have observed, FPS does not increase strictly in proportion to FLOPs, since many other factors also influence FPS, such as operator complexity, memory access cost (MAC), and GPU parallel-computing utilization [1]. For example, Deformable DETR and Sparse R-CNN have similar FLOPs but different FPS (13.4 vs. 17.8), as shown in the table above. Since foundation models have a simple architecture and attention is highly optimized on modern GPUs, most operators can be computed in parallel, so the FPS gap is smaller than the FLOPs gap. We recommend using FPS as the main metric for computation cost.
- Further, many inference-time acceleration methods [2-5] have been proposed for foundation models, which can further accelerate Frozen-DETR. Besides, Frozen-DETR can be equipped with different foundation models; a faster foundation model also yields a faster Frozen-DETR.
- Following your comments, we also switched from the V100 GPU to a slower 1080 Ti GPU. The FPS trend is the same across GPU types.
[1] An Energy and GPU-Computation Efficient Backbone Network for Real-Time Object Detection. In CVPRW 2019.
[2] FasterTransformer. By NVIDIA.
[3] TensorRT-LLM. By NVIDIA.
[4] Flashattention: Fast and memory-efficient exact attention with io-awareness. In NeurIPS 2022.
[5] Efficient Memory Management for Large Language Model Serving with PagedAttention. In SOSP 2023.
---
Rebuttal Comment 2.1:
Title: Response to the Authors
Comment: Thank you for the responses and for providing more FLOP details.
I'm afraid I have to disagree with the FPS trend seen on the 1080Ti. The 1080Ti is weaker than the V100 GPU and has far fewer cores.
For example, DAB-DETR-DC5 / Frozen-DETR (DAB-DETR-DC5) achieved 10.2/8.5 FPS on the V100 and 5.0/4.7 FPS on the 1080Ti.
Considering the large compute requirements introduced by the frozen foundation models, roughly 150G FLOPs of additional cost, the FPS gap on the 1080Ti should be far larger than the gap on the V100.
Hence I strongly encourage the authors to double-check the values.
---
Rebuttal 3:
Title: Response to Reviewer QbuY
Comment: Thanks for your quick reply!
We have checked the code (modified from `benchmark.py` of Deformable-DETR; we will release our code in the future) and tested the FPS for DAB-DETR-DC5 / Frozen-DETR (DAB-DETR-DC5) on the 1080Ti three more times, getting the same results:
DAB-DETR-DC5: 4.9, 4.9, 5.0
Frozen-DETR (DAB-DETR-DC5): 4.7, 4.7, 4.7
We also provide the FPS for DAB-DETR-DC5 / Frozen-DETR (DAB-DETR-DC5) on various GPUs: 1080Ti (5.0/4.7), P100 (6.0/5.4), 2080Ti (8.2/7.6), V100 (10.2/8.5), 3090 (12.6/11.8), A100 (21.4/19.9). The FPS gaps are similar across the various GPUs. We find that GPU utilization is around 90% on the 1080Ti and around 70% on the V100, and that Frozen-DETR has a higher GPU utilization rate than the baselines. We hypothesize that the GPU utilization rate is one reason for the similar FPS gaps across different GPUs, and for the FPS gaps being small compared with the FLOPs gaps. As mentioned above, many other factors also influence FPS.
Once again, we highly recommend selecting FPS (the real runtime) as the main metric instead of FLOPs. The runtime can be further improved by many off-the-shelf acceleration techniques.
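For reference, a minimal FPS-measurement harness of the kind described above (a simplified stand-in for the modified `benchmark.py`, not the authors' actual script; the warm-up count and the toy model are illustrative assumptions):

```python
import time

def measure_fps(model_fn, inputs, warmup=5, runs=50):
    """Average FPS of `model_fn` over `runs` forward passes.

    `model_fn` is any callable taking one input (a stand-in for a
    detector's forward pass). With CUDA models, a call to
    torch.cuda.synchronize() should be inserted before each timestamp
    so that asynchronously queued kernels are fully counted.
    """
    for _ in range(warmup):          # warm-up passes are excluded from timing
        model_fn(inputs)
    start = time.perf_counter()
    for _ in range(runs):
        model_fn(inputs)
    elapsed = time.perf_counter() - start
    return runs / elapsed

# Example with a trivial stand-in "model":
fps = measure_fps(lambda x: sum(v * v for v in x), list(range(1000)))
print(round(fps, 1))  # prints the measured FPS (machine-dependent)
```

Averaging over many runs after a warm-up, as done here, is what makes repeated measurements (like the 4.9/4.9/5.0 FPS readings above) comparable.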
---
Rebuttal 4:
Title: Response to the authors
Comment: Thank you for the additional results.
However, I am still unconvinced by the GPU FPS performance across GPUs; the numbers seem erratic. The primary reason is that the frozen foundation model runs sequentially (i.e., before the main pipeline backbone or right after it). Hence, GPUs like the 1080Ti should technically show a larger FPS gap, which is contrary to the evaluations provided by the authors. The current evaluations show 4.9 FPS without the frozen model and 4.7 FPS with it.
In terms of latency, 4.9 FPS is 204 ms while 4.7 FPS is 212 ms, i.e., a gap of 8 ms. In fact, with a stronger GPU this latency should decrease, and hence on a stronger GPU the FPS gap should be smaller. Given the size of the frozen model (DINO, transformer-based, or any other), the 8 ms latency introduced by the frozen model on the 1080Ti is not justifiable in any case. Moreover, with these values, 204 ms is introduced by the main detection pipeline and only 8 ms by the frozen foundation model, which is only 3%. This directly counters your response that "Frozen-DETR has a higher GPU utilization rate".
Considering the above argument, I'll keep my original rating.
---
Rebuttal Comment 4.1:
Title: Response to Reviewer QbuY
Comment: To Reviewer QbuY:
Thanks for providing more details for your judgment.
We find that there may **be a misunderstanding** about our Frozen-DETR.
The single-scale DAB-DETR-DC5 uses standard self-attention in the encoder by default.
Meanwhile, Frozen-DETR (DAB-DETR-DC5) introduces another level of the feature pyramid from the foundation model, thus forming multi-scale feature maps.
Following the common design of multi-scale query-based detectors, we replace the standard self-attention in the encoder with deformable attention.
This difference only occurs when Frozen-DETR is applied to a single-scale query-based detector since Frozen-DETR is naturally a multi-scale detector.
Thus, the latency introduced by the frozen foundation model on 1080Ti cannot be simply calculated as 212ms - 204ms = 8ms.
Here we provide detailed latency for each component of the model.
- DAB-DETR-DC5: 5.0FPS, 200ms on 1080Ti and 10.2FPS, 98ms on V100.
- Frozen foundation model: 13.2FPS, 76ms on 1080Ti and 22.2FPS, 45ms on V100.
- Frozen-DETR (DAB-DETR-DC5) but using random tensors as the output of the foundation model (do not forward the foundation model): 8FPS, 125ms on 1080Ti and 13.8FPS, 72ms on V100.
- Frozen-DETR (DAB-DETR-DC5): 4.7FPS, 213ms on 1080Ti and 8.5FPS, 118ms on V100.
As for the FPS gaps across different GPUs, we think they are related to the GPUs' architecture.
As mentioned in your reply, we find that our Frozen-DETR can be **further accelerated by parallel computing**. Since the foundation model and the backbone compute independently, we can simply add two lines of code, `s = torch.cuda.Stream()` and `with torch.cuda.stream(s)`, to create a separate CUDA stream and execute the two computations in parallel. With this change, Frozen-DETR (DAB-DETR-DC5) is accelerated from **4.7 FPS to 5.2 FPS** on 1080Ti and from **8.5 FPS to 8.9 FPS** on V100. We believe the code can be further optimized, and many off-the-shelf acceleration techniques can be applied.
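A minimal sketch of this two-stream overlap (the function and module names are illustrative assumptions, not the authors' actual code; `backbone` and `foundation` stand for any two modules with no data dependency between them):

```python
import torch

def forward_parallel(backbone, foundation, x):
    """Run two independent modules concurrently on separate CUDA streams.

    Falls back to sequential execution on CPU. In production code, tensors
    produced on the side stream should additionally be guarded with
    Tensor.record_stream() if they are freed on another stream.
    """
    if torch.cuda.is_available():
        side = torch.cuda.Stream()
        with torch.cuda.stream(side):
            fm_feats = foundation(x)   # issued on the side stream
        bb_feats = backbone(x)         # issued on the default stream
        # Make the default stream wait for the side stream before fusion.
        torch.cuda.current_stream().wait_stream(side)
    else:
        fm_feats = foundation(x)
        bb_feats = backbone(x)
    return bb_feats, fm_feats
```

The `wait_stream` call provides the synchronization point the rebuttal implies: the fusion step must not start until both branches have finished.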
Last but not least, we will release all the experimental code, including both the main text and the rebuttals, to ensure reproducibility.
If you have any questions, please feel free to contact us. We are pleased to answer them.
---
Rebuttal 5:
Title: Response to Reviewer QbuY
Comment: To Reviewer QbuY:
Thanks for your patience and quick reply. We understand that the newly added DAB-DETR-DC5 experiment during the rebuttal, with its slightly different setup, may cause some confusion. We appreciate the opportunity to clarify any misunderstandings, and all clarifications will be incorporated into the main text.
As mentioned in our last reply, all single-scale detectors use standard dense self-attention in the encoder, which has quadratic computational cost w.r.t. the number of tokens. Taking DAB-DETR-DC5 as an example, for an image with input size 800×1200, the number of tokens in the encoder is (800 / 16) * (1200 / 16) = 3750.
In multi-scale detectors, it is unreasonable to concatenate all the multi-scale feature maps along the token dimension and apply standard self-attention, because there are too many tokens. Thus, all multi-scale query-based detectors use deformable attention in the encoder, a sparse attention with linear complexity that approximates standard self-attention. Our Frozen-DETR (DAB-DETR-DC5) has multi-scale feature maps and uses deformable attention in the encoder, following common practice in multi-scale query-based detectors. **In summary**, the foundation model in Frozen-DETR introduces extra latency, while deformable attention reduces latency. This results in the insignificant FPS gap between DAB-DETR-DC5 and our Frozen-DETR, which may explain the confusion.
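To make the complexity argument concrete, a small back-of-the-envelope sketch (pure arithmetic, not the paper's code; the 800×1200 input and stride 16 follow the numbers above, and the 4 sampling points per query are an illustrative assumption for deformable attention):

```python
def self_attention_cost(tokens):
    # Standard dense self-attention: every token attends to every token,
    # so cost grows quadratically with the token count.
    return tokens * tokens

def deformable_attention_cost(tokens, sampling_points=4):
    # Deformable attention: each token attends to a fixed small number
    # of sampled points, so cost grows linearly with the token count.
    return tokens * sampling_points

# DAB-DETR-DC5 encoder tokens for an 800x1200 input at stride 16:
tokens = (800 // 16) * (1200 // 16)
assert tokens == 3750

# Dense attention touches ~14M token pairs; deformable attention with
# 4 sampling points touches only 15K, a ~937x reduction.
print(self_attention_cost(tokens))        # 14062500
print(deformable_attention_cost(tokens))  # 15000
```

This is why replacing dense self-attention with deformable attention can offset much of the latency added by the frozen foundation model.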
Regarding parallel computing, we have demonstrated that it does accelerate our Frozen-DETR even on a 1080Ti GPU. Besides, many researchers are now focusing on the efficient inference for large foundation models and LLMs. We believe these techniques could also be effectively applied to our Frozen-DETR.
Considering the various advantages of Frozen-DETR, especially under many challenging scenarios (which you also acknowledged regarding the improved performance), we note that **all other reviewers have reached a consensus with us on the good performance-speed trade-off of Frozen-DETR** and have given us a positive rating. We are looking forward to your feedback. | Summary: This paper focuses on enhancing the performance of query-based object detection models. By inserting a foundation model into the DETR framework and treating it as a plug-and-play module instead of a backbone, the performance of query-based detectors can be significantly improved. The detection performance of DETR (DINO) on COCO is substantially enhanced by inserting patch tokens into the DETR encoder, and class tokens into the DETR decoder. Moreover, since the inserted CLIP is frozen, smaller detectors can now be equipped with larger foundation models to boost efficiency. The authors have conducted extensive ablation studies to demonstrate the effectiveness and justifiability of the proposed method.
Strengths: 1. Through a decoupled design, the method can accept asymmetric input sizes, which greatly reduces computational load and allows smaller detectors to be paired with larger foundation models while maintaining an acceptable computation burden.
2. The experiments are very comprehensive, essentially validating every design choice and modification proposed, and thoroughly ablating multiple feasible schemes for inserting CLIP into the model.
3. The results include large vocabulary and open vocabulary tests, demonstrating the advantages of utilizing the frozen CLIP.
Weaknesses: In the experiments presented in Tables 1 and 2, the detector utilizes an R50 backbone, but most of the inserted foundation models are ViT-Ls. This design introduces a larger image encoder to provide extra information, thereby effectively creating a model ensemble to enhance the knowledge of the R50 backbone. However, as the size of the detector backbone increases, for instance, when switching from R50 to Swin-Large or even ViT-Large, does this approach of inserting a frozen CLIP still lead to noticeable improvement? The paper only provides results for the largest backbone (Swin-Base) in Table 6. I believe more results on larger backbones are needed for further discussion to verify the potential of this method.
Technical Quality: 3
Clarity: 3
Questions for Authors: In the Supplementary Material's Table 13, the authors examine the impact of varying model sizes of the foundation model on Co-DINO. I am still curious whether the R50-CLIP would boost the performance of an R50 backbone detector. Does this increase necessitate that the model size of CLIP backbone surpasses that of the detector backbone?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer RksL
**Q1: When switching from R50 to Swin-Large or even ViT-Large, does this approach of inserting a frozen CLIP still lead to noticeable improvement?**
**RE**: Yes. We conduct experiments with Swin-L based on the Co-DETR detector. In this experiment, we equip Co-DETR with DFN5B-CLIP-ViT-H-14-378 as the feature enhancer and fine-tune its Objects365 pre-trained checkpoint. Our Frozen-DETR is not used during the Objects365 pre-training. The results in the table below show that Frozen-DETR can still gain noticeable improvement even with a strong backbone and strong pre-training.
| Model | AP | AP$_{50}$ | AP$_{75}$ | AP$_{s}$ | AP$_m$ | AP$_l$ |
|----------------------------------------------|------|-----------|-----------|----------|--------|--------|
| Co-DINO 5scale SwinL (our re-implementation) | 63.7 | 81.2 | 70.2 | 50.2 | 67.1 | 77.8 |
| Frozen-DETR | 64.1 | 81.4 | 70.7 | 50.0 | 67.5 | 78.0 |
***
**Q2: I am still curious whether the R50-CLIP would boost the performance of an R50 backbone detector. Does this increase necessitate that the model size of CLIP backbone surpasses that of the detector backbone?**
**RE**: Here we use the same baseline as in Table 13. Using the R50-CLIP feature enhancer still increases AP by 0.5%. Nevertheless, Frozen-DETR aims to equip detectors with a strong foundation model; if only the R50 version of a foundation model is available, we think using it directly as the backbone is a better choice.
| Model | AP | AP$_{50}$ | AP$_{75}$ | AP$_{s}$ | AP$_m$ | AP$_l$ |
|---------------------|------|-----------|-----------|----------|--------|--------|
| baseline | 47.0 | 64.1 | 51.4 | 30.5 | 50.2 | 62.0 |
| baseline + R50-CLIP | 47.5 | 65.1 | 51.8 | 29.6 | 51.2 | 63.0 |
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: Thanks to the authors for the response; the rebuttal has resolved my doubts. Regardless, I believe this article presents a simple but clever design.
Strengths: + This paper proposes a relatively novel approach for leveraging visual foundation models for object detection.
+ The proposed method achieves promising results and outperforms other query-based object detection models.
+ The whole paper is easy to follow.
Weaknesses: Overall I think this paper is interesting; their proposed method of using vision foundation models as additional knowledge is fair and relatively novel. One minor concern may be that it is hard to say whether it is desirable to apply this relatively complicated framework instead of directly leveraging the pre-trained foundation models in real practice, but the improvement is solid based on the current evaluations.
**A few suggestions on writing:**
- I personally suggest splitting/reorganizing the last paragraph of the introduction (e.g. one question + corresponding insight per paragraph). The combination of questions and key insights is distracting.
- Figure 3 itself is hard to follow. Adding more details in the caption may help. (Likewise for a few other figures/tables.)
- It's better to bold the best results in tables.
- One minor comment: maybe use different notations for DINO (the detection baseline) and DINOv2 (the self-supervised encoder).
**Minor questions/comments:**
- l37-38, EVA-CLIP-18B is equipped with a ViT rather than R50. The version of CLIP using R50 as the backbone lagged behind. This evidence cannot support the claim that the improvement is from large-scale training.
- l55: Is DINO the SOTA query-based detector? (refer to https://paperswithcode.com/sota/object-detection-on-coco for example)
- The setup of Table 1 is a little unclear, is the backbone tunable? I assume they are not. Then, the comparison is slightly unfair -- the backbones are frozen for the top lines, and there are additional trainable parameters in the encoder backbone. In other words, if we finetune the encoder backbone of Table 1, will the gap between the two variants be even smaller, which questions the current design?
- In Table 2, DEiT-III has comparable results with CLIP, how will this backbone perform for the main experiments?
- One limitation of this approach is the running time. In Table 4, the authors reported the FPS; the running time is 1.5x that of the baseline model. However, if the foundation model is used in a conventional way (i.e. directly as the backbone), there is no extra overhead at inference time. Moreover, if using a ViT-based foundation model, how will the inference time and GPU memory change, since most of the recent powerful foundation models are equipped with ViT?
Technical Quality: 3
Clarity: 3
Questions for Authors: See the previous section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discussed the limitations and the social impact at the end of the main paper. The limitation of this work may be that the visual foundation models are trained with natural images and they may not work well with images from other domains (e.g. medical images). There is no potential negative social impact from this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer V83J
**Q1: A few suggestions on writing**
**RE**: Thanks for your helpful and detailed advice. We will carefully revise our manuscript. We rename the detector DINO as DINO-det and the self-supervised foundation model DINOv2 as DINOv2-FM.
***
**Q2: EVA-CLIP-18B is equipped with a ViT rather than R50. The version of CLIP using R50 as the backbone lagged behind. This evidence cannot support the claim that the improvement is from large-scale training.**
**RE**: This example aims to emphasize that a large-scale pre-trained foundation model has strong image understanding abilities such as zero-shot ability, even without fine-tuning on a specific downstream dataset. This property demonstrates that the foundation model has the potential to serve as a plug-and-play feature enhancer for downstream tasks. In the revision, we will replace the compared R50 with an ImageNet-22k pre-trained and ImageNet-1k fine-tuned ViT-H-14, which achieves 85.1% top1 accuracy and merely surpasses the zero-shot CLIP model by +1.3%, showing the foundation CLIP model's generalizable image understanding abilities.
***
**Q3: Is DINO-det the SOTA query-based detector?**
**RE**: DINO-det is one of the most well-known query-based detectors due to high performance and fast convergence speed and most recent SOTA detectors are based on it. Here we list some top detectors in the paperwithcode:
- Co-DETR (the rank-1 detector) adds additional co-heads to DINO-det for fast and better convergence. We have also applied our method to Co-DETR in Table 6.
- InternImage-H (the rank-2 detector) and M3I Pre-training (the rank-3 detector) equip DINO-det with a strong backbone InternImage-H which is pre-trained by M3I Pre-training.
***
**Q4: The setup of Table 1 is a little unclear, is the backbone tunable? I assume they are not. Then, the comparison is slightly unfair -- the backbones are frozen for the top lines, and there are additional trainable parameters in the encoder backbone. In other words, if we fine-tune the encoder backbone of Table 1, will the gap between the two variants be even smaller, which questions the current design?**
**RE**: All backbones in Table 1 are tunable. Thus, the comparison is fair and the current design is reasonable. We will add more details to clarify the setup in Table 1.
***
**Q5: In Table 2, DEiT-III has comparable results with CLIP, how will this backbone perform for the main experiments?**
**RE**: We find that DEiT-III can also boost the performance in the main experimental setup, though the gain is slightly lower than with CLIP, as shown in the table below.
| Model | AP | AP$_{50}$ | AP$_{75}$ | AP$_{s}$ | AP$_m$ | AP$_l$ |
|----------------------------|------|-----------|-----------|----------|--------|--------|
| DINO-det-4scale | 49.0 | 66.6 | 53.5 | 32.0 | 52.3 | 63.0 |
| DINO-det-4scale + CLIP | 51.9 | 70.4 | 56.7 | 33.8 | 54.9 | 69.3 |
| DINO-det-4scale + DEiT-III | 50.6 | 68.5 | 55.0 | 32.1 | 53.5 | 67.9 |
***
**Q6: One limitation of this approach is the running time. In Table 4, the authors reported the FPS; the running time is 1.5x compared with the baseline model. However, if using the foundation model in a conventional way (i.e. directly using it as the backbone), the running time will not be affected at inference time. Moreover, if using a ViT-based foundation model, how will the inference time and GPU memory change, since most of the recent powerful foundation models are equipped with ViT?**
**RE**: We believe the additional computation cost is acceptable, which is also pointed out by Reviewer RksL. If we directly use ViT-L as the backbone, it will take 10G inference memory (3x larger than the baseline) and 2.1 FPS (4x slower than the baseline), which is much larger and slower than our Frozen-DETR. Please see the rebuttals for all reviewers above for more details.
---
Rebuttal Comment 1.1:
Title: Thanks to the authors
Comment: Thanks to the authors for providing the detailed feedback. Most of my concerns are addressed and I am still leaning towards a positive attitude on this paper.
Rebuttal: # To all reviewers
We thank all reviewers for their helpful and insightful feedback and are encouraged they find that our method is innovative (Reviewer V83J), the experiments are very comprehensive (Reviewer RksL and C9T7), and the proposed method achieves promising results (Reviewer V83J, RksL, and C9T7). We address reviewers' common comments below.
As mentioned by Reviewer V83J, the detector DINO and the self-supervised foundation model DINOv2 share the same name and may cause confusion. We rename the detector DINO as **DINO-det** and the self-supervised foundation model DINOv2 as **DINOv2-FM** in the following.
**Q1: Comparisons between using foundation models as a backbone and as a plug-and-play module as in our Frozen-DETR. Concerns for additional computation cost.**
| Method | Training Mem | Training time / epoch (h) | Inference Mem | Inference FPS | GFLOPs |
|----------------------------------|:---:|:---:|:---:|:---:|:---:|
| DINO-det-4scale baseline | 13G | 1.3 | 3G | 9.7 | 279 |
| Our Frozen-DETR (DINO-det-4scale) | 15G | 1.4 | 3G | 6.5 | 400 |
| DINO-det-5scale | 34G | 2.6 | 5G | 4.4 | 860 |
| DINO-det-4scale + ViT-L backbone | 44G (bs=1) | 4.2 | 10G | 2.1 | 1244 |
In this work, we propose a novel paradigm to integrate frozen vision foundation models with query-based detectors, firstly showing that frozen foundation models can be a versatile feature enhancer to boost the performance of detectors, **even though they are not pre-trained for object detection**. Please see the rebuttal PDF for intuitive comparisons among different paradigms.
In previous practices, large vision foundation models are always used as a pre-trained backbone and fine-tuned with detectors in an end-to-end manner. Although such a paradigm achieves a high performance, the computation cost of fine-tuning such a large vision foundation model is unaffordable. We use ViT-L as an example to illustrate this problem, as ViT-L is a common architecture for most vision foundation models. In the above table, we choose DINO-det-4scale with R50 backbone as the baseline and compare it with three methods: our Frozen-DETR (CLIP ViT-L-336), DINO-det-5scale, and DINO-det-4scale with a foundation model (ViT-L) as the backbone. We use the ViT-L as the backbone following ViTDet. For the training, we use 4 A100 GPUs with 2 images per GPU except for the ViT-L backbone due to out-of-memory (OOM). For inference, we use a V100 GPU with batch size 1 in line with the main text. As shown in the table, the computation cost in both training and inference for Frozen-DETR is the lowest among the three variants.
- Compared with DINO-det-4scale with a foundation model as a backbone, training a ViT-L backbone needs 4.2 hours per epoch and 44 GB memory per GPU, which is significantly higher than our Frozen-DETR (1.4 hours and 15 GB with 2 images per GPU). For inference, using ViT-L as a backbone needs 10 GB GPU memory and runs at 2.1 FPS on a V100 GPU. In contrast, inference with Frozen-DETR needs only 3 GB of GPU memory (3x less) and runs at 6.5 FPS (3x faster).
- Compared with DINO-det-5scale, our Frozen-DETR not only runs faster but also significantly outperforms DINO-det-5scale by 1.8% AP (53.1% AP vs 51.3% AP), as shown in Table 6.
Thus, Frozen-DETR achieves a good performance-speed trade-off. The additional computation cost is acceptable and we are happy to find that **Reviewer RksL agrees with us**.
Apart from the good performance-speed trade-off, Frozen-DETR further enjoys the following advantages:
1. **No architecture constraint**. The foundation model in Frozen-DETR can be any architecture, including CNNs, ViTs, or hybrid ones. Moreover, the detector and the foundation model can use different structures. For example, the backbone of detectors can be R50 or Swin-B (Table 6) and the backbone of foundation models can be R101 and ViT (Table 13).
2. **Plug-and-play**. Our method can be plugged into various query-based detectors without modifying the detector’s structure, the foundation model’s structure, and the training recipe. We have applied Frozen-DETR to AdaMixer, DINO-det, Co-DETR in the main paper and DN-DETR, DAB-DETR, MS-DETR and HPR in the rebuttals below.
3. **Effective integration**. Our method can successfully transfer the strong image understanding ability from foundation models to detectors. We have shown that the benefit is larger under more challenging scenarios.
- On the COCO dataset, we increase 2.9% AP for DINO-det (Table 6).
- On the challenging large vocabulary LVIS dataset, we increase 6.6% AP for DINO-det (Table 7).
- On the challenging long-tail scenario, we increase 8.7% APr and 7.7% APc for DINO-det (Table 7), showing the potential to alleviate the class imbalance problem.
- On the challenging open-vocabulary scenario, we increase 8.8% novel AP for DINO-det (Table 8), showing strong open-vocabulary ability.
4. **Complementary to a strong backbone**. In Table 1 and Table 6, our Frozen-DETR can still boost the performance for detectors with CLIP initialized backbone or a strong ImageNet-22k pre-trained Swin-B.
Overall, we believe our novel paradigm enjoys good advantages over using foundation models as the backbone and achieves an outstanding performance-speed trade-off.
Pdf: /pdf/82840451ac8ea7ea8ad1c5ccce06d999563e171a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MKGL: Mastery of a Three-Word Language | Accept (spotlight) | Summary: This paper proposes a method to leverage LLMs as knowledge graph completion systems. New tokens that correspond to (potentially multi-word) concepts and relations are introduced to the model’s vocabulary, and then the LLM’s embeddings for the tokens composing those concepts/relations are aggregated and upscaled to estimate embeddings for these new tokens. Given these new tokens, the goal of the system is to complete the knowledge graph triplet given two of these embeddings by retrieving the correct third token from the KG vocabulary.
In a series of experiments, it is observed that the proposed system outperforms a variety of previous baselines employing diverse methods. It also excels at inductive KG completion. An ablation study reveals that each proposed part of the pipeline is necessary for achieving the best performance.
Strengths: * The proposed approach outperforms a wide variety of prior methods with respect to accuracy. It also achieves a better trade-off between accuracy and compute-efficiency.
* A wide variety of baselines employing diverse methods are compared.
* Informative ablation study.
Weaknesses: 1. Given how many moving parts there are, reproducibility seems difficult. It would be nice to see the variance in performance of the method across multiple random seeds, where each random seed entails running the entire pipeline of optimizations from scratch.
2. Relatedly, it is unclear whether the approach will scale as LMs continue to improve (and presumably to become better bases for approaches like this). Having a comparison with other base models would be a nice way to hedge against this.
3. No detailed discussion of limitations. The checklist says it is discussed, but there are only brief comments distributed throughout the paper (which, in my opinion, do not address limitations thoroughly enough). A dedicated section would be helpful.
4. It is unclear whether the new KG embeddings encode relevant concepts to the target token, or whether they are picking up on certain spurious correlations that happen to be helpful (but may not generalize robustly). It would be nice to have an analysis where the new embeddings are directly decoded into vocabulary space, such that we can observe what concepts are included in these new representations.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Would it be possible to base this approach on other LMs as well? The various Llama scales would be ideal, but if scale is an issue, even just comparing Mistral, Llama 2 (7B), Llama 3 (8B), and ideally some smaller models would be nice. This is quite compute-intensive, so I wouldn’t expect it for the camera-ready, but it would definitely be nice to have.
2. Do you have any hypotheses as to why the proposed method is better at inductive KG completions than prior methods? In other words, is there a particular aspect of your pipeline that you believe makes it better for handling novel triplets than past approaches?
Typos:
* L226: “clear that The” -> “clear that the”
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: I do not believe limitations have been sufficiently addressed. There is no "Limitations" section, nor is there a dedicated space in any part of the paper that directly addresses the drawbacks of the proposed method and experiments. For example, there are many moving parts; there are multiple stages of optimization that could lead to cascading errors; only one LLM base was considered; etc.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your detailed and insightful comments. We have addressed your concerns below and hope our responses provide clarity:
### Weaknesses:
1. **Reproducibility: can the authors provide the variance in performance of the method?**
Thanks for your suggestion. Following existing methods, we report the average results of 5 runs in the main tables. Most KG completion methods are not very sensitive to the initialization seed [1], as the large label space and testing sets make the results quite stable. This may be why prior works did not provide variance statistics. Reproducing all baseline results is difficult at this phase, but we are willing to update the tables with the variance statistics of our method:
| Methods | FB15k-237 MRR | FB15k-237 Hits@1 | FB15k-237 Hits@3 | FB15k-237 Hits@10 | WN18RR MRR | WN18RR Hits@1 | WN18RR Hits@3 | WN18RR Hits@10 |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| MKGL | .415 $\pm$ .002 | .325 $\pm$ .004 | .454 $\pm$ .001 | .591 $\pm$ .001 | .552 $\pm$ .002 | .500 $\pm$ .005 | .577 $\pm$ .003 | .656 $\pm$ .002 |
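For reference, the MRR and Hits@k numbers above follow their standard definitions over per-query ranks of the correct entity; a minimal sketch (not the exact evaluation code) is:

```python
def mrr(ranks):
    # Mean Reciprocal Rank: average of 1/rank over all test queries.
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks, k):
    # Fraction of queries whose target entity is ranked within the top k.
    return sum(1 for r in ranks if r <= k) / len(ranks)

ranks = [1, 3, 2, 10, 50]          # toy ranks of the correct entity per query
print(round(mrr(ranks), 3))        # 0.391
print(hits_at_k(ranks, 1))         # 0.2
print(hits_at_k(ranks, 10))        # 0.8
```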
2. **How does MKGL perform as LMs continue to improve?**
Thanks very much for your suggestion. We have conducted a new experiment investigating the impact of the base model, where we consider Mistral (7B), Llama 2 (7B), Llama 3 (8B), and Llama 2 (13B) for comparison. The results are shown in the following table:
| Methods | FB15k-237 MRR | FB15k-237 Hits@1 | FB15k-237 Hits@3 | FB15k-237 Hits@10 |
|---|:---:|:---:|:---:|:---:|
| Llama-2-7B | .415 | .325 | .454 | .591 |
| Llama-2-13B | .421 | .331 | .455 | .590 |
| Llama-3-8B | .429 | .337 | .459 | .593 |
| Mistral-7B | .424 | .334 | .457 | .591 |
From the results, we find that the improvement from the 7B model to the 13B model is limited, possibly because 13B is not significantly larger than 7B. However, more advanced base models do contribute to better performance, as empirically verified by the results of Llama-3-8B and Mistral-7B. These results are significantly better than the Llama-2-7B version, especially on Hits@1, and also outperform the best baseline method KICGPT (.327) in Table 2. We have added a subsection with the above results and discussion in the revision.
3. **A dedicated limitation section would be helpful.**
Thank you for the detailed comments. We have incorporated a dedicated limitations section in the revised version, as outlined below:
We would like to discuss the potential limitations of our method from the following three aspects:
Efficiency: As MKGL is an LLM-based fine-tuning method, it inherently requires more computational resources. In our primary experiments, MKGL significantly outperformed all conventional and LLM-based methods. Subsequent analyses also revealed that MKGL has fewer trainable parameters and a lower runtime than a general fine-tuning framework. Hence, we believe that MKGL remains an efficient LLM-based method.
Robustness: MKGL leverages multiple retrievers to gather text and KG information for constructing both input embeddings and score estimations, which may introduce more errors during fine-tuning. Nevertheless, most modules are learnable through back-propagation. To mitigate biased evaluation and sporadic results, we also present the averaged results of multiple runs alongside variance statistics. Consequently, we consider MKGL to be a robust method.
Generality: The advancements in LLMs have revolutionized many NLP tasks, and it is important for an LLM-based method that the proposed modules remain effective and that performance continues to improve as the base LLM improves. We have conducted experiments to visualize the KGL embeddings and to compare performance across different base LLMs. The results empirically demonstrate the generality and potential of MKGL.
4. **It would be nice to have an analysis about the new KGL embeddings and original token embeddings.**
We have visualized the embeddings of the original LLM tokens and KGL tokens in Figure 2 of the newly uploaded rebuttal.pdf. In Figure 2a, we observe that two types of embeddings are (nearly) uniformly distributed in the space. The KGL tokens have been successfully encoded into the original token space. In Figure 2b, we present a sample from the center, where the entity *Canal+* (a sports TV channel) is closely encoded to tokens like *TV* and *_Team*. In Figure 2c, we present a sample from the corner, which also demonstrates high semantic correlations.
We have included the figure and discussion in the revision. Additionally, we believe that the inductive KG completion experiments could further demonstrate the robustness and generality of MKGL, as the entities in the testing set are unseen and unknown during fine-tuning.
### Questions:
1. **Would it be possible to base this approach on other LMs as well? Even just comparing Mistral, Llama 2 (7B), Llama 3 (8B) would be nice.**
Please refer to our response to the weaknesses.
2. **Why is MKGL better at inductive KG completions than prior methods?**
We believe there are two possible reasons. Firstly, the incorporation of additional text information: previous works primarily rely on structural similarities for inductive KG completion, while our method additionally leverages text information. Although the entities in the testing set are new, their text token features are not new to MKGL. Secondly, the utilization of the LLM: we believe the LLM possesses knowledge helpful for KG completion and offers better inference ability than smaller models.
- **Typos: L226, “clear that The”**
Many thanks. We have fixed it.
- **Limitations**
Thank you once again for the detailed and constructive comments. We hope our response has sufficiently addressed these limitations.
[1] A Re-evaluation of Knowledge Graph Completion Methods. ACL, 2020.
---
Rebuttal Comment 1.1:
Comment: Thanks for the thorough response. These low variances are nice to see, and help me trust the robustness of the results. Thanks also for running the experiments on various Llama versions and sizes.
The new embeddings visualization is nice to see, but it presents only a couple examples, and does not directly compare to the original representations. I feel that a more systematic quantitative comparison would better address Weakness 4; this could, for example, be based on the average distance between entity or relation tokens that co-occur in queries in the original vs. new space.
That said, I consider the other weaknesses to be well-addressed, even if preliminarily. I'm therefore raising my score.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your increased rating and recognition of the efforts we put into addressing your concerns. A more straightforward comparison between two types of embeddings is indeed helpful. We believe the Wasserstein metric (also known as Earth mover’s distance, the cost of an optimal transport plan to convert one distribution into another) is appropriate for estimating the similarity between two distributions. We sampled 1,000 tokens/entities to estimate the Wasserstein distance and present the results in the following table:
| (X, Y) | Wasserstein distance |
|---|:---:|
| (Token, Token) | 1.4418 |
| (Token, Entity) | 2.0019 |
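For intuition about the metric itself, the 1-D case has a simple closed form for equal-size empirical samples: the optimal transport plan pairs the sorted points, so W1 is the mean absolute difference of the sorted samples (a toy 1-D illustration, not the multi-dimensional computation used above):

```python
def wasserstein_1d(x, y):
    # W1 distance between two equal-size empirical samples: in 1-D the
    # optimal transport plan pairs the i-th smallest point of x with the
    # i-th smallest point of y.
    assert len(x) == len(y)
    xs, ys = sorted(x), sorted(y)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(x)

print(wasserstein_1d([0, 1, 2], [0, 1, 2]))   # 0.0 (identical samples)
print(wasserstein_1d([0, 1, 2], [1, 2, 3]))   # 1.0 (shifted by 1)
```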
The Wasserstein distance between the sampled entity and token distributions is slightly larger than that of the two sampled token distributions. Therefore, it is reasonable to conclude that our method has successfully encoded the new KGL tokens into the original token embedding space. We will update the corresponding paragraph to discuss the Wasserstein distance results. Thank you again for your prompt and kind response. | Summary: This paper proposes what seems to be an elaborate GNN+LLM+GNN sandwich of a model for doing knowledge base completion, having a GNN pipeline to form KB-informed token embeddings, passing those to an LLM (Llama-2) into a knowledge base completion prompt template, and then passing that output into another GNN-like ("PNA") set of layers. That whole set-up is then used to train representations of the data optimized with contrastive loss for the entity prediction task in knowledge base completion tasks (FBK and Wordnet) (I'll admit that I found the model explanation quite hard to follow, and so it's entirely possible I'm slipping on some details -- some of this had to be inferred by glancing at their code.) It seems to outperform existing methods on these two tasks.
Strengths: If the authors work and evaluation are sound, their method outperforms other methods at two commonly used knowledge base completion tasks.
Weaknesses: - I found the model rather hard to follow, and had particular trouble discerning why this collection of model assumptions would result in a meaningful improvement over prior work. Since the work is so complicated, it may be useful both to focus on very clear graphs and to introduce parts of the model progressively. I'll admit that the "Retriever" framing felt very confusing as well, as this model's "Retrievers" don't seem to do any retrieving.
- For understanding the model, the "three-word language" framing seems quite separate from the meat of what this model seems to be doing. It implies that the somewhat simple "template" setup they use is important, but it seems to exist only to teach the model what triplet completion is, which one would think could be easily addressed in fine-tuning; there is no experimental exploration showing the value of those prompts.
- I'll admit to feeling suspicious of the high performance here (even after ablating nearly everything, they outperform most models?).
Technical Quality: 2
Clarity: 2
Questions for Authors: - Looking at the code, it was unclear whether the model is calculating the metrics (MRR, Hits@1, Hits@10) by ranking solely within a small batch containing the correct answer, or actually predicting the highest-ranked entity from a full space of candidates. Could the authors clarify which one is being done?
- Could the authors clarify what is removed in the second, "Text" ablation row?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: yes, a broader impacts section is included and no major limitations are missing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your constructive and detailed comments. We appreciate the opportunity to provide further clarifications.
### Weaknesses:
- **Why is the proposed model better than the prior works? It would be better to illustrate each module step-by-step with clear figures. The name "Retriever" is confusing: what does it retrieve?**
Thank you for your insightful comments. We believe the reasons for the improved performance are threefold: (1) the power of LLMs; (2) the additional text information, where methods leveraging text features generally perform better; (3) the proposed MKGL effectively retrieves text and KG information for the LLM.
We indeed have a more detailed implementation section, including a step-by-step algorithm, which is currently in Appendix C due to space limitations. If the paper is accepted, we plan to move this content into the main paper in the camera-ready version (with an additional page allowance). The step-by-step process (Figure 1) of MKGL is as follows: first, we construct input instructions and tokenize them into IDs. For out-of-vocabulary KGL tokens, their embeddings are retrieved by our context retriever. Specifically (Figure 2), we aggregate the text tokens of each entity as its embedding, then use this embedding as the entity feature for aggregating KG information into the final KGL token embedding. Finally, we assemble the (KGL and LLM) token embeddings as the input sequence and feed them to the LLM to obtain output hidden states.
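To make the text-aggregation step concrete, here is a minimal, hypothetical sketch (toy names and values; the actual context retriever also aggregates KG neighborhood information and uses learnable up-scaling): the text tokens of an entity are mean-pooled into an initial embedding for its out-of-vocabulary KGL token.

```python
def pool_entity_embedding(token_ids, embedding_table):
    # Mean-pool the embeddings of an entity's text tokens to form an
    # initial embedding for its out-of-vocabulary KGL token.
    vecs = [embedding_table[t] for t in token_ids]
    dim = len(vecs[0])
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]

# Toy 2-d token embeddings (hypothetical values).
table = {0: [1.0, 3.0], 1: [3.0, 5.0]}
print(pool_entity_embedding([0, 1], table))  # [2.0, 4.0]
```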
The term "retriever" simply implies that it can retrieve information from external resources that are helpful for inference. Since the KG information is not originally included in the input triplet, we designed a module to retrieve and process the information for the LLM. We have updated the introduction section in the revision to include an explanation for the name "retriever". Thank you again for your comment.
- **The prompts used to fine-tune MKGL seems important, but it just teaches the model what triplet completion is. This can be easily addressed in fine-tuning.**
We also agree that the prompt templates may not be essential for fine-tuning an LLM on KG completion. In fact, the presentation of Instruction 3.1 in the main paper is not to demonstrate its novelty and importance. We simply aim to show how the input instruction is structured. Although we organize the context as a table, using sentences or other formats would not cause a significant performance loss.
- **Lacking an explanation of the high performance (even after ablating nearly everything, they outperform most models?).**
Thank you for the comment. We do not ablate everything; the text retriever is actually replaced with a randomly initialized embedding module, as stated in Lines 285-287. We have highlighted this point in the caption of Table 4 and discussed the reasons in the revision. It is still a supervised fine-tuning LLM-based model, and it is expected to outperform most (not all) conventional models.
### Questions:
- **It was unclear whether the model is calculating the metrics (MRR, Hits@1, Hits@10) by ranking solely within a small batch containing the correct answer, or a full space of candidates. Could the authors clarify which one is being done from the source code?**
Thank you for your detailed comments. To incorporate LLMs into KG completion, we developed a new framework based on the Transformers package provided by Hugging Face. The training and evaluation procedures also follow the corresponding suggestions. Although we have provided extensive comments in the uploaded code, it may be less familiar to the reviewer than previous KG completion codebases.
We do rank the target entities against all candidates, and we have highlighted this point in our paper in the revision. In the "predict" function of "llm.py" (Lines 299-325), we construct the candidate list according to the status of "self.training".
```python
all_index = torch.arange(graph.num_node, device=device)
if self.training:
# train, do negative sampling
neg_index = self._strict_negative(
pos_h_index, pos_t_index, pos_r_index)
...
else:
    # test, construct testing examples (h,r,?) and (?,r,t)
h_index, t_index = torch.meshgrid(pos_h_index, all_index)
it_index, ih_index = torch.meshgrid(pos_t_index, all_index)
...
```
To ensure the evaluation function is correct, we have also tested a classical KG completion method, TuckER, within our framework; it achieves performance similar to that reported in its paper. We cannot provide an external link in the rebuttal, but TuckER is a very simple model. The reviewer may simply replace our model with the TuckER model (only 40 lines) and keep everything else unchanged to reproduce its results. Additionally, we adopt a strict ranking strategy where the target is ranked below any entities with the same probability (Lines 278-286 in llm.py).
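The strict ranking rule (candidates tied with the target in score are ranked above it) can be sketched as follows; names and scores here are illustrative, not taken from the actual codebase:

```python
def strict_rank(scores, target_index):
    # Rank of the target among all candidates; ties count against the
    # target, i.e. candidates with an equal score are ranked above it.
    target = scores[target_index]
    return sum(1 for i, s in enumerate(scores)
               if i != target_index and s >= target) + 1

# Target (index 1, score 0.5) is ranked below both 0.9 and the tied 0.5.
print(strict_rank([0.9, 0.5, 0.5, 0.1], 1))  # 3
```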
- **Could the authors clarify what is removed in the second "Text" in the ablation results (Table 4)**
As stated in Line 286, toggling off "Text Retriever" does not mean we remove it; rather, we replace it with a learnable embedding module. This is equivalent to adding many new tokens to the LLM. The new tokens are randomly initialized and are learned during fine-tuning. We have updated the caption of Table 4 to include this explanation.
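An illustrative sketch of this ablation idea (the function name and initialization scale are ours): each entity receives a randomly initialized, learnable vector, analogous to adding new tokens to the LLM vocabulary in place of the text-derived initialization.

```python
import random

def init_entity_embeddings(entity_ids, dim, seed=0):
    """Randomly initialize one learnable vector per KG entity, mimicking
    `resize_token_embeddings`-style new tokens (toy, framework-free version)."""
    rng = random.Random(seed)
    return {e: [rng.gauss(0.0, 0.02) for _ in range(dim)] for e in entity_ids}

emb = init_entity_embeddings(["entity_1", "entity_2"], dim=4)
assert len(emb["entity_1"]) == 4
```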
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response! With the authors' clarifications, I take back some of my initial concerns, and I've revised my score up from 4 to 6.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your increased rating and recognition of the efforts we put into addressing your concerns! Your contribution to our work is highly valued and greatly appreciated. | Summary: The authors introduce a SOTA method for allowing LLMs to incorporate information from knowledge graphs, relying on Knowledge Graph Language token embeddings to retrieve context, and then score it using a retriever that helps form a distribution over the possible entities to be incorporated.
Strengths: Creative, interesting paper. Introduces a novel approach in a domain that could be of great use to data analysts (LLMs for knowledge graphs).
Well-supported empirically with strong evaluation on major benchmarks. Outperforms alternative methods mostly across the board.
Simple and effective illustrations. Good use of formatting in the paper itself to facilitate understanding (appreciated the use of color, especially).
Method may have notable benefits for reducing problems like hallucination and distribution drift, contributing to a solution for major outstanding issues with LLMs.
Weaknesses: Could use a longer and more detailed discussion section; ends a little too abruptly.
Technical Quality: 4
Clarity: 4
Questions for Authors: None.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: I have no concerns about this paper being published.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your encouraging comments and insightful suggestion. We hope the following response addresses your concern:
### Weaknesses:
- **Could use a longer and more detailed discussion section; ends a little too abruptly.**
Thank you for your insightful suggestion. We also agree that a more detailed discussion could enhance understanding of our method and its implications across broader research areas. We have updated the conclusion section to include more comprehensive discussions on methodology, results, and potential value for other research fields:
The proposed context and score retrievers point out a new direction in incorporating LLMs with KGs for various tasks, such as question answering and entity linking. They also have implications in broader areas where the input cannot be precisely represented by text, e.g., node classification and protein representation learning. Furthermore, the construction of KGL vocabulary enables contrastive learning beyond tokens, offering insights into general machine learning. Hence, there are also numerous future directions. We plan to pretrain LLMs using a mixed corpus of KG and natural languages, enabling the LLM to comprehend and generate responses with linked data. | Summary: The paper proposes MKGL, a novel approach to integrate LLMs with KGs by instructing them in a specialized KG Language (KGL). KGL is a three-word language that mirrors the structure of KG triplets. The authors introduce a KGL context retriever and a score retriever, both based on LoRA, to efficiently encode textual and relational information into KGL token embeddings. MKGL outperforms existing KG completion methods, including LLM-based and conventional approaches, on both standard and inductive KG completion tasks. The paper also demonstrates MKGL's ability to generate valid KGL sentences and its computational efficiency compared to in-context learning methods.
Strengths: * The authors present a novel approach to LLM-KG integration using a completion of entity-relation-entity triplets.
* The performance seems to be strong and the method outperforms the previous work on KG completion.
* At least compared to naive in-context learning, MKGL is more efficient and also achieves better scores.
Weaknesses: * There are only limited details on certain aspects of the methodology, for example I couldn't find details about the actual implementation of the multi-layered PNA for KG information retrieval.
* While the authors claim that the proposed "three-word language" parsing of natural sentences is novel, it boils down to semantic-role labeling (SRL), a well-established NLP task. I believe that the paper should include a clear comparison to past SRL methods.
* The results in Table 2 should contain a column with computational cost (or at least the number of parameters of each method), to make it clear if it compares apples to apples.
* The computation runtime of the proposed method and the baselines is another thing that is lacking.
Technical Quality: 3
Clarity: 3
Questions for Authors: * What exactly are the trainable parameters of the in-context-learning baseline in Section 4.6 and Figure 4? Isn't the point of ICL to not do any parameter updates at all and rely only on the contextual prompt?
* How does MKGL scale to much larger knowledge graphs? And how is it compared to other KG completion methods?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: No issues found.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your helpful feedback and constructive suggestions. We have carefully integrated them into our paper.
### Weaknesses:
- **Include more details of the methodology, e.g., the implementation of the multi-layered PNA for KG information retrieval.**
Thank you for your suggestion. The implementation of the multi-layered PNA closely follows the original paper, with the key difference lying in the input feature. As this isn’t a core contribution of MKGL, we have relocated it to Appendix B. In our revised version, we have elaborated on the encoding of KG information. Recall Equation (12) where the original multi-layer PNA can be written as:
$a_v^{(l+1)} = \delta \left( \bigoplus_{\rho \in \mathcal{P}} AGG_{\rho}(\{h_u^{(l)}, \forall u \in \mathcal{N}(v)\}), W^{(l)} \right)$
Here, $a_v^{(l+1)}$ represents the aggregated information for node $v$ at layer $l+1$, $\mathcal{N}(v)$ denotes the set of neighbors of $v$, $h_u^{(l)}$ signifies the hidden state of neighbor node $u$ at layer $l$, $\bigoplus$ is a concatenation operator over the set of aggregators $\mathcal{P}$, $AGG_{\rho}$ is an aggregation function (e.g., sum, mean, max), and $\delta$ is a nonlinear activation function applied together with the weight matrix $W^{(l)}$.
For knowledge graphs, there exists a relation (or edge type) $r$ between $u$ and $v$, which also needs to be encoded into the hidden states. To address this, we modify the above equation as:
$a_v^{(l+1)} = \delta \left( \bigoplus_{\rho \in \mathcal{P}} AGG_{\rho}(\{h_u^{(l)}\otimes h_r^{(l)}, \forall (v,r,u) \in \mathcal{T}\}), W^{(l)} \right),$
where $h_r^{(l)}$ denotes the hidden state of relation $r$ at layer $l$, and $\mathcal{T}$ denotes the triplet set. $\otimes$ is the operator used to combine the relation $r$ and neighboring node $u$, typically set as point-wise multiplication. Importantly, the hidden states at the first layer are not randomly initialized. They are the output of the text information retrieval, as illustrated in Figure 2c.
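To make the modified aggregation concrete, here is a toy scalar sketch of one relational PNA-style layer (our simplification, not the paper's code: hidden states are scalars, $\otimes$ is multiplication, and a ReLU stands in for $\delta$ with $W^{(l)}$):

```python
def relational_pna_layer(node_h, rel_h, triplets):
    """One toy relational PNA layer: messages h_u * h_r from triplets
    (v, r, u) are combined by several aggregators and concatenated."""
    aggregators = [sum, lambda xs: sum(xs) / len(xs), max]  # P = {sum, mean, max}
    out = {}
    for v in node_h:
        msgs = [node_h[u] * rel_h[r] for (h, r, u) in triplets if h == v]
        if msgs:
            out[v] = [max(0.0, agg(msgs)) for agg in aggregators]  # ReLU as delta
        else:
            out[v] = [0.0] * len(aggregators)
    return out

# node "a" receives messages 2.0*0.5 and 3.0*0.5 -> [sum, mean, max] = [2.5, 1.25, 1.5]
out = relational_pna_layer({"a": 1.0, "b": 2.0, "c": 3.0}, {"r": 0.5},
                           [("a", "r", "b"), ("a", "r", "c")])
assert out["a"] == [2.5, 1.25, 1.5]
```

In the actual model the first-layer hidden states would come from the text information retriever rather than random initialization, as noted above.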
- **Include a clear comparison to semantic-role labeling (SRL) methods.**
We have revised the related work section to compare KG completion with Semantic Role Labeling (SRL). Both tasks can be viewed as classification problems. However, the label spaces differ significantly. Most NLP tasks involve a smaller number of classes, usually less than 1,000, whereas for KG completion, the label space can exceed the vocabulary of LLMs. For example, the WN18RR dataset contains over 40,000 different entities, making it impractical to simply feed them all as possible results and let the LLM select one as output.
- **Better to add a column with computational cost/parameter in Table 2.**
Many thanks. The number of parameters for MKGL is evidently greater than that of most conventional methods, a limitation we have discussed in the paper. A more comprehensive comparison metric may be the number of trainable parameters. Below are the results for FB15k-237 and WN18RR (the full table is available in the newly uploaded rebuttal.pdf), with some results sourced from [1], and others (marked by *) evaluated using the official repositories.
| Methods | # FB15k-237 Trainable Parameters (M) | FB15k-237 MRR | # WN18RR Trainable Parameters (M) | WN18RR MRR |
|---|:---:|:---:|:---:|:---:|
| TransE | 2 | .310 | 21 | .232 |
| RotatE | 15 | .338 | 20 | .476 |
| TuckER | 11* | .358 | 9* | .470 |
| CompGCN | 10* | .355 | 12* | .479 |
| CoKE | 10 | .364 | 17 | .484 |
| KG-BERT | 110* | - | 110* | .216 |
| StAR | 355* | .296 | 355* | .401 |
| KGLM | 355* | .289 | 355* | .467 |
| DET | 16 | .376 | 24 | .507 |
| **MKGL** | 20 | .415 | 20 | .552 |
It is evident that the number of trainable parameters for MKGL is comparable to that of conventional methods, and this gap narrows as the knowledge graph gets larger (WN18RR). Some language-model-based methods (e.g., KG-BERT) leverage a full-parameter-fine-tuning strategy, employing significantly more trainable parameters.
- **The runtime of the proposed method and the baselines is not included.**
The runtimes of LLM-based and conventional methods may not be directly comparable, but we compare our method with a vanilla supervised fine-tuning LLM-based method in Figure 4. The results demonstrate the high efficiency of our method.
### Questions:
- **The name of ICL (1-hop)/ICL (2-hop) in Section 4.6 and Figure 4 may be confusing.**
Sorry for the confusion. The current names are inappropriate and misleading. Raw/ICL (1-hop)/ICL (2-hop) are methods involving new randomly initialized token embeddings and score layers to represent every entity. In essence, each entity is considered a new token for the LLM and is added to the vocabulary (accomplished using the native "resize_token_embeddings" function to add new tokens to the LLM). The new embeddings do require training.
This experiment aims to verify whether the proposed text retriever and KG retriever are superior to initializing new tokens and directly incorporating the KG context information in the input, respectively. It’s crucial to emphasize the necessity of including such new token embeddings to estimate the probabilities of all entities. This explains why these variants have more trainable parameters than MKGL.
In the revised version (Figure 1 in rebuttal.pdf), we have renamed them and provided more detailed explanations: Raw has been renamed to NewToken, ICL (1-hop) is now NewToken (1-hop), and ICL (2-hop) is now NewToken (2-hop).
- **How does MKGL scale to much larger knowledge graphs? And how is it compared to other KG completion methods?**
MKGL can easily scale to large KGs, as the number of trainable parameters remains independent of KG size. All KGL tokens stem from constant LLM token embeddings. This characteristic potentially positions MKGL as advantageous compared to conventional methods that initialize an embedding for each KG entity.
[1] HittER: Hierarchical Transformers for Knowledge Graph Embeddings. EMNLP, 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your response! I still believe that you should publish the (training & inference) computational cost and full parameter count, and compare it against the other methods. I don't see the significantly increased number of parameters as a negative, as long as it's transparent to the reader. Informing only about the training parameters seems somewhat misleading. I'm happy to increase the final score to 7 if you consider this.
---
Reply to Comment 1.1.1:
Comment: We are truly grateful for your increased rating and insightful comments. We completely agree with your point about listing the number of full parameters, which is important to provide a comprehensive comparison. While we are unable to revise "rebuttal.pdf" at this stage, we are fully committed to updating Table 2 to include the statistics of full parameters.
In most methods, the number of trainable parameters aligns with the number of full parameters. However, there are two exceptions: KG-Llama and our method MKGL, both of which employ LoRA.
| Methods | # Trainable Parameters (M) |# Full Parameters (M) |
|---|:---:|:---:|
| KG-Llama | 13 | 6,755 |
| **MKGL** | 20 | 6,762 |
MKGL incorporates additional neural layers for aggregating text and KG information, thus necessitating a greater parameter count. Once again, we sincerely thank you for your valuable suggestions. | Rebuttal 1:
Rebuttal: Dear all reviewers:
We sincerely appreciate the time and effort you have dedicated to reviewing our paper.
We would like to express our gratitude to Reviewer XXej for suggesting the inclusion of more details in the related work and methodology sections. We have incorporated these suggestions in the revision.
We are also grateful to Reviewer 4rXf for providing suggestions on enriching the conclusion section. We have expanded our discussions beyond KGs to cover their potential impact on other areas.
We genuinely thank Reviewer dtBb for recommending more details about the implementation and experimental settings. We have updated the corresponding paragraphs to cover more specific settings in the revision.
Furthermore, we extend our special thanks to Reviewer NCjt for pointing out the underlying limitations and providing insightful solutions. We have added a standalone section to discuss the limitations and conducted experiments to analyze them.
Thanks again to all reviewers. Your comments are invaluable in helping us enhance the quality of our paper.
Best Regards,
Authors
Pdf: /pdf/029bbac86246cca6687ec4244319a80e9f40b786.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Mission Impossible: A Statistical Perspective on Jailbreaking LLMs | Accept (poster) | Summary: The paper introduces a new theoretical framework that views prompts as combinations of concepts and queries, allowing for a detailed analysis of how and why LLMs can be manipulated into producing unsafe responses. It proposes a statistical metric for alignment, focusing on quantifying the safety of the model's outputs.
The study introduces an improved alignment strategy, Enhanced Reinforcement Learning from Human Feedback (E-RLHF), which modifies existing RLHF methods to increase the likelihood of safe responses from LLMs without additional training costs.
Through experiments using standard benchmarks like AdvBench and HarmBench, the paper demonstrates that E-RLHF significantly outperforms traditional RLHF in resisting jailbreaking attempts, reducing the models' vulnerability to producing harmful outputs.
Strengths: The paper offers a unique statistical framework that conceptualizes input prompts into concepts and queries. This approach allows for a nuanced understanding of how LLMs process inputs and why they might generate unsafe outputs even under alignment efforts.
The introduction of E-RLHF as an alignment strategy that doesn't require additional computational resources is a major strength.
The empirical tests using established benchmarks such as AdvBench and HarmBench provide solid evidence that E-RLHF can significantly reduce the Attack Success Rate (ASR) compared to traditional RLHF.
Weaknesses: While E-RLHF improves alignment in controlled experimental conditions, its effectiveness in more complex conversational scenarios, where inputs can evolve over a series of interactions, remains untested.
Technical Quality: 3
Clarity: 2
Questions for Authors: The two datasets used in the work contain harmful requests about various concepts (e.g. drug, weapon). Have you tried to analyze the performance of E-RLHF on different concepts?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer sD4q,
Thanks for your detailed review! Here are our responses to your concerns.
*1. ...(E-RLHF's) effectiveness in more complex conversational scenarios, where inputs can evolve over a series of interactions, remains untested.*
- That is a great point that we had discussed internally too. We fully acknowledge that our theoretical framework does not cover multi-round chat jailbreak scenarios. We have discussed this limitation in our paper (in lines 376-383, point (2)). Apart from the fact that we currently do not know how to extend the framework, we considered the exclusion of multi-interaction scenarios justified because empirical work has not yet produced established (or SOTA) multi-turn jailbreaking attacks and evaluations:
- (1) **No available evaluation benchmarks for multi-turn chat.** As discussed in the HarmBench paper [3], jailbreak evaluation varies significantly across studies, making it difficult to compare the efficacy of different methods. Therefore, we chose to follow HarmBench as our evaluation protocol, as it is designed to best reflect LLM safety by ensuring fair comparison and providing diverse jailbreak adversaries. Multi-interaction benchmarks are not available as of now.
- (2) **Lack of empirical methods.** The only works we are aware of that use multi-round interaction to jailbreak LLMs are [1] and [2]. However, both were published in February and April of 2024 and were hence too recent to integrate into our work. Additionally, neither project provides source code to reproduce its results, making it challenging for us to include their methods in our empirical evaluations.
That being said, we believe that integrating multi-interaction attacks into our framework is crucial for future research. From a theoretical perspective, we hypothesize that if even a single attack poses a non-negligible risk, a multi-interaction attack will likely be even more challenging to defend against. Consequently, research questions would focus on how rapidly success rates escalate with the number of interactions. This exploration will provide deeper insights into the dynamics of jailbreak attack and inform more robust defensive strategies.
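The escalation question above can be made concrete under a naive independence assumption (our simplification for illustration, not part of the paper's framework): if each round independently succeeds with probability $p$, the chance that at least one of $k$ rounds jailbreaks the model is $1-(1-p)^k$, which grows quickly with $k$.

```python
def escalation(p_single, rounds):
    """Probability that at least one of `rounds` independent attempts
    succeeds, given per-round success probability `p_single`."""
    return 1.0 - (1.0 - p_single) ** rounds

# even a 5% per-round risk compounds to a majority chance over 20 rounds
assert escalation(0.05, 20) > 0.6
```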
*2. The two datasets used in the work contain harmful requests about various concepts (e.g. drug, weapon). Have you tried to analyze the performance of E-RLHF on different concepts?*
- Our empirical evaluations primarily concentrate on harmful behavior. As highlighted in the general response, HarmBench assesses the safety of LLMs across a variety of harmful behaviors. While alignment extends beyond mere safety to include aspects such as ethical behavior, to our knowledge, we currently lack benchmarks for testing these broader criteria. We are happy to keep discussing this topic and would welcome any specific proposals or suggestions for additional benchmarks that could enhance our understanding and assessment of ethical alignment in LLMs.
We hope these explanations address your questions and concerns. Please let us know if you need further clarification, we would be delighted to discuss further.
References
[1] Great, Now Write an Article About That: The Crescendo Multi-Turn LLM Jailbreak Attack
[2] Leveraging the Context through Multi-Round Interactions for Jailbreaking Attacks
[3] HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal
---
Rebuttal 2:
Comment: Thanks. I will keep the rating since it is already the highest.
---
Rebuttal Comment 2.1:
Title: Response to comments
Comment: Dear reviewer sD4q,
We appreciate the recognition of our efforts and thanks for responding to our rebuttal! | Summary: The paper presents a statistical framework that provides a theoretical analysis of the jailbreaking problem in language models. The authors first examine the PAC-Bayesian bound to demonstrate that there is a non-trivial probability for LLMs to mimic harmful behavior if such information is present in the pre-training corpus. They then theoretically prove that a jailbreaking prompt can be found for such pretrained LMs and propose mitigating the jailbreak issue by modifying the harmful prompts.
Strengths: The paper's formulation and assumptions are clear and well-motivated. The proofs seem comprehensive and support the claims well. However, I don't have much background in PAC-Bayesian theory, so it's hard for me to verify if (1) the proofs are all rigorous (2) the proposed framework is truly novel instead of a simple application of existing theorems.
The experimental results from the proposed E-RLHF improve upon DPO-based fine-tuning.
Weaknesses: I find the connection between the theoretical part and section 5 (E-RLHF for actual experiment) quite handwavy. Expanding the safety zone is the claimed goal, but the proposed solution (through injecting safe concepts in a harmful prompt) seems really hacky. For example, as shown in Table 4, using safe prompt (4) gives much better results than safe prompt (1) for no reason. Overall, this proposed method seems like a small trick that the authors try to incorporate into the paper just to show some empirical value of their theoretical analysis. I also find that with such modifications, the MT-Bench score (briefly mentioned in line 355) is lower under E-DPO than DPO.
Lastly, this might be a biased opinion (that's why I didn't factor this part into my scoring decision and still gave a marginally above accept score): I feel that after all the theoretical proofs (despite their elegance), the conclusion (LLMs will mimic toxic behavior if such toxic content is present in the training corpus, jailbreak is unpreventable under reasonable assumptions, and by expanding the safe zone we reduce the probability of jailbreaking) is very intuitive and does not provide much additional insight into the problem. Maybe the mathematical formalization of the jailbreak problem itself is meaningful, and I will let the other reviewers judge the novelty of such a framework.
Technical Quality: 2
Clarity: 3
Questions for Authors: Could you provide more details on the MT-bench's performance? For example, you have an ablation study on different safety prefixes' effects in Table 4. What about their impact on MT-bench?
In line 327 and the ablation study in the appendix, you show that non-harmful prompts should not be modified (otherwise, the model performs much worse). Could you relate this phenomenon to the safe/harmful zone or your theoretical analysis? I don't see why adding a safe prefix to a non-harmful prompt negatively impacts the safe/harmful zone, and under your assumption, this should not lead to worse behavior.
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: No concerns on the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer UXGw,
Thanks for your comprehensive review! Here are our responses to your concerns.
- **(1) The novelty of our framework.** To the best of our knowledge, we are the first to offer a theoretical analysis on jailbreaking from a statistical perspective. To tackle this problem and provide insights, we addressed the following unique challenges associated with LLMs:
- Abstract the capability of an LLM to generalize on unseen but plausible prompts. **This property is particularly important since many representative jailbreak methods exploit this to bypass the guardrails of existing LLMs.**
- Model the capability of an adversary. For jailbreak attempts, it is hard to formulate an adversary in a mathematical way, since the operations performed on prompts are discrete and unbounded.
- Discriminate harmful prompts versus non-harmful prompts. The statements should be connected to harmful prompts only, while a precise definition of "harmful" itself is lacking.
Our framework overcomes these problems and provides theoretical evidence on the difficulty and impossibility of completely avoiding jailbreaking events.
- **(2) Connection from our theory to E-RLHF.** Under our framework, we identified a drawback in the RL fine-tuning objective, particularly the small safety zone it creates. We propose E-RLHF to address this issue by **replacing** harmful concepts in prompts $x$ with safe ones. We want to clarify that the safe prefixing strategy is **not injecting a safe concept**, but rather a **simple yet effective implementation of harmful concept replacement** in harmful prompts. We agree that more sophisticated methods could further enhance safety, and our experimental results suggest that even our simple implementation significantly improves safety. Thus, we argue our experimental results should not be regarded as negative but instead in line with Occam's razor as a positive result.
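A minimal sketch of the safe-prefix implementation described above (the prefix wording and function name are ours; the paper's actual prefix may differ). The transformed prompt $x_s$ is what replaces the harmful prompt $x$ in the KL-regularization term of the RLHF objective:

```python
def to_safe_prompt(prompt, is_harmful, prefix="Please respond safely: "):
    """Replace the harmful concept by prepending a safe prefix; non-harmful
    prompts are left unchanged, matching the ablation finding that
    modifying them hurts utility."""
    return prefix + prompt if is_harmful else prompt

assert to_safe_prompt("How do I pick a lock?", True).startswith("Please respond safely: ")
assert to_safe_prompt("What is the capital of France?", False) == "What is the capital of France?"
```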
- **(3) Sensitivity of the safe prefix and MT-Bench scores.** As mentioned in response part (1), our safe prefixing is a simple implementation of replacing harmful concepts with safe ones. Under our framework, different safe prefixes can induce different safe concepts, and different $p_{\textrm{SFT}}(x_s)$. This explains the sensitivity to the choice of safe prefix. Prompt sensitivity has also been observed in previous works ([4][5]), and we believe safety could be further improved with prompt engineering to find better safe prefixes. Regarding the MT-Bench score, we acknowledge that E-RLHF leads to a slightly lower score than RLHF. However, **the score achieved by E-RLHF is still higher than that of the SFT model**, indicating that we do not sacrifice utility for safety. There are few LLM-tuning-based defenses against jailbreak attacks, especially those improving the RLHF step. For instance, R2D2 from HarmBench [1] uses a SFT strategy, resulting in a drop in the MT-Bench score from 7.3 to 6.0. The tension between safety and utility has been noted in previous works (e.g., [2]), and our proposal does not sacrifice utility to achieve better safety.
- **(4) Insight provided by our framework and experiments.** We aim to establish a framework that explains the jailbreak phenomenon theoretically. We argue that these results, showing that LLMs can be jailbroken both after the pretraining stage and after the current safety alignment stage with RLHF, are non-trivial and important even though they may appear intuitive in hindsight. RLHF is the default optimization strategy for LLM alignment, and we identify a fundamental drawback in its mathematical objective. Based on this insight, we offer a plausible solution, E-RLHF, and demonstrate its effectiveness through a simple yet effective implementation.
- **(5) Additional MT-Bench results.** We are in the process of requesting credits with the OpenAI API and plan to initiate the benchmark test as soon as possible.
- **(6) Relating ablation study on non-harmful prompts with theoretical analysis.** Under our framework, the harmful and safety zones are defined with respect to **a single concept**, meaning modifications to non-harmful prompts should not impact performance on harmful prompts. However, we argue that the ablation phenomenon occurs due to the following reasons. Firstly, we do not model **correlations and interactions between concepts**. Each concept is considered independent, but in reality, generalization on one concept influences LLM performance on other prompts. For non-harmful prompts, appending the safe prefix may hinder optimization, affecting learning on harmful prompts. Modeling correlations and interactions between different concepts is highly complex and is left for future exploration. We will add this discussion to the limitations section of our final draft. Secondly, we point out that this statement also holds true for normal RL fine-tuning. Analysis on it (e.g., the motivation of DPO [3]) suggests that the output of a converged LLM for any prompt $x$ depends only on the reward model $r(x,e)$ and the initial distribution $p_{\textrm{SFT}}(x)$. However, due to the discrepancy between the optimal solution and the converged LLM in practice, the same reward model and LLM initialization can lead to models with significantly different performances.
Lastly, we want to emphasize the significance of our experimental result. We achieve significant safety improvement across all categories of harm on 2 benchmarks. The dataset size and harm diversity coverage is the largest compared to previous papers (we refer to our general response, and discussions in [1] as reference).
References
[1] HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal
[2] Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
[3] Direct Preference Optimization: Your Language Model is Secretly a Reward Model
[4] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
[5] Large Language Models as Optimizers
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer UXGw
Comment: Dear Authors,
Thanks for providing additional clarifications to the points I asked in the review. After reading the response (for both my review and others), I decide to increase the score (from 5) to 6.
---
Reply to Comment 1.1.1:
Title: Response to comments
Comment: Dear reviewer UXGw,
Thanks for responding to our rebuttal and raising your score! We apologize for the delay in accessing the OpenAI API, and will include the requested comparison in our final draft. | Summary: The paper addresses a very important issue of our time, the safety of LLMs. LLMs are already used in various applications and will be present in more applications to come, as e.g. Microsoft, Apple and Google are integrating LLMs in their applications and operating systems. Hence, the question on how to make the systems more robust against jailbreaking is a very important question.
This paper offers a statistical approach to this question and presents experiments. The mathematics is based PAC-Bayesian approach and builds upon a very nice formalization of splitting a prompt into queries and concepts.
Strengths: The mathematics are presented very well. In particular, the intuitions and interpretations of the mathematical concepts are presented well, and can also be followed by readers, who are not able to follow all the details of every equation.
Overall, I really appreciated the formal approach to modelling jailbreaking and the conclusions, that it is impossible to avoid jailbreaking.
Weaknesses: There is a disconnect between the theory and the experiments. The definitions, theorems, etc. are all presented and motivated very nicely, but could also apply to any other mapping of a stochastic system. Oversimplified, one may state that there is a mapping from an input space into an output space, in which the output space can be divided into desired and undesired outputs. The goal is to reduce the space of undesired outputs. To make this a case specifically about LLMs, there needs to be some form of conclusions that feed into the experiments. Unfortunately, I don't see this connection. The experiments use a system prompt that is prepended to the actual prompt. I do not see how this follows from the theory.
This is really unfortunate, because I would really like to see, how the experiments connect to the theory.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could you please point out the connection between experiment and theory that I have seemed to miss.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: I don't see any negative social impacts. As stated above, I would like to see a stronger connection between experiments and theory.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer T1QG,
Thanks for your thorough review! We are pleased to provide further explanations on our E-RLHF proposal, its theoretical foundations, and its specific relation to LLMs.
*1. The definitions, theorems, etc. are all presented and motivated very nicely, but could also apply to any other mapping of a stochastic system. ...To make this a case specifically about LLMs, there needs to some form of conclusions that feed into the experiments.*
To better address your concerns, we would like to clarify the **goals** we aim to achieve with our constructed framework to make it meaningful and robust. Our framework should:
- Abstract the capability of an LLM to generalize on unseen but plausible prompts. **This property is particularly important as many representative jailbreak methods exploit this ability to circumvent the guardrail of existing LLMs.** This necessitates a clear distinction between "plausible" and "seen" prompts. Additionally, defining the concept of "generalization" is critical, yet inherently complex.
- Model the capability of an adversary. For jailbreak attempts, formulating an adversary in a precise mathematical manner is challenging due to the discrete and unbounded nature of the operations performed on prompts.
- Discriminate between harmful and non-harmful prompts. It is essential that statements on jailbreaking focus exclusively on harmful prompts. However, the definition of "harmful" itself remains ambiguous and difficult to mathematically formalize.
**Framework**: Based on these points, we first assume each prompt can be decomposed into a query and a concept. This offers us the opportunity to model generalization by assuming invariance on the concept, thereby allowing us to mathematically describe the capability of the adversary. Reflecting point (1), in Assumption 4.1, we assume that the domain of the LLM output distribution depends solely on the concept and not the query. This is a crucial yet, we argue, realistic assumption. We argue that this abstraction is **distinctive of LLMs** and is not easily applicable to generic stochastic systems, as not all mappings within such systems will exhibit the property described in point (1). This characteristic further impacts our E-RLHF proposal, which relies on this essential property to enhance safety. Without this property, the performance of those mappings on **unseen harmful prompts** found by the adversary would remain unaffected, while for LLMs the corresponding safety zone will be enlarged. That said, we are eager to explore whether our proposed framework can be effectively applied to other generic stochastic systems. This extension of our research could potentially broaden the applicability and impact of our findings, offering valuable insights into a wider range of systems.
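For concreteness, here is a minimal informal sketch of the decomposition in our notation (this is an illustration only, not the exact statement of Assumption 4.1):

```latex
% Sketch (informal): a prompt x decomposes into a query q and a concept c,
% and the support of the LLM's output distribution depends only on the
% concept, not on the query:
\[
  x = (q, c), \qquad
  \operatorname{supp}\, p_{LM}(\cdot \mid q, c) \;=\; \mathcal{E}_c
  \quad \text{for all queries } q .
\]
% An adversary may rewrite the query q arbitrarily while preserving the
% concept c, which models jailbreak attempts that exploit generalization
% to unseen but plausible prompts.
```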
*2. The experiments use a systems prompt that is preprended to the actual prompt. I do not see, how this follows from the theory. Could you please point out the connection between experiment and theory that I have seemed to miss.*
**The specific implementation is driven by our framework and works well**: We further want to convince you that the incorporation of a prepended safe prefix represents an intuitive yet effective implementation of our E-RLHF proposal. Theorem 2 elucidates the relationship between the size of the safety zone and the ability of an adversary to successfully compromise the model. This insight directly informs our practical approach to enlarging the safety zone. Our analysis reveals that the prevalent RL fine-tuning objective, especially its KL-term, inadvertently contributes to vulnerabilities due to the unsafe nature of $p_{\textrm{SFT}}(x)$ when $x$ itself is harmful. By substituting harmful concepts in $x$ with safe alternatives, we can effectively mitigate this issue. We have elaborated on several variations of its implementation in discussions with reviewer qvRD. In the experimental section of our study, we take a simple, computationally efficient approach by introducing a safe prefix to $x$.
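To make the modification concrete, here is a simplified sketch (our notation; a streamlined form of the RL fine-tuning objective) of how E-RLHF alters the KL reference for a harmful prompt $x_h$ with a safe surrogate $x_s$:

```latex
% Standard RL fine-tuning objective (simplified sketch):
\[
  \max_{p_{LM}} \;
  \mathbb{E}_{x \sim \mathcal{D}_s}\Big[
    \mathbb{E}_{e \sim p_{LM}(\cdot\mid x)}\big[r(x,e)\big]
    \;-\; \beta\, \mathbb{D}_{\mathrm{KL}}\big(p_{LM}(\cdot\mid x) \,\|\, p_{\mathrm{SFT}}(\cdot\mid x)\big)
  \Big].
\]
% E-RLHF (sketch): for a harmful prompt x_h, the KL term is instead anchored
% to the SFT distribution of a safe surrogate x_s (here, x_h with a safe prefix):
\[
  -\, \beta\, \mathbb{D}_{\mathrm{KL}}\big(p_{LM}(\cdot\mid x_h) \,\|\, p_{\mathrm{SFT}}(\cdot\mid x_s)\big),
\]
% so the optimum is no longer pulled toward the harmful responses that
% p_SFT assigns mass to under x_h, enlarging the safety zone.
```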
We want to emphasize the significance of our experimental result. We achieved significant safety improvement across all tasks in HarmBench with only one generic intervention (one safe prompt). The dataset size and harmful diversity coverage are the largest compared to previous papers (we refer to the general response and section A.2, section B.4 and Table 5 of [1] as reference).
We found that our approach already provides an improvement in safety. We think that with better implementations, a larger alignment dataset, and improved optimization techniques (e.g., with PPO), the enhancement in safety could be significantly increased.
Please let us know if the above explanations resolve your concern on how we motivated our framework specifically for LLMs, and how our E-RLHF proposal is linked to the theoretical discoveries.
---
Rebuttal Comment 1.1:
Title: Answer
Comment: Dear authors,
Thank you very much for the lengthy reply to my main concern. The reply validates that I did understand the main points of the paper and approach. Unfortunately, I don't see any additional arguments that directly answer my question. I do see the importance of the experiments and results, and I also do see the theory that you discuss in the paper. I just don't see how the two are connected.
Prompt engineering and system prompts are a well-established method in many LLM applications. They do not follow from your theory, as there are other ways to change the safe and unsafe areas.
I appreciate the answer, but will remain with my initial assessment of the paper.
---
Reply to Comment 1.1.1:
Title: Response to comments
Comment: Dear reviewer T1QG,
Thank you for your response! Could you please provide more details about your concerns so that we can address them more effectively? In our rebuttal, regarding the specificity of our framework to LLMs, we clarify how it is uniquely tailored in our assumptions, which are particularly relevant to LLMs. Concerning the connection between theory and experiment, we had hoped to have clarified how the theory allowed us to identify the problems with current RLHF approaches (maintaining too small a safety zone), thus leading us to our successful strategy of safe concept substitution through prefixing. In our discussion with reviewer qvRD, we explore several strategies that involve both human intervention and LLM-based support to facilitate concept substitution as future explorations.
We welcome any suggestions on how we can further refine our approach to better address your concerns. | Summary: The paper provides a theoretical insight about LLM jailbreaks using PAC-Bayesian bound for pretraining LLMs. It assumes that there always exists the harmful data in the mixture, and as the model is trained on this mixture, the model will probably produce the responses in harmful zone (it has a specific definition in the main paper). Based on this framework, the authors suggests that the safety zone of the models should be extended, and to this end, they introduce a method called E-RLHF which can expand the safety zone. E-RLHF replaces the harmful prompts x_h into benign prompts x_s, and replace some of the terms in RLHF (and DPO) to make sure that the model keeps in the safety zone. Experimental results also show that it does not sacrifice the general capabilities but can be improved in the safety perspective.
Strengths: - This paper suggests a theoretical insight about the LLM jailbreaks, which were not addressed much in the previous literature on jailbreaking. The theoretical framework is sound and compelling.
- Based on this framework, the paper also suggests a simple training trick that can lead to better safety training.
- Empirical results also show that their idea is working well, shown by Harmbench, AdvBench, and MT-Bench scores.
- The paper provides extensive evaluation results on jailbreaking setups, providing the results from more than 10 attack setups.
Weaknesses: - E-RLHF is an inaugural and simple form of expanding the safety zone of LLM; I think there could be more sophisticated and effective ways, and I hope the authors will address this in the future works.
- Other than that, I think there is no big weakness in the paper, but have some minor comments:
- Eq (2): it is slightly confusing that in D_KL, the term only have p_LM(x) and p_SFT(x), not p_LM(e|x) and p_SFT(e|x).
- About writing: I am not familiar with using the term "explanation" -- instead, I think using "response" is more common. At the first glance, it was hard to comprehend the meaning of the term.
Technical Quality: 3
Clarity: 3
Questions for Authors: No specific questions about the paper.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors provided limitations section in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer qvRD,
We sincerely thank you for reading our work in great detail! Here are our responses to your concerns.
*1. E-RLHF is an inaugural and simple form of expanding the safety zone of LLM; I think there could be more sophisticated and effective ways, and I hope the authors will address this in the future works.*
- We sincerely appreciate your feedback. This is a fantastic suggestion, and we would love to integrate a more detailed discussion on possible avenues for implementing an alignment strategy based on our theory in the final version of the paper. Some of our thoughts include:
**E-RLHF Formulation.** As outlined in our general response, our proposed E-RLHF is inspired by the realization that the KL-term in the current prevalent alignment method (RLHF) may inadvertently preserve harmful responses when the input prompt $x$ itself is harmful. The RLHF objective aims to align with human preferences (as reflected by the reward term) while maintaining helpfulness (as reflected by the KL-term). To enhance safety while preserving these characteristics, we propose setting $p_{\textrm{SFT}}(\cdot)$ to a safe distribution. We believe that maintaining the mathematical reward-plus-KL formulation is vital, and currently, we do not see a clear pathway for other formulations to achieve these objectives simultaneously.
**E-RLHF Implementation.** We are considering several alternative strategies to refine our implementation of E-RLHF. The first concerns the harmful prompt filtering step. Instead of our current approach, which involves prompting an LLM to assess whether an input prompt is harmful, a more straightforward method might involve sampling responses and labeling the prompt as harmful if the likelihood of a response being harmful surpasses a predefined threshold. Additionally, involving human annotators to manually design and identify harmful prompts from existing alignment datasets could be beneficial. Furthermore, considering that the determination of whether content is harmful can vary based on different backgrounds and contexts, the filtering process could also be made adaptive to these conditions. This adaptability is particularly crucial given the diverse applications of LLMs. By tailoring the filtering mechanisms to accommodate various contexts, we can enhance the safety of the responses generated by LLMs, ensuring they are appropriate and considerate across a spectrum of scenarios, which could improve utility and acceptance in global applications. The second pertains to the safe concept replacement step. Rather than our current method of safe prefixing, one could involve human annotators to rewrite harmful prompts or prompt an LLM to decompose-and-replace harmful prompts. We opted for safe prefixing in our paper due to its simplicity, which helps avoid excessive computational demands and reduces the need for intensive human labor. We believe that with effective prompt engineering (e.g., curating the prompt in a fashion similar to that demonstrated in Table 15 of [1]), this second step can be efficiently implemented using an LLM, which we aim to explore in future work.
However, we want to emphasize that our approach, despite its simplicity, strikes a balance between computational feasibility and the goal of expanding the safety zone. We are delighted to report that this simple intervention performed exceptionally well against state-of-the-art jailbreaking attack benchmarks. This outcome not only underscores the viability of our proposal but also establishes a promising baseline.
We acknowledge that our approach to expanding the safety zone is just one of many potential strategies, and we are excited to see other researchers incorporate our insights into their alignment strategies. With the integration of additional safety alignment data and the implementation of more sophisticated strategies, our method holds the potential to deliver even more impressive results.
We would also be delighted to see our theoretical framework be applied in other domains as suggested by reviewer T1QG.
*2. Eq (2): it is slightly confusing that in D_KL, the term only have $p_{LM}(x)$ and $p_{\textrm{SFT}}(x)$, not $p_{LM}(e|x)$ and $p_{\textrm{SFT}}(e|x)$.*
- We apologize for any confusion caused by our notation. Throughout the paper, we denote the distribution over responses as $p_{LM}(q,c)=p_{LM}(x)$. With this notation, and incorporating a reward model $r(x,e)$, the RL fine-tuning can be expressed either as $\mathbb E_{x\sim\mathcal D_s, e\sim p_{LM}(\cdot|x)}[r(x,e)-\beta\log\frac{p_{LM}(e|x)}{p_{\textrm{SFT}}(e|x)}]$, or as $\mathbb E_{x\sim\mathcal D_s}[\mathbb E_{e\sim p_{LM}(\cdot|x)}[r(x,e)]-\beta \mathbb D_{\textrm{KL}}(p_{LM}(x) || p_{\textrm{SFT}}(x))]$. It is important to note that the expectation over $e$ is used for computing the reward $r(x,e)$, while the $\mathbb D_{\textrm{KL}}$ serves to regularize $p_{LM}(x)$, ensuring it does not deviate significantly from $p_{\textrm{SFT}}(x)$. To avoid any further confusion, we will clarify this point in our final draft by using the second equation.
*3. I am not familiar with using the term "explanation" -- instead, I think using "response" is more common. At the first glance, it was hard to comprehend the meaning of the term*
- We apologize for the confusion. We use the term "explanation" as a counterpart to "concept", based on the empirical observation that, in most jailbreaking attacks currently considered by the community, the adversary seeks instructions or explanations for a single harmful attempt. We appreciate your feedback and will incorporate it by switching to the term "response" in our final draft to enhance readability and ensure our notation is easier to follow.
We hope these explanations address your concerns. Please let us know if you need further clarification, we would be happy to discuss further.
References
[1] Jailbreaking Black Box Large Language Models in Twenty Queries
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses, I will keep my rating.
---
Reply to Comment 1.1.1:
Title: Response to comments
Comment: Dear reviewer qvRD,
We appreciate the acknowledgements of our work and thanks for responding to our rebuttal! | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for taking the time to review our paper and providing valuable feedback. We appreciate the recognition of our established theoretical framework and the acknowledgment of the nuanced formulation of our results from all reviewers. We had to traverse several conceptual steps to overcome the challenge that the adversary can modify the input prompt in an unbounded, unconstrained way. Our key proposal on the decomposition of input prompts into the (query, concept) pair has enabled us to formalize the adversary mathematically. Within our framework, we offer a clear distinction between harmful and non-harmful prompts, elucidate the generalization capabilities of LLMs on unseen prompts, and demonstrate the difficulty and impossibility of defending against jailbreak attempts by presenting a novel statistical bound.
The reviewers have raised several important concerns that we are eager to address.
1. The connection between our theory and the proposed experimental strategy (T1QG, UXGw).
2. The simplicity and feasibility of our implementation (qvRD, UXGw).
3. The effectiveness of our E-RLHF proposal (sD4q).
We appreciate these insightful comments and are pleased to provide detailed clarifications to each of these points below.
**Connection of our framework to experiments.** We identified a significant limitation for safety inherent in the widely adopted alignment strategy, RLHF. Our analysis has traced the problem back to the KL term, which inadvertently ensures that even the optimal solution retains all harmful responses in the LLM's output. To address this, we have introduced an innovative modification to the KL term concerning harmful prompts. Our approach involves filtering out harmful prompts and replacing them with safer alternatives. We believe it is the most natural and effective solution to mitigate the identified risk. It is important to emphasize that our E-RLHF algorithm is inspired and fundamentally driven by our theoretical results.
**The safe prefix implementation.** Upon introducing E-RLHF, our next challenge was to devise an effective implementation. While we acknowledge the existence of more sophisticated methods (as discussed with reviewer qvRD), we opted for **a simple approach: appending a safe prefix to the harmful prompts.** We find this simplicity particularly compelling. Remarkably, with our strategy applied with a limited alignment dataset and optimized using DPO, we achieve significant improvements in safety. This opens the door to finding more nuanced, improved implementations, which have the potential to improve results further.
**Soundness and effectiveness of our E-RLHF proposal.** We used a **recently released jailbreak benchmark: HarmBench**. HarmBench assesses LLM safety using a suite of the most advanced jailbreak adversaries, and scores safety across diverse harm categories including but not limited to Cybercrime & Unauthorized Intrusion, Chemical & Biological Weapons/Drugs, Copyright Violations, Misinformation & Disinformation, Harassment & Bullying, Illegal Activities, and General Harm. The HarmBench dataset comprises 400 prompts. Additionally, we included the AdvBench first 100 subset, a dataset frequently used in previous research. We include results from both datasets to ensure completeness and robustness in our evaluation. Our method has shown significant improvements on both benchmarks and across all categories of harm without any task-specific adaptation.
We would be eager to summarize our contributions again as follows.
- **A new Theoretical Framework.** We build a novel framework to analyze LLM jailbreaking. This framework addresses challenges such as abstracting the LLM's generalization capability on unseen prompts, mathematically defining the adversary, and distinguishing between harmful and non-harmful prompts.
- **Theoretical evidence on the difficulty and impossibility of avoiding jailbreak attacks.** Following the nuanced construction of our framework, we offer theoretical evidence that highlights the inherent difficulties and impossibilities of completely avoiding LLM jailbreak attacks.
- **Insight into a RLHF Objective Drawback.** We provide a critical insight into the drawback of the current RLHF objective for safety, which exacerbates the problem for post-alignment LLMs.
- **Theory-Inspired Proposed Solution: E-RLHF.** In response to this insight, we propose an algorithmic framework, E-RLHF, designed to address and mitigate this drawback.
- **Effective Implementation and Results.** We implement a **simple yet effective** version of E-RLHF and demonstrate its superior safety performance **across a suite of diverse jailbreak adversaries without task specific adaptations**.
Finally, we want to take the opportunity to emphasize the importance of our work.
- The adaptability of our E-RLHF. Our approach can be tailored to align with various cultural norms and contexts, such as in educational, medical, and legal settings. This customization could be achieved by filtering harmful prompts based on specific backgrounds and applying our safe prompt replacement strategy.
- Defense strategies are crucial for LLM deployment, and our insights underscore the vulnerability to (even) single interaction attacks. We hope this insight will spark further research. If perfect defense is impossible, we need to rethink how applications should be designed acknowledging these limitations. We hope our research can foster a more robust and thoughtful approach to the deployment of LLMs.
We sincerely thank all reviewers for their time on our rebuttal. We hope our response addresses your concerns and highlights the significance of our contributions, both theoretically and experimentally. We would be delighted to discuss further in the next week. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Functional Bilevel Optimization for Machine Learning | Accept (spotlight) | Summary: The authors propose a functional view of bilevel optimization in machine learning, in which the inner objective is often strongly convex for many problems of interest. The authors prove the resulting optimization algorithm's convergence and benchmark their approach for regression and model-based RL tasks.
Strengths: I'm borderline on this submission as the authors have invested substantial effort, but I'm not fully convinced. I am interested to see what other reviewers think.
1. The authors address an important problem with a range of applications. I'd additionally suggest adversarial robustness / adversarial games in general as another important bilevel optimization problem area.
2. The authors have a good balance of theory and experiments, with particularly extensive theoretical contributions (although I haven't had the bandwidth to check the proofs).
3. The writing and logical flow of the paper are solid, and the authors do a good job of making complicated theory comprehensible.
Weaknesses: 1. The authors seem to do a "bait-and-switch" when going from function space to parameter space. Namely, the authors repeatedly emphasize that in function space, the inner objective is generally strongly convex. However, as the authors point out, this is no longer true when moving to function parameterizations -- and all the experiments concern concrete parameterizations. I'm fairly lost here as to how the authors theoretically handle this jump, as all the theory seems to assume that we are operating in function space.
2. The assumptions in Theorem 3.1 seem very strong. Namely, it is assumed that the inner and adjoint problems are solved close to optimality; however, the inner optimization is nonconvex in the parameters (as discussed above).
3. The authors repeatedly scatter in comparisons to AID throughout the text. I think it would be good to have these all summarized in one table in the appendix (runtime, convergence guarantees, assumptions, etc.)
4. The experimental results are quite weak. Namely, buried in H.3 the authors experiment with boosting the training sample size by a factor of two, and in this setting FuncID seems to underperform DFIV (although FuncID linear is now better). It seems that the improvement over the state of the art is not very robust.
5. FuncID seems to require more outer iterations to converge than baselines (Figure 1b), which I don't see being discussed by the authors.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can you elaborate on $\epsilon_{in}$ and $\epsilon_{adjoint}$ in Theorem 3.1? I'm not sure what the expectation is taken over in the definition of line 1100.
2. Why is the prediction error of FuncID worse than MLE in Figure 7b?
Notes:
1. In (FBO), there's a space between the colon and the equals (use \coloneqq).
2. In line 94, link to where the assumption is discussed.
3. Font size in the figures is very small and hard to read.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I'd like to see an explicit limitations section. The authors' response to the limitations question in the checklist is that they state theorem assumptions -- this is not a comprehensive discussion of limitations, which should include practical experimental considerations as well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback. We will correct the typos, increase the font size in the figure captions, and add a link to the discussion of the assumptions as suggested. Additionally, we will include a discussion section in the main paper (provided in the general response) to address limitations and perspectives. We address other points below.
### From functional to parametric
Our approach addresses one of the two main challenges below (C1 and C2) in using deep networks for bilevel optimization. We outline these challenges and show how our method overcomes C2 using strong convexity in function space. These clarifications will be added to the text.
**A. Challenges of bilevel optimization with deep networks**
- **C1- Non-convexity of the lower-level optimization problem** is unavoidable with deep networks. However, several studies show this non-convexity to be 'benign' and ensure global convergence when optimising deep networks using gradient methods in the overparameterized regime (see general comments).
- **C2- Ambiguity of the bilevel problem (Arbel & Mairal 2022)**. Exact solutions at the lower level can be multiple due to over-parameterization. Thus, no 'implicit function' links the inner-level solution to the outer-level parameter, making it impossible to use the implicit function theorem to compute the total gradient.
**B. Algorithm derivation**. From a functional bilevel problem, our approach is to "Differentiate implicitly first, then parameterize," leading to funcID. The alternative, "Parameterize first, then differentiate implicitly," results in AID, as described below.
- **Differentiate implicitly first, then parameterize (FuncID)**. Functional strong convexity is used to apply the implicit function theorem in function space and derive the implicit gradient. This gradient is then estimated by approximating the lower-level solution and adjoint function using neural networks. While optimising these networks involves a non-convex problem (Challenge C1), the approach avoids Challenge C2 since the neural networks merely estimate quantities in the well-defined implicit gradient expression.
- **Parameterize first, then differentiate implicitly (AID)**. The inner-level function is constrained to be a NN, converting the problem into a non-convex ‘parametric’ bilevel problem where the lower-level variables are the NN parameters. Computing the implicit gradient requires viewing the optimal network parameter as an implicit function of the outer parameter. However, this implicit function does not exist due to multiple global solutions to the inner problem for a given upper variable $\omega$. Thus, this approach faces both challenges C1 and C2.
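To illustrate the "differentiate implicitly first" route numerically, here is our own minimal toy sketch (a quadratic, strongly convex inner objective, not the paper's FuncID implementation): the hypergradient is obtained via one inner solve and one adjoint solve, and checked against finite differences.

```python
import numpy as np

# Hypothetical toy bilevel problem with a strongly convex inner objective:
#   inner:  h*(w) = argmin_h  0.5 h^T A h - w^T h   =>   A h*(w) = w
#   outer:  F(w)  = 0.5 ||h*(w) - h_t||^2
# Implicit differentiation: solve the adjoint system
#   A a = -(h*(w) - h_t)       (A is the Hessian of the inner objective),
# then dF/dw = -a, since the cross-derivative of the inner objective is -I.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = M @ M.T + 3.0 * np.eye(3)   # symmetric positive definite => strong convexity
h_t = rng.standard_normal(3)    # outer-level target
w = rng.standard_normal(3)      # outer-level parameter

h_star = np.linalg.solve(A, w)                  # inner solution
adjoint = np.linalg.solve(A, -(h_star - h_t))   # adjoint solve
grad_implicit = -adjoint                        # hypergradient

def F(w_):
    """Outer objective evaluated at the exact inner solution."""
    return 0.5 * np.sum((np.linalg.solve(A, w_) - h_t) ** 2)

# Finite-difference check of the hypergradient
eps = 1e-6
grad_fd = np.array([(F(w + eps * e) - F(w - eps * e)) / (2 * eps)
                    for e in np.eye(3)])
```

In FuncID, the two linear solves above would be replaced by approximating the inner solution and the adjoint function with neural networks; errors in these approximations correspond to the $\varepsilon_{\text{in}}$ and $\varepsilon_{\text{adj}}$ of Theorem 3.1.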
### Other comments
**Optimality assumption in Thm 3.1**. The assumptions in Thm 3.1 align with recent findings that non-convexity is 'benign' when optimising over-parameterized NNs, ensuring that gradient methods converge linearly to global solutions. Thus, *in such a regime*, one can reduce the errors $\varepsilon_{\text{in}}$ and $\varepsilon_{\text{adj}}$ by optimising the inner and adjoint functions using gradient descent. We will clarify these points (see proposed limitation section).
**Comparison table between AID and FuncID**. For more clarity, we propose to merge section D and F of the appendix which both compare AID and FuncID from different aspects.
We will also summarize the comparisons already present throughout the paper between AID and *FuncID* into a single table that will be included in the paper.
**Experimental results**:
*FuncID vs. DFIV for instrumental regression*: to reach a more rigorous conclusion, we have performed statistical tests. For the 5K dataset, funcID outperforms DFIV (p-value=0.003, one-sided paired t-test), but for the 10K dataset in H.5, the difference between both approaches was not statistically significant. Overall, FuncID performs in the same ballpark as the state-of-the-art approach DFIV, which is specifically designed for the IV problem. We will add these observations in the discussion of the results.
*Robust/consistent improvement over AID/ITD*: all results, including those in Appendix H.3, show that FuncID outperforms commonly used bilevel optimization algorithms in ML (namely AID and ITD). These results were obtained by fairly allocating a budget for selecting the best hyperparameters for AID and ITD.
*Additional comparisons*: we have compared our method with a recent approach that handles non-convexity by considering an optimistic version of the bilevel problem and turning it to a penalised single-level problem (see general response for more details).
**Number of outer iterations**. Convergence in Fig 1 can be assessed by monitoring both inner-level and outer-level losses. The outer-level loss alone indeed does not indicate convergence unless it is evaluated at the ‘exact’ inner-level solution, which is generally inaccessible before convergence. We agree that this is a source of confusion. The part of Fig 1 from which it is easiest to draw conclusions is the out-of-sample MSE, which reflects generalization (see comment above).
**Q 1**. $\mathbb{E}$ denotes the expectation with respect to the random data samples. The quantities $\epsilon_{in}$ and $\epsilon_{adj}$ represent a bound on the optimality error of the approximate solutions $\hat{h}$ and $\hat{a}$. For instance, $\mathbb{E}[L_{in}(w,\hat{h}_w)- L_{in}(w,h_w^{\star})] \leq \epsilon_{in}$. The expectation accounts for the fact that these approximations are based on random samples. We agree that this deserves some clarification.
**Q 2**. It is expected that MLE has a smaller prediction error because it explicitly minimizes the prediction error in its objective. In contrast, the bilevel formulations (FuncID and AID) learn an MDP whose state-value function aligns with the true MDP.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their clarification. The comparison in points A and B above is really important, and it gets lost in the technical details of the paper. I strongly suggest featuring this prominently in the main paper body, and perhaps cutting from section 3.
I've raised my score. | Summary: The paper proposes a functional approach to bilevel optimization for machine learning, focusing on inner-level problems defined over function spaces rather than traditional parametric settings. This allows the application of the proposed method to machine learning tasks without requiring the strong convexity typically assumed in bilevel optimization.
Strengths: - The paper introduces a functional perspective to bilevel optimization, extending its applicability to settings where traditional assumptions (like strong convexity) do not hold.
Weaknesses: - The paper does not sufficiently compare the proposed methods against a broad spectrum of existing algorithms, particularly the latest advancements in the field of bilevel optimization problem. This lack of comprehensive benchmarking restricts the ability to fully evaluate the performance enhancements or potential drawbacks of the proposed methods relative to the state-of-the-art.
- The paper lacks a thorough analysis of how the proposed methods perform across varied settings and parameter configurations. It is not clear if there are specific scenarios where the methods might underperform or fail to converge. Additional details on the robustness of the methods in diverse operational environments would be beneficial.
- The assumptions necessary for the theoretical framework might not be easily verifiable in practical scenarios, potentially limiting the method's applicability.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can the proposed functional framework be generalized to other types of bilevel problems that involve constraints on the lower-level problem? How would you address potential non-smoothness or non-convexity in these extended settings?
2. How does the choice of function space impact the stability and convergence of the FuncID method?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Additional comparisons**. As suggested by reviewer *MXrv*, and in addition to the comparisons already made with the most widely used bilevel algorithms (AID, ITD, and variants) and SoTA methods for each problem (DFIV for the IV problem and MLE for model-based RL), we now additionally include a comparison with a recent approach for solving non-convex bilevel problems based on penalty methods (see general response). The new results are, on average, consistent with the previous ones and are still in favour of *FuncID*, which exploits functional strong convexity. We would be happy to include additional methods if you think they would be relevant.
**Relevant settings for FuncID**. We expect *FuncID* to outperform AID in settings where the bilevel problem has a ‘hidden’ strong convexity in functional space. That is simply because AID does not exploit such strong convexity, while *FuncID* does. While this setting covers many practical scenarios (such as those considered in the paper: two-stage least squares regression and model-based RL), we do not expect particular improvements in the absence of such structure. We will make that clear in the text and limitations section.
**Verifiable assumptions**. We agree that the assumptions in Prop. 2.3 might seem complex, but they are easily verifiable through standard calculations. In Proposition E.1 of the appendix, we verify that these assumptions hold for regression problems involving feedforward networks when using quadratic objectives. The verification only requires computing derivatives and upper-bounding them, and could be applied to other problems similarly. We will make sure this is clear in the text.
**Extensions**. Thank you for raising these points, we discuss them in the future work section (presented in the general response). Extending the framework to a constrained inner problem or non-smooth setting should be possible, if the uniqueness of the solutions in functional space is preserved. However, this would require introducing additional tools to handle non-smoothness/constraints such as those from the recent works on non-smooth bilevel optimization [Bolte et al., 2022] and would be an interesting future work direction.
**Choice of the function space**. The function space we consider is motivated by existing bilevel problems that are already formulated in such spaces (of which we consider 2 examples in the applications). Using different spaces would require a different analysis which is beyond the scope of this work but would certainly be interesting for future work as discussed in the new limitation/future work section (provided in the general response).
---
Rebuttal Comment 1.1:
Comment: Appreciate authors for their complete response and clarification. I am satisfied with their response and maintain my score. | Summary: This paper introduces a novel functional perspective on bilevel optimization, where the inner objective is defined over a function space. The authors developed functional implicit differentiation and functional adjoint sensitivity, which together facilitate the establishment of a gradient-based algorithm in the functional space. They also analyze the convergence rate of the proposed algorithm and apply it to two-stage least squares regression and model-based reinforcement learning. Experimental results validate the effectiveness of the proposed method.
Strengths: 1. The proposed method offers a new insight into solving bilevel optimization problems with nonconvex lower-level objectives by leveraging their strong convexity in the functional space. This is particularly noteworthy because, although neural networks are nonconvex, the loss function in model training can be convex or strongly convex.
2. This paper provides a heuristic approach with both theoretical and practical impact. The proposed method not only has a convergence guarantee but is also implementable in real-world applications. The two applications chosen in this paper are also novel: the first has potential impacts on causal representation learning, while the second provides a new perspective on model-based reinforcement learning.
Weaknesses: 1. The convergence analysis in this paper is based on the stochastic biased gradient descent framework, making the results explicitly dependent on the sub-optimality constants $\epsilon_{in}$ and $\epsilon_{adj}$. However, it is unclear how these errors relate to the inner-loop iteration counts $M$ and $K$. It might be beneficial to leverage the strong convexity of the inner and adjoint objective functions to clarify these dependencies. See similar techniques used in [1]-[3].
[1] K. Ji, J. Yang, and Y. Liang. Bilevel optimization: Convergence analysis and enhanced design. ICML, 2021.
[2] T. Chen, Y. Sun, and W. Yin. Closing the Gap: Tighter Analysis of Alternating Stochastic Gradient Methods for Bilevel Problems. NeurIPS, 2021.
[3] M. Dagréou, P. Ablin, S. Vaiter, T. Moreau. A framework for bilevel optimization that enables stochastic and global variance reduction algorithms. NeurIPS, 2022.
2. Minor issues: experimental baselines. It might be better to also compare with bilevel methods that are capable of solving nonconvex lower-level problems [4]-[5], as they could also potentially solve the two-stage least squares regression and model-based reinforcement learning problems.
[4] J. Kwon, D. Kwon, S. Wright, and R. D. Nowak. On penalty methods for nonconvex bilevel optimization and first-order stochastic approximation. ICLR 2024.
[5] H. Shen, and T. Chen. On Penalty-based Bilevel Gradient Descent Method. ICML 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: Same as weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Same as weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Error analysis**. Thank you for pointing out these references, we will make sure to discuss them. We agree that the result of Thm. 3.1 does not provide an explicit dependence of the errors on the inner-level optimization. Providing such dependence would require introducing another level of technical complexity, beyond the techniques used in *[1]-[3]* which are tailored for the strongly convex case in a parametric setting. In our case, one would instead need to use quantitative approximation results of functions in $L_2$ spaces by NNs [Bach 2017], as well as global convergence results for NNs [Allen-Zhu et al., 2019, Liu et al., 2022]. Such analysis would require substantial effort that is best suited for a separate future work. We discuss this in the future work section (see the general response).
**Additional comparison**. Thank you for suggesting these methods. We performed an additional comparison on the Instrumental Variable (IV) problem (see Fig. 1 in the pdf file). These methods handle non-convexity by considering an optimistic version of the bilevel problem and turning it into a penalized single-level problem. However, they do not exploit the functional strong convexity of the IV problem. Consequently, the new results, on average, still favor *FuncID*, which exploits functional strong convexity as shown in the general response.
[Bach 2017] Bach, F. Breaking the curse of dimensionality with convex neural networks. Journal of Machine Learning Research, 18(19), 1-53 2017.
[Allen-Zhu et al. 2019] Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song. A Convergence Theory for Deep Learning via Over-Parameterization. ICML 2019.
[Liu et al. 2022] Chaoyue Liu, Libin Zhu, Mikhail Belkin. Loss landscapes and optimization in over-parameterized non-linear systems and neural networks. Applied and Computational Harmonic Analysis, 2022.
---
Rebuttal Comment 1.1:
Title: Reviewer response?
Comment: Reviewer MXrv, could you please review the authors' response and see whether it addresses your questions? Please acknowledge having done so in a comment. Thanks.
---
Rebuttal Comment 1.2:
Comment: I thank the authors for their detailed responses. It resolves all of my concerns, so I will raise my score. | Summary: This paper offers a new functional point of view on bilevel optimization problems in machine learning. This functional approach allows the use of an overparameterized neural network as the inner prediction function, while previous works have used an inner objective that is strongly convex with respect to the parameters of the prediction function. For the inner problem, the prediction function lies in a Hilbert space of square-integrable functions ($L^2$). The authors develop the theory of Functional Implicit Differentiation, which yields a flexible class of algorithms for functional bilevel optimization over $L^2$ spaces. First, they show that a strong convexity assumption with respect to the prediction function, as opposed to the model parameters, ensures the existence and uniqueness of the optimal prediction function $h_\omega^\ast$ for the inner-level objective. Second, they show that differentiability assumptions on the inner-level objective and its Fr\'echet derivative with respect to the prediction function $h$ ensure the differentiability of the map $\omega \rightarrow h_\omega^\ast$. Given a further assumption of joint differentiability of the outer objective, they show that the gradient of the total objective $\mathcal{F}$ can be computed using the adjoint function that minimizes a quadratic objective over a Hilbert space.
Finally, assuming that the inner and outer objectives are defined over distributions from which batches of data can be sampled, an iterative algorithm is proposed with the following three steps: (1) approximation of the inner objective, the outer objective, and the quadratic objective for the adjoint function using the batch samples; (2) gradient-based updates of the parameters of the prediction function and the adjoint function, assuming they have a fixed parametric form; and (3) total-gradient approximation and a gradient-based update of the functional parameter $\omega$ of the outer objective. They show that this class of algorithms converges to a stationary point at an $\mathcal{O}(1/N)$ rate. Experiments are performed using two-stage least squares regression and model-based reinforcement learning as use cases.
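The implicit-gradient machinery underlying this scheme can be illustrated on a toy quadratic problem where the total gradient is known in closed form (the objectives, dimensions, and step sizes below are illustrative stand-ins, not the paper's actual functional setup):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 2))
b = rng.normal(size=3)

# Illustrative toy objectives (stand-ins, not the paper's):
#   inner:  f_in(w, th)  = 0.5 * ||th - A @ w||^2   ->  th*(w) = A @ w
#   outer:  f_out(w, th) = 0.5 * ||th - b||^2
# Closed-form implicit gradient for checking:  dF/dw = A.T @ (A @ w - b)

def implicit_gradient(w, inner_steps=200, adj_steps=200):
    # (1) approximate the inner solution th*(w) by gradient descent
    th = np.zeros(3)
    for _ in range(inner_steps):
        th -= 0.5 * (th - A @ w)         # step of size 0.5 along -grad_th f_in
    # (2) solve the adjoint quadratic  min_a 0.5 * a @ H @ a + a @ grad_th f_out,
    #     where H = hessian_th f_in = I for this toy problem
    a = np.zeros(3)
    for _ in range(adj_steps):
        a -= 0.5 * (a + (th - b))        # step along -grad of the adjoint objective
    # (3) assemble the total gradient:
    #     grad_w f_out (= 0 here) + (d/dw of grad_th f_in)^T @ a, with d/dw grad_th f_in = -A
    return -A.T @ a

w0 = np.ones(2)
g = implicit_gradient(w0)
print(np.allclose(g, A.T @ (A @ w0 - b)))   # the adjoint route recovers the implicit gradient
```

Here the adjoint solve replaces an explicit inversion of the inner Hessian; in the paper's functional setting, both the inner solution and the adjoint are instead approximated by neural networks.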
Strengths: The paper makes a substantial contribution by developing the theory of functional bilevel optimization with less restrictive assumptions of strong convexity with respect to the prediction function as opposed to model parameters. This is useful because it allows the prediction function $h_\omega$ to be modeled by deep neural network, which has a non-convex training objective with respect to the model parameters. This paper could lead to more research into optimization over function spaces as strong convexity over model parameters is a restrictive assumption in practice.
Weaknesses: The paper is very technical and requires a good understanding of monotone operator theory and theory of Fr\'echet and Hadamard differentiability to understand it fully. Still, the technical details about the existence and uniqueness of the prediction function and the map $\omega \rightarrow h_\omega^\ast$ are deferred to the appendix for interested readers. I would recommend the authors simplify the notation in the main paper a bit to reduce the clutter. For instance, $h_\omega^\ast(x)$ can be represented simply as $h_\omega$. It may help to have a table that shows the functional arguments (e.g. $h_\omega$, $a_\omega$, etc.), functional parameters $\theta$, $\xi$ and $\omega$ and then function variables $x$ and $y$.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. On Line 126, there is a typo: "elemen" should be replaced by "element".
2. On Line 629, capitalize the word "euclidean".
3. What purpose does the variable $y$ and space $\mathcal{Y}$ serve in general? Is it a variable or a parameter? Your prediction function $h_\omega$ is a function of $x$. In the case of 2SLS, you have $y = (t, o)$ and $x$ is the instrumental variable. The function you are interested in is $f_\omega(t)$ and is a function of $t$, which is a subvariable of $y$ and $h_\omega(x) = \mathbb{E}[f_\omega(t) | x]$. Maybe it will help to clarify what $y$ and $\mathcal{Y}$ are in the theoretical section.
4. What is the rationale for the label "FuncID linear" in Figure 1? Why do you refer to it as "linear"?
5. Why do you feel "FuncID linear" converge faster in terms of outer iterations than "FuncID" in Figure 1?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors did not discuss limitations or provide a conclusion section; the paper simply ends with a results section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for thoroughly reading our work and giving us helpful feedback. Taking your feedback into account, we will simplify the notation and include a notation table. We agree that this could help the reader get a quick grasp of the mathematical objects considered. We will also include a discussion section (presented in the general response) with a paragraph on the limitations of our work. Below, we address your comments in detail.
**Typos**. We will fix the typos in the final version.
**Purpose of $y$ and $\mathcal{Y}$**. Data is represented by pairs $(x,y)$, noting that the function $h$ takes only $x$ as input. Thus, $x$ and $y$ are both random variables. In the simplest supervised learning setup (see *Eq. 1* for instance), $y$ is a label living in a space $\mathcal{Y}$ and $h(x)$ tries to predict $y$. The theory of Section 3 is however more general, and $y$ can serve other purposes. For instance, in 2SLS, $y$ is made of two variables $t$ and $o$, whose causal relationship is described in Fig. 4 (App. H). We admit that the setup of 2SLS Instrumental Variable regression is a bit particular and can be confusing. We will clarify these points in the final version of the paper.
**Difference between *FuncID* and *FuncID linear***. *FuncID linear* uses a linear model to approximate the adjoint function, while *FuncID* uses a trainable neural network. The linear model is obtained by learning a linear combination of the frozen penultimate layer of the current prediction function $h$ (modelled as a neural network). We will clarify this in the text.
**Why *FuncID linear* converges faster**. Following the previous explanation, *FuncID* optimizes all adjoint network parameters, while *FuncID linear* learns only the last layer in closed form. *FuncID linear* converges faster because solving a linear system is computationally faster than iteratively approximating the full adjoint function using a neural network. However, *FuncID linear* is less expressive for approximating the adjoint, which may result in suboptimal solutions that could explain the performance gap in terms of test error.
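As a sketch of what "learning only the last layer in closed form" amounts to (the features, targets, and ridge term below are illustrative placeholders, not our actual adjoint objective):

```python
import numpy as np

rng = np.random.default_rng(1)
Phi = rng.normal(size=(100, 16))   # frozen penultimate-layer features (placeholder)
targets = rng.normal(size=100)     # regression targets for the adjoint fit (placeholder)
lam = 1e-3                         # small ridge term for numerical stability

# Closed-form linear readout: w = (Phi^T Phi + lam * I)^{-1} Phi^T targets
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(16), Phi.T @ targets)
adjoint_approx = Phi @ w           # linear-in-features approximation of the adjoint
```

This single linear solve replaces the inner gradient loop that *FuncID* runs to train the full adjoint network, which is why the linear variant is cheaper per outer iteration.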
---
Rebuttal 2:
Comment: Thank you for your responses to my questions and for the clarifications! Good to hear that the typos will be fixed in the final draft and that the missing details will be added to the final draft! I am also glad that you are adding a detailed discussion/conclusion at the end as it is important to leave the reader with some take-away points from your paper. Good luck! | Rebuttal 1:
Rebuttal: # General comments
We thank the reviewers for their useful feedback. We now list the main changes made to the paper.
## Discussion section (limitations and perspectives).
We agree that such a section is important, and propose to include the following discussion:
### Discussion and concluding remarks
This paper introduces a functional paradigm for bilevel optimization in machine learning, shifting the optimization focus from the parameter space to the function space. This new approach addresses the limitations of traditional bilevel optimization methods when dealing with over-parameterized neural networks. Specifically, the proposed method exploits the functional strong convexity of certain bilevel problems to derive an abstract, yet approximable, expression for the implicit gradient that requires solving both an inner and an adjoint optimization problem in functional space. Approximation is achieved by restricting both problems to a flexible class of functions, such as neural networks. The paper establishes the validity of this approach by developing a theory of functional implicit differentiation and providing a general convergence result for the proposed method. Despite these contributions, we discuss several limitations of our work and highlight potential research directions.
**Hyperparameter selection**. One notable limitation is the presence of multiple hyperparameters in the proposed algorithms. This is a common challenge shared by all bilevel optimization methods, complicating the practical implementation and tuning of these algorithms. Selecting and optimizing these hyperparameters can be time-consuming and may require extensive experimentation to achieve optimal performance.
**Convergence guarantees**. The result in Theorem 3.1 relies on the assumption that both the inner and adjoint optimization problems are solved up to some optimality errors. This assumption is motivated by recent global convergence results for over-parameterized networks [Allen-Zhu et al., 2019, Liu et al., 2022]. Although over-parameterized networks are ubiquitous in the machine learning literature, it is unclear to what extent this optimality assumption remains realistic beyond such settings. Moreover, the result in Theorem 3.1 does not explicitly relate these optimality errors to the optimization procedure used for the inner and adjoint problems. A precise quantification of these errors would be valuable to strengthen the theoretical foundations of the proposed methods and provide principled guidelines for the choice of hyperparameters. These theoretical considerations do not prevent one from applying the method even beyond the setting where the convergence results hold, much as with popular bilevel algorithms such as AID or ITD.
**Choice of the function space**. Another important consideration is the choice of the function space in the functional bilevel optimization framework. While we primarily focus on $L_2$ spaces, there are numerous other function spaces that could be explored, such as Reproducing Kernel Hilbert Spaces and Sobolev spaces. Investigating these alternative spaces may reveal additional advantages and open the way for a broader class of machine learning applications where higher-order derivatives of a prediction function appear naturally in the objectives.
**Non-smooth/constrained setting**. The proposed method primarily focuses on smooth and unconstrained problems, but many practical machine learning applications involve non-smooth objectives or constraints. Extending the proposed framework to handle these scenarios would significantly broaden its applicability. Notably, the works on non-smooth implicit differentiation [Bolte et al., 2022] could perhaps be leveraged to adapt our methods to non-smooth settings. Future work should explore these opportunities to further enhance the flexibility and applicability of the functional bilevel optimization approach.
## Additional experiment (see attached PDF).
As suggested by reviewer *MXrv*, we have compared to a bilevel penalty-based method for the Instrumental Variable application. We use two variants and perform an extensive grid search to adjust their hyper-parameters:
1. **Gradient penalty** (Eq. 5.1 in [5], Shen et al. 2023): we perform a grid search over the following hyper-parameters: learning rate [0.01, 0.001, 1e-4, 1e-5, 1e-6]; weight decay [0., 0.1, 0.01, 0.001]; penalty constant [1, 0.1, 0.01, 0.001]. Since the method has only a single optimization loop, we increase the number of total iterations to 2000 compared to the other methods (100 outer iterations). The rest of the parameters are the same for all methods.
2. **Value function penalty** (Eq. 3.2a in [5]): we use the same grid search and additionally test the number of inner steps in [10, 20]. Since the method has a double optimization loop, we use 100 outer iterations. The rest of the parameters are the same for all methods.
This method handles non-convexity by considering an optimistic version of the bilevel problem and turning it into a penalized single-level problem. *FuncID* performs significantly better than the gradient penalty, whereas it is in the same ballpark as the value function penalty method or better (lower mean on 5k/10k, but higher median for 5k). Notably, the value function penalty seems to have a high variance, with some particularly bad outliers, despite the extensive grid search for tuning hyper-parameters. These conclusions will be added to the paper.
[Allen-Zhu et al. 2019] Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song. A Convergence Theory for Deep Learning via Over-Parameterization. ICML 2019.
[Liu et al. 2022] Chaoyue Liu, Libin Zhu, Mikhail Belkin. Loss landscapes and optimization in over-parameterized non-linear systems and neural networks. Applied and Computational Harmonic Analysis, 2022.
[Bolte et al., 2022] Jérôme Bolte, Edouard Pauwels, Samuel Vaiter. Automatic differentiation of nonsmooth iterative algorithms. Neurips. 2022.
Pdf: /pdf/19c858dd3e94e39f6a3011fea6e219f74c21c42c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Poisson Variational Autoencoder | Accept (spotlight) | Summary: The paper proposes a variation on variational auto-encoders with a Poisson distribution over the latents. To make the model differentiable they use a differentiable sampling of the latent variables where the indicator function is replaced with a continuous approximation. They take inspiration from biological neural networks to develop the model and make connections to sparse coding to argue that their model can be used to understand sensory processing in the brain. Their experiments examine various key aspects of their model and compare to other significant VAE baselines. They provide evidence that their model learns sparse basis vectors for image datasets when compared to popular sparse coding algorithms and their model learns latent representations that perform better than baselines on downstream classification tasks.
Strengths: - A Poisson variational auto-encoder is a somewhat novel, though not especially original, contribution to the VAE zoo that can clearly be situated among other VAE models. The connection to sparse coding and brain representations also nicely motivates the utility of the model for studying representations in the brain.
- The paper is very well written and was, for the most part, easy to follow. The model is well explained, and the figures served well to aid understanding.
- The experimental evaluation was quite thorough, with the results providing strong evidence to support the paper's claims. The systematic analysis of various aspects of the model (sparseness of representations and utility for downstream tasks) was well done and made clear to the reviewer what the capabilities and limitations of the model are.
Weaknesses: - A significant weakness is the lack of study of the effect of the temperature parameter in the Poisson re-parameterization. How does this need to be set, and what are the consequences of different temperatures? The z variables on line 6 of Algorithm 1 aren't integers, so how does changing the temperature affect the gradient? Appendix A.3 does describe how the temperature is annealed during training, but no motivation for this approach is provided, even though the temperature is a key aspect of their method.
- Also, the lack of discussion of the noise level that needs to be set for the likelihood $p(x | z)$ makes it unclear how the model can be re-configured for other datasets and what the consequences would be. What if we believed that the noise level in the data was much less than one (as is assumed with the Gaussian likelihood conditioned on the decoder output)?
Technical Quality: 4
Clarity: 4
Questions for Authors: - The outputs of the decoders in Fig. 1 should have noise added to them to illustrate that the likelihood isn't a delta distribution.
- Fig 1b is a bit confusing as the sampling of the latents isn't shown whereas it was shown in Fig 1a.
- Maybe Table 4 should be added to the main paper as it is the first experiment that is referred to and the reparameterization gradient is a crucial aspect of the model.
- Can a discussion be added on the slight differences between the P-VAE basis vectors and those of LCA and ISTA in Figure 2? Also what is the gold standard here? Is there a quantitative metric or can this only be determined qualitatively.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have sufficiently discussed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and thoughtful comments.
> A significant weakness is a lack of study of the effect of the temperature parameter in the Poisson re-parameterization.
We agree. Please see our new rebuttal results, where we performed extensive experiments to address this point. We will include these new results in the paper.
> The z variables on line 6 of Algorithm 1 aren't integers…
Good point. This is true during training, but not during test time. We would like to emphasize that during validation, we always set $T = 0$, which results in integer samples drawn from a true Poisson distribution. We have included a discussion of this point and a figure in the global rebuttal and will integrate that into the final paper.
Related to this point, in Fig. R1b we plot the distribution of samples at various temperatures, which shows the samples are indeed not integers for $T > 0$, but they approach integer values when $T \rightarrow 0$. Therefore, even during training, we can obtain "almost-integer" values if we anneal down to low temperatures. Interestingly, this does affect performance, and having these subtle non-integer values during training even improves model performance at test time with hard integers (Fig. R1a).
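To illustrate why annealing yields almost-integer samples, here is a minimal sketch of a temperature-relaxed arrival count (simplified relative to Algorithm 1; the tanh form of the sigmoid is a numerical convenience of this sketch):

```python
import numpy as np

def soft_count(deltas, T):
    """Count arrivals in [0, 1]: hard indicator at T == 0, sigmoid relaxation otherwise."""
    arrivals = np.cumsum(deltas)                  # event times of the process
    if T == 0:
        return float(np.sum(arrivals <= 1.0))     # exact Poisson sample (up to truncation)
    # sigmoid((1 - t) / T), written via tanh to avoid overflow at small T
    return float(np.sum(0.5 * (1.0 + np.tanh((1.0 - arrivals) / (2.0 * T)))))

rng = np.random.default_rng(0)
deltas = rng.exponential(1.0 / 4.0, size=50)      # rate-4 inter-arrival times (max 50 events)

z_hard = soft_count(deltas, T=0)      # integer count
z_warm = soft_count(deltas, T=0.5)    # smooth, generally non-integer
z_cold = soft_count(deltas, T=1e-6)   # numerically indistinguishable from the hard count
```

At $T = 0$ the count is an exact Poisson sample; for small $T > 0$ the relaxation remains smooth in the arrival times yet is numerically close to an integer, matching the behavior in Fig. R1b.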
> …how does changing temperature affect the gradient?
We weren't sure exactly how to address this. We find that the model performance is robust for a range of values of $T_\mathrm{final} < 0.1$ (Fig. R1a), but we did not directly evaluate the effects on gradients. We are happy to address this more directly if you can suggest an analysis. Otherwise, we're inclined to interpret the performance results (Fig. R1a) instead of analyzing the gradients directly.
> …a lack of discussion of the noise level that needs to be set for the likelihood $p(x \vert z)$...
We construct our likelihood function, $p(x \vert z) = \mathcal{N}(x; \mathrm{dec}(z), \sigma^2)$, by learning the Gaussian mean but using the same fixed variance for every pixel. Specifically, we chose $\sigma = 1/\sqrt{2}$, such that $-\log p(x \vert z) = ||x - \mathrm{dec}(z)||_2^2$ up to an additive constant. This choice of fixed variance is fairly standard in the VAE literature; however, others have drawn attention to the "disheartening" limitations of this approach ([Arvantidis et al., 2018](https://openreview.net/forum?id=SJzRZ-WCZ)), and we share their sentiment. We will add these comments to the paper to further clarify our choices.
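A quick standalone numerical check of this reduction (not our training code; the dimensionality is arbitrary):

```python
import numpy as np

# With sigma^2 = 1/2, the Gaussian NLL reduces to squared error plus a constant:
#   -log N(x; mu, (1/2) I) = ||x - mu||_2^2 + (D/2) * log(pi)

rng = np.random.default_rng(0)
D = 8                                   # arbitrary dimensionality for the check
x, mu = rng.normal(size=D), rng.normal(size=D)
sigma2 = 0.5

nll = np.sum((x - mu) ** 2 / (2 * sigma2) + 0.5 * np.log(2 * np.pi * sigma2))
sq_err = np.sum((x - mu) ** 2)
const = 0.5 * D * np.log(np.pi)
print(np.allclose(nll, sq_err + const))   # True
```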
> The outputs of the decoders in Fig. 1 should have noise added to them to illustrate that the likelihood isn't a delta distribution.
Thank you for pointing this out. We will edit the figure to highlight this.
> Fig 1b is a bit confusing as the sampling of the latents isn't shown whereas it was shown in Fig 1a.
Thank you for pointing this out. We will edit the figure to make it more consistent.
> Maybe Table 4 should be added to the main paper…
We plan to include a reduced version of Table 4 (or Fig. 4), along with Fig. R1b, in the main paper. The goal is to emphasize that, even though the approximate posterior is "relaxed Poisson," the final performance is almost as good as using exact gradients for training.
> Can a discussion be added on the slight differences between the P-VAE basis vectors and those of LCA and ISTA in Figure 2? Also what is the gold standard here? Is there a quantitative metric or can this only be determined qualitatively.
There is not really a gold standard here and, surprisingly, most of the sparse coding literature evaluates dictionaries qualitatively. One way to quantify the differences and similarities between pairs of dictionaries is by fitting parametric Gabor functions. These Gabor fits allow extracting and studying parameters such as orientation, spatial frequency, size, and location for each basis element. We briefly explored this direction for the rebuttal results. Please see Fig. R1e.
We believe this line of inquiry deserves a more thorough investigation. We are happy to add a comparison of the distributions of Gabor fits to the different dictionaries if the reviewer thinks that would be helpful. However, we are inclined to leave this direction as future work.
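For concreteness, a minimal parametric Gabor of the kind such fits would use (the parameterization below is one common convention; the one used for Fig. R1e may differ in details):

```python
import numpy as np

def gabor(size, theta, freq, sigma, phase=0.0):
    """Oriented sinusoidal carrier under an isotropic Gaussian envelope."""
    ax = np.linspace(-1, 1, size)
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)    # rotated coordinates
    yr = -xx * np.sin(theta) + yy * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * freq * xr + phase)
    return envelope * carrier

patch = gabor(size=16, theta=np.pi / 4, freq=3.0, sigma=0.3)
```

Fitting then amounts to nonlinear least squares of such a patch against each basis element (e.g., with `scipy.optimize.least_squares`), after which parameter distributions (orientation, frequency, size, location) can be compared across dictionaries.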
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for addressing my comments.
The additions and changes proposed for the paper would certainly strengthen it and so I will raise my score to a 7 | Summary: This work introduces VAEs with Poisson-distributed latent variables and a Poisson reparameterization for efficient training. This approach has theoretical and empirical connections with sparse coding and behaves more similarly to biological networks that rely on discrete spike counts.
Strengths: To the best of my knowledge, the method is original and well motivated. The paper is well written and generally clear. Though I cannot speak to the significance of this work from a neuroscience perspective, introducing Poisson latents is interesting in itself from a probabilistic models perspective. The experimental results are relatively extensive and there are several ablations.
Weaknesses: - Posterior collapse in VAEs (in the sense of "some latent dimensions are not used") is not necessarily an issue per se. If this is an issue in the specific scenarios considered in this paper, I think this should be clarified.
- For the downstream classification tasks, I appreciate the experiments with different latent space sizes. However, using KNN as downstream classifier is a quite specific choice, and I would argue a simple linear probe is more common in the literature and seems like an intentionally missing baseline.
- While my expertise is in machine learning rather than neuroscience, I appreciate the value of ANNs that mimic biological networks. However, from an ML perspective, it would be helpful to discuss any potential challenges or limitations of this method (besides the limitations currently mentioned). The paper currently highlights the advantages of using Poisson latents in various aspects, which may seem overly optimistic. Since the goal is to bridge ANNs with biological systems rather than outperform benchmarks, a balanced discussion of both strengths and weaknesses would provide a more comprehensive perspective and would IMHO add value to this work.
- Since the main motivation is to learn brain-like representations, are there any datasets or experiments where this comparison with biological representations could actually be done? This might be the biggest missing part right now. If this is not feasible at all, it would also be fine to at least include a discussion for readers without a neuroscience background.
- More broadly, connections to neuroscience, as well as jargon, should be made more explicit/clear for pure ML people (e.g. "Gabor-like feature selectivity")
- Notation: at some point the authors start using $\boldsymbol{r}$ and $\boldsymbol{\delta}$ without introducing them in the main text (though the notation is better explained in Appendix). I would even recommend using $\boldsymbol{\delta}_r$ instead of $\boldsymbol{\delta r}$ which looks like a product, and is especially confusing when writing $\boldsymbol{r \delta r}$.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Some limitations are addressed, but overall the presentation of the experimental results doesn't seem too balanced.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and thoughtful comments.
> Posterior collapse in VAEs (in the sense of "some latent dimensions are not used") is not necessarily an issue per se. If this is an issue in the specific scenarios considered in this paper, I think this should be clarified.
We agree that the concern over posterior collapse depends on the application and we will clarify this in the final text. Nevertheless, addressing the posterior collapse issue remains an active area of research ([He et al., 2019](https://openreview.net/forum?id=rylDfnCqF7), [Lucas et al., 2019](https://proceedings.neurips.cc/paper/2019/hash/7e3315fe390974fcf25e44a9445bd821-Abstract.html), [Razavi et al., 2019](https://openreview.net/forum?id=BJe0Gn0cY7), [Menon et al., 2022](https://openreview.net/forum?id=SrgIkwLjql9), just to name a few).
In our review of existing literature at the intersection of VAEs and sparse coding, we found that posterior collapse has been a recurring issue there as well. Often, sparse coding results are evaluated based on what features are learned by the dictionary elements. For example, both [Csikor et al., 2023](https://www.biorxiv.org/content/10.1101/2023.11.29.569262v2) and [Geadah et al., 2024](https://www.biorxiv.org/content/10.1101/399246v3), used Laplace-distributed latents in VAEs, aiming to learn Gabor-like feature selectivity. However, they did not show the full set of dictionary elements. Our experiments revealed that ~80% of latent dimensions collapse for the L-VAE, resulting in noisy and therefore useless basis elements. This significantly deviates from the classical sparse coding results.
One implication of our results for future work is that L-VAE should be avoided in favor of P-VAE if the goal is to develop a diverse set of dictionary elements that are reminiscent of classical sparse coding. We will add a discussion of why posterior collapse is relevant in our case to the final paper.
> …using KNN as downstream classifier is a quite specific choice, and I would argue a simple linear probe is more common in the literature and seems like an intentionally missing baseline.
We reasoned that KNN is a good choice for evaluating the learned representations because it is a non-parametric method and its performance is directly influenced by the geometry of representations—which is what we were interested in evaluating.
For a more complete model evaluation, we performed simple logistic regression classification as part of the rebuttal results (Fig. R1c). We found that P-VAE achieves the best overall performance for a latent dimensionality of $K = 100$. But for $K = 10$, both L-VAE and G-VAE outperform P-VAE. We plan to include these new results in Tables 3 and 5, alongside the KNN and shattering dim results.
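To make the probe comparison concrete, here is a minimal NumPy sketch of a KNN probe on learned representations (a hypothetical helper for illustration, not the paper's evaluation code). Because it is non-parametric, its accuracy depends only on the geometry of the representation, which is the property under study:

```python
import numpy as np

def knn_accuracy(train_x, train_y, test_x, test_y, k=5):
    """Non-parametric probe: majority vote among the k nearest training points."""
    # pairwise Euclidean distances between test and train representations
    dists = np.linalg.norm(test_x[:, None, :] - train_x[None, :, :], axis=-1)
    nn = np.argsort(dists, axis=1)[:, :k]  # indices of the k nearest neighbors
    votes = train_y[nn]                    # their class labels
    preds = np.array([np.bincount(v).argmax() for v in votes])
    return float((preds == test_y).mean())
```

A linear probe (e.g., logistic regression on the same features) is parametric and can reweight dimensions, so the two probes can rank models differently, which is consistent with the Fig. R1c results described above.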
> …a balanced discussion of both strengths and weaknesses would provide a more comprehensive perspective and would IMHO add value to this work.
We agree. However, we thought our presentation was fairly balanced. For example, we show in Fig. 6 that both the quality of generated samples, and reconstruction performance, are higher for continuous VAEs compared to discrete ones. We also highlight a large amortization gap that remains between P-VAE and sparse coding models. That said, within discrete models, P-VAE still performs better than C-VAE and there are advantages to P-VAE, which we highlight throughout the paper. We will make sure that our discussion of strengths and limitations is more clear in the final paper.
> Since the main motivation is to learn brain-like representations, are there any datasets or experiments where this comparison with biological representations could actually be done? This might be the biggest missing part right now.
We agree that a comparison with biological representations is an obvious next step and there are several datasets available for this. In the present work, we demonstrated the connection between Poisson VAEs and sparse coding and evaluated the representations learned by continuous and discrete VAEs. Previous work has evaluated VAEs for predicting neural activity ([Vafaii et al., 2023](https://openreview.net/forum?id=1wOkHN9JK8)), and has found that benchmarking, as is standardly done, does not discriminate well between different models. A more thorough evaluation of the learned representations is necessary and there isn’t room here to do that justice, although, this is an immediate plan of ours. Since both neurons in the brain and P-VAE encode information in firing rates, we believe the P-VAE will learn brain-like representations.
We have discussed some applications in the global rebuttal and will integrate some of the points made here into the discussion in the final paper.
> …connections to neuroscience, as well as jargon, should be made more explicit/clear for pure ML people…
Thank you for pointing this out. We will amend the language to enhance clarity.
> Notation: at some point the authors start using 𝑟 and 𝛿 without introducing them in the main text…
We have $\log\delta r \in \mathbb{R}^{K}$, such that $\delta r \in \mathbb{R}_+^{K}$. Additionally, by $r \delta r$, we mean $r \odot \delta r$, where $\odot$ is the Hadamard or element-wise multiplication. We will clarify this in the text.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough rebuttal and for carefully addressing all the concerns raised by myself and the other reviewers. If the final version of the paper incorporates the promised revisions, I believe it will be a strong contribution that merits acceptance.
I have updated my score from 5 to 7. | Summary: Inspired by biological neurons, a new type of variational autoencoder, the Poisson variational autoencoder ($\mathcal{P}$-VAE) is proposed. The $\mathcal{P}$-VAE uses discrete latent states with Poisson priors, and learns sparse discrete representations of the data similar to sparse coding methods. The authors compare their method with classical sparse coding algorithms and obtain learned representations that are less prone to posterior collapse and are better for downstream classification tasks.
Strengths: - This paper is very well-written and ideas are presented very clearly. The writing is clear and concise with few to no typos. The contributions are outlined and emphasized throughout the paper. Background to the work is explained thoroughly. The figure and algorithm box serve to communicate the main algorithm clearly to the reader. The color coding of inference and generative components in section 3 makes it easy to parse the equations and grok the main $\mathcal{P}$-VAE algorithm.
- The main arguments of the paper are supported well by experiments. The experiment section presents extensive analysis on the representation learning capabilities of the $\mathcal{P}$-VAE, and compared it to many existing VAE models, including both continuous and discrete ones. In-depth discussions are made with respect to the Gabor-like quality of learned filters, avoiding posterior collapse, sparsity and effectiveness in downstream tasks.
Weaknesses: My main concern for this work is in its limited impact. While to my knowledge it is true that applying the Poisson prior to discrete VAEs as a sparsity-inducing constraint is novel, I fail to see how this is fundamentally different from other discrete VAEs with regularization. Further to this point, the comparison to the VAE with the concrete distribution ($\mathcal{C}$-VAE) seems relatively weak, as the latter has fewer dead neurons and also seems to achieve more sparsity for the same reconstruction performance (Figure 3a). The authors also commented that LCA models drastically outperform the $\mathcal{P}$-VAE with the convolutional encoder, making it questionable whether one should use the $\mathcal{P}$-VAE for its sparsity.
To the authors’ credit, the paper does state that part of the appeal of the $\mathcal{P}$-VAE is in its biological plausibility. Unfortunately I don’t think this point is expanded upon to a satisfactory degree in the text, leaving much to be desired. Personally I would love to see more technical discussion on the biological plausibility of the $\mathcal{P}$-VAE, and concrete examples on the types of tasks and inquiries it unlocks for neuroscientific studies that similar models cannot solve adequately.
Despite the above points, I still think that this is a solid paper with potential contributions to the computational neuroscience community.
Technical Quality: 4
Clarity: 4
Questions for Authors: - I am confused by the statement on lines 56-57: “facilitating linear separability of categories in a downstream classification task with a much better (5x) sample efficiency.” Isn’t it true that the increase in sample efficiency is for KNN classification but not in the linear separability? In that case this sentence is misleading.
- What’s the significance of the global prior parameter $r$? Have you tried ablation studies where the posteriors are directly parameterized?
- What’s the takeaway for the “$\mathcal{P}$-VAE learns sparse representations” section? How should we view the amortization gap you mentioned on line 277?
- There are multiple bold entries for some columns in tables 3 and 5, what do these mean?
- Can you talk more about where you believe the $\mathcal{P}$-VAE model should be applied and how it can be impactful to the scientific community?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and thoughtful comments.
> My main concern for this work is in its limited impact.
We hope that we have addressed this concern in the general rebuttal. We developed P-VAE with neuroscience applications in mind, where we feel it will have a big impact. However, there are several other exciting future directions that we have discussed in the global rebuttal, which we will expand upon and include in the final paper.
> …the comparisons to VAE with the concrete distribution (𝐶-VAE) seems relatively weak, as the latter has less dead neurons and also seems to achieve more sparsity for the same reconstruction performance.
C-VAE is hard to compare to for a number of reasons which we'll discuss here. But first, we want to emphasize that the P-VAE with a convolutional encoder has much better reconstruction performance than C-VAE, with a comparable sparsity level. That said, the sparsity comparison is hard to make because C-VAE is a one-hot categorical distribution, which means it has a fixed sparsity of $1-1/K$ (~99% when $K=512$). P-VAE can sweep out a tradeoff between sparsity and reconstruction (something akin to a rate-distortion curve). In contrast, the sparsity of C-VAE is solely determined by the latent dimensionality.
Further, because C-VAE has a single KL value, the comparison between dead neurons is also not straightforward, which is why we excluded C-VAE from Table 2. For the C-VAE, we used the norm of basis elements as a measure of the dead neurons. We explain this in the caption of Fig. 5.
We will add text to discuss these points more explicitly in the final paper.
> …The authors also commented that LCA models drastically outperform the 𝑃-VAE with the convolution encoder, making it questionable if one should use 𝑃-VAE for its sparsity.
We were surprised to see such a strong performance for LCA, a "shallow" model from 2008. We speculate this is because LCA performs iterative inference, whereas P-VAE, in its current form, uses a single forward pass for inference (amortized inference). This observation has inspired us to develop an iterative version of the P-VAE in subsequent work, which we hope will reduce the amortization gap and either match or surpass LCA performance.
That said, it is important to note that LCA necessarily has to work with a linear generative model ([Rozell et al., 2008](https://direct.mit.edu/neco/article-abstract/20/10/2526/7343/Sparse-Coding-via-Thresholding-and-Local?redirectedFrom=fulltext)), whereas P-VAE (like other VAEs) could have a nonlinear (deep) decoder. We primarily focused on linear decoders in this paper because of the close connection between the P-VAE loss and sparse coding. Two future directions could be to explore P-VAEs with more typical deep decoders or, as we mentioned above, with iterative inference, which we believe is a promising future direction for closing the amortization gap in VAEs more generally.
> Personally I would love to see more technical discussion on the biological plausibility of the 𝑃-VAE, and concrete examples on the types of tasks and inquiries it unlocks for neuroscientific studies that similar models cannot solve adequately.
Thank you for this comment. We have included a discussion on neuroscience applications in the global rebuttal which attempts to address this point. We will include more discussion on this point in the final version of the paper.
> I am confused by the statement on lines 56-57: “facilitating linear separability of categories in a downstream classification task with a much better (5x) sample efficiency.” Isn’t it true that the increase in sample efficiency is for KNN classification but not in the linear separability? In that case this sentence is misleading.
You are correct. That sentence should be broken into two separate points. We will clarify that the linear separability claim is supported by the shattering dim results, while the sample efficiency claim is supported by the KNN results.
> What’s the significance of the global prior parameter 𝑟? Have you tried ablation studies where the posteriors are directly parameterized?
We did not consider an independently parameterized posterior, because our current implementation has two desirable features:
1. a direct connection to predictive coding; and,
2. a nice factorization of the KL term.
However, this question led us to investigate the learned global prior parameters, $r$, for which we thank the reviewer. We found that the prior rates learn an efficient representation of the natural scene statistics. We describe this exciting result in the global rebuttal (Fig. R1e).
> What’s the takeaway for the "𝑃-VAE learns sparse representations" section? How should we view the amortization gap you mentioned on line 277?
There are two main takeaways. First, P-VAE produces sparser representations than continuous VAE counterparts, with a comparable reconstruction performance. Second, there remains a substantial amortization gap between P-VAE (which uses amortized inference) and true sparse coding, which employs iterative inference. In our experiments, LCA converged in typically hundreds of iterations, whereas P-VAE does inference in one shot. Developing an iterative P-VAE and using it to close the amortization gap is our top priority for future work.
> There are multiple bold entries for some columns in tables 3 and 5, what do these mean?
Bold indicates values that passed a significance threshold using statistical tests. We will clarify in the final paper that the bolded models perform similarly, such that their performance is statistically indistinguishable.
> Can you talk more about where you believe the 𝑃-VAE model should be applied and how it can be impactful to the scientific community?
Yes. Multiple reviewers have requested this. We hope our global rebuttal addresses this adequately and will include a discussion on the significance of the work in the final paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications. I think my main concerns are well addressed and I now better understand why this work is important and interesting. I am hereby raising my score to a 7. | Summary: In this paper, the authors propose a VAE model, the Poisson-VAE ($\mathcal{P}$-VAE), with a Poisson distributed prior and approximate posterior such that it works with Poisson distributed latents and demonstrate its sparse coding abilities on the van Hateren dataset and its efficacy on downstream classification tasks using the MNIST dataset. The authors also present a Poisson reparametrization trick.
Strengths: - The authors combine predictive coding and Poisson-distributed latents to obtain a neat P-VAE objective emphasizing sparsity without additional design constraints-- hence, elegantly buying amortized sparse coding.
- The authors also propose a new reparametrization trick, the Poisson reparametrization trick, which can potentially be more generally applicable.
Writing:
- The writing and presentation of the paper is very clear, making the derivation and rest of the math easily approachable.
Results:
- The authors seem to get very good results in terms of the values of the ELBOs obtained on MNIST as compared to pre-existing work. They also showcase linear separability of learnt features on MNIST.
Weaknesses: These questions/remarks might also have arisen due to my lack of proper understanding, so I am willing to increase my score if these can be clarified:
- How do we know that the approximate posterior is Poisson?
- The temperature is controlling the sharpness of the thresholding-- what is the temperature being considered generally? How do we know that the cdf = 0.99999 across all these temperatures?
Experiments:
- How does the ELBO vary with the dimension of the overcomplete latent space considered?
- Is the VQ-VAE a valid baseline? If so, it would be helpful if the authors can mention why it hasn't been considered as a baseline in the paper.
Technical Quality: 4
Clarity: 4
Questions for Authors: - The authors list this as a limitation of their work later, but is it possible to discuss potential reasons for a large amortization gap of the P-VAE as compared to the LCA/ISTA?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors adequately discuss limitations of their work, and its potential societal impact.
I would like to note another limitation of their work as not very illustrative experiments on downstream classification tasks (MNIST and CIFAR have been considered in the paper). The paper could benefit from experiments showing greater impact of sparse coding.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and thoughtful comments. We are also glad the reviewer finds our results "neat" and "elegant"!
> How do we know that the approximate posterior is Poisson?
During training, we use non-zero temperatures in our reparameterization algorithm. As a result, during training, the approximate posterior is a relaxed approximation to Poisson. In Fig. R1b we show samples drawn from our "relaxed Poisson" distribution at different temperatures. As $T \rightarrow 0$, the sampled distribution converges toward the true Poisson distribution at $T = 0$.
However, we want to emphasize that we used relaxed Poisson only during training. At validation time, we always set $T = 0$. Therefore, the approximate posterior is exactly Poisson at test time.
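As a rough illustration of the idea (not the paper's actual Algorithm 1), a Poisson count can be realized as the number of exponential inter-arrival times falling inside a unit interval, and the hard indicator can be relaxed by a sigmoid at temperature $T$. The helper name and exact relaxation below are our assumptions:

```python
import numpy as np

def relaxed_poisson_sample(rate, n_exp=200, temperature=0.05, rng=None):
    """Sketch: count exponential arrivals within [0, 1]; relax the count when T > 0."""
    rng = rng or np.random.default_rng()
    # inter-arrival times of a rate-`rate` Poisson process (scale = 1 / rate)
    inter_arrival = rng.exponential(scale=1.0 / rate, size=n_exp)
    arrival_times = np.cumsum(inter_arrival)
    if temperature == 0.0:
        # hard indicator: the count is exactly Poisson(rate) for large enough n_exp
        indicators = (arrival_times <= 1.0).astype(float)
    else:
        # soft, differentiable surrogate used only during training
        indicators = 1.0 / (1.0 + np.exp(-(1.0 - arrival_times) / temperature))
    return indicators.sum()
```

At $T = 0$ the sample mean over many draws approaches `rate`, matching the statement that the approximate posterior is exactly Poisson at test time.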
> …what is the temperature being considered generally?
During the first half of training, we anneal temperatures from a large initial value, such as $T_\mathrm{start} = 1.0$, to a small final value, $T_\mathrm{final}$. In the paper, as well as Figs. R1c and d, we report results obtained using $T_\mathrm{final} = 0.05$. Our new extensive experiments (Fig R1a) suggest this choice was reasonable.
> How do we know that the cdf = 0.99999 across all these temperatures?
Thank you for this interesting question, which we overlooked before. We determine ```n_exp``` in Algorithm 1 using the largest posterior rate in a given batch, $r_\mathrm{max}$. We observed that the distribution of rates in a batch is typically skewed and long-tailed. Consequently, the vast majority of the rates are much lower than $r_\mathrm{max}$. Therefore, this is a very conservative way of choosing ```n_exp```, and even if the "cdf = 0.99999" condition is not met for certain temperatures, it will be so only for a vanishingly small subset of rates.
With that said, we explored this question empirically for temperatures encountered during training, using a few reasonable rate values. We found that this condition holds regardless of temperature (even more strongly for non-zero temperatures when the rate is small).
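The conservative rule described above can be made concrete: pick the smallest `n_exp` such that the Poisson CDF at the largest batch rate exceeds the target. A minimal sketch follows; the function name and recurrence are our illustration, not the paper's code:

```python
import math

def choose_n_exp(r_max, cdf_target=0.99999):
    """Smallest n with P(Poisson(r_max) <= n) >= cdf_target."""
    term = math.exp(-r_max)   # P(N = 0)
    cdf = term
    n = 0
    while cdf < cdf_target:
        n += 1
        term *= r_max / n     # P(N = n) from P(N = n - 1)
        cdf += term
    return n
```

Since most rates in a batch sit far below $r_\mathrm{max}$ in a skewed, long-tailed distribution, this choice is conservative for nearly all latent dimensions.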
> How does the ELBO vary with the dimension of the overcomplete latent space considered?
We investigated this across datasets (van Hateren and CIFAR), and encoder architectures (linear versus convolutional). We report the results in Fig. R1d. We found that for all convolutional encoder cases, ELBO improves as a function of latent dimensionality. However, for linear encoders, we observed that the van Hateren dataset started to overfit for $K > 512$, and it stagnated for the $\mathrm{CIFAR}_{16 \times 16}$ dataset. In conclusion, more expressive encoders can find nonlinear features, represented using additional latent dimensions, but simple linear encoders struggle to utilize additional dimensions.
> Is the VQ-VAE a valid baseline?
We did not consider VQ-VAE as a baseline, because VQ-VAE does not optimize the ELBO loss, and thus, it is not technically a VAE. We think the naming is unfortunate and we will discuss this point in the final paper.
> …discuss potential reasons for a large amortization gap of the P-VAE as compared to the LCA/ISTA?
We suspect both LCA and ISTA perform well because they are iterative algorithms. Our P-VAE, in its current format, uses a single forward pass to perform inference. Prior work by [Marino and colleagues](https://proceedings.mlr.press/v80/marino18a.html) has shown that iterative inference can significantly decrease the amortization gap. As a future work, we are interested in developing iterative versions of P-VAE, which we hope will close the amortization gap and beat the best LCA and ISTA fits.
> …not very illustrative experiments on downstream classification tasks…
We agree with the reviewer here. We primarily used these downstream tasks to assess whether the geometry of representations is quantitatively different between P-VAE and alternative models. Our limited results in this area suggest they are different, which we plan to explore more rigorously later. The point of the current experiments was to establish there is indeed a difference in the geometry of representations, rather than fully exploring the differences and similarities, or highlighting particular applications of P-VAE.
> The paper could benefit from experiments showing greater impact of sparse coding.
We agree that a limitation of our paper is that we did not highlight applications where a P-VAE would have a greater impact. As described in the general rebuttal, we will discuss several potential future directions in the final paper, and we believe these warrant more attention than we have room for in this paper.
We developed P-VAE with neuroscience applications in mind and one of the major advantages over other VAEs is that the latents can now be interpreted as neurons. That said, there are applications where sparse coding is applied, such as image processing and computational imaging, where P-VAE might shine. We have included a few of these potential directions in our general rebuttal, but still feel that they would require substantial experimental evaluation and should be left to future work.
---
Rebuttal Comment 1.1:
Title: reply to authors
Comment: I thank the authors for addressing my questions and concerns. Given the lack of more illustrative experiments (in my opinion), I maintain my fairly positive score for the paper. | Rebuttal 1:
Rebuttal: We thank the reviewers for their time and insightful feedback. We believe addressing the reviewers' comments will substantially improve our paper. We plan to include these changes in the form of two major components: **(i)** further discussion of the work's significance; and, **(ii)** additional results, reported as a figure here (Fig. R1).
# (i) Work significance and contributions
## Utility for neuroscience
The neuroscience community increasingly uses ANNs to understand what biological neurons are selective for and why. ANNs have several advantages over real brains: all connections and activations are available to the investigator, in silico perturbations are inexpensive, and they do not require the use of animals.
Like biological neurons, the P-VAE generates spikes; therefore, its latents can be treated like neurons. This offers advantages over continuous unconstrained models such as L-VAE or G-VAE. Here, we focus on two examples, leaving a more comprehensive discussion to the final paper.
- **Example 1:** P-VAE supports direct comparison to causal perturbations in brains. Perturbation experiments selectively stimulate or silence neurons to assess the causal role of a group of neurons on perception. These types of experiments cannot be trivially compared to unconstrained VAEs, for which "stimulation" is complicated by the fact that the latents are signed. In contrast, the P-VAE can be readily used for designing and conducting in silico perturbation experiments, enabling an exciting potential transfer of insights to the in vivo setting.
- **Example 2:** "Maximally Exciting Inputs" (MEI; [Walker et al., 2019](https://www.nature.com/articles/s41593-019-0517-x)), which manipulate inputs following the gradients of a feed-forward network fit to biological neurons, have been used to understand what biological neurons are selective for. The concept of MEI requires that neuron activations be characterized by being more or less "excitable." Once again, this concept is readily applicable to P-VAE latents, but not the unconstrained VAEs.
These are just two examples out of many. We will expand upon this point in the final paper, mentioning more concrete examples of the application potential of the P-VAE and how it can help advance neuroscience research.
## Mechanistic interpretability
Sparse autoencoders (SAEs) have become popular for mechanistic interpretability ([Anthropic](https://transformer-circuits.pub/2023/monosemantic-features), [OpenAI](https://arxiv.org/abs/2406.04093)). Although not an initial motivation for our work, we have inadvertently built a probabilistic version of SAEs. We're excited about a hierarchical extension of P-VAE applied to both images and LLM activations to test if this approach extracts hierarchically organized semantic concepts.
## Reparameterization trick
We hope our reparameterization trick finds applications beyond VAEs, for example, in spiking neural networks (SNN). One of our new results shows that the "surrogate gradients" method—utilized heavily in the SNN literature—may be improved by relaxing the hard forward during training (see below).
## Hardware implementation
A key advantage of our architecture is its ability to learn discrete, sparse representations. The integer P-VAE representations eliminate the need for post hoc quantization, which is crucial for hardware implementation of models with float activations. This sparsity enhances memory efficiency and lowers energy use, highlighting P-VAE’s potential as a vision model implemented directly on hardware for robotics.
# (ii) New results (Figure R1)
## Temperature and performance (Fig. R1a)
Motivated by comments from Rev. gEE3, we performed additional experiments to quantify the effect of temperature (T) on the final model performance. Following standard practice ([Jang et al., 2017](https://openreview.net/forum?id=rkE3y85ee)), we annealed T from a large value ($T_\mathrm{start} = 1.0$) to a small value ($T_\mathrm{final} = 0.05$ in the main paper) during the first half of training. In Fig. R1a, we explore the effect of changing $T_\mathrm{final}$ on the van Hateren dataset, using two architectures (linear vs. convolutional encoders, linear decoder), and two annealing schedules (linear vs. exponential; Fig. R1e inset). We find values of $T_\mathrm{final} \leq 0.1$ work well, and both annealing schedules work well.
Importantly, all results were obtained using $T = 0$ during test time. We also experimented with the option of using a "hard forward" training scheme once the annealing is done (i.e., the last half of training), where we use non-zero temperatures only for the backward pass. This practice is known as "surrogate gradients." Somewhat surprisingly, we found that the surrogate gradients severely underperformed our "relaxed Poisson" approach. We anticipate this result will be highly interesting to the spiking neural network community, who rely mostly on surrogate gradients to train their networks. We plan to include this figure in the appendix and highlight the main takeaways in the main text.
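The annealing scheme described here (decay from $T_\mathrm{start}$ to $T_\mathrm{final}$ over the first half of training, then hold) can be sketched as follows; the function name and defaults are our illustration:

```python
def anneal_temperature(step, total_steps, t_start=1.0, t_final=0.05,
                       schedule="linear"):
    """Anneal T over the first half of training, then hold at t_final."""
    frac = min(step / (0.5 * total_steps), 1.0)
    if schedule == "linear":
        return t_start + frac * (t_final - t_start)
    # exponential (geometric) schedule: log-linear interpolation
    return t_start * (t_final / t_start) ** frac
```

Either schedule reaches $T_\mathrm{final}$ at the halfway point; the exponential variant spends more of the annealing phase at low temperatures.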
## Natural image statistics learned in the prior rates (Fig. R1e)
Thanks to comments from Revs. gEE3 and w8Jn, we examined the properties of the P-VAE learned dictionary elements in conjunction with the global prior rates, r. See Fig. R1e and its caption.
We found that P-VAE prior rates are consistent with principles of efficient coding, which states that brains should assign minimal neural resources to statistically dominant elements of the natural environment. Specifically, we found prior rates were lower for *cardinal* orientations, which are more common in natural image patches. This result mirrors biological brains and is another demonstration of the potential of P-VAE. We plan to explore this research direction more systematically in future work.
# Conclusion
P-VAE has shown promising results, opening up many exciting avenues for future exploration at the intersection of machine learning and neuroscience.
Pdf: /pdf/b087c61424264e6b365c0671cbe8a58505a015b8.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
AUCSeg: AUC-oriented Pixel-level Long-tail Semantic Segmentation | Accept (poster) | Summary: The authors extend AUC optimization techniques to pixel-level long-tail semantic segmentation and propose a general pixel-level AUC loss function. They decompose the loss function into inner-image and inter-image terms to decouple the interdependency, and calculate bounds to theoretically prove the effectiveness of this loss. Additionally, they design a Tail-Classes Memory Bank to reduce memory demand. Qualitative and quantitative comparisons have been made on three benchmark datasets, showing the efficiency of their method.
Strengths: 1. The paper is clearly organized and well written.
2. The general idea of the paper is clear, and the motivation is interesting.
3. The authors provide detailed theoretical proofs.
Weaknesses: 1. In the tail-class memory bank module, how do you ensure that pasted tail pixels do not completely cover any category?
2. As shown in Fig 3(b), the authors argue that because images contain multiple labels, it is impossible to use stratified sampling techniques to ensure each mini-batch contains at least one sample from each class. However, I believe this view is incorrect: we can group images by category (different categories may include identical images due to the multi-label nature), then apply stratified sampling for each category. Although this might sample identical images (the same sample being drawn twice, as both head and tail), it still ensures each mini-batch contains at least one sample from every class, and small batch sizes can also achieve this goal. Therefore, I think there is no need to design a Tail-Class Memory Bank.
3. How does the Tail-Class Memory Bank store data? Does it directly use ground truth or predicted results? If it uses ground truth directly, why design the Store Branch at all — the memory bank could be built directly from ground truth. If it uses predictions, why not just use ground truth?
4. Pasting tail-class pixels onto the original image might occlude classes in the original image; would this behavior affect performance?
5. This method mainly relies on SegNeXt; while sometimes environmental factors prevent exact replication of other works' results. However, SegNeXt provides training logs & pre-trained weights, why did your reproduction show significant performance gaps compared with SegNeXt's original work (e.g., nearly 4% gap on ADE20K)?
6. In Table 1, this method shows significant improvement only on ADE20K but limited gains across two other datasets suggesting poor real-world generalizability inconsistent with theoretical conclusions.
7. Currently effective methods trained over small datasets often fail scaling up large-scale models. Hence the author should further provide the results with large-scale pretrained models like SAM/CLIP etc., demonstrating broader applicability/generalization potentiality thereof.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. I am confused about the Tail-Class Memory Bank design; please address the corresponding points in the weaknesses section above.
2. The authors should clarify certain experimental outcomes, referring to the weakness points mentioned earlier.
3. The NeurIPS Paper Checklist lacks honest responses: e.g., code provision, error-bar reporting, and new assets are not actually included, yet all are marked "Yes".
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have addressed the limitation of this method. This method might slightly impair the performance of head classes due to an increased focus on tail classes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate your time and effort in providing us with such constructive comments. We would like to respond to them as follows:
> **Q1:** In the tail-class memory bank module, how do you ensure that pasted tail pixels do not completely cover any category?
**A1:** The number of head class pixels in each image is 2.46 to 9.45 times greater than that of the tail classes (see the response to Reviewer `WAeT`'s `Q3`), meaning that the number of tail class pixels is too small to completely cover the head classes.
We conducted 10,000 random coverage experiments on ADE20K, Cityscapes, and COCO-Stuff 164K. In these experiments, we tracked the number of instances in which the pixels of some category in the original image were completely covered during the random pasting process. The experimental results are as follows:
| ADE20K | Cityscapes | COCO-Stuff 164K |
| ------- | ---------- | --------------- |
| 7/10000 | 2/10000 | 9/10000 |
The experimental results indicate that the likelihood of complete coverage occurring is less than one in a thousand.
> **Q2:** An improved version of stratified sampling can ensure coverage of all classes. So why use the Tail-Class Memory Bank?
**A2:** Please refer to `C-A3` in `General Response`.
> **Q3:** How does Tail-Class Memory Bank store data? Does it directly use ground truth or predicted results?
**A3:** The Tail-Class Memory Bank stores a certain number of pixels for each tail class, as detailed in `Line 201-222` of the initial submission. We use ground truth to assist in storing the original image data. The storage branch is designed with three main purposes:
- The storage branch uses **ground truth** to extract corresponding pixels from the original image, which is what you referred to as "directly building a memory bank using ground truth."
- The storage branch records the original **positional information** of each pixel, which facilitates the restoration of their spatial positions by the Retrieve Branch during sampling.
- The storage branch also assists in the **scheduling of the Memory Branch**. When the number of stored items in the Memory Branch reaches the memory size, we use a replacement method as described in the paper (see the response to Reviewer `WAeT`'s `Q4`).
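The store/retrieve mechanism described above can be sketched roughly as follows. This is an illustrative sketch with hypothetical helper names, not the paper's actual implementation: the ground-truth mask selects tail-class pixels, and their positions are recorded so the Retrieve Branch can restore them spatially.

```python
import numpy as np

def store_tail_pixels(image, gt_mask, tail_classes):
    """Store-branch sketch: use the ground-truth mask to extract tail-class
    pixels from an image, recording their positions for later restoration.
    image: (H, W, 3) array, gt_mask: (H, W) integer label map."""
    entries = {}
    for c in tail_classes:
        ys, xs = np.nonzero(gt_mask == c)
        if len(ys) == 0:
            continue
        entries[c] = {
            "pixels": image[ys, xs],             # RGB values of class-c pixels
            "positions": np.stack([ys, xs], 1),  # spatial positions (row, col)
        }
    return entries

def paste_tail_pixels(image, gt_mask, entry, cls):
    """Retrieve-branch sketch: paste stored pixels back at their positions."""
    out_img, out_mask = image.copy(), gt_mask.copy()
    ys, xs = entry["positions"].T
    out_img[ys, xs] = entry["pixels"]
    out_mask[ys, xs] = cls
    return out_img, out_mask
```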
> **Q4:** Pasting tail-class pixels onto original image might occlude original image classes; would this behavior affect performance?
**A4:** As we mentioned in `Q1`, pasting tail class pixels almost never completely covers other classes. Below, we can explore the impact of occluding the original image on performance through ablation experiments.
The parameter "Resize Ratio" in the T-Memory Bank represents the scale of the image before pasting (see `Lines 220-222` in the initial submission). Ablation experiments on this parameter allow us to investigate how the area of occlusion affects performance. In the initial submission `Figure 6(c)`, we conducted this experiment on the ADE20K dataset. Below, we present additional experimental results on the Cityscapes and COCO-Stuff 164K datasets.
Cityscapes:
| Resize Ratio | Overall | Tail |
| ------------ | --------- | --------- |
| 0.1 | 82.18 | 80.69 |
| 0.3 | 82.40 | 80.87 |
| 0.5 | 82.62 | 81.50 |
| 0.7 | **82.71** | **81.67** |
| 0.9 | 82.53 | 81.51 |
| 1 | 82.69 | 81.38 |
COCO-Stuff 164K:
| Resize Ratio | Overall | Tail |
| ------------ | --------- | --------- |
| 0.05 | 42.59 | 40.55 |
| 0.1 | **42.73** | **40.72** |
| 0.3 | 42.52 | 40.42 |
| 0.5 | 41.58 | 39.79 |
| 0.7 | 42.57 | 40.50 |
| 1 | 42.62 | 40.64 |
It can be observed that when the Resize Ratio is too large, severe occlusion occurs, resulting in a decline in model performance. Due to the differences in image sizes across datasets, the Resize Ratio should be adjusted accordingly when switching to other datasets.
> **Q5:** Why did your reproduction show significant performance gaps compared with the original work?
**A5:** We used the code and parameters provided by MMSegmentation. To ensure fair comparisons, we reproduced all the experiments using a batch size of 4 instead of the original 16. The performance difference is due to the different batch sizes. We have further supplemented the experimental results with different batch sizes. Please refer to `C-A2` in the `General Response`.
> **Q6:** Inconsistent performance gain.
**A6:** Please refer to `C-A1` in `General Response`.
> **Q7:** Comparison on large-scale pretrained models.
**A7:** We extend AUCSeg to fine-tune CLIP, using DenseCLIP[1] as the backbone (batch size=4, iterations=80000). The experimental results are as follows:
|| Overall| Head| Middle| Tail|
| --------------------- | --------- | --------- | --------- | --------- |
| DenseCLIP (ResNet-50) | 32.21| 73.37| 48.49| 26.99|
| +AUCSeg| **34.59** | **74.04** | **50.13** | **29.60** |
| DenseCLIP (ViT-B)| 48.63| 80.36| 57.80| 45.06|
| +AUCSeg| **49.51** | **80.70** | **59.25** | **45.91** |
Our AUCSeg is also effective on large-scale pretrained models.
> **Q8:** NeurIPS Paper Checklist lacks honest responses.
**A8:** There may have been some misunderstandings in our responses to the Checklist questions.
- In the initial submission, we provided reasons for our "yes" answers regarding code provision/new assets: "We will release the data and code once it is accepted." This might have caused some concerns. However, due to the rules during the rebuttal period, we have sent an anonymous link to the AC containing our core code (AUC Loss and T-Memory Bank), and you may need to request it from the AC.
- When calculating the time cost for AUCSeg in the initial submission, we reported error bars. You can see this in `Lines 1178-1182` of the initial submission.
------
[1] Denseclip: Language-guided dense prediction with context-aware prompting, CVPR 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I have read your rebuttal and the comments from other reviewers, and I will maintain my original score.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your feedback and acceptance. Following your suggestions, we will enrich the content of our paper in the final version. | Summary: This paper explores AUC optimization methods for pixel-level long-tail semantic segmentation, addressing complex dependencies and space complexity challenges. The authors propose a novel pixel-level AUC loss function and conduct a dependency-graph-based theoretical analysis to enhance generalization. They also introduce a Tail-Classes Memory Bank (T-Memory Bank) to manage memory demands, and experiments confirm the effectiveness of their AUCSeg method.
Strengths: 1. This paper presents a novel perspective by using AUC as the loss function.
2. The paper has performed a tail class cache to boost the performance and control the memory demand.
Weaknesses: Please refer to the questions.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. For a long-tail semantic segmentation problem, which is also a pixel-wise classification task, I am uncertain why the authors chose this perspective. From the standpoint of the loss function, it emphasizes the contrast between the current class and other classes. Would a contrastive classification approach achieve similar results?
2. The significance of Proposition 1 is not very clear. What is the meaning of the $\Omega$? My understanding is that for a tail class, it is naturally difficult to randomly sample such a sample due to the scarcity. Therefore, re-sampling techniques are needed, which seems quite intuitive. What additional information is this proposition supposed to convey?
3. The logic of using a memory bank in this paper seems weird. The authors argue that it is difficult to randomly sample a tail sample, hence the need for a memory bank. However, most long-tail work considers batch re-sampling a very conventional approach. What this paper's method actually does is enhance the diversity of tail samples, thereby improving the model's ability to learn them. Perhaps rephrasing this aspect would be more effective.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate your time and effort in providing us with such constructive comments. We would like to respond to them as follows:
> **Q1:** Why did the authors choose the segmentation perspective? Would a contrastive classification approach achieve similar results?
**A1:** Thank you for the helpful suggestion! We will first explain why this paper focuses on the long-tailed semantic segmentation task. Then, we will discuss new findings by comparing AUC and contrastive learning from the perspective of loss functions.
**Firstly**, semantic segmentation, as a pixel-level classification task, also suffers from a severe long-tail problem, yet it has been largely overlooked or insufficiently explored by the long-tail community. We approach this issue from the perspective of loss functions, aiming to find a theoretically grounded loss. Noticing that AUC is popular and effective in image-level tasks, we extend it to address the pixel-level long-tail problem.
We chose to address this challenge from the loss function perspective because 1) The quality of the loss function directly impacts the model's learning quality and is directly related to performance. 2) Loss functions are more general and have the potential to be extended to other pixel-level tasks to address their long-tail issues. For example, in salient object detection, the salient objects often exhibit a long-tailed distribution. We apply AUCSeg to this task, using the latest SOTA method SI-SOD-EDN [1,2] as the backbone. See `Table 5` in the PDF submitted in the `General Response` for the results.
Our AUCSeg achieves improvements across three commonly used evaluation metrics on three datasets. This demonstrates that our method is highly versatile and extensible.
**Next**, we will discuss new insights comparing AUC and contrastive learning from the perspective of loss functions, supported by theoretical analysis and experimental validation.
> **Theorem:** Minimizing the weighted contrastive loss approximately corresponds to minimizing an upper bound of the logistic AUC loss:
> $$
> \sum_{i}w_i\left[-\log\frac{e^{f(x^i)}}{e^{f(x^i)}+\sum_{j\neq i}w_je^{f(x^j)}}\right]\geq\sum_{i}\sum_{j \neq i}\frac{1}{n_in_j}\left[-\log\left(\frac{1}{1+e^{f(x^j)-f(x^i)}}\right)\right]
> $$
> where, $w_j=\frac{1/n_j}{\sum_{k\neq i}1/n_k}$ and $w_i=\frac{\sum_{k\neq i}1/n_k}{n_i}$.
Due to space constraints, the proof will be provided in the next version. This theorem indicates that minimizing a **weighted version** of contrastive loss can implicitly optimize the OVO logistic AUC loss. The experiments in the table below also verify that Contrastive Loss + TMB and AUCSeg (Logistic) produce similar results. Our paper adopts a more general form of AUC loss, with the square surrogate loss showing the best performance.
|| Overall| Tail|
|--------------------|---------|---------|
| Contrastive Loss+TMB| 47.47| 43.55|
| AUCSeg (Logistic)| 47.86| 43.96|
| AUCSeg (Hinge)| 48.59| 44.76|
| AUCSeg (Exp)| 48.86| 45.07|
| AUCSeg (Square)|**49.20**|**45.52**|
> **Q2:** The significance of Proposition 1 is not very clear, What additional information is this proposition supposed to convey?
**A2:** Thank you for your question. In `Proposition 1`, $x = \Omega(y)$ means $\exists c>0, x > c \cdot y $. It represents the minimum batch size required to include pixels from all classes with a high probability. For ease of reading, we will include **a table of symbol definitions** in the final version of the paper (See `Table 1` in the PDF submitted in the `General Response`).
Due to the scarcity of tail classes and the coupling between pixels, random sampling becomes challenging. This proposition quantifies the minimal batch size to cover all classes with a high probability. As we discussed in `Remark 2` of the initial submission, random sampling that meets the required conditions would necessitate sampling 759 images at once to form a mini-batch, an unbearable memory cost for model training.
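The intuition behind the proposition can be illustrated with a small Monte-Carlo simulation. This is a simplified model of ours (not the paper's analysis), with hypothetical per-class presence probabilities, where each sampled image independently contains each class:

```python
import random

def coverage_prob(batch_size, presence, trials=2000, seed=0):
    """Estimate P(every class appears in a batch) under a simplified model
    where each sampled image contains class c independently with
    probability presence[c]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        seen = [False] * len(presence)
        for _ in range(batch_size):
            for c, p in enumerate(presence):
                if rng.random() < p:
                    seen[c] = True
        hits += all(seen)
    return hits / trials

# Head classes appear in nearly every image; tail classes are rare,
# so covering every class requires a very large batch.
presence = [0.95, 0.9, 0.5, 0.05, 0.01]
```

Under these hypothetical probabilities, a batch of a few images almost never covers the rarest class, while hundreds of images are needed to do so reliably — consistent in spirit with the impractically large mini-batch (759 images) computed in `Remark 2`.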
> **Q3:** The logic of using a memory bank in this paper seems weird. Why not use batch re-sampling? Rephrase this aspect of diversity.
**A3:** Thank you for your constructive suggestion!
- **The reason for not directly using batch re-sampling:** Please refer to `C-A3` in `General Response`.
- **Why is the primary function of the T-Memory Bank not aimed at enhancing the diversity of tail samples:** As shown in `Table 2` of our initial submission (we also provide this table below), using a memory bank does indeed enhance the diversity of tail classes as an implicit way of augmentation (comparing the rows for SegNeXt and SegNeXt+TMB in the table). However, we cannot rely on the bank to fully address the long-tail issue, since the bank capacity is always limited. That's why we also need to consider the problem from the loss perspective. We find that AUC loss focuses only on the ranking loss between positive and negative samples and is not sensitive to data distribution, fundamentally avoiding the risk of underfitting caused by insufficient training samples. **We believe that the use of the TMB in our paper is intended to both facilitate the effectiveness of AUC loss and enhance tail sample diversity.**
| Model | AUC | TMB | Overall | Tail |
| ----------- | ---- | ---- | --------- | --------- |
| SegNeXt | | | 47.45| 43.28|
| SegNeXt+AUC |yes | | 48.46 | 44.70 |
| SegNeXt+TMB | | yes | 47.86 | 43.86 |
| AUCSeg | yes | yes | **49.20** | **45.52** |
We acknowledge that, to some extent, the TMB also contributes to performance improvement by enhancing tail class diversity. Therefore, in the final version of the paper, we will discuss this point further in the `Discussions` section (`Line 223` in the initial submission).
------
[1] EDN: Salient object detection via extremely-downsampled network, TIP 2022.
[2] Size-invariance Matters: Rethinking Metrics and Losses for Imbalanced Multi-object Salient Object Detection, ICML 2024.
---
Rebuttal Comment 1.1:
Comment: We provide the proof of the theorem used in the rebuttal below:
> **Theorem:** Minimizing the weighted contrastive loss approximately corresponds to minimizing an upper bound of the logistic AUC loss:
> $$
> \sum_{i}w_i\left[-\log\frac{e^{f(x^i)}}{e^{f(x^i)}+\sum_{j\neq i}w_je^{f(x^j)}}\right]\geq\sum_{i}\sum_{j \neq i}\frac{1}{n_in_j}\left[-\log\left(\frac{1}{1+e^{f(x^j)-f(x^i)}}\right)\right]
> $$
> where, $w_j=\frac{1/n_j}{\sum_{k\neq i}1/n_k}$ and $w_i=\frac{\sum_{k\neq i}1/n_k}{n_i}$.
**Proof.**
For the AUC loss under the logistic surrogate loss function:
$$
\begin{aligned}
\ell_{auc}^{logistic}&=\ell_{logistic}\left(f(x^+)-f(x^-)\right)\\\\
&=\sum_{i}\sum_{j \neq i}\frac{1}{n_in_j}\left[-\log\left(\frac{1}{1+e^{f(x^j)-f(x^i)}}\right)\right]\\\\
&=\sum_{i}\sum_{j \neq i}w_iw_j\left[-\log\left(\frac{1}{1+e^{f(x^j)-f(x^i)}}\right)\right]\\\\
&=\sum_{i}w_i\sum_{j\neq i}w_j\left[-\log(e^{f(x^i)})+\log(e^{f(x^i)}+e^{f(x^j)})\right]\\\\
&=\sum_{i}w_i\left[-\log(e^{f(x^i)})+\sum_{j\neq i}w_j\log(e^{f(x^i)}+e^{f(x^j)})\right]\\\\
&\leq \sum_{i}w_i\left[-\log(e^{f(x^i)})+\log\left(\sum_{j\neq i}w_j(e^{f(x^i)}+e^{f(x^j)})\right)\right]\\\\
&=\sum_{i}w_i\left[-\log(e^{f(x^i)})+\log\left(e^{f(x^i)}+\sum_{j\neq i}w_je^{f(x^j)}\right)\right]\\\\
&=\sum_{i}w_i\left[-\log\frac{e^{f(x^i)}}{e^{f(x^i)}+\sum_{j\neq i}w_je^{f(x^j)}}\right]
\end{aligned}
$$
where, $w_j=\frac{1/n_in_j}{\sum_{k\neq i}1/n_in_k}=\frac{1/n_j}{\sum_{k\neq i}1/n_k}$ and $w_i=\frac{\sum_{k\neq i}1/n_k}{n_i}$.
This completes the proof.
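The inequality can also be checked numerically. Below is a small sanity-check sketch with hypothetical class scores `f` and class sizes `n` (illustrative values, not from the paper), implementing both sides of the bound as defined above:

```python
import math

def auc_logistic(f, n):
    """OVO logistic AUC loss: sum_i sum_{j!=i} (1/(n_i n_j)) * log(1 + e^{f_j - f_i})."""
    total = 0.0
    for i in range(len(f)):
        for j in range(len(f)):
            if j != i:
                total += (1.0 / (n[i] * n[j])) * math.log(1 + math.exp(f[j] - f[i]))
    return total

def weighted_contrastive(f, n):
    """Weighted contrastive loss with w_j = (1/n_j)/sum_{k!=i} 1/n_k and
    w_i = (sum_{k!=i} 1/n_k)/n_i, as in the theorem."""
    total = 0.0
    for i in range(len(f)):
        s = sum(1.0 / n[k] for k in range(len(f)) if k != i)
        w_i = s / n[i]
        denom = math.exp(f[i]) + sum(
            (1.0 / n[j]) / s * math.exp(f[j]) for j in range(len(f)) if j != i
        )
        total += w_i * (-math.log(math.exp(f[i]) / denom))
    return total

f = [2.0, 0.5, -1.0]  # hypothetical per-class scores
n = [100, 10, 3]      # hypothetical long-tailed class sizes
```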
---
Rebuttal 2:
Title: Comments from Reviewer tNYv
Comment: Thanks for the response. I will consider slightly increasing the score because some of my concerns are being relieved.
By the way, will the code be released after the acceptance?
---
Rebuttal Comment 2.1:
Comment: Thank you so much for your feedback. Due to the rules during the rebuttal period, we have sent an anonymous link to the AC containing our core code (AUC Loss and T-Memory Bank), and you may need to request it from the AC. The full code will be released after the acceptance. If you have any further questions, we would be happy to address them. | Summary: This paper investigates AUC optimization within the context of pixel-level long-tail semantic segmentation (PLSS), a complex task due to intricate loss term coupling and extensive memory requirements. Initially, the authors demonstrate the potential of AUC for PLSS from a theoretical perspective by addressing the two-layer coupling issue across loss terms. Subsequently, they propose a novel Tail-Classes Memory Bank (T-Memory Bank) to manage the significant memory demands of AUC-oriented PLSS. Finally, comprehensive experiments show the effectiveness of the proposed method.
Strengths: 1. This paper addresses a compelling problem with clear motivations and significant contributions, offering insightful ideas for both the AUC and segmentation communities.
2. Developing a theoretically grounded loss from an AUC perspective is novel, and the proposed T-memory bank effectively mitigates the memory burden associated with pixel-level AUC optimization.
3. The performance of the proposed AUCSeg seems promising.
Weaknesses: Overall, I believe this is a qualified paper for publication after addressing the following minor concerns:
1. The notations in this paper should be carefully defined. Some key symbols are used repeatedly, such as $N$ representing both batch size (In Alg.1) and sample size (In Thm.1).
2. This paper only examines the square AUC surrogate loss without considering two other popular surrogate losses for AUC optimizations (i.e., hinge and exponential losses). It is essential to provide empirical verification by applying the ignored losses to AUCSeg.
3. In Alg.1, the batch size scale will also impact the final performance of long-tailed classifications. The authors are recommended to conduct an ablation study on this aspect.
Technical Quality: 4
Clarity: 3
Questions for Authors: Please refer to the weakness part.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate your time and effort in providing us with such constructive comments. We would like to respond to them as follows:
> **Q1:** The notations in this paper should be carefully defined. Some key symbols are used repeatedly, such as 𝑁 representing both batch size (In Alg.1) and sample size (In Thm.1).
**A1:** Thank you for the suggestion! We carefully reviewed the symbol definitions in the paper. The symbol $𝑁$ representing batch size in Algorithm 1 has been changed to $N_b$. Additionally, for ease of reading, we will include **a table of symbol definitions** in the final version of the paper (See `Table 1` in the PDF submitted in the `General Response`).
> **Q2:** This paper only examines the square AUC surrogate loss without considering two other popular surrogate losses for AUC optimizations (i.e., hinge and exponential losses). It is essential to provide empirical verification by applying the ignored losses to AUCSeg.
**A2:** Thank you for your constructive suggestion! In this version, we have explored two other popular surrogate losses (hinge loss and exponential loss) to AUCSeg. Additionally, we have included results for two AUC loss calculation methods (one-vs-one and one-vs-all) applied to AUCSeg.
The performance of these three surrogate losses (hinge loss, exponential loss, and square loss) is presented in the table below:
| Dataset | AUC Method | Overall | Tail |
| --------------- | ---------- | ---------------- | ---------------- |
| ADE20K | - | 47.45 | 43.28 |
| | Hinge | 48.59(+1.14) | 44.76(+1.48) |
| | Exp | 48.86(+1.41) | 45.07(+1.79) |
|                 | Square     | **49.20(+1.75)** | **45.52(+2.24)** |
| Cityscapes | - | 82.41 | 80.92 |
| | Hinge | 82.64(+0.23) | 81.35(+0.43) |
| | Exp | 82.45(+0.04) | 81.55(+0.63) |
| | Square | **82.71(+0.30)** | **81.67(+0.75)** |
| COCO-Stuff 164K | - | 42.42 | 40.33 |
| | Hinge | 42.52(+0.10) | 40.49(+0.16) |
| | Exp | 42.52(+0.10) | 40.53(+0.20) |
| | Square | **42.73(+0.31)** | **40.72(+0.39)** |
The performance of the two AUC calculation methods (ova and ovo) when using square loss is as follows:
| Dataset | AUC Method | Overall | Tail |
| --------------- | ---------- | --------- | --------- |
| ADE20K | ova | 48.46 | 44.58 |
| | ovo | **49.20** | **45.52** |
| Cityscapes | ova | 82.31 | 80.79 |
| | ovo | **82.71** | **81.67** |
| COCO-Stuff 164K | ova | 42.25 | 40.17 |
| | ovo | **42.73** | **40.72** |
The results indicate that AUCSeg shows improved performance with any of the surrogate functions. Among them, using square loss and the ovo calculation method delivers the best overall performance. We will include this discussion in the final version of the paper.
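For reference, a minimal sketch of an OVO square-surrogate AUC loss over per-pixel scores is given below. This is illustrative only — the function name, the margin parameter `gamma`, and the averaging scheme are our assumptions, not the paper's exact implementation:

```python
import numpy as np

def ovo_square_auc_loss(scores, labels, gamma=1.0):
    """One-vs-one pairwise AUC loss with a square surrogate.
    scores: (P, C) per-pixel class scores; labels: (P,) integer class ids.
    For each ordered class pair (i, j), penalize (gamma - (s_pos - s_neg))^2
    averaged over all positive/negative pixel pairs."""
    C = scores.shape[1]
    loss, pairs = 0.0, 0
    for i in range(C):
        for j in range(C):
            if i == j:
                continue
            pos = scores[labels == i, i]  # class-i scores of class-i pixels
            neg = scores[labels == j, i]  # class-i scores of class-j pixels
            if len(pos) == 0 or len(neg) == 0:
                continue
            diff = pos[:, None] - neg[None, :]
            loss += np.mean((gamma - diff) ** 2)
            pairs += 1
    return loss / max(pairs, 1)
```

Swapping the square term for a hinge or exponential term yields the other surrogates compared in the table above.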
> **Q3:** In Alg.1, the batch size scale will also impact the final performance of long-tailed classifications. The authors are recommended to conduct an ablation study on this aspect.
**A3:** Please refer to `C-A2` in `General Response`.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my concerns; I will raise my score correspondingly.
---
Reply to Comment 1.1.1:
Comment: We are thankful for your acceptance and constructive feedback. | Summary: This paper introduces AUC optimization into the domain of long-tailed semantic segmentation. Specifically, the authors developed a pixel-level AUC loss function tailored for long-tailed semantic segmentation tasks and introduced a tail-class memory bank to address the memory demands. Additionally, the authors utilized Rademacher complexity to provide a generalization bound for AUCSeg and theoretically analyzed the potential for AUCSeg to generalize to unseen data.
The experimental evaluation in this paper assessed the performance of the proposed strategy on various datasets. Generally speaking, this paper is relatively detailed and comprehensive.
Strengths: - It is novel to introduce AUC optimization in long-tailed semantic segmentation, and the paper gives a detailed analysis of the feasibility of this method.
- The proposed method is validated on different datasets (ADE20K, Cityscapes, COCO, etc.) and has been proven effective on tail classes.
Weaknesses: There are a few concerns about this paper:
- On COCO, the performance improvement of Tail Class seems to be limited. Could the author explain the reason?
- Fig. 6(a) shows the memory size reduction of the tail-class memory bank. The results show that as the memory size increases, the performance improvement decreases. What is the reason for this?
Technical Quality: 3
Clarity: 4
Questions for Authors: I thank the author for their detailed work. Nevertheless, there are a few questions I wonder and I hope the author could respond.
- For Tail Class Memory Bank, what would happen if we use a pixel-level memory bank? For example, prototype or similar technology, would this lead to significant performance degradation?
- For the memory bank update problem, is there a more appropriate selection method instead of random replacement?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive comments, and we would like to make the following response.
> **Q1:** On COCO, the performance improvement of Tail Class seems to be limited. Could the author explain the reason?
**A1:** Please refer to `C-A1` in `General Response`.
> **Q2:** Fig. 6(a) shows the memory size reduction of the tail-class memory bank. The results show that as the memory size increases, the performance improvement decreases. What is the reason for this?
**A2:** This is a trade-off between diversity and learnability. When the memory size is too large, the probability of any single sample being effectively learned decreases, so the model fails to focus on important examples and capture their features, ultimately leading to underfitting. Conversely, if the memory size is too small, the diversity of samples is limited, which leads to overfitting. Hence, there is no free lunch for increasing the bank!
Therefore, we pursue a reasonable memory size. As shown in `Figure 6(a)` in the initial submission, we believe that a memory size of 5 is suitable in most cases. As the memory size increases/decreases, the performance slightly declines due to model overfitting/underfitting.
> **Q3:** For Tail Class Memory Bank, what would happen if we use a pixel-level memory bank? For example, prototype or similar technology, would this lead to significant performance degradation?
**A3:** Thanks so much for your suggestion! There are two differences between the Pixel-level Memory Bank (PMB) and our Tail-class Memory Bank (TMB). **First**, the PMB stores pixels from all classes, whereas our TMB only stores pixels from tail classes. **Second**, in our TMB, storing and retrieving are conducted on entire objects (we ensure that the pasted pixels form a meaningful object), whereas the PMB typically focuses on a fixed number of pixels without structural information (regardless of whether these pixels can form a complete image).
**We will first explain why we only store tail class pixels instead of all pixels.**
The table below shows the average number of pixels from head and tail classes per image in the ADE20K, Cityscapes, and COCO-Stuff 164K datasets.
|Dataset|ADE20K|Cityscapes|COCO-Stuff 164K|
|-------|------|----------|---------------|
| Head| 46685| 294290 | 60157|
| Tail| 18977| 31128| 22526|
It can be observed that the number of head class pixels in each image is 2.46 to 9.45 times greater than that of the tail classes, meaning that storing head class pixels would require significantly more memory.
**The table below** compares the performance differences between storing all and tail class pixels. It shows that the PMB, which incurs additional memory costs, performs almost the same as the TMB, and even shows a noticeable decline in the Cityscapes dataset. This is because head classes appear in almost every image (for example, in urban road datasets, it is hard to find an image without head class pixels like 'road' or 'sky'), so they do not need additional supplementation. Even if some images require supplementation of head classes, their larger pixel counts might cause them to overwrite the original tail class pixels when pasted, leading to a decline in performance. Thus, we only store tail class pixels.
| Dataset | ADE20K | Cityscapes | COCO-Stuff 164K |
| ------- | --------- | ---------- | --------------- |
| PMB | 49.09 | 82.07 | 42.66 |
| TMB | **49.20** | **82.71** | **42.73** |
**Next, we will explain why it is not feasible to focus on a fixed number of pixels.**
We conduct tests on the ADE20K dataset by supplementing a fixed number of tail class pixels (10000/20000/30000/40000) in each image and find that, compared to AUCSeg, the performance differences are -3.00%/-1.93%/-0.86%/-0.73%. This is because supplementing a fixed number of pixels can result in incomplete objects, such as adding only the front wheel of a bicycle, and therefore a loss of structural information. The model is then unable to learn complete and accurate features. Therefore, in our TMB, storing and retrieving are conducted on all pixels of an entire object.
We will include this part in the final version of the paper.
> **Q4:** For the memory bank update problem, is there a more appropriate selection method instead of random replacement?
**A4:** Thank you for your constructive suggestion! Based on your suggestion, we have now tried three other different selection methods on the ADE20K dataset:
- **First-In-First-Out (FIFO) replacement**: Prioritizes replacing the images that are first stored in the Tail-class Memory Bank.
- **Last-In-First-Out (LIFO) replacement**: Prioritizes replacing the images that are last stored in the Tail-class Memory Bank.
- **Priority Used (PU) replacement**: Prioritizes replacing images that have previously been selected by the retrieve branch.
The experimental results are shown in the table below.
| | Overall | Head | Middle | Tail |
| ------ | --------- | --------- | --------- | --------- |
| Random | 49.20 | **80.59** | **59.45** | 45.52 |
| FIFO | **49.35** | 80.51 | 58.71 | **45.80** |
| LIFO | 49.05 | 80.35 | 58.76 | 45.45 |
| PU | 49.21 | 80.24 | 58.73 | 45.65 |
FIFO and PU both show better performance overall and on tail classes compared to random sampling. However, LIFO, by updating only the most recently added images in the T-Memory Bank, causes the earlier images to remain unchanged. This leads to overfitting and, consequently, a decline in performance.
While these complex strategies can improve performance, the gains are relatively limited. The random replacement method, on the other hand, is easy to implement. We will include this part in the final version of the paper. In the future, we will explore more complex and effective replacement methods, hoping to provide a direction for other researchers to explore as well.
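The replacement strategies above can be sketched as small variations of one eviction rule. This is an illustrative sketch — the class name and interface are assumptions, and the PU policy is only indicated in a comment since it needs extra retrieval bookkeeping:

```python
import random

class ReplacementBank:
    """Fixed-capacity store with 'random', 'fifo', or 'lifo' eviction.
    Items are opaque; a 'pu' (priority-used) policy would additionally
    track which items the retrieve branch has already selected and
    evict those first."""
    def __init__(self, capacity, policy="random", seed=0):
        self.capacity, self.policy = capacity, policy
        self.items = []
        self.rng = random.Random(seed)

    def store(self, item):
        if len(self.items) < self.capacity:
            self.items.append(item)
        elif self.policy == "fifo":
            self.items.pop(0)      # evict the oldest entry
            self.items.append(item)
        elif self.policy == "lifo":
            self.items[-1] = item  # overwrite the most recent entry
        else:                      # random replacement
            self.items[self.rng.randrange(self.capacity)] = item

    def retrieve(self):
        return self.rng.choice(self.items)
```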
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's detailed response. Some of my concerns have been addressed. I am going to maintain my original score.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your feedback and acceptance. Following your suggestions, we will enrich the content of our paper in the final version. If you have any further questions, we would be happy to address them. | Rebuttal 1:
Rebuttal: **General Response**
Dear SAC, AC, and reviewers,
Thank you for your invaluable feedback. Based on your comments, we have revised the details and now offer a summary of our responses.
- **Additional Experiments:**
1. Different sampling methods for the Tail-class Memory Bank
2. Different AUC surrogate losses and AUC calculation methods
3. New ablation experiments for the hyperparameters
4. Extend AUCSeg to other pixel-level long-tail tasks
- **Explanations of the Method and Theory:**
1. The motivation for introducing the Tail-class Memory Bank, its mechanism, and why it is effective
2. Theoretical comparison between contrastive and AUC loss
3. Detailed analysis of dataset distribution to further explain the variations in experimental results across different datasets
- **Organization:**
1. Reorganize the symbol definitions and include a table of symbol definitions
2. Update our responses to the checklist
Below, we provide responses to some **Common Questions**:
------
> **C-Q1:** [Reviewer `WAeT` and `eudJ`] The performance improvement of the tail class varies across the three datasets. ADE20K shows a significant improvement, while the other two show relatively limited results. Could the author explain the reason?
**C-A1:** The performance gain depends on the degree of imbalance of the underlying dataset. To see this, we report the pairwise mean imbalance ratio $r_{m}$ (the imbalance ratio averaged over all class pairs).
In the table below, we compare $r_{m}$ for ADE20K, Cityscapes, and COCO-Stuff 164K, along with the tail-class performance improvements of AUCSeg over the runner-up method.
| Dataset | ADE20K | Cityscapes | COCO-Stuff 164K |
| ------------------------ | ------ | ---------- | --------------- |
| $r_{m}$ | 90.43 | 80.39 | 38.17 |
| Tail Classes Improvement | 1.21% | 0.75% | 0.38% |
The results suggest that the larger the degree of imbalance, the larger the improvement from our method. ADE20K has the largest degree of imbalance and therefore gains the most significant improvement.
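As an illustration, one way to compute a pairwise mean imbalance ratio from per-class pixel counts is sketched below. The exact definition of $r_m$ used in the paper may differ; the function name and the larger-over-smaller convention are assumptions of this sketch:

```python
import itertools

def pairwise_mean_imbalance_ratio(class_pixel_counts):
    """Mean imbalance ratio over all class pairs (one plausible reading of r_m).

    For each pair of classes, take the ratio of the larger count to the
    smaller one, then average over all pairs.
    """
    ratios = [
        max(a, b) / min(a, b)
        for a, b in itertools.combinations(class_pixel_counts, 2)
        if min(a, b) > 0
    ]
    return sum(ratios) / len(ratios)

# Toy example: three classes with highly skewed pixel counts
print(pairwise_mean_imbalance_ratio([1000, 100, 10]))  # → 40.0
```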
> **C-Q2:** [Reviewer `TAdP` and `eudJ`] The author should include ablation experiments for the batch size.
**C-A2:** We conduct ablation experiments on the ADE20K dataset to evaluate the impact of batch size. The results are shown below:
| Batch Size | Overall | Tail |
| ---------- | --------- | --------- |
| 1 | 34.73 | 29.66 |
| +AUCSeg | **40.14** | **35.74** |
| 2 | 45.50 | 41.34 |
| +AUCSeg | **46.86** | **42.93** |
| 4 | 47.45 | 43.28 |
| +AUCSeg | **49.20** | **45.52** |
| 8 | 49.35 | 45.46 |
| +AUCSeg | **49.36** | **45.53** |
| 16 | 50.07 | 46.32 |
| +AUCSeg | **50.96** | **47.03** |
The performance improves as the batch size increases. Moreover, our AUCSeg is consistently effective across different batch sizes. In the initial submission, we selected a batch size of 4. Although this does not yield the highest performance, it ensures that our experiments can be replicated on any GPU with 24GB of memory (such as the NVIDIA 3090 or 4090). We will include this part in the final version of the paper.
> **C-Q3:** [Reviewer `tNYv` and `eudJ`] Why design a Tail-class Memory Bank instead of using an improved version of stratified sampling for batch re-sampling?
**C-A3:** Stratified sampling can hardly cover all the involved classes with a small batch size (such as 4 in our paper). To ensure coverage, one has to employ a much larger batch size, incurring a much higher computational burden. **Even if we can cover the classes**, tail-class images may appear repeatedly, leading to overfitting and therefore a performance degradation. Our Tail-class Memory Bank, however, only involves pasting a portion of one image onto another, which acts as an implicit data augmentation. It mitigates the sampling problem without sacrificing generalization ability. The following empirical results support this claim.
**First**, we counted the number of images containing pixels from each class in the Cityscapes, ADE20K, and COCO-Stuff 164K datasets. See `Table 2-Table 4` in the PDF submitted in the `General Response`.
The results show that images with tail-class pixels are very limited. In ADE20K in particular, only 41 of the 20,210 training images contain pixels of the tail class with ID 97. Such tail-class images would be repeatedly sampled under stratified sampling, leading to overfitting on these repeated images.
**Next**, we train on the ADE20K dataset following the stratified sampling method. The experimental results are as follows:
| | Overall | Head | Middle | Tail |
| ------------------- | --------- | --------- | --------- | --------- |
| Stratified Sampling | 46.20 | 79.55 | 57.65 | 42.21 |
| T-Memory Bank | **49.20** | **80.59** | **59.45** | **45.52** |
The overfitting induced by stratified sampling reduces performance for both head and middle classes. The tail classes suffer an even more significant drop due to the heavy sample repetition within a batch. Our T-Memory Bank, in contrast, diversifies the backgrounds of the tail classes through its random pasting approach, helping the model better learn the features of the tail classes.
------
Please refer to the specific responses below for more information. We will update all these improvements in the next version.
Pdf: /pdf/55fd8a96d67278742e24e8eed7c311ccff526605.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
RegExplainer: Generating Explanations for Graph Neural Networks in Regression Tasks | Accept (poster) | Summary: The authors focus on the interpretability of GNN in graph regression tasks. They propose a novel explanation method called RegExplainer as a plug-in to existing explanation methods, such as GNNExplainer and PGExplainer. They also tackle mutual information estimation (graph information bottleneck), distribution shifting and continuously ordered decision boundaries that hinder current explanation techniques.
Strengths: 1. The writing flow is clear and easy to follow.
2. The illustration in Figure 3 is clear and straightforward.
Weaknesses: 1. The **contribution** is confusing:
1a. In the introduction (Lines 64-68), the authors mention *datasets* as a key contribution. However, the paper lacks detailed descriptions of these datasets in later sections. More information about the datasets, including their structure and significance, should be included to clarify this contribution.
2. Concerning the **methods** and corresponding **experimental settings**:
2a. The *subgraph connectivities* are not considered. In other words, RegExplainer could result in isolated nodes rather than "a subgraph".
2b. **Property 2 is overclaimed**. Lines 469-471 dismiss the ground truth joint distribution $p(Y^*,Y)$, and only focus on the remaining parts. The assumption weakens the paper, especially since "distribution shifting" is a key discussion of the paper.
2c. Section 4.3, especially Step 2, is hard to follow. The process for obtaining $(G^+)^{\Delta}$ and $G^*$ needs more detail. Specifically, Step 2 is used to approximate $I(G, G^*)$ and find the optimal $G^*$ as in Eq. (4) or (5). However, $G^{(mix)}$ relies on $(G^+)^{\Delta}$ and $G^*$. Given that $G^*$ is the variable to be optimized, how to pick $(G^+)^{\Delta}$ is missing.
2d. Whether $G^*$, $(G^+)^{\Delta}$, as well as $G^{mix}$, are binary {0,1} or continuous in [0,1] is unclear. If they are binary, the optimization for $G^*$ needs more explanation. If they are continuous, the relaxation and recovery to the final outcomes should be specified.
2e. The method of measuring distances between graphs or graph distributions is not clearly defined. The authors should clarify and justify whether MSE/RMSE are used, or another metric is needed.
Besides, the authors evaluate the distribution shifting by similarity/distance between embeddings (Table 3) rather than other distribution-based metrics.
2f. The last column in Table 2 shows a large distance from the original prediction, suggesting the *subgraph may not be faithful*.
2g. The problem definition in Lines 113-115 is unclear, particularly the criteria for an explanation to be considered valid (e.g., "can explain the prediction").
3. **Others**:
3a. The method appears suited for graph-level regression tasks. Could it apply to the node-level regression tasks, such as traffic prediction?
3b. "N" in Line 110 is undefined.
3c. What does "regression label in regression task" mean on Line 164?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please address the concerns in the "Weakness".
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors show the limitations on Sec. 6.
However, as mentioned above, the assumption and proof of Property 2 should be carefully considered.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer 4dqr, thank you for taking the time to review our work and providing feedback. In the following, we aim to address your questions and concerns.
A-1a: Thank you for your suggestion. We will include detailed descriptions of the datasets in future versions of the paper. In the appendix, we describe how these datasets were created and the rationale behind their creation.
A-2a: Thank you for pointing this out. Subgraph connectivity is indeed an important issue. However, in this work, our main goal was to extend GIB-based explainers to graph regression tasks, following the settings of previous works in other aspects. We are keen to explore and address the connectivity of explanatory subgraphs in future research.
A-2b: We incorporate the InfoNCE loss into the GIB objective with reference to previous work[1], where InfoNCE is a lower bound of mutual information (MI), and there is no constraint on the distribution of variables. Thank you for pointing out the issue in the appendix. We will provide a more detailed explanation in the next version.
A-2c: We are happy to address your concern. In Step 1 of Section 4.3, we describe in detail how to select $G^+$ and $G^-$. “We can define two randomly sampled graphs as positive neighbor $G^+$ and negative neighbor $G^−$, where $G^+$’s label $Y^+$ is closer to $Y$ than $G^−$’s label $Y^−$, i.e., $|Y^+ − Y| < |Y^− − Y|$.”
A-2d: They are continuous. The final explanatory subgraph is obtained by selecting the top-k edges based on their weights.
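A minimal sketch of this top-k discretization step follows; the function name and tie-breaking behavior are our assumptions, not the paper's code:

```python
import numpy as np

def topk_edge_mask(edge_weights, k):
    """Binarize a continuous edge mask by keeping the k highest-weight edges.

    edge_weights: 1-D array of learned weights in [0, 1], one per edge.
    Returns a {0, 1} mask selecting the edges of the explanatory subgraph.
    """
    mask = np.zeros_like(edge_weights, dtype=int)
    topk = np.argsort(edge_weights)[-k:]  # indices of the k largest weights
    mask[topk] = 1
    return mask

weights = np.array([0.1, 0.9, 0.4, 0.8, 0.2])
print(topk_edge_mask(weights, 2))  # edges 1 and 3 are kept → [0 1 0 1 0]
```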
A-2e: Thank you for pointing this out. We will correct the typo in Table 3 in the next version of the paper. We use RMSE as the metric for measuring distances between graphs or graph distributions. Besides, metrics like KL-divergence can be challenging to compute in practice. Therefore, we use graph embeddings and prediction labels to measure distribution shifts between original graphs and explanation subgraphs.
A-2f: The purpose of Table 2 is to demonstrate that the explanatory subgraphs can indeed be out-of-distribution (OOD). The results in the last column indicate that these subgraphs are not sufficiently faithful and can introduce bias into the explanations. To address this issue, we introduced the mixup method to mitigate the OOD problem of the explanatory subgraphs.
A-2g: "Can explain" means that the explanation should identify the reasons behind the GNN's prediction. For example, in the defined datasets, different motifs and their corresponding features lead to different labels. The GNN relies on these motifs to make predictions, and the goal of the explainer is to identify these motifs as part of the explanation.
A-3a: Yes, our method can be extended to node-level regression tasks. In the case of a three-layer GCN, node-level tasks essentially involve graph tasks centered on 3-hop subgraphs around each node. Therefore, our method can be easily adapted for node-level tasks. We look forward to incorporating more datasets in future work.
A-3b: Thank you for pointing this out. "N" represents the number of graphs in the dataset. We will include this clarification in the next version of the paper.
A-3c: The term "regression label" refers to the prediction made by the GNN for a given graph or subgraph.
[1] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation Learning with Contrastive Predictive Coding.
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses. However, some of the answers do not fully address my concerns.
> To A-2b:
If it cannot handle, or ignore, the joint distribution, how could you justify the generality of Property 2?
> To A-2c:
Only the process of picking $G^+$ and $G^-$ is included, while the process of picking $(G^{+})^\Delta$ is missing.
> To A-2d:
Such statements are missing in the manuscript.
> To A-2e:
The metrics should be further justified, since it is the key for evaluating the graph distributions, as well as the methods.
For example, is it possible to quickly survey the metrics used in the papers for the same tasks?
> To A-2g:
Is it possible to give a more formal definition or reference? E.g. if the task desires a lower/higher value under some specific evaluation measurements (please make it more clear and formal).
> To A-3c:
It is interesting that "regression *label*" refers to the *prediction* of a GNN.
---
Reply to Comment 1.1.1:
Title: Second Round Reply (1/2)
Comment: Dear reviewer 4dqr,
We are glad to have further discussion with you. Your insightful comments are valuable to us and help improve the quality of our paper. We are grateful for the time and effort you have invested in providing such a thorough review. If you have any remaining concerns or require further clarification, please let us know. We are more than happy to address any additional questions you may have.
> Re to A-2b:
A: We see your concern regarding Property 2. In this property, to extend the GIB objective to graph regression tasks, we first have $I(Y^*; Y) = \sum_{Y^*,Y} p(Y^*,Y) \log \frac{p(Y|Y^*)}{p(Y)}$. Then, following previous work, we apply the "proportional to" trick [1] because $p(Y^*,Y)$ is an unbounded value. We then proportionally optimize $\sum_{Y^*,Y} p(Y^*,Y) \log \frac{p(Y|Y^*)}{p(Y)}$ with $\text{sim}\left(Y^*, Y\right) \propto \frac{p(Y|Y^*)}{p(Y)}$. This objective is specifically designed for graph regression tasks, and we plan to include more synthetic and real-world datasets in our study to provide a more comprehensive evaluation. Currently, the performance on three synthetic datasets and one real-world dataset demonstrates its effectiveness.
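For readers unfamiliar with the InfoNCE objective referenced here, a generic single-anchor form (following van den Oord et al. [1], not the paper's exact loss) can be sketched as:

```python
import numpy as np

def info_nce(sim_pos, sim_negs, tau=1.0):
    """InfoNCE loss for a single anchor.

    sim_pos: similarity score to the positive sample; sim_negs: similarity
    scores to the negatives; tau: temperature. Minimizing the expected loss
    maximizes a lower bound on the mutual information between the paired
    variables (van den Oord et al., 2018).
    """
    logits = np.concatenate(([sim_pos], np.asarray(sim_negs, dtype=float))) / tau
    logits = logits - logits.max()  # numerical stability before exponentiation
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

# One positive (similarity 1.0) against one negative (similarity 0.0)
print(info_nce(1.0, [0.0]))  # = log(1 + e^{-1}) ≈ 0.3133
```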
> Re to A-2c:
A: Thank you for your further question. For the graph sample $G^+$, we have $(G^+)^* = E(G^+)$, where $(G^+)^*$ is the label-preserving subgraph of $G^+$. We then pick $(G^+)^{\Delta} = G^+ - (G^+)^*$ as the label-irrelevant subgraph. This procedure is included in the equations above line 189, e.g., $G^{\text{(mix)}+} = G^* + (G^+)^{\Delta} = G^* + (G^+ - (G^+)^*)$.
We will provide a more detailed description of this process in Section 4.3, Step 1, in the next version of our paper.
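The construction in the answer above can be sketched on edge-weight vectors, assuming (purely for illustration) that all graphs are represented over a shared edge index; the connection of disconnected parts via randomly sampled edges (Appendix C) is omitted:

```python
import numpy as np

def mixup_graphs(g_star, g_plus, g_plus_star):
    """Sketch of G^(mix)+ = G* + (G^+ - (G^+)*), on edge-weight vectors.

    g_star:      edge weights of the candidate explanation G*.
    g_plus:      edge weights of the positive neighbor G^+.
    g_plus_star: edge weights of G^+'s label-preserving subgraph (G^+)*.
    """
    delta_plus = g_plus - g_plus_star        # (G^+)^Delta: label-irrelevant part
    return np.clip(g_star + delta_plus, 0.0, 1.0)

g_star = np.array([1.0, 0.0, 0.5, 0.0])
g_plus = np.array([0.0, 1.0, 0.2, 0.8])
g_plus_star = np.array([0.0, 1.0, 0.0, 0.0])
print(mixup_graphs(g_star, g_plus, g_plus_star))  # → [1.  0.  0.7 0.8]
```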
> Re to A-2d:
A: Thank you for pointing this out. We mentioned that we are following previous work and briefly described this from lines 119 to 122 in the paper. We will ensure these clarifications are more explicitly detailed in the next version of the paper.
> Re to A-2e:
A: Thank you for your insightful suggestion. In response, we conducted a survey of metrics commonly used in similar tasks within the literature. The most prevalent metrics include:
Graph Edit Distance (GED)
Wasserstein Distance (Earth Mover's Distance)
Maximum Mean Discrepancy (MMD)
Jensen-Shannon Divergence (JSD, a variant of KL divergence)
Graph Kernel Methods
These metrics vary in complexity and applicability depending on the nature of the graphs and the specific task at hand. Given the context of our work, we believe that MMD might be particularly suitable, while Graph Kernel Methods could also provide valuable insights into graph structure shifts.
**We are currently conducting experiments to evaluate the MMD metric, and we will include these results in the supplementary material of the next version of the paper.** This will help us better justify our choice of metrics and possibly introduce additional metrics for a more comprehensive evaluation.
We appreciate your recommendation and will incorporate these findings into the revised manuscript to enhance the clarity and rigor of our evaluation process.
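As context for the MMD evaluation mentioned above, a generic RBF-kernel MMD estimator over graph embeddings looks as follows; this is a textbook sketch, not the authors' implementation, and the bandwidth `gamma` is an arbitrary toy-scale choice:

```python
import numpy as np

def mmd_rbf(X, Y, gamma=0.05):
    """Biased (V-statistic) estimate of squared MMD with an RBF kernel.

    X, Y: (n, d) and (m, d) arrays of embeddings from the two distributions.
    """
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
        return np.exp(-gamma * sq)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
same = mmd_rbf(rng.normal(size=(100, 8)), rng.normal(size=(100, 8)))
shifted = mmd_rbf(rng.normal(size=(100, 8)), rng.normal(3.0, 1.0, size=(100, 8)))
print(same < shifted)  # a distribution shift yields a larger MMD
```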
> Re to A-2g:
A: We adhere to established definitions from previous work when evaluating the explainability of GNNs. For datasets with explanation ground truth, we use the *AUC-ROC* (Area Under the Receiver Operating Characteristic Curve) as the evaluation metric [2, 3] to measure explanation performance, where a higher AUC-ROC score indicates better performance.
For datasets without ground truth explanations, we utilize metrics such as the fidelity-sparsity score and the robust-fidelity score [4] to assess the quality of the explanations. These metrics are designed to balance the trade-off between explanation fidelity (how well the explanation reflects the model's predictions) and sparsity (how concise the explanation is).
In our work, the AUC-ROC metric is employed, with higher scores reflecting superior explanation performance. We rely on this metric because it has been widely accepted in the literature as a standard measure for evaluating the quality of model explanations.
---
Reply to Comment 1.1.2:
Title: Second Round Reply (2/2)
Comment: > Re to A-3c:
A: This term is also used in previous works[5, 6, 7, 8]. We will make a clear description in the next version of the paper.
[1]. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation Learning with Contrastive Predictive Coding.
[2]. Zhitao Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. Gnnexplainer: Generating explanations for graph neural networks. Advances in neural information processing systems, 32, 2019
[3]. Dongsheng Luo, Wei Cheng, Dongkuan Xu, Wenchao Yu, Bo Zong, Haifeng Chen, and Xiang Zhang. Parameterized explainer for graph neural network. Advances in neural information processing systems, 33:19620–19631, 2020.
[4]. Xu Zheng, Farhad Shirani, Tianchun Wang, Wei Cheng, Zhuomin Chen, Haifeng Chen, Hua Wei, Dongsheng Luo. Towards Robust Fidelity for Evaluating Explainability of Graph Neural Networks.
[5]. Haoliang Yuan, Junjie Zheng, Loi Lei Lai, Yuan Yan Tang. A constrained least squares regression model.
[6]. Cheng Li, Virgil Pavlu, Javed Aslam, Bingyu Wang & Kechen Qin. Learning to Calibrate and Rerank Multi-label Predictions.
[7]. Xin Ding, Yongwei Wang, Zuheng Xu, William J. Welch, Z. Jane Wang. Continuous Conditional Generative Adversarial Networks: Novel Empirical Losses and Label Input Mechanisms.
[8]. Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Serena Wang. Pairwise Fairness for Ranking and Regression. | Summary: This work proposes a method to generate instance-level GNN prediction explanations specifically for graph regression tasks. This method addresses distribution shifting, a problem in regression, by using mix-up for contrastive learning. The work is evaluated on four datasets, both synthetic and real-world.
Strengths: The work provides thorough theoretical justification for the objective function in Eq. (4). Additionally, it creates a clever mix-up-based contrastive learning approach that directly addresses the distribution shift problem specific to regression.
Weaknesses: More real-world datasets would demonstrate this method’s more general applicability. The evaluation is limited by the inclusion of only one real-world dataset, Crippen.
See questions.
Technical Quality: 4
Clarity: 3
Questions for Authors: How does this method perform with/against classification explanation models? As the explainer can be modularly applied to existing trained GNNs, this would be interesting to see if it can improve the accuracy of explanation generations. This work [1] is a good work on instance and model-level explanations for graph classification tasks.
Is graph mix-up performed on the graphs explicitly or within latent space? If the mix-up occurs in graph space, then how is the subgraph G* combined with (G^+)^{Delta} and (G^-)^{Delta}? How are edges added between these different subgraphs? Furthermore, for more complex regression datasets, naively adding edges can drastically change the label for each graph. Is this strategy then limited to graph regression datasets which inherently rely on graph structures to derive labels? If the mix-up occurs within latent space, then how does this work differentiate itself from graph rationalization works [2, 3]? If the mix-up occurs in latent space, then additional studies to compare against graph rationalization baselines should be included.
[1] Xuanyuan, Han, et al. "Global concept-based interpretability for graph neural networks via neuron analysis." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 9. 2023.
[2] Wu, Ying-Xin, et al. "Discovering invariant rationales for graph neural networks." arXiv preprint arXiv:2201.12872 (2022).
[3] Liu, Gang, et al. "Graph rationalization with environment-based augmentations." Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2022.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The work requires an accurate trained prediction model. The model explanations are not used to retrain the GNN in any way.
Additional limitations are addressed in the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer X7ek, thank you for taking the time to review our work and providing feedback. In the following, we aim to address your questions and concerns.
> How does this method perform with/against classification explanation models? As the explainer can be modularly applied to existing trained GNNs, this would be interesting to see if it can improve the accuracy of explanation generations. This work [1] is a good work on instance and model-level explanations for graph classification tasks.
A1: Yes, we have compared our method with several existing methods for graph classification, including GRAD, ATT, MixupExplainer, GNNExplainer, and PGExplainer, as shown in Table X. It is important to note that these methods were originally designed for graph classification tasks and use cross-entropy loss in their objective functions, which cannot be directly applied to graph regression tasks. Therefore, we adapted them for comparison by using MSE loss.
Work [1] is an excellent study that explores graph neural network interpretability from a novel perspective. Our method could potentially be adapted to the framework presented in their Equation 6 and might improve the performance of their approach. We look forward to incorporating and citing this method as a baseline in our future work.
> Is graph mix-up performed on the graphs explicitly or within latent space? If the mix-up occurs in graph space, then how is the subgraph G* combined with $(G^+)^{\Delta}$ and $(G^-)^{\Delta}$? How are edges added between these different subgraphs?
A2: We describe the mix-up process in detail in Appendix C. The mix-up is performed on the graph explicitly by mixing edge weights, and different subgraphs are connected through randomly sampled connection edges.
> Furthermore, for more complex regression datasets, naively adding edges can drastically change the label for each graph. Is this strategy then limited to graph regression datasets which inherently rely on graph structures to derive labels?
A3: Yes, in this work, we follow the settings of previous studies and focus primarily on the graph structure without incorporating edge features. We are interested in exploring the impact of edge features on our method's performance in future research.
> If the mix-up occurs within latent space, then how does this work differentiate itself from graph rationalization works [2, 3]? If the mix-up occurs in latent space, then additional studies to compare against graph rationalization baselines should be included.
A4: Our mix-up primarily occurs on edge weights. We will consider more related works[2, 3] and incorporate their strengths and citations in future studies.
>Limitations: The work requires an accurate trained prediction model. The model explanations are not used to retrain the GNN in any way.
A5: Thank you for your suggestion. This work primarily focuses on post-hoc explanations. In future work, we will explore methods that involve retraining the GNN to enhance both the GNN and the explainer's performance.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their rebuttal. After reading all of the reviews and responses, I will choose to maintain my score.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer X7ek,
Thank you very much for your reply. We truly appreciate the time and effort you invested in evaluating our work and providing valuable feedback.
Although the score did not change, your constructive comments are invaluable to us, and we are committed to addressing all your concerns thoroughly. We are grateful for your support and the opportunity to enhance our work.
If you have any further suggestions or need additional clarifications, please let us know. We are more than happy to provide any additional information or address any remaining questions. | Summary: The authors propose an explanation method to interpret the graph regression models. The techniques are built upon the information bottleneck theory and contrastive learning. The authors show that their explanations are accurate in five graph regression datasets.
Note: If authors address my concerns in questions and limitations, I am willing to upgrade my allocated score.
Strengths: Explanation of graph neural networks and, particularly, explaining graph regression is both a relevant and interesting problem.
The major parts of the proposed technique are clearly explained and easily understood in detail.
The proposed method relies on some solid theoretical properties like information bottleneck.
The code is released with the paper, which helps the work to be reproducible.
Weaknesses: The paper is densely written in Section 4 and is hard to follow. Instead of explaining each part of the algorithm separately and clearly explaining why these components exist, the authors rely too much on theorizing the problem. For example, I am unsure if a reader is particularly interested in knowing about lower and upper bounds and distribution shifts before seeing the proposed method, and maybe moving this section away as motivation can solve this problem.
The authors have mentioned that their evaluation relies on ground truth, but the measures for evaluating explanations are barely discussed. See questions below.
Technical Quality: 2
Clarity: 2
Questions for Authors: Where are the ground truth vectors for your evaluation? How are they obtained? How can ground truth for explanations be in the datasets? This is the most important part of your paper, and it is left to the imagination of readers.
I argue that your technique is better because you add more samples, and the surrogate is learning just that. So basically, you can add this sampling to those explanations, and you don't need the rest of your method (evidence in Section 5.3 Figure 5 except for the BA-motiv dataset). Can you argue against this concern?
Why are the ablation results different for BA-Motiv-Volume?
What is the real limitation of your technique? You have written: "Specifically, although our approach can be applied to explainers for graph regression tasks in an explainer-agnostic manner, it cannot be easily applied to explainers built for explaining the spatio-temporal graph due to the dynamic topology and node features of the STG." What does this mean?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: I personally think the main limitation is that this approach is an add-on approach. I am not in favor of add-on model agnostic approaches. What I mean is explanations that sit on top of other explanation techniques. I think this makes the design of these techniques extra complicated, and finding faults in explanations becomes harder. I think if the authors take their sampling and improve it, it can replace the GNN and PGE and become a more general approach.
The authors have also not stated that the graph regression task is not a very popular task in graph settings. Based on this, I also would like to see how this approach can be extended for use in other graph tasks: node classification etc.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer B4J7, thank you for taking the time to review our work and providing feedback. We appreciate your thorough review and aim to address your questions and concerns in detail. Due to character limitations, we will provide a detailed response to the remaining points in the official comment section. Please let us know if you have any further questions or need additional clarification, we are happy to discuss with you.
> The paper is densely written ... problem.
A1: Thank you very much for your suggestion. In this work, our goal is to provide reliable explanations for graph regression tasks. Previous methods [1, 2, 3] used the vanilla GIB for graph classification, which cannot be trivially applied to graph regression tasks. Therefore, we introduced InfoNCE loss into GIB and theoretically validated its effectiveness. Next, we explain that the OOD problem is more severe in graph regression tasks. On this basis, we introduce the mixup method and combine it with InfoNCE loss to propose our method and model, which includes a contrastive learning objective function with contrastive loss. We aim to balance the effectiveness of the model design with the reliability of the theoretical foundation. In future versions, we will improve the organization of our paper, provide more detailed explanations of the model design, and move some of the theoretical derivations to the appendix.
> Where are the ground truth vectors for your evaluation? How are they obtained? How can ground truth for explanations be in the datasets? This is the most important part of your paper, and it is left to the imagination of readers.
A2: (1). What the explanation subgraphs look like: As introduced in Section 3 (Preliminary), lines 119-120, we follow the setup from previous work [1, 2], using a binary edge mask to represent the explanation subgraph of the original graph. Specifically, for each edge in the original graph, our explainer produces a prediction value. If this value is 1, the edge is part of the explanation; if it is 0, the edge is not relevant to the explanation. For each graph, the mask containing these edge-weight values is a vector.
(2)&(3). How the ground truth for explanation subgraphs is obtained: Establishing the ground truth for explanation subgraphs is a critical step. We adopt the approach from previous work, including both synthetic and real datasets. In synthetic datasets like BA-motif-volume, we designed the dataset such that the label is related to the motifs and their corresponding features within the graph. Therefore, the corresponding motif subgraph is the explanation subgraph and is used to evaluate the explainer's performance. Additionally, we designed a graph regression task with ground truth explanation subgraphs based on the chemical dataset Crippen. More detailed information can be found in Appendix E.1.
(4). How we evaluate the explainer’s performance and what metrics are used: As mentioned earlier, the explainer produces a weight for each edge in the graph and determines whether the edge belongs to the explanation subgraph based on this weight. In practice, the edge weights are floating-point numbers, and we assess the accuracy of the explanation by calculating the AUC-ROC with respect to the ground truth, thus evaluating the explainer’s performance.
(5). Additional Information: In some other works, evaluation methods without ground truth are used, such as fidelity/sparsity scores to estimate the quality of the explanation subgraph. However, these evaluation methods face issues with out-of-distribution (OOD) problems. Therefore, we did not use datasets and metrics without ground truth in this work. We plan to introduce more datasets, both with and without ground truth, in future work to better evaluate our method and facilitate related research.
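A minimal sketch of the AUC-ROC evaluation in point (4), scoring continuous edge weights against a binary ground-truth mask, is shown below. We hand-roll the AUC in its Mann-Whitney form for self-containment; in practice `sklearn.metrics.roc_auc_score` would be used:

```python
def edge_mask_auc(gt_mask, pred_weights):
    """AUC-ROC of continuous edge weights against a binary ground-truth mask.

    gt_mask: 0/1 labels, 1 for edges inside the ground-truth motif.
    pred_weights: the explainer's continuous importance score per edge.
    """
    pos = [w for m, w in zip(gt_mask, pred_weights) if m == 1]
    neg = [w for m, w in zip(gt_mask, pred_weights) if m == 0]
    # Fraction of (positive, negative) pairs ranked correctly; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

gt = [1, 1, 0, 0, 0]
scores = [0.9, 0.7, 0.8, 0.2, 0.1]
print(edge_mask_auc(gt, scores))  # ≈ 0.833
```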
> I argue ... concern?
A3: From my understanding, you believe that the improvement in explainer performance comes from sampling and mixup, and thus that the contrastive learning and contrastive loss components are unnecessary. Let me clarify this point; if my understanding is incorrect, I would be happy to further discuss and clarify the issue with you.
First, as mentioned in A1, our method aims to explain graph regression tasks, whereas previous methods were based on vanilla GIB and focused on explaining graph classification tasks. By introducing InfoNCE loss, we can better explain graph regression tasks. The sampling and mixup are employed to address the OOD (out-of-distribution) problem during the explanation process. The combination of these components is crucial for effectively improving the model's performance.
As shown in Figure 5 of the ablation study, simply using the mixup method and sampling (dropping the InfoNCE module) leads to a decrease in model performance. Therefore, just sampling is not sufficient to fully address the problem.
[1] Zhitao Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. Gnnexplainer: Generating explanations for graph neural networks. Advances in neural information processing systems, 32, 2019.
[2] Dongsheng Luo, Wei Cheng, Dongkuan Xu, Wenchao Yu, Bo Zong, Haifeng Chen, and Xiang Zhang. Parameterized explainer for graph neural network. Advances in neural information processing systems, 33:19620–19631, 2020.
[3] Hao Yuan, Haiyang Yu, Jie Wang, Kang Li, and Shuiwang Ji. On explainability of graph neural networks via subgraph explorations. In International Conference on Machine Learning, pages 12241–12252. PMLR, 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. You have addressed some of my concerns, and I can raise my score to +1—best of luck.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer B4J7,
Thank you very much for your thoughtful review and for increasing the score of our paper. We sincerely appreciate the time and effort you invested in evaluating our work and providing valuable feedback. Your positive assessment and constructive comments have been instrumental in helping us improve the paper.
---
Rebuttal 2:
Title: Rebuttal Part 2
Comment: > Why are the ablation results different for BA-Motif-Volume?
A4: This is a very good observation, and I am glad you pointed it out so we can explain it. We will include the relevant explanation in future versions to improve the quality of the paper.
First, we observed that in the BA-motif-volume dataset and the Crippen dataset, the mixup module has a greater impact on method performance. In contrast, in the other two datasets, the InfoNCE loss (contrastive learning) has a more significant influence. We believe this is due to the characteristics of the datasets: in BA-motif-volume, all graphs are of the same size. Therefore, for a well-trained GNN, the explanation subgraphs (which are only parts of the original graphs) are significantly out-of-distribution (OOD), which reduces model and explainer performance. By using the mixup method to address this issue, we can effectively improve performance. In the other two datasets, the graph sizes are dynamic, and the trained GNNs are relatively more robust, so the performance loss caused by not using mixup is not as substantial.
> What is the real ... mean?
A5: This means that our method cannot be trivially applied to tasks involving dynamic graph structures, such as spatio-temporal graphs (STGs). In STGs, the graph structure changes over time, which poses challenges for selecting samples for contrastive learning and mixup. The dynamic nature of the topology and node features makes it difficult to apply our approach directly. We aim to extend our method to handle dynamic graph structures in future work.
> Limitations
A6: I understand your concern. However, the add-on approach actually offers advantages. Unlike single and fixed explainer methods, our framework can be flexibly applied to existing explainer models, making it suitable for graph regression tasks and improving their performance. In our code, we have also included implementations of Regexplainer based on GNNExplainer and PGExplainer. These implementations can directly replace the original GNNExplainer and PGExplainer and be used straightforwardly.
> The authors have also not stated ... etc.
A7: Our method is specifically designed to improve explanations for graph regression tasks. For graph classification tasks, we can adapt our approach by replacing the MSE loss in the objective function with cross-entropy loss. Additionally, our method can be extended to node-level tasks. For example, in a 3-layer GNN, a node classification task can be viewed as a graph task on 3-hop subgraphs centered around nodes, and then a variant of our method with cross-entropy loss could be easily adapted to the dataset. | Summary: The paper introduces XAIG-R, a novel explanation method for interpreting graph regression models. It addresses distribution shifting and decision boundary issues, leveraging the graph information bottleneck theory (GIB) and self-supervised learning.
Strengths: - Intuitive and clear presentation and illustration.
- Previous works have primarily focused on explaining GNN models in classification tasks, leaving a gap in understanding graph regression models. This paper specifically targets the explanation of graph regression tasks, addressing a previously unexplored area.
- Extensive Experimental Validation with well-designed settings.
Weaknesses: - Limited Discussion on Computational Efficiency
Technical Quality: 3
Clarity: 3
Questions for Authors: - In this work, GIB is used as Explainer, have you also studied other instance-based approach like GNNExplainer etc.
- How does the proposed method handle dynamic graph topology changes, and what are the implications for real-world applications with evolving graph structures? (Directed Graph, Hyper Graph)
- Any other general-used datasets are tested? Especially those real-world graph datasets.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer LTFm, thank you for taking the time to review our work and providing feedback. In the following, we aim to address your questions and concerns.
> Limited Discussion on Computational Efficiency
A1: Thank you for pointing this out. We are glad to supplement our analysis of computational complexity, which will also be included in future versions.
In the implementation, we transform the graph data structure from a sparse adjacency matrix representation into a dense edge-list representation. We analyze the computational complexity of our mix-up approach here.
Given a graph $G_a$ and a randomly sampled graph $G_b$, with $G_a$ containing $M_a$ edges and $G_b$ containing $M_b$ edges (where $M_a>0$ and $M_b>0$), the graph extension operation on edge indices and masks, which extends their sizes from $M_a$ and $M_b$ to $M_a+M_b$, has complexity $O(2(M_a+M_b))$. Generating $\eta$ cross-graph edges costs $O(\eta)$, so the full mix-up operation has complexity $O(2(M_a+M_b)+\eta)$.
Since $\eta$ is usually a small constant, the time complexity of our mix-up approach is $O(2(M_a+M_b))$.
Letting $M$ denote the largest number of edges over graphs in the dataset, the time complexity of mix-up simplifies to $O(M)$.
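The linear cost is easy to see from an edge-list sketch of such a mix-up step (function and variable names here are illustrative assumptions, not the released implementation); each phase touches every edge at most once, matching the $O(M_a + M_b + \eta)$ count:

```python
import random

def mixup_edge_lists(edges_a, mask_a, edges_b, mask_b,
                     num_nodes_a, num_nodes_b, eta=2):
    """Merge two graphs in dense edge-list form and connect them with
    eta random cross-graph edges.  Linear in M_a + M_b + eta."""
    # Extension: shift G_b's node ids past G_a's and concatenate -> O(M_a + M_b).
    shifted_b = [(u + num_nodes_a, v + num_nodes_a) for u, v in edges_b]
    edges = edges_a + shifted_b
    mask = mask_a + mask_b
    # Cross-graph edges: O(eta).
    for _ in range(eta):
        u = random.randrange(num_nodes_a)
        v = num_nodes_a + random.randrange(num_nodes_b)
        edges.append((u, v))
        mask.append(0.0)  # cross edges never belong to the explanation
    return edges, mask

edges, mask = mixup_edge_lists([(0, 1), (1, 2)], [1.0, 0.0],
                               [(0, 1)], [1.0], num_nodes_a=3, num_nodes_b=2)
print(len(edges))  # 5: the 2 + 1 original edges plus 2 cross-graph edges
```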
> In this work, GIB is used as Explainer, have you also studied other instance-based approach like GNNExplainer etc.
A2: GIB is a widely used theoretical foundation in related work. Both GNNExplainer and PGExplainer are based on GIB. In our experiments, we considered both GNNExplainer and PGExplainer and included a comparative study applying our method to GNNExplainer. The results are presented in Table 1 in the paper, as rows "GNNExplainer", "PGExplainer" and "+RegExplainer".
> How does the proposed method handle dynamic graph topology changes, and what are the implications for real-world applications with evolving graph structures? (Directed Graph, Hyper Graph)
A3: Thank you for pointing this out. We also discuss the issue of dynamic graphs in the limitations section. This is a very valuable problem, and we plan to further investigate the explainability of dynamic graph topology, evolving graph structures, and spatio-temporal graphs. This is our direction for future work.
> Any other general-used datasets are tested? Especially those real-world graph datasets.
A4: We would very much like to include more real-world datasets. However, the fact is that graph regression datasets containing ground truth explanation subgraphs are very rare. In this work, we created Crippen [1] as a real-world dataset. We will strive to discover, generate, and use more real-world datasets in future work.
[1] John S Delaney. Esol: estimating aqueous solubility directly from molecular structure. Journal of chemical information and computer sciences, 44(3):1000–1005, 2004.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer LTFm,
Thank you once again for your detailed and insightful feedback. We are committed to addressing all concerns and ensuring the highest quality of our work. Your comments have been incredibly valuable, and we have made clarifications and provided more analysis based on your suggestions.
To ensure we fully meet your expectations, could you please provide any further feedback or confirm if the revisions address your concerns? Your prompt response would be greatly appreciated as we finalize our revisions.
Thank you for your time and effort. | Rebuttal 1:
Rebuttal: Dear Reviewers,
We sincerely appreciate your time, consideration, and valuable comments, which have been instrumental in refining our work. If you have any further questions or concerns regarding our response or the current draft, please let us know. We are more than happy to discuss them in detail. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper addresses the challenge of interpreting graph regression models, a fundamental yet less explored task in graph learning compared to classification tasks. Existing explanation techniques are predominantly designed for classification, resulting in a gap for regression tasks. Based on the recent advances on information bottleneck and mix-up framework, the author proposes a novel objective to interprete GNN models in regression tasks.
Strengths: 1. The Performance of RegExplainer is exceptionally good compared with existing baselines in Table 1, which supports the claim of the paper well.
2. The background knowledge and related work summarization is comprehensive and easy to follow.
Weaknesses: 1. Some of the model designs are not motivated consistently by a pronounced challenge. The paper seems to be a combination of Mixupexplainer, GNNExplainer and G-Mixup.
2. The challenges of graph regression explanation and distribution shift seem to be independent. The author doesn't provide any justification based on the graph topology; only embeddings/predicted values are reported in Figure 3.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In Figure 6, why does the selection of $\alpha$ seem not to affect the overall performance? It seems the InfoNCE loss is negligible in the optimization of the proposed method
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer KBtu, thank you for taking the time to review our work and providing feedback. In the following, we aim to address your questions and concerns.
> Some of the model designs are not motivated consistently by a pronounced challenge. The paper seems to be a combination of Mixupexplainer, GNNExplainer and G-Mixup.
A1: In this work, our goal is to provide reliable explanations for graph regression tasks. Previous methods [1, 2, 3], particularly the vanilla GIB used for graph classification, cannot be trivially applied to graph regression tasks. Therefore, we introduced InfoNCE loss into GIB and theoretically validated its effectiveness. We then recognized that the OOD (out-of-distribution) problem is more severe in graph regression tasks, so we incorporated the mixup method and combined it with InfoNCE loss to develop a contrastive learning objective function that includes contrastive loss. Our model design is cohesive and aims to address a specific problem, rather than simply combining Mixupexplainer, GNNExplainer, and G-Mixup.
> The challenges of graph regression explanation and distribution shift seem to be independent. The author doesn't provide any justification based on the graph topology; only embeddings/predicted values are reported in Figure 3.
A2: The explainability of graph regression and distribution shift are not independent issues. We found that the distribution shift problem is more severe in graph regression tasks. To generate better explanation subgraphs, we need to correct the subgraph distribution during the explanation process to avoid biased interpretations. Figure 3 illustrates why addressing the distribution shift problem is necessary. It shows that the GNN makes completely incorrect predictions for the BA-Motif-Volume samples, which significantly affects our estimation of $I(Y^*, Y)$ and leads the objective function in the wrong direction. Therefore, we need to address the explainability of graph regression and the distribution shift of subgraphs together.
> In Figure 6, why does the selection of $\alpha$ seem not to affect the overall performance? It seems the InfoNCE loss is negligible in the optimization of the proposed method
A3: This is because our method exhibits strong robustness to the selection of hyperparameters. However, this does not imply that the InfoNCE loss is negligible. As shown in the ablation study in Figure 5, the performance of the method significantly decreases when the InfoNCE loss module is removed. Therefore, Figures 5 and 6 together demonstrate the robustness and effectiveness of the InfoNCE loss.
[1] Zhitao Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. Gnnexplainer: Generating explanations for graph neural networks. Advances in neural information processing systems, 32, 2019.
[2] Dongsheng Luo, Wei Cheng, Dongkuan Xu, Wenchao Yu, Bo Zong, Haifeng Chen, and Xiang Zhang. Parameterized explainer for graph neural network. Advances in neural information processing systems, 33:19620–19631, 2020.
[3] Hao Yuan, Haiyang Yu, Jie Wang, Kang Li, and Shuiwang Ji. On explainability of graph neural networks via subgraph explorations. In International Conference on Machine Learning, pages 12241–12252. PMLR, 2021.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer KBtu,
Thank you once again for your detailed and insightful feedback. We are committed to addressing all concerns and ensuring the highest quality of our work. Your comments have been incredibly valuable, and we have made clarifications based on your suggestions.
To ensure we fully meet your expectations, could you please provide any further feedback or confirm if the revisions address your concerns? Your prompt response would be greatly appreciated as we finalize our revisions.
Thank you for your time and effort.
---
Rebuttal 2:
Title: Respectfully Requesting an Update
Comment: Dear reviewer KBtu,
Thank you for your valuable feedback on our paper. We truly appreciate the time you have taken to review our work and provide detailed insights. Your comments help us a lot in refining our next version of the paper.
We understand that you may have other commitments, but if you have any additional questions or require further clarification on any of our responses, we would be grateful for the opportunity to address them. **Your feedback is crucial to us, and we are eager to ensure that our revisions align with your expectations.**
Please feel free to reach out at your convenience, as we would be more than happy to discuss any remaining concerns or provide further information.
Thank you for your continued support. | null | null | null | null | null | null |
Stochastic Optimization Schemes for Performative Prediction with Nonconvex Loss | Accept (poster) | Summary: A nice paper that studies nonconvex performative prediction optimization. Proposed a new stationarity notion and demonstrated convergence for SGD with greedy deployment.
Strengths: 1. Extending the convergence measurement from the strongly convex case to the nonconvex case and proposing the stationary performative stable notion.
2. Extremely clear and easy to follow with key insights for the performative prediction problems.
3. Novel convergence guarantees.
4. The lazy deployment is quite interesting. It is equivalent to using mini-batch in some sense.
Weaknesses: The performative prediction problem is less motivated, i.e., there is no iconic application for which the problem can only be formulated as a performative prediction problem and cannot be cast in other forms, even considering the special structure of the problem.
The numerical experiments lack a convincing example as well, i.e., one motivating why the problem is studied.
Leveraging the problem structure, the problem often admits other, more classical optimization objectives. This is not a problem of this paper alone but of the whole research line.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. What happens to the analysis if the distribution is discrete. Then PDF may not exists and Pearson $\chi^2$ sensitivity may not be well-defined.
2. Regarding Theorem 2, to ensure $\delta$ stationarity, it requires $T=O(\delta^{-2})$ and $K=O(\delta^{-2})$. It means that to control both the bias and the error accumulated in the iterations, it needs $TK = O(\delta^{-4})$ iterations/samples. Is it possible to improve it to $O(\delta^{-2})$, i.e., the same complexity as the non performative setting?
3. Table 1 should reflect the bias level and compare the bias level with existing literature. It should also mention what would be the sample complexity needed to ensure an $\delta$ stationary point in this work and other works.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: See questions above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The performative prediction problem is less motivated, i.e., there is no iconic application for which the problem can only be formulated as a performative prediction problem and cannot be cast in other forms, even considering the special structure of the problem. The numerical experiments lack a convincing example as well, i.e., one motivating why the problem is studied. Leveraging the problem structure, the problem often admits other, more classical optimization objectives. This is not a problem of this paper alone but of the whole research line.
Thanks for your critical comment on the state of performative prediction research. Many learning problems when applied in a "societal" setting will exhibit performativity. This is an inevitable outcome as the predictions informed by trained models becomes a part of the bigger social system. Further, an important property in performative prediction explored by this line of work is that the learner who trains the model cannot access information about how the distribution shifts, and thus cannot estimate the exact gradient of $V(\theta)$ that depends on ${\cal D}(\theta)$. Due to these limitations (note they are imposed by the application scenario rather than by the algorithm design), we believe that one of the key challenges shall lie in understanding and improving such "non-gradient" dynamics instead of exploring problem structure to efficiently minimize $V(\theta)$.
> What happens to the analysis if the distribution is discrete. Then PDF may not exists and Pearson $\chi^2$ sensitivity may not be well-defined.
That's a good observation. We agree that the chi-squared divergence condition in the original **C1** does not work with discrete distributions, which limits its application. Fortunately, as pointed out by reviewer 71Gw, the above condition can be easily weakened into a sensitivity condition based on the total variation distance, which also applies to discrete distributions (see **C1'** in the response to 71Gw). Nevertheless, our results also apply to cases with the Wasserstein-1 sensitivity condition (see **W1**).
> Regarding Theorem 2, to ensure $\delta$ stationarity, it requires $T=O\left(\delta^{-2}\right)$ and $K=O\left(\delta^{-2}\right)$. It means that to control both the bias and the error accumulated in the iterations, it needs $T K=O\left(\delta^{-4}\right)$ iterations/samples. Is it possible to improve it to $O\left(\delta^{-2}\right)$, i.e., the same complexity as the non performative setting?
We believe that it is possible to improve the sample complexity of $TK$ beyond $O(\delta^{-4})$. This is because from (22), we observe that within the $t$th "inner loop" after each deployment, the SGD-lazy deploy scheme is essentially SGD for $\min_{\theta} J(\theta;\theta_t)$. Now to improve the sample complexity with $K$, one may replace the SGD method with variance reduced SGD such as using the STORM gradient estimator in [a].
However, we suspect that reaching the sample complexity of $O(\delta^{-2})$ would require more work. One of the reasons is that the structure of optimization algorithm is different as the learner does not have access to the (form of) distribution shift, and as explained in our manuscript, this limits the use of standard analysis tool such as (constant) Lyapunov function method. In general, characterizing and achieving the optimal sample complexity of performative optimization with non-convex loss is an exciting future direction to be explored.
[a] Cutkosky and Orabona, "Momentum-based variance reduction in non-convex sgd", NeurIPS 2019.
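For intuition, the inner-loop structure described above can be sketched in a few lines; `grad_fn` and `sample_fn` are hypothetical placeholders for the stochastic gradient and the decision-dependent sampler, and the toy problem at the end is illustrative rather than from the paper:

```python
def sgd_lazy_deploy(grad_fn, sample_fn, theta0, T=20, K=10, gamma=0.1):
    """SGD with lazy deployment: theta is redeployed only every K steps,
    so samples inside the inner loop come from the frozen D(theta_dep)
    and the inner loop is plain SGD on J(.; theta_t)."""
    theta_dep = theta0          # currently deployed model
    theta = theta0
    for _ in range(T):
        for _ in range(K):      # inner loop: distribution fixed at theta_dep
            z = sample_fn(theta_dep)
            theta -= gamma * grad_fn(theta, z)
        theta_dep = theta       # deploy lazily, once per K steps
    return theta

# Toy 1-d example: loss l(theta; z) = (theta - z)^2 / 2, and a noiseless
# "distribution" whose sample is 0.5 * theta; the stable point is theta = 0.
theta = sgd_lazy_deploy(lambda th, z: th - z, lambda th: 0.5 * th, theta0=1.0)
print(abs(theta) < 1e-2)  # True: iterates contract toward the stable point
```

Replacing the plain SGD update inside the inner loop with a variance-reduced estimator such as STORM [a] is exactly the modification suggested above for improving the dependence on $K$.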
> Table 1 should reflect the bias level and compare the bias level with existing literature. It should also mention what would be the sample complexity needed to ensure an stationary point in this work and other works.
The purpose of Table 1 is to compare the few existing works on non-convex performative prediction, yet other papers may have used other forms of solution concepts, where the studied algorithms admit various forms of bias. We intend to indicate their differences by displaying their respective convergent points "$\theta_{\infty}$" and algorithm types "Algo". To save space, we have used ${\cal O}(\epsilon)$-SPS to indicate the bias level of the SGD-GD scheme analyzed by us. As for the sample complexity, it can be deduced from the "Rate" column. We will improve the presentation of Table 1 in the revision if space allows.
---
Rebuttal Comment 1.1:
Title: Discussions
Comment: Thanks for the detailed responses. My concerns are mostly addressed. I will keep the score.
Purely for discussions, understanding and improving "non-gradient" dynamics is definitely important. However, if in various applications of performative prediction optimization, there admit a more classical stochastic optimization formulation using additional structure that can decouple the source of randomness and the decision, it is unclear why one has to model it as a more general performative prediction optimization problem.
---
Reply to Comment 1.1.1:
Comment: Thank you for reading and replying to the responses.
We agree with the reviewer that from the perspective of solving a stochastic optimization problem, using additional structure to decouple the source of randomness and decision may lead to better algorithms, e.g., lower sample complexity. However, we believe that an important aspect is that there are a number of scenarios for performative prediction where the learner does not **know** this "additional structure". Worse still, the learner may not even be aware of the distribution shift in the problem.
A concrete example is the training of classifiers (e.g., for spam emails) - specifically in an online setting where the classifier has to be updated using current training data, note the latter may come from a decision dependent distribution $Z_{t+1} \sim D(\theta_t)$. On the other hand, it is likely that the learner does not know the form of decision-dependency in the training data since knowing the latter requires precisely knowing the behavior of the (normal and spam) email users. As a result, the only knowledge available to the learner is the current training data sample $Z_{t+1}$ and the form of cross entropy loss used for formulating the stochastic gradient $\nabla \ell( \theta; Z_{t+1} )$. In the absence of knowledge of $D(\theta)$, it would be impossible for the learner to exploit the problem structure and derive a reformulation to the performative optimization problem. | Summary: The paper studies convergence of stochastic gradient descent in a performative prediction context. The main result shows that SGD converges to an analogue of performative stability, which the paper terms “stationary performative stability” (up to a bias term). The results characterize the rate of convergence and the magnitude of the bias.
Strengths: The results in this paper significantly expand the scope of optimization in performative prediction, which has so far largely focused on convex loss functions. In fact, most results require strongly convex losses. Moreover, prior work typically makes an assumption on the magnitude of the performative effects, captured by the sensitivity parameter epsilon; the convergence results of this paper do not require a bound on epsilon and show that the sensitivity determines the distance to stationary performative stability. The lack of assumption about epsilon is a major advantage. There are some other works, e.g. Jagadeesan et al., that do not require a bound on epsilon, but this work requires knowing epsilon to run the optimization method. The additional analysis of the lazy deploy scheme, which approximates RRM and thus incurs no bias in the limit, is a nice addition. The observation about the different dependence of the bias on epsilon depending on whether the gradients are stochastic or not is another nice result.
Weaknesses: This is not really a major weakness, but I think some of the discussion in Section 3.1 could be simplified. Instead of assuming the chi squared divergence condition, one can get Lemma 3 by assuming that ${\cal D}(\theta)$ is Lipschitz in TV distance (i.e., $\delta_{TV}({\cal D}(\theta), {\cal D}(\theta')) \leq \epsilon \|\theta - \theta'\|$), together with C2. The chi squared condition seems a bit odd and nonstandard because it is not a Lipschitz condition.
In A2, the first part of the sentence is not an assumption; it’s true just by the definition of J?
In Lemma 1, Theorem 1 (and possibly other places) please use parentheses in the step size condition. It should be 1/(L(1+sigma1^2)).
Very minor suggestion: personally I find it more appropriate to see Theorem 1 as a lemma and Corollary 1 as the main theorem.
Please don’t use the symbol T in Theorem 2 for the random step because you use that symbol in eq. (4).
Technical Quality: 4
Clarity: 3
Questions for Authors: In the paragraph starting with line 275, you mention the relationship of Theorem 2 with RRM convergence and Mofakhami et al. My understanding was that they required a particular strong convexity condition, as noted in Table 1. So your result even for RRM may be new. Could you comment on this?
The discussion about the time-varying Lyapunov function (e.g. starting at line 179) reminded me of the perspective from Drusvyatskiy and Xiao. They show that SGD in a performative context can be thought of as standard SGD on the equilibrium distribution, at the stable point. I'm wondering if you've thought about whether there exists an analogue of this perspective in your nonconvex setting?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: There is not much discussion of limitations, though I don't think it is necessary.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > I think some of the discussion in Section 3.1 could be simplified. Instead of assuming the chi squared divergence condition, one can get Lemma 3 by assuming that D(theta) is Lipschitz in TV distance (i.e. $||D(\theta) - D(\theta')|| \leq \epsilon ||\theta - \theta'||$), together with C2. The chi squared condition seems a bit odd and nonstandard because it is not a Lipschitz condition.
Thanks for your valuable suggestion. Indeed, as you said, the chi-squared condition (**C1**) can be replaced by a weaker TV distance condition. The argument is as follows.
First, we recall the definition of TV distance as $\delta_{TV}(\mu, \nu) = \sup_{A\subset {\sf Z} } \left| \mu(A) - \nu(A) \right| = \frac{1}{2} \int \left| p_{\mu}(z) - p_{\nu}(z) \right| {\sf d}z$ where $\mu, \nu$ are two measures supported on ${\sf Z}$ and $p_{(\cdot)}(z)$ denotes their pdfs. Note that we have $\delta_{TV}(\mu,\nu) \leq (1/2)\sqrt{\chi^2(\mu,\nu)}$ [Gibbs and Su, 2002, Sec. 2]. Accordingly, we may replace **C1** by its weakened version:
**C1'** There exists a constant $\tilde{\epsilon}\geq 0$ such that $\delta_{TV}\left({\cal D}(\theta_1), {\cal D}(\theta_2)\right) \leq \tilde{\epsilon} \| \theta_1 - \theta_2 \|$ for any $\theta_1, \theta_2\in \mathbb{R}^d$
Using **C1'** and **C2**, we derive a similar result as Lemma 3 with the following chain
$$
\begin{aligned}
\left| J(\theta; \theta_1) - J(\theta; \theta_2)\right| &= \left| \int \ell(\theta; z) \left( p_{\theta_1}(z) - p_{\theta_2}(z) \right) {\sf d}z \right| \\
&\leq \int |\ell(\theta; z)| \cdot \left| p_{\theta_1}(z) - p_{\theta_2}(z) \right| {\sf d}z \\
&\leq \ell_{\max} \cdot \int \left| p_{\theta_1}(z) - p_{\theta_2}(z) \right| {\sf d}z \\
&\leq \ell_{\max} \cdot 2\delta_{TV}\left( {\cal D}(\theta_1), {\cal D}(\theta_2) \right)
\leq 2 \ell_{\max} \tilde{\epsilon} \, \| \theta_1 - \theta_2 \|
\end{aligned}
$$
The rest of our convergence analysis for SGD-GD or SGD-lazy deploy follows immediately with the above modification. Again, we thank the reviewer for pointing this out and will make sure to include the above proofs in the revision!
> Personally, I find it more appropriate to see Theorem 1 as a lemma and Corollary 1 as the main theorem.
We chose to describe our theoretical results in this way as we wish to include general conditions for the step size $\gamma_{t}$, such as diminishing and constant step sizes.
> In the paragraph starting with line 275, you mention the relationship of Theorem 2 with RRM convergence and Mofakhami et al. My understanding was that they required a particular strong convexity condition, as noted in Table 1. So your result even for RRM may be new. Could you comment on this?
Though our Theorem 2 suggests a new finding for an RRM-like strategy, we believe that such strategy of SGD + lazy deployment with $K \to \infty$ strategy is not strictly equivalent to RRM. A subtle difference is that RRM (e.g., in [Mofakhami et al.]) requires finding an exact minimizer to the risk minimization problem given a fixed data distribution at each iteration, yet SGD + lazy deployment may only find a stationary point to the risk minimization problem (with non-convex loss) even when $K \to \infty$. As such, we have restrained from claiming it as a new finding for RRM. We will elaborate more in the revision.
> The discussion about the time-varying Lyapunov function (e.g. starting at line 179) reminded me of the perspective from Drusvyatskiy and Xiao. They show that SGD in a performative context can be thought of as standard SGD on the equilibrium distribution, at the stable point. I’m wondering if you’ve thought about if there exists an analogue of this perspective in your nonconvex setting?
Although this is an interesting point, we believe that drawing such analogue for the nonconvex setting is difficult since the equilibrium distribution may not be unique in the latter case. In fact, the inability to apply the Lyapunov function $J(\theta;\theta_{PS})$ is also the reason why we had to develop a new time varying Lyapunov function in the nonconvex setting.
> Other Problems
We also thank the reviewer for pointing out other typos and small issues in the paper. We will correct them in the revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response! Everything makes sense. | Summary: This paper studies 'performative prediction': when predictive models are used to make consequential decisions such as policy making, they can trigger actions that influence the very outcomes they aim to predict (and a system with unlimited positive feedback will eventually be destabilized). On the optimization side, this work considers a risk minimization problem with a decision-dependent data distribution, i.e., the loss depends on the model parameters $\theta$ both directly and through the data distribution. The authors analyze stochastic gradient descent (SGD) with a greedy deployment scheme (SGD-GD) in a setting that only requires smoothness of the non-convex loss function $\ell$. They show convergence of the algorithm to stationary performative stable (SPS) solutions under two types of distance metrics between distributions. Numerical examples on both synthetic and real data are provided to justify the theoretical results.
Strengths: Pros:
It's a solid extension of the SGD-GD works of [Mofakhami et al., 2023] and [Mendler-Dünner et al., 2020] to smooth but not necessarily convex losses $\ell$. It's a big step forward compared to the strongly convex losses in previous work.
They provide both real and synthetic experiments to justify their claims.
Weaknesses: Cons:
The experiment setting is relatively simple but it's a minor issue since this is a theoretical work and the experiment is showcasing the concept.
Technical Quality: 3
Clarity: 3
Questions for Authors: The reviewer is generally positive about this work. One question would be:
Are there some other real applications besides the spam filter? Could the author provide more insight into this theoretical work and other policy-making applications with real-world influence?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The experiment setting is relatively simple but it's a minor issue since this is a theoretical work and the experiment is showcasing the concept.
We chose a simple experiment setting to demonstrate the effects of key parameters such as sensitivity strength $\epsilon$ and lazy deployment period $K$ to better validate the theoretical findings. We believe that this is an appropriate choice given the theoretical nature of this work.
> Are there some other real applications besides the spam filter? Could the author provide more insight into this theoretical work and other policy-making applications with real-world influence?
Examples of performativity are pervasive in the real world, especially in financial markets and strategic training tasks such as insurance, hiring, admission, and healthcare. In these scenarios, individuals often adjust their behaviors to receive predictions in their favor. For instance, in the hiring process, job applicants may prepare more relevant projects based on the job description provided by a company's HR. When an employer conducts an interview and decides whether to hire an applicant, the applicant may have a higher chance of being hired if they are better prepared.
From a theoretical perspective, a crucial point we explored in this paper is that decisions are often made via models that are trained through a non-convex optimization process, which has not been addressed in previous works on performative prediction. One takeaway from our findings, with reference to the results on lazy-deployment vs. greedy-deployment SGD, is that when a decision maker (company) frequently changes its job description requirements, its trained model may experience greater bias, which may lead to reduced performance.
---
Rebuttal Comment 1.1:
Title: Response to the author
Comment: The reviewer thanks the author for the response. After reading the rebuttal discussion from all reviewers, the reviewer would like to maintain the score. | Summary: This work studied performative prediction problems in nonconvex regimes and proposed the first algorithm, SGD-GD, with convergence guarantees in this case, it was further extended to a lazy deployment scheme so that the algorithm is bias-free.
Strengths: 1. First convergence analysis of gradient-based algorithms for performative prediction problems in nonconvex regimes, which is a novel contribution.
2. Proposed a new convergence measure for nonconvex performative prediction problems
3. The writing is great and the storyline is easy to understand
Weaknesses: 1. The definition of SPS, as the authors mentioned, only considers the gradient regarding the loss function, while missing the gradient over the distribution parameter, which may not perfectly reflect the stationarity convergence of the objective function.
2. Some assumptions are still a bit unrealistic (for example, the global upper bound assumption in C2), it is not clear whether they are satisfied in the numerical experiments.
Technical Quality: 3
Clarity: 3
Questions for Authors: /
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: /
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The definition of SPS, as the authors mentioned, only considers the gradient regarding the loss function, while missing the gradient over the distribution parameter, which may not perfectly reflect the stationarity convergence of the objective function.
This is a valid observation. However, we remark that SPS is a stationary solution concept suitable for the arguably more "natural" algorithms for performative prediction, including SGD-GD, SGD with lazy deployment, and repeated risk minimization. These algorithms do not require a priori knowledge of the form of the data distribution shift, nor do they attempt to learn it. We believe this class of algorithms is important as they are applicable to scenarios where the learner is agnostic to the distribution shift. The definition of SPS generalizes that of the performative stable (PS) solution [Perdomo et al., 2020], since for (strongly) convex $\ell(.)$, the definitions of SPS and PS solutions are equivalent. Note that algorithms for achieving a stationary point of $V(\theta)$ have been explored in works like [Izzo et al., 2021], which represents a different research direction in the performative prediction community.
> Some assumptions are still a bit unrealistic (for example, the global upper bound assumption in C2), it is not clear whether they are satisfied in the numerical experiments.
Admittedly, while assumptions like C2 may appear to be slightly strong, our theory remains applicable to a number of applications in ML including our numerical experiments. For the synthetic data experiments, the sigmoid loss function is upper bounded by 1. Similarly, in the neural network experiments, the loss is also upper bounded due to the sigmoid function applied in the last layer. Both simulations satisfy one of the required sets of assumptions (W1+W2 or C1+C2). We will expand the discussion on how these examples satisfy the assumptions in the revision. We also remark that C1 can be further relaxed to **C1'** using a weakened notion of distribution sensitivity; see the response to 71Gw. | Rebuttal 1:
Rebuttal: ### General Response
We thank all four reviewers for their careful reviews and valuable suggestions. We summarize our general responses and proposed improvements to the paper as follows:
- Our work provides one of the first convergence theories for the SGD-greedy deployment scheme applied to performative prediction with non-convex losses, i.e., a stochastic optimization problem with decision-dependent samples. To do so, we developed several innovations: (a) the concept of an SPS solution as an "equilibrium" solution defined w.r.t. a partial stationarity condition, and (b) a time-varying Lyapunov function that tracks the progress of SGD-GD in the absence of a unique equilibrium solution. These innovations have led to new findings that seem to be unique to non-convex performative prediction, namely, that SGD-GD may converge to a biased SPS solution, and that the bias can be reduced/eliminated with a lazy deployment variant. We believe that these findings/innovations will be of interest to the performative prediction community, as well as to studies on stochastic algorithms in general.
- We are grateful for the constructive suggestions. Particularly, we have weakened the condition **C1** based on $\chi^2$ divergence into that based on the weaker TV distance; see **C1'** in the response to reviewer 71Gw. This allows our main theories to be applied on more general settings for performative prediction.
- We also thank reviewer kiFq for raising the issue about reducing the sample complexity to reach an (unbiased) SPS solution. We listed some ideas in the response below but this is an important future direction that we would like to explore.
We found that these suggestions improve the paper quality and will include them in the revision. We look forward to further discussion with the reviewers in the next phase of this review process. Thank you! | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
LinNet: Linear Network for Efficient Point Cloud Representation Learning | Accept (poster) | Summary: The submission #308 entitled "LinNet: Linear Network for Efficient Point Cloud Representation Learning" introduces a linear network designed for efficient point cloud representation learning. To achieve this task, the authors propose a novel disassembled set abstraction (DSA) module and a linear sampling strategy, which together enhance computational efficiency and scalability. The method maps 3D point clouds onto 1D space-filling curves, allowing for parallelization of downsampling and neighborhood queries on GPUs with linear complexity. This approach achieves state-of-the-art performance across various benchmarks.
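The mapping onto a 1D space-filling curve mentioned in the summary can be sketched with a Z-order (Morton) encoding. This is an illustration only, under assumed details (the paper may use a different curve and a GPU implementation); it merely shows how nearby 3D points land near each other in a 1D order, enabling strided downsampling in linear time:

```python
import numpy as np

def part1by2(x):
    # spread the bits of a 10-bit integer so they occupy every third position
    x &= 0x3FF
    x = (x | (x << 16)) & 0x030000FF
    x = (x | (x << 8)) & 0x0300F00F
    x = (x | (x << 4)) & 0x030C30C3
    x = (x | (x << 2)) & 0x09249249
    return x

def morton_order(points, bits=10):
    """Return an ordering of the points along a Z-order curve; points that are
    close in 3D tend to stay close in the 1D order."""
    lo, hi = points.min(0), points.max(0)
    q = ((points - lo) / (hi - lo + 1e-12) * (2**bits - 1)).astype(np.int64)
    codes = (part1by2(q[:, 0]) << 2) | (part1by2(q[:, 1]) << 1) | part1by2(q[:, 2])
    return np.argsort(codes)

pts = np.random.default_rng(0).uniform(size=(1000, 3))
order = morton_order(pts)
sampled = pts[order[::4]]  # strided pick along the curve: uniform-ish downsampling
print(sampled.shape)
```

Sorting the codes costs $\mathcal{O}(N \log N)$ once, after which sampling along the curve is a constant-stride slice.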
Strengths: - The linear sampling strategy is very elegant.
- The large-scale comparison across multiple datasets and many approaches is highly appreciable.
- The authors offer to open their code upon acceptance.
- The ablation studies are well-conducted, clearly showing the contribution of each proposed module.
- The reasoning behind DSA is well introduced and motivated.
- The editorial quality of the paper is excellent. It is easy to read, and the illustrations are pleasant and informative.
- The assessments are numerous and conclusive, showing that the proposed approach is innovative and achieves state-of-the-art results on multiple metrics.
Weaknesses: - One negative point is the significant memory footprint of the approach, as mentioned in the limitations section.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Would it be possible to conduct a small experiment to measure the memory footprint of the approach?
- It would be interesting to have a few cross-validation tests to know more about the generalization of the technique.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The limitations of the approach are well covered at the end of the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and constructive comments. In the following, we address your concerns carefully.
**W1: One negative point is the significant memory footprint of the approach, as mentioned in the limitations section.**
**A:** Thank you for addressing the concern regarding the memory footprint of our approach, which we have acknowledged in the "Limitations" section of our manuscript.
While our local aggregation methods have indeed improved the scalability of point-based approaches in large-scale scenarios by reducing memory usage, the overall memory footprint remains a substantial challenge. We are actively working on strategies to further minimize memory consumption, which we plan to implement in future iterations of our model.
Firstly, we plan to integrate hash query strategies with our local aggregation methods. Employing hash tables to build rulebooks, similar to those used in sparse convolutions, will allow us to manage memory more efficiently.
Secondly, we aim to make better use of shared memory on GPUs to enhance processing efficiency. Shared memory is much faster than global memory and using it effectively can greatly reduce the need for frequent global memory accesses, which are more costly in terms of time and energy consumption.
-----
**Q1: Experiment to measure the memory footprint of the approach**.
**A:** Thank you for your constructive suggestion regarding the measurement of our model's memory footprint. To address this, we conducted experiments to evaluate memory usage during both training and inference phases on the NuScenes dataset, utilizing an RTX 4090 graphics card with all tests conducted at a batch size of 1.
We included comparisons with the baseline model PointNeXt [1] and the sparse convolution method MinkUNet [2]. Our findings reveal that PointNeXt suffers from out-of-memory issues when handling large-scale scenes, highlighting scalability challenges. In contrast, our DSA module significantly reduces memory consumption by avoiding high-dimensional feature transformations on neighboring point clouds.
Given that MinkUNet starts with 32 input channels, we conducted similar tests with our LinNet-Small model, which also has 32 initial feature channels, for a direct comparison:
| | Training Mem. (NuScenes) | Inference Mem. (NuScenes) |
| ------------ | ------------------------ | ------------------------- |
| MinkUNet | 2.6 GB | 1.4 GB |
| PointNeXt | Out of Memory | Out of Memory |
| LinNet-Small | 5.2 GB | 4.9 GB |
| LinNet | 16 GB | 13 GB |
Although LinNet-Small consumes more memory than MinkUNet, it is crucial to note that LinNet-Small, with only 1.7M parameters, achieves a validation accuracy of 77.6%, surpassing the 38M parameter sparse convolution method MinkUNet, which achieves 73.3%. This demonstrates that our model, despite its higher memory footprint, provides superior accuracy, offering a significant advantage in scenarios where performance is critical.
-----
**Q2: Conduct cross-validation.**
**A:** Thank you for suggesting the inclusion of cross-validation tests to evaluate the generalization capabilities of our technique. We have performed 6-fold cross-validation on the S3DIS dataset to ensure a robust assessment of our model's performance across different subsets of data. Here are the results:
| Methods | mIoU (%) | mAcc (%) | OA (%) |
| --------- | -------- | -------- | ------ |
| PointNeXt | 74.9 | 83.0 | 90.3 |
| LinNet | 78.6 | 86.3 | 91.9 |
These results demonstrate that LinNet consistently outperforms the baseline model, PointNeXt, across multiple metrics, which suggests superior generalization abilities.
-----
[1] Qian et al. PointNeXt: Revisiting PointNet++ with Improved Training and Scaling Strategies. NeurIPS 2022.
[2] Choy et al. Minkowski convolutional neural networks. CVPR 2019.
------
We hope our response adequately addresses your concerns. If you still have any questions, we are looking forward to hearing them.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their clarifications. After reading the comments from other reviewers, I realize I may have been slightly too generous in my initial assessment. However, I still believe this manuscript is worthy of appearing in NeurIPS, and I would like to maintain my initial rating.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer zyd3,
Thanks for your constructive suggestions.
We will improve our paper's quality based on your guidance and comments.
Thank you for recognizing our work! We hope our simple yet effective LinNet would help the community towards a better understanding of point cloud analysis.
Best, Authors | Summary: In this work, the authors propose an efficient learning framework for point cloud representation learning. For the computational intensive local aggregation operation, this work proposes Disassembled set abstraction (DSA) to aggregate local features in terms of the spatial distributions of points in a simple and efficient manner. This work also proposes a Linearization sampling strategy and hash query operation to accelerate the sampling and neighbor searching processes. Experiments on classification and semantic segmentation demonstrate the effectiveness of the proposed method.
Strengths: 1. The proposed local aggregation operation seems to be simple, efficient, and effective;
2. The point searching strategy, including the linearization sampling strategy and hash query, can indeed improve the efficiency while keeping the overall performance;
Weaknesses: My major concern about this work is its higher performance compared to transformer-based methods. It may be a little hard to understand why the proposed DSA and hash-based searching operations can improve the performance so greatly, since these operations are more like approximations of the corresponding operations in PointNet++.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why do we introduce batch normalization to the aggregated features in Eq. 4? As the addition of features and neighborhood features has already introduced the spatial characteristics, I am not sure if this BN is necessary here, or whether other simple components may also work;
2. In the Hash query part, will the points in a same local grid share the same neighbors? I do not quite get the calculation of complexity for each point.
3. Is the hash grid pre-constructed before training, or created repeatedly during training?
4. From the results in Table 1 and Table 3, the proposed method even outperforms transformer-based methods in a more efficient way. Could the authors analyze the reasons behind this? As the improvements of this framework seem to be an efficient simplification of the existing PointNet++ framework, I am curious why it can improve the performance so significantly.
Please also check the grammar, e.g., in Line 101, the $x_i$ might actually be $p_i$;
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and constructive comments. In the following, we address your concerns carefully.
**W1 & Q4: Why the proposed DSA and hash-based searching operations can improve the performances so greatly? (Higher than transformer-based methods)**
**A:** Thank you for your insightful concerns regarding the superior performance. It is important to clarify that although our method is built upon the PointNet++ style framework, LinNet is not a superficial approximation but a substantial redesign that specifically addresses scalability and efficiency. Our model, LinNet, addresses a longstanding issue with the PointNet++ style framework: the lack of scalability.
We try to explain the significant performance improvements from two aspects.
**(a) Comparison with PointNet++ style framework**
- **Accuracy**: Fig. 4 of the manuscript illustrates that after several training epochs, both the DSA and vanilla SA stabilize in terms of loss; however, DSA consistently maintains a lower loss compared to vanilla SA. This considerable difference demonstrates that vanilla SA may not sufficiently adapt to the datasets, whereas DSA exhibits a more robust capability to fit the data, thereby achieving higher accuracy.
- **Speed**: In DSA, point-wise convolutions are applied directly to anchor features rather than to an expansive neighborhood, thus **DSA requires significantly fewer FLOPs compared to vanilla SA**. Additionally, the adoption of a linear complexity search strategy shifts away from the costly traditional algorithms like FPS and KNN—which have complexities of $\mathcal{O}(N^2)$ and $\mathcal{O}(kN^2)$ respectively—to more efficient, GPU-friendly linear complexity algorithms. This enhances computational efficiency remarkably.
**(b) Comparison with Transformer-based method.**
The Point Transformer v2 (PTv2) we are comparing is also essentially a point-based approach, which implements downsampling through grid pooling similar to the resize operation in image and uses local attention for feature aggregation.
- **Accuracy**: We observe that the limited receptive field during the downsampling stages of PTv2 and similar methods could restrict their accuracy. PTv2 samples only within a single grid (typically about 6 points), whereas our method captures the nearest $k$ points from the target grid and its 26 surrounding grids, providing a broader receptive field.
- **Speed**: Although PTv2's grid sampling avoids the need for farthest point sampling, its reliance on KNN for neighborhood queries and the complexity of its local attention calculations significantly hamper its processing speed. In contrast, our DSA module simplifies these processes, leading to faster data processing.
-----
**Q1: Why adopt BN in Eq. (4)? Is this BN necessary, or would other simple components also work?**
**A:** Thank you for your interest in batch normalization. As you mentioned, the addition has already introduced spatial characteristics. Note that BN is placed **after the max pooling layer to normalize the aggregated feature**, facilitating subsequent convergence. In the ScanObjectNN dataset, BN is necessary and has led to a 0.4% OA improvement. As you suggested, we conducted ablation studies on the S3DIS dataset to further assess the necessity and effectiveness of BN. Here are the results averaged over three experiments:
| None | BN | LN |
| ------ | ------ | ------ |
| 71.8 % | 72.9 % | 71.9 % |
As shown, models with BN outperform those with Layer Normalization (LN) and without any normalization, indicating that BN is particularly effective for our specific architecture.
-----
**Q2: In the Hash query part, will the points in the same local grid share the same neighbors?**
**A:** Thank you for your interest in the hash query part. Our hash query confines the search range of each point to its own grid and the adjacent 26 neighborhood grids (i.e., $3\*3\*3-1$), selecting the closest $k$ points as neighbors from these grids. **Although points in the same local grid have an identical search range, the specific neighbors selected for each point (i.e., the $k$ nearest points) can differ.** This variation arises because **neighbor points are chosen based on the actual spatial distances between points,** not merely by their presence in the same grid.
Regarding the computation complexity discussed in the manuscript, we apologize for any confusion caused by omitting the complexity of heap sorting. For a point cloud comprising $N$ points distributed across $m$ non-empty grids, constructing the hash table entails a complexity of $\mathcal{O}(m)$. Assuming each point’s 27-grid neighborhood contains $p$ points on average, identifying the closest $k$ points involves maintaining a heap with a complexity of $\mathcal{O}(p \log k)$ and a final sorting step costing $\mathcal{O}(k \log k)$. Thus, the total computational complexity is $\mathcal{O}(m + N(p \log k + k \log k))$. We will clarify and elaborate on these calculations in the revised version of our manuscript to prevent confusion.
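As an illustration of the build-then-query procedure described above, here is a plain-Python sketch (an illustration only; the cell size, function names, and heap-based selection details are our assumptions, not the authors' CUDA implementation):

```python
import heapq
from collections import defaultdict

import numpy as np

def hash_knn(points, cell, k):
    """k nearest neighbors restricted to each point's 3x3x3 grid neighborhood
    (a sketch of the hash-query idea; not the authors' GPU implementation)."""
    # O(m) hash-table build: grid coordinate -> indices of the points inside it
    coords = np.floor(points / cell).astype(int)
    table = defaultdict(list)
    for i, c in enumerate(coords):
        table[tuple(c)].append(i)

    neighbors = []
    for i, c in enumerate(coords):
        # gather candidates from the point's own grid and its 26 neighbor grids
        cand = [j for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
                for j in table[(c[0] + dx, c[1] + dy, c[2] + dz)] if j != i]
        # O(p log k) heap selection of the k closest candidates
        d2 = np.sum((points[cand] - points[i]) ** 2, axis=1)
        nearest = heapq.nsmallest(k, zip(d2, cand))
        neighbors.append([j for _, j in nearest])
    return neighbors

pts = np.random.default_rng(0).uniform(size=(200, 3))
nbrs = hash_knn(pts, cell=0.25, k=8)
```

Note how two points in the same grid share the same candidate set but generally end up with different selected neighbors, since selection is by actual distance.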
-----
**Q3: Is the hash grid is pre-constructed before training? Or created repeatedly during training?**
**A:** Thank you for your question regarding the hash table. In our approach, the hash table is not pre-constructed but is instead dynamically built in real time during the training process. This allows hash query and linearization sampling to be integrated seamlessly into each training iteration. As demonstrated in Fig. 1, both operations are efficiently parallelized on the GPU and collectively account for less than 10% of the model's forward time.
-----
**Q4: The typo in line 101.**
**A:** Thank you for pointing out this typo. You are correct that the symbol "$x_i$" should indeed be "$p_i$". We have made this correction in the revised manuscript.
------
We hope our response adequately addresses your concerns. If you still have any questions, we are looking forward to hearing them.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for your rebuttal. It has addressed most of my concerns. However, I am still curious about the reason why DSA outperforms SA. Although Fig.4 confirms that DSA can converge to lower loss than SA, DSA seems to be a more efficient simplification of SA. Could you provide some more intuitive explanation about the reasons behind such improvements? That is, why DSA exhibits a more robust capability to fit the data than SA?
---
Rebuttal 2:
Comment: **A:** Thank you for your continued interest in the Disassembled Set Abstraction (DSA) and Set Abstraction (SA). Your question regarding why DSA outperforms SA in terms of data fitting is insightful. We provide a more intuitive explanation below, focusing on the architectural differences and their impacts.
In a nutshell, the **DSA module places more emphasis on the extraction of geometric information which is crucial for point cloud learning**.
For SA, the feature of an anchor is updated as:
$$
\mathbf{f}_i' = \mathcal{R} _{j:(i, j)\in \mathcal{N}} \\{\text{PWConv}^{3+c \mapsto c}(\mathbf{f}_j||(\mathbf{p}_j-\mathbf{p}_i))\\}.
$$
For DSA, the process is:
$$
\mathbf{\overline{f}}_i = \text{PWConv}^{c \mapsto c}(\mathbf{f}_i);
\mathbf{f}_i' = \text{BN} \\{ \mathcal{R} _{j:(i, j)\in \mathcal{N}} \\{ \overline{\mathbf{f}}_j + \text{PWConv}^{3 \mapsto c}(\mathbf{p}_j-\mathbf{p}_i) \\} \\}.
$$
Excluding the Batch Normalization and focusing only on neighbor feature computations for simplicity, let $\mathbf{y}_j=[y_j^1, y_j^2,..., y_j^c]$ represent **the features of the $j$-th neighbor**. Treating the pointwise convolution as a linear layer without bias, the SA model uses a weight matrix $\mathbf{W}$ of dimensions $c \times (c+3)$ to process both semantic and geometric information concurrently. The input $\mathbf{x}_j = [\mathbf{f}_j, \Delta\mathbf{p}_j]$ includes semantic features $\mathbf{f}_j$ (dimension $c$) and geometric features $\Delta\mathbf{p}_j$ (dimension $3$), with the output defined as $\mathbf{y}_j = \mathbf{W}\mathbf{x}_j^\text{T}$. The output for the $k$-th channel is given by:
$$
y_j^k = [w^{k1}, w^{k2}, \ldots, w^{kc}, w^{k(c+1)}, w^{k(c+2)}, w^{k(c+3)}][\mathbf{f}_j, \Delta\mathbf{p}_j]^\text{T}.
$$
With Kaiming initialization, the weight matrix $\mathbf{W}$ is initialized to a normal distribution $\mathcal{N}(0, \sqrt{\frac{2}{c+3}})$. This initialization ensures uniformity across all weights, meaning the weights for geometric inputs contribute $\frac{3}{c+3}$ to the total output. Consequently, the influence of geometric information on the overall results is significantly limited.
In contrast, the DSA model separates the processing of semantic and geometric information through two distinct linear layers. It incorporates two weight matrices, $\mathbf{W}_f$ and $\mathbf{W}_p$, corresponding to the dimensions $c \times c$ and $c \times 3$, respectively. The output is determined by:
$$
\mathbf{y}_j = \mathbf{W}_f \mathbf{f}_j + \mathbf{W}_p \Delta\mathbf{p}_j.
$$
For the $k$-th channel, the output is:
$$
y_j^k = [w_f^{k1}, w_f^{k2}, \ldots, w_f^{kc}]\mathbf{f}_j^\text{T} + [w_p^{k1}, w_p^{k2}, w_p^{k3}]{\Delta\mathbf{p}_j}^\text{T},
$$
where $\mathbf{W}_f \sim \mathcal{N}(0, \sqrt{\frac{2}{c}}),\mathbf{W}_p \sim \mathcal{N}(0, \sqrt{\frac{2}{3}})$ . This initialization strategy enables the DSA module to appropriately tailor the weights based on the number of input channels. Although there are only three channels dedicated to geometric information, their relatively larger weights enhance the model's capability to more effectively extract geometric information.
In summary, DSA and SA are equivalent in terms of mathematical expression during forward propagation. In SA, a single linear layer merges $\mathbf{f}$ and $\Delta\mathbf{p}$, transforming them linearly to produce the output. In contrast, DSA employs two distinct linear layers to separately process $\mathbf{f}$ and $\Delta\mathbf{p}$, and then sums the outputs. However, the different weight initialization of the two linear layers causes the network to preferentially learn from geometric information. This bias enables the network to more effectively detect patterns associated with geometry, which is particularly advantageous in point cloud analysis.
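The forward-pass equivalence and the initialization gap argued above can be checked numerically; in the sketch below the channel count and random seed are arbitrary illustration choices:

```python
import numpy as np

rng = np.random.default_rng(0)
c = 16
f = rng.normal(size=c)    # semantic feature of one neighbor
dp = rng.normal(size=3)   # relative position p_j - p_i

# SA: one linear map on the concatenated input [f, dp]
W = rng.normal(size=(c, c + 3))
y_sa = W @ np.concatenate([f, dp])

# DSA: two separate linear maps, summed; carving W_f, W_p out of the same W
# shows the two forward passes are mathematically identical
W_f, W_p = W[:, :c], W[:, c:]
y_dsa = W_f @ f + W_p @ dp
assert np.allclose(y_sa, y_dsa)

# ...but Kaiming-style initialization scales the two schemes differently:
std_sa = np.sqrt(2 / (c + 3))  # geometric columns inside the joint W
std_dsa = np.sqrt(2 / 3)       # geometric weights in the separate W_p
print(std_dsa / std_sa)        # geometric weights get a ~2.5x larger scale
```

The equivalence only holds when the weights match; under independent Kaiming initialization the geometric branch of DSA starts with much larger weights, which is the bias toward geometric information described above.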
Thank you once again for your insightful suggestions, which have prompted us to further explore the underlying logic. We hope our response adequately addresses your concerns. If you still have any questions, we are looking forward to hearing them.
---
Rebuttal Comment 2.1:
Comment: Thanks for your responses. The initialization could be a potential reason. Considering its good overall performances, I will raise my score to weak accept.
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer aBR6,
Thanks for your constructive suggestions and the score increase.
We will improve our paper's quality based on your guidance and comments.
We hope our simple yet effective LinNet would help the community towards a better understanding of point cloud analysis.
Best, Authors | Summary: The paper proposes a novel lightweight backbone network model for input point cloud data suitable for global and local per-point feature extraction. It relies on two main ideas: (1) the separate processing of point coordinates and features (and a further combination of these two streams of features before the neighborhood pooling operation), (2) the use of the space-filling curves to define local neighborhoods that allow hash queries and linear complexity sampling.
The approach is evaluated in the point cloud classification and segmentation tasks on ModelNet40, ScanObjectNN and S3DIS, NuScenes datasets respectively, demonstrating performance competitive to state of the art. Additional experiments include ablations exploring the efficacy of every proposed component and comparison to other methods in terms of model efficiency.
Strengths: * The proposed feature aggregation method is likely novel and improves the results according to the ablation studies.
* The proposed method is on par with the state-of-the-art non-transformer-based approaches but is better in terms of scalability to larger point clouds.
* The extensive evaluation shows the importance of every component in the ablations.
Weaknesses: * All the improvements (except for the NuScenes dataset) are not particularly distinctive.
* While present, the efficiency is not exploited in any of the presented applications (for classification, point clouds are small, so other methods work fast as well; for segmentation, there are no efficiency comparisons).
* The text is written well but some figures can be improved.
Technical Quality: 4
Clarity: 3
Questions for Authors: The proposed method relies on the space-filling curves introduced in PointTransformer v.3, which is mentioned multiple times in the paper. At the same time, it is not considered for comparison. While PTv3 is concurrent, for completeness, it would be nice to include the results from it, especially since this work already acknowledges the existence of PTv3. So my question is: how does this method compare to PTv3 in terms of performance and efficiency?
Figure 5 shows a nice uniformly distributed (over the grid) point cloud but in practice, any grid-based discretization suffers to some extent from the discretization artifacts. The proposed method can in principle have cells with single points. Do these cells exist in practice and do they cause any problems?
Figure 1: a) and b) have the same area but the total time of b) is significantly lower which is misleading.
Figure 3: The choice of the number of inputs, intermediate features, and outputs is either arbitrary or not clear which is confusing. Showing the operations for a single input point would be clearer.
Figure 5: Showing an empty table in c) is not informative. Showing a part of the actual hash table for this example in the figure might work better.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors properly address the limitations in the submitted draft.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and constructive comments. In the following, we address your concerns carefully.
**W1: The improvements (except for the NuScenes dataset) are not particularly distinctive.**
**A:** Thank you for your comments.
Firstly, for small-scale classification tasks, we employed the same training protocols and experimental conditions as the SOTA benchmark, PointNeXt [1]. Nonetheless, we still achieve an overall accuracy (OA) improvement of 0.4% on the ScanObjectNN dataset, while other PointNeXt style architectures (e.g., PointVector [2], PointMetaBase [3]) using the same experimental setup only achieve an OA improvement of 0.1%.
Secondly, for large-scale segmentation tasks other than NuScenes, we also achieved a 2.5% improvement in mIoU without Test Time Augmentation (TTA) and a **2.9% improvement** with TTA. **The speedup, a critical factor for large-scale applications, is even more striking than the accuracy improvement, and its magnitude increases with the size of the point cloud.** Specifically, at a 20k point cloud size, our inference speed is **1.7 times faster** than PointNeXt (47 ms vs. 87 ms), scaling up to a speedup ratio of **13 times** (232 ms vs. 3147 ms) when handling 200k points.
**This dual achievement of improved speed and accuracy, especially at larger scales, is noteworthy**.
-----
**W2: The efficiency is not exploited in any presented applications.**
**A:** Thank you for pointing out this issue. Visual comparisons of model efficiency at four different point cloud sizes (20k, 50k, 100k, 200k) are shown in Fig. 7(a). Following your suggestion, we further add a table to more clearly and intuitively demonstrate efficiency. To match the number of points pre-sampled by PointNeXt on the S3DIS dataset, we replaced 20k with 24k. Thanks to the simplicity and efficiency of our DSA module, the proposed LinNet exhibits only half the latency of Point Transformer v2 [4]. Remarkably, at the 200k level, our LinNet model performs 13 times faster than PointNeXt.
Model latency on different scales (ms):
| Methods | 24 k | 50 k | 100 k | 200 k |
| --------- | ---- | ---- | ----- | ----- |
| PointNeXt | 87 | 266 | 878 | 3147 |
| PTv2 | 55 | 163 | 228 | 493 |
| LinNet | 47 | 62 | 123 | 232 |
-----
**W3: Some figures can be improved.**
**A:** Thank you for the constructive suggestions and valuable feedback.
The revised figure can be found in the **PDF** of the global rebuttal. Fig. 1, Fig. 2, and Fig. 3 of the global rebuttal correspond to Fig. 1, Fig. 3, and Fig. 5 of the manuscript.
- **Figure 1:** We have redrawn Fig. 1 (a) and added the total inference latency to it.
- **Figure 3:** We are sorry for this confusion. Following your suggestion, we take a single point as input and set the number of neighbors to 3. We represent the features using rectangles of uniform size to maintain consistency across the data representation.
- **Figure 5:** Thank you for your insightful feedback on Fig. 5, and your concerns about artifacts from grid-based discretization. Given the sparse nature of the data distribution and the characteristics of the discretization process, our method accommodates scenarios where a grid may contain only a single point. This point is then directly used as a new sampling point under our strategy, ensuring that the distribution of the newly sampled point cloud closely mirrors that of the original. Following your suggestions, we have made specific optimizations in the revised version to ensure that the number of points in each grid varies, better reflecting the variability found in real-world data distributions. Additionally, in Fig.5 (c), we have incorporated actual data into the hash table.
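The grid-sampling behavior described above can be sketched as follows; this is an illustrative reconstruction, not the paper's implementation (the centroid rule for multi-point cells is our assumption):

```python
import math
from collections import defaultdict

def grid_downsample(points, cell_size):
    """Group points into grid cells; each cell yields one sample.

    A cell containing a single point keeps that point unchanged as
    the new sampling point, as described above; multi-point cells
    return the cell centroid (an assumption for this sketch).
    """
    cells = defaultdict(list)
    for p in points:
        # Quantize each coordinate to its grid cell index.
        key = tuple(math.floor(coord / cell_size) for coord in p)
        cells[key].append(p)
    samples = []
    for pts in cells.values():
        if len(pts) == 1:
            samples.append(pts[0])  # single-point cell: keep as-is
        else:
            n = len(pts)
            samples.append(tuple(sum(cs) / n for cs in zip(*pts)))
    return samples
```

For example, with `cell_size = 1.0`, an isolated point stays exactly where it was, so the sampled cloud's distribution closely mirrors the original even in sparse regions.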
-----
**Q1: Without result of Point Transformer v3**.
**A:** Thank you for pointing out this issue. A detailed comparison with Point Transformer v3 (PTv3) is available in the tables below. Our model consistently achieves better results than PTv3 on the S3DIS and NuScenes validation sets, while also demonstrating competitive performance on the NuScenes test set.
Regarding latency, PTv3 utilizes spatial curves to divide the point cloud into patches. This allows the model to compute attention on the patches rather than on individual points, avoiding the need for point-wise local attention. In contrast, our method leverages point-wise local features, resulting in higher latency compared to PTv3. However, we would like to highlight that our LinNet is a pure MLP network and does not require any additional operational assistance. Instead, PTv3 relies on sparse convolution for positional encoding and the sparse convolution kernel incurs more model parameters.
Model performance:
| | S3DIS Area 5 | S3DIS 6-fold | NuScenes (val) | NuScenes (test) |
| ------ | ------------ | ------------ | -------------- | --------------- |
| PTv3 | 73.4 | 77.4 | 80.4 | 82.7 |
| LinNet | 73.7 | 78.6 | 81.4 | 82.3 |
Model size and execution time (latency is measured with 24k points):
| | Model Size (M) | Forward Latency (ms) |
| ------ | -------------- | -------- |
| PTv3 | 46.2 | 26 |
| LinNet | 14.7 | 47 |
-----
[1] Qian et al. PointNeXt: Revisiting PointNet++ with Improved Training and Scaling Strategies. NeurIPS 2022.
[2] Deng et al. PointVector: A Vector Representation In Point Cloud Analysis. CVPR 2023.
[3] Lin et al. Meta Architecture for Point Cloud Analysis. CVPR 2023.
[4] Wu et al. Point Transformer V2: Grouped Vector Attention and Partition-based Pooling. NeurIPS 2022.
------
We hope our response adequately addresses your concerns. If you still have any questions, we are looking forward to hearing them. | Summary: This paper proposes a method for point cloud segmentation and classification. The main contribution is making the local aggregation dependent on the anchor point. The approach demonstrates improvements of one to two percent on S3DIS and NuScenes datasets compared to existing methods.
Strengths: - The architecture is based purely on an MLP (with some hashing operations) which is a significant benefit in terms of implementation and potentially in terms of computational efficiency.
- The approach of making local aggregation anisotropic and the proposed DSA module has a good motivation.
- The method shows improved performance on standard benchmarks (S3DIS and NuScenes), outperforming existing approaches by a small but consistent margin.
Weaknesses: - The paper's writing seems overly complex, with lots of unnecessary jargon, making it difficult to follow the core ideas and contributions.
- The improvements in performance, while consistent, are relatively small (1-2%), which raises questions about the practical significance of the method.
- The modifications proposed are individually well-motivated but seem somewhat ad hoc, lacking a strong theoretical foundation.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Have you explored whether a more general model could learn the invariances you've built into LinNet without explicit architectural choices?
2. Given the relatively small improvements in accuracy, what do you see as the main practical advantages of LinNet over existing methods?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The discussion of limitations is sufficient.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and constructive comments. In the following, we address your concerns carefully.
**W1: The paper's writing seems overly complex with lots of unnecessary jargon making it difficult to follow the core ideas and contributions.**
**A:** We sincerely apologize for any confusion caused by the writing complexity of our manuscript and appreciate your feedback on the use of jargon. In response, we have revised the manuscript to simplify the language and to ensure that key concepts are explained more thoroughly. For example, in the context of the DSA module, we use terms like **anisotropy** and **isotropy**. To clarify, we employ the term **spatial-wise anisotropy** to refer to variations among the features of neighboring points within the same feature channel, and **channel-wise anisotropy** to describe differences across feature channels.
The core idea of our paper is to **enhance the scalability of existing point-based methods through a more lightweight feature aggregation strategy and a point cloud search strategy with reduced linear complexity**.
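To illustrate the second point, mapping 3D points onto a 1D space-filling curve can be sketched with Morton (Z-order) codes, one common choice of curve; this generic sketch is not the authors' implementation:

```python
def part1by2(n: int) -> int:
    # Spread the bits of a 10-bit integer so they occupy every third bit
    # (standard bit-interleaving trick for 3D Morton codes).
    n &= 0x3FF
    n = (n ^ (n << 16)) & 0xFF0000FF
    n = (n ^ (n << 8)) & 0x0300F00F
    n = (n ^ (n << 4)) & 0x030C30C3
    n = (n ^ (n << 2)) & 0x09249249
    return n

def morton3(x: int, y: int, z: int) -> int:
    # Interleave the bits of quantized x, y, z into one Morton code.
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)
```

Sorting points by their Morton codes places spatially nearby points close together in the 1D order, so downsampling and neighborhood queries can be implemented with array operations of linear complexity instead of per-point spatial searches.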
-----
**W2: The performance improvements (1-2%) are relatively small, which raises questions about the practical significance.**
**A:** Firstly, **in the domain of point cloud processing, even seemingly modest improvements of 1-2% are indeed substantial.** For instance, Point Transformer [1] (first released in Dec 2020) achieved a result of 70.4%, while the result of Point Transformer V2 [2] (Oct 2022) is 71.6%.
Secondly, we want to highlight that our approach **achieves dual improvements in both speed and accuracy**. Previous work, such as the Fast Point Transformer [3], sacrifices accuracy in the pursuit of efficiency. In contrast, we achieve a 2.9% mIoU improvement while being much faster than PointNeXt [5] on S3DIS. More critically, by employing a linear sampling strategy and a DSA module, **we have largely addressed the long-standing scalability challenges associated with point-based networks.** This allows our network to be easily applied to large-scale point cloud scenes. We believe these aspects affirm the practical significance of our reported improvements, offering meaningful contributions to the field.
-----
**W3: The modifications proposed are individually well-motivated but seem somewhat ad hoc, lacking a strong theoretical foundation.**
**A:** Thank you for your endorsement of the motivation for our work.
**Our modifications are not ad hoc but are derived from a thorough analysis and adaptation of established concepts within both 2D and 3D vision technologies.** To improve the efficiency of point-based networks, we started by systematically reviewing local aggregation techniques used in computer vision. Inspired by the success of separable convolutions in 2D imaging, we explored their applicability to 3D point clouds.
**We conducted a detailed theoretical analysis of the set abstraction (SA) module used in 3D vision, examining it from the perspective of anisotropy in Sec. 3.2.** It shows that principles of separable convolution, effective in 2D vision, could be adapted for 3D point clouds, thus motivating our modifications. Initially, we attempted to directly separate spatial anisotropy and channel anisotropy from the input features. However, the initial experimental outcomes were not as anticipated, which prompted further in-depth analysis.
This led to the development of the DSA module, where we refined our approach based on our theoretical insights and experimental findings. **Finally, we validated our modifications with explanatory analysis and empirical evidence, clearly demonstrating the advantages of the DSA module over the vanilla SA.**
Following your feedback, we will further **strengthen the theory by incorporating references such as ASSANet [4], which provides an in-depth theoretical analysis of anisotropy in point cloud processing.**
-----
**Q1: Have you explored whether a more general model could learn the invariances you've built into LinNet without explicit architectural choices?**
**A:** Thank you for your insightful question about exploring a more generalized model without explicit architectural choices.
In the manuscript, for segmentation on S3DIS, the number of layers of our model was set to [4, 7, 4, 4] for a fair comparison with the baseline PointNeXt. For the other dataset, NuScenes, which is much larger, we adjusted the number of layers in the encoder to [4, 4, 7, 4] for efficiency. In response to your question, we used a uniform [4, 7, 4, 4] configuration across tasks to determine whether a more generalized structure would impact performance. The results show that the mIoU on NuScenes decreases by only 0.2% (80.2% vs. 80.4%), which indicates that the model performs robustly in both configurations.
-----
**Q2: Apart from accuracy, the practical advantages of LinNet over existing methods.**
**A:** Thank you for your question. In addition to improved accuracy, **the substantial increase in processing speed represents a significant practical benefit of our approach, particularly for large-scale applications**. As demonstrated in Fig. 7(a), LinNet achieves a remarkable improvement in response speed; when handling point clouds of up to 200k points, it performs **13 times faster** than the current state-of-the-art method, PointNeXt, and **twice as fast as PTv2**.
-----
[1] Zhao et al. Point Transformer. ICCV 2021.
[2] Wu et al. Point Transformer V2: Grouped Vector Attention and Partition-based Pooling. NeurIPS 2022.
[3] Park et al. Fast Point Transformer. CVPR 2022.
[4] Qi et al. ASSANet: An Anisotropic Separable Set Abstraction for Efficient Point Cloud Representation Learning. NeurIPS 2021.
[5] Qian et al. PointNeXt: Revisiting PointNet++ with Improved Training and Scaling Strategies. NeurIPS 2022.
------
We hope our response adequately addresses your concerns. If you still have any questions, we are looking forward to hearing them.
---
Rebuttal 2:
Title: Supplement on weakness 3
Comment: Inspired by the comment from reviewer aBR6, we delve deeper into the reason why DSA performs better than SA from the standpoint of parameter initialization.
To clarify, we replace pointwise convolutions with linear layers and restructure both the DSA and SA modules.
In SA, a single linear layer merges the neighborhood features $\mathbf{f}_j$ and the positional differences $\Delta\mathbf{p}_j$, transforming them linearly to produce the output as follows:
$$
\mathbf{f}_i' = \mathcal{R}_{j:(i, j)\in \mathcal{N}} \\{\text{Linear}^{3+c \mapsto c}([\mathbf{f}_j, \Delta\mathbf{p}_j])\\}.
$$
Contrastingly, DSA utilizes two separate linear layers to process $\mathbf{f}_j$ and $\Delta\mathbf{p}_j$ individually and then combines the outputs. **Ignoring the removal of redundant calculations in the neighborhood and Batch Normalization (BN)**, the DSA module is depicted as:
$$
\mathbf{f}_i' = \mathcal{R}_{j:(i, j)\in \mathcal{N}} \\{ \text{Linear}^{c \mapsto c}(\mathbf{f}_j) + \text{Linear}^{3 \mapsto c}(\Delta\mathbf{p}_j) \\}.
$$
Both modules appear mathematically equivalent during forward propagation. Yet, **distinct initializations of the two linear layers in DSA encourage the network to prioritize geometric information more effectively.** Excluding bias, the SA module uses a combined weight matrix $\mathbf{W}$ (dimensions $c \times (c+3)$) to process semantic and geometric data simultaneously. The input vector $\mathbf{x}_j = [\mathbf{f}_j, \Delta\mathbf{p}_j]$ leads to the output:
$$
\mathbf{y}_j = \mathbf{W} \mathbf{x}_j^\text{T}.
$$
Each output channel's contribution is calculated by the product of the weights and inputs, with Kaiming initialization setting $\mathbf{W}$ as a normal distribution $\mathcal{N}(0, \sqrt{\frac{2}{c+3}})$, limiting geometric data's influence due to its smaller proportional weight.
In contrast, DSA segregates the handling of semantic and geometric data using two separate weight matrices, $\mathbf{W}_f$ for semantic (dimensions $c \times c$) and $\mathbf{W}_p$ for geometric data (dimensions $c \times 3$). This results in outputs:
$$
\mathbf{y}_j = \mathbf{W}_f \mathbf{f}_j + \mathbf{W}_p \Delta\mathbf{p}_j.
$$
The initialization $\mathbf{W}_f \sim \mathcal{N}(0, \sqrt{\frac{2}{c}})$ and $\mathbf{W}_p \sim \mathcal{N}(0, \sqrt{\frac{2}{3}})$ allows for a more balanced influence of geometric data, thus enhancing the network's ability to extract and utilize geometric information effectively.
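To make the initialization argument concrete, here is a minimal sketch (the channel count `c = 64` is an illustrative assumption, not a value from the paper):

```python
import math

def kaiming_std(fan_in: int) -> float:
    # Kaiming-normal standard deviation: sqrt(2 / fan_in)
    return math.sqrt(2.0 / fan_in)

c = 64  # number of feature channels (illustrative assumption)

# SA: one combined matrix W of shape c x (c + 3); all columns,
# including the 3 geometric ones, are drawn with fan_in = c + 3.
std_combined = kaiming_std(c + 3)

# DSA: separate matrices W_f (c x c) and W_p (c x 3); the geometric
# matrix W_p is drawn with fan_in = 3, giving much larger weights.
std_wf = kaiming_std(c)
std_wp = kaiming_std(3)

print(f"SA combined std: {std_combined:.3f}")  # ~0.173
print(f"DSA W_f std:     {std_wf:.3f}")        # ~0.177
print(f"DSA W_p std:     {std_wp:.3f}")        # ~0.816
```

For `c = 64`, the geometric weights in DSA start roughly 4.7 times larger than in SA, matching the claim that the separate initialization gives geometric information a more balanced influence from the start.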
Following your feedback, we have included the additional explanations and comparative analyses in the revised version of the manuscript. | Rebuttal 1:
Rebuttal: We thank all reviewers for their positive comments about the novelty (7sY3, zyd3), motivation (J8xV, zyd3), writing quality (7sY3, zyd3), and experiments (J8xV, 7sY3, aBR6, zyd3) of this work.
As suggested by the four reviewers, we conducted substantial additional experiments on our LinNet and demonstrate further capabilities.
1. For the comments raised by reviewer **J8xV**, we elaborated on the core ideas and contributions of our model. We conducted experiments on the NuScenes dataset to explore a general model without explicit architectural choices, confirming that the model performs robustly in both configurations.
2. For the comments raised by reviewer **7sY3**, we highlighted the significance of our improvements. We added latency measurements to better demonstrate the model's efficiency and modified the figures based on constructive comments from the reviewer.
3. For the comments raised by reviewer **aBR6**, we conducted experiments to verify the necessity of Batch Normalization (BN) in the DSA module and addressed the complexity of the hash query.
4. For the comments raised by reviewer **zyd3**, we implemented cross-validation tests to assess the generalization capabilities of our model and measured its memory footprint.
In conclusion, to enhance the scalability of point-based methods, we developed Linear Net (LinNet). LinNet achieves more efficient local aggregation by leveraging spatial anisotropy and channel anisotropy separately. Additionally, by mapping 3D point clouds onto 1D space-filling curves, we perform downsampling and neighborhood queries with linear complexity. LinNet **achieves dual improvements in both speed and accuracy, largely addressing the long-standing scalability challenges associated with point-based networks.** Extensive experimental results demonstrate the superiority of the methodology design and also show that, **even without the support of any additional techniques (e.g., sparse convolution, attention, graph convolution), purely point-based methods can achieve good results relying only on a simple MLP.** We anticipate that these insights will encourage the community to rethink methods for efficient point cloud learning.
We include three figures in **the PDF** and cite them in rebuttal.
Pdf: /pdf/4447871db1547e52f2c01a575dbc1917bb7649d9.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On the Roles of LLMs in Planning: Embedding LLMs into Planning Graphs | Reject | Summary: This work creates a hybrid LLM and classic planning algorithm, by integrating a LLM into the GraphPlan algorithm. The GraphPlan is an algorithm that solves a relaxed planning problem (forward expansion), and then traverses the created graph to find a valid plan (backtracking). Both steps are expensive. In the hybrid approach, a LLM is prompted in the forward expansion to limit the exploration of states deemed irrelevant. In the backtracking phase, the LLM is used to sort actions to explore first. Experiments with corrupted domain files show that LLMs can better handle corruption than the GraphPlan algorithm.
Strengths: - A very interesting novel idea of a hybrid planning approach with a fundamental classic planning algorithm.
- The paper provides an introduction to an interesting research area of classical planning (e.g., Figure 4).
Weaknesses: - Multiple missing experiments and discussions severely undermine the results of the paper.
- It is not clearly motivated why experiments with corrupted pddl domain files are interesting. This was introduced quite suddenly in the *results section* (lines 261-262) without enough details and without providing motivation.
- The paper is missing important discussion and experiments about the trade-off between the hybrid approach and the classic GP algorithm. Experiments with valid pddl domain files are not included, which could have alleviated this.
- The effect of hyperparameters on the results, such as the number of iterations (N) in Algorithm 1, is not discussed.
- The failure of LLMs4PLAN-GPT3.5 compared to the phenomenal success of LLMs4Plan-GPT4 is somewhat unexpected and undermines the results of the paper.
- Multiple details are missing regarding the experimental setups. (see questions below)
- The paper's writing needs to be improved. (see suggestions below)
Technical Quality: 3
Clarity: 2
Questions for Authors: **Suggestions**
- Most importantly, I would like to see experiments with valid pddl files. While the GP algorithm would achieve a 100% success rate, I expected to see a graph of success rate (y-axis) against the number of nodes explored (x-axis). Such a graph would describe **a trade-off** between statistically using an LLM and using an exhaustive algorithm, such as GraphPlan.
- It is possible that this is included in Table 3, but I don't understand if it includes the corrupted domain files. A graph which includes success rate would be clearer.
- Clearly describe and motivate the pddl data corruption in the experimental setup.
- Writing
- While I happen to be well-versed with the GRAPHPLAN algorithm, I am not sure that enough introduction has been provided, as it is only briefly mentioned in lines 44 and 67.
- “3 Our LLMs4Plan approach” does not properly introduce the algorithm before discussing it. Notations, such as N, are not defined. Terms, such as planning graphs and mutual constraints, are used but not explained until reading 3.1 and 3.2. Concepts, such as pruning, should be formally introduced before discussing “pruning possibility” (line 109).
- subsection 3.2:
- this subsection is not part of your algorithm, but part of graphplan. related to my previous notes, I think this should not be in section 3, but properly explained earlier.
- add citations to support the mutual exclusion constraints names (i.e., inconsistent effects, interference, competing needs).
- subsection 4.3. writing could be improved. I was initially confused about the location of the ablation results table.
**Questions**
- Data corruption of pddl domain files: Please provide details about the corruption, such as the percentage of files corrupted. It seems that this is a very high percentage, if this is the main reason that the GP method gets low results (lines 261-262).
- Please provide details about the hyperparameters influence on the results, such as the number of iterations N and the number of layers K.
- In Table 2, where we compare “number of nodes required for searching”. Which iteration do we measure? Do we ignore the fact that there could have been multiple iterations before the successful one?
- line 189 - “Ten problems are randomly selected for each domain” - from which corpus?
- Statistics about the pddl problem files are missing. How long are the plans from initial state to goal?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors did not discuss limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: [Motivation of corrupted domain models]
Response #4.1: In real-world applications, it is often difficult to design complete domain models (without corruption) for classical planners to solve real-world planning problems [16]. Designing an effective approach to learn domain models from historical data remains an open and challenging problem. An effective planning approach that can solve problems with corrupted domain models would broaden the applicability of planning in real-world domains.
[Experiments on valid domain models]
Response #4.2: The numbers of expanded nodes on the 5 valid questions of each domain (i.e., with valid domain models, without corruption) are shown in the following table. As we can see, the number of nodes expanded by our LLMs4Plan is much smaller than that of GP on valid domain models. We will add the corresponding results and analysis to the paper if the paper is accepted.
| Domain | LLMs4Plan | LLMs4Plan-unsorted | LLMs4Plan-unpruned | GP |
| ----------- | ------- | ------- | -------- | -------- |
| gripper | 8640 | 13240 | 6458928 | 11294380 |
| miconic | 17818 | 71715 | 667647 | 3026119 |
| logistics | 72 | 108 | 1628 | 1681658 |
| movie | 1407182 | 1410930 | 1407182 | 14911545 |
| blocks | 5510 | 9811 | 107334 | 1531608 |
| satellite | 47315 | 104582 | 72573129 | 91379642 |
| zenotravel | 1695 | 14428 | 892472 | 3067223 |
| driverlog | 591 | 2281 | 70756 | 1842905 |
| woodworking | 65 | 5216 | 74599 | 178890 |
| openstacks | 111 | 478 | 18374 | 31292 |
[Description of hyperparameters N and K]
Response #4.3: N is set to be large enough to ensure the completeness of the approach. As we can see from Algorithm 1, when the iteration i (< N) becomes large enough, the pruning possibility $\kappa_i$ becomes very small, making Algorithm 1 close to the classic GP algorithm. In our experiments, we empirically set N to 7. K is the maximal number of layers to be expanded, which is related to the length of solution plans. We empirically set K to 25. Note that since there are parallel actions in each action layer, the length of final solution plans is generally much longer than the number of layers expanded. We will add the corresponding descriptions of the hyperparameters to the paper if the paper is accepted.
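The role of the vanishing pruning probability can be sketched as follows; the exponential schedule and its constants here are illustrative assumptions for exposition, not the actual schedule of Algorithm 1:

```python
import random

def pruning_probability(i: int, kappa0: float = 0.9, decay: float = 0.5) -> float:
    """Illustrative decay schedule: kappa_i shrinks with iteration i.

    The actual schedule in Algorithm 1 is not specified here; this
    sketch only shows how a vanishing kappa_i recovers classic GP.
    """
    return kappa0 * (decay ** i)

def keep_action(i: int, llm_says_irrelevant: bool) -> bool:
    # An action flagged irrelevant by the LLM is pruned only with
    # probability kappa_i; as kappa_i -> 0 nothing is pruned and the
    # expansion degenerates to exhaustive GraphPlan.
    if not llm_says_irrelevant:
        return True
    return random.random() >= pruning_probability(i)
```

With this schedule, by iteration 7 the pruning probability has dropped below 1%, so almost every action survives expansion; this is presumably how the pruning probabilities are meant to preserve completeness.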
[The percentages of corruption]
Response #4.4: Among the 10 questions of each domain, 5 questions are without corruption, and the other 5 are corrupted with 10%, 20%, 30%, 40%, and 50% of preconditions and effects randomly removed from each action model.
[How do we count the expanded nodes]
Response #4.5: We counted all nodes expanded in multiple iterations, including repeated nodes expanded multiple times in multiple iterations.
[How do we generate ten problems of each domain]
Response #4.6:
We used an existing code library to generate random problems (https://github.com/AI-Planning/pddl-generators/tree/main). The corresponding description is given in Appendix A.1. We will add more detailed descriptions of the generation procedure to Appendix A.1.
[Length of solution plans]
Response #4.7: The average length of solution plans in each domain is between 15 and 80.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you to the authors for the new results. These align more closely with the expected experimental setup based on the introduction's motivation.
The newly reported results are very impressive. Assuming these results and the additional details will be included in the paper, I am inclined to raise my score. However, my new score still considers the need for several modifications to improve the paper's clarity and the limitation of having only five non-corrupted problem files per domain.
> when the iteration i (<N) becomes large enough, the pruning possibility \kapa_i is very small, making Algorithm 1 close to the classic GP algorithm
I would consider adding another figure that depicts this trade-off between GP and your approach. x-axis is the number of iterations and y-axis is success rate. This will shed light on the number of iterations that were actually necessary.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your consideration and your further suggestion. We will add the new experimental results and revise the descriptions to improve the paper's clarity accordingly. Indeed, depicting the trade-off between GP and our proposed approach with respect to the number of iterations would give more insights on our approach. We will add the results to the paper accordingly (we collected the results before). | Summary: The paper investigates how large language models (LLMs) can be integrated into established planning frameworks, specifically graph-based planning. The authors propose a novel framework called LLMs4Plan, which incorporates LLMs at two critical stages of the planning process: action selection during graph expansion and candidate action set generation during backtracking. The framework is tested across various planning domains, demonstrating improved efficiency and effectiveness in planning tasks.
Strengths: 1. The paper's approach of embedding LLMs into graph-based planning is innovative and contributes to the field of automated planning.
2. The technical implementation of LLMs4Plan is well-detailed, with descriptions of how LLMs are utilized in action selection and candidate set generation.
3. The effectiveness of the proposed framework is empirically validated across ten planning domains, showcasing its practical applicability.
Weaknesses: 1. The proposed integration of LLMs into planning frameworks in LLMs4Plan may be complex and difficult to scale.
2. Comparisons with more recent LLM integrated planning baselines is limited.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. How does the proposed LLMs4Plan compare with the more recent related work [12] (LLM+P) cited in the paper? And what makes LLMs4Plan a better approach?
2. What potential applications do the authors envision for the LLMs4Plan framework in real-world planning scenarios?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: [Comparison with LLM+P]
Response #3.1: LLM+P cannot be directly compared because its input and output differ from those of our LLMs4Plan. The input and output of LLMs4Plan are in pddl format, while the input and output of LLM+P are in natural language. The role of LLMs in LLM+P is more like semantic understanding and format conversion, i.e., converting natural-language problems into pddl problems and using off-the-shelf planners to solve the pddl problems.
We solved the problems from the projects of LLM+P for each of the domains BARMAN, STORAGE, TERMES, and TYREWORLD. The results are as shown below. As we can see, our LLMs4Plan has higher success rate than LLM+P.
| Domain | LLMs4Plan | LLM+P |
| --------- | ---- | ---- |
| BARMAN | 1.00 | 1.00 |
| STORAGE | 1.00 | 0.85 |
| TERMES | 1.00 | 0.20 |
| TYREWORLD | 1.00 | 0.90 |
[Applications to other real-world planning scenarios]
Response #3.2: Our LLMs4Plan can be applied to all planning scenarios to which graph-based planning can be applied. In addition, with LLMs integrated, our LLMs4Plan is promising in further planning scenarios, e.g., scenarios where domain models are corrupted.
Strengths: - The paper is well-written, with precise language and formalism.
- The experiment is conducted on over 10 domains, making it quite comprehensive.
Weaknesses: 1. My biggest concern with this work is that it restricts the use of LLMs to specific roles within a classical planning algorithm. There are many other roles LLMs can play in planning. For instance, see the recent LLM-modulo framework below. Instead of just filtering and ranking actions, LLMs have also been used to evaluate state values or rank plans (i.e., action sequences rather than individual actions).
- Kambhampati, Subbarao, et al. "Position: LLMs Can't Plan, But Can Help Planning in LLM-Modulo Frameworks." ICML 2024
2. The evaluation based on the number of nodes explored is partial. We should not ignore the time cost (e.g., latency of calling LLMs) + financial cost of using commercial LLMs. It could be very likely that, although LLM+Graph Planning expands fewer nodes, it may take a longer wall-clock time to give the final outputs. I understand that the evaluation could be tricky and it remains an open question for a while. However, the authors should at least make an attempt to address this.
3. In the abstract, this statement is inaccurate: “works have been proposed to investigate the planning effectiveness of LLMs, without considering any utilization of off-the-shelf planning techniques in LLMs.” There have been quite a few papers embedding LLMs in off-the-shelf planning algorithms:
- Zhao, Zirui, Wee Sun Lee, and David Hsu. "Large language models as commonsense knowledge for large-scale task planning." NeurIPS 2023.
- Yao, Shunyu, et al. "Tree of thoughts: Deliberate problem solving with large language models." NeurIPS 2023.
4. While the corrupted domain model experiment looks interesting, it is unclear what message it tries to convey. Specifically, why would one run the algorithm on top of a corrupted domain model when there exist approaches that can leverage LLMs to help complete the domain model before starting the search?
- Guan, Lin, et al. "Leveraging pre-trained large language models to construct and utilize world models for model-based task planning." NeurIPS 2023
- Wong, Lionel, et al. "Learning adaptive planning representations with natural language guidance." ICLR 2024.
5. The step of LLM-based action pruning can make the search incomplete, since an LLM may keep ignoring the required action(s) -- in other words, there is no guarantee that the LLM can produce a goal-reaching plan. I notice the authors mention in a later section (which should be moved to an earlier part) that including pruning probabilities could address the problem. I don’t fully agree with this. Can the authors give more detail on how pruning probabilities could guarantee completeness?
6. In the prompt (fig. 3), only the proposition set at the current state is provided. Did the authors consider including the running history of actions (i.e., the partial plan)? Would this affect the overall performance?
7. Line 109: typo in “Algorithm ??”
8. Several works (mentioned earlier) already show that LLMs can be useful heuristics. Can the authors elaborate on the new insights this work provides?
-----
Overall, this study provides a thorough evaluation of LLMs within the Planning Graph algorithm. I appreciate the comprehensiveness of the experiments. However, I also have concerns over the scope of this study (i.e., restricting itself to a limited set of roles). I need to discuss with other reviewers and the authors before finalizing my recommendation.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the Weakness section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See the Weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: [Restriction in the use of classical planning]
Response #2.1: Thanks. We think extending our approach to other domains of planning is not an issue we need to worry about, as any planning domain that can be expressed in natural-language form, either directly or through certain transformations, is amenable to incorporating LLMs into established planning frameworks. From the perspective of our work, pruning and sorting are very suitable ways of integrating LLMs into GP, a planning framework, at least judging from the experimental results. In addition, improving classical planning frameworks with LLMs would broaden the applications of classical planning to more real-world scenarios in the planning community.
[Time cost]
Response #2.2: We are indeed aware of the time cost issue. As we mentioned in the conclusion section of the paper, "the runtime of LLMs4Plan is currently hindered by multiple LLM calls. While our method requires multiple LLM calls, it provides substantially improved results. There are also various ways to enhance runtime performance, like using smaller LLMs like Llama [14] or distilling LLMs’ knowledge into a smaller model [13, 7, 11]. Those are interesting avenues for future research."
[Inaccurate statement in the abstract]
Response #2.3: Thanks for the reminder. We will make the statement more specific, e.g., with respect to deterministic classical planning frameworks.
[Experiments on corrupted domain models]
Response #2.4: We agree that there have been approaches aiming at learning domain models automatically, such as the NeurIPS 2023 and ICLR 2024 papers you mentioned. It is indeed an open and challenging problem to investigate effective learning algorithms. There is no doubt that no learning approach yet guarantees that its learnt domain model is perfect, even with LLMs. It is therefore still necessary to explore novel approaches to solve planning problems with corrupted domain models.
[Completeness with respect to the pruning probability]
Response #2.5: As we can see from Algorithm 1, when the iteration i (<N) becomes large enough, the pruning probability \kappa_i will be very small, making Algorithm 1 close to the classic GP algorithm, i.e., no actions are removed with LLMs. In our experiments, we empirically set N to 7; N could be made much larger, but we don't need to, since N=7 is sufficient for our approach to solve all planning problems successfully.
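To make this argument concrete, here is a minimal sketch assuming a geometric decay schedule for \kappa_i (the actual schedule and helper names in Algorithm 1 may differ; `llm_filter` is a hypothetical stand-in for the LLM-based pruning step):

```python
import random

def decaying_prune_prob(i, kappa0=1.0, decay=0.5):
    # kappa_i shrinks geometrically with iteration i, so late
    # iterations almost never invoke the (possibly lossy) LLM filter.
    return kappa0 * (decay ** i)

def expand(actions, i, llm_filter, rng=random.random):
    """With probability kappa_i, apply the LLM pruning filter;
    otherwise keep every applicable action, as classic GP would."""
    if rng() < decaying_prune_prob(i):
        return llm_filter(list(actions))
    return list(actions)
```

Because \kappa_i tends to 0, later expansions keep the full action set with probability approaching 1, which is how the algorithm falls back to the classic (complete) GP behavior.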
---
Rebuttal Comment 1.1:
Title: Including running history of actions in prompt
Comment: [Including running history of actions in prompt]
Response #2.6: The history of actions makes the prompt very long, making it difficult for LLMs to output the result. We thus did not consider including the history of actions, even though it might improve the overall performance.
---
Rebuttal Comment 1.2:
Comment: I thank the authors for their detailed response. However, I don’t find it convincing enough. Here are some additional notes:
- `we mentioned in the conclusion section of the paper, "the runtime of LLMs4Plan is currently hindered by multiple LLMs calls. While our method requires multiple LLMs calls` I don’t think this statement in the conclusion can address my concern over the eval metric. I am aware & most people in relevant communities are aware of the cost of using LLMs. My key point is, a paper like this should not solely use the # of nodes explored, which is a “partial” metric, to claim advantage.
- `LLMs would broaden the applications of classical planning` The issue here is actually twofold. For one, as I just mentioned, without a fair metric, it would be hard to claim whether LLMs broaden the application of classical planning. Secondly, restricting LLMs to certain roles within a classical planning framework will not expand the scope of the problems that the original planning framework can solve (think about limitations like the expressiveness of symbolic domain representation)
Overall, I find my original evaluation appropriate for the current manuscript and will therefore maintain the current score.
---
Reply to Comment 1.2.1:
Comment: Thanks. We would like to clarify that, when we mentioned "broadening the applications of classical planning", we meant that application problems with corrupted models that can't be solved by classical planning may be solvable by the integration of LLMs and GP (i.e., our proposed LLMs4Plan), rather than ``broadening'' the expressiveness of symbolic domain representation. We are sorry for the confusion and will make this clear in the paper. | Summary: The paper aims to investigate integrating large language models (LLMs) into classical planning frameworks to enhance planning effectiveness. The authors propose a novel method named LLMs4Plan which integrates LLMs into action selection and mutual-constraint solving within the graph-based planning framework. Evaluated across ten classic planning problems, this approach demonstrates improved success rates and reduced computational complexity compared to traditional methods. The study concludes that while LLMs alone are insufficient for planning, their integration into classical frameworks significantly boosts performance.
Strengths: 1. This paper investigates an intriguing topic: the performance of LLMs in classical planning problems. While the impressive performance of LLMs in natural language processing and coding tasks is well-investigated, their efficacy in planning tasks remains largely unexplored. Understanding whether LLMs can replace classical planning algorithms is a significant and meaningful research question.
2. The paper conducts extensive experiments on ten classical planning problems, which enhances the credibility of its findings and conclusions. This comprehensive evaluation demonstrates the robustness of the proposed approach.
3. The paper reveals that LLMs still cannot surpass classical planning algorithms, thereby highlighting a valuable direction for future research. This insight encourages further investigation into how LLMs can be effectively integrated with traditional planning methods.
Weaknesses: 1. Although the authors point out that LLMs cannot outperform classical planning algorithms on their own and need to be integrated with classical methods to perform well, the paper lacks detailed insights on this integration. For example, specific strategies for integrating LLMs with classic planning algorithms and the roles where LLMs excel within planning problems are not thoroughly discussed. The designed "expandGraph" and "sortActions" may not be the best practice. Future research directions to enhance the planning capabilities of LLMs should be more explicitly outlined.
2. The experiments are conducted in simulated planning domains, and the paper does not provide real-world applications or case studies to validate the practical utility of the approach. Including experimental results from more realistic scenarios would strengthen the paper.
3. While the method is effective for graph-based planning, its applicability to other planning frameworks or domains is not thoroughly investigated. A broader analysis could reveal the versatility of the proposed approach.
4. Typos: Algorithm ?? in Line 109.
Technical Quality: 3
Clarity: 3
Questions for Authors: What do you foresee as the future of planning algorithm development? Will it be a hybrid approach combining LLMs with classical planning methods, or an end-to-end solution relying solely on LLMs?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See the Weaknesses part
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: [Insight of integrating LLMs into graph planning]
Response #1.1: Thanks. The insight of using LLMs in graph planning is analogous to one of the general ways humans figure out solutions to planning problems, i.e., first looking ahead and then searching back. Given a planning problem, humans usually conduct two phases to find solutions: (a) forward-building a rough solution scheme from the initial state s_0 to the goal g according to "looking ahead" strategies to narrow down the scheme, and (b) "searching back" through the scheme according to "preference" strategies over actions. The insights of the "looking ahead" and "preference" strategies are implicitly implemented by LLMs, corresponding to the two main steps "expandGraphLLMs" and "sortActionsLLMs" in our LLMs4Plan approach. These two steps correspond to two critical steps that determine the efficacy of graph planning. This may indeed not be the best practice for other planning frameworks, e.g., plan-space-based planning, satisfaction-based planning, etc. As mentioned in the last paragraph of the introduction section, in this work we provide new clues for how to deeply embed LLMs into off-the-shelf planners to enhance planning capabilities, i.e., first identifying critical steps in off-the-shelf planners, and then designing proper prompt generation to be embedded into the planners. While it is highly possible that different planning frameworks have their own critical steps, the idea of our LLMs4Plan approach on how to embed LLMs in graph planning can be shared across different planning frameworks. We will add more discussion of future research directions in the conclusion section if the paper is accepted.
[Experiments in real-world applications]
Response #1.2: Thanks. We agree that conducting experiments in real-world applications would be more practical. It is, however, non-trivial to design planning domain models from real-world applications for off-the-shelf planners. On the other hand, the planning domains we use can indeed be viewed as abstractions of real-world domains, and they are used for evaluations of planners in the planning community [12,15].
[Applicability to other planning frameworks]
Response #1.3: The idea of integrating LLMs into graph planning can be applied to other state and action space search-based planning frameworks.
[Foresee the future of planning algorithm development]
Response #1.4: We believe this is an open question. Through our related studies, we believe embedding state-of-the-art LLMs into planning frameworks is more promising, compared to solely end-to-end LLMs.
[Typos]
Response #1.5: Thanks. We will revise the typos correspondingly. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Adversarial Environment Design via Regret-Guided Diffusion Models | Accept (spotlight) | Summary: This work focuses on Unsupervised Environment Design (UED), a problem setting whereby a teacher designs environments for a student learning to solve the task. This area of research has been in focus recently due to its ability to train more general agents in an open-ended setting. The authors look to build on recent work using generative models to generate environments, and propose a diffusion-based approach, ADD. The method seems sound and leads to gains in two of the canonical UED environments. I recommend acceptance as it incrementally improves the UED literature.
Strengths: The main strengths are listed below. Since I recommend acceptance, the focus of the review is on weaknesses to improve the paper as much as possible for the camera ready version.
* The method makes sense intuitively and seems to work as expected, especially with the curriculum results.
* The paper is well written and clearly builds on previous works (e.g. in the related work discussion).
* Experiments are well presented and show clear performance gains.
Weaknesses: * The x axis for the plots shows total steps and not student gradient updates. This puts replay methods at a huge disadvantage since they take ~2x the number of environment steps, but the policy isn't training so arguably it should not count when comparing the effectiveness of the curriculum. I think this should be changed to match the literature.
* The baselines are fairly weak, aside from PLR and ACCEL (which have been weakened by the previous point). Can the authors compare against CLUTR and show why ADD produces a better curriculum? To me that is the obvious baseline. Further, note the improvements to PAIRED proposed in [1] which make it a much stronger baseline. I would say the original PAIRED is more of a concept than a baseline at this point due to known deficiencies.
* There is a clear limitation of your method that you require a pre-training phase with access to the environment, and do not count the number of steps as part of training because it is "unsupervised". This is fine, but then makes it wrong to show training steps for the replay methods. Please either 1) include the steps from this in the x axis for ADD (which will drastically weaken performance) or 2) switch the x axis to student gradient updates. Otherwise it is an apples to oranges comparison in your favour.
* I would love to see examples of the generated environments in the main body, for me it is more useful than the theory part, but maybe that is subjective :)
* More of a general comment, I find it strange that UED papers don't cite [2]. It is the largest scale demonstration of the value of UED and motivates research on these methods. It explicitly shows PLR is effective for curricula over a massive task space with a 500M parameter transformer based policy. This clearly shows that UED research can have impact on large scale AGI focused projects, so I would have expected the UED community to all be very excited about this.
[1] Mediratta et al. "Stabilizing Unsupervised Environment Design with a Learned Adversary". CoLLAs 2023
[2] Bauer et al. "Human Timescale Adaptation in an Open-Ended Task Space". ICML 2023
Technical Quality: 3
Clarity: 3
Questions for Authors: * The meta data plots show ACCEL starting with the same number of stumps etc as the other baselines. However in the original implementation/paper it should be starting with very simple terrains. Was this the case in your work?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitation of requiring additional data to pre train the diffusion model is not mentioned, I think this is actually the largest weakness by far. It could be interesting to see if pre trained foundation models work well here too.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate Reviewer a2dv for the valuable feedback and review. Below is our response to the reviewer's comments and questions.
### Weak 1: About the x axis for the plots
We agree that using total steps is disadvantageous for replay methods. However, we want to point out that if we use student updates, it becomes disadvantageous for methods that do not use a replay. We deliberated on this issue when organizing the experimental results, and decided to use total steps for two reasons: (1) it is common in reinforcement learning (RL) research to evaluate the performance of an online RL algorithm after a fixed number of environment steps, and (2) replay methods prior to ACCEL also used total steps for the x axis [1, 2].
Despite our decision, we realize that there could be controversy about this issue. Hence, we measured the performance of our method trained for only half of the environment steps used in the original experiment, to accommodate the perspective that using the number of policy updates as a metric is more reasonable. When using half of the environment steps, the average score of our method in BipedalWalker is 127.4 +/- 16.0, and the average success rate in the maze environment is 0.72 +/- 0.04. We believe the performance is still competitive compared to the baselines, and will add these results to Appendices C.1 and C.2.
### Weak 2: About the baselines
We agree that it would be beneficial to compare our method against CLUTR [3] and PAIRED with high entropy and protagonist-antagonist behavioral cloning [4]. However, we want to point out that [3] was proposed to solve the combinatorial explosion issue and that the bipedal walker domain is not one of its target domains. Furthermore, [4] eventually utilizes replay in the bipedal walker, so it is not an adequate method to represent learning-based methods. Based on these points, we did not include them in the baselines, but since experiments in the maze environment are possible, we will conduct additional experiments on these baselines and add the results to Appendix C.1.
### Weak 3: About including the pre-training steps into the x axis
We want to point out that there is no interaction with environments during the pre-training phase, since we randomly sample parameters without access to environments and train the diffusion model on the sampled parameters. So we disagree with the reviewer's concern that we should count the steps from the pre-training phase on the x axis. However, similar to the reviewer's point, we agree that the time spent on pre-training should be considered for a fair comparison. To resolve this issue, we will include the performance measured after a number of environment steps adjusted to reflect the time spent on pre-training. Since pre-training took much less time than agent training (9 hours vs 56 hours in the maze environment, and 7 hours vs 92 hours in the BipedalWalker environment), we expect there will be no drastic performance degradation.
### Weak 4: Examples of generated environments in the main body
We will resize Figure 7 and Figure 12 and add each to Section 5.1 and Section 5.2.
### Weak 5: An additional citation
We appreciate the reviewer for pointing out the remarkable research [5] related to UED. We will add it to our citations.
### Question 1: About the meta data plots of ACCEL
The ACCEL paper [6] describes two implementation methods. The difference between the two methods lies in the domain from which the initial environment parameters are sampled. One method samples from the full parameter range, while the other restricts sampling to a range that ensures simple environments are generated. Starting with a simple environment and gradually evolving it is a simple yet powerful idea. However, when using ACCEL as a baseline, it might not be fair because it already incorporates prior knowledge of which parameters create simple environments. Thus, we chose to use the ACCEL implementation that samples parameters from the full parameter range as the baseline. As a result, complex environments are generated at the beginning of training, leading to differences from the original paper's plot.
### References
[1] Jiang et al. "Prioritized Level Replay." International Conference on Machine Learning. 2021.
[2] Jiang et al. "Replay-Guided Adversarial Environment Design." Advances in Neural Information Processing Systems. 2021.
[3] Azad et al. "CLUTR: Curriculum learning via unsupervised task representation learning." International Conference on Machine Learning. 2023.
[4] Mediratta et al. "Stabilizing Unsupervised Environment Design with a Learned Adversary." Conference on Lifelong Learning Agents. 2023.
[5] Bauer et al. "Human Timescale Adaptation in an Open-Ended Task Space." International Conference on Machine Learning. 2023.
[6] Parker-Holder et al. "Evolving Curricula with Regret-Based Environment Design." International Conference on Machine Learning. 2022.
---
Rebuttal Comment 1.1:
Title: No score change
Comment: I already gave a high score for this, so the rebuttal was unlikely to see an increase. Please include a discussion of Weakness 1 in the main body. I don't agree with the comment "[4] eventually utilizes replay in the bipedal walker" because there are still stronger baselines than the ones in your paper. These should be in the main body; why wouldn't they be?
---
Rebuttal 2:
Title: Author response
Comment: Thanks for the response. We misunderstood the reviewer's concern as a suggestion to replace PAIRED with [4]. We now understand the reviewer's point and will include the results of [4] in the main body. | Summary: This work applies regret-guided diffusion models to the UED setting in order to generate adversarial environments that preserve diversity.
Strengths: * The contributions are well motivated and appear to be novel.
* The writing is generally clear and concise.
* The paper is contextualized well within prior literature.
Weaknesses: * Minor typos/grammatical issues:
* Line 253: “challenging”—This word alone does not hold any descriptive power. Because the nature of the tasks are described next in impartial terms, this word seems redundant/rhetorical.
* The limitations do not mention the assumed structure of the environments; I am not convinced that diffusion can be applied to all environment parameterizations.
* Why is shortest path length used as a complexity metric? It seems to describe the tail of the distribution; Why not average path length? Also, number of blocks does not seem to be a well-motivated complexity metric either.
* While ADD “successfully generates adversarial environments while preserving diversity”, the behavior improvements seem to be minimal based on the training curves / zero shot performance.
* A more general point: it is hard to tell what exactly the authors are trying to show from the experiments; the conclusion nicely sums up the analysis, but it would have been nice to have been told at the top of “experiments” what each experiment is trying to test.
* Given the prevalence of UED domains in the literature, a third evaluation domain would really strengthen this work. Otherwise, I am concerned about the applicability of this method to other environments, especially since the authors have not addressed the implicit assumptions about workable environment parameterizations for their diffusion approach.
Technical Quality: 3
Clarity: 3
Questions for Authors: * What are the assumptions on the structure of the environment parameterization? Can this be applied to all simulated environments? Or only a subset with certain properties?
* Why were RGB channels used to describe the maze environment instead of e.g. a binary (continuized for the diffusion process); it seems odd to parameterize the environment based on the observation space, and then to extract these parameters themselves based on the observation values. Are there certain parameterizations for the diffusion process that work better/worse? What properties does this representation space need to have?
* Are the t-SNE embeddings generated using all points, and then each individual plot displays just the relevant method’s points?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: * The limitations are outlined accurately by the authors in Section 6. However, I would like the authors to address the first question above, which I believe is an additional limitation that would be worth discussing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate Reviewer 1NF7 for the valuable feedback and review. Below is our response to the reviewer's comments and questions.
### Weak 1: Minor typo "challenging"
We agree that the word "challenging" in line 253 is redundant, so we will exclude it.
### Weak 2: About complexity metrics: shortest path length and number of blocks
We used shortest path length and the number of blocks following prior works [1, 2, 3]. While the number of blocks may not always correlate with an environment's complexity, there is a tendency for environments to become more maze-like and challenging as the number of blocks increases when the goal and start positions are fixed. Thus, considering both the number of blocks and the shortest path length can give a rough estimate of the environment's complexity. Additionally, we agree that a metric considering the lengths of all paths can better explain the complexity of an environment. However, since there are numerous paths to the goal within the time limit, simply averaging these paths does not meaningfully explain complexity. Hence, a carefully designed metric considering the lengths of possible paths is needed, and we believe this is an area that requires further exploration by UED researchers.
### Weak 3: About the expression “successfully generates adversarial environments while preserving diversity”
It summarizes the complexity and diversity of the environments generated during the training process, which are shown in Figures 2(c) and 2(d). We would like to point out that it is not about the performance.
### Weak 4: What each experiment is trying to test
To help readers better understand the purpose of each experiment, we will improve the explanation in Line 266-272 as follows.
"The primary goal of our experiments is to evaluate the generalization capability of the trained agent. To this end, we measured the zero-shot transfer performance in challenging, human-designed environments. Additionally, to understand where the differences in performance originate, we conducted quantitative and qualitative analyses of the curriculum of the generated environments by tracking complexity metrics and generating t-SNE plots."
### Weak 5: A third evaluation domain and applicability
Regarding the reviewer's concern about applicability, we would like to point out that most tasks used in current UED research either have a form similar to Minigrid or utilize a continuous parameter space. Hence, sampling environment parameters using a diffusion model is feasible. Therefore, we agree that experiments in a third domain would strengthen our work, but they are not necessary to demonstrate our method's applicability. For a deeper discussion of workable parameterizations, please refer to the response to Question 1.
### Question 1: About the assumptions on the structure of the environment parameterization
The assumed environment parameterization is one that can be learned by the diffusion model. If the parameter space is continuous, methods such as DDPM [4] can be applied. Even when the parameter space is discrete, methods such as D3PM [5] can be applied. Hence, we believe that our algorithm can cover most tasks that can be handled by existing UED methods, which model the environment generation process as an MDP [1] or randomly sample environment parameters [2, 3].
Even for environment parameters requiring constraints, such as ensuring that a path always exists from start to goal in a maze environment, we can utilize guided diffusion and rejection sampling to meet the constraints. Furthermore, when the constraint is complex, the diffusion model can be an effective solution, as discussed in [6]. Therefore, our method has the potential to handle both continuous and discrete parameters with constraints. However, if the environment itself is challenging to parameterize, it would be difficult to apply ours and other UED algorithms.
We hope this response sufficiently addresses the reviewer's question. If there is an example of a parameterization where challenges in applying our approach is expected, we would be happy to discuss it further.
### Question 2: About RGB channels used to describe the maze environment
The representation we used is a continuized binary one, where open spaces are represented by 0 and other elements by 1. There are two reasons why we use three channels. First, it allows us to represent obstacles, the start point, and the goal point in different channels using binary values, which we believe is a natural representation. Second, it is easy to visualize. It is technically feasible to use a single channel and represent each element with a different value. We expect this would make training even easier than with our representation, as the reduced dimension could lead to faster learning. We hope we have understood the reviewer's question correctly and that our response addresses it sufficiently.
### Question 3: About t-SNE embeddings
We generated the t-SNE embeddings using all environment parameters, and each plot displays only the points corresponding to the respective methods. By doing so, we were able to compare the diversity of environments generated by each baseline.
### References
[1] Dennis et al. "Emergent Complexity and Zero-Shot Transfer via Unsupervised Environment Design." Advances in Neural Information Processing Systems. 2020.
[2] Jiang et al. "Replay-Guided Adversarial Environment Design." Advances in Neural Information Processing Systems. 2021.
[3] Parker-Holder et al. "Evolving Curricula with Regret-Based Environment Design." International Conference on Machine Learning. 2022.
[4] Ho et al. "Denoising Diffusion Probabilistic Models." Advances in Neural Information Processing Systems. 2020.
[5] Austin et al. "Structured Denoising Diffusion Models in Discrete State-Spaces." Advances in Neural Information Processing Systems. 2021.
[6] Yang et al. "Compositional Diffusion-Based Continuous Constraint Solvers." Conference on Robot Learning. 2023.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their thorough response to my questions and concerns. I find the authors' responses convincing, and believe that the changes made in response to mine and the other reviewers' suggestions have also improved the manuscript. I am updating my score from "5: borderline accept" to "7: accept".
---
Rebuttal 2:
Title: Author Response
Comment: Thanks for the response. We are glad that our answers cleared up the reviewer's concerns. | Summary: This paper proposes an approach for gradient directed, regret-based UED based on guiding a pre-trained diffusion model.
Strengths: This paper addresses a major shortcoming of prior UED approaches. In the past, gradient-based UED approaches have been outperformed by sample-based or evolutionary approaches for searching for environments. While there was a general intuition that gradient-based approaches would ultimately be more scalable, it was difficult to realise this intuition due to the fact that environment design is a high-dimensional optimisation problem for which RL algorithms have struggled. Using insights from generative modelling is a natural approach to bridging this gap, and this paper provides a few tricks which seem to have been missing from prior attempts.
Specifically, pre-training the diffusion model on random levels, and using learned guidance seems to be a particularly powerful combination.
Weaknesses: My main concern is that it seems like the baseline results in the bipedal walker domain don't replicate the results from prior work, suggesting some sort of bug/lack of tuning in the implementation? It appears that if I compare the numbers of ADD to the results in the ACCEL paper, it gets lower or equivalent performance to ACCEL itself? My main reason for not currently raising my score is questions about the accuracy of this evaluation.
There are some tricks that are rediscovered from prior work which should be attributed. Specifically, entropy regularisation for neural generators is studied in [1], and training a sort of critic to evaluate levels is studied in [2]. That being said, ADD puts a unique spin on both of these.
[1] Mediratta, Ishita, et al. "Stabilizing unsupervised environment design with a learned adversary." _Conference on Lifelong Learning Agents_. PMLR, 2023.
[2] Bhatt, Varun, et al. "Deep surrogate assisted generation of environments." _Advances in Neural Information Processing Systems_ 35 (2022): 37762-37777.
For the citation of environment design, it is correct to cite Dennis et al. for the UED formalism, but for the more general concept of designing environments it would be best to also cite contemporaries POET [3] and GPN [4].
[3] Wang, Rui, et al. "Paired open-ended trailblazer (poet): Endlessly generating increasingly complex and diverse learning environments and their solutions." _arXiv preprint arXiv:1901.01753_ (2019).
[4] Bontrager, Philip, and Julian Togelius. "Learning to generate levels from nothing." _2021 IEEE Conference on Games (CoG)_. IEEE, 2021.
It would also be best to be careful not to equate minimax regret based UED approaches with all UED approaches as is done on line 83, as there are many non minimax regret approaches to UED such as POET, SAMPLR, CLUTR, and DRED.
Pre-training on random levels seems like it leaves only a limited amount that the diffusion model could learn. It would be interesting to periodically fine-tune the diffusion model on newly generated levels to increase the power of the generator over time. This would generally be much more convincing as it could scale much further beyond the distribution of random levels.
#### Clarity
It would be helpful to include the per-transfer environment bar plots, as is traditional with UED papers; it seems like the same information is included in Tables 5 and 6, but they are much harder to read in that format, and it is difficult to tell where error bars overlap.
It would also be helpful to include the bootstrapped CI plots often used in UED papers, as recommended by [5].
Figure 4 and Figure 9 would be amazing to include in the main body, and go a long way towards explaining and demonstrating the method.
[5] Agarwal, Rishabh, et al. "Deep reinforcement learning at the edge of the statistical precipice." _Advances in neural information processing systems_ 34 (2021): 29304-29320.|
How do you arrive at the ADD acronym? It is a bit difficult for me to remember it and what it stands for.
Technical Quality: 2
Clarity: 4
Questions for Authors: Have you tried replacing equation 12 with a PAIRED-style loss maximising the expectation between a protagonist and antagonist?
Confidence: 4
Soundness: 2
Presentation: 4
Contribution: 4
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate Reviewer TVkr for the valuable feedback and review. Below is our response to the reviewer's comments and questions.
### Weak 1: ACCEL results in the bipedal walker domain
There are two main differences between the original ACCEL paper and our experiments. First, the domain of environment parameters is different. As shown in Table 6 of the original ACCEL paper [1], ACCEL was implemented in two ways, ACCEL+ and ACCEL++ (we replace the dagger of the original paper with "+" for readability in the review format). ACCEL++, which shows better performance, sampled environment parameters from an easy parameter range where the difficulty level is inherently low (Table 1 below, or Table 9 of the ACCEL paper). In contrast, ACCEL+ and the other baselines generated environments from the full parameter range (Table 2 below). While ACCEL++ shows powerful performance, we decided to use ACCEL+ as a baseline since ACCEL++ uses prior knowledge of which parameters create simple environments. This is one of the reasons why the average score of ACCEL is lower than that of the original paper.
The second difference is that the ACCEL paper recorded performance after a fixed number of policy updates, whereas we recorded performance after a fixed number of environmental steps, following other UED papers [2, 3] and traditional deep reinforcement learning research. Since ACCEL requires episodes without policy updates after mutating replayed environments, the number of environmental steps per policy update is higher than for other baselines. Therefore, when using environmental steps as the metric, the performance of ACCEL degrades relative to that reported in the original paper.
We will include an explanation of these differences in the paper to ensure that readers are aware of them. Additionally, we will add the performance of our method trained for only half of the environmental steps used in the original experiment to accommodate the perspective that using the number of policy updates as a metric is more reasonable. In the bipedal walker domain, the average score of our method after half of the environmental steps is 127.4 +/- 16.0, which we believe is still competitive compared to the baselines.
**Table 1**: Easy parameter range in the bipedal walker domain
|stump height|stair height|stair steps|roughness|pit gap|
|:-----------:|:-------------:|:-----------:|:-------------:|:-------------:|
|[0, 0.4]|[0, 0.4]|[1, 1]|[0, 0.6]|[0, 0.8]|
**Table 2**: Full parameter range in the bipedal walker domain
|stump height|stair height|stair steps|roughness|pit gap|
|:-----------:|:-------------:|:-----------:|:-------------:|:-------------:|
|[0, 5]|[0, 5]|[1, 9]|[0, 10]|[0, 10]|
### Weak 2: Prior works and citations
We appreciate the reviewer for highlighting prior works ([4], [5]) related to our approach. We will add these to related works and reference them to strengthen our claim regarding the addition of the entropy term and the use of a learned critic. We will also cite POET [6] and GPN [7] to help readers better understand previous research on the general concept of environment design.
### Weak 3: No minimax regret UED
We will add an explanation to the related works to clarify that there are UED methods with non-minimax regret objectives.
### Weak 4: Periodically fine-tuning the diffusion model
We agree that periodically fine-tuning the diffusion model using newly generated levels could potentially result in a more powerful generator. However, we are not sure that fine-tuning the diffusion model using levels generated by the diffusion model itself would genuinely enhance the generator's capabilities. One possible approach to address this issue is to use another UED algorithm in parallel to generate levels that our diffusion model cannot create. We believe it could be an interesting future work.
### Weak 5: Clarity
**Additional plots**: We will include per-transfer bar plots and bootstrapped CI plots below Tables 5 and 6 to improve the readability of the experimental results.
**Figure 4 and Figure 9**: We will include Figure 4 and its explanation in the beginning of Section 5.1. We will also include Figure 9 after Section 5.2 and add its explanation, which is described in Appendix A.5, after Section 4.4.
**ADD acronym**: The ADD acronym stands for "A"dversarial environment "D"esign via regret-guided "D"iffusion models. We welcome any suggestions for a better acronym.
### Question 1: Replacing equation 12 with a PAIRED-style loss
We have not yet attempted to replace Equation 12 with a PAIRED-style loss, which predicts the difference between the antagonist and protagonist. However, we believe it would be a worthwhile experiment and could potentially yield competitive performance.
### References
[1] Parker-Holder et al. "Evolving Curricula with Regret-Based Environment Design." International Conference on Machine Learning. 2022.
[2] Jiang et al. "Replay-Guided Adversarial Environment Design." Advances in Neural Information Processing Systems. 2021.
[3] Garcin et al. "DRED: Zero-Shot Transfer in Reinforcement Learning via Data-Regularised Environment Design." International Conference on Machine Learning. 2024.
[4] Mediratta et al. "Stabilizing Unsupervised Environment Design with a Learned Adversary." Conference on Lifelong Learning Agents. 2023.
[5] Bhatt et al. "Deep Surrogate Assisted Generation of Environments." Advances in Neural Information Processing Systems. 2022.
[6] Wang et al. "Paired Open-Ended Trailblazer (POET): Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their Solutions." arXiv:1901.01753. 2019.
[7] Bontrager et al. "Learning to Generate Levels from Nothing." IEEE Conference on Games. 2021.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal by Authors
Comment: >ACCEL was implemented in two ways, ACCEL+ and ACCEL++ (we replace dagger of the original paper with "+" for readability in the review format). ACCEL++
This makes sense! I think ACCEL++ is the canonical version, as one of the core insights of that paper is that there is often a small number of canonical "empty" levels which are a better initialisation for evolution. But it does make sense to compare against ACCEL+ given that the inductive bias is orthogonal to your approach and we want to isolate the effect of the different level optimiser. You should include them both and flag this nuance, but I agree this is a valid experimental methodology and a fair comparison.
>We will include an explanation of this difference in the paper to ensure that readers are aware of these differences. Additionally, we will add the performance of our method trained for only half of environmental steps used in the original experiment to accommodate the perspective that using the number of policy updates as a metric is more reasonable.
It would also be useful to compare at the same number of steps as the original ACCEL, since the longer horizon will give a better sense of long-term performance. Including this both in terms of environment steps and policy steps is interesting, and would be useful for the community to have a sense of the nuances in current SOTA.
Given that I'm now convinced the empirical evaluation is correct, and since this paper presents a novel and promising attack against one of the biggest shortcomings of prior UED approaches, I expect it will have a large impact on the field. I'm raising my score to reflect this.
---
Reply to Comment 1.1.1:
Title: Author Response
Comment: We are glad that our clarification addressed the reviewer's concerns. We will include the results of ACCEL++ and the performance of the baselines measured after a fixed number of policy updates. | Summary: This paper proposes a diffusion model with differentiable regret estimate for unsupervised environment design. The authors write a diffusion process to model environment parameters where the process is described in terms of a scoring function and derivative of the regret. The scoring function is pre-trained on a set of random environments. The diffusion process is further fine-tuned with the regret to generate environments for curriculum learning of an agent. A critic, which is trained with cross-entropy loss using binned environment returns, is used to approximate the regret. By using the derivative of the critic w.r.t. environment parameters, diffusion process is fine-tuned with entropy augmented regret. Experimental results on continuous and discrete control domains show that the model is competitive with previous best models. Using regret guidance significantly improves the performance.
Strengths: The paper introduces a diffusion process which is well suited for modeling continuous parameter. It implements a differentiable regret approximation that better assigns credit for the diffusion process. The formulation through the optimal environment distribution with a trainable scoring function is also interesting.
Weaknesses: There are a few things that need more clarification and ablations.
1. While the paper shows that the diffusion process with a differentiable regret estimate gives good results, it is not clear which component is the most critical. Is the diffusion process critical for the success? Can you train PAIRED with a differentiable regret?
2. Similar to above, is the entropy term critical? Can you train the diffusion process without it and get comparable results?
3. Can you explain the reason why you trained the critic model with binned returns rather than using actual returns in a regression objective?
4. How critical is the pre-training for the scoring function? How does the performance change with fewer or more environment samples?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see above for specific questions.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper addresses the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate Reviewer rCfY for the valuable feedback and review. Below is our response to the reviewer's comments and questions.
### Weak 1: Is the diffusion process critical for the success? Can you train PAIRED with a differentiable regret?
The proposed algorithm critically relies on both the diffusion process and differentiable regret. Without a differentiable regret, using only the pre-trained diffusion process would theoretically result in generating random environments, failing to create a meaningful curriculum. This outcome is evident in Figures 2 and 3, where the performance of ADD without guidance is shown. Additionally, since differentiable regret is an approximation of the previously used regret estimation method, it is possible to train PAIRED [1] using this approach, but significant performance improvement is not expected.
### Weak 2: Is the entropy term critical? Can you train the diffusion process without it and get comparable results?
The entropy term is also critical. Adding the entropy term ensures that the distribution from which environment parameters are sampled becomes the softmax distribution of regret, as shown in Equation 8. Without the entropy term, it would be challenging to implement the sampling of environment parameters that maximize regret using the diffusion process, to the best of our knowledge. However, by setting \omega to an extremely high value, we can effectively simulate the absence of the entropy term. We will include an ablation study to explore the performance changes with varying \omega values in the appendix. We report partial results in the following table.
**Table 1**: Performance as a function of \omega in the partially observable navigation task. Blanks stand for experiments that are not finished yet.
|\omega|5|10|20|40|80|160|
|:-----------:|:-------------:|:-----------:|:-------------:|:-------------:|:-------------:|:-------------:|
|mean success rate|0.85 +/- 0.05|0.81 +/- 0.05|0.82 +/- 0.03|-|-|-|
Additionally, as reviewer TVkr pointed out, there is a prior work [2] that claims adding high entropy bonus when training neural generator of PAIRED yields better performance. This supports our claim that the entropy term plays a critical role.
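As a toy illustration of the role of the entropy term (our own 1-D construction, not the paper's sampler): with the entropy term, the target distribution is proportional to the prior times exp(ω·regret), so its score is the prior score plus ω times the regret gradient. The sketch below runs Langevin dynamics with a standard-normal prior and a linear stand-in "regret" R(θ)=θ, for which the guided target is exactly N(ω, 1) — samples concentrate on high-regret θ while the entropy term keeps the distribution from collapsing to a point:

```python
import numpy as np

rng = np.random.default_rng(0)
omega = 2.0   # guidance weight (illustrative value)
step = 0.05   # Langevin step size

def guided_score(theta):
    # prior q = N(0, 1) gives grad log q = -theta;
    # linear stand-in regret R(theta) = theta gives grad R = 1
    return -theta + omega * 1.0

theta = rng.standard_normal(2000)  # 2000 parallel Langevin chains
for _ in range(500):
    noise = rng.standard_normal(theta.shape)
    theta = theta + step * guided_score(theta) + np.sqrt(2 * step) * noise
# the stationary law is N(omega, 1), so the sample mean sits near omega
# while the spread (entropy) of the samples is preserved
```

Sending ω to infinity in this picture collapses the target onto the regret maximizer, which mirrors the performance degradation observed for large ω in Table 1.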
### Weak 3: Can you explain the reason why you trained the critic model with binned returns rather than using actual returns in a regression objective?
The reason for training the critic model with binned returns, rather than using actual returns in a regression objective, is to obtain a differentiable regret estimate. We need to estimate the maximum and average returns, and to accurately reflect the stochasticity of the environment and policy, we trained a network to predict the distribution of returns, similar to distributional RL. We used binned returns following one of the foundational distributional RL studies, C51 [3]. However, it is also possible to use methods like Implicit Quantile Networks (IQN, [4]), which learn the distribution with actual returns as the output, and are known to yield better performance.
### Weak 4: How critical is the pre-training for the scoring function? How does the performance change with less or more number of environment samples?
Pre-training the scoring function is an essential step. Through pre-training, the diffusion process becomes capable of sampling a wide range of environment parameters. This diversity enables the diffusion process to generate meaningful curricula when guided by differentiable regret. As the number of environment samples used in pre-training increases, we expect the diffusion process's ability to generate diverse environments to improve, thereby enhancing overall performance. We are conducting experiments with a smaller number of environment samples , and we will add the results to the appendix.
### References
[1] Dennis et al. "Emergent Complexity and Zero-Shot Transfer via Unsupervised Environment Design." Advances in Neural Information Processing Systems. 2020.
[2] Mediratta et al. "Stabilizing Unsupervised Environment Design with a Learned Adversary." Conference on Lifelong Learning Agents. 2023.
[3] Bellemare et al. "A Distributional Perspective on Reinforcement Learning." International Conference on Machine Learning. 2017.
[4] Dabney et al. "Implicit Quantile Networks for Distributional Reinforcement Learning." International Conference on Machine Learning. 2018.
---
Rebuttal 2:
Title: Required Action: Please Respond to the Author Rebuttal
Comment: Dear Reviewer rCfY,
As the Area Chair for NeurIPS 2024, I am writing to kindly request your attention to the authors' rebuttal for the paper you reviewed.
The authors have provided additional information and clarifications in response to the concerns raised in your initial review. Your insights and expertise are invaluable to our decision-making process, and we would greatly appreciate your assessment of whether the authors' rebuttal adequately addresses your questions or concerns.
Please review the rebuttal and provide feedback. Your continued engagement ensures a fair and thorough review process.
Thank you for your time and dedication to NeurIPS 2024.
Best regards,
Area Chair, NeurIPS 2024
---
Rebuttal 3:
Title: Additional Comment by Authors
Comment: We are writing this comment to help reviewers understand some of reviewer rCfY's concerns and our response, as well as to share the complete results of additional experiments. First, in response to the concern about the importance of the entropy term (Weak 2 in the rebuttal), we explained that the entropy term is crucial because it shapes the target distribution in a way that allows environment parameters to be sampled by the diffusion model. Since the influence of the entropy term diminishes as \omega in Equation 7 increases, we mentioned that we were conducting an ablation study on different \omega values and would include the results. The completed experimental results are shown in Table 1 below. From the results, we observed that as \omega becomes large, performance decreases, which highlights the importance of the entropy term.
**Table 1**: Performance in accordance to \omega in partially observable navigation task. Blanks stand for experiments that are not finished yet.
|\omega|5|10|20|40|80|
|:-----------:|:-------------:|:-----------:|:-------------:|:-------------:|:-------------:|
|mean success rate|0.85 +/- 0.05|0.81 +/- 0.05|0.82 +/- 0.03|0.64 +/- 0.07|0.47 +/- 0.16|
Next, to address the concern about the number of samples used during the pre-training phase (Weak 4 in the rebuttal), we trained the diffusion model using 1 million samples, which is 100 times fewer than in the original experiment, and measured the performance of the proposed algorithm. The result is a mean success rate of 0.76 +/- 0.07 in the partially observable navigation task. This is about 11% lower than the result reported in the main text (but still outperforms the baselines), which supports our claim that a larger number of samples used in pre-training would lead to better performance. Additionally, we would like to point out that since we are dealing with an unsupervised setting and the samples used in pre-training are generated through random sampling, there is no need to worry about data scarcity.
We hope that this additional comment helps the reviewers better understand our rebuttal. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Nuclear Norm Regularization for Deep Learning | Accept (poster) | Summary: The paper proposes a method to regularize the nuclear norm of the Jacobian of a function, e.g., one that represents a neural network. This method builds on prior art and includes the authors’ novel contribution as follows: The authors reference prior art for an equivalent problem formulation of nuclear norm regularization that avoids the computation of SVD. This equivalence also enables the authors’ novel result, which is that the nuclear norm of the Jacobian $Jf$ of a composite function $f=g \circ h $ is equal to the average of the squared Frobenius norms of $Jg$ and $Jh$; this result makes the proposed method perfectly apt for neural network training. The authors then use an approximation to the Frobenius norm of the Jacobians to avoid explicitly calculating a large Jacobian matrix, which significantly reduces computational cost and storage. The method is validated with a nuclear norm-regularized problem whose closed form solution is known. Finally, the efficacy of the method is shown on two applications: unsupervised denoising and representation learning.
Strengths: - The authors’ key finding on the Jacobian of a composite function is interesting,
original and significant.
- The paper is clear and well-written.
- The authors have given a good summary of the background and preliminaries.
- The authors have clearly identified the parts of their method based on prior art and based on their own contribution.
- The validation and application examples that demonstrate the proposed method’s efficacy are generally convincing.
- The experiments of the paper seem reproducible.
Weaknesses: Major comment:
- In the representation learning application, we only see the method’s efficacy on a single image. Could there be a way to quantify the performance of the method over a whole test dataset?
Minor comments:
- I found the following two references that also compute nuclear norm without the SVD. Could the authors either mention these references in their manuscript or clarify why these aren’t relevant?
[1] https://icml.cc/Conferences/2010/papers/196.pdf
[2] https://www.ijcai.org/proceedings/2017/0436.pdf
- Figure 1 has only the color legend. Could the authors confirm (and explicitly state on their manuscript) that the x- and y-axes correspond to each coordinate of the input x?
- In the caption of Figure 3, the authors say that “As predicted by Theorem 3.1, both problems converge to the same objective value.“ Could they use a less strong claim here? Possibly “nearly identical” as they said in the main text; otherwise, the gap in 3(c) is confusing.
- Could the authors reformat the subsection titles in Section 5 so that it is clear in a glance that “Unsupervised denoising” and “Representation learning” are the two application examples, and “Singular value shrinkage” and “Experiments” are under “Unsupervised denoising”?
- The PSNR values in Table 1 are based on averaging over only 100 images. Is there a reason why more images were not used? Could the authors repeat the experiment by averaging over more images?
- It is slightly confusing that the order of the methods in the Table 1 and Figure 4 do not match each other. Could the authors follow the same order?
Technical Quality: 4
Clarity: 4
Questions for Authors: Please see Weaknesses.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors briefly comment on the limitation of their method regarding the approximation of the squared Frobenius norm of the Jacobian in Conclusions. I think this is sufficient.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review of our manuscript. We are glad you appreciate the originality and significance of our method for Jacobian nuclear norm regularization.
*"In the representation learning application, we only see the method’s efficacy on a single image. Could there be a way to quantify the performance of the method over a whole test dataset?"*
We depict latent traversals in our autoencoder's latent space for a single non-cherry-picked image due to space constraints in the manuscript. We have included a few more examples in our rebuttal PDF attached to the global response, and we would be happy to include these in an appendix in the camera-ready. Whether our autoencoder recovers semantically meaningful directions of variation in latent space is ultimately a subjective judgment, and we do not believe that readers would gain additional insight regarding our method's performance on this task by including quantitative metrics.
*"I found the following two references that also compute nuclear norm without the SVD. Could the authors either mention these references in their manuscript or clarify why these aren’t relevant?"*
We would be pleased to add these references to the related work section. However, note that both of these papers propose methods for nuclear norm regularization in matrix learning problems, where one seeks to learn a single matrix $A \in \mathbb{R}^{m\times n}$. In contrast, our work generalizes the efficient method of Rennie and Srebro [2005] to non-linear learning problems, where the appropriate analog to penalizing the nuclear norm of a matrix is penalizing the nuclear norm of the Jacobian of the function being learned.
*"Figure 1 has only the color legend. Could the authors confirm (and explicitly state on their manuscript) that the x- and y-axes correspond to each coordinate of the input x?"*
That is correct. We will clarify this in the camera-ready.
*"The PSNR values in Table 1 are based on averaging over only 100 images. Is there a reason why more images were not used? Could the authors repeat the experiment by averaging over more images?"*
The standard test sets in the denoising literature are quite small; for example, our other test set "CBSD68" contains 68 images. We built our Imagenet test set using 100 random images to approximately match test set sizes that are common in the denoising literature. We do not expect that the results of our comparison would be significantly different if we used a larger test set.
We would be happy to incorporate the rest of your suggested changes to our paper's formatting in the camera-ready.
We hope this answers your questions and would be pleased to continue this conversation in the author-reviewer discussion period.
---
Rebuttal Comment 1.1:
Comment: Thank you for clearly addressing my comments! | Summary: The paper proposes a computationally tractable method to induce low-rankness of a neural network's Jacobian. The method essentially generalizes the max-norm from Renni and Srebro, 2005, to more general compositions of functions, as it is common in neural networks. The method is made computationally efficient by estimating the Frobenius norm of the Jacobian. Experiments for denoising and representation learning are used to validate the efficacy of the method.
Strengths: The idea of penalizing the rank of a function's Jacobian is practically useful in certain machine-learning applications. The proposed method enables one to do so in a computationally tractable manner that enables its use in applications where brute-force computation of an SVD and the Jacobian is infeasible.
The paper is technically solid and the method is evaluated on two realistic example tasks.
Weaknesses: While the preliminaries in Section 3.1 are easy to follow, large parts of Section 3.2, which contains the main contribution, would be easier to follow if the authors would give a concrete example early on for the functions f and g. An illustrative example would make the idea more accessible.
One of the key arguments of the paper is that the method scales to high-dimensional problems and very large neural networks. The application examples, however, consider relatively small and simple problems with relatively simple architectures (for both the denoising of not-so-large images and autoencoder-based representation learning of images tasks). It is therefore not demonstrated that the method indeed scales to high-dimensional problems and complex neural networks.
The impact of estimating the Frobenius norm of the Jacobian (as detailed in Section 3.3) on the performance of applications was not studied. The authors claim that a single draw of eps is sufficient to evaluate (6), but it is unclear to me what the impact of this choice is.
It might be good if the authors could quantify the complexity of the proposed method, when used during training (either in run-time or in the number of operations/multiplications). This could also provide insight into the scalability to large-dimensional problems with very large neural networks.
Technical Quality: 3
Clarity: 2
Questions for Authors: How are hyperparameters (e.g., the regularization factor eta) in the provided applications set?
Denoising and representation learning can be accomplished with other means as well, and existing state-of-the-art methods are likely to outperform the proposed approach. Is there any application that would uniquely benefit from the proposed regularizer, i.e., in which no other existing method can be used?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: I do not see a specific limitation that was not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review of our manuscript. We are glad that you appreciate our technical contributions and the practical utility of our method.
*"One of the key arguments of the paper is that the method scales to high-dimensional problems and very large neural networks. The application examples, however, consider relatively small and simple problems with relatively simple architectures (for both the denoising of not-so-large images and autoencoder-based representation learning of images tasks). It is therefore not demonstrated that the method indeed scales to high-dimensional problems and complex neural networks."*
In our denoising experiment, we apply our regularizer to a function of the form $f_\theta : \mathbb{R}^d \rightarrow \mathbb{R}^d$, where $d = 3 \times 256 \times 256$ since it operates on $256 \times 256$ RGB images. In this case, $Jf_\theta[x]$ is a $(3 \times 256 \times 256) \times (3 \times 256 \times 256) = 196608 \times 196608$ matrix, which occupies over 154 GB of memory if stored in float32 format and therefore cannot be stored on most GPUs. As our regularizer does not require any Jacobian computations, it can be successfully applied to this problem. In contrast, naive Jacobian nuclear norm regularization would be intractable due to the need to compute and take the SVD of the $196608 \times 196608$ denoiser Jacobians.
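The memory figure quoted above is easy to verify with a back-of-the-envelope computation (an illustrative check, not code from the paper; it assumes decimal gigabytes, i.e. 1 GB = 10^9 bytes):

```python
# Storage cost of the full denoiser Jacobian for a 256x256 RGB image.
d = 3 * 256 * 256             # flattened input/output dimension
entries = d * d               # Jf[x] is a d x d matrix
bytes_fp32 = entries * 4      # 4 bytes per float32 entry
gigabytes = bytes_fp32 / 1e9  # decimal gigabytes

print(d)          # 196608
print(gigabytes)  # ~154.6 GB, far beyond a single GPU's memory
```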
*"It might be good if the authors could quantify the complexity of the proposed method, when used during training (either in run-time or in the number of operations/multiplications). This could also provide insight into the scalability to large-dimensional problems with very large neural networks."*
We provide the formula for our regularizer in Equation (6) in Section 3.3 of our manuscript. As $f(x) = g(h(x))$, one must evaluate the $h(x)$ and $g(h(x))$ terms to train $f$ regardless of the training objective. Our regularizer requires additionally computing $h(x+\epsilon)$ and $g(h(x) + \epsilon)$, so if one computes the regularizer with $N$ noise samples, our regularizer requires an additional $2N$ function evaluations. As we used $N=1$ noise sample to compute our regularizer in our denoising and autoencoding experiments, the marginal cost of our regularizer was 2 function evaluations per iteration.
This compares favorably to computing the Jacobian, which requires $O(d)$ function evaluations for a function $f : \mathbb{R}^d \rightarrow \mathbb{R}^d$, and to the $O(d^3)$ SVD computation which is then required to compute the gradient of the Jacobian nuclear norm. Furthermore, as noted above, merely *storing* the Jacobian matrix is intractable for high-dimensional problems such as denoising.
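The "two extra function evaluations" described above can be sketched with a toy example. This is an illustrative reconstruction, not the authors' code: the function name `frob_reg_estimate`, the linear stand-in layers, and the exact normalization are our assumptions (the paper's Equation (6) may differ in detail). For $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$, the squared finite differences $\|h(x+\epsilon)-h(x)\|^2/\sigma^2$ and $\|g(h(x)+\epsilon)-g(h(x))\|^2/\sigma^2$ are Hutchinson-style estimates of $\|Jh[x]\|_F^2$ and $\|Jg[h(x)]\|_F^2$, requiring only forward passes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "layers" (illustrative stand-ins, not the paper's networks),
# chosen so the finite differences are exact: h(x) = Hx, g(z) = Gz, f = g o h.
H = rng.standard_normal((4, 5))
G = rng.standard_normal((3, 4))
h = lambda x: H @ x
g = lambda z: G @ z

def frob_reg_estimate(x, sigma=1e-3, n_samples=4000):
    """Monte-Carlo estimate of (||Jh[x]||_F^2 + ||Jg[h(x)]||_F^2) / 2
    using only forward evaluations of h and g (no Jacobians stored)."""
    total = 0.0
    for _ in range(n_samples):
        eps_x = sigma * rng.standard_normal(x.shape)     # noise in input space
        eps_z = sigma * rng.standard_normal(h(x).shape)  # noise in latent space
        total += (np.sum((h(x + eps_x) - h(x)) ** 2)
                  + np.sum((g(h(x) + eps_z) - g(h(x))) ** 2)) / sigma**2
    return total / (2 * n_samples)

x = rng.standard_normal(5)
est = frob_reg_estimate(x)
exact = 0.5 * (np.sum(H**2) + np.sum(G**2))
print(est, exact)  # the estimate concentrates around the exact value
```

With $N=1$ noise sample per iteration (as in the authors' experiments), only the two extra forward evaluations inside the loop body are added to each training step.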
*"How are hyperparameters (e.g., the regularization factor eta) in the provided applications set?"*
In the denoising experiments, we set $\eta = \sigma^2$, where $\sigma^2$ is the noise variance. (In our current manuscript, we have indicated that we set $\eta = \sigma$ in these experiments; this is a typo that we will correct in the camera-ready.) This choice of $\eta$ is motivated by results on optimal singular value shrinkage from Gavish and Donoho [2017], whose optimal shrinker solves a special case of our proposed denoising problem (11). (See our section "Singular value shrinkage" from line 230 onwards for details.) We set $\eta$ empirically in the autoencoder experiment via grid search.
*"Denoising and representation learning can be accomplished with other means as well, and existing state-of-the-art methods are likely to outperform the proposed approach. Is there any application that would uniquely benefit from the proposed regularizer, i.e., in which no other existing method can be used?"*
Our primary contribution is a tractable and well-grounded method for Jacobian nuclear norm regularization that can be applied to high-dimensional deep learning problems. There are few problems for which some given regularizer is strictly necessary and no other approach can be used, and our regularizer is no exception to this principle. Our goal in our experiments was to demonstrate the practical value of our proposed regularizer in high-dimensional learning problems by showing that it performs well in two tasks of interest to the machine learning community. The efficiency and simplicity of our method will enable the community to build on our work and discover new applications for Jacobian nuclear norm regularization.
We hope this answers your questions and would be pleased to continue this conversation during the author-reviewer discussion period. | Summary: The authors present an efficient method for regularizing the Jacobian of deep networks such that it is low-rank. This work is motivated by the fact that penalizing the Jacobian by the nuclear norm regularization is in general a computationally difficult task, as it needs to (i) actually compute the Jacobian and (ii) take the SVD of a large matrix. The authors' proposed method comes from the observation that the nuclear norm of a matrix can be computed by a non-convex optimization problem, which I believe is somewhat commonly considered in the matrix factorization literature. Then, they propose their method for estimating the Jacobian Frobenius norm, which is equivalent to computing the nuclear norm (roughly speaking).
Strengths: - I think the idea is neat and well-motivated. It stems from a tactic that I think is commonly used in matrix factorization literature (see [1] for example).
- The theoretical results seem to fit well with the main idea of the paper and overall strengthens the paper.
- I think this method could be a good starting point for future papers that need to consider computing the Jacobian (or at least regularize it). For example, there are papers in the topic of image editing, and I think there could be applications within that field that could use this method to circumvent costly evaluation of the Jacobian and computing its SVD.
[1] Lijun Ding, Dmitriy Drusvyatskiy, Maryam Fazel, Zaid Harchaoui. "Flat minima generalize for low-rank matrix recovery".
Weaknesses: - I think the main weakness of the paper is its experimental section. While I think this method could be a good starting point for other future papers, the current experiments don't really sell the effectiveness of the method. It seems that one of the main benefits of this method as shown in the experimental results in Table 1 is that the proposed denoiser is almost as good as supervised denoiser, despite having no corresponding clean images for training. But isn't N2N also unsupervised in the sense that it doesn't need clean pairs of images? It doesn't seem to have impressive performance gains over N2N or BM3D.
- Building upon the previous weakness, I wonder if there is a way of showing the effectiveness of this method through other means -- for example computational efficiency. This paper was motivated by the fact that it can circumvent costly Jacobian + SVD computations. Could the authors show with a small scale experiment showing the computational gains? I think that would strengthen the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: I do not have any specific questions besides the ones in the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations are listed in the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review of our manuscript. We are glad that you appreciate our theoretical contributions and view our method as well-motivated. In this rebuttal, we will address the questions in your review. If you are satisfied with our answers, we respectfully ask that you raise your score for our submission.
*"It seems that one of the main benefits of this method as shown in the experimental results in Table 1 is that the proposed denoiser is almost as good as supervised denoiser, despite having no corresponding clean images for training. But isn't N2N also unsupervised in the sense that it doesn't need clean pairs of images? It doesn't seem to have impressive performance gains over N2N or BM3D."*
As you note, our method performs nearly as well as a supervised denoiser and comparably to Noise2Noise (N2N), despite being trained exclusively on highly corrupted data without access to clean images. While N2N is also an unsupervised denoising method, training a denoiser using N2N requires *several noisy samples* of each clean image. Given a clean image $x$, these samples are of the form $x + \epsilon_i$, where the $\epsilon_i$ are distinct realizations of zero-mean noise. Such data is typically unavailable if one lacks access to the clean images, rendering N2N impractical for real-world applications. In contrast, our method requires only a *single* realization of each noisy image and can hence be applied to arbitrary datasets of noisy images, which can be easily obtained in the wild.
*"This paper was motivated by the fact that it can circumvent costly Jacobian + SVD computations. Could the authors show with a small scale experiment showing the computational gains? I think that would strengthen the paper."*
In our rebuttal PDF attached to the global response, we have included a figure comparing time per training step at batch size 1 for the denoising problem (11) using our regularizer and a naive Pytorch implementation of the Jacobian nuclear norm. We experiment with images drawn from Imagenet downsampled to sizes $S \times S$ for $S \in \{8,16,32,64\}$; our V100 GPU's memory overflows for $S \geq 128$. Whereas time per training step with the naive nuclear norm implementation rises to nearly 129 seconds per iteration for $64 \times 64$ images, each training step with our regularizer takes under 120 milliseconds. We additionally highlight that a key advantage of our regularizer is that it is tractable for problem sizes where simply computing the model Jacobian is infeasible. For example, in our denoising experiments, we trained our denoiser on $256 \times 256$ RGB images. The model Jacobian in this case is a $(3 \times 256 \times 256) \times (3 \times 256 \times 256) = 196608 \times 196608$ matrix, which occupies over 154 GB of memory if stored in float32 format and therefore cannot be stored on most GPUs. As our regularizer does not require any Jacobian computations, it can be applied to problems such as denoising where Jacobian computations are prohibitive.
We hope this resolves your concerns. If you are satisfied with our response, we respectfully ask that you raise your score for our submission. Otherwise, we would be pleased to continue this discussion during the author-reviewer discussion period.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions along with the additional experiments; I have raised my score accordingly. | Summary: The paper describes an elegant and very efficient numerical scheme for minimizing a regularization term taking the form of the nuclear norm of the Jacobian $\\|Jf [x]\\|_*$ of a function $f$ at an input $x$. The numerical scheme can be applied when the function is written as a composition of two functions $f=g\circ h$. The numerical scheme is supported by two theorems and experiments. The regularization term is supported by experiments.
Strengths: The main body of the article is very well written and pleasant to read. The regularization term is relevant and can be used in several contexts. The numerical scheme proposed by the authors is very effective. The experiments are convincing.
Weaknesses: For me, the main weaknesses of the article lie in the proof of the theorems (see below).
Another point that is unclear to me is due to the discrepancy between Theorem 3.1 and its application. Specifically, as it is currently written, on the right-hand side of (4), there is an infimum over all smooth functions $h$ and $g$. The "latent space" (the output space of $h$ and the input space of $g$) is not specified, and my understanding is that the infimum is over the union of all possible "latent spaces". The infimum over this union can be much smaller than the infimum for a given "latent space"... and, in practice, only one latent space is considered. This point should be clarified. Perhaps the theorem can be adapted to allow the freedom to choose the latent space.
Similarly, in practice, neural networks (of fixed architecture) are not able to reach all the smooth functions. This could have an impact on the two infimums of (4). This limitation should at least be stated clearly.
Finally, throughout the article, the reader would welcome hints on the choice of $g$ and $h$. In neural networks, we have $h_L \\circ...\\circ h_2 \\circ h_1$, and the reader doesn't know whether the choice of $h$ and $g$ amounts to choosing a layer separating them, or whether the method is extended to more than one composition in some other way.
Robustness to adversarial attacks seems to be a natural advantage of the regularization term. The authors may wish to mention this, along with other perspectives they deem relevant, in the conclusion.
In (3) and all the developments, the authors write $l(f(x),x)$. It is much more common in machine learning to consider $l(f(x),y)$, for an input/output pair $(x,y)$. The article would really gain to be extended to this setting.
In Theorem 3.2: I think it is a little o: $o(\\sigma^2)$, a quantity that goes to $0$ faster than $\\sigma^2$.
{\bf Comments on the appendices:}
Line 455: Please define AM-GM.
Line 457: Please remove 'compact'. It is the usual SVD.
Line 492: A sentence like 'Consider $z\\in\\Omega$' would be welcome.
In (14), last term: I think it is $f_m(z) + R_m^z(x)$, not $f_m(x) + R_m^z(x)$. This leads to several similar changes which (I think) do not have serious consequences. The changes are in (16), twice in line 498, once in line 509, and once below line 514.
Line 503: Please replace 'These functions....' with 'The composition of these functions'.
Line 509: I think the calculation would be simpler if you replace $ \\|Jf_m[z_i] - Jf_m[x]\\|_{*} $
by the absolute value of the difference between $ \\| Jf_m[z_i]\\|_{*} $
and $ \\| Jf_m[x]\\|_{*} $.
Calculus below Line 514: The layout is odd. Also, the calculation would gain clarity if you write the sum over $i$ and upper-bound each term.
Above Line 531 (and throughout the proof): Please write $\\epsilon \\rightarrow 0$ under the arrow in $\\rightarrow 0$.
Line 533: Please define RHS
Line 535: The fact that $\\|h^k_{m,\\epsilon} - h^k_m\\|_{L^1(\\Omega,\\mu)} \rightarrow 0$ is not clear to me. For instance, $\mu$ might involve a weighted Dirac mass at a point where $h^k_m$ is discontinuous. Although this paragraph is a sort of illustration, it is preferable to avoid saying something that might be false. Finally, at some other location in the proof, I think you need $\mu$ to be absolutely continuous.
Line 542: Please specify 'complement in $\\Omega$'.
Line 545: It is $g^k_{m,\\epsilon}$, not $g^k_{m,e}$.
Equation below Line 546: It is $B(h^k_m(x), \\epsilon)$ , not $B(h^k_m(x))$.
**A major issue:** From lines 543 to 549, the authors try to prove an inequality which, I think, cannot hold. The problematic intermediate step is in the inequality below line 546. It is
$\\|Jg^k_{m,\\epsilon} [h_m^k(x)]\\|_F \\leq \\sup_{y\\in h_m^k(\\Omega)} \\|Jg_m^k[y]\\|_F.$
It cannot hold because $g^k_{m,\\epsilon}$ is a smooth approximation of $g^k_m$ which is typically discontinuous. It is possible to find $\epsilon$ close to $0$ and points $y$ such that $\\|Jg^k_{m,\\epsilon} [y]\\|_F$ is arbitrarily large. To me, there is no guarantee that $h^k_m(x)$ avoids such points. Said differently, to me, the first inequality below Line 546 does not hold when $B(h^k_m(x),\\epsilon) \\cap V_j\\neq \\emptyset$, where $V_j$ is a Voronoi cell such that $h^k_m(x)\\not\\in V_j$, which your hypotheses do not exclude. By the way, you might want to provide the formula for the mollification and detail the calculations similar to the first inequality below Line 546.
Above line 550: It is $L^1(\\Omega,\\mu)$, not $L^1(\\Omega)$
Line 555: You state: 'As $h^k_{m,\\epsilon} \\rightarrow h^k_{m}$, $\\mu(\\Omega_1(m,k,\\epsilon)) \\rightarrow 0$.'. It would be useful to state in which sense $h^k_{m,\\epsilon} \\rightarrow h^k_{m}$ and to detail the arguments guaranteeing that $\\mu(\\Omega_1(m,k,\\epsilon)) \\rightarrow 0$. As already said, I suspect you need $\\mu$ to be absolutely continuous for the conclusion to hold. If I am correct, this hypothesis should appear in Theorem 3.1.
**A major issue:** Line 568, you state $\\int_\\Omega \\|Jg^k_{m,\\epsilon} [h^k_{m,\\epsilon} (x)] \\|^2_F d\\mu \\rightarrow \\int_\\Omega \\|Jg^k_{m} [h^k_{m} (x)] \\|^2_F d\\mu$. The sense of the term on the right of the equality is not clear to me since $g^k_{m}$ is piecewise linear, and might even be discontinuous. Even worse, I think nothing excludes $h^k_m$ from being constant on one of the Voronoi cells $V$. If this happens and if the constant turns out to be a point such that $Jg^k_{m}$ does not exist, the problem occurs on $V$ and we generally have $\\mu(V) \\neq 0$. Concerning the term on the left of the equality sign, I fear you encounter problems similar to those you may be familiar with since you mention the Total Variation in your article. The term on the left takes the jumps into account and might even go to infinity since you have a square. Looking at the proof of this statement, we find a possible cause for the mistake in lines 569-570. There, you state `...for any given $x\\in\\Omega$ one can choose $\\epsilon>0$ sufficiently small so that $g^k_{m,\\epsilon}(h^k_{m,\\epsilon} (x)) = g^k_m(h^k_{m} (x))$...'. This is not true if $x$ is on the boundary of the Voronoi cells defining $h^k_m$ or is such that $h^k_{m} (x)$ is on the boundary of two Voronoi cells defining $g^k_m$.
Appendix A.3: You always use the (big) $O$ but sometimes you have to write the (little) $o$. You need to distinguish between the two notations.
Technical Quality: 2
Clarity: 4
Questions for Authors: SVS and SVT are often interpreted as proximal operators. Can you please mention it in your introduction?
Confidence: 4
Soundness: 2
Presentation: 4
Contribution: 4
Limitations: The main limitation is that there are gaps in the proof of the main theorem.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review of our paper. We would be pleased to make the edits you have suggested for clarity and fix the typos you have pointed out in the camera-ready version of our paper. We have updated our proof of Theorem 3.1 to address your major issues; as we cannot upload revised manuscripts during the rebuttal period, we summarize our revised arguments below.
1. Our proof requires $\mu$ to be absolutely continuous wrt the Lebesgue measure $\lambda$ to transfer results of the form $\lambda(E) = 0$ or $\lambda(E_n) \rightarrow 0$ for sets $E_n, E \subseteq \Omega$ to $\mu(E) = 0, \mu(E_n) \rightarrow 0$, resp. Thank you for pointing this out; we will add this hypothesis to Theorem 3.1 in the camera-ready.
2. For any Voronoi partition $V_i, i=1,...,N(k)$ and corresponding functions $h_m^k, g_m^k$ that we construct in our proof, the preimage of the union of Voronoi boundaries (which we call $S_m^k$) under $h_m^k$ is a set of Lebesgue measure zero. If $\mu \ll \lambda$, then the preimage has $\mu$-measure zero as well. Our reasoning is as follows:
In lines 507-514, we define $g_m^k, h_m^k$ as piecewise affine functions such that $g_m^k(x) := g_m^{z_i}(x)$ and $h_m^k(x) := h_m^{z_i}(x)$ for all $x \in \textrm{int}(V_i)$. The affine functions $g_m^{z_i}, h_m^{z_i}$ have the key property that $\frac{\eta}{2}\left( \|Jg_m^{z_i}[h_m^{z_i}(x)]\|_F^2 + \|Jh_m^{z_i}[x]\|_F^2 \right) = \eta \|J f_m[z_i]\|\_*.$
At any $x$ on the interior of a Voronoi cell $V_i$, $g_m^k$ and $h_m^k$ are equal to $g_m^{z_i}, h_m^{z_i}$, resp, so by the key property above, $\frac{\eta}{2}\left( \|Jg_m^k[h_m^k(x)]\|_F^2 + \|Jh_m^k[x]\|_F^2 \right) = \eta \|J f_m[z_i]\|\_* < + \infty$. In particular, $\|Jg_m^k[h_m^k(x)]\|_F^2$ and $\|Jh_m^k[x]\|_F^2$ must both be well-defined and finite at this $x$. This can only happen if $h_m^k(x)$ lies on the interior of a Voronoi cell, as otherwise $Jg_m^k[h_m^k(x)]$ would be undefined. Hence $h_m^k$ maps the interiors of Voronoi cells to the interiors of Voronoi cells; the contrapositive is that if $h_m^k(x)$ lies on a Voronoi boundary, then $x$ also lies on a Voronoi boundary. Consequently, the preimage of the Voronoi boundaries under $h_m^k$ is a set of Lebesgue measure zero. If $\mu \ll \lambda$, this is also a set of $\mu$-measure zero.
Using this fact, we can replace all integrals over $\Omega$ from line 525 onwards with integrals over $\Omega(m,k) := \Omega \setminus S_m^k$ without changing their value. In particular, $Jg_m^k[h_m^k(x)]$ is well-defined $\mu$-a.e., which should resolve your concerns re: line 568, for which you note *"I think nothing excludes $h_m^k$ from being constant on one of the Voronoi cells $V$. If this happens and if the constant turns out to be a point such that $Jg_m^k$ does not exist, the problem occurs on $V$ and we generally have $\mu(V) \neq 0$."* It should also resolve your concerns re: lines 569-570, for which you state that one cannot always find $\epsilon>0$ so that $g_{m,\epsilon}^k[h_{m,\epsilon}^k(x)] = g_m^k[h_m^k(x)]$. One can in fact find such $\epsilon$ for $\mu$-almost all $x$, which is sufficient to apply the dominated convergence theorem.
3. We use an alternative approach to show that $\|g_{m,\epsilon}^k \circ h_{m,\epsilon}^k - g_{m,\epsilon}^k \circ h_{m}^k\|_{L^1(\Omega,\mu)} \rightarrow 0$ that avoids the issue you highlight re: lines 543-549. We employ a different decomposition of $\Omega \setminus S_m^k$ into good and bad sets:
- The good set $\Omega_0(m,k,\epsilon) \subseteq \Omega(m,k)$ is the set of $x \in \Omega(m,k)$ such that $d(x, S_m^k) > \epsilon$. ($d(p,S)$ denotes the distance from point $p$ to the set $S$.)
- The bad set $\Omega_1(m,k,\epsilon)$ is $\Omega(m,k) \setminus \Omega_0(m,k,\epsilon) = \{x \in \Omega(m,k) : d(x, S_m^k) \leq \epsilon\}$.
For all $x \in \Omega_0(m,k,\epsilon)$, $h_{m,\epsilon}^k(x) = h_m^k(x)$ because $d(x, S_m^k) > \epsilon$ and we employ the standard mollifier supported on $B(0,\epsilon)$. Consequently, $g_{m,\epsilon}^k(h_{m,\epsilon}^k(x)) = g_{m,\epsilon}^k(h_m^k(x))$ and therefore $\|g_{m,\epsilon}^k(h_{m,\epsilon}^k(x)) - g_{m,\epsilon}^k(h_{m}^k(x)) \|\_2 = 0$, so $\int_{\Omega_0(m,k,\epsilon)} \|g_{m,\epsilon}^k(h_{m,\epsilon}^k(x)) - g_{m,\epsilon}^k(h_{m}^k(x)) \|_2 d\mu = 0$. We no longer need the reasoning in lines 543-549 to prove this integral converges to 0.
We now address $\int_{\Omega_1(m,k,\epsilon)} \|g_{m,\epsilon}^k(h_{m,\epsilon}^k(x)) - g_{m,\epsilon}^k(h_{m}^k(x)) \|\_2 d\mu$. We employ the same bound on the integrand as under line 552 for the new bad set $\Omega_1(m,k,\epsilon)$. To see that $\mu(\Omega_1(m,k,\epsilon)) \rightarrow 0$, note that $\lambda(\Omega_1(m,k,\epsilon)) \rightarrow 0$, as $\Omega_1(m,k,\epsilon)$ is a union of cylinders of radius $\epsilon$ centered at the Voronoi boundaries $S_m^k$, which have Lebesgue measure zero. Under our new assumption $\mu \ll \lambda$, it follows that $\mu(\Omega_1(m,k,\epsilon)) \rightarrow 0$.
These results jointly show that $\|g_{m,\epsilon}^k \circ h_{m,\epsilon}^k - g_{m,\epsilon}^k \circ h_{m}^k\|_{L^1(\Omega,\mu)} \rightarrow 0$ while avoiding the issue you highlight re: lines 543-549.
We hope this resolves your concerns regarding our proof. If you are satisfied with our response, we respectfully ask that you raise your score for our submission. Otherwise, we would be pleased to continue this discussion during the author-reviewer discussion period.
---
Rebuttal Comment 1.1:
Comment: I have read the author's rebuttal but, in my opinion, the proof needs to be re-read in detail and the article needs another round of review. It is not possible, from an article and a rebuttal, to check that the proof is correct. For this reason, I will not change my rating. To avoid errors in the future, I recommend writing down the details of the mollification and being careful when swapping limits and integrals.
Best regards,
---
Reply to Comment 1.1.1:
Comment: We thank Reviewer gXsR for reading our rebuttal. We are confident that our updated proof is now correct, and we believe the details we have provided in our rebuttal are sufficient to resolve the specific concerns raised by Reviewer gXsR under the "major issue" headings. | Rebuttal 1:
Rebuttal: Thank you for your thoughtful reviews of our submission. We have attached a PDF containing four figures:
1. For Reviewer 82sN, we have included a figure (top-left) comparing time per training step at batch size 1 for the denoising problem (11) using our regularizer and a naive Pytorch implementation of the Jacobian nuclear norm. We experiment with images drawn from Imagenet downsampled to sizes $S \times S$ for $S \in \{8,16,32,64\}$; our V100 GPU's memory overflows for $S \geq 128$. Whereas time per training step with the naive nuclear norm implementation rises to nearly 129 seconds per iteration for $64 \times 64$ images, each training step with our regularizer takes under 120 milliseconds.
2. For Reviewer Gitn, we have included three more sets of latent traversals in our autoencoder's latent space (top-right, bottom-left, and bottom-right).
Pdf: /pdf/3848ecde2180584f8a3e85f9826b618a472910d0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |